union bit mapping of BOOL and WORD - union

I'm trying to map some bits in a single word, but it looks like the compiler treats BOOL as one byte: when I watch the code in execution, every BOOL is 8 bits in size. How can I specify a bit field in a struct or union?
This is my code:
TYPE FAULT_CODE:
STRUCT
fault1, fault2, fault3: BOOL;
END_STRUCT
END_TYPE
TYPE U_fault :
UNION
faultCode: FAULT_CODE;
in: WORD;
END_UNION
END_TYPE

Answer
The ST data type you are looking for is BIT:
BOOL: 8 Bit
BIT: 1 Bit
You can only use the data type BIT for individual variables within structures or function blocks. The possible values are TRUE (1) and FALSE (0).
A BIT element requires 1 bit of memory space, and you can use it to address individual bits of a structure or function block using its name. BIT elements, which are declared sequentially, are consolidated to bytes. This allows you to optimize memory usage compared to BOOL types, which each occupy at least 8 bits. However, bit access takes significantly longer. Therefore, you should only use the data type BIT if you want to define the data in a specified format.
Example
TYPE st_Flags :
STRUCT
Bit1 : BIT;
Bit2 : BIT;
Bit3 : BIT;
Bit4 : BIT;
Bit5 : BIT;
Bit6 : BIT;
Bit7 : BIT;
Bit8 : BIT;
END_STRUCT
END_TYPE
TYPE u_Error :
UNION
_Byte : BYTE;
_Flag : st_Flags;
END_UNION
END_TYPE

Referring to Steve's example, I personally also use this more compact form, extending the bit structure with a union:
TYPE u_Error EXTENDS st_Flags :
UNION
_Byte : BYTE;
END_UNION
END_TYPE
So that you end up with this:
VAR
uError : u_Error ;
END_VAR
uError._Byte;
uError.Bit1;
uError.Bit2;
uError.Bit3;
...
Instead of
uError._Byte;
uError._Flag.Bit1;
uError._Flag.Bit2;
uError._Flag.Bit3;
...

You should avoid the BIT data type, since Beckhoff PC-Based Control does not have the memory constraints that small embedded systems or older PLC systems have.
The Beckhoff documentation states that BIT access operations take much longer than normal BOOL access operations.
CPU time is the more important resource to take into account, since a faster CPU is far more expensive than a RAM stick (and with 4 GB of RAM you can allocate a lot of BOOLs).
That said, if you want to evaluate a WORD in order to extract the fault code from it, use a CASE OF statement.
Each CASE then corresponds to a type of error, which can also be declared as an ENUM type.
Example for the ENUM:
TYPE E_Error :
(
eNO_ERROR := 0,
eGENERAL_ERROR,
eMOTION_ERROR,
eSAFETY_ERROR
);
END_TYPE
Example for the CASE OF statement:
CASE wError OF
eNO_ERROR:
;
eGENERAL_ERROR:
;
eMOTION_ERROR:
;
eSAFETY_ERROR:
;
END_CASE

To add to Steve's answer, you can also use bit access to variables, forgoing the need to create a custom data type!
If your variable is of an integer type (SINT, INT, DINT, USINT, UINT, UDINT, BYTE, WORD, DWORD), then you can access its individual bits like so:
VAR
myWord: WORD;
END_VAR
myWord.0 := FALSE;
myWord.1 := TRUE;
myWord.2 := FALSE;
myWord.3 := TRUE;
myWord.4 := FALSE;
myWord.5 := TRUE;
// And so on
And as Filippo Boido mentioned above, BOOL access is inherently much faster, so unless you are low on memory or need to pass the data through a bus as a WORD, a plain BOOL array is preferred.

Related

Modeling hardware registers with out of order data fields

I have the following memory structure:
struct {
uint16_t MSB_VALUE : 8;
uint16_t : 8;
uint16_t LSB_VALUE;
} BIG_VALUE;
This structure, all together, represents a 32-bit section of memory that is fixed by hardware. The value of BIG_VALUE can be represented using Verilog concatenation notation thus:
BIG_VALUE = { MSB_VALUE[7:0], LSB_VALUE[15:0] }
I would like to be able to write a union (or something) such that I can access the value of BIG_VALUE using dot notation. Maybe something stupid like this:
union {
uint32_t val;
struct {
uint16_t MSB_VALUE : 8;
uint16_t : 8;
uint16_t LSB_VALUE;
} sub;
} BIG_VALUE;
But, the issue is that the MSB comes before the LSB in memory (with an 8-bit gap too), and so calling BIG_VALUE.val isn't going to get the hoped-for value.
I have a vague idea of something to try, but I'm just confusing myself. Is there a way to do this within the union/struct formalism, or should I give up now? Giving up, I guess, means having to manually split up the 24-bit value and then to store those into the appropriate fields. Maybe I could write a function to do that later, if it makes sense.
Having this work means that I could store a 24-bit value using dot notation and have the data go into the appropriate locations in memory. For example:
BIG_VALUE.val = 0x0031FFFE
Then
BIG_VALUE.MSB_VALUE == 0x31
and
BIG_VALUE.LSB_VALUE == 0xFFFE
But the memory layout would be
addr : 0x0031
addr +4 : 0xFFFE
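
A minimal sketch of the manual split mentioned as the fallback, assuming the hardware layout described above (the set_big_value()/get_big_value() helper names are made up):
#include <stdint.h>

/* Hypothetical helpers for the manual split; the struct mirrors the layout
   given in the question. */
typedef struct {
    uint16_t MSB_VALUE : 8;
    uint16_t           : 8;   /* reserved gap fixed by the hardware */
    uint16_t LSB_VALUE;
} big_value_t;

/* Store a 24-bit logical value: bits 23..16 go to MSB_VALUE, bits 15..0 to LSB_VALUE. */
static void set_big_value(volatile big_value_t *reg, uint32_t val)
{
    reg->MSB_VALUE = (val >> 16) & 0xFFu;
    reg->LSB_VALUE = (uint16_t)(val & 0xFFFFu);
}

/* Reassemble the 24-bit logical value from the two fields. */
static uint32_t get_big_value(const volatile big_value_t *reg)
{
    return ((uint32_t)reg->MSB_VALUE << 16) | reg->LSB_VALUE;
}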

How to initialize the bits in a register using C in a readable manner

I have a 24 bit register that comprises a number of fields. For example, the 3 upper bits are "mode", the bottom 10 bits are "data rate divisor", etc. Now, I can just work out what has to go into this 24 bits and code it as a single hex number 0xNNNNNN. However, that is fairly unreadable to anyone trying to maintain it.
The question is, if I define each subfield separately what's the best way of coding it all together?
The classic way is to use the << left shift operator on constant values and combine all values with either + or |. For example:
*register_address = (SYNC_MODE << 21) | ... | DEFAULT_RATE;
Solution 1
The "standard" approach for this problem is to use a struct with bitfield members. Something like this:
typedef struct {
int divisor: 10;
unsigned int field1: 9;
char field2: 2;
unsigned char mode: 3;
} fields;
The numbers after each field name specify the number of bits used by that member. In the example above, the field divisor uses 10 bits and can store values between -512 and 511 (signed integer), while mode can store unsigned values in 3 bits: between 0 and 7.
The range of values for each field follows the usual signed/unsigned rules, but the field length is limited to the specified number of bits regardless of the declared type (char/int/long). Of course, a char can still hold at most 8 bits, a short at most 16, and so on. The coercion rules are the usual rules for the types of the fields, taking their size into account (e.g. storing -5 in mode will convert it to unsigned, and the actual value will probably be 3).
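For instance, a quick check of that coercion, reusing the fields typedef above (the exact result is implementation-defined):
#include <stdio.h>

/* Relies on the fields typedef from the block above; mode is a 3-bit
   unsigned bit-field, so only the low 3 bits of -5 survive. */
int main(void)
{
    fields f = {0};
    f.mode = -5;                        /* -5 reduced modulo 8 */
    printf("%u\n", (unsigned)f.mode);   /* typically prints 3 */
    return 0;
}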
There are several issues you need to pay attention to (some of them are also mentioned in the Notes section of the documentation page about bit fields):
the total number of bits declared in the structure must be 24 (the size of your register);
because your structure uses 3 bytes, some positions in arrays of such structures may behave strangely because they span the allocation unit (which is usually 4 or 8 bytes, depending on the hardware);
the order of the bit fields within the allocation unit is not guaranteed by the standard; depending on the architecture, in the final 3-byte pack the field mode may contain either the most significant 3 bits or the least significant 3 bits; you can sort this out easily, though.
You probably need to handle the values you store in a fields structure all at once. For that, you can embed the structure in a union:
typedef union {
fields f;
unsigned int a;
} reg;
reg x;
/* Access individual fields */
x.f.mode = 2;
x.f.divisor = 42;
/* Get the entire register */
printf("%06X\n", x.a);
Solution 2
An alternative way to do (kind of) the same thing is to use macros to extract the fields and to compose the entire register:
#define MAKE_REG(mode, field2, field1, divisor) \
((((mode) & 0x07) << 21) | \
(((field2) & 0x03) << 19) | \
(((field1) & 0x01FF) << 10 )| \
((divisor) & 0x03FF))
#define GET_MODE(reg) (((reg) & 0xE00000) >> 21)
#define GET_FIELD2(reg) (((reg) & 0x180000) >> 19)
#define GET_FIELD1(reg) (((reg) & 0x07FC00) >> 10)
#define GET_DIVISOR(reg) ((reg) & 0x0003FF)
The first macro assembles the mode, field2, field1, and divisor values into a 3-byte integer. The other macros extract the values of the individual fields. All of them assume the processed numbers are unsigned.
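A possible usage sketch, borrowing register_address, SYNC_MODE, and DEFAULT_RATE from the question and leaving field1 and field2 at 0:
*register_address = MAKE_REG(SYNC_MODE, 0, 0, DEFAULT_RATE);   /* compose the whole 24-bit value readably */
unsigned int divisor = GET_DIVISOR(*register_address);          /* pull one field back out */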
Pros and cons
The struct (embedded in a union) solution:
[+] it allows the compiler to do some checks of the values you want to put into the fields (and issue warnings); also, it does the correct conversions between signed and unsigned;
The macro solution:
[+] it is not sensitive to memory alignment issues; you put the bits exactly where you want them;
[-] it doesn't check the range of the values you put in the fields;
[-] handling signed values is a little trickier with macros; the macros suggested here work only for unsigned values, and extra shifting is required to handle signed ones (see the sketch below).
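To illustrate that last point, one possible helper that reads the 10-bit divisor back as a signed value on top of the unsigned GET_DIVISOR macro above (the get_divisor_signed name is made up):
/* GET_DIVISOR() returns the raw 10 bits; reinterpret them as a two's
   complement value in the range -512..511. */
static int get_divisor_signed(unsigned int reg)
{
    unsigned int raw = GET_DIVISOR(reg);                /* 0..1023 */
    return (raw & 0x200u) ? (int)raw - 1024 : (int)raw; /* subtract 2^10 if the sign bit is set */
}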

Bit Field of a specific size and order

There are several times in C in which a type is guaranteed to be at LEAST a certain size, but not necessarily exactly that size (sizeof(int) can result in 2 or 4). However, I need to be absolutely certain of some sizes and memory locations. If I have a union such as below:
typedef union{
struct{
unsigned int a:1, b:1, c:1, d:1, e:1, f:1, g:1, h:1;
};
unsigned int val:8;
} foo;
Is it absolutely guaranteed that the value of val is 8 bits long? Moreover, is it guaranteed that a is the most significant bit of val, and b is the second-most significant bit? I wish to do something like this:
foo performLogicalNOT(foo x){
foo product;
product.val = ~x.val;
return product;
}
And thus with an input of specific flags, return a union with exactly the opposite flags (11001100 -> 00110011). The actual functions are more complex, and require that the size of val be exactly 8. I also want to perform AND and OR in the same manner, so it is crucial that each a and b value be where I expect them to be and the size I expect them to be.
How the bits are packed is not standardized and is pretty much implementation-defined. Have a look at this answer.
Instead of relying on a union, it is better to use bitmasks to derive the values. For the above example, a char foo can be used. All operations (like ~) are done on foo directly. To get or set specific bits, the appropriate bitmask is used.
#define BITMASK_A 0x80
#define BITMASK_B 0x40
and so on..
To get the value of 'a' bit, use:
foo & BITMASK_A
To set the bit to 1, use:
foo |= BITMASK_A;
To reset the bit to 0, use:
foo &= ~BITMASK_A;
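With this approach, the performLogicalNOT from the question becomes a plain byte operation; a minimal sketch, assuming foo is simply a typedef for uint8_t:
#include <stdint.h>

typedef uint8_t foo;      /* exactly 8 bits, with no padding or ordering surprises */

/* 11001100 -> 00110011, as in the question; individual bits of the result
   can still be tested with the BITMASK_* defines above. */
static foo performLogicalNOT(foo x)
{
    return (foo)~x;       /* ~ promotes to int, so cast back down to 8 bits */
}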

Creating bitflag variables with large amounts of flags or how to create large bit-width numbers

Let's say I have an enum with bitflag options larger than the number of bits in a standard data type:
enum flag_t {
FLAG_1 = 0x1,
FLAG_2 = 0x2,
...
FLAG_130 = 0x400000000000000000000000000000000,
};
This is impossible for several reasons. Enums are max size of 128 bits (in C/gcc on my system from experimentation), single variables are also of max size 128 bits etc.
In C you can't perform bitwise operations on arrays, though in C++ I suppose you could overload bitwise operators to do the job with a loop.
Is there any way in C other than manually remembering which flags go where to have this work for large numbers?
This is exactly what bit-fields are for.
In C, it's possible to define the following data layout:
struct flag_t
{
unsigned int flag1 : 1;
unsigned int flag2 : 1;
unsigned int flag3 : 1;
(...)
unsigned int flag130 : 1;
(...)
unsigned int flag1204 : 1; // for fun
};
In this example, each flag occupies just one bit. An obvious advantage is the unlimited number of flags. Another great advantage is that you are no longer limited to single-bit flags; you could have some multi-value fields merged in the middle.
But most importantly, testing and assignment become a bit different, and probably simpler, as far as single-flag operations are concerned: you no longer need to do any masking, just access the flag directly by name. And by the way, use the opportunity to give these flags more descriptive names :)
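A small usage sketch reusing the flag names declared above (the example() wrapper is just for illustration):
void example(void)
{
    struct flag_t flags = {0};            /* every flag starts cleared */

    flags.flag130 = 1;                    /* set a flag by name, no mask needed */
    flags.flag2   = 0;                    /* clear one the same way */
    if (flags.flag3 && !flags.flag130) {
        /* react to the combination; no shifting or masking anywhere */
    }
}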
Instead of trying to assign absurdly large numbers to an enum so you can have a hundreds-of-bits-wide bitfield, let the compiler assign a normal zero-based sequence of numbers to your flag names, and simulate a wide bitfield using an array of unsigned char. You can have a 1024-bit bitfield using unsigned char bits[128], and write get_flag() and set_flag() accessor functions to mask the minor amount of extra work involved.
However, a far better piece of advice would be to look at your design again, and ask yourself "Why do I need over a hundred different flags?". It seems to me that what you really need is a redesign.
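Going back to the get_flag()/set_flag() accessors mentioned above, a minimal sketch using the 1024-bit unsigned char bits[128] example (names and signatures are only illustrative):
#include <limits.h>   /* CHAR_BIT */

#define NUM_FLAGS 1024
static unsigned char bits[NUM_FLAGS / CHAR_BIT];   /* the 1024-bit "bitfield" */

/* Read flag number n (0-based). */
static int get_flag(unsigned n)
{
    return (bits[n / CHAR_BIT] >> (n % CHAR_BIT)) & 1;
}

/* Set flag number n to value (0 or 1). */
static void set_flag(unsigned n, int value)
{
    if (value)
        bits[n / CHAR_BIT] |= (unsigned char)(1u << (n % CHAR_BIT));
    else
        bits[n / CHAR_BIT] &= (unsigned char)~(1u << (n % CHAR_BIT));
}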
In this answer to a related question about bitflags, Bit Manipulation and Flags, I provided an example of using an unsigned char array as an approach for very large sets of bitflags, which I am moving to this posting.
This source example provides the following:
a set of Preprocessor defines for the bitflag values
a set of Preprocessor macros to manipulate bits
a couple of functions to implement bitwise operations on the arrays
The general approach for this is as follows:
create a set of defines for the flags which specify an array offset and a bit pattern
create a typedef for an unsigned char array of the proper size
create a set of functions that implement the bitwise logical operations
The Specifics from the Answer with a Few Improvements and More Exposition
Use a set of C Preprocessor defines to create a set of bitflags to be used with the array. These bitflag defines specify an offset within the unsigned char array along with the bit to manipulate.
The defines in this example are 16-bit values in which the upper byte contains the array offset and the lower byte contains the bit flag(s) for the byte of the unsigned char array at that offset. Using this technique you can have arrays of up to 256 elements, i.e. 256 * 8 or 2,048 bitflags, or by going from a 16-bit define to a 32-bit long you could have many more. (In the comments below, bit 0 means the least significant bit of a byte and bit 7 means the most significant bit of a byte.)
#define ITEM_FLG_01 0x0001 // array offset 0, bit 0
#define ITEM_FLG_02 0x0002 // array offset 0, bit 1
#define ITEM_FLG_03 0x0101 // array offset 1, bit 0
#define ITEM_FLG_04 0x0102 // array offset 1, bit 1
#define ITEM_FLG_05 0x0201 // array offset 2, bit 0
#define ITEM_FLG_06 0x0202 // array offset 2, bit 1
#define ITEM_FLG_07 0x0301 // array offset 3, bit 0
#define ITEM_FLG_08 0x0302 // array offset 3, bit 1
#define ITEM_FLG_10 0x0980 // array offset 9, bit 7
Next you have a set of macros to set and clear the bits, along with a typedef to make them a bit easier to use. Unfortunately, using a typedef in C does not give you any additional type checking from the compiler, but it does make the code easier to read. These macros do no checking of their arguments, so you might feel safer using regular functions instead.
#define SET_BIT(p,b) (*((p) + (((b) >> 8) & 0xff)) |= ((b) & 0xff))
#define TOG_BIT(p,b) (*((p) + (((b) >> 8) & 0xff)) ^= ((b) & 0xff))
#define CLR_BIT(p,b) (*((p) + (((b) >> 8) & 0xff)) &= ~((b) & 0xff))
#define TST_BIT(p,b) (*((p) + (((b) >> 8) & 0xff)) & ((b) & 0xff))
typedef unsigned char BitSet[10];
An example of using this basic framework is as follows.
BitSet uchR = { 0 };
int bValue;
SET_BIT(uchR, ITEM_FLG_01);
bValue = TST_BIT(uchR, ITEM_FLG_01);
SET_BIT(uchR, ITEM_FLG_03);
TOG_BIT(uchR, ITEM_FLG_03);
TOG_BIT(uchR, ITEM_FLG_04);
CLR_BIT(uchR, ITEM_FLG_05);
CLR_BIT(uchR, ITEM_FLG_01);
Next you can introduce a set of utility functions for the bitwise operations we want to support. These are analogous to the built-in C operators such as bitwise OR (|) and bitwise AND (&). The functions use the built-in operators to apply the designated operation to every array element.
These particular examples of the utility functions modify one of the sets of bitflags provided. However if that is a problem, you can modify the functions to accept three arguments, one being for the result of the operation and the other two for the two sets of bitflags to use in the operation.
void AndBits(BitSet s1, const BitSet s2)
{
size_t nLen = sizeof(BitSet);
for (; nLen > 0; nLen--) {
*s1++ &= *s2++;
}
}
void OrBits(BitSet s1, const BitSet s2)
{
size_t nLen = sizeof(BitSet);
for (; nLen > 0; nLen--) {
*s1++ |= *s2++;
}
}
void XorBits(BitSet s1, const BitSet s2)
{
size_t nLen = sizeof(BitSet);
for (; nLen > 0; nLen--) {
*s1++ ^= *s2++;
}
}
If you need more than one size of bitflag set with this approach, the most flexible option is to eliminate the typedef and just use plain unsigned char arrays of various sizes. This change entails modifying the interface of the utility functions, replacing BitSet with an unsigned char pointer, and declaring the bitflag variables as unsigned char arrays. Along with the unsigned char pointers, you would also need to pass a length for the arrays.
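For example, a length-parameterized variant of the OrBits() helper above might look like this (the OrBitsN name is made up):
#include <stddef.h>

/* Same operation as OrBits(), but for unsigned char arrays of any length
   instead of the fixed-size BitSet typedef. */
void OrBitsN(unsigned char *s1, const unsigned char *s2, size_t nLen)
{
    for (; nLen > 0; nLen--) {
        *s1++ |= *s2++;
    }
}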
You may also consider an approach similar to what is being done for text strings in Is concatenating arbitrary number of strings with nested function calls in C undefined behavior?.

C: how to build up a binary integer

I have some logic that I would like to store as an integer. I have 30 "positions" that can be either yes or no and I would like to represent this as an integer. As I am looping through these positions what would be the easiest way to store this information as an integer?
You can use a 32 bit uint:
uint32_t flags = 0;
flags |= UINT32_C(1) << x; // set x'th bit from right
flags &= ~(UINT32_C(1) << x); // unset x'th bit from right
if (flags & UINT32_C(1) << x) // test x'th bit from right
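Combining this with the loop mentioned in the question, a sketch of building the whole integer (the pack_positions name and the answers[] array are hypothetical):
#include <stdint.h>

/* Build the 30-position value in one pass; answers[i] is nonzero for "yes". */
uint32_t pack_positions(const int answers[30])
{
    uint32_t flags = 0;
    for (int i = 0; i < 30; i++) {
        if (answers[i])
            flags |= UINT32_C(1) << i;   /* set bit i, counting from the right */
    }
    return flags;
}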
struct{
int flag0:1;
int flag1:1;
...
int flag31:1;
} myFlags;
Using :x in the definition of an integer struct member declares a bit field with x bits assigned to it.
You can access each struct member as usual, but the values are limited to what fits in that many bits (in my example, either 1 or 0, because only 1 bit is available), and the compiler will enforce it. The struct will (probably, depending on compiler settings) be packed into the total number of integers needed to hold all the bits.
Another option would be using an int and the bitwise operators & and | to access specific bits. In this case you have to make sure yourself that setting one bit won't affect another, and that there are no overflows, etc.
#define POSITION_A 1
#define POSITION_B 2
unsigned int position = 0;
// set a position
position |= POSITION_A;
// clear a position
position &= ~POSITION_A;
Yes, as WTP commented, you could store all your data in one unsigned int (uint32_t) and access it with AND (&), OR (|), and NOT (~).
If saving storage is not a primary concern, however, I recommend against this compact technique.
You may need to expand your code to support more than two kinds of answers (yes/no), such as yes/no/maybe.
You may have more than 30 questions, which would no longer fit into one unsigned int.
If I were you, I'd use an array of small integers (short or char) to store the values. It wastes some storage, but it is much easier to read and much easier to extend.
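A quick sketch of that alternative (the enum values, array size, and example() wrapper are only illustrative):
/* One small integer per position instead of one bit per position;
   adding a third state later is trivial. */
enum answer { NO = 0, YES = 1, MAYBE = 2 };

void example(void)
{
    unsigned char answers[30] = {0};   /* every position starts as NO */
    answers[4] = YES;                  /* record a "yes" at position 4 */
    if (answers[4] == MAYBE) {
        /* handle the extra state a single bit could not express */
    }
}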
