Can declared variables within my struct be of different sizes? - C

The reason I'm asking is that the following happens.
Defined in header:
typedef struct PID
{
// PID parameters
uint16_t Kp; // pGain
uint16_t Ki; // iGain
uint16_t Kd; // dGain
// PID calculations OLD ONES WERE STATICS
int24_t pTerm;
int32_t iTerm;
int32_t dTerm;
int32_t PID;
// Extra variables
int16_t CurrentError;
// PID Time
uint16_t tick;
}_PIDObject;
In C source:
static int16_t PIDUpdate(int16_t target, int16_t feedback)
{
_PIDObject PID2_t;
PID2_t.Kp = pGain2; // Has the value of 2000
PID2_t.CurrentError = target - feedback; // Has the value of 57
PID2_t.pTerm = PID2_t.Kp * PID2_t.CurrentError; // Should count this to (57x2000) = 114000
What happens when I debug is that it doesn't. The largest value I can use (more or less) for pGain2 is 1140; 1140 x 57 gives 64980.
Somehow it feels like the program thinks PID2_t.pTerm is a uint16_t. But it isn't; it's declared bigger in the struct.
Has PID2_t.pTerm somehow picked up the uint16_t type from the first variables declared in the struct, or is something wrong with the calculation because I multiply a uint16_t by an int16_t? This doesn't happen if I declare the variables outside a struct.
Also, here are my integer typedefs (they have never been a problem before):
#ifdef __18CXX
typedef signed char int8_t; // -128 -> 127 // Char & Signed Char
typedef unsigned char uint8_t; // 0 -> 255 // Unsigned Char
typedef signed short int int16_t; // -32768 -> 32767 // Int
typedef unsigned short int uint16_t; // 0 -> 65535 // Unsigned Int
typedef signed short long int int24_t; // -8388608 -> 8388607 // Short Long
typedef unsigned short long int uint24_t; // 0 -> 16777215 // Unsigned Short Long
typedef signed long int int32_t; // -2147483648 -> 2147483647 // Long
typedef unsigned long int uint32_t; // 0 -> 4294967295 // Unsigned Long
#else
# include <stdint.h>
#endif

Try
PID2_t.pTerm = ((int24_t) PID2_t.Kp) * ((int24_t)PID2_t.CurrentError);
Joachim's comment explains why this works. The compiler isn't promoting the multiplicands to int24_t before multiplying, so there's an overflow. If we manually promote using casts, there is no overflow.

My system doesn't have an int24_t, so as some comments have said, where is that coming from?
After Joachim's comment, I wrote up a short test:
#include <stdint.h>
#include <stdio.h>
int main() {
uint16_t a = 2000, b = 57;
uint16_t c = a * b;
printf("%x\n%x\n", a*b, c);
}
Output:
1bd50
bd50
So c keeps only the low 2 bytes, consistent with a 16-bit type. The problem does seem to be that your int24_t is not defined correctly.

As others have pointed out, your int24_t appears to be defined to be 16 bits. Besides the fact that it's too small, you should be careful with this type definition in general. stdint.h specifies the uintN_t types to be exactly N bits. So, assuming your processor and compiler don't actually have a 24-bit data type, you're breaking with the standard convention. If you're going to end up defining it as a 32-bit type, it would be more reasonable to name it uint_least24_t, which follows the pattern of the integer types that are at least big enough to hold N bits. The distinction is important because somebody might expect uint24_t to roll over above 16777215.
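Not from the original answers, just as an illustration of that naming convention: a sketch of how the custom typedefs could follow the stdint.h least-width pattern, assuming C18's "short long" type really is 24 bits wide and that the fallback platform provides int32_t/uint32_t.
#ifdef __18CXX
/* C18's "short long" is 24 bits, so it can back a least-width 24-bit type */
typedef signed short long int int_least24_t;
typedef unsigned short long int uint_least24_t;
#else
#include <stdint.h>
/* No native 24-bit type here; fall back to the next larger exact-width type */
typedef int32_t int_least24_t;
typedef uint32_t uint_least24_t;
#endif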

Related

I need to write a C program that uses a Union structure to set the bits of an int

Here are the instructions:
Define a union data struct WORD_T for a uint16_t integer so that a value can be assigned to a WORD_T integer in three ways:
(1) To assign the value to each bit of the integer,
(2) To assign the value to each byte of the integer,
(3) To assign the value to the integer directly.
I know I need to do something different with that first struct, but I'm pretty lost in general. Here is what I have:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#define BIT(n) (1 << n)
#define BIT_SET(var, mask) (n |= (mask) )
#define BIT_CLEAR(var, mask) (n &= ~(mask) )
#define BIT_FLIP(var, mask) (n ^= (mask) )
union WORD_T{
struct{
uint16_t integerVar:16
};
struct{
unsigned bit1:0;
unsigned bit2:0;
unsigned bit3:0;
unsigned bit4:0;
unsigned bit5:0;
unsigned bit6:0;
unsigned bit7:0;
unsigned bit8:0;
unsigned bit9:0;
unsigned bit10:0;
unsigned bit11:0;
unsigned bit12:0;
unsigned bit13:0;
unsigned bit14:0;
unsigned bit15:0;
unsigned bit16:0;
};
void setIndividualBit(unsigned value, int bit) {
mask = BIT(value) | BIT(bit);
BIT_SET(n, mask);
}
};
The most obvious problem is that :0 means "a bit-field of zero bits" which doesn't make any sense. You should change this to 1 and then you can assign individual bits through code like the_union.the_struct.bit1 = 1;.
This format is an alternative to the bit set/clear macros you wrote.
Your union should look like:
typedef union
{
uint16_t word;
uint8_t byte[2];
struct{
uint16_t bit1:1;
...
However, this is a really bad idea and your teacher should know better than to give such assignments. Bit-fields in C are a bag of worms; they are very poorly specified by the standard. Problems:
You can't know which bit is the MSB; it is not defined.
You can't know if there will be padding bits/bytes somewhere.
You can't even portably use uint16_t or uint8_t, because bit-fields are only guaranteed to work with _Bool, signed int and unsigned int.
And so on. In addition you have to deal with endianness and alignment. See Why bit endianness is an issue in bitfields?.
Essentially, code using bit-fields like this will rely heavily on the specific compiler. It will be completely non-portable.
All of these problems could be avoided by dropping the bit-fields and using bitwise operators on a uint16_t instead, like you did in your macros. With bitwise operators only, your code becomes deterministic and 100% portable. You can even use them to dodge endianness, by using bit shifts; a sketch of that approach follows below.
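As a non-authoritative sketch of that purely bitwise approach (the helper names are mine, not part of the assignment):
#include <stdint.h>

/* Set, clear or read bit n of a 16-bit word using only shifts and masks;
   this is deterministic and independent of bit/byte order. */
static uint16_t word_set_bit(uint16_t w, unsigned n)   { return (uint16_t)(w |  (1u << n)); }
static uint16_t word_clear_bit(uint16_t w, unsigned n) { return (uint16_t)(w & ~(1u << n)); }
static unsigned word_get_bit(uint16_t w, unsigned n)   { return (w >> n) & 1u; }

/* Assemble a word from two bytes without caring about the machine's byte order */
static uint16_t word_from_bytes(uint8_t low, uint8_t high)
{
    return (uint16_t)(low | ((uint16_t)high << 8));
}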
Here's the union definition:
union WORD_T
{
uint16_t word;
struct
{
uint8_t byte0;
uint8_t byte1;
};
struct
{
uint8_t bit0:1;
uint8_t bit1:1;
uint8_t bit2:1;
uint8_t bit3:1;
uint8_t bit4:1;
uint8_t bit5:1;
uint8_t bit6:1;
uint8_t bit7:1;
uint8_t bit8:1;
uint8_t bit9:1;
uint8_t bit10:1;
uint8_t bit11:1;
uint8_t bit12:1;
uint8_t bit13:1;
uint8_t bit14:1;
uint8_t bit15:1;
};
};
To assign to the individual components, do something similar to the following:
union WORD_T data;
data.word = 0xF0F0; // (3) Assign to entire word
data.byte0 = 0xAA; // (2) Assign to individual byte
data.bit0 = 0; // (1) Assign to individual bits
data.bit1 = 1;
data.bit2 = 1;
data.bit2 = 0;

bit-declaration over two registers

I made a bit declaration for a dspic30f4011. Here is part of what I declared:
typedef struct tagCxTXxSIDBITS_tagCxTXxEIDBITS {
unsigned : 2;
unsigned SRC7_2 : 6;
unsigned : 8;
unsigned : 14;
unsigned SRC1_0 : 2;
} CxTXxSRCBITS;
extern volatile unsigned int C1TX0SRC __attribute__((__sfr__));
extern volatile CxTXxSRCBITS C1TX0SRCbits __attribute__((__sfr__));
extern volatile unsigned int C1TX1SRC __attribute__((__sfr__));
extern volatile CxTXxSRCBITS C1TX1SRCbits __attribute__((__sfr__));
extern volatile unsigned int C1TX2SRC __attribute__((__sfr__));
extern volatile CxTXxSRCBITS C1TX2SRCbits __attribute__((__sfr__));
Is this correct? Are the first two unnamed bits CxTXxSID bits 0 and 1, SRC7_2 CxTXxSID bits 2-7, the 8 unnamed bits CxTXxSID bits 8-15, the 14 unnamed bits CxTXxEID bits 0-13, and the last two (SRC1_0) CxTXxEID bits 14 and 15?
If so, I made it right.
If I write in my code
C1TX0SRC = 0x0001;
do I get the following in the registers?
C1TX0SID = 0b0000000000000000
C1TX0EID = 0b0100000000000000
The bit order isn't specified by the standard, nor is the byte order, nor is anything else regarding bit fields, see this.
Are the first two the CxTXxSID bits 0 and 1?
Nobody knows, since there are no guarantees by the standard. You have to check your compiler documentation for this specific system.
Then SRC7_2 CxTXxSID bits 2-7, the 8 unnamed bits CxTXxSID bits 8-15, the 14 unnamed bits CxTXxEID bits 0-13? And the last two CxTXxEID bits 14 and 15?
Nobody knows. Bit order, byte order, endianness, bit/byte padding... everything can be an issue and nothing is specified by the standard.
do I get following in the registers ?
Nobody knows. You'll have to check the compiler documentation. In practice you can get any binary goo as a result. The portable, safe solution is to not use bit fields at all.
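Not part of the original answer: a minimal sketch of that bitwise alternative, assuming the layout the question describes (SRC7..SRC2 in CxTXxSID bits 2-7, SRC1..SRC0 in CxTXxEID bits 14-15) and using generic volatile pointers rather than the real SFR names.
#include <stdint.h>

/* Write an 8-bit SRC value across the two registers with shifts and masks only */
static void write_src(volatile uint16_t *sid_reg, volatile uint16_t *eid_reg, uint8_t src)
{
    /* SRC7..SRC2 -> SID bits 2..7 */
    *sid_reg = (uint16_t)((*sid_reg & ~0x00FCu) | ((unsigned)(src >> 2) << 2));
    /* SRC1..SRC0 -> EID bits 14..15 */
    *eid_reg = (uint16_t)((*eid_reg & ~0xC000u) | ((unsigned)(src & 0x03u) << 14));
}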
The bit order should be correct. But why don't you test it yourself? I did something like this a few days ago:
#include <stdint.h>
typedef union {
uint8_t byte;
struct {
unsigned bit0:1;
unsigned bit1:1;
unsigned bit2:1;
unsigned bit3:1;
unsigned bit4:1;
unsigned bit5:1;
unsigned bit6:1;
unsigned bit7:1;
};
} byte_t;
/*
Main application
*/
void main(void)
{
byte_t testbyte;
testbyte.byte = 0b01100011;
while (1)
{
uint8_t bit0 = testbyte.bit0;
uint8_t bit1 = testbyte.bit1;
uint8_t bit2 = testbyte.bit2;
uint8_t bit3 = testbyte.bit3;
uint8_t bit4 = testbyte.bit4;
uint8_t bit5 = testbyte.bit5;
uint8_t bit6 = testbyte.bit6;
uint8_t bit7 = testbyte.bit7;
uint8_t temp = 0;
}
}
You can test this in the MPLAB X simulator; you don't even need hardware. Just set a debug breakpoint at the end.
EDIT: I may have been a little too fast with my answer. After rereading your question I understand that your main concern was not the bit order but whether your struct will align with your registers. I do not understand how the sfr attribute works. In XC8 there is the "#" operator which can assign some value to a specific memory region. But you could use the debugger to get the actual address your values are written to.
EDIT of the EDIT: Now I have some understanding of the sfr attribute. As you did not state what you did, I assume you haven't changed your linker file, did you? To place your struct on top of the special function registers, you have to change your linker script (*.gld file) found in
Microchip\xc16\vx.xx\support\dsPIC30F.
Here is a little extract from the standard p30F4011.gld linker script:
C1TX2SID = 0x340;
_C1TX2SID = 0x340;
_C1TX2SIDbits = 0x340;
C1TX2EID = 0x342;
_C1TX2EID = 0x342;
_C1TX2EIDbits = 0x342;
Add your own declarations to them, modify your makefile to include the modified linker script, and you should be good to go (but I haven't tried that myself).

assignment of packed bitfield struct gives wrong values

I have a packed structure which should be 16 bits:
#pragma pack(1)
typedef unsigned int uint;
typedef unsigned short ushort;
typedef struct Color Color;
struct Color
{
uint r : 5;
uint g : 5;
uint b : 5;
uint _ : 1;
};
I have confirmed that
&(((Color*)0x06000000)[x]) == &(((ushort*)0x06000000)[x])
For various values of x. However, in this code, these two lines give me different results.
void write_pixel (uint x, uint y, Color color)
{
#if 0
((Color*)0x06000000)[x+y*240] = color;
#else
((ushort*)0x06000000)[x+y*240] = ((ushort*)&color)[0];
#endif
}
The second option is the one that behaves correctly.
What could be the cause?
Note: I'm compiling for a GBA emulator, so there's no debugger or printf, just yes/no checks by way of a green/red pixel.
The problem is that the value seen from the parameter list is going to be treated (due to promotion of parameters) as an unsigned int (32 bits); the incorrect line is then trying to copy a 32-bit unsigned int into a 16-bit short.
Normally this would work. However, the pragma modifies how the 16-bit Color is placed in the promoted parameter and/or how the code references the 16 bits of Color.
While packed values are (normally) not promoted, if there is no prototype for the called function then all parameters are assumed to be int, so the code assumes the data is sizeof(int) bytes long (4 bytes) and the assignment takes the last 2 bytes of that 4-byte value rather than the first 2 bytes.
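Not part of the original answer: one way to sidestep the packing and promotion questions entirely is to copy the 16 packed bits out of the struct with memcpy before the store. A minimal sketch, reusing the VRAM base address and layout from the question, and assuming the packed struct really is 2 bytes:
#include <stdint.h>
#include <string.h>

#pragma pack(1)
struct Color
{
    unsigned r : 5;
    unsigned g : 5;
    unsigned b : 5;
    unsigned _ : 1;
};

void write_pixel(unsigned x, unsigned y, struct Color color)
{
    uint16_t raw;
    memcpy(&raw, &color, sizeof raw);   /* well-defined way to reinterpret the 16 packed bits */
    ((volatile uint16_t *)0x06000000)[x + y * 240u] = raw;
}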

atomic variable with bitflags from struct

I have a single atomic variable which I read and write with the C11 atomics.
Now I have a struct which contains every flag, like this:
typedef struct atomic_container {
unsigned int flag1 : 2;
unsigned int flag2 : 2;
unsigned int flag3 : 2;
unsigned int flag4 : 2;
unsigned int progress : 8;
unsigned int reserved : 16;
}atomic_container;
Then I use a function to convert this struct to an unsigned int with a 32-bit width using bit shifts, and then write to it with the atomic functions.
I wonder if I can write this struct atomically directly, rather than first bit-shifting it into an unsigned int. This does seem to work, but I am worried it might be implementation-defined and could lead to undefined behavior. The struct in question is exactly 32 bits wide, as I want.
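As a sketch of the shift-based packing the question describes (the function and variable names are mine, and the exact bit positions are one possible choice, not dictated by the struct layout):
#include <stdatomic.h>
#include <stdint.h>

/* Pack the four 2-bit flags and the 8-bit progress field into one 32-bit value;
   bits 16..31 stay reserved (zero). */
static uint32_t pack_flags(unsigned f1, unsigned f2, unsigned f3, unsigned f4,
                           unsigned progress)
{
    return  (f1 & 0x3u)
         | ((f2 & 0x3u) << 2)
         | ((f3 & 0x3u) << 4)
         | ((f4 & 0x3u) << 6)
         | ((progress & 0xFFu) << 8);
}

static _Atomic uint32_t shared_state;

void publish(unsigned f1, unsigned f2, unsigned f3, unsigned f4, unsigned progress)
{
    atomic_store(&shared_state, pack_flags(f1, f2, f3, f4, progress));
}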

Simple C data types [duplicate]

Possible Duplicate:
The difference of int8_t, int_least8_t and int_fast8_t?
I'm quite confused. I think... (correct me if I'm wrong):
u_int8_t = unsigned short ?
u_int16_t = unsigned int ?
u_int32_t = unsigned long ?
u_int64_t = unsigned long long ?
int8_t = short ?
int16_t = int ?
int32_t = long ?
int64_t = long long ?
Then what does int_fast8_t mean? int_fastN_t? int_least8_t?
These are all specified in the C99 standard (section 7.18).
The [u]intN_t types are exact-width types (in the ISO C standard) where N is the bit width (8, 16, etc.). An 8-bit type is not necessarily a short or char, since short/int/long/etc. are defined to have a minimum range (not a bit width) and may not be two's complement.
These newer types are two's complement regardless of the encoding of the more common types, which may be ones' complement or sign/magnitude (see Representation of negative numbers in C? and Why is SCHAR_MIN defined as -127 in C99?).
The fast and least variants are exactly what they sound like: the fastest types with at least N bits, and the smallest types with at least N bits.
The standard also details which types are required and which are optional.
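Not part of the original answers: a small program you can run to see what these map to on your own platform (the output depends entirely on the implementation):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* sizeof reports bytes, so compare exact-width, least-width and fast types directly */
    printf("int8_t:       %zu bytes\n", sizeof(int8_t));
    printf("int_least8_t: %zu bytes\n", sizeof(int_least8_t));
    printf("int_fast8_t:  %zu bytes\n", sizeof(int_fast8_t));
    printf("int32_t:      %zu bytes\n", sizeof(int32_t));
    printf("int_fast32_t: %zu bytes\n", sizeof(int_fast32_t));
    return 0;
}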
On a platform where int is 16-bit, I would write:
u_int8_t = unsigned char
u_int16_t = unsigned int
u_int32_t = unsigned long int
u_int64_t = unsigned long long int
int8_t = char
int16_t = int
int32_t = long int
int64_t = long long int
Q: "Then what does int_fast8_t mean? int_fastN_t? int_least8_t?"
As dan04 states in his answer here:
Suppose you have a C compiler for a 36-bit system, with char = 9 bits, short = 18 bits, int = 36 bits, and long = 72 bits. Then:
int8_t does not exist, because there is no way to satisfy the constraint of having exactly 8 value bits with no padding.
int_least8_t is a typedef of char, NOT of short or int, because the standard requires the smallest type with at least 8 bits.
int_fast8_t can be anything. It's likely to be a typedef of int if the "native" size is considered to be "fast".
If you are on Linux, most of these are defined in /usr/include/linux/coda.h, e.g.:
#ifndef __BIT_TYPES_DEFINED__
#define __BIT_TYPES_DEFINED__
typedef signed char int8_t;
typedef unsigned char u_int8_t;
typedef short int16_t;
typedef unsigned short u_int16_t;
typedef int int32_t;
typedef unsigned int u_int32_t;
#endif
And
#if defined(DJGPP) || defined(__CYGWIN32__)
#ifdef KERNEL
typedef unsigned long u_long;
typedef unsigned int u_int;
typedef unsigned short u_short;
typedef u_long ino_t;
typedef u_long dev_t;
typedef void * caddr_t;
#ifdef DOS
typedef unsigned __int64 u_quad_t;
#else
typedef unsigned long long u_quad_t;
#endif
