Assignment of packed bitfield struct gives wrong values - C

I have a packed structure which should be 16 bits:
#pragma pack(1)
typedef unsigned int uint;
typedef unsigned short ushort;
typedef struct Color Color;
struct Color
{
    uint r : 5;
    uint g : 5;
    uint b : 5;
    uint _ : 1;
};
I have confirmed that
&(((Color*)0x06000000)[x]) == &(((ushort*)0x06000000)[x])
for various values of x. However, in this code, these two lines give me different results:
void write_pixel (uint x, uint y, Color color)
{
#if 0
    ((Color*)0x06000000)[x+y*240] = color;
#else
    ((ushort*)0x06000000)[x+y*240] = ((ushort*)&color)[0];
#endif
}
The second option is the one that gives correct results.
What could be the cause?
Note: I'm compiling for a GBA emulator, so there's no debugger or printf, just yes/no feedback by way of a green/red pixel.

The problem is that the value seen from the parameter list is going to be treated (due to promotion of parameters) as an unsigned int (32 bits); the incorrect line then copies a 32-bit unsigned int into a 16-bit location.
Normally this would work, but the pragma modifies how the 16-bit Color is placed in the promoted parameter and/or how the code references the 16 bits of Color.
While packed values are normally not promoted, if there is no prototype for the called function then all parameters are assumed to be int, so the code assumes the data is sizeof(int) bytes long (4 bytes), and the assignment takes the last 2 bytes of that 4-byte value rather than the first 2 bytes.
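One way to sidestep the parameter issue entirely is to copy the 16 bits of the struct explicitly inside the function. A minimal sketch (memcpy from string.h; the volatile qualifier on the VRAM pointer is my addition, as an assumption about hardware memory, and uint/ushort/Color are the question's typedefs):

#include <string.h>   // memcpy

void write_pixel (uint x, uint y, Color color)
{
    ushort raw;
    memcpy(&raw, &color, sizeof raw);   // take exactly the 16 bits of Color
    ((volatile ushort*)0x06000000)[x + y*240] = raw;
}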

Related

RGB color system as a data type

An RGB color system can represent 256 different colors by combining different shades of red, green and blue. Each shade can be represented as a number. Define a C type to represent the RGB state.
From my understanding this would be a proper C type, though I'm not sure if I'm correct:
struct {
    int R[256];
    int G[256];
    int B[256];
} color;
Would that be correct or is there a more proper / more efficient way of doing this?
All you need is a structure that can hold three values, each in the range 0..255. Your proposed structure occupies 3 * 256 * sizeof(int) = 768 * sizeof(int) bytes, way too much. Use this instead:
struct color {
    unsigned char R;
    unsigned char G;
    unsigned char B;
};
Note the correct placement of the struct identifier color.
You would use this structure further on in your program like this:
struct color bright_orange;
bright_orange.R = 255;
bright_orange.G = 128;
bright_orange.B = 0;
The array fields are unnecessary. A single int can hold values from 0 to 255 and is thus adequate for representing each of the shades. However, uint8_t is smaller and conveys the desired range of values explicitly:
#include <stdint.h>   // contains uint8_t

typedef struct color {
    uint8_t R;
    uint8_t G;
    uint8_t B;
} color_t;
color_t very_red;
very_red.R = 255U;
very_red.G = 0U;
very_red.B = 0U;
Note how typedef lets you subsequently reference color_t rather than having to write struct color.
An int is typically 4 bytes, so with 3 ints you use 12 bytes.
If you use 3 unsigned chars (which take only 1 byte each) you save 75% of the space (if we don't count any eventual padding).
And there is no need for arrays here.
Also, this statement is false: "can represent 256 different colors"; it's 256 * 256 * 256 colors...
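To make the size difference concrete, here is a minimal sketch (the struct names are mine for illustration; sizes are implementation-defined, and the comments assume a typical platform with 4-byte int and no padding between chars):

#include <stdio.h>
#include <stdint.h>

struct color_int  { int R, G, B; };       // typically 12 bytes
struct color_byte { uint8_t R, G, B; };   // typically 3 bytes

int main(void)
{
    printf("int version:  %zu bytes\n", sizeof(struct color_int));
    printf("byte version: %zu bytes\n", sizeof(struct color_byte));
    return 0;
}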
This is wrong. Remove the arrays, and move color right after struct. You are trying (with wrong syntax) to create a color struct where R, G and B are arrays of size 256 (256 int values each). What you want is:
struct Color {
    int R;
    int G;
    int B;
};
By the way: you only need 8 bits to represent the 256 states of each of R, G and B, so uint8_t (unsigned char) is a better candidate than int.

Casting pointer type based on integer size (C99)

How do you (if possible) define a type by an integer size? For example, if I wanted to make a type which was 3 bytes long, how could I accomplish something like this? (I am aware this is incorrect.)
typedef int24_t 3;
I am trying to write a generalized function which takes a character string parameter containing multiple numbers, and stores the numbers in a pointer, passed as another parameter.
However I want to make it so you can pass a numerical parameter which determines how big the variable type storing the numbers will be: i.e. if it were 1 the type could be char, if it were 4 the type could be int etc.
I am aware that it is possible to just store the number in a temporary fixed-size variable and then copy only the relevant bytes to the pointer depending on the requested size, but I want the code to be portable, and I don't want to be messing around with endianness, as I've had trouble with that in the past.
Thanks
You can use a struct; it's not elegant, but it sounds like what you're looking for.
Note that you must set the struct alignment to 1 byte. You're also limited to 64 bits.
typedef struct Int24 {
    int value : 24;
} Int24;

typedef struct UInt24 {
    unsigned value : 24;
} UInt24;

typedef struct Int48 {
    long long value : 48;
} Int48;
With templates (C++ only):
template<int bytes> struct Int {
    long long value : bytes * 8;
};
typedef Int<1> Int8;
typedef Int<6> Int48;
With macro:
#define DECL_INT(n) \
    typedef struct _Int##n { \
        long long value : n; \
    } Int##n
// declaration of type
DECL_INT(48); // produces Int48
// usage
Int48 i48;
struct smallerint
{
    unsigned int integer : 24;   // 24 bits = 3 bytes
};
typedef struct smallerint int24_t;
If I understand correctly what you're trying to do, and if you want a nice generalized function, I would use a linked list of bytes. Maybe you should have a look at a bigint implementation.
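For what it's worth, here is a minimal sketch of how such a 24-bit bitfield behaves (the struct name is mine for illustration; whether sizeof shrinks under #pragma pack varies by compiler, so treat the comments as typical rather than guaranteed):

#include <stdio.h>

#pragma pack(1)
struct int24 { signed int value : 24; };
#pragma pack()

int main(void)
{
    struct int24 x;
    x.value = -1;                        // stored in 24 bits
    printf("value  = %d\n", x.value);    // -1, sign-extended on read
    printf("sizeof = %zu\n", sizeof x);  // often 3 with pack(1), otherwise usually 4
    return 0;
}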

How to address all fields in a struct like these registers?

If you've programmed a microcontroller, you're probably familiar with manipulating select bits of a given register, or writing a byte to the whole thing. On a PIC using C for example, I can write an entire byte to PORTA to set all the bits, or I can simply address PORTAbits.RA# to set a single bit. I'm trying to mimic the way these structs/unions are defined so I can do the same thing with a variable in my program. Specifically, when the microcontroller turns on I want to be able to reset a register I myself have defined with something like
REGISTER = 0;
versus
REGISTERbits.BIT0 = 0;
REGISTERbits.BIT1 = 0;
...
//or
REGISTERbits = (0,0,0,0,0,0,0,0);
etc.
Obviously the former is more elegant and saves a lot of line space. The header file of the microcontroller does it like this:
#ifndef __18F2550_H
#define __18F2550_H
....
extern volatile near unsigned char LATA;
extern volatile near struct {
    unsigned LATA0:1;
    unsigned LATA1:1;
    unsigned LATA2:1;
    unsigned LATA3:1;
    unsigned LATA4:1;
    unsigned LATA5:1;
    unsigned LATA6:1;
} LATAbits;
...for each and every register, and registers with multiple bytes use unions of structs for their Registerbits. Since my initialization/declaration is in the main source file and not a header, I've dropped the extern and near off mine:
volatile unsigned char InReg;
volatile struct {
    unsigned NSENS:1;   // one magnetic sensor per direction
    unsigned SSENS:1;
    unsigned ESENS:1;
    unsigned WSENS:1;
    unsigned YBTN:1;    // one crosswalk button input per axis
    unsigned XBTN:1;    // (4 buttons tied together each)
    unsigned :2;
} InRegbits;
...but on compiling, InReg and InRegbits end up at two separate locations in memory, which means I can't write to InReg to change InRegbits. How do I change this so that it works? Does the version I'm copying only work because it's addressing a special microcontroller register?
Thanks for any help
volatile union InReg {
    unsigned char InRegAll;
    struct near {
        unsigned NSENS:1;   // one magnetic sensor per direction
        unsigned SSENS:1;
        unsigned ESENS:1;
        unsigned WSENS:1;
        unsigned YBTN:1;    // one crosswalk button input per axis
        unsigned XBTN:1;    // (4 buttons tied together each)
        unsigned :2;
    } InRegbits;
};
Be aware that this code may not be portable.
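Usage would then look something like this (a sketch; writing one union member and reading another is a type pun whose bitfield layout is implementation-defined):

volatile union InReg reg;

reg.InRegAll = 0;          // clear all bits at once
reg.InRegbits.NSENS = 1;   // then set a single sensor bit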
To guarantee the same result, you'll need two structs within a union. The standard says that if the members of a union are structs sharing a common initial sequence (leading members of compatible types, with the same bitfield widths), you can inspect that common part through any of them. Otherwise, reading a union member other than the one last stored is type punning, with results that depend on the implementation.
e.g.
volatile union {
    volatile struct {
        unsigned int InReg;
    } InReg;
    volatile struct {
        unsigned NSENS:1;   // one magnetic sensor per direction
        unsigned SSENS:1;
        unsigned ESENS:1;
        unsigned WSENS:1;
        unsigned YBTN:1;    // one crosswalk button input per axis
        unsigned XBTN:1;    // (4 buttons tied together each)
        unsigned :2;
    } InRegbits;
} Reg_s;
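Usage then looks like this (a sketch of the two access paths; whole-register writes go through the nested struct member):

Reg_s.InReg.InReg = 0;       // clear the whole register at once
Reg_s.InRegbits.NSENS = 1;   // then set a single bit through the bitfield view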

Can declared variables within my struct be of different size

The reason I'm asking is that the following happens:
Defined in header:
typedef struct PID
{
    // PID parameters
    uint16_t Kp;    // pGain
    uint16_t Ki;    // iGain
    uint16_t Kd;    // dGain
    // PID calculations (old ones were statics)
    int24_t pTerm;
    int32_t iTerm;
    int32_t dTerm;
    int32_t PID;
    // Extra variables
    int16_t CurrentError;
    // PID time
    uint16_t tick;
} _PIDObject;
In C source:
static int16_t PIDUpdate(int16_t target, int16_t feedback)
{
    _PIDObject PID2_t;

    PID2_t.Kp = pGain2;                              // has the value 2000
    PID2_t.CurrentError = target - feedback;         // has the value 57
    PID2_t.pTerm = PID2_t.Kp * PID2_t.CurrentError;  // should compute 57 * 2000 = 114000
What happens when I debug is that it doesn't. The largest value I can define (kind of) in pGain2 is 1140; 1140 * 57 gives 64980.
Somehow it feels like the program thinks PID2_t.pTerm is a uint16_t, but it's not; it's declared bigger in the struct.
Has PID2_t.pTerm somehow got the type uint16_t from the first variables declared in the struct, or is something wrong with the calculation, a uint16_t times an int16_t? This doesn't happen if I declare them outside a struct.
Also, here are my integer typedefs (they have never been a problem before):
#ifdef __18CXX
typedef signed char int8_t; // -128 -> 127 // Char & Signed Char
typedef unsigned char uint8_t; // 0 -> 255 // Unsigned Char
typedef signed short int int16_t; // -32768 -> 32767 // Int
typedef unsigned short int uint16_t; // 0 -> 65535 // Unsigned Int
typedef signed short long int int24_t; // -8388608 -> 8388607 // Short Long
typedef unsigned short long int uint24_t; // 0 -> 16777215 // Unsigned Short Long
typedef signed long int int32_t; // -2147483648 -> 2147483647 // Long
typedef unsigned long int uint32_t; // 0 -> 4294967295 // Unsigned Long
#else
# include <stdint.h>
#endif
Try
PID2_t.pTerm = ((int24_t) PID2_t.Kp) * ((int24_t)PID2_t.CurrentError);
Joachim's comment explains why this works. The compiler isn't promoting the multiplicands to int24_t before multiplying, so there's an overflow. If we manually promote using casts, there is no overflow.
My system doesn't have an int24_t, so as some comments have said, where is that coming from?
After Joachim's comment, I wrote up a short test:
#include <stdint.h>
#include <stdio.h>
int main() {
    uint16_t a = 2000, b = 57;
    uint16_t c = a * b;
    printf("%x\n%x\n", a*b, c);
}
Output:
1bd50
bd50
So c keeps only the low 2 bytes, consistent with a 16-bit type. The problem does seem to be that your int24_t is not defined correctly.
As others have pointed out, your int24_t appears to be defined as 16 bits. Besides the fact that it's too small, you should be careful with this type definition in general. stdint.h specifies the uintN_t types to be exactly N bits. So, assuming your processor and compiler don't actually have a 24-bit data type, you're breaking with the standard convention. If you're going to end up defining it as a 32-bit type, it'd be more reasonable to name it uint_least24_t, which follows the pattern of the integer types that are at least big enough to hold N bits. The distinction is important because somebody might expect uint24_t to roll over above 16777215.
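A minimal sketch of that naming convention, assuming the target has no native 24-bit integer (these typedefs are illustrative; stdint.h does not provide them):

#include <stdint.h>

// At least 24 bits each, following the int_leastN_t pattern.
typedef int32_t  int_least24_t;
typedef uint32_t uint_least24_t;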

Using unions to simplify casts

I realize that what I am trying to do isn't safe. But I am just doing some testing and image processing so my focus here is on speed.
Right now this code gives me the corresponding bytes for a 32-bit pixel value type.
struct Pixel {
    unsigned char b,g,r,a;
};
I wanted to check if I have a pixel that is under a certain value (e.g. r, g, b <= 0x10). I figured I would just test the bitwise AND of the pixel with 0x00E0E0E0 (I could have the wrong endianness here) to find the dark pixels.
Rather than using this ugly mess (*((uint32_t*)&pixel)) to get the 32-bit unsigned int value, I figured there should be a way for me to set it up so I can just use pixel.i, while keeping the ability to reference the green byte as pixel.g.
Can I do this? This won't work:
struct Pixel {
    unsigned char b,g,r,a;
};

union Pixel_u {
    Pixel p;
    uint32_t bits;
};
I would need to edit my existing code to say pixel.p.g to get the green color byte. Same happens if I do this:
union Pixel {
    unsigned char c[4];
    uint32_t bits;
};
This would work too, but I'd still need to change everything to index into c, which is a bit ugly, though I could make it work with a macro if I really needed to.
(Edited) Both gcc and MSVC allow 'anonymous' structs/unions, which might solve your problem. For example:
union Pixel {
    struct {unsigned char b,g,r,a;};
    uint32_t bits;   // use 'unsigned' for MSVC
} foo;

foo.b = 1;
foo.g = 2;
foo.r = 3;
foo.a = 4;
printf ("%08x\n", foo.bits);
gives (on Intel):
04030201
This requires changing all your declarations of struct Pixel to union Pixel in your original code. But this defect can be fixed via:
struct Pixel {
    union {
        struct {unsigned char b,g,r,a;};
        uint32_t bits;
    };
} foo;

foo.b = 1;
foo.g = 2;
foo.r = 3;
foo.a = 4;
printf ("%08x\n", foo.bits);
This also works with VC9, with 'warning C4201: nonstandard extension used : nameless struct/union'. Microsoft uses this trick, for example, in:
typedef union {
    struct {
        DWORD LowPart;
        LONG HighPart;
    };   // <-- nameless member!
    struct {
        DWORD LowPart;
        LONG HighPart;
    } u;
    LONGLONG QuadPart;
} LARGE_INTEGER;
but they 'cheat' by suppressing the unwanted warning.
While the above examples are OK, if you use this technique too often, you'll quickly end up with unmaintainable code. Five suggestions to make things clearer:
(1) Change the name bits to something uglier like union_bits, to clearly indicate something out-of-the-ordinary.
(2) Go back to the ugly cast the OP rejected, but hide its ugliness in a macro or in an inline function, as in:
#define BITS(x) (*(uint32_t*)&(x))
But this would break the strict aliasing rules. (See, for example, AndreyT's answer: C99 strict aliasing rules in C++ (GCC).)
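A standards-clean way to get the same bits without breaking aliasing is memcpy, which compilers typically collapse into a single load. A minimal sketch (the helper name pixel_bits is mine for illustration; it assumes the OP's original struct Pixel):

#include <stdint.h>
#include <string.h>

static inline uint32_t pixel_bits(struct Pixel p)
{
    uint32_t bits;
    memcpy(&bits, &p, sizeof bits);   // well-defined way to reinterpret the bytes
    return bits;
}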
(3) Keep the original definition of Pixel, but do a better cast:
struct Pixel {unsigned char b,g,r,a;} foo;
// ...
printf("%08x\n", ((union {struct Pixel dummy; uint32_t bits;})foo).bits);
(4) But that is even uglier. You can fix this by a typedef:
struct Pixel {unsigned char b,g,r,a;} foo;
typedef union {struct Pixel dummy; uint32_t bits;} CastPixelToBits;
// ...
printf("%08x\n", ((CastPixelToBits)foo).bits); // not VC9
With VC9, or with gcc using -pedantic, you'll need (don't use this with gcc--see note at end):
printf("%08x\n", ((CastPixelToBits*)&foo)->bits); // VC9 (not gcc)
(5) A macro may perhaps be preferred. In gcc, you can define a union cast to any given type very neatly:
#define CAST(type, x) (((union {typeof(x) src; type dst;})(x)).dst) // gcc
// ...
printf("%08x\n", CAST(uint32_t, foo));
With VC9 and other compilers, there is no typeof, and pointers may be needed (don't use this with gcc--see note at end):
#define CAST(typeof_x, type, x) (((union {typeof_x src; type dst;}*)&(x))->dst)
Self-documenting, and safer. And not too ugly. All these suggestions are likely to compile to identical code, so efficiency is not an issue. See also my related answer: How to format a function pointer?.
Warning about gcc: The GCC Manual version 4.3.4 (but not version 4.3.0) states that this last example, with &(x), is undefined behaviour. See http://davmac.wordpress.com/2010/01/08/gcc-strict-aliasing-c99/ and http://gcc.gnu.org/ml/gcc/2010-01/msg00013.html.
The problem with a structure inside a union is that the compiler is allowed to add padding bytes between the members of a structure (or class), except between bit fields.
Given:
struct Pixel
{
    unsigned char red;
    unsigned char green;
    unsigned char blue;
    unsigned char alpha;
};
In principle, this could be laid out as:
Offset  Field
------  -----
0x00    red
0x04    green
0x08    blue
0x0C    alpha
So the size of the structure would be 16 bytes. (No mainstream compiler pads plain chars this way, but the standard permits padding, which is the point.)
When put in a union, the compiler sizes the union to the larger of its members. Also, as you can see, a 32-bit integer would not line up with the fields under such a layout.
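If you do rely on the union pun anyway, a compile-time check can at least catch a padded layout. A minimal sketch using C11 static_assert:

#include <assert.h>   // static_assert (C11)
#include <stdint.h>

struct Pixel { unsigned char red, green, blue, alpha; };

// Fail the build if padding would break punning Pixel with a 32-bit integer.
static_assert(sizeof(struct Pixel) == sizeof(uint32_t),
              "struct Pixel is padded; a uint32_t pun would misbehave");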
I suggest creating functions to combine and extract pixels from a 32-bit quantity. You can declare them inline too:
void Int_To_Pixel(const unsigned int word,
                  Pixel& p)
{
    p.red   = (word & 0xff000000) >> 24;
    p.blue  = (word & 0x00ff0000) >> 16;
    p.green = (word & 0x0000ff00) >> 8;
    p.alpha = (word & 0x000000ff);
    return;
}
This is a lot more reliable than a struct inside a union, including one with bit fields:
struct Pixel_Bit_Fields
{
    unsigned int red : 8;
    unsigned int green : 8;
    unsigned int blue : 8;
    unsigned int alpha : 8;
};
There is still some mystery when reading this as to whether red or alpha occupies the most significant byte. With explicit bit manipulation, there is no question when reading the code.
Just my suggestions, YMMV.
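A companion for the other direction might look like this (a sketch in plain C, keeping the same byte assignments as Int_To_Pixel above):

unsigned int Pixel_To_Int(const struct Pixel *p)
{
    return ((unsigned int)p->red   << 24) |
           ((unsigned int)p->blue  << 16) |
           ((unsigned int)p->green << 8)  |
            (unsigned int)p->alpha;
}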
Why not make the ugly mess into an inline routine? Something like:
inline uint32_t pixel32(const Pixel& p)
{
    return *reinterpret_cast<const uint32_t*>(&p);
}
You could also provide this routine as a member function for Pixel, called i(), which would allow you to access the value via pixel.i() if you preferred to do it that way. (I'd lean toward separating the functionality from the data structure when invariants need not be enforced.)
