Using an array of chars as an array of long ints in C

On my AVR I have an array of chars that hold color intensity information in the form of {R,G,B,x,R,G,B,x,...} (x being an unused byte). Is there any simple way to write a long int (32-bits) to char myArray[4*LIGHTS] so I can write a 0x00BBGGRR number easily?
My typecasting is rough, and I'm not sure how to write it. I'm guessing just make a pointer to a long int type and set that equal to myArray, but then I don't know how to arbitrarily tell it to set group x to myColor.
uint8_t myLights[4*LIGHTS];
uint32_t *myRGBGroups = myLights; // ?
*myRGBGroups = WHITE; // sets the first 4 bytes to WHITE
// ...but how to set the 10th group?
Edit: I'm not sure if typecasting is even the proper term, as I think that would be if it just truncated the 32-bit number to 8-bits?

typedef union {
    struct {
        uint8_t red;
        uint8_t green;
        uint8_t blue;
        uint8_t alpha;
    } rgba;
    uint32_t single;
} Color;
Color colors[LIGHTS];
colors[0].single = WHITE;
colors[0].rgba.red -= 5;
NOTE: On a little-endian system, the low-order byte of the 4-byte value is the red component (so a 0x00BBGGRR value lands as intended), whereas on a big-endian system the low-order byte is the alpha component.
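To hit the "10th group" from the question directly, a short usage sketch (indices are zero-based; this assumes LIGHTS is at least 10 and WHITE is a 32-bit colour constant):
colors[9].single = WHITE;      // write the whole 10th group at once
colors[9].rgba.green = 0x80;   // or adjust a single channel of that group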

Your code is essentially valid; with an explicit cast the pointer assignment compiles cleanly, and you can then use myRGBGroups as a regular array, so to access the 10th pixel you use
myRGBGroups[9]
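A minimal sketch of that, with the explicit cast the pointer assignment needs (LIGHTS and WHITE are placeholder values here; the cast also assumes the usual AVR situation where uint32_t has no stricter alignment requirement than uint8_t):
#include <stdint.h>

#define LIGHTS 12             /* placeholder value, for illustration only */
#define WHITE  0x00FFFFFFul   /* assumed 0x00BBGGRR white */

uint8_t myLights[4 * LIGHTS];

void set_groups(void)
{
    uint32_t *myRGBGroups = (uint32_t *)myLights;  /* explicit cast */
    myRGBGroups[0] = WHITE;    /* first group */
    myRGBGroups[9] = WHITE;    /* 10th group  */
}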

Think about using a C union, where the first field of the union is an int32 and the second an array of 4 chars. But I'm not sure whether this is the best way for you.

You need to account for the endianness of uint32_t on the AVR to make sure the components are stored in the correct order (for later access via the myLights array) if you're going to do this. A quick search suggests that AVRs store data in memory little-endian, though some registers vary in byte order.
Anyway, assuming you've handled that, you can dereference myRGBGroups using array indexing (where each index refers to a block of 4 bytes). So, to set the 10th group, you can just do myRGBGroups[9] = COLOR.

You can also use pointer arithmetic on myRGBGroups: for example, myRGBGroups++ advances to the next group, and the +, -, etc. operators work the same way. These operators step in units of the pointed-to type's size rather than single bytes.
myRGBGroups[10]                    // access group 10 as a 32-bit int
(uint8_t*)(myRGBGroups + 10)       // access group 10 as a uint8_t array

A union of a struct and a uint32_t is a much better idea than indexing a raw uint8_t array of size 4 * LIGHTS. Another fairly common way to do this is to use macros or inline functions that do the bitwise arithmetic necessary to build the correct uint32_t:
#define MAKE_RGBA32(r,g,b,a) (((uint32_t)(r)<<24)|((uint32_t)(g)<<16)|((uint32_t)(b)<<8)|(uint32_t)(a))
uint32_t colors[NUM_COLORS];
colors[i] = MAKE_RGBA32(255,255,255,255);
Depending on your endianness the values may need to be placed into the int in a different order. This technique is common because for older 16bpp color formats like RGBA5551 or RGB565, it makes more sense to think of the colors in terms of the bitwise arithmetic than in units of bytes.
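For the question's 0x00BBGGRR layout the same technique applies; a hedged sketch (MAKE_0BGR is just an illustrative name, and the uint32_t casts keep the shifts in 32-bit arithmetic even where int is only 16 bits wide, as on an 8-bit AVR):
#include <stdint.h>

/* Widening each operand before shifting avoids overflowing a 16-bit int. */
#define MAKE_0BGR(r,g,b) \
    (((uint32_t)(b) << 16) | ((uint32_t)(g) << 8) | (uint32_t)(r))

#define WHITE MAKE_0BGR(255, 255, 255)   /* 0x00FFFFFF */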

You can perform something similar using struct assignment - this gets around the endian problem:
typedef struct Color {
    unsigned char r, g, b, a;
} Color;
const Color WHITE = {0xff, 0xff, 0xff, 0};
const Color RED = {0xff, 0, 0, 0};
const Color GREEN = {0, 0xff, 0, 0};
Color colors[] = {WHITE, RED};
Color c;
colors[1] = GREEN;
c = colors[1];
However, comparison is not defined in the standard, so you can't use c == GREEN, and you can't use the {} shortcut in assignment (only in initialisation), so c = {0, 0, 0, 0} would fail.
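In C99 you can, however, assign a compound literal, c = (Color){0, 0, 0, 0};, and if equality tests are needed, a small field-by-field helper using the Color type above avoids any padding concerns (the name color_equal is just illustrative):
#include <stdbool.h>

static bool color_equal(Color a, Color b)
{
    return a.r == b.r && a.g == b.g && a.b == b.b && a.a == b.a;
}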
Also bear in mind that if it's an 8 bit AVR (as opposed to an AVR32 say), then you most likely won't see any performance benefit from either technique.

Related

C - Big-endian struct interconvert with little-endian struct

I have two structs that have the same data members (one is a big-endian struct, the other little-endian), and I need to convert between them. But when I wrote the code, I found a lot of repeated code with only small changes. How can I make this more elegant without the repetition? (The repeated code is the mode == 1 and mode == 2 branches, which differ only in which side of the assignments each struct is on. It doesn't look elegant, but it works.)
here is my code:
#pragma scalar_storage_order big-endian
typedef struct {
    int a1;
    short a2;
    char a3;
    int a4;
} test_B;

#pragma scalar_storage_order default
typedef struct {
    int a1;
    short a2;
    char a3;
    int a4;
} test_L;
void interconvert(test_L *little, test_B *big, int mode) {
    // if mode == 1 , convert little to big
    // if mode == 2 , convert big to little
    // it may be difficult and redundant when the struct has lots of data member!
    if(mode == 1) {
        big->a1 = little->a1;
        big->a2 = little->a2;
        big->a3 = little->a3;
        big->a4 = little->a4;
    }
    else if(mode == 2) {
        little->a1 = big->a1;
        little->a2 = big->a2;
        little->a3 = big->a3;
        little->a4 = big->a4;
    }
    else return;
}
Note: the above code must run on gcc 7 or higher, because of the #pragma scalar_storage_order.
An answer was posted that suggested using memcpy for this problem, but that answer has been deleted. Actually that answer was right, if used correctly, and I want to explain why.
The #pragma specified by the OP is central, as they note:
Note: the above code must run on gcc-7 or higher because of the #pragma scalar_storage_order
The struct from the OP:
#pragma scalar_storage_order big-endian
typedef struct {
    int a1;
    short a2;
    char a3;
    int a4;
} test_B;
means that the assignment test_B.a2 = 256 writes the values 1 and 0, respectively, into the two consecutive bytes belonging to the a2 member. This is big-endian. The similar assignment test_L.a2 = 256 would instead store the bytes 0 and 1 (little-endian).
The following memcpy:
memcpy(&test_L, &test_B, sizeof test_L)
would make the bytes of test_L.a2 equal to 1 and 0, because that is the RAM content of test_B.a2. But now, reading test_L.a2 in little-endian mode, those two bytes mean 1. We wrote 256 and read back 1. This is exactly the wanted conversion.
To use this mechanism correctly, it is sufficient to write into one struct, memcpy() into the other, and then read the other, member by member. What was big-endian becomes little-endian and vice versa. Of course, if the intention is to process the data and apply calculations to it, it is important to know what endianness the data has; if it matches the default mode, no transformation is needed before the calculations, but the transformation has to be applied afterwards. Conversely, if the incoming data does not match the processor's "default endianness", it must be transformed first.
EDIT
After the OP's comment below, I investigated further. I took a look at https://gcc.gnu.org/onlinedocs/gcc/Structure-Layout-Pragmas.html
There are three #pragma settings available to choose the byte layout: big-endian, little-endian, and default. One of the first two is equal to the last: if the target machine is little-endian, default means little-endian; if it is big-endian, default means big-endian. This is only logical.
So doing a memcpy() between big-endian and default does nothing on a big-endian machine, which is also logical. It is worth stressing that memcpy() does absolutely nothing by itself: it only moves data from a RAM area treated in one way to another area treated in another way. The two areas are treated differently only when a normal member access is done; that is where #pragma scalar_storage_order comes into play. And as written before, it is important to know what endianness the data entering the program has. If it comes from a TCP network, for example, we know it is big-endian; more generally, if it is taken from outside the program and follows a protocol, we should know its endianness.
To convert from one endianness to the other, one should use little-endian and big-endian, NOT default, because default is always equal to one of the former two.
Still another edit
Prompted by comments, and by Jamesdlin, who used an online compiler, I tried it too. At http://tpcg.io/lLe5EW
there is a demonstration that assigning to a member of one struct, memcpy-ing to the other, and reading that member performs the endian conversion. That's all.
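A minimal sketch of that round trip, under the same gcc 7+ requirement already noted (the printed value assumes a little-endian host, where default means little-endian):
#include <stdio.h>
#include <string.h>

#pragma scalar_storage_order big-endian
typedef struct { int a1; short a2; char a3; int a4; } test_B;
#pragma scalar_storage_order default
typedef struct { int a1; short a2; char a3; int a4; } test_L;

int main(void)
{
    test_B big = { 0 };
    test_L little;

    big.a2 = 256;                      /* stored big-endian as bytes 0x01 0x00 */
    memcpy(&little, &big, sizeof little);
    printf("%d\n", little.a2);         /* reads those bytes little-endian: prints 1 */
    return 0;
}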

Copy 6 byte array to long long integer variable

I have read a 6-byte unsigned char array from memory.
The endianness is big-endian here.
Now I want to assign the value stored in the array to an integer variable. I assume this has to be a long long, since it must hold up to 6 bytes.
At the moment I am assigning it this way:
unsigned char aFoo[6];
long long nBar;
// read values to aFoo[]...
// aFoo[0]: 0x00
// aFoo[1]: 0x00
// aFoo[2]: 0x00
// aFoo[3]: 0x00
// aFoo[4]: 0x26
// aFoo[5]: 0x8e
nBar = (aFoo[0] << 64) + (aFoo[1] << 32) +(aFoo[2] << 24) + (aFoo[3] << 16) + (aFoo[4] << 8) + (aFoo[5]);
A memcpy approach would be neat, but when I do this
memcpy(&nBar, &aFoo, 6);
the 6 bytes are being copied to the long long from the start and thus have padding zeros at the end.
Is there a better way than my assignment with the shifting?
What you want to accomplish is called de-serialisation or de-marshalling.
For values that wide, using a loop is a good idea, unless you really need the max. speed and your compiler does not vectorise loops:
uint8_t array[6];
...
uint64_t value = 0;
uint8_t *p = array;
for ( int i = (sizeof(array) - 1) * 8 ; i >= 0 ; i -= 8 )
    value |= (uint64_t)*p++ << i;

// left-align
value <<= 64 - (sizeof(array) * 8);
Note the use of stdint.h types: sizeof(uint8_t) cannot differ from 1, and only these types are guaranteed to have the expected bit widths. Also use unsigned integers when shifting values: right-shifting negative values is implementation-defined, while left-shifting them invokes undefined behaviour.
If you need a signed value, just
int64_t final_value = (int64_t)value;
after the shifting. This conversion is implementation-defined when the value doesn't fit, but all modern implementations (and likely older ones) just copy the value without modification. A modern compiler will likely optimize this, so there is no penalty.
The declarations can be moved, of course. I just put them before where they are used for completeness.
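For completeness, the same loop wrapped into a self-contained program with the question's byte values (the optional left-align step is omitted so the result matches the 0x268e the question is after):
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint8_t aFoo[6] = { 0x00, 0x00, 0x00, 0x00, 0x26, 0x8e };

    uint64_t value = 0;
    const uint8_t *p = aFoo;
    for (int i = (int)(sizeof aFoo - 1) * 8; i >= 0; i -= 8)
        value |= (uint64_t)*p++ << i;

    printf("0x%" PRIx64 "\n", value);   /* prints 0x268e */
    return 0;
}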
You might try
nBar = 0;
memcpy((unsigned char*)&nBar + 2, aFoo, 6);
No & is needed before the array name because it already decays to an address. (Note that this fills nBar in memory order, so it yields the intended value on a big-endian host; on a little-endian host the bytes would still need swapping.)
The correct way to do what you need is to use a union:
#include <stdio.h>

typedef union {
    struct {
        char padding[2];
        char aFoo[6];
    } chars;
    long long nBar;
} Combined;
int main ()
{
    Combined x;

    // reset the content of "x"
    x.nBar = 0; // or memset(&x, 0, sizeof(x));

    // put values directly in x.chars.aFoo[]...
    x.chars.aFoo[0] = 0x00;
    x.chars.aFoo[1] = 0x00;
    x.chars.aFoo[2] = 0x00;
    x.chars.aFoo[3] = 0x00;
    x.chars.aFoo[4] = 0x26;
    x.chars.aFoo[5] = 0x8e;

    printf("nBar: %llx\n", x.nBar);
    return 0;
}
The advantage: the code is clearer, and there is no need to juggle bits, shifts, masks, etc.
However, you have to be aware that, for speed optimization and hardware reasons, the compiler might insert padding bytes into the struct, leading to aFoo not overlapping the desired bytes of nBar. This minor disadvantage can be addressed by telling the compiler to align the members of the union on byte boundaries (as opposed to the default, which is alignment on word boundaries, the word being 32-bit or 64-bit depending on the hardware architecture).
This used to be achieved with a #pragma directive whose exact syntax depends on the compiler you use.
Since C11/C++11, the alignas() specifier has become the standard way to specify the alignment of struct/union members (provided your compiler supports it).
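Since C11 you can also verify such layout assumptions at compile time; a hedged sketch of such a check for the union above (if packing ever did need to be forced, __attribute__((packed)) is the traditional GCC/Clang-specific way, and #pragma pack the MSVC-style one):
#include <assert.h>   /* static_assert (C11) */

typedef union {
    struct {
        char padding[2];
        char aFoo[6];
    } chars;          /* all-char members, so no interior padding is expected */
    long long nBar;
} Combined;

static_assert(sizeof(long long) == 8, "this layout assumes an 8-byte long long");
static_assert(sizeof(Combined) == sizeof(long long), "unexpected padding in Combined");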

Endianness macro in C

I recently saw this post about endianness macros in C and I can't really wrap my head around the first answer.
Code supporting arbitrary byte orders, ready to be put into a file
called order32.h:
#ifndef ORDER32_H
#define ORDER32_H
#include <limits.h>
#include <stdint.h>
#if CHAR_BIT != 8
#error "unsupported char size"
#endif
enum
{
    O32_LITTLE_ENDIAN = 0x03020100ul,
    O32_BIG_ENDIAN = 0x00010203ul,
    O32_PDP_ENDIAN = 0x01000302ul
};

static const union { unsigned char bytes[4]; uint32_t value; } o32_host_order =
    { { 0, 1, 2, 3 } };
#define O32_HOST_ORDER (o32_host_order.value)
#endif
You would check for little endian systems via
O32_HOST_ORDER == O32_LITTLE_ENDIAN
I do understand endianness in general. This is how I understand the code:
Create example of little, middle and big endianness.
Compare test case to examples of little, middle and big endianness and decide what type the host machine is of.
What I don't understand are the following aspects:
Why is a union needed to store the test case? Isn't uint32_t guaranteed to be able to hold 32 bits/4 bytes as needed? And what does the assignment { { 0, 1, 2, 3 } } mean? It assigns the value to the union, but why the strange markup with two braces?
Why the check for CHAR_BIT? One comment mentions that it would be more useful to check UINT8_MAX? Why is char even used here, when it's not guaranteed to be 8 bits wide? Why not just use uint8_t? I found this link to Google-Devs github. They don't rely on this check... Could someone please elaborate?
Why is a union needed to store the test case?
The entire point of the test is to alias the array with the magic value the array will create.
Isn't uint32_t guaranteed to be able to hold 32 bits/4 bytes as needed?
Well, more or less. It will, but beyond being 32 bits wide there are no other guarantees. It would fail only on some really fringe architecture you will never encounter.
And what does the assignment { { 0, 1, 2, 3 } } mean? It assigns the value to the union, but why the strange markup with two braces?
The inner brace is for the array.
Why the check for CHAR_BIT?
Because that's the actual guarantee. If that doesn't blow up, everything will work.
One comment mentions that it would be more useful to check UINT8_MAX? Why is char even used here, when it's not guaranteed to be 8 bits wide?
Because in fact it always is, these days.
Why not just use uint8_t? I found this link to Google-Devs github. They don't rely on this check... Could someone please elaborate?
Lots of other choices would work also.
The initialization has two set of braces because the inner braces initialize the bytes array. So byte[0] is 0, byte[1] is 1, etc.
The union allows a uint32_t to lie on the same bytes as the char array and be interpreted in whatever the machine's endianness is. So if the machine is little endian, 0 is in the low order byte and 3 is in the high order byte of value. Conversely, if the machine is big endian, 0 is in the high order byte and 3 is in the low order byte of value.
{{0, 1, 2, 3}} is the initializer for the union, which will result in bytes component being filled with [0, 1, 2, 3].
Now, since the bytes array and the uint32_t occupy the same space, you can read the same value as a native 32-bit integer. The value of that integer shows you how the array was shuffled - which really means which endian system are you using.
There are only 3 popular possibilities here - O32_LITTLE_ENDIAN, O32_BIG_ENDIAN, and O32_PDP_ENDIAN.
As for char vs. uint8_t - I don't know. I think it makes more sense to just use uint8_t with no checks.
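A minimal sketch of exercising that header (order32.h, as named in the question):
#include <stdio.h>
#include "order32.h"

int main(void)
{
    if (O32_HOST_ORDER == O32_LITTLE_ENDIAN)
        puts("little-endian");
    else if (O32_HOST_ORDER == O32_BIG_ENDIAN)
        puts("big-endian");
    else if (O32_HOST_ORDER == O32_PDP_ENDIAN)
        puts("PDP-endian");
    else
        puts("unknown byte order");
    return 0;
}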

Bit ordering in a byte when using bitfields

C: A Reference Manual states that "The precise manner in which components (and especially bit fields) are packed into a structure is implementation dependent but is predictable for each implementation".
I have read that some compilers pack bit fields left to right (MSB to LSB) on big-endian machines, but right to left (LSB to MSB) on little-endian machines.
Is there a reason or advantage to representing bit fields in two different ways depending on the endianness?
I've not implemented this, but I can imagine that it has to do with working with bit fields in registers, and reading/writing entire words to/from the structure when possible. If you implement it that way, instead of doing byte-level accesses, you will of course "feel" the endianness as the word is byte-swapped in memory.
So if you have
struct color {
    uint32_t red   : 8;
    uint32_t green : 8;
    uint32_t blue  : 8;
    uint32_t alpha : 8;
};
When you do
struct color orange = { .red = 255, .green = 127, .blue = 0, .alpha = 0 };
It might be implemented (since the fields are conveniently sized) as
struct color orange;
uint32_t *tmp = (uint32_t *) &orange;
*tmp = 0xff7f0000; /* The field values, mapping red to the MSBs. */
Now, since the above does one single uint32_t-sized memory write, the value will be byte-swapped on a little-endian machine but not on a big-endian one, i.e. when viewed byte by byte, the representations are different.
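One way to see the difference concretely is to dump the object representation byte by byte; a small sketch (the exact output is implementation-dependent, which is the whole point):
#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct color {
    uint32_t red   : 8;
    uint32_t green : 8;
    uint32_t blue  : 8;
    uint32_t alpha : 8;
};

int main(void)
{
    struct color orange = { .red = 255, .green = 127, .blue = 0, .alpha = 0 };
    unsigned char bytes[sizeof orange];

    memcpy(bytes, &orange, sizeof orange);
    for (size_t i = 0; i < sizeof orange; i++)
        printf("%02x ", bytes[i]);      /* e.g. "ff 7f 00 00" with GCC on x86 */
    putchar('\n');
    return 0;
}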
Layout of bit fields inside a structure is implementation defined. It is not a good idea to use them if you need portable code.

How to treat a struct with two unsigned shorts as if it were an unsigned int? (in C)

I created a structure to represent a fixed-point positive number. I want the parts on both sides of the decimal point to consist of 2 bytes each.
typedef struct Fixed_t {
    unsigned short floor;    //left side of the decimal point
    unsigned short fraction; //right side of the decimal point
} Fixed;
Now I want to add two fixed point numbers, Fixed x and Fixed y. To do so I treat them like integers and add.
(Fixed) ( (int)x + (int)y );
But as my Visual Studio 2010 compiler says, I cannot convert between Fixed and int.
What's the right way to do this?
EDIT: I'm not committed to the {short floor, short fraction} implementation of Fixed.
You could attempt a nasty hack, but there's a problem here with endian-ness. Whatever you do to convert, how is the compiler supposed to know that you want floor to be the most significant part of the result, and fraction the less significant part? Any solution that relies on re-interpreting memory is going to work for one endian-ness but not another.
You should either:
(1) define the conversion explicitly. Assuming short is 16 bits:
unsigned int val = ((unsigned int)x.floor << 16) + x.fraction;
(2) change Fixed so that it has an int member instead of two shorts, and then decompose when required, rather than composing when required.
If you want addition to be fast, then (2) is the thing to do. If you have a 64 bit type, then you can also do multiplication without decomposing: unsigned int result = (((uint64_t)x) * y) >> 16.
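A hedged sketch of option (2), with purely illustrative names, using a single uint32_t as the 16.16 representation and decomposing only when needed:
#include <stdint.h>

typedef uint32_t Fixed16_16;   /* illustrative name for option (2) */

static inline Fixed16_16 fixed_make(uint16_t whole, uint16_t frac)
{
    return ((uint32_t)whole << 16) | frac;
}

static inline uint16_t fixed_floor(Fixed16_16 f)    { return (uint16_t)(f >> 16); }
static inline uint16_t fixed_fraction(Fixed16_16 f) { return (uint16_t)(f & 0xFFFFu); }

/* Addition is a plain integer add (wraps modulo 2^32 on overflow). */
static inline Fixed16_16 fixed_add(Fixed16_16 a, Fixed16_16 b) { return a + b; }

/* Multiplication via a 64-bit intermediate, as suggested above. */
static inline Fixed16_16 fixed_mul(Fixed16_16 a, Fixed16_16 b)
{
    return (Fixed16_16)(((uint64_t)a * b) >> 16);
}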
The nasty hack, by the way, would be this:
unsigned int val;
assert(sizeof(Fixed) == sizeof(unsigned int));              // could be a static test
assert(2 * sizeof(unsigned short) == sizeof(unsigned int)); // could be a static test
memcpy(&val, &x, sizeof(unsigned int));
That would work on a big-endian system, where Fixed has no padding (and the integer types have no padding bits). On a little-endian system you'd need the members of Fixed to be in the other order, which is why it's nasty. Sometimes casting through memcpy is the right thing to do (in which case it's a "trick" rather than a "nasty hack"). This just isn't one of those times.
If you have to, you can use a union, but beware of endian issues. You might find the arithmetic doesn't work, and it certainly is not portable.
typedef struct Fixed_t {
    union {
        struct { unsigned short floor; unsigned short fraction; };
        unsigned int whole;
    };
} Fixed;
which is more likely (I think) to work on big-endian systems (which Windows/Intel isn't).
Some magic:
typedef union Fixed {
    uint16_t w[2];
    uint32_t d;
} Fixed;
#define Floor w[((Fixed){1}).d==1]
#define Fraction w[((Fixed){1}).d!=1]
Key points:
I use fixed-size integer types so you're not depending on short being 16-bit and int being 32-bit.
The macros for Floor and Fraction (capitalized to avoid clashing with floor() function) access the two parts in an endian-independent way, as foo.Floor and foo.Fraction.
Edit: At OP's request, an explanation of the macros:
Unions are a way of declaring an object consisting of several different overlapping types. Here we have uint16_t w[2]; overlapping uint32_t d;, making it possible to access the value as 2 16-bit units or 1 32-bit unit.
(Fixed){1} is a compound literal, and could be written more verbosely as (Fixed){{1,0}}. Its first element (uint16_t w[2];) gets initialized with {1,0}. The expression ((Fixed){1}).d then evaluates to the 32-bit integer whose first 16-bit half is 1 and whose second 16-bit half is 0. On a little-endian system, this value is 1, so ((Fixed){1}).d==1 evaluates to 1 (true) and ((Fixed){1}).d!=1 evaluates to 0 (false). On a big-endian system, it'll be the other way around.
Thus, on a little-endian system, Floor is w[1] and Fraction is w[0]. On a big-endian system, Floor is w[0] and Fraction is w[1]. Either way, you end up storing/accessing the correct half of the 32-bit value for the endian-ness of your platform.
In theory, a hypothetical system could use a completely different representation for 16-bit and 32-bit values (for instance interleaving the bits of the two halves), breaking these macros. In practice, that's not going to happen. :-)
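A small self-contained check of those macros (the printed value is the same on either endianness, which is the point):
#include <stdio.h>
#include <stdint.h>

typedef union Fixed {
    uint16_t w[2];
    uint32_t d;
} Fixed;

#define Floor    w[((Fixed){1}).d==1]
#define Fraction w[((Fixed){1}).d!=1]

int main(void)
{
    Fixed x = { .d = 0 };
    x.Floor = 3;            /* integer part          */
    x.Fraction = 0x8000;    /* 0.5 in 16.16 notation */
    printf("0x%08x\n", (unsigned)x.d);   /* 0x00038000 regardless of endianness */
    return 0;
}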
This is not possible portably, as the compiler does not guarantee a Fixed will use the same amount of space as an int. The right way is to define a function Fixed add(Fixed a, Fixed b).
Just add the pieces separately. You need to know the value of the fraction that means "1" - here I'm calling that FRAC_MAX:
// c = a + b
void fixed_add( Fixed* a, Fixed* b, Fixed* c){
    unsigned short carry = 0;
    if((int)(a->fraction) + (int)(b->fraction) >= FRAC_MAX){
        carry = 1;
        c->fraction = a->fraction + b->fraction - FRAC_MAX;
    } else {
        c->fraction = a->fraction + b->fraction;
    }
    c->floor = a->floor + b->floor + carry;
}
Alternatively, if you're just setting the fixed point as being at the 2 byte boundary you can do something like:
void fixed_add( Fixed* a, Fixed *b, Fixed *c){
    unsigned int ia = ((unsigned int)a->floor << 16) + a->fraction;
    unsigned int ib = ((unsigned int)b->floor << 16) + b->fraction;
    unsigned int ic = ia + ib;
    c->floor = ic >> 16;
    c->fraction = ic & 0xFFFF;
}
Try this:
typedef union {
    struct Fixed_t {
        unsigned short floor;    //left side of the decimal point
        unsigned short fraction; //right side of the decimal point
    } Fixed;
    int Fixed_int;
} FixedUnion;                    /* the typedef needs a name; FixedUnion is just a placeholder */
If your compiler puts the two shorts in 4 bytes, then you can use memcpy to copy your int into your struct, but as said in another answer, this is not portable... and quite ugly.
Do you really mind adding each field separately in a dedicated function?
Do you want to keep the integer representation for performance reasons?
// add two Fixed
Fixed operator+( Fixed a, Fixed b )
{
    ...
}

// add Fixed and int
Fixed operator+( Fixed a, int b )
{
    ...
}
You may cast any addressable type to another one by using:
*(newtype *)&var
