I'm using a 32-bit variable to store four 8-bit values in one 32-bit value.
32_bit_buf[0] = cmd[9] << 16 | cmd[10] << 8 | cmd[11] << 0;
cmd is an array of unsigned char holding the data:
cmd[9]  = 0xAA
cmd[10] = 0xBB
cmd[11] = 0xCC
However, when the 32-bit variable is printed, I get 0xFFFFBBCC.
Architecture: 8-bit AVR Xmega
Language: C
Can anyone figure out where I'm going wrong?
Your architecture uses a 16-bit int, so shifting by 16 places is undefined behavior. Cast cmd[9] to a wider type first, e.g. (uint32_t)cmd[9] << 16 should work.
You should also apply this cast to the other operands: when you shift cmd[10] by 8 places, you can shift into the sign bit of the 16-bit signed int that your operands are automatically promoted to, leading to more strange/undefined behavior.
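Putting both casts together, a minimal corrected sketch (note that 32_bit_buf is not a valid C identifier, since identifiers cannot begin with a digit, so buf32 is used here; the function name is illustrative):
#include <stdint.h>

void pack(uint32_t buf32[], const unsigned char cmd[]) {
    /* cast each byte to uint32_t so every shift happens in a 32-bit type,
       not in the promoted 16-bit int */
    buf32[0] = (uint32_t)cmd[9] << 16 | (uint32_t)cmd[10] << 8 | (uint32_t)cmd[11];
}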
That happens because you are trying to shift a value held in an 8-bit container (unsigned char) and get a 32-bit result. The 8-bit value is promoted to int (16 bits here), but that is still not wide enough. You can solve the issue in many ways; one of them, for example, is to use the destination variable as an accumulator:
32_bit_buf[0] = cmd[9];
32_bit_buf[0] <<= 8;
32_bit_buf[0] |= cmd[10];
32_bit_buf[0] <<= 8;
32_bit_buf[0] |= cmd[11];
Related
I'm a bit troubled by this code:
typedef struct _slink {
    struct _slink* next;
    char type;
    void* data;
} slink;
Assume this describes a link in a file, where data is 4 bytes long and represents either an address or an integer (depending on the type of the link).
Now I'm looking at reformatting numbers in the file from little-endian to big-endian, so what I want to do is change the order of the bytes before writing back to the file. That is,
for 0x01020304, I want to convert it to 0x04030201, so that when I write it back, its little-endian representation will look like the big-endian representation of 0x01020304. I do that by multiplying the i'th byte by 2^(8*(3-i)), where i is between 0 and 3. This is one way it was implemented, and what troubles me here is that it shifts bytes by more than 8 bits (L is of type _slink*):
int data = (((unsigned char*)&L->data)[0] << 24) + (((unsigned char*)&L->data)[1] << 16) +
           (((unsigned char*)&L->data)[2] << 8) + (((unsigned char*)&L->data)[3] << 0);
Can anyone please explain why this actually works without the bytes having been explicitly cast to integers first (since they're only 1 byte each but are shifted by up to 24 bits)?
Thanks in advance.
Any integer type smaller than int is promoted to type int when used in an expression.
So the shift is actually applied to an expression of type int instead of type char.
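A quick way to see the promotion in action (a minimal sketch; the sizes shown are typical for a platform with a 32-bit int):
#include <stdio.h>

int main(void) {
    unsigned char c = 0x12;
    /* c is promoted to int before the shift, so the result has type int */
    printf("sizeof c = %zu, sizeof (c << 1) = %zu\n",
           sizeof c, sizeof(c << 1));   /* typically prints 1 and 4 */
    return 0;
}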
Can anyone please explain why this actually works?
The shift does not occur on an unsigned char but on a value promoted to int¹ (see @dbush's answer).
Reasons why the code still has issues:
32-bit int
Shifting a 1 into the sign bit is undefined behavior (UB). See also @Eric Postpischil's comment.
(((unsigned char*)&L->data)[0] << 24) // UB
16-bit int
Shifting by the bit width or more is undefined behavior even if the type were unsigned, and a 16-bit int lacks the precision to hold the result in any case. Perhaps OP would then have wanted only a 2-byte endian swap?
Alternative
const uint8_t *p = (const uint8_t *)&L->data;
uint32_t data = (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
                (uint32_t)p[2] << 8  | (uint32_t)p[3] << 0;
For the pedantic
Had int used a non-2's-complement representation, adding a negative value from (((unsigned char*)&L->data)[0] << 24) would have messed up the bit pattern. Endian manipulations are best done using unsigned types.
from little-endian to big-endian
This code does not swap between those 2 endians. It is a big-endian-to-native-endian conversion. When this code runs on a 32-bit little-endian machine with unsigned types, it is effectively a big/little swap. On a 32-bit big-endian machine, it could have been a no-op.
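A minimal demonstration of that point: reassembling the same four bytes with shifts yields the same value on any host, so the expression converts big-endian bytes to the native representation rather than swapping between the two endians.
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t p[4] = {0x01, 0x02, 0x03, 0x04};   /* a big-endian byte stream */
    uint32_t v = (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
                 (uint32_t)p[2] << 8  | (uint32_t)p[3];
    printf("0x%08lX\n", (unsigned long)v);     /* prints 0x01020304 on any host */
    return 0;
}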
¹ ... or possibly an unsigned int on select platforms where UCHAR_MAX > INT_MAX.
The problem is simple:
Take a 32-bit or 64-bit integer and split it up to send over a (usually) 1-byte-wide interface like UART, SPI or I2C.
To do this I can easily use bit masking and shifting to get what I want. However, I want this to be portable: it should work on both big- and little-endian platforms, and also on platforms that don't discard shifted-out bits but rotate them through carry (masking gets rid of the excess bits, right?).
Example code:
uint32_t value;
uint8_t buffer[4];
buffer[0] = (value >> 24) & 0xFF;
buffer[1] = (value >> 16) & 0xFF;
buffer[2] = (value >> 8) & 0xFF;
buffer[3] = value & 0xFF;
I want to guarantee this works on any platform that supports 32 bit integers or more. I don't know if this is correct.
The code you presented is the most portable way of doing it. You convert a single unsigned integer value 32 bits wide into an array of unsigned integer values exactly 8 bits wide each. The resulting bytes in the buffer array are in big-endian order.
The masking is not needed. From C11 6.5.7p5:
The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type or if E1 has
a signed type and a nonnegative value, the value of the result is the integral part of the quotient of
E1 / 2^E2.
and converting to an integer of 8-bit width is (value-wise) equal to masking with 8 bits. So (value >> 24) & 0xff is equal, value-wise, to (uint8_t)(value >> 24). Since you assign to a uint8_t element, the masking is not needed; in any case it is safe to assume a sane compiler would optimize it out.
I recommend taking a look at one implementation I remember, which I believe safely implements all the variants of splitting and composing fixed-width integers up to 64 bits from bytes and back: gpsd's bits.h.
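For completeness, the portable inverse, reassembling the value from the received bytes (a minimal sketch in the same style as the question's code; the function name is illustrative):
#include <stdint.h>

uint32_t unpack_be32(const uint8_t buffer[4]) {
    /* cast each byte to uint32_t so the shifts happen in a 32-bit
       unsigned type regardless of the platform's int width */
    return (uint32_t)buffer[0] << 24 | (uint32_t)buffer[1] << 16 |
           (uint32_t)buffer[2] << 8  | (uint32_t)buffer[3];
}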
I am kinda new to bit operations. I am trying to store information in an int64_t variable like this:
int64_t u = 0;
int i;
for (i = 0; i < 44; i++)
    u |= 1 << i;
for (; i < 64; i++)
    u |= 0 << i;
int t = __builtin_popcountl(u);
What I intended was to store 44 ones in variable u and make sure that the remaining positions are all 0, so that t returns 44. However, it always returns 64. With other widths, e.g. int32, it also fails. Why?
The type of an expression is generally determined by the expression itself, not by the context in which it appears.
Your variable u is of type int64_t (incidentally, uint64_t would be better since you're performing bitwise operations).
In this line:
u |= 1 << i;
since 1 is of type int, 1 << i is also of type int. If, as is typical, int is 32 bits, this has undefined behavior for larger values of i.
If you change this line to:
u |= (uint64_t)1 << i;
it should do what you want.
You could also change the 1 to 1ULL. That gives it a type of unsigned long long, which is guaranteed to be at least 64 bits but is not necessarily the same type as uint64_t.
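Under that change, a minimal fixed version of the snippet (assuming a GCC-compatible compiler; the second loop is dropped, since ORing with zero has no effect, as other answers note):
#include <stdint.h>

int main(void) {
    int64_t u = 0;
    for (int i = 0; i < 44; i++)
        u |= (uint64_t)1 << i;          /* shift performed in a 64-bit type */
    int t = __builtin_popcountll(u);    /* t == 44 */
    return t == 44 ? 0 : 1;
}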
__builtin_popcountl takes unsigned long as its parameter, which is not always a 64-bit integer. I personally use __builtin_popcountll, which takes unsigned long long. It looks like that is not the problem in your case, though.
Integer constants have type int by default, and shifting an int by anything greater than or equal to 32 (to be precise, the width of int in bits) gives undefined behavior. Correct usage: u |= 1LL << i; here the LL suffix gives the constant the type long long.
ORing with zero does nothing. You can't just assign a bit a particular value directly: you either OR with a mask (if you want to set some bits to 1) or AND with the mask's negation (if you want to clear some bits to 0); negation is done with the tilde (~).
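For example, with the 64-bit variable from the question:
u |=  (uint64_t)1 << i;    /* OR with a mask: set bit i to 1 */
u &= ~((uint64_t)1 << i);  /* AND with its negation: clear bit i to 0 */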
Because your literal 1 is a signed 32-bit int by default, once i reaches 31 you shift into the sign bit of the 32-bit value. When that negative value is converted to 64 bits, the sign bit extends through the upper 32 bits, which you then OR in, setting all 64 bits. The shift itself cannot affect the upper 32 bits, because the value is only 32 bits wide; the conversion to 64-bit does, when the value being converted is negative.
This can be fixed by writing your first loop like this:
for(i=0;i<44;i++)
u |= (int64_t)1 << i;
Moreover, this loop does nothing since ORing with 0 will not alter the value:
for(;i<64;i++)
u |= 0 << i;
Just trying to make sure I got it right.
On SO I encountered an answer to a question about how to store chars in an int, like this:
unsigned int final = 0;
final |= ( data[0] << 24 );
final |= ( data[1] << 16 );
final |= ( data[2] << 8 );
final |= ( data[3] );
But to my understanding this is wrong, isn't it?
Why: say data stores the integer in a little-endian way (e.g., data[0] = LSB of some int).
Then if the machine executing the above code is little-endian, final will hold the correct value,
but if the machine running the above code is big-endian, it will hold a wrong value, won't it?
Just trying to make sure I got this right, I am not going to ask more question in this direction for now.
Do not do this by hand when you have functions like htonl etc. They take the hassle out of things.
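A minimal sketch of that approach (htonl converts a 32-bit value from host byte order to network, i.e. big-endian, order and is declared in <arpa/inet.h> on POSIX systems; the function name here is illustrative):
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

void pack_be(unsigned char buffer[4], uint32_t value) {
    uint32_t be = htonl(value);      /* big-endian representation */
    memcpy(buffer, &be, sizeof be);  /* buffer[0] == MSB on any host */
}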
This code does not depend on the endianness of the platform: data[0] is always stored as the most significant byte of the int, followed by the rest, and data[3] is always the least significant byte.
Whether that's "right" or "wrong" depends on how the integer has been encoded in the data array itself.
There is one problem though: if data has been declared using char rather than unsigned char and char is signed on your platform, a negative data[i] will be sign-extended when promoted to int, and you end up setting many more bits than you intended.
This is wrong on both little- and big-endian systems.
If the data elements are of type char, you need to cast each element to unsigned char before the bitwise left shift; otherwise you may get sign extension on elements holding negative values. The signedness of char is implementation-defined, and char can be a signed type.
Also, data[0] << 24 (or even (unsigned char)data[0] << 24) invokes undefined behavior whenever the shifted byte lands in the sign bit, because the result is then not representable in an int; you need a further cast to unsigned int.
The best approach is to declare data as an array of unsigned char and then cast each element to unsigned int before the left shift.
Assuming you cast correctly, this will produce the intended value only if data[0] holds the most significant byte of your value.
Besides the obvious problem of platform-specific byte-ordering (which other answers have addressed), you should be careful about promotion of data types.
I'm assuming that data is an array of type unsigned char. In that case, the expression
data[0] << 24
operates on data[0] after integer promotion to int. If int is 32 bits, a byte value of 0x80 or more gets shifted into the sign bit, which is undefined behavior. At best, it leaves too much to interpretation. A safer, more explicit way to do this is to bit-wise OR first, then shift:
final |= data[0]; final <<= 8;
final |= data[1]; final <<= 8;
final |= data[2]; final <<= 8;
final |= data[3];
or you could promote explicitly and then shift:
final |= ((unsigned int)data[0]) << 24;
final |= ((unsigned int)data[1]) << 16;
final |= ((unsigned int)data[2]) << 8;
final |= ((unsigned int)data[3]);
Of course, this doesn't deal with the endianness problem at all. But that may or may not be a problem, depending on where data came from.
I have an image captured at 8 bits per sample. I'm looking to convert the 8-bit values to 16-bit. I used the following:
short temp16 = (short)val[i] << 8;
where val is an array of 8-bit samples.
The above statement produces a noisy result.
Can anybody suggest a method for 8-bit to 16-bit conversion?
Pure bit shifting won't give you pure white: 0xff << 8 == 0xff00, not 0xffff as expected.
One trick is to use (val[i] << 8) + val[i] (the parentheses matter, since + binds tighter than <<) and to remember proper data types (size, signedness). That way you get 0x00 -> 0x0000 and 0xff -> 0xffff.
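A minimal sketch of that trick, assuming val[] holds unsigned 8-bit samples and out[] receives the widened 16-bit samples (the names and signature are illustrative):
#include <stdint.h>
#include <stddef.h>

void widen8to16(const uint8_t *val, uint16_t *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        /* replicate the byte into both halves: 0x00 -> 0x0000, 0xFF -> 0xFFFF */
        out[i] = (uint16_t)((val[i] << 8) | val[i]);
    }
}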
Is val[] signed or unsigned 8-bit? Cast it to unsigned (assuming you've got the usual convention of 0 = darkest, 255 = brightest), then convert to signed short (I assume that's what you want, since plain short is signed by default).
A good example is given here: Converting two uint8_t words into one of uint16_t and one of uint16_t into two of uint8_t.