I have an image captured at 8 bits per sample. I'm looking to convert the 8-bit values to 16-bit. I used the following:
short temp16 = (short)val[i] << 8 ;
where val is an array of 8-bit samples.
The above statement makes the image noisy.
Can anybody suggest a method for 8-bit to 16-bit conversion?
Pure bit-shifting won't give you pure white: 0xff << 8 == 0xff00, not 0xffff as expected.
One trick is to use (val[i] << 8) + val[i] (note the parentheses; << binds less tightly than +) and to remember proper data types (size, signedness). That way you get 0x00 -> 0x0000 and 0xff -> 0xffff.
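A minimal sketch of that bit-replication approach (the function and buffer names are mine, assuming val holds unsigned 8-bit samples and the output is unsigned 16-bit):

#include <stddef.h>
#include <stdint.h>

/* Expand 8-bit samples to 16-bit by replicating each byte into both
   halves: 0x00 -> 0x0000, 0x80 -> 0x8080, 0xFF -> 0xFFFF. */
void expand_8_to_16(const uint8_t *val, uint16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* Convert to unsigned before shifting so the shift stays well
           defined even where int is only 16 bits wide. */
        out[i] = (uint16_t)(((unsigned)val[i] << 8) | val[i]);
    }
}

This maps 0..255 onto 0..65535 exactly instead of topping out at 0xff00.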
Is val[] signed or unsigned 8-bit? Cast it to unsigned (assuming you've got the usual convention of 0 = darkest, 255 = brightest), then cast it to signed short (I assume that's what you want, since plain short is signed by default).
A good example is given here: Converting two uint8_t words into one of uint16_t and one of uint16_t into two of uint8_t.
I'm a bit troubled by this code:
typedef struct _slink {
    struct _slink* next;
    char type;
    void* data;
} _slink;
assuming what this describes is a link in a file, where data is 4 bytes long, representing either an address or an integer (depending on the type of the link).
Now I'm looking at reformatting numbers in the file from little-endian to big-endian, so what I want to do is change the order of the bytes before writing back to the file, i.e.
for 0x01020304, I want to convert it to 0x04030201, so that when I write it back, its little-endian representation will look like the big-endian representation of 0x01020304. I do that by multiplying the i'th byte by 2^(8*(3-i)), where i is between 0 and 3. Now this is one way it was implemented, and what troubles me here is that it shifts bytes by more than 8 bits (L is of type _slink*):
int data = (((unsigned char*)&L->data)[0] << 24) + (((unsigned char*)&L->data)[1] << 16) +
           (((unsigned char*)&L->data)[2] << 8)  + (((unsigned char*)&L->data)[3] << 0);
Can anyone please explain why this actually works without these bytes having been explicitly cast to integers to begin with (since they're only 1 byte each but are shifted by up to 24 bits)?
Thanks in advance.
Any integer type smaller than int is promoted to type int when used in an expression.
So the shift is actually applied to an expression of type int instead of type char.
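A small sketch of that promotion (variable names are mine; it assumes a 32-bit int): the expression b << 24 has type int, not unsigned char, which is why sizeof reports the size of an int.

#include <stdio.h>

int main(void)
{
    unsigned char b = 0x12;

    /* b is promoted to int before the shift, so the result is not an
       8-bit value. (With a byte >= 0x80 and a 32-bit int, a shift by 24
       would reach the sign bit -- see the caveats below.) */
    printf("sizeof(b << 24) = %zu, sizeof(int) = %zu\n",
           sizeof(b << 24), sizeof(int));
    printf("b << 24 = 0x%X\n", (unsigned)(b << 24));
    return 0;
}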
Can anyone please explain why this actually works?
The shift does not occur on an unsigned char but on a value promoted to an int¹. [@dbush]
Reasons why the code still has issues:
32-bit int
Shifting a 1 into the sign bit of an int is undefined behavior (UB). See also [@Eric Postpischil].
(((unsigned char*)&L->data)[0] << 24) // UB if the byte is >= 0x80
16-bit int
With a 16-bit int there is not enough width even if the type were unsigned, and shifting by the bit width or more is UB just like above. Perhaps the OP would then have wanted only a 2-byte endian swap?
Alternative
const uint8_t *p = (const uint8_t *)&L->data;
uint32_t data = (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
                (uint32_t)p[2] << 8  | (uint32_t)p[3] << 0;
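Wrapped in a helper (my naming) and fed a known byte pattern, the same expression reads four bytes as big-endian regardless of the host's byte order; a rough sketch:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Assemble 4 bytes into a 32-bit value, treating them as big-endian. */
static uint32_t load_be32(const void *src)
{
    const uint8_t *p = src;
    return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
           (uint32_t)p[2] << 8  | (uint32_t)p[3] << 0;
}

int main(void)
{
    const uint8_t bytes[4] = { 0x01, 0x02, 0x03, 0x04 };
    printf("0x%08" PRIX32 "\n", load_be32(bytes));   /* prints 0x01020304 */
    return 0;
}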
For the pedantic
Had int used a non-2's-complement representation, the addition of a negative value from (((unsigned char*)&L->data)[0] << 24) would have messed up the data pattern. Endian manipulations are best done using unsigned types.
"from little-endian to big-endian"
This code does not swap between those two endiannesses. It is a big-endian-to-native-endian conversion. When this code is run on a machine with a 32-bit little-endian unsigned int, it is effectively a big/little swap. On a 32-bit big-endian machine, it could have been a no-op.
¹ ... or possibly an unsigned on select platforms where UCHAR_MAX > INT_MAX.
I'm currently trying to write a Ninja VM, a task which was given to me as an exercise. The opcode and value are saved in a 32-bit unsigned int, with the first 8 bits being the opcode and the rest being the value. To deal with a negative value, I use a macro given to me with the exercise:
#define SIGN_EXTEND(i) ((i) & 0x00800000 ? (i) | 0xFF000000 : (i))
However, when trying to retrieve the opcode of
unsigned int i = (PUSHC << 24) | SIGN_EXTEND(-2); (where PUSHC equals 1)
by using a bit-wise AND as seen here:
int opcode = (i >> 24) & 0xff;
my result is 255, although this extraction method works when the value is positive and produces the correct opcode in those cases.
While researching this, I found "How I get the value from the Immediate part of a 32 Bit sequence in C?", which seemingly deals with what I'm asking; however, the answer given in that question is exactly what I'm doing. Does anyone know more?
SIGN_EXTEND looks like it is intended to be used in decoding: to take the 24 bits that have been extracted from an instruction and convert them to a 32-bit signed integer. It does not look like it is intended to be used in encoding. For encoding, IMMEDIATE(x) appears to be the intended macro: given a 32-bit signed integer, it produces a 24-bit signed integer, provided the value fits.
So:
unsigned int i = (PUSHC << 24) | SIGN_EXTEND(-2);
should be:
unsigned int i = (PUSHC << 24) | IMMEDIATE(-2);
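A sketch of both directions; the definition of IMMEDIATE below (a plain 24-bit mask) is an assumption, since the exercise's own header may define it differently, while SIGN_EXTEND is taken from the question:

#include <stdio.h>

#define PUSHC 1

/* Assumed encoder: keep only the low 24 bits of the immediate. */
#define IMMEDIATE(x)   ((x) & 0x00FFFFFF)

/* Decoder from the question: restore the sign of a 24-bit immediate. */
#define SIGN_EXTEND(i) ((i) & 0x00800000 ? (i) | 0xFF000000 : (i))

int main(void)
{
    unsigned int i = (PUSHC << 24) | IMMEDIATE(-2);   /* encode */

    int opcode = (i >> 24) & 0xff;                    /* 1: the opcode survives */
    int value  = SIGN_EXTEND(i & 0x00FFFFFF);         /* -2: sign restored on decode */

    printf("opcode = %d, value = %d\n", opcode, value);
    return 0;
}

Using SIGN_EXTEND at encode time sets the top 8 bits of the word, which is why the opcode field came back as 255.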
I'm using a 32-bit variable for storing 4 8-bit values in one 32-bit value.
32_bit_buf[0] = cmd[9] << 16 | cmd[10] << 8 | cmd[11] << 0;
cmd is of unsigned char type with the data
cmd[9]  = 0xAA
cmd[10] = 0xBB
cmd[11] = 0xCC
However, when the 32-bit variable is printed, I get 0xFFFFBBCC.
Architecture: 8-bit AVR XMEGA
Language: C
Can anyone figure out where I'm going wrong?
Your architecture uses a 16-bit int, so shifting by 16 places is undefined. Cast cmd[9] to a wider type, e.g. (uint32_t)cmd[9] << 16 should work.
You should also apply this cast to the other operands: when you shift cmd[10] by 8 places, you could shift into the sign bit of the 16-bit signed int your operands are automatically promoted to, leading to more strange/undefined behavior.
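A sketch of the fixed expression (the destination name in the question is not a valid C identifier, so a placeholder helper is used here):

#include <stdint.h>

/* Pack three bytes into the low 24 bits of a 32-bit value. Casting each
   operand to uint32_t keeps every shift inside a 32-bit unsigned type,
   so nothing depends on the AVR's 16-bit int promotion. */
uint32_t pack24(const unsigned char *cmd)
{
    return (uint32_t)cmd[9]  << 16 |
           (uint32_t)cmd[10] << 8  |
           (uint32_t)cmd[11] << 0;
}
/* With cmd[9] = 0xAA, cmd[10] = 0xBB, cmd[11] = 0xCC this yields 0x00AABBCC. */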
That is because you are trying to shift a value held in an 8-bit container (unsigned char) and expecting a 32-bit result. The 8-bit value is promoted to int (16 bits here), but that is still not enough. You can solve the issue in many ways; one of them, for example, is to use the destination variable as an accumulator.
32_bit_buf[0] = cmd[9];
32_bit_buf[0] <<= 8;
32_bit_buf[0] |= cmd[10];
32_bit_buf[0] <<= 8;
32_bit_buf[0] |= cmd[11];
Could someone explain to me what can happen if I forget the suffix (postfix) on constants (literals) in ANSI C?
For example, I have seen defines like these used for bit-shift operations:
#define AAR_INTENSET_NOTRESOLVED_Pos (2UL) /*!< Position of NOTRESOLVED field. */
#define AAR_INTENSET_NOTRESOLVED_Msk (0x1UL << AAR_INTENSET_NOTRESOLVED_Pos) /*!< Bit mask of NOTRESOLVED field. */
#define AAR_INTENSET_NOTRESOLVED_Disabled (0UL) /*!< Interrupt disabled. */
#define AAR_INTENSET_NOTRESOLVED_Enabled (1UL) /*!< Interrupt enabled. */
#define AAR_INTENSET_NOTRESOLVED_Set (1UL) /*!< Enable interrupt on write. */
They are used on a 32-bit architecture, but the code could be ported to 16-bit or 8-bit.
What can happen if the UL postfix is not used and I use these macros for bit-shift operations as intended?
I just assume that, e.g., on an 8-bit architecture (1 << 30) can lead to overflow.
EDIT: I have found nice link: http://dystopiancode.blogspot.cz/2012/08/constant-suffixes-and-prefixes-in-ansi-c.html
But is it safe to use suffixes if the code is supposed to be ported to various architectures?
For instance, the suffix U represents unsigned int, which is usually 16-bit on an 8-bit architecture but 32-bit on a 32-bit one, so 0xFFFFAAAAU is OK for a 32-bit compiler but not for an 8-bit compiler, right?
A decimal number like -1, 1, 2, 12345678, etc. without any suffix will get the smallest type it fits in, starting with int, then long, then long long.
An octal or hex number like 0, 0123, 0x123, 0X123 without any suffix will get the smallest type it fits in, starting with int, unsigned, long, unsigned long, long long, unsigned long long.
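A C11 sketch (the TYPE_NAME macro is my own) that prints which type an unsuffixed constant actually receives on the implementation at hand:

#include <stdio.h>

/* Maps an expression to the name of its type (C11 _Generic). */
#define TYPE_NAME(x) _Generic((x),                  \
    int: "int",                                     \
    unsigned int: "unsigned int",                   \
    long: "long",                                   \
    unsigned long: "unsigned long",                 \
    long long: "long long",                         \
    unsigned long long: "unsigned long long",       \
    default: "other")

int main(void)
{
    puts(TYPE_NAME(12345678));    /* int with a 32-bit int, long with a 16-bit int */
    puts(TYPE_NAME(0xFFFFAAAA));  /* unsigned int with a 32-bit int, unsigned long otherwise */
    puts(TYPE_NAME(0x1));         /* int */
    puts(TYPE_NAME(0x1UL));       /* unsigned long, regardless of platform */
    return 0;
}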
The following is a potential problem should AAR_INTENSET_NOTRESOLVED_Pos exceed 31. Note: unsigned long must be at least 32 bits. It would result in 0** if unsigned long were 32 bits, but non-zero if it were longer.
(0x1UL << AAR_INTENSET_NOTRESOLVED_Pos)
The following is a similar potential problem should AAR_INTENSET_NOTRESOLVED_Pos exceed 15, since unsigned/int need only be at least 16 bits. Also, if unsigned/int is 16 bits (the minimum), 0x1 without a suffix will be an int, so without explicitly using U, 0x1 could be a problem even if AAR_INTENSET_NOTRESOLVED_Pos == 15. [@Matt McNabb]
(0x1 << AAR_INTENSET_NOTRESOLVED_Pos)
Bitwise shift operators
"The integer promotions are performed on each of the operands. The type of the result is that of the promoted left operand. If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined." C11dr ยง6.5.7 3
Machine width is not the key issue. An 8-bit or 16-bit machine could use a 16-bit, 32-bit, etc. int. Again, 16 bits is the minimum int size for a compliant C compiler.
[Edit] ** I should have said: it (shifting by more than 31 bits) would result in undefined behavior, UB, if unsigned long were 32 bits.
It can break.
It might be better to include the cast in the code itself, i.e.
uint32_t something = (uint32_t) AAR_INTENSET_NOTRESOLVED_Set << 30;
This makes it work even if the #define for the constant is simply an integer.
I'm trying to read a binary file into a C# struct. The file was created from C, and the following code produces 2 of the bytes in each 50+ byte row.
unsigned short nDayTimeBitStuffed = atoi( LPCTSTR( strInput) );
unsigned short nDayOfYear = (0x01FF & nDayTimeBitStuffed);
unsigned short nTimeOfDay = (0x01F & (nDayTimeBitStuffed >> 9) );
Binary values on the file are 00000001 and 00000100.
The expected values are 1 and 2, so I think some bit ordering/swapping is going on, but I'm not sure.
Any help would be greatly appreciated.
Thanks!
The answer is 'it depends' - most notably on the machine, and also on how the data is written to the file. Consider:
unsigned short x = 0x0102;
write(fd, &x, sizeof(x));
On some machines (Intel), the low-order byte (0x02) will be written before the high-order byte (0x01); on others (PPC, SPARC), the high-order byte will be written before the low-order one.
So, from a little-endian (Intel) machine, you'd see the bytes:
0x02 0x01
But from a big-endian (PPC) machine, you'd see the bytes:
0x01 0x02
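A quick sketch that makes the write order visible without a file, assuming a 16-bit unsigned short, by viewing the value through an unsigned char pointer:

#include <stdio.h>

int main(void)
{
    unsigned short x = 0x0102;
    const unsigned char *p = (const unsigned char *)&x;

    /* Little-endian hosts print "02 01"; big-endian hosts print "01 02". */
    printf("%02X %02X\n", p[0], p[1]);
    return 0;
}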
Your bytes appear to be 0x01 and 0x04. Your calculation for 0x02 appears flawed.
The C code you show doesn't write anything. The value in nDayOfYear is the bottom 9 bits of the input value; the nTimeOfDay appears to be the next 5 bits (so 14 of the 16 bits are used).
For example, if the value in strInput is 12141 decimal, 0x2F6D, then the value in nDayOfYear would be 365 (0x16D) and the value in nTimeOfDay would be 23 (0x17).
It is a funny storage order; you can't simply compare two packed values, whereas if you packed the day of year into the more significant portion of the value and the time into the less significant portion, then you could compare values as simple integers and get the correct comparison.
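A sketch of that last point, with hypothetical helper names and the fields reversed relative to the file's layout (day of year in the high bits, 5-bit time of day in the low bits), so packed values compare chronologically as plain integers:

#include <stdio.h>

/* Hypothetical layout: day of year in the high bits, time of day (5 bits)
   in the low bits -- the opposite of the file's packing. */
static unsigned pack_day_time(unsigned day_of_year, unsigned time_of_day)
{
    return (day_of_year << 5) | (time_of_day & 0x1F);
}

int main(void)
{
    unsigned a = pack_day_time(100, 30);   /* day 100, late in the day  */
    unsigned b = pack_day_time(101, 2);    /* day 101, early in the day */

    printf("%s\n", a < b ? "a before b" : "b before a");   /* prints "a before b" */
    return 0;
}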
The expected file contents are very much related to the processor and compiler used to create the file, if it's binary.
I'm assuming a Windows machine here, which uses 2 bytes for a short and puts them in little endian order.
Your comments don't make much sense either. If it's two bytes then it should be using two chars, not shorts. The range of the first is going to be 1-365, so it definitely needs more than a single byte to represent. I'm going to assume you want the first 4 bytes, not the first 2.
This means that the first byte will be bits 0-7 of the DayOfYear, the second byte will be bits 8-15 of the DayOfYear, the third byte will be bits 0-7 of the TimeOfDay, and the fourth byte will be bits 8-15 of the TimeOfDay.