I recently found out BMP images can have a negative height, meaning the pixels are stored top-to-bottom in the file. Our production code rejected a file from a user because it interpreted the height as 4294966272 (an unsigned 32-bit value), while it really was -1024 (a signed 32-bit value), actually meaning a height of 1024 pixels. The most "official" documentation of the BITMAPINFOHEADER format I managed to find is on Wikipedia, which says the width is also a signed integer. How would one correctly validate a BMP input?
Since the BMP format has been used in Windows since version 2.0, I would treat MSDN as the most "official" documentation. In MSDN we find the following definition of BITMAPINFOHEADER:
typedef struct tagBITMAPINFOHEADER {
    DWORD biSize;
    LONG  biWidth;
    LONG  biHeight;
    WORD  biPlanes;
    WORD  biBitCount;
    DWORD biCompression;
    DWORD biSizeImage;
    LONG  biXPelsPerMeter;
    LONG  biYPelsPerMeter;
    DWORD biClrUsed;
    DWORD biClrImportant;
} BITMAPINFOHEADER, *PBITMAPINFOHEADER;
As you can see, biHeight is declared as LONG, which on Windows is a 32-bit signed integer (and the same holds for biWidth). If your code is not reading this value as signed, that is where the bug is.
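For example, here is a minimal validation sketch along those lines (the MAX_DIMENSION limit and the function name are made up for illustration; a negative biHeight simply marks a top-down bitmap, so only its magnitude matters):

#include <stdint.h>
#include <stdbool.h>

#define MAX_DIMENSION 32768   /* hypothetical application-specific limit */

static bool validate_bmp_dimensions(int32_t biWidth, int32_t biHeight)
{
    /* biWidth must be positive; biHeight may be negative (top-down DIB),
       so take its magnitude before range-checking. */
    if (biWidth <= 0)
        return false;

    int64_t height = (biHeight < 0) ? -(int64_t)biHeight : (int64_t)biHeight;
    if (height == 0 || height > MAX_DIMENSION || biWidth > MAX_DIMENSION)
        return false;

    return true;
}

The key point is to read the field into a signed 32-bit variable before doing any comparison; the raw bits 0xFFFFFC00 then come out as -1024 instead of 4294966272.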
Change the field from an unsigned 32-bit integer to a signed 32-bit integer. That should fix it.
Related
I am trying to learn C after working with Java for a couple of years.
I've found some code that I wanted to reproduce which looked something like this:
U64 attack_table[...]; // ~840 KiB
struct SMagic {
U64* ptr; // pointer to attack_table for each particular square
U64 mask; // to mask relevant squares of both lines (no outer squares)
U64 magic; // magic 64-bit factor
int shift; // shift right
};
SMagic mBishopTbl[64];
SMagic mRookTbl[64];
U64 bishopAttacks(U64 occ, enumSquare sq) {
U64* aptr = mBishopTbl[sq].ptr;
occ &= mBishopTbl[sq].mask;
occ *= mBishopTbl[sq].magic;
occ >>= mBishopTbl[sq].shift;
return aptr[occ];
}
U64 rookAttacks(U64 occ, enumSquare sq) {
U64* aptr = mRookTbl[sq].ptr;
occ &= mRookTbl[sq].mask;
occ *= mRookTbl[sq].magic;
occ >>= mRookTbl[sq].shift;
return aptr[occ];
}
The code is not that important, but I already failed at using the same data type: U64; I only found uint64_t. Now I would like to know what the difference between U64, uint64_t, and long is.
I would be very happy if someone could briefly explain this to me, including the advantages of each.
Greetings,
Finn
TL;DR - for a 64-bit exact width unsigned integer, #include <stdint.h> and use uint64_t.
Presumably, U64 is a custom typedef for a 64-bit wide unsigned integer.
If you're using at least a C99-compliant compiler, it will have <stdint.h> with a typedef for a 64-bit wide unsigned integer with no padding bits: uint64_t. However, it might be that the code targets a compiler that doesn't have the standard uint64_t defined. In that case, there might be some configuration header where a type is chosen for U64. Perhaps you can grep the files for typedef.*U64;
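For example (purely a guess at what such a header might look like, not the project's actual code):

#include <stdint.h>

/* Hypothetical project-local alias: U64 is just an unsigned 64-bit integer. */
typedef uint64_t U64;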
long, on the other hand, is a signed type. Due to various undefined and implementation-defined aspects of signed math, you wouldn't want to use a signed type for bit-twiddling at all. Another complication is that, unlike in Java, the C long doesn't have a standardized width; long is allowed to be only 32 bits wide, and it is on most 32-bit platforms and even on 64-bit Windows. If you ever need exact-width types, you wouldn't use int or long. Only long long and unsigned long long are guaranteed to be at least 64 bits wide.
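A small sketch of why that matters for bit-twiddling (the exact output of the signed shift depends on your implementation):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    long s = -1;
    /* Right-shifting a negative signed value is implementation-defined. */
    printf("%ld\n", s >> 1);                              /* usually -1, but not guaranteed */

    uint64_t u = UINT64_MAX;
    /* Unsigned arithmetic is fully defined: it wraps modulo 2^64. */
    printf("%llu\n", (unsigned long long)(u * 2 + 3));    /* prints 1 */
    return 0;
}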
I'm trying to check some homework answers about overflow for 2's complement addition, subtraction, etc. and I'm wondering if I can specify the size of a data type. For instance if I want to see what happens when I try to assign -128 or -256 to a 7-bit unsigned int.
On further reading I see you want bit sizes that are not the usual ones, such as 7 bits or 9 bits.
You can achieve this using bit-fields:
struct bits9
{
    int x : 9;
};
Now you can use the type struct bits9, which has a single member x that is only 9 bits in size:
struct bits9 myValue;
myValue.x = 123;
For an arbitrarily sized value, you can use bit-fields in structs. For example, for a 7-bit value:
struct something {
    unsigned char field:7;
    unsigned char padding:1;
};
struct something value;
value.field = -128;
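If you then print the field, you can see the wrap-around: the field is unsigned, so the assigned value is reduced modulo 2^7 = 128, meaning both -128 and -256 end up stored as 0. A small self-contained sketch (note that bit-fields of type unsigned char are a common compiler extension; strictly portable code would use unsigned int):

#include <stdio.h>

struct something {
    unsigned char field:7;
    unsigned char padding:1;
};

int main(void)
{
    struct something value;

    value.field = -128;              /* -128 mod 128 == 0 */
    printf("%d\n", value.field);     /* prints 0 */

    value.field = 130;               /*  130 mod 128 == 2 */
    printf("%d\n", value.field);     /* prints 2 */

    return 0;
}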
The smallest type you have is char, which is an 8-bit integer on essentially all platforms (the standard only guarantees at least 8 bits). You can have unsigned and signed chars. Take a look at the stdint.h header; it defines integer types for you in a platform-independent way. Also, there is no built-in 7-bit integer type.
Using the built-in types you have things like:
char value1; // 8 bits
short value2; // 16 bits
long value3; // 32 bits
long long value4; // 64 bits
Note that this is the case with Microsoft's compiler on Windows. The C standard does not specify exact widths, only that "this one must be at least as big as that one", and so on.
If you only care about a specific platform you can print out the sizes of your types and use those once you have figured them out.
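For instance, something like this (the sizes in the comments are what you would typically see with a 64-bit Linux compiler; yours may differ):

#include <stdio.h>

int main(void)
{
    /* sizeof reports the width in bytes, not bits. */
    printf("char:      %zu\n", sizeof(char));        /* 1 */
    printf("short:     %zu\n", sizeof(short));       /* 2 */
    printf("int:       %zu\n", sizeof(int));         /* 4 */
    printf("long:      %zu\n", sizeof(long));        /* 8 on 64-bit Linux, 4 on Windows */
    printf("long long: %zu\n", sizeof(long long));   /* 8 */
    return 0;
}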
Alternatively, you can use stdint.h, which is part of the C99 standard. It has types with the width in the name to make it clear:
int8_t value1; // 8 bits
int16_t value2; // 16 bits
int32_t value3; // 32 bits
int64_t value4; // 64 bits
I have a problem translating byte order between host (CPU-dependent) and network (big-endian) order. These are all the APIs (in "arpa/inet.h" on Linux) I've found that might solve my problem:
uint32_t htonl(uint32_t hostlong);
uint16_t htons(uint16_t hostshort);
uint32_t ntohl(uint32_t netlong);
uint16_t ntohs(uint16_t netshort);
Except for one thing: they only handle unsigned integers (2 bytes or 4 bytes).
So is there any approach to handle the signed case? In other words, how would one implement the following functions (APIs)?
int32_t htonl(int32_t hostlong);
int16_t htons(int16_t hostshort);
int32_t ntohl(int32_t netlong);
int16_t ntohs(int16_t netshort);
Technically speaking, it doesn't matter what value the variable holds, since you only want to borrow the byte-swapping functionality. When you assign a signed value to an unsigned variable, its interpreted value may change but the bit pattern stays the same (on two's-complement machines), so converting the result back to signed recovers the original value.
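Assuming a two's-complement machine (which is the case anywhere these functions exist in practice), thin wrappers are all you need; the _signed names below are made up for illustration:

#include <stdint.h>
#include <arpa/inet.h>

/* Reuse the unsigned conversions and reinterpret the bit pattern. */
static int32_t htonl_signed(int32_t hostlong)  { return (int32_t)htonl((uint32_t)hostlong); }
static int16_t htons_signed(int16_t hostshort) { return (int16_t)htons((uint16_t)hostshort); }
static int32_t ntohl_signed(int32_t netlong)   { return (int32_t)ntohl((uint32_t)netlong); }
static int16_t ntohs_signed(int16_t netshort)  { return (int16_t)ntohs((uint16_t)netshort); }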
Edit: As amrit said, it is a duplicate of Signed Integer Network and Host Conversion.
I am trying to define a 64-bit wide integer in C on Ubuntu 9.10. 9223372036854775808 is 2^63.
long long max=9223372036854775808;
long max=9223372036854775808;
When I compile it, the compiler gave the warning message:
binary.c:79:19: warning: integer constant is so large that it is unsigned
binary.c: In function ‘bitReversal’:
binary.c:79: warning: this decimal constant is unsigned only in ISO C90
binary.c:79: warning: integer constant is too large for ‘long’ type
Is the long type 64 bits wide?
Best Regards,
long long max=9223372036854775808LL; // Note the LL
// long max=9223372036854775808L; // Note the L
A long long type is at least 64 bits wide, and a long type at least 32 bits wide. The actual width depends on the compiler and target platform.
Use int64_t and int32_t (from <stdint.h>) to ensure that a 64-/32-bit integer fits into the variable.
The problem you are having is that the number you're using (9223372036854775808) is 2^63, and the maximum value that your long long can hold (as a 64-bit signed two's-complement type) is one less than that: 2^63 - 1, or 9223372036854775807 (that's 63 binary 1s in a row).
Constants that are too large for long long (as in this case) are given the type unsigned long long, which is why you get the warning "integer constant is so large that it is unsigned".
The long type is at least 32 bits wide, and the long long type at least 64 bits wide (both including the sign bit).
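If what you actually want is the largest value the type can hold, there are ready-made constants that sidestep the literal problem entirely (a small sketch):

#include <stdio.h>
#include <limits.h>   /* LLONG_MAX */
#include <stdint.h>   /* INT64_MAX */

int main(void)
{
    long long a = LLONG_MAX;                        /* 9223372036854775807 */
    int64_t   b = INT64_MAX;                        /* same value */
    unsigned long long c = 9223372036854775808ULL;  /* 2^63 fits once it's unsigned */

    printf("%lld %lld %llu\n", a, (long long)b, c);
    return 0;
}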
I'm not sure it will fix your problem (the LL solution looks good), but here is a recommendation:
You should use hexadecimal notation to write the maximum value of something.
unsigned char max = 0xFF;
unsigned short max = 0xFFFF;
uint32_t max = 0xFFFFFFFF;
uint64_t max = 0xFFFFFFFFFFFFFFFF;
As you can see, it's readable.
To display the value:
printf("%lld\n", value_of_64b_int);
I think it depends on what platform you use. If you need to know exactly what kind of integer type you are using, you should use the types from stdint.h, if you have that include file on your system.
I'm looking at stdint.h, and given that it has uint16_t and uint_fast16_t, what is the use of uint_least16_t? What might you want that couldn't be done equally well with one of the other two?
Say you're working with a compiler/platform where:
unsigned char is 8 bits
unsigned short is 32 bits
unsigned int is 64 bits
And unsigned int is the 'fastest'. On that platform:
uint16_t would not be available
uint_least16_t would be a 32 bit value
uint_fast16_t would be a 64 bit value
A bit arcane, but that's what it's for.
How useful they are is another story - I see the exact-width variants all the time; that's what people want. The 'least' and 'fast' versions I've seen used pretty close to never (it's possible that it was only in example code - I'm really not sure).
Ah, the link Patrick posted includes this: "The typedef name uint_leastN_t designates an unsigned integer type with a width of at least N, such that no unsigned integer type with lesser size has at least the specified width."
So my current understanding is:
uint_least16_t - the smallest type capable of holding a uint16
uint_fast16_t - the fastest type capable of holding a uint16
uint16_t - exactly a uint16; unfortunately it may not be available on all platforms. On any platform where it is available, uint_least16_t will refer to it. So if it were guaranteed to exist on all platforms, we wouldn't need uint_least16_t at all.
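You can check what your own platform picks for each of the three with a quick test (the sizes in the comments are what a typical x86-64 Linux toolchain gives; other platforms may differ):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("uint16_t:       %zu bytes\n", sizeof(uint16_t));       /* 2 */
    printf("uint_least16_t: %zu bytes\n", sizeof(uint_least16_t)); /* 2 */
    printf("uint_fast16_t:  %zu bytes\n", sizeof(uint_fast16_t));  /* often 8 */
    return 0;
}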
It's part of the C standard. It doesn't need a good use case. :P
See this page and look for the section titled "Minimum-width integer types".