I'm writing a Java binding for some C code and I'm not really familiar with C.
I have a uint64_t and need to cast it to an int. Does anyone know how to do that?
(My binding then returns a jint...)
Usually, using an exact-width integer type like 'uint64_t' is done for a good reason.
If you cast it to an int, which may not be 64 bits wide, you may have serious problems...
The short answer:
uint64_t foo;
int bar;
bar = foo;
Technically this conversion is implementation-defined (not undefined) if the value of foo does not fit in an int. In practice, it will simply truncate the upper bits. If you want to be more careful, then:
if (foo-INT_MIN <= (uint64_t)INT_MAX-INT_MIN)
    bar = foo;
else
    /* error case here */
If you are writing JNI wrappers, the best match for uint64_t is long, not int. Even then you will lose accuracy, as Java doesn't have unsigned types (hence the u), and you need to be prepared to check the sign of the value.
Note that Java does not support unsigned types. The closest type in Java will be long, since it also has 64 bits.
Casting a uint64_t to long doesn't lose data because the number of bits is the same, but large values will then show up as negative numbers.
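If it helps, here is a minimal sketch of what that looks like on the C side of a JNI binding (the class name, method name and get_native_value function are hypothetical, not from the question):

#include <jni.h>
#include <stdint.h>

extern uint64_t get_native_value(void);   /* the wrapped C function (assumed) */

/* All 64 bits survive the cast to jlong, but values above INT64_MAX
   will show up as negative longs on the Java side. */
JNIEXPORT jlong JNICALL
Java_com_example_NativeLib_getValue(JNIEnv *env, jobject obj)
{
    uint64_t v = get_native_value();
    return (jlong)v;
}

On the Java side you can still treat the long as unsigned where it matters, e.g. with Long.toUnsignedString or Long.compareUnsigned (Java 8 and later).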
whatever_your_int_var_name_is = (int)whatever_your_uint64_t_var_name_is;
hm...
I have a method in C which returns a page_ID, which is a uint64_t. Now I need to store that number in a Java HashMap. So what's the best way to store that uint64_t in the Java HashMap? The problem is that when I do a lookup, I need to take that specific value from the HashMap and cast it back to a uint64_t so that I can call a function in the C code which expects a uint64_t as its argument.
P.S. I've done it with jlong pid = (jlong) pid_in_uint64_t;
so far so good...
I have always, for as long as I can remember and ubiquitously, done this:
for (unsigned int i = 0U; i < 10U; ++i)
{
// ...
}
In other words, I use the U specifier on unsigned integers. Now having just looked at this for far too long, I'm wondering why I do this. Apart from signifying intent, I can't think of a reason why it's useful in trivial code like this?
Is there a valid programming reason why I should continue with this convention, or is it redundant?
First, I'll state what is probably obvious to you, but your question leaves room for it, so I'm making sure we're all on the same page.
There are obvious differences between unsigned ints and regular ints: the difference in their range (-2,147,483,648 to 2,147,483,647 for an int32 versus 0 to 4,294,967,295 for a uint32), and a difference in which bits are shifted in at the most significant end when you use the right-shift >> operator.
The suffix is important when you need to tell the compiler to treat the constant value as a uint instead of a regular int. This may be important if the constant is outside the range of a regular int but within the range of a uint. The compiler might throw a warning or error in that case if you don't use the U suffix.
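For example, here is a small sketch (assuming a platform where int is 32 bits) showing that the suffix really does change the type of such an out-of-range constant:

#include <stdio.h>

int main(void)
{
    /* 2147483648 does not fit in a 32-bit int, so without a suffix the
       literal silently becomes a wider signed type; with U it becomes
       unsigned int, which it does fit. */
    printf("%zu\n", sizeof(2147483648));    /* typically prints 8 */
    printf("%zu\n", sizeof(2147483648U));   /* typically prints 4 */
    return 0;
}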
Other than that, Daniel Daranas mentioned in comments the only thing that happens: if you don't use the U suffix, you'll be implicitly converting the constant from a regular int to a uint. That's a tiny bit extra effort for the compiler, but there's no run-time difference.
Should you care? Here's my answer (in bold, for those who only want a quick answer): There's really no good reason to declare a constant as 10U or 0U. Most of the time, you're within the common range of uint and int, so the value of that constant looks exactly the same whether it's a uint or an int. The compiler will immediately take your const int expression and convert it to a const uint.
That said, here's the only argument I can give you for the other side: semantics. It's nice to make code semantically coherent. And in that case, if your variable is a uint, it doesn't make sense to set that value to a constant int. If you have a uint variable, it's clearly for a reason, and it should only work with uint values.
That's a pretty weak argument, though, particularly because as a reader, we accept that uint constants usually look like int constants. I like consistency, but there's nothing gained by using the 'U'.
I see this often when using defines to avoid signed/unsigned mismatch warnings. I build a code base for several processors using different tool chains and some of them are very strict.
For instance, removing the ‘u’ in the MAX_PRINT_WIDTH define below:
#define MAX_PRINT_WIDTH (384u)
#define IMAGE_HEIGHT (480u) // 240 * 2
#define IMAGE_WIDTH (320u) // 160 * 2 double density
Gave the following warning:
"..\Application\Devices\MartelPrinter\mtl_print_screen.c", line 106: cc1123: {D} warning:
comparison of unsigned type with signed type
for ( x = 1; (x < IMAGE_WIDTH) && (index <= MAX_PRINT_WIDTH); x++ )
You will probably also see ‘f’ for float vs. double.
I extracted this sentence from a comment, because it's a widely believed incorrect statement, and also because it gives some insight into why explicitly marking unsigned constants as such is a good habit.
...it seems like it would only be useful to keep it when I think overflow might be an issue? But then again, haven't I gone some ways to mitigating for that by specifying unsigned in the first place...
Now, let's consider some code:
int something = get_the_value();
// Compute how many 8s are necessary to reach something
unsigned count = (something + 7) / 8;
So, does the unsigned mitigate potential overflow? Not at all.
Let's suppose something turns out to be INT_MAX (or close to that value). Assuming a 32-bit machine, we might expect count to be 2^28, or 268,435,456. But it's not.
Telling the compiler that the result of the computation should be unsigned has no effect whatsoever on the typing of the computation. Since something is an int, and 7 is an int, something + 7 will be computed as an int, and will overflow. Then the overflowed value will be divided by 8 (also using signed arithmetic), and whatever that works out to be will be converted to an unsigned and assigned to count.
With GCC, arithmetic is actually performed in two's complement, so the overflow will be a very large negative number; after the division it will be a not-so-large negative number, and that ends up being a largish unsigned number, much larger than the one we were expecting.
Suppose we had specified 7U instead (and maybe 8U as well, to be consistent). Now it works, because something + 7U is computed with unsigned arithmetic, which doesn't overflow (or even wrap around).
Of course, this bug (and thousands like it) might go unnoticed for quite a lot of time, blowing up (perhaps literally) at the worst possible moment...
(Obviously, making something unsigned would have mitigated the problem. Here, that's pretty obvious. But the definition might be quite a long way from the use.)
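Here is a small program that makes the difference concrete. It's a sketch assuming a 32-bit int; the all-signed version relies on undefined behavior, so its "wrong" number is only what a typical two's-complement build happens to produce:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int something = INT_MAX;

    unsigned count_bad  = (something + 7) / 8;    /* signed overflow: undefined   */
    unsigned count_good = (something + 7U) / 8U;  /* unsigned arithmetic: defined */

    printf("%u\n", count_bad);   /* often 4026531841 on such a build */
    printf("%u\n", count_good);  /* 268435456, i.e. 2^28             */
    return 0;
}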
One reason you should do this for trivial code1 is that the suffix forces a type on the literal, and the type may be very important to produce the correct result.
Consider this bit of (somewhat silly) code:
#define magic_number(x) _Generic((x), \
    unsigned int : magic_number_unsigned, \
    int          : magic_number_signed \
)(x)

unsigned magic_number_unsigned(unsigned value) {
    // ...
}

unsigned magic_number_signed(int value) {
    // ...
}

int main(void) {
    unsigned magic = magic_number(10u);
}
It's not hard to imagine those functions actually doing something meaningful based on the type of their argument. Had I omitted the suffix, the generic selection would have produced a wrong result for a very trivial call.
1 But perhaps not the particular code in your post.
In this case, it's completely useless.
In other cases, a suffix might be useful. For instance:
#include <stdio.h>
int
main()
{
printf("%zu\n", sizeof(123));
printf("%zu\n", sizeof(123LL));
return 0;
}
On my system, it will print 4 then 8.
But back to your code, yes it makes your code more explicit, nothing more.
Can I set all bits in an unsigned variable of any width to 1s without triggering a sign conversion error (-Wsign-conversion) using the same literal?
Without -Wsign-conversion I could:
#define ALL_BITS_SET (-1)
uint32_t mask_32 = ALL_BITS_SET;
uint64_t mask_64 = ALL_BITS_SET;
uintptr_t mask_ptr = ALL_BITS_SET << 12; // here's the narrow problem!
But with -Wsign-conversion I'm stumped.
error: negative integer implicitly converted to unsigned type [-Werror=sign-conversion]
I've tried (~0) and (~0U) but no dice. The first has type int, which triggers -Wsign-conversion, and the second doesn't promote past 32 bits and only sets the lower 32 bits of the 64-bit variable.
Am I out of luck?
EDIT: Just to clarify, I'm using the defined ALL_BITS_SET in many places throughout the project, so I hesitate to litter the source with things like (~(uint32_t)0) and (~(uintptr_t)0).
One's complement changes all zeros to ones, and vice versa,
so try
#define ALL_BITS_SET (~(0))
uint32_t mask_32 = ALL_BITS_SET;
uint64_t mask_64 = ALL_BITS_SET;
Try
uint32_t mask_32 = ~((uint32_t)0);
uint64_t mask_64 = ~((uint64_t)0);
uintptr_t mask_ptr = ~((uintptr_t)0);
Maybe clearer solutions exist - this one is a bit pedantic, but I'm confident it meets your needs.
The reason you're getting the warning "negative integer implicitly converted to unsigned type" is that 0 is a literal integer value. As a literal integer value, it is of type int, which is a signed type, so (~(0)), as an all-bits-one value of type int, has the value of (int)-1. The only way to convert a negative value to an unsigned value non-implicitly is, of course, to do it explicitly, but you appear to have already rejected the suggestion of using a type-appropriate cast. Alternative options:
Obviously, you can also eliminate the implicit conversion to unsigned type by negating an unsigned 0... (~(0U)) but then you'd only have as many bits as are in an unsigned int
Write a slightly different macro, and use the macro to declare your variables
`#define ALL_BITS_VAR(type,name) type name = ~(type)0`
`ALL_BITS_VAR(uint64_t,mask_64);`
But that still only works for declarations.
Someone already suggested defining ALL_BITS_SET using the widest available type, which you rejected on the grounds of having an absurdly strict dev environment, but honestly, that's by far the best way to do it. If your development environment is really so strict as to forbid assignment of an unsigned value to an unsigned variable of a smaller type, (the result of which is very clearly defined and perfectly valid), then you really don't have a choice anymore, and have to do something type-specific:
#define ALL_BITS_SET(type) (~(type)0)
uint32_t mask_32 = ALL_BITS_SET(uint32_t);
uint64_t mask_64 = ALL_BITS_SET(uint64_t);
uintptr_t mask_ptr = ALL_BITS_SET(uintptr_t) << 12;
That's all.
(Actually, that's not quite all... since you said that you're using GCC, there's some stuff you could do with GCC's typeof extension, but I still don't see how to make it work without a function macro that you pass a variable to.)
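For completeness, here is a sketch of the "widest unsigned type" approach mentioned above, using uintmax_t from <stdint.h>. The narrowing assignments are well-defined (they keep the low bits), and since every operand is unsigned, -Wsign-conversion has nothing to complain about; -Wconversion, on the other hand, may still warn:

#include <stdint.h>

#define ALL_BITS_SET (~(uintmax_t)0)

uint32_t  mask_32  = ALL_BITS_SET;        /* 0xFFFFFFFF             */
uint64_t  mask_64  = ALL_BITS_SET;        /* 0xFFFFFFFFFFFFFFFF     */
uintptr_t mask_ptr = ALL_BITS_SET << 12;  /* low 12 bits cleared    */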
I have the following code where I have an array. I add a large number to that array, but when printing it, it shows a smaller, incorrect value. Why is that, and is there a way to fix this?
int x[10];
x[0] = 252121521121;
printf(" %i " , x[0]); //prints short wrong value
Your number requires 38 bit. If your platform's int isn't that big (and there's no reason it should be), the number simply won't fit. (In fact, even the int literal should already have triggered a compiler warning, supposing that this is C or C++.)
You could always use a data type of guaranteed size, like an int64 or something like that, depending on your language and platform. Probably no need for arbitrary-precision libraries here.
In C, include <stdint.h> and use int64_t, or just use long long int, and make sure you initialize it from a long long integer literal, e.g. 252121521121LL. (long long has only been official since C99 and C++11, I might add.)
(Edit: long long int is guaranteed to be at least 64 bit, so it should be a good choice.)
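A minimal sketch of that fix, using nothing beyond what the answer already says:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t x[10];
    x[0] = 252121521121LL;           /* fits comfortably in 64 bits */
    printf("%" PRId64 "\n", x[0]);   /* prints 252121521121         */
    return 0;
}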
An int, on most systems, is 32 bits. That's enough to store a number of about 2 billion signed, or 4 billion unsigned. To store larger numbers you need a larger form of int. (Unfortunately, on some systems a long int is the same as an int -- good ol' standardization -- so you need to go to a long long int. Better if you can find a typedef in your library such as int64_t.)
If you only have the problem with this particular number, then just use a long long int as suggested in previous answers.
Otherwise, for even larger numbers (>1E19 for signed numbers), you might want to switch to a large number library or code yourself this kind of data type. You basically need to store each digit of your number in an array (or linked list) and manually code basic operations you need on them : adding, subtracting, multiplying etc.
Some libraries include
https://mattmccutchen.net/bigint/
or GMP.
Well, your number just seems to exceed the maximum value a 32-bit integer can hold.
I am doing some microcontroller programming in C. I am reading from various sensors 4 bytes that represent either float, int or unsigned int. Currently, I am storing them in unsigned int format in the microcontroller even though the data may be float since to a microcontroller they are just bytes. This data is then transferred to PC. I want the PC to interpret the binary data as a float or an int or an unsigned int whenever I wish. Is there a way to do that?
Ex.
unsigned int value = 0x40040000; // These bits are 2.0625 when read as a 32-bit float
double result = convert_binary_to_double(value); // result = 2.0625
Thanks.
PS: I tried typecasting and that does not work.
Keeping in mind that what you're asking for isn't entirely portable, something like this will probably do the job:
float value = *(float *)&bits;
The other obvious possibility is to use a union:
typedef union {
    unsigned int uint_val;
    int int_val;
    float float_val;
} values;
values v;
v.uint_val = 0x40040000;
float f = v.float_val;
Either will probably work fine, but neither guarantees portability.
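A third option, not mentioned in the answers above but commonly used to sidestep the aliasing question entirely, is memcpy; a minimal sketch:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t bits = 0x40040000u;
    float f;
    memcpy(&f, &bits, sizeof f);   /* copy the 4 bytes, reinterpret as float */
    printf("%f\n", f);             /* 2.062500 on an IEEE 754 machine        */
    return 0;
}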
The shortest way is to cast the address of the float (resp int) to the address of an int (resp float) and to dereference that: for instance, double result = * (float*) &value;. Your optimizing compiler may compile this code into something that does not work as you intended though (see strict aliasing rules).
A way that works more often is to use a union with an int field and a float field.
Why don't you do something like:
float *x = (float *)&value;
or a union?
It's a terrible job :)
IEEE 754 defines their representation in memory, so with various bitwise operations you have to extract the sign, the exponent and the mantissa from your microcontroller's output, then compute number = (-1)^sign * (1 + mantissa/2^23) * 2^(exponent - 127) for a 32-bit float (for a 64-bit double the bias is 1023 and the mantissa has 52 bits).
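If you do go that manual route, a sketch for normal 32-bit values might look like this (zeros, subnormals, infinities and NaNs would need extra cases):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Decode a normal 32-bit IEEE 754 value "by hand":
   value = (-1)^sign * (1 + mantissa/2^23) * 2^(exponent - 127) */
static double decode_float_bits(uint32_t bits)
{
    unsigned sign     = (bits >> 31) & 0x1u;
    int      exponent = (int)((bits >> 23) & 0xFFu);
    uint32_t mantissa = bits & 0x7FFFFFu;

    double value = ldexp(1.0 + mantissa / 8388608.0, exponent - 127);
    return sign ? -value : value;
}

int main(void)
{
    printf("%f\n", decode_float_bits(0x40040000u));   /* prints 2.062500 */
    return 0;
}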
What do you mean by saying "type casting does not work"?
What exactly did you try?
For example, did you try something like this:
double convert_binary_to_double(unsigned int value)
{
    return *((float*)&value);   /* reinterpret the 4 bytes as a float, then widen to double */
}
Have you tried using the itoa() function? It's a neat little function often used for converting int to ASCII.
To the PC they're also just bytes, and as such could be copied into any 4-byte int, 4-byte unsigned int or 4-byte float field and the computer would be quite happy. You will need to envelope them, or somehow tag them as int, unsigned, or float. There is NO WAY the compiler can tell from looking at any 32-bit collection of bits what its type is. - If you need a better explanation, comment me back and I'll give you the real long version - Joe
- Maybe I misunderstood your question. I thought you wanted to ship over 4 bytes of data, and have the computer magically know if the data was originally a 32-bit int, 32-bit unsigned or 32-bit float. There is no way for the computer to know the answer without additional information.
I always use unsigned int for values that should never be negative. But today I
noticed this situation in my code:
void CreateRequestHeader( unsigned bitsAvailable, unsigned mandatoryDataSize,
                          unsigned optionalDataSize )
{
    if ( bitsAvailable - mandatoryDataSize >= optionalDataSize ) {
        // Optional data fits, so add it to the header.
    }
    // BUG! The above includes the optional part even if
    // mandatoryDataSize > bitsAvailable.
}
Should I start using int instead of unsigned int for numbers, even if they
can't be negative?
One thing that hasn't been mentioned is that interchanging signed/unsigned numbers can lead to security bugs. This is a big issue, since many of the functions in the standard C-library take/return unsigned numbers (fread, memcpy, malloc etc. all take size_t parameters)
For instance, take the following innocuous example (from real code):
//Copy a user-defined structure into a buffer and process it
char* processNext(char* data, short length)
{
    char buffer[512];
    if (length <= 512) {
        memcpy(buffer, data, length);
        process(buffer);
        return data + length;
    } else {
        return NULL;
    }
}
Looks harmless, right? The problem is that length is signed, but is converted to unsigned when passed to memcpy. Thus setting length to SHRT_MIN will pass the <= 512 test, but cause memcpy to copy far more than 512 bytes into the buffer - this allows an attacker to overwrite the function return address on the stack and (after a bit of work) take over your computer!
You may naively be saying, "It's so obvious that length needs to be size_t or checked to be >= 0, I could never make that mistake". Except, I guarantee that if you've ever written anything non-trivial, you have. So have the authors of Windows, Linux, BSD, Solaris, Firefox, OpenSSL, Safari, MS Paint, Internet Explorer, Google Picasa, Opera, Flash, Open Office, Subversion, Apache, Python, PHP, Pidgin, Gimp, ... on and on and on ... - and these are all bright people whose job is knowing security.
In short, always use size_t for sizes.
Man, programming is hard.
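For illustration, here is a sketch of the corrected version of the snippet above, with length as a size_t and the bound taken from the buffer itself (process is assumed to exist elsewhere, as in the original):

#include <stddef.h>
#include <string.h>

void process(char* buffer);   /* assumed to exist, as in the original */

char* processNext(char* data, size_t length)
{
    char buffer[512];
    if (length <= sizeof buffer) {
        memcpy(buffer, data, length);
        process(buffer);
        return data + length;
    }
    return NULL;   /* too big: refuse instead of smashing the stack */
}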
Should I always ...
The answer to "Should I always ..." is almost certainly 'no', there are a lot of factors that dictate whether you should use a datatype- consistency is important.
But, this is a highly subjective question, it's really easy to mess up unsigneds:
for (unsigned int i = 10; i >= 0; i--);
results in an infinite loop.
This is why some style guides including Google's C++ Style Guide discourage unsigned data types.
In my personal opinion, I haven't run into many bugs caused by these problems with unsigned data types — I'd say use assertions to check your code and use them judiciously (and less when you're performing arithmetic).
Some cases where you should use unsigned integer types are:
You need to treat a datum as a pure binary representation.
You need the semantics of modulo arithmetic you get with unsigned numbers.
You have to interface with code that uses unsigned types (e.g. standard library routines that accept/return size_t values).
But for general arithmetic, the thing is, when you say that something "can't be negative," that does not necessarily mean you should use an unsigned type. Because you can put a negative value in an unsigned, it's just that it will become a really large value when you go to get it out. So, if you mean that negative values are forbidden, such as for a basic square root function, then you are stating a precondition of the function, and you should assert. And you can't assert that what cannot be, is; you need a way to hold out-of-band values so you can test for them (this is the same sort of logic behind getchar() returning an int and not char.)
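For instance, a sketch of that "signed parameter plus assertion" idea (my_isqrt is a hypothetical name, not something from the question):

#include <assert.h>

int my_isqrt(int n)
{
    assert(n >= 0);   /* negative input is a caller bug, not a legal value */
    int r = 0;
    while ((long long)(r + 1) * (r + 1) <= n)
        ++r;
    return r;
}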
Additionally, the choice of signed-vs.-unsigned can have practical repercussions on performance, as well. Take a look at the (contrived) code below:
#include <stdbool.h>
bool foo_i(int a)
{
    return (a + 69) > a;
}

bool foo_u(unsigned int a)
{
    return (a + 69u) > a;
}
Both foo's are the same except for the type of their parameter. But, when compiled with c99 -fomit-frame-pointer -O2 -S, you get:
.file "try.c"
.text
.p2align 4,,15
.globl foo_i
.type foo_i, #function
foo_i:
movl $1, %eax
ret
.size foo_i, .-foo_i
.p2align 4,,15
.globl foo_u
.type foo_u, #function
foo_u:
movl 4(%esp), %eax
leal 69(%eax), %edx
cmpl %eax, %edx
seta %al
ret
.size foo_u, .-foo_u
.ident "GCC: (Debian 4.4.4-7) 4.4.4"
.section .note.GNU-stack,"",#progbits
You can see that foo_i() is more efficient than foo_u(). This is because unsigned arithmetic overflow is defined by the standard to "wrap around," so (a + 69u) may very well be smaller than a if a is very large, and thus there must be code for this case. On the other hand, signed arithmetic overflow is undefined, so GCC will go ahead and assume signed arithmetic doesn't overflow, and so (a + 69) can't ever be less than a. Choosing unsigned types indiscriminately can therefore unnecessarily impact performance.
The answer is Yes. The "unsigned" int type of C and C++ is not an "always positive integer", no matter what the name of the type looks like. The behavior of C/C++ unsigned ints makes no sense if you try to read the type as "non-negative"... for example:
The difference of two unsigned is an unsigned number (makes no sense if you read it as "The difference between two non-negative numbers is non-negative")
The addition of an int and an unsigned int is unsigned
There is an implicit conversion from int to unsigned int (if you read unsigned as "non-negative" it's the opposite conversion that would make sense)
If you declare a function accepting an unsigned parameter and someone passes a negative int, it is simply converted implicitly to a huge positive value; in other words, using an unsigned parameter type doesn't help you find errors at either compile time or run time.
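A couple of one-liners that illustrate these points (assuming a 32-bit int):

#include <stdio.h>

int main(void)
{
    unsigned a = 1, b = 2;
    printf("%u\n", a - b);     /* 4294967295: the "difference" wraps          */
    printf("%d\n", -1 < 1u);   /* 0: the -1 is converted to a huge unsigned   */
    return 0;
}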
Indeed unsigned numbers are very useful for certain cases because they are elements of the ring "integers-modulo-N" with N being a power of two. Unsigned ints are useful when you want to use that modulo-n arithmetic, or as bitmasks; they are NOT useful as quantities.
Unfortunately in C and C++ unsigned types were also used to represent non-negative quantities, to be able to use all 16 bits when integers were that small... at that time being able to use 32k or 64k was considered a big difference. I'd classify it basically as a historical accident... you shouldn't try to read a logic into it because there was no logic.
By the way in my opinion that was a mistake... if 32k are not enough then quite soon 64k won't be enough either; abusing the modulo integer just because of one extra bit in my opinion was a cost too high to pay. Of course it would have been reasonable to do if a proper non-negative type was present or defined... but the unsigned semantic is just wrong for using it as non-negative.
Sometimes you may find people who say that unsigned is good because it "documents" that you only want non-negative values... however that documentation is of value only to people who don't actually know how unsigned works in C or C++. For me, seeing an unsigned type used for non-negative values simply means that whoever wrote the code didn't understand the language on that point.
If you really understand and want the "wrapping" behavior of unsigned ints then they're the right choice (for example, I almost always use "unsigned char" when I'm handling bytes); if you're not going to use the wrapping behavior (and that behavior is just going to be a problem for you, as in the case of the difference you showed) then this is a clear indicator that the unsigned type is a poor choice and you should stick with plain ints.
Does this mean that the C++ std::vector<>::size() return type is a bad choice? Yes... it's a mistake. But if you say so, be prepared to be called bad names by those who don't understand that the "unsigned" name is just a name... what counts is the behavior, and that is a "modulo-n" behavior (and no one would consider a "modulo-n" type a sensible choice for the size of a container).
Bjarne Stroustrup, creator of C++, warns about using unsigned types in his book The C++ Programming Language:
The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules.
I seem to be in disagreement with most people here, but I find unsigned types quite useful, but not in their raw historic form.
If you consequently stick to the semantic that a type represents for you, then there should be no problem: use size_t (unsigned) for array indices, data offsets etc. off_t (signed) for file offsets. Use ptrdiff_t (signed) for differences of pointers. Use uint8_t for small unsigned integers and int8_t for signed ones. And you avoid at least 80% of portability problems.
And don't use int, long, unsigned, or char unless you must. They belong in the history books. (Sometimes you must: error returns, bit fields, etc.)
And to come back to your example:
bitsAvailable - mandatoryDataSize >= optionalDataSize
can be easily rewritten as
bitsAvailable >= optionalDataSize + mandatoryDataSize
which doesn't avoid the problem of a potential overflow (assert is your friend) but gets you a bit nearer to the idea of what you want to test, I think.
if (bitsAvailable >= optionalDataSize + mandatoryDataSize) {
// Optional data fits, so add it to the header.
}
Bug-free, so long as mandatoryDataSize + optionalDataSize can't overflow the unsigned integer type -- the naming of these variables leads me to believe this is likely to be the case.
You can't fully avoid unsigned types in portable code, because many typedefs in the standard library are unsigned (most notably size_t), and many functions return those (e.g. std::vector<>::size()).
That said, I generally prefer to stick to signed types wherever possible for the reasons you've outlined. It's not just the case you bring up - in case of mixed signed/unsigned arithmetic, the signed argument is quietly promoted to unsigned.
From the comments on one of Eric Lippert's blog posts (see here):
Jeffrey L. Whitledge
I once developed a system in which negative values made no sense as a parameter, so rather than validating that the parameter values were non-negative, I thought it would be a great idea to just use uint instead. I quickly discovered that whenever I used those values for anything (like calling BCL methods), they had to be converted to signed integers. This meant that I had to validate that the values didn't exceed the signed integer range on the top end, so I gained nothing. Also, every time the code was called, the ints that were being used (often received from BCL functions) had to be converted to uints. It didn't take long before I changed all those uints back to ints and took all that unnecessary casting out. I still have to validate that the numbers are not negative, but the code is much cleaner!
Eric Lippert
Couldn't have said it better myself. You almost never need the range of a uint, and they are not CLS-compliant. The standard way to represent a small integer is with "int", even if there are values in there that are out of range. A good rule of thumb: only use "uint" for situations where you are interoperating with unmanaged code that expects uints, or where the integer in question is clearly used as a set of bits, not a number. Always try to avoid it in public interfaces.
Eric
The situation where (bitsAvailable - mandatoryDataSize) produces an 'unexpected' result when the types are unsigned and bitsAvailable < mandatoryDataSize is a reason that sometimes signed types are used even when the data is expected to never be negative.
I think there's no hard and fast rule - I typically 'default' to using unsigned types for data that has no reason to be negative, but then you have to take care to ensure that arithmetic wrapping doesn't expose bugs.
Then again, if you use signed types, you still have to sometimes consider overflow:
INT_MAX + 1
The key is that you have to take care when performing arithmetic for these kinds of bugs.
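For example, a sketch of checking a signed addition before performing it (safe_add is a hypothetical helper, not from any answer above):

#include <limits.h>

/* Returns 1 and stores the sum if a + b is representable, 0 otherwise. */
int safe_add(int a, int b, int *out)
{
    if (b > 0 && a > INT_MAX - b) return 0;   /* would overflow  */
    if (b < 0 && a < INT_MIN - b) return 0;   /* would underflow */
    *out = a + b;
    return 1;
}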
No, you should use the type that is right for your application. There is no golden rule. Sometimes on small microcontrollers it is, for example, faster and more memory-efficient to use 8- or 16-bit variables wherever possible, as that is often the native datapath size, but that is a very special case. I also recommend using stdint.h wherever possible. If you are using Visual Studio you can find BSD-licensed versions.
If there is a possibility of overflow, then assign the values to the next highest data type during the calculation, ie:
void CreateRequestHeader( unsigned int bitsAvailable, unsigned int mandatoryDataSize, unsigned int optionalDataSize )
{
    signed __int64 available = bitsAvailable;
    signed __int64 mandatory = mandatoryDataSize;
    signed __int64 optional = optionalDataSize;

    if ( (mandatory + optional) <= available ) {
        // Optional data fits, so add it to the header.
    }
}
Otherwise, just check the values individually instead of calculating:
void CreateRequestHeader( unsigned int bitsAvailable, unsigned int mandatoryDataSize, unsigned int optionalDataSize )
{
    if ( bitsAvailable < mandatoryDataSize ) {
        return;
    }
    bitsAvailable -= mandatoryDataSize;

    if ( bitsAvailable < optionalDataSize ) {
        return;
    }
    bitsAvailable -= optionalDataSize;

    // Optional data fits, so add it to the header.
}
You'll need to look at the results of the operations you perform on the variables to check if you can get over/underflows - in your case, the result being potentially negative. In that case you are better off using the signed equivalents.
I don't know if it's possible in C, but in this case I would just cast the X-Y thing to an int.
If your numbers should never be less than zero, but there's a chance they could end up below zero anyway, by all means use signed integers and sprinkle assertions or other runtime checks around. If you're actually working with 32-bit (or 64, or 16, depending on your target architecture) values where the most significant bit means something other than "-", you should only use unsigned variables to hold them. It's easier to detect integer overflows where a number that should always be positive is very negative than when it's zero, so if you don't need that bit, go with the signed ones.
Suppose you need to count from 1 to 50000. You can do that with a two-byte unsigned integer, but not with a two-byte signed integer (if space matters that much).