I have code that loops through numbers and creates the character array representation of that integer. So for a number like 1234 I get an array that looks like {'1', '2', '3', '4'}
Part of the code is shown below:
do {
//print here
c[i++] = (char)(((int)'0')+(num - (num/10)*10 ));
} while ((num = num/10) != 0);
I am having an issue when it comes to large values in types like long long int, e.g. 18446612134627563776.
The values printed in the loop are:
18446612134627563776
18446730879801352832
18446742754318731738
...
18446744073709551615
The values should be
18446612134627563776
1844661213462756377
184466121346275637
...
18
1
The strange thing is that the loop terminates. The last printed value is 18446744073709551615 != 0, so I'm not sure why it terminated there. I think it's some issue with the data type that I am not handling right.
This is the print statement:
printk("long=%llu sec=%llu , char=%c\n", num, (num/10)*10, (char)(((int)'0')+((num - (num/10)*10 ))));
Your code is fine. The problem is that the type of num is signed (i.e. just long long). Change it to unsigned long long and you should be good to go.
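For illustration, a minimal user-space sketch of the fixed loop (plain printf instead of the question's printk; the buffer size and the final print are my own additions):

#include <stdio.h>

int main(void)
{
    unsigned long long num = 18446612134627563776ULL; /* value from the question */
    char c[32];
    unsigned int i = 0;

    do {
        c[i++] = (char)('0' + (num - (num / 10) * 10));
    } while ((num = num / 10) != 0);
    c[i] = '\0';

    printf("%s\n", c); /* digits come out least-significant first */
    return 0;
}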
long long int: 18446612134627563776
long long int is a signed type, usually 64 bits wide, with the maximal representable number
2^63-1 = 9223372036854775807
Your value is larger than that, and wraps around, probably to
18446612134627563776 - 2^64 = -131939081987840
The values printed with %llu are then
2^64 + (-131939081987840 / 10^k), for k = 0, 1, 2, ...
That also explains why the loop terminates: signed division truncates toward zero, so the negative value eventually reaches -1 (which %llu prints as 18446744073709551615), and -1/10 is 0.
Change the type to unsigned long long to get the expected results.
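To see the wrap-around concretely, a small sketch (converting an out-of-range value to a signed type is implementation-defined, but typical two's-complement machines behave as the comments assume):

#include <stdio.h>

int main(void)
{
    unsigned long long u = 18446612134627563776ULL; /* the value from the question */
    long long s = (long long)u; /* implementation-defined conversion */

    printf("unsigned: %llu\n", u); /* 18446612134627563776 */
    printf("signed:   %lld\n", s); /* -131939081987840 on two's-complement targets */
    return 0;
}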
Why don't you use the modulo operator to compute the remainder of the division, to obtain the last digit? As in the code below:
do {
c[i++] = (char)('0' + (num % 10));
} while ((num = num / 10) != 0);
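Note that either way the loop emits the least-significant digit first, so the buffer still has to be reversed (or filled from the end) before printing. A minimal sketch, with a hypothetical helper name to_digits:

#include <stdio.h>

/* Collect decimal digits with %, then reverse in place so they
   read most-significant first. */
static void to_digits(unsigned long long num, char *c)
{
    unsigned int i = 0;
    do {
        c[i++] = (char)('0' + (num % 10));
    } while ((num = num / 10) != 0);
    c[i] = '\0';

    for (unsigned int lo = 0, hi = i - 1; lo < hi; ++lo, --hi) {
        char tmp = c[lo];
        c[lo] = c[hi];
        c[hi] = tmp;
    }
}

int main(void)
{
    char buf[32];
    to_digits(18446612134627563776ULL, buf);
    printf("%s\n", buf); /* 18446612134627563776 */
    return 0;
}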
It isn't that it overflows; I've tested it with this code.
unsigned long long num = 18446612134627563776ULL;
char c[100];
unsigned int i = 0;
do {
//print here
c[i++] = '0' + (char)(num - (num / 10) * 10);
} while ((num = num/10) != 0);
c[i] = '\0';
cout << c << endl;
The problem is that the compiler is probably treating the 10 as an int, and is having problems converting. After putting the cast in, it actually works. The remaining problem is that the algorithm is backwards: it brings in the 6 first, then the 7, and so on. The output below is from the above code, which is exactly the same as his except for the cast.
This is actual code, and actual output.
67736572643121664481
Hope this helps :-)
I don't understand why the integer value "hash" is getting lower in/after the 3rd loop iteration.
I would guess this happens because the uint limit is 2,147,483,647.
BUT... when I try to go step by step, the value is equal to 2146134658?
I'm not that good at math, but it should be lower than the limit.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FNV_PRIME_32 16777619
#define FNV_OFFSET_32 2166136261U
unsigned int hash_function(const char *string, unsigned int size)
{
unsigned int str_len = strlen(string);
if (str_len == 0) exit(0);
unsigned int hash = FNV_OFFSET_32;
for (unsigned int i = 0; i < str_len; i++)
{
hash = hash ^ string[i];
// Multiply by prime number found to work well
hash = hash * FNV_PRIME_32;
if (hash > 765010506)
printf("YO!\n");
else
printf("NOO!\n");
}
return hash % size;
}
If you are wondering, this if statement is only there for my own debugging.
if (hash > 765010506)
printf("YO!\n");
else
printf("NOO!\n");
765010506 is the value of hash after the next run through the loop.
I don't understand why the integer value "hash" is getting lower in/after the 3rd loop iteration.
All unsigned integer arithmetic in C is modular arithmetic. For unsigned int, it is modulo UINT_MAX + 1; for unsigned long, modulo ULONG_MAX + 1, and so on.
(a modulo m means the remainder of a divided by m; in C, a % m if both a and m are unsigned integer types.)
On many current architectures, unsigned int is a 32-bit unsigned integer type, with UINT_MAX == 4294967295.
Let's look at what this means in practice, for multiplication (by 65520, which happens to be an interesting value; 2^16 - 16):
unsigned int x = 1;
int i;
for (i = 0; i < 10; i++) {
printf("%u\n", x);
x = x * 65520;
}
The output is
1
65520
4292870400
50327552
3221291008
4293918720
16777216
4026531840
0
0
What? How? How come the result ends up zero? That cannot happen!
Sure it can. In fact, you can show mathematically that it happens eventually whenever the multiplier is even, and the modulo is with respect to a power of two (2^32, here).
Your particular multiplier is odd, however; so, it does not suffer from the above. However, it still wraps around due to the modulo operation. If we retry the same with your multiplier, 16777619, and a bit longer sequence,
unsigned int x = 1;
int i;
for (i = 0; i < 20; i++) {
printf("%u\n", x);
x = x * 16777619;
}
we get
1
16777619
637696617
1055306571
1345077009
1185368003
4233492473
878009595
1566662433
558416115
1485291145
3870355883
3549196337
924097827
3631439385
3600621915
878412353
2903379027
3223152297
390634507
In fact, it turns out that this sequence is 1,073,741,824 iterations long (before it repeats itself), and will never yield 0, 2, 4, 5, 6, 7, 8, 10, 12, 13, 14, or 15, for example -- that is, if it starts from 1. It even takes 380 iterations to get a result smaller than 16,777,619 (16,689,137).
For a hash function, that is okay. Each new nonzero input changes the state, so the sequence is not "locked". But, there is no reason to expect the hash value increases monotonically as the length of the hashed data increases; it is much better to assume it is "roughly random" instead: not really random, as it depends on the input only, but also not obviously regular-looking.
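For instance, a quick sketch hashing successively longer prefixes of an arbitrary string (same constants and update step as the question's hash_function) shows the value jumping around rather than growing with input length:

#include <stdio.h>
#include <string.h>

#define FNV_PRIME_32 16777619
#define FNV_OFFSET_32 2166136261U

int main(void)
{
    const char *s = "abcdef";
    size_t len = strlen(s);

    /* Hash every prefix of s; the hash does not grow with prefix length. */
    for (size_t n = 1; n <= len; n++) {
        unsigned int hash = FNV_OFFSET_32;
        for (size_t i = 0; i < n; i++) {
            hash = (hash ^ (unsigned char)s[i]) * FNV_PRIME_32;
        }
        printf("prefix length %zu: %u\n", n, hash);
    }
    return 0;
}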
I would guess this happens because the uint limit is 2,147,483,647.
The maximum value of a 32-bit unsigned integer is roughly 4 billion (2^32 - 1 = 4,294,967,295). The number you're thinking of is the maximum value of a signed integer (2^31 - 1).
2,146,134,658 is slightly less than 2^31 (so it could fit in even a signed 32-bit integer), but it's still very close to the limit. Multiplying it by FNV_PRIME_32 -- which is roughly 2^24 -- will give a result of roughly 2^55, which will cause overflow.
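To make that concrete, a minimal sketch comparing the full 64-bit product with the wrapped 32-bit result (the value 2146134658 is the one from the question; the variable names are mine):

#include <stdio.h>

int main(void)
{
    unsigned int x = 2146134658U;   /* just below 2^31, from the question */
    unsigned int prime = 16777619U; /* FNV_PRIME_32, roughly 2^24 */

    unsigned long long full = (unsigned long long)x * prime; /* exact product, about 2^55 */
    unsigned int wrapped = x * prime;                        /* kept modulo 2^32 */

    printf("full:    %llu\n", full);
    printf("wrapped: %u\n", wrapped); /* equals full modulo 4294967296 */
    return 0;
}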
I'm trying to figure out maximum value for type long by calculating an exponential of base 2 to the power of the bit number.
Unfortunately the calculation overflows at step 61 and I don't understand why.
long exponential(int base, int exponent)
{
long result = (long)base;
for (int i = 0; i < exponent; i++) {
result *= base;
}
return result;
}
unsigned int sLong = sizeof(long);
long lResult = exponential(2, (sLong * 8) - 1);
lResult is 0 after running the function.
What's odd is that when I do this for char, short and int it works fine.
The code here has an off-by-one error.
Consider the following: what is the result of exponential(10, 2)? Empirical debugging (use a printf statement) shows that it's 1000. So exponential calculates the mathematical expression b^(e+1).
The long type usually has 64 bits. This seems to be your case (seeing that the overflow happens around step 61, as you observed). Seeing that it's a signed type, its range is (typically) from -2^63 to 2^63-1. That is, the maximal power of 2 that the data type can represent is 2^62; if the code tries to calculate 2^63, then overflow happens (you probably want to avoid it).
So, because of the off-by-one error, the code will cause an overflow for exponent greater or equal to 62.
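A quick check of the off-by-one, using the exponential function exactly as posted:

#include <stdio.h>

long exponential(int base, int exponent)
{
    long result = (long)base; /* starts at base, not 1 -- the off-by-one */
    for (int i = 0; i < exponent; i++) {
        result *= base;
    }
    return result;
}

int main(void)
{
    printf("%ld\n", exponential(10, 2)); /* prints 1000, i.e. 10^3, not 10^2 */
    return 0;
}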
To fix the off-by-one error, start multiplying from 1:
long power_of(int base, int exponent)
{
long result = (long)1; // 0th power of base is 1
for (int i = 0; i < exponent; i++) {
result *= base;
}
return result;
}
However, this will not get rid of the overflow, because the long data type cannot represent the number 2^63. Fortunately, you can use unsigned long long, which is guaranteed to be big enough for the type of calculations you are doing:
unsigned long long power_of(int base, int exponent)
{
unsigned long long result = 1ULL; // 0th power of base is 1
for (int i = 0; i < exponent; i++) {
result *= base;
}
return result;
}
Print it with the %llu format:
printf("%llu", power_of(2, 63));
Could someone explain to me what's happening to "n" in this situation?
main.c
unsigned long temp0;
PLLSYS0_FWD_DIV_A_DECODE(n);
main.h
#define PLLSYS0_FWD_DIV_A_DECODE(n) ((((unsigned long)(n))>>8)& 0x0000000f)
I understand that n is being shifted right by 8 bits and then ANDed with 0x0000000f. So what does (unsigned long)(n) actually do?
#include <stdio.h>
int main(void)
{
unsigned long test1 = 1;
printf("test1 = %d \n", test1);
printf("(unsigned long)test1 = %d \n", (unsigned long)(test1));
return 0;
}
Output:
test1 = 1
(unsigned long)test1 = 1
In your code example, the cast doesn't make much sense because test1 is already an unsigned long, but it makes sense when the macro is used on a different type like unsigned char etc.
Also you should use %lu in printf to print unsigned long.
printf("(unsigned long)test1 = %lu\n", (unsigned long)(test1));
// ^^
It widens it to the size of an unsigned long. Imagine if you called this with a char and shifted it 8 bits to the right; the ANDing wouldn't work the same.
Also, I just found a reference (look under the right-shift operator) for why it's unsigned. An unsigned value forces a logical shift, in which the left-most bit is replaced with a zero for each position shifted, whereas shifting a signed value typically performs an arithmetic shift, in which the vacated left-most bit is filled with a copy of the sign bit.
Example:
11000011 ( unsigned, shifted to the right by 1 )
01100001
11000011 ( signed, shifted to the right by 1 )
11100001
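The same contrast in code (note that right-shifting a negative signed value is implementation-defined in C; most compilers do an arithmetic shift, as the comments assume):

#include <stdio.h>

int main(void)
{
    unsigned char u = 0xC3;            /* bit pattern 11000011 */
    signed char s = (signed char)0xC3; /* same pattern; -61 on typical machines */

    printf("unsigned >> 1: 0x%02X\n", (unsigned char)(u >> 1)); /* 0x61 = 01100001 */
    printf("signed   >> 1: 0x%02X\n", (unsigned char)(s >> 1)); /* 0xE1 = 11100001, typically */
    return 0;
}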
Could someone explain to me what's happening to "n" in this situation?
You are casting n to unsigned long.
So what does (unsigned long)(n) actually do?
It converts n to unsigned long.
Casting the input is all it's doing before the bit shift and the ANDing, while being careful about order of operations and precedence of operators. It's pretty ugly.
But it looks like they're avoiding the sign bit, and by using a macro instead of a function there's no type checking on n.
It's just ugly.
Better form would be to have a clean, clear function that has input type checking.
That ensures that n has the proper size (in bits) and, most importantly, is treated as unsigned. On most implementations, the right-shift of a signed value performs sign extension: when the number is negative, the vacated bits are filled with 1, not 0. It means that a negative number shifted right will always result in a negative number.
For example:
#include <stdio.h>

int main(void)
{
long i = -1;
long x, y;
x = ((unsigned long)i) >> 8;
y = i >> 8;
printf("%ld %ld\n", x, y);
return 0;
}
On my machine it outputs:
72057594037927935 -1
Because of the sign extension in y, the number continues to be -1.
Alright, I know this question might seem weird, but I still want to demystify it.
1.) An int type in C can store numbers in the range -2147483648 to 2147483647.
2.) If we append unsigned in front of it, the range would become 0 to 2147483647.
3.) The thing is, why do we even bother to use the keyword unsigned when the code below actually works?
The Code:
#include <stdio.h>
int main(void)
{
int num = 2147483650;
printf("%u\n" , num);
return 0;
}
4.) As you see, I can still print out the integer as an unsigned type if I use the %u specifier, and it will print me the value 2147483650.
5.) Even if I create another integer with value 50 and sum it up with num, although it overflows, I can still print out the correct sum value by using the %u specifier. So why is the unsigned keyword still a necessity??
Thanks for spending time reading my question.
No, this is true only on certain platforms (where an int is 32-bit, 2's-complement).
No, in that case the range would be 0 to 4294967295.
That code exhibits undefined behaviour.
See 3.
See 2. and 3.
Considering only Q3, "Why do we bother to use unsigned", consider this program fragment:
#include <stdio.h>
#include <limits.h>

int main(void) {
int num = INT_MAX;
num += 50; /* signed overflow: undefined behaviour */
printf("%u\n", num); /* Yes, this *might* print what you want */
/* But this "if" almost certainly won't do what you want. */
if(num > 0) {
printf("Big numbers are big\n");
} else {
printf("Big numbers are small\n");
}
return 0;
}
We use "unsigned" because unsigned int behaves differently from int. There are more interesting behaviors than just how printf works.
Well, firstly, the result of assigning an out-of-range number is implementation-defined: it doesn't have to give the value that will "work" when printed with the %u format specifier. But to really answer your question, consider this modification to your code:
#include <stdio.h>
int main(void)
{
int num = 2147483650;
unsigned num2 = 2147483650;
num /= 2;
num2 /= 2;
printf("%u, %u\n" , num, num2);
return 0;
}
(If 2147483650 is out of range of int on your platform, but within the range of unsigned, then you will only get the correct answer of 1073741825 using the unsigned type).
Wrong. The range is INT_MIN (which on 2's complement systems is -(INT_MAX + 1)) to INT_MAX; INT_MAX and INT_MIN depend on the compiler, architecture, etc.
Wrong. The range is 0 to UINT_MAX which is usually INT_MAX*2 + 1
Unsigned integers have a different behaviour regarding overflow and semantics. In 2's complement, the bit pattern with only the uppermost bit set (the rest zero) is INT_MIN, which has no positive counterpart, so it is somewhat of a special case. Unsigned integers can make use of the full range of bit patterns.
Here's an exercise: On a 32 bit machine compare the output of printf("%d %u", 0xffffffff, 0xffffffff);
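Spelled out as a complete program (on a platform with 32-bit int, the constant 0xffffffff has type unsigned int; strictly speaking, printing it with %d is undefined, but typical two's-complement machines show the reinterpretation below):

#include <stdio.h>

int main(void)
{
    printf("%d %u\n", 0xffffffff, 0xffffffff); /* typically: -1 4294967295 */
    return 0;
}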
Because unsigned integers behave differently than signed ones.
I'm trying to get the numerical (double) value from a byte array of 16 elements, as follows:
unsigned char input[16];
double output;
...
int i;
double a = input[0];
output = a;
for (i = 1; i < 16; i++) {
a = input[i] << 8*i;
output += a;
}
but it does not work.
It seems that the temporary variable that contains the result of the left-shift can store only 32 bits, because after 4 shift operations of 8 bits it overflows.
I know that I can use something like
a = input[i] * pow(2,8*i);
but, for curiosity, I was wondering if there's any solution to this problem using the shift operator...
Edit: this won't work (see comment) without something like __int128.
a = input[i] << 8*i;
The expression input[i] is promoted to int (6.3.1.1), which is 32 bits on your machine. To overcome this issue, the left-hand operand has to be 64-bit, like in
a = (1L * input[i]) << 8*i;
or
a = (long long unsigned) input[i] << 8*i;
And remember about endianness.
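A sketch of the cast at work, assembling just the first 8 bytes into a 64-bit value, least-significant byte first (my own sample data; all 16 bytes would need something like __int128, as the edit above notes):

#include <stdio.h>

int main(void)
{
    unsigned char input[8] = { 0x88, 0x77, 0x66, 0x55, 0x44, 0x33, 0x22, 0x11 };
    unsigned long long value = 0;

    for (int i = 0; i < 8; i++) {
        /* The cast widens each byte to 64 bits *before* the shift. */
        value += (unsigned long long)input[i] << (8 * i);
    }
    printf("0x%llx\n", value); /* 0x1122334455667788 */
    return 0;
}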
The problem here is that indeed a 32-bit variable cannot hold more than 4*8 bits of shifted input, i.e. your code works for 4 chars only.
What you could do is find the first significant char and use Horner's rule: a_n*x^n + a_(n-1)*x^(n-1) + ... + a_0 = ((...(a_n*x + a_(n-1))*x + a_(n-2))*x + ...)*x + a_0, as follows:
char coefficients[16] = { 0, 0, ..., 14, 15 };
double result = 0.;
for (int exponent = 15; exponent >= 0; --exponent) {
result *= 256.; // instead of << 8
result += coefficients[exponent];
}
In short, no, you can't convert a sequence of bytes directly into a double by bit-shifting as shown by your code sample.
A byte (an integer type) and a double (a floating-point type, i.e. not an integer type) are not bitwise compatible: you can't just bit-shift the values of a bunch of bytes into a floating-point type and expect an equivalent result.
1) Assuming the byte array is a memory buffer referencing an integer value, you should be able to convert your byte array into a 128-bit integer via bit-shifting and then convert that resulting integer into a double. Don't forget that endian-issues may come into play depending on the CPU architecture.
2) Assuming the byte array is a memory buffer that contains a 128-bit long double value, and assuming there are no endian issues, you should be able to memcpy the value from the byte array into the long double value
typedef unsigned char BYTE; /* assuming a Windows-style BYTE */

union doubleOrByte {
BYTE buffer[16];
long double val;
} dOrb;
dOrb.val = 3.14159267;
long double newval = 0.0;
memcpy((void*)&newval, (void*)dOrb.buffer, sizeof(dOrb.buffer));
Why not simply cast the array to a double pointer?
unsigned char input[16];
double* pd = (double*)input;
for (int i=0; i<sizeof(input)/sizeof(double); ++i)
cout << pd[i];
if you need to fix endian-ness, reverse the char array using the STL reverse() before casting to a double array.
Have you tried std::atof?
http://www.cplusplus.com/reference/clibrary/cstdlib/atof/
Are you trying to convert a string representation of a number to a real number? In that case, the C-standard atof is your best friend.
Well, based on operator precedence, the right-hand side of
a = input[i] << 8*i;
is evaluated before it is converted to a double, so you are shifting input[i] by 8*i bits, which computes its result in a 32-bit temporary and thus overflows. You can try the following:
a = (long long unsigned int)input[i] << 8*i;
Edit: Not sure what the size of a double is on your system, but on mine it is 8 bytes. If this is the case for you as well, the second half of your input array will never be seen, as the shift will overflow even the 64-bit unsigned type.