I am measuring AC voltage and calculating the RMS value from the ADC readings.
I have an array holding 128 samples of my signal.
While squaring the samples I get wrong results.
unsigned long int calval = 0;
unsigned int loop;
float adcbufval;

for (loop = 0; loop < 128; loop++)
{
    printf("adcval %d = %d\t ", loop, adc_temp[loop]);
    calval = (adc_temp[loop]) * (adc_temp[loop]);
    printf("\t %ld \n", calval);
}
output:
adcval 1 = 168 28224
adcval 2 = 32 1024
adcval 3 = -88 7744
adcval 4 = -211 44521 // sqr(211) = 44521, fine here
adcval 5 = -314 33060 // sqr(314) = 98596; 98596 - 65536 = 33060 printed instead of 98596
adcval 6 = -416 41984
adcval 7 = -522 10340
adcval 8 = -655 35809
adcval 9 = -773 7705
adcval 10 = -889 3889
Though I defined calval as unsigned long int (range 0 to 4,294,967,295), it overflows at 65536.
With a normal desktop C compiler it works fine.
Any suggestions?
Hardware is dsPIC30f5011 / MPLAB 8.8x
You haven't shown it (or I've overlooked it), but if adc_temp[] is an array of int, then to safely square the values you must cast at least one side of the multiplication to long before doing so. Otherwise the result of the multiplication will still be int, and it will only be converted to unsigned long for the assignment, after overflow has already occurred.
Like so:
calval = (long)(adc_temp[loop])*(adc_temp[loop]);
That cast may be unsigned long if adc_temp[] is also unsigned.
According to the datasheet the dsPIC30F5011 is a 16-bit microcontroller, so int is 16 bits. Note that both C90 and C99 require unsigned long to cover at least 0 to 2^32 - 1, so a conforming compiler cannot shrink long to 16 bits; the overflow here happens in the 16-bit int multiplication, before the result is ever converted to unsigned long. If you really need even wider math, unsigned long long (at least 64 bits in C99) is available.
Related
The following code is not working properly. I wanted it to print the sum of 2 billion + 2 billion, but even though I'm using long I can't get the correct answer.
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long a = 2000000000;
    long b = 2000000000;
    printf("%li\n", a + b);
    return 0;
}
When I run this program I get the value -294967296. I'm using the VS Code IDE with the MinGW compiler on Windows 11.
long is at least 32 bits and its range must span at least ±(2^31 - 1), i.e. ±2,147,483,647.
On some machines long is wider, e.g. [-2^63 ... 2^63 - 1].
On OP's machine, it is [-2^31 ... 2^31 - 1].
For OP, a + b incurs signed integer overflow, which is undefined behavior (UB), as the sum of the two long values is outside the long range. A common result is that the sum is "wrapped", yielding a long of -294,967,296, although many other results are possible, including a crash.
To handle sums like 4,000,000,000, perform the addition with unsigned long math, with its minimum range of [0 ... 2^32 - 1], or use long long.
// printf("%li\n",a + b);
printf("%lli\n",0LL + a + b);
Or ...
long long a = 2000000000;
long long b = 2000000000;
printf("%lli\n",a + b);
The size of long is not fixed across platforms. It varies with the architecture, the operating system, and even the compiler in use. On some systems it is the same size as int, on others the same size as long long:

OS        Architecture         Size
Windows   IA-32                4 bytes
Windows   Intel® 64 or IA-64   4 bytes
Linux     IA-32                4 bytes
Linux     Intel® 64 or IA-64   8 bytes
Mac OS X  IA-32                4 bytes
Mac OS X  Intel® 64 or IA-64   8 bytes
long might be 32-bit in your case. To avoid this confusion, use the <stdint.h> types, where the sizes are guaranteed and the code stays portable:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int64_t a = 2000000000;
    int64_t b = 2000000000;
    printf("res = %" PRId64 "!\n", a + b);
    return 0;
}
long may be the same size as int; use long long instead.
Integer size in C
I'm hoping that somebody can give me an understanding of why the code works the way it does. I'm trying to wrap my head around things but am lost.
My professor has given us this code snippet which we have to use to generate random numbers in C. The snippet generates a 64-bit integer, and we have to adapt it to also generate 32-bit, 16-bit, and 8-bit integers. I'm not necessarily asking for a solution, just for an explanation of how the original snippet works, so that I can adapt it from there.
long long rand64()
{
    int a, b;
    long long r;

    a = rand();
    b = rand();
    r = (long long)a;
    r = (r << 31) | b;
    return r;
}
Questions I have about this code are:
Why is it shifted 31 bits? I thought rand() generated a number between 0-32767 which is 16 bits, so wouldn't that be 48 bits?
Why do we say | (or) b on the second to last line?
I'm making the relatively safe assumption that, in your computer's C implementation, long long is a 64-bit data type.
The key here is that, since long long r is signed, any value with the highest bit set will be negative. Therefore, the code shifts r by 31 bits to avoid setting that bit.
The | is the bitwise OR operator, which combines the two values by setting in r every bit that is set in b.
EDIT:
After reading some of the comments, I realized that my answer needs correction. rand() returns a value no greater than RAND_MAX, which is typically 2^31 - 1. Therefore r starts out as a 31-bit value, and if you shifted it 32 bits to the left, bit 31 (0-up counting) of the result would always be zero.
rand() generates a random value in [0...RAND_MAX]. Its quality is of questionable repute, but let us set that reputation aside, assume rand() is good enough, and assume RAND_MAX is a Mersenne number (a power of 2, minus 1).
Weakness in OP's code: if RAND_MAX == pow(2,31)-1, a common occurrence, then OP's rand64() only returns values in [0...pow(2,62)). @Nate Eldredge
Instead, loop as many times as needed.
To find how many random bits each call returns, we need log2(RAND_MAX + 1). Fortunately this is easy with an awesome macro from "Is there any way to compute the width of an integer type at compile-time?":
#include <stdlib.h>
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
#define RAND_MAX_BITWIDTH (IMAX_BITS(RAND_MAX))
Example: rand_ul() returns a random value in the [0...ULONG_MAX] range, whether unsigned long is 32-bit, 64-bit, etc.
unsigned long rand_ul(void) {
    unsigned long r = 0;
    for (int i = 0; i < IMAX_BITS(ULONG_MAX); i += RAND_MAX_BITWIDTH) {
        r <<= RAND_MAX_BITWIDTH;
        r |= rand();
    }
    return r;
}
My thoughts: if one declares an int it basically gets an unsigned int. So if I need a negative value I have to explicitly create a signed int.
I tried
int a = 0b10000101;
printf("%d", a);           // I get 138, which I expected
signed int b = 0b10000101; // here I expect -10, but I also get 138
printf("%d", b);           // also tried %u
So am I wrong that a signed integer in binary is a negative value?
How can I create a negative value in binary format?
Edit: Even if I use 16/32/64 bits I get the same result; unsigned/signed doesn't seem to make a difference without manually setting the sign bit.
If numbers are represented as two's complement you just need to have the sign bit set to ensure that the number is negative. That's the MSB. If an int is 32 bits, then 0b11111111111111111111111111111111 is -1, and 0b10000000000000000000000000000000 is INT_MIN.
To adjust for the size int(8|16|64)_t, just change the number of bits. The sign bit is still the MSB.
Keep in mind that, depending on your target, int could be 2 or 4 bytes. This means that int a = 0b10000101 is nowhere near enough bits to set the sign bit.
If your int is 4 bytes, you need 0b10000000 00000000 00000000 00000000 (spaces added for clarity; they are not legal in the actual constant).
For example on a 32-bit target:
int b = 0b11111111111111111111111111111110;
printf("%d\n", b); // prints -2
because int a = 0b10000101 has only 8 bits, where you need 16 or 32. Try this:
int a = 0b10000000000000000000000000000101;
That should create a negative number if your machine has 32-bit int. If that does not work, try:
int a = 0b1000000000000101;
there are other ways to produce negative numbers:
int a = (0b1 << 31) + 0b101;
or, if you have a 16-bit system:
int a = (0b1 << 15) + 0b101;
(the parentheses matter: + binds more tightly than <<, so without them the shift amount would be 31 + 5)
or this one works for both 32-bit and 16-bit:
int a = ~0b0 * 0b101;
or here is another that works on both if you want to get -5:
int a = ~0b101 + 1;
0b101 is 5 in binary; ~0b101 gives -6, so to get -5 you add 1.
EDIT:
Since I now see that you are confused about what signed and unsigned numbers are, I will try to explain it as simply as possible.
So when you have:
int a = 5;
is the same as:
signed int a = 5;
and both of them would be positive. It would then also be the same as:
unsigned int a = 5;
because 5 is positive number.
On the other hand if you have:
int a = -5;
this would be the same as
signed int a = -5;
but it would not be the same as following:
unsigned int a = -5;
the first two would be -5; the third one is not the same. In fact it would be the same as if you had entered 4294967291, because -5 and 4294967291 have the same 32-bit binary form; the unsigned keyword means the compiler stores the same bits but treats the value as positive.
How to create a negative binary number using signed/unsigned in C?
Simply negate a positive constant. Attempting to do it by writing out many 1's, like ...1110110, assumes a particular bit width for int. Better to stay portable:
#include <stdio.h>
int main(void) {
#define NEGATIVE_BINARY_NUMBER (-0b1010)
printf("%d\n", NEGATIVE_BINARY_NUMBER);
}
Output
-10
I'm implementing RSA in C.
I'm using unsigned long long int (top limit: 18,446,744,073,709,551,615).
The problems come when I have to calculate things like 4294967296 ^ 2.
It should be 18446744073709551616, but I get 0 (overflow).
I need to calculate values whose results exceed that top limit.
I've tried using float, double, long double, but the results are incorrect.
Example:
4294967000.0 * 4294967000.0 the result is 18446741874686296064.0
but it should be 18446741531089000000
OpenSSL example:
#include <stdio.h>
#include <openssl/bn.h>

/* compile with -lcrypto */
int main(void)
{
    char p_sa[] = "4294967296";
    char p_sb[] = "4294967296";

    BN_CTX *c = BN_CTX_new();
    BIGNUM *pa = BN_new();
    BIGNUM *pb = BN_new();

    BN_dec2bn(&pa, p_sa);
    BN_dec2bn(&pb, p_sb);
    BN_mul(pa, pb, pa, c);

    char *number_str = BN_bn2hex(pa);
    printf("%s\n", number_str);

    OPENSSL_free(number_str);
    BN_free(pa);
    BN_free(pb);
    BN_CTX_free(c);
    return 0;
}
"implementing RSA" + "I have to calculate thing like 4294967296 ^ 2" is contradictory. To implement RSA, that calculation is not needed. The algorithm does not need wider than 64-bit integers.
I've tried using float, double, long double, but the results are incorrect.
4294967000.0 * 4294967000.0 the result is 18446741874686296064.0
Use unsigned long long math. Typical double has only 53 bits of precision, yet this calculation needs 60+ bits for a precise product.
#include <stdio.h>

int main(void) {
    unsigned long x = 4294967000;
    printf("%lu * %lu the result is %lu\n", x, x, x * x);                        // overflow
    printf("%lu * %lu the result is %llu\n", x, x, (unsigned long long)(x * x)); // overflow
    printf("%lu * %lu the result is %llu\n", x, x, (unsigned long long)x * x);   // good 64-bit math
    return 0;
}
Output (from a platform where unsigned long is 32 bits):
4294967000 * 4294967000 the result is 87616
4294967000 * 4294967000 the result is 87616
4294967000 * 4294967000 the result is 18446741531089000000
4294967296 is this:
> witch 4294967296
witch (c) 1981-2017 Alf Lacis Build: 20200823.144838
Param ________Hex_______ _______Uns.Dec______
1 0x0000000100000000 4294967296
As you can see, it is already using the bottom bit of byte 5 (0x01).
If you square this number, it would need to set the bottom bit of byte 9, but there are only 8 bytes in a 64-bit unsigned integer, so the answer, 0, is what the processor produces. And that is why the value of UINT64_MAX (stdint.h) is 18446744073709551615, not 18446744073709551616.
18446744073709551615 is this in hex:
>witch 18446744073709551615
witch (c) 1981-2017 Alf Lacis Build: 20200823.144838
Param ________Hex_______ _______Uns.Dec______
1 0xffffffffffffffff 18446744073709551615
I wrote this bit of code to learn about bit shifting. To my surprise, even though I declared x to be an unsigned int, the output includes a negative number, namely when the leftmost bit is set to 1. My question: why? I thought an unsigned int was never negative. Per sizeof(x), x is 4 bytes wide.
Here is the code fragment:
#include <stdio.h>

int main(void)
{
    unsigned int x;
    x = 1;
    for (int i = 0; i < 32; i++)
    {
        printf("2^%i = %i\n", i, x);
        x <<= 1;
    }
    return 0;
}
Here is the output:
2^0 = 1
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16
2^5 = 32
2^6 = 64
2^7 = 128
2^8 = 256
2^9 = 512
2^10 = 1024
2^11 = 2048
2^12 = 4096
2^13 = 8192
2^14 = 16384
2^15 = 32768
2^16 = 65536
2^17 = 131072
2^18 = 262144
2^19 = 524288
2^20 = 1048576
2^21 = 2097152
2^22 = 4194304
2^23 = 8388608
2^24 = 16777216
2^25 = 33554432
2^26 = 67108864
2^27 = 134217728
2^28 = 268435456
2^29 = 536870912
2^30 = 1073741824
2^31 = -2147483648
Just use the correct conversion specifier:
printf("2^%u = %u\n", i, x);
You're using the %i format specifier, which prints its argument as a signed int.
If you want to print as unsigned, use the %u format specifier.
printf("2^%i = %u\n", i, x);
When you talk about the sign of an integral value in C (C++ and many other programming languages), you are just talking about how you interpret some data.
You must understand that what is stored inside an unsigned int is just bits, regardless of the sign; behaving as "unsigned" is merely an interpretation of the value.
So by using the %i specifier you are treating it as a signed value, no matter how it is declared. Try %u, which specifies that you want to treat it as unsigned.
According to the C++ reference page on printf, using %i in the format string means the corresponding argument will be treated as a signed decimal integer. This means your unsigned int will be reinterpreted as a signed int.
In C++, casting unsigned to signed (and reverse) only changes the interpretation, not the bit values. So setting the leftmost bit to 1 makes the number negative because that is what it corresponds to in signed integer interpretation.
To achieve the expected number, use %u instead for unsigned integer interpretation.