Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 8 years ago.
This is the code (copy & pasted):
#include <stdio.h>

int main(){
    char x, y, result;

    //Sample 1:
    x = 35;
    y = 85;
    result = x + y;
    printf("Sample 1: %hi + %hi = %hi\n", x, y, result);

    //Sample 2:
    x = 85;
    y = 85;
    result = x + y;
    printf("Sample 2: %hi + %hi = %hi\n", x, y, result);

    return 0;
}
I've tried to compile it but it doesn't work. Am I stupid or is it "int" or "short" instead of char at the beginning? Once I change this it works, but I'm worried that it should work as is...
Does the program really just add x and y and show the result? That's what it does if I use short instead of char.
Thanks in advance!
E: Why the down votes? What is wrong with my post?
Thoughts:
For an introductory course, this is a terrible example. Depending on your implementation, char is either a signed or an unsigned type, and the code will behave very differently depending on this fact.
That being said, yes, this code is basically adding two numbers and printing the result. I agree that the %hi is odd: that expects a short int. I'd personally expect either %hhi or just %i, and let integer promotion do its thing.
If the numbers are unsigned chars
85 + 35 == 120, which is probably less than CHAR_MAX (which is probably 255). So there's no problem and everything works fine.
85 + 85 == 170, which is probably less than CHAR_MAX (which is probably 255). So there's no problem and everything works fine.
If the numbers are signed chars
85 + 35 == 120, which is probably less than CHAR_MAX (which is probably 127). So there's no problem and everything works fine.
85 + 85 == 170, which is probably greater than CHAR_MAX. This causes signed integer overflow, which is undefined behavior.
The output of the program appears to be
Sample 1: 35 + 85 = 120
Sample 2: 85 + 85 = -86
I compiled this on http://ideone.com/ and it worked fine.
The output is in fact what you would expect. The program is working! The reason you are seeing a number that you do not expect is due to the width of a char data type - 1 byte.
The C standard does not dictate whether char is signed or unsigned, but assuming it is signed it can represent numbers in the range -128 to 127 (a char is 8 bits, or 1 byte). 85 + 85 = 170, which is outside of this range... the MSB of the byte becomes 1 and the number system wraps around to give you a negative number. Try reading up on two's complement arithmetic.
The arithmetic is:
01010101 +
01010101
--------
10101010
Because the data type is signed and the MSB is set, the number is now negative, in this case -86
Note: Bill Lynch's answer... he has rightly pointed out that signed overflow is UB
Related
I have 4 bytes whose values, as unsigned chars, are: 63 129 71 174.
Supposedly, when converted to float, this should become 1.0099999904632568.
However, all I get back is 1.01, which is not enough precision for what I am doing.
I have tried popular methods like memcpy or a union, but to no avail, which led me to believe... is this some kind of limitation in C?
If so, what is the optimal solution? Thanks.
EDIT: Sorry for the bad example. I should have picked a better one for my case. Consider these 4 bytes: 0 1 229 13.
The value is very small, like really, really small. However, it's 4 bytes, so it still represents a float number. Yet C just returns 0. I printed 16 digits after the decimal point, and it still does not work.
So why, and how does one work with such a number?
EDIT 2: Sorry. My friend messed up. She gave me the 4-byte sequence and said it was a 32-bit float, but it turned out to be a 32-bit unsigned int. It pretty much messed up my entire afternoon. REALLY SORRY FOR BOTHERING.
I guess the conclusion here is: do not always trust your friend.
Using memcpy() really is the way to go.
#include <stdio.h>
#include <string.h>

int main(void)
{
    const unsigned char raw1[] = { 174, 71, 129, 63 };
    const unsigned char raw2[] = { 0, 1, 229, 13 };
    float x, y;

    memcpy(&x, raw1, sizeof x);
    memcpy(&y, raw2, sizeof y);

    printf("%.6f\n", x);
    printf("%g\n", y);
    return 0;
}
This prints 1.010000; I don't think it's reasonable to expect more precision out of a float. Note that I swapped the order of the bytes, since I tested on a little-endian system (ideone.com).
With %g for the second number, it prints 1.41135e-30.
Really simple question here. I have a really simple program for adding two numbers and printing out their sum (below). When running the program, it works as expected and prints out 40000 for 20000 + 20000. But when I change int a, b and sum to short a, b and sum, I get -25536 as an answer. Can anyone explain why this happens? I have an idea, but would love to hear it from someone who knows. Thanks for reading.
int a, b, sum;
a = 20000; b = 20000; sum = a+b;
printf("%d + %d = %d\n", a, b, sum);
On your system, short is presumably 16 bits, so the range of values is -32768 to 32767. 20000 + 20000 is larger than the maximum value, so this causes overflow, which results in undefined behavior.
If you change to unsigned short, the range becomes 0 to 65535, and the addition will work. In addition, overflow is well-defined with unsigned integers: it simply wraps around using modular arithmetic, e.g. (unsigned short)65535 + 2 = 1.
The maximum value of a signed short is 32767
In binary, this is a 16-bit number, rather than a 32-bit number (as is the case with ints). Because it's signed, it is represented as follows:
0 11111 11111 11111
If you add 1 to that, it becomes
1 00000 00000 00000
Which is back to -32768
You probably get the idea.
Closed 6 years ago.
I am receiving data from a ublox GPS module in 24-bit bitfields (3 bytes of a 4-byte message), and I need to convert these 24-bit data fields to signed decimal values, but I can't find a description of how to do this in the specification. I also know certain values from another program that came with the module.
For positive values, it seems that it simply converts the 24-bit binary number to decimal and that's it, e.g. 0x000C19 = 3097 and 0x000BD0 = 3024, but for negative numbers I'm in trouble. 2's complement doesn't seem to work. Here are some known values: 0xFFFFC8 = -57, 0xFCB9FE = -214528, 0xFF2C3B = -54215 and 0xFFFA48 = -1462. Using 2's complement, the conversion is a few numbers off every time (-56, -214530, -54213, -1464, respectively). (Hex numbers are used to avoid having to write 24 digits every time.)
Thanks for your help in advance!
First things first: the "known" values you have there are not what you think they are:

#include <stdio.h>

static void p24(int x)
{
    printf("%8d = 0x%06X\n", x, (0x00ffffff & (unsigned)x));
}

int main(int argc, char *argv[])
{
    p24(-57);
    p24(-214528);
    p24(-54215);
    p24(-1462);
    return 0;
}
Compiling and running on a two's-complement machine prints
     -57 = 0xFFFFC7
 -214528 = 0xFCBA00
  -54215 = 0xFF2C39
   -1462 = 0xFFFA4A
When converting to 2's complement you will of course have to pad (sign-extend) to the full length of the target data type you're working with, so that the sign is properly carried over. Then you divide the signed value down to the desired number of bits.
Ex:
#include <stdio.h>
#include <stdint.h>

/* 24 bits big endian; unsigned char avoids implementation-defined
 * behaviour when storing initializers greater than 127 */
static unsigned char const m57[]     = {0xFF, 0xFF, 0xC7};
static unsigned char const m214528[] = {0xFC, 0xBA, 0x00};
static unsigned char const m54215[]  = {0xFF, 0x2C, 0x39};
static unsigned char const m1462[]   = {0xFF, 0xFA, 0x4A};

static int32_t i_from_24b(unsigned char const *b)
{
    /* Place the 24 bits in the top of a 32-bit word, then divide
     * the signed value back down so the sign carries over. */
    return (int32_t)(
          (((uint32_t)b[0] << 24) & 0xFF000000)
        | (((uint32_t)b[1] << 16) & 0x00FF0000)
        | (((uint32_t)b[2] <<  8) & 0x0000FF00)
    ) / 256;
}

int main(int argc, char *argv[])
{
    printf("%d\n", i_from_24b(m57));
    printf("%d\n", i_from_24b(m214528));
    printf("%d\n", i_from_24b(m54215));
    printf("%d\n", i_from_24b(m1462));
    return 0;
}
Will print
-57
-214528
-54215
-1462
OP's information is inconsistent, and likely @John Bollinger's comment applies: " ... program performing the conversions is losing precision ..."
OP needs to review and post more information/code.
Hex        OP's Dec   2's comp.   Diff
0xFFFFC8        -57        -56      -1
0xFCB9FE    -214528    -214530       2
0xFF2C3B     -54215     -54213      -2
0xFFFA48      -1462      -1464       2
Due to the complexity of this comment, it is posted as an answer.
Thanks everyone for trying to help, but I just realised my mistake, twice over! The program I got the "known values" from shows the scaled values of the measured data, and it seems that a ±1/±2 deviation is within the 0.001 precision of the displayed value after conversion, so it does indeed use 2's complement.
Sorry for wasting your time, everyone! (SO is awesome tho)
So I'm using this (it's from another question I asked),
unsigned char *y = resultado->informacion;
int i = 0;
int tam = data->tamanho;
unsigned char resAfter;

for (int i = 0; i < tam; i++)
{
    unsigned char x = data->informacion[i];
    x <<= 3;
    if (i > 0)
    {
        resAfter = (resAfter << 5) | x;
    }
    else
    {
        resAfter = x;
    }
}
printf("resAfter es %s\n", resAfter);
so at the end I have this really long number (I'm estimating about 43 bits). How can I get groups of 8 bits? I think I'm getting something like (010101010101010.....000) and I want to separate this into groups of 8.
Another question: I know for sure that resAfter is going to have n bits where n is a multiple of 8 plus 3, so my question is: is this possible? Or is C going to complete the byte? Like, if I get 43 bits, is C going to pad them with 0 so I have 48 bits? And is there a way to delete these 3 bits?
I'm new to C and bitwise operations, so sorry if what I'm doing is really bad.
Basically in programming you deal with bytes (I think, at least in most cases); in C you deal with types of specific sizes (depending on the system you run it on).
That said, char usually has a size of 1 byte, and I don't really think you can play around with single bits. I mean, you can do operations on them (<< for instance) at the scale of single bits, but I don't know of any standard way to store fewer than 8 bits in a variable in C (though I may be wrong about it).
I have a little (big, dumb?) question about ints and chars in C. I remember from my studies that "chars are little integers and vice versa," and that's okay with me. If I need to use small numbers, the best way is to use a char type.
But in a code like this:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int i = atoi(argv[1]);
    printf("%d -> %c\n", i, i);
    return 0;
}
I can use as argument every number I want. So with 0-127 I obtain the expected results (the standard ASCII table) but even with bigger or negative numbers it seems to work...
Here are some examples:
-181 -> K
-182 -> J
300 -> ,
301 -> -
Why? It seems to me that it's cycling around the ascii table, but I don't understand how.
When you pass an int for the "%c" conversion specifier, the int is converted to an unsigned char and then written.
The values you pass are being converted to different values when they are outside the range of an unsigned char (0 to UCHAR_MAX). The system you are working on probably has UCHAR_MAX == 255.
When converting an int to an unsigned char:
If the value is larger than UCHAR_MAX, (UCHAR_MAX+1) is subtracted from the value as many times as needed to bring it into the range 0 to UCHAR_MAX.
Likewise, if the value is less than zero, (UCHAR_MAX+1) is added to the value as many times as needed to bring it into the range 0 to UCHAR_MAX.
Therefore:
(unsigned char)-181 == (-181 + (255+1)) == 75 == 'K'
(unsigned char)-182 == (-182 + (255+1)) == 74 == 'J'
(unsigned char)300 == (300 - (255+1)) == 44 == ','
(unsigned char)301 == (301 - (255+1)) == 45 == '-'
The %c format parameter interprets the corresponding value as a character, not as an integer. However, when you lie to printf and pass an int in what you tell it is a char, its internal manipulation of the value (to get a char back, as a char is normally passed as an int anyway, with varargs) happens to yield the values you see.
My guess is that %c takes the first byte of the value provided and formats that as a character. On a little-endian system such as a PC running Windows, that byte would represent the least-significant byte of any value passed in, so consecutive numbers would always be shown as different characters.
You told it the number is a char, so it's going to try every way it can to treat it as one, despite being far too big.
Looking at what you got, since J and K are in that order, I'd say it's using the integer % 128 to make sure it fits in the legal range.
Edit: Please disregard this "answer".
Because you are on a little-endian machine :)
Seriously, this is undefined behavior. Try changing the code to printf("%d -> %c, %c\n", i, i, '4'); and see what happens then...
When we use %c in a printf statement, it can access only the lowest byte of the integer.
Hence anything greater than 255 is treated as n % 256.
For example, an input of 321 yields the output A (since 321 % 256 == 65).
What atoi does is convert the string to a numerical value, so that "1234" becomes 1234 and not just a sequence of the ordinal numbers of its characters.
Example:
char *x = "1234"; // x[0] = 49, x[1] = 50, x[2] = 51, x[3] = 52 (see the ASCII table)
int y = atoi(x); // y = 1234
int z = (int)x[0]; // z = 49 which is not what one would want