24 bit signed data type [closed] - c

I am receiving data from a ublox GPS module in 24-bit long bit fields (3 bytes of a 4-byte message), and I need to convert these 24-bit data fields to signed decimal values, but I can't find a description of how to do this in the specification. I also know certain values from another program that came with the module.
For positive values it seems that it simply converts the 24-bit binary number to decimal and that's it, e.g. 0x000C19 = 3097 and 0x000BD0 = 3024, but for negative numbers I'm in trouble. 2's complement doesn't seem to work. Here are some known values: 0xFFFFC8 = -57, 0xFCB9FE = -214528, 0xFF2C3B = -54215 and 0xFFFA48 = -1462. Using 2's complement, the conversion is a few numbers off every time (-56, -214530, -54213, -1464, respectively). (Hex numbers are used to avoid having to write 24 binary digits every time.)
Thanks for your help in advance!

First things first: the "known" values you have there are not what you think they are:
#include <stdio.h>

static void p24(int x)
{
    printf("%8d = 0x%06X\n", x, (0x00ffffff & (unsigned)x));
}

int main(int argc, char *argv[])
{
    p24(-57);
    p24(-214528);
    p24(-54215);
    p24(-1462);
    return 0;
}
Compiling and running this on a 2's complement machine prints
-57 = 0xFFFFC7
-214528 = 0xFCBA00
-54215 = 0xFF2C39
-1462 = 0xFFFA4A
When converting to 2's complement you of course have to pad the value to the full width of the target data type you're working with, so that the sign bit ends up in the sign position and is carried over properly. Here that is done by placing the 3 bytes in the top 24 bits of a 32-bit integer; dividing the signed result by 256 then brings it back down to the intended 24-bit range while preserving the sign.
Ex:
#include <stdio.h>
#include <stdint.h>

/* 24 bits big endian; unsigned char avoids implementation-defined
   behaviour when storing byte values above 127 */
static unsigned char const m57[]     = {0xFF, 0xFF, 0xC7};
static unsigned char const m214528[] = {0xFC, 0xBA, 0x00};
static unsigned char const m54215[]  = {0xFF, 0x2C, 0x39};
static unsigned char const m1462[]   = {0xFF, 0xFA, 0x4A};

static int32_t i_from_24b(unsigned char const *b)
{
    /* place the 3 bytes in the top 24 bits of a 32-bit value, then
       divide by 256 to shift back down with the sign preserved */
    return (int32_t)(
          (((uint32_t)b[0] << 24) & 0xFF000000)
        | (((uint32_t)b[1] << 16) & 0x00FF0000)
        | (((uint32_t)b[2] <<  8) & 0x0000FF00)
    ) / 256;
}

int main(int argc, char *argv[])
{
    printf("%d\n", i_from_24b(m57));
    printf("%d\n", i_from_24b(m214528));
    printf("%d\n", i_from_24b(m54215));
    printf("%d\n", i_from_24b(m1462));
    return 0;
}
Will print
-57
-214528
-54215
-1462

OP's information is inconsistent and likely @John Bollinger's comment applies: " ... program performing the conversions is losing precision ..."
OP needs to review and post more information/code.
OP's Hex | OP's Dec | 2's comp. | Diff
---------+----------+-----------+-----
0xFFFFC8 |      -57 |       -56 |   -1
0xFCB9FE |  -214528 |   -214530 |    2
0xFF2C3B |   -54215 |    -54213 |   -2
0xFFFA48 |    -1462 |     -1464 |    2
Due to the complexity of this comment, it is posted as an answer.

Thanks everyone for trying to help, but I just realised my mistake! The program I got the "known values" from shows the scaled values of the measured data, and the ±1/±2 deviation falls within the 0.001 precision of the displayed value after the conversion, so it does indeed use 2's complement.
Sorry for the lost time, everyone! (SO is awesome though)

Related

bit programming in C [duplicate]

This question already has answers here:
How do I split up a long value (32 bits) into four char variables (8bits) using C?
I am new to bit programming in C and find it difficult to understand how ipv4_to_bit_string() in the code below works.
Can anyone explain what happens when I pass the integer 1234 to this function? Why is the integer right-shifted by 24, 16, 8 and 4 places?
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct BIT_STRING_s {
    uint8_t *buf;     /* BIT STRING body */
    size_t size;      /* Size of the above buffer */
    int bits_unused;  /* Unused trailing bits in the last octet (0..7) */
} BIT_STRING_t;

BIT_STRING_t tnlAddress;

void ipv4_to_bit_string(int i, BIT_STRING_t *p)
{
    do {
        (p)->buf = calloc(4, sizeof(uint8_t));
        (p)->buf[0] = (i) >> 24 & 0xFF;
        (p)->buf[1] = (i) >> 16 & 0xFF;
        (p)->buf[2] = (i) >> 8 & 0xFF;
        (p)->buf[3] = (i) >> 4 & 0xFF;
        (p)->size = 4;
        (p)->bits_unused = 0;
    } while(0);
}

int main()
{
    BIT_STRING_t *p = (BIT_STRING_t*)calloc(1, sizeof(BIT_STRING_t));
    ipv4_to_bit_string(1234, p);
}
An IPv4 address is four eight-bit pieces that have been put together into one 32-bit piece. To take the 32-bit piece apart into the four eight-bit pieces, you extract each eight bits separately. To extract one eight-bit piece, you shift right by 0, 8, 16, or 24 bits, according to which piece you want at the moment, and then mask with 0xFF to take only the low eight bits after the shift.
The shift by 4 instead of 0 appears to be an error.
The use of an int for the 32-bit piece appears to be an error, primarily because the high bit may be set, which indicates the int value is negative, and then the right-shift is not fully defined by the C standard; it is implementation-defined. An unsigned type should be used. Additionally, int is not necessarily 32 bits; it is preferable to use uint32_t, which is defined in the <stdint.h> header.
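A hedged sketch of what a corrected function could look like (reusing BIT_STRING_t from the question; the _fixed name is made up here, and calloc's result is left unchecked just as in the original):

void ipv4_to_bit_string_fixed(uint32_t i, BIT_STRING_t *p)
{
    p->buf = calloc(4, sizeof(uint8_t));  /* error checking omitted, as in the original */
    p->buf[0] = (i >> 24) & 0xFF;         /* most significant byte */
    p->buf[1] = (i >> 16) & 0xFF;
    p->buf[2] = (i >> 8) & 0xFF;
    p->buf[3] = i & 0xFF;                 /* least significant byte */
    p->size = 4;
    p->bits_unused = 0;
}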

Bitwise conversion of int64 to IEEE double?

I'm trying to find or figure out the algorithm for converting a signed 64-bit int (two's complement, natch) to the closest-value IEEE double (64-bit), staying within bitwise operations. What I'm looking for is generic "C-like" pseudocode; I'm implementing a toy JVM on a platform that is not C and doesn't have a native int64 type, so I'm operating on 8-byte arrays (details of that are mercifully outside this scope) and that's the domain the data needs to stay in.
So: input is a big-endian string of 64 bits, signed two's complement. Output is a big-endian string of 64 bits in IEEE double format that represents as near the original int64 value as possible. In between is some set of masks, shifts, etc.! The algorithm absolutely does not need to be especially clever or optimized. I just want to be able to get to the result and ideally understand what the process is.
Having trouble tracking this down because I suspect it's an unusual need. This answer addresses a parallel question (I think) in x86 SSE, but I don't speak SSE and my attempts at translation leave me more confused than enlightened.
Would love someone to either point in the right direction for a recipe or ideally explain the bitwise math behind so I actually understand it. Thanks!
Here's a simple (and wrong in several ways) implementation, including a test harness.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

double do_convert(int64_t input)
{
    uint64_t sign = (input < 0);
    uint64_t magnitude;

    // breaks on INT64_MIN
    if (sign)
        magnitude = -input;
    else
        magnitude = input;

    // use your favourite algorithm here instead of the builtin
    int leading_zeros = __builtin_clzl(magnitude);
    uint64_t exponent = (63 - leading_zeros) + 1023;
    uint64_t significand = (magnitude << (leading_zeros + 1)) >> 12;
    uint64_t fake_double = sign << 63
                         | exponent << 52
                         | significand;

    double d;
    memcpy(&d, &fake_double, sizeof d);
    return d;
}

int main(int argc, char** argv)
{
    for (int i = 1; i < argc; i++)
    {
        long l = strtol(argv[i], NULL, 0);
        double d = do_convert(l);
        printf("%ld %f\n", l, d);
    }
    return 0;
}
The breakages here are many - the basic idea is to first extract the sign bit, then treat the number as positive the rest of the way, which won't work if the input is INT64_MIN. It also doesn't handle input 0 correctly because it doesn't correctly deal with the exponent in that case. These extensions are left as an exercise for the reader. ;-)
Anyway - the algorithm just figures out the exponent by calculating log2 of the input number and offsetting by 1023 (because floating point) and then getting the significand by shifting the number up far enough to drop off the most significant bit, then shifting back down into the right field position.
After all that, the assembly of the final double is pretty straightforward.
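For example, with input 5 (binary 101): leading_zeros is 61, so the exponent field is (63 - 61) + 1023 = 1025, and the significand is (5 << 62) >> 12 = 0x4000000000000. Assembling 0 << 63 | 1025 << 52 | 0x4000000000000 yields the bit pattern 0x4014000000000000, which is 1.25 * 2^2 = 5.0.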
Edit:
Speaking of exercises for the reader: I implemented this using __builtin_clzl(). You can expand that part as necessary.
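If the builtin is not available, a minimal portable sketch (a plain loop, not optimized, and undefined for a zero input just like __builtin_clzl) could stand in for that part; the helper name is made up here:

#include <stdint.h>

/* Hypothetical stand-in for __builtin_clzl: count the leading zero bits
   of a non-zero 64-bit value with a simple loop. */
static int count_leading_zeros_u64(uint64_t x)
{
    int n = 0;
    while ((x & 0x8000000000000000ULL) == 0) {
        n++;
        x <<= 1;
    }
    return n;
}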

converting 2 byte hex numbers back to a decimal

The code below produces the following output on a serial console:
[42][25][f][27][0][0]
My question is: if you just had the serial output, how would you figure out that the number was 9999? How does the maths work? I think it has something to do with little endian?
int a = 9999;
buf[0] = 'B';
buf[1] = '%';
buf[2] = a&0xff;
buf[3] = (a>>8)&0xff;
buf[4] = (a>>16)&0xff;
buf[5] = (a>>24)&0xff;
Endianness determines how numbers are stored in memory, not how arithmetic is performed on it. Since the C code you provided only uses integer arithmetic (i.e. does not deal with pointers and memory access), the resulting data will be the same whatever the endianness is.
To serialize your number, you extract every byte (&0xff) of your number by applying bit shifting (respectively 0, 8, 16 and 24 bits); e.g. 0xAABBCCDD >> 8 becomes 0xAABBCC, and the binary AND operation &0xff discards the upper bytes to keep the least significant one, in case of the example it is 0xCC.
To undo that operation, you take the bytes and OR them together, applying bit shifts in the opposite direction. To parse it, I would use the following code:
int a = buf[2] | (buf[3] << 8) | (buf[4] << 16) | (buf[5] << 24);
There is no need to cast any of the operands here as using bitwise operators in C implies integer promotion (ISO/IEC 9899§6.3.1.1), and your resulting variable type is int — that is, assuming buf is an array of an unsigned 8-bit integer type.
Note this assumes the emitter of the serialized data also has a 32-bit int length, and uses the same signed number representation (often two's complement).
The hexadecimal number system is a base-16 numeral system.
This means that it consists of 16 different symbols and has a weight of 16.
Instead of having only 10 symbols (0-9) for writing numbers as in the decimal (base 10) system, you have 16: 0-f,
where the symbols a=10, b=11, c=12, d=13, e=14 and f=15 in the decimal system.
The weight part of the system means that the first (rightmost) digit of a hex number has a weight of 1, which is also the case in the decimal system.
But for each additional digit, the weight is multiplied by 16.
So in hex:
f = 16^0*15 in decimal.
ff = (16^1*15)+15 in decimal.
fff = (16^2*15)+(16^1*15)+15 in decimal.
m...fff = (16^(n-1)*m)+...+(16^2*15)+(16^1*15)+15 in decimal, where m is the leading digit of an n-digit hex number.
This number system is widely used in electrical engineering and hardware-near software development, because it allows one to group bits in groups of four and thereby write large binary numbers more compactly.
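As a small illustration of those weights, a hand-rolled hex-to-decimal conversion just multiplies the running value by 16 for every new digit (a minimal sketch, assuming lowercase digits only and no error checking):

#include <stdio.h>

/* Apply the base-16 weights described above: each new digit
   multiplies the running total by 16 before being added. */
static unsigned long hex_to_dec(const char *s)
{
    unsigned long value = 0;
    for (; *s != '\0'; s++) {
        int digit = (*s >= 'a') ? (*s - 'a' + 10) : (*s - '0');
        value = value * 16 + digit;
    }
    return value;
}

int main(void)
{
    printf("%lu\n", hex_to_dec("fff"));  /* prints 4095 = 16^2*15 + 16^1*15 + 15 */
    return 0;
}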
Knowing how the data are embedded into the message, you can extract them by converting back from that format.
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t a = 9999;
    unsigned char buf[6];

    buf[0] = 'B';
    buf[1] = '%';
    buf[2] = a & 0xff;
    buf[3] = (a >> 8) & 0xff;
    buf[4] = (a >> 16) & 0xff;
    buf[5] = (a >> 24) & 0xff;

    /* reassemble the bytes, forcing unsigned arithmetic before shifting */
    uint32_t res = buf[2]
                 + ((uint32_t)buf[3] << 8)
                 + ((uint32_t)buf[4] << 16)
                 + ((uint32_t)buf[5] << 24);

    printf("Converted from chars: %u\n", res);
    return 0;
}
As the data were pushed into the buffer byte by byte, you can put them back at the correct positions in an int variable.
I think it has something to do with little endian?
Endianness doesn't matter here, because the shift operations work on values rather than on their in-memory representation; the compiler takes care of the host platform's architecture.

C - Method for setting all even-numbered bits to 1

I was charged with the task of writing a method that "returns the word with all even-numbered bits set to 1." Being completely new to C this seems really confusing and unclear. I don't understand how I can change the bits of a number with C. That seems like a very low level instruction, and I don't even know how I would do that in Java (my first language)! Can someone please help me! This is the method signature.
int evenBits(void){
    return 0;
}
Any instruction on how to do this or even guidance on how to begin doing this would be greatly appreciated. Thank you so much!
Break it down into two problems.
(1) Given a variable, how do I set particular bits?
Hint: use a bitwise operator.
(2) How do I find out the representation of "all even-numbered bits" so I can use a bitwise operator to set them?
Hint: Use math. ;-) You could make a table (or find one) such as:
Decimal | Binary
--------+-------
0 | 0
1 | 1
2 | 10
3 | 11
... | ...
Once you know what operation to use to set particular bits, and you know a decimal (or hexadecimal) integer literal to use that with in C, you've solved the problem.
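For example (a tiny sketch, assuming 0-based bit numbering), the bitwise OR operator sets individual bits without disturbing the others:

#include <stdio.h>

int main(void)
{
    unsigned x = 0;
    x |= 1u << 2;          /* set bit 2 */
    x |= 1u << 4;          /* set bit 4 */
    printf("0x%X\n", x);   /* prints 0x14, i.e. binary 10100 */
    return 0;
}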
You must give a precise definition of all even-numbered bits. Bits are numbered in different ways on different architectures. Hardware people like to number them from 1 to 32 from the least significant to the most significant bit, or sometimes the other way, from the most significant to the least significant bit... while software guys like to number bits by increasing order starting at 0, because bit 0 represents the number 2^0, i.e. 1.
With this latter numbering system, the bit pattern would be 0101...0101, thus a value in hex 0x555...555. If you number bits starting at 1 for the least significant bit, the pattern would be 1010...1010, in hex 0xAAA...AAA. But this representation actually encodes a negative value on current architectures.
I shall assume for the rest of this answer that even-numbered bits are those representing even powers of 2: 1 (2^0), 4 (2^2), 16 (2^4)...
The short answer for this problem is:
int evenBits(void) {
return 0x55555555;
}
But what if int has 64 bits?
int evenBits(void) {
return 0x5555555555555555;
}
This would handle a 64-bit int, but would have implementation-defined behavior on systems where int is smaller.
Using macros from <limits.h>, you could mask off the extra bits to handle 16, 32 and 64 bit ints:
#include <limits.h>
int evenBits(void) {
return 0x5555555555555555 & INT_MAX;
}
But this code still makes some assumptions:
int has at most 64 bits.
int has an even number of bits.
INT_MAX is a power of 2 minus 1.
These assumptions are valid for most current systems, but the C Standard allows for implementations where one or more are invalid.
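One way around those assumptions, as a sketch, is to build the pattern at run time from the actual width of int, using unsigned arithmetic and masking with INT_MAX at the end:

#include <limits.h>

/* Sketch: construct the ...0101 pattern for whatever width int has,
   then mask with INT_MAX so the result is a valid non-negative int. */
int evenBits(void)
{
    unsigned pattern = 0;
    for (unsigned bit = 0; bit < sizeof(int) * CHAR_BIT; bit += 2)
        pattern |= 1u << bit;
    return (int)(pattern & INT_MAX);
}

This keeps the function correct for 16-, 32- and 64-bit int without hard-coding a constant, at the cost of a small loop.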
So basically every other bit has to be set to one? This is why we have bitwise operations in C. Imagine a regular bit array. What you want is to take the rightmost even bit and set it to 1 (this is the number 2). Then we just use the OR operator (|) to modify our existing number. After doing that, we bit-shift the mask 2 places to the left (<< 2), which changes it from 0010 to 1000, and use the OR operator again. The code below describes it better.
#include <stdio.h>

unsigned char SetAllEvenBitsToOne(unsigned char x);
int IsAllEvenBitsOne(unsigned char x);

int main()
{
    unsigned char x = 0; //char is a one byte data type, i.e. 8 bits.
    x = SetAllEvenBitsToOne(x);

    int check = IsAllEvenBitsOne(x);
    if(check == 1)
    {
        printf("it works");
    }
    return 0;
}

unsigned char SetAllEvenBitsToOne(unsigned char x)
{
    int i = 0;
    unsigned char y = 2;

    for(i = 0; i < sizeof(char)*8/2; i++)
    {
        x = x | y;
        y = y << 2;
    }
    return x;
}

int IsAllEvenBitsOne(unsigned char x)
{
    unsigned char y;

    for(int i = 0; i < (sizeof(char)*8/2); i++)
    {
        y = x >> 7;
        if(y > 0)
        {
            printf("x before: %d\t", x);
            x = x << 2;
            printf("x after: %d\n", x);
            continue;
        }
        else
        {
            printf("Not all even bits are 1\n");
            return 0;
        }
    }
    printf("All even bits are 1\n");
    return 1;
}
Here is a link to Bitwise Operations in C

Convert FFFFFF to decimal value (C language)

I am trying to convert a string representing a 24-bit hexadecimal number (FFFFFF) to its decimal equivalent (-1). Could anyone help me understand why the following code does not return -1?
Thanks, LC
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void) {
    char temp_str[] = "FFFFFF";
    long value;

    value = strtol(temp_str, NULL, 16);
    printf("value is %ld\n", value);
}
It seems like your input is the 24-bit 2's complement representation of the number, but strtol does not handle negative numbers in this way (and even if it did, it has no way of knowing that you meant a 24-bit representation). It only determines the sign of its output based on the existence of a - sign.
You can modify your code to get the result you want by adding this after the strtol:
if (value > 0x7fffff)
value -= 0x1000000;
Of course, this will only work for a 24-bit representation, other sizes will need different constants.
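A hedged generalization of the same idea, assuming the raw value sits in the low bits of an unsigned variable (the helper name sign_extend is made up here, and it works for widths of 1 to 63 bits):

#include <stdint.h>

/* Sketch: interpret the low `bits` bits of x as a two's complement value. */
static int64_t sign_extend(uint64_t x, unsigned bits)
{
    uint64_t sign_bit = 1ull << (bits - 1);
    x &= (sign_bit << 1) - 1;                     /* keep only the low `bits` bits */
    return (int64_t)(x ^ sign_bit) - (int64_t)sign_bit;
}

With bits = 24, sign_extend(0xFFFFFF, 24) gives -1, matching the 24-bit case above.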
Hacker's Delight covers this under sign extension.
For your 24 bit number, the sign bit is the 24th bit from the right and if it was set the hex value would be 0x800000.
The book suggests these:
((x + 0x800000) & 0xFFFFFF) - 0x800000
or
((x & 0xFFFFFF) xor 0x800000) - 0x800000
From your question I would say that your number is never going to be more than 24 bits so I would use the second option in your code as follows:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void) {
    char temp_str[] = "FFFFFF";
    long value;

    value = strtol(temp_str, NULL, 16);
    value = (value ^ 0x800000) - 0x800000; // Notice that I'm not using & 0xFFFFFF, since I assumed that the number won't have more than 24 bits.
    printf("value is %ld\n", value);
}
Edit 1:
I fear that my original answer, though technically sound, did not answer the posed question.
Could anyone help me understand why the following code does not return -1?
Others have already covered this by the time I answered but I will restate it here anyway.
Your string is "FFFFFF"; it consists of 6 hex digits. Each hex digit represents 4 bits, therefore your string represents a 24-bit number.
Your variable value is of type long, which normally corresponds to your CPU's word width (32-bit or 64-bit). Since these days long can be either 32 bits or 64 bits depending on your architecture, you are not guaranteed to get -1 unless you give exactly the right number of hex digits.
If long on your machine is 32 bits then two things are true:
sizeof(long) will return 4
Using "FFFFFFFF" will return -1
If long on your machine is 64 bits then two things are true:
sizeof(long) will return 8
Using "FFFFFFFFFFFFFFFF" will return -1
Digression
This then lead me down a completely different path. We can generalize this and make a program that constructs a string for your machine, such that it will always return -1 from a string.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void) {
    const char* ff = "FF";
    char temp[sizeof(long) * 2 + 1]; // Ensure that the string can store enough hex digits to populate the entire width of long. Include 1 byte for '\0'
    int i;
    long value;

    /* Fill the temp array with FF */
    for (i = 0; i < sizeof(long); ++i)
    {
        strcpy(&temp[i * 2], ff);
    }

    value = strtol(temp, NULL, 16);
    printf("value of %s is %ld\n", temp, value);
}
This is a bad way to get a -1 result since the clear option is to just use
long value = -1;
but I will assume that this was simply an academic exercise.
Don't think like a computer now; just convert FFFFFF in base 16 to decimal using ordinary math. This is not about two's complement negative notation.
Because you run this program on a 32- or 64-bit machine, not a 24-bit one. 0xffffff is actually 0x00ffffff, which is 16777215 in decimal.
The hex representation of -1 is 0xffffffff (32-bit long) or 0xffffffffffffffff (64-bit long).
