Is there a simple way to split one 64-bit (unsigned long long) variable into eight int8_t values?
For example:
//1001000100011001100100010001100110010001000110011001000110011111
unsigned long long bigNumber = 10455547548911899039;
int8_t parts[8] = splitULongLong(bigNumber);
parts would be something along the lines of:
[0] 10011111
[1] 10010001
[2] 00011001
...
[7] 10010001
{
    uint64_t v = _64bitVariable;   /* the 64-bit value to split */
    uint8_t i = 0, parts[8] = {0}; /* zero-filled so unused high bytes stay 0 */
    do parts[i++] = v & 0xFF;      /* peel off the low byte...        */
    while (v >>= 8);               /* ...then shift the next one down */
}
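Wrapped up and applied to the question's value, a quick check (an untested sketch):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t v = UINT64_C(10455547548911899039);
    uint8_t i = 0, parts[8] = {0};
    do parts[i++] = v & 0xFF; while (v >>= 8);
    for (i = 0; i < 8; i++)
        printf("[%d] 0x%02X\n", (int)i, (unsigned)parts[i]); /* [0] 0x9F ... [7] 0x91 */
    return 0;
}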
First, you shouldn't play such games with signed values; this only complicates the issue. Also, you shouldn't use unsigned long long directly, but the appropriate fixed-width type uint64_t. That may well be unsigned long long, but not necessarily.
You may obtain any byte (assuming 8-bit bytes) of such an integer by shifting and masking the value:
#define byteOf(V, I) (((V) >> (I)*8) & UINT64_C(0xFF))
To initialize your array you would place calls to that macro inside an initializer.
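For example, something along these lines (a sketch; the initializer must be at block scope since bigNumber is not a compile-time constant):

#include <stdint.h>

#define byteOf(V, I) (((V) >> (I)*8) & UINT64_C(0xFF))

void split_demo(void) {
    uint64_t bigNumber = UINT64_C(10455547548911899039);
    uint8_t parts[8] = {
        byteOf(bigNumber, 0), byteOf(bigNumber, 1),
        byteOf(bigNumber, 2), byteOf(bigNumber, 3),
        byteOf(bigNumber, 4), byteOf(bigNumber, 5),
        byteOf(bigNumber, 6), byteOf(bigNumber, 7),
    };
    (void)parts; /* parts[0] == 0x9F ... parts[7] == 0x91 */
}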
BTW there is no standard "binary" format for integers as you seem to be assuming.
You could also do it arithmetically: take the 64-bit number modulo 2^8 (256) to get the low byte, save that, subtract it, and divide by 256; repeat to peel off each subsequent byte. You end up with eight 8-bit numbers that together replace the 64-bit number. When you need to operate on the value again, you first recombine the 8-bit numbers back into a 64-bit number the same way in reverse.
You should be able to use a Union to split the data up without any movement or processing.
This leaves you with the problem of the resulting table being in the wrong order (on a big-endian machine), which can be easily solved with a macro (if you have lots of hard-coded values) or a simple "7-x" subscript calculation.
#define rv(x) (7 - (x))
union SameSpace {
    unsigned long long _64bitVariable;
    int8_t _8bit[8];
} samespace;
samespace._64bitVariable = 0x911991199119919FULL; /* the bit pattern from the question */
if (samespace._8bit[rv(0)] == (int8_t)0x9F) { /* 10011111; this layout assumes big-endian */
    printf("\n correct");
}
Is it possible to divide, for example, an integer into n pieces of bits?
For example, since an int variable has a size of 32 bits (4 bytes), is it possible to divide the number into 4 "pieces" of 8 bits and put them in 4 other variables that each have a size of 8 bits?
I solved it using an unsigned char * pointer aimed at the variable whose bytes I want to analyze, something like this:
int x = 10;
unsigned char *p = (unsigned char *) &x;
//Since my cpu is little endian I'll print bytes from the end
for(int i = sizeof(int) - 1; i >= 0; i--)
    //print hexadecimal bytes
    printf("%.2x ", p[i]);
Yes, of course it is. But generally we just use bit operations directly on the bits (called bitops) using bitwise operators defined for all discrete integer types.
For instance, if you need to test the 5th least significant bit you can use x &= 1 << 4 so that x keeps just the 5th bit, with all others set to zero. Then you can use if (x) to test whether it was set; C doesn't use a boolean type but assumes that zero is false and any other value means true. If you store 1 << 4 into a constant then you have created a "(bit) mask" for that particular bit.
If you need a value of 0 or 1 then you can shift the other way and use x = (x >> 4) & 1. This is all covered in most C books, so I'd implore you to read about these bit operations there.
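For example, a small sketch of both idioms:

#include <stdio.h>

int main(void) {
    int x = 0x35;              /* 0b110101: the 5th least significant bit is set */
    int mask = 1 << 4;         /* mask for that bit */

    if (x & mask)              /* test the bit without modifying x */
        printf("bit 5 is set\n");

    int bit = (x >> 4) & 1;    /* extract the bit as 0 or 1 */
    printf("bit = %d\n", bit);
    return 0;
}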
There are many Q/As here on how to split integers into bytes, see e.g. here. In principle you can store those in a char, but if you require integer operations then you can also split the int into multiple values. One problem with that is that an int is only defined to store at least the values from -32768 to 32767. That means the number of bytes in an int can be 2 or more.
In principle it is also possible to use bit fields but I'd be hesitant to use those. With an int you will at least know that the bits will be stored in the least significant bits.
In my C class we were given an assignment:
Write an interactive program (standard input/output). Define the new type set using typedef which can hold a set of integers in the range 0-127. The data structure has to be as efficient as possible in terms of storage (hint: working with bits). Also you need to define 6 global variables A,B,C,D,E,F of type set. All operations on sets in the program will be on these 6 variables.
The command read_set A,5,6,7,4,5,4,-1 will read the user's input of integers, where -1 means end of input. Other commands a user can use: print_set A prints the set in increasing order; union_set A,B,C performs a union of two sets and saves the output in a third set; intersect_set A,B,C determines the intersection of two sets and saves the output in a third set.
As far as I understand I need to use bit-fields. I could create a table of integers from 0-127, then create the 6 variables A,B,C,D,E,F using the set type definition, giving 128 bit-fields to each variable. Then if a user inputs 15 I would turn on the bit which represents 15 in the data type. I'm really not sure this is the way, because it's not clear to me how I would arrange bit-fields so that I can turn on exactly the 15th bit when I need to; I would somehow need to convert an integer to a bit-field name... Also, print_set prints the set in increasing order, so how could I re-arrange bit-fields for this?
Really hope you have some ideas.
Yes. Each of the sets called A, B, C, D, E and F can be represented by a pair of unsigned long long integers like this:
typedef struct {
    unsigned long long high;
    unsigned long long low;
} Set;
See https://en.wikipedia.org/wiki/C_data_types
This gives you 128 bits of data in a Set (64 bits for the high numbers 64 to 127, and 64 bits for the low numbers 0 to 63).
Then you just need to do some bit manipulation like this: http://www.tutorialspoint.com/ansi_c/c_bits_manipulation.htm
For a number between 0 and 63, you'd shift 1 to the left x times and then set that bit on the "low" field.
For a number between 64 and 127, you'd shift 1 to the left x-64 times and then set that bit on the "high" field.
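In code, that set-bit operation might look like this (a sketch; the function name is assumed):

void set_insert(Set *s, int x) {
    if (x < 64)
        s->low  |= 1ULL << x;         /* numbers 0..63 go in the low word    */
    else
        s->high |= 1ULL << (x - 64);  /* numbers 64..127 go in the high word */
}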
Hope this helps!
Using bit-fields for this assignment will prove very cumbersome because of alignment issues, and you cannot define arrays of bit-fields anyway. I would suggest using an array of bytes (unsigned char) and packing values into this array; a 7-bit value spans at most 2 bytes.
The array for count values should be allocated with a size of (count * 7 + CHAR_BIT - 1) / CHAR_BIT bytes. In order to conserve space, you can store small sets in an integer and larger sets in an allocated array.
The datatype would look like:
#include <limits.h>
#include <stdint.h>
#include <stdlib.h>
typedef struct set {
    size_t count;
    union {
        uintptr_t v;       /* small sets: values packed into one integer */
        unsigned char *a;  /* larger sets: allocated byte array */
    };
} set;
Here is how to extract the n-th value:
int get_7bits(const set *s, size_t n) {
    if (s == NULL || n >= s->count) {
        return -1;
    } else
    if (s->count <= sizeof(uintptr_t) * CHAR_BIT / 7) {
        /* small set: all values are packed into the integer member v */
        return (s->v >> (n * 7)) & 127;
    } else {
        size_t bit = n * 7;           /* bit offset of the n-th value */
        size_t i = bit / CHAR_BIT;    /* byte holding its low bits */
        int shift = bit % CHAR_BIT;
        if (shift <= CHAR_BIT - 7) {
            /* value fits in one byte */
            return (s->a[i] >> shift) & 127;
        } else {
            /* value spans 2 bytes */
            return ((s->a[i] | (s->a[i + 1] << CHAR_BIT)) >> shift) & 127;
        }
    }
}
You can write the other access functions and complete your assignment.
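For instance, the matching setter might look like this (a sketch under the same assumptions; the large-set branch stores bit by bit to keep it simple and avoid reading past the end of the array):

int put_7bits(set *s, size_t n, unsigned val) {
    if (s == NULL || n >= s->count || val > 127)
        return -1;
    if (s->count <= sizeof(uintptr_t) * CHAR_BIT / 7) {
        /* clear the old 7 bits, then merge in the new value */
        s->v = (s->v & ~((uintptr_t)127 << (n * 7)))
             | ((uintptr_t)val << (n * 7));
    } else {
        size_t bit = n * 7;
        for (int k = 0; k < 7; k++, bit++) {
            unsigned char mask = 1 << (bit % CHAR_BIT);
            if ((val >> k) & 1)
                s->a[bit / CHAR_BIT] |= mask;
            else
                s->a[bit / CHAR_BIT] &= ~mask;
        }
    }
    return 0;
}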
I'm looking to convert an input variable of this type:
char var[] = "9876543210000";
to its hex-equivalent string:
char hex[] = "08FB8FD98210";
I can perform this with e.g. the following code:
long long int freq;
char hex[17]; /* room for up to 16 hex digits plus the terminating NUL */
freq = strtoll(var,NULL,10);
sprintf(hex, "%llX",freq);
However, I'm doing this on an AVR microcontroller, so strtoll is not available, only strtol (avr-libgcc). I'm therefore restricted to 32-bit integers, which is not enough. Any ideas?
Best regards
Simon
Yes... this method works only with positive numbers, so if you have a minus sign, just save it before doing the rest. Divide the string number into two halves, say six digits for the right half, to get two decimal numbers uint32_t left_part; and uint32_t right_part;. You can then construct your 64-bit number as follows:
uint64_t number = (uint64_t) left_part * 1000000 + right_part;
If you have the same problem on the printing side, that is, you cannot print more than 32-bit hex numbers, you can just take left_part = number >> 32; and right_part = number & 0xffffffff; and finally print both with:
if (left_part) printf("%x%08x", left_part, right_part);
else printf("%x", right_part);
The test keeps the result from being padded to 8 digits when the value is less than 0x100000000.
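Putting the whole thing together (an untested sketch; it assumes the input is positive and at least 7 digits long, and fixes the right half at six digits so the multiplier is 1000000):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char var[] = "9876543210000";
    size_t split = strlen(var) - 6;           /* right half: the last 6 digits */

    uint32_t right_part = strtol(var + split, NULL, 10);
    var[split] = '\0';                        /* terminate the left half */
    uint32_t left_part = strtol(var, NULL, 10);

    uint64_t number = (uint64_t)left_part * 1000000 + right_part;

    uint32_t hi = number >> 32, lo = number & 0xffffffff;
    if (hi) printf("%lX%08lX\n", (unsigned long)hi, (unsigned long)lo);
    else    printf("%lX\n", (unsigned long)lo);  /* prints 8FB8FD98210 */
    return 0;
}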
It looks like you might have to parse the input one digit at a time, and save the result into a uint64_t variable.
Initialize the uint64_t result variable to 0. In a loop, multiply the result by 10 and add the next digit converted to an int.
Now to print the number out in hex, you can use sprintf() twice. First print result >> 32 as a long unsigned int, followed by (long unsigned int)result at &hex[4] (or 6 or 8 or wherever) to pick up the remaining 32 bits.
You will need to specify the format correctly to get the characters in the array in the correct places. Perhaps, just pad it with 0s? Don't forget about room for the trailing null character.
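In code, that could look like this (a sketch; it folds the two prints into one sprintf and assumes the value needs more than 32 bits, so the high half is non-zero and no extra padding logic is needed):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    const char *var = "9876543210000";
    uint64_t result = 0;

    for (const char *p = var; *p; p++)
        result = result * 10 + (*p - '0');   /* multiply by 10, add next digit */

    char hex[17];                            /* up to 16 hex digits plus NUL */
    sprintf(hex, "%lX%08lX",
            (unsigned long)(result >> 32),
            (unsigned long)(result & 0xFFFFFFFF));
    printf("%s\n", hex);                     /* prints 8FB8FD98210 */
    return 0;
}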
Change this:
freq = strtoll(var,NULL,10);
To this:
sscanf(var,"%lld",&freq);
I need to store a large number, but due to limitations in an old game engine, I am restricted to working with signed short (I can, however, use as many of these as I want).
I need to split an unsigned long (0 to 4,294,967,295) into multiple signed short (-32,768 to 32,767). Then I need to recombine the multiple signed short into a new unsigned long later.
For example, take the number 4,000,000,000. This should be split into multiple signed short and then recombined into unsigned long.
Is this possible in C? Thanks.
In addition to dbush's answer you can also use a union, e.g.:
union
{
    unsigned long long longvalue;
    signed short shortvalues[2];
} value;
The array of two shorts overlays the single long value.
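For example (a sketch; which array element holds which half of the value depends on the machine's endianness):

value.longvalue = 4000000000UL;             /* store the large value        */

signed short a = value.shortvalues[0];      /* hand these two around...     */
signed short b = value.shortvalues[1];

value.shortvalues[0] = a;                   /* ...and put them back later   */
value.shortvalues[1] = b;
unsigned long restored = value.longvalue;   /* 4000000000 again             */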
I assume your problem is finding a place to store these large values. There are options we haven't yet explored which don't involve splitting the values up and recombining them:
Write them to a file, and read them back later. This might seem silly at first, but considering the bigger picture, if the values end up in a file later on then this might seem like the most attractive option.
Declare your unsigned long with static storage duration, e.g. outside of any blocks of code, a.k.a. globally (I hate that term), or using the static keyword inside a block of code.
None of the other answers so far are strictly portable, not that it seems like it should matter to you. You seem to be describing a two's complement 16-bit signed short representation and a 32-bit unsigned long representation (you should put assertions in place to ensure this is the case), which has implications that restrict the options for the implementation (that is, the C compiler, the OS, the CPU, etc.)... so the portability issues associated with them are unlikely to occur. In case you're curious, however, I'll discuss those issues anyway.
The associated portability issues are that one type or the other might have padding bits, causing the sizes to mismatch, and that there might be trap representations for short.
Changing the type but not the representation is by far the cleaner and easier approach to get right, though not portable; this includes the union hack, and you could also avoid the union by casting an unsigned long * to a short *. These solutions are the cleanest, which makes Ken Clement's answer my favourite so far, despite the non-portability.
Binary shift (the >> and << operators), bitwise and (the & operator), and bitwise or (the | operator) introduce additional portability issues when you use them on signed types; they're also bulky and clumsy, leading to more code to debug and a higher chance that mistakes are made.
You need to consider that while ULONG_MAX is guaranteed to be at least 4,294,967,295, SHRT_MIN is not guaranteed by the C standard to be -32,768; it might be -32,767 (which is quite uncommon indeed, though still possible)... There might be a negative zero or trap representation in place of that -32,768 value.
This means you can't portably rely upon a pair of signed shorts being able to represent all of the values of an unsigned long; even when the sizes match up you need another bit to account for the two missing values.
With this in mind, you could use a third signed short and store only 12 bits in each; the implementation-defined and undefined behaviours of the shift approaches can be avoided that way:
unsigned long value = 4000000000UL;      /* the value to split */

signed short x = (value      ) & 0xFFF,  /* low 12 bits                 */
             y = (value >> 12) & 0xFFF,  /* middle 12 bits              */
             z = (value >> 24) & 0xFFF;  /* top 8 bits (room for 12)    */

value = (unsigned long) x
      + ((unsigned long) y << 12)
      + ((unsigned long) z << 24);
You can do it like this (I used fixed size types to properly illustrate how it works):
#include <stdio.h>
#include <stdint.h>

int main()
{
    uint32_t val1;
    int16_t val2a, val2b;
    uint32_t val3;

    val1 = 0x11223344;
    printf("val1=%08x\n", val1);

    // to short
    val2a = val1 >> 16;
    val2b = val1 & 0xFFFF;
    printf("val2a=%04x\n", val2a);
    printf("val2b=%04x\n", val2b);

    // to long
    val3 = (uint32_t)val2a << 16;
    val3 |= (uint32_t)val2b;
    printf("val3=%08x\n", val3);

    return 0;
}
Output:
val1=11223344
val2a=1122
val2b=3344
val3=11223344
There are any number of ways to do it. One thing to consider is that unsigned long may not have the same size on different hardware/operating systems. You can use the exact-width types found in stdint.h to avoid ambiguity (e.g. uint8_t, uint16_t, etc.). One implementation incorporating exact-width types (and cheesy hex values) would be:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <limits.h>

int main (void) {

    uint64_t a = 0xfacedeadbeefcafe, b = 0;
    uint16_t s[4] = {0};
    uint32_t i = 0, n = 0;

    printf ("\n a : 0x%16"PRIx64"\n\n", a);

    /* separate uint64_t into 4 uint16_t */
    for (i = 0; i < sizeof a; i += 2, n++)
        printf (" s[%"PRIu32"] : 0x%04"PRIx16"\n", n,
                (s[n] = (a >> (i * CHAR_BIT))));

    /* combine 4 uint16_t into uint64_t */
    for (n = i = 0; i < sizeof b; i += 2, n++)
        b |= (uint64_t)s[n] << i * CHAR_BIT;

    printf ("\n b : 0x%16"PRIx64"\n\n", b);

    return 0;
}
Output
$ ./bin/uint64_16
a : 0xfacedeadbeefcafe
s[0] : 0xcafe
s[1] : 0xbeef
s[2] : 0xdead
s[3] : 0xface
b : 0xfacedeadbeefcafe
This is one possible solution (which assumes ulong is 32-bits, and sshort is 16-bits):
unsigned long L1, L2;
signed short S1, S2;
L1 = 0x12345678; /* Initial ulong to store away into two sshort */
S1 = L1 & 0xFFFF; /* Store component 1 */
S2 = L1 >> 16; /* Store component 2*/
L2 = ((unsigned long)(unsigned short)S2 << 16)
   | (unsigned short)S1; /* Retrieve ulong from two sshort; the casts avoid sign extension */
/* Print results */
printf("Initial value: 0x%08lx\n",L1);
printf("Stored component 1: 0x%04hx\n",S1);
printf("Stored component 2: 0x%04hx\n",S2);
printf("Retrieved value: 0x%08lx\n",L2);
I created a structure to represent a fixed-point positive number. I want the parts on both sides of the decimal point to occupy 2 bytes each.
typedef struct Fixed_t {
    unsigned short floor;    //left side of the decimal point
    unsigned short fraction; //right side of the decimal point
} Fixed;
Now I want to add two fixed point numbers, Fixed x and Fixed y. To do so I treat them like integers and add.
(Fixed) ( (int)x + (int)y );
But as my Visual Studio 2010 compiler says, I cannot convert between Fixed and int.
What's the right way to do this?
EDIT: I'm not committed to the {short floor, short fraction} implementation of Fixed.
You could attempt a nasty hack, but there's a problem here with endian-ness. Whatever you do to convert, how is the compiler supposed to know that you want floor to be the most significant part of the result, and fraction the less significant part? Any solution that relies on re-interpreting memory is going to work for one endian-ness but not another.
You should either:
(1) define the conversion explicitly. Assuming short is 16 bits:
unsigned int val = (x.floor << 16) + x.fraction;
(2) change Fixed so that it has an int member instead of two shorts, and then decompose when required, rather than composing when required.
If you want addition to be fast, then (2) is the thing to do. If you have a 64 bit type, then you can also do multiplication without decomposing: unsigned int result = (((uint64_t)x) * y) >> 16.
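A sketch of option (2), keeping the value in a single 32-bit integer and decomposing only when required (the type and function names here are assumed):

#include <stdint.h>

typedef uint32_t Fixed;   /* 16.16 fixed point: floor in the high 16 bits */

static Fixed fixed_add(Fixed a, Fixed b) { return a + b; }  /* plain integer add */

static Fixed fixed_mul(Fixed a, Fixed b) {
    return (Fixed)(((uint64_t)a * b) >> 16);  /* widen so the high bits survive */
}

static uint16_t fixed_floor(Fixed a)    { return a >> 16; }     /* decompose... */
static uint16_t fixed_fraction(Fixed a) { return a & 0xFFFF; }  /* ...on demand */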
The nasty hack, by the way, would be this:
unsigned int val;
assert(sizeof(Fixed) == sizeof(unsigned int));              // could be a static test
assert(2 * sizeof(unsigned short) == sizeof(unsigned int)); // could be a static test
memcpy(&val, &x, sizeof(unsigned int));
That would work on a big-endian system, where Fixed has no padding (and the integer types have no padding bits). On a little-endian system you'd need the members of Fixed to be in the other order, which is why it's nasty. Sometimes casting through memcpy is the right thing to do (in which case it's a "trick" rather than a "nasty hack"). This just isn't one of those times.
If you have to you can use a union but beware of endian issues. You might find the arithmetic doesn't work and certainly is not portable.
typedef struct Fixed_t {
    union {
        struct { unsigned short floor; unsigned short fraction; };
        unsigned int whole;
    };
} Fixed;
which is more likely (I think) to work big-endian (which Windows/Intel isn't).
Some magic:
typedef union Fixed {
    uint16_t w[2];
    uint32_t d;
} Fixed;
#define Floor w[((Fixed){1}).d==1]
#define Fraction w[((Fixed){1}).d!=1]
Key points:
I use fixed-size integer types so you're not depending on short being 16-bit and int being 32-bit.
The macros for Floor and Fraction (capitalized to avoid clashing with floor() function) access the two parts in an endian-independent way, as foo.Floor and foo.Fraction.
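For example (a brief usage sketch):

Fixed foo;
foo.Floor = 3;          /* integer part */
foo.Fraction = 0x8000;  /* one half, reading this as 16.16 fixed point */
/* foo.d now holds the complete 32-bit value on either endianness */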
Edit: At OP's request, an explanation of the macros:
Unions are a way of declaring an object consisting of several different overlapping types. Here we have uint16_t w[2]; overlapping uint32_t d;, making it possible to access the value as 2 16-bit units or 1 32-bit unit.
(Fixed){1} is a compound literal, and could be written more verbosely as (Fixed){{1,0}}. Its first element (uint16_t w[2];) gets initialized with {1,0}. The expression ((Fixed){1}).d then evaluates to the 32-bit integer whose first 16-bit half is 1 and whose second 16-bit half is 0. On a little-endian system, this value is 1, so ((Fixed){1}).d==1 evaluates to 1 (true) and ((Fixed){1}).d!=1 evaluates to 0 (false). On a big-endian system, it'll be the other way around.
Thus, on a little-endian system, Floor is w[1] and Fraction is w[0]. On a big-endian system, Floor is w[0] and Fraction is w[1]. Either way, you end up storing/accessing the correct half of the 32-bit value for the endian-ness of your platform.
In theory, a hypothetical system could use a completely different representation for 16-bit and 32-bit values (for instance interleaving the bits of the two halves), breaking these macros. In practice, that's not going to happen. :-)
This is not possible portably, as the compiler does not guarantee a Fixed will use the same amount of space as an int. The right way is to define a function Fixed add(Fixed a, Fixed b).
Just add the pieces separately. You need to know the value of the fraction that means "1" - here I'm calling that FRAC_MAX:
// c = a + b
void fixed_add( Fixed* a, Fixed* b, Fixed* c){
    unsigned short carry = 0;
    if((int)(a->fraction) + (int)(b->fraction) >= FRAC_MAX){
        carry = 1;
        c->fraction = a->fraction + b->fraction - FRAC_MAX;
    } else {
        c->fraction = a->fraction + b->fraction;
    }
    c->floor = a->floor + b->floor + carry;
}
Alternatively, if you're just setting the fixed point as being at the 2 byte boundary you can do something like:
void fixed_add( Fixed* a, Fixed *b, Fixed *c){
    /* parenthesize the shifts: << binds less tightly than + */
    unsigned int ia = ((unsigned int)a->floor << 16) + a->fraction;
    unsigned int ib = ((unsigned int)b->floor << 16) + b->fraction;
    unsigned int ic = ia + ib;
    c->floor = ic >> 16;
    c->fraction = ic & 0xFFFF;
}
Try this:
typedef union {
    struct Fixed_t {
        unsigned short floor;    //left side of the decimal point
        unsigned short fraction; //right side of the decimal point
    } Fixed;
    int Fixed_int;
} FixedUnion; /* the original snippet omitted the typedef name; "FixedUnion" is assumed */
If your compiler packs the two shorts into 4 bytes, then you can use memcpy to copy your int into your struct, but as said in another answer, this is not portable... and quite ugly.
Do you really mind adding each field separately in a dedicated function?
Do you want to keep the integer for performance reason?
// add two Fixed
Fixed operator+( Fixed a, Fixed b )
{
...
}
//add Fixed and int
Fixed operator+( Fixed a, int b )
{
...
}
You may cast any addressable type to another one by using:
*(newtype *)&var
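For example, applied to the question's types (a sketch with the same endianness and strict-aliasing caveats raised in the other answers):

Fixed f = { 3, 0x8000 };   /* floor = 3, fraction = 0x8000 */
int as_int = *(int *)&f;   /* reinterpret the struct's bytes as an int */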