I have values like 12 and 13 which I want to assign to a single integer, for example k.
I tried the following program, but I am not getting the expected results.
#include <stdio.h>

int main()
{
    int k = 0;
    printf("k address is %u\n", (unsigned int)&k);
    char *a = (char *)&k;   /* point at the first byte of k */
    printf("%u\n", (unsigned int)a);
    *a = 12;                /* write 12 into the first byte */
    a++;
    printf("%u\n", (unsigned int)a);
    *a = 13;                /* write 13 into the second byte */
    printf("k is %d\n", k);
    return 0;
}
and the output is:
k address is 3213474664
3213474664
3213474665
k is 3340
On your system, ints are evidently stored in little-endian format, because 13*256 + 12 = 3340.
If you want to modify bytes in an integer in an endian-independent way, you should use shifts and bitwise operators.
For example, if you were trying to store an IP address of 1.2.3.4 into a 32-bit integer, you could do:
unsigned int addr = (1 << 24) | (2 << 16) | (3 << 8) | 4;
This guarantees that 1 is the most significant byte and so forth.
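Applying the same idea to the original question, here is a minimal sketch (my own, not part of the answer) that packs 12 and 13 into one int without depending on byte order:

#include <stdio.h>

int main(void)
{
    /* 12 goes into the least significant byte, 13 into the next one;
       this reproduces the 3340 seen above on any endianness */
    int k = (13 << 8) | 12;

    printf("k is %d\n", k);  /* prints: k is 3340 */
    return 0;
}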
Maybe one of you can help me; I don't know what to do anymore. I have the following test code:
#include <stdio.h>

int main() {
    unsigned int block = 0;
    unsigned int alp = 0;
    char *input = "test";
    unsigned int *pt = NULL;

    pt = (unsigned int *)input;
    alp |= ((*pt) >> 8);
    printf("pointer value:\t %d \n", alp);

    for (int a = 0; a < 3; a++) {
        block |= (unsigned char)input[a];
        if (a != 2) {
            block <<= 8;
        }
    }
    printf("block value:\t %d \n", block);
    return 0;
}
I would expect both values to be exactly the same, since both look at exactly 3 bytes, yet the values differ. Does anyone have an idea why this is the case, or can explain it to me?
pointer value: 7631717
block value: 7628147
Compiled with "gcc test.c -Wall -o test" (gcc (Ubuntu 12.2.0-3ubuntu1) 12.2.0)
Many thanks
The value of block is
(input[0] << 16) | (input[1] << 8) | (input[2]);
If you're on a little-endian system (which most people are), then the value of alp is
(input[3] << 16) | (input[2] << 8) | (input[1]);
There's nothing fishy going on; the different results are expected on a little-endian system. Your CPU reads the first byte as the least significant one and the last byte as the most significant one, but your for loop reads the first byte as the most significant one and the last byte as the least significant one.
More info on endianness
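If you want the two computations to agree, one hedged option (my own sketch, not from the answer) is to assemble the value byte by byte, which is endian-independent:

#include <stdio.h>

int main(void) {
    const char *input = "test";
    unsigned int value = 0;

    /* assemble the first 3 bytes with input[0] as the most
       significant byte, regardless of host endianness */
    for (int i = 0; i < 3; i++)
        value = (value << 8) | (unsigned char)input[i];

    printf("value: %u\n", value);  /* 7628147, same as block */
    return 0;
}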
In C programming, how do I combine (note: not add) two integers into one big integer? So if I have
int a = 8
int b = 6
in binary it would be
int a = 1000
int b = 0110
so combined it would be = 01101000
You would use a combination of the << shift operator and the bitwise | operator. If you are trying to build an 8-bit value from two 4-bit inputs, then:
int a = 8;
int b = 6;
int result = (b << 4) | a;
If you are trying to build a 32-bit value from two 16-bit inputs, then you would write
result = (b << 16) | a;
Example:
#include <stdio.h>

int main( void )
{
    int a = 8;
    int b = 6;

    printf( "a = %08x, b = %08x\n", (unsigned int) a, (unsigned int) b );

    int result = (b << 4) | a;
    printf( "result = %08x\n", (unsigned int) result );

    result = (b << 8) | a;
    printf( "result = %08x\n", (unsigned int) result );

    result = (b << 16) | a;
    printf( "result = %08x\n", (unsigned int) result );

    return 0;
}
$ ./bits
a = 00000008, b = 00000006
result = 00000068
result = 00000608
result = 00060008
You can do it as follows, using a binary mask (& 0x0F) and the bit-shift operator (<<):
int a = 0x08;
int b = 0x06;
int c = (a & 0x0F) + ((b & 0x0F) << 4);
I hope that helps.
Update 1:
As mentioned in the comments, addition + and binary OR | are both fine here.
What is important to highlight in this answer is the mask & 0x0F; I strongly recommend using this kind of mechanism to avoid any overflow into neighbouring bits.
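As a hedged illustration (my own addition, reusing the packed value c from above), the same masks recover the original values:

#include <stdio.h>

int main(void) {
    int c = 0x68;              /* the packed value from above */
    int a2 = c & 0x0F;         /* lower nibble: 8 */
    int b2 = (c >> 4) & 0x0F;  /* upper nibble: 6 */

    printf("a2 = %d, b2 = %d\n", a2, b2);
    return 0;
}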
You could use the OR operator:
int a = 8;
int b = 6;
int c = (a << 8) | b;
You can use the bit-shift operator << to move the bits into the correct position:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main()
{
    uint8_t a = 8;
    uint8_t b = 6;
    uint16_t c = (b << 4) | a;

    printf( "The result is: 0x%" PRIX16 "\n", c );
}
This program will print the following:
The result is: 0x68
Note that this program uses fixed-width integer types, which are recommended in this situation, as you cannot rely on an int or unsigned int having a certain width.
However, there is no need for the result to be 16 bits if you are only shifting one value by 4 bits, as you are doing in your example. In that case, an integer type with a width of 8 bits would have been sufficient. I am only using 16 bits for the result because you explicitly asked for it.
The macro PRIX16 will probably expand to "hX" or "X" on most platforms. But it is still recommended to use this macro when using fixed-width integer types, as you cannot rely on %hX or %X being the correct format specifier for uint16_t on all platforms.
I have a byte array represented as
char * bytes = getbytes(object); //some api function
I want to check whether the bit at some position x is set.
I've been trying this
int mask = 1 << x % 8;
y = bytes[x >> 3] & mask;
However, y comes back as all zeros. What am I doing incorrectly, and is there an easier way to check whether a bit is set?
EDIT:
I did run this as well. It didn't return with the expected result either.
int k = x >> 3;
int mask = x % 8;
unsigned char byte = bytes[k];
return (byte & mask);
It failed an assert-true ctest I ran; byte and mask at this time were "0002" and 2 respectively when printed from gdb.
Edit 2: This is how I set the bits in the first place. I'm just trying to write a test to verify that they are set.
unsigned long x = somehash(a);  /* somehash takes a void * */
unsigned int mask = 1 << (x % 8);
unsigned int location = x >> 3;
char* filter = getData(ref);
filter[location] |= mask;
This would be one (crude, perhaps) way, off the top of my head:
#include "stdio.h"
#include "stdlib.h"
// this function *changes* the byte array
int getBit(char *b, int bit)
{
int bitToCheck = bit % 8;
b = b + (bitToCheck ? (bit / 8) : (bit / 8 - 1));
if (bitToCheck)
*b = (*b) >> (8 - bitToCheck);
return (*b) & 1;
}
int main(void)
{
char *bytes = calloc(2, 1);
*(bytes + 1)= 5; // writing to the appropiate bits
printf("%d\n", getBit(bytes, 16)); // checking the 16th bit from the left
return 0;
}
Assumptions:
A byte is represented as:
----------------------------------------
| 2^7 | 2^6 | 2^5 | 2^4 | 2^3 |... |
----------------------------------------
The leftmost bit is considered bit number 1 and the rightmost bit is considered the highest-numbered bit (the 16th bit in a 2-byte object).
It's OK to overwrite the actual byte object (if this is not wanted, use memcpy).
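A non-destructive alternative (a sketch of my own, using the same bit numbering as the asker's setter, i.e. bit x lives in byte x >> 3 at position x % 8):

/* returns 1 if bit x is set, 0 otherwise; does not modify the array */
int testBit(const char *bytes, unsigned long x)
{
    unsigned char byte = (unsigned char)bytes[x >> 3];
    return (byte >> (x % 8)) & 1;
}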
I have a 32-bit int and I want to set the first 10 bits to a specific number.
I.e., the 32-bit int is:
11101010101010110101100100010010
I want the first 10 bits to be the number 123, which is
0001111011
So the result would be
00011110111010110101100100010010
Does anyone know the easiest way to do this? I know it involves bit-shifting, but I'm not good at it, so I'm not sure how.
Thank you!
uint32_t result = (input & 0x3fffff) | (newval << 22);
0x3fffff masks out the highest 10 bits (it has the lowest 22 bits set). You have to shift your new value for the highest 10 bits by 22 places.
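A quick check with the numbers from the question (a hedged sketch of my own):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint32_t input = 0xEAAB5912;  /* 11101010101010110101100100010010 */
    uint32_t newval = 123;
    uint32_t result = (input & 0x3fffff) | (newval << 22);

    /* prints 1EEB5912, i.e. 00011110111010110101100100010010 */
    printf("%08" PRIX32 "\n", result);
    return 0;
}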
Convert inputs to unsigned 32-bit integers
uint32_t num = strtoul("11101010101010110101100100010010", 0, 2);
uint32_t firstbits = 123;
Keep only the lower 32-10 bits by masking. Create the mask by shifting an unsigned long 1 left by 22 places, making 100_0000_0000_0000_0000_0000, then decrementing it to 11_1111_1111_1111_1111_1111.
uint32_t mask = (1UL << (32-10)) - 1;
num &= mask;
OR in firstbits shifted left by 32-10:
num |= firstbits << (32-10);
Or, in one line:
(num & ((1UL << (32-10)) - 1)) | (firstbits * 1UL << (32-10))
A detail about firstbits * 1UL: the type of firstbits is not defined by the OP and may be only a 16-bit int. To ensure the code can shift and form an answer that exceeds 16 bits (the minimum width of int), multiply by 1UL to ensure the value is unsigned and has at least 32-bit width.
You can "erase" bits (set them to 0) by using a bit wise and ('&'); bits that are 0 in either value will be 0 in the result.
You can set bits to 1 by using a bit wise or ('|'); bits that are 1 in either value will be 1 in the result.
So: and your number with a value where the first 10 bits are 0 and the rest are 1; then 'or' it with the first 10 bits you want put in, and 0 for the other bits. If you need to calculate that value, then a left-shift would be the way to go.
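A minimal sketch of those two steps (my own; it assumes, as in the question, that the "first 10 bits" are the most significant ones):

#include <stdint.h>

/* replace the top 10 bits of n with val (val must fit in 10 bits) */
uint32_t set_top10(uint32_t n, uint32_t val)
{
    uint32_t keep_mask = (1u << 22) - 1;  /* lower 22 bits set, top 10 clear */
    return (n & keep_mask) | (val << 22);
}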
You can also take a mask-and-replace approach, where you zero the lower bits required to hold 123 and then simply | (OR) the value with 123 to gain the final result. You can accomplish the exact same thing with shifts, as shown by several other answers, or you can accomplish it with masks:
#include <stdio.h>

#ifndef BITS_PER_LONG
#define BITS_PER_LONG 64
#endif
#ifndef CHAR_BIT
#define CHAR_BIT 8
#endif

char *binpad2 (unsigned long n, size_t sz);

int main (void) {

    unsigned x = 0b11101010101010110101100100010010;
    unsigned mask = 0xffffff00; /* mask to zero lower 8 bits */
    unsigned y = 123;           /* value to replace zero bits */
    unsigned masked = x & mask; /* zero the lower bits */

    /* show intermediate results */
    printf ("\n x : %s\n", binpad2 (x, sizeof x * CHAR_BIT));
    printf ("\n & mask : %s\n", binpad2 (mask, sizeof mask * CHAR_BIT));
    printf ("\n masked : %s\n", binpad2 (masked, sizeof masked * CHAR_BIT));
    printf ("\n | 123 : %s\n", binpad2 (y, sizeof y * CHAR_BIT));

    masked |= y; /* apply the final OR with 123 */
    printf ("\n final : %s\n", binpad2 (masked, sizeof masked * CHAR_BIT));

    return 0;
}

/** returns pointer to binary representation of 'n' zero padded to 'sz'.
 *  returns pointer to string containing binary representation of
 *  unsigned 64-bit (or less) value zero padded to 'sz' digits.
 */
char *binpad2 (unsigned long n, size_t sz)
{
    static char s[BITS_PER_LONG + 1] = {0};
    char *p = s + BITS_PER_LONG;
    register size_t i;

    for (i = 0; i < sz; i++)
        *--p = (n >> i & 1) ? '1' : '0';

    return p;
}
Output
$ ./bin/bitsset
x : 11101010101010110101100100010010
& mask : 11111111111111111111111100000000
masked : 11101010101010110101100100000000
| 123 : 00000000000000000000000001111011
final : 11101010101010110101100101111011
How about using bit fields in C combined with a union? The following structure lets you set the whole 32-bit value, the top 10 bits, or the bottom 22 bits. It isn't as versatile as a generic function, but you can't easily make a mistake when using it. Be aware that this and most solutions may not work on all integer sizes, and look out for endianness as well, since the ordering of bit fields within a unit is implementation-defined.
union uu {
    struct {
        uint32_t bottom22 : 22;
        uint32_t top10    : 10;
    } bits;
    uint32_t value;
};
Here is an example usage:
int main(void) {
    union uu myuu;

    myuu.value = 999999999;
    printf("value = 0x%08x\n", myuu.value);

    myuu.bits.top10 = 0;
    printf("value = 0x%08x\n", myuu.value);

    myuu.bits.top10 = 0xfff;
    printf("value = 0x%08x\n", myuu.value);

    return 0;
}
The output is:
value = 0x3b9ac9ff
value = 0x001ac9ff
value = 0xffdac9ff
I got a problem that says: form a character array based on an unsigned int. The array will represent that int in hexadecimal notation. Do this using bitwise operators.
So, my idea is the following: I create a mask that has 1's for its 4 lowest-value bits.
I shift the bits of the given int right by 4 and use & on that int and the mask. I repeat this while the int is nonzero. My question is: when I get the individual hex digits (packs of 4 bits), how do I convert them to a char? For example, I get:
x & mask = 1101(2) = 13(10) = D(16)
Is there a function to convert an int to its hex representation, or do I have to use brute force with a switch statement or something else?
I almost forgot, I am doing this in C :)
Here is what I mean:
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 4

int main() {
    unsigned int x, y, i, mask;
    char a[4];

    printf("Enter a positive number: ");
    scanf("%u", &x);

    for (i = sizeof(unsigned int), mask = ~(~0 << 4); x; i--, x >>= BLOCK) {
        y = x & mask;
        a[i] = FICTIVE_NUM_TO_HEX_DIGIT(y);
    }

    print_array(a);
    return EXIT_SUCCESS;
}
You are almost there. The simplest method to convert an integer in the range from 0 to 15 to a hexadecimal digit is to use a lookup table,
char hex_digits[] = "0123456789ABCDEF";
and index into that,
a[i] = hex_digits[y];
in your code.
Remarks:
char a[4];
is probably too small. One hexadecimal digit corresponds to four bits, so with CHAR_BIT == 8, you need up to 2*sizeof(unsigned) chars to represent the number, generally, (CHAR_BIT * sizeof(unsigned int) + 3) / 4. Depending on what print_array does, you may need to 0-terminate a.
for (i = sizeof(unsigned int), mask = ~(~0 << 4); x; i--, x >>= BLOCK)
Initialising i to sizeof(unsigned int) skips the most significant bits; i should be initialised to the last valid index into a (or, if you need a 0-terminator, the penultimate valid index).
The mask can more simply be defined as mask = 0xF, that has the added benefit of not invoking undefined behaviour, which
mask = ~(~0 << 4)
probably does. 0 is an int, and thus ~0 is one too. On two's complement machines (that is, almost everything nowadays), its value is -1, and shifting negative integers left is undefined behaviour.
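If you do want to build the mask by shifting, a small hedged sketch (my own): start from an unsigned literal so that no negative value is ever shifted.

unsigned int mask = ~(~0u << 4);  /* yields 0x0F without any signed left shift */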
char buffer[10] = {0};
int h = 17;
sprintf(buffer, "%02X", h);
Try something like this:
char hex_digits[] = "0123456789ABCDEF";

/* assumes: unsigned int x holds the input, digit is unsigned,
   char a[] is large enough, and CHAR_BIT comes from <limits.h> */
for (i = 0; i < ((sizeof(unsigned int) * CHAR_BIT + 3) / 4); i++) {
    digit = (x >> (sizeof(unsigned int) * CHAR_BIT - 4)) & 0x0F;
    x = x << 4;
    a[i] = hex_digits[digit];
}
OK, this is where I got to:
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 4

void printArray(char*, int);

int main() {
    unsigned int x, mask;
    int size = sizeof(unsigned int) * 2, i;
    char a[size], hexDigits[] = "0123456789ABCDEF";

    for (i = 0; i < size; i++)
        a[i] = 0;

    printf("Enter a positive number: ");
    scanf("%u", &x);

    for (i = size - 1, mask = ~(~0 << 4); x; i--, x >>= BLOCK) {
        a[i] = hexDigits[x & mask];
    }

    printArray(a, size);
    return EXIT_SUCCESS;
}

void printArray(char a[], int n) {
    int i;

    for (i = 0; i < n; i++)
        printf("%c", a[i]);
    putchar('\n');
}
I have compiled it; it runs and does the job correctly. I don't know... should I be worried that this problem was a bit hard for me? At the faculty, during exams, we must write our code by hand, on a piece of paper... I can't imagine I would have gotten this right.
Is there a better (less complicated) way to do this problem? Thank you all for the help :)
I would consider the impact of potential padding bits when shifting, as shifting by anything equal to or greater than the number of value bits that exist in an integer type is undefined behaviour.
Perhaps you could terminate the string first using array[--size] = '\0';, write the smallest nibble (hex digit) using array[--size] = "0123456789ABCDEF"[value & 0x0f], move on to the next nibble using value >>= 4, and repeat while value > 0. When you're done, return array + size or &array[size] so that the caller knows where the hex sequence begins. A sketch of this follows.
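Putting that description together into a minimal sketch (my own; the function name and the fixed buffer size are assumptions):

#include <stdio.h>

/* fills the tail of array (of length size) with the hex digits of value
   and returns a pointer to where the sequence begins */
char *to_hex(unsigned int value, char *array, int size)
{
    array[--size] = '\0';                                 /* terminate first */
    do {
        array[--size] = "0123456789ABCDEF"[value & 0x0f]; /* smallest nibble */
        value >>= 4;                                      /* next nibble */
    } while (value > 0);
    return &array[size];                                  /* start of the digits */
}

int main(void)
{
    char buf[16];

    printf("%s\n", to_hex(3340u, buf, sizeof buf));  /* prints D0C */
    return 0;
}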