maybe one of you can help me. I don't know what to do anymore. I have the following test code:
#include <stdio.h>

int main() {
    unsigned int block = 0;
    unsigned int alp = 0;
    char *input = "test";
    unsigned int *pt = NULL;

    pt = (unsigned int*)input;
    alp |= ((*pt) >> 8);
    printf("pointer value:\t %d \n", alp);

    for (int a = 0; a < 3; a++) {
        block |= (unsigned char)input[a];
        if (a != 2) {
            block <<= 8;
        }
    }
    printf("block value:\t %d \n", block);
    return 0;
}
I would expect both values to be exactly the same, since they look at exactly 3 bytes, but the values differ. Does anyone have an idea why this is the case, or can explain it to me?
pointer value: 7631717
block value: 7628147
Compiled with "gcc test.c -Wall -o test" (gcc (Ubuntu 12.2.0-3ubuntu1) 12.2.0)
Many thanks
The value of block is
(input[0] << 16) | (input[1] << 8) | (input[2]);
If you're on a little-endian system (which most people are), then the value of alp is
(input[3] << 16) | (input[2] << 8) | (input[1]);
There's nothing fishy going on; the different results are expected on a little-endian system. Your CPU reads the first byte as the least significant one and the last byte as the most significant one, but your for loop treats the first byte as the most significant one and the last byte as the least significant one.
More info on endianness
I'm trying to reverse the bytes for a 64 bit address pointer for an assignment and have this code:
char swapPtr(char x){
    x = (x & 0x00000000FFFFFFFF) << 32 | (x & 0xFFFFFFFF00000000) >> 32;
    x = (x & 0x0000FFFF0000FFFF) << 16 | (x & 0xFFFF0000FFFF0000) >> 16;
    x = (x & 0x00FF00FF00FF00FF) << 8 | (x & 0xFF00FF00FF00FF00) >> 8;
    return x;
}
But, it just messes everything up. However, a similar function works perfectly for a 64bit long. Is there something different that needs to be done for pointers?
Could the way I'm making the function call be an issue?
For a pointer:
*(char*)loc = swapPtr(*(char*)loc);
For a long:
*loc = swapLong(*loc);
You cannot use char x for a pointer! A char is only a single byte long.
You need at the very least
unsigned long int swapPtr(unsigned long int x) {
Or better, use the type of the pointer
void* swapPtr(void* x) {
Quite likely your compiler will complain when you start bit shifting pointers; in that case you're better off explicitly casting your argument to an unsigned 64-bit integer:
#include <stdint.h>
uint64_t x;
Note also that you have to call with the address of a variable, so you call with
result = swapLong(&loc);
not *loc (which looks at the place where loc is pointing - the value, not the address).
Complete program:
#include <stdio.h>
#include <stdint.h>
uint64_t swapLong(void *X) {
    uint64_t x = (uint64_t) X;
    x = (x & 0x00000000FFFFFFFF) << 32 | (x & 0xFFFFFFFF00000000) >> 32;
    x = (x & 0x0000FFFF0000FFFF) << 16 | (x & 0xFFFF0000FFFF0000) >> 16;
    x = (x & 0x00FF00FF00FF00FF) << 8 | (x & 0xFF00FF00FF00FF00) >> 8;
    return x;
}

int main(void) {
    char a;
    printf("the address of a is 0x%016llx\n", (uint64_t)(&a));
    printf("swapping all the bytes gives 0x%016llx\n", (uint64_t)swapLong(&a));
}
Output:
the address of a is 0x00007fff6b133b1b
swapping all the bytes gives 0x1b3b136bff7f0000
EDIT you could use something like
#include <inttypes.h>
printf("the address of a is 0x%016" PRIx64 "\n", (uint64_t)(&a));
where the macro PRIx64 expands into "the format string you need to print a 64 bit number in hex". It is a little cleaner than the above.
You may also use the _bswap64 intrinsic (which has a latency of 2 and a throughput of 0.5 on the Skylake architecture). It is a wrapper for the assembly instruction bswap r64, so it is probably the most efficient option:
Reverse the byte order of 64-bit integer a, and store the result in dst. This intrinsic is provided for conversion between little and big endian values.
#include <immintrin.h>
uint64_t swapLongIntrinsic(void *X) {
    return _bswap64((uint64_t) X);
}
NB: Don't forget the header
Here is an alternative way for converting a 64-bit value from LE to BE or vice-versa.
You can basically apply this method to any type, by defining var_type:
typedef long long var_type;
Reverse by pointer:
void swapPtr(var_type* x)
{
    char* px = (char*)x;
    for (int i = 0; i < sizeof(var_type)/2; i++)
    {
        char temp = px[i];
        px[i] = px[sizeof(var_type)-1-i];
        px[sizeof(var_type)-1-i] = temp;
    }
}
Reverse by value:
var_type swapVal(var_type x)
{
    var_type y;
    char* px = (char*)&x;
    char* py = (char*)&y;
    for (int i = 0; i < sizeof(var_type); i++)
        py[i] = px[sizeof(var_type)-1-i];
    return y;
}
I got a problem that says: Form a character array based on an unsigned int. Array will represent that int in hexadecimal notation. Do this using bitwise operators.
So my idea is the following: I create a mask that has 1s in its 4 lowest-value bits.
I AND the given int with the mask, then shift the int right by 4, and repeat until (int != 0). My question is: when I get the individual hex digits (packs of 4 bits), how do I convert them to a char? For example, I get:
x & mask = 1101(2) = 13(10) = D(16)
Is there a function to convert an int to hex representation, or do I have to use brute force with switch statement or whatever else?
I almost forgot, I am doing this in C :)
Here is what I mean:
#include <stdio.h>
#include <stdlib.h>
#define BLOCK 4
int main() {
    unsigned int x, y, i, mask;
    char a[4];

    printf("Enter a positive number: ");
    scanf("%u", &x);
    for (i = sizeof(unsigned int), mask = ~(~0 << 4); x; i--, x >>= BLOCK) {
        y = x & mask;
        a[i] = FICTIVE_NUM_TO_HEX_DIGIT(y);
    }
    print_array(a);
    return EXIT_SUCCESS;
}
You are almost there. The simplest method to convert an integer in the range from 0 to 15 to a hexadecimal digit is to use a lookup table,
char hex_digits[] = "0123456789ABCDEF";
and index into that,
a[i] = hex_digits[y];
in your code.
Remarks:
char a[4];
is probably too small. One hexadecimal digit corresponds to four bits, so with CHAR_BIT == 8, you need up to 2*sizeof(unsigned) chars to represent the number, generally, (CHAR_BIT * sizeof(unsigned int) + 3) / 4. Depending on what print_array does, you may need to 0-terminate a.
for (i = sizeof(unsigned int), mask = ~(~0 << 4); x; i--, x >>= BLOCK)
initialising i to sizeof(unsigned int) skips the most significant bits, i should be initialised to the last valid index into a (except for possibly the 0-terminator, then the penultimate valid index).
The mask can more simply be defined as mask = 0xF, that has the added benefit of not invoking undefined behaviour, which
mask = ~(~0 << 4)
probably does. 0 is an int, and thus ~0 is one too. On two's complement machines (that is almost everything nowadays), the value is -1, and shifting negative integers left is undefined behaviour.
char buffer[10] = {0};
int h = 17;
sprintf(buffer, "%02X", h);
Try something like this:
char hex_digits[] = "0123456789ABCDEF";
for (i = 0; i < ((sizeof(unsigned int) * CHAR_BIT + 3) / 4); i++) {
    digit = (x >> (sizeof(unsigned int) * CHAR_BIT - 4)) & 0x0F;
    x = x << 4;
    a[i] = hex_digits[digit];
}
Ok, this is where I got:
#include <stdio.h>
#include <stdlib.h>
#define BLOCK 4
void printArray(char*, int);
int main() {
    unsigned int x, mask;
    int size = sizeof(unsigned int) * 2, i;
    char a[size], hexDigits[] = "0123456789ABCDEF";

    for (i = 0; i < size; i++)
        a[i] = 0;
    printf("Enter a positive number: ");
    scanf("%u", &x);
    for (i = size - 1, mask = ~(~0 << 4); x; i--, x >>= BLOCK) {
        a[i] = hexDigits[x & mask];
    }
    printArray(a, size);
    return EXIT_SUCCESS;
}

void printArray(char a[], int n) {
    int i;
    for (i = 0; i < n; i++)
        printf("%c", a[i]);
    putchar('\n');
}
I have compiled it; it runs and does the job correctly. I don't know... should I be worried that this problem was a bit hard for me? At my faculty, during exams, we must write our code by hand on a piece of paper... I can't imagine I would have gotten this right.
Is there a better (less complicated) way to do this problem? Thank you all for the help :)
I would consider the impact of potential padding bits when shifting, as shifting by anything equal to or greater than the number of value bits that exist in an integer type is undefined behaviour.
Perhaps you could terminate the string first using: array[--size] = '\0';, write the smallest nibble (hex digit) using array[--size] = "0123456789ABCDEF"[value & 0x0f], move onto the next nibble using: value >>= 4, and repeat while value > 0. When you're done, return array + size or &array[size] so that the caller knows where the hex sequence begins.
I wrote this function to remove the most significant bit in every byte, but it doesn't seem to be working the way I wanted.
The output file size is always 0, and I don't understand why nothing has been written to the output file. Is there a better, simpler way to remove the most significant bit in every byte?
In relation to shift operators, section 6.5.7 of the C standard says:
If the value of the right operand is negative or is greater than or
equal to the width of the promoted left operand, the behavior is
undefined.
So firstly, remove nBuffer << 8;. Even if it were well defined, it is not an assignment, so the statement has no effect.
As people have mentioned, you'd be better off using CHAR_BIT than 8. I'm pretty sure that instead of 0x7f you mean UCHAR_MAX >> 1, and instead of 7 you meant CHAR_BIT - 1.
Let's just focus on nBuffer and bit_count, here. I shall comment out anything that doesn't use either of these.
bit_count += 7;
if (bit_count == 7*8)
{
    *out_buf++ = nBuffer;
    /*if((write(out_fd, bit_buf, sizeof(char))) == -1)
        oops("Cannot write on the file", "");*/
    nBuffer << 8;
    bit_count -= 8;
}
nBuffer = 0;
bit_count = 0;
At the end of this code, what is the value of nBuffer? What about bit_count? What impact would that have on your second loop, while (bit_count > 0)?
Now let's focus on the commented out code:
if((write(out_fd, bit_buf, sizeof(char))) == -1)
    oops("Cannot write on the file", "");
Where are you assigning a value to bit_buf? Using an uninitialised variable is undefined behaviour.
Instead of going through all of the bits to find the high one, this goes through only the 1 bits. high() returns the high bit of the argument, or zero if the argument is zero.
inline int high(int n)
{
    int k;
    do {
        k = n ^ (n - 1);
        n &= ~k;
    } while (n);
    return (k + 1) >> 1;
}

inline int drop_high(int n)
{
    return n ^ high(n);
}
unsigned char remove_most_significant_bit(unsigned char b)
{
    int bit;
    for (bit = 0; bit < 8; bit++)
    {
        unsigned char mask = (0x80 >> bit);
        if (mask & b) return b & ~mask;
    }
    return b;
}

void remove_most_significant_bit_from_buffer(unsigned char* b, int length)
{
    int i;
    for (i = 0; i < length; i++)
    {
        b[i] = remove_most_significant_bit(b[i]);
    }
}

void test_it()
{
    unsigned char data[8];
    int i;
    for (i = 0; i < 8; i++)
    {
        data[i] = (1 << i) + i;
    }
    for (i = 0; i < 8; i++)
    {
        printf("%d\r\n", data[i]);
    }
    remove_most_significant_bit_from_buffer(data, 8);
    for (i = 0; i < 8; i++)
    {
        printf("%d\r\n", data[i]);
    }
}
I won't go through your entire answer to provide your reworked code, but removing the most significant bit is easy. This comes from the fact that the most significant bit can easily be found by using log base 2 converted to an integer.
#include <stdio.h>
#include <math.h>
int RemoveMSB(int a)
{
    return a ^ (1 << (int)log2(a));
}

int main(int argc, char const *argv[])
{
    int a = 4387;
    printf("MSB of %d is %d\n", a, (int)log2(a));
    a = RemoveMSB(a);
    printf("MSB of %d is %d\n", a, (int)log2(a));
    return 0;
}
Output:
MSB of 4387 is 12
MSB of 291 is 8
As such, 4387 in binary is 1000100100011 with a most significant bit at 12.
Likewise, 291 in binary is 0000100100011 with a most significant bit at 8.
I've been reading this thread Store an int in a char array?
And I need to store the int in the array of chars.
So I read the previous thread and I tried to make my own demo. But it's not working, trying to figure out why not for a long time. Maybe you could give me some clue or ideas please?
#include <stdio.h>
int main(void) {
    char buffer[4];
    int y = 2200;

    buffer[0] = (y >> 0) & 0xff;
    buffer[1] = (y >> 8) & 0xff;
    buffer[2] = (y >> 16) & 0xff;
    buffer[3] = (y >> 24) & 0xff;

    int x = buffer[0];
    printf("%i = %i\n", y, x);
}
Output
gcc tmp.c && ./a.out
2200 = -104
int x = buffer[0];
This copies the value of the char at buffer[0], implicitly converted to an int, into x. It does not reinterpret the first sizeof(int) bytes starting at buffer as an int, which is what you want. (If it did, an innocent char c = 10; int x = c; would read past the end of c and subtly break.)
Realize that buffer[n] doesn't return a memory address; it returns a char. To interpret sizeof(int) chars as one whole int, cast buffer to an int* first:
int x = *((int*)buffer);
And for an offset n (measured in ints, not chars):
int x = *((int*)buffer + n);
Also note that your code assumes sizeof(int) == 4, which is not guaranteed, and that casting a char buffer to int* can run afoul of alignment and strict-aliasing rules on some platforms.
x = buffer[0] does not do what you wish. Try memcpy(&x,buffer,sizeof(x)). (You'll need to add #include <string.h>.)
I have values like 12 and 13 that I want to pack into a single integer, for example k.
I tried the following program, but I am not getting the expected results.
#include <stdio.h>
int main()
{
    int k = 0;
    printf("k address is %u\n", &k);

    char* a = &k;
    printf("%u\n", a);
    *(a) = 12;
    a++;
    printf("%u\n", a);
    *(a) = 13;

    printf("k is %d\n", k);
    return 0;
}
and the output is:
k address is 3213474664
3213474664
3213474665
k is 3340
On your system, ints are evidently stored in little-endian format, because 13*256 + 12 = 3340.
If you want to modify bytes in an integer in an endian-independent way, you should use shifts and bitwise operators.
For example, if you were trying to store an IP address of 1.2.3.4 into a 32-bit integer, you could do:
unsigned int addr = (1 << 24) | (2 << 16) | (3 << 8) | 4;
This guarantees that 1 is the most significant byte and so forth.