Right shift in C

I am trying to read a hexadecimal memory address from a text file, shift off its last 3 hex digits, and then print the result.
The memory address is A3BC88A0 and I just want to print A3BC8. However, when I run the code, addr = A3BC88A0 but result = 14779114. Can someone help me figure out why this is happening or what to do?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, const char * argv[])
{
    FILE *f = fopen("Test.txt", "r");
    unsigned addr;
    fscanf(f, "%x", &addr);
    printf("%x\n", addr);
    unsigned result = addr >> 3;
    printf("%x\n", result);
    fclose(f);
    return 0;
}

What you want is not to shift by 3 bits, but by 3 hex digits, each of which is 4 bits. So do this instead:
unsigned result = addr >> 12;

The >> 3 shifts the value by 3 bits; however, you asked for A3BC88A0 to be shifted by 3 nybbles (half bytes) to give A3BC8.
Change the line to:
unsigned result = addr >> (3*4);
(I put 3*4 rather than just 12 to highlight that it's nybbles you want to shift by.)
Note for clarity: a single hex digit is 4 bits, which is half a byte, which is a nybble (not a very common term, admittedly).
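Putting the two answers together, a minimal sketch (with the address hard-coded rather than read from a file, purely for illustration) showing the difference between the two shifts:
#include <stdio.h>

int main(void)
{
    unsigned addr = 0xA3BC88A0;

    printf("%x\n", addr >> 3);        /* shifts 3 bits:       prints 14779114 */
    printf("%x\n", addr >> (3 * 4));  /* shifts 3 hex digits: prints a3bc8    */
    return 0;
}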

Related

Am I using memcpy wrong?

I'm trying to use memcpy to copy part of an unsigned int to another unsigned int within the same struct I made. But my program only prints the first printf statement and then says: Process returned -1073741819 (0xC0000005)
Am I using memcpy wrong?
#include <stdio.h>
#include <string.h>
int main()
{
    struct time
    {
        unsigned int hours:5;
        unsigned int minutes:6;
        unsigned int seconds:6;
    };
    struct time t = {0x10, 0b101011, 45};
    printf("The time is : %d:%d:%d\n", t.hours, t.minutes, t.seconds);
    memcpy(t.minutes, t.seconds, 2);
    printf("The time is : %d:%d:%d\n", t.hours, t.minutes, t.seconds);
    return 0;
}
I've already done t.minutes = t.seconds and that copies the whole number, but I only want a portion of it.
In response to your clarification in the comments:
When I say part of number I mean I'm trying to copy the most significant 2 bits of the unsigned int.
The way to copy individual bits is by doing bit manipulation with bitwise operators.
The two most significant bits in your 6-bit fields are therefore represented by the value 0x30 (110000 in binary). To copy these from one to another, simply clear out those bits in the destination, then mask the source and combine with bitwise-OR:
unsigned int mask = 0x30;
t.minutes = (t.minutes & ~mask) | (t.seconds & mask);
Breakdown of the above:
~mask inverts the mask, meaning that bits 4 and 5 will be 0 and all other bits will be 1
this value is then ANDed with minutes, resulting in clearing bits 4 and 5
the opposite occurs when ANDing the mask with seconds, resulting in only bits 4 and 5 being preserved, and all other bits cleared
the two values are then combined with OR and assigned to minutes
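Putting it together, a minimal compilable sketch applied to the struct from the question (the field values here are my own, chosen so the copy visibly changes minutes):
#include <stdio.h>

struct time
{
    unsigned int hours:5;
    unsigned int minutes:6;
    unsigned int seconds:6;
};

int main()
{
    struct time t = {16, 43, 13};  /* minutes = 101011, seconds = 001101 */
    unsigned int mask = 0x30;      /* bits 4 and 5 of the 6-bit fields   */

    /* copy the two most significant bits of seconds into minutes */
    t.minutes = (t.minutes & ~mask) | (t.seconds & mask);
    printf("The time is : %d:%d:%d\n", t.hours, t.minutes, t.seconds);  /* 16:11:13 */
    return 0;
}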

bit programming in C [duplicate]

This question already has answers here: How do I split up a long value (32 bits) into four char variables (8 bits) using C?
I am new to bit programming in C and am finding it difficult to understand how ipv4_to_bit_string() in the code below works.
Can anyone explain what happens when I pass the integer 1234 to this function? Why is the integer right-shifted by 24, 16, 8, and 4 places?
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stdlib.h>
typedef struct BIT_STRING_s {
    uint8_t *buf;     /* BIT STRING body */
    size_t size;      /* Size of the above buffer */
    int bits_unused;  /* Unused trailing bits in the last octet (0..7) */
} BIT_STRING_t;

BIT_STRING_t tnlAddress;

void ipv4_to_bit_string(int i, BIT_STRING_t *p)
{
    do {
        (p)->buf = calloc(4, sizeof(uint8_t));
        (p)->buf[0] = (i) >> 24 & 0xFF;
        (p)->buf[1] = (i) >> 16 & 0xFF;
        (p)->buf[2] = (i) >> 8 & 0xFF;
        (p)->buf[3] = (i) >> 4 & 0xFF;
        (p)->size = 4;
        (p)->bits_unused = 0;
    } while(0);
}

int main()
{
    BIT_STRING_t *p = (BIT_STRING_t*)calloc(1, sizeof(BIT_STRING_t));
    ipv4_to_bit_string(1234, p);
}
An IPv4 address is four eight-bit pieces that have been put together into one 32-bit piece. To take the 32-bit piece apart into the four eight-bit pieces, you extract each eight bits separately. To extract one eight-bit piece, you shift right by 0, 8, 16, or 24 bits, according to which piece you want at the moment, and then mask with 0xFF to take only the low eight bits after the shift.
The shift by 4 instead of 0 appears to be an error.
The use of an int for the 32-bit piece appears to be an error, primarily because the high bit may be set, which indicates the int value is negative, and then the right-shift is not fully defined by the C standard; it is implementation-defined. An unsigned type should be used. Additionally, int is not necessarily 32 bits; it is preferable to use uint32_t, which is defined in the <stdint.h> header.
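A sketch of the function with both fixes applied (ipv4_to_bit_string_fixed is my own name for this corrected version, not code from the question; BIT_STRING_t is the struct defined above):
#include <stdint.h>
#include <stdlib.h>

void ipv4_to_bit_string_fixed(uint32_t i, BIT_STRING_t *p)
{
    p->buf = calloc(4, sizeof(uint8_t));  /* one byte per octet */
    p->buf[0] = (i >> 24) & 0xFF;         /* most significant octet */
    p->buf[1] = (i >> 16) & 0xFF;
    p->buf[2] = (i >> 8) & 0xFF;
    p->buf[3] = i & 0xFF;                 /* shift by 0, not 4 */
    p->size = 4;
    p->bits_unused = 0;
}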

How can I copy a 4-letter ASCII word to a buffer in C?

I am trying to copy the word 0x0FF0 to a buffer but am unable to do so.
Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <math.h>
#include <time.h>
#include <linux/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
void print_bits(unsigned int x);
int main(int argc, char *argv[])
{
    char buffer[512];
    unsigned int init = 0x0FF0;
    unsigned int * som = &init;
    printf("print bits of som now: \n");
    print_bits(init);
    printf("\n");
    memset(&buffer[0], 0, sizeof(buffer)); // reinitialize the buffer
    memcpy(buffer, som, 4); // copy word to the buffer
    printf("print bits of buffer[0] now: \n");
    print_bits(buffer[0]);
    printf("\n");
    return 0;
}

void print_bits(unsigned int x)
{
    int i;
    for (i = 8 * sizeof(x)-17; i >= 0; i--) {
        (x & (1 << i)) ? putchar('1') : putchar('0');
    }
    printf("\n");
}
this is the result I get in the console:
Why am I getting different values from the bit printing if I am using memcpy?
I don't know if it has something to do with big/little endian, but I am losing 4 bits of 1s here, which shouldn't happen with either method.
When you call
print_bits(buffer[0]);
you're taking just one byte out of the buffer, converting it to unsigned int, and passing that to the function. The other bytes in buffer are ignored.
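To see what the other bytes hold, a short sketch (my own illustration, not part of the answer) that prints every byte of the copied value separately:
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int init = 0x0FF0;
    unsigned char buffer[sizeof init];

    memcpy(buffer, &init, sizeof init);
    for (size_t i = 0; i < sizeof init; i++)
        printf("buffer[%zu] = %02X\n", i, (unsigned)buffer[i]);
    /* on a little-endian machine with a 4-byte int this prints F0 0F 00 00 */
    return 0;
}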
You are mixing up types and relying on specific settings of your architecture/platform; this already breaks your existing code, and it may get even more harmful once you compile with different settings.
Your buffer is of type char[512], while your init is of type unsigned int.
First, it depends on the platform whether char is signed or unsigned. This is actually relevant, since it influences how a char value is promoted to an unsigned int value. See the following code, which demonstrates the difference using explicitly signed and unsigned chars:
signed char c = 0xF0;
unsigned char uc = c;
unsigned int ui_from_c = c;
unsigned int ui_from_uc = uc;
printf("Singned char c:%hhd; Unsigned char uc:%hhu; ui_from_c:%u ui_from_uc:%u\n", c, uc, ui_from_c,ui_from_uc);
// output: Singned char c:-16; Unsigned char uc:240; ui_from_c:4294967280 ui_from_uc:240
Second, int may be represented by 4 or by 8 bytes (either of which can hold a "word"), yet char is 1 byte and can therefore not hold a "word" of 16 bits.
Third, architectures can be big endian or little endian, and this influences where a constant like 0x0FF0, which requires 2 bytes, would actually be located in a 4 or 8 byte integral representation.
So buffer[0] certainly selects just a portion of what you think it does; the portion might get promoted in the wrong way to an unsigned int, and it might even be a portion completely outside the 0x0FF0 literal.
I'd suggest using fixed-width integral types representing exactly a word throughout:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

void print_bits(uint16_t x);

int main(int argc, char *argv[])
{
    uint16_t buffer[512];
    uint16_t init = 0x0FF0;
    uint16_t * som = &init;
    printf("print bits of som now: \n");
    print_bits(init);
    printf("\n");
    memset(buffer, 0, sizeof(buffer)); // reinitialize the buffer
    memcpy(buffer, som, sizeof(*som)); // copy word to the buffer
    printf("print bits of buffer[0] now: \n");
    print_bits(buffer[0]);
    printf("\n");
    return 0;
}

void print_bits(uint16_t x)
{
    int i;
    for (i = 8 * sizeof(x) - 1; i >= 0; i--) {
        (x & (1 << i)) ? putchar('1') : putchar('0');
    }
    printf("\n");
}
You are not writing the bytes "0F F0" to the buffer. You are writing whatever bytes your platform uses internally to store the number 0x0FF0. There is no reason these need to be the same.
When you write 0x0FF0 in C, that means, roughly, "whatever my implementation uses to encode the number four thousand eighty". That might be the byte string 0F, F0. But it might not be.
I mean, how weird would it be if unsigned int init = 0x0FF0; and unsigned int init = 4080; did different things? They are the same number written in two notations, so they must behave identically on every platform; but surely not all platforms store the number 4,080 using the byte string "0F F0".
For example, I might store the number ten as "10" or "ten" or any number of other ways. It's unreasonable for you to expect "ten", "10", or any other particular byte sequence to appear in memory just because you stored the number ten unless you do happen to specifically know how your platform stores the number ten. Given that you asked this question, you don't know that.
Also, you are only printing the value of buffer[0], which is a single character. So it couldn't possibly hold any version of 0x0FF0.
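If the goal really is to get the byte sequence 0F F0 into the buffer on every platform, the usual approach is to build the bytes explicitly with shifts instead of memcpy; a minimal sketch:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t init = 0x0FF0;
    unsigned char buffer[2];

    buffer[0] = (init >> 8) & 0xFF;  /* high byte: 0F */
    buffer[1] = init & 0xFF;         /* low byte:  F0 */

    printf("%02X %02X\n", (unsigned)buffer[0], (unsigned)buffer[1]);  /* 0F F0 regardless of endianness */
    return 0;
}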

Confusion on printing strings as integers in C

I would like to know how a string is represented as an integer, so I wrote the following program.
#include <stdio.h>

int main(int argc, char *argv[]){
    char name[4] = {"@"};
    printf("integer name %d\n", *(int*)name);
    return 0;
}
The output is:
integer name 64
This is understandable because '@' is 64 in decimal, i.e., 0x40 in hex.
Now I change the program into:
#include <stdio.h>

int main(int argc, char *argv[]){
    char name[4] = {"@@"};
    printf("integer name %d\n", *(int*)name);
    return 0;
}
The output is:
integer name 16448
I don't understand this. Since @@ is 0x4040 in hex, it should be 2^12+2^6 = 4160.
If I count the '\0' at the end of the string, then it should be 2^16+2^10 = 66560.
Could someone explain where 16448 comes from?
Your math is wrong: 0x4040 == 16448. The two fours are the 6th and 14th bits respectively.
Your code actually invokes undefined behavior because you must not alias a char * with an int *. This is known as the strict aliasing rule. To see just one reason why this should be disallowed, consider what would otherwise have to happen if the code is run on a little and a big endian machine.
If you want to see the hex pattern of the string, you should simply loop over its bytes and print out each byte.
void print_string(const char * strp)
{
    printf("0x");
    do
        printf("%02X", (unsigned char) *strp);
    while (*strp++);
    printf("\n");
}
Of course, instead of printing the bytes, you can shift them into an integer (that will very soon overflow) and only finally output that integer. While doing this, you'll be forced to take a stand on “your” endianness.
/* Interpreting as big endian. */
unsigned long int_string(const char * strp)
{
    unsigned long value = 0UL;
    do
        value = (value << 8) | (unsigned char) *strp;
    while (*strp++);
    return value;
}
This is how 16448 comes about. 0x4040 looks like this in binary:
Hex:     4    0    4    0
Binary:  0100 0000 0100 0000
2^14 + 2^6 = 16384 + 64 = 16448, because bits 6 and 14 are set.
Hope you got it :)
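As an aside, the aliasing problem called out in the answer above can be avoided with memcpy, which reads the same bytes in a well-defined way (a sketch assuming a 4-byte int, as in the question; the result is still endianness-dependent):
#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[4] = {"@@"};
    int value;

    memcpy(&value, name, sizeof value);  /* well-defined, unlike *(int*)name */
    printf("integer name %d\n", value);  /* 16448 on a little-endian machine */
    return 0;
}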

Why does C print my hex values incorrectly?

So I'm a bit of a newbie to C and I am curious to figure out why I am getting this unusual behavior.
I am reading a file 16 bits at a time and just printing them out as follows.
#include <stdio.h>
#define endian(hex) (((hex & 0x00ff) << 8) + ((hex & 0xff00) >> 8))
int main(int argc, char *argv[])
{
    const int SIZE = 2;
    const int NMEMB = 1;
    FILE *ifp; // input file pointer
    FILE *ofp; // output file pointer
    int i;
    short hex;
    for (i = 2; i < argc; i++)
    {
        // Reads the header and stores the bits
        ifp = fopen(argv[i], "r");
        if (!ifp) return 1;
        while (fread(&hex, SIZE, NMEMB, ifp))
        {
            printf("\n%x", hex);
            printf("\n%x", endian(hex)); // this prints what I expect
            printf("\n%x", hex);
            hex = endian(hex);
            printf("\n%x", hex);
        }
    }
}
The results look something like this:
ffffdeca
cade // expected
ffffdeca
ffffcade
0
0 // expected
0
0
600
6 // expected
600
6
Can anyone explain to me why the last line in each block doesn't print the same value as the second?
The placeholder %x in the format string interprets the corresponding parameter as unsigned int.
To print the parameter as short, add a length modifier h to the placeholder:
printf("%hx", hex);
http://en.wikipedia.org/wiki/Printf_format_string#Format_placeholders
This is due to integer type-promotion.
Your shorts are being implicitly promoted to int (which is 32 bits here), so these are sign-extending promotions in this case.
Therefore, your printf() is printing out the hexadecimal digits of the full 32-bit int.
When your short value is negative, the sign-extension will fill the top 16 bits with ones, thus you get ffffcade rather than cade.
The reason why this line:
printf("\n%x", endian(hex));
seems to work is that your macro implicitly gets rid of the upper 16 bits.
You have implicitly declared hex as a signed value (to make it unsigned, write unsigned short hex), so any value of 0x8000 or above is considered to be negative. When printf displays it as a 32-bit int value it is sign-extended with ones, causing the leading Fs. When you print the return value of endian before truncating it by assigning it to hex, the full 32 bits are available and printed correctly.
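A minimal sketch of the sign-extension effect and of both fixes mentioned above (the value is hard-coded for illustration, matching the first block of the question's output):
#include <stdio.h>

int main(void)
{
    short hex = (short)0xDECA;            /* top bit set: negative as a short     */

    printf("%x\n", hex);                  /* promoted and sign-extended: ffffdeca */
    printf("%hx\n", hex);                 /* length modifier h: deca              */
    printf("%x\n", (unsigned short)hex);  /* unsigned conversion: deca            */
    return 0;
}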
