In the following code I stored the mac address in a char array.
But even though I am storing each byte in a char, printing it produces the following:
ffffffaa
ffffffbb
ffffffcc
ffffffdd
ffffffee
ffffffff
This is the code:
#include <stdio.h>
int main()
{
    char *mac = "aa:bb:cc:dd:ee:ff";
    char a[6];
    int i;
    sscanf(mac, "%x:%x:%x:%x:%x:%x", &a[0], &a[1], &a[2], &a[3], &a[4], &a[5]);
    for (i = 0; i < 6; i++)
        printf("%x\n", a[i]);
}
I need the output to be in the following way:
aa
bb
cc
dd
ee
ff
The current printf statement is
printf("%x\n",a[i]);
How can I get the desired output and why is the printf statement printing ffffffaa even though I stored the aa in a char array?
You're using %x, which expects the argument to be unsigned int *, but you're just passing char *. This is dangerous, since sscanf() will do an int-sized write, possibly writing outside the space allocated to your variable.
Change the conversion specifier for the sscanf() to %hhx, which means unsigned char. Then change the print to match. Also, of course, make the a array unsigned char.
Also check to make sure sscanf() succeeded:
unsigned char a[6];

if (sscanf(mac, "%hhx:%hhx:%hhx:%hhx:%hhx:%hhx",
           a, a + 1, a + 2, a + 3, a + 4, a + 5) == 6)
{
    printf("daddy MAC is %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx",
           a[0], a[1], a[2], a[3], a[4], a[5]);
}
Make sure to treat your a array as unsigned chars, i.e.
unsigned char a[6];
In
printf("%x\n",a[i]);
the expression a[i] yields a char. However, the standard does not specify whether char is signed or unsigned. In your case, the compiler apparently treats it as a signed type.
Since the most significant bit is set in every byte of your MAC address (each byte is greater than or equal to 0x80), a[i] is treated as a negative value, so printf generates the hexadecimal representation of a negative value.
Related
I have a code that translates ASCII char array to a Hex char array:
void ASCIIFormatCharArray2HexFormatCharArray(char chrASCII[72], char chrHex[144])
{
    int i, j;
    memset(chrHex, 0, 144);
    for (i = 0, j = 0; i < strlen(chrASCII); i++, j += 2)
    {
        sprintf((char*)chrHex + j, "%02X", chrASCII[i]);
    }
    chrHex[j] = '\0';
}
When I pass the function the char 'א' - Alef, the equivalent of 'A' in English - it produces this:
chrHex = "FFFF"
I don't understand how 1 char translates to 2 bytes of Hex ("FFFF") instead of 1 byte(like "u" in ASCII is "75" in Hex) when it's not even an English letter.
I would love an explanation of how the compiler treats 'א' this way.
When “א” appears in a string literal, your compiler likely represents it with the bytes 0xD7 and 0x90 (its UTF-8 encoding), although other possibilities are allowed by the C standard.
When these bytes are interpreted as a signed char, they have the values −41 and −112. When these are passed as arguments to sprintf, they are automatically promoted to int. In a 32-bit two’s complement int, the bits used to represent −41 and −112 are 0xFFFFFFD7 and 0xFFFFFF90.
The behavior of asking sprintf to format these with %02X is technically not defined by the C standard, because an unsigned int should be passed for X, rather than an int. However, your C implementation likely formats them as “FFFFFFD7” and “FFFFFF90”.
So the first sprintf puts “FFFFFFD7” in chrHex starting at element 0.
Then the second sprintf puts “FFFFFF90” in chrHex starting at element 2, partially overwriting the first string. Now chrHex contains “FFFFFFFF90”.
Then chrHex[j] = '\0'; puts a null character at element 4, truncating the string to “FFFF”.
To fix this, change the sprintf to expect an unsigned char and pass an unsigned char value (which will be promoted to int, but sprintf expects that for hhX and works with it):
sprintf(chrHex + j, "%02hhX", (unsigned char) chrASCII[i]);
I'm trying to understand how pointers work. I created an int value and made a char pointer to point at it.
When printing the content of the address that char pointer points to, I don't get the expected result.
For example, if that char pointer points at an int holding 256, I was expecting the dereference to return 0, because (256)₁₀ = (00000001 00000000)₂ and a char pointer reads only one byte, so it should return 8 bits that are all zero.
But it returns -1.
Here's my code
#include <stdio.h>
int main()
{
int y = 256;
char *p = (char *)&y;
// returns Value -1
printf("Value %d \n", *p);
return 0;
}
But why does 255 still give me -1, even though I'm printing y?
Getting Value -1 when y is 255 makes sense. p points to a byte that looks like 11111111 in binary, *p is a signed char, and %d prints a signed value, so that's why it prints -1.
– dbush
I am trying to put an integer value into a character array so I can have byte-addressable memory. When I pass my function the integer I want to place and a character pointer into the array, both have the correct values. After the assignment, the integer pointer keeps the correct value, but the character pointer shows a negative value. This only happens two times out of ten, and on the same two numbers every time.
Here is a snippet of the function
// Places an int into an array at memLocation
void PutIntAt(int i, char *arr)
{
    printf("value: %i at: %i\n", i, arr - &mainMemory[0]);
    int *pos = (int*)arr;
    *pos = i;
    printf("*pos is: %x *arr is: %x\n", *pos, *arr);
}
The output I get from this is:
value: 150 at: 28
*pos is: 96 *arr is: ffffff96
value: 50 at: 32
*pos is: 32 *arr is: 32
value: 20 at: 36
*pos is: 14 *arr is: 14
value: 10 at: 40
*pos is: a *arr is: a
value: 5 at: 44
*pos is: 5 *arr is: 5
value: 500 at: 48
*pos is: 1f4 *arr is: fffffff4
location 48 = -12
I am compiling with gcc and using the -o option and -std=gnu99 option.
The mainMemory array is a global variable. Not sure why this is happening.
You are using the wrong format specifier for printf. This causes undefined behaviour.
You use %x, which requires unsigned int, but supplied *arr, which is a char. On your system, char has a range that includes negative values.
To fix this you could either use the %hhd or %d specifiers (the latter works due to default argument promotion). If you want to convert the char to unsigned then you have to write code to perform a conversion, e.g.:
printf("%x\n", (unsigned char)*arr);
Note: The printf specification is not very clearly written, but it's generally interpreted that %u and %x may be used with any smaller argument which gets promoted to a non-negative integer. So you can use %x rather than %hhx in my example.
I am trying to initialize a string using pointer to int
#include <stdio.h>
int main()
{
int *ptr = "AAAA";
printf("%d\n",ptr[0]);
return 0;
}
the result of this code is 1094795585
Could anybody explain this behavior and why the code gives this answer?
I am trying to initialize a string using pointer to int
The string literal "AAAA" is of type char[5], that is array of five elements of type char.
When you assign:
int *ptr = "AAAA";
you really need an explicit cast (the types don't match):
int *ptr = (int *) "AAAA";
But, still it's potentially invalid, as int and char objects may have different alignment requirements. In other words:
alignof(char) != alignof(int)
may hold. Also, in this line:
printf("%d\n", ptr[0]);
you are invoking undefined behavior (so it might print "Hello from Mars" if the compiler feels like it), as ptr[0] dereferences ptr, thus violating the strict aliasing rule.
Note that the conversion int * ---> char * is valid, and you may read the object through the char *, but not the opposite.
the result of this code is 1094795585
The result makes sense, but to get it you need to rewrite your program in a valid form. It might look like:
#include <stdio.h>
#include <string.h>

union StringInt {
    char s[sizeof("AAAA")];
    int n[1];
};

int main(void)
{
    union StringInt si;
    strcpy(si.s, "AAAA");
    printf("%d\n", si.n[0]);
    return 0;
}
To decipher it, you need to make some assumptions, depending on your implementation. For instance, if
int type takes four bytes (i.e. sizeof(int) == 4)
CPU has little-endian byte ordering (though it doesn't really matter here, since every byte is the same)
default character set is ASCII (the letter 'A' is represented as 0x41, that is 65 in decimal)
implementation uses two's complement representation of signed integers
then, you may deduce, that si.n[0] holds in memory:
0x41 0x41 0x41 0x41
that is in binary:
01000001 ...
The sign (most-significant) bit is unset, hence it is just equal to:
65 * 2^24 + 65 * 2^16 + 65 * 2^8 + 65 =
65 * (2^24 + 2^16 + 2^8 + 1) = 65 * 16843009 = 1094795585
1094795585 is correct.
'A' has the ASCII value 65, i.e. 0x41 in hexadecimal.
Four of them make 0x41414141, which is equal to 1094795585 in decimal.
You got the value 65656565 by doing 65*100^0 + 65*100^1 + 65*100^2 + 65*100^3, but that's wrong, since a byte¹ can contain 256 different values, not 100.
So the correct calculation would be 65*256^0 + 65*256^1 + 65*256^2 + 65*256^3, which gives 1094795585.
It's easier to think of memory in hexadecimal, because one hexadecimal digit directly corresponds to half a byte¹, so two hex digits are one full byte¹ (cf. 0x41). Whereas in decimal, 255 fits in a single byte¹, but 256 does not.
¹ assuming CHAR_BIT == 8
65656565 is a wrong representation of the value of "AAAA": you are representing each character separately, but "AAAA" is stored as an array. It converts to 1094795585 because the %d specifier prints the decimal value. Run this in gdb with the following commands:
x/8xb (pointer) // shows you the memory's hex values
x/d (pointer) // shows you the converted decimal value
@zenith gave you the answer you expected, but your code invokes UB. Anyway, you can demonstrate the same thing in an almost correct way:
#include <stdio.h>
int main()
{
    int i, val;
    char *pt = (char *) &val;  // casting a pointer to any object to a pointer to char: valid
    for (i = 0; i < sizeof(int); i++)
        pt[i] = 'A';           // assigning the bytes of an int: UB in the general case
    printf("%d 0x%x\n", val, val);
    return 0;
}
Assigning the bytes of an int is UB in the general case because the C standard says that "[for] signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit", and a remark adds "Some combinations of padding bits might generate trap representations, for example, if one padding bit is a parity bit."
But on common architectures there are no padding bits and all bit patterns correspond to valid numbers, so the operation is valid (but implementation-dependent) on all common systems. It is still implementation-dependent because the size of int is not fixed by the standard, nor is endianness.
So: on a 32-bit system using no padding bits, the above code will produce
1094795585 0x41414141
independently of endianness.
Using scanf, for each number typed in, I would like my program to print out two lines, for example:
byte order: little-endian
> 2
2 0x00000002
2.00 0x40000000
> -2
-2 0xFFFFFFFE
-2.00 0xC0000000
I can get it to print out the 2 in hex, but I also need a float, and of course I can't scanf as one when I need to scan as an int. If I cast to a float, when I try to printf I get a zero. If I scan in as a float, I get the correct output. I have tried to convert the int to a float, but it still comes out as zero.
Here is my output so far:
Int - float - hex
byte order: little-endian
>2
2 0x000002
2.00 00000000
It looks like I am converting to a float fine, so why won't it print as hex? If I scan in as a float, I get the correct hex representation, like the first example. This should be something simple. Keep in mind that I do need to scan in as a decimal, and I am running this in Cygwin.
Here is what I have so far:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int HexNumber;
    float convert;
    printf("Int - float - hex\n");

    int a = 0x12345678;
    unsigned char *c = (unsigned char*)(&a);
    if (*c == 0x78)
    {
        printf("\nbyte order: little-endian\n");
    }
    else
    {
        printf("\nbyte order: big-endian\n");
    }

    printf("\n>");
    scanf("%d", &HexNumber);
    printf("\n%10d ", HexNumber);
    printf("%#08x", HexNumber);

    convert = (float)HexNumber; // converts but prints a zero
    printf("\n%10.2f ", convert);
    printf("%#08x", convert); // prints zeros
    return 0;
}
try this:
int i = 2;
float f = (float)i;
printf("%#08X", *( (int*) &f ));
[EDIT]
@Corey:
let's parse it from the inside out:
&f = the address of f = say, address 0x5ca1ab1e
(int*) &f = interpret the address 0x5ca1ab1e as an integer pointer
*((int*)&f) = get the integer at address 0x5ca1ab1e
The following is more concise, but it's hard to remember the C language's operator associativity and precedence (I prefer the extra clarity that some added parentheses and whitespace provide):
printf("%#08X", *(int*)&f);
printf("%#08x", convert); // prints zeros
This line is not going to work, because you are telling printf that you are passing an int (by using %x) but in fact you are passing a float (which is promoted to double in the variadic call).
What is your intention with this line? To show the binary representation of the floating point number in hex? If so, you may want to try something like this:
printf("%lx\n", *(unsigned long *)(&convert));
What this line does is take the address of convert (&convert), which is a pointer to float, and cast it into a pointer to unsigned long (note that the type you cast to here may differ depending on the sizes of float and long on your system). The leading * then dereferences that pointer, and the resulting unsigned long is passed to printf.
Given an int x, converting to float, then printing out the bytes of that float in hex could be done something like this:
void show_as_float(int x)
{
    float xx = x;
    int i;
    // Edit: note that %f actually prints the value as a double.
    printf("%f\t", xx);
    unsigned char *ptr = (unsigned char *)&xx;
    for (i = 0; i < sizeof(float); i++)
        printf("%2.2x", ptr[i]);
}
The standards (C++ and C99) give "special dispensation" for unsigned char, so it's safe to use them to view the bytes of any object. C89/90 didn't guarantee that, but it was reasonably portable nonetheless.