Basically I have a hard-coded address as a decimal value, and I would like to convert it to a pointer. I have been following this link,
but I am not getting it to run, because I believe my address is being truncated, i.e. the 0's in the address are being removed.
Is there any way I can keep the 0's, or is there a way to cast the address stored in buff to a pointer?
#include <stdio.h>
#include <stdint.h>
int main(int argc, char *argv[]) {
int address = 200000000;
char buff[80];
sprintf(buff, "0x%012x", address);
printf("%s\n", buff);
uint32_t * const Value = (uint32_t *)(uintptr_t)buff;
// *Value = 10;
printf("%p\n", Value); // Value is now storing the value of the variable buff, I dont want this
uint32_t *const Value2 = (uint32_t *)(uintptr_t)0x00000bebc200;
printf("%p\n", Value2); // my address gets truncated, dont want the address to be truncated
}
If %p presents only 8 hex digits for the address, that is because a pointer on your platform is only 32 bits, and in that case the leading zeros have no meaning: there are no address bus lines A32 to A47 to set to zero. The bits are not "truncated"; they are not there in the first place.
If for some odd reason you wish to present the address as 48 bits (12 hex digits) on a platform where 32 bits is sufficient, then:
uintptr_t address = 200000000u ;
uint32_t* const Value = (uint32_t *)address ;
printf( "0x%12.12lX\n", (uintptr_t)Value ) ;
Outputs:
0x00000BEBC200
But that is only a matter of presentation; the values in address and Value are unchanged and remain 32 bits.
It is not necessary to prevent the truncation of your pointer.
When compiling for 64-bit, your pointer will be 64 bits wide.
This means it holds a number like 0x0123456789ABCDEF.
However, the output formatter %p will drop any leading zeros, as they do not change the behaviour of your program. It is like comparing 0x42 == 0x0042.
You do not need to convert your address to hex in order to use it as a pointer.
A computer saves your address in binary format. In memory, your address 200000000 will be saved as 0b1011111010111100001000000000.
The output format of decimal and hexadecimal is only used to make it more comfortable for humans to read the output.
The computer does not care, if you supply decimal, hexadecimal or binary numbers, in-memory it will always work with binary representation.
This means that you can directly follow the advice of your linked answer
#include <inttypes.h> // defines PRIxPTR, see comments of #chqrlie and #JonathanLeffler
uintptr_t address= 200000000; // compiler makes sure to convert this to binary for the pc
uint32_t *Pointer = (uint32_t*) address;
printf("0x%" PRIxPTR " address\n", address); // if the ptr size is known, e.g. %lx can be used
printf("%p pointer\n", Pointer);
sprintf converts your number into an ASCII string and saves it to buff. That means you cannot cast the contents of buff to get the number back; you would need to do a string-to-integer (or string-to-hex) conversion first.
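If you really want to go through the text in buff, a minimal sketch of recovering the number with strtoul might look like this (the variable names follow the question's code; note that dereferencing the resulting pointer is only meaningful if that address is actually mapped on your system):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    int address = 200000000;
    char buff[80];
    sprintf(buff, "0x%012x", address);            // "0x00000bebc200"
    // Parse the hex text back into a number before treating it as an address.
    uintptr_t parsed = (uintptr_t)strtoul(buff, NULL, 16);
    uint32_t *Value = (uint32_t *)parsed;         // same numeric value as address
    printf("%s -> %p\n", buff, (void *)Value);
    return 0;
}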
Edit:
You can test the conversion of your compiler by printing the following compare statements
printf("%d\n", address == 200000000); // output true
printf("%d\n", address == 0xbebc200); // output true
printf("%d\n", address == 0x00000bebc200); // output true
printf("%d\n", address == 0b1011111010111100001000000000); // output true
Related
I am a little bit confused about the usage of memcpy. I thought memcpy could be used to copy chunks of binary data to an address we desire. I was trying to implement a small piece of logic to directly convert 2 bytes of hex to a 16-bit signed integer without using a union.
#include <stdio.h>
#include <stdint.h>
#include <string.h>
int main()
{
uint8_t message[2] = {0xfd,0x58};
// int16_t roll = message[0]<<8;
// roll|=message[1];
int16_t roll = 0;
memcpy((void *)&roll,(void *)&message,2);
printf("%x",roll);
return 0;
}
This returns 58fd instead of fd58
No, memcpy did not reverse the bytes as it copied them. That would be a strange and wrong thing for memcpy to do.
The reason the bytes seem to be in the "wrong" order in the program you wrote is that that's the order they're actually in! There's probably a canonical answer on this somewhere, but here's what you need to understand about byte order, or "endianness".
When you declare a string, it's laid out in memory just about exactly as you expect. Suppose I write this little code fragment:
#include <stdio.h>
char string[] = "Hello";
printf("address of string: %p\n", (void *)&string);
printf("address of 1st char: %p\n", (void *)&string[0]);
printf("address of 5th char: %p\n", (void *)&string[4]);
If I compile and run it, I get something like this:
address of string: 0xe90a49c2
address of 1st char: 0xe90a49c2
address of 5th char: 0xe90a49c6
This tells me that the bytes of the string are laid out in memory like this:
0xe90a49c2 H
0xe90a49c3 e
0xe90a49c4 l
0xe90a49c5 l
0xe90a49c6 o
0xe90a49c7 \0
Here I've shown the string vertically, but if we laid it out horizontally, with addresses increasing from left to right, we would see the characters of the string "Hello" laid out from left to right also, just as we would expect.
That's for strings, which are arrays of char. Integers of various sizes, however, are not really built out of characters, and it turns out that the individual bytes of an integer are not necessarily laid out in memory in "left-to-right" order as we might expect. In fact, on the vast majority of machines today, the bytes within an integer are laid out in the opposite order. Let's take a closer look at how that works.
Suppose I write this code:
int16_t i2 = 0x1234;
printf("address of short: %p\n", (void *)&i2);
unsigned char *p = &i2;
printf("%p: %02x\n", p, *p);
p++;
printf("%p: %02x\n", p, *p);
This initializes a 16-bit (or "short") integer to the hex value 0x1234, and then uses a pointer to print the two bytes of the integer in "left-to-right" order, that is, with the lower-addressed byte first, followed by the higher-addressed byte.
On my machine, the result is something like:
address of short: 0xe68c99c8
0xe68c99c8: 34
0xe68c99c9: 12
You can clearly see that the byte that's stored at the "front" of the two-byte region in memory is 34, followed by 12. The least-significant byte is stored first. This is referred to as "little endian" byte order, because the "little end" of the integer — its least-significant byte, or LSB — comes first.
Larger integers work the same way:
int32_t i4 = 0x5678abcd;
printf("address of long: %p\n", (void *)&i4);
p = &i4;
printf("%p: %02x\n", p, *p);
p++;
printf("%p: %02x\n", p, *p);
p++;
printf("%p: %02x\n", p, *p);
p++;
printf("%p: %02x\n", p, *p);
This prints:
address of long: 0xe68c99bc
0xe68c99bc: cd
0xe68c99bd: ab
0xe68c99be: 78
0xe68c99bf: 56
There are machines that lay the bytes out in the other order, with the most-significant byte (MSB) first. Those are called "big endian" machines, but for reasons I won't go into they're not as popular.
How do you construct an integer value out of individual bytes if you don't know your machine's byte order? The best way is to do it "mathematically", based on the properties of the numbers. For example, let's go back to your original array of bytes:
uint8_t message[2] = {0xfd, 0x58};
Now, you know, because you wrote it, that 0xfd is supposed to be the MSB and 0x58 is supposed to be the LSB. So one good way of combining them together into an integer is like this:
int16_t roll = message[0] << 8; /* MSB */
roll |= message[1]; /* LSB */
The nice thing about this code is that it works correctly on machines of either endianness. I called this technique "mathematical" because it's equivalent to doing it this other way:
int16_t roll = message[0] * 256; /* MSB */
roll += message[1]; /* LSB */
And, in fact, this suggestion of mine involving roll = message[0] << 8 is very close to something you already tried, but had commented out in the code you posted. The difference is that you don't want to think about it in terms of two bytes next to each other in memory; you want to think about it in terms of the most- and least-significant byte. When you say << 8, you're obviously thinking about the most-significant byte, so that should be message[0].
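Putting the pieces together, here is a self-contained sketch of the question's program rewritten with the shift technique described above (the cast in the printf is only there so the value prints as fd58 rather than sign-extended):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t message[2] = {0xfd, 0x58};
    int16_t roll = message[0] << 8;  /* MSB */
    roll |= message[1];              /* LSB */
    /* Cast to uint16_t so the hex output is not sign-extended to 32 bits. */
    printf("%x\n", (unsigned)(uint16_t)roll);  /* prints fd58 on either endianness */
    return 0;
}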
Does memcpy copy bytes in reverse order?
memcpy does not reverse the order of the bytes.
This returns 58fd instead of fd58
Yes, your computer is little endian, so the bytes 0xfd, 0x58, in that order, are interpreted by your computer as the value 0x58fd.
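If you want to confirm the byte order on your own machine, a small sketch like the following (not part of the original answers, just one common way to check) inspects the lowest-addressed byte of a known 16-bit value:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint16_t probe = 0x0102;
    uint8_t first_byte;
    memcpy(&first_byte, &probe, 1);  /* copy the lowest-addressed byte */
    if (first_byte == 0x02)
        printf("little endian\n");
    else
        printf("big endian\n");
    return 0;
}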
I am given a number, for example n = 10, and I want to calculate its length in hex with big endian and save it in an 8-byte char pointer. In this example I would like to get the following string:
"\x00\x00\x00\x00\x00\x00\x00\x50".
How do I do that automatically in C, with for example sprintf?
I am not even able to get "\x50" into a char pointer:
char tmp[1];
sprintf(tmp, "\x%x", 50); // version 1
sprintf(tmp, "\\x%x", 50); // version 2
Versions 1 and 2 don't work.
I am given a number, for example n = 10, and I want to calculate its length in hex
Repeatedly divide by 16 to find the number of hexadecimal digits. A do ... while ensures the result is 1 when n==0.
int hex_length = 0;
do {
hex_length++;
} while (n /= 16);
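Wrapped into a small helper with a few test values (the function name hex_length is only an illustrative choice, not from the original answer):

#include <stdio.h>

/* Number of hex digits needed to represent n (returns 1 for n == 0). */
static int hex_length(unsigned int n)
{
    int length = 0;
    do {
        length++;
    } while (n /= 16);
    return length;
}

int main(void)
{
    printf("%d\n", hex_length(10));   /* 1  (0xA)   */
    printf("%d\n", hex_length(255));  /* 2  (0xFF)  */
    printf("%d\n", hex_length(256));  /* 3  (0x100) */
    return 0;
}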
save it in an 8-byte char pointer.
C cannot force your system to use an 8-byte pointer, so if your system uses 4-byte char pointers, we are out of luck. Let us assume OP's system uses 8-byte pointers. Integers may be assigned to pointers (with a cast); this may or may not result in a valid pointer.
assert(sizeof (char*) == 8);
char *char_pointer = (char *) n;
printf("%p\n", (void *) char_pointer);
In this example I would like to get the following string: "\x00\x00\x00\x00\x00\x00\x00\x50".
In C, a string includes the various characters up to and including a null character. "\x00\x00\x00\x00\x00\x00\x00\x50" is not a valid C string, yet it is a valid string literal. Code cannot construct string literals at run time; they are part of the source code. Further, the relationship between n==10 and "\x00...\x00\x50" is unclear. Instead, perhaps the goal is to store n into an 8-byte array (big endian).
char buf[8];
for (int i = 7; i >= 0; i--) {
buf[i] = (char) n;
n /= 256;
}
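As a self-contained illustration of that loop (the printing is only there to verify the layout; note that for n == 10 the big-endian bytes come out as \x00 ... \x0a, not \x50 as the question suggests):

#include <stdio.h>

int main(void)
{
    unsigned long long n = 10;   /* example value from the question */
    char buf[8];
    for (int i = 7; i >= 0; i--) {
        buf[i] = (char) n;       /* least-significant byte ends up in buf[7] */
        n /= 256;
    }
    for (int i = 0; i < 8; i++)
        printf("\\x%02x", (unsigned char) buf[i]);
    printf("\n");                /* prints \x00\x00\x00\x00\x00\x00\x00\x0a */
    return 0;
}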
OP's code certainly will fail, as it attempts to store a string into a buffer that is too small. Further, "\x%x" is not valid code, as \x begins an invalid escape sequence.
char tmp[1];
sprintf(tmp, "\x%x", 50); // version 1
Just do:
#include <math.h>   /* needed for log() and floor() */

int i;
...
int length = (int) floor(log(i) / log(16)) + 1;  /* valid for i > 0 */
This will give you (in length) the number of hexadecimal digits needed to represent i (without 0x of course), for i > 0.
log(i) / log(base) is the log-base of i. The log16 of i gives you the exponent.
To make clear what we're doing here: when raising 16 to the power of the found exponent, we get back i: 16^log16(i) = i.
By taking the floor of this exponent and adding one, you get the number of digits. (A plain ceil() of the exponent would miscount exact powers of 16 such as 16 or 256, and any floating-point approach can be thrown off by rounding near those values; the integer-division loop shown above avoids both problems.)
Here's the code:
char chararray[] = {68, 97, 114, 105, 110};
/* 1 byte each*/
int i;
printf("chararray intarray\n");
printf("-------------------\n");
for(i = 0; i < 5; i++)
printf("%p\n", (chararray + i));
Output:
chararray
---------
0012FF74
0012FF75
0012FF76
0012FF77
Now I'm trying to understand this in terms of hexadecimal, bits and bytes.
I understand that a char is 1 byte and it's supposed to increment by 1 byte, which is 8 bits.
But I don't understand how it's only increasing by 1 in hex? One hexadecimal digit only represents 4 bits, correct? So I'm kind of confused; it seems like it's only incrementing by 4 bits.
Any help on clearing this up is greatly appreciated, thanks!
It's true that if you represent a byte in hexadecimal then it is made out of 2 hex digits, where each one stands for 4 bits.
However, the addresses you are seeing are addresses of bytes, and not the content of them. Each byte receives its own address, and the addresses are sequential, just like if we gave each byte a number: byte 0, byte 1, byte 2, byte 3,....
The address in a pointer points to a byte, not to a bit. Your pointer is of type char *, so when it is incremented, the address increases by sizeof(char). If, however, you used a different type, such as int, your pointer would increase by sizeof(int) on each increment, even if it is pointing to a char [] array.
On my machine, sizeof(int)==4, for example.
I wrote this code:
#include <stdio.h>
int main()
{
char str[] = "ACBDEFGHIJKLMNOPQRSTUVWXYZ";
int *a = (int *)str;
printf("Char\tAddr\n");
while(a <= &str[25])
{
printf("%c\t%p\n", *a, (void *)a);
a++;
}
return 0;
}
Output:
Char Addr
A 00D5F9BC
E 00D5F9C0
I 00D5F9C4
M 00D5F9C8
Q 00D5F9CC
U 00D5F9D0
Y 00D5F9D4
Every fourth character in the string is outputted.
First, pointer arithmetic like (chararray + i), where chararray points to a char (i.e. is of type char*), increases the value of the pointer chararray by i * sizeof(char). Note that sizeof(char) is 1 by definition.
Second, a pointer represents a memory address, which is an integral value indicating a position in an (absolutely or relatively) addressed memory block, e.g. on the heap, on the stack, or in some other data segment. See, for example, the following statements in this online C standard draft:
6.3.2.3 Pointers
(5) An integer may be converted to any pointer type. ...
(6) Any pointer type may be converted to an integer type. ...
So when viewing the value of a pointer, we can think of an integral value, just like 256 or 1024 (when "viewed" in decimal format), or 0x100 or 0x400 (when viewed in hexadecimal format). Note that 256 in decimal is equivalent to 100 in hexadecimal, and this has nothing to do with bits and bytes.
Adding 1 to an integral value of 256 (or 0x100) gives 257 (or 0x101), regardless of whether this value stands for a position in a memory block or for oranges sold in the department store. So it's all about "outputting" integral values in hex format.
See the following code illustrating this:
int main()
{
char chararray[] = {68, 97, 114, 105, 110};
for(int i = 0; i < 5; i++) {
char *ptr = (chararray + i);
unsigned long ptrAsIntegralVal = (unsigned long)ptr;
printf("ptr: %p; in decmial format: %lu\n", ptr, ptrAsIntegralVal);
}
}
Output:
ptr: 0x7fff5fbff767; in decimal format: 140734799804263
ptr: 0x7fff5fbff768; in decimal format: 140734799804264
ptr: 0x7fff5fbff769; in decimal format: 140734799804265
ptr: 0x7fff5fbff76a; in decimal format: 140734799804266
ptr: 0x7fff5fbff76b; in decimal format: 140734799804267
Using hexadecimal numbers is just another way of representing a number; it has nothing to do with bits and bytes. One byte is 8 bits, no matter whether you write its address as a hexadecimal or a decimal number. So the address just increases by one = 1 byte = 8 bits.
I am trying to initialize a string using a pointer to int
#include <stdio.h>
int main()
{
int *ptr = "AAAA";
printf("%d\n",ptr[0]);
return 0;
}
the result of this code is 1094795585
Could anybody explain this behavior and why the code gives this answer?
I am trying to initialize a string using a pointer to int
The string literal "AAAA" is of type char[5], that is array of five elements of type char.
When you assign:
int *ptr = "AAAA";
you actually must use an explicit cast (as the types don't match):
int *ptr = (int *) "AAAA";
But it is still potentially invalid, as int and char objects may have different alignment requirements. In other words:
alignof(char) != alignof(int)
may hold. Also, in this line:
printf("%d\n", ptr[0]);
you are invoking undefined behavior (so it might print "Hello from Mars" if compiler likes so), as ptr[0] dereferences ptr, thus violating strict aliasing rule.
Note that it is valid to make the transition int * ---> char * and read the object as char *, but not the opposite.
the result of this code is 1094795585
The result makes sense, but to see it you need to rewrite your program in a valid form. It might look like this:
#include <stdio.h>
#include <string.h>
union StringInt {
char s[sizeof("AAAA")];
int n[1];
};
int main(void)
{
union StringInt si;
strcpy(si.s, "AAAA");
printf("%d\n", si.n[0]);
return 0;
}
To decipher it, you need to make some assumptions, depending on your implementation. For instance, if
int type takes four bytes (i.e. sizeof(int) == 4)
CPU has little-endian byte ordering (though it does not really matter here, since every letter is the same)
default character set is ASCII (the letter 'A' is represented as 0x41, that is 65 in decimal)
implementation uses two's complement representation of signed integers
then, you may deduce, that si.n[0] holds in memory:
0x41 0x41 0x41 0x41
that is in binary:
01000001 ...
The sign (most-significant) bit is unset, hence it is just equal to:
65 * 2^24 + 65 * 2^16 + 65 * 2^8 + 65 =
65 * (2^24 + 2^16 + 2^8 + 1) = 65 * 16843009 = 1094795585
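A commonly used alternative to the union, sketched here for comparison (it is not part of this answer), is memcpy, which also avoids the alignment and strict-aliasing problems of the original cast:

#include <stdio.h>
#include <string.h>

int main(void)
{
    int n;
    /* Copy the character bytes of the literal into n.
       Assumes sizeof(int) == 4, as in the reasoning above. */
    memcpy(&n, "AAAA", sizeof n);
    printf("%d\n", n);   /* 1094795585 on a 4-byte-int, ASCII machine */
    return 0;
}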
1094795585 is correct.
'A' has the ASCII value 65, i.e. 0x41 in hexadecimal.
Four of them makes 0x41414141 which is equal to 1094795585 in decimal.
You got the value 65656565 by doing 65*100^0 + 65*100^1 + 65*100^2 + 65*100^3, but that's wrong since a byte¹ can contain 256 different values, not 100.
So the correct calculation would be 65*256^0 + 65*256^1 + 65*256^2 + 65*256^3, which gives 1094795585.
It's easier to think of memory in hexadecimal because one hexadecimal digit directly corresponds to half a byte¹, so two hex digits is one full byte¹ (cf. 0x41). Whereas in decimal, 255 fits in a single byte¹, but 256 does not.
¹ assuming CHAR_BIT == 8
65656565 is a wrong representation of the value of "AAAA": you are representing each character separately, but "AAAA" is stored as an array. It converts to 1094795585 because the %d specifier prints the decimal value. Run this in gdb with the following commands:
x/8xb (pointer) //this will show you the memory hex value
x/d (pointer) //this will show you the converted decimal value
#zenith gave you the answer you expected, but your code invokes UB. Anyway, you could demonstrate the same in an almost correct way:
#include <stdio.h>
int main()
{
int i, val;
char *pt = (char *) &val; // cast a pointer to any to a pointer to char : valid
for (i=0; i<sizeof(int); i++) pt[i] = 'A'; // assigning bytes of int : UB in general case
printf("%d 0x%x\n",val, val);
return 0;
}
Assigning bytes of an int is UB in the general case because the C standard says that [for] signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. A remark adds: Some combinations of padding bits might generate trap representations, for example, if one padding bit is a parity bit.
But on common architectures there are no padding bits and all bit patterns correspond to valid numbers, so the operation is valid (though implementation dependent) on all common systems. It is still implementation dependent because the size of int is not fixed by the standard, nor is endianness.
So: on a 32-bit system using no padding bits, the above code will produce
1094795585 0x41414141
independently of endianness.
void *memory;
unsigned int b=65535; //1111 1111 1111 1111 in binary
int i=0;
memory= &b;
for(i=0;i<100;i++){
printf("%d, %d, d\n", (char*)memory+i, *((unsigned int * )((char *) memory + i)));
}
I am trying to understand one thing.
(char*)memory+i prints out addresses in the range 2686636 - 2686735,
and when I store 65535 with memory = &b, this should store the number at addresses 2686636 and 2686637,
because every address is just one byte, i.e. 8 binary digits. So when I print out
*((unsigned int * )((char *) memory + i)) I expect it to print 2686636, 255 and 2686637, 255;
instead it prints 2686636, 65535 and 2686637, random number.
I am trying to implement memory allocation. It is a school project. This should represent memory. One address should be one byte, so the header will be 2686636-2686639 (4 bytes for the size of the block) and 2686640 (1 byte char for the free-or-used memory flag). Can someone explain this to me? Thanks.
Thanks for answers.
void *memory;
void *abc;
abc=memory;
for(i=0;i<100;i++){
*(int*)abc=0;
abc++;
}
*(int*)memory=16777215;
for(i=0;i<100;i++){
printf("%p, %c, %d\n", (char*)memory+i, *((char *)memory +i), *((char *)memory +i));
}
output is
0028FF94, , -1
0028FF95, , -1
0028FF96, , -1
0028FF97, , 0
0028FF98, , 0
0028FF99, , 0
0028FF9A, , 0
0028FF9B, , 0
I think it works: 255 gives -1 once, 65535 gives -1 two times, and 16777215 gives -1 three times.
In your program it seems that the address of b is 2686636, and when you write (char*)memory+i or (char*)&b+i, the pointer points to char, so adding one to it makes it jump to only one memory address, i.e. 2686637, and so on up to 2686735 (i.e. (char*)2686636+99).
Now when you dereference it, i.e. *((unsigned int * )((char *) memory + i)), you get the value at that memory address, but you have given a value only to b (whose address is 2686636). All the other memory addresses hold garbage values, which is what you are printing.
So first you have to store some data at the rest of the addresses (2686637 to 2686735).
Good luck!
I hope this will help.
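As a hedged sketch of the block header the question describes (4 bytes for the size followed by 1 flag byte, written into a plain byte buffer; the names and the use of memcpy are my own illustrative choices):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    unsigned char pool[100];          /* stands in for the managed memory */
    uint32_t block_size = 65535;      /* 4-byte size field of the header */
    uint8_t  used_flag  = 1;          /* 1 byte: 1 = used, 0 = free */

    memcpy(pool, &block_size, sizeof block_size);  /* bytes 0..3 */
    pool[sizeof block_size] = used_flag;           /* byte 4 */

    /* Read the header back byte by byte, like the question's loop does. */
    for (size_t i = 0; i < 5; i++)
        printf("%p, %d\n", (void *)(pool + i), pool[i]);
    return 0;
}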
I did not mention this in my comments yesterday but it is obvious that your for loop from 0 to 100 overruns the size of an unsigned integer.
I simply ignored some of the obvious issues in the code and tried to give hints on the actual question you asked (difficult to do more than that on a phone :-)). Unfortunately I did not have time to complete this yesterday. So, with one day's delay, here are my hints for you.
Try to avoid making assumptions about how big a certain type is (like 2 bytes or 4 bytes). Even if your assumption holds true now, it might change if you switch the compiler or switch to another platform. So use sizeof(type) consistently throughout the code. For a longer discussion on this you might want to take a look at: size of int, long a.s.o. on Stack Overflow. The standard mandates only the ranges a certain type should be able to hold (0-65535 for unsigned int), so only a minimal size for each type. This means that the size of int might be (and typically is) bigger than 2 bytes. Beyond primitive types, sizeof also helps you with computing the size of structures, where due to memory alignment and packing the size of a structure might be different from what you would "expect" by simply looking at its members. So the sizeof operator is your friend.
Make sure you use the correct formatting in printf.
Be careful with pointer arithmetic and casting, since the result depends on the type of the pointer (and obviously on the value of the integer you add to it).
I.e.
(unsigned int*)memory + 1 != (unsigned char*)memory + 1
(unsigned int*)memory + 1 == (unsigned char*)memory + 1 * sizeof(unsigned int)
Below is how I would write the code:
//check how big is int on our platform for illustrative purposes
printf("Sizeof int: %d bytes\n", sizeof(unsigned int));
//we initialize b with maximum representable value for unsigned int
//include <limits.h> for UINT_MAX
unsigned int b = UINT_MAX; //0xffffffff (if sizeof(unsigned int) is 4)
//we print out the value and its hexadecimal representation
printf("B=%u 0x%X\n", b, b);
//we take the address of b and store it in a void pointer
void* memory= &b;
int i = 0;
//we loop the unsigned chars starting at the address of b up to the sizeof(b)
//(in our case b is unsigned int) using sizeof(b) is better since if we change the type of b
//we do not have to remember to change the sizeof in the for loop. The loop works just the same
for(i=0; i<sizeof(b); ++i)
{
//here we kept %d for formating the individual bytes to represent their value as numbers
//we cast to unsigned char since char might be signed (so from -128 to 127) on a particular
//platform and we want to illustrate that the expected (all bytes 1 -> printed value 255) occurs.
printf("%p, %d\n", (unsigned char *)memory + i, *((unsigned char *) memory + i));
}
I hope you will find this helpful. And good luck with your school assignment; I hope you learned something you can use now and in the future :-).