Declaring a string using a pointer to int in C

I am trying to initialize a string using a pointer to int:
#include <stdio.h>
int main()
{
    int *ptr = "AAAA";
    printf("%d\n", ptr[0]);
    return 0;
}
The result of this code is 1094795585.
Could anybody explain this behavior and why the code gives this answer?

I am trying to initialize a string using a pointer to int
The string literal "AAAA" is of type char[5], that is, an array of five elements of type char.
When you write:
int *ptr = "AAAA";
you actually need an explicit cast (as the types don't match):
int *ptr = (int *) "AAAA";
But it is still potentially invalid, as int and char objects may have different alignment requirements. In other words:
alignof(char) != alignof(int)
may hold. Also, in this line:
printf("%d\n", ptr[0]);
you are invoking undefined behavior (so it might print "Hello from Mars" if the compiler likes), as ptr[0] dereferences ptr, thus violating the strict aliasing rule.
Note that it is valid to convert int * to char * and read the object through the char *, but not the opposite.
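For contrast, here is a minimal sketch of the valid direction: taking a pointer to an existing int and inspecting its bytes through unsigned char * (the byte order shown depends on your platform):
#include <stdio.h>

int main(void)
{
    int n = 0x41414141;
    unsigned char *p = (unsigned char *)&n;  /* int * ---> char * : allowed */

    /* Reading any object byte-by-byte through a character pointer
       does not violate strict aliasing. */
    for (size_t i = 0; i < sizeof n; i++)
        printf("byte %zu: 0x%02X\n", i, p[i]);

    return 0;
}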
The result of this code is 1094795585
The result makes sense, but to show it you need to rewrite your program in a valid form. It might look like this:
#include <stdio.h>
#include <string.h>

union StringInt {
    char s[sizeof("AAAA")];
    int n[1];
};

int main(void)
{
    union StringInt si;
    strcpy(si.s, "AAAA");
    printf("%d\n", si.n[0]);
    return 0;
}
To decipher it, you need to make some assumptions, depending on your implementation. For instance, if
the int type takes four bytes (i.e. sizeof(int) == 4),
the CPU uses little-endian byte ordering (though it doesn't really matter here, since every byte is the same),
the default character set is ASCII (the letter 'A' is represented as 0x41, that is 65 in decimal),
the implementation uses two's complement representation of signed integers,
then you may deduce that si.n[0] holds in memory:
0x41 0x41 0x41 0x41
that is, in binary:
01000001 ...
The sign (most significant) bit is unset, hence the value is simply:
65 * 2^24 + 65 * 2^16 + 65 * 2^8 + 65 =
65 * (2^24 + 2^16 + 2^8 + 1) = 65 * 16843009 = 1094795585
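The same experiment can also be written with memcpy instead of a union; a minimal sketch, under the same sizeof(int) == 4 assumption:
#include <stdio.h>
#include <string.h>

int main(void)
{
    int n;
    /* Copy the first four character bytes into the representation of an int. */
    memcpy(&n, "AAAA", sizeof n);
    printf("%d\n", n);  /* 1094795585 on an ASCII platform, whatever the endianness,
                           since all four bytes are identical */
    return 0;
}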

1094795585 is correct.
'A' has the ASCII value 65, i.e. 0x41 in hexadecimal.
Four of them makes 0x41414141 which is equal to 1094795585 in decimal.
You got the value 65656565 by doing 65*100^0 + 65*100^1 + 65*100^2 + 65*100^3, but that's wrong, since a byte¹ can contain 256 different values, not 100.
So the correct calculation is 65*256^0 + 65*256^1 + 65*256^2 + 65*256^3, which gives 1094795585.
It's easier to think of memory in hexadecimal because one hexadecimal digit directly corresponds to half a byte¹, so two hex digits are one full byte¹ (cf. 0x41). Whereas in decimal, 255 fits in a single byte¹, but 256 does not.
¹ assuming CHAR_BIT == 8
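A quick way to check the arithmetic is to let the compiler do it; a minimal sketch:
#include <stdio.h>

int main(void)
{
    /* 65*256^3 + 65*256^2 + 65*256 + 65 and 0x41414141 are the same number. */
    printf("%d\n", 0x41414141);
    printf("%d\n", 65 * 16777216 + 65 * 65536 + 65 * 256 + 65);
    return 0;
}
Both lines print 1094795585.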

65656565 is a wrong representation of the value of "AAAA": you are representing each character separately, but "AAAA" is stored as an array. It converts to 1094795585 because the %d specifier prints the decimal value. Run this in gdb with the following commands:
x/8xb (pointer) // this will show you the memory contents as hex bytes
x/d (pointer)   // this will show you the converted decimal value

#zenith gave you the answer you expected, but your code invokes UB. Anyway, you could demonstrate the same thing in an almost correct way:
#include <stdio.h>
int main()
{
    int i, val;
    char *pt = (char *) &val;  // casting a pointer to any object to a pointer to char: valid
    for (i = 0; i < sizeof(int); i++)
        pt[i] = 'A';           // assigning the bytes of an int: UB in the general case
    printf("%d 0x%x\n", val, val);
    return 0;
}
Assigning the bytes of an int is UB in the general case because the C standard says that, for signed integer types, "the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit", and a remark adds "Some combinations of padding bits might generate trap representations, for example, if one padding bit is a parity bit."
But on common architectures there are no padding bits and all bit patterns correspond to valid numbers, so the operation is valid (but implementation-dependent) on all common systems. It is still implementation-dependent because the size of int is not fixed by the standard, nor is the endianness.
So: on a 32-bit system using no padding bits, the above code will produce
1094795585 0x41414141
independently of endianness.

Related

Add 0 padding to a pointer address

Basically I have a hard-coded address as a decimal value, and I would like to convert that to a pointer. I have been following this link,
but I am not getting it to run, as I believe my address is being truncated, i.e. the 0s in the address are being removed.
Is there any way I can keep the 0s, or a way I can cast my address stored in buff to a pointer?
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[]) {
    int address = 200000000;
    char buff[80];
    sprintf(buff, "0x%012x", address);
    printf("%s\n", buff);
    uint32_t * const Value = (uint32_t *)(uintptr_t)buff;
    // *Value = 10;
    printf("%p\n", Value);  // Value now holds the address of the variable buff; I don't want this
    uint32_t *const Value2 = (uint32_t *)(uintptr_t)0x00000bebc200;
    printf("%p\n", Value2); // my address gets truncated; I don't want the address to be truncated
}
If %p presents only 8 hex digits for the address, that is because a pointer on your platform is only 32 bits, and in that case the leading zeros have no meaning, as there are no address bus lines A32 to A47 to set to zero. The bits are not "truncated"; they are not there in the first place.
If for some odd reason you wish to present the address as 48 bits (12 hex digits) on a platform where 32 bits is sufficient, then:
uintptr_t address = 200000000u ;
uint32_t* const Value = (uint32_t *)address ;
printf( "0x%12.12lX\n", (uintptr_t)Value ) ;
Outputs:
0x00000BEBC200
But that is only a matter of presentation; the values in address and Value are unchanged and remain 32 bits.
It is not necessary to prevent the truncation of your pointer.
When compiling for 64 bit, your pointer will be 64 bits wide.
This means it holds a number like 0x0123456789ABCDEF.
However, the output formatter %p will drop any leading zeros, as they do not change the behaviour of your program. It is like comparing 0x42 == 0x0042.
You do not need to convert your address to hex in order to use it as a pointer.
A computer saves your address in binary format. In memory, your address 200000000 will be saved as 0b1011111010111100001000000000.
The decimal and hexadecimal output formats are only used to make the output more comfortable for humans to read.
The computer does not care whether you supply decimal, hexadecimal or binary numbers; in memory it will always work with the binary representation.
This means that you can directly follow the advice of your linked answer
#include <inttypes.h> // defines PRIxPTR, see comments of #chqrlie and #JonathanLeffler
uintptr_t address= 200000000; // compiler makes sure to convert this to binary for the pc
uint32_t *Pointer = (uint32_t*) address;
printf("0x%" PRIxPTR " address\n", address); // if the ptr size is known, e.g. %lx can be used
printf("%p pointer\n", Pointer);
sprintf converts your number into an ASCII string and saves that to buff. That means you cannot cast the contents of buff to get the number back. You would need to do a string-to-integer conversion first.
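For illustration, a sketch of such a conversion using strtoul (passing base 0 lets it honour the "0x" prefix):
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    char buff[80];
    sprintf(buff, "0x%012x", 200000000u);

    /* Parse the text back into a number, then convert that number to a pointer.
       Whether the resulting pointer may actually be dereferenced depends entirely
       on whether the address is valid on your system. */
    uintptr_t address = (uintptr_t)strtoul(buff, NULL, 0);
    uint32_t *value = (uint32_t *)address;

    printf("%s -> %p\n", buff, (void *)value);
    return 0;
}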
Edit:
You can test the conversion done by your compiler by printing the results of the following comparisons (the 0b binary prefix is a compiler extension; it only became standard in C23):
printf("%d\n", address == 200000000);  // output true
printf("%d\n", address == 0xbebc200);  // output true
printf("%d\n", address == 0x00000bebc200);  // output true
printf("%d\n", address == 0b1011111010111100001000000000);  // output true

Output of the following C code

What will be the output of the following C code, assuming it runs on a little-endian machine where short int takes 2 bytes and char takes 1 byte?
#include <stdio.h>

int main() {
    short int c[5];
    int i = 0;
    for (i = 0; i < 5; i++)
        c[i] = 400 + i;
    char *b = (char *)c;
    printf("%d", *(b + 8));
    return 0;
}
On my machine it gave
-108
I don't know whether my machine is little endian or big endian. I found somewhere that it should give
148
as the output, because the low-order 8 bits of 404 (i.e. element c[4]) are 148. But I think that, due to "%d", it should read 2 bytes from memory starting at the address of c[4].
The code gives different outputs on different computers because on some platforms the char type is signed by default and on others it's unsigned by default. That has nothing to do with endianness. Try this:
char *b = (char *)c;
printf("%d\n", (unsigned char)*(b+8)); // always prints 148
printf("%d\n", (signed char)*(b+8)); // always prints -108 (=-256 +148)
The default value is dependent on the platform and compiler settings. You can control the default behavior with GCC options -fsigned-char and -funsigned-char.
c[4] stores 404. In a two-byte little-endian representation, that means two bytes of 0x94 0x01, or (in decimal) 148 1.
b+8 addresses the memory of c[4]. b is a pointer to char, so the 8 means adding 8 bytes (which is 4 two-byte shorts). In other words, b+8 points to the first byte of c[4], which contains 148.
*(b+8) (which could also be written as b[8]) dereferences the pointer and thus gives you the value 148 as a char. What this does is implementation-defined: On many common platforms char is a signed type (with a range of -128 .. 127), so it can't actually be 148. But if it is an unsigned type (with a range of 0 .. 255), then 148 is fine.
The bit pattern for 148 in binary is 10010100. Interpreting this as a two's complement number gives you -108.
This char value (of either 148 or -108) is then automatically converted to int because it appears in the argument list of a variable-argument function (printf). This doesn't change the value.
Finally, "%d" tells printf to take the int argument and format it as a decimal number.
So, to recap: Assuming you have a machine where
a byte is 8 bits
negative numbers use two's complement
short int is 2 bytes
... then this program will output either -108 (if char is a signed type) or 148 (if char is an unsigned type).
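If the goal is to read c[4] as a whole two-byte value rather than one byte of it, a sketch using memcpy sidesteps the char signedness issue entirely:
#include <stdio.h>
#include <string.h>

int main(void)
{
    short int c[5];
    for (int i = 0; i < 5; i++)
        c[i] = 400 + i;

    char *b = (char *)c;

    short int last;
    memcpy(&last, b + 8, sizeof last);  /* copy both bytes of c[4] */
    printf("%d\n", last);               /* 404 */
    return 0;
}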
To see what sizes the types have on your system (sizeof yields a size_t, hence %zu):
printf("char      = %zu\n", sizeof(char));
printf("short     = %zu\n", sizeof(short));
printf("int       = %zu\n", sizeof(int));
printf("long      = %zu\n", sizeof(long));
printf("long long = %zu\n", sizeof(long long));
Change these lines in your program:
unsigned char *b = (unsigned char *)c;
printf("%d\n", *(b + 8));
And a simple test (I know it is not guaranteed, but all C compilers I know do it this way, and I do not care about old CDC or UNISYS machines which had different address and pointer representations for different types of data):
printf(" endianness test: %s\n", (*b + (unsigned)*(b + 1) * 0x100) == 400 ? "little" : "big");
Another remark: this works only because in your program c[0] == 400.
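A variant of the same endianness test that looks at a single byte instead of doing the arithmetic (still a sketch, covering the common little/big-endian cases):
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int x = 1;
    unsigned char first;

    memcpy(&first, &x, 1);  /* look at the lowest-addressed byte of x */
    printf("endianness test: %s\n", first == 1 ? "little" : "big");
    return 0;
}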

C programming: why does the address of a char array increment from 0012FF74 to 0012FF75?

Here's the code:
char chararray[] = {68, 97, 114, 105, 110};  /* 1 byte each */
int i;
printf("chararray intarray\n");
printf("-------------------\n");
for (i = 0; i < 5; i++)
    printf("%p\n", (chararray + i));
Output:
chararray
---------
0012FF74
0012FF75
0012FF76
0012FF77
Now I'm trying to understand this in terms of hexadecimal, bits and bytes.
I understand that a char is 1 byte and it's supposed to increment by 1 byte, which is 8 bits.
But I don't understand how it's only increasing by 1 in hex. One hexadecimal digit only represents 4 bits, correct? So I'm kind of confused; it seems like it's only incrementing by 4 bits.
Any help on clearing this up is greatly appreciated, thanks!
It's true that if you represent a byte in hexadecimal, it is made of 2 hex digits, where each one stands for 4 bits.
However, the addresses you are seeing are the addresses of bytes, not their contents. Each byte receives its own address, and the addresses are sequential, just as if we gave each byte a number: byte 0, byte 1, byte 2, byte 3, ...
The address in a pointer points to a byte, not to a bit. Your pointer is of type char *, so when it is incremented, the address increases by sizeof(char). If, however, you used a different type, such as int, your pointer would increase by sizeof(int) on each increment, even if it is pointing to a char [] array.
On my machine, sizeof(int)==4, for example.
I wrote this code:
#include <stdio.h>
int main()
{
    char str[] = "ACBDEFGHIJKLMNOPQRSTUVWXYZ";
    int *a = (int *)str;  /* cast added so it compiles; the access below is purely illustrative */
    printf("Char\tAddr\n");
    while ((char *)a <= &str[25])
    {
        printf("%c\t%p\n", *a, (void *)a);
        a++;
    }
    return 0;
}
Output:
Char Addr
A 00D5F9BC
E 00D5F9C0
I 00D5F9C4
M 00D5F9C8
Q 00D5F9CC
U 00D5F9D0
Y 00D5F9D4
Every fourth character in the string is outputted.
First, pointer arithmetic like (chararray + i), where chararray points to a char (i.e. is of type char *), increases the value of the pointer by i * sizeof(char). Note that sizeof(char) is 1 by definition.
Second, a pointer represents a memory address, which is an integral value indicating a position in an (absolutely or relatively) addressed memory block, e.g. on the heap, on the stack, or in some other data segment. Confer, for example, the following statement in this online C standard draft:
6.3.2.3 Pointers
(5) An integer may be converted to any pointer type. ...
(6) Any pointer type may be converted to an integer type. ...
So when viewing the value of a pointer, we can think of an integral value, just like 256 or 1024 (when "viewed" in decimal format), or 0x100 or 0x400 (when viewed in hexadecimal format). Note that 256 in decimal is equivalent to 100 in hexadecimal, and this has nothing to do with bits and bytes.
Adding 1 to an integral value of 256 (or 0x100) gives 257 (or 0x101), regardless of whether this value stands for a position in a memory block or for oranges sold in the department store. So it's all about "outputting" integral values in hex format.
See the following code illustrating this:
#include <stdio.h>

int main()
{
    char chararray[] = {68, 97, 114, 105, 110};
    for (int i = 0; i < 5; i++) {
        char *ptr = (chararray + i);
        unsigned long ptrAsIntegralVal = (unsigned long)ptr;
        printf("ptr: %p; in decimal format: %lu\n", (void *)ptr, ptrAsIntegralVal);
    }
}
Output:
ptr: 0x7fff5fbff767; in decimal format: 140734799804263
ptr: 0x7fff5fbff768; in decimal format: 140734799804264
ptr: 0x7fff5fbff769; in decimal format: 140734799804265
ptr: 0x7fff5fbff76a; in decimal format: 140734799804266
ptr: 0x7fff5fbff76b; in decimal format: 140734799804267
Using hexadecimal numbers is just another way of representing any number. It has nothing to do with bits and bytes. One byte is 8 bits, no matter whether you represent it as a hexadecimal or a decimal number. So the address just increases by one, i.e. by 1 byte = 8 bits.
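To make the scaling concrete, here is a small sketch comparing how far a char * and an int * advance per increment over the same array (the printed addresses will vary per run):
#include <stdio.h>

int main(void)
{
    int  arr[2] = {0, 0};
    int  *ip = arr;
    char *cp = (char *)arr;  /* int * ---> char * is fine */

    /* Each increment advances the address by sizeof(*pointer) bytes. */
    printf("cp: %p  cp+1: %p  (step of %zu byte)\n",
           (void *)cp, (void *)(cp + 1), sizeof *cp);
    printf("ip: %p  ip+1: %p  (step of %zu bytes)\n",
           (void *)ip, (void *)(ip + 1), sizeof *ip);
    return 0;
}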

Convert char array to int in C

Is this a safe way to convert an array to a number?
// 23 FD 15 94 -> 603788692
char number[4] = {0x94, 0x15, 0xFD, 0x23};
uint32_t* n = (uint32_t*)number;
printf("number is %lu", *n);
MORE INFO
I'm using this on an embedded device with an LSB-first (little-endian) architecture, so it does not need to be portable.
I'm currently using shifting, but if this code is safe I prefer it.
No. You're only allowed to access something as an integer if it is an integer.
But here's how you can manipulate the binary representation of an object by simply turning the logic around:
uint32_t n;
unsigned char * p = (unsigned char *)&n;
assert(sizeof n == 4); // assumes CHAR_BIT == 8
p[0] = 0x94; p[1] = 0x15; p[2] = 0xFD; p[3] = 0x23;
The moral: You can treat every object as a sequence of bytes, but you can't treat an arbitrary sequence of bytes as any particular object.
Moreover, the binary representation of a type is very much platform dependent, so there's no telling what actual integer value you get out from this. If you just want to synthesize an integral value from its base-256 digits, use normal maths:
uint32_t n = 0x94 + (0x15 * 0x100) + (0xFD * 0x10000) + (0x23 * 0x1000000);
This is completely platform-independent and expresses what you want purely in terms of values, not representations. Leave it to your compiler to produce a machine representation of the code.
No, it is not safe.
This violates the C aliasing rules, which say that an object can only be accessed through its own type, its signed/unsigned variant, or a character type. It can also invoke undefined behavior by breaking alignment requirements.
A safe solution to get a uint32_t value from the array is to use bitwise operators (such as << and |) on the char values to form a uint32_t.
You're better off with something like this (more portable):
int n = (c[3]<<24)|(c[2]<<16)|(c[1]<<8)|c[0];
where c is an unsigned char array.
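If the array really is in the machine's native byte order, as the question states, memcpy is another well-defined option; a sketch (unlike the shift version, the result depends on endianness):
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    unsigned char number[4] = {0x94, 0x15, 0xFD, 0x23};
    uint32_t n;

    memcpy(&n, number, sizeof n);         /* copy the bytes as they are laid out */
    printf("number is %" PRIu32 "\n", n); /* 603788692 on a little-endian machine */
    return 0;
}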

Casting int pointer to char pointer causes loss of data in C?

I have the following piece of code:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
    int n = 260;
    int *p = &n;
    char *pp = (char *)p;
    *pp = 0;
    printf("n = %d\n", n);
    system("PAUSE");
    return 0;
}
The output of the program is n = 256.
I think I understand why, but I am not really sure.
Can anyone give me a clear explanation, please?
Thanks a lot.
The int 260 (= 256 * 1 + 4) will look like this in memory - note that this depends on the endianness of the machine - also, this is for a 32-bit (4 byte) int:
0x04 0x01 0x00 0x00
By using a char pointer, you point to the first byte and change it to 0x00, which changes the int to 256 (= 256 * 1 + 0).
You're apparently working on a little-endian machine. What's happening is that you're starting with an int that takes up at least two bytes. The value 260 is 256+4. The 256 goes in the second byte, and the 4 in the first byte. When you write 0 to the first byte, you're left with only the 256 in the second byte.
In C, a pointer references a block of bytes based on the type associated with the pointer. So in your case the integer pointer refers to a block 4 bytes in size, while a char is only one byte long. When you set the char to 0 it only changes the first byte of the integer value, but because of how numbers are stored in memory on modern machines (effectively in reverse order from how you would write them), you are overwriting the least significant byte (which was 4), so you are left with 256 as the value.
You can see what exactly happens by changing the value:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int n = 260;
    int *p = &n;
    char *pp = (char *)p;
    *pp = 20;
    printf("pp = %d\n", (int)*pp);
    printf("n = %d\n", (int)n);
    system("PAUSE");
    return 0;
}
The output values are
20
and
276
So basically the problem is not that you have data loss; it's that the char pointer points only to the first byte of the int and so changes only that byte. The other bytes are not changed, and that's why you get those odd values (if you are on an Intel processor, the first byte is the least significant, so you change the "smallest" part of the number).
Your problem is the assignment
*pp = 0;
You're dereferencing pp which points to n, and changing n.
However, pp is a char pointer, so it doesn't change all of n,
which is an int. This causes the binary complications described in the other answers.
In terms of the C language, the description for what you are doing is modifying the representation of the int variable n. In C, all types have a "representation" as one or more bytes (unsigned char), and it's legal to access the underlying representation by casting a pointer to char * or unsigned char * - the latter is better for reasons that would just unnecessarily complicate things if I went into them here.
As schnaader answered, on a little-endian, two's complement implementation with 32-bit int, the representation of 260 is:
0x04 0x01 0x00 0x00
and overwriting the first byte with 0 yields:
0x00 0x01 0x00 0x00
which is the representation for 256 on such an implementation.
C allows implementations which have padding bits and trap representations (which raise a signal/abort your program if they're accessed), so in general overwriting part but not all of an int in this way is not safe to do. Nonetheless, it does work on most real-world machines, and if you instead used the type uint32_t, it would be guaranteed to work (although the ordering of the bits would still be implementation-dependent).
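The same experiment written with uint32_t, which is guaranteed to have no padding bits or trap representations (the byte order, and hence the result, is still implementation-dependent); a sketch:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint32_t n = 260;

    *(unsigned char *)&n = 0;  /* overwrite the lowest-addressed byte */

    printf("n = %" PRIu32 "\n", n);  /* 256 on a little-endian machine */
    return 0;
}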
Considering 32-bit systems,
260 will be represented like this:
00000000 (Byte-3) 00000000 (Byte-2) 00000001 (Byte-1) 00000100 (Byte-0)
Now when p is cast to a char pointer, the label on the pointer changes, but the memory contents don't. It means that earlier p could access 4 bytes, as it was an integer pointer, but now it can only access 1 byte, as it is a char pointer. So only the LSB gets changed to zero, not all 4 bytes.
And it becomes
00000000 (Byte-3) 00000000 (Byte-2) 00000001 (Byte-1) 00000000 (Byte-0)
Hence, the output is 256.
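To see that byte-level picture directly, you can dump the representation before and after the assignment; a minimal sketch (the byte order shown depends on your machine):
#include <stdio.h>

static void dump(const char *label, int x)
{
    const unsigned char *p = (const unsigned char *)&x;
    printf("%s:", label);
    for (size_t i = 0; i < sizeof x; i++)
        printf(" %02X", p[i]);
    printf("\n");
}

int main(void)
{
    int n = 260;
    dump("before", n);   /* e.g. 04 01 00 00 on a little-endian machine */

    *(char *)&n = 0;     /* overwrite the first byte, as in the question */
    dump("after", n);    /* e.g. 00 01 00 00 */

    printf("n = %d\n", n);  /* 256 */
    return 0;
}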
