How can I know the size of all data types on my computer?
The following program should do the trick for the primitive types:
#include <stdio.h>

int main(void)
{
    printf("sizeof(char)        = %zu\n", sizeof(char));
    printf("sizeof(short)       = %zu\n", sizeof(short));
    printf("sizeof(int)         = %zu\n", sizeof(int));
    printf("sizeof(long)        = %zu\n", sizeof(long));
    printf("sizeof(long long)   = %zu\n", sizeof(long long));
    printf("sizeof(float)       = %zu\n", sizeof(float));
    printf("sizeof(double)      = %zu\n", sizeof(double));
    printf("sizeof(long double) = %zu\n", sizeof(long double));
    return 0;
}
This prints the number of "bytes" each type uses, with sizeof(char) == 1 by definition. Just what 1 byte means (that is, how many bits it is) is implementation-specific and depends on the underlying hardware; the value is given by CHAR_BIT in <limits.h> and must be at least 8. Some DSPs, for instance, use 16-bit or 32-bit bytes.
You can apply sizeof to any other type whose size you need to know and print the result the same way.
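If you also want to know how many bits one of those bytes holds on your platform, CHAR_BIT from <limits.h> reports it. A minimal sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a byte; the standard guarantees it is at least 8 */
    printf("bits per byte  = %d\n", CHAR_BIT);
    printf("bits in an int = %zu\n", sizeof(int) * CHAR_BIT);
    return 0;
}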
sizeof(T) will give you the size of any type passed to it. If you're trying to find out the size of all data types used or defined in a particular program, you won't be able to; C doesn't retain that level of type information once a program is compiled.
Use sizeof to get the size of a type or of a variable (measured in bytes).
For example:
#include <stdint.h>
sizeof(int32_t) will return 4
sizeof(char) will return 1
int64_t a;
sizeof a will return 8
See http://publications.gbdirect.co.uk/c_book/chapter5/sizeof_and_malloc.html
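A compilable version of those fragments might look like this (a sketch; the exact-width types assume an 8-bit byte, which is what virtually all current platforms use):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t a;
    printf("sizeof(int32_t) = %zu\n", sizeof(int32_t)); /* 4 */
    printf("sizeof(char)    = %zu\n", sizeof(char));    /* always 1 */
    printf("sizeof a        = %zu\n", sizeof a);        /* 8 */
    return 0;
}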
After a long time, I was doing some experiments on arrays with this program, printing the output in decimal using %lu.
The confusing part I observed is that when I cast to unsigned long, the array address &thing + 1 increments by just +1:
140733866717248 140733866717248
140733866717249 140733866717249
When I don't cast, the array addresses are:
140720750924480 140720750924480
140720750924481 140720750924488
How does the address in the first part increment by just 1 when & is applied to the array?
How does the casting affect the values here?
Example program:
#include <stdio.h>
int main(void)
#if 0 /* (unsigned long)&thing --> 140733866717248
(unsigned long)&thing+1 --> 140733866717249*/
{
int thing[8];
printf("%lu %lu\n", (unsigned long)thing, (unsigned long)&thing );
printf("%lu %lu\n", (unsigned long)thing+1, (unsigned long)&thing+1);
return 0;
}
#endif
#if 1 /* &thing --> 140720750924480
&thing+1 --> 140720750924488*/
{
int thing[8];
printf("%lu %lu\n", thing, &thing );
printf("%lu %lu\n", thing+1, &thing+1);
return 0;
}
#endif
In the first example, you are adding 1 to unsigned long values, so that just adds 1.
In the second example, you are adding 1 to a pointer, which advances the pointer by the size of the pointed-to type. So with thing + 1, thing decays to an int *, so it increases by sizeof(int), while with &thing + 1, &thing is an int (*)[8], so it increases by the size of the whole array (32 bytes, assuming a 4-byte int).
Result from running the code you posted:
140733007047872 140733007047872
140733007047876 140733007047904
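To see the same effect without the casts, here is a sketch using %p, the correct conversion specifier for pointers (the variable names mirror the question's, but the program itself is mine):

#include <stdio.h>

int main(void)
{
    int thing[8];

    /* thing + 1 steps by sizeof(int); &thing + 1 steps by sizeof(int[8]) */
    printf("thing  = %p, thing + 1  = %p (step %zu bytes)\n",
           (void *)thing, (void *)(thing + 1), sizeof *thing);
    printf("&thing = %p, &thing + 1 = %p (step %zu bytes)\n",
           (void *)&thing, (void *)(&thing + 1), sizeof thing);
    return 0;
}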
What will be the output of the following C code? Assume it runs on a little-endian machine, where short int takes 2 bytes and char takes 1 byte.
#include <stdio.h>

int main() {
    short int c[5];
    int i = 0;
    for (i = 0; i < 5; i++)
        c[i] = 400 + i;
    char *b = (char *)c;
    printf("%d", *(b + 8));
    return 0;
}
On my machine it gave
-108
I don't know if my machine is little endian or big endian. I found somewhere that it should give
148
as the output, because the low-order 8 bits of 404 (i.e. element c[4]) are 148. But I think that due to "%d", it should read 2 bytes from memory starting from the address of c[4].
The code gives different outputs on different computers because on some platforms the char type is signed by default and on others it's unsigned by default. That has nothing to do with endianness. Try this:
char *b = (char *)c;
printf("%d\n", (unsigned char)*(b+8)); // always prints 148
printf("%d\n", (signed char)*(b+8)); // always prints -108 (=-256 +148)
The default value is dependent on the platform and compiler settings. You can control the default behavior with GCC options -fsigned-char and -funsigned-char.
c[4] stores 404. In a two-byte little-endian representation, that means two bytes of 0x94 0x01, or (in decimal) 148 1.
b+8 addresses the memory of c[4]. b is a pointer to char, so the 8 means adding 8 bytes (which is 4 two-byte shorts). In other words, b+8 points to the first byte of c[4], which contains 148.
*(b+8) (which could also be written as b[8]) dereferences the pointer and thus gives you the value 148 as a char. What this does is implementation-defined: On many common platforms char is a signed type (with a range of -128 .. 127), so it can't actually be 148. But if it is an unsigned type (with a range of 0 .. 255), then 148 is fine.
The bit pattern for 148 in binary is 10010100. Interpreting this as an 8-bit two's complement number gives you -108 (148 - 256 = -108).
This char value (of either 148 or -108) is then automatically converted to int because it appears in the argument list of a variable-argument function (printf). This doesn't change the value.
Finally, "%d" tells printf to take the int argument and format it as a decimal number.
So, to recap: Assuming you have a machine where
a byte is 8 bits
negative numbers use two's complement
short int is 2 bytes
... then this program will output either -108 (if char is a signed type) or 148 (if char is an unsigned type).
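If you want to see that byte layout directly, a small dump of the array's bytes (a sketch based on the question's code, reading each byte as an unsigned char) makes the order visible:

#include <stdio.h>

int main(void) {
    short int c[5];
    int i;
    for (i = 0; i < 5; i++)
        c[i] = 400 + i;

    unsigned char *b = (unsigned char *)c;
    /* on a little-endian machine with 2-byte shorts this prints:
       90 01 91 01 92 01 93 01 94 01 */
    for (i = 0; i < (int)sizeof c; i++)
        printf("%02x ", b[i]);
    printf("\n");
    return 0;
}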
To see what sizes types have on your system:
printf("char = %zu\n", sizeof(char));
printf("short = %zu\n", sizeof(short));
printf("int = %zu\n", sizeof(int));
printf("long = %zu\n", sizeof(long));
printf("long long = %zu\n", sizeof(long long));
Change these lines in your program to:
unsigned char *b = (unsigned char *)c;
printf("%d\n", *(b + 8));
And a simple test (I know that it is not guaranteed, but all C compilers I know of do it this way, and I do not care about old CDC or UNISYS machines which had different addresses and pointers to different types of data):
printf(" endianness test: %s\n", (*b + (unsigned)*(b + 1) * 0x100) == 400 ? "little" : "big");
Another remark: this test only works because in your program c[0] == 400.
As part of a program for a class, I have to print the output a specific way, split up into blocks of sixteen bytes. I've been searching for quite a while for a way to cast the pointer to an int, or another way to perform a modulus or division-remainder operation on the pointer address stored in a variable. I've hit a roadblock; does anyone here know how I could perform this seemingly simple operation? Here's the basic form of the function:
void printAddress(char *loc, char *minLoc, char *maxLoc) {
minLoc = (loc - (loc % 16));
maxLoc = minLoc + 16;
printf("%p - %p - %p", minLoc, loc, maxLoc);
}
I removed all my attempts at casting it to make it clear what I'm trying to do.
The type you're looking for is uintptr_t, defined in <stdint.h>. It is an unsigned integer type big enough to hold any pointer to data. The matching format macros are in <inttypes.h>; they allow you to print the values correctly. When you include <inttypes.h>, it is not necessary to include <stdint.h> too. I chose a field width of 16 assuming you have a 64-bit processor; you can use 8 if you're working with a 32-bit processor.
void printAddress(char *loc)
{
    uintptr_t absLoc = (uintptr_t)loc;
    uintptr_t minLoc = absLoc - (absLoc % 16);
    uintptr_t maxLoc = minLoc + 16;
    printf("0x%16" PRIXPTR " - 0x%16" PRIXPTR " - 0x%16" PRIXPTR "\n",
           minLoc, absLoc, maxLoc);
}
You could also write the rounding as a bitmask, since 16 is a power of two:
uintptr_t minLoc = absLoc & ~(uintptr_t)0x0F;
See also Solve the memory alignment in C interview question that stumped me.
Note that there might, theoretically, be a system where uintptr_t is not defined; I know of no system where it cannot actually be supported (but I don't know all systems).
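As a quick standalone check (my own sketch, not part of the answer above) that the modulus form and the bitmask form round an address down to the same 16-byte boundary:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    char buffer[64];
    uintptr_t absLoc = (uintptr_t)&buffer[5];

    /* both expressions round absLoc down to a multiple of 16 */
    uintptr_t byMod  = absLoc - (absLoc % 16);
    uintptr_t byMask = absLoc & ~(uintptr_t)0x0F;

    printf("addr   = 0x%" PRIXPTR "\n", absLoc);
    printf("byMod  = 0x%" PRIXPTR "\n", byMod);
    printf("byMask = 0x%" PRIXPTR "\n", byMask);
    printf("equal: %s\n", byMod == byMask ? "yes" : "no");
    return 0;
}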
I might not have fully understood the problem, but to me it looks as if you are trying to do the good old hexdump?
#include <stdio.h>

void hexdump(char *buf, int size)
{
    int i;
    for (i = 0; i < size; i++)
    {
        if (i % 16 == 0)
        {
            /* start a new line showing the address of this 16-byte block */
            puts("");
            printf("%p ", (void *)&buf[i]);
        }
        /* print each byte as two hex digits */
        printf("%02x ", (unsigned char)buf[i]);
    }
}
My assignment is to print the binary value of a decimal number, and I want to control the size of the array so that, as I understand it, my program will work with all compilers.
I don't really understand the sizeof operator, but I would appreciate it if you could explain where I should put sizeof in my program, and why:
void translate_dec_bin(char s[]){
unsigned int decNum;
char st[MAX_LEN] = { 0 };
int j = 0;
sizeof(decNum, 4);
decNum = atoi(s);
while (decNum > 0){
st[j] = decNum % 2;
decNum = decNum / 2;
j++;
}
while (j >=0){
printf("%d", st[j]);
j--;
}
printf("\n");
}
My thought is that when I print the number, i.e. in the line:
printf("%d", st[j]);
I should apply the operator. Is that right?
sizeof is a unary operator, meaning it takes only one operand or argument.
http://en.wikipedia.org/wiki/Sizeof
sizeof measures the byte-length of a data type in C (and C++). So, if I were to write
size_t a = sizeof(int);
a will generally be equal to 4 (see Jonathan Leffler's comment). This is because a 32-bit integer requires 4 bytes of memory (32 bits / 8 bits per byte = 4).
Answering your question about portability, sizeof(int) should work on any compiler.
You might find this question useful:
Is the size of C "int" 2 bytes or 4 bytes?
To set the size of your char array to the bit-size of an int, this should work (the array bound must be a constant expression if you want the = { 0 } initializer, so compute it directly from sizeof rather than storing it in a const variable first):
#include <limits.h>
char st[sizeof(int) * CHAR_BIT] = { 0 }; /* sizeof gives bytes; CHAR_BIT is the number of bits per byte */
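Putting that together, here is a minimal corrected sketch of the conversion function (the sizeof(decNum, 4); line is simply dropped, since sizeof is never used that way; the main() driver and the handling of 0 are my additions, not part of the original assignment):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

void translate_dec_bin(const char s[])
{
    unsigned int decNum = (unsigned int)atoi(s);
    char st[sizeof(decNum) * CHAR_BIT] = { 0 }; /* one slot per bit of decNum */
    int j = 0;

    /* collect the bits, least significant first */
    while (decNum > 0) {
        st[j] = (char)(decNum % 2);
        decNum = decNum / 2;
        j++;
    }

    if (j == 0)
        j = 1; /* so that an input of "0" prints a single 0 */

    /* print the bits back in most-significant-first order */
    while (j > 0) {
        j--;
        printf("%d", st[j]);
    }
    printf("\n");
}

int main(void)
{
    translate_dec_bin("25"); /* prints 11001 */
    return 0;
}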
I tried to understand the size of the addresses used to store variables and pointers, pointers to pointers, and pointers to pointers to pointers. The results are kind of confusing.
Here is the code:
#include <stdio.h>
#include <conio.h>
#include <stdlib.h>
int main(void)
{
char *** ppptr_string = NULL;
int *** ppptr_int = NULL;
double *** ppptr_dbl = NULL;
char c=0; int i=0; double d=0;
printf("\n %d %d %d %d %d\n", sizeof(&ppptr_string),
sizeof(ppptr_string), sizeof(*ppptr_string), sizeof(**ppptr_string),
sizeof(***ppptr_string));
printf("\n %d %d %d %d %d\n", sizeof(&ppptr_int), sizeof(ppptr_int),
sizeof(*ppptr_int), sizeof(**ppptr_int), sizeof(***ppptr_int));
printf("\n %d %d %d %d %d\n", sizeof(&ppptr_dbl), sizeof(ppptr_dbl),
sizeof(*ppptr_dbl), sizeof(**ppptr_dbl), sizeof(***ppptr_dbl));
printf("\n sizeof(char) = %d, sizeof(int) = %d, sizeof(double) = %d",
sizeof(c), sizeof(i), sizeof(d));
printf("\n sizeof(&char) = %d, sizeof(&int) = %d, sizeof(&double) = %d",
sizeof(&c), sizeof(&i), sizeof(&d));
getch();
return 0;
}
Now the confusion: I can see that a variable's address is always 2 bytes long on this machine, regardless of the type of the variable and regardless of whether it is a pointer variable. But why do I get a size of 4 for so many entries here? The pointer always has size 4 regardless of the type, the >address< at which the variable is stored is of size 2, and the content pointed to has a size depending on the type.
Why do I get 4s in the output for sizeof?
My output from Borland C++ 5.02
If you have a type T and a pointer to pointer to pointer like T ***ptr, then ptr, *ptr and **ptr are themselves pointers. You're probably working on a 32-bit system (or compiling a 32-bit application), so sizeof(ptr) == sizeof(*ptr) == sizeof(**ptr):
--- Program output ---
4 4 4 4 1
4 4 4 4 4
4 4 4 4 8
sizeof(char) = 1, sizeof(int) = 4, sizeof(double) = 8
sizeof(&char) = 4, sizeof(&int) = 4, sizeof(&double) = 4
&ptr is an address, i.e. a pointer to a T***, so its size is 4 too. Only when you dereference the pointer to its maximum level (***ptr) do you get the actual type and not another pointer.
I think what's happening is that you're getting near (16-bit) pointers for local variables, but a pointer declared as type * is a far (32-bit) pointer.
It's a quirk of working on a 16-bit Intel processor (or a 32-bit processor in "real mode"), e.g. in DOS, where you only have access to 1 MB of memory (or 640 kB in practice). The upper 16 bits of a far pointer are a segment (a 64k page in memory), and the lower 16 bits are an offset.
http://en.wikipedia.org/wiki/Real_mode
http://wiki.answers.com/Q/What_are_near_far_and_huge_pointers_in_C
Answerers not able to reproduce this are most likely using a 32-bit (or more) OS on a 32-bit (or more) processor.