I have the following code fragment,
char *chptr;
int *numptr;
printf("\nSize of char is %zu\n", sizeof(chptr));
printf("\nSize of int is %zu\n", sizeof(numptr));
For which I got the following Output,
Size of char is 4
Size of int is 4
Obviously the pointers can store addresses up to 2^32 - 1.
I am using Windows 7 32-bit with Code::Blocks 10.05 and MinGW.
But my system has a Pentium Dual-Core processor with a 36-bit address bus. Currently I have 4 GB of RAM. But suppose I increase my RAM to, say, 8 GB: how could C pointers deal with such an expanded address space? The size of a C pointer is just 32 bits, but the address space is well over 2^32.
Any Suggestions? Thank You in Advance.
PS : I have checked the answers given here which deal with address storage in pointers, but I believe they do not cover my question.
The addresses your pointers use are in the virtual address space, not the physical address space (assuming you are using a modern O/S; you don't say in your question).
The O/S kernel will map virtual memory pages to physical address pages as-and-when necessary and manage that whole memory system without the user processes being aware.
If you are using a 32-bit O/S then you will probably have 2GB of user address space to address, which will dramatically increase when you move to a 64-bit O/S.
Check out this Wikipedia article on Virtual Memory.
The easiest solution for C pointers to deal with 8 GB is to again separate code and data. You'd then be able to have 4 GB of code and 4 GB of data, so 8 GB in total. You can't legally compare code pointer 0x00010000 and data pointer 0x00010000 anyway.
But realistically, the solution is to move to 64 bits. The 36 bits hack isn't useful; 128 GB of RAM is entirely possible today.
Some hints:
Check if your OS is 32 bit or 64 bit.
Check if your compiler is capable of generating 64 bit pointers.
Check if your compiler has additional data types for 64 bit pointers.
Check if your compiler has extensions to the C language, like the keywords far or __far.
I am a beginner in Computer Science and I started learning the C language.
(So, I apologize at first if my question doesn't make any sense)
I am facing a doubt in the chapter called Pointers.
I am thinking that when we declare a variable, let's say an integer variable i,
memory is allocated for the variable in the RAM.
My book says nothing about how the computer selects the memory address that is allocated for the variable.
I am asking this because I was thinking of a computer with 8 GB RAM and a 32-bit processor.
I learnt that a 32-bit processor has 32-bit registers, which allow the processor to access at most 4 GB of the RAM.
So, is it possible that in that computer, when we declare the integer variable i, the computer allocates a memory space with an address that can't be accessed by the 32-bit processor?
And if this happens, what will be shown on the screen if I print the address of i using the address-of operator?
A computer with 8 GB of RAM will have a 64 bit processor.
Many 64-bit CPUs can still run 32-bit programs, in which case both are limited to a 4 GB address space. You will see the address &i as a 32-bit offset in your program's memory. But even if you did compile for 64 bits, you'd still see a 64-bit offset within your program - the OS instructs the CPU how different programs use different parts of memory.
I just restarted studying C programming. Now I'm studying memory storage capacity and the difference between bits and bytes, and I came across this definition.
There is a calculation for a 32-bit system. I'm very confused, because in this calculation 2^32 = 4294967296 bytes, which means about 4 gigabytes. My question is: why does 2 raised to the 32nd power result in a number of bytes instead of bits?
Thanks for helping me.
Because the memory is byte-addressable (that is, each byte has its own address).
There are two ways to look at this:
A 32-bit integer can hold one of 2^32 different values. Thus, a uint32_t can represent the values from 0 to 4294967295.
A 32-bit address can represent 2^32 different addresses. And as Scott said, on a byte-addressable system, that means 2^32 different bytes can be addressed. Thus, a process with 32-bit pointers can address up to 4 GiB of virtual memory. Or, a microprocessor with a 32-bit address bus can address up to 4 GiB of RAM.
That description is really superficial and misses a lot of important considerations, especially as to how memory is defined and accessed.
Fundamentally, an N-bit value has 2^N possible states, so a 16-bit value has 65,536 possible states. Additionally, memory is accessed as bytes, or 8-bit values. This was not always the case: older machines had different "word" sizes, anywhere from 4 to 36 bits per word, occasionally more, but over time the 8-bit word, or "byte", became the dominant form.
In every case a memory "address" contains one "word" or, on more modern machines, "byte". Memory is measured in these units, like "kilowords" or "gigabytes", for reasons of simplicity even though the individual memory chips themselves are specified in terms of bits. For example, a 1 gigabyte memory module often has 8 gigabit chips on it. These chips are read at the same time, the resulting data combined to produce a single byte of memory.
By that article's wobbly definition this means a 16-bit CPU can only address 64 KB of memory, which is wrong. DOS systems from the 1980s used two 16-bit values, a segment and an offset, to form an effective 20-bit address, and could therefore address 1 MB of memory (the 80286's 16-bit protected mode later stretched this to 16 MB). This isn't the only way in which the raw pointer size and total addressable memory can differ.
Some 32-bit x86 systems also had an alternate 36-bit memory model (PAE, Physical Address Extension) that allowed addressing up to 64 GB of memory, though an individual process was still limited to a 4 GB slice of the available memory.
In other words, for systems with a single pointer to a memory address and where the smallest memory unit is a byte, the maximum addressable memory is 2^N bytes.
Thankfully, since 64-bit systems are now commonplace and a computer with more than 64 GB of memory is not even exotic or unusual, addressing is a lot simpler now than when one had to work around pointer-size limitations.
We say that memory is byte-addressable: you can think of the byte as the smallest unit of memory, so you read bytes, not bits. One reason is that the smallest data type is 1 byte; even the boolean type in C/C++ is 1 byte.
Is there any way to make this statement valid in C ? I need to hold 18446744073709551615 elements in an array.
unsigned long long int array [18446744073709551615] ;
Most likely your program will crash if this huge amount of memory is allocated on the stack, because the stack size is generally small. You can declare the array as global (outside main) to make it valid, provided the machine has more than 18446744073709551615 * 8 bytes ≈ 147,573,952,590 GB of memory!
This depends on the size of the pointer on a specific machine/OS/platform; in other words, the machine needs to be able to access all elements. So theoretically, yes, it could be a valid statement. But it also couldn't.
Who knows.
Not possible, not even close.
18446744073709551615 is 2^64 - 1. long long is 64 bits wide, i.e. 8 bytes. To address your array you would need a 64 + 3 = 67-bit address space (3 extra bits because each element is 2^3 = 8 bytes). No current computer goes beyond 64-bit addressing. Furthermore, even 64-bit computers can only address a subset of that.
x64 for example only allows for 48 bit virtual address space and physically even less than that (AMD 48 bit, Intel less than that).
Even if there were a 128-bit processor, to handle your 2^67 bytes of memory (even if only virtual), you would need around 25 million of the biggest hard disks currently available (6 TB each).
Why is there no concept of near, far & huge pointers in a 32-bit compiler? As far as I understand, programs created with a 16-bit 8086-architecture compiler can be 1 MB in size, in which the data segment, graphics segment, etc. reside. To access all those segments and to maintain the pointer-increment concept we need these various pointers, but why is that not necessary in 32-bit?
32-bit compilers can address the entire address space made available to the program (or to the OS) with a single, 32-bit pointer. There is no need for basing because the pointer is large enough to address any byte in the available address space.
One could theoretically conceive of a 32-bit OS that addresses > 4GB of memory (and therefore would need a segment system common with 16-bit OS's), but the practicality is that 64-bit systems became available before the need for that complexity arose.
why there is no concept of near,far & huge pointer in 32 bit compiler?
It depends on the platform and the compiler. Open Watcom C/C++ supports near, far and huge pointers in 16-bit code and near and far pointers in 32-bit code.
As I know, programs created with a 16-bit 8086-architecture compiler can be 1 MB in size, in which the data segment, graphics segments, etc. reside. To access all those segments and to maintain the pointer-increment concept we need these various pointers, but why is that not necessary in 32-bit?
Because in most cases near 32-bit pointers are enough to cover the entire address space (all 2^32 bytes = 4 GB of it), which is not the case with near or far 16-bit pointers, which, as you said yourself, can only cover up to 1 MB of memory. (Strictly speaking, in the 16-bit protected mode of the 80286+, you can use 16-bit far pointers to address at least 16 MB of memory. That's because those pointers are relative to the beginning of segments, and segments on the 80286+ can start anywhere in the first 16 MB: the segment descriptors in the global descriptor table (GDT) or the local descriptor table (LDT) reserve 24 bits for the start address of a segment, and 2^24 bytes = 16 MB.)
Just a quick question:
on a 32 bit machine, is a pointer to a pointer (**p) going to be 4 bytes?
The logic is that pointers are merely memory addresses. The memory address of any stored entity in a machine with 32-bit addresses is almost certainly 4 bytes. Therefore the memory address of a stored pointer is 4 bytes. Therefore a pointer to a pointer is 4 bytes. None of this is promised by the ISO C standard. It's just the way that nearly all implementations turn out.
Yes... it will be 4 bytes... but it's not guaranteed.
Correct. Pointers usually have a fixed size. On a 32-bit machine they are usually 32 bits (= 4 bytes)
Typically yes; on 32-bit machines addresses are 4 bytes.
Your best bet, if you don't want to make assumptions, is to run the old sizeof(p).
Others have already mentioned that it's most certainly 32 bits or 4 8-bit bytes.
However, depending on the hardware and the compiler it may be less or more than that.
If your machine can address its memory only as 32-bit units at 32-bit boundaries, you will have to have a bigger pointer to address and access 8-bit portions (chars/bytes) of every 32-bit memory cell. If the compiler here decides not to have pointers of different sizes, all pointers (including pointers to pointers) become at least 34 bits long (32 bits to select the word, plus 2 to select a byte within it).
Likewise, if the program is very small and can fit into 64KB, the compiler may be able to reduce all pointers to 16 bits.