Is there any way to make this statement valid in C? I need to hold 18446744073709551615 elements in an array.
unsigned long long int array [18446744073709551615] ;
Most likely your program will crash if this huge amount of memory is allocated on the stack, because stack size is generally small. You can declare the array as global (outside main) to make it valid, but only if the machine has more than 18446744073709551615 * 8 bytes, roughly 147573952590 GB, of memory!
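For scale, here is a minimal sketch of the stack-versus-global distinction (the array size below is made up for illustration; the one in the question is far too large for any real machine):

#include <stdio.h>

/* file-scope array: lives in static storage (.bss), not on the stack */
static unsigned long long big_global[10000000]; /* ~80 MB, hypothetical size */

int main(void)
{
    /* unsigned long long big_local[10000000]; */
    /* ^ the same array as a local would likely overflow the stack,
       which is typically only a few megabytes */
    big_global[0] = 42;
    printf("%llu\n", big_global[0]);
    return 0;
}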
This depends on the size of the pointer on a specific machine/OS/platform; in other words, the machine needs to be able to address all the elements. So theoretically, yes, it could be a valid statement. But in practice it couldn't.
Who knows.
Not possible, not even close.
18446744073709551615 is 2^64 - 1. A long long is 64 bits wide, i.e. 8 bytes. To address your array you would need a 64 + 3 = 67 bit address space. No current computer goes beyond 64-bit addressing, and even 64-bit computers can only address a subset of that.
x64, for example, only allows a 48-bit virtual address space, and physically even less than that (48 bits on AMD, less on Intel).
Even if there were a 128-bit processor, to hold your 2^67 bytes of memory (even if only virtual) you would need around 25 million of the biggest hard disks currently available (6 TB each).
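To see that 67-bit requirement concretely, here is a small sketch: just computing the array's size in bytes overflows 64-bit unsigned arithmetic (the multiplication wraps modulo 2^64):

#include <stdio.h>

int main(void)
{
    unsigned long long n = 18446744073709551615ULL; /* 2^64 - 1 elements */
    /* n * 8 mathematically needs 67 bits, so it wraps around: */
    printf("%llu\n", n * 8); /* prints 18446744073709551608, not 2^67 - 8 */
    return 0;
}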
In a 64-bit system every memory cell is 64 bits, so how does it store an int variable that needs less space? Wouldn't it spend one 64-bit address anyway? If so, why bother with different types of variables if they are going to take up one cell anyway?
Your use of terminology is all over the place.
A memory cell typically corresponds to a single storage element (a flip-flop or a capacitor) at the hardware level, and is very likely 1 bit large, assuming binary computers.
What I think you are asking about is the smallest addressable unit in a computer, also known as a byte, which is very likely 8 bits large.
This has nothing to do with the data register width of the CPU, which is what one usually refers to when talking about "64 bit computers". The data register width is the largest chunk of data that the CPU can process in a single instruction, but not necessarily the smallest. And it has no necessary relation to the address bus width of the computer, though the two are often the same nowadays.
When you declare a variable in C, the size allocated depends on the system. An int, for example, is very likely 32 bits large on both 32-bit and 64-bit computers. Notably, all mainstream 64-bit computers also support 32-bit and smaller instructions, so it doesn't necessarily make sense for the compiler to allocate more memory than 32 bits: you might get larger memory use with no speed gained.
I believe the term you are fishing for is alignment. It is only inefficient for the computer to read smaller chunks when they sit at misaligned addresses, that is, addresses not evenly divisible by the data register width (expressed in bytes). Such accesses are typically slower, or in some cases not supported at all. So a 64-bit compiler might decide to allocate a small variable inside an 8-byte chunk and leave the remaining, unused bytes as padding. However, when the compiler optimizes for size, it may choose to store data in a more memory-efficient way, at the cost of access time.
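As a minimal sketch of the sizes and padding described above (the exact layout is implementation-defined; a 4-byte int and 3 padding bytes are typical, not guaranteed):

#include <stdio.h>

struct example {
    char c; /* 1 byte */
    /* the compiler will likely insert 3 padding bytes here */
    int i;  /* typically 4 bytes, aligned to a 4-byte boundary */
};

int main(void)
{
    printf("sizeof(int)            = %zu\n", sizeof(int));            /* typically 4 */
    printf("sizeof(struct example) = %zu\n", sizeof(struct example)); /* typically 8, not 5 */
    return 0;
}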
I have the following code fragment,
#include <stdio.h>
char *chptr;
int *numptr;
/* sizeof yields a size_t, so %zu (not %d) is the correct specifier */
printf("\nSize of char is %zu\n", sizeof(chptr));
printf("\nSize of int is %zu\n", sizeof(numptr));
For which I got the following Output,
Size of char is 4
Size of int is 4
Obviously the pointers can store addresses up to 2^32 - 1.
I am using Windows 7 32-bit Operating System with Code::Blocks 10.05 and MingW.
But my system has a Pentium Dual-Core processor with a 36-bit address bus. Currently I have 4 GB of RAM, but suppose I increase my RAM to, say, 8 GB: how could C pointers deal with such an expanded address space? The size of a C pointer is just 32 bits, but the address space is well over 2^32.
Any suggestions? Thank you in advance.
PS: I have checked the answers given here which deal with address storage in pointers, but I believe they do not cover my question.
The addresses your pointers will use are in the virtual address space, not the physical address space (assuming you are using a modern O/S; you don't say in your question).
The O/S kernel will map virtual memory pages to physical address pages as-and-when necessary and manage that whole memory system without the user processes being aware.
If you are using a 32-bit O/S then you will probably have 2GB of user address space to address, which will dramatically increase when you move to a 64-bit O/S.
Check out this Wikipedia article on Virtual Memory.
The easiest way for C pointers to deal with 8 GB is to once again separate code and data: you could then have 4 GB of code and 4 GB of data, 8 GB in total. You can't legally compare a code pointer 0x00010000 and a data pointer 0x00010000 anyway.
But realistically, the solution is to move to 64 bits. The 36-bit hack isn't useful; 128 GB of RAM is entirely possible today.
Some hints:
Check if your OS is 32 bit or 64 bit.
Check if your compiler is capable of generating 64 bit pointers (see the sketch below).
Check if your compiler has additional data types for 64 bit pointers.
Check if your compiler has extensions to the C language, such as the keywords far or __far.
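As a quick check of the pointer width your compiler actually targets, here is a minimal sketch using only standard headers:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 4 bytes on a 32-bit target, 8 bytes on a 64-bit target */
    printf("sizeof(void *)    = %zu bytes\n", sizeof(void *));
    printf("sizeof(uintptr_t) = %zu bytes\n", sizeof(uintptr_t));
    return 0;
}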
Why is there no concept of near, far & huge pointers in a 32-bit compiler? As far as I understand, programs created with a compiler for the 16-bit 8086 architecture can be 1 MB in size, in which the data segment, graphics segment, etc. live. To access all those segments and to maintain the pointer-increment concept we need these various pointers, but why is that not necessary with 32 bits?
32-bit compilers can address the entire address space made available to the program (or to the OS) with a single, 32-bit pointer. There is no need for segment basing because the pointer is large enough to address any byte in the available address space.
One could theoretically conceive of a 32-bit OS that addresses more than 4 GB of memory (and would therefore need a segmentation system like the ones common in 16-bit OSes), but in practice 64-bit systems became available before the need for that complexity arose.
Why is there no concept of near, far & huge pointers in a 32-bit compiler?
It depends on the platform and the compiler. Open Watcom C/C++ supports near, far and huge pointers in 16-bit code and near and far pointers in 32-bit code.
As far as I understand, programs created with a compiler for the 16-bit 8086 architecture can be 1 MB in size, in which the data segment, graphics segments, etc. live. To access all those segments and to maintain the pointer-increment concept we need these various pointers, but why is that not necessary with 32 bits?
Because in most cases near 32-bit pointers are enough to cover the entire address space (all 2^32 bytes = 4 GB of it), which is not the case with near or far 16-bit pointers, which, as you said yourself, can only cover up to 1 MB of memory. (Strictly speaking, in the 16-bit protected mode of the 80286 and later, you can use 16-bit far pointers to address at least 16 MB of memory. That is because those pointers are relative to the beginning of segments, and segments on the 80286+ can start anywhere in the first 16 MB, since the segment descriptors in the global descriptor table (GDT) or the local descriptor table (LDT) reserve 24 bits for the start address of a segment: 2^24 bytes = 16 MB.)
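For reference, here is a sketch of the real-mode 8086 arithmetic being discussed: a far pointer is a (segment, offset) pair combined as segment * 16 + offset, which tops out just above 1 MB:

#include <stdio.h>
#include <stdint.h>

/* real-mode physical address: segment * 16 + offset */
static uint32_t phys_addr(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + (uint32_t)off;
}

int main(void)
{
    /* highest real-mode address, 0xFFFF:0xFFFF -> 0x10FFEF (just over 1 MB) */
    printf("0x%06X\n", (unsigned)phys_addr(0xFFFF, 0xFFFF));
    /* different seg:off pairs can name the same byte */
    printf("0x%06X == 0x%06X\n",
           (unsigned)phys_addr(0x1234, 0x0010),
           (unsigned)phys_addr(0x1235, 0x0000));
    return 0;
}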
I am reading some C text at the address:
https://cs.senecac.on.ca/~lczegel/BTP100/pages/content/compu.html
In the section "Addressible Memory", they say that "The maximum size of addressable primary memory depends upon the size of the address registers."
I do not understand why is that.
Can anyone give me a clear explanation, please?
Thanks a lot.
If you have 32-bit registers, then the highest address you can store in a single register is 2^32-1, so you can address 2^32 units (in modern computers, units are almost always bytes). A larger number simply won't fit.
You can get around this by using memory addresses larger than a single register can hold (and some CPUs/operating systems have features for doing so), but using such addresses/pointers will be slower because the CPU has to fiddle with multiple registers.
As an example, suppose you have 32-bit registers but 64-bit pointers and want to increment a pointer to find the next item in an array of char (++p). Instead of performing a simple increment instruction, the processor will have to
increment the lower 32 bits;
check if the result is zero (overflow);
increment the upper half as well if overflow occurred.
Simplifying a bit, this means it has to perform a branch (if-then-else) instruction, which is one of the slowest and most complex instructions a modern CPU performs.
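Here is a rough sketch in C of that multi-word increment, modeling a 64-bit address as two 32-bit halves (illustrative only; in reality the compiler emits this in assembly):

#include <stdint.h>

/* a 64-bit address emulated with two 32-bit "registers" */
typedef struct {
    uint32_t lo;
    uint32_t hi;
} addr64;

static void addr64_inc(addr64 *a)
{
    a->lo += 1;
    if (a->lo == 0) { /* lower half wrapped around... */
        a->hi += 1;   /* ...carry into the upper half (the extra branch) */
    }
}

int main(void)
{
    addr64 p = { 0xFFFFFFFFu, 0 };
    addr64_inc(&p);   /* lo wraps to 0, hi becomes 1 */
    return (int)p.hi; /* 1 */
}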
(See, e.g., x86 memory segmentation on Wikipedia for a multi-register addressing scheme used in Intel processors.)
Keeping it simple: the address registers are used to store and refer to addresses of memory; since their size and number is fixed, there is a maximum address.
Obviously you can't exploit more memory than what is addressable (because the machine wouldn't know how to refer to it), so the usable memory is in fact limited by the maximum address that can be expressed by the address registers.
If you have one address register holding a 16-bit address, you can have a maximum of 2^16 addresses (0 through 2^16 - 1).
However many registers there are, the number of addresses they can point to is limited by their width (number of bits).
Thus, the maximum size of addressable primary memory depends upon the size of the address registers.
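The arithmetic behind that statement fits in a few lines; a tiny sketch printing how many distinct addresses an N-bit register can express:

#include <stdio.h>

int main(void)
{
    /* an N-bit address register can name 2^N distinct locations */
    for (int bits = 16; bits <= 48; bits += 16)
        printf("%2d-bit register: %llu addresses\n", bits, 1ULL << bits);
    return 0;
}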
Can we anyhow change the size of the pointer from 2 bytes so it can occupy more than 2 bytes?
Sure, compile for a 32 (or 64) bit platform :-)
The size of pointers is platform-specific; it would be 2 bytes only on 16-bit platforms, which have not been widely used for more than a decade. Nowadays all mainstream (desktop / laptop / server) platforms are at least 32 bits.
If your pointer size is 2 byte that means you're running on a 16-bit system.
The only way to increase the pointer size is to use a 32-bit or 64-bit system instead (which would mean any desktop or laptop computer built in the last 15 years or so).
If you're running on some embedded device that uses 16 bits, your only option is to switch to another device that uses 32 bits (or just live with your pointers being 16-bit).
When a processor is said to be "X-bit" (where X is 16, 32, 64, etc), that X refers to the size of the memory address register. Thus a 16-bit system has a memory address register of 2 bytes.
You cannot cast a 4-byte address to anything smaller because it would lose part of where it's pointing to. (A 2-byte memory address register can only point to 2^16=64KB of memory, whereas a 4-byte register can point to 2^32=4GB of memory.)
You can always "step up" (i.e., run a 32-bit software application on a 64-bit computer) because there's no loss in pointer range. But you can never step down, which is why 64-bit programs don't run on 32-bit systems.
Think of a pointer as a number, only instead of an actual value used for computation, it's the number of a 'slot' in the memory map of the system.
A pointer must be able to represent the highest position of the memory map. That is, it must have at least the amount of bytes required to represent the number of the highest position.
In a 16-bit system, the highest possible position is 0xFFFF (a 16-bit number with all the bits set to 1). A pointer must also have 16 bits, so it can reach that number.
Generalizing, in an X-bit system, a pointer will have X bits.
You can store a pointer in a larger variable, the same way you can store the number 1 in a char, an int, or an unsigned long long if you wanted to; but there's little point to it. Just as a shorter pointer can't reach the highest memory position, a longer pointer would be able to point to things that can't actually exist in memory, so why have it?
Also, you'd have to 'trick' the compiler to do that. If you use pointer notation in your code, the compiler will always use the correct number of bytes for it. You can instruct the compiler to target another platform, though.
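For completeness, storing a pointer in a wider integer, as described above, is what uintptr_t is for; a minimal sketch (the round trip is implementation-defined but works on common platforms):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int value = 1;
    int *p = &value;

    uintptr_t n = (uintptr_t)p; /* pointer carried in an integer wide enough to hold it */
    int *back = (int *)n;       /* convert back before dereferencing */

    printf("%d\n", *back);      /* prints 1 */
    return 0;
}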