C heap environment variable

Is there an environment variable in C which stores the heap word size, or at least a variable which stores the type of system?
For example, on a 64-bit system it would be 8 bytes and on a 32-bit system 4 bytes.

Note that 64-bit systems can execute 32-bit binaries; in that case, sizeof(void *), sizeof(int), etc. will be 4, even on a 64-bit system.
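No environment variable is needed for the binary's own word size, because the compiler already knows it. A minimal sketch using only standard C (sizeof plus the UINTPTR_MAX macro from <stdint.h>):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Reports the pointer width of this binary, not of the CPU:
       a 32-bit binary prints 4 even on a 64-bit machine. */
    printf("pointer size: %zu bytes\n", sizeof(void *));

#if UINTPTR_MAX == 0xFFFFFFFFu
    puts("compiled as a 32-bit binary");
#elif UINTPTR_MAX == 0xFFFFFFFFFFFFFFFFu
    puts("compiled as a 64-bit binary");
#endif
    return 0;
}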
You can get some additional mileage using the uname system call (see uname -m). For Intel, it will be x86_64 (64-bit) or i686 (32-bit). If you need a solution for Intel only, this can work. You can extend it to other processors (ARM, etc.), but you will need to handle each platform your code may run on. See "machine" in: https://man7.org/linux/man-pages/man2/uname.2.html
To make things more complex, you might be running a 32-bit operating system on a 64-bit processor (or some virtualized environment). In those cases, uname reports on the operating system, not on the processor. It is not clear which one you are looking for.
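For what it's worth, a minimal sketch of the uname() approach on Linux; it only handles the Intel machine strings mentioned above, and other architectures would need their own cases:

#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>

int main(void) {
    struct utsname u;
    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }
    /* u.machine reflects the kernel's view, e.g. "x86_64" or "i686";
       as noted above, a 32-bit OS on a 64-bit CPU reports 32-bit here. */
    if (strcmp(u.machine, "x86_64") == 0)
        puts("64-bit kernel (x86_64)");
    else if (strcmp(u.machine, "i686") == 0 || strcmp(u.machine, "i386") == 0)
        puts("32-bit kernel (x86)");
    else
        printf("other machine: %s\n", u.machine);
    return 0;
}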

Related

sizeof(int*) in 32-bit compatibility mode

Running the following on Linux x86-64 compiled with gcc -m32
#include <stdio.h>
#include <limits.h>
int main(void) {
    int a = 4;
    int *ptr = &a;
    printf("int* is %zu bits in size\n", CHAR_BIT * sizeof(ptr));
    return 0;
}
results in
int* is 32 bits in size
Why I convinced myself it ought to be 64 bits (prior to executing): since it is running on a 64-bit computer, addressing the memory needs 64 bits, and since &a is the address of where the value 4 is stored, it should be 64 bits. The compiler could implement a trick by using the same offset for all pointers, since it is running in compatibility mode, but it couldn't guarantee consistent data after calling malloc multiple times. This is wrong. Why?
On the hardware level, your typical x86-64 processor has a 32-bit compatibility mode, where it behaves like an x86 processor. That means memory is addressed using 4 bytes, hence your pointer is 32 bits.
On the software level, the 64-bit kernel allows 32-bit processes to run in this compatibility mode.
This is how 'old' 32-bit programs can run on 64-bit machines.
The compiler, particularly with the -m32 flag, emits code for x86 addressing, which is why int* is also 32 bits.
Modern CPUs have a memory management unit, which makes it possible for every program to have its own address space; two different programs can even use the same addresses. This unit is also what detects segmentation faults (access violations). Because of it, the addresses a program uses are not the same as the addresses on the address bus that connects the CPU and the peripherals, including RAM, so it is no problem for the OS to assign 32-bit addresses to a program.
An x86-64 machine running a 64-bit OS runs 32-bit processes in "compat" mode, which is different from "legacy" mode. In compat mode, user-space (i.e. the 32-bit program's point of view) works the same as on a system in legacy mode (32-bit everything).
However, the kernel is still 64-bit, and can map the compat-mode process's virtual address space anywhere in the physical address space (so two different 32-bit processes can each be using 4 GB of RAM). I don't know whether the page tables for a compat process need to be different from those for 64-bit processes. I found http://wiki.osdev.org/Setting_Up_Long_Mode, which has some material but doesn't answer that question.
In compat mode, system calls switch the CPU to 64-bit long mode, and returns from system calls switch back. Kernel functions that take a user-space pointer as an argument need simple wrappers to do whatever is necessary to get the appropriate address for use from kernel space.
The high-level answer is that there is hardware support for everything compat mode needs, so it can be just as fast as legacy mode (a 32-bit kernel).
IIRC, 32-bit virtual addresses are zero-extended to 64 bits by the MMU hardware, so the kernel just sets up the page tables accordingly.
If you use an address-size override prefix in 64-bit code, the 32-bit address formed from the 32-bit registers involved is zero-extended. (There's an x32 ABI for code that doesn't need more than 4 GB of RAM and would benefit from smaller pointers, but still wants the performance benefit of more registers, and of having them be 64-bit.)

Difference between 32bit and 64bit Assembler Programs?

As far as I noticed, both 32-bit and 64-bit programs use the FLAT memory model. A 32-bit program can only address 4 GB, while using 64-bit registers (rcx, for example) makes it possible to use the 40 to 48 address bits modern CPUs provide and address even more.
So besides this, and some additional control registers that a 32-bit processor does not have, I ask myself whether it is possible to run 32-bit code in Linux flawlessly.
I mean, must every piece of C code I execute be 64-bit, for instance?
I can see that, since C builds on a stack frame and base pointer, pushing a 32-bit base pointer onto the stack may introduce problems where the stack pointer is 64-bit and the push and pop opcodes are used in 32-bit fashion.
So what are the differences, and is it possible to actually run 32-bit code under a 64-bit Linux kernel?
[Update]
To state the scenario clearly: I am running a 64-bit program, load an ELF64 file into memory, map everything, and call the method directly. The idea is to generate asm code dynamically.
The main difference between them is the calling conventions. On 32-bit there are several: __stdcall, __fastcall, ...
On 64-bit (x64) there is only one (on Windows platforms; about others I don't know), and it has requirements which are very different from 32-bit.
More on https://future2048.blogspot.com
Note that ARM and IA64 (Itanium) also use different encodings than x64 (Intel64/AMD64).
You also get 8 more general-purpose registers, r8..r15, with sub-registers
r8d..r15d, r8w..r15w, and r8b..r15b.
For SIMD-based code, 8 additional registers, xmm8..xmm15, are also present.
Exception handling is data-based on 64-bit; on 32-bit it was code-based. So on 64-bit, no instructions are needed to build an exception frame for unwinding; the exception handling is completely data-based, so no additional instructions are required for try/catch.
The memory limit of 2 GB for 32-bit apps (or 3 GB with /LARGEADDRESSAWARE on a 32-bit Windows OS, or 4 GB on a 64-bit Windows OS) is now much larger.
More on https://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
And of course, the general-purpose registers are 64 bits wide instead of 32, so integer calculations can process values bigger than the 32-bit limit of 0..4294967295 (signed: -2147483648..+2147483647).
Also, a simple MOV instruction can read or write a QWORD (64 bits) at once; on 32-bit it could only move a DWORD (32 bits).
Some instructions have been removed: PUSHA and POPA disappeared.
And one encoding form of INC/DEC is now used as the REX byte prefix encoding.
Some 32 bit code will work in a 64 bit environment without modification. However, in general, functions won't work because the calling conventions are probably different (depends on the architecture). Depending on the program, you could write some glue to convert arguments to the calling convention you want. So you can't just link a 32-bit library into your 64-bit application.
However, if your entire application is 32-bit, you will probably be able to run it just fine. The word size of the kernel doesn't really matter. Popular operating systems all support running 32-bit code with a 64-bit kernel: Linux, OS X, and Windows support this.
In short: you can run 32-bit applications on a 64-bit system but you can't mix 32-bit and 64-bit code in the same application (barring deep wizardry).

Size of integer in C [duplicate]

Possible Duplicate:
Does the size of an int depend on the compiler and/or processor?
Does the size of an integer depend on the compiler, the OS, or the processor? What if I use gcc on either a 32-bit or a 64-bit OS, running on either a 32-bit or a 64-bit machine (only a 64-bit OS in the latter case)?
It depends on the combination of compiler, processor and OS.
For instance, on a 64-bit Intel CPU in 64-bit mode, a long int is 4 bytes on Windows, while on Linux and on the Mac it is 8 bytes. int is 4 bytes in all three OSes on Intel.
The compiler implementer also has a choice, but usually uses what the OS uses. It could well be that a compiler vendor with C compilers for all three platforms decides to use the same sizes on all of them.
Of course, it doesn't make sense to make int 4 bytes on a 16-bit CPU (although it would be possible).
So it depends on all three things you mention.
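For instance, a trivial program like the following (standard C, nothing platform-specific) prints 4/8/8 under the LP64 model used by 64-bit Linux and the Mac, but 4/4/8 under the LLP64 model used by 64-bit Windows:

#include <stdio.h>

int main(void) {
    printf("int:    %zu bytes\n", sizeof(int));
    printf("long:   %zu bytes\n", sizeof(long));
    printf("void *: %zu bytes\n", sizeof(void *));
    return 0;
}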
It depends on compiler options, and of course on the compiler itself. But the compiler was made for a specific OS, so it depends on the OS; and/or the compiler was made for a specific processor, so it depends on the processor.
The size of int, long, etc, depends on the compiler, but the compiler implementer will choose the best size for a particular processor and/or OS.
It depends on the system. And by system I mean any combination of processor and operating system, but it usually is bound to the "natural" integer size of the processor in use.
Does the size of an integer depend on the compiler, the OS, or the processor?
Yes. It can depend on any of those things.
It is actually defined by the platform ABI, which is set by the compiler and runtime libraries, but compilers use different ABIs on different OS's or architectures.
The size of an int, and of pretty much every other type in C, is implementation-defined. Certain compilers may make guarantees on specific platforms, but this is implementation-dependent. It's nothing you should ever rely on.
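If you need exact widths rather than whatever the implementation picked, the usual route is the fixed-width types C99 added in <stdint.h> (with the matching printf macros from <inttypes.h>):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int32_t  a = 42;           /* exactly 32 bits on any conforming platform */
    uint64_t b = 1ULL << 40;   /* exactly 64 bits, whatever the compiler/OS/CPU */
    printf("%" PRId32 " %" PRIu64 "\n", a, b);
    return 0;
}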

Can a C compiler generate an executable 64-bits where pointers are 32-bits?

Most programs fit well in a <4 GB address space but need to use new features only available on the x64 architecture.
Are there compilers/platforms where I can use x64 registers and specific instructions while preserving 32-bit pointers to save memory?
Is it possible to do that transparently on legacy code? What switch does that?
OR
What changes to the code are necessary to get 64-bit features while keeping 32-bit pointers?
A simple way to circumvent this, if you have only a few types of structures to point to, is to allocate big arrays for your data and do the indexing with uint32_t.
So a "pointer" in such a model would be just an index into a global array. Addressing with that should usually be efficient enough with a decent compiler, and it saves you some space. You'd lose other things you might be interested in, dynamic allocation for instance.
Another way to achieve something similar is to encode a pointer as the difference from its actual location. If you can ensure that difference always fits into 32 bits, you could gain, too.
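A minimal sketch of the first idea, indices into a global pool; the node type and the pool size here are made up for illustration:

#include <stdio.h>
#include <stdint.h>

#define POOL_SIZE 1000000u          /* hypothetical capacity */

struct node {
    int      value;
    uint32_t next;                  /* a 4-byte "pointer": an index into the pool */
};

static struct node pool[POOL_SIZE];
static uint32_t pool_used = 1;      /* index 0 is reserved as the "NULL" index */

/* hand out the next free slot; 0 means the pool is exhausted */
static uint32_t node_alloc(void) {
    return pool_used < POOL_SIZE ? pool_used++ : 0;
}

int main(void) {
    uint32_t head = node_alloc();
    pool[head].value = 42;
    pool[head].next = 0;            /* the "NULL" index */
    printf("%d\n", pool[head].value);
    return 0;
}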
It's worth noting that there is an ABI in development for Linux, X32, that lets you build an x86_64 binary that uses 32-bit indices and addresses.
It's only relatively new, but interesting nonetheless.
http://en.wikipedia.org/wiki/X32_ABI
Technically, it is possible for a compiler to do so. AFAIK, in practice it isn't done. It has been proposed for gcc (even with a patch here: http://gcc.gnu.org/ml/gcc/2007-10/msg00156.html) but never integrated (at least, it was not documented the last time I checked). My understanding is that it also needs support from the kernel and the standard library to work (i.e. the kernel would need to set things up in a way that is not currently possible, and using the existing 32- or 64-bit ABI to communicate with the kernel would not work).
What exactly are the "64-bit features" you need? Isn't that a little vague?
Found this while searching myself for an answer:
http://www.codeproject.com/KB/cpp/smallptr.aspx
Also pick up the discussion at the bottom...
Never had any need to think about this, but it is interesting to realize that one can be concerned with how much space pointers need...
It depends on the platform. On Mac OS X, the first 4 GB of a 64-bit process's address space is reserved and unmapped, presumably as a safety feature so that no 32-bit value is ever mistaken for a pointer. There may be a way to defeat this if you try. I worked around it once by writing a C++ "pointer" class which adds 0x100000000 to the stored value. (This was significantly faster than indexing into an array, which also requires finding the array base address and multiplying before the addition.)
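A rough C rendering of that trick, assuming a 64-bit process where every stored pointer actually lies within 4 GB above that base (the 0x100000000 value is from the answer above; the helper names are made up):

#include <stdint.h>

/* store a pointer known to lie in [0x100000000, 0x200000000) as 32 bits */
static inline uint32_t ptr_compress(void *p) {
    return (uint32_t)((uintptr_t)p - 0x100000000ULL);
}

/* expand the stored 32-bit value back into a real pointer */
static inline void *ptr_expand(uint32_t v) {
    return (void *)((uintptr_t)v + 0x100000000ULL);
}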
On the ISA level, you can certainly choose to load and zero-extend a 32-bit value and then use it as a 64-bit pointer. It's a good feature for a platform to have.
No change should be necessary to a program unless you wish to use 64-bit and 32-bit pointers simultaneously. In that case you are back to the bad old days of having near and far pointers.
Also, you will certainly break ABI compatibility with APIs that take pointers to pointers.
I think this would be similar to the MIPS n32 ABI: 64-bit registers with 32-bit pointers.
In the n32 ABI, all registers are 64-bit (so it requires a MIPS64 processor), but addresses and pointers are only 32-bit (when stored in memory), decreasing the memory footprint. When a 32-bit value (such as a pointer) is loaded into a register, it is sign-extended to 64 bits. When the processor uses the pointer/address for a load or store, all 64 bits are used (the processor is not aware of the n32-ness of the software). If your OS supports n32 programs (maybe the OS also follows the n32 model, or it may be a proper 64-bit OS with added n32 support), it can place all memory used by the n32 application at suitable virtual addresses (e.g. the lowest 2 GB and the highest 2 GB, which is where sign-extended 32-bit addresses land). The only glitch with this model is that when registers are saved on the stack (function calls etc.), all 64 bits are used; there is no 32-bit data model in the n32 ABI.
Probably such an ABI could be implemented for x86-64 as well.
On x86, no. On other processors, such as PowerPC, it is quite common: 64-bit registers and instructions are available in 32-bit mode, whereas with x86 it tends to be "all or nothing".
I'm afraid that if you are concerned about the size of pointers you might have bigger problems to deal with. If the number of pointers is going to be in the millions or billions, you will probably run into limitations within the Windows OS before you actually run out of physical or virtual memory.
Mark Russinovich has written a great article relating to this, named Pushing the Limits of Windows: Virtual Memory.
Linux now has fairly comprehensive support for the X32 ABI, which does exactly what the asker is asking for; in fact it is partially supported as a configuration under Gentoo. I think this question needs to be reviewed in light of recent developments.
The second part of your question is easily answered. It is very possible, and in fact many C implementations support it, to perform 64-bit operations from 32-bit code. The C type often used for this is long long (but check with your compiler and architecture).
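For example, even in a 32-bit build (gcc -m32), 64-bit arithmetic like this just works; the compiler emits pairs of 32-bit instructions behind the scenes:

#include <stdio.h>

int main(void) {
    long long big = 4294967296LL;   /* 2^32, too large for a 32-bit int */
    big = big * 3 + 1;
    printf("%lld\n", big);          /* prints 12884901889 */
    return 0;
}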
As far as I know it is not possible to have 32-bit pointers in 64-bit native code.

How come a 32 bit kernel can run a 64 bit binary?

On my OS X box, the kernel is a 32-bit binary and yet it can run a 64-bit binary.
How does this work?
cristi:~ diciu$ file ./a.out
./a.out: Mach-O 64-bit executable x86_64
cristi:~ diciu$ file /mach_kernel
/mach_kernel: Mach-O universal binary with 2 architectures
/mach_kernel (for architecture i386): Mach-O executable i386
/mach_kernel (for architecture ppc): Mach-O executable ppc
cristi:~ diciu$ ./a.out
cristi:~ diciu$ echo $?
1
The CPU can be switched from 64 bit execution mode to 32 bit when it traps into kernel context, and a 32 bit kernel can still be constructed to understand the structures passed in from 64 bit user-space apps.
The MacOS X kernel does not directly dereference pointers from the user app anyway, as it resides in its own separate address space. A user-space pointer in an ioctl call, for example, must first be resolved to its physical address, and then a new virtual address is created in the kernel address space. It doesn't really matter whether that pointer in the ioctl was 64 bits or 32 bits; the kernel does not dereference it directly in either case.
So mixing a 32 bit kernel and 64 bit binaries can work, and vice-versa. The thing you cannot do is mix 32 bit libraries with a 64 bit application, as pointers passed between them would be truncated. MacOS X supplies more of its frameworks in both 32 and 64 bit versions in each release.
It's not the kernel that runs the binary. It's the processor.
The binary does call library functions, and those need to be 64-bit. And if they need to make a system call, it's their responsibility to cope with the fact that they themselves are 64-bit but the kernel is only 32.
But that's not something you would have to worry about.
Note that not all 32-bit kernels are capable of running 64-bit processes. Windows certainly doesn't have this property and I've never seen it done on Linux.
The 32 bit kernel that is capable of loading and running 64 bit binaries has to have some 64 bit code to handle memory mapping, program loading and a few other 64 bit issues.
However, the scheduler and many other OS operations aren't required to work in the 64 bit mode in order to deal with other issues - it switches the processor to 32 bit mode and back as needed to handle drivers, tasks, memory allocation and mapping, interrupts, etc.
In fact, most of the things the OS does wouldn't necessarily perform any faster at 64 bits; the OS is not a heavy data processor, and those portions that are (streams, disk I/O, etc.) are likely converted to 64-bit (as plugins to the OS anyway).
But the bare kernel itself probably won't task switch any faster if it were 64-bit.
This is especially the case when most people are still running 32-bit apps, so the mode switching isn't always needed; even though that's a low-overhead operation, it does take some time.
An ELF32 file can contain 64-bit instructions and run in 64-bit mode. The only constraint is that the organization of the header and symbols is in 32-bit format: symbol table offsets are 32 bits, symbol table entries are 32 bits wide, etc. A file which contains both 64-bit and 32-bit code can expose itself as a 32-bit ELF file while using 64-bit registers for its internal calculations. mach_kernel is one such executable. The advantage it gets is that 32-bit driver ELFs can be linked to it. As long as it takes care to pass only pointers located below 4 GB to the other linked ELF binaries, it will work fine.
For the kernel to be 64-bit would only bring the effective advantage that kernel extensions (i.e., typically drivers) could be 64-bit. In fact, you'd need to have either all 64-bit kernel extensions, or (as is the case now) all 32-bit ones; they need to be native to the architecture of the running kernel.

Resources