How do I determine the word size of my CPU? If I understand correctly, an int should be one word, right? I'm not sure if that is correct.
So would just printing sizeof(int) be enough to determine the word size of my processor?
Your assumption about sizeof(int) is untrue; see this.
Since you must know the processor, OS and compiler at compilation time, the word size can be inferred using predefined architecture/OS/compiler macros provided by the compiler.
However, while on simpler and most RISC processors the word size, bus width, register size and memory organisation are often consistently one value, this may not be true for more complex CISC and DSP architectures, which can have various sizes for floating-point registers, accumulators, bus width, cache width, general-purpose registers, etc.
Of course this raises the question of why you might need to know it. Generally you would use the type appropriate to the application, and trust the compiler to provide any optimisation. If optimisation is what you think you need this information for, then you would probably be better off using the C99 'fast' types, as sketched below. If you need to optimise a specific algorithm, implement it for a number of types and profile it.
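A minimal sketch of the 'fast' types, assuming C99 <stdint.h>/<inttypes.h> are available: the counter only needs at least 16 bits, and the implementation chooses whatever width it considers convenient:

#include <inttypes.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int_fast16_t sum = 0;                 /* at least 16 bits; actual width is up to the platform */
    for (int_fast16_t i = 0; i < 100; ++i)
        sum += i;
    printf("sum = %" PRIdFAST16 ", int_fast16_t is %zu bits here\n",
           sum, sizeof(int_fast16_t) * CHAR_BIT);
    return 0;
}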
an int should be one word right?
As I understand it, that depends on the data size model. For an explanation, see 64-bit and Data Size Neutrality for UNIX systems. For example, Linux 32-bit is ILP32, and Linux 64-bit is LP64. I am not sure about the differences across Windows systems and versions, other than that I believe all 32-bit Windows systems are ILP32.
How do I determine the word size of my CPU?
That depends. Which version of the C standard are you assuming? Which platforms are we talking about? Is this a compile-time or run-time determination you're trying to make?
The C header file <limits.h> may define WORD_BIT and/or __WORDSIZE.
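A sketch of checking for them, with the caveat that WORD_BIT is a POSIX/XSI macro and __WORDSIZE is a glibc-internal one, so neither is guaranteed by ISO C to be present:

#include <limits.h>
#include <stdio.h>

int main(void)
{
#ifdef WORD_BIT
    printf("WORD_BIT   = %d\n", WORD_BIT);
#endif
#ifdef __WORDSIZE
    printf("__WORDSIZE = %d\n", __WORDSIZE);
#endif
    printf("CHAR_BIT * sizeof(int) = %d\n", (int)(CHAR_BIT * sizeof(int)));
    return 0;
}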
sizeof(int) is not always the "word" size of your CPU. The most important question here is why you want to know the word size: are you trying to do some kind of run-time, CPU-specific optimization?
That being said, on Windows with Intel processors, the nominal word size will be either 32 or 64 bits and you can easily figure this out:
if your program is compiled for 32 bits, then the nominal word size is 32 bits
if you have compiled a 64-bit program, then the nominal word size is 64 bits.
This answer sounds trite, but it's true to first order. There are some important subtleties, though. Even though the x86 registers on a modern Intel or AMD processor are 64 bits wide, you can only (easily) use their 32-bit widths in 32-bit programs, even though you may be running a 64-bit operating system. This is true on Linux and OS X as well.
Moreover, on most modern CPUs the data bus width is wider than the standard ALU registers (EAX, EBX, ECX, etc.). This bus width can vary; some systems have 128-bit or even 192-bit wide buses.
If you are concerned about performance, then you also need to understand how the L1 and L2 data caches work. Note that some modern CPUs have an L3 cache. Caches include a unit called the write buffer.
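A compile-time sketch of the "which mode was I compiled for" determination above; the predefined macros tested here (_WIN64, __x86_64__ and so on) are compiler-specific assumptions, not standard C:

/* Sketch: infer the nominal word size from common predefined macros.
   The macro names below are MSVC/GCC/Clang conventions, not ISO C. */
#if defined(_WIN64) || defined(__x86_64__) || defined(__aarch64__)
  #define NOMINAL_WORD_BITS 64
#elif defined(_WIN32) || defined(__i386__) || defined(__arm__)
  #define NOMINAL_WORD_BITS 32
#else
  #error "unknown target; add a case for your platform"
#endif

#include <stdio.h>

int main(void)
{
    printf("nominal word size: %d bits\n", NOMINAL_WORD_BITS);
    return 0;
}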
Make a program that does some kind of integer operation many times, like an integer version of the SAXPY algorithm. Run it for different word sizes, from 8 to 64 bits (i.e. from char to long long).
Measure the time each version spends running the algorithm. If one specific version takes noticeably less time than the others, the word size used for that version is probably the native word size of your computer. On the other hand, if several versions take more or less the same time, pick the one with the greatest word size.
Note that even with this technique you can get false data: your benchmark, compiled using Turbo C and running on an 80386 processor under DOS, will report that the word size is 16 bits, simply because the compiler doesn't use the 32-bit registers to perform integer arithmetic, but calls internal functions that do the 32-bit version of each arithmetic operation.
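A rough sketch of such a benchmark, an integer "SAXPY" (y = a*x + y) instantiated for 8/16/32/64-bit types; the iteration counts and the clock()-based timing are arbitrary choices, and real measurements should be taken with a proper profiler:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define N    (1u << 16)
#define REPS 2000

/* Generate an integer SAXPY loop for a given element type T. */
#define DEFINE_SAXPY(T)                              \
    static void saxpy_##T(T a, T *x, T *y)           \
    {                                                \
        for (size_t i = 0; i < N; ++i)               \
            y[i] = (T)(a * x[i] + y[i]);             \
    }

DEFINE_SAXPY(int8_t)
DEFINE_SAXPY(int16_t)
DEFINE_SAXPY(int32_t)
DEFINE_SAXPY(int64_t)

static int8_t  x8[N],  y8[N];
static int16_t x16[N], y16[N];
static int32_t x32[N], y32[N];
static int64_t x64[N], y64[N];

/* Time REPS repetitions of a call and print the elapsed CPU time. */
#define TIME(call)                                              \
    do {                                                        \
        clock_t t0 = clock();                                   \
        for (int r = 0; r < REPS; ++r) call;                    \
        printf("%-28s %.3f s\n", #call,                         \
               (double)(clock() - t0) / CLOCKS_PER_SEC);        \
    } while (0)

int main(void)
{
    TIME(saxpy_int8_t(3, x8, y8));
    TIME(saxpy_int16_t(3, x16, y16));
    TIME(saxpy_int32_t(3, x32, y32));
    TIME(saxpy_int64_t(3, x64, y64));
    /* Read the results so the loops are not optimised away. */
    printf("checksum: %d %d %d %lld\n",
           (int)y8[0], (int)y16[0], (int)y32[0], (long long)y64[0]);
    return 0;
}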
"Additionally, the size of the C type long is equal to the word size, whereas the size of the int type is sometimes less than that of the word size. For example, the Alpha has a 64-bit word size. Consequently, registers, pointers, and the long type are 64 bits in length."
source: http://books.msspace.net/mirrorbooks/kerneldevelopment/0672327201/ch19lev1sec2.html
Keeping this in mind, the following program can be executed to find out the word size of the machine you're working on:
#include <stdio.h>

int main ()
{
    long l;
    short s = (8 * sizeof(l));
    printf("Word size of this machine is %hi bits\n", s);
    return 0;
}
In short: There's no good way. The original idea behind the C data types was that int would be the fastest (native) integer type, long the biggest etc.
Then came operating systems that originated on one CPU and were then ported to different CPUs whose native word size was different. To maintain source code compatibility, some of the OSes broke with that definition and kept the data types at their old sizes, and added new, non-standard ones.
That said, depending on what you actually need, you might find some useful data types in stdint.h, or compiler-specific or platform-specific macros for various purposes.
To use at compile time: sizeof(void*)
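A minimal illustration of that suggestion; note that sizeof cannot be used in a preprocessor #if, so the value is printed at run time here (it could equally feed a C11 _Static_assert at compile time):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Pointer width is a common, though imperfect, proxy for word size. */
    printf("pointer size: %zu bits\n", sizeof(void *) * CHAR_BIT);
    return 0;
}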
Whatever the reason for wanting to know the size of the processor may be, it doesn't really matter.
The size of the processor is the amount of data that the Arithmetic Logic Unit (ALU) of one CPU core can work on at a single point in time. A CPU core's ALU operates on the accumulator register at any given time, so the size of a CPU in bits is the size of the accumulator register in bits.
You can find the size of the accumulator from the data sheet of the processor or by writing a small assembly language program.
Note that the effective usable size of Accumulator Register can change in some processors (like ARM) based on mode of operations (Thumb and ARM modes). That means the size of the processor will also change based on the mode for that processors.
It is common in many architectures for the virtual address pointer size and the integer size to be the same as the accumulator size. This is only to take advantage of the accumulator register in different processor operations, but it is not a hard rule.
Many think of memory as an array of bytes. But the CPU has another view of it, which is about memory granularity. Depending on the architecture, the memory granularity may be 2, 4, 8, 16 or even 32 bytes. Memory granularity and address alignment have a great impact on the performance, stability and correctness of software.
Consider a granularity of 4 bytes and an unaligned access that reads 4 bytes. In that case each such read (75% of them, if the address is increased one byte at a time) costs two read instructions plus two shift operations and finally a bitwise instruction for the final result, which is a performance killer. Atomic operations can also be affected, since they must be indivisible. Other side effects involve caches, synchronization protocols, CPU internal bus traffic, the CPU write buffer, and you can guess what else. A practical test could be run on a circular buffer to see how the results differ; a rough sketch of an aligned-versus-unaligned test follows.
CPUs from different manufacturers, depending on the model, have different registers which are used in general and specific operations. For example, modern CPUs have extensions with 128-bit registers. So the word size is not only about the type of operation but also about memory granularity. Word size and address alignment are beasts which must be taken care of. There are some CPUs on the market which do not take care of address alignment and simply ignore it if an unaligned address is provided, and guess what happens?
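The sketch below sums 32-bit values read at byte offsets 0 and 1 from a buffer, using memcpy for the reads so the test itself stays well-defined C; the buffer size and repetition count are arbitrary assumptions:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

static unsigned char buf[1u << 20];

/* Sum 32-bit values starting at the given byte offset. */
static uint32_t sum_at(size_t offset)
{
    uint32_t sum = 0, v;
    for (size_t i = offset; i + 4 <= sizeof buf; i += 4) {
        memcpy(&v, buf + i, 4);
        sum += v;
    }
    return sum;
}

int main(void)
{
    for (size_t off = 0; off < 2; ++off) {
        clock_t t0 = clock();
        uint32_t s = 0;
        for (int r = 0; r < 200; ++r)
            s += sum_at(off);
        printf("offset %zu: %.3f s (checksum %u)\n",
               off, (double)(clock() - t0) / CLOCKS_PER_SEC, (unsigned)s);
    }
    return 0;
}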
As others have pointed out, how you would determine this value depends on why you are interested in it; there are a lot of variables.
sizeof(int) != sizeof(word). The sizes of byte, word, double word, etc. have never changed since their creation, for the sake of API compatibility, in the Windows API world at least, even though a processor's word size is the natural size an instruction can operate on. For example, in MSVC/C++/C#, sizeof(int) is four bytes, even in 64-bit compilation mode. MSVC/C++ has __int64 and C# has the Int64/UInt64 (non-CLS-compliant) value types. There are also type definitions for WORD, DWORD and QWORD in the Win32 API that have never changed from two bytes, four bytes and eight bytes respectively, as well as UINT_PTR/INT_PTR on Win32 and UIntPtr/IntPtr in C#, which are guaranteed to be big enough to represent a memory address and a reference type respectively.
AFAIK (and I could be wrong if such architectures still exist) I don't think anyone has to deal with near/far pointers anymore, so if you're in C/C++/C#, sizeof(void*) and Unsafe.SizeOf<IntPtr>() should be enough to determine your maximum "word" size in a compliant, cross-platform way, I would think; if anyone can correct that, please do so! Also, the sizes of the intrinsic types in C/C++ are only vaguely defined.
C data type sizes - Wikipedia
Related
I am working on embedded C firmware for Freescale Coldfire processors. After writing some code, I began to look at ways to reduce the size of the build. We are limited on space, so this is important for me to consider.
I realized I had several int32's in my code, but I only need int16's for them. To save space I tried replacing the relevant variables with int16's. When I built it, the size of the build went up by about 60 bytes.
I thought it might be how my structs were packed, so I defined how I wanted it packed, but it made no change.
#pragma pack(push, 1)
// Struct here
#pragma pack(pop)
I could kind of see it staying the same, but I can't figure out what would cause it to go up. Any thoughts on what might be causing this?
Edit:
Yes, looks like it was simply generating extra instructions to account for 32-bit being the optimized size for the processor. I should have checked the datasheet first.
This was the extra assembly being generated:
0x00000028 0x3210 move.w (a0),d1
0x0000002A 0x48C1 ext.l d1
; Other instructions between
0x0000002E 0x3028000E move.w 14(a0),d0
0x00000032 0x48C0 ext.l d0
Your compiler probably emits code for int32's just fine; 32 bits is probably the natural int size for your architecture (is that true? Is sizeof(int) == 4?).
I am guessing that the size increase is from three places:
Making alignment work
32-bit ints probably align naturally on the stack and other places, so typically code would not have to be emitted to make sure "the stack is 4-byte aligned". If you sprinkle a bunch of 16-bit ints in your code, it may have to add padding (extra adds to the frame pointer as a fix-up?). Usually one add instruction covers the frame/stack maintenance, but maybe extra instructions are emitted to guarantee alignment.
Translating between 16-bit and 32-bit ints
With 32-bit ints, most instructions naturally work. With smaller ints, sometimes the compiler has to emit code that chops/slices up bits so that it preserves the semantics of the smaller type (maybe doing an extra AND instruction to mask off some high-order bits, or an OR instruction to set some bits).
Going back and forth to memory
Standard LOADs and STOREs are for 32-bit ints (which is probably the natural size of your machine). It's possible that, when it has to store only 2 bytes instead of 4, the architecture has to emit extra instructions to store a non-standard int (either by chopping up the int, using an unusual instruction that has a longer encoding, or using bit instructions to chop up the data).
These are all guesses. The best way to see is to look at the assembly code and see what's going on!
To save space I tried replacing the relevant variables with int16's.
When I built it, the size of the build went up by about 60 bytes.
This doesn't make much sense to me. Using a smaller data type doesn't necessarily translate to fewer instructions. It could reduce memory use when running the software, but not necessarily the build size.
So, for example, the reason using smaller data types here increases the size of your binaries could be that using smaller types requires more instructions, or longer instructions. For example, for memory that isn't word/dword-aligned, the compiler may have to use more instructions for unaligned moves, and it may have to use special instructions to extract the lower/upper words if all the general-purpose registers are larger. In that case, you might also get a slight performance hit in addition to the increased binary size when using those smaller types (but less memory use while the code is running).
There may be a number of scenarios and it's specific to both the exact compiler you are using and architecture (the assembly code will reveal the exact cause), but in short, using smaller types for variables does not necessarily mean smaller-sized builds/fewer instructions, and could easily mean the opposite.
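A tiny illustration of the idea: a hypothetical pair of functions to build with something like cc -O2 -S for the target and compare. On several 32-bit targets the 16-bit version needs extra sign-extension or masking instructions, but whether it does is entirely target- and compiler-specific:

#include <stdint.h>

/* Compare the generated assembly of these two on your target;       */
/* the int16_t version may need extra extend/mask instructions.      */
int32_t add32(int32_t a, int32_t b) { return a + b; }
int16_t add16(int16_t a, int16_t b) { return (int16_t)(a + b); }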
What is the difference between int32_t and int_fast32_t? I know that int32_t is exactly 32 bits regardless of the environment, but, as its name suggests it's fast, how much faster can int_fast32_t really be compared to int32_t? And if it's significantly faster, why?
C is specified in terms of an idealized, abstract machine. But real-world hardware has behavioural characteristics that are not captured by the language standard. The _fast types are type aliases that allow each platform to specify types which are "convenient" for the hardware.
For example, if you had an array of 8-bit integers and wanted to mutate each one individually, this would be rather inefficient on contemporary desktop machines, because their load operations usually want to fill an entire processor register, which is either 32 or 64 bit wide (a "machine word"). So lots of loaded data ends up wasted, and more importantly, you cannot parallelize the loading and storing of two adjacent array elements, because they live in the same machine word and thus need to be load-modify-stored sequentially.
The _fast types are usually as wide as a machine word, if that's feasible. That is, they may be wider than you need and thus consume more memory (and thus are harder to cache!), but your hardware may be able to access them faster. It all depends on the usage pattern, though. (E.g. an array of int_fast8_t would probably be an array of machine words, and a tight loop modifying such an array may well benefit significantly.)
The only way to find out whether it makes any difference is to compare!
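In that spirit, a minimal comparison sketch: the same loop over an int8_t array and over an int_fast8_t array, printing both the widths and the timings. Whether the _fast variant is any wider, or any faster, is entirely up to the implementation; the array size and repetition count are arbitrary:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define N (1u << 20)
static int8_t      a8[N];
static int_fast8_t af[N];

int main(void)
{
    printf("sizeof(int8_t) = %zu, sizeof(int_fast8_t) = %zu\n",
           sizeof(int8_t), sizeof(int_fast8_t));

    clock_t t0 = clock();
    for (int r = 0; r < 500; ++r)
        for (size_t i = 0; i < N; ++i)
            a8[i] = (int8_t)(a8[i] + 1);
    printf("int8_t:      %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int r = 0; r < 500; ++r)
        for (size_t i = 0; i < N; ++i)
            af[i] = (int_fast8_t)(af[i] + 1);
    printf("int_fast8_t: %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    /* Read the arrays so the loops are not optimised away. */
    printf("checksum: %d %d\n", (int)a8[0], (int)af[0]);
    return 0;
}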
int32_t is an integer which is exactly 32 bits. It is useful if you want, for example, to create a struct with an exact memory layout.
int_fast32_t is the "fastest" integer for your current processor that is at least as big as an int32_t. I don't know if there is really a gain on current processors (x86 or ARM).
But I can at least outline a real case: I used to work with a 32-bit PowerPC processor. When accessing misaligned int16_t values, it was inefficient because the processor first had to realign them in one of its 32-bit registers. For non-memory-mapped data, since we didn't have memory restrictions, it was more efficient to use int_fast16_t (which was in fact a 32-bit int).
I'm building a small bytecode VM that will run on a variety of platforms including exotic embedded and microcontroller environments.
Each opcode in my VM can be variable length (no more than 4 bytes, no less than 1 byte). In interpreting the opcodes, I want to create a tiny "cache" for the current opcode. However, due to it being used on many different platforms, it's hard to do.
So, here is a few examples of expected behavior:
On an 8-bit microcontroller with an 8-bit memory bus, I'd want it to only load 1 byte, because it'd take multiple (slow) memory operations to load any more, and in theory it might only require 1 byte to execute the current opcode.
On an 8086 (16-bit), I'd want to load 2 bytes, because by only loading 1 byte we would basically be throwing away some useful data that would have to be read later, but I don't want to load more than 2 bytes because it'd take multiple operations.
On a 32-bit ARM processor, I'd want to load 4 bytes, because otherwise we're either throwing away data that might have to be read again, or we're doing multiple operations.
I would say this could be handled easily by just assuming that unsigned int is good enough, but on 8-bit AVR microcontrollers, int is defined as 16-bit, but the memory data bus width is only 8 bit, so 2 memory load operations would be required.
Anyway, current ideas:
using uint_fast16_t seems to work as expected on most platforms (32 bits on ARM, 16 bits on 8086, 64 bits on x86-64). However, it clearly still leaves out AVR and other 8-bit microcontrollers.
I thought using uint_fast8_t might work, but it would appear on most platforms that it's defined as being unsigned char, which definitely isn't optimal
Also, there is another problem that must be solved as well: unaligned memory access. On x86 this probably isn't going to be a problem (in theory it does 2 memory operations, but that's probably absorbed by the hardware); however, on ARM I know that an unaligned 32-bit access can cost up to 3 times as much as a single aligned 32-bit load. If the address is unaligned, I want to load the aligned option and get as much data as possible, but at all costs avoid another memory operation.
Is there a way to somehow do this using magical preprocessor includes or some such, or does it just require manually defining the optimum cache size before compiling for the platform?
There is no automatic way to do this using the types or information provided by standard C (in headers such as <stdint.h> and so on).
Problems such as this are sometimes handled by executing and measuring sample code on the target platform and using the results to determine what code to use in practice. The samples might be executed during a build and then built into the final code or might be executed at the start of each program execution and then used for the duration of execution.
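So the usual fallback is to define the opcode "cache" type per platform by hand before compiling. A sketch of what that might look like; the type name vm_cache_t and the particular predefined macros tested (__AVR__, __MSDOS__, _M_I86) are assumptions for illustration, not a standard mechanism:

#include <stdint.h>

#if defined(__AVR__)                         /* 8-bit data bus */
  typedef uint8_t  vm_cache_t;
#elif defined(__MSDOS__) || defined(_M_I86)  /* 16-bit targets */
  typedef uint16_t vm_cache_t;
#elif UINTPTR_MAX > 0xFFFFFFFFu              /* 64-bit pointers */
  typedef uint64_t vm_cache_t;
#else
  typedef uint32_t vm_cache_t;
#endif

#define VM_CACHE_BYTES ((int)sizeof(vm_cache_t))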
The stdint.h header lacks an int_fastest_t and uint_fastest_t to correspond with the {,u}int_fastX_t types. For instances where the width of the integer type does not matter, how does one pick the integer type that allows processing the greatest quantity of bits with the least penalty to performance? For example, if one was searching for the first set bit in a buffer using a naive approach, a loop such as this might be considered:
// return the bit offset of the first 1 bit
size_t find_first_bit_set(void const *const buf)
{
    uint_fastest_t const *p = buf;  // use the fastest type for comparison to zero
    for (; *p == 0; ++p);           // inc p while no bits are set
    // return offset of first bit set (ffsX returns a 1-based bit index)
    return ((char const *)p - (char const *)buf) * CHAR_BIT + ffsX(*p) - 1;
}
Naturally, using char would result in more operations than int. But long long might result in more expensive operations than the overhead of using int on a 32 bit system and so on.
My current assumption is for the mainstream architectures, the use of long is the safest bet: It's 32 bit on 32 bit systems, and 64 bit on 64 bit systems.
int_fast8_t is always the fastest integer type in a correct implementation. There can never be integer types smaller than 8 bits (because CHAR_BIT>=8 is required), and since int_fast8_t is the fastest integer type with at least 8 bits, it's thus the fastest integer type, period.
Theoretically, int is the best bet. It should map to the CPU's native register size, and thus be "optimal" in the sense you're asking about.
However, you may still find that an int-64 or int-128 is faster on some CPUs than an int-32, because although these are larger than the register size, they will reduce the number of iterations of your loop, and thus may work out more efficient by minimising the loop overheads and/or taking advantage of DMA to load/store the data faster.
(For example, on ARM-2 processors it took 4 memory cycles to load one 32-bit register, but only 5 cycles to load two sequentially, and 7 cycles to load 4 sequentially. The routine you suggest above would be optimised to use as many registers as you could free up (8 to 10 usually), and could therefore run up to 3 or 4 times faster by using multiple registers per loop iteration)
The only way to be sure is to write several routines and then profile them on the specific target machine to find out which produces the best performance.
I'm not sure I really understand the question, but why aren't you just using int? Quoting from my free draft copy of the (wrong, i.e. C++) standard: "Plain ints have the natural size suggested by the architecture of the execution environment."
But I think that if you want to have the optimal integer type for a certain operation, it will be different depending on which operation it is. Trying to find the first bit in a large data buffer, or finding a number in a sequence of integers, or moving them around, could very well have completely different optimal types.
EDIT:
For whatever it's worth, I did a small benchmark. On my particular system (Intel i7 920 with Linux, gcc -O3) it turns out that long ints (64 bits) are quite a bit faster than plain ints (32 bits), on this particular example. I would have guessed the opposite.
If you want to be certain you've got the fastest implementation, why not benchmark each one on the systems you're expecting to run on instead of trying to guess?
The answer is int itself. At least in C++, where 3.9.1/2 of the standard says:
"Plain ints have the natural size suggested by the architecture of the execution environment."
I expect the same is true for C, though I don't have any of the standards documents.
I would guess that the types size_t (for an unsigned type) and ptrdiff_t (for a signed type) will usually correspond to quite efficient integer types on any given platform.
But nothing can prove that other than inspecting the produced assembler and doing benchmarks.
Edit, including the different comments, here and in other replies:
size_t and ptrdiff_t are the only typedefs that are normative in C99 and for which one may make a reasonable assumption that they are related to the architecture.
There are 5 different possible ranks for the standard integer types (char, short, int, long, long long). All the forces push towards having types of width 8, 16, 32, 64 and, in the near future, 128. As a consequence, int will be stuck at 32 bits. Its definition will have nothing to do with efficiency on the platform, but will just be constrained by that width requirement.
If you're compiling with GCC, I'd recommend using __builtin_ffs() for finding the first bit set:
Built-in Function: int __builtin_ffs (unsigned int x)
Returns one plus the index of the least significant 1-bit of x, or if x is zero, returns zero.
This will be compiled into (often a single) native assembly instruction.
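For instance, the loop from the question could be rewritten around it roughly like this (a sketch only; __builtin_ffs is a GCC/Clang extension, not standard C, and the function assumes at least one bit in the buffer is set):

#include <limits.h>
#include <stddef.h>

/* Return the bit offset of the first 1 bit in an unsigned int buffer. */
size_t find_first_bit_set(const unsigned int *buf)
{
    const unsigned int *p = buf;
    while (*p == 0)                       /* assumes a set bit exists */
        ++p;
    return (size_t)(p - buf) * sizeof *p * CHAR_BIT
         + (size_t)__builtin_ffs((int)*p) - 1;
}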
It is not possible to answer this question since the question is incomplete. As an analogy, consider the question:
What is the fastest vehicle?
A Bugatti Veyron? Certainly fast, but no good for going from London to New York.
What is missing from the question, is the context the integer will be used in. In the original example above, I doubt you'd see much difference between 8, 32 or 64 bit values if the array is large and sparse since you'll be hitting memory bandwidth limits before cpu limits.
The main point is, the architecture does not define what size the various integer types are, it's the compiler designer that does that. The designer will carefully weigh up the pros and cons for various sizes for each type for a given architecture and pick the most appropriate.
I guess the 32-bit int on 64-bit systems was chosen because, for most of the operations ints are used for, 32 bits are enough. Since memory bandwidth is a limiting factor, saving on memory use was probably the overriding factor.
For all existing mainstream architectures long is the fastest type at present for loop throughput.
I am trying to implement a simple, moderately efficient bignum library in C. I would like to store digits using the full register size of the system it's compiled on (presumably 32 or 64-bit ints). My understanding is that I can accomplish this using intptr_t. Is this correct? Is there a more semantically appropriate type, i.e. something like intword_t?
I also know that with GCC I can easily do overflow detection on a 32-bit machine by upcasting both arguments to 64-bit ints, which will occupy two registers and take advantage of instructions like the IA-32 ADC (add with carry). Can I do something similar on a 64-bit machine? Is there a 128-bit type I can upcast to which will compile to use these instructions if they're available? Better yet, is there a standard type that represents twice the register size (like intdoubleptr_t) so this could be done in a machine-independent fashion?
Thanks!
Any reason not to use size_t? size_t is 4 bytes on a 32-bit system and 8 bytes on a 64-bit system, and is probably more portable than using WORD_SIZE (I think WORD_SIZE is gcc-specific, no?)
I am not aware of any 128-bit type on 64-bit systems; I could be wrong here, but I haven't come across such a type in the kernel or in regular user apps.
I'd strongly recommend using the C99 <stdint.h> header. It declares int32_t, int64_t, uint32_t, and uint64_t, which look like what you really want to use.
EDIT: As Alok points out, int_fast32_t, int_fast64_t, etc. are probably what you want to use. The number of bits you specify should be the minimum you need for the math to work, i.e. for the calculation to not "roll over".
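As a rough sketch of the 32-bit-digit scheme described in the question, addition can upcast to a 64-bit type so the carry out of each digit is visible; on GCC and Clang the same trick can be repeated for 64-bit digits with the non-standard unsigned __int128 extension, where available. The names and digit order here are illustrative assumptions:

#include <stdint.h>

/* dst = a + b, all operands n digits long, least-significant digit first;
   returns the final carry out of the top digit. */
uint32_t bignum_add(uint32_t *dst, const uint32_t *a,
                    const uint32_t *b, int n)
{
    uint64_t carry = 0;
    for (int i = 0; i < n; ++i) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;
        dst[i] = (uint32_t)t;   /* low 32 bits become the digit */
        carry  = t >> 32;       /* the high bit carries into the next digit */
    }
    return (uint32_t)carry;
}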
The optimization comes from the fact that the CPU doesn't have to waste cycles realigning data, padding the leading bits on a read, and doing a read-modify-write on a write. The truth is, a lot of processors (such as recent x86s) have hardware in the CPU that optimizes these accesses pretty well (at least the padding and read-modify-write parts), since they're so common and usually only involve transfers between the processor and cache.
So the only thing left for you to do is make sure the accesses are aligned: take sizeof(int_fast32_t) or whatever and use it to make sure your buffer pointers are aligned to that.
Truth is, this may not amount to that much improvement (due to the hardware optimizing transfers at runtime anyway), so writing something and timing it may be the only way to be sure. Also, if you're really crazy about performance, you may need to look at SSE or AltiVec or whatever vectorization tech your processor has, since that will outperform anything you can write that is portable when doing vectored math.
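A small sketch of the alignment step suggested above: advance a byte pointer to the next multiple of sizeof(uint_fast32_t) before switching to word-at-a-time access (assuming, as is almost always the case, that the size is a power of two):

#include <stdint.h>
#include <stddef.h>

/* Return p advanced by 0..sizeof(uint_fast32_t)-1 bytes so that the
   result is aligned for word-at-a-time access. */
static const unsigned char *align_up(const unsigned char *p)
{
    size_t    mask = sizeof(uint_fast32_t) - 1;   /* size assumed to be a power of two */
    uintptr_t u    = (uintptr_t)p;
    return p + (((uintptr_t)0 - u) & mask);
}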