Can XMM registers be used to do any 128 bit integer math? [duplicate] - bigint

This question already has an answer here:
Is it possible to use SSE and SSE2 to make a 128-bit wide integer?
(1 answer)
Closed 5 years ago.
My impression is definitely not but perhaps there is a clever trick?
Thanks.

Not directly, but there are 64-bit arithmetic operations which can easily be combined to perform 128-bit (or greater) precision arithmetic.

The xmm registers can do arithmetic on 8-, 16-, 32- and 64-bit integers. They don't produce a carry flag, though, so you can't easily extend the precision beyond 64 bits that way. The extended-precision math libraries use the general-purpose registers, which are 32-bit or 64-bit depending on the OS.
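For illustration, here is a minimal C sketch of that technique: keep a 128-bit value in two 64-bit halves (which live in general-purpose registers) and propagate the carry by hand. The type and function names are made up for the example; on x86-64 a compiler typically turns this into an add/adc pair.

    #include <stdint.h>

    /* A 128-bit value held as two 64-bit halves, the way an extended-precision
       library keeps it in general-purpose registers. */
    typedef struct { uint64_t lo, hi; } u128;

    /* 128-bit addition built from 64-bit operations: add the low halves,
       detect the carry, and fold it into the sum of the high halves. */
    static u128 add_u128(u128 a, u128 b)
    {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* (r.lo < a.lo) is the carry out of the low add */
        return r;
    }

GCC and Clang also expose an unsigned __int128 type that compiles down to essentially this same add/adc sequence.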

Related

Can an 8 bit microcontroller use 32 bit integers? (and the other way around) [duplicate]

This question already has answers here:
If an embedded system coded in C is 8 or 16-bit, how will it manipulate 32-bit data types like int?
(2 answers)
Closed 4 years ago.
I'm really wondering about the relation to the datasheets and how this changes how things are done at the programming level.
As far as I know, the 8 bits in a uC refer to the resolution of the ADC: up to 256 values into which a signal can be sampled, and the higher you go, the higher the precision of the sampled signal. However...
Does this affect the code? (Is everything 32bit on the code?)
Whenever I declare an int in a 32 bit uC, am I actually using an int32? Or an int8?
Can an 8 bit microcontroller use a 32 bit integer?
The short answer is yes
When a microcontroller is said to be 8 bit, it means that the internal registers are 8 bit and that the arithmetic unit operates with 8 bit numbers. So in a single instruction, you can only do 8 bit math.
However, you can still do 32 bit math, but that will require a number of instructions. For instance, you need four 8 bit registers to hold a single 32 bit value. Further, you'll have to do the math operation using 8 bit operations (i.e. multiple instructions).
For an ADD of two 32 bit ints, you'll need four 8 bit add instructions, and besides that you'll need instructions to handle the carries from the individual add instructions.
So you can do it, but it will be slow, as a single 32 bit add may require 10-20 instructions (or more - see the comment from @YannVernier).
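To make that concrete, here is a rough C sketch of what a 32 bit add costs on an 8 bit machine: four byte-wide additions plus carry propagation. It is only an illustration (a real 8 bit compiler emits the CPU's add and add-with-carry instructions instead of this loop), and the function name is made up.

    #include <stdint.h>

    /* 32-bit addition expressed as four 8-bit additions with explicit carry
       handling -- roughly the work an 8-bit CPU has to do. */
    static uint32_t add32_on_8bit(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        uint8_t carry = 0;
        for (int i = 0; i < 4; i++) {
            uint8_t  ab  = (uint8_t)(a >> (8 * i));
            uint8_t  bb  = (uint8_t)(b >> (8 * i));
            uint16_t sum = (uint16_t)ab + bb + carry;   /* 8-bit add plus carry in */
            carry = (uint8_t)(sum >> 8);                /* carry out into the next byte */
            result |= (uint32_t)(uint8_t)sum << (8 * i);
        }
        return result;
    }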
... (and the other way around)
AFAIK most 32 bit CPUs have instructions that allow for 8 bit math, i.e. as a single instruction. So doing 8 bit or 32 bit math will be equally fast (in terms of instructions required).
Whenever I declare an int in a 32 bit uC, am I actually using an int32? Or an int8?
With a 32 bit CPU, an int will normally be 32 bit, so the answer is: int32
But from a C standard point of view, it would be okay to have a 16 bit int on a 32 bit machine. So even if 32 bit is common, you'll still have to check what the size is on your specific system to be really sure.
This seems to be quite a bit more than one question. First, for the title; yes, 8-bit and 32-bit microcontrollers can typically use integers of either width. Narrower processors will require more steps to handle larger widths, and therefore be slower. Wider processors may lack support for narrower types, causing them to require extra steps as well. Either way, a typical compiler will handle the difference between 8 and 32 bits.
Peripherals such as ADCs can have their own widths; it's not uncommon for them to be a width that doesn't fit precisely in bytes, such as 10 or 12 bits. Successive approximation ADCs also frequently offer a faster mode where fewer bits hold valid data. In such cases, requesting the fast/narrow mode would require different code from running in slow/full-width mode.
If you declare an int in a C compliant compiler, you'll never get an 8-bit variable, because C requires it to be at least 16 bits. Many compilers have options to diverge from the standard. On 32 bit computers it frequently is 32 bits, but on a microcontroller it may well be smaller to conserve memory even if the processor is 32 bit. There are width specific types in inttypes.h if you want to be specific.
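As a quick way to see what you actually get, here is a minimal check (assuming a hosted C99 toolchain) that prints the width of int next to the fixed-width types:

    #include <stdio.h>
    #include <stdint.h>   /* fixed-width types; inttypes.h includes this header */

    int main(void)
    {
        /* int is whatever the target ABI chose (at least 16 bits per the C
           standard); the fixed-width types are exactly what their names say. */
        printf("sizeof(int)     = %zu\n", sizeof(int));
        printf("sizeof(int8_t)  = %zu\n", sizeof(int8_t));
        printf("sizeof(int32_t) = %zu\n", sizeof(int32_t));
        return 0;
    }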

size of long data type changes in 32 bit & 64 bit compilers [duplicate]

This question already has answers here:
integer size in c depends on what?
(8 answers)
Closed 6 years ago.
The size of the long data type changes with a 64-bit compiler. With a 32-bit compiler both int and long have a size of 4 bytes, whereas with a 64-bit compiler they are 4 and 8 bytes. Why is there this difference?
What determines the size (number of bits) of the data types is the width of the internal registers of the processor.
Software is always a step behind the hardware, and it is not unusual to have a 64-bit processor and compile your programs with a 32-bit compiler, or even a 16-bit compiler (I still have 16-bit software running on 64-bit processors).
The ideal case is a compiler with features targeted at getting the full power of the processor you have.
Today, although the majority of computers have 64-bit processors, not all compilers are ready to use the full power of the hardware.
According to Microsoft, their tools differentiate 32-bit from 64-bit code only in the width of pointers, keeping the widths of the integer types (including long) the same.
However, nothing prevents you from using a compiler that takes full advantage of the 64-bit internal registers of the processor.
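A minimal sketch to check which convention a given compiler uses; on 64-bit MSVC (LLP64) long stays 4 bytes while pointers are 8, whereas on 64-bit Linux or macOS (LP64) both are 8:

    #include <stdio.h>

    int main(void)
    {
        /* LLP64 (64-bit MSVC):  long = 4, void * = 8
           LP64  (64-bit Linux): long = 8, void * = 8 */
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }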

Performance comparison: 64 bit and 32 bit multiplication [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 8 years ago.
I'm using an Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz and wondering why the multiplication of 64 bit numbers is slower than that of 32 bit numbers. I've done a test run in C and it turns out it needs twice as much time.
I expected it to need the same amount of time since the CPU works with native 64 bit registers and it shouldn't matter how wide the numbers are (as long as they fit into a 64 bit register).
Can someone explain this?
There are specialized instructions in the x86-64 instruction set to express that you only want to multiply two 32-bit quantities. One instruction may look like IMUL %EBX, %ECX in a particular dialect for the x86-64 assembly, as opposed to the 64-bit multiplication IMUL %RBX, %RCX.
So the processor knows that you only want to multiply 32-bit quantities. This happens often enough that the designers of the processor made sure that the internal circuitry would be optimized to provide a faster answer in this easier case, just as it is easier for you to multiply 3-digit numbers than 6-digit numbers. The difference can be seen in the timings measured by Agner Fog and described in his comprehensive assembly optimization resources.
If your compiler is targeting the older 32-bit IA-32 instruction set, then the difference between 32-bit and 64-bit multiplication is even wider. The compiler has to implement 64-bit multiplication with only instructions for 32-bit multiplication, using four of them (three if computing only the 64 least significant bits of the result).
64-bit multiplication can be about three to four times slower than 32-bit multiplication in this case.
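To make the IA-32 case concrete, here is a sketch of that decomposition: the low 64 bits of a 64x64 product computed from three 32x32->64 multiplies plus shifts and adds. The function name is made up, and a real compiler emits its own sequence:

    #include <stdint.h>

    /* Low 64 bits of a 64x64 product built from three 32x32->64 multiplies. */
    static uint64_t mul64_low(uint64_t a, uint64_t b)
    {
        uint32_t a_lo = (uint32_t)a, a_hi = (uint32_t)(a >> 32);
        uint32_t b_lo = (uint32_t)b, b_hi = (uint32_t)(b >> 32);

        uint64_t lo  = (uint64_t)a_lo * b_lo;                         /* full low product */
        uint64_t mid = (uint64_t)a_lo * b_hi + (uint64_t)a_hi * b_lo; /* cross terms */

        /* a_hi * b_hi would only contribute to bits 64 and above, so it is skipped. */
        return lo + (mid << 32);
    }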
I can think of a problem occurring here because of 64-bit multiplication.
When multiplying two 32-bit numbers, the result is at most 64 bits wide. But when multiplying two 64-bit numbers, the product may be up to 128 bits, and in general it no longer fits in a single 64-bit register.
As a similar example on the 8086, if you do the same with 8-bit and 16-bit numbers, you'll find that the 16-bit multiply has to store its result across both the AX and DX registers (if you know the assembly-language abbreviations).
So I believe that handling the wider product is what is possibly increasing the calculation time and making your 64-bit multiplication slow.
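The width difference is easy to see in C: a 32x32 product always fits in 64 bits, while the full 64x64 product needs 128 bits, for which standard C has no built-in type (unsigned __int128 below is a GCC/Clang extension, used purely for illustration):

    #include <stdint.h>

    /* The full product of two 32-bit values fits in 64 bits with no loss. */
    static uint64_t full_product_32x32(uint32_t a, uint32_t b)
    {
        return (uint64_t)a * b;
    }

    #if defined(__SIZEOF_INT128__)
    /* The full product of two 64-bit values needs 128 bits; on x86-64 the
       one-operand MUL instruction delivers exactly this as the RDX:RAX pair. */
    static uint64_t high_half_of_64x64(uint64_t a, uint64_t b)
    {
        return (uint64_t)(((unsigned __int128)a * b) >> 64);
    }
    #endif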

Difference between C 8 bit 16-bit 32-bit compilers [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
This question may be redundant, but I didn't find an exact answer.
What is the difference between 8-bit, 16-bit and 32-bit C compilers?
How does the .exe generated by different compilers for the same code differ?
16 bit compilers compile the program into 16-bit machine code that will run on a computer with a 16-bit processor. 16-bit machine code will run on a 32-bit processor, but 32-bit machine code will not run on a 16-bit processor. 32-bit machine code is usually faster than 16-bit machine code.
With 16 bit compiler the type-sizes (in bits) are the following:
short, int: 16
long: 32
long long: (no such type)
pointer: 16/32 (but even 32 means only 1MB address-space on 8086)
With 32 bit compiler the object-sizes (in bits) are the following:
short: 16
int, long: 32
long long: 64
pointer: 32
With 64 bit compiler the object-sizes (in bits) are the following:
short: 16
int: 32
long: 32 or 64 (!)
long long: 64
pointer: 64
[While the above values are generally correct, they may vary for specific Operating Systems. Please check your compiler's documentation for the default sizes of standard types]
Following can explain a little bit more...
http://cboard.cprogramming.com/c-programming/96536-16-bit-compilar-32-bit-compilar.html
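Since these defaults vary between toolchains, the quickest way to be sure is to ask the compiler itself; a small sketch that prints the widths from the table above for whatever compiler builds it:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT * sizeof(T) is the storage width in bits on this compiler. */
        printf("short:     %zu bits\n", CHAR_BIT * sizeof(short));
        printf("int:       %zu bits\n", CHAR_BIT * sizeof(int));
        printf("long:      %zu bits\n", CHAR_BIT * sizeof(long));
        printf("long long: %zu bits\n", CHAR_BIT * sizeof(long long));
        printf("pointer:   %zu bits\n", CHAR_BIT * sizeof(void *));
        return 0;
    }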
Not all compilers generate a .exe, for starters; different platforms have different formats in which you can ship code.
8-bit compilers target microprocessors with 8-bit registers, and the same goes for 16-bit, 32-bit and also 64-bit compilers. Depending on the microprocessor, each often has its own addressing scheme for memory and hardware as well.
For each of the 8/16/32/64-bit C compilers, there are many compilers targeting different micros, and each will do various optimizations for its platform. So...
They are all quite different.
It also depends on the processor's register width: a 32-bit compiler compiles into 32-bit machine code, which can run only on 32-bit and 64-bit microprocessors, but not on anything narrower than 32 bits.

variation in the integer size? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
integer size in c depends on what?
Why is the size of an integer 2 bytes on a 16-bit compiler and 4 bytes on a 32-bit compiler? And also, how is it related to the OS?
printf("%d", sizeof(int));//what will be o/p on windows 32bit Turboc 32 bit architecture
printf("%d", sizeof(int));//what will be o/p on windows 32bit visual studio 32 bit architecture
16 bit compilers are generally used for 16 bit hardware, where the natural size of an integer is 16 bits. The "int" type is intended to use the natural size of the hardware.
