MASM: using registers as operands of the mod operator

I am a complete newbie with masm32 and I want to implement the idea described in the following line of (incorrect) code:
mov ebx,(eax mod any_number)
The assembler gives me error A2026: constant expected.
I read that the mod operator cannot be used with registers, so what is the right way to achieve the same thing?
I hope you can help.

9 % 5 = 4
What does modulus mean? It is the remainder after you divide two numbers.
mov eax, 9 mod 5    ; works because both operands are assemble-time constants
or
xor edx, edx        ; DIV divides the 64-bit value EDX:EAX, so clear EDX first
mov eax, 9
mov ecx, 5
div ecx             ; quotient in EAX, remainder in EDX
Now EDX contains the modulus (the remainder).

I would like to share my answer to exercise 2.b from the book Guide to Assembly Language: A Concise Introduction by James T. Streib:
;result = number % amount
mov eax,number
cdq ;copy or propagate the sign bit into the edx register
idiv amount
mov result,edx ;the remainder in the edx register and the
;quotient in the eax register
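For reference, here is the same computation in C; when the divisor is not a compile-time constant, compilers implement the signed % operator with essentially this cdq/idiv pair. This is just a small illustrative program of mine, not something from the book:
#include <stdio.h>

int main(void) {
    int number, amount;
    if (scanf("%d %d", &number, &amount) != 2 || amount == 0)
        return 1;
    int result = number % amount;   /* lowered to cdq + idiv on 32-bit x86 */
    printf("%d\n", result);         /* e.g. 9 % 5 prints 4 */
    return 0;
}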

Related

Assembler Intel x86 Loop n times with user input

I'm learning x86 assembly and I'm trying to write a program that reads a number n (2 digits) from user input and iterates n times.
I've tried many ways but I get an infinite loop or a segmentation fault.
input:
push msgInputQty
call printf
add esp, 4
push quantity
call gets
add esp, 4
mov ecx, 2
mov eax, 0
mov ebx, 0
mov edi, 0
mov dl, 10
transform:
mul dl
mov ebx, 0
mov bl, byte[quantity+edi]
sub bl, 30h
add eax, ebx
inc edi
loop transform
mov ecx, eax
printNTimes:
push msgDig
call printf
add esp, 4
loop printNTimes
I'd like to save this number in ecx and iterate n times.
Your ecx register is being blown away by the call to printf.
ecx is a call-clobbered (volatile) register in the standard x86 calling conventions, and it's likely that your loop counter is being corrupted by whatever printf leaves in there.
To begin with, I would follow Raymond's advice in the comment attached to your original question and attach a debugger to witness this behaviour for yourself.
As for a solution, you can try preserving ecx and restoring it after the call to see the difference:
; for example
mov edi,ecx
call printf
mov ecx,edi
There may be more issues here (hard to know for sure since your code is incomplete, but things like stack adjustments that don't appear to serve any purpose stand out) - but that is a good place to start.
Peter has left a comment under my answer pointing out that you could avoid the issue and simplify my solution by not using ecx for your loop at all and instead counting down manually, changing your code to:
mov edi, eax
printNTimes:
push msgDig
call printf
add esp, 4
dec edi
jnz printNTimes

Assembly 8086 loops issue [duplicate]

This question already has an answer here:
Problems with IDIV Assembly Language
(1 answer)
Closed 1 year ago.
The pseudocode is the following:
read n //a two-digit number
for(i=1,n,i++)
{ if (n%i==0)
print i;}
In assembly I have written it as:
mov bx,ax ; ax was the number ex.0020, storing a copy in bx.
mov cx,1 ; the start of the for loop
.forloop:
mov ax,bx ; resetting ax to be the number(needed for the next iterations)
div cx
cmp ah,0 ; checking if the remainder is 0
jne .ifinstr
add cl,48 ;add 48 so the digit is displayed later as a decimal character
mov dl,cl ;the character to print (the current divisor i)
mov ah,2
int 21h
sub cl,48 ;convert it back from ASCII to the plain number
.ifinstr:
inc cx ;the loop goes on
cmp cx,bx
jle .forloop
I've checked by tracing its steps. The first iteration goes well; then, at the second one, it sets ax to the initial number and cx=2 as it should, but at 'div cx' it jumps somewhere unknown to me and it doesn't stop anywhere. It does:
push ax
mov al,12
nop
push 9
.
.
Any idea why it does that?
Try doing mov dx,0 just before the div instruction.
Basically, every time you come back around the loop there may be leftover data in the dx register, so just move zero into dx or xor dx,dx.
This matters because, with a word-sized divisor, div divides the 32-bit value DX:AX; if DX is not zero the quotient can overflow AX and the CPU raises a divide-error exception (interrupt 0), which is why execution appears to jump somewhere unknown.
See this:
Unsigned divide.
Algorithm:
when operand is a byte:
AL = AX / operand
AH = remainder (modulus)
when operand is a word:
AX = (DX:AX) / operand
DX = remainder (modulus)
Example:
MOV AX, 203 ; AX = 00CBh
MOV BL, 4
DIV BL ; AL = 50 (32h), AH = 3
RET

Trouble inserting things into an array in assembly

I'm currently trying to learn Assembly, and one of the tasks I am given is to take user input integers and insert those numbers into an array. Once the array has 7 integers, I will loop through the array and print out the numbers. However, I'm currently stuck on how to insert the numbers into the array. Here is the code I have right now:
.DATA
inputIntMessage BYTE "Enter an integer: ", 0
inputStringMessage BYTE "Enter a string: ", 0
intArray DWORD 0,0,0,0,0,0,0
intCounter DWORD 0
user_input DWORD ?
.CODE
main PROC
mov eax, intCounter
mov edx, 0
top:
cmp eax, 7
je final1
jl L1
L1: intInput inputIntMessage, user_input
mov ebx, user_input
mov intArray[edx], ebx ;This is where I think the problem is.
add edx, 4
inc eax
jmp top
final1:
mov ecx, 0
mov edx, 0
printarrayloop:
cmp edx,7
jl L2
je next
L2: intOutput intArray[ecx]
add ecx, 4
inc edx
next:
The next: label just goes on to the next problem and is irrelevant to this array-insertion issue. My thinking is that I should use the offset of the array so I can access the address of each element and change it directly, but I do not know how. Can someone point me in the right direction?
EDIT: When I run the program the window prompts the user to enter an integer 7 times (which is as intended), and then prints out the first number the user entered. However, the window should be printing out all of the numbers the user entered.
The primary reason why your code only prints one number is because the code that displays the array of numbers does this:
mov ecx, 0
mov edx, 0
printarrayloop:
cmp edx,7
jl L2
je next
L2: intOutput intArray[ecx]
add ecx, 4
inc edx
next:
What is missing is that you do not continue the loop after displaying the first number. You need to jump back to printarrayloop to process the next number. Add this right below inc edx:
jmp printarrayloop
There are some other things you may wish to consider. In this code:
top:
cmp eax, 7
je final1
jl L1
L1: intInput inputIntMessage, user_input
[snip]
final1:
You do cmp eax, 7. If it is equal you jump out. If it is less then you just branch to label L1 anyway. You can modify that code by removing the extraneous jl L1 branch and label. So you would have this:
top:
cmp eax, 7
je final1
intInput inputIntMessage, user_input
In this code there are some extra instructions that can be removed:
mov ecx, 0
mov edx, 0
printarrayloop:
cmp edx,7
jl L2
je next
L2: intOutput intArray[ecx]
add ecx, 4
inc edx
jmp printarrayloop
next:
Similar to the previous point, you compare EDX to 7 with cmp edx,7. After the comparison, if it is equal to 7 you jump out of the loop to next; if it is less than 7, execution simply falls through and prints the number. So your code could look like this:
mov ecx, 0
mov edx, 0
printarrayloop:
cmp edx,7
je next
intOutput intArray[ecx]
add ecx, 4
inc edx
jmp printarrayloop
next:
x86 32-bit code has a scaled addressing mode (with displacement). You can find all the addressing modes described in this summary here and a more detailed description here.
You can use a scale factor (multiply a register by 1, 2, 4, or 8) when addressing memory. You have code that looks like this:
mov intArray[edx], ebx
add edx, 4
Here EDX is a byte offset into the array. If you keep EDX as an element index instead, the *4 scale factor accounts for the fact that the size of a DWORD is 4 bytes. So you can replace the add edx, 4 with inc edx and change the code accessing the array to:
mov intArray[edx*4], ebx
intArray[edx*4] is an address that is equivalent to intArray+(edx*4)
You can make a similar change when you output. Replace this line:
add ecx, 4
with inc ecx, and use scaled addressing with:
intOutput intArray[ecx*4]
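As an aside, this scaled addressing mode is what compilers themselves use for an ordinary C array subscript, so if it helps to see the equivalence, here is a tiny C function of my own (the names are illustrative, not from the question) whose store typically compiles to a single mov with base + index*4 addressing:
#include <stdint.h>

void store_element(uint32_t intArray[7], uint32_t index, uint32_t value) {
    intArray[index] = value;   /* mov [intArray + index*4], value */
}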

Fastest way to calculate a 128-bit integer modulo a 64-bit integer

I have a 128-bit unsigned integer A and a 64-bit unsigned integer B. What's the fastest way to calculate A % B - that is the (64-bit) remainder from dividing A by B?
I'm looking to do this in either C or assembly language, but I need to target the 32-bit x86 platform. This unfortunately means that I cannot take advantage of compiler support for 128-bit integers, nor of the x64 architecture's ability to perform the required operation in a single instruction.
Edit:
Thank you for the answers so far. However, it appears to me that the suggested algorithms would be quite slow - wouldn't the fastest way to perform a 128-bit by 64-bit division be to leverage the processor's native support for 64-bit by 32-bit division? Does anyone know if there is a way to perform the larger division in terms of a few smaller divisions?
Re: How often does B change?
Primarily I'm interested in a general solution - what calculation would you perform if A and B are likely to be different every time?
However, a second possible situation is that B does not vary as often as A - there may be as many as 200 As to divide by each B. How would your answer differ in this case?
You can use the division version of Russian Peasant Multiplication.
To find the remainder, execute (in pseudo-code):
X = B;
while (X <= A/2)
{
X <<= 1;
}
while (A >= B)
{
if (A >= X)
A -= X;
X >>= 1;
}
The modulus is left in A.
You'll need to implement the shifts, comparisons and subtractions to operate on values made up of a pair of 64 bit numbers, but that's fairly trivial (likely you should implement the left-shift-by-1 as X + X).
This will loop at most 255 times (with a 128 bit A). Of course you need to do a pre-check for a zero divisor.
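For illustration, here is one way those two-limb shift, compare and subtract helpers could look in C. This is only a sketch of the algorithm above, assuming A is held as two 64-bit halves and B is nonzero; the type and function names are mine:
#include <stdint.h>

typedef struct { uint64_t hi, lo; } u128;    /* A as two 64-bit halves */

static int  ge(u128 a, u128 b)  { return a.hi != b.hi ? a.hi > b.hi : a.lo >= b.lo; }
static u128 sub(u128 a, u128 b) { u128 r = { a.hi - b.hi - (a.lo < b.lo), a.lo - b.lo }; return r; }
static u128 shl1(u128 a)        { u128 r = { (a.hi << 1) | (a.lo >> 63), a.lo << 1 }; return r; }
static u128 shr1(u128 a)        { u128 r = { a.hi >> 1, (a.lo >> 1) | (a.hi << 63) }; return r; }

uint64_t mod128by64(u128 A, uint64_t B)      /* assumes B != 0 */
{
    u128 X  = { 0, B };
    u128 B1 = { 0, B };
    u128 halfA = shr1(A);                    /* A is constant during the first loop */
    while (ge(halfA, X))                     /* while (X <= A/2) */
        X = shl1(X);
    while (ge(A, B1)) {                      /* while (A >= B) */
        if (ge(A, X))
            A = sub(A, X);
        X = shr1(X);
    }
    return A.lo;                             /* the remainder is < B, so it fits in 64 bits */
}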
Perhaps you're looking for a finished program, but the basic algorithms for multi-precision arithmetic can be found in Knuth's Art of Computer Programming, Volume 2. You can find the division algorithm described online here. The algorithms deal with arbitrary multi-precision arithmetic, and so are more general than you need, but you should be able to simplify them for 128 bit arithmetic done on 64- or 32-bit digits. Be prepared for a reasonable amount of work (a) understanding the algorithm, and (b) converting it to C or assembler.
You might also want to check out Hacker's Delight, which is full of very clever assembler and other low-level hackery, including some multi-precision arithmetic.
If your B is small enough for the uint64_t + operation to not wrap:
Given A = AH*2^64 + AL:
A % B == (((AH % B) * (2^64 % B)) + (AL % B)) % B
== (((AH % B) * ((2^64 - B) % B)) + (AL % B)) % B
If your compiler supports 64-bit integers, then this is probably the easiest way to go.
MSVC's implementation of a 64-bit modulo on 32-bit x86 is some hairy loop filled assembly (VC\crt\src\intel\llrem.asm for the brave), so I'd personally go with that.
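As a minimal C sketch of that identity (my own function name, and only valid under the stated no-overflow condition, e.g. when B fits in 32 bits so neither the product nor the sum can wrap a uint64_t):
#include <stdint.h>

uint64_t mod128_small_B(uint64_t AH, uint64_t AL, uint64_t B)   /* assumes B != 0 */
{
    uint64_t pow64_mod_B = ((uint64_t)0 - B) % B;   /* (2^64 - B) % B == 2^64 % B */
    return ((AH % B) * pow64_mod_B + (AL % B)) % B;
}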
This is an almost untested, partly speed-optimised Mod128by64 'Russian peasant' algorithm function. Unfortunately I'm a Delphi user, so this function works under Delphi. :) But the assembler is almost the same, so...
function Mod128by64(Dividend: PUInt128; Divisor: PUInt64): UInt64;
//In : eax = #Dividend
// : edx = #Divisor
//Out: eax:edx as Remainder
asm
//Registers inside the routine
//Divisor = edx:ebp
//Dividend = bh:ebx:edx //We need 64 bits + 1 bit in bh
//Result = esi:edi
//ecx = Loop counter and Dividend index
push ebx //Store registers to stack
push esi
push edi
push ebp
mov ebp, [edx] //Divisor = edx:ebp
mov edx, [edx + 4]
mov ecx, ebp //Div by 0 test
or ecx, edx
jz #DivByZero
xor edi, edi //Clear result
xor esi, esi
//Start of 64 bit division Loop
mov ecx, 15 //Load byte loop shift counter and Dividend index
#SkipShift8Bits: //Small Dividend numbers shift optimisation
cmp [eax + ecx], ch //Zero test
jnz #EndSkipShiftDividend
loop #SkipShift8Bits //Skip 8 bit loop
#EndSkipShiftDividend:
test edx, $FF000000 //Huge Divisor Numbers Shift Optimisation
jz #Shift8Bits //This Divisor is > $00FFFFFF:FFFFFFFF
mov ecx, 8 //Load byte shift counter
mov esi, [eax + 12] //Do fast 56 bit (7 bytes) shift...
shr esi, cl //esi = $00XXXXXX
mov edi, [eax + 9] //Load for one byte right shifted 32 bit value
#Shift8Bits:
mov bl, [eax + ecx] //Load 8 bits of Dividend
//Here we can unroll a partial 8-bit division loop to increase execution speed...
mov ch, 8 //Set partial byte counter value
#Do65BitsShift:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
setc bh //Save 65th bit
sub edi, ebp //Compare dividend and divisor
sbb esi, edx //Subtract the divisor
sbb bh, 0 //Use 65th bit in bh
jnc #NoCarryAtCmp //Test...
add edi, ebp //Restore previous dividend state
adc esi, edx
#NoCarryAtCmp:
dec ch //Decrement counter
jnz #Do65BitsShift
//End of 8 bit (byte) partial division loop
dec cl //Decrement byte loop shift counter
jns #Shift8Bits //Last jump at cl = 0!!!
//End of 64 bit division loop
mov eax, edi //Load result to eax:edx
mov edx, esi
#RestoreRegisters:
pop ebp //Restore Registers
pop edi
pop esi
pop ebx
ret
#DivByZero:
xor eax, eax //Here you can raise a Div by 0 exception; for now the function just returns 0
xor edx, edx
jmp #RestoreRegisters
end;
At least one more speed optimisation is possible! After the 'Huge Divisor Numbers Shift Optimisation' we can test the divisor's high bit: if it is 0, we do not need the extra bh register to store the 65th bit. So the unrolled part of the loop can look like:
shl bl,1 //Shift dividend left for one bit
rcl edi,1
rcl esi,1
sub edi, ebp //Compare dividend and divisor
sbb esi, edx //Subtract the divisor
jnc #NoCarryAtCmpX
add edi, ebp //Restore previous dividend state
adc esi, edx
#NoCarryAtCmpX:
I know the question specified 32-bit code, but the answer for 64-bit may be useful or interesting to others.
And yes, 64b/32b => 32b division does make a useful building-block for 128b % 64b => 64b. libgcc's __umoddi3 (source linked below) gives an idea of how to do that sort of thing, but it only implements 2N % 2N => 2N on top of a 2N / N => N division, not 4N % 2N => 2N.
Wider multi-precision libraries are available, e.g. https://gmplib.org/manual/Integer-Division.html#Integer-Division.
GNU C on 64-bit machines does provide an __int128 type, and libgcc functions to multiply and divide as efficiently as possible on the target architecture.
x86-64's div r/m64 instruction does 128b/64b => 64b division (also producing remainder as a second output), but it faults if the quotient overflows. So you can't directly use it if A/B > 2^64-1, but you can get gcc to use it for you (or even inline the same code that libgcc uses).
This compiles (Godbolt compiler explorer) to one or two div instructions (which happen inside a libgcc function call). If there was a faster way, libgcc would probably use that instead.
#include <stdint.h>
uint64_t AmodB(unsigned __int128 A, uint64_t B) {
return A % B;
}
The __umodti3 function it calls calculates a full 128b/128b modulo, but the implementation of that function does check for the special case where the divisor's high half is 0, as you can see in the libgcc source. (libgcc builds the si/di/ti version of the function from that code, as appropriate for the target architecture; udiv_qrnnd is an inline asm macro that does unsigned 2N/N => N division for the target architecture.)
For x86-64 (and other architectures with a hardware divide instruction), the fast-path (when high_half(A) < B; guaranteeing div won't fault) is just two not-taken branches, some fluff for out-of-order CPUs to chew through, and a single div r64 instruction, which takes about 50-100 cycles (see footnote 1) on modern x86 CPUs, according to Agner Fog's insn tables. Some other work can be happening in parallel with div, but the integer divide unit is not very pipelined and div decodes to a lot of uops (unlike FP division).
The fallback path still only uses two 64-bit div instructions for the case where B is only 64-bit, but A/B doesn't fit in 64 bits so A/B directly would fault.
Note that libgcc's __umodti3 just inlines __udivmoddi4 into a wrapper that only returns the remainder.
Footnote 1: 32-bit div is over 2x faster on Intel CPUs. On AMD CPUs, performance only depends on the size of the actual input values, even if they're small values in a 64-bit register. If small values are common, it might be worth benchmarking a branch to a simple 32-bit division version before doing 64-bit or 128-bit division.
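If small values really are common, the footnote's suggestion could look something like this in C for the 64-bit case (a sketch of mine, not libgcc's code; the cut-off and the name are illustrative):
#include <stdint.h>

uint64_t mod_branchy(uint64_t a, uint64_t b)      /* assumes b != 0 */
{
    if ((a | b) <= UINT32_MAX)                    /* both operands fit in 32 bits */
        return (uint32_t)a % (uint32_t)b;         /* lets the compiler use the cheaper 32-bit div */
    return a % b;
}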
For repeated modulo by the same B
It might be worth considering calculating a fixed-point multiplicative inverse for B, if one exists. For example, with compile-time constants, gcc does the optimization for types narrower than 128b.
uint64_t modulo_by_constant64(uint64_t A) { return A % 0x12345678ABULL; }
movabs rdx, -2233785418547900415
mov rax, rdi
mul rdx
mov rax, rdx # wasted instruction, could have kept using RDX.
movabs rdx, 78187493547
shr rax, 36 # division result
imul rax, rdx # multiply and subtract to get the modulo
sub rdi, rax
mov rax, rdi
ret
x86's mul r64 instruction does 64b*64b => 128b (rdx:rax) multiplication, and can be used as a building block to construct a 128b * 128b => 256b multiply to implement the same algorithm. Since we only need the high half of the full 256b result, that saves a few multiplies.
Modern Intel CPUs have very high performance mul: 3c latency, one per clock throughput. However, the exact combination of shifts and adds required varies with the constant, so the general case of calculating a multiplicative inverse at run-time isn't quite as efficient each time it's used as a JIT-compiled or statically-compiled version (even on top of the pre-computation overhead).
IDK where the break-even point would be. For JIT-compiling, it will be higher than ~200 reuses, unless you cache generated code for commonly-used B values. For the "normal" way, it might possibly be in the range of 200 reuses, but IDK how expensive it would be to find a modular multiplicative inverse for 128-bit / 64-bit division.
libdivide can do this for you, but only for 32 and 64-bit types. Still, it's probably a good starting point.
I have made both versions of the Mod128by64 'Russian peasant' division function: classic and speed-optimised. The speed-optimised version can do more than 1,000,000 random calculations per second on my 3 GHz PC and is more than three times faster than the classic function.
If we compare the execution time of a 128-by-64 modulo with a 64-by-64-bit modulo, this function is only about 50% slower.
Classic Russian peasant:
function Mod128by64Clasic(Dividend: PUInt128; Divisor: PUInt64): UInt64;
//In : eax = #Dividend
// : edx = #Divisor
//Out: eax:edx as Remainder
asm
//Registers inside the routine
//edx:ebp = Divisor
//ecx = Loop counter
//Result = esi:edi
push ebx //Store registers to stack
push esi
push edi
push ebp
mov ebp, [edx] //Load divisor to edx:ebp
mov edx, [edx + 4]
mov ecx, ebp //Div by 0 test
or ecx, edx
jz #DivByZero
push [eax] //Store the Dividend on the stack
push [eax + 4]
push [eax + 8]
push [eax + 12]
xor edi, edi //Clear result
xor esi, esi
mov ecx, 128 //Load shift counter
#Do128BitsShift:
shl [esp + 12], 1 //Shift dividend from stack left for one bit
rcl [esp + 8], 1
rcl [esp + 4], 1
rcl [esp], 1
rcl edi, 1
rcl esi, 1
setc bh //Save 65th bit
sub edi, ebp //Compare dividend and divisor
sbb esi, edx //Subtract the divisor
sbb bh, 0 //Use 65th bit in bh
jnc #NoCarryAtCmp //Test...
add edi, ebp //Restore previous dividend state
adc esi, edx
#NoCarryAtCmp:
loop #Do128BitsShift
//End of 128 bit division loop
mov eax, edi //Load result to eax:edx
mov edx, esi
#RestoreRegisters:
lea esp, [esp + 16] //Release the Dividend copy from the stack
#PopRegisters:
pop ebp //Restore Registers
pop edi
pop esi
pop ebx
ret
#DivByZero:
xor eax, eax //Here you can raise a Div by 0 exception; for now the function just returns 0
xor edx, edx
jmp #PopRegisters //The dividend was never pushed on this path, so skip the stack adjustment
end;
Speed optimised Russian peasant:
function Mod128by64Oprimized(Dividend: PUInt128; Divisor: PUInt64): UInt64;
//In : eax = #Dividend
// : edx = #Divisor
//Out: eax:edx as Remainder
asm
//Registers inside the routine
//Divisor = edx:ebp
//Dividend = ebx:edx //We need 64 bits
//Result = esi:edi
//ecx = Loop counter and Dividend index
push ebx //Store registers to stack
push esi
push edi
push ebp
mov ebp, [edx] //Divisor = edx:ebp
mov edx, [edx + 4]
mov ecx, ebp //Div by 0 test
or ecx, edx
jz #DivByZero
xor edi, edi //Clear result
xor esi, esi
//Start of 64 bit division Loop
mov ecx, 15 //Load byte loop shift counter and Dividend index
#SkipShift8Bits: //Small Dividend numbers shift optimisation
cmp [eax + ecx], ch //Zero test
jnz #EndSkipShiftDividend
loop #SkipShift8Bits //Skip the unrolled Compute 8 Bits loop?
#EndSkipShiftDividend:
test edx, $FF000000 //Huge Divisor Numbers Shift Optimisation
jz #Shift8Bits //This Divisor is > $00FFFFFF:FFFFFFFF
mov ecx, 8 //Load byte shift counter
mov esi, [eax + 12] //Do fast 56 bit (7 bytes) shift...
shr esi, cl //esi = $00XXXXXX
mov edi, [eax + 9] //Load for one byte right shifted 32 bit value
#Shift8Bits:
mov bl, [eax + ecx] //Load 8 bit part of Dividend
//Compute 8 Bits unrolled loop
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove0 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow0
ja #DividentAbove0
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow0
#DividentAbove0:
sub edi, ebp //Subtract the divisor
sbb esi, edx
#DividentBelow0:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove1 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow1
ja #DividentAbove1
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow1
#DividentAbove1:
sub edi, ebp //Subtract the divisor
sbb esi, edx
#DividentBelow1:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove2 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow2
ja #DividentAbove2
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow2
#DividentAbove2:
sub edi, ebp //Subtract the divisor
sbb esi, edx
#DividentBelow2:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove3 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow3
ja #DividentAbove3
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow3
#DividentAbove3:
sub edi, ebp //Subtract the divisor
sbb esi, edx
#DividentBelow3:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove4 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow4
ja #DividentAbove4
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow4
#DividentAbove4:
sub edi, ebp //Subtract the divisor
sbb esi, edx
#DividentBelow4:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove5 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow5
ja #DividentAbove5
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow5
#DividentAbove5:
sub edi, ebp //Subtract the divisor
sbb esi, edx
#DividentBelow5:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove6 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow6
ja #DividentAbove6
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow6
#DividentAbove6:
sub edi, ebp //Subtract the divisor
sbb esi, edx
#DividentBelow6:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove7 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow7
ja #DividentAbove7
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow7
#DividentAbove7:
sub edi, ebp //Subtract the divisor
sbb esi, edx
#DividentBelow7:
//End of Compute 8 Bits (unroled loop)
dec cl //Decrement byte loop shift counter
jns #Shift8Bits //Last jump at cl = 0!!!
//End of division loop
mov eax, edi //Load result to eax:edx
mov edx, esi
#RestoreRegisters:
pop ebp //Restore Registers
pop edi
pop esi
pop ebx
ret
#DivByZero:
xor eax, eax //Here you can raise a Div by 0 exception; for now the function just returns 0
xor edx, edx
jmp #RestoreRegisters
end;
I'd like to share a few thoughts.
It's not as simple as MSN proposes, I'm afraid.
In the expression:
(((AH % B) * ((2^64 - B) % B)) + (AL % B)) % B
both multiplication and addition may overflow. I think one could take it into account and still use the general concept with some modifications, but something tells me it's going to get really scary.
I was curious how 64 bit modulo operation was implemented in MSVC and I tried to find something out. I don't really know assembly and all I had available was Express edition, without the source of VC\crt\src\intel\llrem.asm, but I think I managed to get some idea what's going on, after a bit of playing with the debugger and disassembly output. I tried to figure out how the remainder is calculated in case of positive integers and the divisor >=2^32. There is some code that deals with negative numbers of course, but I didn't dig into that.
Here is how I see it:
If divisor >= 2^32 both the dividend and the divisor are shifted right as much as necessary to fit the divisor into 32 bits. In other words: if it takes n digits to write the divisor down in binary and n > 32, n-32 least significant digits of both the divisor and the dividend are discarded. After that, the division is performed using hardware support for dividing 64 bit integers by 32 bit ones. The result might be incorrect, but I think it can be proved, that the result may be off by at most 1. After the division, the divisor (original one) is multiplied by the result and the product subtracted from the dividend. Then it is corrected by adding or subtracting the divisor if necessary (if the result of the division was off by one).
It's easy to divide 128 bit integer by 32 bit one leveraging hardware support for 64-bit by 32-bit division. In case the divisor < 2^32, one can calculate the remainder performing just 4 divisions as follows:
Let's assume the dividend is stored in:
DWORD dividend[4] = ...
the remainder will go into:
DWORD remainder;
1) Divide dividend[3] by divisor. Store the remainder in remainder.
2) Divide QWORD (remainder:dividend[2]) by divisor. Store the remainder in remainder.
3) Divide QWORD (remainder:dividend[1]) by divisor. Store the remainder in remainder.
4) Divide QWORD (remainder:dividend[0]) by divisor. Store the remainder in remainder.
After those 4 steps the variable remainder will hold what you are looking for.
(Please don't kill me if I got the endianness wrong. I'm not even a programmer.)
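A C rendering of those 4 steps might look like the sketch below, assuming the divisor is nonzero and fits in 32 bits and that dividend[0] is the least significant limb (the names are mine):
#include <stdint.h>

uint32_t mod128_by_32(const uint32_t dividend[4], uint32_t divisor)
{
    uint32_t remainder = 0;
    for (int i = 3; i >= 0; i--) {
        /* remainder < divisor, so the 64-by-32 quotient fits in 32 bits
           and a hardware DIV cannot fault */
        uint64_t chunk = ((uint64_t)remainder << 32) | dividend[i];
        remainder = (uint32_t)(chunk % divisor);
    }
    return remainder;
}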
In case the divisor is greater than 2^32-1 I don't have good news. I don't have a complete proof that, in the procedure I described earlier (which I believe MSVC is using), the result after the shift is off by no more than 1. I think, however, that it has something to do with the fact that the discarded part is at least 2^31 times less than the divisor, the dividend is less than 2^64 and the divisor is greater than 2^32-1, so the result is less than 2^32.
If the dividend has 128 bits the trick with discarding bits won't work. So in the general case the best solution is probably the one proposed by GJ or caf. (Well, it would probably be the best even if discarding bits worked. Division, multiplication, subtraction and correction on 128-bit integers might be slower.)
I was also thinking about using the floating point hardware. The x87 floating point unit uses an 80-bit format with a 64-bit fraction. I think one can get the exact result of a 64-bit by 64-bit division (not the remainder directly, but the remainder too, using multiplication and subtraction as in the "MSVC procedure"). If the dividend is >= 2^64 and < 2^128, storing it in the floating point format seems similar to discarding the least significant bits in the "MSVC procedure". Maybe someone can prove the error in that case is bounded and find it useful. I have no idea whether it has a chance to be faster than GJ's solution, but maybe it's worth a try.
The solution depends on what exactly you are trying to solve.
E.g. if you are doing arithmetic in a ring modulo a 64-bit integer then using
Montgomery reduction is very efficient. Of course this assumes that you use the same modulus many times and that it pays off to convert the elements of the ring into a special representation.
To give just a very rough estimate of the speed of this Montgomery reduction: I have an old benchmark that performs a modular exponentiation with a 64-bit modulus and exponent in 1600 ns on a 2.4 GHz Core 2. This exponentiation does about 96 modular multiplications (and modular reductions) and hence needs about 40 cycles per modular multiplication.
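For what it's worth, here is a rough C sketch of the Montgomery reduction step (REDC), assuming an odd modulus below 2^63 so the double-width intermediates fit in unsigned __int128 (a GCC/Clang extension); the names and the Newton iteration for the inverse are my own illustration, not taken from this answer:
#include <stdint.h>

static uint64_t neg_inv64(uint64_t n)        /* -n^{-1} mod 2^64, n must be odd */
{
    uint64_t x = n;                          /* correct to 3 bits for odd n */
    for (int i = 0; i < 5; i++)
        x *= 2 - n * x;                      /* each step doubles the number of correct bits */
    return (uint64_t)0 - x;
}

static uint64_t redc(unsigned __int128 t, uint64_t n, uint64_t ninv)
{
    /* for t < n * 2^64, returns t * 2^{-64} mod n */
    uint64_t m = (uint64_t)t * ninv;         /* chosen so that t + m*n is divisible by 2^64 */
    uint64_t r = (uint64_t)((t + (unsigned __int128)m * n) >> 64);
    return r >= n ? r - n : r;
}
With aR = a*2^64 mod n and bR = b*2^64 mod n, redc(aR*bR, n, neg_inv64(n)) yields (a*b)*2^64 mod n, so a chain of multiplications stays in Montgomery form and needs no division at all; getting a value into that form in the first place takes one ordinary 128-by-64 reduction, which is what the rest of this thread is about.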
The accepted answer by @caf was really nice and highly rated, yet it contained a bug that went unseen for years.
To help test that and other solutions, I am posting a test harness and making it community wiki.
unsigned cafMod(unsigned A, unsigned B) {
assert(B);
unsigned X = B;
// while (X < A / 2) { Original code used <
while (X <= A / 2) {
X <<= 1;
}
while (A >= B) {
if (A >= X) A -= X;
X >>= 1;
}
return A;
}
void cafMod_test(unsigned num, unsigned den) {
if (den == 0) return;
unsigned y0 = num % den;
unsigned y1 = cafMod(num, den);
if (y0 != y1) {
printf("FAIL num:%x den:%x %x %x\n", num, den, y0, y1);
fflush(stdout);
exit(-1);
}
}
unsigned rand_unsigned() {
unsigned x = (unsigned) rand();
return x * 2 ^ (unsigned) rand();
}
void cafMod_tests(void) {
const unsigned i[] = { 0, 1, 2, 3, 0x7FFFFFFF, 0x80000000,
UINT_MAX - 3, UINT_MAX - 2, UINT_MAX - 1, UINT_MAX };
for (unsigned den = 0; den < sizeof i / sizeof i[0]; den++) {
if (i[den] == 0) continue;
for (unsigned num = 0; num < sizeof i / sizeof i[0]; num++) {
cafMod_test(i[num], i[den]);
}
}
cafMod_test(0x8711dd11, 0x4388ee88);
cafMod_test(0xf64835a1, 0xf64835a);
time_t t;
time(&t);
srand((unsigned) t);
printf("%u\n", (unsigned) t);fflush(stdout);
for (long long n = 10000LL * 1000LL * 1000LL; n > 0; n--) {
cafMod_test(rand_unsigned(), rand_unsigned());
}
puts("Done");
}
int main(void) {
cafMod_tests();
return 0;
}
As a general rule, division is slow and multiplication is faster, and bit shifting is faster yet. From what I have seen of the answers so far, most of the answers have been using a brute force approach using bit-shifts. There exists another way. Whether it is faster remains to be seen (AKA profile it).
Instead of dividing, multiply by the reciprocal. Thus, to discover A % B, first calculate the reciprocal of B ... 1/B. This can be done with a few loops using the Newton-Raphson method of convergence. Doing this well depends on a good set of initial values in a table.
For more details on the Newton-Raphson method of converging on the reciprocal, please refer to http://en.wikipedia.org/wiki/Division_(digital)
Once you have the reciprocal, the quotient Q = A * 1/B.
The remainder R = A - Q*B.
To determine whether this would be faster than the brute-force approach (there will be many more multiplies, since we will be using 32-bit registers to simulate 64-bit and 128-bit numbers), profile it.
If B is constant in your code, you can pre-calculate the reciprocal and simply calculate using the last two formulae. This, I am sure, will be faster than bit shifting.
Hope this helps.
If 128-bit unsigned by 63-bit unsigned is good enough, then it can be done in a loop with at most 63 iterations.
Consider this a proposed solution to MSN's overflow problem, which limits it to 1 bit. We do so by splitting the problem in two: modular multiplication, then adding the results at the end.
In the following example upper corresponds to the most significant 64-bits, lower to the least significant 64-bits and div is the divisor.
#include <stdint.h>
uint64_t mod_128(uint64_t upper, uint64_t lower, uint64_t div) {
uint64_t result = 0;
uint64_t a = (~0%div)+1;
upper %= div; // the resulting bit-length determines number of cycles required
// first we work out modular multiplication of (2^64*upper)%div
while (upper != 0){
if(upper&1 == 1){
result += a;
if(result >= div){result -= div;}
}
a <<= 1;
if(a >= div){a -= div;}
upper >>= 1;
}
// add up the 2 results and return the modulus
if(lower>div){lower -= div;}
return (lower+result)%div;
}
The only problem is that, if the divisor is a full 64 bits, we get 1-bit overflows (loss of information), giving a faulty result.
It bugs me that I haven't figured out a neat way to handle the overflows.
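One way to handle that overflow (my own sketch, not part of the answer above) is to route every addition and doubling through a helper that reduces without ever wrapping, which makes the same idea valid for any nonzero 64-bit divisor:
#include <stdint.h>

/* (x + y) mod m for x and y already reduced below m; never wraps,
   so it is safe even when m needs all 64 bits */
static uint64_t addmod64(uint64_t x, uint64_t y, uint64_t m)
{
    return (x >= m - y) ? x - (m - y) : x + y;
}

uint64_t mod128_full64(uint64_t upper, uint64_t lower, uint64_t div)
{
    uint64_t result = 0;
    uint64_t a = ((uint64_t)0 - div) % div;   /* 2^64 mod div */
    upper %= div;
    while (upper != 0) {                      /* modular multiply: (upper * 2^64) mod div */
        if (upper & 1)
            result = addmod64(result, a, div);
        a = addmod64(a, a, div);              /* doubling, kept reduced mod div */
        upper >>= 1;
    }
    return addmod64(result, lower % div, div);
}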
I don't know how to compile the assembler code; any help compiling and testing it is appreciated.
I solved this problem by comparing against gmplib "mpz_mod()" and summing 1 million loop results. It was a long ride to go from a slowdown (speedup 0.12) to a speedup of 1.54 -- that is the reason I think the C code in this thread will be slow.
Details inclusive test harness in this thread:
https://www.raspberrypi.org/forums/viewtopic.php?f=33&t=311893&p=1873122#p1873122
This is "mod_256()" with speedup over using gmplib "mpz_mod()", use of __builtin_clzll() for longer shifts was essential:
typedef __uint128_t uint256_t[2];
#define min(x, y) ((x<y) ? (x) : (y))
int clz(__uint128_t u)
{
// unsigned long long h = ((unsigned long long *)&u)[1];
unsigned long long h = u >> 64;
return (h!=0) ? __builtin_clzll(h) : 64 + __builtin_clzll(u);
}
__uint128_t mod_256(uint256_t x, __uint128_t n)
{
if (x[1] == 0) return x[0] % n;
else
{
__uint128_t r = x[1] % n;
int F = clz(n);
int R = clz(r);
for(int i=0; i<128; ++i)
{
if (R>F+1)
{
int h = min(R-(F+1), 128-i);
r <<= h; R-=h; i+=(h-1); continue;
}
r <<= 1; if (r >= n) { r -= n; R=clz(r); }
}
r += (x[0] % n); if (r >= n) r -= n;
return r;
}
}
If you have a recent x86 machine, there are 128-bit registers for SSE2+. I've never tried to write assembly for anything other than basic x86, but I suspect there are some guides out there.
I am 9 years late to the battle, but here is an interesting O(1) edge case for powers of 2 that is worth mentioning.
#include <stdio.h>
// example with 32 bits and 8 bits.
int main() {
int i = 930;
unsigned char b = (unsigned char) i;
printf("%d", (int) b); // 162, same as 930 % 256
}
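In this thread's 128-by-64 setting, the same observation means that for B = 2^k with k < 64 only the low k bits of A's low half matter, so no division is needed at all (a trivial sketch of mine, not from the answer above):
#include <stdint.h>

uint64_t mod128_pow2(uint64_t lower, unsigned k)   /* A % 2^k for k < 64 */
{
    return lower & (((uint64_t)1 << k) - 1);
}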
Since there is no predefined 128-bit integer type in C, the bits of A have to be represented in an array. Although B (a 64-bit integer) can be stored in an unsigned long long int variable, it is necessary to put the bits of B into another array as well in order to work on A and B efficiently.
After that, B is scaled as Bx2, Bx3, Bx4, ... until it is the greatest such multiple that is still less than A. Then (A minus that multiple) can be calculated using base-2 subtraction.
Is this the kind of solution that you are looking for?

Examining code generated by the Visual Studio C++ compiler, part 1 [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Why is such complex code emitted for dividing a signed integer by a power of two?
Background
I'm just learning x86 asm by examining the binary code generated by the compiler.
Code compiled using the C++ compiler in Visual Studio 2010 beta 2.
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 16.00.21003.01 for 80x86
C code (sandbox.c)
int mainCRTStartup()
{
int x=5;int y=1024;
while(x) { x--; y/=2; }
return x+y;
}
Compile it using the Visual Studio Command Prompt
cl /c /O2 /Oy- /MD sandbox.c
link /NODEFAULTLIB /MANIFEST:NO /SUBSYSTEM:CONSOLE sandbox.obj
Disasm sandbox.exe in OllyDbg
The following starts from the entry point.
00401000 >/$ B9 05000000 MOV ECX,5
00401005 |. B8 00040000 MOV EAX,400
0040100A |. 8D9B 00000000 LEA EBX,DWORD PTR DS:[EBX]
00401010 |> 99 /CDQ
00401011 |. 2BC2 |SUB EAX,EDX
00401013 |. D1F8 |SAR EAX,1
00401015 |. 49 |DEC ECX
00401016 |.^75 F8 \JNZ SHORT sandbox.00401010
00401018 \. C3 RETN
Examination
MOV ECX, 5 int x=5;
MOV EAX, 400 int y=1024;
LEA ... // no idea what LEA does here. seems like ebx=ebx. elaborate please.
// in fact, NOPing it does nothing to the original procedure and the values.
CDQ // sign extends EAX into EDX:EAX, which here: edx = 0. no idea why.
SUB EAX, EDX // eax=eax-edx, here: eax=eax-0. no idea, pretty redundant.
SAR EAX,1 // okay, y/= 2
DEC ECX // okay, x--, sets the zero flag when reaches 0.
JNZ ... // okay, jump back to CDQ if the zero flag is not set.
This part bothers me:
0040100A |. 8D9B 00000000 LEA EBX,DWORD PTR DS:[EBX]
00401010 |> 99 /CDQ
00401011 |. 2BC2 |SUB EAX,EDX
You can nop it all and the values of EAX and ECX will remain the same at the end. So, what's the point of these instructions?
The whole thing
00401010 |> 99 /CDQ
00401011 |. 2BC2 |SUB EAX,EDX
00401013 |. D1F8 |SAR EAX,1
stands for the y /= 2. You see, a standalone SAR would not perform the signed integer division the way the compiler authors intended. C++98 standard recommends that signed integer division rounds the result towards 0, while SAR alone would round towards the negative infinity. (It is permissible to round towards negative infinity, the choice is left to the implementation). In order to implement rounding to 0 for negative operands, the above trick is used. If you use an unsigned type instead of a signed one, then the compiler will generate just a single shift instruction, since the issue with negative division will not take place.
The trick is pretty simple: for negative y sign extension will place a pattern of 11111...1 in EDX, which is actually -1 in 2's complement representation. The following SUB will effectively add 1 to EAX if the original y value was negative. If the original y was positive (or 0), the EDX will hold 0 after the sign extension and EAX will remain unchanged.
In other words, when you write y /= 2 with signed y, the compiler generates the code that does something more like the following
y = (y < 0 ? y + 1 : y) >> 1;
or, better
y = (y + (y < 0)) >> 1;
Note, that C++ standard does not require the result of the division to be rounded towards zero, so the compiler has the right to do just a single shift even for signed types. However, normally compilers follow the recommendation to round towards zero (or offer an option to control the behavior).
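As a concrete check of the rounding difference (my own example, not from this answer): for y = -7, a plain arithmetic shift rounds toward negative infinity, while the corrected form matches C's truncating division. (Right-shifting a negative int is technically implementation-defined in C, but x86 compilers implement it as an arithmetic shift.)
#include <stdio.h>

int main(void) {
    int y = -7;
    printf("%d\n", y >> 1);             /* -4: plain SAR, rounds toward -infinity */
    printf("%d\n", (y + (y < 0)) >> 1); /* -3: what the CDQ/SUB/SAR sequence computes */
    printf("%d\n", y / 2);              /* -3: what the C expression requires */
    return 0;
}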
P.S. I don't know for sure what the purpose of that LEA instruction is. It is indeed a no-op. However, I suspect that this might be just a placeholder instruction inserted into the code for further patching. If I remember correctly, the MS compiler has an option that forces the insertion of placeholder instructions at the beginning and at the end of each function. In the future this instruction can be overwritten by the patcher with a CALL or JMP instruction that will execute the patch code. This specific LEA was chosen just because it produces a no-op placeholder instruction of the correct length. Of course, it could be something completely different.
The lea ebx,[ebx] is just a NOP operation. Its purpose is to align the beginning of the loop in memory, which will make it faster. As you can see here, the beginning of the loop starts at address 0x00401010, which is divisible by 16, thanks to this instruction.
The CDQ and SUB EAX,EDX operations make sure that the division will round a negative number towards zero - otherwise SAR would round it down, giving incorrect results for negative numbers.
The reason that the compiler emits this:
LEA EBX,DWORD PTR DS:[EBX]
instead of the semantically equivalent:
NOP
NOP
NOP
NOP
NOP
NOP
...is that it's faster for the processor to execute one 6-byte instruction than six 1-byte instructions. That's all.
This doesn't really answer the question, but is a helpful hint. Instead of mucking around with the OllyDbg.exe thing, you can make Visual Studio generate the asm file for you, which has the added bonus that it can put in the original source code as comments. This isn't a big deal for your current small project, but as your project grows, you may end up spending a fair amount of time figuring out which assembly code matches which source code.
From the command line, you want the /FAs and /Fa options (MSDN).
Here's part of the output for your example code (I compiled debug code, so the .asm is longer, but you can do the same thing for your optimized code):
_wmain PROC ; COMDAT
; 8 : {
push ebp
mov ebp, esp
sub esp, 216 ; 000000d8H
push ebx
push esi
push edi
lea edi, DWORD PTR [ebp-216]
mov ecx, 54 ; 00000036H
mov eax, -858993460 ; ccccccccH
rep stosd
; 9 : int x=5; int y=1024;
mov DWORD PTR _x$[ebp], 5
mov DWORD PTR _y$[ebp], 1024 ; 00000400H
$LN2#wmain:
; 10 : while(x) { x--; y/=2; }
cmp DWORD PTR _x$[ebp], 0
je SHORT $LN1#wmain
mov eax, DWORD PTR _x$[ebp]
sub eax, 1
mov DWORD PTR _x$[ebp], eax
mov eax, DWORD PTR _y$[ebp]
cdq
sub eax, edx
sar eax, 1
mov DWORD PTR _y$[ebp], eax
jmp SHORT $LN2#wmain
$LN1#wmain:
; 11 : return x+y;
mov eax, DWORD PTR _x$[ebp]
add eax, DWORD PTR _y$[ebp]
; 12 : }
pop edi
pop esi
pop ebx
mov esp, ebp
pop ebp
ret 0
_wmain ENDP
Hope that helps!
