Fastest way to calculate a 128-bit integer modulo a 64-bit integer - c

I have a 128-bit unsigned integer A and a 64-bit unsigned integer B. What's the fastest way to calculate A % B - that is, the (64-bit) remainder from dividing A by B?
I'm looking to do this in either C or assembly language, but I need to target the 32-bit x86 platform. This unfortunately means that I cannot take advantage of compiler support for 128-bit integers, nor of the x64 architecture's ability to perform the required operation in a single instruction.
Edit:
Thank you for the answers so far. However, it appears to me that the suggested algorithms would be quite slow - wouldn't the fastest way to perform a 128-bit by 64-bit division be to leverage the processor's native support for 64-bit by 32-bit division? Does anyone know if there is a way to perform the larger division in terms of a few smaller divisions?
Re: How often does B change?
Primarily I'm interested in a general solution - what calculation would you perform if A and B are likely to be different every time?
However, a second possible situation is that B does not vary as often as A - there may be as many as 200 As to divide by each B. How would your answer differ in this case?

You can use the division version of Russian Peasant Multiplication.
To find the remainder, execute (in pseudo-code):
X = B;
while (X <= A/2)
{
    X <<= 1;
}
while (A >= B)
{
    if (A >= X)
        A -= X;
    X >>= 1;
}
The modulus is left in A.
You'll need to implement the shifts, comparisons and subtractions to operate on values made up of a pair of 64-bit numbers, but that's fairly trivial (likely you should implement the left-shift-by-1 as X + X).
This will loop at most 255 times (with a 128-bit A). Of course you need to do a pre-check for a zero divisor.
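For illustration, here is one possible C shape of that, written as a hedged sketch with the 128-bit value held as a high/low pair of uint64_t (the struct and helper names are mine, not part of the answer):

#include <stdint.h>

typedef struct { uint64_t hi, lo; } u128;   /* illustrative 128-bit pair */

static int  u128_ge(u128 a, u128 b) { return a.hi != b.hi ? a.hi > b.hi : a.lo >= b.lo; }
static u128 u128_add(u128 a, u128 b) {      /* used to do X <<= 1 as X + X */
    u128 r = { a.hi + b.hi + (a.lo + b.lo < a.lo), a.lo + b.lo };
    return r;
}
static u128 u128_sub(u128 a, u128 b) {
    u128 r = { a.hi - b.hi - (a.lo < b.lo), a.lo - b.lo };
    return r;
}
static u128 u128_shr1(u128 a) {
    u128 r = { a.hi >> 1, (a.lo >> 1) | (a.hi << 63) };
    return r;
}

uint64_t mod128by64(u128 A, uint64_t B) {   /* caller must ensure B != 0 */
    u128 X = { 0, B }, Bw = { 0, B };
    u128 halfA = u128_shr1(A);
    while (u128_ge(halfA, X))               /* while (X <= A/2) */
        X = u128_add(X, X);
    while (u128_ge(A, Bw)) {                /* while (A >= B)   */
        if (u128_ge(A, X))
            A = u128_sub(A, X);
        X = u128_shr1(X);
    }
    return A.lo;                            /* final A < B, so it fits in 64 bits */
}

Note that the left shift of X is done as X + X, as suggested above.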

Perhaps you're looking for a finished program, but the basic algorithms for multi-precision arithmetic can be found in Knuth's Art of Computer Programming, Volume 2. You can find the division algorithm described online here. The algorithms deal with arbitrary multi-precision arithmetic, and so are more general than you need, but you should be able to simplify them for 128 bit arithmetic done on 64- or 32-bit digits. Be prepared for a reasonable amount of work (a) understanding the algorithm, and (b) converting it to C or assembler.
You might also want to check out Hacker's Delight, which is full of very clever assembler and other low-level hackery, including some multi-precision arithmetic.

If your B is small enough for the uint64_t + operation to not wrap:
Given A = AH*2^64 + AL:
A % B == (((AH % B) * (2^64 % B)) + (AL % B)) % B
== (((AH % B) * ((2^64 - B) % B)) + (AL % B)) % B
If your compiler supports 64-bit integers, then this is probably the easiest way to go.
MSVC's implementation of a 64-bit modulo on 32-bit x86 is some hairy, loop-filled assembly (VC\crt\src\intel\llrem.asm for the brave), so I'd personally go with that.
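As a minimal sketch of that identity, assuming B < 2^32 so that neither the multiply nor the add can wrap a uint64_t (a stricter bound than "small enough", chosen so the sketch is obviously safe; the names are mine):

#include <stdint.h>

/* A = a_hi*2^64 + a_lo, B = b (non-zero, below 2^32). */
uint64_t mod_via_split(uint64_t a_hi, uint64_t a_lo, uint64_t b) {
    uint64_t two64_mod_b = (UINT64_MAX % b + 1) % b;   /* 2^64 % b, computed without overflow */
    return ((a_hi % b) * two64_mod_b + a_lo % b) % b;  /* the identity above */
}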

This is an almost untested, partly speed-optimised Mod128by64 'Russian peasant' algorithm function. Unfortunately I'm a Delphi user, so this function works under Delphi. :) But the assembler is almost the same, so...
function Mod128by64(Dividend: PUInt128; Divisor: PUInt64): UInt64;
//In : eax = #Dividend
// : edx = #Divisor
//Out: eax:edx as Remainder
asm
//Registers inside routine
//Divisor = edx:ebp
//Dividend = bh:ebx:edx //We need 64 bits + 1 bit in bh
//Result = esi:edi
//ecx = Loop counter and Dividend index
push ebx //Store registers to stack
push esi
push edi
push ebp
mov ebp, [edx] //Divisor = edx:ebp
mov edx, [edx + 4]
mov ecx, ebp //Div by 0 test
or ecx, edx
jz #DivByZero
xor edi, edi //Clear result
xor esi, esi
//Start of 64 bit division Loop
mov ecx, 15 //Load byte loop shift counter and Dividend index
#SkipShift8Bits: //Small Dividend numbers shift optimisation
cmp [eax + ecx], ch //Zero test
jnz #EndSkipShiftDividend
loop #SkipShift8Bits //Skip 8 bit loop
#EndSkipShiftDividend:
test edx, $FF000000 //Huge Divisor Numbers Shift Optimisation
jz #Shift8Bits //Skip if this Divisor is <= $00FFFFFF:FFFFFFFF
mov ecx, 8 //Load byte shift counter
mov esi, [eax + 12] //Do fast 56 bit (7 bytes) shift...
shr esi, cl //esi = $00XXXXXX
mov edi, [eax + 9] //Load a 32 bit value shifted right by one byte
#Shift8Bits:
mov bl, [eax + ecx] //Load 8 bits of Dividend
//Here we can unroll the partial 8 bit division loop to increase execution speed...
mov ch, 8 //Set partial byte counter value
#Do65BitsShift:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
setc bh //Save 65th bit
sub edi, ebp //Compare dividend and divisor
sbb esi, edx //Subtract the divisor
sbb bh, 0 //Use 65th bit in bh
jnc #NoCarryAtCmp //Test...
add edi, ebp //Restore previous dividend state
adc esi, edx
#NoCarryAtCmp:
dec ch //Decrement counter
jnz #Do65BitsShift
//End of 8 bit (byte) partial division loop
dec cl //Decrement byte loop shift counter
jns #Shift8Bits //Last jump at cl = 0!!!
//End of 64 bit division loop
mov eax, edi //Load result to eax:edx
mov edx, esi
#RestoreRegisters:
pop ebp //Restore Registers
pop edi
pop esi
pop ebx
ret
#DivByZero:
xor eax, eax //Here you could raise a Div by 0 exception; for now the function just returns 0.
xor edx, edx
jmp #RestoreRegisters
end;
At least one more speed optimisation is possible! After the 'Huge Divisor Numbers Shift Optimisation' we can test the divisor's high bit: if it is 0, we do not need to use the extra bh register to store the 65th bit. So the unrolled part of the loop can look like:
shl bl,1 //Shift dividend left for one bit
rcl edi,1
rcl esi,1
sub edi, ebp //Compare dividend and divisor
sbb esi, edx //Subtract the divisor
jnc #NoCarryAtCmpX
add edi, ebp //Restore previous dividend state
adc esi, edx
#NoCarryAtCmpX:

I know the question specified 32-bit code, but the answer for 64-bit may be useful or interesting to others.
And yes, 64b/32b => 32b division does make a useful building-block for 128b % 64b => 64b. libgcc's __umoddi3 (source linked below) gives an idea of how to do that sort of thing, but it only implements 2N % 2N => 2N on top of a 2N / N => N division, not 4N % 2N => 2N.
Wider multi-precision libraries are available, e.g. https://gmplib.org/manual/Integer-Division.html#Integer-Division.
GNU C on 64-bit machines does provide an __int128 type, and libgcc functions to multiply and divide as efficiently as possible on the target architecture.
x86-64's div r/m64 instruction does 128b/64b => 64b division (also producing remainder as a second output), but it faults if the quotient overflows. So you can't directly use it if A/B > 2^64-1, but you can get gcc to use it for you (or even inline the same code that libgcc uses).
This compiles (Godbolt compiler explorer) to one or two div instructions (which happen inside a libgcc function call). If there was a faster way, libgcc would probably use that instead.
#include <stdint.h>
uint64_t AmodB(unsigned __int128 A, uint64_t B) {
    return A % B;
}
The __umodti3 function it calls calculates a full 128b/128b modulo, but the implementation of that function does check for the special case where the divisor's high half is 0, as you can see in the libgcc source. (libgcc builds the si/di/ti version of the function from that code, as appropriate for the target architecture. udiv_qrnnd is an inline asm macro that does unsigned 2N/N => N division for the target architecture.)
For x86-64 (and other architectures with a hardware divide instruction), the fast-path (when high_half(A) < B; guaranteeing div won't fault) is just two not-taken branches, some fluff for out-of-order CPUs to chew through, and a single div r64 instruction, which takes about 50-100 cycles (footnote 1) on modern x86 CPUs, according to Agner Fog's insn tables. Some other work can be happening in parallel with div, but the integer divide unit is not very pipelined and div decodes to a lot of uops (unlike FP division).
The fallback path still only uses two 64-bit div instructions for the case where B is only 64-bit, but A/B doesn't fit in 64 bits so A/B directly would fault.
Note that libgcc's __umodti3 just inlines __udivmoddi4 into a wrapper that only returns the remainder.
Footnote 1: 32-bit div is over 2x faster on Intel CPUs. On AMD CPUs, performance only depends on the size of the actual input values, even if they're small values in a 64-bit register. If small values are common, it might be worth benchmarking a branch to a simple 32-bit division version before doing 64-bit or 128-bit division.
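A hedged sketch of what that small-values branch might look like (the function name and cutoff are mine, and whether it actually wins needs benchmarking):

#include <stdint.h>

uint64_t mod_small_fast(uint64_t a, uint64_t b) {
    if ((a | b) <= UINT32_MAX)                /* both values fit in 32 bits        */
        return (uint32_t)a % (uint32_t)b;     /* 32-bit division, cheaper on Intel */
    return a % b;                             /* otherwise the full 64-bit division */
}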
For repeated modulo by the same B
It might be worth considering calculating a fixed-point multiplicative inverse for B, if one exists. For example, with compile-time constants, gcc does the optimization for types narrower than 128b.
uint64_t modulo_by_constant64(uint64_t A) { return A % 0x12345678ABULL; }
movabs rdx, -2233785418547900415
mov rax, rdi
mul rdx
mov rax, rdx # wasted instruction, could have kept using RDX.
movabs rdx, 78187493547
shr rax, 36 # division result
imul rax, rdx # multiply and subtract to get the modulo
sub rdi, rax
mov rax, rdi
ret
x86's mul r64 instruction does 64b*64b => 128b (rdx:rax) multiplication, and can be used as a building block to construct a 128b * 128b => 256b multiply to implement the same algorithm. Since we only need the high half of the full 256b result, that saves a few multiplies.
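For reference, the 64b*64b => high-half building block is trivial to express in GNU C with __int128; the high half is what mul r64 leaves in RDX (a sketch, not libgcc's code):

#include <stdint.h>

static inline uint64_t mulhi64(uint64_t a, uint64_t b) {
    return (uint64_t)(((unsigned __int128)a * b) >> 64);  /* high 64 bits of the 128-bit product */
}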
Modern Intel CPUs have very high performance mul: 3c latency, one per clock throughput. However, the exact combination of shifts and adds required varies with the constant, so the general case of calculating a multiplicative inverse at run-time isn't quite as efficient each time it's used as a JIT-compiled or statically-compiled version (even on top of the pre-computation overhead).
IDK where the break-even point would be. For JIT-compiling, it will be higher than ~200 reuses, unless you cache generated code for commonly-used B values. For the "normal" way, it might possibly be in the range of 200 reuses, but IDK how expensive it would be to find a modular multiplicative inverse for 128-bit / 64-bit division.
libdivide can do this for you, but only for 32 and 64-bit types. Still, it's probably a good starting point.

I have made both versions of the Mod128by64 'Russian peasant' division function: classic and speed optimised. The speed-optimised one can do more than 1,000,000 random calculations per second on my 3 GHz PC and is more than three times faster than the classic function.
If we compare the execution time of a 128 by 64 with that of a 64 by 64 bit modulo, then this function is only about 50% slower.
Classic Russian peasant:
function Mod128by64Classic(Dividend: PUInt128; Divisor: PUInt64): UInt64;
//In : eax = #Dividend
// : edx = #Divisor
//Out: eax:edx as Remainder
asm
//Registers inside routine
//edx:ebp = Divisor
//ecx = Loop counter
//Result = esi:edi
push ebx //Store registers to stack
push esi
push edi
push ebp
mov ebp, [edx] //Load divisor to edx:ebp
mov edx, [edx + 4]
mov ecx, ebp //Div by 0 test
or ecx, edx
jz #DivByZero
push [eax] //Store Dividend on the stack
push [eax + 4]
push [eax + 8]
push [eax + 12]
xor edi, edi //Clear result
xor esi, esi
mov ecx, 128 //Load shift counter
#Do128BitsShift:
shl [esp + 12], 1 //Shift dividend from stack left for one bit
rcl [esp + 8], 1
rcl [esp + 4], 1
rcl [esp], 1
rcl edi, 1
rcl esi, 1
setc bh //Save 65th bit
sub edi, ebp //Compare dividend and divisor
sbb esi, edx //Subtract the divisor
sbb bh, 0 //Use 65th bit in bh
jnc #NoCarryAtCmp //Test...
add edi, ebp //Restore previous dividend state
adc esi, edx
#NoCarryAtCmp:
loop #Do128BitsShift
//End of 128 bit division loop
mov eax, edi //Load result to eax:edx
mov edx, esi
lea esp, [esp + 16] //Release the Dividend copy from the stack (not needed on the DivByZero path)
#RestoreRegisters:
pop ebp //Restore Registers
pop edi
pop esi
pop ebx
ret
#DivByZero:
xor eax, eax //Here you could raise a Div by 0 exception; for now the function just returns 0.
xor edx, edx
jmp #RestoreRegisters
end;
Speed optimised Russian peasant:
function Mod128by64Optimised(Dividend: PUInt128; Divisor: PUInt64): UInt64;
//In : eax = #Dividend
// : edx = #Divisor
//Out: eax:edx as Remainder
asm
//Registers inside routine
//Divisor = edx:ebp
//Dividend = ebx:edx //We need 64 bits
//Result = esi:edi
//ecx = Loop counter and Dividend index
push ebx //Store registers to stack
push esi
push edi
push ebp
mov ebp, [edx] //Divisor = edx:ebp
mov edx, [edx + 4]
mov ecx, ebp //Div by 0 test
or ecx, edx
jz #DivByZero
xor edi, edi //Clear result
xor esi, esi
//Start of 64 bit division Loop
mov ecx, 15 //Load byte loop shift counter and Dividend index
#SkipShift8Bits: //Small Dividend numbers shift optimisation
cmp [eax + ecx], ch //Zero test
jnz #EndSkipShiftDividend
loop #SkipShift8Bits //Skip the 'Compute 8 Bits' unrolled loop?
#EndSkipShiftDividend:
test edx, $FF000000 //Huge Divisor Numbers Shift Optimisation
jz #Shift8Bits //Skip if this Divisor is <= $00FFFFFF:FFFFFFFF
mov ecx, 8 //Load byte shift counter
mov esi, [eax + 12] //Do fast 56 bit (7 bytes) shift...
shr esi, cl //esi = $00XXXXXX
mov edi, [eax + 9] //Load a 32 bit value shifted right by one byte
#Shift8Bits:
mov bl, [eax + ecx] //Load 8 bit part of Dividend
//Compute 8 Bits unrolled loop
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove0 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow0
ja #DividentAbove0
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow0
#DividentAbove0:
sub edi, ebp //Subtract the divisor (dividend >= divisor here)
sbb esi, edx
#DividentBelow0:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove1 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow1
ja #DividentAbove1
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow1
#DividentAbove1:
sub edi, ebp //Subtract the divisor (dividend >= divisor here)
sbb esi, edx
#DividentBelow1:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove2 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow2
ja #DividentAbove2
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow2
#DividentAbove2:
sub edi, ebp //Subtract the divisor (dividend >= divisor here)
sbb esi, edx
#DividentBelow2:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove3 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow3
ja #DividentAbove3
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow3
#DividentAbove3:
sub edi, ebp //Subtract the divisor (dividend >= divisor here)
sbb esi, edx
#DividentBelow3:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove4 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow4
ja #DividentAbove4
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow4
#DividentAbove4:
sub edi, ebp //Subtract the divisor (dividend >= divisor here)
sbb esi, edx
#DividentBelow4:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove5 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow5
ja #DividentAbove5
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow5
#DividentAbove5:
sub edi, ebp //Subtract the divisor (dividend >= divisor here)
sbb esi, edx
#DividentBelow5:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove6 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow6
ja #DividentAbove6
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow6
#DividentAbove6:
sub edi, ebp //Subtract the divisor (dividend >= divisor here)
sbb esi, edx
#DividentBelow6:
shl bl, 1 //Shift dividend left for one bit
rcl edi, 1
rcl esi, 1
jc #DividentAbove7 //dividend hi bit set?
cmp esi, edx //dividend hi part larger?
jb #DividentBelow7
ja #DividentAbove7
cmp edi, ebp //dividend lo part larger?
jb #DividentBelow7
#DividentAbove7:
sub edi, ebp //Subtract the divisor (dividend >= divisor here)
sbb esi, edx
#DividentBelow7:
//End of Compute 8 Bits (unrolled loop)
dec cl //Decrement byte loop shift counter
jns #Shift8Bits //Last jump at cl = 0!!!
//End of division loop
mov eax, edi //Load result to eax:edx
mov edx, esi
#RestoreRegisters:
pop ebp //Restore Registers
pop edi
pop esi
pop ebx
ret
#DivByZero:
xor eax, eax //Here you could raise a Div by 0 exception; for now the function just returns 0.
xor edx, edx
jmp #RestoreRegisters
end;

I'd like to share a few thoughts.
It's not as simple as MSN proposes, I'm afraid.
In the expression:
(((AH % B) * ((2^64 - B) % B)) + (AL % B)) % B
both multiplication and addition may overflow. I think one could take it into account and still use the general concept with some modifications, but something tells me it's going to get really scary.
I was curious how the 64-bit modulo operation was implemented in MSVC, so I tried to find out. I don't really know assembly, and all I had available was the Express edition, without the source of VC\crt\src\intel\llrem.asm, but I think I managed to get some idea of what's going on after a bit of playing with the debugger and the disassembly output. I tried to figure out how the remainder is calculated in the case of positive integers and a divisor >= 2^32. There is some code that deals with negative numbers, of course, but I didn't dig into that.
Here is how I see it:
If the divisor is >= 2^32, both the dividend and the divisor are shifted right as much as necessary to fit the divisor into 32 bits. In other words: if it takes n digits to write the divisor down in binary and n > 32, the n-32 least significant digits of both the divisor and the dividend are discarded. After that, the division is performed using the hardware support for dividing 64-bit integers by 32-bit ones. The result might be incorrect, but I think it can be proved that the result may be off by at most 1. After the division, the divisor (the original one) is multiplied by the result and the product is subtracted from the dividend. Then it is corrected by adding or subtracting the divisor if necessary (if the result of the division was off by one).
It's easy to divide a 128-bit integer by a 32-bit one by leveraging the hardware support for 64-bit by 32-bit division. In case the divisor is < 2^32, one can calculate the remainder by performing just 4 divisions as follows:
Let's assume the dividend is stored in:
DWORD dividend[4] = ...
the remainder will go into:
DWORD remainder;
1) Divide dividend[3] by divisor. Store the remainder in remainder.
2) Divide QWORD (remainder:dividend[2]) by divisor. Store the remainder in remainder.
3) Divide QWORD (remainder:dividend[1]) by divisor. Store the remainder in remainder.
4) Divide QWORD (remainder:dividend[0]) by divisor. Store the remainder in remainder.
After those 4 steps the variable remainder will hold what you are looking for.
(Please don't kill me if I got the endianness wrong. I'm not even a programmer.)
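A sketch of those four steps in C, with the dividend held as four 32-bit digits and dividend[3] the most significant, as above (the function name is mine; each chunk % divisor is the 64-bit by 32-bit division being described, though a compiler may or may not map it to a single DIV):

#include <stdint.h>

uint32_t mod128_by_small(const uint32_t dividend[4], uint32_t divisor) {  /* divisor < 2^32, non-zero */
    uint32_t remainder = dividend[3] % divisor;                 /* step 1: most significant digit */
    for (int i = 2; i >= 0; i--) {                              /* steps 2-4 */
        uint64_t chunk = ((uint64_t)remainder << 32) | dividend[i];
        remainder = (uint32_t)(chunk % divisor);                /* QWORD (remainder:dividend[i]) % divisor */
    }
    return remainder;
}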
In case the divisor is greater than 2^32-1, I don't have good news. I don't have a complete proof that the result after the shift is off by no more than 1 in the procedure I described earlier, which I believe MSVC is using. I think, however, that it has something to do with the fact that the part that is discarded is at least 2^31 times smaller than the divisor, the dividend is less than 2^64 and the divisor is greater than 2^32-1, so the result is less than 2^32.
If the dividend has 128 bits, the trick with discarding bits won't work. So in the general case the best solution is probably the one proposed by GJ or caf. (Well, it would probably be the best even if discarding bits worked. Division, multiplication, subtraction and correction on 128-bit integers might be slower.)
I was also thinking about using the floating point hardware. The x87 floating point unit uses an 80-bit precision format with a fraction 64 bits long. I think one can get the exact result of a 64-bit by 64-bit division. (Not the remainder directly, but also the remainder, using multiplication and subtraction as in the "MSVC procedure".) If the dividend is >= 2^64 and < 2^128, storing it in the floating point format seems similar to discarding the least significant bits in the "MSVC procedure". Maybe someone can prove that the error in that case is bounded and find it useful. I have no idea whether it has a chance to be faster than GJ's solution, but maybe it's worth trying.

The solution depends on what exactly you are trying to solve.
E.g. if you are doing arithmetic in a ring modulo a 64-bit integer, then using
Montgomery reduction is very efficient. Of course this assumes that you use the same modulus many times and that it pays off to convert the elements of the ring into a special representation.
To give just a very rough estimate of the speed of this Montgomery reduction: I have an old benchmark that performs a modular exponentiation with a 64-bit modulus and exponent in 1600 ns on a 2.4 GHz Core 2. This exponentiation does about 96 modular multiplications (and modular reductions) and hence needs about 40 cycles per modular multiplication.
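To give a concrete feel for what that looks like in C, here is a minimal sketch of Montgomery multiplication for a 64-bit modulus, assuming GCC/Clang's unsigned __int128 and, to keep the carry handling trivial, an odd modulus below 2^63 (all names are mine, and the conversion of operands into and out of Montgomery form is not shown):

#include <stdint.h>

/* m' = -m^-1 mod 2^64, computed with Newton's iteration (m must be odd). */
static uint64_t mont_mprime(uint64_t m) {
    uint64_t inv = m;               /* correct to 3 bits because m*m == 1 (mod 8) for odd m */
    for (int i = 0; i < 5; i++)     /* each step doubles the number of correct bits */
        inv *= 2 - m * inv;
    return 0 - inv;                 /* negate to get -m^-1 mod 2^64 */
}

/* REDC: given T < m*2^64, return T * 2^-64 mod m. */
static uint64_t mont_redc(unsigned __int128 T, uint64_t m, uint64_t mprime) {
    uint64_t u = (uint64_t)T * mprime;                     /* u = T*m' mod 2^64             */
    unsigned __int128 t = (T + (unsigned __int128)u * m) >> 64;  /* low 64 bits cancel exactly */
    return (uint64_t)(t >= m ? t - m : t);                 /* one conditional subtraction   */
}

/* Modular multiplication of two values already in Montgomery form (both < m). */
static uint64_t mont_mul(uint64_t a, uint64_t b, uint64_t m, uint64_t mprime) {
    return mont_redc((unsigned __int128)a * b, m, mprime);
}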

The accepted answer by caf was really nice and highly rated, yet it contained a bug not seen for years.
To help test that and other solutions, I am posting a test harness and making it community wiki.
#include <assert.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

unsigned cafMod(unsigned A, unsigned B) {
    assert(B);
    unsigned X = B;
    // while (X < A / 2) {  Original code used <
    while (X <= A / 2) {
        X <<= 1;
    }
    while (A >= B) {
        if (A >= X) A -= X;
        X >>= 1;
    }
    return A;
}

void cafMod_test(unsigned num, unsigned den) {
    if (den == 0) return;
    unsigned y0 = num % den;
    unsigned y1 = cafMod(num, den);
    if (y0 != y1) {
        printf("FAIL num:%x den:%x %x %x\n", num, den, y0, y1);
        fflush(stdout);
        exit(-1);
    }
}

unsigned rand_unsigned() {
    unsigned x = (unsigned) rand();
    return x * 2 ^ (unsigned) rand();
}

void cafMod_tests(void) {
    const unsigned i[] = { 0, 1, 2, 3, 0x7FFFFFFF, 0x80000000,
            UINT_MAX - 3, UINT_MAX - 2, UINT_MAX - 1, UINT_MAX };
    for (unsigned den = 0; den < sizeof i / sizeof i[0]; den++) {
        if (i[den] == 0) continue;
        for (unsigned num = 0; num < sizeof i / sizeof i[0]; num++) {
            cafMod_test(i[num], i[den]);
        }
    }
    cafMod_test(0x8711dd11, 0x4388ee88);
    cafMod_test(0xf64835a1, 0xf64835a);

    time_t t;
    time(&t);
    srand((unsigned) t);
    printf("%u\n", (unsigned) t);
    fflush(stdout);
    for (long long n = 10000LL * 1000LL * 1000LL; n > 0; n--) {
        cafMod_test(rand_unsigned(), rand_unsigned());
    }
    puts("Done");
}

int main(void) {
    cafMod_tests();
    return 0;
}

As a general rule, division is slow, multiplication is faster, and bit shifting is faster yet. From what I have seen of the answers so far, most have been using a brute-force approach with bit-shifts. There exists another way. Whether it is faster remains to be seen (AKA profile it).
Instead of dividing, multiply by the reciprocal. Thus, to discover A % B, first calculate the reciprocal of B ... 1/B. This can be done with a few loops using the Newton-Raphson method of convergence. Doing this well will depend upon a good set of initial values in a table.
For more details on the Newton-Raphson method of converging on the reciprocal, please refer to http://en.wikipedia.org/wiki/Division_(digital)
Once you have the reciprocal, the quotient Q = A * 1/B.
The remainder R = A - Q*B.
To determine whether this would be faster than the brute-force approach (as there will be many more multiplies, since we will be using 32-bit registers to simulate 64-bit and 128-bit numbers), profile it.
If B is constant in your code, you can pre-calculate the reciprocal and simply calculate using the last two formulae. This, I am sure, will be faster than bit-shifting.
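To make the precomputed-reciprocal idea concrete, here is a hedged sketch for a 64-bit dividend using a fixed-point reciprocal and a small correction step rather than a full Newton-Raphson implementation (all names are mine; the same idea extends to 128-bit dividends with wider multiplies):

#include <stdint.h>

typedef struct {
    uint64_t b;                             /* the divisor, assumed >= 2                       */
    uint64_t recip;                         /* floor(2^64 / b), a 0.64 fixed-point reciprocal  */
} recip64_t;

static recip64_t recip_prepare(uint64_t b) {
    recip64_t r = { b, (uint64_t)(((unsigned __int128)1 << 64) / b) };
    return r;
}

static uint64_t mod_by_recip(uint64_t a, recip64_t r) {
    uint64_t q = (uint64_t)(((unsigned __int128)a * r.recip) >> 64);  /* quotient estimate, never too large */
    uint64_t rem = a - q * r.b;                                       /* R = A - Q*B                        */
    while (rem >= r.b)                                                /* estimate can be 1 too small        */
        rem -= r.b;
    return rem;
}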
Hope this helps.

If 128-bit unsigned by 63-bit unsigned is good enough, then it can be done in a loop doing at most 63 iterations.
Consider this a proposed solution to MSN's overflow problem: limit it to 1 bit. We do so by splitting the problem in 2: modular multiplication, then adding the results at the end.
In the following example, upper corresponds to the most significant 64 bits, lower to the least significant 64 bits, and div is the divisor.
#include <stdint.h>

uint64_t mod_128_by_63(uint64_t upper, uint64_t lower, uint64_t div) {
    uint64_t result = 0;
    uint64_t a = (~0ULL % div) + 1;  // 2^64 % div (may equal div, which is harmless below)
    upper %= div; // the resulting bit-length determines number of cycles required
    // first we work out modular multiplication of (2^64*upper)%div
    while (upper != 0) {
        if (upper & 1) {
            result += a;
            if (result >= div) { result -= div; }
        }
        a <<= 1;
        if (a >= div) { a -= div; }
        upper >>= 1;
    }
    // add up the 2 results and return the modulus
    if (lower > div) { lower -= div; }
    return (lower + result) % div;
}
The only problem is that, if the divisor is 64 bits, then we get an overflow of 1 bit (loss of information), giving a faulty result.
It bugs me that I haven't figured out a neat way to handle the overflows.

I don't know how to compile the assembler code in the other answers; any help with compiling and testing it is appreciated.
I tested my solution by comparing against gmplib's mpz_mod() and summing 1 million loop results. It was a long ride to go from a slowdown (speedup 0.12) to a speedup of 1.54 -- that is the reason I think the C code in this thread will be slow.
Details, including a test harness, are in this thread:
https://www.raspberrypi.org/forums/viewtopic.php?f=33&t=311893&p=1873122#p1873122
This is mod_256() with a speedup over using gmplib's mpz_mod(); the use of __builtin_clzll() for longer shifts was essential:
typedef __uint128_t uint256_t[2];

#define min(x, y) ((x<y) ? (x) : (y))

int clz(__uint128_t u)
{
    // unsigned long long h = ((unsigned long long *)&u)[1];
    unsigned long long h = u >> 64;
    return (h != 0) ? __builtin_clzll(h) : 64 + __builtin_clzll(u);
}

__uint128_t mod_256(uint256_t x, __uint128_t n)
{
    if (x[1] == 0) return x[0] % n;
    else
    {
        __uint128_t r = x[1] % n;
        int F = clz(n);
        int R = clz(r);
        for (int i = 0; i < 128; ++i)
        {
            if (R > F + 1)
            {
                int h = min(R - (F + 1), 128 - i);
                r <<= h; R -= h; i += (h - 1); continue;
            }
            r <<= 1; if (r >= n) { r -= n; R = clz(r); }
        }
        r += (x[0] % n); if (r >= n) r -= n;
        return r;
    }
}

If you have a recent x86 machine, there are 128-bit registers for SSE2+. I've never tried to write assembly for anything other than basic x86, but I suspect there are some guides out there.

I am 9 years late to the battle, but here is an interesting O(1) edge case for powers of 2 that is worth mentioning.
#include <stdio.h>

// example with 32 bits and 8 bits.
int main() {
    int i = 930;
    unsigned char b = (unsigned char) i;
    printf("%d", (int) b); // 162, same as 930 % 256
}
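Applied to the shape of the original question, the same edge case is just a bit mask; a sketch with the 128-bit A as a high/low pair of uint64_t (names are mine):

#include <stdint.h>

uint64_t mod_pow2(uint64_t a_hi, uint64_t a_lo, uint64_t b) {  /* b must be a power of two */
    (void)a_hi;              /* the high half never affects A % B when B <= 2^64 */
    return a_lo & (b - 1);   /* e.g. 930 & 255 == 162, matching 930 % 256 above  */
}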

Since there is no predefined 128-bit integer type in C, the bits of A have to be represented in an array. Although B (a 64-bit integer) can be stored in an unsigned long long int variable, it is necessary to put the bits of B into another array in order to work on A and B efficiently.
After that, B is incremented as Bx2, Bx3, Bx4, ... until it is the greatest multiple of B that is still less than A. And then A minus that multiple can be calculated, using some subtraction knowledge for base 2.
Is this the kind of solution that you are looking for?

Related

Assembly Language 8086 Display 1-10 [duplicate]

I was tasked to write a program that displays the linear address of my
program's PSP. I wrote the following:
ORG 256
mov dx,Msg
mov ah,09h ;DOS.WriteStringToStandardOutput
int 21h
mov ax,ds
mov dx,16
mul dx ; -> Linear address is now in DX:AX
???
mov ax,4C00h ;DOS.TerminateWithExitCode
int 21h
; ------------------------------
Msg: db 'PSP is at linear address $'
I searched the DOS API (using Ralf Brown's interrupt list)
and didn't find a single function to output a number!
Did I miss it, and what can I do?
I want to display the number in DX:AX in decimal.
It's true that DOS doesn't offer us a function to output a number directly.
You'll have to first convert the number yourself and then have DOS display it
using one of the text output functions.
Displaying the unsigned 16-bit number held in AX
When tackling the problem of converting a number, it helps to see how the
digits that make up a number relate to each other.
Let's consider the number 65535 and its decomposition:
(6 * 10000) + (5 * 1000) + (5 * 100) + (3 * 10) + (5 * 1)
Method 1 : division by decreasing powers of 10
Processing the number going from the left to the right is convenient because it
allows us to display an individual digit as soon as we've extracted it.
By dividing the number (65535) by 10000, we obtain a single digit quotient
(6) that we can output as a character straight away. We also get a remainder
(5535) that will become the dividend in the next step.
By dividing the remainder from the previous step (5535) by 1000, we obtain
a single digit quotient (5) that we can output as a character straight away.
We also get a remainder (535) that will become the dividend in the next step.
By dividing the remainder from the previous step (535) by 100, we obtain
a single digit quotient (5) that we can output as a character straight away.
We also get a remainder (35) that will become the dividend in the next step.
By dividing the remainder from the previous step (35) by 10, we obtain
a single digit quotient (3) that we can output as a character straight away.
We also get a remainder (5) that will become the dividend in the next step.
By dividing the remainder from the previous step (5) by 1, we obtain
a single digit quotient (5) that we can output as a character straight away.
Here the remainder will always be 0. (Avoiding this silly division by 1
requires some extra code)
mov bx,.List
.a: xor dx,dx
div word ptr [bx] ; -> AX=[0,9] is Quotient, Remainder DX
xchg ax,dx
add dl,"0" ;Turn into character [0,9] -> ["0","9"]
push ax ;(1)
mov ah,02h ;DOS.DisplayCharacter
int 21h ; -> AL
pop ax ;(1) AX is next dividend
add bx,2
cmp bx,.List+10
jb .a
...
.List:
dw 10000,1000,100,10,1
Although this method will of course produce the correct result, it has a few
drawbacks:
Consider the smaller number 255 and its decomposition:
(0 * 10000) + (0 * 1000) + (2 * 100) + (5 * 10) + (5 * 1)
If we were to use the same 5 step process we'd get "00255". Those 2 leading
zeroes are undesirable and we would have to include extra instructions to get
rid of them.
The divider changes with each step. We had to store a list of dividers in
memory. Dynamically calculating these dividers is possible but introduces a
lot of extra divisions.
If we wanted to apply this method to displaying even larger numbers say
32-bit, and we will want to eventually, the divisions involved would get
really problematic.
So method 1 is impractical and therefore it is seldom used.
Method 2 : division by const 10
Processing the number going from the right to the left seems counter-intuitive
since our goal is to display the leftmost digit first. But as you're about to
find out, it works beautifully.
By dividing the number (65535) by 10, we obtain a quotient (6553) that will
become the dividend in the next step. We also get a remainder (5) that we
can't output just yet and so we'll have to save in somewhere. The stack is a
convenient place to do so.
By dividing the quotient from the previous step (6553) by 10, we obtain
a quotient (655) that will become the dividend in the next step. We also get
a remainder (3) that we can't just yet output and so we'll have to save it
somewhere. The stack is a convenient place to do so.
By dividing the quotient from the previous step (655) by 10, we obtain
a quotient (65) that will become the dividend in the next step. We also get
a remainder (5) that we can't just yet output and so we'll have to save it
somewhere. The stack is a convenient place to do so.
By dividing the quotient from the previous step (65) by 10, we obtain
a quotient (6) that will become the dividend in the next step. We also get
a remainder (5) that we can't just yet output and so we'll have to save it
somewhere. The stack is a convenient place to do so.
By dividing the quotient from the previous step (6) by 10, we obtain
a quotient (0) that signals that this was the last division. We also get
a remainder (6) that we could output as a character straight away, but
refraining from doing so turns out to be most effective and so as before we'll
save it on the stack.
At this point the stack holds our 5 remainders, each being a single digit
number in the range [0,9]. Since the stack is LIFO (Last In First Out), the
value that we'll POP first is the first digit we want displayed. We use a
separate loop with 5 POP's to display the complete number. But in practice,
since we want this routine to be able to also deal with numbers that have
fewer than 5 digits, we'll count the digits as they arrive and later do that
many POP's.
mov bx,10 ;CONST
xor cx,cx ;Reset counter
.a: xor dx,dx ;Setup for division DX:AX / BX
div bx ; -> AX is Quotient, Remainder DX=[0,9]
push dx ;(1) Save remainder for now
inc cx ;One more digit
test ax,ax ;Is quotient zero?
jnz .a ;No, use as next dividend
.b: pop dx ;(1)
add dl,"0" ;Turn into character [0,9] -> ["0","9"]
mov ah,02h ;DOS.DisplayCharacter
int 21h ; -> AL
loop .b
This second method has none of the drawbacks of the first method:
Because we stop when a quotient becomes zero, there's never any problem
with ugly leading zeroes.
The divider is fixed. That's easy enough.
It's real simple to apply this method to displaying larger numbers and
that's precisely what comes next.
Displaying the unsigned 32-bit number held in DX:AX
On 8086 a cascade of 2 divisions is needed to divide the 32-bit value in
DX:AX by 10.
The 1st division divides the high dividend (extended with 0) yielding a high
quotient. The 2nd division divides the low dividend (extended with the
remainder from the 1st division) yielding the low quotient. It's the remainder
from the 2nd division that we save on the stack.
To check if the dword in DX:AX is zero, I've OR-ed both halves in a scratch
register.
Instead of counting the digits, requiring a register, I chose to put a sentinel
on the stack. Because this sentinel gets a value (10) that no digit can ever
have ([0,9]), it nicely allows to determine when the display loop has to stop.
Other than that this snippet is similar to method 2 above.
mov bx,10 ;CONST
push bx ;Sentinel
.a: mov cx,ax ;Temporarily store LowDividend in CX
mov ax,dx ;First divide the HighDividend
xor dx,dx ;Setup for division DX:AX / BX
div bx ; -> AX is HighQuotient, Remainder is re-used
xchg ax,cx ;Temporarily move it to CX restoring LowDividend
div bx ; -> AX is LowQuotient, Remainder DX=[0,9]
push dx ;(1) Save remainder for now
mov dx,cx ;Build true 32-bit quotient in DX:AX
or cx,ax ;Is the true 32-bit quotient zero?
jnz .a ;No, use as next dividend
pop dx ;(1a) First pop (Is digit for sure)
.b: add dl,"0" ;Turn into character [0,9] -> ["0","9"]
mov ah,02h ;DOS.DisplayCharacter
int 21h ; -> AL
pop dx ;(1b) All remaining pops
cmp dx,bx ;Was it the sentinel?
jb .b ;Not yet
Displaying the signed 32-bit number held in DX:AX
The procedure is as follows:
First find out if the signed number is negative by testing the sign bit.
If it is, then negate the number and output a "-" character but beware to not
destroy the number in DX:AX in the process.
The rest of the snippet is the same as for an unsigned number.
test dx,dx ;Sign bit is bit 15 of high word
jns .a ;It's a positive number
neg dx ;\
neg ax ; | Negate DX:AX
sbb dx,0 ;/
push ax dx ;(1)
mov dl,"-"
mov ah,02h ;DOS.DisplayCharacter
int 21h ; -> AL
pop dx ax ;(1)
.a: mov bx,10 ;CONST
push bx ;Sentinel
.b: mov cx,ax ;Temporarily store LowDividend in CX
mov ax,dx ;First divide the HighDividend
xor dx,dx ;Setup for division DX:AX / BX
div bx ; -> AX is HighQuotient, Remainder is re-used
xchg ax,cx ;Temporarily move it to CX restoring LowDividend
div bx ; -> AX is LowQuotient, Remainder DX=[0,9]
push dx ;(2) Save remainder for now
mov dx,cx ;Build true 32-bit quotient in DX:AX
or cx,ax ;Is the true 32-bit quotient zero?
jnz .b ;No, use as next dividend
pop dx ;(2a) First pop (Is digit for sure)
.c: add dl,"0" ;Turn into character [0,9] -> ["0","9"]
mov ah,02h ;DOS.DisplayCharacter
int 21h ; -> AL
pop dx ;(2b) All remaining pops
cmp dx,bx ;Was it the sentinel?
jb .c ;Not yet
Will I need separate routines for different number sizes?
In a program where you need to display on occasion AL, AX, or DX:AX, you could
just include the 32-bit version and use next little wrappers for the smaller
sizes:
; IN (al) OUT ()
DisplaySignedNumber8:
push ax
cbw ;Promote AL to AX
call DisplaySignedNumber16
pop ax
ret
; -------------------------
; IN (ax) OUT ()
DisplaySignedNumber16:
push dx
cwd ;Promote AX to DX:AX
call DisplaySignedNumber32
pop dx
ret
; -------------------------
; IN (dx:ax) OUT ()
DisplaySignedNumber32:
push ax bx cx dx
...
Alternatively, if you don't mind the clobbering of the AX and DX registers use
this fall-through solution:
; IN (al) OUT () MOD (ax,dx)
DisplaySignedNumber8:
cbw
; --- --- --- --- -
; IN (ax) OUT () MOD (ax,dx)
DisplaySignedNumber16:
cwd
; --- --- --- --- -
; IN (dx:ax) OUT () MOD (ax,dx)
DisplaySignedNumber32:
push bx cx
...

Converting a loop from x86 assembly to C language with AND, OR, SHR and SHL instructions and an array

I don't understand what the problem is, because the result is right, but there is something wrong in it and I don't get it.
1. This is the x86 code I have to convert to C:
%include "io.inc"
SECTION .data
mask DD 0xffff, 0xff00ff, 0xf0f0f0f, 0x33333333, 0x55555555
SECTION .text
GLOBAL CMAIN
CMAIN:
GET_UDEC 4, EAX
MOV EBX, mask
ADD EBX, 16
MOV ECX, 1
.L:
MOV ESI, DWORD [EBX]
MOV EDI, ESI
NOT EDI
MOV EDX, EAX
AND EAX, ESI
AND EDX, EDI
SHL EAX, CL
SHR EDX, CL
OR EAX, EDX
SHL ECX, 1
SUB EBX, 4
CMP EBX, mask - 4
JNE .L
PRINT_UDEC 4, EAX
NEWLINE
XOR EAX, EAX
RET
2. My converted C code: when I input 0 it outputs the right answer, but there is something wrong in my code and I don't understand what it is:
#include "stdio.h"
int main(void)
{
int mask [5] = {0xffff, 0xff00ff, 0xf0f0f0f, 0x33333333, 0x55555555};
int eax;
int esi;
int ebx;
int edi;
int edx;
char cl = 0;
scanf("%d",&eax);
ebx = mask[4];
ebx = ebx + 16;
int ecx = 1;
L:
esi = ebx;
edi = esi;
edi = !edi;
edx = eax;
eax = eax && esi;
edx = edx && edi;
eax = eax << cl;
edx = edx >> cl ;
eax = eax || edx;
ecx = ecx << 1;
ebx = ebx - 4;
if(ebx == mask[1]) //mask - 4
{
goto L;
}
printf("%d",eax);
return 0;
}
Assembly AND is C bitwise &, not logical &&. (Same for OR). So you want eax &= esi.
(Using &= "compound assignment" makes the C even look like x86-style 2-operand asm so I'd recommend that.)
NOT is also bitwise flip-all-the-bits, not booleanize to 0/1. In C that's edi = ~edi;
Read the manual for x86 instructions like https://www.felixcloutier.com/x86/not, and for C operators like ~ and ! to check that they are / aren't what you want. https://en.cppreference.com/w/c/language/expressions https://en.cppreference.com/w/c/language/operator_arithmetic
You should be single-stepping your C and your asm in a debugger so you notice the first divergence, and know which instruction / C statement to fix. Don't just run the whole thing and look at one number for the result! Debuggers are massively useful for asm; don't waste your time without one.
CL is the low byte of ECX, not a separate C variable. You could use a union between uint32_t and uint8_t in C, or just use eax <<= ecx&31; since you don't have anything that writes CL separately from ECX. (x86 shifts mask their count; that C statement could compile to shl eax, cl. https://www.felixcloutier.com/x86/sal:sar:shl:shr). The low 5 bits of ECX are also the low 5 bits of CL.
SHR is a logical right shift, not arithmetic, so you need to be using unsigned not int at least for the >>. But really just use it for everything.
You're handling EBX completely wrong; it's a pointer.
MOV EBX, mask
ADD EBX, 16
This is like unsigned int *ebx = mask+4;
The size of a dword is 4 bytes, but C pointer math scales by the type size, so +1 is a whole element, not 1 byte. So 16 bytes is 4 dwords = 4 unsigned int elements.
MOV ESI, DWORD [EBX]
That's a load using EBX as an address. This should be easy to see if you single-step the asm in a debugger: It's not just copying the value.
CMP EBX, mask - 4
JNE .L
This is NASM syntax; it's comparing against the address of the dword before the start of the array. It's effectively the bottom of a fairly normal do{}while loop. (Why are loops always compiled into "do...while" style (tail jump)?)
do { // .L
...
} while(ebx != &mask[-1]); // cmp/jne
It's looping from the end of the mask array, stopping when the pointer goes past the end.
Equivalently, the compare could be ebx != mask - 1. I wrote it with unary & (address-of) cancelling out the [] to make it clear that it's the address of what would be one element before the array.
Note that it's jumping on not equal; you had your if()goto backwards, jumping only on equality. This is a loop.
unsigned mask[] should be static because it's in section .data, not on the stack. And not const, because again it's in .data, not .rodata (Linux) or .rdata (Windows).
This one doesn't affect the logic, only that detail of decompiling.
There may be other bugs; I didn't try to check everything.
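Putting those corrections together, a full translation might look something like this (my own untested sketch, not part of the original answer):

#include <stdio.h>

int main(void)
{
    static unsigned int mask[5] = { 0xffff, 0xff00ff, 0xf0f0f0f, 0x33333333, 0x55555555 };
    unsigned int eax, esi, edi, edx;
    unsigned int ecx = 1;                   /* MOV ECX, 1 */
    unsigned int *ebx = mask + 4;           /* MOV EBX, mask / ADD EBX, 16 (16 bytes = 4 dwords) */

    if (scanf("%u", &eax) != 1) return 1;   /* GET_UDEC 4, EAX */
    do {                                    /* .L: */
        esi = *ebx;                         /* MOV ESI, DWORD [EBX]  -- a load, not a register copy */
        edi = ~esi;                         /* MOV EDI, ESI / NOT EDI -- bitwise NOT, not ! */
        edx = eax;                          /* MOV EDX, EAX */
        eax &= esi;                         /* AND EAX, ESI -- bitwise &, not && */
        edx &= edi;                         /* AND EDX, EDI */
        eax <<= (ecx & 31);                 /* SHL EAX, CL */
        edx >>= (ecx & 31);                 /* SHR EDX, CL -- logical shift, hence unsigned */
        eax |= edx;                         /* OR EAX, EDX */
        ecx <<= 1;                          /* SHL ECX, 1 */
        ebx--;                              /* SUB EBX, 4 -- one dword element */
    } while (ebx != mask - 1);              /* CMP EBX, mask - 4 / JNE .L (mirrors the asm) */
    printf("%u\n", eax);                    /* PRINT_UDEC 4, EAX / NEWLINE */
    return 0;                               /* XOR EAX, EAX / RET */
}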
if(ebx != mask[1]) //mask - 4
{
goto L;
}
//JNE IMPLIES a !=

Encoding 3 base-6 digits in 8 bits for unpacking performance

I'm looking for an efficient-to-unpack (in terms of small number of basic ALU ops in the generated code) way of encoding 3 base-6 digits (i.e. 3 numbers in the range [0,5]) in 8 bits. Only one is needed at a time, so approaches that need to decode all three in order to access one are probably not good unless the cost of decoding all three is very low.
The obvious method is of course:
x = b%6; // 8 insns
y = b/6%6; // 13 insns
z = b/36; // 5 insns
The instruction counts are measured on x86_64 with gcc>=4.8 which knows how to avoid divs.
Another method (using a different encoding) is:
b *= 6
x = b>>8;
b &= 255;
b *= 6
y = b>>8;
b &= 255;
b *= 6
z = b>>8;
This encoding has more than one representation for many tuples (it uses the whole 8bit range rather than just [0,215]) and appears more efficient if you want all 3 outputs, but wasteful if you only want one.
Are there better approaches?
Target language is C but I've tagged this assembly as well since answering requires some consideration of the instructions that would be generated.
As discussed in comments, a LUT would be excellent if it stays hot in cache. uint8_t LUT[3][256] would need the selector scaled by 256, which takes an extra instruction if it's not a compile-time constant. Scaling by 216 to pack the LUT better is only 1 or 2 instructions more expensive. struct3 LUT[216] is nice, where the struct has a 3-byte array member. On x86, this compiles extremely well in position-dependent code where the LUT base can be a 32-bit absolute as part of the addressing mode (if the table is static):
#include <stdint.h>

struct { uint8_t vals[3]; } LUT[216];
unsigned decode_LUT(uint8_t b, unsigned selector) {
    return LUT[b].vals[selector];
}
gcc7 -O3 on Godbolt for x86-64 and AArch64
movzx edi, dil
mov esi, esi # zero-extension to 64-bit: goes away when inlining.
lea rax, LUT[rdi+rdi*2] # multiply by 3 and add the base
movzx eax, BYTE PTR [rax+rsi] # then index by selector
ret
Silly gcc used a 3-component LEA (3 cycle latency and runs on fewer ports) instead of using LUT as a disp32 for the actual load (no extra latency for an indexed addressing mode, I think).
This layout has the added advantage of locality if you ever need to decode multiple components of the same byte.
In PIC / PIE code, this costs 2 extra instructions, unfortunately:
movzx edi, dil
lea rax, LUT[rip] # RIP-relative LEA instead of absolute as part of another addressing mode
mov esi, esi
lea rdx, [rdi+rdi*2]
add rax, rdx
movzx eax, BYTE PTR [rax+rsi]
ret
But that's still cheap, and all the ALU instructions are single-cycle latency.
Your 2nd ALU unpacking strategy is promising. I thought at first we could use a single 64-bit multiply to get b*6, b*6*6, and b*6*6*6 in different positions of the same 64-bit integer (b * ((6ULL*6*6<<32) + (36<<16) + 6)).
But the upper byte of each multiply result does depend on masking back to 8-bit after each multiply by 6. (If you can think of a way to not require that, one multiple and shift would be very cheap, especially on 64-bit ISAs where the entire 64-bit multiply result is in one register).
Still, x86 and ARM can multiply by 6 and mask in 3 cycles of latency, the same or better latency than a multiply, or less on Intel CPUs with zero-latency movzx r32, r8, if the compiler avoids using parts of the same register for movzx.
add eax, eax ; *2
lea eax, [rax + rax*2] ; *3
movzx ecx, al ; 0 cycle latency on Intel
.. repeat for next steps
ARM / AArch64 is similarly good, with add r0, r0, r0 lsl #1 for multiply by 3.
As a branchless way to select one of the three, you could consider storing (from ah / ch / ... to get the shift for free) to an array, then loading with the selector as the index. This costs store/reload latency (~5 cycles), but is cheap for throughput and avoids branch misses. (Possibly a 16-bit store and then a byte reload would be good, scaling the selector in the load address and adding 1 to get the high byte, saving an extract instruction before each store on ARM).
This is in fact what gcc emits if you write it this way:
unsigned decode_ALU(uint8_t b, unsigned selector) {
    uint8_t decoded[3];
    uint32_t tmp = b * 6;
    decoded[0] = tmp >> 8;
    tmp = 6 * (uint8_t)tmp;
    decoded[1] = tmp >> 8;
    tmp = 6 * (uint8_t)tmp;
    decoded[2] = tmp >> 8;
    return decoded[selector];
}
movzx edi, dil
mov esi, esi
lea eax, [rdi+rdi*2]
add eax, eax
mov BYTE PTR -3[rsp], ah # store high half of mul-by-6
movzx eax, al # costs 1 cycle: gcc doesn't know about zero-latency movzx?
lea eax, [rax+rax*2]
add eax, eax
mov BYTE PTR -2[rsp], ah
movzx eax, al
lea eax, [rax+rax*2]
shr eax, 7
mov BYTE PTR -1[rsp], al
movzx eax, BYTE PTR -3[rsp+rsi]
ret
The first store's data is ready 4 cycles after the input to the first movzx, or 5 if you include the extra 1c of latency for reading ah when it's not renamed separately on Intel HSW/SKL. The next 2 stores are 3 cycles apart.
So the total latency is ~10 cycles from b input to result output, if selector=0. Otherwise 13 or 16 cycles.
Measuring a number of different approaches in-place in the function that needs to do this, the practical answer is really boring: it doesn't matter. They're all running at about 50ns per call, and other work is dominating. So for my purposes, the approach that pollutes the cache and branch predictors the least is probably the best. That seems to be:
(b * (int[]){2048,342,57}[i] >> 11) % 6;
where b is the byte containing the packed values and i is the index of the value wanted. The magic constants 342 and 57 are just the multiplicative constants GCC generates for division by 6 and 36, respectively, scaled to a common shift of 11. The final %6 is spurious in the /36 case (i==2) but branching to avoid it does not seem worthwhile.
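A quick self-check of those constants (my own, not part of the original measurement) is easy to run:

#include <stdio.h>

int main(void) {
    for (int b = 0; b < 256; b++) {
        if ((b * 342 >> 11) != b / 6 || (b * 57 >> 11) != b / 36)
            printf("mismatch at %d\n", b);   /* never fires for 8-bit inputs */
    }
    puts("checked 0..255");
    return 0;
}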
On the other hand, if doing this same work in a context where there wasn't an interface constraint to have the surrounding function call overhead per lookup, I think an approach like Peter's would be preferable.

Move variable to cl and perform shr using inline assembly

So I am trying to translate the following assignment from C to inline assembly
resp = (0x1F)&(letter >> (3 - numB));
Assuming that the declaration of the variables are the following
unsigned char resp;
unsigned char letter;
int numB;
So I have tried the following:
_asm {
mov ebx, 01fh
movzx edx, letter
mov cl,3
sub cl, numB // Line 5
shr edx, cl
and ebx, edx
mov resp, ebx
}
or the following
_asm {
mov ebx, 01fh
movzx edx, letter
mov ecx,3
sub ecx, numB
mov cl, ecx // Line 5
shr edx, cl
and ebx, edx
mov resp, ebx
}
In both cases I get size operand error in Line 5.
How can I achieve the right shift?
The E*X registers are 32 bits, while the *L registers are 8 bits. Similarly, on Windows, the int type is 32 bits wide, while the char type is 8 bits wide. You cannot arbitrarily mix these sizes within a single instruction.
So, in your first piece of code:
sub cl, numB // Line 5
this is wrong because the cl register stores an 8-bit value, whereas the numB variable is of type int, which stores a 32-bit value. You cannot subtract a 32-bit value from an 8-bit value; both operands to the SUB instruction must be the same size.
Similarly, in your second piece of code:
mov cl, ecx // Line 5
you are trying to move the 32-bit value in ECX into the 8-bit CL register. That can't happen without some kind of truncation, so you have to indicate it explicitly. The MOV instruction requires that both of its operands have the same size.
(MOVZX and MOVSX are obvious exceptions to this rule that the operand types must match for a single instruction. These instructions zero-extend or sign-extend, respectively, a smaller value so that it can be stored into a larger-sized register.)
However, in this case, you don't even need the MOV instruction. Remember that CL is just the lower 8 bits of the full 32-bit ECX register. Therefore, setting ECX also implicitly sets CL. If you only need the lower 8 bits, you can just use CL in a subsequent instruction. Thus, your code becomes:
mov ebx, 01fh ; move constant into 32-bit EBX
movzx edx, BYTE PTR letter ; zero-extended move of 8-bit variable into 32-bit EDX
mov ecx, 3 ; move constant into ECX
sub ecx, DWORD PTR numB ; subtract 32-bit variable from ECX
shr edx, cl ; shift EDX right by the lower 8 bits of ECX
and ebx, edx ; bitwise AND of EDX and EBX, leaving result in EBX
mov BYTE PTR resp, bl ; move lower 8 bits of EBX into 8-bit variable
For the same operand-size matching issue discussed above, I've also had to change the final MOV instruction. You cannot move the value stored in a 32-bit register directly into an 8-bit variable. You will have to move either the lower 8 bits or the upper 8 bits, allowing you to use either the BL or BH registers, which are 8 bits and therefore match the size of resp. In the above code, I assumed that you want only the lower 8 bits, so I've used BL.
Also note that I've used the BYTE PTR and DWORD PTR specifications. These are not strictly necessary in MASM (or Visual Studio's inline assembler), since it can deduce the sizes of the types from the types of the variables. However, I think it increases readability, and is generally a recommended practice. DWORD means 32 bit; it is the same size as int and a 32-bit register (E*X). WORD means 16 bit; it is the same size as short and a 16-bit register (*X). BYTE means 8 bits; it is the same size as char and an 8-bit register (*L or *H).

Assembly How to translate IMUL opcode (with only one operand) to C code

Say I got
EDX = 0xA28
EAX = 0x0A280105
I run this ASM code
IMUL EDX
which, to my understanding, only uses EAX... if one operand is specified
So in C code it should be like
EAX *= EDX;
correct?
After looking in the debugger... I found out EDX got altered too.
0x0A280105 * 0xA28 = 0x67264A5AC8
in debugger
EAX = 264A5AC8
EDX = 00000067
now if you take the answer 0x67264A5AC8 and split off the first hex pair, 0x67 264A5AC8,
you can clearly see why EDX and EAX are the way they are.
Okay, so an overflow happens... as it cannot store such a huge number in 32 bits, it starts using extra bits in EDX.
But my question is: how would I do this in C code now to get the same results?
I'm guessing it would be like
EAX *= EDX;
EDX = 0xFFFFFFFF - EAX; //blah not good with math manipulation like this.
The IMUL instruction actually produces a result twice the size of the operand (unless you use one of the newer versions that can specify a destination). So:
imul 8bit -> result = ax, 16bits
imul 16bit -> result = dx:ax, 32bits
imul 32bit -> result = edx:eax, 64bits
To do this in C will be dependent on the compiler, but some will work doing this:
long result = (long) eax * (long) edx;
eax = result & 0xffffffff;
edx = result >> 32;
This assumes a long is 64 bits. If the compiler has no 64-bit data type then calculating the result becomes much harder; you need to do long multiplication.
You could always inline the imul instruction.
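For a concrete check of the example values with fixed-width types instead of assuming long is 64 bits, a small sketch (names are mine):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t eax = 0x0A280105, edx = 0x0A28;
    int64_t product = (int64_t)(int32_t)eax * (int32_t)edx;    /* what one-operand IMUL computes (signed) */
    uint32_t lo = (uint32_t)product;                           /* low half  -> EAX */
    uint32_t hi = (uint32_t)((uint64_t)product >> 32);         /* high half -> EDX */
    printf("EDX:EAX = %08" PRIX32 ":%08" PRIX32 "\n", hi, lo); /* prints 00000067:264A5AC8 */
    return 0;
}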
