bne instruction in CPU simulator not working - C

I am trying to write a CPU simulator, but it doesn't seem to function as expected when the bne instruction is encountered: bne performs the same as beq, even though beq itself seems to be working fine:
Mux2_32(mbranchAddress, pcPlus4, branchAddress, AND2_1(zero, branch));
Mux2_32(pc, mbranchAddress, jumpAddress, jump);
if(!strcmp(opcode, "000101") && !strcmp(branch, "1")){ /* bne instruction ("000101" is the opcode for bne) */
    Mux2_32(mbranchAddress, pcPlus4, branchAddress, AND2_1(NOT_1(zero), branch));
    Mux2_32(pc, mbranchAddress, jumpAddress, jump);
}
"branch" is the flag raised when the instruction is a branch instruction. zero is the single bit alu output
Mux2_32(a, b, c, d) works as follows:
a = b if d = 0
a = c if d = 1
where a, b, and c are 32 bits long and d is a single bit.
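For concreteness, here is a minimal sketch of such a mux, assuming (as the strcmp calls above suggest) that the simulator models buses as strings of '0'/'1' characters; the actual simulator's types may differ:
#include <string.h>

/* Hypothetical sketch of the 2-to-1 mux described above:
   a gets b when d is 0, and c when d is 1. */
void Mux2_32(char a[33], const char b[33], const char c[33], int d)
{
    strcpy(a, d ? c : b);
}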
Could someone please point out why the beq instruction works fine but bne doesn't? Thanks.

C does not support binary number constants. 000101 is an octal number with value 65, and '000101' is a 64-bit-wide multi-character constant. You need to use hex numbers; opcode 000101 in hex is 0x5.
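A standalone snippet illustrating the pitfall (hypothetical, not from the question):
#include <stdio.h>

int main(void)
{
    /* A leading 0 makes an integer literal octal in C. */
    printf("%d\n", 000101);   /* prints 65 (octal 101), not binary 000101 */
    printf("%d\n", 0x5);      /* binary 000101 written as hex */
    return 0;
}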

Related

Assembly Code for part-number P89LPC933935

I am working on translating assembly code to C, and I came across an instruction that I am finding difficult to understand. Here is the code:
add a,#0-3
jc c_fail
I don't understand the line add a,#0-3.
add a,#0-3 is the same as add a,#-3, which adds -3 to A. -3 is represented as 0xFD on the 8051 MCU. So if the value of A is greater than or equal to 3 (a >= 3), the program jumps to the c_fail address because the carry flag is set.
You can also replace it with subb a,#3 if you use jnc instead of jc on the next line.
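In C terms, the pair of instructions tests the following condition (a sketch of the 8051 semantics described above, with an invented helper name):
#include <stdint.h>

/* add a,#0xFD sets the carry flag exactly when a + 0xFD >= 0x100,
   i.e. when a >= 3; jc c_fail then takes the branch. */
int branches_to_c_fail(uint8_t a)
{
    return ((unsigned)a + 0xFDu) >= 0x100u;   /* same as a >= 3 */
}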

MIPS assembly code - trying to find out what this code's about

I'm learning assembly, and given this code, I need to find out what it is about. I am trying to debug it using QtSpim; I know the value inside each register, but I still don't get what this code is about.
If you find the pattern and what this code is about, can you tell me how you did it, and at which line you recognized the pattern? Thanks!
.text
.globl main
.text
main:
li $s0, 0x00BEEF00 ##given $s0= 0x00BEEF00
Init:
li $t0, 0x55555555
li $t1,0x33333333
li $t2,0x0f0f0f0f
li $t3,0x00ff00ff
li $t4,0x0000ffff
Step1: and $s1, $s0, $t0
srl $s0,$s0,1
and $s2,$s0,$t0
add $s0,$s1,$s2
Step2: and $s1,$s0,$t1
srl $s0,$s0,2
and $s2,$s0,$t1
add $s0,$s1,$s2
Step3: and $s1,$s0,$t2
srl $s0,$s0,4
and $s2,$s0,$t2
add $s0,$s1,$s2
Step4: and $s1,$s0,$t3
srl $s0,$s0,8
and $s2,$s0,$t3
add $s0,$s1,$s2
Step5:
and $s1,$s0,$t4
srl $s0,$s0,16
and $s2,$s0,$t4
add $s0,$s1,$s2
End:
andi $s0,$s0,0x003f
This is a population count, aka popcount, aka Hamming Weight. The final result in $s0 is the number of 1 bits in the input. This is an optimized implementation that gives the same result as shifting each bit separately to the bottom of a register and adding it to a total. See https://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive
This implementation works by building up from 2-bit accumulators to 4-bit, 8-bit, and 16-bit, using SWAR (SIMD within a register) to do multiple narrow adds, which don't carry into each other, with one add instruction.
Notice how it masks every other bit, then every pair of bits, then every group of 4 bits, and uses a shift to bring the other half down to line up for an add. In C:
(x & mask) + ((x>>1) & mask)
Repeating this with a larger shift and a different mask eventually gives you the sum of all the bits (treating them all as having place-value of 1), i.e. the number of set bits in the input.
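Written out in C with the same masks and the same final mask of 0x3f as the MIPS code, the whole routine looks like this (a sketch, with an invented function name):
#include <stdint.h>

/* SWAR popcount: each line performs many narrow adds with one add. */
uint32_t popcount32(uint32_t x)
{
    x = (x & 0x55555555u) + ((x >>  1) & 0x55555555u); /* 2-bit sums  */
    x = (x & 0x33333333u) + ((x >>  2) & 0x33333333u); /* 4-bit sums  */
    x = (x & 0x0F0F0F0Fu) + ((x >>  4) & 0x0F0F0F0Fu); /* 8-bit sums  */
    x = (x & 0x00FF00FFu) + ((x >>  8) & 0x00FF00FFu); /* 16-bit sums */
    x = (x & 0x0000FFFFu) + ((x >> 16) & 0x0000FFFFu); /* full sum    */
    return x & 0x3Fu;   /* the count can be at most 32 */
}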
So the GNU C representation of this is __builtin_popcount(x).
(Except that compilers will actually use a more efficient popcount: either a lookup table indexed by each byte separately, or a bithack that starts this way but uses a multiply by a number like 0x01010101 to horizontally sum 4 bytes into the high byte of the result, because multiply is a shift-and-add instruction. See How to count the number of set bits in a 32-bit integer?)
But this is broken: it needs to use addu to avoid faulting. If you try to popcount 0xC0000000, for example, the first add has both inputs equal to 0x40000000, producing signed overflow and a fault.
IDK why anyone uses the add instruction on MIPS; the normal binary add instruction is called addu.
add is the add-with-trap-on-signed-overflow instruction, which is rarely what you want even if your numbers are signed. You might as well forget it exists and use addu / addiu.

Emulation: Implementing CPU instructions?

I'm trying to learn emulation programming. I've done a CHIP-8 emulator (under 40 instructions) and survived. I'm now hoping to do something a bit more complex, like an SNES. The problem I'm encountering is the sheer number of CPU instructions. Looking through the wiki.SuperFamicom.org 65c816 instruction listing, it looks like a pain in the rear, and yet I've seen notes here and there on various internet pages that the CPU is the easiest part of an emulator to implement.
Under the assumption that it was so hard because I was doing it wrong, I looked around and found a simple implementation: SNES Emulator in 15 minutes, which is about 900 lines of code. Easy enough to work through.
So then, from the SNES Emulator in 15 minutes source, I found where the CPU instructions are. It looks a lot simpler than what I was imagining. I don't really understand it, but it's a few lines of code as opposed to a large mass of code. The first thing I notice is that each instruction has only one implementation. If you look at the table on SuperFamicom, you see that it has
ADC #const
ADC (_db_),X
ADC (_db_,X)
ADC addr
ADC long
...
And the emulator source for (I think) ALL of those is:
// Note: op 0x100 means "NMI", 0x101 means "Reset", 0x102 means "IRQ". They are implemented in terms of "BRK".
// User is responsible for ensuring that WB() will not store into memory while Reset is being processed.
unsigned addr=0, d=0, t=0xFF, c=0, sb=0, pbits = op<0x100 ? 0x30 : 0x20;
// Define the opcode decoding matrix, which decides which micro-operations constitute
// any particular opcode. (Note: The PLA of 6502 works on a slightly different principle.)
const unsigned o8 = op / 32, o8m = 1u << (op%32);
// Fetch op'th item from a bitstring encoded in a data-specific variant of base64,
// where each character transmits 8 bits of information rather than 6.
// This peculiar encoding was chosen to reduce the source code size.
// Enum temporaries are used in order to ensure compile-time evaluation.
#define t(w8,w7,w6,w5,w4,w3,w2,w1,w0) if( \
(o8<1?w0##u : o8<2?w1##u : o8<3?w2##u : o8<4?w3##u : \
o8<5?w4##u : o8<6?w5##u : o8<7?w6##u : o8<8?w7##u : w8##u) & o8m)
t(0,0xAAAAAAAA,0x00000000,0x00000000,0x00000000,0xAAAAA2AA,0x00000000,0x00000000,0x00000000) { c = t; t += A + P.C; P.V = (c^t) & (A^t) & 0x80; P.C = t & 0x100; }
In short, my general question:
Condensing the phenomenal cosmic power of CPU instructions into an itty bitty piece of code
Questions specific to the SNES emulator in 15 minutes source (portion posted above):
How does t(0, 0xAAAAAAAA, 0x00000000, ...) parse the instruction? I see the if statement, but I don't know where the numbers for any of the arguments come from, or what they mean to the overall code.
Why o8 = op / 32 and o8m = 1u << (op%32)?
The opcodes for ADC include ADC #const, which has a 2-byte operand, and ADC addr, which has a 3-byte operand. Does the code t(0, 0xAAAAAAAA, ...) implement both cases?
While I'm asking:
What do the dp, _dp_ and sr that appear in ADC dp, ADC (_dp_) and ADC sr,S mean?
What is the difference between ADC (_dp_,X) and ADC dp,X? (Probably redundant given the question above.)
I can't answer all of this, but dp stands for Direct Page, meaning that the instruction takes a single-byte operand which is a memory address within the Direct Page. Direct Page addressing is an extension of the Zero Page addressing mode of the 6502, where single-byte addresses referred to memory locations $00 through $FF. The 16-bit derivatives of the 6502 have a configuration register which basically relocates the Zero Page to an alternate location.
In the wiki page you linked to, some of the dp in the table have underscores on them, and the others are in italics. I assume that they are all intended to be italic, and the wiki markup isn't working. A quick check of the Edit link supports this assumption (in the wiki source, they all have underscores). So don't read anything into that.
In 6502 assembly and derivatives of it, ADC dp,X means... let's take a concrete example instead... ADC $10,X means to add $10 to the value in register X to obtain an address, then load a value from that address and add it to the accumulator. ADC ($10,X) adds an extra level of indirection: add $10 to X to obtain an address, load a value from that address, interpret the loaded value as another address, and load the value from that address and add it to the accumulator. Parenthesized operands always add a level of indirection.
Note that the available modes include (dp,X) and (dp),Y and the placement of the parentheses relative to the comma and register is significant. With (dp),Y the value of Y is added to the first loaded value to get the address to use in the second load.
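In C, the difference between these modes looks roughly like this (a sketch of plain 6502 behavior with invented names; on the 65C816 the Direct Page relocation mentioned above is applied as well):
#include <stdint.h>

static uint8_t mem[65536];   /* hypothetical flat memory */

/* ADC dp,X style operand: one load; the address wraps within the page */
uint8_t load_dp_x(uint8_t dp, uint8_t X)
{
    return mem[(uint8_t)(dp + X)];
}

/* ADC (dp,X) style operand: first load a 16-bit pointer, then load
   through it -- one extra level of indirection */
uint8_t load_ind_x(uint8_t dp, uint8_t X)
{
    uint16_t ptr = mem[(uint8_t)(dp + X)]
                 | (uint16_t)mem[(uint8_t)(dp + X + 1)] << 8;
    return mem[ptr];
}

/* ADC (dp),Y style operand: indirect first; Y is added to the loaded
   pointer rather than to dp */
uint8_t load_ind_y(uint8_t dp, uint8_t Y)
{
    uint16_t ptr = mem[dp] | (uint16_t)mem[(uint8_t)(dp + 1)] << 8;
    return mem[(uint16_t)(ptr + Y)];
}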
As for that emulator... code golf doesn't lead to enhanced readability! I don't think the portion you've posted is actually understandable by itself, and I don't feel like tracking down and reading the rest of it. But the key concept in the t macro is bitstring. Its arguments are a series of 9 bitmasks, each 32 bits long, for a total of 288 bits. Every possible opcode (256 of them), plus the 3 pseudo-opcodes mentioned in the first comment, is therefore represented by a single bit in this 288-bit-long bitstring, with 29 bits left over.
That explains the construction of o8 and o8m. The opcode value is split into a high portion, o8 (to select one of the 9 arguments supplied to t), and a 5-bit portion (to select a single bit within the selected argument). The big ?: chain does the first selection, and the combination of & and 1 << ... does the second.
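Rewritten as a plain function (the nine-word array is my own framing; the macro hard-codes the words as arguments):
/* The t() condition: each opcode maps to one bit in a 288-bit string
   stored as nine 32-bit words w[0]..w[8]. */
static int opcode_selected(const unsigned w[9], unsigned op)
{
    unsigned o8  = op / 32;           /* pick one of the 9 words    */
    unsigned o8m = 1u << (op % 32);   /* pick one bit in that word  */
    return (w[o8] & o8m) != 0;        /* if set, run this micro-op  */
}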
And then, oh look we have a variable called t too. It's not related to the macro. Giving them the same name was just cruel.
Maybe I can figure out what that bitstring is doing. When the opcode is a low number, o8 (the high bits) will be 0, so the ?: chain will use w0, which is the last argument to the macro. As the opcode increases, the selected argument moves leftward through the argument list to w1, then w2... and the o8m selector likewise starts at the right and moves left (& (1<<0) is the rightmost bit, & (1<<1) is the next one, etc.) and the if condition will be true when the selected bit is 1. Values are:
0, # opcodes $100 and up
0xAAAAAAAA, # opcodes $E0 to $FF
0x00000000, # opcodes $C0 to $DF
0x00000000, # opcodes $A0 to $BF
0x00000000, # opcodes $80 to $9F
0xAAAAA2AA, # opcodes $60 to $7F
0x00000000, # opcodes $40 to $5F
0x00000000, # opcodes $20 to $3F
0x00000000 # opcodes $00 to $1F
or in binary
0, # opcodes $100 and up
0b10101010101010101010101010101010, # opcodes $E0 to $FF
0b00000000000000000000000000000000, # opcodes $C0 to $DF
0b00000000000000000000000000000000, # opcodes $A0 to $BF
0b00000000000000000000000000000000, # opcodes $80 to $9F
0b10101010101010101010001010101010, # opcodes $60 to $7F
0b00000000000000000000000000000000, # opcodes $40 to $5F
0b00000000000000000000000000000000, # opcodes $20 to $3F
0b00000000000000000000000000000000 # opcodes $00 to $1F
Reading each line from right to left, the 1's are in positions corresponding to these opcodes: $61 $63 $65 $67 $69 $6D $6F $71 $73 $75 $77 $79 $7B $7D $7F $E1 $E3 $E5 $E7 $E9 $EB $ED $EF $F1 $F3 $F5 $F7 $F9 $FB $FD $FF
Hmm... that sort of resembles the list of ADC and SBC opcodes, but some of them are wrong.
Oh (I finally gave up and looked at some more of the emulator code): that's a NES emulator, not a SNES emulator, so it only has 6502 opcodes.
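For anyone who wants to double-check, a throwaway program with the masks copied from the table above (in w0-to-w8 order) reproduces exactly the opcode list read off earlier:
#include <stdio.h>

int main(void)
{
    const unsigned w[9] = {
        0x00000000,   /* w0: $00-$1F */
        0x00000000,   /* w1: $20-$3F */
        0x00000000,   /* w2: $40-$5F */
        0xAAAAA2AA,   /* w3: $60-$7F */
        0x00000000,   /* w4: $80-$9F */
        0x00000000,   /* w5: $A0-$BF */
        0x00000000,   /* w6: $C0-$DF */
        0xAAAAAAAA,   /* w7: $E0-$FF */
        0x00000000    /* w8: $100 and up */
    };
    for (unsigned op = 0; op < 0x103; op++)
        if (w[op / 32] & (1u << (op % 32)))
            printf("$%02X ", op);   /* prints $61 $63 $65 ... $FF */
    putchar('\n');
    return 0;
}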

64-bit/32-bit division: faster algorithm for ARM / NEON?

I am working on code in which there are, in two places, 64-bit by 32-bit fixed-point divisions whose results are taken in 32 bits. Together these two places account for more than 20% of my total run time, so I feel that if I could remove the 64-bit division I could optimize the code well. NEON has some 64-bit instructions. Can anyone suggest a routine that resolves this bottleneck with a faster implementation?
Alternatively, if I could express the 64-bit/32-bit division in terms of 32-bit/32-bit division in C, that would also be fine.
If anyone has an idea, could you please help me out?
I did a lot of fixed-point arithmetic in the past and did a lot of research looking for fast 64/32-bit divisions myself. If you google 'ARM division', you will find tons of great links and discussions about this issue.
The best solution for the ARM architecture, where even a 32-bit division may not be available in hardware, is here:
http://www.peter-teichmann.de/adiv2e.html
This assembly code is very old, and your assembler may not understand its syntax. It is, however, worth porting the code to your toolchain: it is the fastest division code for your special case that I've seen so far, and trust me, I've benchmarked them all :-)
Last time I did that (about 5 years ago, for CortexA8) this code was about 10 times faster than what the compiler generated.
This code doesn't use NEON. A NEON port would be interesting, though I'm not sure it would improve performance much.
Edit:
I found the code, with the assembler ported to GAS (GNU toolchain). This code is working and tested:
Divide.S
.section ".text"
.global udiv64

@ udiv64(lo, hi, divisor): divides the 64-bit value hi:lo (r1:r0)
@ by r2 with a restoring shift-and-subtract loop; the 32-bit
@ quotient is shifted into r0 bit by bit via the carry flag.
udiv64:
    adds    r0,r0,r0        @ shift the dividend left by one ...
    adc     r1,r1,r1        @ ... carrying into the high word
.rept 31
    cmp     r1,r2           @ does the divisor fit into the remainder?
    subcs   r1,r1,r2        @ if so, subtract it ...
    adcs    r0,r0,r0        @ ... and shift a 1 into the quotient
    adc     r1,r1,r1        @ keep shifting the remainder left
.endr
    cmp     r1,r2           @ last quotient bit: no further shift
    subcs   r1,r1,r2        @ of the remainder is needed
    adcs    r0,r0,r0
    bx      lr
C-Code:
extern "C" uint32_t udiv64 (uint32_t a, uint32_t b, uint32_t c);

int32_t fixdiv24 (int32_t a, int32_t b)
/* calculate (a<<24)/b with a 64 bit intermediate result */
{
    int q;
    int sign = (a^b) < 0;  /* different signs */
    uint32_t l,h;
    a = a<0 ? -a:a;
    b = b<0 ? -b:b;
    l = (a << 24);         /* low word of the 64-bit value a<<24  */
    h = (a >> 8);          /* high word of the 64-bit value a<<24 */
    q = udiv64 (l,h,b);
    if (sign) q = -q;
    return q;
}
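For comparison, assuming the true quotient fits in 32 bits, the routine computes the same thing as this plain C version, which on ARM turns into a runtime library call for the 64-bit division (exactly what the assembly avoids):
#include <stdint.h>

int32_t fixdiv24_ref(int32_t a, int32_t b)
{
    /* multiply rather than shift, since left-shifting a negative
       value is formally undefined in C */
    return (int32_t)(((int64_t)a * (1 << 24)) / b);
}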

x86 Assembly: INC and DEC instruction and overflow flag

In x86 assembly, the overflow flag is set when an add or sub operation on a signed integer overflows, and the carry flag is set when an operation on an unsigned integer overflows.
However, when it comes to the inc and dec instructions, the situation seems to be somewhat different. According to this website, the inc instruction does not affect the carry flag at all.
But I can't find any information about how inc and dec affect the overflow flag, if at all.
Do inc or dec set the overflow flag when an integer overflow occurs? And is this behavior the same for both signed and unsigned integers?
============================= EDIT =============================
Okay, so essentially the consensus here is that INC and DEC should behave the same as ADD and SUB in terms of setting flags, with the exception of the carry flag. This is also what the Intel manual says.
The problem is that I can't actually reproduce this behavior in practice when it comes to unsigned integers.
Consider the following assembly code (using GCC inline assembly to make it easier to print the results):
int8_t ovf = 0;
__asm__
(
"movb $-128, %%bh;"
"decb %%bh;"
"seto %b0;"
: "=g"(ovf)
:
: "%bh"
);
printf("Overflow flag: %d\n", ovf);
Here we decrement a signed 8-bit value of -128. Since -128 is the smallest possible value, an overflow is inevitable. As expected, this prints out: Overflow flag: 1
But when we do the same with an unsigned value, the behavior isn't as I expect:
int8_t ovf = 0;
__asm__
(
"movb $255, %%bh;"
"incb %%bh;"
"seto %b0;"
: "=g"(ovf)
:
: "%bh"
);
printf("Overflow flag: %d\n", ovf);
Here I increment an unsigned 8-bit value of 255. Since 255 is the largest possible value, an overflow is inevitable. However, this prints out: Overflow flag: 0.
Huh? Why didn't it set the overflow flag in this case?
The overflow flag is set when an operation would cause a sign change. Your code is very close. I was able to set the OF flag with the following (VC++) code:
char ovf = 0;
_asm {
mov bh, 127
inc bh
seto ovf
}
cout << "ovf: " << int(ovf) << endl;
When BH is incremented the MSB changes from a 0 to a 1, causing the OF to be set.
This also sets the OF:
char ovf = 0;
_asm {
mov bh, 128
dec bh
seto ovf
}
cout << "ovf: " << int(ovf) << endl;
Keep in mind that the processor does not distinguish between signed and unsigned numbers. When you use 2's complement arithmetic, you can have one set of instructions that handles both. If you want to test for unsigned overflow, you need to use the carry flag, and since INC/DEC don't affect the carry flag, you need ADD/SUB for that case.
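To see that in code, here is a sketch in the same style as the question's snippets (swapping inc for add and seto for setc):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t cf = 0;
    __asm__
    (
        "movb $255, %%bh;"
        "addb $1, %%bh;"   /* add, unlike inc, updates the carry flag */
        "setc %b0;"
        : "=g"(cf)
        :
        : "%bh"
    );
    printf("Carry flag: %d\n", cf);   /* prints 1: unsigned 255+1 wraps */
    return 0;
}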
IntelĀ® 64 and IA-32 Architectures Software Developer's Manuals
Look at the appropriate manual Instruction Set Reference, A-M. Every instruction is precisely documented.
Here is the INC section on affected flags:
The CF flag is not affected. The OF, SF, ZF, AF, and PF flags are set according to the result.
Try changing your test to pass the number in rather than hard-coding it, then have a loop that tries all 256 numbers to find the one, if any, that affects the flag. Or have the asm perform the loop and exit when it hits the flag, or when it wraps around to the number it started with (start with something other than 0x00, 0x7F, 0x80, or 0xFF).
EDIT
.globl inc
inc:
    mov $33, %eax       # arbitrary starting value
top:
    inc %al
    jo done             # stop at the first value whose inc sets OF
    jmp top
done:
    ret

.globl dec
dec:
    mov $33, %eax       # arbitrary starting value
topx:
    dec %al
    jo donex            # stop at the first value whose dec sets OF
    jmp topx
donex:
    ret
inc overflows when it goes from 0x7F to 0x80; dec overflows when it goes from 0x80 to 0x7F. I suspect the problem is in the way you are using the inline assembler.
As many of the other answers have pointed out, INC and DEC do not affect the CF, whereas ADD and SUB do.
What has not been said yet, however, is that this might make a performance difference. Not that you'd usually be bothered by it unless you are trying to optimize the hell out of a routine, but not setting the CF means that INC/DEC write to only part of the flags register, which can cause a partial flag register stall; see the Intel 64 and IA-32 Architectures Optimization Reference Manual or Agner Fog's optimization manuals.
Except for the carry flag, inc sets the flags the same way as add operand,1 would.
The fact that inc does not affect the carry flag is very important.
http://oopweb.com/Assembly/Documents/ArtOfAssembly/Volume/Chapter_6/CH06-2.html#HEADING2-117
The CPU/ALU only handles unsigned binary numbers, and it then uses OF, CF, AF, SF, ZF, etc., to let you decide whether to treat the result as a signed number (OF), an unsigned number (CF), or a BCD number (AF).
Regarding your problem, remember to consider the binary numbers themselves as unsigned.
Also, overflow and the OF involve 3 numbers: the input number, a second number used in the arithmetic, and the result number.
Overflow is flagged only if the first and second numbers have the same value in the sign bit (the most significant bit) and the result has a different sign. That is, adding 2 negative numbers yielded a positive number, or adding 2 positive numbers yielded a negative number:
if( (Sign_Num1==Sign_Num2) && (Sign_Result!=Sign_Num1) ) OF=1;
else OF=0;
For your first problem, you are using -128 as the first number. The second number is implicitly -1, used by the DEC instruction. So we really have the binary numbers 0x80 and 0xFF, both of which have the sign bit set to 1. The result is 0x7F, a number with the sign bit set to 0. Two initial numbers with the same sign produced a result with a different sign, so an overflow is indicated: -128 - 1 resulted in 127, and the overflow flag is set to signal a wrong signed result.
For your second problem, you are using 255 as the first number. The second number is implicitly 1, used by the INC instruction. So we really have the binary numbers 0xFF and 0x01. They have different sign bits, so no overflow is possible (overflow can only happen when adding 2 numbers of the same sign; 2 numbers of different signs can never carry the result beyond the representable signed range). The result is 0x00, and the overflow flag is not set because 255 + 1, or more precisely -1 + 1, gives 0, which is obviously correct for signed arithmetic.
Remember: for the overflow flag to be set, the 2 numbers being added or subtracted must have sign bits with the same value, and the result must have a sign bit with a different value.
What the processor does is set the appropriate flags for the results of these instructions (add, adc, dec, inc, sbb, sub) for both the signed and the unsigned case, i.e. two different flag verdicts for every op. The alternative would be two sets of instructions, one setting the signed-related flags and the other the unsigned-related ones. If the issuing compiler is using unsigned variables in the operation, it tests carry and zero (jc, jnc, jb, jbe, etc.); if signed, it tests overflow, sign, and zero (jo, jno, jg, jng, jl, jle, etc.).
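As a C sketch of that idea (invented helper, not how the hardware computes it), a single 8-bit add yields both verdicts at once, and the consumer picks whichever matches its signedness:
#include <stdint.h>

typedef struct { uint8_t result; int cf; int of; } add_flags;

/* One add, two overflow verdicts: CF for unsigned, OF for signed. */
add_flags add8(uint8_t x, uint8_t y)
{
    add_flags r;
    unsigned wide = (unsigned)x + y;
    r.result = (uint8_t)wide;
    r.cf = wide > 0xFFu;                               /* unsigned wrap */
    r.of = ((~(x ^ y)) & (x ^ r.result) & 0x80) != 0;  /* sign anomaly  */
    return r;
}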
