Need help in understanding the ARM processor

I am a mechanical engineering student currently studying the ARM processor. I just came across a question, but I don't understand how they arrived at these answers. I need help understanding it. Please also explain how to convert from negative decimal to hexadecimal. Thank you.
What is the result of the following calculations performed in ARM? How
are the status flags set? (Write the operands and the result in 32-bit
hexadecimal notation!)
(–1) + (+1)
(0) – (+1)
(2^31 – 1) + (+1)
(–4) + (+5)
The Answers are :
(-1)+(+1):
-1: 0xFFFFFFFF
1: 0x00000001
----------------
0: 0x00000000
N=0, Z=1, C=1, V=0
(0)-(+1): the subtraction is replaced by adding the negated operand => (0)+(-1)
0: 0x00000000
-1: 0xFFFFFFFF
----------------
-1: 0xFFFFFFFF
N=1, Z=0, C=0, V=0
(2^31-1)+(+1):
2^31-1: 0x7FFFFFFF
1: 0x00000001
----------------
-2^31: 0x80000000
N=1, Z=0, C=0, V=1
(-4)+(+5):
-4: 0xFFFFFFFC
5: 0x00000005
----------------
1: 0x00000001
N=0, Z=0, C=1, V=0

The way to represent a negative number in binary (and hence hexadecimal) is called two's complement: write the positive value in binary, invert every bit, and add one. For example, for -4: 4 is 0x00000004, inverting gives 0xFFFFFFFB, and adding one gives 0xFFFFFFFC.
The status flags are:
N : the result is strictly negative (its most significant bit is 1)
Z : the result is zero
C : carry: if you consider the numbers as unsigned, the operation produced a carry out of the most significant bit; on ARM, this means the true result is 2^32 plus the value stored in the register.
V : signed oVerflow: the result doesn't have the sign it should have. For instance, you add two positive integers and you get a negative one.
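To make these rules concrete, here is a minimal C sketch (my own illustration, not ARM code) that computes the result and the four flags for a 32-bit addition the way ARM's ADDS would set them, and reproduces the four examples above:

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* Compute a + b and the N, Z, C, V flags as ARM's ADDS would set them. */
static void adds32(uint32_t a, uint32_t b)
{
    uint32_t r = a + b;
    int n = (r >> 31) & 1;                    /* N: bit 31 of the result */
    int z = (r == 0);                         /* Z: result is zero */
    int c = (r < a);                          /* C: unsigned carry out of bit 31 */
    int v = ((~(a ^ b) & (a ^ r)) >> 31) & 1; /* V: same-sign operands, different-sign result */
    printf("0x%08" PRIX32 " + 0x%08" PRIX32 " = 0x%08" PRIX32 "  N=%d Z=%d C=%d V=%d\n",
           a, b, r, n, z, c, v);
}

int main(void)
{
    adds32(0xFFFFFFFF, 0x00000001); /* (-1) + (+1) */
    adds32(0x00000000, 0xFFFFFFFF); /* (0) - (+1), done as (0) + (-1) */
    adds32(0x7FFFFFFF, 0x00000001); /* (2^31 - 1) + (+1) */
    adds32(0xFFFFFFFC, 0x00000005); /* (-4) + (+5) */
    return 0;
}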


How do I deal with negative numbers in C?

I'm trying to code a very basic Assembler language in C.
Each instruction has 32 bits: the first 8 are the Opcode and the following 24 contain an immediate value (if the instruction comes with one; otherwise they are just zeros).
The Opcodes are defined as simple numbers. For example PUSHC is defined as 1.
Now I want to test if the Opcode and the immediate value are properly separable.
I wrote the following definitions:
#define SIGN_EXTEND(i) ((i) & 0x00800000 ? (i) | 0xFF000000 : (i)) // Handles negative Immediate Values (i)
#define IMMEDIATE(x) SIGN_EXTEND(x & 0x00FFFFFF) // Returns the Immediate Value of an instruction (x)
#define OPCODE(x) (x >> 24) // Returns the Opcode of an instruction (x)
#define OP(o) ((o) << 24) // An instruction with an Opcode (o)
#define OPI(o, i) (OP(o) | IMMEDIATE(i)) // An instruction with an Opcode (o) and an Immediate Value (i)
In a method void runProg(uint32_t prog[]) {...}, I pass an array that contains the instructions for an assembler program, in order, separated by commas, like so:
uint32_t progTest[] = {OPI(PUSHC, 3), OPI(PUSHC, 4), OP(ADD), OP(HALT)};
runProg(progTest);
Here's what runProg does:
void runProg(uint32_t prog[]) {
    int pc = -1; // the program counter
    uint32_t instruction;
    do {
        pc++;
        instruction = prog[pc];
        printf("Command %d : 0x%08x -> Opcode [%d] Immediate [%d]\n",
               pc, instruction, OPCODE(instruction), IMMEDIATE(instruction));
    } while (prog[pc] != OP(HALT));
}
So it prints out the full command in hexadecimal, followed by just the Opcode, followed by just the immediate value. This works for all instructions I have defined. The test program above gives this output:
Command 0 : 0x01000003 -> Opcode [1] Immediate [3]
Command 1 : 0x01000004 -> Opcode [1] Immediate [4]
Command 2 : 0x02000000 -> Opcode [2] Immediate [0]
Command 3 : 0x00000000 -> Opcode [0] Immediate [0]
Now here's the problem:
The command PUSHC only works with positive values.
Changing the immediate value of the first PUSHC call to -3 produces this result:
Command 0 : 0xfffffffd -> Opcode [255] Immediate [-3]
Command 1 : 0x01000004 -> Opcode [1] Immediate [4]
Command 2 : 0x02000000 -> Opcode [2] Immediate [0]
Command 3 : 0x00000000 -> Opcode [0] Immediate [0]
So as you can see, the immediate value is displayed correctly, however the command is missing the Opcode.
Changing the definition of IMMEDIATE(x)
from
#define IMMEDIATE(x) SIGN_EXTEND(x & 0x00FFFFFF)
to
#define IMMEDIATE(x) (SIGN_EXTEND(x) & 0x00FFFFFF)
produces the exact opposite result, where the Opcode is correctly separated, but the immediate value is wrong:
Command 0 : 0x01fffffd -> Opcode [1] Immediate [16777213]
Command 1 : 0x01000004 -> Opcode [1] Immediate [4]
Command 2 : 0x02000000 -> Opcode [2] Immediate [0]
Command 3 : 0x00000000 -> Opcode [0] Immediate [0]
Therefore, I am relatively sure that my definition of SIGN_EXTEND(i) is flawed. However, I can't seem to put my finger on it.
Any ideas on how to fix this are greatly appreciated!
0x01fffffd is correct; the 8-bit operation code field contains 0x01, and the 24-bit immediate field contains 0xfffffd, which is the 24-bit two's complement encoding of −3. When encoding, there is no need for sign extension; the IMMEDIATE macro is passed a (presumably) 32-bit two's complement value, and its job is merely to reduce it to 24 bits, not to extend it. So it merely needs to be #define IMMEDIATE(x) ((x) & 0xffffff).

When you want to interpret the immediate field, as for printing, then you need to convert from 24 bits to 32. For that, you need a different macro, the way you have different macros to encode an operation code (OPI) and to decode/extract/interpret the operation code (OPCODE).
You can interpret the immediate field with this function:
static int InterpretImmediate(unsigned x)
{
    // Extract the low 24 bits.
    x &= (1u << 24) - 1;

    /* Flip the sign bit. If the sign bit is 0, this adds 2**23.
       If the sign bit is 1, this subtracts 2**23. */
    x ^= 1u << 23;

    /* Convert to int and subtract 2**23. If the sign bit started as 0, this
       negates the 2**23 we added above. If it started as 1, this results in
       a total subtraction of 2**24, which produces the two's complement of a
       24-bit encoding. */
    return (int) x - (1u << 23);
}
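Putting the pieces together, here is a self-contained sketch of the corrected encode/decode path. The opcode values PUSHC = 1, ADD = 2 and HALT = 0 are inferred from the question's output:

#include <stdint.h>
#include <stdio.h>

#define PUSHC 1
#define ADD   2
#define HALT  0

#define IMMEDIATE(x) ((x) & 0x00FFFFFF)      /* encode: just truncate to 24 bits */
#define OPCODE(x)    ((x) >> 24)
#define OP(o)        ((uint32_t)(o) << 24)
#define OPI(o, i)    (OP(o) | IMMEDIATE(i))

static int InterpretImmediate(unsigned x)    /* decode: 24-bit field -> signed int */
{
    x &= (1u << 24) - 1;
    x ^= 1u << 23;
    return (int) x - (1 << 23);
}

int main(void)
{
    uint32_t prog[] = { OPI(PUSHC, -3), OPI(PUSHC, 4), OP(ADD), OP(HALT) };
    int pc = -1;
    do {
        pc++;
        printf("Command %d : 0x%08x -> Opcode [%u] Immediate [%d]\n",
               pc, (unsigned) prog[pc], (unsigned) OPCODE(prog[pc]),
               InterpretImmediate(prog[pc]));
    } while (prog[pc] != OP(HALT));
    return 0;
}

The first line printed is now Command 0 : 0x01fffffd -> Opcode [1] Immediate [-3], with both fields intact.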

Trick to divide a constant (power of two) by an integer

NOTE This is a theoretical question. I'm happy with the performance of my actual code as it is. I'm just curious about whether there is an alternative.
Is there a trick to do an integer division of a constant value, which is itself an integer power of two, by an integer variable value, without having to do an actual divide operation?
// The fixed value of the numerator
#define SIGNAL_PULSE_COUNT 0x4000UL

// The division that could use a neat trick.
uint32_t signalToReferenceRatio(uint32_t referenceCount)
{
    // Promote the numerator to a 64-bit value, shift it left by 32 so
    // the result has an adequate number of bits of precision, and divide
    // by the reference count.
    return (uint32_t)((((uint64_t)SIGNAL_PULSE_COUNT) << 32) / referenceCount);
}
I've found several (lots) of references for tricks to do division by a constant, both integer and floating point. For example, the question What's the fastest way to divide an integer by 3? has a number of good answers including references to other academic and community materials.
Given that the numerator is constant, and it's an integer power of two, is there a neat trick that could be used in place of doing an actual 64 bit division; some kind of bit-wise operation (shifts, AND, XOR, that kind of stuff) or similar?
I don't want any loss of precision (beyond a possible half bit due to integer rounding) greater than that of doing the actual division, as the precision of the instrument relies on the precision of this measurement.
"Let the compiler decide" is not an answer, because I want to know if there is a trick.
Extra, Contextual Information
I'm developing a driver on a micro-controller with 16-bit data and a 24-bit instruction word. The driver does some magic with the peripheral modules to obtain a pulse count of a reference frequency for a fixed number of pulses of a signal frequency. The required result is the ratio of the signal pulses to the reference pulses, expressed as an unsigned 32-bit value. The arithmetic for the function is defined by the manufacturer of the device for which I'm developing the driver, and the result is processed further to obtain a floating-point real-world value, but that's outside the scope of this question.
The micro-controller I'm using has a Digital Signal Processor with a number of division operations that I could use, and I'm not afraid to do so if necessary. There would be some minor challenges to overcome with this approach beyond putting together the assembly instructions to make it work, such as the DSP being used to do a PID function in a BLDC driver ISR, but nothing I can't manage.
You cannot use clever mathematical tricks to not do a division, but you can of course still use programming tricks if you know the range of your reference count:
Nothing beats a pre-computed lookup table in terms of speed.
There are fast approximate square root algorithms (probably already in your DSP), and you can improve the approximation by one or two Newton-Raphson iterations. If doing the computation with floating-point numbers is accurate enough for you, you can probably beat a 64-bit integer division in terms of speed (but not in clarity of code).
Since you mentioned that the result will be converted to floating point later, it might be beneficial not to compute the integer division at all, but to use your floating-point hardware.
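For illustration, here is a minimal sketch of the first option, the lookup table. RMIN and RMAX are hypothetical bounds on referenceCount (the real range isn't stated in the question), and the table costs 4 bytes per possible count:

#include <stdint.h>

#define SIGNAL_PULSE_COUNT 0x4000UL
#define RMIN 100000UL   /* hypothetical lower bound on referenceCount */
#define RMAX 101024UL   /* hypothetical upper bound on referenceCount */

static uint32_t ratioTable[RMAX - RMIN];

void buildRatioTable(void)   /* run once at startup; all divides happen here */
{
    for (uint32_t r = RMIN; r < RMAX; r++)
        ratioTable[r - RMIN] =
            (uint32_t)((((uint64_t)SIGNAL_PULSE_COUNT) << 32) / r);
}

uint32_t signalToReferenceRatioLUT(uint32_t referenceCount)
{
    return ratioTable[referenceCount - RMIN];   /* no divide at run time */
}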
I worked out a Matlab version using fixed-point arithmetic.
This method assumes that an integer version of log2(x) can be calculated efficiently, which is true for the dsPIC30F/33F and TI C6000, which have an instruction to detect the most significant 1 of an integer.
For this reason, this code has a strong ISA dependency and cannot be written in portable/standard C, and it can be improved using instructions like multiply-and-add and multiply-and-shift, so I won't try translating it to C.
nrdiv.m
function [ y ] = nrdiv( q, x, lut)
    % assume q>31, lut = 2^31/[1,1,2,...255]
    p2 = ceil(log2(x));              % available in TI C6000, instruction LMBD
                                     % available in Microchip dsPIC30F/33F, instruction FF1L
    if p2<8
        pre_shift = 0;
    else
        pre_shift = p2-8;
    end                              % shr = (p2-8)>0?(p2-8):0;
    xn = shr(x, pre_shift);          % xn = x>>pre_shift;
    y = shr(lut(xn), pre_shift);     % y = lut[xn]>>pre_shift;
    y = shr(y * (2^32 - y*x), 30);   % basic iteration;
                                     % steps up from q31 to q32
    y = shr(y * (2^33 - y*x), (64-q)); % step up from q32 to desired q
    if q>39
        y = shr(y * (2^(1+q) - y*x), (q)); % when q>39, an additional
                                           % iteration is required,
    end                                    % no step up is performed
end

function y = shr(x, r)
    y = floor(x./2^r);               % simulate operator >>
end
test.m
test_number = (2^22-12345);
test_q = 48;
lut_q31 = round(2^31 ./ [1,[1:1:255]]);
display(sprintf('tested 2^%d/%d, diff=%f\n', test_q, test_number,...
    nrdiv( test_q, test_number, lut_q31) - 2^test_q/test_number));
sample output
tested 2^48/4181959, diff=-0.156250
reference:
Newton–Raphson division
A little late but here is my solution.
First some assumptions:
Problem:
X = N/D, where N is a constant and a power of 2.
All values are 32-bit unsigned integers.
X is unknown, but we have a good estimate
(a previous, but no longer accurate, solution).
An exact solution is not required.
Note: due to integer truncation this is not an accurate algorithm!
An iterative solution is okay (it improves with each loop).
Division is much more expensive than multiplication.
For 32-bit unsigned integers on an Arduino UNO:
'+/-' ~0.75us
'*' ~3.5us
'/' ~36us
We seek to replace the division. Basically, let's start with Newton's method:
Xnew = Xold - f(Xold)/f'(Xold)
where f(X) = 0 at the solution we seek.
Taking f(X) = N/X - D and solving, I get:
Xnew = Xold*(C - Xold*D)/N
where C = 2*N
First trick:
Now that the numerator (a constant) has become a divisor (a constant), one solution here (which does not require N to be a power of 2) is:
Xnew = Xold*(C - Xold*D)*A >> M
where C = 2*N, and A and M are constants (look up dividing-by-a-constant tricks).
or (staying with Newton's method):
Xnew = Xold*(C - Xold*D) >> M
where C = 2<<M and M is the power (so N = 1<<M).
So I have 2 '*' (7.0us), a '-' (0.75us) and a '>>' (0.75us?) or 8.5us total (rather than 36us), excluding other overheads.
Limitations:
As the data type is 32-bit unsigned, M should not exceed 15, else there will be problems with overflow (you can probably get around this by using a 64-bit intermediate data type).
N > D (else the algorithm blows up, at least with unsigned integers!)
(Obviously the algorithm will also work with signed and float data types.)
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

int main(void)
{
    unsigned long c, d, m, x;
    // x = n/d where n = 1<<m
    m = 15;
    c = 2 << m;
    d = 10;
    x = 10;
    while (true)
    {
        x = x*(c - d*x) >> m;
        printf("%lu\n", x);
        getchar();
    }
    return 0;
}
Having tried many alternatives, I ended up doing normal binary long division in assembly language. However, the routine does use a few optimisations that bring the execution time down to an acceptable level.
/*
* Converts the reference frequency count for a specific signal frequency
* to a ratio.
* Xs = Ns * 2^32 / Nr
* Where:
* 2^32 is a constant scaling so that the maximum accuracy can be achieved.
* Ns is the number of signal counts (fixed at 0x4000 by hardware).
* Nr is the number of reference counts, passed in W1:W0.
* #param W1:W0 The number of reference frequency pulses.
* #return W1:W0 The scaled ratio.
*/
.align 2
.global _signalToReferenceRatio
.type _signalToReferenceRatio, #function
; This is the position of the most significant bit of the fixed Ns (0x4000).
.equ LOG2_DIVIDEND, 14
.equ DIVISOR_LIMIT, LOG2_DIVIDEND+1
.equ WORD_SIZE, 16
_signalToReferenceRatio:
; Create a dividend, MSB-aligned with the divisor, in W2:W3 and place the
; number of iterations required for the MSW in [W14] and the LSW in [W14+2].
LNK #4
MUL.UU W2, #0, W2
FF1L W1, W4
; If MSW is zero the argument is out of range.
BRA C, .returnZero
SUBR W4, #WORD_SIZE, W4
; Find the number of quotient MSW loops.
; This is effectively 1 + log2(dividend) - log2(divisor).
SUBR W4, #DIVISOR_LIMIT, [W14]
BRA NC, .returnZero
; Since the SUBR above is always non-negative and the C flag set, use this
; to set bit W3<W5> and the dividend in W2:W3 = 2^(16+W5) = 2^log2(divisor).
BSW.C W3, W4
; Use 16 quotient LSW loops.
MOV #WORD_SIZE, W4
MOV W4, [W14+2]
; Set up W4:W5 to hold the divisor and W0:W1 to hold the result.
MOV.D W0, W4
MUL.UU W0, #0, W0
.checkLoopCount:
; While the bit count is non-negative ...
DEC [W14], [W14]
BRA NC, .nextWord
.alignQuotient:
; Shift the current quotient word up by one bit.
SL W0, W0
; Subtract divisor from the current dividend part.
SUB W2, W4, W6
SUBB W3, W5, W7
; Check if the dividend part was less than the divisor.
BRA NC, .didNotDivide
; It did divide, so set the LSB of the quotient.
BSET W0, #0
; Shift the remainder up by one bit, with the next zero in the LSB.
SL W7, W3
BTSC W6, #15
BSET W3, #0
SL W6, W2
BRA .checkLoopCount
.didNotDivide:
; Shift the next (zero) bit of the dividend into the LSB of the remainder.
SL W3, W3
BTSC W2, #15
BSET W3, #0
SL W2, W2
BRA .checkLoopCount
.nextWord:
; Test if there are any LSW bits left to calculate.
MOV [++W14], W6
SUB W6, #WORD_SIZE, [W14--]
BRA NC, .returnQ
; Decrement the remaining bit counter before writing it back.
DEC W6, [W14]
; Move the working part of the quotient up into the MSW of the result.
MOV W0, W1
BRA .alignQuotient
.returnQ:
; Return the quotient in W0:W1.
ULNK
RETURN
.returnZero:
MUL.UU W0, #0, W0
ULNK
RETURN
.size _signalToReferenceRatio, .-_signalToReferenceRatio
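For readers without dsPIC hardware, here is a rough C model of the same shift-and-subtract long division (my own sketch, not the driver code; the range guard simply requires Nr > Ns so the 32-bit quotient cannot overflow, which simplifies the asm's MSW check):

#include <stdint.h>

#define SIGNAL_PULSE_COUNT 0x4000UL

uint32_t signalToReferenceRatioModel(uint32_t referenceCount)
{
    if (referenceCount <= SIGNAL_PULSE_COUNT)
        return 0;                            /* out of range, like .returnZero */
    uint64_t remainder = SIGNAL_PULSE_COUNT; /* Ns, with 32 zero bits to shift in */
    uint32_t quotient = 0;
    for (int i = 0; i < 32; i++) {           /* one quotient bit per iteration */
        remainder <<= 1;
        quotient <<= 1;
        if (remainder >= referenceCount) {
            remainder -= referenceCount;
            quotient |= 1;
        }
    }
    return quotient;                         /* floor(Ns * 2^32 / Nr) */
}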

Fibonacci sequence on 68HC11 using 4-byte numbers

I'm trying to figure out a way to implement the Fibonacci sequence using a 68HC11 IDE that uses a Motorola as11 assembler.
I've done it using 2-byte unsigned values in little-endian format; now I'm attempting to change it to use 4-byte variables in big-endian.
My pseudo-code (which is written in c):
RESULT = 1;
PREV = 1;
COUNT = N;
WHILE (COUNT > 2) {
    NEXT = RESULT + PREV;
    PREV = RESULT;
    RESULT = NEXT;
    COUNT--;
}
I'll include some of my current assembly code. Please note that count is a 1-byte unsigned int, and prev, next, and result are 2-byte unsigned ints. N is unsigned, set to 10.
         ORG   $C000
         LDD   #1
         STD   RESULT
         STD   PREV
         LDAA  N
         STAA  COUNT
WHILE    LDAA  COUNT
         CMPA  #2
         BLS   ENDWHILE
         LDD   RESULT
         ADDD  PREV
         STD   NEXT
         LDD   RESULT
         STD   PREV
         LDD   NEXT
         STD   RESULT
         DEC   COUNT
         BRA   WHILE
ENDWHILE
DONE     BRA   DONE
         END
The issue that I'm having now is altering this (other than the obvious variable changes/declarations): N will begin at 40 now, not 10. Would altering my pseudo-code to include pointers allow me to implement it one-to-one more easily in big-endian? Since this is in little-endian, I assume I have to alter some of the branches. Yes, this is an assignment for class; I'm not looking for the code, just some guidance would be nice.
Thank you!
(Your problem description is a bit vague as to what your actual problem is, so I may be guessing a bit.)
BTW, 68HC11 is big-endian.
The 68HC11 has a 16-bit accumulator, so as soon as your result overflows this, you need to do math operations in pieces.
I suppose you mean that by changing N from 10 to 40 your Fibonacci number becomes too big to be stored in a 16-bit variable.
The use or not of pointers is irrelevant to your problem as you can solve it both with or without them. For example, you can use a pointer to tell your routine where to store the result.
Depending on your maximum expected result, you need to adjust your routine. I will assume you won't need to go over 32-bit result (N=47 => 2971215073).
Here's a partially tested but unoptimized possibility (using ASM11 assembler):
STACKTOP equ $1FF
RESET_VECTOR equ $FFFE
org $100 ;RAM
result rmb 4
org $d000 ;ROM
;*******************************************************************************
; Purpose: Return the Nth fibonacci number in result
; Input : HX -> 32-bit result
; : A = Nth number to calculate
; Output : None
; Note(s):
GetFibonacci proc
push ;macro to save D, X, Y
;--- define & initialize local variables
des:4 ;allocate 4 bytes on stack
tmp## equ 5 ;5,Y: temp number
ldab #1
pshb
clrb
pshb:3
prev## equ 1 ;1,Y: previous number (initialized to 1)
psha
n## equ 0 ;0,Y: N
;---
tsy ;Y -> local variables
clra
clrb
std ,x
std prev##,y
ldd #1
std 2,x
std prev##+2,y
Loop## ldaa n##,y
cmpa #2
bls Done##
ldd 2,x
addd prev##+2,y
std tmp##+2,y
ldaa 1,x
adca prev##+1,y
staa tmp##+1,y
ldaa ,x
adca prev##,y
staa tmp##,y
ldd ,x
std prev##,y
ldd 2,x
std prev##+2,y
ldd tmp##,y
std ,x
ldd tmp##+2,y
std 2,x
dec n##,y
bra Loop##
Done## ins:9 ;de-allocate all locals from stack
pull ;macro to restore D, X, Y
rts
;*******************************************************************************
; Test code
;*******************************************************************************
Start proc
ldx #STACKTOP ;setup our stack
txs
ldx #result
ldaa #40 ;Nth fibonacci number to get
bsr GetFibonacci
bra * ;check 'result' for answer
org RESET_VECTOR
dw Start
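If it helps to see the multi-precision idea outside assembly, here is a hedged C sketch of the 32-bit Fibonacci loop done in 16-bit halves with an explicit carry, mirroring the add-then-add-with-carry pattern in the listing above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* RESULT and PREV as hi/lo 16-bit halves, as a 16-bit accumulator would hold them. */
    uint16_t res_hi = 0,  res_lo = 1;   /* RESULT = 1 */
    uint16_t prev_hi = 0, prev_lo = 1;  /* PREV = 1 */
    for (int count = 40; count > 2; count--) {
        uint32_t lo = (uint32_t)res_lo + prev_lo;   /* low word add (like ADDD) */
        uint16_t next_lo = (uint16_t)lo;
        uint16_t next_hi = (uint16_t)(res_hi + prev_hi + (lo >> 16)); /* high word plus the
                                                                         carry out of the low word */
        prev_hi = res_hi;  prev_lo = res_lo;        /* PREV = RESULT */
        res_hi  = next_hi; res_lo  = next_lo;       /* RESULT = NEXT */
    }
    printf("fib(40) = %lu\n",
           (unsigned long)(((uint32_t)res_hi << 16) | res_lo)); /* 102334155 */
    return 0;
}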

a function to check if the nth bit is set in a byte

I want a simple C function which will return true if the n-th bit in a byte is set to 1, and false otherwise.
This is a critical function in terms of execution time, so I am thinking of the most optimal way to do that.
The following function can do what you need:
int isNthBitSet (unsigned char c, int n) {
    static unsigned char mask[] = {128, 64, 32, 16, 8, 4, 2, 1};
    return ((c & mask[n]) != 0);
}
This assumes 8-bit bytes (not a given in C) and the zeroth bit being the highest-order one. If those assumptions are incorrect, it simply comes down to expanding and/or re-ordering the mask array.
No error checking is done since you cited speed as the most important consideration. Do not pass in an invalid n; that would be undefined behaviour.
At insane optimisation level -O3, gcc gives us:
isNthBitSet:
        pushl   %ebp
        movl    %esp, %ebp
        movl    12(%ebp), %eax
        movzbl  8(%ebp), %edx
        popl    %ebp
        testb   %dl, mask(%eax)
        setne   %al
        movzbl  %al, %eax
        ret
mask:
        .byte   -128, 64, 32, 16, 8, 4, 2, 1
which is pretty small and efficient. And if you make it static and suggest inlining, or force it inline as a macro definition, you can even bypass the cost of a function call.
Just make sure you benchmark any solution you're given, including this one (a). The number one mantra in optimisation is "Measure, don't guess!"
If you want to know how the bitwise operators work, see here. The simplified AND-only version is below.
The AND operation & will set a bit in the target only if both bits are set in the two sources. The relevant table is:
AND | 0 1
----+-----
 0  | 0 0
 1  | 0 1
For a given char value, we use the single-bit bit masks to check if a bit is set. Let's say you have the value 13 and you want to see if the third-from-least-significant bit is set.
Decimal   Binary
     13   0000 1101
      4   0000 0100 (the bit mask for the third-from-least bit)
          =========
          0000 0100 (the result of the AND operation)
You can see that all the zero bits in the mask result in the equivalent result bits being zero. The single one bit in the mask will basically let the equivalent bit in the value flow through to the result. The result is then zero if the bit we're checking was zero, or non-zero if it was one.
That's where the expression in the return statement comes from. The values in the mask lookup table are all the single-bit masks:
Decimal   Binary
    128   1000 0000
     64   0100 0000
     32   0010 0000
     16   0001 0000
      8   0000 1000
      4   0000 0100
      2   0000 0010
      1   0000 0001
(a) I know how good I am, but you don't :-)
Just check the value of (1 << bit) & byte. If it is nonzero, the bit is set.
Let the number be num. Then:
return ((1 << n) & num);
bool isSet(unsigned char b, unsigned char n) { return b & ( 1 << n); }
Another approach would be
bool isNthBitSet (unsigned char c, int n) {
    return (1 & (c >> n));
}
#include <stdio.h>

int main()
{
    unsigned int n, a;
    printf("enter value for n\n");
    scanf("%u", &n);
    printf("enter value for a:\n");
    scanf("%u", &a);
    // Note: this sets bit n of a (the shifted expression is just 1 << n).
    a = a | (((~((unsigned)0)) >> (sizeof(int)*8 - 1)) << n);
    printf("%u\n", a);
}
#include <stdio.h>

int main()
{
    int data, bit;
    printf("enter data:");
    scanf("%d", &data);
    printf("enter bit position to test:");
    scanf("%d", &bit);
    data & (1 << bit) ? printf("bit is set\n") : printf("bit is clear\n");
    return 0;
}

x86 Assembly: INC and DEC instruction and overflow flag

In x86 assembly, the overflow flag is set when an add or sub operation on a signed integer overflows, and the carry flag is set when an operation on an unsigned integer overflows.
However, when it comes to the inc and dec instructions, the situation seems to be somewhat different. According to this website, the inc instruction does not affect the carry flag at all.
But I can't find any information about how inc and dec affect the overflow flag, if at all.
Do inc or dec set the overflow flag when an integer overflow occurs? And is this behavior the same for both signed and unsigned integers?
============================= EDIT =============================
Okay, so essentially the consensus here is that INC and DEC should behave the same as ADD and SUB, in terms of setting flags, with the exception of the carry flag. This is also what it says in the Intel manual.
The problem is I can't actually reproduce this behavior in practice, when it comes to unsigned integers.
Consider the following assembly code (using GCC inline assembly to make it easier to print out results.)
int8_t ovf = 0;
__asm__
(
    "movb $-128, %%bh;"
    "decb %%bh;"
    "seto %b0;"
    : "=g"(ovf)
    :
    : "%bh"
);
printf("Overflow flag: %d\n", ovf);
Here we decrement a signed 8-bit value of -128. Since -128 is the smallest possible value, an overflow is inevitable. As expected, this prints out: Overflow flag: 1
But when we do the same with an unsigned value, the behavior isn't as I expect:
int8_t ovf = 0;
__asm__
(
    "movb $255, %%bh;"
    "incb %%bh;"
    "seto %b0;"
    : "=g"(ovf)
    :
    : "%bh"
);
printf("Overflow flag: %d\n", ovf);
Here I increment an unsigned 8-bit value of 255. Since 255 is the largest possible value, an overflow is inevitable. However, this prints out: Overflow flag: 0.
Huh? Why didn't it set the overflow flag in this case?
The overflow flag is set when an operation produces a result with the wrong sign, i.e. a signed overflow. Your code is very close. I was able to set the OF flag with the following (VC++) code:
char ovf = 0;
_asm {
    mov bh, 127
    inc bh
    seto ovf
}
cout << "ovf: " << int(ovf) << endl;
When BH is incremented the MSB changes from a 0 to a 1, causing the OF to be set.
This also sets the OF:
char ovf = 0;
_asm {
    mov bh, 128
    dec bh
    seto ovf
}
cout << "ovf: " << int(ovf) << endl;
Keep in mind that the processor does not distinguish between signed and unsigned numbers. When you use 2's complement arithmetic, you can have one set of instructions that handle both. If you want to test for unsigned overflow, you need to use the carry flag. Since INC/DEC don't affect the carry flag, you need to use ADD/SUB for that case.
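To see the unsigned case work in the same setting, here is a variation on the question's own snippet (a sketch, GCC inline asm) that uses ADD instead of INC so the carry flag reports the unsigned overflow:

int8_t cf = 0;
__asm__
(
    "movb $255, %%bh;"
    "addb $1, %%bh;"   /* unlike incb, addb updates CF */
    "setc %b0;"        /* capture the carry flag */
    : "=g"(cf)
    :
    : "%bh"
);
printf("Carry flag: %d\n", cf); /* prints 1: unsigned 255 + 1 wrapped around */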
Intel® 64 and IA-32 Architectures Software Developer's Manuals
Look at the appropriate manual Instruction Set Reference, A-M. Every instruction is precisely documented.
Here is the INC section on affected flags:
The CF flag is not affected. The OF, SF, ZF, AF, and PF flags are set according to the result.
Try changing your test to pass in the number rather than hard-coding it, then have a loop that tries all 256 numbers to find the one, if any, that affects the flag. Or have the asm perform the loop and exit when it hits the flag, or when it wraps around to the number it started with (start with something other than 0x00, 0x7F, 0x80, or 0xFF).
EDIT
.globl inc
inc:
        mov     $33, %eax
top:
        inc     %al
        jo      done
        jmp     top
done:
        ret

.globl dec
dec:
        mov     $33, %eax
topx:
        dec     %al
        jo      donex
        jmp     topx
donex:
        ret
inc overflows when it goes from 0x7F to 0x80; dec overflows when it goes from 0x80 to 0x7F. I suspect the problem is in the way you are using the inline assembler.
As many of the other answers have pointed out, INC and DEC do not affect the CF, whereas ADD and SUB do.
What has not been said yet, however, is that this might make a performance difference. Not that you'd usually be bothered by it unless you are trying to optimise the hell out of a routine, but essentially, not setting the CF means that INC/DEC write to only part of the flags register, which can cause a partial flag register stall; see the Intel 64 and IA-32 Architectures Optimization Reference Manual or Agner Fog's optimisation manuals.
Except for the carry flag, inc sets the flags the same way as add with an operand of 1 would.
The fact that inc does not affect the carry flag is very important.
http://oopweb.com/Assembly/Documents/ArtOfAssembly/Volume/Chapter_6/CH06-2.html#HEADING2-117
The CPU/ALU is only capable of handling unsigned binary numbers; it then uses OF, CF, AF, SF, ZF, etc., to allow you to decide whether to treat the result as a signed number (OF), an unsigned number (CF) or a BCD number (AF).
About your problem, remember to consider the binary numbers themselves as unsigned.
Also, overflow and the OF require three numbers: the input number, a second number to use in the arithmetic, and the result number.
Overflow is activated only if the first and second numbers have the same value for the sign bit (the most significant bit) and the result has a different sign. As in, adding 2 negative numbers resulted in a positive number, or adding 2 positive numbers resulted in a negative number:
if( (Sign_Num1==Sign_Num2) && (Sign_Result!=Sign_Num1) ) OF=1;
else OF=0;
For your first problem, you are using -128 as the first number. The second number is implicitly -1, used by the DEC instruction. So we really have the binary numbers 0x80 and 0xFF. Both of them have the sign bit set to 1. The result is 0x7F, which is a number with the sign bit set to 0. We had two initial numbers with the same sign and a result with a different sign, so we indicate an overflow: -128 - 1 appeared to result in 127, and thus the overflow flag is set to indicate a wrong signed result.
For your second problem, you are using 255 as the first number. The second number is implicitly 1, used by the INC instruction. So we really have the binary numbers 0xFF and 0x01. They have different sign bits, so it is not possible to get an overflow (you can only overflow when adding two numbers of the same sign; two numbers of different signs can never go beyond the representable signed range). The result is 0x00, and it doesn't set the overflow flag because 255 + 1, or more exactly -1 + 1, gives 0, which is obviously correct for signed arithmetic.
Remember that for the overflow flag to be set, the 2 numbers being added/subtracted need to have the sign bit with the same value, and then the result must have a sign bit with a value different from them.
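That rule is easy to check in plain C; here is a small sketch of my own for 8-bit addition that reproduces both cases from the question:

#include <stdint.h>
#include <stdio.h>

/* OF for the 8-bit addition a + b, per the rule above. */
static int of_add8(uint8_t a, uint8_t b)
{
    uint8_t r = (uint8_t)(a + b);
    int sa = a >> 7, sb = b >> 7, sr = r >> 7;
    return (sa == sb) && (sr != sa);    /* same-sign inputs, different-sign result */
}

int main(void)
{
    printf("dec 0x80: OF=%d\n", of_add8(0x80, 0xFF)); /* -128 + (-1): prints 1 */
    printf("inc 0xFF: OF=%d\n", of_add8(0xFF, 0x01)); /* -1 + 1: prints 0 */
    return 0;
}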
What the processor does is set the appropriate flags for the results of these instructions (add, adc, dec, inc, sbb, sub) for both the signed and unsigned cases, i.e. two different flag results for every op. The alternative would be having two sets of instructions, where one sets the signed-related flags and the other the unsigned-related ones. If the issuing compiler is using unsigned variables in the operation, it will test carry and zero (jc, jnc, jb, jbe, etc.); if signed, it tests overflow, sign and zero (jo, jno, jg, jng, jl, jle, etc.).
