Enabling interrupts on 8052 causes lock-up (endless loop)

The problem I have is that ever since I enabled interrupts, the program is stuck in an endless loop. If I disable the interrupts, the program executes normally.
I made sure to protect the registers (variables) used by the interrupt routines by pushing them onto the stack on entry and popping them off on exit, and that did not help.
I even went as far as replacing the interrupt routines with a single reti (return from interrupt), and I still face the same problem.
The only workaround I have right now is to keep the timers stopped by not executing mov TCON,#50h, so the interrupts never fire. This makes me think that the interrupt vector addresses published on the internet are not correct, and that in reality some screwed-up code is being executed instead.
The microcontroller I'm using is an AT89S52.
Am I off with my vector addressing here? I need some advice, because the code below does not work when the timer interrupts are enabled.
org 000h
;entry point when program first runs
ljmp sysinit ;seems to execute
sjmp $
org 00Bh
;Timer 0 interrupt fires every 65536 machine cycles, even while the timer 1 ISR is running
ljmp crit
sjmp $
org 01Bh
;Timer 1 interrupt executes every 256 machine cycles
ljmp processkey
sjmp $
org 030h
start:
;rest of program goes here.
sysinit:
mov TL0,#0h
mov TH0,#0h
mov TH1,#0h
mov TL1,#0h
mov PSW,#0h
mov R0,#7Fh
;make all ram addresses 0h to 7Fh contain value 0
sysreset:
CLR A
mov @R0,A ;write 0 to the address held in R0
djnz R0,sysreset
mov T2CON,#0h ;8052 register
mov AUXR,#0h ;AT89S52 register
mov AUXR1,#0h ;AT89S52 register
mov PCON,#80h ;Make baud divisor=192
mov TMOD,#21h ;Timer1=8-bit auto-reload (0-255), Timer0=16-bit (0-65535)
mov IP,#02h ;priority to timer 0
mov TCON,#50h ;timers on
mov SP,#050h ;stack=50h
mov IE,#8Ah ;ints enabled
ljmp start

Are you sure that the interrupt service routines actually clear the interrupt flags? If not, the core will continuously try to service them, hence the apparent lock-up.
Check out the datasheet, page 17:
Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and that bit will have to be cleared in software.
So your interrupt will keep firing if the flags aren't reset.
Why are you writing your code in assembly anyway? Surely, for this functionality you could use plain C instead.
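For instance, here is a minimal sketch of such a handler in C, assuming the Keil C51 toolchain and its reg52.h SFR definitions (interrupt 5 maps to the 8052 timer 2 vector at 002Bh):
#include <reg52.h>

volatile unsigned int tick;          /* example counter bumped by the ISR */

void timer2_isr(void) interrupt 5    /* vector address = 8*5 + 3 = 002Bh */
{
    if (TF2)  TF2  = 0;              /* overflow flag: not cleared by hardware */
    if (EXF2) EXF2 = 0;              /* external flag: not cleared by hardware */
    tick++;                          /* real work goes here */
}
Note that TF0 and TF1 are cleared by hardware when their vectors are taken, so this software clearing mainly matters for timer 2 (and for the external interrupts in level-triggered mode).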

Related

Does the single-step interrupt (INT 01h) clear the TF flag after every instruction?

I want to use the single-step interrupt, and I understand that for this interrupt to fire on every instruction, the TF flag must be set (TF=1).
In my code the single-step interrupt does work, but after some instructions (maybe after every output) it stops, and I have to set TF=1 again to continue the code and keep displaying output.
Does that mean that every instruction, or only certain instructions, causes TF to become 0 again?
This is the code I used to set TF=1:
asm{
PUSHF                      ; push FLAGS onto the stack
MOV BP,SP                  ; point BP at the saved FLAGS word
OR WORD PTR[BP+0],0100H    ; set bit 8 = TF (trap flag)
POPF                       ; pop it back into FLAGS with TF=1
}

Is there a delay between enabling a peripheral timer and the timer actually starting its count? What is this delay and what causes it?

I am trying to configure a timer (TIM9) and its interrupt handler to single step through a program. My approach is to first interrupt the first instruction, and then in the interrupt handler configure the timer so that it triggers an interrupt right after returning from the ISR.
Right now I'm still trying to get the first step correctly.
This is the piece of sample code I'm working with right now. I have configured the timer to generate an interrupt when its counter is equal to some compare value.
I set the compare value equal to 1 so that the code gets interrupted after 1 cycle of the timer. According to the reference manual the counter starts counting 1 cycle after setting the enable bit so I added a NOP.
/*
* TIM9 is configured to generate an interrupt when its counter is equal to the compare value
*/
TIM9->CCR1 = 1; // set compare 1 value to 1
TIM9->CR1 |= 1; // enable TIM9
__ISB(); // flush pipeline
__asm volatile ("NOP"); // from reference manual: counter starts counting 1 cycle after setting CEN bit
__asm volatile("MOV r8, #1 \n\t"); // expect to interrupt this instruction
__asm volatile("MOV r8, #2 \n\t");
__asm volatile("MOV r8, #3 \n\t");
__asm volatile("MOV r8, #4 \n\t");
To verify that the correct instruction got interrupted I used GDB to check the content of register r8 after entering the interrupt handler, and I found that it is actually equal to 6. This implies that there is a much longer delay than 1 cycle or that I'm simply missing something.
I can simply add 5 NOP instructions before the first MOV instruction so that the interrupt occurs at the right time, but I don't understand why this is necessary. As far as I know, the code as I have it right now should generate an interrupt during the first MOV instruction.
Why am I getting this behavior? Why does there seem to be such a long delay between enabling the timer and the interrupt being generated?
Is this maybe because there is a delay between the counter value equaling the compare value and the actual generation of the interrupt?
Or is it possible that this has something to do with how the pipeline works?
I have set the prescaler of the timer to 0 and there is no internal clock division happening so the timer should have the same frequency as the system clock.
I do not know why you want to check it. It works as it is supposed to work.
NOP is the worst way of adding a delay. It is not the same as the 8-bit AVR's NOP: it can be flushed from the pipeline immediately and should be used only as padding.
"1 clock after CEN" means one timer clock, not one HCLK clock (they can be the same).
CCRx = 1 means two clocks.
If you run from flash, wait states are added.
Pipeline execution state is not easy to determine. The instruction actually interrupted does not have to be the one you think. The time of the interrupt is deterministic, but the exact instruction it lands on is not.
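If you want to see how much of the delay comes from the timer itself rather than from the core, you can latch the counter at the top of the handler. A minimal sketch, assuming an STM32F4-class part with CMSIS headers and that TIM9 is served by TIM1_BRK_TIM9_IRQHandler (the header name and handler name are assumptions; check your device and startup file):
#include "stm32f4xx.h"                 /* assumed device header */

volatile uint32_t tim9_cnt_at_irq;

void TIM1_BRK_TIM9_IRQHandler(void)
{
    if (TIM9->SR & TIM_SR_CC1IF) {
        TIM9->SR = ~TIM_SR_CC1IF;      /* write 0 to clear only the CC1 flag */
        tim9_cnt_at_irq = TIM9->CNT;   /* timer ticks elapsed since CEN was set */
    }
}
If tim9_cnt_at_irq is only a few counts, the extra instructions you see executed are coming mostly from interrupt entry latency (about 12 cycles on a Cortex-M4) and flash wait states, not from the timer.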

Turn PS/2 keyboard Caps Lock LED on/off in custom kernel

I'm developing a kernel in assembly language and C, and during the boot-up process I want the kernel to turn the Caps Lock LED on and then off. I have found the reference for the LED handling here. How could I call that function from C or assembly (NASM style)?
SetKeyBoardLeds:
push eax
mov al,0xed
out 60h,al
KeyBoardWait:
in al,64h
test al,10b
jne KeyBoardWait
mov al,byte [keyBoardStatus]
and al,111b
out 60h,al
pop eax
ret
For example,
mov byte [keyBoardStatus],0xb0 ; Initialise keyboard LED state (only the low 3 bits matter)
call SetKeyBoardLeds
;--------------------------------- This will toggle the Caps Lock LED
xor byte [keyBoardStatus],4 ; bit 2 = Caps Lock
call SetKeyBoardLeds
;--------------------------------- This will turn the Caps Lock LED off
and byte [keyBoardStatus],0xfb ; clear bit 2
call SetKeyBoardLeds
where keyBoardStatus is a byte (it will hold the current keyboard LED state).
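If you would rather drive this from C, here is a rough equivalent using GCC-style inline assembly for the port I/O. The outb/inb helpers and the function name are illustrative, not part of the code above:
#include <stdint.h>

/* Minimal port I/O helpers for a GCC-built freestanding kernel. */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Bit 0 = Scroll Lock, bit 1 = Num Lock, bit 2 = Caps Lock. */
void set_keyboard_leds(uint8_t leds)
{
    while (inb(0x64) & 0x02) { }  /* wait until the controller input buffer is empty */
    outb(0x60, 0xED);             /* "set LEDs" command to the keyboard */
    while (inb(0x64) & 0x02) { }
    outb(0x60, leds & 0x07);      /* LED state byte */
}
Calling set_keyboard_leds(0x04) turns the Caps Lock LED on, and set_keyboard_leds(0x00) turns it off again.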
You can read my code for reference:
irq_ex2.c: an interrupt handler example. This code binds itself to IRQ 1, which is the keyboard controller's IRQ on Intel architectures. When it receives a keyboard interrupt, it reads the LED status and keycode and hands them to a work queue.
Pressing:
ESC → Caps Lock LED OFF
F1, F2 → Caps Lock LED ON

ARM. Access user R13 and R14 from Supervisor mode

How do I access the user R13 and R14 which are saved when supervisor mode is entered? I am using an ARM7TDMI.
I.e. I do not want to access the Supervisor R14, which now contains the return address to user mode; instead I want the value of user mode's link register. This is part of a debugger I am writing.
Are there special aliases for these registers?
Thanks
I'll describe the answer for your specific question but the same approach applies to other modes as well.
You'll need to change the processor mode by changing the mode bits in the CPSR to system mode. This will give you access to user mode's SP/LR (R13 & R14). Remember that system mode is privileged, but its R13 and R14 are the same as user mode's R13 and R14.
Once you're in system mode, read R13 and R14 and put them where you want. Then just switch the mode bits back to your previous mode (I believe that was supervisor mode in your example) and you're good to go.
Note that we did not switch from supervisor to user mode. If you switched from supervisor to user, you couldn't get back to supervisor mode. (Otherwise there would be no protection from user code escalating privilege). That's why we used system mode -- system mode is privileged, but the registers are the same as user mode.
You can switch between any of the privileged modes at will by manipulating the mode bits in the CPSR (they're the lower 5 bits, M[4:0]). I'm on the road and don't have the rest of the details at my fingertips, otherwise I would have provided you with the assembly code for what I've described above. Actually, if you want to put some hair on your chest, take what I've given you above, implement it, test it, and post it back here. :-D
(One thing I should add for the "general case" (yours is very specific) -- you can examine the SPSR to see "where you came from" -- and use that to determine which mode you need to switch to.)
By the way, I just did this recently for one of my customers.... small world, I guess.
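For reference, here is a rough sketch of that mode-switch dance as a C function with inline assembly, assuming an ARM7TDMI running in ARM state, a GCC-style toolchain, and a caller that is already in a privileged mode (the function name and register choices are mine):
void get_user_sp_lr(unsigned long *usp, unsigned long *ulr)
{
    unsigned long sp_val, lr_val;

    __asm__ volatile (
        "mrs   r3, cpsr        \n\t"  /* remember the current (privileged) mode */
        "bic   r2, r3, #0x1F   \n\t"
        "orr   r2, r2, #0x1F   \n\t"  /* M[4:0] = 0b11111 -> System mode        */
        "msr   cpsr_c, r2      \n\t"
        "mov   r0, sp          \n\t"  /* System/User SP                         */
        "mov   r1, lr          \n\t"  /* System/User LR                         */
        "msr   cpsr_c, r3      \n\t"  /* switch back to the original mode       */
        "mov   %0, r0          \n\t"
        "mov   %1, r1          \n\t"
        : "=r"(sp_val), "=r"(lr_val)
        :
        : "r0", "r1", "r2", "r3");

    *usp = sp_val;
    *ulr = lr_val;
}
The values are parked in r0 and r1 because those registers are not banked, so they survive the switch back; sp and lr themselves are re-banked the moment the mode bits change.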
I've discovered a better way:
When doing an STM, if r15 isn't in the register list, the ^ suffix stores the user-mode registers. However, writeback (auto-increment) doesn't work with this form of the instruction, and a nop is required afterwards if you want to access the banked registers.
Something like
stmfd r13, {r13-r14}^ ;store user-mode r13 and r14 (no writeback with ^)
nop
sub r13, r13, #8 ;update the stack pointer manually (two words stored)

How to get keyboard input with x86 bare metal assembly?

I'm in the process of trying to hack together the first bits of a kernel. I currently have the entire kernel compiled down as C code, and I've managed to get it displaying text in the console window and all of that fine goodness. Now, I want to start accepting keyboard input so I can actually make some use of the thing and get going on process management.
I'm using DJGPP to compile, and loading with GRUB. I'm also using a small bit of assembly which basically jumps directly into my compiled C code and I'm happy from there.
All the research I've done seems to point to BIOS interrupt 0x16 to read the next character from the keyboard buffer. From what I can tell, this is supposed to return the scan code in ah and the ASCII value in al, or something to that effect. I'm attempting to code this using the following routine in inline assembly:
char getc(void)
{
int output = 0;
//CRAZY VOODOO CODE
asm("xor %%ah, %%ah\n\t"
"int $0x16"
: "=a" (output)
: "a" (output)
:
);
return (char)output;
}
When this code is called, the core immediately crashes. (I'm running it on VirtualBox, I didn't feel the need to try something this basic on real hardware.)
Now I have actually a couple of questions. No one has been able to tell me if (since my code was launched from GRUB) I'm running in real mode or protected mode at the moment. I haven't made the jump one way or another, I was planning on running in real mode until I got a process handler set up.
So, assuming that I'm running in real mode, what am I doing wrong, and how do I fix it? I just need a basic getc routine, preferably non-blocking, but I'll be darned if google is helping on this one at all. Once I can do that, I can do the rest from there.
I guess what I'm asking here is, am I anywhere near the right track? How does one generally go about getting keyboard input on this level?
EDIT: OOhh... so I'm running in protected mode. This certainly explains the crash trying to access real mode functions then.
So then I guess I'm looking for how to access the keyboard IO from protected mode. I might be able to find that on my own, but if anyone happens to know feel free. Thanks again.
If you are compiling with gcc, unless you are using the crazy ".code16gcc" trick the Linux kernel uses (which I very much doubt), you cannot be in real mode. If you are using the GRUB multiboot specification, GRUB itself switches to protected mode for you. So, as others pointed out, you will have to talk to the 8042-compatible keyboard/mouse controller directly. Unless it's a USB keyboard/mouse and 8042 emulation is disabled, in which case you would need a USB stack (though you can use the "boot" protocol for the keyboard/mouse, which is simpler).
Nobody said writing an OS kernel was simple.
The code you've got there is trying to access a real mode BIOS service. If you're running in protected mode, which is likely considering that you're writing a kernel, then the interrupt won't work. You will need to do one of the following:
Thunk the CPU into real mode, making sure the interrupt vector table is correct, and use the real mode code you have or
Write your own protected mode keyboard handler (i.e. use the in/out instructions).
The first solution is going to involve a runtime performance overhead whilst the second will require some information about keyboard I/O.
I've a piece of GeekOS that seems to do
In_Byte(KB_CMD);
and then
In_Byte(KB_DATA);
to fetch a scancode. I put it up: keyboard.c and keyboard.h. KB_CMD and KB_DATA being 0x64 and 0x60 respectively. I could perhaps also point out that this is done in an interrupt handler for intr:1.
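Translated to freestanding C, that boils down to something like the sketch below (the inb helper and the function name are mine, not GeekOS's): check the status port, then read the data port for the scan code.
#include <stdint.h>

/* Minimal port-read helper for a GCC-built kernel. */
static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Returns the scan code, or -1 if nothing is waiting in the output buffer. */
int read_scancode(void)
{
    uint8_t status = inb(0x64);   /* KB_CMD: status register */
    if (!(status & 0x01))         /* bit 0 set = output buffer full */
        return -1;
    return inb(0x60);             /* KB_DATA: the scan code */
}
In the interrupt-driven case the status check is mostly a formality, since the IRQ only fires when a byte is available.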
You're doing the right thing, but I seem to recall that djgpp only generates protected mode output, which you can't call interrupts from. Can you drop to real mode like others have suggested, or would you prefer to address the hardware directly?
For the purposes of explanation, let's suppose you were writing everything in assembly language yourself, boot loader and kernel (*cough* I've done this).
In real mode, you can make use of the interrupt routines that come from the BIOS. You can also replace the interrupt vectors with your own. However all code is 16-bit code, which is not binary compatible with 32-bit code.
When you jump through a few burning hoops to get to protected mode (including reprogramming the interrupt controller, to get around the fact that IBM used Intel-reserved interrupts in the PC), you have the opportunity to set up 16- and 32-bit code segments. This can be used to run 16-bit code. So you can use this to access the getchar interrupt!
... not quite. For this interrupt to work, you actually need data in a keyboard buffer that was put there by a different ISR - the one that is triggered by the keyboard when a key is pressed. There are various issues which pretty much prevent you from using BIOS ISRs as actual hardware ISRs in protected mode. So, the BIOS keyboard routines are useless.
BIOS video calls, on the other hand, are fine, because there's no hardware-triggered component. You do have to prepare a 16-bit code segment but if that's under control then you can switch video modes and that sort of thing by using BIOS interrupts.
Back to the keyboard: what you need (again assuming that YOU'RE writing all the code) is to write a keyboard driver. Unless you're a masochist (I'm one), don't go there.
A suggestion: try writing a multitasking kernel in Real mode. (That's 16-bit mode.) You can use all the BIOS interrupts! You don't get memory protection but you can still get pre-emptive multitasking by hooking the timer interrupt.
Just an idea: looking at the GRUB for DOS source (asm.s), the console_checkkey function uses BIOS INT 16H function 01 (check for keystroke), not function 00 as you are trying to do. Maybe you'd want to check whether a key is waiting to be input first.
The console_checkkey code switches the CPU to real mode in order to use the BIOS, as skizz suggested.
You can also try using GRUB functions directly (if still mapped in real mode).
A note on reading the assembly source: it is AT&T syntax, so
movb $0x1, %ah
means "move the constant byte 0x1 into register %ah" (the destination comes last).
The console_checkkey from GRUB asm.s:
/*
* int console_checkkey (void)
* if there is a character pending, return it; otherwise return -1
* BIOS call "INT 16H Function 01H" to check whether a character is pending
* Call with %ah = 0x1
* Return:
* If key waiting to be input:
* %ah = keyboard scan code
* %al = ASCII character
* Zero flag = clear
* else
* Zero flag = set
*/
ENTRY(console_checkkey)
push %ebp
xorl %edx, %edx
call EXT_C(prot_to_real) /* enter real mode */
.code16
sti /* checkkey needs interrupt on */
movb $0x1, %ah
int $0x16
DATA32 jz notpending
movw %ax, %dx
//call translate_keycode
call remap_ascii_char
DATA32 jmp pending
notpending:
movl $0xFFFFFFFF, %edx
pending:
DATA32 call EXT_C(real_to_prot)
.code32
mov %edx, %eax
pop %ebp
ret
Example for polling the keyboard controller:
Start:
cli
mov al,2 ; mask (disable) IRQ 1 -- note this overwrites the whole PIC mask
out 21h,al
sti
;--------------------------------------
; Main-Routine
AGAIN:
in al,64h ; get the status
test al,1 ; check output buffer
jz short NOKEY
test al,20h ; check if it is a PS2Mouse-byte
jnz short NOKEY
in al,60h ; get the key
; insert your code here (maybe for converting into ASCII...)
NOKEY:
jmp AGAIN
;--------------------------------------
; At the end
cli
xor al,al ; unmask all IRQs again (re-enables IRQ 1)
out 21h,al
sti
