Does the single-step interrupt (interrupt 01H) clear the TF flag after every instruction? - C

I want to use the single-step interrupt, and I understand that for this interrupt to fire after every instruction, the TF flag should be 1 (TF=1).
In my code the single-step interrupt does work, but after some instructions (maybe after every output) it stops, and I have to set TF=1 again for the code to continue and keep displaying output.
Does that mean TF becomes 0 again after every instruction, or only after certain instructions?
This is the code I used to set TF=1:
asm {
    PUSHF                      ; push a copy of FLAGS
    MOV BP,SP
    OR WORD PTR [BP+0],0100H   ; set bit 8 (TF) in the pushed copy
    POPF                       ; reload FLAGS with TF=1
}

Related

Is there a delay between enabling a peripheral timer and the timer actually starting its count? What is this delay and what causes it?

I am trying to configure a timer (TIM9) and its interrupt handler to single-step through a program. My approach is to first interrupt the first instruction, and then, in the interrupt handler, configure the timer so that it triggers another interrupt right after returning from the ISR.
Right now I'm still trying to get the first step right.
This is the piece of sample code I'm working with right now. I have configured the timer to generate an interrupt when its counter is equal to some compare value.
I set the compare value equal to 1 so that the code gets interrupted after 1 cycle of the timer. According to the reference manual the counter starts counting 1 cycle after setting the enable bit so I added a NOP.
/*
 * TIM9 is configured to generate an interrupt when the counter is equal to the compare value
 */
TIM9->CCR1 = 1; // set compare 1 value to 1
TIM9->CR1 |= 1; // enable TIM9
__ISB(); // flush pipeline
__asm volatile ("NOP"); // from reference manual: counter starts counting 1 cycle after setting CEN bit
__asm volatile("MOV r8, #1 \n\t"); // expect to interrupt this instruction
__asm volatile("MOV r8, #2 \n\t");
__asm volatile("MOV r8, #3 \n\t");
__asm volatile("MOV r8, #4 \n\t");
To verify that the correct instruction got interrupted, I used GDB to check the contents of register r8 after entering the interrupt handler, and I found that it is actually equal to 6. This implies that there is a much longer delay than 1 cycle, or that I'm simply missing something.
I can simply add 5 NOP instructions before the first MOV so that the interrupt occurs at the right time, but I don't understand why this is necessary. As far as I know, the code as I have it right now should generate an interrupt during the first MOV instruction.
Why am I getting this behavior? Why does there seem to be such a long delay between enabling the timer and the interrupt being generated?
Is this maybe because there is a delay between the counter value equaling the compare value and the actual generation of the interrupt?
Or is it possible that this has something to do with how the pipeline works?
I have set the prescaler of the timer to 0 and there is no internal clock division happening so the timer should have the same frequency as the system clock.
I do not know why you want to check this; it works as it is supposed to work.
- NOP is the worst way of adding a delay. It is different from the 8-bit AVR's NOP: it is instantly flushed from the pipeline and should be used only as padding.
- "1 clock after CEN" means one timer clock, not one HCLK clock (though they can be the same).
- CCRx = 1 means two clocks.
- If you run from flash, wait states are added.
- Pipeline execution status is not easy to determine. The instruction actually interrupted does not have to be the one you think. The time of the interrupt is determined, but not the exact instruction.

If the main function is called inside the reset handler, how are other interrupts handled?

This is sample code from a startup file for the Tiva C. As you can see, the main function is called inside the reset handler, and as I understand it, reset is the highest priority, so my question is: how can any other interrupt be handled if we are still inside the reset handler?
```
; Reset Handler
Reset_Handler   PROC
                EXPORT  Reset_Handler  [WEAK]
                IMPORT  SystemInit
                IMPORT  __main
                LDR     R0, =SystemInit
                BLX     R0
                LDR     R0, =__main
                BX      R0
                ENDP
```
The reset is "special". When the reset handler is invoked by a processor reset, instructions are executed in Thread mode. Necessarily so, since the reset vector is invoked on a power-on reset (POR) - if the handler had to "return", where would it return to?
Also, on reset, registers are in any case set to their defined reset state, and the stack pointer is loaded with the value stored at the start of the vector table (in the case of an ARM Cortex-M at least), so there would be nowhere from which to fetch a return address - in fact the reset signal does not cause a return address to be stacked at all.
The whole point of a reset is to restart the processor in a known state.
Returning to the point at which the reset occurred makes little sense, and would not be likely to work given that the reset state of the processor is unlikely to be a suitable run-state for the "interrupted" code.
From the ARM Cortex-M3 User Guide (my emphasis) other ARM architectures may differ in the details, but not the general point.
2.3.2. Exception types The exception types are:
Reset
Reset is invoked on power up or a warm reset. The exception model treats reset as a special form of exception. When reset is asserted,
the operation of the processor stops, potentially at any point in an
instruction. When reset is deasserted, execution restarts from the
address provided by the reset entry in the vector table. Execution
restarts as privileged execution in Thread mode.
[...]
I've found the pseudocode in the ARM architecture reference manuals to be quite helpful for answering this type of question. By "tiva c", I assume you are talking about the TM4C line of microcontrollers which are Cortex-M4 based MCUs. This means we will want to look at the ARMv7-M architecture reference manual.
Section "B1.5.5 Reset Behavior" has the pseudocode we are interested in. Here's a snippet (with the parts not relevant to the question elided out):
Asserting reset causes the processor to abandon the current execution
state without saving it. On the deassertion of reset, all registers
that have a defined reset value contain that value, and the processor
performs the actions described by the TakeReset() pseudocode.
// TakeReset()
// ============
TakeReset()
CurrentMode = Mode_Thread;
PRIMASK<0> = '0'; /* priority mask cleared at reset */
FAULTMASK<0> = '0'; /* fault mask cleared at reset */
BASEPRI<7:0> = Zeros(8); /* base priority disabled at reset */
// [...]
From the description we can note:
If the system is running and a reset is issued, the processor will always "abandon the current execution". So it is the "highest priority" thing that can happen if the MCU is running.
However, after the MCU restarts and the "TakeReset" logic starts to run, the "CurrentMode" the processor enters is actually Thread mode. ARMv7-M has two operation modes known as Thread Mode and Handler Mode. All interrupts/exceptions run in Handler Mode and normal code runs in Thread Mode. This tells us the reset path does not actually start like an interrupt/exception would. It's just running like normal code would.

Enabling interrupts on 8052 causes lock-up

The problem I have currently is that ever since I enabled interrupts, the program is stuck in an endless loop. If I disable the interrupts, the program executes normally.
I made sure that I protected the registers (variables) in the functions by pushing them onto the stack and popping them off on exit of the function, and that did not help.
I even went as far as replacing the ISR bodies with just reti (to exit from the interrupt), and I still face the same problem.
The only way for me to solve the problem right now is to disable the interrupts by not executing mov TCON,#50h. This makes me think that the interrupt vector addresses published on the internet are not correct, and that in reality screwed-up code is being executed instead.
The microcontroller I'm using is the AT89S52.
Am I off with my vector addressing here? I need some advice because the code I currently use below is currently not working when timer interrupts are enabled.
org 000h
;entry point when program first runs
ljmp sysinit ;seems to execute
sjmp $
org 00Bh
;Timer 0 interrupt executes every 65536 machine cycles even if timer 1 interrupt executes
ljmp crit
sjmp $
org 01Bh
;Timer 1 interrupt executes every 256 machine cycles
ljmp processkey
sjmp $
org 030h
start:
;rest of program goes here.
sysinit:
mov TL0,#0h
mov TH0,#0h
mov TH1,#0h
mov TL1,#0h
mov PSW,#0h
mov R0,#7Fh
;make all ram addresses 0h to 7Fh contain value 0
sysreset:
CLR A
mov @R0,A ;indirect: clear the RAM byte R0 points to
djnz R0,sysreset
mov T2CON,#0h ;8052 register
mov AUXR,#0h ;8052 register
mov AUXR1,#0h ;8052 register
mov PCON,#80h ;Make baud divisor=192
mov TMOD,#21h ;Timer1=0-255 (mode 2), Timer0=0-65535 (mode 1)
mov IP,#02h ;priority to timer 0
mov TCON,#50h ;timers on
mov SP,#050h ;stack=50h
mov IE,#8Ah ;ints enabled
ljmp start
Are you sure that the interrupt service routines actually clear the interrupts? If not, the core will continuously try to serve them hence the apparent lockup.
Check out the datasheet pg. 17:
Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and that bit will have to be cleared in software.
So your interrupt will keep firing if the flags aren't reset.
Why are you writing your code in assembly anyway? Surely, for this functionality you could use plain C instead.

Turn PS/2 keyboard Caps Lock LED on/off in custom kernel

I'm developing a kernel in assembly language and C, and during the boot process I want the kernel to turn the Caps Lock LED on and then off. I have found the reference for the LED here. How could I call that function from C or assembly (NASM style)?
SetKeyBoardLeds:
    push eax
    mov al,0xed             ; 0xED = "set LEDs" keyboard command
    out 60h,al
KeyBoardWait:
    in al,64h               ; read controller status port
    test al,10b             ; bit 1 = input buffer full
    jne KeyBoardWait        ; wait until the controller is ready
    mov al,byte [keyBoardStatus]
    and al,111b             ; keep only Scroll(0)/Num(1)/Caps(2) bits
    out 60h,al              ; send the LED state byte
    pop eax
    ret
For example,
mov [keyBoardStatus],0xb0 ; Initialise keyboard LED state
call SetKeyBoardLeds
;--------------------------------- This will toggle CapsLock LED
xor [keyBoardStatus],4
call SetKeyBoardLeds
;---------------------------------
and [keyBoardStatus],0xfb
call SetKeyBoardLeds
where keyBoardStatus is a byte (it will hold the current keyboard LED state).
You can read my code for reference:
irq_ex2.c: an interrupt handler example. This code binds itself to IRQ 1, which is the IRQ of the keyboard controller on Intel architectures. Then, when it receives a keyboard interrupt, it reads the LED status and keycode information in a work queue.
Pressing:
ESC → Caps Lock LED OFF
F1, F2 → Caps Lock LED ON

How to get keyboard input with x86 bare metal assembly?

I'm in the process of trying to hack together the first bits of a kernel. I currently have the entire kernel compiled down as C code, and I've managed to get it displaying text in the console window and all of that fine goodness. Now, I want to start accepting keyboard input so I can actually make some use of the thing and get going on process management.
I'm using DJGPP to compile, and loading with GRUB. I'm also using a small bit of assembly which basically jumps directly into my compiled C code and I'm happy from there.
All the research I've done seems to point to the BIOS service at INT 0x16 to read the next character from the keyboard buffer. From what I can tell, this is supposed to store the ASCII value in ah and the keycode in al, or something to that effect. I'm attempting to code this using the following routine in inline assembly:
char getc(void)
{
int output = 0;
//CRAZY VOODOO CODE
asm("xor %%ah, %%ah\n\t"
"int $0x16"
: "=a" (output)
: "a" (output)
:
);
return (char)output;
}
When this code is called, the core immediately crashes. (I'm running it on VirtualBox, I didn't feel the need to try something this basic on real hardware.)
Now I actually have a couple of questions. No one has been able to tell me whether (since my code was launched from GRUB) I'm running in real mode or protected mode at the moment. I haven't made the jump one way or the other; I was planning on running in real mode until I got a process handler set up.
So, assuming that I'm running in real mode, what am I doing wrong, and how do I fix it? I just need a basic getc routine, preferably non-blocking, but I'll be darned if google is helping on this one at all. Once I can do that, I can do the rest from there.
I guess what I'm asking here is, am I anywhere near the right track? How does one generally go about getting keyboard input on this level?
EDIT: OOhh... so I'm running in protected mode. This certainly explains the crash trying to access real mode functions then.
So then I guess I'm looking for how to access the keyboard IO from protected mode. I might be able to find that on my own, but if anyone happens to know feel free. Thanks again.
If you are compiling with gcc, unless you are using the crazy ".code16gcc" trick the linux kernel uses (which I very much doubt), you cannot be in real mode. If you are using the GRUB multiboot specification, GRUB itself is switching to protected mode for you. So, as others pointed out, you will have to talk to the 8042-compatible keyboard/mouse controller directly. Unless it's a USB keyboard/mouse and 8042 emulation is disabled, where you would need a USB stack (but you can use the "boot" protocol for the keyboard/mouse, which is simpler).
Nobody said writing an OS kernel was simple.
The code you've got there is trying to access a real mode BIOS service. If you're running in protected mode, which is likely considering that you're writing a kernel, then the interrupt won't work. You will need to do one of the following:
Thunk the CPU into real mode, making sure the interrupt vector table is correct, and use the real mode code you have or
Write your own protected mode keyboard handler (i.e. use the in/out instructions).
The first solution is going to involve a runtime performance overhead whilst the second will require some information about keyboard IO.
I've a piece of GeekOS that seems to do
In_Byte(KB_CMD);
and then
In_Byte(KB_DATA);
to fetch a scancode. I put it up: keyboard.c and keyboard.h. KB_CMD and KB_DATA being 0x64 and 0x60 respectively. I could perhaps also point out that this is done in an interrupt handler for intr:1.
You're doing the right thing, but I seem to recall that djgpp only generates protected mode output, which you can't call interrupts from. Can you drop to real mode like others have suggested, or would you prefer to address the hardware directly?
For the purposes of explanation, let's suppose you were writing everything in assembly language yourself, boot loader and kernel (*cough* I've done this).
In real mode, you can make use of the interrupt routines that come from the BIOS. You can also replace the interrupt vectors with your own. However all code is 16-bit code, which is not binary compatible with 32-bit code.
When you jump through a few burning hoops to get to protected mode (including reprogramming the interrupt controller, to get around the fact that IBM used Intel-reserved interrupts in the PC), you have the opportunity to set up 16- and 32-bit code segments. This can be used to run 16-bit code. So you can use this to access the getchar interrupt!
... not quite. For this interrupt to work, you actually need data in a keyboard buffer that was put there by a different ISR - the one that is triggered by the keyboard when a key is pressed. There are various issues which pretty much prevent you using BIOS ISRs as actual hardware ISRs in protected mode. So, the BIOS keyboard routines are useless.
BIOS video calls, on the other hand, are fine, because there's no hardware-triggered component. You do have to prepare a 16-bit code segment but if that's under control then you can switch video modes and that sort of thing by using BIOS interrupts.
Back to the keyboard: what you need (again assuming that YOU'RE writing all the code) is to write a keyboard driver. Unless you're a masochist (I'm one) then don't go there.
A suggestion: try writing a multitasking kernel in Real mode. (That's 16-bit mode.) You can use all the BIOS interrupts! You don't get memory protection but you can still get pre-emptive multitasking by hooking the timer interrupt.
Just an idea: looking at GRUB for DOS source (asm.s), the console_checkkey function is using BIOS INT 16H Function 01, and not function 00, as you are trying to do. Maybe you'd want to check if a key is waiting to be input.
The console_checkkey code is setting the CPU to real mode in order to use the BIOS, as skizz suggested above.
You can also try using GRUB functions directly (if still mapped in real mode).
A note on reading assembly source: in this version
movb $0x1, %ah
means move constant byte (0x1) to register %ah
The console_checkkey from GRUB asm.s:
/*
* int console_checkkey (void)
* if there is a character pending, return it; otherwise return -1
* BIOS call "INT 16H Function 01H" to check whether a character is pending
* Call with %ah = 0x1
* Return:
* If key waiting to be input:
* %ah = keyboard scan code
* %al = ASCII character
* Zero flag = clear
* else
* Zero flag = set
*/
ENTRY(console_checkkey)
push %ebp
xorl %edx, %edx
call EXT_C(prot_to_real) /* enter real mode */
.code16
sti /* checkkey needs interrupt on */
movb $0x1, %ah
int $0x16
DATA32 jz notpending
movw %ax, %dx
//call translate_keycode
call remap_ascii_char
DATA32 jmp pending
notpending:
movl $0xFFFFFFFF, %edx
pending:
DATA32 call EXT_C(real_to_prot)
.code32
mov %edx, %eax
pop %ebp
ret
Example for polling the keyboard controller:
Start:
cli
mov al,2 ; disable IRQ 1 (set mask bit 1 of the PIC)
out 21h,al
sti
;--------------------------------------
; Main-Routine
AGAIN:
in al,64h ; get the status
test al,1 ; check output buffer
jz short NOKEY
test al,20h ; check if it is a PS2Mouse-byte
jnz short NOKEY
in al,60h ; get the key
; insert your code here (maybe for converting into ASCII...)
NOKEY:
jmp AGAIN
;--------------------------------------
; At the end
cli
xor al,al ; clear the mask register (re-enables IRQ 1)
out 21h,al
sti