I got an Arduino to play with... but the Arduino language and IDE feel like a kid's toy to me.
So I'd like to use regular C to program it.
As I understand it, the bootloader sits somewhere in memory, has an rjmp in the reset vector, and when done, hands control to the installed "sketch".
The question is: What can I do in the "sketch" safely?
This is my current idea of what it looks like in the AVR with the Arduino bootloader, but it may be wrong:
.CSEG
.ORG 0000
rjmp bootloader
; ??? Interrupt vectors here
; ??? Interrupt vectors here
; Maybe it all goes to bootloader and it then forwards
; it to handlers in the "sketch"?
bootloader:
; ??? some magic tricks here
rjmp sketch
sketch:
; "Sketch" code
; I want my own interrupt vectors, too!
end:
rjmp end;
I mean, obviously the bootloader uses some other parts of the AVR for its function, so it'll have its own interrupt handlers etc. If I use those in my program, will they work as expected?
So, what are the gotchas of programming Arduino with C (or even assembler)?
AVR devices with bootloader support have two sets of interrupt vectors, one for the bootloader (at the beginning of the bootloader section) and one for the regular firmware (at address 0). The BOOTRST fuse determines which reset vector is used by default, and IVSEL in MCUCR (or the equivalent in the device) determines which set is currently active.
Note that the lock bits can restrict which sets can be used, and as always see the datasheet for full details.
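For example, on an ATmega88/168/328-style part the active vector table can be switched at run time with the timed IVCE/IVSEL sequence from the datasheet. A minimal sketch (check your specific device's datasheet for the exact register and bit names):
#include <avr/io.h>
#include <avr/interrupt.h>

/* Sketch: select the application vector table at address 0x0000.
   Assumes an ATmega88/168/328-style MCUCR; the IVSEL write must
   follow the IVCE write within four cycles, so keep interrupts off. */
void vectors_to_application(void)
{
    uint8_t m = MCUCR;
    cli();
    MCUCR = m | (1 << IVCE);    /* enable change of IVSEL */
    MCUCR = m & ~(1 << IVSEL);  /* IVSEL = 0: vectors at start of flash */
    sei();
}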
It will work. Arduino is just a library. If you look at the source code, the setup() and loop() functions are called from int main(): setup() runs once, just before an infinite loop that keeps calling loop().
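Simplified, the core's main.cpp looks roughly like this (details vary between core versions; the real file also polls serial events inside the loop):
// Rough sketch of the Arduino core's main.cpp, simplified.
int main(void)
{
    init();       // Arduino's own hardware init (timers, ADC, ...)
    setup();      // your setup(), run once
    for (;;)
        loop();   // your loop(), run forever
    return 0;
}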
I am trying to fix a bug found in a mature program for the Fujitsu MB90F543. The program has worked for nearly 10 years, but it was discovered that under some special circumstances it fails to do two things at its very beginning. One of them is crucial.
After low- and high-level initialization (ports, pins, peripherals, IRQ handlers), configuration data is read over SPI from EEPROM and status LEDs are turned on for a moment (to turn them on, data is sent over SPI to a LED driver).
When those special circumstances occur, the first (and only the first) function invoking just a few EEPROM reads fails, and additionally a few of the LEDs that should turn on don't.
The program is written in C and compiled using Softune v30L32.
Surprisingly, it is sufficient to add a single __asm(" NOP ") in the low-level hardware init to make the program work as expected under the mentioned circumstances. It is also sufficient to turn off 'Control optimization of pointer aliasing' in the optimization settings. Adding just a few lines of code in various places helps too.
I have compared (diffed) the ASM listings of the compiled program for versions with and without __asm(" NOP ") and with both aforementioned optimizer settings, and they all look just fine.
The only warning Softune compiler has been printing for years during compilation is as follows:
*** W1372L: The section is placed outside the RAM area or the I/O area (IOXTND)
I do realize it's a rather general question, but maybe someone with a bigger picture will be able to point out a possible cause.
Have you got an idea what may cause such weird behaviour? How can I locate and fix the bug?
During initialization a few long (about 20 ms) delay loops are used. They don't help, although they were increased from about 2 ms; yet a single NOP in any line of the hardware initialization function, and even before or after the function, helps.
Both wait loops work. I have checked them using an oscilloscope (I added an LED turn-on before and turn-off after).
I have checked the timing hypothesis by slowing the SPI clock from 1 MHz down to 500 kHz. It does not change anything. Slowing down to 250 kHz causes watchdog resets, as some parts of the code then execute too long (>25 ms).
One more thing: I have observed that adding local variables in any source file sometimes makes the problem disappear or reappear. The same goes for initializing uninitialized local variables. Adding a few extra lines of code in any of the files helps or reveals the problem.
void main(void)
{
watchdog_init();
// waiting for power supply to stabilize
wait; // about 45ms
hardware_init();
clear_watchdog();
application_init();
clear_watchdog();
wait; // about 20ms
test_LED();
{...}
}
void hardware_init (void)
{
__asm("NOP"); // how it comes it helps? - it may be in any line of the function
io_init(); // ports initialization
clk_init();
timer_init();
adc_init();
spi_init();
LED_init();
spi_start();
key_driver_init();
can_init();
irq_init(); // set IRQ priorities and global IRQ enable
}
Could be one of many things but two spring to mind.
Timing.
Maybe the wait is not long enough for power to stabilize and not everything is synced to the clock. The NOP gets everything back in sync.
Alignment.
Perhaps the NOP gets your instructions aligned on a 32- or 64-bit boundary expected by the hardware. (We used to do this a lot in mainframe assemblers, as I/O operations often expected things to be on doubleword boundaries.)
The problem was solved. It was caused by a trivial bug.
The EEPROM's nHOLD and nCS signals were not initialized immediately after the MCU's reset, but only just before the first use of the EEPROM. As a result they were 0s, i.e. active.
This means the EEPROM was selected but held. Meanwhile another transfer using SPI started. After 6 of its 8 CLK pulses, the EEPROM's nHOLD pin was initialized and driven high. The EEPROM was no longer on hold, so it clocked in the last two bits of data meant for another peripheral. Every subsequent operation found the EEPROM's CLK and MOSI out of sync.
When I added a NOP (or anything else), the nHOLD 0->1 edge shifted to after the last CLK pulse, so CLK and MOSI stayed in sync.
All I had to do was initialize all the EEPROM's SPI lines, in particular nHOLD and nCS, right after the MCU reset.
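In outline, the fix looks like this (a sketch only: the register and pin names below are invented placeholders, since the real ones are MCU and board specific):
/* Hypothetical port registers; map these to the real MB90F543 I/O
   registers in practice. */
extern volatile unsigned char PORT_DATA, PORT_DIR;

#define EEPROM_NCS   (1u << 2)   /* placeholder pin masks */
#define EEPROM_NHOLD (1u << 3)

/* Drive nCS and nHOLD inactive (high) as the very first thing after
   reset, before any other SPI traffic can start. */
void eeprom_lines_init(void)
{
    PORT_DATA |= EEPROM_NCS | EEPROM_NHOLD;  /* both deasserted */
    PORT_DIR  |= EEPROM_NCS | EEPROM_NHOLD;  /* configure as outputs */
}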
I am new to the MSP430 architecture and I am porting an RTOS written for the ARM Cortex-M3 to it. In the ARM Cortex architecture, there are PSP and MSP registers to hold stack pointers for the different execution modes.
As I understand the MSP430 architecture, there is just one stack pointer register (SP).
Here are my questions:
- Is there only one register bank for SP within interrupt/execution context?
- Can I use regular C functions for interrupt handling in MSP430 as in ARM Cortex?
- How does MSP430 handle (save/restore) registers during interrupt execution (specifically SP, SR and PC)?
There are no register banks on the MSP430; there is only one SP register, shared between interrupt and normal execution context.
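For an RTOS port, this means every task keeps its own stack and the context switch saves and restores SP itself. A conceptual sketch (the names are made up, and the actual save/restore has to be written in assembly):
/* Conceptual only: with a single SP, the scheduler swaps the stack
   pointer between per-task stacks. */
typedef struct {
    unsigned short *sp;    /* saved SP for this task */
} task_t;

task_t *current_task;

/* Outline of the switch, typically run from a timer ISR:
     push the general-purpose registers           (asm)
     current_task->sp = SP;                        save outgoing task
     current_task = pick_next_task();
     SP = current_task->sp;                        restore incoming task
     pop the registers, then reti                  (asm)               */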
Yes, you can use C functions for interrupt handling:
__interrupt void MyFuncISR(void)
or it can also look like
#pragma vector=TIMER0_A0_VECTOR
__interrupt void
ta0cc0_isr (void)
In this case the compiler will set the proper interrupt vector entry based on the vector define/name you provide.
3. The interrupt logic executes the following:
1. Any currently executing instruction is completed.
2. The PC, which points to the next instruction, is pushed onto the stack.
3. The SR is pushed onto the stack.
and so on; see the interrupt section of the device's datasheet for the full sequence.
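Putting it together, a complete handler in C can be as small as this (a sketch using the #pragma style shown above; the LED toggle is just a placeholder body):
#include <msp430.h>

/* Minimal sketch of a Timer_A0 capture/compare ISR. Hardware pushes
   PC and SR; the compiler saves whatever registers the body uses. */
#pragma vector = TIMER0_A0_VECTOR
__interrupt void ta0cc0_isr(void)
{
    P1OUT ^= BIT0;   /* placeholder work: toggle the LED on P1.0 */
}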
I'm programming an STM32F4 in C (gcc); it's an ARM Cortex-M4. I see all examples finish their main() function with an infinite loop, even when the rest of the program will be executed from interrupts. If I try to remove the loop from my program, the interrupts stop being fired too.
Why can't I just remove this loop and exit the main thread?
Here is the assembly (I guess it's Thumb, but I can't read that, even with the docs):
LoopFillZerobss:
ldr r3, = _ebss
cmp r2, r3
bcc FillZerobss
/* Call the clock system initialization function.*/
bl SystemInit
/* Call the application's entry point.*/
bl main
bx lr
.size Reset_Handler, .-Reset_Handler
Take a look at the setup code that runs before main in your project. It might be some slim assembly code or something more complicated, but in general it's pretty close to the bare minimum amount of processor setup required to initialize a stack and get the C runtime going.
If you were to return from main, what is your processor supposed to do? Reset? Hang? There's no one good answer, so you'll have to look at the runtime support code being linked with your program to see what its designers decided. In your case, it sounds like they didn't make any allowance for main to return, so the processor just crashes/takes an exception and your program stops working.
Edit: It seems like what you're actually looking for is a way to enter a low power state during the idle loop. That's certainly possible - since your processor is an ARM Cortex-M4, there's a simple instruction to do just that:
while (1)
{
asm("wfi");
}
If you're using CMSIS (and it looks like you are, given your use of SystemInit), the assembly is probably already done for you:
while(1)
{
__WFI();
}
More details at this link.
You are not running on an operating system. main() is just a function like any other; it returns to wherever it was called from. On a bare-metal system like this, though, there is no operating system to return to. So if your software is all interrupt driven and main() is simply for setup, then you need to keep the processor in a controlled state, either in an infinite loop or in a low power mode. You can do that at the end of main() (or whatever your setup function is called), or in the assembly that calls main:
bl main
b . ;# an infinite loop
or if you want wfi in there:
bl main
xyz:
wfi
b xyz
I guess you haven't defined an interrupt handler that your program needs.
Can someone please explain to me what happens inside an interrupt service routine (although it depends on the specific routine, a general explanation is enough)? This has always been a black box to me.
There is a good wikipedia page on interrupt handlers.
"An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the Interrupt Handler completes its task."
Basically when a piece of hardware (a hardware interrupt) or some OS task (software interrupt) needs to run it triggers an interrupt. If these interrupts aren't masked (ignored) the OS will stop what it's doing and call some special code to handle this new event.
One good example is reading from a hard drive. The drive is slow and you don't want your OS to wait for the data to come back; you want the OS to go and do other things. So you set up the system so that when the disk has the data requested, it raises an interrupt. In the interrupt service routine for the disk the CPU will take the data that is now ready and will return it to the requester.
ISRs often need to happen quickly as the hardware can have a limited buffer, which will be overwritten by new data if the older data is not pulled off quickly enough.
It's also important for your ISR to complete quickly: while the CPU is servicing one ISR, other interrupts are masked, which means that if the CPU can't get to them quickly enough, data can be lost.
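A common pattern that follows from this is to do the bare minimum in the handler and defer the slow work to the main loop. A generic C sketch (the ISR hookup and the UART_DATA_REG/process names are invented placeholders; both are target specific):
#include <stdint.h>

/* Placeholders: a real target maps these to its UART registers/logic. */
extern volatile uint8_t UART_DATA_REG;
void process(uint8_t b);

volatile uint8_t rx_byte;
volatile uint8_t rx_ready = 0;

void uart_rx_isr(void)        /* attach to the UART RX vector */
{
    rx_byte = UART_DATA_REG;  /* grab the data */
    rx_ready = 1;             /* just flag it and return fast */
}

int main(void)
{
    for (;;) {
        if (rx_ready) {
            rx_ready = 0;
            process(rx_byte); /* the slow work happens here */
        }
    }
}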
Minimal 16-bit example
The best way to understand is to make some minimal examples yourself.
First learn how to create a minimal bootloader OS and run it on QEMU and real hardware as I've explained here: https://stackoverflow.com/a/32483545/895245
Now you can run in 16-bit real mode:
/* Install two IVT entries: each is 4 bytes, offset then segment. */
movw $handler0, 0x00    /* INT 0: handler offset */
mov %cs, 0x02           /* INT 0: handler segment */
movw $handler1, 0x04    /* INT 1: handler offset */
mov %cs, 0x06           /* INT 1: handler segment */
int $0                  /* fire software interrupt 0 */
int $1                  /* fire software interrupt 1 */
hlt
handler0:
/* Do 0. */
iret
handler1:
/* Do 1. */
iret
This would do in order:
Do 0.
Do 1.
hlt: stop executing
Note how the processor looks for the first handler at address 0, and the second one at 4: that is a table of handlers called the IVT, and each entry has 4 bytes.
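In C terms, each entry is just a far pointer stored offset-first (illustrative only; real-mode code normally pokes the table directly, as above):
#include <stdint.h>

/* One real-mode IVT entry: 4 bytes, table based at physical address 0,
   so the entry for INT n sits at address n * 4. */
struct ivt_entry {
    uint16_t offset;    /* handler offset within the segment */
    uint16_t segment;   /* handler code segment */
};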
Minimal example that does some IO to make handlers visible.
Protected mode
Modern operating systems run in the so-called protected mode.
The handling has more options in this mode, so it is more complex, but the spirit is the same.
Minimal example
See also
Related question: What does "int 0x80" mean in assembly code?
While the 8086 is executing a program, an interrupt breaks the normal sequence of execution and diverts it to another routine called the interrupt service routine (ISR). After the ISR executes, control returns back to the main program.
An interrupt causes a temporary halt in the execution of the program. The microprocessor responds by running the interrupt service routine, which is a short subroutine that instructs the microprocessor on how to handle the interrupt.
I'm in the process of trying to hack together the first bits of a kernel. I currently have the entire kernel compiled down as C code, and I've managed to get it displaying text in the console window and all of that fine goodness. Now, I want to start accepting keyboard input so I can actually make some use of the thing and get going on process management.
I'm using DJGPP to compile, and loading with GRUB. I'm also using a small bit of assembly which basically jumps directly into my compiled C code and I'm happy from there.
All the research I've done seems to point to the BIOS service at INT 0x16 to read in the next character from the keyboard buffer. From what I can tell, this is supposed to store the scan code in AH and the ASCII value in AL, or something to that effect. I'm attempting to code this using the following routine in inline assembly:
char getc(void)
{
int output = 0;
//CRAZY VOODOO CODE
asm("xor %%ah, %%ah\n\t"
"int $0x16"
: "=a" (output)
: "a" (output)
:
);
return (char)output;
}
When this code is called, the core immediately crashes. (I'm running it on VirtualBox, I didn't feel the need to try something this basic on real hardware.)
Now I have actually a couple of questions. No one has been able to tell me if (since my code was launched from GRUB) I'm running in real mode or protected mode at the moment. I haven't made the jump one way or another, I was planning on running in real mode until I got a process handler set up.
So, assuming that I'm running in real mode, what am I doing wrong, and how do I fix it? I just need a basic getc routine, preferably non-blocking, but I'll be darned if google is helping on this one at all. Once I can do that, I can do the rest from there.
I guess what I'm asking here is, am I anywhere near the right track? How does one generally go about getting keyboard input on this level?
EDIT: Ooh... so I'm running in protected mode. This certainly explains the crash when trying to access real-mode functions, then.
So then I guess I'm looking for how to access the keyboard IO from protected mode. I might be able to find that on my own, but if anyone happens to know feel free. Thanks again.
If you are compiling with gcc, unless you are using the crazy ".code16gcc" trick the Linux kernel uses (which I very much doubt), you cannot be in real mode. If you are using the GRUB multiboot specification, GRUB itself is switching to protected mode for you. So, as others pointed out, you will have to talk to the 8042-compatible keyboard/mouse controller directly. Unless it's a USB keyboard/mouse and 8042 emulation is disabled, in which case you would need a USB stack (though you can use the "boot" protocol for the keyboard/mouse, which is simpler).
Nobody said writing an OS kernel was simple.
The code you've got there is trying to access a real mode BIOS service. If you're running in protected mode, which is likely considering that you're writing a kernel, then the interrupt won't work. You will need to do one of the following:
Thunk the CPU into real mode, making sure the interrupt vector table is correct, and use the real mode code you have or
Write your own protected mode keyboard handler (i.e. use the in/out instructions).
The first solution is going to involve a runtime performance overhead, whilst the second will require some information about keyboard I/O.
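For the second option, a minimal polling sketch in gcc-style C (ports 0x64 = status and 0x60 = data, matching the GeekOS snippet below; inb() is the usual port-read wrapper, and what you get back is a raw scan code, not ASCII):
#include <stdint.h>

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    asm volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

/* Returns the next raw scan code, or -1 if nothing is pending. */
int kbd_try_read_scancode(void)
{
    if ((inb(0x64) & 0x01) == 0)  /* output buffer empty? */
        return -1;
    return inb(0x60);
}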
I've a piece of GeekOS that seems to do
In_Byte(KB_CMD);
and then
In_Byte(KB_DATA);
to fetch a scancode. I put it up: keyboard.c and keyboard.h. KB_CMD and KB_DATA being 0x64 and 0x60 respectively. I could perhaps also point out that this is done in an interrupt handler for intr:1.
You're doing the right thing, but I seem to recall that djgpp only generates protected mode output, which you can't call interrupts from. Can you drop to real mode like others have suggested, or would you prefer to address the hardware directly?
For the purposes of explanation, let's suppose you were writing everything in assembly language yourself, boot loader and kernel (*cough* I've done this).
In real mode, you can make use of the interrupt routines that come from the BIOS. You can also replace the interrupt vectors with your own. However all code is 16-bit code, which is not binary compatible with 32-bit code.
When you jump through a few burning hoops to get to protected mode (including reprogramming the interrupt controller, to get around the fact that IBM used Intel-reserved interrupts in the PC), you have the opportunity to set up 16- and 32-bit code segments. This can be used to run 16-bit code. So you can use this to access the getchar interrupt!
... not quite. For this interrupt to work, you actually need data in a keyboard buffer that was put there by a different ISR - the one triggered by the keyboard when a key is pressed. There are various issues which pretty much prevent you from using BIOS ISRs as actual hardware ISRs in protected mode. So the BIOS keyboard routines are useless.
BIOS video calls, on the other hand, are fine, because there's no hardware-triggered component. You do have to prepare a 16-bit code segment but if that's under control then you can switch video modes and that sort of thing by using BIOS interrupts.
Back to the keyboard: what you need (again assuming that YOU'RE writing all the code) is to write a keyboard driver. Unless you're a masochist (I'm one) then don't go there.
A suggestion: try writing a multitasking kernel in Real mode. (That's 16-bit mode.) You can use all the BIOS interrupts! You don't get memory protection but you can still get pre-emptive multitasking by hooking the timer interrupt.
Just an idea: looking at GRUB for DOS source (asm.s), the console_checkkey function is using BIOS INT 16H Function 01, and not function 00, as you are trying to do. Maybe you'd want to check if a key is waiting to be input.
The console_checkkey code is setting the CPU to real mode in order to use the BIOS, as #skizz suggested.
You can also try using GRUB functions directly (if still mapped in real mode).
A note on reading assembly source: in this version
movb $0x1, %ah
means move constant byte (0x1) to register %ah
The console_checkkey from GRUB asm.s:
/*
* int console_checkkey (void)
* if there is a character pending, return it; otherwise return -1
* BIOS call "INT 16H Function 01H" to check whether a character is pending
* Call with %ah = 0x1
* Return:
* If key waiting to be input:
* %ah = keyboard scan code
* %al = ASCII character
* Zero flag = clear
* else
* Zero flag = set
*/
ENTRY(console_checkkey)
push %ebp
xorl %edx, %edx
call EXT_C(prot_to_real) /* enter real mode */
.code16
sti /* checkkey needs interrupt on */
movb $0x1, %ah
int $0x16
DATA32 jz notpending
movw %ax, %dx
//call translate_keycode
call remap_ascii_char
DATA32 jmp pending
notpending:
movl $0xFFFFFFFF, %edx
pending:
DATA32 call EXT_C(real_to_prot)
.code32
mov %edx, %eax
pop %ebp
ret
Example for polling the keyboard controller:
Start:
cli
mov al,2 ; disable IRQ 1
out 21h,al
sti
;--------------------------------------
; Main-Routine
AGAIN:
in al,64h ; get the status
test al,1 ; check output buffer
jz short NOKEY
test al,20h ; check if it is a PS2Mouse-byte
jnz short NOKEY
in al,60h ; get the key
; insert your code here (maybe for converting into ASCII...)
NOKEY:
jmp AGAIN
;--------------------------------------
; At the end
cli
xor al,al ; enable IRQ 1
out 21h,al
sti