We are using an ARM AM1808-based embedded system with an RTOS and a file system, programmed in C. We have a watchdog timer implemented inside the application code, so whenever something goes wrong in the application, the watchdog timer resets the system and recovers it.
However, we are experiencing an issue where the system hangs before the watchdog timer task starts. The system hangs because the file system code is badly written, with a large number of while loops. Sometimes, due to a bad NAND (or at least the file system code thinks it is bad), the code gets stuck in one of these while loops and never gets out of it. What we end up with is a dead board.
So the point of giving all this information is to ask whether there is any mechanism that could be implemented in the code that runs before the application code. Is there a hardware watchdog? What steps can be taken to make sure we don't get a dead board caused by some while loop?
Professional embedded systems are designed like this:
Pick an MCU with a power-on-reset interrupt and an on-chip watchdog. This is standard on all modern MCUs.
Implement the steps below from inside the reset interrupt vector (a minimal sketch follows the list).
If the MCU memory is simple to set up, such as just setting the stack pointer, then do so as the first thing out of reset. This enables C programming. You can usually write the reset ISR in C as long as you don't declare any variables - disassemble it to make sure that it doesn't touch any RAM addresses until those are available.
If the memory setup is complex - there is an MMU setup or similar - C code will have to wait, and you'll have to stick to assembler to prevent accidental stacking caused by C code.
Set up the most fundamental registers, such as mode/peripheral routing registers, the watchdog and the system clock.
Set up the low-voltage detect hardware, if applicable. Hopefully the out-of-reset state for LVD on the MCU is a sound one.
Application-specific, critical registers such as GPIO direction and internal pull resistor registers should be set from here. Many MCUs have their pins as inputs by default, making them vulnerable. If they are not meant to be inputs in the application, the time they are kept as inputs out of reset should be minimized, to avoid problems with noise, transients and ESD.
Set up the MMU, if applicable.
Everything else "CRT", such as initialization of .data and .bss.
Call main().
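Here is a minimal sketch of such a reset routine in C, assuming the stack pointer has already been set by a few assembly instructions before branching here; WDT_CTRL, CLK_CTRL and the linker symbols are placeholders, not real AM1808 registers:

    #include <stdint.h>

    #define WDT_CTRL (*(volatile uint32_t *)0x01C21000u)  /* placeholder address */
    #define CLK_CTRL (*(volatile uint32_t *)0x01C11000u)  /* placeholder address */

    extern uint32_t _sidata, _sdata, _edata, _sbss, _ebss; /* from the linker script */
    extern int main(void);

    void reset_handler(void)
    {
        WDT_CTRL = 0x1u;             /* start the watchdog as early as possible */
        CLK_CTRL = 0x5u;             /* switch from the slow default clock      */

        /* The "CRT" part: copy .data from flash and zero .bss. */
        uint32_t *src = &_sidata;
        uint32_t *dst = &_sdata;
        while (dst < &_edata)
            *dst++ = *src++;
        for (dst = &_sbss; dst < &_ebss; dst++)
            *dst = 0u;

        main();                      /* enter the application                   */
        for (;;)
            ;                        /* main() should never return              */
    }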
Please note that pre-made startup code for your MCU is not necessarily made by professionals! It is fairly common that there's an amateur-level "CRT" delivered with your toolchain, which fails to set up the watchdog and clock early on. This is of course unacceptable, since:
This makes any program running on that platform a notable safety/poor-quality hazard, in case the "CRT" crashes or hangs for whatever reason.
This makes the initialization of .data and .bss needlessly, painfully slow, as it is then typically executed with the clock running on the default on-chip RC oscillator or similar.
Please note that even industry de facto startup code such as ARM CMSIS fails to do some of the MCU-specific hardware setups mentioned above. This may or may not be a problem.
There is a hardware watchdog that could be run before the application runs. The ARM AM1808 does have a timer that can be configured as a watchdog, per the documentation: www.ti.com/lit/ds/symlink/am1808.pdf. So you may wish to set it up that way, at least during the part of the program that runs through the critical and long section. You may wish to have a piece of boot code that first sets up this watchdog and, after correct initialization, jumps to the application. In fact, this is a very common approach.
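A minimal sketch of that kind of boot flow, assuming hypothetical wdt_enable()/wdt_kick() wrappers around the AM1808 timer-as-watchdog and an fs_mount() standing in for the file system bring-up:

    extern void wdt_enable(unsigned timeout_ms);  /* hypothetical BSP call          */
    extern void wdt_kick(void);                   /* hypothetical BSP call          */
    extern int  fs_mount(void);                   /* the code with the while loops  */
    extern void application_main(void);

    void boot_main(void)
    {
        wdt_enable(5000);       /* if anything below hangs, the board resets itself */

        if (fs_mount() != 0) {
            /* mark the NAND as suspect, fall back to a safe/recovery mode, ... */
        }

        wdt_kick();             /* made it through the risky section                */
        application_main();     /* the application's own watchdog task takes over   */
    }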
This question may seem slightly vague; however, I am researching how interrupt systems work and their latency times. I am trying to understand how architecture facilities such as FIQ in ARM help decrease latency. How does this differ from using an operating system that does not have, or cannot provide, access to these facilities? For example, Windows RT is made for ARM, and this operating system cannot be ported to other architectures.
Simply put - how is interrupt latency different in dedicated architectures that have dedicated operating systems as compared to operating systems that can be ported across many different architectures (Linux for example)?
Sorry for the rant - I'm pretty confused as you can probably tell.
I'll start with your Windows RT example: Windows RT is a port of Windows to the ARM architecture. It is not a 'dedicated operating system'. There are (probably) many OSes that only run on one architecture, but that is more a function of nobody having bothered to port them for some reason.
What does 'port' really mean though?
Windows has a kernel (we'll call it NT here, it doesn't matter), and that NT kernel has a bunch of concepts that need to be implemented. These concepts are things like timers, memory virtualisation, exceptions etc...
These concepts are implemented differently between architectures, so the port of the kernel and drivers (I will ignore the rest of the OS here; often that is a recompile only) is a matter of using the available pieces of silicon to implement the required concepts. This implementation is called a 'port'.
Let's zoom in on interrupts (AKA exceptions) on an ARM that has FIQ and IRQ.
In general an interrupt can occur asynchronously, by that I mean at any time. The CPU is generally busy doing something when an IRQ is asserted so that context (we'll call it UserContext1) needs to be stored before the CPU can use any resources in use by UserContext1. Generally this means storing registers on the stack before using them.
On ARM, when an IRQ occurs the CPU will switch to IRQ mode. Registers r13 and r14 have their own copies for IRQ mode; the rest will need to be saved if they are used - so that is what happens. Those stores to memory take some time. The IRQ is handled, UserContext1 is popped back off the stack, then IRQ mode is exited.
So the latency in this case might be the time from IRQ assertion to the time the IRQ vector starts executing. That's going to be some set number of clock cycles, based upon what the CPU was doing when the IRQ happened.
The latency before the IRQ handling can occur is the time from the IRQ assert to the time the CPU has finished storing the context.
The latency before user mode code can execute depends on too much stuff in the OS/Kernel to explain here, but the minimum boils down to the time from the IRQ assertion to the return after restoring UserContext1 + the time for the OS context switch.
FIQ - if you are a hard-as-nails programmer you might only need 7 registers to completely handle your interrupt servicing. I mentioned that IRQ mode has its own copy of 2 registers; well, FIQ mode has its own copy of 7 registers. Yup, that's 28 bytes of context that doesn't need to be pushed out onto the stack (actually one of them is the link register, so it's really 6 you have). That can remove the need to store and then restore UserContext1, so the latency can be reduced by up to the time needed to do that save/restore.
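A minimal sketch of what that looks like from C with GCC for bare-metal ARM (classic ARM mode, not Cortex-M); the peripheral register is made up, and the compiler generates the context save/restore for each handler:

    #include <stdint.h>

    #define UART_DATA (*(volatile uint32_t *)0x10000000u)  /* made-up peripheral register */

    volatile uint32_t irq_count;

    void __attribute__((interrupt("IRQ"))) irq_handler(void)
    {
        /* Ordinary IRQ: the compiler stacks every register it clobbers. */
        irq_count++;
    }

    void __attribute__((interrupt("FIQ"))) fiq_handler(void)
    {
        /* FIQ: r8-r14 are banked, so a small handler can keep its working
           data in those registers and avoid most of the stacking - which is
           exactly where the latency saving comes from. */
        UART_DATA = 0x55u;
    }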
None of this has much to do with the OS. The OS can choose to use or not use these features. The OS can choose to make guarantees regarding how long it will take to execute the OSes concept of an interrupt handler, or it may not. This is one of the basic concepts of an RTOS, the contract about how long before the handler will run.
The OS is designed for some purpose (and that purpose may be 'general') - that target design goal will have a lot more effect on latency than how many targets the OS has been ported to.
Go have a read about something like FreeRTOS, then buy some hardware and try it. Annotate the code to figure out the latencies you really want to look at. It will likely be the best way to get your head around it.
(Multi-CPU systems do it the same way, but with some synchronization and barrier functions and a sprinkling of complexity.)
I was wondering, because events and interrupts seem to be different things (for example, WFI and WFE are separate instructions), but I can't exactly pinpoint the difference.
After a few years, I see this question is popular, and in the meantime I have understood the answer by experience.
Events are implemented as lines entering the ARM core of the MCU, alongside the memory bus (actually the core can generate events too; the lines are one-way and point-to-point dedicated, this is not a bus), so that peripherals or other cores can raise those lines to tell the core something in real time, outside any memory bus management or instruction execution, even if there is no bus arbiter in the MCU (I guess they are clocked at the bus frequency, though).
Those events are then handled by the core. One way to make an event enter the program world is by raising an interrupt (more exactly, plugging the line into the NVIC, which can interpret it as an interrupt); others are flipping a bit in one of the core registers, restarting the core clock, or plugging the line into a DMA peripheral to start or stop a transfer. There is a whole logic block dedicated to event management in the core, to mask events and use them as sources for exceptions or DMA actions.
The list of events is decided by the MCU implementers; they can decide to use an NVIC, a DMA controller, or connect them to PLD logic (some Cypress MCUs can trigger a DMA transfer or interrupt from the PLD part).
The absolute most common way to handle an event is to ignore it, and the second most common is to raise an exception to execute some code.
Both instructions are meant for power management/saving. While WFI is supposed to halt the core until an interrupt or exception occurs, WFE will also wait for an "event", which can be sent by the SEV instruction.
It is implementation-defined to what level the instructions are implemented; they might be just NOPs. So, for example, you cannot trust that an interrupt or "event" really occurred when WFE returns.
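A minimal sketch of the usual WFE/SEV pattern, assuming CMSIS-style __WFE()/__SEV() intrinsics are available and that 'flag' is set by another core or an ISR; note the loop, since WFE may return without the condition being true:

    #include <stdint.h>

    extern void __WFE(void);   /* CMSIS-style intrinsics, assumed to be provided */
    extern void __SEV(void);   /* by the toolchain or device headers             */

    volatile uint32_t flag;

    void wait_for_flag(void)
    {
        while (flag == 0u)
            __WFE();           /* sleep until an event/interrupt; may wake spuriously */
    }

    void signal_flag(void)     /* runs on another core or in an ISR */
    {
        flag = 1u;
        __SEV();               /* wake any core sitting in WFE */
    }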
I've started writing a boot loader for my z80 system.
So far the program can accept hex via serial and load it to a location in the memory.
The problem I have, though, is that the boot loader is at the start of memory and uses the interrupts.
How can I load a new program without overwriting the boot loader?
(Also, a loaded program might want to use the interrupts as well.)
The best and most widely used approach is to split your firmware into a stable boot loader that is never updated and an application that you can replace from time to time.
AFAIK, on the Z80 there are just interrupt vectors, and there is no support in the CPU itself for replacing them. You would need something in your hardware that can swap memory blocks.
Otherwise you need to make sure that the bootloader does not use anything in the application part during the download, and block any interrupts that could call into the application.
W.r.t. locations, you may place the bootloader at the top of the address space and load a program at the beginning of the address space.
You can also include the location and size of the program in the protocol, so the bootloader can check whether the pair of values is compatible with the bootloader's location and size (IOW, whether or not the program would overwrite the bootloader if loaded).
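A minimal sketch of such a check, with made-up BOOT_START/BOOT_END constants describing where the bootloader lives:

    #include <stdint.h>
    #include <stdbool.h>

    #define BOOT_START 0xF000u   /* made up: bootloader at the top of the address space */
    #define BOOT_END   0xFFFFu

    bool load_is_safe(uint16_t load_addr, uint16_t size)
    {
        if (size == 0u)
            return true;                                  /* nothing to load */

        uint32_t load_end = (uint32_t)load_addr + size - 1u;

        /* Reject the program if [load_addr, load_end] overlaps the bootloader. */
        return (load_end < BOOT_START) || (load_addr > BOOT_END);
    }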
Another option is to include relocation information in the program and a simple relocator in the bootloader. That way you'd be able to load the program at any location if there's enough free memory. This is what many OSes do when they load programs.
As for the interrupts, I don't see a problem. What is there to stop the program from using interrupts? Or do you want the bootloader to stay resident and either continue to do something in the background or be able to return to it from the program? If you don't need any of that, just let the program use the interrupts (you probably don't even need to do anything to allow it).
If, OTOH, you do want the bootloader to remain functional, you can introduce an extra layer of indirection by maintaining an additional interrupt vector table (see the sketch below). The primary ISRs would extract the secondary ISRs from this table and jump there. Your bootloader and program would then need to do the following in order to add a new ISR or override an existing one:
disable interrupts
get the old ISR address from the additional interrupt vector table
put the new ISR address into the table
enable interrupts
Removing an ISR is obvious and similar to the above.
The new ISR can then:
before doing its work call the old ISR (the address is preserved at step 2 above)
after doing its work call the old ISR
just do its work without calling the old ISR
You need to require that the programs and bootloader use this table and restore it when they no longer need to have their own ISRs in it.
I don't know what issues you may need to solve if you chain ISRs by executing the old ones before/after the new ones. But in some systems that's a possible design. Many x86 PC programs and drivers did that in MSDOS.
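A minimal sketch of that secondary table and the install sequence described in the steps above; the names are hypothetical, and di()/ei() stand in for whatever your toolchain uses to emit the Z80 DI/EI instructions:

    #include <stddef.h>

    typedef void (*isr_t)(void);

    #define NUM_VECTORS 8u
    static volatile isr_t secondary_vectors[NUM_VECTORS];

    extern void di(void);   /* hypothetical wrappers around the DI/EI instructions */
    extern void ei(void);

    /* Called by the fixed primary ISR for vector 'n' in the bootloader. */
    void dispatch_isr(unsigned n)
    {
        isr_t handler = secondary_vectors[n];
        if (handler != NULL)
            handler();
    }

    /* Install a new ISR; the returned old ISR can be chained to or restored later. */
    isr_t install_isr(unsigned n, isr_t new_handler)
    {
        isr_t old;

        di();                                 /* disable interrupts               */
        old = secondary_vectors[n];           /* get the old ISR address          */
        secondary_vectors[n] = new_handler;   /* put the new ISR address in place */
        ei();                                 /* enable interrupts                */

        return old;
    }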
I'm working on a custom Cortex-M3-based device and I need to implement an in-application programming (IAP) mechanism so that it will be possible to update the device firmware without JTAG (we'll use TFTP or HTTP instead). While the IAP-related code examples available from STMicroelectronics are clear enough to me, I don't really understand how the re-flashing works.
As far as I understand, the instructions are fetched by the CPU from the Flash through the ICode bus (and the prefetch block, of course). So, here's my pretty silly question: why doesn't the running program get corrupted while it re-flashes itself (i.e. changes the Flash memory from which it is being run)?
A common solution is to have a small reserved area in the flash where the actual flashing program is stored. When new firmware has been downloaded, just jump to the code in this area.
Of course, this small area is not overwritten when flashing firmware; it can only be updated by other means (like JTAG). So make sure this flashing program works well to start with. :)
I'm not familiar with STM implementation, but in NXP chips the IAP routines are stored in a separate, reserved ROM area which can't be erased by user code.
If you're implementing the flash-writing code yourself by using the hardware registers directly, you need to make sure that it either doesn't touch the sectors it's running from, or runs from RAM.
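A minimal sketch of the run-from-RAM variant using a GCC section attribute; the ".ramfunc" section, the register names and the bit layout are all assumptions that depend on your linker script and flash controller:

    #include <stdint.h>

    #define FLASH_CR   (*(volatile uint32_t *)0x40020000u)  /* made-up control register */
    #define FLASH_SR   (*(volatile uint32_t *)0x40020004u)  /* made-up status register  */
    #define FLASH_PROG (1u << 0)
    #define FLASH_BUSY (1u << 16)

    /* Placed in a RAM section by the linker script so that no flash reads
       happen while the flash is being programmed. */
    __attribute__((section(".ramfunc"), noinline))
    void flash_write_word(volatile uint32_t *dst, uint32_t value)
    {
        FLASH_CR |= FLASH_PROG;          /* enter programming mode        */
        *dst = value;                    /* the actual programming access */
        while (FLASH_SR & FLASH_BUSY)
            ;                            /* busy-wait entirely from RAM   */
        FLASH_CR &= ~FLASH_PROG;
    }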
Nowadays many microcontrollers support IAP, making it possible to program the flash memory while code is executing from that same flash.
For IAP, the program memory in the flash may be divided into two parts: one executable part and one back-up part.
Generally we program the flash at one location (say, part 1) through JTAG; call this firmware version 0.01. For IAP, i.e. programming another part of the flash (part 2) while code is executing, firmware 0.01 must provide APIs that program flash part 2. After programming completes successfully, the firmware version is updated to 0.02. When the processor restarts, execution jumps to the latest firmware by checking the firmware version at initialization.
The part where the firmware is executing is called the executable part, and the other is the back-up. It is called a back-up because, if any firmware corruption occurs while programming, the firmware version will not be updated, and upon restart program control will automatically jump back to the back-up firmware after checking the version number.
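A minimal sketch of that version check at start-up; the header layout, addresses and magic value are assumptions:

    #include <stdint.h>

    typedef struct {
        uint32_t version;      /* e.g. 1 for firmware 0.01, 2 for 0.02        */
        uint32_t valid_magic;  /* written last, only if programming completed */
        uint32_t entry;        /* entry point of this image                   */
    } image_header_t;

    #define PART1_HDR ((const image_header_t *)0x00010000u)  /* made-up addresses */
    #define PART2_HDR ((const image_header_t *)0x00040000u)
    #define VALID_MAGIC 0xA5A5A5A5u

    const image_header_t *select_boot_image(void)
    {
        const image_header_t *a = PART1_HDR;
        const image_header_t *b = PART2_HDR;

        /* Prefer the newest image whose programming completed successfully,
           otherwise fall back to the other (back-up) image. */
        if (b->valid_magic == VALID_MAGIC &&
            (a->valid_magic != VALID_MAGIC || b->version > a->version))
            return b;

        return a;
    }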
Another good way to do it is with a custom-made bootloader. The STM IAP code itself is not stored in flash, so it cannot be overwritten by itself. What people generally do is split the flash into two parts: one reserved for the custom bootloader and the other for the application. The bootloader makes sure it does not write to its own assigned area. The bootloader can be programmed through JTAG, and later the application can use the bootloader to reprogram itself.
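A minimal sketch of the jump from a Cortex-M3 bootloader into the application; APP_BASE is an assumption, and on Cortex-M the first two words of the image are the initial stack pointer and the reset handler address:

    #include <stdint.h>

    #define APP_BASE 0x08004000u    /* assumed application start address */

    static void jump_to_application(void)
    {
        uint32_t app_sp    = ((const uint32_t *)APP_BASE)[0];  /* initial MSP   */
        uint32_t app_entry = ((const uint32_t *)APP_BASE)[1];  /* reset handler */

        /* Typically you would also disable interrupts and point VTOR at APP_BASE here. */
        __asm volatile ("msr msp, %0" :: "r" (app_sp));        /* set the main stack */
        ((void (*)(void))app_entry)();                         /* enter the app      */
    }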
To answer the original question of why the running program doesn't get corrupted while it re-flashes itself: in the general case, writing to or programming flash memory is not allowed while you are reading from it (i.e. executing code from it).
Have a look at this for some ideas on implementing IAP.
I have an open-source Atari 2600 emulator (Z26), and I'd like to add support for cartridges containing an embedded ARM processor (NXP 21xx family). The idea would be to simulate the 6507 until it tries to read or write a byte of memory (which it will do every 841ns). If the 6507 performs a write, put the address and data on some of the ARM's I/O ports and let the ARM code run 20 cycles, confirm that the ARM is floating its data bus, and let the ARM run for another 38 cycles. If the 6507 performs a read, put the address on the ARM's I/O ports, let the ARM run 38 cycles, grab the data from the ARM's I/O port (hopefully the ARM software will have put it there), and let the ARM run another 20 cycles.
The ARM7 seems pretty straightforward to implement; I don't need to simulate a whole lot of hardware features. Any thoughts?
Edit
What I have in mind is a routine that would take as a parameter a struct holding the machine state and pointers to memory access routines. When called, the routine would emulate the ARM's instruction engine, generating appropriate reads, writes, and code fetches. I could then write the memory access routines to treat the appropriate areas as flash (with roughly approximated wait states), RAM, I/O ports, and timer registers. Some other areas would be marked as don't-care, and accesses to any other areas would flag an error and stop the emulator.
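A minimal sketch of the kind of interface I have in mind (all names are hypothetical; the actual instruction decoding would live inside arm_run()):

    #include <stdint.h>

    typedef uint32_t (*mem_read_fn)(void *ctx, uint32_t addr, int size);
    typedef void     (*mem_write_fn)(void *ctx, uint32_t addr, uint32_t value, int size);

    typedef struct {
        uint32_t r[16];       /* r0-r15, with r15 as the program counter         */
        uint32_t cpsr;        /* status register                                 */
        mem_read_fn  read;    /* flash/RAM/I-O/timer dispatch lives behind these */
        mem_write_fn write;
        void *ctx;            /* opaque pointer back to the host emulator (Z26)  */
    } arm_state_t;

    /* Run the core for roughly 'cycles' cycles, generating reads, writes and code
       fetches through the callbacks; returns the number of cycles actually used. */
    int arm_run(arm_state_t *state, int cycles);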
Perhaps QEMU uses such a thing internally. Since the ARM emulation would be integrated into an already-existing emulation engine (which I didn't write and don't fully understand--the only parts of Z26 I've patched have been the memory read/write logic) I would need something with a fairly small footprint.
Any idea how QEMU works inside? Any idea what the GPL licence would require if I just use 2% of the code in QEMU--whether I'd have to bundle the code for the whole thing, or just the part that I use, or what?
Try QEMU.
With some work, you can make my emulator do what you want. It was written for ARM920, and the Thumb instruction set isn't done yet. Neither is the MMU/cache interface. Also, it's slow because it is an interpreter. On the bright side, it's all written in C99.
http://code.google.com/p/gp2xemu/
I haven't worked on it for a while (the SVN trunk is 2 years old), but if you're going to use the code, I'll be glad to help you out with the missing features. It is licensed under the MIT license, so it's essentially the same as the permissive BSD license.