I completed a basic microprocessor course using the 8051, where I learned to use a timer to trigger events. A semester later, I learned embedded systems programming on the ARM Cortex-M4 (Tiva C LaunchPad) and started using SysTick to trigger events (it is what FreeRTOS almost always uses), and sometimes it is used as a plain timer.
I wonder what the difference is between a timer and SysTick, because sometimes SysTick seems to behave exactly like a timer. I have searched for the difference and know this much: SysTick is in the ARM core and the timers come from the chip vendor.
In which situations should we use SysTick instead of a timer?
Please let me know. Thank you.
You basically have it: the SysTick timer is part of the ARM core, and the other timer(s) come from the chip vendor. You, the programmer, are free to use them however you wish.
They most likely have different features. The SysTick timer is pretty much only for polling or for interrupts at simple intervals, whereas the chip-vendor timers can usually do those things and much more: sometimes they can generate clocks for other timers, sometimes they can generate clocks or signals that go out on a pin, and sometimes they can time inputs. Sometimes a vendor will put multiple timers in a chip whose features differ from each other. It varies widely.
Note that some ARM cores do not have a SysTick timer, or rather, the chip vendor has the option to compile the core without it. In those situations your only choice is the vendor-supplied timers.
There is no magic here: you are the programmer, and you are free to use the peripherals as you wish.
Now if you use an RTOS such as FreeRTOS, your freedom is limited to what the RTOS does not consume for itself (it will likely consume the SysTick timer if present, but leave the others).
The reasoning behind this is that any OS developer can write code for any Cortex-M that has SysTick without needing to worry about vendor-specific details. SysTick is guaranteed to work the same way across a wide range of devices, so there is less low-level porting to be done.
The same goes for your course: if you are writing bare metal, you don't need to worry about the device vendor until you use their peripherals (timer, UART, watchdog).
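For a sense of how little there is to SysTick, here is a minimal bare-metal sketch of using it as a 1 ms tick on a Cortex-M part such as the Tiva C. The three register addresses are fixed by the Cortex-M architecture; the 80 MHz core clock value and the handler name (what CMSIS-style startup files typically expect) are assumptions for illustration.

    /* Minimal bare-metal SysTick setup for a Cortex-M part.             */
    #include <stdint.h>

    #define SYST_CSR  (*(volatile uint32_t *)0xE000E010u)  /* control/status */
    #define SYST_RVR  (*(volatile uint32_t *)0xE000E014u)  /* reload value   */
    #define SYST_CVR  (*(volatile uint32_t *)0xE000E018u)  /* current value  */

    #define CPU_HZ    80000000u            /* assumed core clock             */

    static volatile uint32_t g_ticks;      /* incremented once per millisecond */

    void SysTick_Handler(void)             /* name most CMSIS startup files use */
    {
        g_ticks++;
    }

    void systick_init_1ms(void)
    {
        SYST_RVR = (CPU_HZ / 1000u) - 1u;  /* 1 ms period, fits the 24-bit reload */
        SYST_CVR = 0;                      /* clear the current count             */
        SYST_CSR = 0x7;                    /* core clock, interrupt, enable       */
    }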
Related
We are using an ARM AM1808-based embedded system with an RTOS and a file system, programmed in C. We have a watchdog timer implemented inside the application code, so whenever something goes wrong in the application code, the watchdog timer takes care of the system.
However, we are experiencing an issue where the system hangs before the watchdog timer task starts. The file system code is badly written, with a great many while loops, and sometimes, due to a bad NAND (or at least the file system code thinks it is bad), the code hangs in a while loop and never gets out of it. What we get is a dead board.
So, the point of giving all this information is to ask whether there is any mechanism that could be implemented in the code that runs before the application code. Is there a hardware watchdog? What steps can be taken to make sure we don't get a dead board caused by some while loop?
Professional embedded systems are designed like this:
Pick an MCU with a power-on-reset interrupt and an on-chip watchdog. This is standard on all modern MCUs.
Implement the steps below from inside the reset interrupt vector.
If the MCU memory is simple to set up, for example only the stack pointer needs setting, then do so as the very first thing out of reset. This enables C programming. You can usually write the reset ISR in C as long as you don't declare any variables; disassemble to make sure it doesn't touch any RAM addresses until those are available.
If the memory setup is complex - there is an MMU to configure or similar - C code will have to wait and you'll have to stick to assembler to prevent accidental stack use by C code.
Set up the most fundamental registers, such as mode/peripheral-routing registers, the watchdog, and the system clock.
Set up the low-voltage detect (LVD) hardware, if applicable. Hopefully the MCU's out-of-reset state for LVD is a sound one.
Application-specific, critical registers such as GPIO direction and internal pull-resistor registers should be set from here. Many MCUs have pins configured as inputs by default, making them vulnerable. If they are not meant to be inputs in the application, the time they spend as inputs out of reset should be minimized, to avoid problems with noise, transients and ESD.
Set up the MMU, if applicable.
All the remaining "CRT" work, such as initialization of .data and .bss.
Call main().
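As a rough illustration of that ordering, here is a sketch of a Cortex-M-style reset handler. The function names (wdt_early_init, clock_init, gpio_safe_init) and the linker symbols are placeholders for whatever your vendor headers and linker script actually provide, and it assumes the hardware loads the stack pointer from the vector table, as Cortex-M does.

    #include <stdint.h>

    extern uint32_t _sidata, _sdata, _edata, _sbss, _ebss;  /* from the linker script */
    extern void wdt_early_init(void);   /* enable the watchdog as early as possible  */
    extern void clock_init(void);       /* switch away from the slow RC oscillator   */
    extern void gpio_safe_init(void);   /* drive vulnerable pins to safe states      */
    extern int  main(void);

    void Reset_Handler(void)
    {
        wdt_early_init();                /* watchdog before anything else            */
        clock_init();                    /* full-speed clock before touching .data   */
        gpio_safe_init();

        /* "CRT": copy .data from flash, zero .bss */
        uint32_t *src = &_sidata;
        for (uint32_t *dst = &_sdata; dst < &_edata; )
            *dst++ = *src++;
        for (uint32_t *dst = &_sbss; dst < &_ebss; )
            *dst++ = 0;

        main();
        for (;;) { }                     /* trap if main() ever returns              */
    }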
Please note that pre-made startup code for your MCU is not necessarily written by professionals! It is fairly common for an amateur-level "CRT" to be delivered with your toolchain, one which fails to set up the watchdog and clock early on. This is of course unacceptable, since:
It makes any program running on that platform a notable safety/quality hazard, should the "CRT" crash or hang for whatever reason.
It makes the initialization of .data and .bss needlessly, painfully slow, as it is then typically executed with the clock still running from the default on-chip RC oscillator or similar.
Please note that even industry de facto startup code such as ARM CMSIS fails to do some of the MCU-specific hardware setups mentioned above. This may or may not be a problem.
There is a hardware watchdog that can run before the application does. The ARM AM1808 has a timer that can be configured as a watchdog, per the documentation: www.ti.com/lit/ds/symlink/am1808.pdf. So you may wish to set it up that way, at least for the part of the program that runs through the critical, long-running section. You may wish to have a piece of boot code that first sets up this watchdog and, after correct initialization, jumps to the application. In fact, this is a very common approach.
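As a sketch of the pattern only (wdt_start, wdt_service, fs_mount and start_rtos_and_app are placeholders, not real AM1808 driver calls; the actual timer-in-watchdog-mode registers are in the TI documentation):

    /* Arm the hardware watchdog before the risky code, and service it only
     * at checkpoints that prove forward progress. If fs_mount() wedges in
     * one of its while loops, nobody services the watchdog and the board
     * resets instead of staying dead.                                      */
    extern void wdt_start(unsigned timeout_ms);   /* arm the hardware watchdog */
    extern void wdt_service(void);                /* reload ("kick") it        */
    extern void fs_mount(void);                   /* the suspect file system   */
    extern void start_rtos_and_app(void);         /* normal startup continues  */

    void boot(void)
    {
        wdt_start(2000);          /* armed before the file system ever runs  */

        fs_mount();               /* may hang; the watchdog is the backstop  */
        wdt_service();            /* checkpoint: only reached if it returned */

        start_rtos_and_app();     /* the application watchdog task takes over */
    }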
Based on my understanding, the CPU has a "hardware timer" that fires an interrupt when its interval expires.
The kernel uses this hardware timer to implement the scheduling mechanism for processes, so if the hardware timer fires interrupt number 123, the kernel maps that interrupt number to an interrupt handler that executes the scheduler code (which decides which process to execute next).
I have two questions:
Can the kernel set the interval of the hardware timer, or is the interval a fixed number that can't be changed programmatically?
Does the CPU have a dedicated hardware timer for scheduling, or are there many hardware timers from which the kernel can choose whichever it wants to use for scheduling?
Edit: The hardware architecture I am most interested in is the PC, but I would like to know whether other architectures (for example a mobile phone, a Raspberry Pi, etc.) work in a similar way.
Details are hardware-specific (they might differ with various motherboards, chipsets, and processors; read about the southbridge). Read about the High Precision Event Timer (HPET) and the APIC.
See also the OSDev wiki, notably its Programmable Interval Timer page.
(so the answer is usually yes to both questions)
From early on, IBM-compatible PCs had PITs (Programmable Interval Timers): the IBM PC and IBM PC XT had the Intel 8253, and the IBM PC AT introduced the Intel 8254.
From the IBM PC Technical Reference from April 1984, page 1-11:
System Timers
Three programmable timer/counters are used by the system as follows: Channel 0 is a general-purpose timer providing a constant time base for implementing a time-of-day clock, Channel 1 times and requests refresh cycles from the Direct Memory Access (DMA) channel, and Channel 2 supports the tone generation for the speaker. [...]
Channel 0 is exactly the "constant time base," the "interval" you are asking for. And, to answer your 1st question, it is changeable; it is the Programmable Interval Timer.
However, the CPU built into the original IBM PC was the Intel 8088, basically an Intel 8086 with an 8-bit data bus. Real Mode was the state of the art back then; Protected Mode was introduced some years later with the Intel 80286, so effective multitasking, let alone preemptive multitasking or multithreading, was of no concern in those days when DOS ruled the market.
Fast-forwarding to the IBM PC AT, the world was blessed with a Protected Mode-capable CPU, the Intel 80286, and the Intel 8254 was introduced, a "[...] superset of the 8253." (from the 8254 PIT datasheet). If you really want an in-depth understanding of the PITs, read the 8253/8254 datasheets linked at the bottom. It might also be worth looking at Linux. Since the latest kernels are way too complicated to really understand the particular parts in a matter of twenty minutes, I suggest you look at Linux 0.01, the very first release. _timer_interrupt in kernel/system_calls.s might be interesting and from there you can go wherever you want.
Regarding your 2nd question: there are multiple timer sources, but only one is suitable for interval timing, namely channel 0. IBM-compatibles still comply with the system timer layout shown above; they retain the same functionality, but may add more on top of it or change how the hardware works and how it is packaged. Nowadays additional timers do exist, such as high-resolution timers, but using them for interval timing instead would break compatibility.
Intel 8253 Datasheet
Intel 8254 Datasheet
IBM PC Technical Reference
IBM PC AT Technical Reference
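To make the "programmable" part concrete, here is a rough kernel-mode sketch, in the style of the OSDev examples, of reprogramming channel 0 of the 8253/8254 so that it raises IRQ0 at a chosen rate. The port numbers and the 1,193,182 Hz input clock come from the datasheets above; outb() here is the usual x86 port-write helper written with GCC inline assembly.

    #include <stdint.h>

    #define PIT_BASE_HZ 1193182u     /* input clock of the 8253/8254 */
    #define PIT_CH0     0x40         /* channel 0 data port          */
    #define PIT_CMD     0x43         /* mode/command register        */

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    /* Make channel 0 raise IRQ0 'hz' times per second (the scheduler tick). */
    void pit_set_frequency(uint32_t hz)
    {
        uint16_t divisor = (uint16_t)(PIT_BASE_HZ / hz);

        outb(PIT_CMD, 0x36);                       /* ch0, lobyte/hibyte, mode 3 */
        outb(PIT_CH0, (uint8_t)(divisor & 0xFF));  /* low byte of the divisor    */
        outb(PIT_CH0, (uint8_t)(divisor >> 8));    /* high byte                  */
    }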
Can the kernel set the interval of the hardware timer, or is the interval a fixed number that can't be changed programmatically?
Your questions are ENTIRELY processor specific. Some processors have controllable timers. Others have timers that go off at fixed intervals. Most processors you are likely to encounter have adjustable timers, however.
Does the CPU have a dedicated hardware timer for scheduling, or are there many hardware timers from which the kernel can choose whichever it wants to use for scheduling?
Some processors have only one timer. Most processors these days have multiple timers.
I was learning about interrupts and came here to see if someone can help me!
A) I understand that an interrupt is an electrical signal sent by some external hardware to the processor, via one of the input ports.
B) I understand that in some MCUs more than one input port is "attached" to only one interrupt.
Can a useful input port exist in an MCU that is not linked to any interrupt at all?
A) I understand that an interrupt is an electrical signal sent by some external hardware to the processor, via one of the input ports.
That is certainly one class of interrupt, as long as you understand that 'external hardware to the processor' can mean 'internal to the controller chip'; many MCUs have extensive integrated peripherals.
B) I understand that in some MCUs more than one input port is "attached" to only one interrupt.
Yes, that is not uncommon. The interrupt handler then has to poll the port to find out which GPIO (or whatever) pin generated the interrupt.
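A sketch of what such a handler can look like. GPIO_MIS (masked interrupt status), GPIO_ICR (interrupt clear) and their addresses are placeholders loosely modelled on a Tiva-style GPIO port; real names and addresses depend on the MCU.

    #include <stdint.h>

    #define GPIO_MIS (*(volatile uint32_t *)0x40004418u)  /* assumed address */
    #define GPIO_ICR (*(volatile uint32_t *)0x4000441Cu)  /* assumed address */

    void GPIOPort_IRQHandler(void)
    {
        uint32_t pending = GPIO_MIS;      /* which pins actually fired?     */

        for (int pin = 0; pin < 8; pin++) {
            if (pending & (1u << pin)) {
                /* per-pin work goes here, e.g. handle_pin(pin)             */
            }
        }
        GPIO_ICR = pending;               /* acknowledge only what we saw   */
    }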
Can a useful input port exist in an MCU that is not linked to any interrupt at all?
Sure, especially on 'trivial' controllers that do not require high-performance I/O and have no RTOS.
Even higher-performance MCU applications may poll for sundry reasons. One common example is reading keypads: the input rate is very low and the mechanical switches need to be debounced. Tying every keypad read line to an interrupt line may cause unwanted multiple interrupts. Such inputs are better polled, though even then a timer interrupt often drives the polling.
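A sketch of that polled, debounced approach, driven from a periodic timer tick. read_key_lines() is a placeholder for however the keypad rows/columns are actually sampled on a given board.

    #include <stdint.h>

    extern uint8_t read_key_lines(void);       /* raw, possibly bouncing input */

    static uint8_t stable_keys;                /* last debounced state         */

    /* Call this from a ~10 ms timer interrupt (or from the main loop). */
    void keypad_poll(void)
    {
        static uint8_t last_raw;
        static uint8_t match_count;

        uint8_t raw = read_key_lines();

        if (raw == last_raw) {
            if (match_count < 3 && ++match_count == 3)
                stable_keys = raw;             /* same value 3 ticks in a row  */
        } else {
            last_raw = raw;                    /* input changed: start over    */
            match_count = 0;
        }
    }

    uint8_t keypad_read(void)                  /* what the application sees    */
    {
        return stable_keys;
    }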
The answer is probably "yes," but it depends on the microcontroller architecture. There's no guarantee that one vendor's MCU will behave the same as any other (with respect to interrupts, ports, or anything else). If you're tasked with learning a particular MCU, then learn it, live it.
Your house may have only one doorbell button, but pretty much anyone can use it for whatever reason: UPS with a package, the neighbor kid coming to play with your kid, someone trying to sell something, and so on. A processor is no different. To reduce latency, newer designs may have multiple interrupt signals into the core so that the handler doesn't have to do as much work, if any, to figure out who caused the interrupt. It is like having a ringtone for every person on your phone, so you can tell who is calling without looking, versus one ringtone for everyone, where you have to look.
Do not confuse external GPIO ports on the chip with interrupt lines; they are not interrupt lines, they are general-purpose I/O. They may or may not have a way to be used as interrupts, depending on the design of the chip. Again, as with the doorbell on your house, there are many things, technically all of them within the chip (microcontroller), that create interrupts. Because software has to set up handlers before it can handle interrupts, all interrupt sources are disabled at first, and only the ones software has enabled can actually reach the core and cause an interrupt; this is logic in the chip. So you may have an interrupt signal tied to the UART receiver, and you might enable that. You might have one for the TX buffer, a when-it-is-empty interrupt, but you have to enable those before the processor can get an interrupt. There is a small section of logic that does fire an interrupt every time one of those events occurs, but that signal is gated and cannot reach the core, blocked by logic you control.
You can have timers in the MCU that interrupt you when they roll over or count down to zero. But not only do you have to set up the timer to do that in software, you also have to enable the interrupt so it can make it across the chip from the timer to the processor core.
And yes, sometimes the GPIO peripheral has a way to interrupt the processor as well. As with everything else, you have to set up the peripheral in software, define which interrupts you want, and enable them across the chip.
There are more ways of doing this than there are companies making chips, as they don't always do it the same way even across their own product lines. But generally, at a minimum, there is an interrupt enable at the peripheral end (one or many, depending on the peripheral and its features) that you have to set in order for the signal to leave that peripheral on its way to the core. There is often an interrupt controller, a peripheral or something built into or near the core, that takes the dozens or hundreds of individual interrupt connections in the chip, prioritizes them, and ORs them down into the one or few interrupt lines into the core; you generally also have to enable the corresponding interrupt there so the signal coming out of your peripheral can reach the processor core. And then there is sometimes an interrupt enable in the core itself, so that even with the peripheral enabled and the interrupt controller enabled for that one peripheral's interrupt, you still cannot interrupt the processor unless the enable in the core is set. That is the simple case; it can get more complicated if there are more layers of interrupt controllers along the way. Actually the simple case is something like a Cortex-M with dozens or hundreds of individual interrupt signals: you still have interrupt enables at both ends and in the core, it is just easier to manage because you have dozens to hundreds of interrupt handlers instead of one mega-handler for everything.
So don't think of the pins on the chip as being interrupts. On older dedicated processors, like the 8088/86, sure, there was one interrupt pin. But general-purpose I/O, sometimes called GPIO and sometimes called ports, is just a peripheral: the pins are just pins you can make go high or low. They are not there to be interrupts, although there may be a feature in that peripheral for that (or maybe there isn't). And again, interrupt signals go through logic gates and have to be enabled by software, at a minimum at both ends of the signal, at the peripheral and at the interrupt controller.
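To make that chain of enables concrete for something like a Cortex-M, here is a hedged sketch: the TIMER0 register names, their addresses and the IRQ number 19 are made up for illustration, while the NVIC set-enable register and the cpsie instruction are defined by the architecture.

    #include <stdint.h>

    #define TIMER0_CTRL   (*(volatile uint32_t *)0x40030000u) /* hypothetical  */
    #define TIMER0_INTEN  (*(volatile uint32_t *)0x40030004u) /* hypothetical  */
    #define NVIC_ISER0    (*(volatile uint32_t *)0xE000E100u) /* set-enable    */

    #define TIMER0_IRQn   19u                 /* assumed vendor IRQ number     */

    void timer0_irq_on(void)
    {
        TIMER0_INTEN |= 1u;                   /* 1) enable at the peripheral   */
        NVIC_ISER0    = 1u << TIMER0_IRQn;    /* 2) enable at the controller   */
        __asm__ volatile ("cpsie i");         /* 3) unmask interrupts in core  */
        TIMER0_CTRL  |= 1u;                   /* finally start the timer       */
    }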
This question may seem slightly vague; however, I am researching how interrupt systems work and their latency. I am trying to understand how architectural facilities such as FIQ on ARM help decrease latency. How does this differ from using an operating system that does not have, or cannot provide, access to these facilities? For example, Windows RT is built for ARM, and this operating system cannot be ported to other architectures.
Simply put - how is interrupt latency different in dedicated architectures that have dedicated operating systems as compared to operating systems that can be ported across many different architectures (Linux for example)?
Sorry for the rant - I'm pretty confused as you can probably tell.
I'll start with your Windows RT example: Windows RT is a port of Windows to the ARM architecture. It is not a 'dedicated operating system'. There are (probably) many OSes that run on only one architecture, but that is more a matter of nobody having bothered to port them for one reason or another.
What does 'port' really mean though?
Windows has a kernel (we'll call it NT here, it doesn't matter) and that NT kernel has a bunch of concepts that need to be implemented. These concepts are things like timers, memory virtualisation, exceptions, etc.
These concepts are implemented differently between architectures, so the port of the kernel and drivers (I will ignore the rest of the OS here; often that is a recompile only) is a matter of using the available pieces of silicon to implement the required concepts. This implementation is called a 'port'.
Let's zoom in on interrupts (AKA exceptions) on an ARM that has FIQ and IRQ.
In general an interrupt can occur asynchronously; by that I mean at any time. The CPU is generally busy doing something when an IRQ is asserted, so that context (we'll call it UserContext1) needs to be stored before the CPU can use any resources that UserContext1 is using. Generally this means storing registers on the stack before using them.
On ARM, when an IRQ occurs the CPU switches to IRQ mode. Registers r13 and r14 have their own copies in IRQ mode; the rest need to be saved if they are used, so that is what happens. Those stores to memory take some time. The IRQ is handled, UserContext1 is popped back off the stack, then IRQ mode is exited.
So the latency in this case might be the time from the IRQ assertion to the time the IRQ vector starts executing. That is going to be some number of clock cycles that depends on what the CPU was doing when the IRQ happened.
The latency before the actual IRQ handling can occur is the time from the IRQ assertion to the time the CPU has finished storing the context.
The latency before user mode code can execute depends on too much stuff in the OS/Kernel to explain here, but the minimum boils down to the time from the IRQ assertion to the return after restoring UserContext1 + the time for the OS context switch.
FIQ: if you are a hard-as-nails programmer you might only need 7 registers to completely handle your interrupt servicing. I mentioned that IRQ mode has its own copy of 2 registers; well, FIQ mode has its own copy of 7 registers. Yup, that's 28 bytes of context that doesn't need to be pushed onto the stack (actually one of them is the link register, so it's really 6 you have free). That can remove the need to store and then restore UserContext1, so the latency can be reduced by up to the time needed for that save/restore.
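If you want to see what a tiny FIQ handler can look like in C, here is a sketch using GCC's ARM interrupt attribute for a classic ARM core; SRC_FIQ_ACK and its address are placeholders for whatever your interrupt source actually requires. Keeping the handler small lets the compiler work mostly within the banked FIQ registers, which is where the saving described above comes from.

    #include <stdint.h>

    #define SRC_FIQ_ACK (*(volatile uint32_t *)0xFFFF0030u)  /* hypothetical */

    volatile uint32_t fiq_events;

    void __attribute__((interrupt("FIQ"))) fiq_handler(void)
    {
        SRC_FIQ_ACK = 1u;     /* acknowledge the source (placeholder write)   */
        fiq_events++;         /* a tiny amount of work, done at FIQ priority  */
    }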
None of this has much to do with the OS. The OS can choose to use or not use these features. The OS can choose to make guarantees about how long it will take to execute the OS's notion of an interrupt handler, or it may not. This is one of the basic concepts of an RTOS: the contract about how long it will be before the handler runs.
The OS is designed for some purpose (and that purpose may be 'general'); that design goal will have a lot more effect on latency than how many targets the OS has been ported to.
Go have a read about something like FreeRTOS, then buy some hardware and try it. Annotate the code to figure out the latencies you really want to look at. It will likely be the best way to get your head around it.
(Multi-CPU systems do it the same way, but with some synchronization and barrier functions and a sprinkling of complexity.)
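One concrete way to do that annotation on a Cortex-M3/M4 board is to timestamp with the DWT cycle counter when you trigger an interrupt and again at the top of its handler. The DWT and DEMCR addresses are architecturally defined; trigger_interrupt() is a placeholder for however you raise the interrupt under test (for example a software-triggered IRQ).

    #include <stdint.h>

    #define DEMCR      (*(volatile uint32_t *)0xE000EDFCu)
    #define DWT_CTRL   (*(volatile uint32_t *)0xE0001000u)
    #define DWT_CYCCNT (*(volatile uint32_t *)0xE0001004u)

    static volatile uint32_t t_fire, t_enter;

    void cyccnt_init(void)
    {
        DEMCR     |= (1u << 24);       /* TRCENA: enable the DWT unit        */
        DWT_CYCCNT = 0;
        DWT_CTRL  |= 1u;               /* CYCCNTENA: start counting cycles   */
    }

    void measured_irq_handler(void)    /* install as the handler under test  */
    {
        t_enter = DWT_CYCCNT;          /* first thing: grab a timestamp      */
        /* ... normal handler work ... */
    }

    extern void trigger_interrupt(void);   /* placeholder                    */

    uint32_t measure_latency_cycles(void)
    {
        t_fire = DWT_CYCCNT;
        trigger_interrupt();
        __asm__ volatile ("dsb\n\tisb");   /* ensure the IRQ is taken now    */
        return t_enter - t_fire;
    }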
I am working on a project where I am trying to figure out how an interrupt is processed in the generic interrupt controller for an ARM architecture. I am working with the PL390 interrupt controller. I see there are lines described as legacy interrupts which bypass the distributor logic, and it says that two interrupts can be programmed as legacy interrupts. Can anyone explain what exactly a legacy interrupt is? I tried searching online without any luck.
Legacy interrupts are the two interrupt signals that existed on ARM before the GIC arrived: nIRQ, the normal interrupt request, and nFIQ, the fast interrupt request.
Since legacy interrupts were designed for single-core processors and have no notion of multiple cores, the reason they bypass the distributor logic should be rather clear: the legacy interrupts are hardwired to one of the cores.
In short, it lets the CPU stay backwards compatible with the older ARM interrupt scheme. For example, a four-core ARM CPU will have four nIRQs and four nFIQs, separate for each of the cores. When you have an old piece of ARM-compatible hardware (which doesn't know about the GIC), you connect it to one core's nIRQ/nFIQ just as if you were connecting it to an old single-core CPU, and its interrupt will always be taken on that one core.
More information can be found here - http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0407e/CCHDBEBE.html