In the ARM architecture, which registers are used to load the address of the ISR into the program counter?
Is there hardware logic in the GIC or in the vendor's IC (like TI, NXP) that provides the ISR instruction address to the program counter?
ARM CPUs use a so-called vector table. It's an area in memory that contains the start addresses of all exception handlers, aka ISRs.
So if the vector table starts at 0x08000000 and the SysTick exception occurs, the CPU interrupts the current work and loads the SysTick ISR start address into the program counter (PC). It can be found at 0x08000000 + 4 * 15 = 0x0800003C, as SysTick has exception number 15 and each table entry is 4 bytes wide.
Before PC is loaded, several registers (incl. PC) are saved on the stack.
The address of the vector table is configured via the VTOR register.
Vector table entries 0 to 15 are defined by the ARM architecture and are the same for all of these CPUs and MCUs (entry 0 actually holds the initial stack pointer rather than a handler address). Exception numbers 16 and higher are vendor dependent and usually used for hardware-related events such as UART, SPI, or DMA events.
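As a minimal sketch of what such a table looks like in C (the symbols, the section name and the zero-filled entries are illustrative, not taken from any particular vendor's startup code):

/* Minimal Cortex-M-style vector table sketch; _estack and the section
   name ".isr_vector" come from a typical linker script, not from a
   specific vendor package. */
#include <stdint.h>

extern uint32_t _estack;       /* top of RAM, defined by the linker script */
void Reset_Handler(void);      /* entry 1: executed after reset */
void SysTick_Handler(void);    /* entry 15: SysTick ISR */

__attribute__((section(".isr_vector")))
const void *vector_table[] = {
    &_estack,                  /* entry 0: initial stack pointer */
    Reset_Handler,             /* entry 1: reset vector */
    0, 0, 0, 0, 0, 0, 0,       /* entries 2..8: faults etc. (omitted here) */
    0, 0, 0, 0, 0, 0,          /* entries 9..14: reserved/system handlers */
    SysTick_Handler,           /* entry 15: loaded into PC on SysTick */
};

The linker places this array at the address the hardware expects (or at the address later written to VTOR), so the CPU can fetch the handler address directly from it when the exception fires.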
I'm a student and I'm currently learning the basics of microprocessors.
I've been assigned a task with no prior knowledge of the subject, and I've been struggling to find the answer, so I wanted to give it a shot on Stack Overflow.
The task is to write a simple program in the C programming language to test the Arduino board: it first initializes the 12th pin and afterwards continually drives that pin high (5 V) for 5 seconds and low (0 V) for 1 second.
I would be very happy with even a modest solution or explanation.
Thank you in advance.
Edit:
I'm referring to the Arduino hardware.
Since the task is to set a certain pin as an output, the first thing you need to do is check the board schematics to find out which microcontroller port and pin "pin 12" corresponds to. Microcontroller ports often have a one-letter name such as PORTA, PORTB, etc. This is also the case for AVR.
Once you have found the correct port, you also have to figure out which bit in that port register to set. There will be 8 pins per port, since each port on this MCU corresponds to an 8-bit register.
Since you want this pin to be an output, you have to configure a "data direction register" to make it such. On AVR (and most Motorola-flavoured MCUs), these registers are called DDRx, where x is the port letter. See the AVR manual, GPIO section. You set the DDR register as output by writing a one to the bit corresponding to the pin.
Once that is done, you can set the relevant bit in the actual port register to 1 or 0, depending on whether you want a high or low signal. These registers are called PORTx on AVR.
To create a 5 second delay, it is likely enough for hobbyist/student purposes to call a "busy-wait" function. The Arduino libs have the delay() function, which should suffice: simply wait 5000 ms. In professional/real-world applications, busy-wait delays should be avoided; there you would rather use the on-chip hardware peripheral timers, setting a flag when the timer has elapsed.
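A minimal sketch, assuming an ATmega328P-based board such as the Uno, where digital pin 12 maps to PB4 (verify this against your board's schematic):

/* Blink "pin 12" on a plain avr-gcc toolchain; the PB4 mapping and the
   16 MHz clock are assumptions to check against your actual board. */
#define F_CPU 16000000UL          /* assumed CPU clock, used by _delay_ms() */
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= (1 << DDB4);          /* data direction register: PB4 as output */

    for (;;) {
        PORTB |= (1 << PORTB4);   /* drive the pin high (5 V) */
        _delay_ms(5000);          /* busy-wait 5 seconds */
        PORTB &= ~(1 << PORTB4);  /* drive the pin low (0 V) */
        _delay_ms(1000);          /* busy-wait 1 second */
    }
}

In the Arduino IDE, the same behaviour is pinMode(12, OUTPUT) followed by digitalWrite() and delay() calls in loop().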
From your post I take it that you have an Arduino board.
It is unlikely that someone told you to program it in C if you don't know any C, and programming the Arduino's AVR microcontroller bare metal in C is next to impossible for someone at your skill level.
So let's assume you're supposed to complete the indeed very simple task of programming an Arduino with the Arduino IDE in C++ to do what is asked.
All you have to do is follow this link:
https://www.arduino.cc/en/Guide
I won't give you any code as this would take away the learning experience from you.
You will have to configure pin 12 as a digital output.
You will have to find out how to set a digital output LOW and HIGH.
You will find out how to pause/delay your program for a number of seconds.
I am programming a Texas Instruments TMS320F28335 Digital Signal Controller (DSC) and I am implementing the communication with a resolver-to-digital converter, the AD2S1205 (datasheet: https://www.analog.com/media/en/technical-documentation/data-sheets/AD2S1205.pdf). I have to implement the "Supply sequencing and reset" procedure and the procedure for reading and sending the speed and position values through the SPI serial interface back to the DSC. I'm new to firmware and there are several things I don't know how to do:
In the resolver datasheet (page 16) you are asked to wait for the supply voltage Vdd to reach its operating range before moving the reset signal. How can I know when this happened?
To ask the resolver to read and transmit the position and speed information, I must observe the timing diagram (page 15); e.g. before putting /RD = 0 I have to wait for /RDVEL to remain stable for t4 = 5 ns. What should I insert in the code before the instruction that lowers /RD to make sure that 5 ns have passed? I could pass 0.005 to the DELAY_US(A) function available on the DSC (which delays for A microseconds), but I don't know whether that will actually work, or whether this is the right way to satisfy a device's timing diagram.
In the “/RD input” section (page 14) it is specified that the high-to-low transition of /RD must occur when the clock is high. How can I be sure that the line of code that lowers /RD is running when the clock is high?
Connect the chip's Vdd to an ADC input of your controller through a voltage divider. Release the chip's reset once the measured Vdd is within the operating range.
Your uC runs at 150 MHz, so its clock period is 6.67 ns, which is already longer than the 5 ns required. Whatever you do, you can't change the pin faster, so this problem does not exist for you.
Connect CLKIN to an input pin, poll it, and change /RD while the clock is high.
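A sketch of that last point, assuming for illustration that CLKIN is wired to GPIO10 and /RD to GPIO11 (adapt the pin numbers to your wiring; the header is from TI's DSP2833x support files):

/* Poll the clock input and drop /RD only while the clock is high.
   GPIO10 and GPIO11 are placeholder pins, not a real pin assignment. */
#include "DSP2833x_Device.h"    /* TI F28335 register definitions */

static void assert_rd_while_clock_high(void)
{
    while (GpioDataRegs.GPADAT.bit.GPIO10 == 0) {
        ;                                   /* busy-poll until CLKIN is high */
    }
    GpioDataRegs.GPACLEAR.bit.GPIO11 = 1;   /* drive /RD low while CLKIN is high */
}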
I'm trying to understand how interrupts are handled by the system, and how it works when there is a DMA controller integrated in the system.
I will express what I understood until now, and I would like to have some feedback if I'm right or not.
In order for the system to catch I/O actions performed by some device, the system uses what are called interrupts.
The system sets up interrupts for given actions (we're interested in, for example typing on a keyboard), and once the action is performed the system catches it.
Now I have some doubts: once we catch an interrupt, what happens in the background? What are the overheads? What does the CPU need to set up? Is there a context switch? How does the interrupt handler work?
The CPU has to do some work in order to handle the interrupt; does it read the device's registers and write the "message" into memory, in order for the user to see it?
If we have DMA instead, once the CPU catches the interrupt it doesn't need to handle the memory access for the device, so it can perform other things until the DMA controller interrupts the CPU, telling it that the transfer is completed and that the CPU can safely finish the handling?
As you can see, there is some stuff I need to clarify. I would really appreciate your help. I know that an answer to all those questions could fill a book, but all I need is to know how these things are connected, to get an intuition of what's going on behind the scenes so I can reason about it more easily.
Interrupts are handled by something called Interrupt Service Routines (ISRs). These are functions implemented by the kernel and registered with the hardware. Each type of interrupt is registered with a separate handler.
When the hardware receives an interrupt, it halts the execution of the running process (on that processor), pushes the state of the process (registers, flags, segments) onto the stack and executes the ISR.
Apart from saving the context, the hardware also does one more important thing: it changes the processor context to privileged mode (a lower ring). This is, of course, only if the processor is not already in Ring 0 and if privileged operations are required by the ISR. There is a flag in the Interrupt Descriptor Table (IDT) which tells the processor whether it is a user-mode or a privileged-mode exception.
Since these ISRs are written by the kernel, they are trusted. Each ISR performs whatever is required; for example, in the case of a keyboard interrupt, it moves the bytes read into the input stream of the foreground process.
After the ISR is done (signaled by an iret instruction on x86), the state of the program is popped off the stack and the execution of the process continues.
Yes, this can be thought of as a context switch, but it really isn't, since no other process is loaded. It is better thought of as a halt until a more important job is done.
While this has some overhead, it is small in the case of interrupts like keyboard interrupts: the ISRs are very short, and the interrupts themselves are relatively infrequent.
But say there is hardware that does jobs at a very regular interval, like disk reads/writes or a network card. In this case, interrupting again and again would be very costly.
So what we use is DMA (direct memory access). The processor allocates some physical memory to these devices. They can access this part of the RAM without halting the process, since the processor's intervention is not required.
They keep doing all the I/O they need to, but in the end, when the job is done (or if it fails), they signal the processor with a single interrupt.
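A minimal sketch of that completion-interrupt pattern (every name here is hypothetical, just to show the flow):

/* Foreground code starts a transfer, then only reacts to the single
   "done" interrupt; the device moves the data into RAM by itself. */
#include <stdint.h>

volatile int transfer_done = 0;   /* written by the ISR, read by foreground code */

void dma_complete_isr(void)       /* registered as the DMA-done interrupt handler */
{
    /* acknowledge the (hypothetical) DMA controller here */
    transfer_done = 1;            /* signal the foreground code */
}

int main(void)
{
    /* ... program the (hypothetical) DMA controller and start the transfer ... */
    while (!transfer_done) {
        /* the CPU is free to do unrelated work, or sleep, while the DMA runs */
    }
    /* the data is already in RAM; process it here */
    return 0;
}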
We are going to assign priority 5 to Port F, so we write the following code:
int main() {
    NVIC_PRI7_R |= 0x00A00000;
    NVIC_EN0_R = 0x40000000;
My question is: where does the value 0x00A00000 come from?
The Nested Vectored Interrupt Controller (NVIC) registers are present in the ARM Cortex M architecture, so I am supposing that you are programming for that kind of target.
According to the ARM Cortex M manual:
The NVIC supports up to 240 interrupts, each with up to 256 levels of priority that can be changed dynamically. The processor and NVIC can be put into a very low-power sleep mode, leaving the Wakeup Interrupt Controller (WIC) to identify and prioritize interrupts.
The NVIC_PRI7_R register holds the priority fields for interrupt numbers 28 to 31; writing the field that belongs to a given interrupt sets the priority at which that interrupt is taken (read the documentation for more).
About the 0x00A00000: it acts as a mask to set a few bits to '1' in NVIC_PRI7_R. Precisely, 0x00A00000 == 0b101000000000000000000000, so the |= operation sets two bits to one. GPIO Port F is interrupt number 30 on this MCU, and its 3-bit priority field occupies bits 23:21 of NVIC_PRI7_R, so writing 0b101 == 5 into those bits (5 << 21 == 0x00A00000) sets its priority to 5. Check the ARM Cortex-M manual and the MCU datasheet to confirm which bits hold a given interrupt's priority field.
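A small piece of plain C showing the arithmetic behind that mask:

/* Builds the mask: priority 5 shifted into bits 23:21, the priority
   field for interrupt 30 in NVIC_PRI7_R. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t priority = 5;             /* desired priority, 0b101 */
    uint32_t mask = priority << 21;    /* move it into bits 23:21 */
    printf("0x%08" PRIX32 "\n", mask); /* prints 0x00A00000 */
    return 0;
}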
Hope this helps you understand this way of doing things a bit more.
I understand the basic difference between a function call and an interrupt (ISR) jump from the SE question below.
difference between function call & ISR
But I am still not clear about which registers will be pushed to and popped from the stack in each case. How does context switching happen in each case? Since we don't know when an interrupt will occur, what do we need to save (variables, PC, flags (PSW), registers, context) before entering the ISR?
How can we resume the original context without any data loss in a multi-threaded environment?
I tried to google it & I found the needed information from this:
Interrupts
Context Switch
Thanks @Drew McGowen
So to sum up, the general sequence for an interrupt is as follows:
Foreground code is running, interrupts are enabled
Interrupt event sends an interrupt request to the CPU
After completing the current instruction(s), the CPU begins the interrupt response
automatically saves current program counter
automatically saves some status (depending on CPU)
jump to the correct interrupt service routine for this request
ISR code saves any registers and flags it will modify
ISR services the interrupt and re-arms it if necessary
ISR code restores any saved registers and flags
ISR executes a return-from-interrupt instruction or sequence
return-from-interrupt instruction restores automatically-saved status
return-from-interrupt instruction recovers saved program counter
Foreground code continues to run from the point it responded to the interrupt
As usual, the details of this process will depend on the CPU design. Many devices use the hardware stack for all saved data, but RISC designs typically save the PC in a register (the link register). Many designs also have separate duplicate registers that can be used for interrupt processing, thus reducing the amount of state data that must be saved and restored.
Note that saving and restoring the foreground code's state is generally a two-step process, for reasons of efficiency. The hardware response to the interrupt automatically saves the most essential state, but the first lines of ISR code are usually dedicated to saving additional state (usually the condition flags, if not saved by the hardware, along with additional registers). This two-step process is used because every ISR has different requirements for the number of registers it needs, so every ISR may need to save different registers, and different numbers of registers. It ensures all appropriate state data is saved without wasting time saving registers unnecessarily (that is, saving registers that are not modified in the ISR and thus didn't need to be saved). A very simple ISR may not need to use any registers, another may need only one or two, while a more complicated ISR may need a large number. In every case, the ISR should only save and restore the registers it actually uses.
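As a small illustration of that second step, assuming an AVR target with avr-libc (the vector name is only an example), the compiler-generated ISR prologue saves only what the handler body actually clobbers:

/* The ISR() macro emits a prologue that saves SREG plus the few working
   registers this short body modifies, and an epilogue that restores them
   and returns with RETI; the hardware has already saved the PC. */
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint8_t tick_count = 0;

ISR(TIMER0_OVF_vect)      /* Timer 0 overflow vector, as an example */
{
    tick_count++;         /* short body, so very few registers are stacked */
}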
I'm sure there are different implementations based on the CPU you're using. On a general level, a function call stores the input parameters in the output registers (%o0-%o7 on SPARC), which are then available in the %i0-%i7 registers of the callee's register window. The callee function places the return value in its %i0 register, where it becomes available in the caller's %o0 register. According to the SPARC manual, each interrupt is:
Accompanied by data, referred to as an “interrupt packet”.
An interrupt packet is 64 bytes long, consisting of eight 64-bit doublewords.
According to this source, the data in your current executing thread is:
Saved either on a stack (PDP-11, VAX, MIPS, x86_64)
or in a single set of dedicated registers (ARM, PowerPC)
The above source also mentions how interrupts are handled on a couple of different architectures.
Please let me know if you have any questions!