Getting the value from TCNT0 (AVR timer)

I have a timer which increments by one every 256 cycles. Is there a way that I can get the value from TCNT0 at the point it is called?
I am using AVR Studio 4 and have tried using ldi temp, TCNT0, but I always seem to get 32, which is its address.
Thanks

LDI loads an immediate constant, so ldi temp, TCNT0 just loads the register's address (which is why you always see 32), not its contents. If TCNT0 is within I/O space then you can use IN to retrieve its value; otherwise you will need to use LDS to load it from its memory address (which is usually offset by 0x20 from the I/O register location).
in tmp, TCNT0   ; reads the current timer count into tmp
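For comparison, if C is an option (avr-libc), the same read is a plain assignment and the compiler emits the appropriate IN or LDS for the device; a minimal sketch (read_timer0 is an illustrative name):

#include <avr/io.h>

uint8_t read_timer0(void)
{
    return TCNT0;   /* compiles to IN, or LDS on devices where TCNT0 is memory-mapped */
}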

Related

How to determine if an instruction is long or short at the event of an exception? (Variable Length Instructions)

My question is about Chapter 5 in this link.
I have an error-correction code which simply increments the program counter (PC) by 2 or 4 bytes according to the length of the instruction at the time of the exception. The core is an e200z4.
As far as I know, the e200z4 can also support fixed-length instructions of 4 bytes.
The thing I don't understand is this: to determine whether Variable Length Instructions (VLE) are enabled, we need to check the VLEMI bit in the ESR (Exception Syndrome Register). However, this register always contains 0x00000000. The only interrupt that we end up with is the Machine Check Interrupt (IVOR1) (during power-on/off tests with increasing on-intervals and fixed off-intervals).
So, why does the CPU not provide the information about the length of the instruction, if VLE is used at the moment of the interrupt, for instance via the VLEMI bit inside the ESR? How can I determine whether the instruction at the time of the interrupt is 2 or 4 bytes long, and whether it is fixed-length or variable-length?
Note 1: isOpCode32Bit below decodes the opcode to determine the instruction length, but it is relevant only if isFixedLength is 0, i.e. when (syndrome & VLEMI_MASK) is nonzero. So we need to have the VLEMI value in syndrome somehow, but the ESR seems to be always 0x00 (why?).
Note 2: As mentioned before, we always end up in IVOR1, and the address of the instruction right before the interrupt is reachable (provided in a register).
// IVOR1 (Machine Check Interrupt, assembly part):
mfmcsr r7            // copy MCSR into register 7 (MCSR in Chapter 5 in the link)
store  r7, &syndrome // pseudocode: store r7 to the syndrome variable
// IVOR2:
mfesr  r7            // copy ESR into register 7 (ESR in Chapter 5 in the link)
store  r7, &syndrome
------------------------------------------------------
#define VLEMI_MASK 0x00000020uL

isFixedLength = ((syndrome & VLEMI_MASK) == 0);
if (isFixedLength || isOpCode32Bit)
{
    PC += 4; // instruction is 32-bit, increase PC by 4
}
else
{
    PC += 2; // instruction is 16-bit, increase PC by 2
}
When it comes to how these exception handlers work in real systems:
Sometimes handling the exception only requires servicing a page fault (e.g. via copy-on-write or reloading a page from disk). In such cases we don't even need to know the length of the instruction, just the effective memory address the instruction is accessing, and CPUs generally offer that value. If the page fault can be serviced, then re-running the faulting instruction (without advancing the PC) is appropriate (and if not, then halting the program, also without advancing the PC, is appropriate).
In other cases, such as software emulation for instructions not present in this hardware, presumably hardware designers consider that such a software handler needs to decode the faulting instruction in order to emulate it, and so will figure out the instruction length anyway.
Thus, hardware turns the job of understanding the faulting instruction over to software. Such system software needs deep knowledge of the instruction set architecture, and likely also requires customization for each different hardware instantiation of the instruction set.
So, why does the CPU not provide information about the length of the instruction at the moment of interrupt inside ESR?
No CPU that I know of reports the length of the instruction that caused an exception. If one did, that would be convenient, but only for toy exception handlers. For real systems, this ultimately isn't a true burden.
How to determine if an instruction is long or short at the event of an exception? (Variable Length Instructions)
Decode the instruction (while considering any instruction modes the CPU was in at the time of exception)!
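Putting that advice together with the question's own snippet, the handler might take the shape below. This is a sketch, not the poster's code: nextPc is a hypothetical helper, and isOpCode32Bit stands for the decoder the question already has (the authoritative opcode tables are in the VLE PEM).

#include <stdint.h>

#define VLEMI_MASK 0x00000020uL

/* the question's existing decoder: nonzero if the first halfword of a
   VLE instruction indicates a 32-bit encoding */
extern int isOpCode32Bit(uint16_t firstHalfword);

static uint32_t nextPc(uint32_t pc, uint32_t syndrome)
{
    if ((syndrome & VLEMI_MASK) == 0)
        return pc + 4;  /* fixed-length (BookE) encoding: always 4 bytes */

    /* VLE page: the length is encoded in the instruction itself, so
       fetch its first halfword at the saved PC and decode it */
    uint16_t firstHalfword = *(const uint16_t *)(uintptr_t)pc;
    return pc + (isOpCode32Bit(firstHalfword) ? 4u : 2u);
}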

How to use 32 bit variable for 16 bit TIMER register?

If one needs to write a function that takes a 32-bit variable as an argument and assigns it to a 16-bit TIMER register (the embedded target has a 16-bit timer, and we need to deal with 32-bit values to increase the resolution of the timer interrupt), how could this be done?
You can use the 16-bit timer to trigger an interrupt that drives a 16-bit software counter. Increment this counter every interrupt. When it overflows, or hits your target count, you can set a flag for the main program loop to do something; a sketch follows below.
to increase the resolution of the timer interrupt
You cannot increase the resolution: it is 16 bits, and the timer hardware prescaler sets the resolution limits.
You might get better accuracy, though, by improving the quality of the clock source oscillator.
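A minimal C sketch of that scheme (all names are illustrative, and the hardware-register write is left as a device-specific comment): the low 16 bits go to the hardware timer, the high 16 bits are counted in software.

#include <stdint.h>

static volatile uint16_t softCounter;   /* counts timer interrupts */
static volatile uint16_t softTarget;    /* upper 16 bits of the 32-bit value */
static volatile uint8_t  done;          /* flag polled by the main loop */

/* attach this to the 16-bit timer's overflow/compare interrupt */
void timer16Isr(void)
{
    if (++softCounter >= softTarget) {
        softCounter = 0;
        done = 1;   /* main loop sees this and acts */
    }
}

void setTimeout32(uint32_t value)
{
    softTarget = (uint16_t)(value >> 16);   /* high half, counted in software */
    /* write the low 16 bits to the device's period/compare register,
       e.g. TIMER_PERIOD = (uint16_t)value;  (device-specific) */
}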

Where can I find the device specific JTAG instructions for Cortex-M3?

I'm trying to communicate with a Cortex-M3 based microcontroller (LPC1769) through JTAG. I already have the hardware required, and have managed to get an example program to work, but to progress further, I need to know the device-specific JTAG instructions that are available in this case. I have read the corresponding section of the Cortex-M3 technical reference manual (link), and all that told me was that the device uses a standard CoreSight debug port. In particular, I'd like to read the device ID with the IDCODE instruction. Some sites suggest that the IDCODE might be b0001 or b1110 for this device, but neither of them seems to work. b0001 seems more likely to me, as that's the value I read from the IR after the TAP has been reset.
I also considered the possibility that the instruction I'm using is correct, and I'm just not reading the device ID register properly. I'm using an FTDI cable with the FT232H chip, and the application I'm using is based on FTDI's AN129 example code (link), using MPSSE commands. I use the 0x2A command to clock data in from the TAP, the 0x1B command to clock data out to the TAP, and the 0x3B command to do both simultaneously. If anyone could provide some insight as to what I'm doing wrong (or whether I'm using the right IDCODE instruction at all), that would be much appreciated.
EDIT:
I made some progress, but the IDCODE instruction still eludes me. I managed to read the Device ID after setting the TAP controller to Test-Logic-Reset state (which loads the IDCODE instruction in the IR). However, I tried all possible (16) instructions, and while some of them resulted in different reads from the DR, none loaded the Device ID register.
This is the function I use to insert the instruction, once the TAP controller is in Shift-IR state:
int clockOut(FT_HANDLE* ftHandle, BYTE data, BYTE length)
{
    FT_STATUS ftStatus = FT_OK;
    BYTE byOutputBuffer[1024];   // buffer for MPSSE commands and data sent to the FT232H
    DWORD dwNumBytesToSend = 0;  // index into the output buffer
    DWORD dwNumBytesSent = 0;    // count of bytes actually sent - used with FT_Write

    // 0x1B: MPSSE "clock data bits out" command (LSB first, on -ve clock edge)
    byOutputBuffer[dwNumBytesToSend++] = 0x1B;
    // Number of clock pulses = (length - 1) + 1; this way, the length given as
    // the parameter of the function is the actual number of clock pulses.
    byOutputBuffer[dwNumBytesToSend++] = length - 1;
    // The bits to shift out
    byOutputBuffer[dwNumBytesToSend++] = data;

    // Send off the buffered MPSSE command
    ftStatus = FT_Write(*ftHandle, byOutputBuffer, dwNumBytesToSend, &dwNumBytesSent);
    return ftStatus;
}
The length parameter is set to 4, and the data parameter is set to 0x0X (where I tried all possible values for X, none of which led to success).
I managed to get it to work. The problem was that when I sent out 4 bits to the IR, it in fact received 5. After the transmission finished, the next rising edge of TCK was supposed to change the state of the TAP controller, but as the controller was still in the Shift-IR state, that edge not only changed the state but also sampled TDI, performing another (fifth) shift. To work around this, I shifted only the lower 3 bits of the instruction, and then used a 0x4B MPSSE command to clock out a TMS transition (changing the state) while simultaneously sending out the MSB of the instruction.
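In MPSSE terms the fix might look like the fragment below (a sketch, not the poster's exact code; instruction is a hypothetical variable holding the 4-bit IR value, and ftHandle is as in clockOut() above). Per FTDI's MPSSE documentation, the 0x4B command shifts its data bits onto TMS while bit 7 of the data byte sets the TDI level, so the final instruction bit is sampled on the same edge that leaves Shift-IR:

FT_STATUS ftStatus;
BYTE buf[6];
DWORD n = 0, sent = 0;

buf[n++] = 0x1B;                   // clock data bits out to TDI, LSB first
buf[n++] = 2;                      // 3 clock pulses: IR bits 0..2
buf[n++] = instruction & 0x07;

buf[n++] = 0x4B;                   // clock TMS bits out (bit 7 of data = TDI level)
buf[n++] = 0;                      // 1 clock pulse
buf[n++] = (BYTE)((instruction & 0x08) ? 0x81 : 0x01);  // TMS=1 (to Exit1-IR), TDI=IR bit 3

ftStatus = FT_Write(*ftHandle, buf, n, &sent);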

ATMega128 Output Flicker on Startup

I'm using an ATMega128 micro and initialise all of my pins as outputs, driven low, at the start of my main code:
PORTB=0x00;
DDRB=0xFF;
However, on startup the output associated with PORTB.0 flicks high for a split second (I've caught it on the scope), and it seems the other outputs do the same: the pin appears to go LOW-HIGH-LOW. I've done some reading suggesting it could be caused by the tri-state-to-output switch during startup, so I've set the PUD register bit to 1 before the pin inits and back to 0 after, and still no luck. Does anyone have any other ideas to keep that output off during startup? It doesn't always occur, either, which is what has me stumped.
The fundamental problem is a hardware issue: the lack of a pull-down resistor on the GPIO, which leaves it floating while in the reset-default high-impedance input state.
The best you can do in software is to initialise the GPIO at the earliest opportunity immediately after reset. To do this in CodeVisionAVR you need to use a customised startup.asm in your project, as described in section 4.18 of the CodeVisionAVR compiler manual:
...
Where I suggest you initialise PORTB and DDRB as follows:
LDI R16, 0x00    ; all PORTB outputs low
OUT PORTB, R16
LDI R16, 0xFF    ; all PORTB pins set as outputs
OUT DDRB, R16
immediately before step 2, i.e. as the first four instructions. The time the GPIO is left floating will then possibly be too short for the relay to react, if it is a mechanical relay. You may still have a problem with a solid-state relay. The length of any pulse may also depend on the power-supply rise time; if it is slow, you may get a longer pulse.
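For comparison only (the question uses CodeVisionAVR, so this is a swapped-in avr-gcc idiom, not the answer's method): under avr-gcc the same early initialisation can be placed in the .init3 section, which runs right after stack setup and before the C runtime copies .data and clears .bss:

#include <avr/io.h>

void early_port_init(void) __attribute__((naked, used, section(".init3")));
void early_port_init(void)
{
    PORTB = 0x00;  /* outputs low */
    DDRB  = 0xFF;  /* all PORTB pins as outputs */
}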

Why is disabling interrupts necessary here?

static void RadioReleaseSPI(void) {
    __disable_interrupt();
    spiTxRxByteCount &= ~0x0100;
    __enable_interrupt();
}
I understand that multiple tasks may attempt to use the SPI resource. spiTxRxByteCount is a global variable used to keep track of whether the SPI is currently in use by another task. When a task requires the SPI it can check the status of spiTxRxByteCount to see if the SPI is being used. When a task is done using the SPI it calls this function and clears the bit, to indicate that the SPI is now free. But why disable the interrupts first and then re-enable them after? Just paranoia?
The &= will do a read-modify-write operation - it's not atomic. You don't want an interrupt changing things in the middle of that, resulting in the write storing an incorrect value.
You need to disable interrupts to ensure atomic access. You don't want any other process to access and potentially modify that variable while you're reading it.
From Introduction to Embedded Computing:
The Need for Atomic Access
Imagine this scenario: foreground program, running on an 8-bit uC,
needs to examine a 16-bit variable, call it X. So it loads the high
byte and then loads the low byte (or the other way around, the order
doesn’t matter), and then examines the 16-bit value. Now imagine an
interrupt with an associated ISR that modifies that 16-bit variable.
Further imagine that the value of the variable happens to be 0x1234 at
a given time in the program execution. Here is the Very Bad Thing
that can happen:
foreground loads high byte (0x12)
ISR occurs, modifies X to 0xABCD
foreground loads low byte (0xCD)
foreground program sees a 16-bit value of 0x12CD.
The problem is that a supposedly indivisible piece of data, our
variable X, was actually modified in the process of accessing it,
because the CPU instructions to access the variable were divisible.
And thus our load of variable X has been corrupted. You can see that
the order of the variable read does not matter. If the order were
reversed in our example, the variable would have been incorrectly read
as 0xAB34 instead of 0x12CD. Either way, the value read is neither
the old valid value (0x1234) nor the new valid value (0xABCD).
Writing ISR-referenced data is no better. This time assume that the
foreground program has written, for the benefit of the ISR, the
previous value 0x1234, and then needs to write a new value 0xABCD. In
this case, the VBT is as follows:
foreground stores new high byte (0xAB)
ISR occurs, reads X as 0xAB34
foreground stores new low byte (0xCD)
Once again the code (this time the ISR) sees neither the previous
valid value of 0x1234, nor the new valid value of 0xABCD, but rather
the invalid value of 0xAB34.
While spiTxRxByteCount &= ~0x0100; may look like a single instruction in C, it actually compiles to several CPU instructions. Compiled with GCC, the assembly listing looks like this:
        # spiTxRxByteCount &= ~0x0100;
        movl    _spiTxRxByteCount, %eax     # load the 32-bit variable
        andb    $254, %ah                   # clear bit 8 (the ~0x0100 mask)
        movl    %eax, _spiTxRxByteCount     # store it back
If an interrupt comes in between any of those instructions and modifies the data, the final write will store a stale value, undoing the interrupt's change. So you need to disable interrupts before you operate on the variable, and also declare it volatile.
There are two reasons why you should be disabling interrupts:
1. The &= is a read-modify-write operation, which by nature is not atomic. It consists of a read, a bitwise AND, and a write. You don't want this operation to be interrupted by an ISR (interrupt service routine). The ISR could modify spiTxRxByteCount after the read and before the write; the write would then be based on an outdated value and you would lose information.
2. __disable_interrupt() and __enable_interrupt() serve as software barriers. Even if optimization is enabled, the compiler must not move the read or the write across the two barriers. Also, the compiler must not cache the value of spiTxRxByteCount across the two barriers. If there were no barriers, the compiler would be allowed to hold a copy of spiTxRxByteCount in some CPU register even across multiple invocations of RadioReleaseSPI(). This would typically happen if inlining is enabled and RadioReleaseSPI() is called repeatedly.
That disabling and enabling interrupts serve as barriers is at least as important as avoiding the interruption by an ISR, IMHO. But it seems to be overlooked sometimes.
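One common refinement, sketched below under the assumption that the toolchain provides interrupt-state intrinsics (the IAR compilers do, via intrinsics.h): save and restore the interrupt state instead of unconditionally re-enabling, so the function also behaves correctly when called with interrupts already disabled.

#include <intrinsics.h>   /* IAR: __istate_t, __get_interrupt_state(), __set_interrupt_state() */

extern volatile unsigned int spiTxRxByteCount;  /* declared volatile, as noted above */

static void RadioReleaseSPI(void)
{
    __istate_t s = __get_interrupt_state();  /* remember the caller's interrupt state */
    __disable_interrupt();
    spiTxRxByteCount &= ~0x0100;             /* the protected read-modify-write */
    __set_interrupt_state(s);                /* restore rather than blindly re-enable */
}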
