I am reading a status word that consists of 24 bits, and want the LED to change corresponding to the value of the bit. I want the LED to fully turn off, but sometimes instead of turning off it gets brighter.
I am using a simple pin toggle function to toggle the LED:
nrf_gpio_pin_toggle(LED_2);
Is it possible that the LED value resets to 0 but actually stays on, making the LED brighter?
If it gets brighter, then before it was dimmer. To dim an LED, one usually uses pulse-width modulation, which means the LED is being turned on and off very quickly.
The following is speculation ...
If the LED, as part of the PWM process, happens to be in the off state when you try to toggle it, then it will be turned on and the PWM operation (which might be managed for you by a library or interrupt handler) will be canceled. Thus you see the LED at full brightness, which is brighter than it was.
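If that is what is happening, a simple workaround is to stop toggling and instead write the pin state explicitly from the status bit, so any dimming/PWM state cannot invert your intent. A sketch using the SDK's nrf_gpio_pin_write(); the helper name and the active-high assumption are mine, not from your code:

    // Sketch only: drive the LED pin explicitly from the status bit instead of toggling.
    // Assumes LED_2 is already configured as an output and is active high; invert the
    // value if your board wires the LED as active low.
    #include "nrf_gpio.h"

    void update_led_from_status(uint32_t status_word, uint8_t bit_index)
    {
        uint32_t bit_value = (status_word >> bit_index) & 1u;
        nrf_gpio_pin_write(LED_2, bit_value);   // 1 = on, 0 = off (for active-high wiring)
    }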
I've been trying to program my ATtiny817-XPRO to interpret input data from a rotary encoder (the Arduino module); however, I'm having some trouble and can't seem to figure out what the problem is. What I'm trying to do is essentially program a digital combination lock that blinks a red LED every time the rotary encoder is rotated one "tick" (in either direction), and blinks a green LED once the right "combination" has been detected. It's a little bit more involved than that, so when I ran into trouble upon testing my code, I decided to write a simple method to help me troubleshoot/debug the problem. I've included it below:
void testRotaryInput(){
    if(!(PORTC.IN & 0b00000001)){       // if rotary encoder is turned clockwise
        PORTB.OUT = 0b00000010;         // turn on green LED
    }
    else if(!(PORTC.IN & 0b00000010)){  // if rotary encoder is turned CCW
        PORTB.OUT = 0b00000001;         // turn on blue LED
    }
    else{                               // if rotary encoder remains stationary
        PORTB.OUT = 0b00000100;         // turn on red LED
    }

    RTC.CNT = 0;
    while(RTC.CNT<16384){}              // wait 500ms
    PORTB.OUT = 0x00;                   // turn LED off
    while(RTC.CNT<32768){}              // wait another 500ms
}
int main(void)
{
    PORTB.DIR = 0xFF;           // PORT B = output
    PORTC.DIR = 0x00;           // PORT C = input
    RTC.CTRLA = RTC_RTCEN_bm;   // Enable RTC
    PORTB.OUT = 0x00;           // Ensure all LEDs start turned off
                                // ^(not necessary but I do it just in case)^

    //testLED(); <-- previous test I used to make sure each LED works upon start-up

    while(1)
    {
        testRotaryInput();
    }
}
The idea here is that whichever output line arrives at the AVR first should indicate which direction the rotary encoder was rotated, as this dictates the phase shift between the two signals. Depending on the direction of rotation (or lack thereof), a red/green/blue LED will blink once for 500ms, and then the program will wait another 500ms before listening to the rotary encoder output again. However, when I run this code, the LED will either continuously blink red for a while or green for a while, eventually switching from one color to the other with the occasional (single) blue blink. This seems completely random each time, and it seems to completely ignore any rotation I apply to the rotary encoder.
Things I've done to troubleshoot:
Hooked up both outputs of the rotary encoder to an oscilloscope to see if there's any output (everything looked as it should)
Used an external power supply to power the rotary encoder, as I was only reading 1.6V from the 5.0V VCC pin on my ATtiny817-XPRO when it was connected to that (I suspect this was because the LEDs and rotary encoder probably draw too much current)
Measured the voltage from said power supply to ensure that the rotary encoder was receiving 5.0V (I measured approx. 4.97V)
Checked to make sure that the circuitry is correct and working as it should
Unfortunately, none of these things eliminated the problem at hand. Thus, I suspect that my code may be the culprit, as this is my first attempt at using a rotary encoder, let alone interpreting the data generated by one. However, if my code looks like it should work just fine, I'd appreciate any heads-up on this so that I can focus my efforts elsewhere.
Could anyone shed light on what may be causing this issue? I don't think it's a faulty board because I was using these pins two nights ago for a different application without any problems. Also, I'm still somewhat of a newbie when it comes to AVRs, so I'm sure my code is far from being as robust as it could be.
Thanks!
Encoders can behave in various strange ways. You'll have signal bounces like with any switch. You'll have cases where several inputs may be active at once during a turn. Etc. Therefore you need to sample them periodically.
Create a timer which polls the encoder every 5ms or so. Always store the previous read. Don't accept any change to the input as valid until it has been stable for two consecutive reads. This is the simplest form of a digital filter.
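A sketch of that filter; read_encoder_pins() and handle_encoder_change() are placeholders you would map onto your PORTC bits and your own decode logic:

    /* Sketch of the periodic-poll filter: call poll_encoder_5ms() from a timer
     * interrupt or loop roughly every 5 ms. The two functions declared below are
     * placeholders for your pin reads and decoding. */
    #include <stdint.h>

    uint8_t read_encoder_pins(void);                       /* returns the two encoder lines in bits 0-1 */
    void handle_encoder_change(uint8_t from, uint8_t to);  /* your decoding of the accepted new state */

    static uint8_t last_raw;       /* most recent raw sample */
    static uint8_t stable_state;   /* last value accepted as valid */

    void poll_encoder_5ms(void)
    {
        uint8_t raw = read_encoder_pins();

        if (raw == last_raw && raw != stable_state) {
            /* Same value on two consecutive reads: accept it as a real change. */
            uint8_t previous = stable_state;
            stable_state = raw;
            handle_encoder_change(previous, stable_state);
        }
        last_raw = raw;
    }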
Which input is shorted does not show you what direction the encoder was turned. But the order in which they were shorted does.
Normally, rotary encoders have two outputs which are shorted to the ground pin: first shorted, then second shorted, then first released, then second released - this full sequence happens between each click. (Of course there are encoders which have an additional "click" in the middle of the sequence, or no clicks at all, but most of them behave as described above.)
So, generally speaking, you may consider each "CLICK!" movement as 4 phases:
0. Both inputs are released (high) - default position
1. Input A is shorted to ground (low), input B is released (high)
2. Both inputs are shorted (low)
3. Input A is released (high), B is shorted (low).
Rotation in one direction is a passage through phases 0-1-2-3-0. The other direction is 0-3-2-1-0. So, whichever direction the encoder is rotated, both inputs will be shorted to ground at some particular moment.
Usually the bouncing happens on only one of the inputs at a time, so you can treat the bouncing as jumping between two adjacent phases, which makes debouncing much simpler.
Since those phases change very quickly, you have to poll the input pins very often, maybe 1000 times per second, to handle fast rotations.
Code to handle the rotation may be as follows:
signed int encoder_phase = 0;

void poll_encoder() {
    int phase = ((PORTC.IN & (1 << INPUT_A_PINNO)) ? 0 : 1)
              ^ ((PORTC.IN & (1 << INPUT_B_PINNO)) ? 0 : 0b11);
    // the value of the phase variable is 0, 1, 2 or 3, as described above

    int phase_shifted = (phase - encoder_phase) & 3;

    if (phase_shifted == 2) {        // jumped instantly over two phases - error
        encoder_phase = 0;
    } else if (phase_shifted == 1) { // rotating forward
        encoder_phase++;
        if (encoder_phase >= 4) {    // full cycle
            encoder_phase = 0;
            handle_clockwise_rotation();
        }
    } else if (phase_shifted == 3) { // rotating backward
        encoder_phase--;
        if (encoder_phase <= -4) {   // full cycle backward
            encoder_phase = 0;
            handle_counterclockwise_rotation();
        }
    }

    if (phase == 0) {
        encoder_phase = 0;           // reset
    }
}
As others have noted, mechanical encoders are subject to bouncing. You need to handle that.
The simplest way to read such an encoder is to interpret one of the outputs (e.g. A) as a 'clock' signal and the other one (e.g. B) as the direction indicator.
You wait for a falling (or rising) edge of the 'clock' output, and when one is detected, immediately read the state of the other output to determine the direction.
After that, include some 'dead time' during which you ignore any other edges of the 'clock' signal which occur due to the bouncing of the contacts.
Algorithm:
0) read state of 'clock' signal (A) and store ("previous clock state")
1) read state of 'clock' signal (A) and compare with "previous clock state"
2) if clock signal did not change e.g. from high to low (if you want to use the falling edge), goto 1).
3) read state of 'direction' signal (B), store current state of clock to "previous clock state"
4) now you know that a 'tick' occurred (clock signal change) and the direction, handle it
5) disable reading the 'clock' signal (A) for some time, e.g. 10ms, to debounce; after the dead time period has elapsed, goto 1)
This approach is not time critical. As long as you poll the 'clock' at least twice as fast as the shortest expected time between a change of signal A and the corresponding change of signal B (minus the bouncing time of A), which depends on the maximum expected rotation speed, it will work absolutely reliably.
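A rough sketch of that loop, assuming it is called about once per millisecond; the bit masks, the idle-high assumption, and the two handler hooks are placeholders, and the clockwise/counterclockwise assignment may need swapping for your wiring:

    /* Sketch of the clock/direction approach. Call poll_encoder_clock_dir() about
     * once per millisecond; masks and handlers below are placeholders. */
    #include <avr/io.h>
    #include <stdint.h>

    #define ENC_CLK_bm   0b00000001   /* placeholder mask for signal A ('clock') on PORTC */
    #define ENC_DIR_bm   0b00000010   /* placeholder mask for signal B ('direction') on PORTC */
    #define DEAD_TIME_MS 10           /* ignore further 'clock' edges for this long after a tick */

    void handle_clockwise_rotation(void);          /* your application hooks */
    void handle_counterclockwise_rotation(void);

    void poll_encoder_clock_dir(void)
    {
        static uint8_t  prev_clock = ENC_CLK_bm;   /* 'clock' idles high (pull-up assumed) */
        static uint16_t dead_time  = 0;

        if (dead_time > 0) {                       /* still inside the debounce dead time */
            dead_time--;
            return;
        }

        uint8_t clock = PORTC.IN & ENC_CLK_bm;

        if (prev_clock && !clock) {                /* falling edge on 'clock' */
            if (PORTC.IN & ENC_DIR_bm) {
                handle_clockwise_rotation();        /* direction line high (swap if reversed) */
            } else {
                handle_counterclockwise_rotation(); /* direction line low */
            }
            dead_time = DEAD_TIME_MS;              /* start the dead time */
        }
        prev_clock = clock;
    }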
The edge detection of the 'clock' signal can also be performed by a pin-change interrupt which you simply disable for the dead-time period after the interrupt occurred. Handling bouncing contacts via a pin-change interrupt is, however, generally not recommended, because the bouncing (very fast toggling of a pin, with pulses possibly only nanoseconds long) may confuse the hardware.
I have a conundrum. The part I am using (NXP KL27, Cortex-M0+) has an errata in its I2C peripheral such that during receive there is no flow control. As a result, it needs to be a high priority interrupt. I am also using a UART that, by its asynchronous nature, has no flow control on its receive. As a result, it needs to be a high priority interrupt.
Circular Priority
The I2C interrupt needs to be higher priority than the UART interrupt, otherwise an incoming byte can get demolished in the shift register before being read. It really shouldn't work this way, but that's the errata, and so it needs to be higher priority.
The UART interrupt needs to be higher priority than the I2C interrupt, because to close out an I2C transaction the driver (from NXP's KSDK) needs to set a flag and wait for a status bit. During this wait incoming characters on the UART can overflow the non-FIFO'd shift register.
In trying to solve an issue with the UART, I discovered this circular dependency. The initial issue saw characters disappearing from the UART receive and the overrun flag being set. When swapping priorities, the UART was rock solid, never missing a character, but I2C transactions ended up stalling due to overruns.
Possible Solution
The solution I came up with involves changing interrupt priorities on the fly. When the I2C driver is closing out a transaction, it is not receiving, which means the errata that causes bytes to flow in uncontrolled is not an issue. I would like to demote the I2C interrupt priority in the NVIC during this time so that the UART is able to take priority over it, thus making the UART happy (and not missing any characters).
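Roughly, what I have in mind is the following sketch; the priority values, the i2c_closing_transaction flag, and i2c_close_out_done() are placeholders standing in for the real KSDK driver code, not its actual API:

    /* Sketch only: demote the I2C interrupt while its handler busy-waits on the
     * close-out status bit, so the UART interrupt can preempt it. */
    #include <stdbool.h>
    #include "fsl_device_registers.h"   /* device header + CMSIS NVIC_SetPriority() */

    #define I2C_PRIO_NORMAL   0u        /* highest priority: needed while receiving (errata) */
    #define I2C_PRIO_DEMOTED  2u        /* numerically larger = lower than the UART's priority */

    extern volatile bool i2c_closing_transaction;   /* placeholder driver state */
    bool i2c_close_out_done(void);                  /* stands in for the flag/status-bit check */

    void I2C0_IRQHandler(void)
    {
        /* ... normal receive handling runs here at I2C_PRIO_NORMAL ... */

        if (i2c_closing_transaction) {
            /* Not receiving any more bytes here, so the errata path is not a concern. */
            NVIC_SetPriority(I2C0_IRQn, I2C_PRIO_DEMOTED);
            while (!i2c_close_out_done()) {
                /* UART interrupts can preempt this wait now */
            }
            NVIC_SetPriority(I2C0_IRQn, I2C_PRIO_NORMAL);
        }
    }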
Question
I haven't been able to find anything from ARM that states whether changing an interrupt's priority while executing that interrupt takes effect immediately, or whether the priority of the current interrupt is latched when it starts executing. I am hoping someone can say definitively, from knowledge of the architecture or from experience, whether changing the priority takes effect immediately or not.
Other Possible Solutions
There are a number of other possible solutions, along with reasons why they are undesirable. Refactoring the I2C driver to handle the loop in process context rather than interrupt context would be a significant effort digging into the vendor code, and it affects the application code that calls into it. Using DMA for either of these peripherals uses up a non-trivial number of the available DMA channels and incurs the overhead of setting up DMA for each transaction (and also affects the application code that calls into the drivers).
I am open to other solutions, but hesitant to go down any path that causes significant changes to the vendor code.
Test
I have an idea for an experiment to test how the NVIC works in this regard, but I thought I would check here first. If I get to the experiment, I will post a follow-up answer with the results.
Architecturally, this appears to be UNPREDICTABLE (changing the priority of a currently active exception). There seems to be no logic in place to enforce more consistent behavior (i.e. the registration logic you are concerned about is not obviously present in M0/M0+).
This means that if you test for the effectiveness of your workaround, it will probably appear to work, and in your constrained scenario it might be effective. However, there is no guarantee that the same code will work on M3, or that it works reliably in all scenarios (for example, any interaction with debug). You might even observe some completely unpredictable corner-case behavior.
This is specified as unpredictable in section B1.5.4 of the ARM v6-M ARM.
For v7-M (B1.5.4, Exception Priorities and preemption):
This definition of execution priority means that an exception handler can be executing at a priority that is higher than the priority of the corresponding exception. In particular, if a handler reduces the priority of its corresponding exception, the execution priority falls only to the priority of the highest-priority preempted exception. Therefore, reducing the priority of the current exception never permits:
A preempted exception to preempt the current exception handler.
Inversion of the priority of preempted exceptions.
The v7-M aspect clarifies some of the complex scenarios which must be avoided if you attempt to make use of the unpredictable behavior which you have identified as useful with the M0+ part.
Experiment
I coded up a quick experiment today to test this behavior on my particular variant of the Cortex-M0+. I am leaving this as an unaccepted answer, and I believe Sean Houlihane's answer is the most correct (i.e. it is unpredictable). I still wanted to test the behavior and report on it under the specific circumstances in which I am using it.
The experiment was performed on a FRDM-KL43Z board. It has a red LED, a green LED, and two push buttons. The application performed some setup of the GPIO and interrupts and then sat in an infinite loop.
Button 1: Button 1's interrupt handler was initialized to midscale priority (0x80). On every falling edge of button 1 it would pend the interrupt. This interrupt would toggle the green LED's state.
Button 2: Button 2's interrupt handler was initialized to midscale priority (0x80), but this would be changed during execution. The button 2 interrupt handler would run a loop lasting approximately 8 seconds (two phases of four), repeating indefinitely. It would turn on the red LED and decrease its own priority below that of button 1. After four seconds, it would turn off the red LED and increase its own priority above that of button 1. After another four seconds it would repeat.
Expected Results
If the hypothesis proves to be true, then when the red LED is on, pressing button 1 will toggle the green LED, and when the red LED is off, pressing button 1 will have no effect until the red LED turns on again. The button 1 interrupt will not execute until the forever-looping button 2 interrupt drops to a lower priority.
Results
This is the boring section. Everything I expected in the previous section happened.
Conclusion
For the experimental setup (NXP KL43Z Cortex M0+), changing the interrupt priority of the currently executing interrupt takes effect while the interrupt is running. As a result, my hacky workaround of demoting priority during the busy wait and restoring it after should function for what I need.
Edit:
Later Results
Though the experiment was successful, problems started occurring once the workaround for the original issue was implemented. The interaction between the UART and I2C handlers was relatively consistent, but a third peripheral started having very odd behavior in its interrupt handler. Take heed of the warning of UNPREDICTABLE.
One alternative solution could be to defer to another, lower priority, interrupt for the second half of your processing. A good candidate is the PendSV interrupt (if not already in use), which can (only) be triggered from software.
For a more detailed explanation, see this answer to a similar question and this answer about PendSV in general.
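A minimal sketch of that pattern; the handler name, register read, and ring-buffer helpers are placeholders, and only the PendSV mechanics (standard CMSIS) are the point:

    /* Sketch: the high-priority handler does only the urgent work, then pends
     * PendSV (configured to the lowest priority at init) to finish the rest
     * without blocking other interrupts. Buffer helpers are placeholders. */
    #include <stdint.h>
    #include "fsl_device_registers.h"   /* device header + CMSIS SCB definitions */

    void ringbuf_push(uint8_t byte);            /* placeholder buffering */
    void ringbuf_drain(void);                   /* placeholder deferred processing */
    uint8_t uart_read_data_register(void);      /* placeholder for the real register read */

    void UART_RX_IRQHandler(void)               /* placeholder handler name */
    {
        uint8_t byte = uart_read_data_register();   /* grab the byte immediately */
        ringbuf_push(byte);                         /* stash it */
        SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;         /* request PendSV to do the slow part */
    }

    void PendSV_Handler(void)
    {
        ringbuf_drain();                            /* the slow part runs here, at low priority */
    }

    /* somewhere in init: NVIC_SetPriority(PendSV_IRQn, 3);  -- lowest priority on M0+ */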
As the title says, is it generally good practice to use General Purpose Timers for dimming an LED (PWM with variable duty cycle), or is it better to use OS scheduling/tasks when available (RTOS etc.)?
I recently saw an example of a blinking LED using the RTOS internal timers, and I was wondering if the period of the timer can be shortened to the point where you can dim an LED (~2 kHz).
Regards,
Pulsing an LED in software can flicker if some other task interferes with its scheduling, and you won't get much fine control over brightness. So if PWM hardware is available (and it can work with that pin, and isn't needed for something else), I would use the hardware.
A common pattern is to use hardware PWM to control the visible brightness of the LED, and then have a regularly scheduled software task vary the duty cycle smoothly (to produce fades, blinks and so forth), based on a counter and some state/variables which might be controlled by other tasks.
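For example, a sketch along these lines, where the 2 kHz PWM itself is done by a timer peripheral and pwm_set_duty() is a stand-in for whatever your HAL provides (FreeRTOS assumed here only as an illustration):

    /* Sketch: hardware PWM does the 2 kHz dimming; a low-rate task only updates
     * the duty cycle to produce a smooth fade. pwm_set_duty() is a placeholder. */
    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "task.h"

    void pwm_set_duty(uint8_t percent);   /* placeholder for your PWM/HAL call */

    static void led_fade_task(void *arg)
    {
        (void)arg;
        uint8_t duty = 0;
        int8_t  step = 1;

        for (;;) {
            pwm_set_duty(duty);                /* hardware keeps toggling the pin at 2 kHz */
            duty += step;
            if (duty == 0 || duty == 100) {
                step = -step;                  /* reverse direction at the ends of the fade */
            }
            vTaskDelay(pdMS_TO_TICKS(20));     /* update brightness 50 times a second */
        }
    }

    /* created with e.g.:
     * xTaskCreate(led_fade_task, "fade", configMINIMAL_STACK_SIZE, NULL, tskIDLE_PRIORITY + 1, NULL); */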
I'm trying to do some bare-metal programming on a Beaglebone Black using StarterWare. All the modifications needed to run on the Black are already done, and I'm running the DMTimer example, which works well.
As a next step I changed this example so that the ISR just toggles a GPIO (which should only take a few dozen clock cycles). I also changed the timer count and timer reload count of the example to 0xFFFFFF0F, which corresponds to a 10 µs period, so my ISR should be called every 10 µs.
Amazingly, this seems to be the limit: when I use larger timer values, which should result in more frequent calls of the ISR, it still stays at 10 µs; even 5 µs is not possible with the DMTimer example. CLK_M_OSC is already used as the timer clock source, so that should be fine.
So...any idea how ISR can be called faster?
Have you tried adjusting (or disabling) the timer prescaler? I found that the DMTimer example uses the prescaler and I didn't get the behaviour that it suggested (interrupts every 700ms) until I added the line
DMTimerPreScalerClkDisable(SOC_DMTIMER_2_REGS);
After that, it appeared to work correctly.
With the prescaler disabled, you should get 10 µs with a reload counter of 0xFFFFFF06, or 5 µs with a reload counter of 0xFFFFFF83.
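In terms of the StarterWare dmtimer API used by the example, the setup would look roughly like this (a sketch only; it assumes the TIMER2 instance from the example with CLK_M_OSC selected, and header paths may differ in your tree):

    /* Sketch of the relevant timer setup based on the StarterWare dmtimer API. */
    #include "soc_AM335x.h"   /* SOC_DMTIMER_2_REGS */
    #include "dmtimer.h"

    #define TIMER_RELOAD_10US  0xFFFFFF06u   /* the 10 µs reload value from above */

    static void timerSetUp(void)
    {
        DMTimerPreScalerClkDisable(SOC_DMTIMER_2_REGS);            /* no prescaler */
        DMTimerCounterSet(SOC_DMTIMER_2_REGS, TIMER_RELOAD_10US);  /* initial count */
        DMTimerReloadSet(SOC_DMTIMER_2_REGS, TIMER_RELOAD_10US);   /* auto-reload value */
        DMTimerModeConfigure(SOC_DMTIMER_2_REGS, DMTIMER_AUTORLD_NOCMP_ENABLE);
        DMTimerIntEnable(SOC_DMTIMER_2_REGS, DMTIMER_INT_OVF_EN_FLAG);
        DMTimerEnable(SOC_DMTIMER_2_REGS);                         /* start counting */
    }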
I'm struggling to get tickless support working for our xmega256a3 port of FreeRTOS. Looking around, trying to understand under the hood better, I was surprised to see the following line in vTaskStepTick():
configASSERT( xTicksToJump <= xNextTaskUnblockTime );
I don't have configASSERT turned on, but I would think that if I did, that would be raising issues regularly. xTicksToJump is a delta time, but xNextTaskUnblockTime, if I read the code correctly, is an absolute tick time? Did I get that wrong?
My sleep function, patterned after the documentation example looks like this:
static uint16_t TickPeriod;

void sleepXmega(portTickType expectedIdleTime)
{
    TickPeriod = RTC.PER;                             // note the period being used for ticking on the RTC so we can restore it when we wake up
    cli();                                            // no interrupts while we put ourselves to bed
    SLEEP_CTRL = SLEEP_SMODE_PSAVE_gc | SLEEP_SEN_bm; // enable sleepability
    setRTCforSleep();                                 // reconfigure the RTC for sleeping

    while (RTC.STATUS & RTC_SYNCBUSY_bm);
    RTC.COMP = expectedIdleTime - 4;                  // set the RTC.COMP to be a little shorter than our idle time, seems to be about a 4 tick overhead
    while (RTC.STATUS & RTC_SYNCBUSY_bm);

    sei();                                            // need the interrupt to wake us
    cpu_sleep();                                      // lights out
    cli();                                            // disable interrupts while we rub the sleep out of our eyes

    while (RTC.STATUS & RTC_SYNCBUSY_bm);
    SLEEP.CTRL &= (~SLEEP_SEN_bm);                    // Disable Sleep
    vTaskStepTick(RTC.CNT);                           // note how long we were really asleep for
    setRTCforTick(TickPeriod);                        // repurpose RTC back to its original use for ISR tick generation
    sei();                                            // back to our normal interruptable self
}
If anyone sees an obvious problem there, I would love to hear it. The behavior it demonstrates is kind of interesting. For testing, I'm running a simple task loop that delays 2000 ms and then simply toggles a pin I can watch on my scope. After adding some printf's to my function, I can see that it handles the first sleep correctly, but after I exit it, it immediately re-enters with a value near 65535. It dutifully waits that out, then gets the next one correct again, and then wrong (long) again, alternating back and forth.