I've been trying to program my ATtiny817-XPRO to interpret input data from a rotary encoder (the Arduino module); however, I'm having some trouble and can't seem to figure out what the problem is. What I'm trying to do is essentially program a digital combination lock that blinks a red LED every time the rotary encoder is rotated one "tick" (in either direction), and blinks a green LED once the right "combination" has been detected. It's a bit more involved than that, so when I ran into trouble upon testing my code, I decided to write a simple method to help me troubleshoot/debug the problem. I've included it below:
void testRotaryInput(){
    if(!(PORTC.IN & 0b00000001)){       // if rotary encoder is turned clockwise
        PORTB.OUT = 0b00000010;         // turn on green LED
    }
    else if(!(PORTC.IN & 0b00000010)){  // if rotary encoder is turned CCW
        PORTB.OUT = 0b00000001;         // turn on blue LED
    }
    else{                               // if rotary encoder remains stationary
        PORTB.OUT = 0b00000100;         // turn on red LED
    }
    RTC.CNT = 0;
    while(RTC.CNT<16384){}              // wait 500ms
    PORTB.OUT = 0x00;                   // turn LED off
    while(RTC.CNT<32768){}              // wait another 500ms
}
int main(void)
{
    PORTB.DIR = 0xFF;           // PORT B = output
    PORTC.DIR = 0x00;           // PORT C = input
    RTC.CTRLA = RTC_RTCEN_bm;   // Enable RTC
    PORTB.OUT = 0x00;           // Ensure all LEDs start turned off
                                // ^(not necessary but I do it just in case)^

    //testLED(); <-- previous test I used to make sure each LED works upon start-up

    while(1)
    {
        testRotaryInput();
    }
}
The idea here is that whichever output line reaches the AVR first should indicate which direction the rotary encoder was rotated, since the direction dictates the phase shift between the two signals. Depending on the direction of rotation (or lack thereof), a red/green/blue LED should blink once for 500ms, and then the program should wait another 500ms before listening to the rotary encoder output again. However, when I run this code, the LED either blinks red continuously for a while or green for a while, eventually switching from one color to the other with the occasional (single) blue blink. This seems completely random each time, and it completely ignores any rotation I apply to the rotary encoder.
Things I've done to troubleshoot:
Hooked up both outputs of the rotary encoder to an oscilloscope to see if there's any output (everything looked as it should)
Used an external power supply to power the rotary encoder, as I was only reading 1.6V from the 5.0V VCC pin on my ATtiny817-XPRO when it was connected to that (I suspect this was because the LEDs and rotary encoder probably draw too much current)
Measured the voltage from said power supply to ensure that the rotary encoder was receiving 5.0V (I measured approx. 4.97V)
Checked to make sure that the circuitry is correct and working as it should
Unfortunately, none of these things eliminated the problem at hand. Thus, I suspect that my code may be the culprit, as this is my first attempt at using a rotary encoder, let alone interpreting the data generated by one. However, if my code looks like it should work just fine, I'd appreciate any heads-up on this so that I can focus my efforts elsewhere.
Could anyone shed light on what may be causing this issue? I don't think it's a faulty board because I was using these pins two nights ago for a different application without any problems. Also, I'm still somewhat of a newbie when it comes to AVRs, so I'm sure my code is far from being as robust as it could be.
Thanks!
Encoders can behave in various strange ways. You'll have signal bounces like with any switch. You'll have cases where several inputs may be active at once during a turn. Etc. Therefore you need to sample them periodically.
Create a timer which polls the encoder every 5ms or so. Always store the previous read. Don't accept any change to the input as valid until it has been stable for two consecutive reads. This is the simplest form of a digital filter.
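A minimal sketch of that two-consecutive-reads filter, assuming the two encoder lines are packed into a 2-bit raw sample (all names here are illustrative, not from any vendor header):

```c
#include <stdint.h>

/* Accept a raw 2-bit encoder reading (A in bit 0, B in bit 1) only after
 * it has been identical on two consecutive polls. */
static uint8_t last_raw = 0;      /* previous raw sample   */
static uint8_t stable_state = 0;  /* last accepted reading */

/* Call from a periodic (e.g. 5 ms) timer tick.
 * Returns 1 when stable_state was updated, 0 otherwise. */
int debounce_sample(uint8_t raw)
{
    int changed = 0;
    if (raw == last_raw && raw != stable_state) {
        stable_state = raw;   /* same value twice in a row: accept it */
        changed = 1;
    }
    last_raw = raw;
    return changed;
}
```

A bouncing input never produces two equal consecutive samples, so it never reaches `stable_state`; only a settled level gets through.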
Which input is shorted does not show you what direction the encoder was turned. But the order in which they were shorted does.
Normally, rotary encoders have two outputs, each of which is shorted to the ground pin in turn: first one shorted, then the second, then the first released, then the second released - this full sequence happens between each click. (Of course, there are encoders which have an additional "click" in the middle of the sequence, or no clicks at all, but most behave as described above.)
So, generally speaking, you can treat each "click" of movement as a cycle through four phases:
0. Both inputs released (high) - the default position
1. Input A shorted to ground (low), input B released (high)
2. Both inputs shorted (low)
3. Input A released (high), input B shorted (low)
Rotation in one direction is a passage through phases 0-1-2-3-0; the other direction is 0-3-2-1-0. So, whichever direction the encoder is rotated, both inputs will be shorted to ground at some particular moment.
Usually the bouncing happens on only one of the inputs at a time, so you can treat a bounce as jumping between two adjacent phases, which makes debouncing much simpler.
Since those phases change very quickly, you have to poll the input pins very fast, perhaps 1000 times per second, to handle fast rotations.
Code to handle the rotation may be as follows:
signed int encoder_phase = 0;

void pull_encoder() {
    int phase = ((PORTC.IN & (1 << INPUT_A_PINNO)) ? 0 : 1)
              ^ ((PORTC.IN & (1 << INPUT_B_PINNO)) ? 0 : 0b11);
    // the value of the phase variable is 0, 1, 2 or 3, as described above

    int phase_shifted = (phase - encoder_phase) & 3;

    if (phase_shifted == 2) {          // jumped instantly over two phases - error
        encoder_phase = 0;
    } else if (phase_shifted == 1) {   // rotating forward
        encoder_phase++;
        if (encoder_phase >= 4) {      // full cycle
            encoder_phase = 0;
            handle_clockwise_rotation();
        }
    } else if (phase_shifted == 3) {   // rotating backward
        encoder_phase--;
        if (encoder_phase <= -4) {     // full cycle backward
            encoder_phase = 0;
            handle_counterclockwise_rotation();
        }
    }
    if (phase == 0) {
        encoder_phase = 0;             // reset
    }
}
As others have noted, mechanical encoders are subject to bouncing. You need to handle that.
The simplest way to read such an encoder is to interpret one of the outputs (e.g. A) as a 'clock' signal and the other one (e.g. B) as the direction indicator.
You wait for a falling (or rising) edge of the 'clock' output, and when one is detected, immediately read the state of the other output to determine the direction.
After that, include some 'dead time' during which you ignore any other edges of the 'clock' signal which occur due to the bouncing of the contacts.
Algorithm:
0) read state of 'clock' signal (A) and store ("previous clock state")
1) read state of 'clock' signal (A) and compare with "previous clock state"
2) if clock signal did not change e.g. from high to low (if you want to use the falling edge), goto 1).
3) read state of 'direction' signal (B), store current state of clock to "previous clock state"
4) now you know that a 'tick' occurred (clock signal change) and the direction, handle it
5) disable reading the 'clock' signal (A) for some time, e.g. 10ms, to debounce; after the dead time period has elapsed, goto 1)
This approach is not time critical. As long as you poll the 'clock' at least twice within the shortest expected interval between a change of signal A and the corresponding change of signal B (minus the bouncing time of A), which depends on the maximum expected rotation speed, it will work absolutely reliably.
The edge detection of the 'clock' signal can also be performed by a pin change interrupt which you just disable for the dead time period after the interrupt occurred. Handling bouncing contacts via a pin change interrupt is however generally not recommended because the bouncing (i.e. (very) fast toggling of a pin, can be pulses of nanoseconds duration) may confuse the hardware.
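The polled clock/direction steps above can be sketched as a small state machine. The pin reads are abstracted into plain 0/1 parameters so the logic stands on its own; the mapping of "B low at the falling edge of A" to clockwise is an assumption and may be inverted on a real encoder:

```c
/* Poll-based 'clock + direction' decoder. a and b are the current pin
 * levels (0/1); call once per poll period. Returns +1 for one clockwise
 * tick, -1 for counterclockwise, 0 otherwise. Names are illustrative. */
static int prev_clock = 1;   /* encoder lines idle high */
static int dead_time = 0;    /* polls left to ignore after a tick */

int poll_clock_dir(int a, int b)
{
    int tick = 0;
    if (dead_time > 0) {
        dead_time--;                         /* still debouncing: ignore edges */
    } else if (prev_clock == 1 && a == 0) {  /* falling edge on 'clock' (A) */
        tick = b ? -1 : +1;   /* B's level at the edge gives the direction */
        dead_time = 10;       /* assumed dead time: 10 poll periods */
    }
    prev_clock = a;
    return tick;
}
```

With a 1 ms poll period the assumed dead time corresponds to roughly 10 ms, matching the figure suggested in the algorithm.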
I have a rotary encoder with STM32F4 and configured TIM4 in "Encoder Mode TI1 and TI2". I want to have an interrupt every time the value of timer is incremented or decremented.
The counting works but I only can configure an interrupt on every update event, not every changes in TIM4->cnt. How can I do this?
In other words: my MCU + encoder in quadrature mode can count from 0 to 99 in one revolution. I want to have 100 interrupts per revolution, but if I set TIM4->PSC=0 and TIM4->ARR=1, I get only 50 UPDATE_EVENTs, so I should set ARR=0, but that does not work. How can I solve that?
To get 100 interrupts per revolution keep PSC=0, ARR=1, setup the two timer channels in output compare mode with compare values 0 and 1 and interrupts on both channels.
Or even use ARR=3 and set up all four channels, with compare values of 0, 1, 2 and 3. This will also allow you to detect the direction.
Normally, the whole point of using the quadrature encoder mode is counting the pulses while avoiding interrupts. You can simply poll the counter register periodically to determine speed and position.
Getting interrupts on every encoder pulse is extremely inefficient, especially with high resolution encoders. Yours seems to be a low resolution one. If you still think you need them for some reason, you can connect A & B into external interrupts and implement the counting logic manually. In this case, you don't need quadrature encoder mode.
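The periodic-polling approach boils down to a wrap-safe delta on the free-running counter. `encoder_delta` is an illustrative helper, not an ST API, and it assumes fewer than 32768 counts pass between two polls:

```c
#include <stdint.h>

/* Wrap-safe position delta for a free-running 16-bit quadrature counter
 * (e.g. TIM4->CNT read periodically). Reducing the difference to 16 bits
 * and reinterpreting it as signed handles wraparound in either direction. */
int16_t encoder_delta(uint16_t now, uint16_t prev)
{
    return (int16_t)(uint16_t)(now - prev);
}
```

Accumulate the deltas for position; divide a delta by the poll interval for speed.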
This is kind of a basic question, but I am new to PLC's, so please bear with me. I am working on a basic PLC program with the Do-More designer and Simulator for a process mixer. The mixer has two sensors that detect when the tank is empty or full. When empty, Solenoid A opens the input valve until the full sensor detects the tank is full. A motor powering the mixing armature turns on for 10 seconds, then the outlet valve (Solenoid B) opens to drain the tank.
My problem is with the timer. I want it to automatically produce an output that will turn off the motor and have tried several ways to do this, but I can't get it to work. The timer will reset itself to zero, and turn on y5, but y2 only resets momentarily and the timer starts counting again.
Picture of code using tmra timer
Alternatively, I can turn off the motor using a different timer, but then the timer will not reset itself to zero, it runs until the end of the program.
Code using ONDTMR
If anyone knows how to make the timer stop counting in either case, I would appreciate the help. Also, as a side question, is it okay to have multiple outputs on the same rung?
The problem is that the X3 input (Full_Sensor) is continuously setting (turning on) Y2 (Motor_On). T0.Done resets Y2, but on the next scan Y2 is back on again as soon as rung 3 executes again, which then runs the timer again.
Set and Reset coil instructions can get you in trouble with things like this if you aren't careful. If you want Y2 to turn on at an event (the on-transition of X3), but not to be continuously turned on, use a one-shot instruction (in Do-more, this is called "Leading Edge One-Shot on Power Flow"), which will only run the set instruction for one scan. Your Rung 3 would look like this:
With that in place, Rung 5 will turn off Y2, which will turn off the Timer, and Y2 will not turn on again (and the timer will not run again) until X3 (Full_Sensor) goes off and then on again.
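The corrected logic can be modeled in plain C to show why the one-shot fixes the problem. The scan structure and timer preset here are simplified stand-ins for the ladder rungs, not Do-more code:

```c
#include <stdbool.h>

/* Model of the corrected rungs: Y2 is SET only on the rising edge of X3
 * (the one-shot) and RESET when the timer finishes. */
static bool x3_prev = false;  /* one-shot edge memory */
static bool y2 = false;       /* motor output */
static int timer = 0;         /* counts scans while Y2 is on */
#define TIMER_PRESET 10       /* stand-in for the 10 s preset */

void scan(bool x3)            /* one PLC scan */
{
    bool one_shot = x3 && !x3_prev;   /* true for exactly one scan */
    x3_prev = x3;
    if (one_shot) y2 = true;          /* rung 3: SET Y2 on the edge only */
    if (y2) timer++; else timer = 0;  /* rung 4: timer runs while Y2 is on */
    if (timer >= TIMER_PRESET)        /* rung 5: T0.Done resets Y2 */
        y2 = false;
}
```

Because the SET only fires on the X3 transition, a continuously-true X3 cannot re-energize Y2 after the timer resets it, which is exactly the failure described above.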
I am reading a status word that consists of 24 bits, and want the LED to change corresponding to the value of the bit. I want the LED to fully turn off, but sometimes instead of turning off it gets brighter.
I am using a simple pin toggle function to toggle the led
nrf_gpio_pin_toggle(LED_2);
Is it possible that the LED value resets to 0 but actually stays on, making the LED brighter?
If it gets brighter, then before it was dimmer. To dim an LED, one usually uses pulse-width modulation, which means the LED is being turned on and off very quickly.
The following is speculation ...
If the LED, as part of the PWM process, happens to be in the off state when you try to toggle it, then it will be turned on and the PWM operation (which might be managed for you by a library or interrupt handler) will be canceled. Thus you see the LED at full brightness, which is brighter than it was.
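A toy model of that speculation: a software PWM routine drives the pin until an unguarded toggle cancels it. Everything here is illustrative; it is not the nRF SDK's actual PWM mechanism:

```c
#include <stdbool.h>

static bool pwm_active = true;  /* PWM keeps running until canceled */
static bool pin = false;        /* LED pin level */
static const int duty = 30;     /* percent on-time while dimmed */

/* One step of a 100-step PWM period, as a library/ISR might run it. */
void pwm_step(int t)
{
    if (pwm_active)
        pin = (t % 100) < duty;
}

/* What an application-level toggle models here: cancel PWM, invert pin. */
void user_toggle(void)
{
    pwm_active = false;   /* assumption: the toggle cancels the PWM process */
    pin = !pin;
}
```

If the toggle lands in the off part of the duty cycle, the pin goes high and stays there, i.e. the LED jumps from 30% apparent brightness to 100% instead of turning off.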
I am programming a microcontroller of the PIC24H family and using xc16 compiler.
I am relaying U1RX-data to U2TX within main(), but when I try that in an ISR it does not work.
I am sending commands to U1RX, and the ISR is shown below. At U2RX, data bytes are coming in constantly, and I want to relay 500 of them out through U1TX. The result is that U1TX relays the first 4 data bytes from U2RX but then re-sends the 4th byte over and over again.
When I copy the for loop below into my main() it all works properly. In the ISR, it's as if U2RX's FIFO buffer is not being cleared when read, so the buffer overflows and stops accepting further incoming data on U2RX. I would really appreciate it if someone could show me how to approach the problem here. The variables tmp and command are declared globally.
void __attribute__((__interrupt__, auto_psv, shadow)) _U1RXInterrupt(void)
{
    command = U1RXREG;
    if (command == 'd') {
        for (i = 0; i < 500; i++) {
            while (U2STAbits.URXDA == 0);   // wait for a byte from U2
            tmp = U2RXREG;
            while (U1STAbits.UTXBF == 1);   // wait for room in the U1 TX buffer
            U1TXREG = tmp;
        }
    }
}
Edit: I added the first line in the ISR().
Trying to draw an answer from the various comments.
If main() has nothing else to do, and there are no other interrupts, you might be able to "get away with" patching all 500 chars from one UART to another under interrupt, once the first interrupt has occurred, and perhaps it would be a useful exercise to get that working.
But that's not how you should use an interrupt. If you have other tasks in main(), and equal or lower priority interrupts, the relatively huge time that this interrupt will take (500 chars at 9600 baud = half a second) will make the processor what is known as "interrupt-bound", that is, the other processes are frozen out.
As your project gains complexity, you won't want to restrict main() to this task, and there is no need for it to be involved at all after setting up the UARTs and IRQs. After that, it can calculate π ad infinitum if you want.
I am a bit perplexed as to your sequence of operations. A command 'd' is received from U1 which tells you to patch 500 chars from U2 to U1.
I suggest one way to tackle this (and there are many) seeing as you really want to use interrupts, is to wait until the command is received from U1 - in main(). You then configure, and enable, interrupts for RXD on U2.
Then the job of the ISR will be to receive data from U2 and transmit it thru U1. If both UARTS have the same clock and the same baud rate, there should not be a synchronisation problem, since a UART is typically buffered internally: once it begins to transmit, the TXD register is available to hold another character, so any stagnation in the ISR should be minimal.
I can't write the actual code for you, since it would be supposed to work, but here is some very pseudo code, and I don't have a PIC handy (or wish to research its operational details).
ISR:
    has been invoked because U2 has a char RXD
    you *might* need to check RXD status as a required sequence to clear the interrupt
    read the RXD register, which also might clear the interrupt status
    if not, specifically clear the interrupt status
    while (U1 TXD busy);
    write char to U1
    if (chars received == 500)
        disable U2 RXD interrupt
    return from interrupt
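A testable skeleton of that pseudo code, with the PIC24 register accesses replaced by stubs so the counting and disable logic stand on their own. On real hardware the stubs would touch U2RXREG, U1STAbits.UTXBF, IFS1bits.U2RXIF, IEC1bits.U2RXIE and so on; the names and the 500-byte target follow the question:

```c
enum { RELAY_LEN = 500 };
static int chars_relayed = 0;
static int rx_irq_enabled = 1;

/* Stubs standing in for the hardware accesses. */
static char read_u2(void)          { return 'x'; }  /* read U2RXREG */
static void write_u1(char c)       { (void)c; }     /* wait !UTXBF, then U1TXREG = c */
static void clear_u2_rx_flag(void) {}               /* clear the U2 RX interrupt flag */
static void disable_u2_rx_irq(void){ rx_irq_enabled = 0; }

/* One invocation per received character: short and non-blocking,
 * unlike the 500-iteration loop inside the original ISR. */
void u2_rx_isr(void)
{
    char c = read_u2();          /* read RXD (may also clear the flag) */
    clear_u2_rx_flag();          /* if not, clear the interrupt status */
    write_u1(c);                 /* forward the byte to U1 */
    if (++chars_relayed >= RELAY_LEN)
        disable_u2_rx_irq();     /* done: stop taking U2 RX interrupts */
}
```

main() would enable the U2 RX interrupt on receiving the 'd' command and reset `chars_relayed`; each character then costs one short ISR instead of a half-second busy loop.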
ISR's must be kept lean and mean and the code made hyper-efficient if there is any hope of keeping up with the buffer on a UART. Experiment with the BAUD rate just to find the point at which your code can keep up, to help discover the right heuristic and see how far away you are from achieving your goal.
Success could depend on how fast your micro controller is, as well, and how many tasks it is running. If the microcontroller has a built in UART theoretically you should be able to manage keeping the FIFO from overflowing. On the other hand, if you paired up a UART with an insufficiently-powered micro controller, you might not be able to optimize your way out of the problem.
Besides the suggestion to offload the lower-priority work to the main thread and keep the ISR fast (that someone made in the comments), you will want to carefully look at the timing of all of the lines of code and try every trick in the book to get them to run faster. One expensive instruction can ruin your whole day, so get real creative in finding ways to save time.
EDIT: Another thing to consider - look at the assembly language your C compiler creates. A good compiler should let you inline assembly language instructions to allow you to hyper-optimize for your particular case. Generally in an ISR it would just be a small number of instructions that you have to find and implement.
EDIT 2: A PIC24-series part should be fast enough if you code it right and select a fast oscillator or crystal and run the chip at a good clock rate. Also consider the divisor the UART might be using to achieve its rate versus the PIC clock rate. It is conceivable (to me) that a divisor which can be applied internally with a simple shift would work better than one requiring real arithmetic.
I'm struggling to get tickless support working for our xmega256a3 port of FreeRTOS. Looking around, trying to understand under the hood better, I was surprised to see the following line in vTaskStepTick():
configASSERT( xTicksToJump <= xNextTaskUnblockTime );
I don't have configASSERT turned on, but I would think that if I did, that would be raising issues regularly. xTicksToJump is a delta time, but xNextTaskUnblockTime, if I read the code correctly, is an absolute tick time? Did I get that wrong?
My sleep function, patterned after the documentation example looks like this:
static uint16_t TickPeriod;

void sleepXmega(portTickType expectedIdleTime)
{
    TickPeriod = RTC.PER;      // note the period being used for ticking on the RTC so we can restore it when we wake up
    cli();                     // no interrupts while we put ourselves to bed
    SLEEP.CTRL = SLEEP_SMODE_PSAVE_gc | SLEEP_SEN_bm;  // enable sleepability
    setRTCforSleep();          // reconfigure the RTC for sleeping
    while (RTC.STATUS & RTC_SYNCBUSY_bm);
    RTC.COMP = expectedIdleTime - 4;  // set RTC.COMP a little shorter than our idle time; there seems to be about a 4 tick overhead
    while (RTC.STATUS & RTC_SYNCBUSY_bm);
    sei();                     // need the interrupt to wake us
    cpu_sleep();               // lights out
    cli();                     // disable interrupts while we rub the sleep out of our eyes
    while (RTC.STATUS & RTC_SYNCBUSY_bm);
    SLEEP.CTRL &= ~SLEEP_SEN_bm;      // disable sleep
    vTaskStepTick(RTC.CNT);    // note how long we were really asleep for
    setRTCforTick(TickPeriod); // repurpose RTC back to its original use for ISR tick generation
    sei();                     // back to our normal interruptable self
}
If anyone sees an obvious problem there, I would love to hear it. The behavior it demonstrates is kind of interesting. For testing, I'm running a simple task loop that delays 2000ms, and then simply toggles a pin I can watch on my scope. Adding some printf's to my function there, it will do the first one correctly, but after I exit it, it immediately reenters, but with a near 65535 value. Which it dutifully waits out, and then gets the next one correct again, and then wrong (long) again, alternating back and forth.