Global Timer Interrupt for Time in Microcontroller - c

I have a lot of different times to keep track of in my design, but nothing is super critical - 10 ms +/- a few ms isn't a big deal at all. There might be 10 different timers all counting different periods at the same time, though, and I obviously don't have enough dedicated hardware timers in the MSP-430 to give each of those its own.
My solution is to create a single ISR for one MSP-430 hardware timer that fires at 1 kHz. It simply increments an unsigned long on each ISR entry (so each tick is 1 ms). Elsewhere in my code I can then use the SET_TIMER and EXPIRED macros below to check whether a certain amount of time has elapsed. My question is: is this a good way to keep a "global" time?
Timer Definitions:
typedef unsigned long TIMER;
extern volatile TIMER Tick;
#define SET_TIMER(t,i) ((t)=Tick+(i))
#define EXPIRED(t) ((long)((t)-Tick)<0)
Timer Interrupt Service Routine:
void TIMER_B0_ISR(void)
{
Tick++;
}
Example usage in a single file:
case DO_SOMETHING:
if (EXPIRED(MyTimer1))
{
StateMachine = DO_SOMETHING_ELSE;
SET_TIMER(MyTimer1, 100);
}
break;
case DO_SOMETHING_ELSE:
if (EXPIRED(MyTimer1))
...

Your scheme makes it relatively costly to deal with timer wraparound - which you don't seem to handle at the moment. You would need to check for it in every place where you test for "time expired", which is exactly why you normally want only one such place.
I typically use a sorted linked list of timer expiration entries with the list head as the timer that is going to expire earliest. The ISR then only has to check this single entry and can directly notify that one single subscriber.
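As a rough sketch of that idea (the structure and names here - timer_entry, timer_list_head, expire_tick - are invented for illustration, not from the post above), the tick ISR only ever looks at the head of the sorted list:
typedef struct timer_entry {
    unsigned long       expire_tick;        /* absolute tick of expiry         */
    void              (*callback)(void *);  /* the single subscriber to notify */
    void               *arg;
    struct timer_entry *next;               /* next-earliest expiry            */
} timer_entry;

static timer_entry *timer_list_head;        /* kept sorted, earliest first     */
volatile unsigned long Tick;

void TIMER_B0_ISR(void)
{
    Tick++;
    /* Only the earliest entry ever needs checking; the same signed-difference
     * trick as the question's EXPIRED macro handles tick wraparound. */
    while (timer_list_head != 0 &&
           (long)(timer_list_head->expire_tick - Tick) <= 0) {
        timer_entry *t = timer_list_head;
        timer_list_head = t->next;          /* pop before notifying            */
        t->callback(t->arg);
    }
}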

Related

Increase count value of hardware timer (on µC) more than one at each timer tick

Has anyone heard of a hardware timer that can count by a value other than one per timer tick?
Normally a µC timer counts up or down by one, but I have a challenge where I need to add, e.g., 500 on each timer tick.
There are multiple options, depending on your microcontroller and timer:
Use the timer's interrupt to manually bump a variable by a set amount - 500 in your case (a rough sketch of this follows below).
Change the timer prescaler so that, instead of firing 500 times in the expected period, the timer only triggers once during that period.
I personally don't know of a timer with a variable increment amount, but that doesn't mean it doesn't exist. Creating such a timer in VHDL or Verilog may be an option.
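For the first option above, a minimal sketch (the ISR name, flag handling and the 500 step are placeholders; use whatever interrupt syntax your compiler and MCU require):
/* Sketch of the software-accumulator idea: the hardware timer ticks by one
 * as usual, and the ISR adds the desired step to a software counter. */
#define TICK_STEP 500UL

volatile unsigned long soft_count = 0;

void timer_overflow_isr(void)        /* called once per hardware timer tick */
{
    soft_count += TICK_STEP;         /* counts 0, 500, 1000, ...            */
    /* clear your timer's interrupt flag here, as required by the MCU       */
}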

Switch Debouncing Logic in C

I came across this code by Ganssle regarding switch debouncing. The code seems pretty efficient, and the few questions I have may be very obvious, but I would appreciate clarification.
Why does he check 10 msec for a button press and 100 msec for a button release? Can't he just check 10 msec for both press and release?
Is polling this function every 5 msec from main the most efficient way to execute it, or should I enable an interrupt on the pin, and when there is an interrupt, switch the pin to a general-purpose input, go into the polling routine, and once we have deduced the value switch the pin back to interrupt mode?
#define CHECK_MSEC 5 // Read hardware every 5 msec
#define PRESS_MSEC 10 // Stable time before registering pressed
#define RELEASE_MSEC 100 // Stable time before registering released
// This function reads the key state from the hardware.
extern bool_t RawKeyPressed();
// This holds the debounced state of the key.
bool_t DebouncedKeyPress = false;
// Service routine called every CHECK_MSEC to
// debounce both edges
void DebounceSwitch1(bool_t *Key_changed, bool_t *Key_pressed)
{
    static uint8_t Count = RELEASE_MSEC / CHECK_MSEC;
    bool_t RawState;
    *Key_changed = false;
    *Key_pressed = DebouncedKeyPress;
    RawState = RawKeyPressed();
    if (RawState == DebouncedKeyPress) {
        // Set the timer which will allow a change from the current state.
        if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
        else                   Count = PRESS_MSEC / CHECK_MSEC;
    } else {
        // Key has changed - wait for new state to become stable.
        if (--Count == 0) {
            // Timer expired - accept the change.
            DebouncedKeyPress = RawState;
            *Key_changed = true;
            *Key_pressed = DebouncedKeyPress;
            // And reset the timer.
            if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
            else                   Count = PRESS_MSEC / CHECK_MSEC;
        }
    }
}
Why does he check 10 msec for button press and 100 msec for button release.
As the blog post says, "Respond instantly to user input." and "A 100ms delay is quite noticeable".
So the main reason seems to be to emphasize that the make debounce should be kept short, so that the press registers "immediately" to human perception, while the break debounce is less time-sensitive.
This is also supported by a paragraph near the end of the post: "As I described in the April issue, most switches seem to exhibit bounce rates under 10ms. Coupled with my observation that a 50ms response seems instantaneous, it's reasonable to pick a debounce period in the 20 to 50ms range."
In other words, the structure of the code in the example matters much more than the example values; the proper values depend on the switches used, and you're supposed to decide those yourself, based on the particulars of your specific use case.
Can't he just check 10 msec for press and release?
Sure, why not? As he wrote, it should work, even though he wrote (as quoted above) that he prefers a bit longer debounce periods (20 to 50 ms).
Is polling this function every 5 msec from main the most efficient way to execute it
No. As the author wrote, "All of these algorithms assume a timer or other periodic call that invokes the debouncer." In other words, this is just one way to implement software debouncing, and the shown examples are based on a regular timer interrupt, that's all.
Also, there is nothing magical about the 5 ms; as the author says, "For quick response and relatively low computational overhead I prefer a tick rate of a handful of milliseconds. One to five milliseconds is ideal."
or should I enable an interrupt on the pin, and when there is an interrupt switch the pin to a general-purpose input, poll it, and switch back to interrupt mode afterwards?
If you implement that in code, you'll find that it is rather nasty to have an interrupt that blocks the normal running of the code for 10-50 ms at a time. It is okay if checking the input pin state is the only thing being done, but if the hardware does anything else, like update a display or flicker some blinkenlights, a debouncing routine sitting in the interrupt handler will cause noticeable jitter/stutter. In other words, what you suggest is not a practical implementation.
The way the periodic timer interrupt based software debouncing routines (shown in the original blog post, and elsewhere) work, they take only a very short amount of time, just a couple of dozen cycles or so, and do not interrupt other code for any significant amount of time. This is simple, and practical.
You can combine a periodic timer interrupt and an input pin (state change) interrupt, but since the overhead of many timer-interrupt-only software debouncers is tiny, it typically is not worth the effort to combine the two -- the code gets very, very complicated, and complicated code (especially on an embedded device) tends to be hard and expensive to maintain.
The only case I can think of (but I'm only a hobbyist, not an EE by any means!) is if you wanted to minimize power use for e.g. battery powered operation, and used the input pin interrupt to bring the device to partial or full power mode from sleep, or similar.
(Actually, if you also have a millisecond or sub-millisecond counter (not necessarily interrupt-based; possibly a cycle counter or similar), you can use the input pin interrupt together with that counter: update the input state on the first change, store the counter value at that moment, and desensitize the input for a specific duration afterwards. You do need to handle counter overflow, though, to avoid the situation where a long-ago event appears to have happened just a short time ago because the counter wrapped.)
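A rough sketch of that combination (now_ms() and read_pin() are assumed helpers, not a real API, and DESENSE_MS is just an example value): unsigned subtraction makes a single counter wrap harmless, as long as the desensitize window is far smaller than the counter range.
/* Sketch of the pin-interrupt + free-running-counter idea described above. */
#define DESENSE_MS 20UL

extern unsigned long now_ms(void);        /* assumed free-running ms counter */
extern unsigned char read_pin(void);      /* assumed raw pin read, 0 or 1    */

static volatile unsigned long last_change_ms;
static volatile unsigned char stable_state;

void pin_change_isr(void)
{
    unsigned long now = now_ms();
    /* Unsigned subtraction wraps correctly, so a counter overflow between
     * two edges does not break the comparison. */
    if ((unsigned long)(now - last_change_ms) >= DESENSE_MS) {
        stable_state   = read_pin();      /* accept the new state right away */
        last_change_ms = now;             /* and ignore edges for a while    */
    }
}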
I found Lundin's answer quite informative, and decided to edit my answer to show my own suggestion for software debouncing. This might be especially interesting if you have very limited RAM, but lots of buttons multiplexed, and you want to be able to respond to key presses and releases with minimum delay.
Do note that I do not wish to imply this is "best" in any sense of the word; I only want to show you one approach I haven't often seen used, but which might have useful properties in some use cases. Here, too, the number of scan cycles (milliseconds) during which input changes are ignored (10 for make/off-to-ON, 10 for break/on-to-OFF) are just example values; use an oscilloscope or trial-and-error to find the best values for your use case. If this is an approach you find more suitable than the myriad alternatives, that is.
The idea is simple: use a single byte per button to record the state, with the least significant bit describing the state, and the seven other bits being the desensitivity (debounce duration) counter. Whenever a state change occurs, the next change is only considered a number of scan cycles later.
This has the benefit of responding to changes immediately. It also allows different make-debounce and break-debounce durations (during which the pin state is not checked).
The downside is that if your switches/inputs have any glitches (misreadings outside the debounce duration), they show up as clear make/break events.
First, you define the number of scans the inputs are desensitized after a break, and after a make. These range from 0 to 127, inclusive. The exact values you use depend entirely on your use case; these are just placeholders.
#define ON_ATLEAST 10 /* 0 to 127, inclusive */
#define OFF_ATLEAST 10 /* 0 to 127, inclusive */
For each button, you have one byte of state, variable state below; initialized to 0. Let's say (PORT & BIT) is the expression you use to test that particular input pin, evaluating to true (nonzero) for ON, and false (zero) for OFF. During each scan (in your timer interrupt), you do
if (state > 1) {
    state -= 2;
} else if ( (!(PORT & BIT)) != (!state) ) {
    if (state)
        state = OFF_ATLEAST*2 + 0;
    else
        state = ON_ATLEAST*2 + 1;
}
At any point, you can test the button state using (state & 1). It will be 0 for OFF, and 1 for ON. Furthermore, if (state > 1), then this button was recently turned ON (if state & 1 is set) or OFF (if it is clear) and is therefore not currently sensitive to changes in the input pin state.
In addition to the accepted answer, if you just wish to poll a switch from somewhere every n ms, there is no need for all of the obfuscation and complexity from that article. Simply do this:
static bool prev=false;
...
/*** execute every n ms ***/
bool btn_pressed = (PORT & button_mask) != 0;
bool reliable = btn_pressed==prev;
prev = btn_pressed;
if (!reliable)
{
    btn_pressed = false; // btn_pressed is not yet reliable, treat as not pressed
}
// <-- here btn_pressed contains the state of the switch, do something with it
This is the simplest way to de-bounce a switch. For mission-critical applications, you can use the very same code but add a simple median filter for the 3 or 5 last samples.
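A median filter over the last three samples of a single-bit input is simply a 2-out-of-3 majority vote. A minimal sketch (names are illustrative, not from the answer above):
#include <stdbool.h>

static bool s0, s1, s2;                  /* the last three raw samples */

bool filtered_pressed(bool raw_pressed)
{
    s2 = s1;                             /* shift the sample history   */
    s1 = s0;
    s0 = raw_pressed;
    /* true if at least two of the three samples are true */
    return (s0 && s1) || (s0 && s2) || (s1 && s2);
}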
As noted in the article, the electro-mechanical bounce of switches is most often less than 10ms. You can easily measure the bouncing with an oscilloscope, by connecting the switch between any DC supply and ground (in series with a current-limiting resistor, preferably).

Simulate Multiple Virtual Timers with one Physical Timer

I am trying to implement a Selective Repeat protocol in C for a networking assignment but am stumped on how to simulate a timer for each individual packet. I only have access to a single timer and can only call the functions described below.
/* start timer at A or B (int), increment in time*/
extern void starttimer(int, double);
/* stop timer at A or B (int) */
extern void stoptimer(int);
Kurose and Ross mentioned in their networking textbook that
A single hardware timer can be used to mimic the
operation of multiple logical timers [Varghese 1997].
And I found the following hint for a similar assignment
You can simulate multiple virtual timers using a single physical timer. The basic idea is that you keep a chain of virtual timers ordered in their expiration time and the physical timer will go off at the first virtual timer expiration.
However, I do not have access to any time variables other than RTT as the emulator is on another layer of abstraction. How can I implement the timer for individual packets in this case?
You can do that the same way it is implemented at the kernel level. You need a linked list of "timers" where each timer holds a timeout relative to the preceding one. Say you have:
Timer1: 500 ms from t0, Timer2: 400 ms from t0, Timer3: 1000 ms from t0.
Then you will have a linked list in which each element has the timeout relative to the previous one, like this:
HEAD->Timer2(400ms)->Timer1(100ms)->Timer3(500ms)
Every element contains, at minimum: a timer ID, the relative timeout, and the absolute start time (timestamp since epoch). You can also add a callback pointer per timer.
You use your only physical timer and set its timeout to the relative timeout of the first element in the list: 400 ms (Timer2).
When it fires, you remove the first element and typically execute the callback associated with Timer2 (ideally on a separate worker thread). Then you set the next timeout to the relative timeout of the next element, Timer1: 100 ms.
Now, when you need to create a new timer, say one of 3,000 ms created 300 ms after t0, you insert it at the proper position by walking the linked list: take the remaining time of the head, Timer2.RelativeTimeout - (now - Timer2.AbsoluteStartTime), then keep adding the relative timeouts of each following element until you find the insertion point. The relative timeout stored in Timer4 works out to 2,300 ms. Your linked list becomes:
HEAD->Timer2(400ms)->Timer1(100ms)->Timer3(500ms)->Timer4(2,300)
In this way you implement many logical timers with one physical timer. Creating or finding a timer is O(n), though you can add various optimizations for insertion. Most importantly, handling a timeout and re-arming the physical timer is O(1). Deletion is O(n) to find the timer and O(1) to unlink it.
You have to take care of possible race conditions between the thread controlling the timer and any thread inserting or deleting timers. One way to implement this kind of timer in user space is with condition variables and a timed wait.
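A stripped-down, single-threaded sketch of that structure (all names are invented here; locking, callbacks and the correction for time already elapsed on the head timer are left out, and the caller is expected to re-arm the single physical timer, e.g. via starttimer(), whenever the head changes):
#include <stdlib.h>

typedef struct vtimer {
    int            id;            /* e.g. a packet sequence number          */
    double         rel_timeout;   /* delay relative to the previous entry   */
    struct vtimer *next;
} vtimer;

static vtimer *head = NULL;       /* earliest expiry first                  */

/* Insert a virtual timer that should fire 'timeout' time units from now.   */
void vtimer_add(int id, double timeout)
{
    vtimer  *t  = malloc(sizeof *t);
    vtimer **pp = &head;

    t->id = id;
    /* Walk the chain, consuming the relative delays that come before us.   */
    while (*pp != NULL && (*pp)->rel_timeout <= timeout) {
        timeout -= (*pp)->rel_timeout;
        pp = &(*pp)->next;
    }
    t->rel_timeout = timeout;
    t->next = *pp;
    if (t->next != NULL)          /* keep the rest of the chain consistent  */
        t->next->rel_timeout -= timeout;
    *pp = t;
}

/* Called when the physical timer fires: pop the head, report its id, and
 * tell the caller how long until the next expiry (negative if none left).  */
int vtimer_pop(double *next_delay)
{
    vtimer *t;
    int     id;

    if (head == NULL) {
        *next_delay = -1.0;
        return -1;
    }
    t  = head;
    id = t->id;
    head = t->next;
    free(t);
    *next_delay = (head != NULL) ? head->rel_timeout : -1.0;
    return id;
}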

Arduino timer within a timer

I have an Arduino Nano and I need a timer within a timer, and I'm having some problems getting my head around the logic. I have played with some libraries on GitHub - Timer, SimpleTimer and Metro - but none seem to do what I need, or if they can, I can't seem to get them to do it.
I need to switch a relay on for about 2 minutes and then off, every hour. I am trying:
loop
{
    if (millis() - 3600000 > TimeMax)
    {
        relay(on);
        if (millis() - 12000 > relayMax)
            TimeMax = millis();
    }
}
It doesn't seem to work, and I need this to all stay working within loop() as I have an nRF24L radio listening.
Could someone please help me with code snippets or at least an outline how to go about this.
Thanks
OK, first of all, "timers" in embedded-dev speak means interrupts that get fired after a delay. Generally speaking you want interrupts to handle very atomic actions, because you don't want an interrupt triggered while another one is still being handled - that could be the scenario of a horror movie.
But why would you want to make something hard, complex and overengineered, when it can be simple?
All you need to do is handle it through a simple dual state machine:
#define OPEN_DELAY  (120UL * 1000UL)   // relay held on for 2 minutes
#define CLOSE_DELAY (3600UL * 1000UL)  // relay held off for 1 hour
// N.B.: to be precise here, to make 2 minutes every hour,
// CLOSE_DELAY should be 3600UL*1000UL - OPEN_DELAY so the cycle
// does not shift by 2 minutes every hour.
// (The UL suffixes matter: on an 8-bit AVR a plain int is 16 bits,
// so 120*1000 and 3600*1000 would overflow.)
void loop() {
    static bool open = false;
    static unsigned long timestamp = millis();
    if (!open && millis() - timestamp > CLOSE_DELAY) {
        open = true;              // change state
        timestamp = millis();     // rearm timestamp
        set_relay_on();
    } else if (open && millis() - timestamp > OPEN_DELAY) {
        open = false;
        timestamp = millis();
        set_relay_off();
    }
}
The only reason you might want to use a hardware timer here would be to save battery by keeping the AVR in sleep mode as much as possible. You'd set the timer to the largest possible value before going to sleep, so that it wakes the AVR with an interrupt every few seconds or so; the loop() then runs once per wake-up while in the CLOSE state before going back to sleep (you don't need to write an ISR, the main loop() is enough), and stays awake for the full two minutes in the OPEN state.
There's good documentation on timers you might want to read (beware the headaches, though):
http://maxembedded.com/2011/06/introduction-to-avr-timers/
Here's how to put the arduino to sleep for long delays:
http://www.bot-thoughts.com/2013/11/avr-watchdog-as-sleep-delay.html
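As a hedged sketch of that sleep idea for the ATmega328P on a Nano (register names from the ATmega328P datasheet; note that millis() does not advance in power-down, so elapsed time has to be approximated by counting the ~8 s watchdog wake-ups instead of reading millis()):
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>
#include <avr/wdt.h>

volatile unsigned int wdt_wakeups = 0;   // roughly 8 s per wake-up

ISR(WDT_vect)
{
    wdt_wakeups++;                       // just count the wake-up
}

void sleep_about_8s(void)
{
    cli();
    MCUSR  &= ~(1 << WDRF);                            // clear watchdog reset flag
    WDTCSR |= (1 << WDCE) | (1 << WDE);                // timed sequence to unlock
    WDTCSR  = (1 << WDIE) | (1 << WDP3) | (1 << WDP0); // interrupt-only mode, ~8 s
    sei();

    set_sleep_mode(SLEEP_MODE_PWR_DOWN);
    sleep_mode();                                      // sleeps until the WDT fires
    wdt_disable();                                     // awake again
}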
HTH

Produce tones at certain time-interval using C programming

I'm using C on a PIC18F to produce tones such that each of them plays at a certain time interval. I used PWM to produce a tone, but I don't know how to create the intervals. Here is my attempt:
#pragma code //
void main (void)
{
    int i = 0;
    // set internal oscillator to 1MHz
    //OSCCON = 0b10110110; // IRCFx = 101
    //OSCTUNEbits.PLLEN = 0; // PLL disabled
    TRISDbits.TRISD7 = 0;
    T2CON = 0b00000101;   // timer2 on
    PR2 = 240;
    CCPR1L = 0x78;
    CCP1CON = 0b01001100;
    LATDbits.LATD7 = 0;
    Delay1KTCYx(1000);
    while(1);
}
When I'm doing embedded programming, I find it extremely useful to add comments explaining exactly what I intend when I set configuration registers. That way I don't have to go back to the data sheet to figure out what 0b01001010 does the next time I have to grok the code. (Just be sure to keep the comments in sync with the changes.)
From what I can decode, it looks like you've got the PWM registers set up, but no way to change the frequency at your desired intervals. There are a few ways to do it; here are two ideas:
You could read a timer at startup, add the desired interval to get a target time, and poll the timer in the while loop. When the timer hits the target, set a new PWM duty cycle and add the next interval to your target time (a rough sketch follows below). This will work fine until you need to start doing other things in the background loop.
You could set timer0's count to 0xFFFF - interval and set it to interrupt on rollover. In the ISR, set the new PWM duty cycle and reset the timer0 count to the next interval.
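For the first (polling) option, the shape of the loop might look roughly like this; TONE_INTERVAL_TICKS, get_timer_ticks() and set_next_tone() are placeholders for your own interval, timer read and PWM update, not real library calls:
#define TONE_INTERVAL_TICKS 1000u            /* example interval            */

extern unsigned int get_timer_ticks(void);   /* assumed free-running timer  */
extern void set_next_tone(void);             /* assumed PWM retune helper   */

void tone_scheduler(void)
{
    unsigned int target = get_timer_ticks() + TONE_INTERVAL_TICKS;

    while (1)
    {
        /* wrap-safe "has the target passed?" test */
        if ((int)(get_timer_ticks() - target) >= 0)
        {
            set_next_tone();                  /* e.g. write new PR2/CCPR1L  */
            target += TONE_INTERVAL_TICKS;    /* schedule the next change   */
        }
        /* other background work can go here */
    }
}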
One common way of controlling timing in embedded processes looks like this:
volatile int flag = 0;

void main()
{
    setup_interrupt();   // schedule interrupt for desired time.
    while (1)
    {
        if (flag)
        {
            update_something();
            flag = 0;
        }
    }
}
Where does flag get set? In the interrupt handler:
void InterruptHandler()
{
    flag = 1;
    acknowledge_interrupt_reg = 0;
}
You've got all the pieces in your example, you just need to put them together in the right places. In your case, update_something() would update the PWM. The logic would look like: "If it's on, turn it off; else turn it on. Update the tone (duty cycle) if desired"
There should be no need for additional delays or pauses in the main while loop. The goal is that it just runs over and over again, waiting for something to do. If the program needs to do something else at a different rate, you can add another flag, which is triggered completely independently, and the timing of the two tasks won't interfere with each other.
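As a rough illustration of what update_something() could do in this case (the register values are only examples taken from the question's own setup; check your device's datasheet, and assume the device header declaring PR2, CCPR1L and CCP1CON is already included):
void update_something(void)
{
    static unsigned char tone_on = 0;

    if (tone_on)
    {
        CCP1CON = 0;              // CCP module off -> pin idles, tone stops
    }
    else
    {
        PR2     = 240;            // PWM period sets the pitch (example value)
        CCPR1L  = PR2 / 2;        // roughly 50% duty cycle
        CCP1CON = 0b01001100;     // PWM mode back on (same value as the question)
    }
    tone_on = !tone_on;
}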
EDIT:
I'm now confused about what you are trying to accomplish. Do you want a series of pulses of the same tone (on-off-on-off)? Or do you want a series of different notes without pauses (do-re-me-fa-...)? I had been assuming the latter, but now I'm not sure.
After seeing your updated code, I'm not sure exactly how your system is configured, so I'm just going to ask some questions I hope are helpful.
Is the PWM part working? Do you get the initial tone? I'm assuming yes.
Does your hardware have some sort of clock pulse hooked up to the RA4/T0CKI pin? If not, you need T0 to be clock mode, not counter mode.
Is the interrupt being called? You are setting INT0IE, which enables the external interrupt, not the timer interrupt
What interval do you want between tone updates? Right now, you are getting 0xFFFF / (clock_freq/8) You need to set the TMR0H/L registers if you want something different.
What's on LATD7? and why are you clearing it? Are you saying it enables PWM output?
Why the call to Delay()? The timer itself should be providing the needed delay. There seems to be a disconnect about how to do timing. I'll expand my other answer
Where are you trying to change the PWM frequency? Don't you need to write to PR2? You will need to give it a different value for every different tone.
"Halting build on first failure as requested."
This just says you have some syntax error; without seeing the actual error report, we can't really help.
In Windows you can use the Beep function in kernel32:
[DllImport("kernel32.dll")]
private static extern bool Beep(int frequency, int duration);

Resources