Timer tick implementation in C

I am doing some embedded systems work and have a couple of questions. I have a timer function which fires every 100 ms, and I now need to get the actual elapsed time since the system started.
Currently my code is something like this:
#include <stdint.h>

struct timer
{
    uint8_t millis_100;   /* counts 100 ms ticks */
    uint8_t minute_455;   /* incremented when millis_100 wraps */
};

static struct timer timer_task;

void tick(void)
{
    /* fires every 100 ms */
    timer_task.millis_100++;
}
However, I am not sure this is the right approach, since I will need to check inside the ISR whether millis_100 has wrapped around to 0 and then increment the next field. If I need more than 455*2^8-1, I would need to put yet another if statement in the ISR. Is this how system ticks are used to build software timers, or is there a more elegant solution?

If you are worried about the counter overflowing, why not use a single unsigned long variable?
Check the timer implementation in the Linux kernel (timer.c, timer.h).
For more details, read the section "The Timer API" in LDD3, chapter 7.
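As a rough sketch of that approach (the names g_ticks and timer_now_ms are mine, and the interrupt enable/disable calls are left as comments because the mechanism is device-specific), a single free-running counter incremented in the ISR is usually enough:

/* Minimal sketch of a free-running tick counter, assuming the ISR fires
   every 100 ms. Names here are illustrative, not from any vendor library. */
#include <stdint.h>

static volatile uint32_t g_ticks;   /* 100 ms ticks since startup */

void tick(void)                     /* called from the timer ISR */
{
    g_ticks++;                      /* free-running; wraps on overflow */
}

uint32_t timer_now_ms(void)
{
    uint32_t t;
    /* read atomically on an 8-bit MCU: briefly mask the timer interrupt */
    /* disable_timer_interrupt(); */
    t = g_ticks;
    /* enable_timer_interrupt(); */
    return t * 100u;                /* elapsed ms (wraps after ~49 days) */
}

Software timers can then be built on top of this by storing a start value and comparing differences, instead of cascading if statements inside the ISR.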

Related

Set a time for the for loop

I want to define timing in a for loop:
int count;
int forfunctime;

for (count = 0; count < 5; count++)
{
    output_b(sutun[count]);
    output_c(UC[count]);
    delay_ms(1);
}
int count;
int forfunctime;

for (forfunctime = 0; forfunctime < 100; ++forfunctime)
{
    for (count = 0; count < 5; count++)
    {
        output_b(sutun[count]);
        output_c(UC[count]);
        delay_ms(1);
    }
}
In the second snippet I get a delay by enclosing the for loop in another loop and relying on the instruction processing time, but is there a more precise way?
My goal is to set a time for the for loop.
edit (for anyone who finds this later): when programming a PIC we use a for loop to scan the rows and columns of a matrix display, but if we want to keep that loop running for a certain period of time, we need to use a timer.
Since I don't know which PIC you're using, I can't answer this question precisely, but I can draw the general map.
What you need is a timer. There are timers on almost every microcontroller, but it is very likely that you will have to program it manually, generally by setting some bits in memory. Here is an example of how to program a timer.
The first and most naive solution is to set one of your microcontroller's timers and read its value each time you come out of your loop, like below:
setTimer();                                  // your work, not a predefined function
while (readTimer() < yourTimeConstant)       // again, your work, not a predefined function
{
    for (count = 0; count < 5; count++)
    {
        output_b(sutun[count]);
        output_c(UC[count]);
    }
}
The second and cleaner solution would be to use events if your microcontroller supports them.
You can save the current time just before entering the loop and check at every iteration whether 10 seconds have already passed, but that isn't really the best way to do it. If you're working on a Windows system, check waitable timer objects; for Linux, check alarm.
I want to run the for loop written in the first code for 10 seconds.
Microcontrollers generally provide timers that you can use instead of delay loops. Timers have at least two advantages: 1) they make it easy to wait a specific amount of time, and 2) unlike delay functions, they don't prevent the controller from doing other things. With a timer, you can pretty much say "let me know when 10 seconds have passed."
Check the docs for your particular chip to see what timers are available and how to use them, but here's a good overview of how to use timers on PIC controllers.
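As a rough illustration of the idea (not tied to a specific PIC: the ISR hookup and the names TICKS_PER_SECOND, g_ticks and scan_display_for are assumptions, while output_b/output_c/sutun/UC come from your snippet), a timer interrupt can count ticks while the main code keeps multiplexing the display until the time is up:

#include <stdint.h>

#define TICKS_PER_SECOND 1000u        /* assume the timer ISR fires every 1 ms */

static volatile uint16_t g_ticks;     /* incremented by the timer ISR */

void timer_isr(void)                  /* hook this to your PIC's timer interrupt */
{
    g_ticks++;
}

void scan_display_for(uint16_t seconds)
{
    uint16_t end = seconds * TICKS_PER_SECOND;
    int count;

    g_ticks = 0;
    while (g_ticks < end)             /* keep multiplexing until time is up */
    {
        /* note: on an 8-bit PIC a 16-bit read is not atomic; guard it
           against the ISR if you see glitches */
        for (count = 0; count < 5; count++)
        {
            output_b(sutun[count]);
            output_c(UC[count]);
        }
    }
}

Calling scan_display_for(10) would then keep the matrix scan running for roughly 10 seconds without any delay_ms() inside the loop.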

Is there a standard function in C that allows for setting a specific delay in nSec?

In my application I need to write a function in C that provides a specific time delay in nanoseconds. This delay must be done in software, as I don't have any hardware timers left in my AVR MCU. My problem is that I would like to be able to set the value in nanoseconds. My MCU clock is 20 MHz (50 ns period). I thought of a quick for loop, like:
for (n=0; n<value; n++)
but that won't take into account how many cycles each pass of the loop takes once compiled. Has anyone got any suggestions? I really don't want to write the code in assembler.
You give us too little information, but I think I can answer anyway; it just makes the answer longer. Let's start with the easier case: your action needs to run less often than your most frequent ISR fires. For example, you need to send a byte every 1 s but your ISR fires every 1 ms. In short, you need to send the byte every 1000 ISR executions, so you keep a counter in the ISR, increment it on every call, and when it reaches 1000 you send the byte and reset the counter to 0.
static volatile uint16_t cnt;   /* counts ISR executions */

ISR()
{
    cnt++;
    if (cnt >= 1000)
    {
        execute(Z);             /* e.g. send the byte */
        cnt = 0;
    }
}
When you have the opposite problem, i.e. the ISR fires less often than you need your action to run, I would argue for redesigning your use of timers: make the ISR execute faster and then divide the time down by counting ISR executions, as described above. This was mentioned in the comments.
My suggestion is that you rethink the way you use timers.
Since you are using an AVR, you should look into using the AVR-Libc delay functions, _delay_us and _delay_ms, which are documented here:
https://www.nongnu.org/avr-libc/user-manual/group__util__delay.html
They are standard in the context of AVRs, but not standard for all C environments in general.
Some example code to get you started:
#define F_CPU 20000000UL
#include <util/delay.h>

int main(void) {
    while (1) {
        _delay_us(0.05);    /* request a 50 ns delay: one cycle at 20 MHz */
    }
}
Note that even though the _delay_us and _delay_ms functions each take a double as an argument, all floating point arithmetic is done at compile time if possible in order to produce efficient code for your delay.
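If you need cycle-granular control, avr-libc also provides lower-level busy-wait loops in <util/delay_basic.h> (_delay_loop_1 and _delay_loop_2, at 3 and 4 cycles per iteration respectively, per its documentation). A rough sketch of a nanosecond-ish helper built on top of them, with your 20 MHz clock hard-coded (the helper name is mine):

#include <stdint.h>
#include <util/delay_basic.h>

/* Busy-wait roughly 'ns' nanoseconds on a 20 MHz AVR (50 ns per cycle).
   _delay_loop_2 takes 4 cycles (200 ns) per iteration, so resolution and
   call/return overhead are on the order of a few hundred ns. */
static inline void delay_ns_approx(uint16_t ns)
{
    uint16_t loops = ns / 200u;   /* 4 cycles * 50 ns = 200 ns per iteration */
    if (loops == 0)
        loops = 1;
    _delay_loop_2(loops);
}

Keep in mind that at 20 MHz a true 50 ns delay is a single instruction cycle, so anything shorter than a few hundred nanoseconds is dominated by call overhead no matter how you write it in C.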

How to implement a double button-press interrupt on a PIC

I'm working with a PIC16F1703 MCU, and I want to start a 7-segment LCD cycle (0-9) loop at a touch of a button (A1); after that, if I touch the button (A1) twice, I want the PIC to enter sleep mode.
To do that, I implemented this:
#include <test_interrupt.h>

byte const DataExit[10] = {0b01000100,
                           0b01011111,
                           0b01100010,
                           0b01001010,
                           0b01011001,
                           0b11001000,
                           0b11000000,
                           0b01011110,
                           0b01000000,
                           0b01001000};

byte const bitMask[8] = {1, 2, 4, 8, 16, 32, 64, 128};

// show the digits 0-9
void segmentCycle(void)
{
    int i, j;
    for (i = 0; i < 10; i++) {
        for (j = 0; j < 8; j++) {
            output_low(CLK);
            output_bit(DATA, DataExit[i] & bitMask[j]);
            output_high(CLK);
        }
        delay_ms(7000);
        output_low(CLR);
        delay_ms(6000);
        output_high(CLR);
    }
}

#INT_IOC
void IOC_isr(void)
{
    segmentCycle();
    sleep();
}

void main()
{
    port_a_pullups(0x02);
    enable_interrupts(INT_IOC_A1);
    enable_interrupts(INT_IOC_A1_H2L);
    enable_interrupts(GLOBAL);

    while (TRUE);
}
#INT_IOC
void IOC_isr(void)
{
segmentCycle();
sleep();
}
void main()
{
port_a_pullups(0x02);
enable_interrupts(INT_IOC_A1);
enable_interrupts(INT_IOC_A1_H2L);
enable_interrupts(GLOBAL);
while(TRUE);
}
For now, if I touch the button, it sometimes starts and sometimes doesn't.
What do you suggest?
I'm using ccs compiler.
Your code lacks a proper debounce algorithm, and your circuit design is probably flawed. Hooking a button to an interrupt is a waste of a valuable resource, especially if it lacks a debounce circuit. That aside, your ISR is going off and doing at least 13000 ms of work (well, "delay")! ISRs should be short and fast. When they happen, they interrupt whatever code is running at the time and, in the absence of any hardware/software debounce mechanism, are likely to fire many times per button press (put a scope on that button). That means your ISR routine might be entered many times before it ever exits from the first call, but even that depends on pin configurations we can only guess at, because the relevant code is missing from your post.
Normally, you'd have a main loop that does whatever work needs to happen, and the ISRs simply signal state changes via flags, counters or enumerations. When the main loop detects a state change, it calls whatever function(s) handle that change. In your case, it probably needs to check the current time against the last time the button was pressed and verify that a minimum period has elapsed (500 ms is usually good enough on a pin with a reasonable pull-up). If not enough time has passed, it resets the flag; otherwise it does the needed work.
See page 72 of the device spec and take note that there are multiple interrupt sources, and the ISR is responsible for determining which source caused it to fire. Your code doesn't look at the interrupt flags, nor does it clear the previous interrupt before exit, so you're never going to see more than one interrupt from any particular source.
With a little bit of searching, you should be able to find free code written to work on your specific PIC chip that handles button debounce. I recommend that you find and study that code. It will make for a very good starting point for you to learn and evolve your project from.
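A bare-bones sketch of that flag-plus-timestamp pattern (generic C, not CCS-specific; the 1 ms tick source and all the names here are assumptions, not taken from your project):

#include <stdint.h>
#include <stdbool.h>

static volatile bool     g_button_flag;   /* set by the IOC ISR */
static volatile uint32_t g_ms_ticks;      /* incremented by a 1 ms timer ISR */

void main_loop(void)
{
    uint32_t last_press = 0;

    for (;;)
    {
        if (g_button_flag)
        {
            g_button_flag = false;
            /* on an 8-bit PIC, guard multi-byte reads of g_ms_ticks against the ISR */
            if (g_ms_ticks - last_press >= 500u)   /* 500 ms lockout */
            {
                last_press = g_ms_ticks;
                /* handle the (debounced) button press here */
            }
        }
        /* ... other work ... */
    }
}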
Looks like your hangup is that you want the PIC to be sleeping until the button is pressed. I would do something like this, using CCS C Compiler:
#include <16f1703.h>
#use delay(int=8MHz)
#use fast_io(A)

#include <stdbool.h>
#include <stdint.h>

bool    g_isr_ioc_flag  = 0;
uint8_t g_isr_ioc_porta = 0;

#int_ioc
void isr_ioc(void)
{
    if (!g_isr_ioc_flag)    // only trap the first IOC
    {
        g_isr_ioc_flag  = 1;
        g_isr_ioc_porta = input_a();
    }
}

void main(void)
{
    uint8_t debounced_a;

    set_tris_a(2);
    port_a_pullups(0x02);
    enable_interrupts(INT_IOC_A1);
    enable_interrupts(INT_IOC_A1_H2L);
    enable_interrupts(GLOBAL);

    for (;;)
    {
        sleep();

        if (g_isr_ioc_flag)
        {
            // cheap debounce: a bit in debounced_a is set if the pin has
            // been low for at least 72 ms.
            delay_ms(72);
            debounced_a = ~g_isr_ioc_porta & ~input_a();
            g_isr_ioc_flag = 0;
            input_a();      // clear IOC flags left by a race condition

            if (bit_test(debounced_a, 1))
            {
                // TODO: user code - handle RA1 press
            }
        }
    }
}
Normally a debounce routine would have had to poll the pin to see if it went low to start debounce timing - but in your case the interrupt on change (IOC) ISR does some of that work for us.
The CCS function input_a() automatically clears the relevant IOC flag bits in IOCAF. That means calling input_a() will clear the ISR flag until the next change.
This design lets you handle other wakeup sources in the main loop. Be aware that any other peripheral ISR or the WDT will wake up the PIC.

Contiki delay in seconds

I am trying to develop a piece of Contiki code in which I need to wait three seconds for a transducer output. Although this may sound quite un-transducer-like, at development time I want to simulate the behavior at human-readable speeds, hence I need to set the timer to, say, 3 seconds.
The Contiki timer library is pretty well documented and has a good series of examples which cover creating, setting and resetting a timer. However, if I have code like the following:
struct timer transducerOutputWaitTimer;
bool if_blk_executed = false;

timer_set(&transducerOutputWaitTimer, 3 * CLOCK_SECOND);

if (timer_expired(&transducerOutputWaitTimer)) {
    if_blk_executed = true;
    // do something
}

if (if_blk_executed)
    printf("Sunrise");
else
    printf("It is not dawned yet");
Now the expiry is not immediately triggered after the timer is set. So the if block is never executed. Effectively it will never dawn.
Now there are two ways in which I can get the system to wait. One, by adding a while loop on the timer like this:
while (!timer_expired(&transducerOutputWaitTimer)) {}
//do something
or
cpu_delay_usecs(mytimerdur_in_secs * 1000000);
//do something
I do not see either approach as elegant. One wastes CPU cycles, and the other forces an unnecessarily large computation.
Is there any better way? I know that clock
Attached to this question are the following two:
How can I trigger one Contiki protothread from another? If I can do this, I can have the interrupt that produces the transducer output invoke a process, from which I can trigger the etimer and process events.
What exactly does the CPU delay mean? Does this mean that the entire CPU clock cycles get held back for the given duration? If yes, how are other processes running currently in the system affected?
Updates
Update 1: the while method did not work. The code possibly went into an infinite loop.
Update 2: I tried with a clock_delay(3*CLOCK_SECOND) approach after setting my timer. It worked. However, in this case, why do I need the timer method at all?
Update 3 (This changes the context of the answer, hence adding this comment as per the suggestion)
My timer needs to be used outside a process, in a different void function. In that case, I need to use the timer library rather than the etimer (which is specific to a process). So how do I get my method to wait for the desired time in such cases?
There are several Contiki abstractions built on top of the timer library - there is usually no need to use the struct timer structures directly. Instead, there are event timers (struct etimer), which play well together with Contiki processes, and callback timers (struct ctimer), both of which internally use struct timer. For a less RAM-hungry option with a more limited API there are second timers (struct stimer). Finally, there is a single real-time timer in the system (struct rtimer).
An example event timer usage: set and wait for 3 seconds:
static struct etimer timer;
etimer_set(&timer, 3 * CLOCK_SECOND);
PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
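Since your Update 3 says the wait has to happen outside a process, a callback timer is usually the better fit there. A minimal sketch (the callback name and the NULL opaque pointer are my own choices, not from your code):

#include <stdio.h>
#include "sys/ctimer.h"

static struct ctimer transducer_ctimer;

static void transducer_timeout(void *ptr)
{
    /* runs roughly 3 seconds after ctimer_set(), in Contiki's normal context */
    printf("Sunrise\n");
}

void start_transducer_wait(void)
{
    ctimer_set(&transducer_ctimer, 3 * CLOCK_SECOND, transducer_timeout, NULL);
}

The callback fires without blocking anything in between, so the rest of the system keeps running while you wait.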
How can I trigger one Contiki protothread from another?
There is process_poll - see the process API documentation for more examples.
What exactly does the CPU delay mean?
It's a form of busy waiting. Don't use the delay functions or other forms of busy waiting for delays longer than microseconds: it's not energy efficient, it does not allow other processes to run, and in the worst case the watchdog might expire.

Digital Clock program in C - use interrupt?

I am writing a monitoring program for a computer cluster that displays a lot of data onto an LCD screen. As part of the display, I would like to have a digital clock running showing the current date, hour, minute, and second. The problem is, I have a bunch of tasks going on in a big loop (ping requests, A/D conversions, file scanning) and I just don't have time to update the clock during the loop.
I'm using C under Linux (Debian).
Any suggestions on how I should go about solving this? I was thinking of maybe using an interrupt to update the clock every second, but how does one go about doing that under Linux? I've only really used interrupts with microcontrollers before now.
You need to run some code in parallel to do that. The best way I can think of for this application is to use threads, via pthread_create(), with a function like this:
void *clock_routine(void *args) {
    while (1) {
        // update clock
    }
}
Call it similarly to this, in your main function:
pthread_t clock_tid; // Clock thread handle
int rc = pthread_create(&clock_tid, NULL, clock_routine, NULL);
And compile the file with the -lpthread flag.
The code in clock_routine will run in parallel to your other code, and since it's only a clock, you don't have any dependency/synchronization issues to deal with.
P.S. (kinda useless, but here goes): I'm not sure, but using a sleep function inside your while(1) loop, to sleep for a short interval, "might" improve your program, since you don't need a hard real-time clock update, just reasonable accuracy. Constantly updating the clock without sleeping "might" be considered busy-waiting. If anyone knows more on this, please correct me.
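For completeness, here is a rough, self-contained sketch of that approach on Linux; the LCD update is left as a printf placeholder since the actual display API isn't shown in the question:

// Sketch of a clock thread: wakes once per second, formats the current
// date and time, and hands it to whatever updates the LCD (placeholder).
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

void *clock_routine(void *args)
{
    char text[64];

    for (;;) {
        time_t now = time(NULL);
        struct tm tm_now;
        localtime_r(&now, &tm_now);                    // thread-safe conversion
        strftime(text, sizeof text, "%Y-%m-%d %H:%M:%S", &tm_now);

        // TODO: replace with the call that writes 'text' to the LCD
        printf("\r%s", text);
        fflush(stdout);

        sleep(1);                                      // one update per second
    }
    return NULL;
}

int main(void)
{
    pthread_t clock_tid;
    if (pthread_create(&clock_tid, NULL, clock_routine, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }

    // ... the big monitoring loop would go here ...
    pthread_join(clock_tid, NULL);
    return 0;
}

Build with gcc -pthread (or the -lpthread flag mentioned above); sleeping for a whole second is plenty, since the display only resolves to seconds anyway.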
