I want to define timing in a for loop:
int count;
int forfunctime;

for (count = 0; count < 5; count++)
{
    output_b(sutun[count]);
    output_c(UC[count]);
    delay_ms(1);
}
int count;
int forfunctime;

for (forfunctime = 0; forfunctime < 100; ++forfunctime)
{
    for (count = 0; count < 5; count++)
    {
        output_b(sutun[count]);
        output_c(UC[count]);
        delay_ms(1);
    }
}
In the SECOND code I can get a delay that depends on the instruction processing time (MIPS) by wrapping the for loop in another loop, but is there a more precise way?
My goal is to run the for loop for a set amount of time.
edit: for those who will use this information later: when programming a PIC, we use a for loop to scan the rows and columns of a matrix display, but if we want to keep this loop running for a certain period of time, we need to use a timer.
Since I don't know which PIC you're using, I can't answer this question precisely, but I can draw you the general map.
What you need is a timer. There are timers on almost every microcontroller, but it is very likely that you will have to program one manually, generally by setting some bits in memory. Here is an example of how to program a timer.
The first and most naive solution is to set up one of your microcontroller's timers and read its value each time you come out of your inner loop, like below:
setTimer(); // your work, not a predefined function

while (readTimer() < yourTimeConstant) { // again, your work, not a predefined function
    for (count = 0; count < 5; count++) {
        output_b(sutun[count]);
        output_c(UC[count]);
    }
}
The second and cleaner solution would be to use events if your microcontroller supports them.
You can save the current time just before entering the loop and check at every loop iteration whether 10 seconds have already passed, but that actually isn't the best way to do it. If you're working on a Windows system, check waitable timer objects; for Linux, check alarm.
I want to run the for loop written in the first code for 10 seconds.
Microcontrollers generally provide timers that you can use instead of delay loops. Timers have at least two advantages: 1) they make it easy to wait a specific amount of time, and 2) unlike delay functions, they don't prevent the controller from doing other things. With a timer, you can pretty much say "let me know when 10 seconds have passed."
Check the docs for your particular chip to see what timers are available and how to use them, but here's a good overview of how to use timers on PIC controllers.
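As a rough illustration of the idea (not specific to any particular PIC, and the timer setup itself is device-specific and omitted), here is a minimal sketch: a hardware timer is assumed to call timer_isr() every 1 ms, and millis_elapsed and scan_for_ten_seconds() are hypothetical names for this example. The matrix-scan loop from the question keeps running until 10 seconds' worth of ticks have been counted.

/* Hypothetical sketch: a 1 ms timer interrupt increments a tick counter,
   and the main loop keeps scanning the display until 10 000 ticks (10 s)
   have passed. */

volatile unsigned long millis_elapsed = 0;

void timer_isr(void)        /* hook this to a 1 ms hardware timer interrupt */
{
    millis_elapsed++;       /* one tick = 1 ms */
}

void scan_for_ten_seconds(void)
{
    int count;

    millis_elapsed = 0;                 /* restart the stopwatch */
    while (millis_elapsed < 10000)      /* 10 000 ms = 10 s */
    {
        for (count = 0; count < 5; count++)
        {
            output_b(sutun[count]);
            output_c(UC[count]);
            delay_ms(1);
        }
    }
}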
In my application I need to write a function in C that will provide a specific time delay in nanoseconds. This delay must be done in software as I don't have any hardware timers left in my AVR MCU. My problem is that I would like to be able to set the value in nanoseconds. My MCU clock is 20 MHz (50 ns period). I thought a quick "for" loop, like:
for (n=0; n<value; n++)
but that won't take into account how many cycles are added each time around the loop when compiled. Has anyone got any suggestions? I really don't want to write the code in assembler.
You give us too little information, by the way, but I think I can answer without it; it just makes the answer long. Let's start with the easier problem: your action needs to run less often than the most frequent ISR. For example, you need to send a byte every 1 s but your ISR executes every 1 ms. In short, you need to send the byte once every 1000 ISR executions, so you keep a counter that increments on every ISR, and when it reaches 1000 you send the byte and reset the counter to 0.
volatile unsigned int cnt = 0;  /* incremented once per ISR execution */

ISR()
{
    cnt++;
    if (cnt >= 1000)    /* 1000 x 1 ms = 1 s */
    {
        execute(Z);     /* your action, e.g. send the byte */
        cnt = 0;
    }
}
When you have the opposite problem, where the ISR runs less often than your actions need to, then I would argue for redesigning your use of timers. You should make the ISR execute faster and then divide the time down by counting executed ISRs, as described above. This was mentioned in the comments.
My suggestion is that you rethink the way you use timers.
Since you are using an AVR, you should look into using the AVR-Libc delay functions, _delay_us and _delay_ms, which are documented here:
https://www.nongnu.org/avr-libc/user-manual/group__util__delay.html
They are standard in the context of AVRs, but not standard for all C environments in general.
Some example code to get you started:
#define F_CPU 20000000

#include <util/delay.h>

int main() {
    while (1) {
        _delay_us(0.05);
    }
}
Note that even though the _delay_us and _delay_ms functions each take a double as an argument, all floating point arithmetic is done at compile time if possible in order to produce efficient code for your delay.
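Building on that: since the argument needs to be a compile-time constant for the arithmetic to be folded away, a delay whose length is only decided at run time is usually built by looping over a constant one-unit delay. A minimal sketch (the helper name is my own, not part of avr-libc):

/* Hypothetical helper: delay for a number of milliseconds decided at
   run time by looping over a constant 1 ms delay, which the compiler
   can still reduce to efficient cycle-counted code. */
#define F_CPU 20000000UL
#include <util/delay.h>

static void delay_ms_runtime(unsigned int ms)
{
    while (ms--) {
        _delay_ms(1);   /* constant argument: computed at compile time */
    }
}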
I asked this question on the EE forum. You guys on StackOverflow know more about coding than we do on EE, so maybe you can give more detailed information about this :)
When I learned about microcontrollers, teachers taught me to always end the code with while(1); with no code inside that loop.
This was to be sure that the software gets "stuck" there, so that interrupts keep working. When I asked them if it was possible to put some code in this infinite loop, they told me it was a bad idea. Knowing that, I now try my best to keep this loop empty.
I now need to implement a finite state machine in a microcontroller. At first view, it seems that this code belongs in that loop, as it makes coding easier.
Is that a good idea? What are the pros and cons?
This is what I plan to do :
void main(void)
{
    // init phase
    while (1)
    {
        switch (current_State)
        {
            case 1:
                if (...)
                {
                    current_State = 2;
                }
                else if (...)
                {
                    current_State = 3;
                }
                else
                {
                    current_State = 4;
                }
                break;

            case 2:
                if (...)
                {
                    current_State = 3;
                }
                else if (...)
                {
                    current_State = 1;
                }
                else
                {
                    current_State = 5;
                }
                break;
        }
    }
}
Instead of:
void main(void)
{
    // init phase
    while (1);
}
and manage the FSM with interrupts.
It is like the "return from all functions in one place" rule, or other such habits. There is one type of design where you might want to do this: one that is purely interrupt/event based. There are products that go completely the other way, polled and not event driven, and everything in between.
What matters is doing your system engineering; that's it, end of story. Interrupts add complication and risk; they have a higher price than not using them. Making every design interrupt driven by default is automatically a bad decision; it simply means no effort was put into the design, the requirements, the risks, etc.
Ideally you want most of your code in the main loop, and you want your interrupts lean and mean in order to keep latency down for other time-critical tasks. Not all MCUs have a sophisticated interrupt priority system that would allow you to burn a lot of time in handlers, or to put all of your application in them. These are inputs into your system engineering and may help choose the MCU, but here again you are adding risk.
You have to ask yourself what tasks your MCU has to do; what latency, if any, there is for each task from when an event happens until it has to start responding and until it has to finish; and, per event/task, what portion of it, if any, can be deferred. Can any task be interrupted while it is running? Can there be a gap in time? These are all the questions you would ask for a hardware design, or a CPLD or FPGA design, except that there you have real parallelism.
What you are likely to end up with in real-world solutions is some portion in interrupt handlers and some portion in the main (infinite) loop, with the main loop polling breadcrumbs left by the interrupts and/or directly polling status registers to know what to do during the loop. If/when you get to where you need to be real time, you can still use the main super loop; your real-time response comes from the possible paths through the loop and the worst-case time for any of those paths.
Most of the time you are not going to need to do this much work. Maybe some interrupts, maybe some polling, and a main loop doing some percentage of the work.
As you should know from the EE world, if a teacher or anyone else says there is one and only one way to do something and everything else is by definition wrong... time to find a new teacher, or pretend to drink the kool-aid, pass the class, and move on with your life. Also note that the classroom experience is not the real world. There are so many things that can go wrong with MCU development that you are really in a controlled sandbox with, ideally, only a few variables you can play with, so that you don't have to spend years getting through a few-month class. Some percentage of the rules they state in class are there to get you through the class and/or to get the teacher through the class; it's easier to grade papers if you tell folks a function can't be bigger than X, or no gotos, or whatever. The first thing you should do when the class is over, or add to your lifetime bucket list, is to question all of these rules. Research and try things on your own, fall into the traps, and dig out.
When doing embedded programming, one commonly used idiom is the "super loop": an infinite loop that begins after initialization is complete and that dispatches the separate components of your program as they need to run. Under this paradigm, you could run the finite state machine within the super loop as you're suggesting, and continue to run the hardware management functions from the interrupt context as it sounds like you're already doing. One of the disadvantages of doing this is that your processor will always be in a high power-draw state: since you're always running that loop, the processor can never go to sleep. This would actually also be a problem in any of the code you had written, however; even an empty infinite while loop will keep the processor running. The usual solution is to end your while loop with a series of instructions to put the processor into a low-power state (completely architecture dependent) that will wake it when an interrupt comes through to be processed. If there are things happening in the FSM that are not driven by any interrupts, a commonly used approach to keep the processor waking up at periodic intervals is to initialize a timer to interrupt on a regular basis so that your main loop continues execution.
One other thing to note, if you were previously executing all of your code from the interrupt context: interrupt service routines (ISRs) really should be as short as possible, because they literally "interrupt" the main execution of the program, which may cause unintended side effects if they take too long. The normal way to handle this is to have handlers in your super loop that are simply signalled by the ISR, so that the bulk of whatever processing needs to be done happens in the main context when there is time, rather than interrupting a potentially time-critical section of your main context.
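As an illustration of both points (sleeping at the end of the super loop and keeping the ISR down to a flag), here is a minimal AVR-flavoured sketch; the timer vector name, the tick period, and handle_fsm_step() are assumptions made for the example, not anything from the question, and the timer configuration itself is device-specific and omitted.

#include <stdint.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

volatile uint8_t tick_flag = 0;     /* breadcrumb left by the ISR */

/* Timer overflow ISR: do the absolute minimum, just leave a flag.
   The exact vector name depends on the device and timer used. */
ISR(TIMER0_OVF_vect)
{
    tick_flag = 1;
}

/* Hypothetical FSM step, run in the main context, not in the ISR. */
static void handle_fsm_step(void)
{
    /* advance current_State here */
}

int main(void)
{
    /* init phase: configure the timer, I/O, etc. (device specific, omitted) */
    set_sleep_mode(SLEEP_MODE_IDLE);
    sei();                          /* enable interrupts */

    while (1)                       /* the "super loop" */
    {
        if (tick_flag)              /* handler for the ISR's breadcrumb */
        {
            tick_flag = 0;
            handle_fsm_step();
        }
        sleep_mode();               /* low-power wait until the next interrupt */
    }
}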
What you should implement is your choice, and it affects how easy your code is to debug.
There are times when it is right to use the while(1); statement at the end of the code, if your uC handles everything through interrupts (ISRs), while in other applications the uC will run with code inside an infinite loop (called the polling method):
while (1)
{
    // code here;
}
And in some other applications, you might mix the ISR method with the polling method.
By "ease of debugging" I mean that using only the ISR method (putting the while(1); statement at the end) will give you a hard time debugging your code, since when an interrupt event triggers, the debugger of choice will not give you step-by-step event and register reading to follow. Also, please note that writing completely ISR-based code is not recommended, since ISRs should do minimal work (such as incrementing a counter or raising/clearing a flag) and exit swiftly.
It belongs in one thread that executes it in response to input messages from a producer-consumer queue. All the interrupts etc. fire input to the queue and the thread processes them through its FSM serially.
It's the only way I've found to avoid undebuggable messes whilst retaining the low latency and efficient CPU use of interrupt-driven I/O.
'while(1);' UGH!
I am using microC to program a PIC16F877A to operate motors and solenoids.
I have some functions making the motors move at different time intervals, e.g. motor1 moves for 100 ms, stops, moves again for 100 ms, etc. for 4 loops; motor2 does the same with 200 ms, and so on. I want these functions to start at the same time.
Think about a robot where you want to move its right hand up and down every 200 ms for a total of 2 minutes, and its left hand up and down every 400 ms, again for a total of 2 minutes. Both movements should start at the same time.
So basically I want to start something like :
start:
solenoid1 runs functionQuarter(moves up-down every x time) total like 2 mins
solenoid2 runs functionHalf(moves up-down every 2x time) total like 2 mins
stop
Is it possible to do this with microC for this PIC, and how can I call 2 or more functions so they start at the same time?
Why do you think you need threads? You know exactly when an operation should occur, so perform that operation exactly at that time. All you need is an appropriate scheduling system that helps you keep track of the operations. Compared to threads, you don't have the problem of unexpected scheduling, probably lower latency, no need for inter-thread synchronization.
Consider this sketch:
// this task structure says at what time to set
// an output to a certain value
struct task {
    time_type when;
    output_type output;
    value_type value;
};

// queue of pending tasks, kept sorted by 'when'
struct task_queue {
    struct task** tasks;
    size_t count;
};

void task_queue_push(struct task_queue* q, struct task* t);
struct task* task_queue_front(struct task_queue* q);
struct task* task_queue_pop(struct task_queue* q);
Now, in a loop, you keep looking at the first element in the queue and just sleep() until the start of the next task. Of course, that means you need to keep those tasks sorted by their start time! If multiple tasks start at the same time, you run them both; the only limit to "at the same time" is the execution time of each task. If necessary, as part of handling one task, you can create one or more other tasks. As a variation, you can also use a callback instead of just the output and value infos, which assume you only want to set some digital outputs.
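A minimal sketch of that dispatch loop might look like the following; now(), sleep_until(), and set_output() are hypothetical helpers you would implement on top of a hardware timer and your output code, not existing library calls:

/* Hypothetical dispatch loop for the task queue sketched above. */
time_type now(void);                            /* current time, same unit as task.when */
void sleep_until(time_type t);                  /* idle (or low-power wait) until time t */
void set_output(output_type o, value_type v);   /* drive the actual pin/solenoid */

void run_schedule(struct task_queue* q)
{
    while (q->count > 0) {
        struct task* next = task_queue_front(q);    /* earliest task: queue kept sorted */

        if (now() < next->when)
            sleep_until(next->when);                /* nothing to do until it is due */

        /* run every task that is due "at the same time" */
        while (q->count > 0 && task_queue_front(q)->when <= now()) {
            struct task* t = task_queue_pop(q);
            set_output(t->output, t->value);        /* e.g. energise/release a solenoid */
            /* a handler may push follow-up tasks here, e.g. the next toggle */
        }
    }
}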
There isn't any solution for the PIC16 series (it is far too small), but there is FreeRTOS, made specifically for microcontrollers, and there is a port for the PIC18 (and a few others); check out the supported device list.
Although FreeRTOS is "free" to obtain and to use in personal projects, I would recommend you purchase one of their books to help with implementation. There is a free API reference on their site and demo code as well, but it is easier to understand with the book (please note I am not tied to FreeRTOS in any way; I used it in projects with Atmel controllers and found it very easy to use).
With FreeRTOS you create a task (you define your solenoid control function), then you set the priority and the delay, and then you start the kernel. It's actually really straightforward.
Again, this won't work with your specific chip, the PIC16, but if you can try another one, FreeRTOS is a very well known and pretty easy solution.
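As a rough illustration of that structure (assuming a port that supports your chip; the task names, periods, and the pin-toggling comments are made up for the example):

#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical solenoid control tasks: each toggles its own output at its
   own period, and the kernel interleaves them so both effectively start
   and run "at the same time". */
static void solenoid1_task(void *params)
{
    (void)params;
    for (;;) {
        /* toggle solenoid 1 output here (e.g. drive a port pin) */
        vTaskDelay(pdMS_TO_TICKS(200));     /* then wait 200 ms */
    }
}

static void solenoid2_task(void *params)
{
    (void)params;
    for (;;) {
        /* toggle solenoid 2 output here */
        vTaskDelay(pdMS_TO_TICKS(400));     /* then wait 400 ms */
    }
}

int main(void)
{
    /* hardware init here */
    xTaskCreate(solenoid1_task, "SOL1", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(solenoid2_task, "SOL2", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();                  /* never returns if all goes well */
    for (;;);
}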
I have a high voltage control VI and I'd like it to increase the output voltage by a user-set increment every x number of seconds. At the moment I have a timed sequence outside the main while loop, but it never starts. When it's inside the while loop, it delays all other functions. I'm afraid I'm such a beginner at this that I can't post a picture yet. All that needs to happen is an increase in voltage by x amount every y seconds. Is there a way to fix this or a better way of doing it? I'm open to suggestions! Thanks!
Eric,
Without seeing the code, I am guessing that you have the two loops in series (i.e. the start of the while loop depends upon an output of the timed loop; this is the only way one loop might block another). If this is the case, then decouple the two loops so that they are not directly dependent on each other.
If the while loop is dependent on user input, then use an event structure and then pass the new parameters via a queue (this would be your producer-consumer pattern).
Also, get rid of the timed loop and replace it with a while loop. The timed loop is only simulated on non-real-time machines, and it can disrupt deterministic features of a real-time system. Given that you are looking to send out a signal on the order of seconds, it is absolutely not necessary.
Anyways, if I am off base, please throw the code in question up so that we can review it.
Cheers, Matt
Can someone please tell me how this function works? I'm using it in code and have an idea how it works, but I'm not 100% sure exactly. I understand the concept of an input variable N counting down, but how the heck does it work? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), do I have to "zero" the function if I used it somewhere else?
Reference: MILLISEC is a constant defined by Fcy/10000, or system clock/10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 seconds delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;

    while (N--)
        for (j = 0; j < MILLISEC; j++);
}
This is referred to as busy waiting, a concept that just burns some CPU cycles thus "waiting" by keeping the CPU "busy" doing empty loops. You don't need to reset the function, it will do the same if called repeatedly.
If you call it with N=3, it will repeat the while loop 3 times, every time counting with j from 0 to MILLISEC, which is supposedly a constant that depends on the CPU clock.
The original author of the code has timed it and looked at the generated assembler to get the exact number of instructions executed per millisecond, and has configured the constant MILLISEC so that the inner for loop busy-waits for exactly that long.
The input parameter N is then simply the number of milliseconds the caller wants to wait, and the number of times the for loop is executed.
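For example (the FCY value here is an assumption made purely to show the arithmetic; use whatever matches your clock and PLL settings):

/* Usage sketch. FCY is assumed to be 20 MHz for illustration only. */
#define FCY      20000000UL         /* instruction clock (assumed)      */
#define MILLISEC (FCY / 10000UL)    /* loop iterations per millisecond  */

/* ... DelayNmSec() as defined above ... */

void example(void)
{
    DelayNmSec(1);      /* roughly 1 ms   */
    DelayNmSec(500);    /* roughly 500 ms */
    DelayNmSec(10000);  /* roughly 10 s; no "reset" needed between calls */
}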
The code will break if:
- it is used on a different or faster microcontroller (depending on how FCY is maintained), or
- the optimization level of the C compiler is changed, or
- the C compiler version is changed (as it may generate different code),
so, if the guy who wrote it is clever, there may be a calibration program which defines and configures the MILLISEC constant.
This is what is known as a busy wait in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems in that on different processors with different speeds, the computation needs to be adjusted. Old games used this approach and I remember a simulation using this busy wait approach that targeted an old 8086 type of processor to cause an animation to move smoothly. When the game was used on a Pentium processor PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what the animation was.
This sort of busy wait means that in the thread running, the thread is sitting in a computation loop counting down for the number of milliseconds. The result is that the thread does not do anything else other than counting down.
If the operating system is not a preemptive multi-tasking OS, then nothing else will run until the count down completes which may cause problems in other threads and tasks.
If the operating system is preemptive multi-tasking the resulting delays will have a variability as control is switched to some other thread for some period of time before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.