Increment an output signal in LabVIEW - loops

I have a high voltage control VI and I'd like it to increase the output voltage by a user-set increment every x number of seconds. At the moment I have a timed sequence outside the main while loop, but it never starts; when it's inside the while loop it delays all other functions. I'm afraid I'm such a beginner at this that I can't post a picture yet. All that needs to happen is an increase in voltage by x amount every y seconds. Is there a way to fix this, or a better way of doing it? I'm open to suggestions! Thanks!

Eric,
Without seeing the code I am guessing that you have the two loops in series (i.e. the starting of the while loop depends upon an output of the timed loop; this is the only way that one loop might block another). If this is the case, then decouple the two loops so that they are not directly dependent on each other.
If the while loop is dependent on user input, then use an event structure and then pass the new parameters via a queue (this would be your producer-consumer pattern).
Also, get rid of the timed loop and replace it with a while loop. The timed loop is only simulated on non-real-time machines, and it can disrupt deterministic features of a real-time system. Given that you are looking to send out a signal on the order of seconds, it is absolutely not necessary.
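For the timing itself, the pattern is an elapsed-time check inside the main loop rather than a wait that blocks it. LabVIEW is graphical, so this is only the logic, sketched in C with placeholder names:

#include <stdbool.h>
#include <time.h>

/* Each pass of the loop checks elapsed time and steps the output
   when y seconds have passed, without ever blocking. */
void control_loop(double step_v, double period_s, volatile bool *stop)
{
    double voltage = 0.0;
    time_t last = time(NULL);

    while (!*stop) {
        /* ...handle UI events, read instruments, etc. ... */
        if (difftime(time(NULL), last) >= period_s) {
            voltage += step_v;      /* increase by x every y seconds */
            last = time(NULL);
            /* write voltage to the HV output here */
        }
    }
}

In LabVIEW terms, that is a shift register holding the last update time plus an elapsed-time check each iteration, so the loop never stalls.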
Anyways, if I am off base, please throw the code in question up so that we can review it.
Cheers, Matt

Related

Set a time for the for loop

I want to define timing in a for loop:
int count;
int forfunctime;

for (count = 0; count < 5; count++)
{
    output_b(sutun[count]);
    output_c(UC[count]);
    delay_ms(1);
}
int count;
int forfunctime;

for (forfunctime = 0; forfunctime < 100; ++forfunctime)
{
    for (count = 0; count < 5; count++)
    {
        output_b(sutun[count]);
        output_c(UC[count]);
        delay_ms(1);
    }
}
In the second code I am able to get a delay by enclosing the for loop in an outer loop and relying on the MIPS instruction processing time, but is there a more precise way?
My goal is to set a time for the for loop.
edit: for those who will use this information later: when programming a PIC, we use a for loop to scan the rows and columns of a matrix display, but if we want to keep this loop active for a certain period of time, we need to use timers.
Since I don't know which PIC you're using, I can't answer this question precisely, but I can draw you the general map.
What you need is a timer. There are timers on almost every microcontroller, but it is very likely that you will have to program them manually, generally by setting some bits in memory. Here is an example of how to program a timer.
The first and most naive solution is to set one of your microcontroller's timers and read its value each time you're out of your loop, like below:
setTimer(); // your work, not a predefined function
while (readTimer() < yourTimeConstant) { // again, your work, not a predefined function
    for (count = 0; count < 5; count++) {
        output_b(sutun[count]);
        output_c(UC[count]);
    }
}
The second and cleaner solution would be to use events if your microcontroller supports them.
You can save the current time just before entering the loop and check at every loop iteration whether 10 seconds have already passed, but actually that isn't the best way to do it. If you're working on a Windows system, check waitable timer objects; for Linux, check alarm.
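For the Linux case, a minimal sketch with alarm (assuming a hosted Linux target, unlike the bare PIC in the question):

#include <signal.h>
#include <unistd.h>

volatile sig_atomic_t expired = 0;

static void on_alarm(int sig) { (void)sig; expired = 1; }

int main(void)
{
    signal(SIGALRM, on_alarm);
    alarm(10);                  /* deliver SIGALRM after 10 seconds */
    while (!expired) {
        /* ... the 5-step display loop from the question ... */
    }
    return 0;
}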
I want to run the for loop written in the first code for 10 seconds.
Microcontrollers generally provide timers that you can use instead of delay loops. Timers have at least two advantages: 1) they make it easy to wait a specific amount of time, and 2) unlike delay functions, they don't prevent the controller from doing other things. With a timer, you can pretty much say "let me know when 10 seconds have passed."
Check the docs for your particular chip to see what timers are available and how to use them, but here's a good overview of how to use timers on PIC controllers.
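For instance, with the CCS compiler that the question's output_b/delay_ms calls suggest, a Timer0 overflow ISR can count ticks while the display loop keeps scanning. This is only a rough sketch: TICKS_PER_SECOND depends on your clock and prescaler and is an assumption here, and the setup_timer_0/enable_interrupts setup is omitted.

#define TICKS_PER_SECOND 76          /* Timer0 overflows per second: depends on clock/prescaler (assumption) */

volatile unsigned long ticks = 0;    /* incremented on every Timer0 overflow */

#int_timer0
void timer0_isr(void)
{
    ticks++;
}

void run_display(unsigned long seconds)
{
    int count;
    unsigned long end = seconds * TICKS_PER_SECOND;

    ticks = 0;                       /* timer setup and enable_interrupts() assumed done */
    while (ticks < end) {            /* the timer, not the loop body, decides when to stop */
        for (count = 0; count < 5; count++) {
            output_b(sutun[count]);
            output_c(UC[count]);
            delay_ms(1);
        }
    }
}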

Is there an algorithm to schedule overlapping turn-on/turn-off commands?

The problem I want to solve is as follows:
Each task (the green bar) represents a pair of turn-on (green-dashed-line) and turn-off (red-dashed-line) commands. Tasks may or may not overlap with one another. This is a real-time application where we don't know when, or if, another task is coming. The goal is to turn the valve on if it's not already on, and to avoid turning the valve off prematurely.
What I mean by not turning off the valve prematurely is, for example: if we turn off the valve at time off-1, that's wrong, because the valve should still stay on at that point in time; the correct action is to turn off the valve at time off-4.
Regarding the implementation details, each task is an async task (via the GLib async API). I simulate waiting for the task duration with the sleep function; a timer is probably more appropriate. Right now the tasks run independently and there is no coordination between them, thus the valve is being turned off prematurely.
I searched around for a similar problem, but the closest I found is interval scheduling, whose goal is different. Has anyone encountered a similar problem before and can give me some pointers on how to solve it?
It seems like this could be solved with a simple counter. You increment the counter for each opening command, and decrement it for each closing command - when the count reaches zero, you close the valve.
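A minimal sketch of that counter (valve_open/valve_close are placeholders; with GLib async tasks the updates must happen on one thread or under a mutex):

static int active = 0;    /* number of tasks currently requiring the valve */

void task_on(void)        /* called for each turn-on command */
{
    if (active++ == 0)
        valve_open();     /* first task: actually open the valve */
}

void task_off(void)       /* called for each turn-off command */
{
    if (--active == 0)
        valve_close();    /* last task finished: now it is safe to close */
}

With this, the earlier turn-off commands only decrement the counter, and only the last one actually closes the valve.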

Determining cause of delay/pause - kernel scheduler etc

System is an embedded Linux/Busybox core on a small embedded board with a web server (Boa) running.
We are seeing some high latency in responses from the web server - sometimes >500ms for no good reason, so I've been digging...
On liberally scattering debug prints throughout the code it seems to come down to the entire process just... stopping for a bit, in a way which I can only assume must be the process/thread being interrupted by another process.
Using print statements and clock_gettime() to calculate the time taken to process a request, I can see the code reach the bottom of a while() loop (parsing input) and print something like "Time so far: 5ms", and then the next line at the top of the loop prints "Time so far: 350ms". All the code does between the bottom of the loop and the first print back at the top is a basic check along the lines of while(position < end); there is nothing complicated that could hold it up.
There's no IO blocking, the data it's parsing has all arrived already, and it's not making any external calls or wandering off into complex functions.
I then looked into whether the kernel scheduler (CFS in our case) might be holding things up. Adding calls to clock() (processor time rather than wall clock) and again calculating the time differences, I can see that the wall-clock delay may run beyond 300ms from one loop iteration to the next, but the reported processor time taken (which seems to have a ~10ms resolution) is more like 50ms.
So that suggests the task scheduler is holding the process up for hundreds of milliseconds at a time. I've checked the scheduler granularity and maximum delay and they're nowhere near 100ms; scheduler latency is set at 6ms, for example.
Any advice on what I can do now to try and track down the problem - identifying processes which could hog the CPU for >100ms, measuring/tracking what the scheduler is doing, etc.?
First you should try running your program under strace to see if there are any system calls holding things up.
If that is ambiguous or does not help, I would suggest you try profiling the kernel. You could try OProfile.
This will create a call graph that you can analyze and see what is happening.
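To sharpen the wall-clock vs processor-time comparison from the question beyond clock()'s ~10ms resolution, clock_gettime can time both per loop pass (a sketch, assuming a POSIX system):

#include <stdio.h>
#include <time.h>

/* Seconds elapsed on the given clock since *start. */
static double since(clockid_t id, const struct timespec *start)
{
    struct timespec now;
    clock_gettime(id, &now);
    return (now.tv_sec - start->tv_sec) + (now.tv_nsec - start->tv_nsec) / 1e9;
}

void timed_pass(void)
{
    struct timespec wall0, cpu0;
    clock_gettime(CLOCK_MONOTONIC, &wall0);           /* wall clock */
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &cpu0);    /* CPU time granted to this thread */

    /* ... one pass of the while(position < end) parsing loop ... */

    double wall = since(CLOCK_MONOTONIC, &wall0);
    double cpu  = since(CLOCK_THREAD_CPUTIME_ID, &cpu0);
    if (wall - cpu > 0.1)                             /* >100 ms spent off-CPU */
        fprintf(stderr, "off-CPU for %.0f ms\n", (wall - cpu) * 1e3);
}

If the gap is real, ftrace's sched_switch events (or perf sched) will show which process was running instead.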

facing trouble while working with MSp430 timer and interrupts

I am trying to make a project with an MSP430G2553. The problem I am facing is with the code. What I have to do is:
I have enabled inputs on one of the pins of the MSP. The timer starts on the rising edge of the input.
It counts up to a certain value stored in TACCR0.
This continues forever.
Now what I have to do here is:
Increment a variable c by 1 each time the value in TACCR0 is reached.
And also do some calculations with the value of the counter stored in the TAR register.
Problem
I am not able to figure out where I should write the code for the calculation with the value in TAR: should I write it in the ISR only, or should I write it in the main code?
Can anybody guide me with this?
P.S. I am writing a question here for the first time, so if more clarity is required please let me know.
It depends on what you want to achieve in the end but, with the info you provide, I guess the simplest and easiest way would be to use an ISR for the appropriate counter and write your code there.
Keep in mind ISRs should be short and quick, so the processor can go on with other tasks. If your calculations are complex or the processing is heavy, I would recommend storing the values in global variables, setting a global flag, and letting the complex calculations be done in the main code through a loop that checks and resets that flag, as in the sketch below.
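A minimal sketch of that flag pattern for the G2553 (assuming a CCS/IAR-style compiler; the timer and input-pin setup from the question is omitted):

#include <msp430.h>

volatile unsigned int c = 0;           /* incremented each time TACCR0 is reached */
volatile unsigned int tar_copy = 0;    /* snapshot of TAR for the main loop       */
volatile unsigned char flag = 0;       /* set by the ISR, cleared by main         */

#pragma vector = TIMER0_A0_VECTOR      /* CCR0 compare interrupt */
__interrupt void timer_a0_isr(void)
{
    c++;                               /* keep the ISR short: count and snapshot only */
    tar_copy = TAR;
    flag = 1;
}

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;          /* stop the watchdog */
    /* ... timer and input-pin setup as described in the question ... */
    __bis_SR_register(GIE);            /* enable interrupts */

    for (;;) {
        if (flag) {
            flag = 0;
            /* heavy calculations on c and tar_copy go here, in main, not in the ISR */
        }
    }
}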
Hope this helps.

Generic Microcontroller Delay Function

Can someone please tell me how this function works? I'm using it in code and have an idea how it works, but I'm not 100% sure exactly. I understand the concept of an input variable N counting down, but how the heck does it work? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), do I have to "zero" the function if I used it somewhere else?
Reference: MILLISEC is a constant defined by Fcy/10000, or system clock/10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 s delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;

    while (N--)
        for (j = 0; j < MILLISEC; j++);
}
This is referred to as busy waiting, a concept that just burns some CPU cycles, thus "waiting" by keeping the CPU "busy" doing empty loops. You don't need to reset the function; it will do the same thing if called repeatedly.
If you call it with N=3, it will repeat the while loop 3 times, every time counting with j from 0 to MILLISEC, which is supposedly a constant that depends on the CPU clock.
The original author of the code has timed and looked at the assembly generated to figure out the exact number of instructions executed per millisecond, and has configured the constant MILLISEC so that the for loop matches that as a busy wait.
The input parameter N is then simply the number of milliseconds the caller wants to wait, and the number of times the for loop is executed.
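Putting it together, usage looks something like this (the FCY value is an assumption; the reference defines MILLISEC as Fcy/10000):

#define FCY      8000000UL          /* instruction clock, e.g. 8 MHz: an assumption */
#define MILLISEC (FCY / 10000UL)    /* per the reference: Fcy / 10000 */

int main(void)
{
    DelayNmSec(250);                /* busy-waits roughly 250 ms */
    DelayNmSec(250);                /* calling again just waits again; nothing to reset */
    return 0;
}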
The code will break if
it is used on a different or faster microcontroller (depending on how FCY is maintained), or
the optimization level of the C compiler is changed, or
the C compiler version is changed (as it may generate different code),
so, if the guy who wrote it was clever, there may be a calibration program which defines and configures the MILLISEC constant.
This is what is known as a busy wait, in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems in that the computation needs to be adjusted for processors with different speeds. Old games used this approach, and I remember a simulation using this busy-wait approach that targeted an old 8086-type processor to make an animation move smoothly. When the game was run on a Pentium PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what it was.
This sort of busy wait means that the running thread sits in a computation loop counting down the number of milliseconds. The result is that the thread does nothing else while counting down.
If the operating system is not a preemptive multitasking OS, then nothing else will run until the countdown completes, which may cause problems in other threads and tasks.
If the operating system is preemptive multitasking, the resulting delays will vary somewhat as control is switched to some other thread for some period of time before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.
