I'm trying to make a LabVIEW program that rolls a die and lights a specific LED according to the number it lands on, but if it lands on a 6 it's supposed to make all the LEDs blink twice. Right now, if it lands on a 6, they only blink once.
Here's the while loop I made:
If false, it's like this:
Here's what the full program looks like, if anyone would like to know:
You are not updating any LEDs inside your loop.
Your code does this if you roll a 6:
The while loop runs 3 times, as fast as possible.
The while loop passes the last value (TRUE) out.
This "TRUE" value enters the "OR" nodes.
The output of the "OR" nodes gets written to the LEDs.
This means: Your LEDs only get updated after your while loop has stopped running.
If you want your LEDs to blink multiple times, you need to update them inside the while loop.
Make sure you understand the concept of Dataflow and make sure your code doesn't have any Race Conditions. (If you don't understand these terms, I recommend you take a course or read a book for LabVIEW beginners)
In addition to JKSH's answer above, note that you'll need to have some timing involved.
As JKSH pointed out, you didn't update the LED inside the loop, but even if you had, without some delay between the updates, you probably wouldn't be able to see the blink because it would happen too quickly.
I mention this because it's possible that you'll get the updating inside the loop correct, but you won't know because it will flash too quickly for you to see.
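Although LabVIEW is graphical, the same logic in C-style pseudocode may make the dataflow issue clearer. This is only a sketch; set_leds(), wait_ms(), ALL_ON, and ALL_OFF are made-up names standing in for your LED terminal writes and a Wait (ms) node:

// Wrong: the loop runs 3 times, but nothing is written to the LEDs
// until after the loop finishes, so at most one change is visible.
value = 0;
for (i = 0; i < 3; i++)
    value = 1;           // no LED update happens here
set_leds(value);         // single update, after the loop

// Right: write the LEDs and wait inside the loop, so each toggle
// both happens and is slow enough to see.
for (i = 0; i < 2; i++)
{
    set_leds(ALL_ON);
    wait_ms(250);        // without a delay the blink is invisible
    set_leds(ALL_OFF);
    wait_ms(250);
}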
I want to define timing in a for loop:
int count;
int forfunctime;

for (count = 0; count < 5; count++)
{
    output_b(sutun[count]);
    output_c(UC[count]);
    delay_ms(1);
}
int count;
int forfunctime;

for (forfunctime = 0; forfunctime < 100; ++forfunctime)
{
    for (count = 0; count < 5; count++)
    {
        output_b(sutun[count]);
        output_c(UC[count]);
        delay_ms(1);
    }
}
In the SECOND code I am able to get a delay by enclosing the for loop in another loop and relying on the instruction processing time (MIPS), but is there a more precise way?
My goal is to run the for loop for a set amount of time.
Edit, for those who use this information later: when programming a PIC, we use a for loop to scan the rows and columns of a matrix display, but if we want to keep this loop active for a certain period of time, we need to use a timer.
Since I don't know which PIC you're using, I can't answer this question precisely, but I can draw you the general map.
What you need is a timer. There are timers on almost every microcontroller, but it is very likely that you will have to program them manually, generally by setting some bits in memory. Here is an example of how to program a timer.
The first and most naive solution is to set one of your microcontroller's timers and read its value each time you come out of your loop, like below:
setTimer();                            // your work, not a predefined function

while (readTimer() < yourTimeConstant) // again, your work, not a predefined function
{
    for (count = 0; count < 5; count++)
    {
        output_b(sutun[count]);
        output_c(UC[count]);
    }
}
The second and cleaner solution would be to use events if your microcontroller supports them.
You can save the current time just before entering the loop and check whether 10 seconds have already passed at every loop iteration, but that actually isn't the best way to do it. If you're working on a Windows system, check waitable timer objects; for Linux, check alarm.
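On Linux, for instance, a sketch with alarm() might look like this (a desktop-side illustration, not PIC code; the loop body is where the display scan from the question would go):

#include <signal.h>
#include <unistd.h>

volatile sig_atomic_t expired = 0;

void on_alarm(int sig)       // runs when SIGALRM arrives
{
    expired = 1;
}

int main(void)
{
    signal(SIGALRM, on_alarm);
    alarm(10);               // ask the kernel for SIGALRM in 10 seconds
    while (!expired)
    {
        // scan the display here
    }
    return 0;
}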
I want to run the for loop written in the first code for 10 seconds.
Microcontrollers generally provide timers that you can use instead of delay loops. Timers have at least two advantages: 1) they make it easy to wait a specific amount of time, and 2) unlike delay functions, they don't prevent the controller from doing other things. With a timer, you can pretty much say "let me know when 10 seconds have passed."
Check the docs for your particular chip to see what timers are available and how to use them, but here's a good overview of how to use timers on PIC controllers.
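As a rough illustration of the pattern (nothing here is tied to a specific PIC: the ISR hookup and TICKS_PER_SECOND are placeholders for your chip's actual timer configuration):

volatile unsigned long ticks = 0;    // incremented by the timer ISR

void timer_isr(void)                 // hooked to a periodic timer interrupt
{
    ticks++;
}

void scan_display_for_10s(void)
{
    int count;
    unsigned long start = ticks;

    // Keep multiplexing the display until 10 seconds' worth of ticks pass.
    while ((ticks - start) < 10UL * TICKS_PER_SECOND)
    {
        for (count = 0; count < 5; count++)
        {
            output_b(sutun[count]);
            output_c(UC[count]);
            delay_ms(1);
        }
    }
}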
I asked this question on the EE forum. You guys on StackOverflow know more about coding than we do on EE, so maybe you can give more detailed information about this :)
When I learned about microcontrollers, teachers taught me to always end the code with while(1); with no code inside that loop.
This was to be sure that the software gets "stuck" there, to keep interrupts working. When I asked them if it was possible to put some code in this infinite loop, they told me it was a bad idea. Knowing that, I now try my best to keep this loop empty.
I now need to implement a finite state machine in a microcontroller. At first view, it seems that the code belongs in this loop. That makes coding easier.
Is that a good idea? What are the pros and cons?
This is what I plan to do:
void main(void)
{
    // init phase
    while (1)
    {
        switch (current_State)
        {
        case 1:
            if (...)
            {
                current_State = 2;
            }
            else if (...)
            {
                current_State = 3;
            }
            else
            {
                current_State = 4;
            }
            break;

        case 2:
            if (...)
            {
                current_State = 3;
            }
            else if (...)
            {
                current_State = 1;
            }
            else
            {
                current_State = 5;
            }
            break;
        }
    }
}
Instead of:
void main(void)
{
    // init phase
    while (1);
}
and manage the FSM with interrupts.
It is like saying all functions should return in one place, or other such habits. There is one type of design where you might want to do this, one that is purely interrupt/event driven. There are products that go completely the other way: polled, and not event driven at all. And anything in between.
What matters is doing your system engineering, that's it, end of story. Interrupts add complication and risk; they have a higher price than not using them. Making any design interrupt driven by default is automatically a bad decision; it simply means there was no effort put into the design, the requirements, the risks, etc.
Ideally you want most of your code in the main loop; you want your interrupts lean and mean in order to keep the latency down for other time-critical tasks. Not all MCUs have a complicated interrupt priority system that would allow you to burn a lot of time or put all of your application in handlers. These are inputs into your system engineering, and may help you choose the MCU, but here again you are adding risk.
You have to ask yourself what tasks your MCU has to do; what latency, if any, there is for each task from when an event happens until it has to start responding and until it has to finish; and, per event/task, what portion of it, if any, can be deferred. Can any be interrupted while doing the task? Can there be a gap in time? These are all the questions you would ask for a hardware design, or a CPLD or FPGA design, except there you have real parallelism.
What you are likely to end up with in real-world solutions is some portion in interrupt handlers and some portion in the main (infinite) loop, with the main loop polling breadcrumbs left by the interrupts and/or directly polling status registers to know what to do during the loop. If/when you get to where you need to be real time, you can still use the main super loop; your real-time response comes from the possible paths through the loop and the worst-case time for any of those paths.
Most of the time you are not going to need to do this much work. Maybe some interrupts, maybe some polling, and a main loop doing some percentage of the work.
As you should know from the EE world, if a teacher or anyone else says there is one and only one way to do something and everything else is by definition wrong... time to find a new teacher, or pretend to drink the kool-aid, pass the class, and move on with your life. Also note that the classroom experience is not the real world. There are so many things that can go wrong with MCU development that you are really in a controlled sandbox, with ideally only a few variables you can play with, so that you don't have to spend years getting through a few-month class. Some percentage of the rules they state in class are there to get you through the class and/or to get the teacher through the class; it's easier to grade papers if you tell folks a function can't be bigger than X, or no gotos, or whatever. The first thing you should do when the class is over, or add to your lifetime bucket list, is to question all of these rules. Research and try things on your own, fall into the traps, and dig out.
When doing embedded programming, a commonly used idiom is the "super loop": an infinite loop, begun after initialization is complete, that dispatches the separate components of your program as they need to run. Under this paradigm, you could run the finite state machine within the super loop as you're suggesting, and continue to run the hardware management functions from the interrupt context, as it sounds like you're already doing.

One of the disadvantages of doing this is that your processor will always be in a high-power-draw state: since you're always running that loop, the processor can never go to sleep. This would actually also be a problem in any of the code you had written, however; even an empty infinite while loop will keep the processor running. The solution is usually to end your while loop with a series of instructions that put the processor into a low-power state (completely architecture dependent) from which an incoming interrupt will wake it. If there are things happening in the FSM that are not driven by any interrupts, a common approach to keep the processor waking up at periodic intervals is to initialize a timer to interrupt on a regular basis so that your main loop continues execution.
One other thing to note, if you were previously executing all of your code from the interrupt context: interrupt service routines (ISRs) really should be as short as possible, because they literally "interrupt" the main execution of the program, which may cause unintended side effects if they take too long. A normal way to handle this is to have handlers in your super loop that are merely signalled by the ISR, so that the bulk of whatever processing needs to be done happens in the main context when there is time, rather than interrupting a potentially time-critical section of your main context.
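A minimal sketch of that arrangement (uart_event, handle_uart(), and enter_low_power_mode() are placeholder names; the actual sleep instruction is architecture dependent, e.g. sleep_mode() on AVR, an LPMx mode on MSP430, WFI on ARM):

volatile unsigned char uart_event = 0;   // breadcrumb left by the ISR

void uart_isr(void)                      // keep the ISR short: just record the event
{
    uart_event = 1;
}

void main(void)
{
    // init phase
    while (1)
    {
        if (uart_event)                  // handler dispatched from the super loop
        {
            uart_event = 0;
            handle_uart();               // bulk of the processing happens here
        }
        enter_low_power_mode();          // sleep until the next interrupt wakes us
    }
}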
What you should implement is your choice, and it affects how easy your code is to debug.
There are times when it is right to use the while(1); statement at the end of the code, if your uC handles everything through interrupts (ISRs). In other applications the uC runs with code inside an infinite loop (called the polling method):
while (1)
{
    // code here;
}
And in some other applications, you might mix the ISR method with the polling method.
Regarding 'ease of debugging': using only the ISR method (putting the while(1); statement at the end) will give you a hard time debugging your code, since when an interrupt event triggers, the debugger of choice will not give you step-by-step register reading and following. Also, please note that writing completely ISR-driven code is not recommended, since ISR handlers should do minimal work (such as incrementing a counter or raising/clearing a flag) and exit swiftly.
It belongs in one thread that executes it in response to input messages from a producer-consumer queue. All the interrupts etc. feed input to the queue, and the thread processes them through its FSM serially.
It's the only way I've found to avoid undebuggable messes while retaining the low latency and efficient CPU use of interrupt-driven I/O.
'while(1);' UGH!
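A minimal sketch of that producer-consumer arrangement, assuming a simple ring buffer for the queue (fsm_step() and enter_low_power_mode() are placeholder names):

#define QUEUE_SIZE 16

volatile unsigned char queue[QUEUE_SIZE];
volatile unsigned char q_head = 0, q_tail = 0;

void push_event(unsigned char ev)        // called from ISRs (producers)
{
    unsigned char next = (q_head + 1) % QUEUE_SIZE;
    if (next != q_tail)                  // drop the event if the queue is full
    {
        queue[q_head] = ev;
        q_head = next;
    }
}

int pop_event(unsigned char *ev)         // called from the FSM thread (consumer)
{
    if (q_head == q_tail)
        return 0;                        // nothing pending
    *ev = queue[q_tail];
    q_tail = (q_tail + 1) % QUEUE_SIZE;
    return 1;
}

void fsm_thread(void)
{
    unsigned char ev;
    while (1)
    {
        if (pop_event(&ev))
            fsm_step(ev);                // process events serially through the FSM
        else
            enter_low_power_mode();      // placeholder: sleep until an interrupt
    }
}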
I have a high-voltage control VI and I'd like it to increase the output voltage by a user-set increment every x seconds. At the moment I have a timed sequence outside the main while loop, but it never starts. When it's inside the while loop, it delays all the other functions. I'm afraid I'm such a beginner at this that I can't post a picture yet. All that needs to happen is an increase in voltage by x amount every y seconds. Is there a way to fix this, or a better way of doing it? I'm open to suggestions! Thanks!
Eric,
Without seeing the code, I am guessing that you have the two loops in series (i.e. the starting of the while loop depends upon an output of the timed loop; this is the only way that one loop can block another). If this is the case, decouple the two loops so that they are not directly dependent on each other.
If the while loop is dependent on user input, use an event structure and pass the new parameters via a queue (this would be your producer-consumer pattern).
Also, get rid of the timed loop and replace it with a while loop. The timed loop is only simulated on non-real-time machines, and it can disrupt the deterministic features of a real-time system. Given that you are looking to send out a signal on the order of seconds, it is absolutely not necessary.
Anyways, if I am off base, please throw the code in question up so that we can review it.
Cheers, Matt
I am trying to make a project with an MSP430G2553. The problem I am facing is with the coding. What I have to do is:
I have enabled input on one of the pins of the MSP430. The timer starts on the rising edge of the input.
It counts up to a certain value stored in TACCR0.
This continues forever.
Now what I have to do here is:
Increment a variable c by 1 when the value in TACCR0 is reached.
And also do some calculations with the value of the counter stored in the TAR register.
Problem
I am not able to figure out where I should write the code for the calculations with the value in TAR: should I write it in the ISR only, or in the main code?
Can anybody guide me with this?
P.S. I am writing a question here for the first time, so if more clarity is required please let me know.
It depends on what you want to achieve at the end, but with the info you provide, I guess the simplest and easiest way would be to use an ISR for the appropriate counter and write your code there.
Keep in mind ISRs should be short and quick, so the processor can go on with other tasks. If your calculations are complex or the processing is heavy, I would recommend storing the values in global variables, setting a global flag, and letting the complex calculations be done in the main code through a loop that checks and resets that flag.
Hope this helps.
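As a sketch of that flag pattern on the MSP430 (the vector name and __interrupt syntax follow CCS/IAR conventions, so adjust for your toolchain, and treat do_calculations() as a placeholder for your own math):

volatile unsigned int c = 0;              // incremented each time TACCR0 is reached
volatile unsigned int tar_snapshot;       // TAR value captured for the main loop
volatile unsigned char calc_pending = 0;

#pragma vector = TIMER0_A0_VECTOR
__interrupt void Timer_A0_ISR(void)       // keep it short: count, snapshot, flag
{
    c++;
    tar_snapshot = TAR;
    calc_pending = 1;
}

void main(void)
{
    // ... watchdog, clock, timer, and pin setup ...
    while (1)
    {
        if (calc_pending)
        {
            calc_pending = 0;
            do_calculations(tar_snapshot);   // heavy math runs here, not in the ISR
        }
    }
}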
I have a strange bug in my code which disappears when I try to debug it.
In my timer interrupt (always running system ticker) I have something like this:
if (a && lot && of && conditions)
{
    some_global_flag = 1; // breakpoint 2
}
In my main loop I have:
if (some_global_flag)
{
    some_global_flag = 0;
    do_something_very_important(); // breakpoint 1
}
The code under this condition in the main loop is never executed, even when (I think) the conditions in the timer are fulfilled. The conditions are external (port pins, ADC results, etc.).
First I put a breakpoint at position 1, and it is never triggered.
To check it, I put breakpoint no. 2 on the line some_global_flag = 1;, and in this case the code works: both breakpoints are triggered when the conditions are true.
Update 1:
To investigate whether some timing condition is responsible, and whether the if in the timer is simply never entered when running without the debugger, I added the following to my timer:
if (a && lot && of && conditions)
{
    some_global_flag = 1; // breakpoint 2
}

if (some_global_flag)
{
    #asm("NOP"); // breakpoint 3
}
The flag is not used anywhere else in the code. It is in RAM, and the RAM is cleared to zero at the beginning.
Now, when all the breakpoints are disabled (or only breakpoint 1 in main is enabled), the code does not work correctly: the function is not executed. However, if I enable only breakpoint 3 on the NOP, the code works! The breakpoint is triggered, and after continuing, the function is executed. (It has visible and audible output, so it's obvious whether it runs.)
Update 2:
The timer interrupt was interruptible, by means of an "SEI" at its beginning. I removed that line, but the behavior did not change in any noticeable way.
Update 3:
I'm not using any external memory.
As I'm very close to the flash limit, I have the compiler's size optimization set to maximum.
Can the compiler (CodeVision) be responsible, or did I do something very wrong?
Debuggers can and do change the way the processor runs and code executes, so this is not surprising.
Divide and conquer. Start removing things until it works. In parallel with that, start with nothing: add only the timer interrupt and the few lines of code in the main loop, with do_something_very_important() being something simple like blinking an LED or spitting something out the UART. If that doesn't work, you won't get the bigger app to work. If it does work, start adding init code and more conditions in your interrupt, but do not complicate the main loop any more than the few lines described. Increase the interrupt handler conditions by adding more of the code back in until it fails.
When you reach the boundary where adding one thing makes it fail and removing it makes it work, do some disassembly to see if it is a compiler thing. This might warrant another SO ticket if it is not obvious: "why does my avr interrupt handler break when I add ..."
If you are able to get this down to a small number of lines of code (a dozen or so in main and just the few interrupt lines), post that so others can try it on their own hardware and perhaps figure it out in parallel.
This is probably a typical optimization/debugging bug. Make sure that some_global_flag is marked as volatile. It can be an int, uint8, uint64, whatever you like...
volatile int some_global_flag;
This way you tell the compiler not to make any assumptions about the value of some_global_flag. You must do this because the compiler/optimizer can't see any call to your interrupt routine, so it assumes some_global_flag is always 0 (the initial state) and is never changed.
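To see why this matters, consider what the optimizer is allowed to do in the main loop (a schematic illustration, not CodeVision's actual output):

volatile unsigned char some_global_flag = 0;   // shared between ISR and main

// Without volatile, the compiler may load some_global_flag into a
// register once and test that stale copy forever, because it cannot
// see the ISR modify it. With volatile, each pass through the loop
// re-reads the variable from RAM, so the ISR's write is observed.
while (1)
{
    if (some_global_flag)
    {
        some_global_flag = 0;
        do_something_very_important();
    }
}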
Sorry, I misread the part where you already tried it...
You can try compiling the code with avr-gcc and see if you get the same behavior...
It might seem strange, but it finally proved to be caused by strong transients on one of the input lines (which powers the system, but whose ADC measurement is also used as a condition).
The system can have periodic power failures for short periods, and important temporary data is kept in part of the internal SRAM, which is not cleared after startup and is designed to retain the data (for as long as 10 minutes or more) with the help of a small capacitor while the CPU is in brown-out.
I did not post this in the question because I had tested this part of the system and it worked perfectly, so I did not want to throw you off course.
What I found out in the end is that a new feature was used in an environment which created very strong transients; one of the conditions in my question depended on a state which depended on one of those variables in the "permanent RAM", and using a breakpoint happened to shield me from the effects of that transient.
Finally the problem was solved with adjustments in timing.
Edit: what helped me find the location of the problem was that I logged the values of my most important variables in the "permanent RAM" area and could see that a few of them got corrupted.
I may be wrong here, but if you are using a debugger to attach to the board in question and debug the program on the hardware it was supposed to run on, I think it can change the behavior of the microcontroller when it performs the attach. Other than that, and the volatile keyword suggested above, I have no clues.
This is written assuming an ARM processor.
Using a breakpoint (RAM or ROM breakpoint) forces the processor to switch from run mode to debug mode at the breakpoint (either halt mode or monitor mode) and forces it to run at debug speed or to run an abort handler; hence JTAG-based debugging is fundamentally intrusive.
The ETM (Embedded Trace Macrocell), specifically on ARM (or other types of bus instrumentation), is designed to be non-intrusive and can log the instructions and data in real time so that we can inspect what really happened.