LabVIEW 2009 holding onto data when I don't want it to - file

I'm new to LabVIEW but have been building a signal analyser code that takes the required data and prints it out to text files after the data has been taken. The problem I'm having is that when it makes a new file it holds on to the data from the previous run and prints that too which is not what I want. I've attached the LabVIEW vi (ver.2009), and any help with this would be greatly appreciated.
Also if someone knows a better way of RMS-ing the data after each iteration than my mess of shift registers I'd be happy to see it.
frequency analyser (fixed).vi

To answer your main question: the part of the code that builds the string (the for loop with a shift register) stores the previous data each time you re-run the VI. What you need is to initialise the shift register with an empty string:
Also a couple of notes/suggestions:
You could avoid using shift registers in this case. Divide the DAQ part of the code into, say, 3 parts: acquire the data in the first for loop (store it into an array), modify the array (you could then perhaps use the built-in RMS VI), and visualise it on the UI
Build the code in smaller chunks, use subVIs
Keep the code small, nice and tidy (check coding standards), add comments - this will really help you later

Since you asked for advice on the RMS functionality you used, I took a more detailed look at your code. I may be harsh, but it doesn't make sense (point by point):
You ask the end user for a number of runs, and then you subtract one. Why? I guess it's because of the data read before the for loop (remove that one).
The Frequency RMS function you use has support for averaging, with no limit on the number of averages. Specify the following configuration:
This will add RMS averaging to your output data, and you can lose all your own calculations with shift registers.
The following code is just plain wrong:
You only shift the data, without actually changing it. By incrementing the starting frequency you shift the FFT, so a signal that was detected at 55 Hz is now plotted at 56 Hz. To your end user this is misleading.
One thing you need to be aware of in your code is that you don't have continuous sampling. In each iteration of your for loop, the data acquisition is started and stopped. You can verify this by plotting the t0 of each captured waveform; you'll notice they don't start at a constant interval.
A better approach is to use the task created by the Express VI in the first iteration:
However, you should then change the acquisition mode to 'continuous samples':
Do not forget to close the task in the last iteration:

Instead of the shift register, you should work with an array which you empty before each run.

Related

Using the ALSA API in MATLAB - buffer issue

As part of a lab course, I have to update a simulation of Pulse Code Modulation. Initially, the simulation was written in 1998 using OSS (Open Sound System) and was never updated thereafter. I have rewritten the entire code and ported it to ALSA.
The code itself is a bit long, that's why I haven't put it here but am providing a link.
Now to my issue: whenever I want to play a vector of arbitrary length containing many samples, I start hearing weird periodic random noises. I have a feeling it's due to a buffer underrun. For a better understanding, I have recorded the output.
I believe it has to do something with the parameters I've set. Even though I tried out many cases, I didn't come to a solution.
Just take a look at the period size, buffer size, periods, and the sbplay(..) function. P.S.: My hardware is set up such that buffer size = period size * periods.
I hope you can help me somehow! Thanks in advance
Code
Output WAV
BTW.: ALSA: buffer underrun on snd_pcm_writei call
didn't help me much...
Efe,
Why don't you try the audioplayer/audiorecorder functions in MATLAB? They use ALSA on Linux. If you want greater control over the latency, try the dsp.AudioPlayer/AudioRecorder system objects.
Dinesh

Processing "JACK audio" data with C?

My question is slightly abstract but with good grounds. I have successfully run a JACK script written in C that loops the microphone audio data to the speaker. However, I would like to know how to alter the stream of audio myself during playback; one thing I'd like to try is to filter out the high (or low) frequencies (cut them off completely). From my understanding, audio comes through as an analog signal and is converted to a digital value (within a certain range).
I'm guessing I'm forced to go about this in one of two ways. One way, I think, is to process each value, check whether it is below (or above) the frequency I don't want, and then set the value to 0 (or to the previous value from the last loop cycle, to prevent blank spots in the audio during playback). The second way, I'm guessing, is that JACK presents the buffer as a full array of values assigned by frequency spectrum. How do I go about doing this? (In the future I want to do other things with the raw data, but I think this is a great start to get familiar with raw audio processing.)
Here is my simplified code: http://pastebin.com/Hmiumqkz
You can see that I tried printing the in value, as it's supposed to be a "float". I thought I might be able to filter frequencies from there, but I'm not sure: I don't get anything printed to the console when I run this code; it just loops the mic back to the speaker without any printing to the console.
NOTES: I have already successfully compiled and tested programs that used the GStreamer, ALSA, NAudio, irrKlang, and Phonon libraries. They don't give me the cross-compatibility I need between OSs and the raw audio data I require for my project, so please think twice before lazily suggesting "other libraries" only for the sake of them being "easier" - I have already tried them and they all fail me.
You haven't really asked a question that can be answered here on SO, so I'll point you to some outside resources.
Here is a tutorial for designing EQs based on the popular RBJ filters:
http://blog.bjornroche.com/2012/08/basic-audio-eqs.html
Most of it is written in C-like pseudocode and will walk you through step by step.
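
If you just want to see where per-sample processing goes in a JACK client, here is a minimal sketch of a one-pole low-pass filter in the process callback. This is not the tutorial's code; the client name and the smoothing factor are arbitrary choices for illustration:

#include <stdio.h>
#include <jack/jack.h>

static jack_port_t *in_port, *out_port;
static float state = 0.0f;           /* filter memory: the last output sample */
static const float alpha = 0.05f;    /* smoothing factor; smaller = stronger low-pass */

/* JACK calls this once per period; we low-pass the input into the output. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    (void)arg;
    for (jack_nframes_t i = 0; i < nframes; i++) {
        state += alpha * (in[i] - state);   /* y[n] = y[n-1] + a * (x[n] - y[n-1]) */
        out[i] = state;
    }
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("lowpass", JackNullOption, NULL);
    if (client == NULL) {
        fprintf(stderr, "could not connect to the JACK server\n");
        return 1;
    }
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    getchar();                       /* keep running until Enter is pressed */
    jack_client_close(client);
    return 0;
}

Connecting the ports to the system capture/playback ports (with jack_connect or qjackctl) is left out; a real frequency-selective EQ would replace the one-line filter with the biquads from the tutorial above.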
Here is the correct answer (you'll notice a printf() call in the process() callback function): the for loop prints out the current frames in the buffer as they arrive, i.e. the time-domain samples over time.
http://pastebin.com/axDLw7cc

File compression and codes

I'm implementing a version of LZW. Let's say I start off with 10-bit codes and increase the code size whenever I max out on codes. For example, after 1024 codes, I'll need 11 bits to represent 1025. The issue is in expressing the shift.
How do I tell decode that I've changed the code size? I thought about using 00, but the program can't distinguish between 00 as an increment and 00 as just two instances of code zero.
Any suggestions?
You don't. You shift to a new size when the dictionary is full. The decoder's dictionary is built synchronized with the encoder's dictionary, so they'll both be full at the same time, and the decoder will shift to the new size exactly when the encoder does.
The time you have to send a code to signal a change is when you've filled the dictionary completely -- you've used all of the largest codes available. In this case, you generally want to continue using the dictionary until/unless the compression rate starts to drop, then clear the dictionary and start over. You do need to put some marker in to tell when that happens. Typically, you reserve the single largest code for this purpose, but any code you don't use for any other purpose will work.
Edit: as an aside, note that you normally want to start with codes exactly one bit larger than the codes for the input, so if you're compressing 8-bit bytes, you want to start with 9 bit codes.
This is part of the LZW algorithm.
When decompressing you automatically build up the code dictionary again. When a new code exactly fills the current number of bits, the code size has to be increased.
For the details see Wikipedia.
You increase the number of bits when you create the code for 2^n - 1. So when you create code 1023, increase the bit size immediately. You can get a better description from the GIF compression scheme. Note that this was a patented scheme (which partly caused the creation of PNG); the patent has since expired.
Since the decoder builds the same table as the compressor, its table is full on reaching the last element (so 1023 in your example), and as a consequence, the decoder knows that the next element will be 11 bits.
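
To make that concrete, here is a minimal sketch of the shared width-bump logic in C (the constants and the function name are made up for illustration):

#define MIN_BITS 10                 /* starting code width, as in the question */
#define MAX_BITS 16                 /* some fixed upper bound */

static unsigned bits = MIN_BITS;    /* current code width */
static unsigned next_code = 256;    /* next free dictionary slot, after the literals */

/* Run by BOTH the encoder and the decoder after every dictionary insert,
   so their code widths stay in lockstep without any in-band marker. */
static void after_dictionary_insert(void)
{
    next_code++;
    /* code 2^bits - 1 was just assigned, so the next one needs more bits */
    if (next_code == (1u << bits) && bits < MAX_BITS)
        bits++;
}

Whether you widen when creating code 2^n - 1 or one code later is a convention; the only requirement is that encoder and decoder agree.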

What should my program do when it sees an integer overflow that does not affect the program run?

There is a small program which takes input from users on a prompt. It takes predefined inputs from the users and executes them.
It also displays a number with the prompt, indicating the count of the commands:
myprompt 1) usercommand1
...
myprompt 2) usercommand2
...
...
myprompt 3)
I do not expect the user to give more than 65535 commands at a time, so the count is stored as an unsigned short.
Problem:
I am not sure how the program should handle the case where the user actually crosses this limit on the number of commands. Should I let the count roll over to 0 (and keep looping), or have it stay put at 65535?
I want the program to still function normally, as in take user inputs and process them just as before. Also, the value of count has no effect at all on the command execution.
It looks like you're tackling a problem that might never occur.
Let's assume your users are quite fast, and it takes them 10 seconds to input a command line. Rollover would happen after 655350 seconds, i.e. approximately seven and a half days.
Let the counter roll over. If that still troubles you, then take the high path and make it an unsigned long. Then it will only roll over after 1361 years (on 32-bit machines).
If you ask yourself this question it means you should go the easy way: make the counter an unsigned int.
How to handle the limit is very dependent on what this counter is used for. My feeling is that it is not used for anything really interesting, so your question is kind of moot: whichever choice you make, it will still work correctly.
On the other hand, if this counter has some real use, you should ask the consumer of this counter for the correct way to proceed: both options have pros and cons (the counter either going back in time or stalling), so your user risks being surprised.
You forgot to mention other alternatives: terminate your program, or remove the limit and use some form of big integers (the GMP library, for example), but this sounds like overkill.
Note that DNS chose to wrap the serial number around at 2^32, which makes it usable forever; users of the counter are supposed to detect the overflow. See RFC 1982.
To be honest, this:

    "I want the program to still function normally, as in take user inputs and process them just as before. Also, the value of count has no effect at all on the command execution."

answers your own question: if it has no effect at all, then just let it start at 0 again.
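
For what it's worth, unsigned overflow is well-defined in C (the stored value wraps modulo 2^16 for a 16-bit unsigned short), so letting it roll over needs no special code at all. A tiny demonstration sketch:

#include <stdio.h>

int main(void)
{
    unsigned short count = 65535;                 /* USHRT_MAX on typical platforms */
    printf("myprompt %u) \n", (unsigned)count);   /* prints "myprompt 65535)" */
    count++;                                      /* wraps to 0: defined for unsigned types */
    printf("myprompt %u) \n", (unsigned)count);   /* prints "myprompt 0)" */
    return 0;
}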

Microcontroller Serial Command Interpreter in C/C++: ways to do it

I'd like to interpret a command string received by a microcontroller (a PIC16F877A, if that makes any difference) via serial.
The strings have a pretty simple and straightforward format:
$AABBCCDDEE (5 "blocks" of 2 characters + '$', for 11 characters in total) where:
$AA = the actual name of the command (could be letters, numbers, or both; mandatory);
BB-EE= parameters (numbers; optional);
I'd like to write the code in C/C++.
I figure I could just grab the string via serial, hack it up into blocks, switch () {case}, and memcmp the command block ($AA). Then I could have a binary decision tree to make use of the BB, CC, DD and EE blocks.
I'd like to know if that's the right way to do it (It kinda seems ugly to me, surely there must be a less tedious way to do this!).
Don't over-design it! That does not mean to go coding blindly, but once you have designed something that looks like it can do the job, you can start to implement it. Implementation will give you feedback about your architecture.
For example, when writing your switch case, you might find yourself rewriting code very similar to the one you just wrote for the preceding case. Actually writing down an algorithm will help you see some problem you did not think of, or some simplification you did not see.
Don't aim for the best code on the first try. Aim for
easy to read
easy to debug
Take little steps. You do not have to implement the whole thing in one go.
Grab the string from the serial port. Looks easy, right? Well, let's do that first, just printing out the commands.
Separate the command from the parameters.
Extract the parameters. Will the extraction be the same for each command ? Can you design a data structure valid for every command ?
Once you have done it right, you can start to think of a better solution.
ASCII interfaces are ugly by definition. Ideally you have some sort of frame structure, which maybe you have: the $ indicates the division between frames, and you say they are 11 characters in length. If they are always 11, that is good; if only sometimes, that is harder. Hopefully there is a $ at the start and 0x0A and/or 0x0D/0x0A (CR/LF) at the end.

Normally I have one module of code that simply extracts bytes from the serial port and puts them into a (circular) buffer. The buffering dates to the days when serial ports had very little or no buffer on board, but even today, especially with microcontrollers, that is still the case. Then another module of code monitors the buffer, searching for frames. Ideally this buffer is big enough to leave the frame in place and still have room for the next frame, so no second buffer is needed for keeping copies of the frames received. Using the circular buffer, this second module can move the head pointer (discarding as it goes, if necessary) to the beginning-of-frame marker and wait for a full frame's worth of data. Once a full frame appears to be there, it calls another function that processes that frame.

That function may be the one you are asking about, and "just code it" may be the answer: you are on a microcontroller, so you can't use lazy high-level desktop-application-on-an-operating-system solutions. You will need some sort of strcmp function, either written yourself or available through a library, or not, depending on your solution. The brute force if(strncmp(&frame[1],"bob",3)==0) then, else if(strncmp(&frame[1],"ted",3)==0) then, else if... certainly works, but you may chew up your ROM with that kind of thing, or not, and the buffering required for this kind of approach can chew up a lot of RAM. This approach is very readable, maintainable, and portable, though. It may not be fast (maintainability normally conflicts with reliability and/or performance), but that may not be a concern, so long as you can process this frame before the next one comes along, and/or before unprocessed data falls out of the circular buffer.

Depending on the task, the frame-checker routine may simply check that the frame is good. I normally put in start and end markers, a length, and some sort of arithmetic checksum; a bad frame is discarded, which saves a lot of code checking for bad/corrupt data. When the frame-processing routine returns to the search-for-frame routine, the head pointer is moved to purge the frame, as it is no longer needed, good frame or bad. The frame checker may only validate a frame and hand it off to yet another function that does the parsing. Each Lego block in this arrangement has a very simple task and operates on the assumption that the Lego block below it has performed its task properly. Modular, object-oriented, whatever term you want to use, this makes the design, coding, maintenance, and debugging much easier (at the cost of performance and resources). This approach works well for any serial-type stream, be it a serial port on a microcontroller (with enough resources), an application on a desktop looking at serial data from a serial port, or TCP data, which is also serial and NOT frame oriented.
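
A minimal sketch of that two-module arrangement, assuming the 11-character '$'-delimited frames from the question (the buffer size, the names, and the omitted overflow check are simplifications for illustration):

#include <stdint.h>

#define BUF_SIZE 64u           /* power of two, so the index math survives wraparound */
#define FRAME_LEN 11u          /* '$' plus 10 payload characters */

static uint8_t buf[BUF_SIZE];
static volatile unsigned head, tail;   /* head = oldest byte, tail = next free slot */

void process_frame(const uint8_t *frame);  /* parser/validator, defined elsewhere */

/* Module 1: called from the UART receive interrupt with each byte. */
void rx_byte(uint8_t b)
{
    buf[tail % BUF_SIZE] = b;          /* no overflow check, for brevity */
    tail++;
}

/* Module 2: called from the main loop; discards bytes until a '$',
   then hands each complete frame to the processing function. */
void poll_frames(void)
{
    while (tail - head >= FRAME_LEN) {
        if (buf[head % BUF_SIZE] != '$') {  /* not a frame start: discard */
            head++;
            continue;
        }
        uint8_t frame[FRAME_LEN];
        for (unsigned i = 0; i < FRAME_LEN; i++)
            frame[i] = buf[(head + i) % BUF_SIZE];
        process_frame(frame);
        head += FRAME_LEN;                  /* purge the consumed frame */
    }
}

The interrupt side only stores bytes; all searching and discarding happens at the main loop's pace, which is exactly the decoupling described above.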
If your micro doesn't have the resources for all that, then the state machine approach also works quite well. Each byte that arrives ticks the state machine one state. Start with idle, waiting for the first byte: is the first byte a $? If not, discard it and go back to idle. If the first byte is a $, go to the next state. If you were looking for, say, the commands "and", "add", "or", and "xor", then the second state would compare against "a", "o", and "x"; if none of these, go to idle. If an 'a', go to a state that compares for 'n' and 'd'; if an 'o', go to a state that looks for the 'r'. If the look-for-the-'r'-in-"or" state does not see the 'r', go to idle; if it does, process the command and then go to idle. The code is readable in the sense that you can look at the state machine and see the words a,n,d, a,d,d, o,r, x,o,r and where they ultimately lead, but it is generally not considered readable code. This approach uses very little RAM and leans on the ROM a bit more, but overall it could use the least ROM of the parsing approaches. And here again it is very portable, beyond microcontrollers, though outside a microcontroller folks might think you are insane with this kind of code (well, not if this were Verilog or VHDL, of course). This approach is harder to maintain and harder to read, but it is very fast and reliable and uses the least amount of resources.
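
A minimal sketch of that byte-at-a-time state machine, recognizing just an "or" command after the '$' marker (the names and the tiny command set are illustrative only):

#include <stdio.h>

enum state { IDLE, GOT_DOLLAR, GOT_O };
static enum state st = IDLE;

/* Feed each byte in as it arrives, e.g. from the UART receive interrupt. */
void feed(char c)
{
    switch (st) {
    case IDLE:
        st = (c == '$') ? GOT_DOLLAR : IDLE;   /* wait for the frame start */
        break;
    case GOT_DOLLAR:
        st = (c == 'o') ? GOT_O : IDLE;        /* first letter of "or" */
        break;
    case GOT_O:
        if (c == 'r')
            puts("got OR command");            /* full match: act on it */
        st = IDLE;                             /* matched or not, start over */
        break;
    }
}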
No matter what the approach, once the command is interpreted you have to ensure you can perform the command without losing any bytes on the serial port, either through deterministic performance of the code or through interrupts or whatever.
Bottom line: ASCII interfaces are always ugly, and the code for them, no matter how many layers of libraries you use to make the job easier, results in ugly instructions being executed. And one size fits no one, by definition. Just start coding: try a state machine, try the if-then-else-strncmp, and optimizations in between. You should quickly see which one performs best with your coding style, the tools/processor, and the problem being solved.
It depends on how fancy you want to get, how many different commands there are, and whether new commands are likely to be frequently added.
You could create a data structure that associates each valid command string with a corresponding function pointer - a sorted list accessed with bsearch() is probably fine, although a hash table is an alternative which may have better performance (since the set of valid commands is known beforehand, you could construct a perfect hash with a tool like gperf).
The bsearch() approach might look something like this:
#include <stdlib.h>   /* bsearch */
#include <string.h>   /* memcmp */

void func_aa(char args[11]);
void func_cc(char args[11]);
void func_xy(char args[11]);

/* Dispatch table: must stay sorted by name, since bsearch() requires it. */
struct command {
    char *name;
    void (*cmd_func)(char args[11]);
} command_tbl[] = {
    { "AA", func_aa },
    { "CC", func_cc },
    { "XY", func_xy }
};

#define N_CMDS (sizeof command_tbl / sizeof command_tbl[0])

/* Compare only the two-character command names. */
static int comp_cmd(const void *c1, const void *c2)
{
    const struct command *cmd1 = c1, *cmd2 = c2;
    return memcmp(cmd1->name, cmd2->name, 2);
}

/* Look up a command by name; returns NULL if not found. */
static struct command *get_cmd(char *name)
{
    struct command target = { name, NULL };
    return bsearch(&target, command_tbl, N_CMDS, sizeof command_tbl[0], comp_cmd);
}
Then if you have command_str pointing to a string from the serial port, you'd do this to dispatch the right function:
struct command *cmd = get_cmd(command_str + 1);   /* skip the leading '$' */
if (cmd)
    cmd->cmd_func(command_str);
Don't know if you're still working on this, but I'm working on a similar project and found an embedded command line interpreter: http://sourceforge.net/projects/ecli/?source=recommended. That's right, they had embedded applications in mind.
The cli_engine function really helps in taking the inputs from your command line.
Warning: there is no documentation besides a readme file. I'm still working through some bugs integrating the framework but this definitely gave me a head start. You'll have to deal with comparing the strings (i.e. using strcmp) yourself.
