ALSA lib hardware parameters setting - C

I'm trying to record sound on my Linux (Debian) embedded device with the ALSA library. My embedded hardware is this [1], and according to page 33 of its datasheet [2],
Analog audio signals are featured by the on-SOM TLV320AIC3106 audio codec.
and according to the datasheet of this Texas Instruments audio codec [3],
Supports Rates From 8 kHz to 96 kHz
I use the example application code for alsa-lib; for the initial work I didn't change the code. In the example, the sampling rate was set to 44100 Hz. I successfully recorded sound and played it back afterwards. Based on the datasheets, I believe I should be able to record with alsa-lib at a sampling rate of 8000 Hz. However, when I set the sampling rate to 8000 Hz, it changes to 16000 Hz during the ALSA configuration.
I set the sampling rate to 8000 Hz:
snd_pcm_hw_params_set_rate_near(handle, params, &(record_params->rate), &dir);
snd_pcm_hw_params_set_channels(handle, params, record_params->channel);
rc = snd_pcm_hw_params(handle, params);
But after invoking this call:
snd_pcm_hw_params_get_period_time(params, &(record_params->rate), &dir);
it changes to 16000. There is no other call in between. Are my settings wrong, or maybe the codec doesn't support 8 kHz?
UPDATE: When I set the rate to 16000, it changes to 8000. Now I'm even more confused.
[1] = http://www.variscite.com/products/system-on-module-som/cortex-a9/dart-mx6-cpu-freescale-imx6
[2] = http://www.variscite.com/images/stories/DataSheets/DART-MX6/DART-MX6_v1_2_datasheet_v2_1.pdf
[3] = http://www.ti.com/lit/ds/symlink/tlv320aic3106.pdf

Period time and rate are two different things. snd_pcm_hw_params_get_period_time() returns the length of one period in microseconds, not the sample rate, so storing its result in your rate variable is what makes the value appear to change. Since the period time is period_size * 1,000,000 / rate, a 128-frame period, for instance, is 16000 µs at 8000 Hz and 8000 µs at 16000 Hz, which would explain the apparent "swap" you are seeing.
The period of a PCM is basically the number of frames that get transferred between device interrupts. It's done this way because making data transfers to a device frame by frame would be extremely inefficient.
The ALSA library allows the period size to be specified either in microseconds (using snd_pcm_hw_params_set_period_time_near) or as a frame count (using snd_pcm_hw_params_set_period_size_near).
If you're trying to calculate what size buffer to allocate for reading or writing to a PCM, it is more intuitive to use snd_pcm_hw_params_get_period_size (which returns the number of frames in a period) and then call snd_pcm_frames_to_bytes, which converts a frame count of a PCM to a byte count.
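A minimal sketch of what that looks like in code, reusing the handle, params and dir names from the question's snippet (illustrative of the API, not a drop-in patch):
/* Read back what the hardware actually accepted, after snd_pcm_hw_params(). */
unsigned int actual_rate, period_us;
snd_pcm_uframes_t period_frames;
int dir = 0;
snd_pcm_hw_params_get_rate(params, &actual_rate, &dir);           /* e.g. 8000 -- the real sample rate */
snd_pcm_hw_params_get_period_time(params, &period_us, &dir);      /* period length in microseconds     */
snd_pcm_hw_params_get_period_size(params, &period_frames, &dir);  /* frames per period                 */
/* One period's worth of bytes, e.g. for the capture read buffer. */
ssize_t period_bytes = snd_pcm_frames_to_bytes(handle, period_frames);
char *buffer = malloc(period_bytes);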

Related

Supported sample rates in Windows

I'm working on a music player on Windows, and I'm using the waveOutOpen (and friends) API. To specify the format of the audio data I fill out the WAVEFORMATEX structure, specifying the correct wFormatTag. However, I recently learned that there exist audio files with a sample rate of 88.2 kHz. That is not among the available WAVE_FORMAT_* values, which puzzles me.
Assuming the DAC being used supports 88.2 kHz input, how can that be used on Windows? And what about even higher sample rates like 192 kHz which is also not among the WAVE_FORMAT_* values? I see that there's WAVEFORMATEXTENSIBLE which as far as I can tell allows for specifying a higher bits per sample value, but what about higher sample rates?
In my "Speaker Properties" I see that my system supports 24-bit 192 kHz audio, but how would one provide data at that sample rate on Windows?

Log data from MPU6050 through serial (UART) fails (data loss)

Here is the problem I am facing. I have interfaced my ATmega328P with a 6-axis IMU (MPU6050 with the GY521 breakout board). I can read data through the TWI interface (Atmel's I2C) and send it to my PC (running Ubuntu) via the UART. I am using custom-built libraries for both of these communication protocols, but they are pretty standard and seem to work just fine. The goal of the project is to compute orientation data from the IMU readings in real time, say at 100 Hz.
The main problem is that I cannot log data from the device at 100 Hz (not even at 50 Hz). The orientation filter I am using (here) requires a quite high frequency and 100 Hz turned out to work fine (tested offline acquiring data from another device).
Right now, I am using the 16-bit timer of the ATmega328P to sample data at 100 Hz, and this seems to work: I added a line to the ISR to toggle the built-in LED, and it looks to me like it is blinking at 100 Hz (I can barely see it turning on and off). In the same ISR, I read the values from the inertial sensor and, just to log them, send these values through the serial port. Every 10 ms (maximum), I send 9 floats (36 bytes) at a baud rate of 115200. If I use the Arduino IDE's Serial Monitor to visualize this data stream, I notice something very weird, as in the following screenshot.
https://imgur.com/zTBdkhv
As you can see from the timestamps, there is a recurring 33 ms delay every 2 or 3 sets of samples received. Moreover, I get only roughly 60% of the data: for example, an acquisition of 10 seconds gets me fewer than 600 samples (for each variable) instead of 1000. I also tested sending only one variable through the UART (i.e. a single float, 4 bytes), and this results in the same behavior!
By the way, I am using the following function to send each byte (char) over the UART interface.
void writeCharUART(char c) {
    loop_until_bit_is_set(UCSR0A, UDRE0);  // busy-wait until the transmit data register is empty
    UDR0 = c;                              // then write the byte to start transmission
}
Even though my ISR runs at 100 Hz (the LED blinking seems to confirm that), data loss might still occur at the level of the TWI transmission. To check this, I modified the ISR code to send just a plain char ('T') instead of data from the MPU, and I still got similar behavior. Something like this:
00:10:05.203 -> T
00:10:05.203 -> T
00:10:05.236 -> T
00:10:05.236 -> T
00:10:05.236 -> T
00:10:05.236 -> T
00:10:05.269 -> T
So, I guess there is something wrong with the UART library: I actually sample at 100 Hz, but the logging frequency is much lower (and not constant). How can I solve this issue and/or debug the UART library? Do you see any other possible explanations for this behavior?
EDIT 1
As pointed out in the comments, it seems to be a problem with the receiving software, which limits the frequency to ~30 Hz through some sort of buffering. To confirm that, I programmed the ATmega328P with the following code (this time using the IDE).
void loop() {
    Serial.println("T");
}
At first I thought there was no delay this time, but I found it after 208 samples: there are ~200 samples received with the same timestamp, then another batch of samples 33 ms later. This may be proof that the receiving software introduces the delay.
I also tested a simple serial monitor that I developed in C and, even though it has no timestamp functionality, I am also losing samples if I fix the duration of the acquisition while sampling at 100 Hz. My serial monitor is based on the termios library, but I could not find any documentation about how it buffers incoming data.
There are two issues here:
You are missing messages. You checked the sample rate just with your eyes and told us that you can still see a very fast blinking. Depending on the colour of your LED, the ambient light, your physical state, and your eyes this could mean anything from 30 Hz to 100 Hz.
I would not trust my eyes for an estimate; use an oscilloscope or a frequency counter to measure instead.
You could reduce the LED blink frequency to 1 Hz or even lower by dividing in software. Such a low frequency can be measured by hand with a stopwatch; for example, count 30 blinks and check how long that takes.
Add a counter to the message and increment it with each message. You will see right away if you're losing data (see the sketch at the end of this answer).
The timestamps seem to indicate that the messages are "clustered" at about 30 Hz.
I'm guessing that the source of the timestamps is running at 30 Hz, so it cannot give you more accurate values.
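For the counter suggestion above, a minimal sketch; the timer vector is just an assumption about where the 100 Hz tick comes from, and writeCharUART() is the routine from the question:
#include <avr/interrupt.h>
#include <stdint.h>
#include <stdio.h>

void writeCharUART(char c);               /* from the question's UART code */

static volatile uint16_t seq = 0;

ISR(TIMER1_COMPA_vect)                    /* assumed 100 Hz Timer1 compare-match tick */
{
    char msg[16];
    snprintf(msg, sizeof msg, "%u\r\n", (unsigned)seq++);  /* message = running counter */
    for (char *p = msg; *p; ++p)
        writeCharUART(*p);                /* any gap in the received counter means lost data */
}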
I kind of solved my issues! First of all, thanks to the comments, I checked that my ISR was indeed running at 100 Hz. That way, I could be sure that the problem was somewhere else, namely in the UART communication.
I found this very helpful: Linux, serial port, non-buffering mode
Apparently, the Serial Monitor provided by the Arduino IDE relies on the termios library with its default settings. I also checked the manual page and switched to polling-read mode. Quoting from the manual:
If data is available, read(2) returns immediately, with the lesser of the number of bytes available, or the number of bytes requested. If no data is available, read(2) returns 0.
Hence, I switched back to my serial monitor code and changed the initPort() function, adding the following lines of code.
struct termios options;
(...)
options.c_cc[VTIME] = 0;
options.c_cc[VMIN] = 0;
I noticed right away a much higher data frequency in the terminal. I kept the 1 Hz LED blinking in the ISR and there is no period stretching. Moreover, an acquisition of 10 seconds this time gave me roughly 1000 samples per variable, consistent with a sampling rate of 100 Hz.
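For reference, a fuller sketch of such a polling-read port setup (the device path, baud rate and function name here are only placeholders, not my actual initPort()):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_port_polling(const char *dev)          /* e.g. "/dev/ttyUSB0" */
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios options;
    tcgetattr(fd, &options);

    cfsetispeed(&options, B115200);
    cfsetospeed(&options, B115200);
    cfmakeraw(&options);                        /* raw mode: no line buffering, no echo */

    options.c_cc[VMIN]  = 0;                    /* polling read: return immediately,   */
    options.c_cc[VTIME] = 0;                    /* even if no data is available        */

    tcsetattr(fd, TCSANOW, &options);
    return fd;
}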
On the AVR side, I also changed the way I send data through the UART. Before, I was sending 9 floats like this:
sprintf(buffer, "%f, %f, %f", value1_x, value1_y, value1_z);
serial_print(buffer); // no "\n" sent here
sprintf(buffer, "%f, %f, %f", value2_x, value2_y, value2_z);
serial_print(buffer); // again, no "\n" sent
sprintf(buffer, "%f, %f, %f", roll, pitch, yaw);
serial_println(buffer); // "\n" is sent here once the last data byte is sent
Now I have replaced all of this with a single call to serial_println(), and I write only 6 floats to the buffer.
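Roughly like this (the six values shown are only illustrative; snprintf needs buffer to be a char array, and %f on AVR requires linking the float printf variant with -Wl,-u,vfprintf -lprintf_flt -lm):
// One formatted line, one terminator per sample set.
snprintf(buffer, sizeof buffer, "%f,%f,%f,%f,%f,%f",
         value1_x, value1_y, value1_z, roll, pitch, yaw);
serial_println(buffer);   // the single "\n" is sent here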

C code to read data from a Nonin pulse oximeter via the Bluetooth Serial Port Profile in Linux

I am trying to communicate with the Nonin pulse oximeter device to read its data (pulse rate and SpO2 level) via Bluetooth. The Nonin device supports the SPP and HDP profiles; I want to communicate through the SPP profile. I am able to scan and pair with the device using the sample code available in BlueZ.
Please tell me the next steps for sending commands to and reading data from the device. I am stuck at this point.
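With BlueZ, an SPP connection is just an RFCOMM socket. A minimal client sketch (compile with -lbluetooth; the device address and channel below are placeholders, and the SPP channel can be found with sdptool browse <addr>):
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/rfcomm.h>

int main(void)
{
    struct sockaddr_rc addr = {0};
    int s = socket(AF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM);

    addr.rc_family  = AF_BLUETOOTH;
    addr.rc_channel = 1;                              /* placeholder SPP channel    */
    str2ba("00:11:22:33:44:55", &addr.rc_bdaddr);     /* placeholder device address */

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    char buf[128];
    ssize_t n = read(s, buf, sizeof(buf) - 1);        /* read whatever the device streams */
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }
    close(s);
    return 0;
}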
I realize this is a late response, but I recently set up data acquisition from a Nonin PalmSAT 2500A VET unit. I am using the RTC-1000 cable and an RS232-to-USB converter.
This is straight from the manual:
"Information from the device, in the real-time mode, is sent in an ASCII serial format at 9600 baud with 9 data bits, 1 start bit, and 1 stop bit. The data are output at a rate of once per second.
NOTE: The 9th data bit is used for odd parity in memory playback mode. In real-time mode, it is always set to the mark condition. Therefore the real-time data may be read as 8 data bits, no parity.
Real-time data may be printed or displayed by devices other than the pulse oximeter. On power up a header is sent identifying the format and the time and date. Thereafter, the data are sent once per second in the following format:
SPO2=XXX HR=YYY
where “XXX” represents the SpO2 value, and “YYY” represents the pulse rate. The SpO2 and pulse rate will be displayed as “---” if there are no data available for the data reading."
Link to manual:
http://www.proactmedical.co.uk/proshop_support_docs/2500aman.pdf
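A sketch of parsing that once-per-second line (the function name is just illustrative):
#include <stdio.h>
#include <string.h>

/* Returns 0 and fills spo2/hr on success; -1 if the line carries no reading. */
int parse_nonin_line(const char *line, int *spo2, int *hr)
{
    if (strstr(line, "---"))                      /* "---" means no data available */
        return -1;
    if (sscanf(line, "SPO2=%d HR=%d", spo2, hr) == 2)
        return 0;
    return -1;
}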
What model oximeter are you working with?

Increase Beaglebone Black ADC sampling rate?

I'm working on a project that requires the use of a microcontroller, and for this reason, I decided to use the Beaglebone Black. I'm still new to the Beaglebone world and I'm facing some problems that I hope you guys can help me with.
In my project I will have to continuously read from all 7 analog input pins and do some processing accordingly. My question is: what is the fastest programming language to do so (I must read as many samples as possible in a very short time!), and how can I increase the sampling rate from kHz to MHz?
I tried the following codes:
JavaScript code:
var b = require('bonescript');//this variable is to refer to my beaglebone
time = new Date();
b.analogRead("P9_39");
console.log(new Date() - time);
This code simply performs one analog read and prints out the time needed to perform the read. Surprisingly, the result was 111 ms, which means my sampling rate is only about 10 samples per second, if I'm not wrong.
An alternative was to use Python:
import Adafruit_BBIO.ADC as ADC
import time
ADC.setup()
millis = int(round(time.time() * 1000))
ADC.read_raw("P9_39")
millis = int(round(time.time() * 1000)) - millis
print millis
This code took less time (4 ms), but still, if I wanted to read from all 7 analog input pins, I would only be able to read around 35 samples per second from each.
Using the terminal:
echo cape-bone-iio > /sys/devices/bone_capemgr.*/slots
time cat /sys/devices/ocp.3/helper.15/AIN0
############OR############
time cat /sys/devices/ocp.3/44e0d000.tscadc/tiadc/iio\:device0/in_voltage0_raw
and this took 50ms.
I want my sampling rate to be something in MHz. How can I do so? I know that the Beaglebone Black is capable of that but I could not find a clear way to do so. Any help is appreciated.
Thanks in advance.
The sampling rate of the AM335x ADC is 200 kSPS (link). This means you won't get into the MHz range with the stock BeagleBone Black ADC.
Getting something working with a latency of 5 µs in a non-real-time OS like Linux is impossible. You will be at the mercy of the OS to schedule your execution thread; other kernel threads will take priority and preempt your thread, even if you assign it the highest scheduling priority.
From my experience with digital I/O on the BeagleBone Black, I started seeing missed frames at around 1 K samples per second. It will depend on your tolerance for missing samples -- if it only needs to work semi-reliably, you can probably squeeze out 10 K samples per second by switching to C/C++ and increasing the priority of your process with a nice --10 ... command. However, if you cannot tolerate missed frames, you have to do one of these:
Bypass OS entirely and write C program for naked AM335x processor (no OS).
Use another hardware -- an ADC with a buffer to accumulate samples while your program is preempted.
Use the PRUSS processors on the BBB. They run at 200 MHz, so with a tight loop of e.g. 20 assembly instructions you get a reliable sampling rate of 10 MHz -- that is, if you had a faster ADC in the first place; it would of course handle the stock 200 kSPS ADC easily.
I personally went with option #3 and was happy to see my device perform sub-millisecond GPIO operations extremely reliably.
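For the C/C++ route mentioned above, a rough sketch of a sysfs read loop (the ADC path is the one from the question and varies across kernel versions; this will still top out far below MHz):
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/sys/devices/ocp.3/44e0d000.tscadc/tiadc/iio:device0/in_voltage0_raw";
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[16];
    for (int i = 0; i < 1000; ++i) {
        lseek(fd, 0, SEEK_SET);                  /* rewind the sysfs attribute before each read */
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n <= 0) break;
        buf[n] = '\0';
        int raw = atoi(buf);                     /* 12-bit value, 0..4095 */
        (void)raw;                               /* process or store the sample here */
    }
    close(fd);
    return 0;
}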
Use 127 BeagleBone Blacks plugged into 127 USB hub ports, break out Visual Basic, and write a USB program to automatically fire the 127 BeagleBones sequentially, one after the other, and read the data into a textbox. You will get around 16 MHz / MSPS of consecutive ADC reads per fast CPU with, say, Windows 10. --lyj2021
You may have overlapping data, but you can track this with each firing of each BeagleBone Black, consecutively.

Can anyone say how sampling rate and frame size are related?

Can anyone say how sampling rate and frame size are related?
I decoded an .spx (Speex) file to WAV, with a sampling rate of 10 kHz at 16 bits. The frame size applied during the decoding process was 640.
The decoded file is playable in VLC, but I want to play it in Flex.
Flex supports rates of 44.1 kHz, 22.5 kHz and 11.2 kHz only. I want to increase the sampling rate during the decoding process. I know how to do that in the code, but I guess the frame size should also be increased. I don't know the dependency between the two. Can anyone help?
Frame size and sampling rate are generally orthogonal concepts. They don't need to affect each other unless a particular format demands it.
For PCM .wav, the frame size will always be bits per sample * channels. In your case, 16 bits for mono, or 32 bits for stereo.
Also, there is no need to change the decoding frame size only because you later apply resampling.
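As a concrete example of the arithmetic (using the mono 16-bit case from the question; the 44.1 kHz value is just the target rate mentioned for Flex):
#include <stdio.h>

int main(void)
{
    /* Frame size depends only on the sample format and channel count, not on the rate. */
    unsigned bits_per_sample = 16, channels = 1;
    unsigned frame_bytes = channels * bits_per_sample / 8;               /* = 2 bytes per frame */

    printf("frame size: %u bytes\n", frame_bytes);
    printf("data rate at 10 kHz:   %u bytes/s\n", 10000 * frame_bytes);  /* 20000 */
    printf("data rate at 44.1 kHz: %u bytes/s\n", 44100 * frame_bytes);  /* 88200 */
    /* Resampling changes only the data rate; the frame size stays the same. */
    return 0;
}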
You are mixing two independent tasks: Speex decoding and resampling. The frame size mentioned should be considered only as a buffer that contains PCM samples. You should pass these PCM samples to a resampler (for example SSRC: http://shibatch.sourceforge.net/).
Frame size depends on the codec used to compress the original data. A frame will contain an integral number of samples (320 in this case).
If I'm thinking correctly, raw audio has a frame size equal to the sample size. However, some codecs perform compression over a range of samples. Usually, the larger the frame size, the more memory is needed to compress the data, but the better the compression you can potentially achieve.
You can't increase the sampling rate during decoding; however, you could resample the decoded audio. Presumably you're actually re-encoding the data to send it to Flex? You'll need to have a look at the codec you're using to re-encode. Which codec are you using?
Irrespective of the number of channels used, the frame rate and the sampling rate are the same, because that is the purpose of TDM: new channels are introduced in the gaps left between two consecutive samples. As the number of channels increases, the time allotted to each channel decreases, and with it the time taken by each bit; but the time gap between consecutive samples of any one channel remains constant and is equal to the total frame time. That is, the time gap between samples equals the frame time, hence the frame rate equals the sample rate.
