RPi Wheezy: duplicate capture on USB & dummy cards - ALSA

I am trying to create an application that will stream audio with DarkIce as well as provide an LED VU meter indication of the audio stream.
I have created a virtual card with the snd-dummy kernel module. This card is recognized by alsamixer, aplay, and arecord, but I cannot transfer the line-in signal from the USB card (hw:0,0) to the dummy card (hw:2,0).
I have tried several .asoundrc configurations that I found both on this Q&A site and via Google, using the ALSA dmix, dsnoop, and multi plugins, but nothing has worked so far.
I am presently running one Python program (LED_VU.py) that autostarts in one terminal, and a second Python program that invokes DarkIce (streamer.diDual.py) in a second terminal. The configuration portion of the LED program is:
#!/usr/bin/env python
### LED VU Meter on RPi ###
import alsaaudio as AA
import audioop
from time import sleep
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setwarnings(False)

# Define physical header pin numbers for 10 LEDs
RPiPins = [11, 12, 13, 15, 16, 18, 22, 7, 3, 5]

# Set all pins as output
for pin in RPiPins:
    GPIO.setup(pin, GPIO.OUT)

# Set up audio
card = 'hw:0,0'
The configuration portion of darkiceDual.cfg is:
# Darkice Configuration File - Generated by Streamer
[general]
duration = 0 # duration of encoding, in seconds. 0 means forever
bufferSecs = 5 # size of internal slip buffer in seconds
reconnect = yes # reconnect to server if disconnected
[input]
device = hw:2,0 # alsa usb soundcard device for audio input
sampleRate = 44100 # sample rate in Hz
bitsPerSample = 16 # bits per sample
channel = 2 # channels. 1 = mono, 2 = stereo
My .asoundrc file is:
pcm.!default {
    type plug
    slave.pcm "mdev"
    route_policy "duplicate"
}
pcm.mdev {
    type multi
    slaves.a.pcm "hw:0,0"
    slaves.a.channels 2
    slaves.b.pcm "dmixer"
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}
pcm.dmixer {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:2,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
        channels 2
        format S16_LE
    }
}
What am I doing wrong?
The streamer has no audio if I use hw:2,0, and reports a 'Can not connect' error if I use hw:0,0 (which LED_VU.py is using). If I change the card setting of the LED program to hw:2,0, the LEDs lock up with all of them lit.
Any help is appreciated!
Thank you for the help. The two programs now both use the USB line-in as expected.
However, I am no longer able to use alsamixer or amixer; PulseAudio is causing the problem now. If it is installed, the LED_VU.py program will not run. When it is uninstalled, the Python programs run, but alsamixer does not.

Apparently, you want to run the VU meter and DarkIce from the same audio data, i.e., you need to allow two programs to share one recording device.
This can be done with the dsnoop plugin, which is enabled by default for USB devices.
Tell both programs to record from the device named default. If that has been redefined, try dsnoop:0 instead.
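If you prefer an explicit definition, a minimal sketch along these lines should work, assuming the USB card is card 0 (the PCM name shared_in is just an illustrative choice, not from the original answer):
pcm.shared_in {
    type dsnoop
    ipc_key 2048          # any integer not used by another dsnoop/dmix
    slave.pcm "hw:0,0"    # the USB capture device
}
Both consumers would then open it, e.g. card = 'shared_in' in LED_VU.py and device = shared_in in the DarkIce [input] section.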

Related

ALSA ignores configuration

I have an audio box that can be connected via USB to my laptop.
I've written a C application that uses the ALSA API to open a communication channel with this audio box.
The communication should be established at 8kHz, running with a 10ms period size (that is 80 samples).
If I connect the audio box to my laptop and then start the app, it seems that the minimum supported period size is 170 (i.e., snd_pcm_hw_params_get_period_size_min reports a minimum period size of 170), and snd_pcm_hw_params_set_period_size_near accordingly sets the period size to 170.
Looking at /proc/asound/name-of-the-card/stream0, I can see Momentary freq = 48000 Hz (0x30.0000), although the sampling rate I requested is 8 kHz.
Also, the snd_pcm_hw_params_set_rate_near call does not change the value that I've passed.
If I start the app first and then connect the audio box to my laptop, snd_pcm_hw_params_get_period_size_min reports a minimum period size of 16, and calling snd_pcm_hw_params_set_period_size_near sets the period size to 80 (which is what I want to achieve).
Checking /proc/asound/name-of-the-card/stream0 again, I can see Momentary freq = 8000 Hz (0x8.0000), which is correct.
I should mention that my app tries to open the card associated with the audio box and, if the operation doesn't succeed, retries every 200 ms until it succeeds.
My feeling is that in the second case, when the period size is set correctly, my application applies its configuration before the system does (I'm not sure whether the system does this).
I've tried modifying defaults.pcm.dmix.rate to 8000 in /usr/share/alsa/alsa.conf, but in that case, acting as in the first scenario, the period size returned is 1024.
Below are some configurations from /usr/share/alsa/alsa.conf if this helps.
defaults.pcm.minperiodtime 5000 # in us
defaults.pcm.ipc_key 5678293
defaults.pcm.ipc_gid audio
defaults.pcm.ipc_perm 0666
defaults.pcm.dmix.max_periods 0
defaults.pcm.dmix.channels 2
defaults.pcm.dmix.rate 48000
Is there a config file that has a higher priority than what I want to configure via the API?

how does the rate plugin work in the alsa-lib?

I'm using alsa-lib on my embedded Linux system. The following is my .asoundrc:
pcm.rate16k {
    type plug
    slave {
        pcm "hw:0,0"
        rate 16000
    }
}
It works well when I play a mono audio file (48000 Hz, S16_LE) using the command aplay -D rate16k -c 1 -r 48000 -f S16_LE test_48k.raw.
What confuses me is where the resampling process is implemented. I have read the alsa-project introduction, read the source code of alsa-lib, and analyzed the snd_pcm_open API:
snd_pcm_open ==> _snd_pcm_plug_open ==> _snd_pcm_hw_open ==> snd_pcm_hw_open ==> snd_pcm_hw_open_fd, which returns the slave PCM pointer. aplay then calls snd_pcm_writei to write audio data, which eventually calls snd_pcm_mmap_writei.
I have tried to analyze snd_pcm_mmap_writei, but alsa-lib is so complex that I still have no idea how the rate is converted. Please help, or give me some ideas on how to analyze this process.

GNU ARM Eclipse: how to simulate pin input?

I want to pass sine wave data onto a pin (any possible one), so that my program can read it when run in an emulator.
How can I pass data in the form of (time:value) pairs, or just pass a function float generatorForPinX(int time); to act as a signal generator, into the GNU ARM Eclipse board emulator? (I use QEMU, but I am willing to migrate if another emulator is required.)
These instructions are for emulating an Olimex STM32 P103 Development Kit.
Download and build
First download and build Qemu STM32, which includes patches for emulating the ADC peripheral on the STM32:
wget https://github.com/beckus/qemu_stm32/archive/stm32.tar.gz
tar xf stm32.tar.gz
cd qemu_stm32-stm32
./configure --target-list="arm-softmmu"
make
cd ..
If the configure step fails, then install the missing requirements. See the README for more information.
Then download the Olimex STM32 P103 Development Kit Demos:
wget https://github.com/beckus/stm32_p103_demos/archive/master.tar.gz
tar xf master.tar.gz
Look in stm32_p103_demos-master/demos/adc_single/main.c for an example program which uses the ADC.
Run the demo application
To build and run the adc_single demo:
cd stm32_p103_demos-master
QEMU_ARM_DIR=../qemu_stm32-stm32/arm-softmmu/ make adc_single_QEMURUN_TEL
(from another terminal) telnet localhost 7777
UART2 is attached to the telnet server on port 7777, which you should see output from. See the README for more information on how to build and run the demo applications.
Looking at the source for the adc_single demo application, it has 3 different modes:
Mode 1 (the default) will read from the temperature sensor on ADC channel 16
Mode 2 will read the Vref value from ADC channel 17
Mode 3 will read from ADC channel 8.
The modes can be selected by using a button, but since we are emulating the hardware with QEMU, the button is not available. I switched between the modes by changing the int mode = 1; value and recompiling the program.
ADC emulation
The method that QEMU uses to emulate each ADC channel is viewable in the stm32_adc_start_conv function in hw/arm/stm32_adc.c:
static void stm32_adc_start_conv(Stm32Adc *s)
{
    uint64_t curr_time = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
    int channel_number = stm32_ADC_get_channel_number(s, 1);

    // Write result of conversion
    if (channel_number == 16) {
        s->Vdda = rand() % (1200 + 1) + 2400;           // Vdda belongs to the interval [2400 3600] mv
        s->Vref = rand() % (s->Vdda - 2400 + 1) + 2400; // Vref belongs to the interval [2400 Vdda] mv
        s->ADC_DR = s->Vdda - s->Vref;
    }
    else if (channel_number == 17) {
        s->ADC_DR = (s->Vref = rand() % (s->Vdda - 2400 + 1) + 2400); // Vref [2400 Vdda] mv
    }
    else {
        s->ADC_DR = ((int)(1024. * (sin(2 * M_PI * qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) / 1e9) + 1.))) & 0xfff;
    }
    s->ADC_SR &= ~ADC_SR_EOC; // jmf : indicates ongoing conversion
    // calls conv_complete when expires
    timer_mod(s->conv_timer, curr_time + stm32_ADC_get_nbr_cycle_per_sample(s, channel_number));
}
As you can see, ADC channel 16 will emulate a random temperature input, ADC channel 17 will emulate a random Vref input, and all other channels will follow a sine wave from 0 to 2048.
[Graph: ADC values returned from all 3 modes]
If you want an ADC channel to use a different emulation pattern, you can modify stm32_adc_start_conv and rebuild QEMU following the steps above.

how to run alsa application without killing pulseaudio?

I am writing an application that uses ALSA. I have to kill PulseAudio each time I run my program, otherwise I get a "resource busy" error message. I use the "default" device in my ALSA program.
Here is my asoundrc:
pcm.!default {
    type plug
    slave.pcm "dmixer"
}
pcm.dmixer {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:1,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
    }
    bindings {
        0 0
        1 1
    }
}
ctl.dmixer {
    type hw
    card 1
}
Your .asoundrc explicitly bypasses PulseAudio.
The purpose of these definitions is to do software mixing, and to use the second card by default.
Both can be done with PulseAudio, so just remove this file.
To suspend PulseAudio (without killing or uninstalling it) while running a program, use the pasuspender utility, like so:
pasuspender -- program args
For example, with aplay:
pasuspender -- aplay music.wav
There is a second potential problem: PulseAudio can override your .asoundrc default device. On some Linux distributions, even when you define a default device in your ~/.asoundrc file, ALSA still decides to override that specification and use pulse instead.
The reason this happens on some distributions is that alsa.conf searches many places for configuration files (as well as your ~/.asoundrc file). One of the places it searches is /etc/alsa/conf.d/. On my system, /etc/alsa/conf.d/ has the file 99-pulseaudio-default.conf.example, which seems to be processed last and overrides any personal choice for default. The 99-pulseaudio-default.conf.example file sets the following:
pcm.!default pulse
One way to override pulse as your default device (without uninstalling PulseAudio) is to put a load hook into your ~/.asoundrc file: at the top of the file, instruct ALSA to re-load the config file ~/.asoundrc. An example ~/.asoundrc file is as follows:
@hooks [
    {
        func load
        files [
            "~/.asoundrc"
        ]
        errors false
    }
]
pcm.!default {
    type hw
    card "AudioInjector.Pro"
}
ctl.!default {
    type hw
    card "AudioInjector.Pro"
}
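After editing ~/.asoundrc, you can check what the default name now resolves to by listing the configured PCMs (this verification step is my suggestion, not part of the original answer):
aplay -L
The default entry in the listing should now describe your card rather than pulse.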

ALSA Api: How to play two wave files simultaneously?

What is the required API configuration/call for playing two independent wave files overlapped?
I tried to do so, but I am getting a resource busy error. Some pointers on solving the problem would be very helpful.
The following is the error message from snd_pcm_prepare() for the second wave file:
"Device or resource busy"
You can configure ALSA's dmix plugin to allow multiple applications to share input/output devices.
An example configuration to do this is below:
pcm.dmixed {
    type dmix
    ipc_key 1024
    ipc_key_add_uid 0
    slave.pcm "hw:0,0"
}
pcm.dsnooped {
    type dsnoop
    ipc_key 1025
    slave.pcm "hw:0,0"
}
pcm.duplex {
    type asym
    playback.pcm "dmixed"
    capture.pcm "dsnooped"
}
# Instruct ALSA to use pcm.duplex as the default device
pcm.!default {
    type plug
    slave.pcm "duplex"
}
ctl.!default {
    type hw
    card 0
}
This does the following:
creates a new device using the dmix plugin, which allows multiple apps to share the output stream
creates another using dsnoop which does the same thing for the input stream
merges these into a new duplex device that will support input and output using the asym plugin
tells ALSA to use the new duplex device as the default device
tells ALSA to use hw:0 to control the default device (alsamixer and so on)
Stick this in either ~/.asoundrc or /etc/asound.conf and you should be good to go.
For more information see http://www.alsa-project.org/main/index.php/Asoundrc#Software_mixing.
ALSA does not provide a mixer. If you need to play multiple audio streams at the same time, you need to mix them together on your own.
The easiest way to accomplish this is to decode the WAV files to float samples, add them, and clip the result when converting back to integer samples.
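As a rough illustration of this approach (my sketch, not from the original answer), the following Python fragment mixes two WAV files of the same rate, channel count, and sample width using the standard-library wave and audioop modules; audioop.add performs a saturating (clipped) integer addition, which has the same effect as the float-and-clip method:
import wave
import audioop

# Open two WAV files that share the same rate, channels, and sample width.
w1 = wave.open('song1.wav', 'rb')
w2 = wave.open('song2.wav', 'rb')
out = wave.open('mixed.wav', 'wb')
out.setparams(w1.getparams())

width = w1.getsampwidth()  # bytes per sample, e.g. 2 for S16_LE
chunk = 1024               # frames per read

while True:
    d1 = w1.readframes(chunk)
    d2 = w2.readframes(chunk)
    if not d1 or not d2:
        break  # stop at the end of the shorter file
    # Pad the last, possibly shorter fragment with zero bytes
    # (silence for signed formats) so audioop.add sees equal lengths.
    n = max(len(d1), len(d2))
    d1 = d1.ljust(n, b'\x00')
    d2 = d2.ljust(n, b'\x00')
    out.writeframes(audioop.add(d1, d2, width))

out.close()
The resulting mixed.wav can then be played with a single aplay invocation.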
Alternatively, you can try to open the default audio device (and not a hardware device like "hw:0") multiple times, once for each stream you wish to play, and hope that the dmix ALSA plugin is loaded and will provide the mixing functionality.
As ALSA provides a mixer device by default (dmix), you can simply use aplay, like so:
aplay song1.wav &
aplay -Dplug:dmix song2.wav
If your audio files are the same rate and format, then you don't need to use plug. It becomes:
aplay song1.wav &
aplay -Ddmix song2.wav
If, however, you want to program this method, there are some C++ audio programming tutorials here. These tutorials show you how to load audio files and operate different audio subsystems, such as jackd and ALSA.
One example demonstrates playback of one audio file using ALSA. It can be modified to open a second audio file like so:
Sox<short int> sox2;
res = sox2.openRead(argv[2]);
if (res < 0 && res != SOX_READ_MAXSCALE_ERROR)
    return SoxDebug().evaluateError(res);
Then modify the while loop like so:
Eigen::Array<int, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> buffer, buffer2;
size_t totalWritten = 0;
while (sox.read(buffer, pSize) >= 0 && sox2.read(buffer2, pSize) >= 0) {
    if (buffer.rows() == 0 || buffer2.rows() == 0) // end of either file
        break;
    // as the original files were opened as short int, summing will not overflow the int buffer
    buffer += buffer2;  // sum the two waveforms together
    playBack << buffer; // play the audio data
    totalWritten += buffer.rows();
}
You can also use this configuration:
pcm.dmix_stream {
    type dmix
    ipc_key 321456
    ipc_key_add_uid true
    slave.pcm "hw:0,0"
}
pcm.mix_stream {
    type plug
    slave.pcm dmix_stream
}
Put it in ~/.asoundrc or /etc/asound.conf, then use the following commands.
For a WAV file:
aplay -D mix_stream "filename"
For a raw or PCM file:
aplay -D mix_stream -c "channels" -r "rate" -f "format" "filename"
Enter the values for channels, rate, format, and filename as appropriate for your audio file.
The following is a very simplified multi-threaded playback solution (assuming both files have the same sample format, channel count, and sample rate).
First, start a buffer-based decoding thread per file (this code has to be written twice, for file1 and for file2):
import wave
import threading
import time

periodsize = 160
file1dataReady = False          # set once a decoded chunk is waiting
f = wave.open(file1Wave, 'rb')  # file1Wave holds the path to the first file
file1Alive = True
file1Thread = threading.Thread(target=_playFile1)  # _playFile1 is defined below
file1Thread.daemon = True
file1Thread.start()
The file-decoding thread itself (also has to be defined twice, for file1 and for file2):
def _playFile1():
    global file1Alive, file1dataReady, data1
    # Read data from the RIFF file until it is exhausted
    while file1Alive:
        if file1dataReady:
            time.sleep(.001)
        else:
            data1 = f.readframes(periodsize)
            if not data1:
                file1Alive = False
                f.close()
            else:
                file1dataReady = True
Then start the merging thread (aka funnel) that combines the decoded data:
import alsaaudio
import audioop
import threading

sink = alsaaudio.PCM(alsaaudio.PCM_PLAYBACK, device="hw:CARD=default")
sinkformat = 2  # sample width in bytes, as expected by audioop
funnelalive = True
funnelThread = threading.Thread(target=_funnelLoop)
funnelThread.daemon = True
funnelThread.start()
The merge-and-play (aka funnel) thread:
def _funnelLoop():
    global funnelalive, file1dataReady, file2dataReady
    # Keep reading all inputs
    while funnelalive:
        # if nothing is left to play - time to self-destruct
        if not file1Alive and not file2Alive:
            funnelalive = False
            sink.close()
        else:
            if file1dataReady and file2dataReady:
                # merge the two fragments (saturating add) and play them
                datamerged = audioop.add(data1, data2, sinkformat)
                file1dataReady = False
                file2dataReady = False
                sink.write(datamerged)
            time.sleep(.001)
