How to run an ALSA application without killing PulseAudio?

I am writing an application that uses ALSA. I have to kill PulseAudio each time I run my program, otherwise I get a "resource busy" error message. My ALSA program uses the "default" device.
Here is my asoundrc:
pcm.!default {
type plug
slave.pcm "dmixer"
}
pcm.dmixer {
type dmix
ipc_key 1024
slave {
pcm "hw:1,0"
period_time 0
period_size 1024
buffer_size 4096
rate 44100
}
bindings {
0 0
1 1
}
}
ctl.dmixer {
type hw
card 1
}

Your .asoundrc explicitly bypasses PulseAudio.
The purpose of these definitions is to do software mixing, and to use the second card by default.
Both can be done with PulseAudio, so just remove this file.
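If you keep PulseAudio, both goals can be handled by PulseAudio itself: software mixing happens automatically, and the default output card can be switched from the command line. A rough sketch (the sink name on the last line is only an example; use one of the names printed by the first command):
# list the available output devices (sinks)
pactl list short sinks
# make the second card the default output; mixing is handled by PulseAudio
pactl set-default-sink alsa_output.usb-0d8c_USB_Sound_Device-00.analog-stereo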

To suspend PulseAudio (without killing or uninstalling it) while you run a program, use the pasuspender utility, like so:
pasuspender -- program args
For example, with aplay:
pasuspender -- aplay music.wav
There is a second potential problem: PulseAudio can override the default device in your ~/.asoundrc. On some Linux distributions this bites when you want to run ALSA applications using the default device from ~/.asoundrc: ALSA still decides to override your default specification and use pulse instead.
The reason this happens on some distributions is that alsa.conf searches many places for configuration files (as well as your ~/.asoundrc). One of the places it searches is /etc/alsa/conf.d/. On my system, /etc/alsa/conf.d/ contains the file 99-pulseaudio-default.conf.example, which seems to be processed last and overrides any personal choice of default. That file sets the following:
pcm.!default pulse
One way to override pulse as your default device (without uninstalling PulseAudio) is to put a load hook at the top of your ~/.asoundrc that instructs ALSA to re-load ~/.asoundrc itself. An example ~/.asoundrc file is as follows:
@hooks [
{
func load
files [
"~/.asoundrc"
]
errors false
}
]
pcm.!default {
type hw
card "AudioInjector.Pro"
}
ctl.!default {
type hw
card "AudioInjector.Pro"
}
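To verify that pulse no longer wins, you can check what the name default now resolves to and do a quick playback test with standard ALSA tools:
# list the configured PCMs; "default" should now point at your own definition
aplay -L | grep -A 1 '^default'
# quick playback test through the default device
speaker-test -D default -c 2 -t wav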

Related

Send bidirectional signal from bluetooth PCM to loopback to use it in linphone

I have a bluetooth-speaker (with microphone) connected to my system. I am using bluez 5.50 and bluealsa 1.3.1 and my ~/.asoundrc currently looks like this:
pcm.!default {
type asym
playback.pcm "looptest"
capture.pcm "looprec"
}
pcm.looptest {
type plug
slave {
pcm {
type bluealsa
device E4:22:A5:58:09:95
profile "a2dp"
}
}
hint {
show on
description "Calisto"
}
}
ctl.looptest {
type bluealsa
}
pcm.looprec {
type plug
slave {
pcm {
type bluealsa
interface "hci0"
device E4:22:A5:58:09:95
profile "sco"
}
}
hint {
show on
description "Calisto REC"
}
}
ctl.looprec {
type bluealsa
}
When playing audio with aplay, the Bluetooth speaker is used as the default, so I only have to type aplay soundfile.wav. Recording audio with arecord -f cd record.wav also works correctly.
My main problem is that in linphone only "real" sound cards can be selected as playback/capture devices. What somehow helped was to create an ALSA loopback device. When I start alsaloop -P "hw:Loopback,1,0" -C "looptest" -t 500000 -d and then make a call in linphone, I can hear the voice of the callee. But the callee cannot hear my voice, which is obvious, since so far I have not configured a path from the microphone to the loopback device.
How do I create this path? I tried alsaloop -P "hw:Loopback,1,1" -C "looprec" -t 500000 -r 44100 and also several other loopback index combinations such as 0,0, 0,1 and 1,0, but none did the trick. As my current ALSA knowledge is very limited: any hints on what I might be doing wrong? Maybe the loopback device is not even needed and the trick can be done with some asoundrc magic? Or are there other solutions? The only thing I want to avoid is PulseAudio, as it does not work well with bluealsa.
I finally solved the problem by using pulseaudio which correctly worked with the Bluetooth speaker and also is supported by linphone.
In case anyone is interested in my summary of how to connect to Bluetooth speakers and make calls with linphone on a Raspberry Pi with Raspbian Stretch, have a look at https://gist.github.com/stefan-wegener/db61bd83a19b4901a2dbc6d78e237b63

I've added a MAX7320 i2c output chip. How can I get the kernel to load the driver for it?

I've added a MAX7320 i2c expander chip to i2c bus 0 on my ARM Linux board.
The chip works correctly from userspace with commands such as /usr/sbin/i2cset -y 0 0x5d 0x02 and /usr/sbin/i2cget -y 0 0x5d.
There is a drivers/gpio/gpio-max732x.c file in the kernel source, which is compiled into the kernel that I'm running. (I've built it from source.)
How do I tell the kernel that it should instantiate the gpio-max732x driver on "i2c bus 0, chip id 0x5d"?
Do I need to modify the device tree .dts file and put a new .dtb file in /boot/dtbs/?
What would the clause for instantiating a gpio-max732x module look like?
P.S. I've seen https://lkml.org/lkml/2015/1/13/305 but I can't figure out how to get the patch files.
Device Tree
There must be an appropriate Device Tree definition for your chip in order for the driver to be instantiated. There are two ways to do this:
Modify the .dts Device Tree file for your board (look in arch/arm/boot/dts/), then recompile it and re-flash it to your device.
This way is preferred when you have access to the kernel sources for your board and are able to re-flash the .dtb file to your device.
Create a Device Tree overlay file, compile it, and load it on your device.
This way is preferred when you don't have access to the kernel sources for your board, or you are unable to flash a new Device Tree blob to your device.
Your device definition in the Device Tree should look like this (according to Documentation/devicetree/bindings/gpio/gpio-max732x.txt):
&i2c0 {
expander: max7320@5d {
compatible = "maxim,max7320";
reg = <0x5d>;
gpio-controller;
#gpio-cells = <2>;
};
};
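If you take the overlay route, a rough sketch of building and loading it could look like this (the file names are made up here, and the exact loading mechanism depends on your board; this uses the generic configfs interface where available):
# compile the overlay; -@ keeps the symbols needed to resolve &i2c0
dtc -@ -I dts -O dtb -o max7320-overlay.dtbo max7320-overlay.dts
# load it at runtime through configfs (requires kernel overlay/configfs support)
mkdir /sys/kernel/config/device-tree/overlays/max7320
cat max7320-overlay.dtbo > /sys/kernel/config/device-tree/overlays/max7320/dtbo
# if you modified the board .dts in the kernel tree instead, rebuild the blobs with:
make ARCH=arm dtbs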
Kernel configuration
As your expander chip (MAX7320) has no input GPIOs, you don't need IRQ support for MAX732x. So you can disable CONFIG_GPIO_MAX732X_IRQ in your kernel configuration.
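In your kernel .config this corresponds to something like the following sketch (symbol availability depends on your kernel version):
CONFIG_GPIO_MAX732X=y
# CONFIG_GPIO_MAX732X_IRQ is not set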
Matching device with driver
Once your Device Tree (including the definition for the MAX7320) is loaded, the MAX732x driver will be matched against that device definition and instantiated. Here is how the matching happens.
In the Device Tree file you have the compatible property:
compatible = "maxim,max7320";
In the MAX732x driver you can see this table:
static const struct of_device_id max732x_of_table[] = {
...
{ .compatible = "maxim,max7320" },
...
When the driver is loaded and the Device Tree blob is loaded, the kernel tries to find a match between each driver and each Device Tree definition, simply by comparing the strings above. If the strings match, the kernel instantiates the driver, passing the corresponding device parameters to it. Look at the i2c_device_match() function for details.
Obtaining patches
The best way is to use kernel sources that already have Device Tree support for MAX732x (v4.0+). But if that's not the case, then...
You can cherry-pick the patches from the upstream kernel into your kernel:
$ git remote add upstream git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$ git fetch --all
$ git cherry-pick 43c4bcf9425e
$ git cherry-pick 479f8a5744d8
$ git cherry-pick 09afa276d52e
$ git cherry-pick 996bd13f28e6
And if you still want to apply the patches manually (the worst option, actually), here you can find direct links to the patches. Click the (patch) link to get a raw patch.
Also check later patches for gpio-max732x.c here.
Hardware concerns
To be sure that your chip has I2C address 0x5d, check that the configuration pins are tied to the following lines (as per the datasheet):
Pin Line
-----------
AD2 V+
AD0 V+

Rpi wheezy duplicate capture on usb & dummy cards

I am trying to create an application that will stream audio with DarkIce and also provide an LED VU-meter indication of the audio stream.
I have created a virtual card with . This card is recognized by alsamixer, aplay, and arecord, but I cannot transfer the line-in signal from the USB card (hw:0,0) to the dummy card (hw:2,0).
I have tried several .asoundrc scripts that I found both in your Q&A and via Google, using ALSA dmix, dsnoop, and multi, but nothing has worked so far.
I am currently using one Python program (LED_VU.py) that autostarts in one terminal, and a second Python program that runs DarkIce (streamer.diDual.py) in a second terminal. The configuration portion of the LED program is:
#!/usr/bin/env python
### LED VU Meter on RPi ###
import alsaaudio as AA
import audioop
from time import sleep
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setwarnings(False)
# Define physical header pin numbers for 10 LEDs
RPiPins = [11, 12, 13, 15, 16, 18, 22, 7, 3, 5]
# Set all pins as output
for pin in RPiPins:
    GPIO.setup(pin, GPIO.OUT)
# Set up audio
card = 'hw:0,0'
The configuration portion of darkiceDual.cfg is:
# Darkice Configuration File - Generated by Streamer
[general]
duration = 0 # duration of encoding, in seconds. 0 means forever
bufferSecs = 5 # size of internal slip buffer in seconds
reconnect = yes # reconnect to server if disconnected
[input]
device = hw:2,0 # alsa usb soundcard device for audio input
sampleRate = 44100 # sample rate in Hz
bitsPerSample = 16 # bits per sample
channel = 2 # channels. 1 = mono, 2 = stereo
My .asoundrc file is:
pcm.!default {
type plug
slave.pcm "mdev"
route_policy "duplicate"
}
pcm.mdev {
type multi
slaves.a.pcm "hw:0,0"
slaves.a.channels 2
slaves.b.pcm "dmixer"
slaves.b.channels 2
bindings.0.slave a
bindings.0.channel 0
bindings.1.slave a
bindings.1.channel 1
bindings.2.slave b
bindings.2.channel 0
bindings.3.slave b
bindings.3.channel 1
}
pcm.dmixer {
type dmix
ipc_key 1024
slave {
pcm "hw:2,0"
period_time 0
period_size 1024
buffer_size 4096
rate 44100
channels 2
format S16_LE
}
}
What am I doing wrong?
The streamer has no audio if I use hw:2,0, and I get the 'Can not connect' error if I use hw:0,0 (which LED_VU.py is using). If I change the card setting of the LED program to hw:2,0, the LEDs lock up with all of them lit.
Any help is appreciated!
Thank you for the help. The two programs now both use the USB line-in as expected.
However, I am not able to use alsamixer or amixer any more; PulseAudio is the problem now. If it is installed, the LED_VU.py program will not run. When it is uninstalled, the Python programs run, but alsamixer does not.
Apparently, you want to run the VU meter and DarkIce from the same audio data, i.e., you need to allow two programs to share one recording device.
This can be done with the dsnoop plugin, which is enabled by default for USB devices.
Tell both programs to record from the device default. If that name has been redefined, try dsnoop:0 instead.
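If the distribution default does not already go through dsnoop, a minimal ~/.asoundrc sketch along these lines would share the USB card's capture stream (the PCM names are illustrative; card 0 is the USB card, as in your setup):
pcm.usb_snoop {
    type dsnoop
    ipc_key 2048
    slave.pcm "hw:0,0"
}
pcm.!default {
    type asym
    playback.pcm "plughw:0,0"
    capture.pcm {
        type plug
        slave.pcm "usb_snoop"
    }
}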

Generation of RTE_Components.h

I'm working with MDK-Pro and the File System library.
In my application, I require an SPI interface to the SD card. I've managed to set up the project properly, except that the RTE_Components.h file that Keil generates contains the line #define RTE_Drivers_MCI0, which subsequently triggers a preprocessor error ("SDIO not configured in RTE_Device.h").
Although I can manually comment out this line in RTE_Components.h, every so often Keil regenerates the file and the problem comes back. Does anyone know what exactly generates this file, and how I can stop it from adding the SDIO-related definitions to the project?
RTE_Components.h is not supposed to be modified and is always generated automatically. The reason the stack tries to connect via the MCI interface lies in the configuration made in FS_Config_MC_0.h:
// <o>Connect to hardware via Driver_MCI# <0-255>
// <i>Select driver control block for hardware interface
#define MC0_MCI_DRIVER 0
// <o>Connect to hardware via Driver_SPI# <0-255>
// <i>Select driver control block for hardware interface when in SPI mode
#define MC0_SPI_DRIVER 0
// <o>Memory Card Interface Mode <0=>Native <1=>SPI
// <i>Native uses a SD Bus with up to 8 data lines, CLK, and CMD
// <i>SPI uses 2 data lines (MOSI and MISO), SCLK and CS
// <i>When using SPI both Driver_SPI# and Driver_MCI# must be specified
// <i>since the MCI driver provides the control interface lines.
#define MC0_SPI 1

ALSA Api: How to play two wave files simultaneously?

What API configuration/calls are required to play two independent wave files overlapped?
I tried to do so, but I am getting a resource-busy error. Some pointers on how to solve this problem would be very helpful.
The following is the error message from snd_pcm_prepare() for the second wave file:
"Device or resource busy"
You can configure ALSA's dmix plugin to allow multiple applications to share input/output devices.
An example configuration to do this is below:
pcm.dmixed {
type dmix
ipc_key 1024
ipc_key_add_uid 0
slave.pcm "hw:0,0"
}
pcm.dsnooped {
type dsnoop
ipc_key 1025
slave.pcm "hw:0,0"
}
pcm.duplex {
type asym
playback.pcm "dmixed"
capture.pcm "dsnooped"
}
# Instruct ALSA to use pcm.duplex as the default device
pcm.!default {
type plug
slave.pcm "duplex"
}
ctl.!default {
type hw
card 0
}
This does the following:
creates a new device using the dmix plugin, which allows multiple apps to share the output stream
creates another using dsnoop which does the same thing for the input stream
merges these into a new duplex device that will support input and output using the asym plugin
tells ALSA to use the new duplex device as the default device
tells ALSA to use hw:0 to control the default device (alsamixer and so on)
Stick this in either ~/.asoundrc or /etc/asound.conf and you should be good to go.
For more information see http://www.alsa-project.org/main/index.php/Asoundrc#Software_mixing.
ALSA does not provide a mixer, so if you need to play multiple audio streams at the same time, you have to mix them together yourself.
The easiest way to accomplish this is to decode the WAV files to float samples, add them together, and clip the result when converting back to integer samples.
Alternatively, you can try to open the default audio device (and not a hardware device like "hw:0") multiple times, once for each stream you wish to play, and hope that the dmix ALSA plugin is loaded and provides the mixing.
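For illustration, here is a minimal Python sketch of that manual-mixing idea. It takes a shortcut compared to the float approach described above: it uses audioop.add, which saturates on overflow, and it assumes both input files share the same sample rate, sample width and channel count (the function and path names are made up for this example):
import wave, audioop

def mix_wavs(path_a, path_b, out_path):
    # both inputs are assumed to have identical rate, width and channel count
    wav_a = wave.open(path_a, 'rb')
    wav_b = wave.open(path_b, 'rb')
    out = wave.open(out_path, 'wb')
    out.setparams(wav_a.getparams())
    width = wav_a.getsampwidth()   # bytes per sample, e.g. 2 for 16-bit PCM
    chunk = 1024                   # frames per read
    while True:
        da = wav_a.readframes(chunk)
        db = wav_b.readframes(chunk)
        if not da and not db:
            break
        # pad the shorter fragment with silence so both have equal length
        if len(da) < len(db):
            da += b'\x00' * (len(db) - len(da))
        elif len(db) < len(da):
            db += b'\x00' * (len(da) - len(db))
        out.writeframes(audioop.add(da, db, width))   # add saturates on overflow
    wav_a.close()
    wav_b.close()
    out.close()
The mixed file can then be played through a single PCM handle, for example with aplay.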
As ALSA provides a mixer device by default (dmix), you can simply use aplay, like so:
aplay song1.wav &
aplay -Dplug:dmix song2.wav
If your audio files are the same rate and format, then you don't need to use plug. It becomes:
aplay song1.wav &
aplay -Ddmix song2.wav
If, however, you want to program this method yourself, there are some C++ audio programming tutorials here. These tutorials show how to load audio files and drive different audio subsystems, such as jackd and ALSA.
This example demonstrates playback of one audio file using ALSA. It can be modified to open a second audio file like so:
Sox<short int> sox2;
res=sox2.openRead(argv[2]);
if (res<0 && res!=SOX_READ_MAXSCALE_ERROR)
return SoxDebug().evaluateError(res);
Then modify the while loop like so:
Eigen::Array<int, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> buffer, buffer2;
size_t totalWritten=0;
while (sox.read(buffer, pSize)>=0 && sox2.read(buffer2, pSize)>=0){
if (buffer.rows()==0 || buffer2.rows()==0) // end of one of the files
break;
// as the original files were opened as short int, summing will not overflow the int buffer
buffer+=buffer2; // sum the two waveforms together
playBack<<buffer; // play the audio data
totalWritten+=buffer.rows();
}
You can also use this configuration:
pcm.dmix_stream
{
type dmix
ipc_key 321456
ipc_key_add_uid true
slave.pcm "hw:0,0"
}
pcm.mix_stream
{
type plug
slave.pcm dmix_stream
}
Put it in ~/.asoundrc or /etc/asound.conf, then use the following commands.
For a wav file:
aplay -D mix_stream "filename"
For a raw or PCM file:
aplay -D mix_stream -c "channels" -r "rate" -f "format" "filename"
Enter the values for channels, rate, format and filename according to your audio file.
The following is a very simplified multi-threaded playback solution (assuming both files have the same sample format, channel count and sample rate):
Start a buffer-based decoding thread per file (this code has to be written twice, once for file1 and once for file2):
import wave
import threading
import time
periodsize = 160
file1dataReady = False
f = wave.open(file1Wave, 'rb')
file1Alive = True
file1Thread = threading.Thread(target=_playFile1)
file1Thread.daemon = True
file1Thread.start()
The file-decoding thread itself (also has to be defined twice, for file1 and for file2):
def _playFile1():
    # read chunks from the RIFF file until it is exhausted
    global file1Alive, file1dataReady, data1
    while file1Alive:
        if file1dataReady:
            # previous chunk not yet consumed by the funnel thread
            time.sleep(.001)
        else:
            data1 = f.readframes(periodsize)
            if not data1:
                file1Alive = False
                f.close()
            else:
                file1dataReady = True
Start the merging thread (aka funnel) that merges the decoded data:
import alsaaudio
import audioop
import threading
import time
sink = alsaaudio.PCM(alsaaudio.PCM_PLAYBACK, device="hw:CARD=default")
sinkformat = 2   # sample width in bytes for audioop
funnelalive = True
funnelThread = threading.Thread(target=_funnelLoop)
funnelThread.daemon = True
funnelThread.start()
The merge-and-play (aka funnel) thread:
def _funnelLoop():
    # read from both decoders and mix into the sink
    global funnelalive, file1dataReady, file2dataReady
    while funnelalive:
        # if nothing is left to play - time to self-destruct
        if not file1Alive and not file2Alive:
            funnelalive = False
            sink.close()
        else:
            if file1dataReady and file2dataReady:
                # merge the two chunks (sinkformat is the sample width in bytes)
                datamerged = audioop.add(data1, data2, sinkformat)
                file1dataReady = False
                file2dataReady = False
                sink.write(datamerged)
        time.sleep(.001)
