I am using the fluid_synth_process() function in my custom fx function to modulate audio, following the example. But this seems to modulate the audio from all MIDI channels globally. How do I modulate the audio output of each MIDI channel separately?
I am trying to create an application that streams audio with DarkIce while also providing an LED VU-meter indication of the audio stream.
I have created a virtual (dummy) card. This card is recognized by alsamixer, aplay, and arecord, but I cannot transfer the line-in signal from the USB card (hw:0,0) to the dummy card (hw:2,0).
I have tried several .asoundrc scripts that I found both in your Q&A and via Google, using the ALSA dmix, dsnoop, and multi plugins, but nothing has worked so far.
I am presently running one Python program (LED_VU.py) that autostarts in one terminal, and a second Python program that runs DarkIce (streamer.diDual.py) in a second terminal. The configuration portion of the LED program is:
#!/usr/bin/env python
### LED VU Meter on RPi ###
import alsaaudio as AA
import audioop
from time import sleep
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setwarnings(False)

# Define physical header pin numbers for 10 LEDs
RPiPins = [11, 12, 13, 15, 16, 18, 22, 7, 3, 5]

# Set all pins as output
for pin in RPiPins:
    GPIO.setup(pin, GPIO.OUT)

# Set up audio
card = 'hw:0,0'
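For reference, the metering part of such a script typically reads PCM chunks, computes an RMS level, and maps that level to a number of lit LEDs. A minimal sketch of the level-to-LED mapping (the function name and the linear scale are my own assumptions, not taken from LED_VU.py):

```python
def led_count(rms, num_leds=10, full_scale=32768):
    """Map an RMS level (0..full_scale) linearly onto 0..num_leds lit LEDs."""
    return min(num_leds, rms * num_leds // full_scale)

# Silence lights nothing; half scale lights half the bar.
print(led_count(0))      # 0
print(led_count(16384))  # 5
```

A real meter would usually use a logarithmic (dB) scale instead of a linear one, but the linear version keeps the sketch short.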
The configuration portion of darkiceDual.cfg is:
# Darkice Configuration File - Generated by Streamer
[general]
duration = 0 # duration of encoding, in seconds. 0 means forever
bufferSecs = 5 # size of internal slip buffer in seconds
reconnect = yes # reconnect to server if disconnected
[input]
device = hw:2,0 # alsa usb soundcard device for audio input
sampleRate = 44100 # sample rate in Hz
bitsPerSample = 16 # bits per sample
channel = 2 # channels. 1 = mono, 2 = stereo
My .asoundrc file is:
pcm.!default {
    type plug
    slave.pcm "mdev"
    route_policy "duplicate"
}
pcm.mdev {
    type multi
    slaves.a.pcm "hw:0,0"
    slaves.a.channels 2
    slaves.b.pcm "dmixer"
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}
pcm.dmixer {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:2,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 44100
        channels 2
        format S16_LE
    }
}
What am I doing wrong?
The streamer gets no audio if I use hw:2,0, and I get a 'Can not connect' error if I use hw:0,0 (which LED_VU.py is using). If I change the LED program's card setting to hw:2,0, the LEDs lock up with all of them lit.
Any help is appreciated!
Thank you for the help. The two programs now both use the USB line-in as expected.
However, I am no longer able to use alsamixer or amixer; PulseAudio is causing the problem now. If it is installed, the LED_VU.py program will not run. When it is uninstalled, the Python programs run, but alsamixer does not.
Apparently, you want to run the VU meter and DarkIce from the same audio data, i.e., you need to allow two programs to share one recording device.
This can be done with the dsnoop plugin, which is enabled by default for USB devices.
Tell both programs to record from the device named default. If that has been redefined, try dsnoop:0 instead.
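For illustration, an explicit dsnoop definition in ~/.asoundrc might look like this (a sketch assuming the USB card is card 0; the device name sharedmic and the ipc_key are my own choices, adjust the card number to your setup):

```
pcm.sharedmic {
    type dsnoop
    ipc_key 2048        # any key unique among your dsnoop/dmix definitions
    slave.pcm "hw:0,0"  # the USB capture device
}
```

Both LED_VU.py and DarkIce could then open the device sharedmic (or simply default/dsnoop:0 as suggested above).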
I am receiving audio data in RTP stream. The audio can be either in G711 A-law or u-law depending on the source. How to decode the audio byte stream using ffmpeg api's? Can ALSA on Linux directly play the G711 audio format?
Libav certainly supports G.711. The associated codec IDs are AV_CODEC_ID_PCM_MULAW and AV_CODEC_ID_PCM_ALAW. I suggest you start from the example program they provide and modify audio_decode_example() to use G.711.
avcodec.h: http://libav.org/doxygen/master/avcodec_8h.html
libav example: http://libav.org/doxygen/master/avcodec_8c-example.html
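As a complement to the libav route: the G.711 μ-law expansion itself is simple enough to write by hand. Below is a sketch of the standard ITU-T G.711 decode step in plain Python (the function name is my own):

```python
def ulaw_to_linear(byte):
    """Expand one G.711 mu-law byte to a 16-bit linear PCM sample."""
    u = ~byte & 0xFF                  # mu-law bytes are stored complemented
    sign = u & 0x80
    exponent = (u >> 4) & 0x07
    mantissa = u & 0x0F
    sample = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -sample if sign else sample

# 0xFF encodes silence; 0x00 encodes the largest negative sample.
print(ulaw_to_linear(0xFF))  # 0
print(ulaw_to_linear(0x00))  # -32124
```

A-law uses a similar segment/mantissa layout with different constants; for an RTP receiver you would apply this per byte of each G.711 payload.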
What is the required API configuration/call for playing two independent wave files overlapped?
I tried to do so, but I am getting a 'resource busy' error. Some pointers to solve the problem would be very helpful.
The following is the error message from snd_pcm_prepare() for the second wave file:
"Device or resource busy"
You can configure ALSA's dmix plugin to allow multiple applications to share input/output devices.
An example configuration to do this is below:
pcm.dmixed {
    type dmix
    ipc_key 1024
    ipc_key_add_uid 0
    slave.pcm "hw:0,0"
}
pcm.dsnooped {
    type dsnoop
    ipc_key 1025
    slave.pcm "hw:0,0"
}
pcm.duplex {
    type asym
    playback.pcm "dmixed"
    capture.pcm "dsnooped"
}

# Instruct ALSA to use pcm.duplex as the default device
pcm.!default {
    type plug
    slave.pcm "duplex"
}
ctl.!default {
    type hw
    card 0
}
This does the following:
- creates a new device using the dmix plugin, which allows multiple apps to share the output stream
- creates another using dsnoop, which does the same thing for the input stream
- merges these into a new duplex device that supports input and output, using the asym plugin
- tells ALSA to use the new duplex device as the default device
- tells ALSA to use hw:0 to control the default device (alsamixer and so on)
Stick this in either ~/.asoundrc or /etc/asound.conf and you should be good to go.
For more information see http://www.alsa-project.org/main/index.php/Asoundrc#Software_mixing.
ALSA does not provide a mixer. If you need to play multiple audio streams at the same time, you need to mix them together on your own.
The easiest way to accomplish this is to decode the WAV files to float samples, add them, and clip the result when converting back to integer samples.
Alternatively, you can try to open the default audio device (and not a hardware device like "hw:0") multiple times, once for each stream you wish to play, and hope that the dmix ALSA plugin is loaded and will provide the mixing functionality.
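The decode-add-clip approach described above can be sketched in a few lines (a minimal illustration; the function name is my own):

```python
def mix_to_int16(a, b):
    """Mix two streams of float samples in [-1.0, 1.0] and clip to 16-bit PCM."""
    out = []
    for x, y in zip(a, b):
        s = max(-1.0, min(1.0, x + y))  # clip the summed sample
        out.append(int(s * 32767))
    return out

# Two loud samples clip at full scale instead of wrapping around.
print(mix_to_int16([0.75], [0.75]))  # [32767]
```

Clipping distorts, but it is far less objectionable than the integer wrap-around you would get by summing int16 buffers directly.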
As ALSA provides a mixer device by default (dmix), you can simply use aplay, like so:
aplay song1.wav &
aplay -Dplug:dmix song2.wav
If your audio files have the same rate and format, then you don't need to use plug. It becomes:
aplay song1.wav &
aplay -Ddmix song2.wav
If, however, you want to program this method, there are some C++ audio programming tutorials here. These tutorials show you how to load audio files and drive different audio subsystems, such as jackd and ALSA.
This example demonstrates playback of one audio file using ALSA. It can be modified to open a second audio file like so:
Sox<short int> sox2;
res = sox2.openRead(argv[2]);
if (res < 0 && res != SOX_READ_MAXSCALE_ERROR)
    return SoxDebug().evaluateError(res);
Then modify the while loop like so:
Eigen::Array<int, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> buffer, buffer2;
size_t totalWritten = 0;
while (sox.read(buffer, pSize) >= 0 && sox2.read(buffer2, pSize) >= 0) {
    if (buffer.rows() == 0 || buffer2.rows() == 0) // end of one of the files
        break;
    // as the original files were opened as short int, summing will not overflow the int buffer
    buffer += buffer2;   // sum the two waveforms together
    playBack << buffer;  // play the audio data
    totalWritten += buffer.rows();
}
You can also use this configuration:
pcm.dmix_stream {
    type dmix
    ipc_key 321456
    ipc_key_add_uid true
    slave.pcm "hw:0,0"
}
pcm.mix_stream {
    type plug
    slave.pcm "dmix_stream"
}
Add this to ~/.asoundrc or /etc/asound.conf.
You can then use the following commands.
For a WAV file:
aplay -D mix_stream "filename"
For a raw PCM file:
aplay -D mix_stream -c "channels" -r "rate" -f "format" "filename"
Enter the values for channels, rate, format, and filename as appropriate for your audio file.
The following is a very simplified multi-threaded playback solution (assuming both files have the same sample format, channel count, and sample rate):
Start a buffer-based decoding thread per file (this code has to be duplicated: once for file1 and once for file2):
import wave
import threading
import time

periodsize = 160
f = wave.open(file1Wave, 'rb')
file1Alive = True
file1dataReady = False
file1Thread = threading.Thread(target=_playFile1)
file1Thread.daemon = True
file1Thread.start()
The file-decoding thread itself (also has to be defined twice, for file1 and for file2):
def _playFile1():
    global file1Alive, file1dataReady, data1
    # Read data from the RIFF file
    while file1Alive:
        if file1dataReady:
            time.sleep(.001)
        else:
            data1 = f.readframes(periodsize)
            if not data1:
                file1Alive = False
                f.close()
            else:
                file1dataReady = True
Start the merging thread (aka the funnel) that merges the file decodings:
import alsaaudio
import audioop
import threading

sink = alsaaudio.PCM(alsaaudio.PCM_PLAYBACK, device="hw:CARD=default")
sinkformat = 2
funnelalive = True
funnelThread = threading.Thread(target=_funnelLoop)
funnelThread.daemon = True
funnelThread.start()
The merge-and-play (funnel) thread:
def _funnelLoop():
    global funnelalive, file1dataReady, file2dataReady
    # Read all inputs
    while funnelalive:
        # if there is nothing left to play, it is time to self-destruct
        if not file1Alive and not file2Alive:
            funnelalive = False
            sink.close()
        else:
            if file1dataReady and file2dataReady:
                # merge the data from both decoders
                datamerged = audioop.add(data1, data2, sinkformat)
                file1dataReady = False
                file2dataReady = False
                sink.write(datamerged)
            time.sleep(.001)
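Note that audioop.add is deprecated (and removed in Python 3.13); an equivalent for 16-bit little-endian frames, with the same clipping behaviour, can be sketched with struct (the function name is my own):

```python
import struct

def mix16(a, b):
    """Mix two equal-length buffers of 16-bit little-endian PCM, with clipping."""
    fmt = '<%dh' % (len(a) // 2)
    mixed = (max(-32768, min(32767, x + y))
             for x, y in zip(struct.unpack(fmt, a), struct.unpack(fmt, b)))
    return struct.pack(fmt, *mixed)

print(mix16(b'\x01\x00', b'\x02\x00'))  # b'\x03\x00'
```

It could be dropped into the funnel loop in place of audioop.add(data1, data2, 2).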
For example, this is how to use PulseAudio: http://freedesktop.org/software/pulseaudio/doxygen/pacat-simple_8c-example.html
but I'm not clear on how I can simply play a WAV file, or an Ogg file for that matter.
The example code will play raw PCM data from a file. The trick is getting the data from a WAV file into this format. Microsoft WAV files look like this:
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
WAV files just store raw PCM data after the header. You just have to strip the header off the WAV file and dump the rest into a file (the extension is irrelevant, but I like to use .raw). That is, you can write a program that either copies everything past byte 44 into a new file, or reads everything after that point directly into a buffer. Pass either format to the PulseAudio example and you should be good to go.
Things to look out for: the endianness of the file and of your system, the bit depth, and the number of channels. These are in the WAV header, and you may have to read them and tell pa_simple before you play the data. I'm not sure whether pa_simple detects this information for you, though; I like to work with the asynchronous implementation, where I just specify the format directly.
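A convenient way to do the header handling described above is Python's standard wave module, which parses the header and hands you the raw PCM frames plus the format parameters. A self-contained sketch (the in-memory file here just stands in for a real .wav on disk):

```python
import io
import wave

# Build a tiny in-memory WAV file so the example is self-contained.
buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(1)     # mono
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(8000)
    w.writeframes(b'\x01\x00\x02\x00')

# Reading it back: the header is parsed for us, the rest is raw PCM.
buf.seek(0)
with wave.open(buf, 'rb') as w:
    params = (w.getnchannels(), w.getsampwidth(), w.getframerate())
    pcm = w.readframes(w.getnframes())

print(params)  # (1, 2, 8000)
print(pcm)     # b'\x01\x00\x02\x00'
```

pcm is exactly the header-less payload you would feed to the pa_simple example, and params gives the channel count, bit depth, and rate to declare to PulseAudio. Note that wave only handles canonical PCM WAV files; the "skip 44 bytes" trick likewise assumes the simple canonical layout.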
pacat --list-file-formats
aiff AIFF (Apple/SGI)
au AU (Sun/NeXT)
avr AVR (Audio Visual Research)
caf CAF (Apple Core Audio File)
flac FLAC (Free Lossless Audio Codec)
htk HTK (HMM Tool Kit)
iff IFF (Amiga IFF/SVX8/SV16)
mat MAT4 (GNU Octave 2.0 / Matlab 4.2)
mat MAT5 (GNU Octave 2.1 / Matlab 5.0)
mpc MPC (Akai MPC 2k)
oga OGG (OGG Container format)
paf PAF (Ensoniq PARIS)
pvf PVF (Portable Voice Format)
raw RAW (header-less)
rf64 RF64 (RIFF 64)
sd2 SD2 (Sound Designer II)
sds SDS (Midi Sample Dump Standard)
sf SF (Berkeley/IRCAM/CARL)
voc VOC (Creative Labs)
w64 W64 (SoundFoundry WAVE 64)
wav WAV (Microsoft)
wav WAV (NIST Sphere)
wav WAVEX (Microsoft)
wve WVE (Psion Series 3)
xi XI (FastTracker 2)
Please guide me to achieving the following result in my program (written in C):
I have an HTTP MPEG-TS stream as the source (codecs h264 & aac); it has one video and one audio substream.
I need to get MPEG ES frames (of the same codecs) to send them via RTP to RTSP clients. It would be best if libavformat gave me the frames with the RTP header already attached.
MPEG ES is needed because, as far as I know, media players on BlackBerry phones do not play TS (I tried it).
That said, I would appreciate it if anyone could point me to another format that is easier to produce in this situation, that can hold h264 & aac, and that plays well on BlackBerry and other phones.
I have already succeeded with another task: opening the stream and remuxing it to an FLV container.
I tried to open two output format contexts with the "rtp" format; I got frames and sent them to the client. No success.
I have also tried writing frames to an "m4v" AVFormatContext: I got frames, split them at NAL-unit boundaries, added an RTP header before each frame, and sent them to the client. The client displays the first frame and hangs, or plays a second of video+audio (faster than needed) every 10 seconds or more.
In the VLC player log I have this: http://pastebin.com/NQ3htvFi
I scaled the timestamps to make them start at 0 for simplicity.
Comparing with VLC (or Wowza, sorry, I don't remember which): it incremented the audio TS by 1024, not 1920, so I did additional linear scaling to match the other streamers.
A packet dump of the playback of bigbuckbunny_450.mp4 is here:
ftp://rtb.org.ua/tmp/output_my_bbb_450.log
BTW, in both cases I copied the SDP almost verbatim from Wowza or VLC.
What is the right way to get what I need?
I am also interested in whether there is an alternative library similar to libavformat, even one in an embryonic state.