Capturing sound with ALSA - C

I am trying to capture sound from the sound card with ALSA on Linux. The data is read into a vector in PCM format. I need to find the right way to capture it, save it to a file, and play it back to check whether the received data is correct.

To capture audio to a file with ALSA you can use arecord; it lets you simply capture input audio to a file. Or you can write your own application that reads the PCM data itself; the snd_pcm_readi() API serves this purpose.
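For a quick test from the shell, something like this captures ten seconds of 16-bit stereo audio at 44.1 kHz (the device name hw:0,0 is only an example; pick the right one from arecord -l):

arecord -D hw:0,0 -f S16_LE -r 44100 -c 2 -d 10 capture.wav

If you would rather do it in your own program, a minimal capture loop with snd_pcm_readi() could look roughly like the sketch below. The device name, rate, and frame counts are assumptions, error handling is kept to a minimum, and the raw samples go into capture.raw, which you can play back with aplay -t raw -f S16_LE -r 44100 -c 2 capture.raw to verify them.

#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_uframes_t frames = 1024;     /* frames per read (assumed) */
    short buf[1024 * 2];                 /* interleaved 16-bit stereo */
    FILE *out = fopen("capture.raw", "wb");

    if (!out || snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;

    /* 16-bit little-endian, interleaved, 2 channels, 44.1 kHz, 500 ms latency */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 500000) < 0)
        return 1;

    for (int i = 0; i < 500; i++) {      /* roughly 11.6 s of audio */
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, frames);
        if (n < 0)
            n = snd_pcm_recover(pcm, n, 0);   /* recover from overruns */
        if (n > 0)
            fwrite(buf, 2 * sizeof(short), n, out);
    }

    snd_pcm_close(pcm);
    fclose(out);
    return 0;
}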

Related

Pause ALSA pcm handle without snd_pcm_pause()

I am working on an audio player written in C that uses libasound (ALSA) as the audio back-end. I have implemented a callback mechanism that allows the audio player to send audio to ALSA in a timely manner. I configured my pcm_handle to internally hold a buffer with 500ms (= buffer_time) worth of audio data. By using poll() on ALSA's poll descriptors the program is notified to add more data to the buffer. I also poll() on my own custom poll descriptor in order to notify the loop when to stop/pause.
I want to be able to pause the audio output (just like pausing a song) and thus I must pause the pcm handle, i.e. tell ALSA to stop sending audio frames from the internal buffer to the soundcard. One could use the snd_pcm_pause() function, however, as the documentation shows, this function does not work on all hardware. My audio player is targeted towards embedded devices so I want to support all hardware, as a result I prefer not to use that function.
Alternatively I could use the snd_pcm_drain() function, which will pause the pcm handle after all the pending audio samples in the 500ms buffer have been played. This will however, result in a latency of up to 500ms. It would be possible to minimize this latency by decreasing the buffer size, but this will never eliminate the problem and would increase the chances for an underrun.
Another option is to use the snd_pcm_drop() function, this function will pause the pcm handle and discard any pending audio samples in the buffer. This solves the latency problem but the result is that when that pcm_handle is restarted, some audio samples are lost, which makes this option also not ideal.
I was personally thinking about using the snd_pcm_drop() function. To solve the lost samples problem I am looking for a way to retrieve all the pending samples in the ALSA buffer so that I can play them as soon as the pcm handle is started again. I tried using snd_pcm_readi() just before calling snd_pcm_drop() in order to retrieve the pending samples but this gave me segmentation faults. I believe ALSA does not allow the use of this function on SND_PCM_STREAM_PLAYBACK handles.
So is there another option to pause the pcm handle without latency and without losing pending audio frames? If not, as suggested in my solution, is there a way to retrieve those pending frames from ALSA's internal buffer?
My implementation strongly resembles the "write and wait transfer method" shown here. The following pseudo code gives a simplified version of my implementation (without a solution for the current problem):
write_and_poll_loop(...) {
    while (1) {
        poll(...);                 /** Wait for room in the buffer or for a pause command to come in */
        if (ALSA_buffer_has_room) {
            customAudioCallback(); /** In this callback audio samples are written to the ALSA buffer */
        }
        else if (pause_command) {
            snd_pcm_drop();        /** Discard all samples in the ALSA buffer */
            snd_pcm_prepare();     /** Prepare pcm_handle for future playback */
            block_until_play_command(); /** Block until we want to play audio again */
        }
    }
}
EDIT: I realized that with snd_pcm_hw_params_can_pause() I can check whether the hardware supports pausing; if it cannot, I fall back to the snd_pcm_drop() method and just pay the price of losing samples. Nevertheless, I would love to see a solution that is independent of the hardware.
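A minimal sketch of that fallback, assuming pcm_handle is the already-configured playback handle:

snd_pcm_hw_params_t *params;
snd_pcm_hw_params_alloca(&params);
snd_pcm_hw_params_current(pcm_handle, params);   /* query the parameters currently in use */

if (snd_pcm_hw_params_can_pause(params)) {
    snd_pcm_pause(pcm_handle, 1);     /* hardware pause; resume later with snd_pcm_pause(pcm_handle, 0) */
} else {
    snd_pcm_drop(pcm_handle);         /* discard pending frames (samples are lost)... */
    snd_pcm_prepare(pcm_handle);      /* ...and re-arm the handle for the next write */
}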

Convert a stereo wav to mono in C

I have developed a Synchronous Audio Interface (SAI) driver for a proprietary Real-Time Operating System (RTOS) in C. My driver is configured to output left and right channel data (I2S) to the amplifier. However, since the attached amplifier is mono, it only plays the left or the right channel on the speaker. Now I have a stereo 16-bit PCM audio file and I want to mix the left and right channel data in my application and send the result to either the left or the right channel of the SAI driver. That way the combined stereo audio will play as mono on the speaker attached to the mono amplifier.
Can anyone suggest the best way to do this?
As said in a comment, the usual way to mix two stereo channels into one mono channel is to divide each channel's sample by 2 and add them.
Example in C:
int left_channel_sample, right_channel_sample;
int mono_channel = (left_channel_sample / 2) + (right_channel_sample / 2);
You mentioned a driver you coded; modify it or add the feature there. It is hard to help more without further details in your question.
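To make this concrete, here is a minimal sketch that downmixes an interleaved 16-bit stereo PCM buffer to mono (the function and buffer names are made up for the example):

#include <stdint.h>
#include <stddef.h>

/* in: interleaved L/R 16-bit samples, frames = number of L/R pairs */
void stereo_to_mono(const int16_t *in, int16_t *out, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        int16_t left  = in[2 * i];
        int16_t right = in[2 * i + 1];
        /* halving each channel before adding keeps the sum within int16_t range */
        out[i] = (int16_t)((left / 2) + (right / 2));
    }
}

The mixed buffer can then be sent on whichever channel the SAI driver forwards to the mono amplifier.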

Linux application to decode MMC ext_csd

Currently, from a user-space application with su access, I am parsing ext_csd from the debugfs filesystem, converting the string into raw byte data and passing it into a decode function to fill in a structure that I wrote myself.
I am wondering if there is a more efficient or less error-prone way to do this. For example, there are
mmc_read_ext_csd() and mmc_decode_ext_csd() in the kernel at drivers/mmc/core/mmc.c.
Is there any way to use these driver functions from a user application, or an ioctl command? I can't seem to find any API documentation for ioctl commands for mmcblk0, only the kernel source code in block/ioctl.c.
Is there also a way to see from a user application whether the eMMC is a high-capacity card?
mmc-utils can issue ext_csd read through ioctl and output parsed text.
http://git.kernel.org/cgit/linux/kernel/git/cjb/mmc-utils.git/
There are other tools, like the one below, that can parse hex strings obtained from debugfs. It is hard to say whether they are any more reliable than your own code.
https://github.com/haoxingz/emmc5_register_reader
I am not sure about high capacity detection.
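For the record, mmc-utils does the ext_csd read with the MMC_IOC_CMD ioctl on the block device. A rough user-space sketch of the same idea follows; the response-flag constants mirror the kernel's internal definitions (they are not exported in the uapi header), so treat the values as assumptions to verify against your kernel:

#include <linux/mmc/ioctl.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

/* These mirror include/linux/mmc/core.h and include/linux/mmc/mmc.h */
#define MMC_SEND_EXT_CSD   8
#define MMC_RSP_PRESENT    (1 << 0)
#define MMC_RSP_CRC        (1 << 2)
#define MMC_RSP_OPCODE     (1 << 4)
#define MMC_CMD_ADTC       (1 << 5)
#define MMC_RSP_R1         (MMC_RSP_PRESENT | MMC_RSP_CRC | MMC_RSP_OPCODE)

int read_ext_csd(const char *dev, unsigned char ext_csd[512])
{
    struct mmc_ioc_cmd cmd;
    int fd = open(dev, O_RDONLY);            /* e.g. dev = "/dev/mmcblk0" */
    if (fd < 0)
        return -1;

    memset(&cmd, 0, sizeof(cmd));
    memset(ext_csd, 0, 512);
    cmd.opcode     = MMC_SEND_EXT_CSD;
    cmd.write_flag = 0;                      /* read from the card */
    cmd.flags      = MMC_RSP_R1 | MMC_CMD_ADTC;
    cmd.blksz      = 512;
    cmd.blocks     = 1;
    mmc_ioc_cmd_set_data(cmd, ext_csd);      /* point the kernel at our 512-byte buffer */

    int ret = ioctl(fd, MMC_IOC_CMD, &cmd);
    close(fd);
    return ret;
}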

C Linux Device Programming - Reading Straight from /dev

I have been playing with creating sounds using mathematical wave functions in C. The next step in my project is getting user input from a MIDI keyboard controller in order to modulate the waves to different pitches.
My first notion was that this would be relatively simple and that Linux, being Linux, would allow me to read the raw data stream from my device like I would any other file.
However, research overwhelmingly advises that I write a device driver for the MIDI controller. The general idea is that even though the device file may be present, the kernel will not know what system calls to execute when my application calls functions like read() and write().
Despite these warnings, I did an experiment. I plugged in the MIDI controller and cat'ed the "/dev/midi1" device file. A steady stream of null characters appeared, and when I pressed a key on the MIDI controller several bytes appeared corresponding to the expected Message Chunks that a MIDI device should output. MIDI Protocol Info
So my questions are:
Why does the cat'ed stream behave this way?
Does this mean that there is a plug and play device driver already installed on my system?
Should I still go ahead and write a device driver, or can I get away with reading it like a file?
Thank you in advance for sharing your wisdom in these areas.
Why does the cat'ed stream behave this way?
Because that is presumably the raw MIDI data that is being received by the controller. The null bytes are probably some sort of sync tick.
Does this mean that there is a plug and play device driver already installed on my system?
Yes.
However, research overwhelmingly advises that I write a device driver for the MIDI controller. The general idea is that even though the device file may be present, the kernel will not know what system calls to execute when my application calls functions like read() and write().
<...>
Should I still go ahead and write a device driver, or can I get away with reading it like a file?
I'm not sure what you're reading or how you're coming to this conclusion, but it's wrong. :) You've already got a perfectly good driver installed for your MIDI controller -- go ahead and use it!
Are you sure you are reading NUL bytes? And not 0xf8 bytes? Because 0xf8 is the MIDI time tick status and is usually sent periodically to keep the instruments in sync. Try reading the device using od:
od -vtx1 /dev/midi1
If you're seeing a bunch of 0xf8, it's okay. If you don't need the tempo information sent by your MIDI controller, either disable it on your controller or ignore those 0xf8 status bytes.
Also, for MIDI, keep in mind that the current MIDI status is usually sent once (to save on bytes) and then the payload bytes follow for as long as needed. For example, the pitch bend status byte is 0xEK (where K is the channel number, i.e. 0 to 15) and its payload is 7 bits of the least significant byte followed by 7 bits of the most significant byte. Thus, maybe you have a weird controller and you are seeing only repeated payloads of some status, but any controller that is not stupid won't repeat what it doesn't need to.
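For instance, reassembling the 14-bit pitch-bend value from the two payload bytes that follow an 0xEK status looks like this (the variable names are illustrative):

unsigned char lsb = payload[0] & 0x7F;   /* 7 least significant bits */
unsigned char msb = payload[1] & 0x7F;   /* 7 most significant bits */
int bend = (msb << 7) | lsb;             /* 0..16383, centre position is 8192 */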
Now for the driver: have a look at dmesg when you plug in your MIDI controller. If your OSS /dev/midi1 appears when you plug in your device (udev is doing this job) and dmesg doesn't show any errors, you don't need anything else. The MIDI protocol is yet another serial protocol with a fixed baud rate that transmits and receives bytes. There's nothing complicated about that... just read from or write to the device and you're done.
The only issue is that queuing at some place could result in bad audio latency (if you're using the MIDI commands to control live audio, which I believe is what you're doing). It seems like those devices are mostly made for system exclusive messages, that is, for example, downloading some patch/preset for a synthesizer online and uploading it to the device using MIDI. Latency doesn't really matter in this situation.
Also have a look at the ALSA way of playing with MIDI on Linux.
If you are not developing new MIDI controller hardware, you shouldn't worry about writing a driver for it. Installing the hardware is the user's concern, and supplying the drivers is the vendor's obligation.
Under Linux, you just read the file. The real work is interpreting the data and doing something useful with it.
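A minimal sketch of that approach, assuming the OSS-style node /dev/midi1 and a controller that sends note-on messages (status 0x9n):

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    unsigned char buf[3];
    int fd = open("/dev/midi1", O_RDONLY);
    if (fd < 0)
        return 1;

    for (;;) {
        /* Read one byte at a time; a real parser must also handle running status */
        if (read(fd, buf, 1) != 1)
            break;
        if ((buf[0] & 0xF0) == 0x90) {        /* note-on status */
            if (read(fd, buf + 1, 2) != 2)    /* note number + velocity */
                break;
            if (buf[2] > 0)
                printf("note %u on, velocity %u\n", buf[1], buf[2]);
        }
        /* 0xF8 clock ticks and other bytes are simply ignored here */
    }

    close(fd);
    return 0;
}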

How to multiplex Vorbis and Theora streams using libogg

I am currently writing a simple Theora video encoder, which uses libogg, libvorbis and libtheora. Currently, I can submit frames to the Theora encoder, and PCM samples to the Vorbis encoder, pass the resulting packets to Ogg streams (one for Theora and one for Vorbis) and get pages out.
When the program starts, it flushes the headers first from the Theora encoder, then from the Vorbis encoder to the output file (obviously, both streams have unique serial numbers). Then, I write interleaved pages to the file from both of the streams.
When writing just the video, or just the audio, I am able to play back the output in mplayer just fine, however when I attempt to write both, I get the following:
Ogg demuxer error : we met an unknown stream
I'm guessing I'm doing the multiplexing wrong. I have read through the documentation for multiplexing streams on Xiph.org, and I can't see where I differ. I cannot seem to find any example code for doing this, short of going through the source of an open-source encoder (which I'm having some trouble understanding). Would anyone be able to explain how to multiplex streams correctly using libogg? I'm trying to do this in C on Ubuntu 10.04, using the libraries from the Ubuntu repository.
Many thanks in advance!
Tom
OK, for anyone reading this, I have solved it to some extent.
You should not flush all of the header packets from each stream - just the first (ident) header packet, which for Vorbis and Theora gets its own page by default. Put the other header packets into their respective streams, but do not flush until the first pages from all streams have been written to the file.
Once you have done this, try to keep the streams as closely sync'd as possible (mplayer gave some errors for me when they got too far out). At 24fps video and 44.1 KHz audio, 1 frame should span 1837.5 audio samples (with PCM audio, this is 7,350 bytes).
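In libogg terms, the header ordering described above looks roughly like the sketch below; the packet variables and the write_page() helper are assumptions standing in for whatever the encoder already has:

/* Both streams already initialised with distinct serial numbers:
 *   ogg_stream_init(&theora_os, theora_serial);
 *   ogg_stream_init(&vorbis_os, vorbis_serial);
 */
ogg_page page;

/* 1. Submit only the first (ident) header of each stream and flush it,
 *    so each ident header sits on its own page at the start of the file. */
ogg_stream_packetin(&theora_os, &theora_ident_packet);
while (ogg_stream_flush(&theora_os, &page) > 0)
    write_page(outfile, &page);

ogg_stream_packetin(&vorbis_os, &vorbis_ident_packet);
while (ogg_stream_flush(&vorbis_os, &page) > 0)
    write_page(outfile, &page);

/* 2. Submit the remaining header packets (comment + setup) of both streams,
 *    then flush them out before any data packets are written. */
ogg_stream_packetin(&theora_os, &theora_comment_packet);
ogg_stream_packetin(&theora_os, &theora_setup_packet);
ogg_stream_packetin(&vorbis_os, &vorbis_comment_packet);
ogg_stream_packetin(&vorbis_os, &vorbis_setup_packet);
while (ogg_stream_flush(&theora_os, &page) > 0)
    write_page(outfile, &page);
while (ogg_stream_flush(&vorbis_os, &page) > 0)
    write_page(outfile, &page);

/* 3. From here on, feed data packets to the right stream and pull
 *    interleaved pages with ogg_stream_pageout(), keeping the two
 *    streams roughly in sync by time/granulepos. */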
If anyone else has any tips / info, it would be good to hear - I've never done anything with audio / video before!
Thanks!
Tom
