Playing a wav file using pulseaudio APIs? - c

For example, this is how to use pulseaudio: http://freedesktop.org/software/pulseaudio/doxygen/pacat-simple_8c-example.html
but I'm not clear on how to simply play a wav file, or an ogg file for that matter.

The example code will play raw PCM data from a file. The trick is getting the data from a wav file into this format. Microsoft wav files look like this:
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
Wav files just store raw PCM data behind a header. You just have to strip the header off the wav file and dump the rest into a file (the extension is irrelevant, but I like to use .raw). That is, you can write a program that either copies everything past byte 44 into a new file, or reads everything after that point directly into a buffer. Pass either form to the pulseaudio example and you should be good to go.
Things to look out for: the endianness of the file and of your system, the bit depth, and the number of channels. These are in the wav header, and you may have to read them and tell pa_simple before you play the data; I'm not sure whether pa_simple detects this information for you. I like to work with the asynchronous implementation, where I just specify the format directly.
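Putting that together, here is a minimal sketch using the pa_simple API, assuming a canonical 44-byte header and 8- or 16-bit little-endian PCM (real files may carry extra chunks, so a robust player should walk the chunk list instead):

/* Minimal sketch: play a canonical 44-byte-header wav via pa_simple.
 * Build: gcc play_wav.c -o play_wav $(pkg-config --cflags --libs libpulse-simple) */
#include <stdio.h>
#include <stdint.h>
#include <pulse/simple.h>
#include <pulse/error.h>

int main(int argc, char *argv[]) {
    if (argc < 2) { fprintf(stderr, "usage: %s file.wav\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    uint8_t hdr[44];
    if (fread(hdr, 1, 44, f) != 44) { fprintf(stderr, "short header\n"); return 1; }

    /* Pull the format info out of the header (offsets per the canonical layout). */
    uint16_t channels = hdr[22] | (hdr[23] << 8);
    uint32_t rate = hdr[24] | (hdr[25] << 8) | (hdr[26] << 16) | ((uint32_t)hdr[27] << 24);
    uint16_t bits = hdr[34] | (hdr[35] << 8);

    pa_sample_spec ss = {
        .format   = (bits == 8) ? PA_SAMPLE_U8 : PA_SAMPLE_S16LE, /* assumes U8 or S16LE PCM */
        .rate     = rate,
        .channels = (uint8_t)channels,
    };

    int error;
    pa_simple *s = pa_simple_new(NULL, "wav-player", PA_STREAM_PLAYBACK,
                                 NULL, "playback", &ss, NULL, NULL, &error);
    if (!s) { fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(error)); return 1; }

    /* Everything after byte 44 is the PCM data: stream it out in chunks. */
    uint8_t buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
        if (pa_simple_write(s, buf, n, &error) < 0) {
            fprintf(stderr, "pa_simple_write: %s\n", pa_strerror(error));
            break;
        }
    }
    pa_simple_drain(s, &error); /* wait until playback finishes */
    pa_simple_free(s);
    fclose(f);
    return 0;
}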
-E

linux-commands-examples - pacat
pacat --list-file-formats
aiff AIFF (Apple/SGI)
au AU (Sun/NeXT)
avr AVR (Audio Visual Research)
caf CAF (Apple Core Audio File)
flac FLAC (Free Lossless Audio Codec)
htk HTK (HMM Tool Kit)
iff IFF (Amiga IFF/SVX8/SV16)
mat MAT4 (GNU Octave 2.0 / Matlab 4.2)
mat MAT5 (GNU Octave 2.1 / Matlab 5.0)
mpc MPC (Akai MPC 2k)
oga OGG (OGG Container format)
paf PAF (Ensoniq PARIS)
pvf PVF (Portable Voice Format)
raw RAW (header-less)
rf64 RF64 (RIFF 64)
sd2 SD2 (Sound Designer II)
sds SDS (Midi Sample Dump Standard)
sf SF (Berkeley/IRCAM/CARL)
voc VOC (Creative Labs)
w64 W64 (SoundFoundry WAVE 64)
wav WAV (Microsoft)
wav WAV (NIST Sphere)
wav WAVEX (Microsoft)
wve WVE (Psion Series 3)
xi XI (FastTracker 2)
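Since that list comes from libsndfile, you often don't need to strip headers yourself: pacat --file-format=wav foo.wav, or simply paplay foo.wav (which autodetects the format), will decode the file before playback.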

Related

I am trying to decode a compressed RTP packet to EVS and make it into a wav file

I am trying to decode a compressed RTP packet to EVS and make it into a wav file.
I am using C on a 64-bit Red Hat 6.8 system.
I have an RTP packet dump (EVS).
I used EVS_dec from the 3GPP TS 26.443 V15.1.0 C source code.
RTP packet -> G.192 format file -> wav
I have successfully created a wav file, but I cannot hear anything in it.
I don't understand the 3GPP document well.
I want to know more about how to use EVS_dec.
The media pipeline should be:
RTP unpack (buffer with EVS encoded data) -> EVS decoder (buffer with PCM data) -> wav file writer (PCM data is written to a wav file)
Steps to follow:
1. Write an RTP stack to handle the unpacking (see the sketch below).
2. Use the EVS codec to decode the EVS payload data.
3. Write the PCM data to a wav file.
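A minimal sketch of the first step, assuming plain RFC 3550 packets (the EVS payload has its own framing on top of this, defined in the 3GPP specs):

#include <stdint.h>
#include <stddef.h>

/* Return a pointer to the payload inside an RTP packet, or NULL on error.
 * Skips the fixed 12-byte header plus any CSRC entries; packets with the
 * padding or extension bit set are rejected to keep the sketch short. */
static const uint8_t *rtp_payload(const uint8_t *pkt, size_t len,
                                  size_t *payload_len)
{
    if (len < 12 || (pkt[0] >> 6) != 2) /* too short, or not RTP version 2 */
        return NULL;
    if (pkt[0] & 0x30)                  /* padding (P) or extension (X) bit */
        return NULL;
    size_t hdr = 12 + 4 * (pkt[0] & 0x0F); /* 4 bytes per CSRC entry */
    if (len < hdr)
        return NULL;
    *payload_len = len - hdr;
    return pkt + hdr;
}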

Can ffmpeg decode G711 audio

I am receiving audio data in an RTP stream. The audio can be in either G.711 A-law or u-law, depending on the source. How can I decode the audio byte stream using the ffmpeg APIs? And can ALSA on Linux play the G.711 format directly?
Libav certainly supports G.711. The associated codec IDs are AV_CODEC_ID_PCM_MULAW and AV_CODEC_ID_PCM_ALAW. I suggest you start from the example program they provide and modify audio_decode_example() to use G.711.
avcodec.h: http://libav.org/doxygen/master/avcodec_8h.html
libav example: http://libav.org/doxygen/master/avcodec_8c-example.html
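As a rough sketch of the decoder setup with the send/receive API (the ch_layout field assumes FFmpeg 5.1+; older trees set ctx->channels instead):

#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>

/* Open a G.711 decoder; telephony G.711 is always 8 kHz mono. */
AVCodecContext *open_g711(int alaw)
{
    const AVCodec *codec = avcodec_find_decoder(
        alaw ? AV_CODEC_ID_PCM_ALAW : AV_CODEC_ID_PCM_MULAW);
    if (!codec)
        return NULL;

    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return NULL;

    ctx->sample_rate = 8000;
    av_channel_layout_default(&ctx->ch_layout, 1); /* mono */

    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx; /* feed payload bytes via avcodec_send_packet(),
                   pull PCM frames with avcodec_receive_frame() */
}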

Reading output of a USB webcam in Linux

I was experimenting a little with fread and fwrite in C, so I wrote this little program to get data from a webcam and dump it into a file. The following is the source:
#include <stdio.h>
#include <stdlib.h>

#define SIZE 307200 // number of pixels (640x480 for my webcam)

int main(void) {
    FILE *camera = fopen("/dev/video0", "rb");
    FILE *grab = fopen("grab.raw", "wb");
    if (camera == NULL || grab == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    float data[SIZE];
    size_t n = fread(data, sizeof(data[0]), SIZE, camera);
    fwrite(data, sizeof(data[0]), n, grab); /* write only what was read */

    fclose(camera);
    fclose(grab);
    return 0;
}
The program works when compiled (gcc -o snap camera.c). What took me by surprise was that the output file was not a raw data dump but a JPEG file. Running the file command on the program's output showed JPEG image data: JFIF standard 1.01. The file was viewable in an image viewer, although a little saturated.
How or why does this happen? I did not use any JPEG encoding libraries in the source or the program. Does the camera output JPEG natively? The webcam is a Sony PlayStation 2 EyeToy, manufactured by Logitech. The system is Debian Linux.
The Sony EyeToy has an OV7648 sensor with the quite popular OV519 bridge. The OV519 outputs frames in JPEG format - and if I remember correctly from my own cameras that's the only format that it supports.
Cameras like this require either application support, or a special driver that will decompress the frames before delivery to userspace. Apparently in your case the driver delivers the JPEG frames in their original form, which is why you are getting JPEG data in the output.
BTW, you should really have a look at the Video4Linux2 API for the proper way to access video devices on Linux - a simple open()/read()/close() is generally not enough...
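To make that concrete, here is a minimal V4L2 sketch that just queries the device's capabilities; a real capture loop would go on to negotiate a pixel format with VIDIOC_S_FMT and read frames via read() or mmap'd buffers:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void) {
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_capability cap;
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
        perror("VIDIOC_QUERYCAP");
        close(fd);
        return 1;
    }
    printf("driver: %s, card: %s\n", (char *)cap.driver, (char *)cap.card);

    close(fd);
    return 0;
}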

Remux MPEG TS -> RTP MPEG ES

Please guide me to achieve the following result in my program (written in C):
I have an HTTP MPEG TS stream as the source (h264 & aac codecs); it has 1 video and 1 audio substream.
I need to get MPEG ES frames (of the same codecs) to send via RTP to RTSP clients. It would be best if libavformat gave me frames with the RTP header already attached.
MPEG ES is needed because, as far as I know, media players on BlackBerry phones do not play TS (I tried it).
That said, I'd appreciate it if anyone pointed me to another format that is easier to produce in this situation, can hold h264 & aac, and plays well on BlackBerry and other phones.
I've already succeeded with a different task: opening the stream and remuxing to an FLV container.
I tried opening two output format contexts with "rtp" formats and got frames; I sent them to the client. No success.
I've also tried writing frames to an "m4v" AVFormatContext: I got frames, split them at NAL unit boundaries, added an RTP header before each one, and sent them to the client. The client displays the first frame and hangs, or plays a second of video+audio (faster than it should) every 10 seconds or more.
In the VLC player log I have this: http://pastebin.com/NQ3htvFi
I've scaled the timestamps to start at 0 for simplicity.
Comparing with what VLC (or Wowza, sorry, I don't remember which) produced, it incremented the audio timestamps by 1024, not 1920, so I applied an additional linear scaling to match the other streamers.
Packet dump of playback of bigbuckbunny_450.mp4 is here:
ftp://rtb.org.ua/tmp/output_my_bbb_450.log
BTW, in both cases I essentially copied the SDP from Wowza or VLC.
What is the right way to get what I need?
I'm also interested in whether there is some library similar to libavformat, even in an embryonic state.
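For reference, the H.264-over-RTP payload format is defined by RFC 6184: a NAL unit that fits in one packet goes directly after the fixed RTP header, while larger ones must be split into FU-A fragments. A sketch of the single-NAL-unit case (pt, seq, ts and ssrc are caller-chosen; the timestamp uses the 90 kHz clock):

#include <stdint.h>
#include <string.h>

/* Prepend a fixed RFC 3550 RTP header to one H.264 NAL unit
 * (RFC 6184 single-NAL-unit mode; NALs larger than the MTU need
 * FU-A fragmentation instead). out must hold 12 + nal_len bytes. */
size_t rtp_wrap_nal(uint8_t *out, const uint8_t *nal, size_t nal_len,
                    uint8_t pt, uint16_t seq, uint32_t ts, uint32_t ssrc,
                    int marker)
{
    out[0] = 0x80; /* V=2, P=0, X=0, CC=0 */
    out[1] = (uint8_t)((marker ? 0x80 : 0) | (pt & 0x7F));
    out[2] = (uint8_t)(seq >> 8);   out[3] = (uint8_t)seq;
    out[4] = (uint8_t)(ts >> 24);   out[5] = (uint8_t)(ts >> 16);
    out[6] = (uint8_t)(ts >> 8);    out[7] = (uint8_t)ts;
    out[8] = (uint8_t)(ssrc >> 24); out[9] = (uint8_t)(ssrc >> 16);
    out[10] = (uint8_t)(ssrc >> 8); out[11] = (uint8_t)ssrc;
    memcpy(out + 12, nal, nal_len); /* payload: the NAL unit itself */
    return 12 + nal_len;
}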

formats.h file containing WAV file format info

I am trying to write a program that records voice and stores it in a digital audio format on Linux using ALSA (currently on Ubuntu).
While looking for some help on the net, I found this code from here:
#include "formats.h"
...
...
WaveChunkHeader wch, ch = {WAV_FMT,16};
WaveHeader h;
WaveFmtBody f;
wch.type = WAV_DATA;
...
...
However, I don't have the "formats.h" header file on my system. Does anyone know where (which dev package) I can get this header file (containing the audio file format info)?
Thanks,
Vikram
It should be in the alsa-utils package, in the aplay subdirectory:
http://alsa-utils.sourcearchive.com/documentation/1.0.17/formats_8h-source.html
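If you just want the structures without pulling in the alsa-utils source, roughly equivalent definitions look like this (a sketch of the canonical RIFF/WAVE layout modeled on that header, assuming a little-endian host; not a verbatim copy):

#include <stdint.h>

#define COMPOSE_ID(a,b,c,d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
                             ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))
#define WAV_RIFF COMPOSE_ID('R','I','F','F')
#define WAV_WAVE COMPOSE_ID('W','A','V','E')
#define WAV_FMT  COMPOSE_ID('f','m','t',' ')
#define WAV_DATA COMPOSE_ID('d','a','t','a')

typedef struct {      /* file header: "RIFF" <length> "WAVE" */
    uint32_t magic;
    uint32_t length;
    uint32_t type;
} WaveHeader;

typedef struct {      /* generic chunk header: id + byte length */
    uint32_t type;
    uint32_t length;
} WaveChunkHeader;

typedef struct {      /* body of the "fmt " chunk (plain PCM) */
    uint16_t format;      /* 1 = PCM */
    uint16_t channels;
    uint32_t sample_fq;   /* samples per second */
    uint32_t byte_p_sec;  /* bytes per second */
    uint16_t byte_p_spl;  /* block align: bytes per sample frame */
    uint16_t bit_p_spl;   /* bits per sample */
} WaveFmtBody;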
