I am receiving audio data in an RTP stream. The audio can be either G.711 A-law or u-law, depending on the source. How do I decode the audio byte stream using the FFmpeg APIs? Can ALSA on Linux play the G.711 format directly?
Libav certainly supports G.711. The associated codec IDs are AV_CODEC_ID_PCM_MULAW and AV_CODEC_ID_PCM_ALAW. I suggest you start from the example program they provide and modify audio_decode_example() to use G.711; a sketch follows the links below.
avcodec.h: http://libav.org/doxygen/master/avcodec_8h.html
libav example: http://libav.org/doxygen/master/avcodec_8c-example.html
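For illustration, here is a minimal sketch of the decoding step, assuming a reasonably recent FFmpeg (the avcodec_send_packet / avcodec_receive_frame API; the older libav of the linked example used the avcodec_decode_audio*() calls instead). Swap AV_CODEC_ID_PCM_ALAW in for A-law sources. As for the ALSA part of the question: alsa-lib does define SND_PCM_FORMAT_MU_LAW and SND_PCM_FORMAT_A_LAW, so direct playback is possible where the device or plugin layer supports them.

/* Sketch: decode a buffer of G.711 u-law bytes (e.g. one RTP payload)
 * into signed 16-bit PCM. Error handling is trimmed for brevity. */
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>

int decode_g711(const uint8_t *data, int size)
{
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_PCM_MULAW);
    AVCodecContext *ctx  = avcodec_alloc_context3(codec);
    ctx->sample_rate = 8000;                       /* G.711 is always 8 kHz */
    av_channel_layout_default(&ctx->ch_layout, 1); /* mono; on FFmpeg < 5.1 set ctx->channels = 1 */
    if (avcodec_open2(ctx, codec, NULL) < 0)
        return -1;

    AVPacket *pkt  = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    pkt->data = (uint8_t *)data;                   /* one packet per payload */
    pkt->size = size;

    if (avcodec_send_packet(ctx, pkt) == 0) {
        while (avcodec_receive_frame(ctx, frame) == 0) {
            /* frame->data[0] now holds frame->nb_samples signed 16-bit
             * samples; hand them to ALSA, PulseAudio, a file, etc. */
        }
    }
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    return 0;
}

Each G.711 byte expands to one 16-bit sample, so the decoded output is exactly twice the input size.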
I am trying to decode compressed RTP packets containing EVS and turn them into a WAV file.
I am using C on Red Hat 6.8 (64-bit).
I have an RTP packet dump (EVS).
I used EVS_dec from the 3GPP TS 26.443 V15.1.0 C source code.
RTP packet -> G.192 format file -> WAV
I have successfully created a WAV file, but I cannot hear anything when I play it.
I do not understand the 3GPP document well.
I want to know more about how to use EVS_dec.
The media pipeline should be:
RTP unpacker (buffer with EVS-encoded data) -> EVS decoder (buffer with PCM data) -> WAV file writer (the PCM data is written to a WAV file)
Steps to follow:
1. Write an RTP stack (at minimum a depacketizer) to handle the unpacking.
2. Use the EVS codec to decode the EVS payload data.
3. Write the PCM data to a WAV file (see the sketch after this list).
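For the last step, a WAV file is just a 44-byte RIFF header followed by raw PCM. A minimal sketch, assuming 16-bit mono output; the sample rate must match what the EVS decoder actually produced (8/16/32/48 kHz depending on mode), otherwise the file plays as noise or at the wrong speed, which is a common reason for a WAV you "cannot hear":

/* Sketch: write 16-bit mono PCM samples to a canonical WAV file. */
#include <stdint.h>
#include <stdio.h>

static void write_le32(FILE *f, uint32_t v)
{ fputc(v, f); fputc(v >> 8, f); fputc(v >> 16, f); fputc(v >> 24, f); }
static void write_le16(FILE *f, uint16_t v)
{ fputc(v, f); fputc(v >> 8, f); }

int write_wav(const char *path, const int16_t *pcm,
              uint32_t nsamples, uint32_t sample_rate)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    uint32_t data_bytes = nsamples * 2;   /* 16-bit mono */
    fwrite("RIFF", 1, 4, f);
    write_le32(f, 36 + data_bytes);       /* RIFF chunk size */
    fwrite("WAVEfmt ", 1, 8, f);
    write_le32(f, 16);                    /* fmt chunk size */
    write_le16(f, 1);                     /* format 1 = PCM */
    write_le16(f, 1);                     /* channels */
    write_le32(f, sample_rate);
    write_le32(f, sample_rate * 2);       /* byte rate */
    write_le16(f, 2);                     /* block align */
    write_le16(f, 16);                    /* bits per sample */
    fwrite("data", 1, 4, f);
    write_le32(f, data_bytes);
    for (uint32_t i = 0; i < nsamples; i++)
        write_le16(f, (uint16_t)pcm[i]);  /* little-endian samples */
    fclose(f);
    return 0;
}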
For example, this is how to use PulseAudio: http://freedesktop.org/software/pulseaudio/doxygen/pacat-simple_8c-example.html
But I'm not clear on how I can simply play a WAV file, or an Ogg file for that matter.
The example code plays raw PCM data from a file. The trick is getting the data out of a WAV file and into that format. Microsoft WAV files look like this:
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
WAV files just store raw PCM data after a header. You only have to strip the header off and dump the rest into a file (the extension is irrelevant, but I like to use .raw). That is, you can write a program that either copies everything past byte 44 into a new file, or reads everything after that point directly into a buffer. Pass either to the PulseAudio example and you should be good to go.
Things to look out for: the endianness of the file and of your system, the bit depth, and the number of channels. These are all in the WAV header, and you may have to read them and describe them to pa_simple before playing the data; I'm not sure whether pa_simple can detect this information for you. I prefer the asynchronous implementation, where I just specify the format directly.
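To make that concrete, here is a minimal sketch using the pa_simple API: it reads the channel count and sample rate out of the 44-byte header, skips the header, and streams the rest. It assumes a canonical header with 16-bit little-endian PCM; a robust reader should walk the RIFF chunks instead.

/* Sketch: play a canonical 44-byte-header WAV file via pa_simple. */
#include <pulse/simple.h>
#include <pulse/error.h>
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    uint8_t hdr[44];
    if (!f || fread(hdr, 1, 44, f) != 44) return 1;

    pa_sample_spec ss;
    ss.format   = PA_SAMPLE_S16LE;                      /* assumed 16-bit LE PCM */
    ss.channels = hdr[22] | (hdr[23] << 8);             /* channels, header bytes 22-23 */
    ss.rate     = hdr[24] | (hdr[25] << 8) |
                  (hdr[26] << 16) | ((uint32_t)hdr[27] << 24); /* rate, bytes 24-27 */

    int err;
    pa_simple *s = pa_simple_new(NULL, "wavplay", PA_STREAM_PLAYBACK,
                                 NULL, "playback", &ss, NULL, NULL, &err);
    if (!s) { fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(err)); return 1; }

    uint8_t buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)      /* everything past byte 44 */
        if (pa_simple_write(s, buf, n, &err) < 0) break;

    pa_simple_drain(s, &err);
    pa_simple_free(s);
    fclose(f);
    return 0;
}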
-E
linux-commands-examples - pacat
pacat --list-file-formats
aiff AIFF (Apple/SGI)
au AU (Sun/NeXT)
avr AVR (Audio Visual Research)
caf CAF (Apple Core Audio File)
flac FLAC (Free Lossless Audio Codec)
htk HTK (HMM Tool Kit)
iff IFF (Amiga IFF/SVX8/SV16)
mat MAT4 (GNU Octave 2.0 / Matlab 4.2)
mat MAT5 (GNU Octave 2.1 / Matlab 5.0)
mpc MPC (Akai MPC 2k)
oga OGG (OGG Container format)
paf PAF (Ensoniq PARIS)
pvf PVF (Portable Voice Format)
raw RAW (header-less)
rf64 RF64 (RIFF 64)
sd2 SD2 (Sound Designer II)
sds SDS (Midi Sample Dump Standard)
sf SF (Berkeley/IRCAM/CARL)
voc VOC (Creative Labs)
w64 W64 (SoundFoundry WAVE 64)
wav WAV (Microsoft)
wav WAV (NIST Sphere)
wav WAVEX (Microsoft)
wve WVE (Psion Series 3)
xi XI (FastTracker 2)
I am experimenting with video and would like to know how I can extract I-frames from H.264 contained in an MPEG-TS container.
What I want to do is generate preview images out of a video stream.
As an I-frame is supposed to be a complete picture from which P- and B-frames derive, is there a way to extract the picture data without decoding it with a codec?
I have already done some work with the MPEG-TS container format, but I am not much of a codec specialist.
I am mainly looking for pointers.
Thanks a lot.
I am no expert in this domain, but I believe the answer to your question is no.
If you want to save the I-frame as a JPEG image, you still need to "transcode" the video frame: first decode the I-frame with an H.264 decoder, then encode the result with a JPEG encoder. This is because a JPEG encoder does not understand an H.264 frame; it only accepts uncompressed video frames as input.
As an aside, since the input to the JPEG encoder is an uncompressed frame, you can generate a JPEG image from any type of frame (I/P/B), as it will already have been decoded (using the reference I-frame, if needed) before being fed to the encoder.
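To make the transcoding step concrete, here is a hedged sketch of the encode half using libavcodec's MJPEG encoder (modern send/receive API assumed). It takes a frame that already came out of the H.264 decoder and assumes it is in (or has been converted with libswscale to) 4:2:0 YUV.

/* Sketch: encode one decoded frame as a JPEG file. */
#include <libavcodec/avcodec.h>
#include <stdio.h>

int save_frame_as_jpeg(const AVFrame *frame, const char *path)
{
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width     = frame->width;
    ctx->height    = frame->height;
    ctx->pix_fmt   = AV_PIX_FMT_YUVJ420P;  /* full-range 4:2:0; frame->format should match */
    ctx->time_base = (AVRational){1, 25};  /* required by the encoder, value irrelevant here */
    if (avcodec_open2(ctx, codec, NULL) < 0)
        return -1;

    AVPacket *pkt = av_packet_alloc();
    int ret = avcodec_send_frame(ctx, frame);
    if (ret == 0)
        ret = avcodec_receive_packet(ctx, pkt);
    if (ret == 0) {
        FILE *f = fopen(path, "wb");
        fwrite(pkt->data, 1, pkt->size, f); /* a .jpg file is just this bitstream */
        fclose(f);
    }
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    return ret;
}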
As others have noted, decoding H.264 is complicated. You could write your own decoder, but it would be a major effort. Why not use an existing one?
Intel's IPP library has the basic building blocks for a decoder, plus a sample decoder:
Code Samples for the IntelĀ® Integrated Performance Primitives
There's libavcodec:
Using libavformat and libavcodec
Revised avcodec_sample.0.4.9.cPP
I am no expert in this domain either, but I have played with decoding. Use this GStreamer pipeline to extract previews from video.mp4:
gst-launch -v filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! videorate ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
If you want to write some code, replace videorate with appsrc/appsink elements and write a control program around the two pipelines (see the example):
filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! appsink
appsrc ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
Buffers without the GST_BUFFER_FLAG_DELTA_UNIT flag set are I-frames. You can safely skip many frames and start decoding the stream at any I-frame.
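For example (a sketch against the GStreamer 0.10 API, to match ffdec_h264 above; GStreamer 1.x would use gst_app_sink_pull_sample() instead):

/* Sketch: pull buffers from an appsink and keep only I-frames,
 * i.e. buffers without the DELTA_UNIT flag. */
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static void pull_iframes(GstElement *sink)
{
    GstBuffer *buf;
    while ((buf = gst_app_sink_pull_buffer(GST_APP_SINK(sink))) != NULL) {
        if (!GST_BUFFER_FLAG_IS_SET(buf, GST_BUFFER_FLAG_DELTA_UNIT)) {
            /* I-frame: push it into the appsrc of the jpegenc pipeline */
        }
        gst_buffer_unref(buf);
    }
}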
I'm trying to grab an RTSP MJPEG stream from an IP camera (in real time) as described in http://www.inb.uni-luebeck.de/~boehme/using_libavcodec.html, but ported to a newer version.
It works well with an MPEG file (loaded whole as one AVPacket), but with the stream, avcodec_decode_video2 returns -1 (error). In that case each AVPacket holds only part of a frame.
How can I fix this?
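Not something I can verify against this exact setup, but the usual fix is one of two things: either let libavformat handle the RTSP demuxing (avformat_open_input() on the rtsp:// URL already hands back complete AVPackets), or reassemble the partial packets with an AVCodecParser before decoding. A sketch of the latter, using the same deprecated avcodec_decode_video2 API the question mentions:

/* Sketch: accumulate partial packets into whole MJPEG frames.
 * `parser` comes from av_parser_init(AV_CODEC_ID_MJPEG). */
#include <libavcodec/avcodec.h>

void feed(AVCodecParserContext *parser, AVCodecContext *ctx,
          AVFrame *frame, const uint8_t *data, int size)
{
    while (size > 0) {
        uint8_t *out = NULL;
        int out_size = 0;
        int used = av_parser_parse2(parser, ctx, &out, &out_size,
                                    data, size,
                                    AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        data += used;
        size -= used;
        if (out_size > 0) {              /* a complete frame is ready */
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.data = out;
            pkt.size = out_size;
            int got = 0;
            avcodec_decode_video2(ctx, frame, &got, &pkt);
            /* got == 1 means `frame` now holds a decoded picture */
        }
    }
}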
Please guide me to achieve the following result in my program (written in C):
I have an HTTP MPEG-TS stream source (codecs H.264 & AAC) with one video and one audio substream.
I need to get MPEG ES frames (of the same codecs) to send to RTSP clients via RTP. It would be best if libavformat gave me the frames with the RTP header already attached.
MPEG ES is needed because, as far as I know, media players on BlackBerry phones do not play TS (I tried it).
That said, I would appreciate it if anyone pointed me to another format that is easier to produce in this situation, can hold H.264 & AAC, and plays well on BlackBerry and other phones.
I've already succeeded with a related task: opening the stream and remuxing it to an FLV container.
I tried opening two output format contexts with the "rtp" format and got frames from them too. I sent those to the client; no success.
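For reference, this is roughly what that two-context attempt looks like with libavformat's "rtp" muxer, which accepts exactly one stream per context. The host/port below are placeholders, and avcodec_parameters_copy assumes FFmpeg >= 3.1 (older builds used avcodec_copy_context):

/* Sketch: wrap one input substream in its own RTP output context. */
#include <libavformat/avformat.h>

AVFormatContext *make_rtp_ctx(AVStream *in_stream, const char *url)
{
    AVFormatContext *octx = NULL;
    avformat_alloc_output_context2(&octx, NULL, "rtp", url);

    AVStream *ost = avformat_new_stream(octx, NULL);
    avcodec_parameters_copy(ost->codecpar, in_stream->codecpar);
    ost->time_base = in_stream->time_base;

    if (avio_open(&octx->pb, url, AVIO_FLAG_WRITE) < 0)
        return NULL;
    avformat_write_header(octx, NULL);  /* RTP packetization starts here */
    return octx;
}

/* e.g. make_rtp_ctx(video_stream, "rtp://192.0.2.1:5004") plus a second
 * context for audio on another port; av_sdp_create() can then generate
 * the SDP instead of hand-copying it from Wowza or VLC. */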
I've also tried writing frames to an "m4v" AVFormatContext, got the frames, cut them at NAL boundaries, added an RTP header before each frame, and sent them to the client. The client displays the first frame and hangs, or plays a second of video+audio (faster than it should) every 10 seconds or more.
In the VLC player log I get this: http://pastebin.com/NQ3htvFi
I scaled the timestamps to start at 0 for simplicity.
Comparing my output with VLC's (or Wowza's, sorry, I don't remember which), it incremented the audio timestamp by 1024 rather than 1920, so I applied additional linear scaling to match the other streamers. (The difference is presumably just the clock rate: one 1024-sample AAC frame at 48 kHz spans 1920 ticks of the 90 kHz TS clock, while RTP uses the sample rate itself as the audio clock.)
A packet dump of the playback of bigbuckbunny_450.mp4 is here:
ftp://rtb.org.ua/tmp/output_my_bbb_450.log
By the way, in both cases I essentially copied the SDP from Wowza or VLC.
What is the right way to get what I need?
I'm also interested in whether there is some other library similar to libavformat, perhaps even in an embryonic state.