aplay is not streaming audio to the speaker - alsa

Whenever I issue the command below:
aplay 89400_Text1.wav -D sysdefault:CARD=0
I see the following output:
Playing WAVE '89400_Text1.wav' : Signed 16 bit Little Endian, Rate 22050 Hz, Mono
But nothing is heard on the speaker. Could anyone help me identify what the problem might be?
Before issuing the aplay command, noise was continuously heard from the speaker, and the same is observed after the aplay command.
I was expecting the audio to be played when I used the aplay command.
Thank you.
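For reference, a few standard ALSA checks that can help narrow this down (the card number and the mixer control name below are assumptions and may differ on your system):
aplay -l                                        # list the sound cards and devices ALSA can see
amixer -c 0 sget Master                         # check that the output control is not muted or at 0%
speaker-test -D sysdefault:CARD=0 -c 2 -t wav   # play a known-good test sound on the same device
If speaker-test is also silent on that device, the problem is in the device or mixer configuration rather than in the WAV file.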

Related

How can I simulate OpenFile in FFmpeg?

Most GIF capture software captures the screen and then saves each frame as a separate picture file on disk, then reads the files back into memory and combines them into a GIF, which makes the whole procedure very slow.
My idea is to capture the screen with the DirectX API (which can also capture a DirectX window faster, since it operates on the screen's D3D device directly) to get the bitmaps, save them to memory (such as a buffer), and then pass the memory location to FFmpeg to produce a video. That way we don't need disk storage as an intermediate buffer, so it could be many times faster, since the disk is now the slowest part of the PC.
The DirectX screen-capture part is already done. But I found that FFmpeg uses OpenFile to read the picture files, so can we perhaps simulate OpenFile?
If the answer is yes, how could we do it?
You can open a named pipe and use that as a source.
An example:
ffmpeg -f rawvideo -vcodec rawvideo -s $width$x$height$ -r $framerate$ -pix_fmt $pixelFormat$ -i "\\.\pipe\$pipeName$" Output.gif
You have to fix the format of the frames you are going to feed FFmpeg, hence the -s and the -pix_fmt parameters.
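As a rough sketch of the same idea on Linux (the pipe path, frame size, frame rate, and pixel format here are placeholders, and my_capture_program stands for whatever process writes the raw frames), you create the pipe first, start ffmpeg reading from it, and then let the capture process write frames into it:
mkfifo /tmp/capture_pipe
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -r 30 -pix_fmt bgra -i /tmp/capture_pipe Output.gif &
./my_capture_program > /tmp/capture_pipe
ffmpeg will simply block until the capture process opens the pipe for writing.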

How to dump raw RTSP stream to file?

Is it possible to dump a raw RTSP stream to file and then later decode the file to something playable?
Currently I'm using FFmpeg to receive and decode the stream, saving it to an mp4 file. This works perfectly, but is CPU intensive, and will severely limit the number of RTSP streams I can receive simultaneously on my server.
I would like to save the stream to file without decoding it, and delay the decoding part to when the file needs to be opened.
Is this possible?
I have tried VLC, which is even more CPU intensive than FFmpeg. I've also looked at this question, where the answer says dumping RTSP to file is not useful, and this question, where the comment below the question says "Raw RTSP content is not well suited for save and replay...", which seems to indicate that there is no way.
EDIT
Here is the command I'm using for FFmpeg:
ffmpeg -i rtsp://#192.168.241.1:62159 -r 15 C:/DB_Videos/2013-04-30 17_18_34.703.mp4
If you are re-encoding in your ffmpeg command line, that may be the reason it is CPU intensive. You simply need to copy the streams into a single container. Since I do not have your command line, I cannot suggest a specific improvement; all I can say is that your acodec and vcodec should be set to copy.
EDIT: On seeing your command line and given you have already tried it, this is for the benefit of others who come across the same question. The command:
ffmpeg -i rtsp://#192.168.241.1:62156 -acodec copy -vcodec copy c:/abc.mp4
will not do any transcoding and will dump the stream into an mp4 file for you. Of course this assumes the streamed contents are compatible with mp4 (which in all probability they are).
With this command I had poor image quality
ffmpeg -i rtsp://192.168.XXX.XXX:554/live.sdp -vcodec copy -acodec copy -f mp4 -y MyVideoFFmpeg.mp4
With this, almost without delay, I got good image quality.
ffmpeg -i rtsp://192.168.XXX.XXX:554/live.sdp -b 900k -vcodec copy -r 60 -y MyVdeoFFmpeg.avi
You can use mplayer's mencoder tool:
mencoder -nocache -rtsp-stream-over-tcp rtsp://192.168.XXX.XXX/test.sdp -oac copy -ovc copy -o test.avi
The "copy" codec is just a dumb copy of the stream. Mencoder adds a header and stuff you probably want.
In the mplayer source file "stream/stream_rtsp.c" there is a prebuffer_size setting of 640k and no option to change the size other than recompiling. The result is that writing the stream is always delayed, which can be annoying for things like cameras, but apart from that you get an output file and can play it back most places without a problem.

Processing a .raw image file with the FFmpeg API or C code

I am trying to process a .raw image file captured using v4l2. It is an H.264-encoded image with YUV422 color space from a Logitech C920 webcam. dcraw is not working for me; however, from my previous question, this command works fine, albeit with low performance (it produces a 32 kB JPG image, whereas using OpenCV capture I get a 900 kB image for the same 640x480 resolution):
ffmpeg -f rawvideo -s 640x480 -pix_fmt yuyv422 -i frame-1.raw frame-1.jpg
I need code written in C (using the FFmpeg API, OpenCV, etc.) that does the same as this command. I don't want to use QProcess in Qt (I am working on a Qt-based server to which I am trying to send the raw file from a Raspberry Pi and then process it there). dcraw's output is a corrupted image.
http://ffmpeg.org/doxygen/trunk/examples.html
There should be some API samples in there that show how to get the image out with that specific encoding.
When interacting with a RAW file, I have also used IrfanView. If you know the header size of the file, the width, the height, and the bits per pixel per color, you can quickly see what it looks like that way.
EDIT: I tried using IrfanView with your RAW file and got something close, but not quite: the coloring was always off. I don't think it can handle that particular encoding of a RAW file right now.

Writing output of an application as a sound file

I am using espeak on BSD to output text as sound. My problem is that I want to capture it as an .mp3, but I am having little luck. I tried piping the output to tee, but I guess that only works with stdout, not with audio that is simply played.
Any ideas? My last resort would be recompiling my own version of espeak that lets me save to a file instead of playing it.
You can write it as a WAV file and then convert it with ffmpeg:
espeak "HelloWorld" -w <file>.wav
Or pipe to ffmpeg
espeak "HelloWorld" --stdout | ffmpeg -i pipe:0 output.mp3
From the documentation:
-w <wave file>
Writes the speech output to a file in WAV format, rather than speaking it.
--stdout
Writes the speech output to stdout as it is produced, rather than speaking it. The data starts with a WAV file header which indicates the sample rate and format of the data. The length field is set to zero because the length of the data is unknown when the header is produced.
It looks like both of those options produce WAV output, but you can easily convert that with another program such as ffmpeg.

Explain the relation between the asound.conf file and the HFP and A2DP commands

Here I am looking at testing the A2DP and HFP (Hands-Free) profiles.
For HFP I am using dbus commands to send messages over D-Bus to the BlueZ service, for connecting and disconnecting.
I am using the command below for audio playback in HFP:
aplay -D hw:0,1 -c 2 -f S16_LE file_name &
Can you explain to me the meaning of hw:0,1?
HFP supports only 8000 Hz sampling rate wav files.
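As a side note, a WAV file can be resampled to 8000 Hz mono with ffmpeg, which appears elsewhere on this page (the file names below are placeholders):
ffmpeg -i input.wav -ar 8000 -ac 1 input_8k.wav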
The Advanced Audio Distribution Profile (A2DP) defines how high-quality audio can be streamed from one device to another over a Bluetooth connection.
Here I am using the command below, but before this command I have to update the asound.conf file:
aplay -Dplug:bluetooth file_name > /dev/null > /dev/null &
In both cases I am using the same asound.conf file, which is given below:
pcm.!bluetooth {
    type bluetooth
    device "BD_ADDR"    # Bluetooth address of the hands-free device
}
pcm.!default {
    type plug
    slave.pcm "bluetooth"
}
So I want to know the relation of this asound.conf file to the HFP command and the A2DP command.
Please help me sort out this confusion.
Can you explain to me the meaning of hw:0,1?
The numbers after hw: stand for the sound card number and the device number. A third number can be added (hw:0,0,0) for the sub-device number, but it defaults to the next available sub-device. The numbers start from zero, so, for example, to access the first device on the second sound card, you would use hw:1,0.
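To see which numbers apply on a given machine, aplay -l lists the cards and devices; the output looks roughly like this (the card and device names here are only illustrative):
aplay -l
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
So hw:0,1 in the question's command selects device 1 (the second device) on card 0 (the first card).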
So I want to know the relation of this asound.conf file to the HFP command and the A2DP command.
asound.conf is a configuration file for ALSA. Normally you do not need it at all, but in some cases you can use it to set specific options or behavior for your hardware. HFP and A2DP are just Bluetooth profiles which are used to communicate with your headset. You can use asound.conf to route sound from ALSA to the Bluetooth device you have paired, which means, for example, that you can set the default output/input to this particular BT device so that all applications on your system will use it to play and record sound.
But as I mentioned before, normally all of this happens automatically and you do not need to do anything to make it work.
You can find more about how to use asoundrc/asound.conf here: http://alsa.opensrc.org/.asoundrc
