Wait until playback has completed (C)

I'm using PortAudio as a front-end to a speech synthesis (Text to Speech) engine, and I want to provide a synchronous speak function that waits until playback has completed.
It seems like all of the PortAudio functions that deal with this only wait until the underlying API has finished consuming the audio data, not until playback has finished.
Is this possible with PortAudio? If not, are there any good cross-platform alternatives to PortAudio (has to include a C interface) that might support this?

I am not sure if the streamFinished callback, as documented here:
http://portaudio.com/docs/v19-doxydocs/portaudio_8h.html#aa11e7b06b2cde8621551f5d527965838
is what you want. It may suffer from the same issue, but I think it would work.
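A rough sketch of that approach, assuming you open the stream yourself, your stream callback eventually returns paComplete, and a simple polled flag is acceptable (the wrapper name speak_and_wait is mine):

    #include <portaudio.h>
    #include <stdatomic.h>

    static atomic_int playback_done;

    /* Called by PortAudio once the stream has become inactive. */
    static void on_stream_finished(void *userData)
    {
        (void)userData;
        atomic_store(&playback_done, 1);
    }

    /* Hypothetical synchronous wrapper around an already-opened stream. */
    void speak_and_wait(PaStream *stream)
    {
        atomic_store(&playback_done, 0);
        Pa_SetStreamFinishedCallback(stream, on_stream_finished);
        Pa_StartStream(stream);

        while (!atomic_load(&playback_done))
            Pa_Sleep(10);        /* a condition variable would be nicer */

        Pa_StopStream(stream);
    }

Note that on some host APIs the finished callback can still fire slightly before the device's output latency has fully drained, which is the caveat mentioned above.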
Two other possibilities are:
Use lower latency settings.
Use the hardware timing. This information is available from calls like Pa_GetStreamTime(). For example (sketched in code after the steps below):
get the current time
push x seconds of audio to the hardware
wait for the hardware clock to show the start time plus x seconds
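A sketch of those steps, assuming an open PaStream and that you know how many seconds of audio you wrote (the helper name wait_until_played is mine):

    #include <portaudio.h>

    /* Wait until the stream clock has advanced past the audio we queued,
     * plus the stream's reported output latency. */
    void wait_until_played(PaStream *stream, PaTime start, double seconds_pushed)
    {
        const PaStreamInfo *info = Pa_GetStreamInfo(stream);
        PaTime deadline = start + seconds_pushed +
                          (info ? info->outputLatency : 0.0);

        while (Pa_GetStreamTime(stream) < deadline)
            Pa_Sleep(5);
    }

    /* Usage:
     *   PaTime t0 = Pa_GetStreamTime(stream);
     *   ...write x seconds of audio (e.g. with Pa_WriteStream())...
     *   wait_until_played(stream, t0, x);
     */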
You might also be interested in this document:
http://www.rossbencina.com/static/writings/portaudio_sync_acmc2003.pdf
I'm afraid I don't know of another API with better support for this sort of thing.

Related

Linux, using hardware interrupts on I/O to place data into a user-accessible area via direct memory access

I am currently working with the BeagleBone Black running Ubuntu and I am trying to find some direction. I have created a C program that listens for SIGIO and runs a read() to get the data on that line. From my research on the internet and in some books, it appears that this method is not very efficient: looping while waiting for a signal is bad because of the large amount of context switching (note that this I/O line will be busy, so SIGIO will trigger at least four times a second, and it is asynchronous). It was suggested to use hardware interrupts and have them trigger a response that takes the data from the line and places it into a register, preferably accessible from user space via direct memory access. So the question is: where can I look for more information on how to do this? I find a lot of material on this topic, but most of it just talks about how the OS handles interrupts or about using signals, which with a busy line is pretty taxing.
If you are that concerned about timing and latency, you should probably use a real-time system.
Fortunately, the BeagleBone Black has real-time processing cores on its SoC, called PRUs (Programmable Real-time Units).
If you are new to the concept of PRUs, you will probably want to start here, and once you have understood the need for and purpose of the PRUs, the same website has tutorials to get you started.
With the latest software support, such as remoteproc, rpmsg and the Beaglescope project, PRUs can be used quite easily once you have understood how they work.

How to control the transmission speed under libuv?

As we all know, libuv is an asynchronous network library that will do its best to send out data. However, in some cases we cannot use all of the bandwidth, and the transmission speed needs to be capped at a specified value. How can this be done with the libuv API?
libuv does not provide a built-in mechanism to do this, but it does give you enough information to build it. Assuming you're using TCP, you'd be calling uv_write repeatedly. You can then query the write_queue_size (http://docs.libuv.org/en/v1.x/stream.html#c.uv_stream_t.write_queue_size) and stop writing until it has drained a bit. You can do this check in the callback passed to uv_write.
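As a rough illustration (not part of libuv itself), here is a timer-paced sender that skips a tick whenever write_queue_size is above a threshold; next_chunk() and the numeric limits are placeholders:

    #include <stdlib.h>
    #include <uv.h>

    #define HIGH_WATER (64 * 1024)   /* pause while this much is still queued */
    #define TICK_MS    50            /* how often to try sending again */

    typedef struct {
        uv_write_t req;
        char data[16 * 1024];
    } write_ctx_t;

    static uv_tcp_t stream;          /* assumed to be connected elsewhere */
    static uv_timer_t tick;

    /* Placeholder: fills buf with the next chunk to send, returns its length. */
    extern size_t next_chunk(char *buf, size_t cap);

    static void on_write(uv_write_t *req, int status)
    {
        (void)status;                /* a real program would check this */
        /* req is the first member of write_ctx_t, so this frees the context. */
        free(req);
    }

    static void try_send(uv_timer_t *timer)
    {
        (void)timer;

        /* Backlog too large: skip this tick and let it drain. */
        if (stream.write_queue_size > HIGH_WATER)
            return;

        write_ctx_t *ctx = malloc(sizeof *ctx);
        size_t n = next_chunk(ctx->data, sizeof ctx->data);
        if (n == 0) {
            free(ctx);
            return;
        }

        uv_buf_t buf = uv_buf_init(ctx->data, (unsigned int)n);
        uv_write(&ctx->req, (uv_stream_t *)&stream, &buf, 1, on_write);
    }

    int main(void)
    {
        uv_loop_t *loop = uv_default_loop();
        uv_tcp_init(loop, &stream);
        /* ... uv_tcp_connect() the stream here ... */

        uv_timer_init(loop, &tick);
        uv_timer_start(&tick, try_send, 0, TICK_MS);
        return uv_run(loop, UV_RUN_DEFAULT);
    }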

How to determine the last time the audio device was playing a file?

I would like to use C in order to get the last time the soundboard was playing a file. Is there a way I could do that?
None of the components you are using (tools, libraries, sound servers, drivers, kernel) logs the time when a sound is played.
If you are using one specific tool to play sounds, you could modify it to log the time.
Otherwise, you have to actively monitor the current status of the sound device.
(With ALSA, you could poll /proc/asound/card*/pcm*/sub*/status.)
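A rough illustration of that polling approach; the card/device numbers in the path are just an example and vary per machine:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/proc/asound/card0/pcm0p/sub0/status";
        time_t last_playing = 0;

        for (;;) {
            char buf[4096];
            FILE *f = fopen(path, "r");
            if (f) {
                size_t n = fread(buf, 1, sizeof buf - 1, f);
                buf[n] = '\0';
                fclose(f);
                if (strstr(buf, "state: RUNNING"))
                    last_playing = time(NULL);
            }
            /* last_playing holds the last moment playback was observed. */
            sleep(1);
        }
    }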
I think it's not possible, because ALSA (Advanced Linux Sound Architecture) is just a kernel component that provides device drivers for the sound card. I don't know whether user-space APIs and libraries such as alsa-utils can do that. My advice is that it may be better to check the logs of sound-player applications (VLC, etc.).

OS X/Linux audio playback with an event-based interface?

I'm working on a streaming audio player for Linux/OS X with a bizarre use case that has convinced me nothing that already exists will work. For the first portion, I just want to receive MP3 data and play it. I'm currently using libmad for decoding and libao for playback. My problem is with libao, and I'm not convinced it's my best option.
In particular, the ao_play function is blocking. It doesn't return until the entire buffer passed to it has been played. This doesn't give enough time to decode blocks between calls to ao_play, so the decoding has to be done either entirely ahead of time, or concurrently. Since this is intended to be streaming, I'm rejecting ahead-of-time decoding offhand. (It's conceivable I could send more than an hour's worth of audio data - I don't want to use that much memory.) This leaves concurrency. But while pthreads is standard across Linux and OS X, many of the surrounding libraries are not. I'm not really convinced I want to go to concurrency - so I'm reconsidering my choice of libao.
For my application, the best model I can think of for audio playback would be getting a file descriptor I could select on to get notified when it's ready for writes, then issue non-blocking writes to. (This is due to the rest of the details of the use case, which imply I really want a select loop anyway.)
Is there a library that works on both Linux and OS X that works this way?
Although it's much hated, PulseAudio basically works exactly like you describe (using the Asynchronous API, not the simple one).
Unless what you want to do involves low latencies or advanced sound work, in which case you might want to look at the JACK Audio Connection Kit.
PortAudio is the one for you. It has a simple callback-driven API. It is cross-platform and low-latency. It is the best solution if you don't need any fancy features (3D, audio graphs, ...).
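A minimal sketch of that callback-driven style (error checking omitted; the 440 Hz tone is just a placeholder for real audio):

    #include <math.h>
    #include <portaudio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define SAMPLE_RATE 44100

    /* PortAudio calls this whenever the device needs more audio, leaving
     * the rest of the program free to run its own select() loop. */
    static int audio_cb(const void *input, void *output,
                        unsigned long frames,
                        const PaStreamCallbackTimeInfo *time_info,
                        PaStreamCallbackFlags flags, void *user)
    {
        (void)input; (void)time_info; (void)flags;
        float *out = output;
        double *phase = user;

        for (unsigned long i = 0; i < frames; i++) {
            out[i] = (float)sin(*phase);              /* placeholder audio */
            *phase += 2.0 * M_PI * 440.0 / SAMPLE_RATE;
        }
        return paContinue;
    }

    int main(void)
    {
        double phase = 0.0;
        PaStream *stream;

        Pa_Initialize();
        Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, SAMPLE_RATE,
                             paFramesPerBufferUnspecified, audio_cb, &phase);
        Pa_StartStream(stream);
        Pa_Sleep(2000);                               /* play for two seconds */
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }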

Best way to ensure accurate timing with C

I am a beginning C programmer (though not a beginning programmer) looking to dive into a project to teach myself C. My project is music-based, and because of this I am curious whether there are any 'best practices', per se, when it comes to timing functions.
Just to clarify, my project is pretty much an attempt to build some barebones music notation/composition software (remember, emphasis on barebones). I was originally thinking about using OSX as my platform, but I want to do it in C, not Obj-C (though I know it would probably be easier...CoreAudio looked like a pretty powerful tool for this kind of stuff). So even though I don't have to build OSX apps in Obj-C, I will probably end up building this on a linux system (probably debian...).
Thanks everyone, for your great answers.
There are two accurate methods for timing functions:
Single process execution.
Timer event handler / callback
Single Process Execution
Most modern computers execute more than one program simultaneously. Actually, they execute pieces of many programs, swapping them out based on priorities and other metrics so that it looks like more than one program is executing at the same time. This overhead affects timing in programs: either the program gets delayed in reading the time, or the OS gets delayed in setting its own time variables.
The solution in this case is to stop as many other tasks from running as possible. The ideal environment for best accuracy is to have your program be the sole program running. Some OSes provide APIs for superuser applications to block or kill all other programs.
Timer event handling / callback
Since the OS can't be trusted to execute your program with high precision, most OSes provide timer APIs. Many of these APIs include the ability to call one of your functions when the timer expires; this is known as a callback function. Other OSes may send a message or generate an event when the timer expires; these fall under the class of timer handlers. The callback approach has less overhead than the handlers and is thus more accurate.
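On POSIX systems, for example, timer_create() can invoke a function of yours each time the timer expires. A minimal sketch (Linux; SIGEV_THREAD runs the callback on a helper thread; link with -lrt on older glibc):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static void on_tick(union sigval sv)
    {
        (void)sv;
        /* This is the "callback function" referred to above. */
        printf("tick\n");
    }

    int main(void)
    {
        timer_t timer;
        struct sigevent sev;
        struct itimerspec its;

        memset(&sev, 0, sizeof sev);
        memset(&its, 0, sizeof its);

        sev.sigev_notify = SIGEV_THREAD;          /* call a function, no signal */
        sev.sigev_notify_function = on_tick;
        timer_create(CLOCK_MONOTONIC, &sev, &timer);

        its.it_value.tv_nsec = 100 * 1000000;     /* first expiry after 100 ms */
        its.it_interval.tv_nsec = 100 * 1000000;  /* then every 100 ms */
        timer_settime(timer, 0, &its, NULL);

        sleep(1);                                 /* let it tick a few times */
        timer_delete(timer);
        return 0;
    }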
Music Hardware
Although you may have your program send music to the speakers, many computers now have separate processors that play music. This frees up the main processor and provides more continuous notes, rather than sounds separated by silent gaps caused by the overhead of your program sending the next sounds to the speaker.
A quality music processor has at least these two functions:
Start Playing
End Music Notification
Start Playing
This is the function where you tell the music processor where your data is and the size of the data. The processor will start playing the music.
End Music Notification
You provide the processor with a pointer to a function that it will call when the music data has been processed. Nice processors will call the function early so there will be no gaps in the sounds while reloading.
All of this is platform dependent and may not be standard across platforms.
Hope this helps.
This is quite a vast area, and, depending on exactly what you want to do, potentially very difficult.
You don't give much away by saying your project is "music based".
Is it a musical score typesetting program?
Is it processing audio?
Is it filtering MIDI data?
Is it sequencing MIDI data?
Is it generating audio from MIDI data?
Does it only perform playback?
Does it need to operate in a real time environment?
Your question though hints at real time operation, so in that case...
The general rule when working in a real time environment is don't do anything which may block the real time thread. This includes:
Calling free/malloc/calloc/etc (dynamic memory allocation/deallocation).
File I/O of any kind.
Waiting on spinlocks/semaphores/mutexes shared with other threads.
Calls to GUI code.
Calls to printf.
Bearing these considerations in mind for a real time music application, you're going to have to learn how to do multi-threading in C and how to pass data from the UI/GUI thread to the real time thread WITHOUT breaking ANY of the above restrictions.
For an open source real time audio (and MIDI) (routing) server take a look at http://jackaudio.org
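One common way to pass data from the UI/GUI thread to the real-time thread without breaking those rules is a single-producer/single-consumer lock-free ring buffer (JACK ships one as jack_ringbuffer). A minimal sketch using C11 atomics, assuming a power-of-two capacity:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define RB_SIZE 1024                 /* must be a power of two */

    typedef struct {
        float buf[RB_SIZE];
        atomic_size_t head;              /* written by producer only */
        atomic_size_t tail;              /* written by consumer only */
    } ringbuf_t;

    /* UI thread: returns false if the buffer is full (never blocks). */
    static bool rb_push(ringbuf_t *rb, float v)
    {
        size_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
        size_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);
        if (head - tail == RB_SIZE)
            return false;                /* full */
        rb->buf[head & (RB_SIZE - 1)] = v;
        atomic_store_explicit(&rb->head, head + 1, memory_order_release);
        return true;
    }

    /* Real-time thread: returns false if empty (never blocks). */
    static bool rb_pop(ringbuf_t *rb, float *out)
    {
        size_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
        size_t head = atomic_load_explicit(&rb->head, memory_order_acquire);
        if (head == tail)
            return false;                /* empty */
        *out = rb->buf[tail & (RB_SIZE - 1)];
        atomic_store_explicit(&rb->tail, tail + 1, memory_order_release);
        return true;
    }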
gettimeofday() is the best for wall clock time. getrusage() is the best for CPU time, although it may not be portable. clock() is more portable for CPU timing, but it may have integer overflow.
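A tiny sketch of timing a block of code both ways, wall clock via gettimeofday() and CPU time via clock():

    #include <stdio.h>
    #include <sys/time.h>
    #include <time.h>

    int main(void)
    {
        struct timeval t0, t1;
        clock_t c0 = clock();
        gettimeofday(&t0, NULL);

        /* ... work to be timed ... */

        gettimeofday(&t1, NULL);
        clock_t c1 = clock();

        double wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        double cpu  = (double)(c1 - c0) / CLOCKS_PER_SEC;
        printf("wall: %.6f s, cpu: %.6f s\n", wall, cpu);
        return 0;
    }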
This is pretty system-dependent. What OS are you using?
You can take a look at gettimeofday() for fairly high granularity. It should work fine if you just need to read the time once in a while.
SIGALRM/setitimer can be used to receive an interrupt periodically. Additionally, some systems have higher level libraries for dealing with time.
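A minimal sketch of the SIGALRM/setitimer approach, with a handler that only increments a counter (about the only thing that is safe to do inside a signal handler):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks = 0;

    static void on_alarm(int sig)
    {
        (void)sig;
        ticks++;                         /* async-signal-safe: just a counter */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_alarm;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval tv;
        memset(&tv, 0, sizeof tv);
        tv.it_value.tv_usec = 100000;    /* first tick after 100 ms */
        tv.it_interval.tv_usec = 100000; /* then every 100 ms */
        setitimer(ITIMER_REAL, &tv, NULL);

        while (ticks < 10)
            pause();                     /* sleep until a signal arrives */

        printf("10 ticks received\n");
        return 0;
    }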
