Playing 15 audio tracks at once with <50ms latency?

To summarise, my question is: is it possible to decode and play 15 lossily-compressed audio tracks on-the-fly at the same time with under 50ms latency and with no stuttering?
Background
I'm writing a sound library in plain C for a game I'm creating. I'm hoping to have up to 15 audio tracks playing at once with less than 50ms latency.
As of now, the library is able to play raw PCM files (48000Hz packed 16-bit samples), and can easily play 15 sounds at once at 45ms latency without stuttering and with minimal CPU usage. This is on my relatively old Intel Q9300 + SSD machine.
Since raw audio files are huge though, I augmented my library to support playing back OPUS files using opusfile (https://mf4.xiph.org/jenkins/view/opus/job/opusfile-unix/ws/doc/html/index.html). I was hoping that I'd still be able to play 15 sounds at once without the audio files taking up 200MB+. How wrong I was - I was only able to play 3 or 4 OPUS tracks at once before I could hear stuttering and other buffer underrun symptoms. CPU usage was also massively increased compared to raw PCM playback.
I also tried including VORBIS support using vorbisfile (http://www.xiph.org/vorbis/doc/vorbisfile/). I thought maybe decoding VORBIS on-the-fly wouldn't be as CPU intensive. VORBIS is a little better than OPUS - I can play 5 or 6 sounds at once before stuttering becomes audible (I guess VORBIS is indeed easier to decode) - but this is still nowhere near as good as playing back raw PCM files.
Before I delve into the low-level libvorbis/libopus APIs and investigate other audio compression formats, is it actually feasible to decode and play 15 lossily-compressed audio tracks on-the-fly at the same time with under 50ms latency and with no stuttering on a medium-to-low end desktop computer?
If it helps, my sound library currently calls a function approximately every 15ms which basically does the following (error-handling and post-processing omitted for clarity):
void onBufferUpdateNeeded(int numSounds, struct Sound *sounds,
                          uint16_t *bufferToUpdate, int numSamplesNeeded,
                          uint16_t *tmpBuffer) {
    int i, j;
    memset(bufferToUpdate, 0, numSamplesNeeded * sizeof(uint16_t));
    for (i = 0; i < numSounds; ++i) {
        /* Seek to the specified sample number in the already-opened
           file handle. The implementation of this depends on the file
           type (vorbis, opus, raw PCM). */
        seekToSample(sounds[i].fileHandle, sounds[i].currentSample);
        /* Read numSamplesNeeded samples from the file handle into
           tmpBuffer. */
        readSamples(tmpBuffer, sounds[i].fileHandle, numSamplesNeeded);
        /* Add the samples into the buffer. */
        for (j = 0; j < numSamplesNeeded; ++j) {
            bufferToUpdate[j] += tmpBuffer[j];
        }
    }
}
Thanks in advance for any help!

It sounds like you already know the answer to your own question: NO. Normally, the only advice I would have for questions like these (especially performance-related ones) is to try it and find out whether it's possible. But you have already collected that data.
It's true that perceptual/lossy audio codecs tend to be computationally intensive to decode. It sounds like you want to avoid the storage overhead of raw PCM. In that case, if you can safely assume you'll have enough memory reserved for your application, you can decode the audio streams in advance, or employ some caching mechanism to deal with memory constraints. Perhaps this can be offloaded to a different thread (since the Q9300 CPU mentioned in your question is dual core).
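For illustration only, here is a rough sketch of the "decode in advance" idea using the opusfile API you are already linking against. The struct and function names are mine, error handling is abbreviated, and it assumes a single-link file; build against the flags from pkg-config opusfile:

/* Sketch: decode an entire Opus file to 16-bit PCM up front, so playback
   only has to mix raw samples. Names and error handling are illustrative. */
#include <opusfile.h>
#include <stdlib.h>

struct DecodedSound {
    opus_int16 *samples;   /* interleaved PCM at 48 kHz */
    int         channels;
    ogg_int64_t numFrames; /* samples per channel */
};

int loadOpusFully(const char *path, struct DecodedSound *out) {
    int err = 0;
    OggOpusFile *of = op_open_file(path, &err);
    if (of == NULL)
        return -1;

    out->channels  = op_channel_count(of, -1);
    out->numFrames = op_pcm_total(of, -1);
    out->samples   = malloc(out->numFrames * out->channels * sizeof(opus_int16));
    if (out->samples == NULL) { op_free(of); return -1; }

    ogg_int64_t done = 0;
    while (done < out->numFrames) {
        /* op_read() returns the number of samples read per channel. */
        int got = op_read(of, out->samples + done * out->channels,
                          (int)((out->numFrames - done) * out->channels), NULL);
        if (got <= 0)
            break;
        done += got;
    }
    op_free(of);
    return 0;
}

You could also run this decoding on a second thread and only keep a few hundred milliseconds of decoded audio ahead of the mixer per sound, trading memory for a bit more bookkeeping.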
Otherwise, you will need to seek out a compressor that has lower computational requirements. You might be interested in FLAC, sponsored by the same organization as Vorbis and Opus. It's lossless, so it won't compress quite as well as the lossy algorithms, but it should be much, much faster to decode.
And if that's still not suitable, browse around on this big list of ~150 audio codecs until you find one that meets your standards. Since you control the client software, you have a lot of choices (vs, e.g., streaming to a web browser).

Related

Play multiple wav audio with C and libao at same time

I'm using libao (ao_play) to play some buffers. I listen for keyboard keys, and for each key I have a WAV sound to play. It's simple.
With ao_play I see that the application blocks while it is playing a sound. Because I want to play multiple sounds at the same time, I had to use threads (with the pthread library).
It works, but it feels like a workaround, and if I play too many files (maybe 10 or so), everything freezes for a few seconds and then recovers.
Well, my question is: how do I play multiple sounds at the same time, without blocking, using libao (and without threads)?
This is not a real design, more like a guess.
First of all, you'll need threads, because it's a good old tradition to separate computation from visualisation, or audialisation in this case. You'll need an audio thread that renders the stream and sends it to the output.
So, each time your main thread detects a keypress, it sends a note to the audio thread. The audio thread picks up the event and adds a wave to the currently playing stream. The stream is rendered in frames (64, 1024, or 10240 samples, whatever suits your latency); if the wave itself is a simple mix of a few known samples, rendering can easily keep up in real time. You should keep track of the notes currently playing and the position within each sample. If latency is low, and granularity therefore high, you can even align sample starts to buffer edges, which simplifies rendering noticeably.
And after the current buffer is rendered, you simply send it to the DAC and proceed with the next frame.
A quick glance at libao's help page does not reveal any mixing capabilities, so you'll need to write a simple mixer of your own, or find an existing solution such as a simple open-source audio rendering library.
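To make that concrete, here is a hedged sketch of what such a hand-rolled mixer could look like on top of libao: the main loop repeatedly mixes all active voices into one frame, clamps, and hands the frame to ao_play. The Voice struct, the frame size, and the way keypresses activate voices are assumptions on my part, not part of libao:

/* Sketch of a single-threaded mixer on top of libao. */
#include <ao/ao.h>
#include <stdint.h>

#define FRAME_SAMPLES 1024   /* per channel; smaller frame = lower latency */
#define MAX_VOICES    16

struct Voice {
    const int16_t *data;  /* decoded mono WAV samples */
    size_t length;        /* total samples */
    size_t pos;           /* current playback position */
    int active;
};

static void mixFrame(struct Voice *voices, int numVoices, int16_t *frame) {
    int32_t accum[FRAME_SAMPLES] = {0};
    for (int v = 0; v < numVoices; ++v) {
        if (!voices[v].active) continue;
        for (int i = 0; i < FRAME_SAMPLES && voices[v].pos < voices[v].length; ++i)
            accum[i] += voices[v].data[voices[v].pos++];
        if (voices[v].pos >= voices[v].length) voices[v].active = 0;
    }
    for (int i = 0; i < FRAME_SAMPLES; ++i) {   /* clamp to the 16-bit range */
        if (accum[i] >  32767) accum[i] =  32767;
        if (accum[i] < -32768) accum[i] = -32768;
        frame[i] = (int16_t)accum[i];
    }
}

int main(void) {
    ao_initialize();
    ao_sample_format fmt = { .bits = 16, .rate = 44100, .channels = 1,
                             .byte_format = AO_FMT_NATIVE };
    ao_device *dev = ao_open_live(ao_default_driver_id(), &fmt, NULL);

    struct Voice voices[MAX_VOICES] = {0};
    int16_t frame[FRAME_SAMPLES];
    for (;;) {
        /* ...poll the keyboard here and mark voices active... */
        mixFrame(voices, MAX_VOICES, frame);
        ao_play(dev, (char *)frame, sizeof(frame));
    }
    ao_close(dev);
    ao_shutdown();
    return 0;
}

ao_play still blocks, but only for one frame at a time (roughly 23 ms with these numbers), so keypresses picked up between frames are mixed into the very next frame without any extra threads.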

Raw Sockets Vs Libpcap in sending performance

I'm currently attempting to get the best sending performance for an 802.11 frame, I am using libpcap but I wondered if I could speed it up using raw sockets (or any other possible method).
Consider this simple example code for libpcap with a device handle already created previously:
char ourPacket[60][50] = { {0x01, 0x02, ... , 0x50}, ... , {0x01, 0x02, ... , 0x50} };

for ( ; ; )
{
    for (int i = 0; i < 60; ++i)
    {
        pcap_sendpacket(deviceHandle, ourPacket[i], 50);
    }
}
This code segment is done on a thread for each separate CPU core. Is there any faster way to do this for raw 802.11 frame/packets containing Radiotap headers that are stored in an array?
Looking at pcap's source code for pcap_inject (the same function but with a different return value), it doesn't seem to be using raw sockets to send packets? No clue.
I don't care about capturing performance, as a lot of the other questions have answered that. Are raw sockets even meant for sending layer 2 packets/frames?
As Gill Hamilton mentioned, the answer will depend on a lot of things. If you see super gains on one system, you may not see them on another, even if both are "running Linux". You're better off testing the code yourself. That being said, here is some info from what my team has found:
Note 1: all the gains were for code that did not just write frames/packets to sockets, but analyzed them and processed them, so it is likely that much or most of our gains were there rather than the read/write.
We are writing a direct raw socket implementation to send/receive Ethernet frames and IP packets. We're seeing about a 250%-450% performance gain on our measliest R&D system which is a MIPS 24K 5V system on a chip with a MT7530 integrated Ethernet NIC/Switch which can barely handle sustained 80 Mbit. On a very modest but much beefier test system with an Intel Celeron J1900 and I211 Gigabit controllers it drops to about 50%-100% vs c libpcap. In fact, we only saw about 80%-150% vs. Python dpkt/scapy implementation. We only saw maybe about a 20% gain on a generic i5 Linux dual-gigabit system vs. a c libpcap implementation. So based on our non-rigorous testing, we saw up to a 20x difference in performance gains of the code depending on the system.
Note 2: All of these gains were when using maximum optimizations and strictest compile parameters during compiling of the custom C code, but not necessarily for the C libpcap code (making all warnings errors on some of the above systems makes the libpcap code not compile, and who wants to debug that?), so the differences may be less dramatic. We need to squeeze out every last ounce of performance to enable some sophisticated packet processing using no more than 5.0V and 1.5A, so we'll ultimately be going with a custom ASIC, which may be an FPGA. That being said, it's A LOT of work to get it working without bugs, and we're likely going to be implementing significant portions of the Ethernet/IP/TCP/UDP stack, so I don't recommend it.
Last note: The CPU usage on the MIPS 24K system was about 1/10 for the custom code, but again, I would say that the vast majority of that gain was from the processing.
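For what it's worth, if you do want to try bypassing libpcap on Linux, the send path of a raw AF_PACKET socket looks roughly like the sketch below. The interface name, the frame contents, and the suitability for radiotap/802.11 injection are assumptions on my part; you also need root or CAP_NET_RAW, and a monitor-mode interface for injection:

/* Rough Linux-only sketch of sending pre-built frames via AF_PACKET
   instead of pcap_sendpacket(); "wlan0mon" and the frames are placeholders. */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    unsigned char ourPacket[60][50];           /* radiotap header + 802.11 frame */
    memset(ourPacket, 0, sizeof(ourPacket));   /* fill with real frame data */

    int sock = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (sock < 0)
        return 1;

    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family   = AF_PACKET;
    addr.sll_protocol = htons(ETH_P_ALL);
    addr.sll_ifindex  = if_nametoindex("wlan0mon");
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    for (;;) {
        for (int i = 0; i < 60; ++i) {
            /* send() on a bound packet socket injects the raw frame as-is */
            if (send(sock, ourPacket[i], 50, 0) < 0) {
                /* handle or ignore transient errors such as ENOBUFS here */
            }
        }
    }
    close(sock);
    return 0;
}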

How do I write audio data at a certain sample rate?

I am making a synthesizer by piping data into aplay (I know it's not ideal), and the sound is lagging behind the keypresses which alter it. I believe this is because aplay is running at a constant 8000 Hz, but the C program is running at an unstable rate. How do I get the for loop to run at 8000 Hz in C?
To generate audio samples at 8000 Hz (or any fixed rate) you don't want your loop to "run at" that rate. That would involve huge amounts of overhead (99.99% or more) spinning doing nothing until it's time to generate the next sample, and (especially if you sleep rather than spin) would be unreliable in that your process might not wake up or get scheduled in time for some of the samples.
Instead, you just want to be producing samples at an overall rate matching what the consumer (aplay/the audio device) expects. You can compute the current sample number you should have generated up to as something like:
(current_time + buffer_depth - start_time) * sample_rate
then, after generating up to that sample, sleep for some period proportional to the buffer depth, but sufficiently less that you won't be in trouble if your process doesn't get scheduled again right away. The buffer depth you can use depends on what kind of latency you need. If you're making sounds for live/realtime events, you probably want a buffer depth of 1/50 sec (20 ms) or less. If not, you can happily use huge buffers like 5-10 seconds.
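A minimal sketch of that pacing loop, assuming a 20 ms buffer depth; generate_sample() and write_sample() are placeholders of mine, not part of any particular API:

/* Keep the generator a fixed buffer depth ahead of real time, sleeping in
   between bursts of sample generation. */
#include <stdint.h>
#include <time.h>
#include <unistd.h>

#define SAMPLE_RATE  8000
#define BUFFER_DEPTH 0.020   /* seconds of audio we stay ahead of real time */

static double now_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    double start = now_seconds();
    long generated = 0;                       /* samples produced so far */

    for (;;) {
        double elapsed = now_seconds() - start;
        long target = (long)((elapsed + BUFFER_DEPTH) * SAMPLE_RATE);
        while (generated < target) {
            int16_t s = 0;                    /* generate_sample(generated) */
            (void)s;                          /* write_sample(s) to aplay/device */
            ++generated;
        }
        usleep((useconds_t)(BUFFER_DEPTH * 1e6 / 2));  /* wake well before the buffer drains */
    }
    return 0;
}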
If you are piping data to aplay, you will not experience any problems with the sample rate (8 kHz, for example) because the kernel will block your program when you write() when the buffer is full. This will effectively limit your audio generation to 8 kHz with no work on your part.
However, this is far from ideal. Your application will only be throttled once the kernel buffer for the pipe is full, and the default size for pipe buffers on Linux is 64 kB. For stereo 16-bit data at 8 kHz, this is two full seconds of audio data, so you would expect your audio to lag at least two seconds from the user input. This is unacceptable for synthesizer applications.
The only real solution is to use the ALSA library directly (or some alternative sound API). Using this API, you can send buffered audio data to your audio output device without accumulating excessive queued data in kernel buffers.
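As a rough sketch (not a drop-in replacement for your program), the ALSA "simple" API lets you request a small buffer explicitly; fill_audio() is a placeholder for your synthesis code, and you compile with -lasound:

/* Write samples through ALSA directly with a ~50 ms buffer, so the synth is
   throttled by the sound device instead of by a 64 kB pipe. */
#include <alsa/asoundlib.h>
#include <stdint.h>

#define RATE   8000
#define FRAMES 400   /* 50 ms of mono audio at 8 kHz */

int main(void) {
    snd_pcm_t *pcm;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* 16-bit mono, 8 kHz, allow resampling, about 50 ms of total latency */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           1, RATE, 1, 50000) < 0)
        return 1;

    int16_t buf[FRAMES] = {0};
    for (;;) {
        /* fill_audio(buf, FRAMES);  -- generate the next chunk here */
        snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf, FRAMES);
        if (n < 0)
            snd_pcm_recover(pcm, (int)n, 0);  /* handle underruns */
    }
    snd_pcm_close(pcm);
    return 0;
}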
See A Guide Through The Linux Sound API Jungle for some tips.

Algorithm - Handling Jitter and Drift with External Codec/Modem

I am writing a small module in C to handle jitter and drift for a full-duplex audio system. It acts as a very primitive voice chat module, which connects to an external modem that uses a separate clock, independent from my master system clock (ie: it is not slaved off of the system master clock).
The source is based off of an existing example available online here: http://svn.xiph.org/trunk/speex/libspeex/jitter.c
I have 4 audio streams:
Network uplink (my voice, after processing, going to the far side speaker)
Network downlink (far side's voice, before processing, coming to me)
Speaker output (the far side's voice, after processing, to the local speakers)
Mic input (my voice, before processing, coming from the local microphone)
I have two separate threads of execution. One handles the local devices and buffer (ie: playing processed audio to the speakers, and capturing data from the microphone and passing it off to the DSP processing library to remove background noise, echo, etc). The other thread handles pulling the network downlink signal and passing it off to the processing library, and taking the processed data from the library and pushing it via the uplink connection.
The two threads use mutexes and a set of shared circular/ring buffers. I am looking for a way to implement a sure-fire (safe and reliable) jitter and drift correction mechanism. By jitter, I am referring to a clock having variable duty cycle, but the same frequency as an ideal clock.
The other potential issue I would need to correct is drift, which would assume both clocks use an ideal 50% duty cycle, but their base frequency is off by ±5%, for example.
Finally, these two issues can occur simultaneously. What would be the ideal approach to this? My current approach is to use a type of jitter buffer. These are just data buffers which maintain a moving average of their "fill" level. If a thread tries to read from the buffer and not enough data is available (a buffer underflow), I just generate data for it on the fly, either by providing a spare zeroed-out packet or by duplicating a packet (i.e., packet loss concealment). If data is coming in too quickly, I discard an entire packet of data and keep going. This handles the jitter portion.
The second half of the problem is drift correction. This is where the average fill level metric comes in useful. For all buffers, I can calculate the relative growth/reduction levels in various buffers, and add or subtract a small number of samples every so often so that all buffer levels hover around a common average "fill" level.
Does this approach make sense, and are there any better or "industry standard" approaches to handling this problem?
Thank you.
References
Word Clock – What’s the difference between jitter and frequency drift?, Accessed 2014-09-13, <http://www.apogeedigital.com/knowledgebase/fundamentals-of-digital-audio/word-clock-whats-the-difference-between-jitter-and-frequency-stability/>
Jitter.c, Accessed 2014-09-13, <http://svn.xiph.org/trunk/speex/libspeex/jitter.c>
I faced a similar, although admittedly simpler, problem. I won't be able to fully answer your question, but I hope sharing my solutions to some practical problems I ran into will benefit you anyway.
Last year I was working on a system which had to simultaneously record from and render to multiple audio devices, each potentially ticking off a different clock. The most obvious example is a duplex stream on 2 devices, but it also handled setups with only multiple inputs or only multiple outputs. All in all it was a bit simpler than your situation (single-threaded and no network I/O). In the end I don't believe dealing with more than 2 devices is harder than dealing with 2; any system with multiple clocks is going to have to deal with the same problems.
Some things I've learned:
Pick one stream and designate its clock as "the truth" (i.e., sync all other streams to a common master clock). If you don't do this, you won't have a well-defined notion of "current sample position", and without it there's nothing to sync to. This also has the benefit that at least one stream in the system will always be clean (no dropping/padding of samples).
Your approach of using an additional buffer to handle jitter is correct. Without it you'd be constantly dropping/padding even on streams with the same nominal sample rate.
Consider whether or not you'd want to introduce such a jitter buffer for the "master" stream also. Doing so means introducing artificial latency in the master stream, not doing so means the rest of your streams will lag behind.
I'm not sure whether it's a good idea to drop entire packets. Why not try to use up as many of the samples as possible? Especially with large packet sizes this is far less noticeable.
To elaborate on the above, I got badly bitten by the following case: assume s1 (master) producing 48000 frames every second and s2 producing 96000 every 2 seconds. Round 1: read 48000 from s1, 0 from s2. Round 2: read 48000 from s1, 96000 from s2 -> overflow. Discard entire packet. Round 3: read 48000 from s1, 0 from s2. Etc. Obviously this is a contrived example, but I ran into cases where on average I dropped 50% of a secondary stream's data using this scheme. Introducing the jitter buffer helped but didn't completely fix this problem. Note that this is not strictly related to clock jitter/skew; it's just that some drivers like to update their padding values periodically, and they will not accurately report what is really in the hardware buffer.
Another variation on this problem happens when you really do get clock jitter but the API of your choice doesn't let you control the packet size (i.e., doesn't let you request fewer frames than are actually available). Assume s1 (master) recording at 1000 Hz and s2 alternating each second between 1000 and 1001 Hz. Round 1: read 1000 frames from both. Round 2: read 1000 frames from s1, and 1001 from s2 -> overflow. Etc.; on average you'll dump around 50% of frames on s2. Note that this is not so much a problem if your API lets you say "give me 1000 samples even though I know you've got more". By doing so, though, you'll eventually overflow the hardware input buffer.
To have the most control over when to drop/pad, I found it easiest to always keep input buffers empty and output buffers full. This way all dropping/padding takes place in the jitter buffer and you'll at least know and control what's happening.
If possible try to separate your program logic: the hard part is finding out where to pad/drop samples. Once you've got that in place it's easy to try different variations of pad/drop, sample-and-hold, interpolation etc.
All in all I'd say your solution looks very reasonable, although I'm not sure about the "drop entire packet" thing, and I'd definitely pick one stream as the master to sync against. For completeness, here's the solution I eventually came up with (a small code sketch of the input-side decision follows the list):
1: Assume a jitter buffer of size J on each stream.
2: Wait for a packet of size M to become available on the master stream (M is typically derived from the stream latency). We're going to deliver M frames of input/output to the app. I didn't implement an additional buffer on the master stream.
3: For all input streams: let H be the number of recorded frames in the hardware buffer, B the number of recorded frames currently in the jitter buffer, and A the number of frames available to the application: A = H + B.
3a: If A < M, we have input underflow. Offer A recorded frames + (M - A) padding frames to the app. Since the device is likely slow, fill 1/2 of the jitter buffer with silence.
3b: If A == M, offer A frames to the app. The jitter buffer is now empty.
3c: If A > M but (A - M) <= J, offer M recorded frames to the app. A - M frames stay in the jitter buffer.
3d: If A > M and (A - M) > J, we have input overflow. Offer M recorded frames to the app, of the remaining frames put J/2 back in the jitter buffer, we use up M + J/2 frames and we drop A - (M + J/2) frames as overflow. Don't try to keep the jitter buffer full because the device is likely fast and we don't want to overflow again on the next round.
4: Sort of the inverse of 3: for outputs, fast devices will underflow, slow devices will overflow.
A, H and B are the same quantities, but this time they don't represent available frames; they represent available space (i.e., how many frames can I offer to the app to write into).
Try to keep hardware buffers full at all costs.
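To make steps 3a-3d above easier to follow, here is a small illustrative sketch of the input-side decision in C. The offer/keep/drop actions are just printed, since the real buffer plumbing depends entirely on your ring buffer implementation:

/* Hedged sketch of the input-side decision (steps 3a-3d above).  H, B, M
   and J are the frame counts as defined in the text. */
#include <stdio.h>

/* Decide what to hand to the application for one secondary input stream. */
static void handle_input(long H, long B, long M, long J) {
    long A = H + B;                     /* frames available to the app */

    if (A < M) {                        /* 3a: underflow */
        printf("offer %ld recorded + %ld padding frames, refill jitter buffer to %ld\n",
               A, M - A, J / 2);
    } else if (A == M) {                /* 3b: exact fit */
        printf("offer %ld frames, jitter buffer now empty\n", M);
    } else if (A - M <= J) {            /* 3c: surplus fits in the jitter buffer */
        printf("offer %ld frames, keep %ld in the jitter buffer\n", M, A - M);
    } else {                            /* 3d: overflow */
        long keep = J / 2;
        long drop = A - (M + keep);
        printf("offer %ld frames, keep %ld, drop %ld as overflow\n", M, keep, drop);
    }
}

int main(void) {
    handle_input(300, 100, 512, 256);   /* A=400 < M=512 -> underflow */
    handle_input(600, 200, 512, 256);   /* A=800, A-M=288 > J=256 -> overflow */
    return 0;
}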
This scheme worked out quite well for me, although there's a few things to consider:
It involves a lot of bookkeeping. Make sure that for input buffers, data always flows from hardware -> jitter buffer -> application, and for outputs always from app -> jitter buffer -> hardware. It's very easy to make the mistake of thinking you can skip the jitter buffer and hand samples from the hardware directly to the app when enough are available. This will essentially mess up the chronological order of frames in the audio stream.
This scheme introduces variable latency on secondary streams because I try to postpone the moment of padding/dropping as long as possible. This may or may not be a problem. I found that in practice postponing these operations gives audibly better results, probably because many "minor" glitches of only a few samples are more annoying than the occasional larger hiccup.
Also, PortAudio (an open source audio project) has implemented a similar scheme, see http://www.portaudio.com/docs/proposals/001-UnderflowOverflowHandling.html. It may be worthwhile to browse through the mailing list and see what problems/solutions came up there.
Note that everything I've said so far is only about interaction with the audio hardware; I've no idea whether this will work equally well with the network streams, but I don't see any obvious reason why not. Just pick one audio stream as the master, sync the other one to it, and do the same for the network streams. This way you'll end up with two more-or-less independent systems connected only by the ring buffer, each with an internally consistent clock, each running on its own thread. If you're aiming for low audio latency, you'll also want to drop the mutexes and opt for a lock-free FIFO of some sort.
I am curious to see if this is possible. I'll throw in my two bits though.
I am a novice programmer, but studied audio engineering/interactive audio.
My first assumption is that this is not possible. At least not on a sample-to-sample basis. Especially not for complex audio data and waveforms such as human speech. The program could have no expectation of what the waveform "should" look like.
This is why there are high-end audio interfaces with temperature controlled internal clocks.
On the other hand, maybe there is a library that can detect the symptoms of jitter, somehow...
In which case I would be very curious to hear about it.
As far as drift correction, maybe I don't understand something on the programming front, but shouldn't you be pulling audio at a specific sample rate? I believe sample rate/drift is handled at the hardware level.
I really hope this helps. You might have to steer me closer to home.

Delay added to sound

I'm going to write an application in Silverlight that consists of 2 threads, one that plays sound and another that records sound. And whatever is recorded will be what was played plus some ambient noise.
The problem is that Silverlight adds a delay to the sound to be played, and because I don't know how much is this delay, I would not know precisely what was played when something is recorded.
Do you know where I can find more information about this delay (how much is it, is it constant, will it change if I restart my application or computer, will it be the same in different computers, ...), or how could I measure it with an accuracy of 1 ms?
To measure the delay you can play some form of generated sound (like a sine wave with increasing amplitude), capture it, and match the input and output signals.
The delay itself is a complicated matter, especially when dealing with low latencies. There are a lot of things that contribute to it, including Silverlight itself, the audio stack, the OS, and the audio hardware. Some additional info is here.
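Whatever language you end up doing the matching in, it boils down to finding the lag that maximizes the cross-correlation between the reference signal and the recording. A brute-force sketch in C (signal contents and lengths are placeholders of mine):

/* Estimate the playback-to-capture delay by brute-force cross-correlation:
   find the lag at which the recorded signal best matches the reference. */
#include <stddef.h>
#include <stdio.h>

/* Returns the lag (in samples) of rec relative to ref with the highest
   correlation, searched over [0, maxLag). */
static size_t estimate_delay(const float *ref, const float *rec,
                             size_t n, size_t maxLag) {
    size_t bestLag = 0;
    double bestScore = -1e300;
    for (size_t lag = 0; lag < maxLag; ++lag) {
        double score = 0.0;
        for (size_t i = 0; i + lag < n; ++i)
            score += (double)ref[i] * rec[i + lag];
        if (score > bestScore) {
            bestScore = score;
            bestLag = lag;
        }
    }
    return bestLag;   /* delay in samples; divide by the sample rate for seconds */
}

int main(void) {
    float ref[1000] = {0}, rec[1000] = {0};
    ref[10]  = 1.0f;   /* an impulse in the reference signal */
    rec[250] = 1.0f;   /* the same impulse recorded 240 samples late */
    printf("estimated delay: %zu samples\n", estimate_delay(ref, rec, 1000, 500));
    return 0;
}

At a 48 kHz sample rate one sample of lag is roughly 0.02 ms, so accuracy of 1 ms is limited by how cleanly you can capture the test signal, not by the matching itself.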
