Programming a Linux application to play multiple sounds simultaneously - C

I've got a need to write a Linux application that does the following:
1- Continuously play a WAV file in the background, so that this background music plays the entire time the application is running.
2- Be able to play short sounds when certain events happen, while the background music continues to play.
What is required to mix the additional event sounds in with the background music when they happen, so that both are heard at the same time?
I've never written Linux sound code, so this is ALL new to me. I'm assuming that I will need to write to the ALSA API? Or some other library that will facilitate this?
If somebody could provide sample code to get me started I would greatly appreciate it. After a few days I will create a bounty and provide a good deal of reputation for sample code that does what is needed.

You usually don't want to use the ALSA API directly. It's hard to use, and not really portable (since ALSA is specific to Linux).
If you are using some specific libraries in your application (like Qt or something like that), there may already be a counterpart sound library for playing sounds.
If you are looking for a good, general-purpose sound library, I suggest you take a look at SDL. It's quite nice, small, and portable, and very popular for games. They have some nice example code on their site to get you started.
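If plain SDL feels too low-level for this, the companion SDL_mixer library handles exactly the "one looping music track plus short event sounds" case. A minimal sketch, assuming SDL2 with SDL2_mixer and placeholder file names background.wav and event.wav:

    /* Build (typical): gcc play.c -o play `sdl2-config --cflags --libs` -lSDL2_mixer */
    #include <SDL.h>
    #include <SDL_mixer.h>

    int main(void)
    {
        if (SDL_Init(SDL_INIT_AUDIO) != 0) {
            SDL_Log("SDL_Init failed: %s", SDL_GetError());
            return 1;
        }

        /* 44.1 kHz, default sample format, stereo, 1024-sample chunks */
        if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) != 0) {
            SDL_Log("Mix_OpenAudio failed: %s", Mix_GetError());
            return 1;
        }

        Mix_Music *music = Mix_LoadMUS("background.wav");   /* looping background track */
        Mix_Chunk *blip  = Mix_LoadWAV("event.wav");        /* short event sound */
        if (!music || !blip) {
            SDL_Log("Loading failed: %s", Mix_GetError());
            return 1;
        }

        Mix_PlayMusic(music, -1);            /* -1 = loop forever */

        /* Whenever an event happens, fire the short sound on any free channel;
           SDL_mixer mixes it with the music automatically. */
        Mix_PlayChannel(-1, blip, 0);

        SDL_Delay(5000);                     /* let it play for a few seconds */

        Mix_FreeChunk(blip);
        Mix_FreeMusic(music);
        Mix_CloseAudio();
        SDL_Quit();
        return 0;
    }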

For the sound-playing part, one library I have used that is easy to learn and use, and that has a good example in its documentation, is FMOD. The documentation that comes with the download includes a very easy-to-understand example which you can modify to get your sounds playing very quickly.

Related

How to play multiple sounds like SDL_Mixer does, but natively in SDL2?

In my last question I figured out how to play sounds natively in SDL2: How to lower the quality and specs of a wav file on linux
The issue I have now is wanting to mimic the one-music-and-many-sounds behavior that SDL_Mixer provides. One theory is that I can use different channels (mono, stereo, etc.) to play multiple sounds. Another theory is to dig deep into the SDL audio functions and find some way to play many sounds. I thought even using threads might work, but the problem with that is that re-opening the default audio device seems to remove my old sound.
Has anyone done this, or does anyone have any idea how to play multiple sounds over background music natively in SDL2, using SDL_OpenAudioDevice with WAV files?
Having dug into the code for a bit, the answer is not to use SDL_memset(stream, 0, len); as described here, which was silencing the sound. It appears at first glance that you can mix in other sounds without one overwriting the other. I will post later if I find changes to this answer.
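For reference, one pattern that fits this finding (a sketch, assuming every WAV was loaded with SDL_LoadWAV in the device's own signed 16-bit format; the Sound struct, the MAX_SOUNDS limit, and the function names below are all illustrative) is to let the looping music fill the callback buffer and then SDL_MixAudioFormat() each active effect on top of it:

    #include <SDL.h>

    #define MAX_SOUNDS 8

    typedef struct {
        Uint8 *data;   /* raw samples from SDL_LoadWAV() */
        Uint32 len;    /* total length in bytes */
        Uint32 pos;    /* current playback position in bytes */
        int active;
    } Sound;

    static Sound music;                 /* loops forever */
    static Sound sounds[MAX_SOUNDS];    /* short one-shot effects */

    static void audio_callback(void *userdata, Uint8 *stream, int len)
    {
        (void)userdata;

        /* Fill the whole buffer with the looping music first (no blanket
           SDL_memset needed, because the music always covers the full buffer). */
        int filled = 0;
        while (filled < len) {
            int chunk = len - filled;
            if ((Uint32)chunk > music.len - music.pos)
                chunk = (int)(music.len - music.pos);
            SDL_memcpy(stream + filled, music.data + music.pos, chunk);
            music.pos = (music.pos + chunk) % music.len;
            filled += chunk;
        }

        /* Mix every active one-shot sound on top of the music. */
        for (int i = 0; i < MAX_SOUNDS; ++i) {
            if (!sounds[i].active)
                continue;
            Uint32 remaining = sounds[i].len - sounds[i].pos;
            Uint32 mix_len = remaining < (Uint32)len ? remaining : (Uint32)len;
            SDL_MixAudioFormat(stream, sounds[i].data + sounds[i].pos,
                               AUDIO_S16LSB, mix_len, SDL_MIX_MAXVOLUME);
            sounds[i].pos += mix_len;
            if (sounds[i].pos >= sounds[i].len)
                sounds[i].active = 0;
        }
    }

The device would be opened once with SDL_OpenAudioDevice() and an SDL_AudioSpec whose callback field points at audio_callback; marking an entry in sounds[] active when an event fires is then all it takes to layer another effect over the music.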

Simple C audio library

I'm looking for a simple-ish library for outputting audio. I'd like it to meet these criteria:
Licensed under LGPL/zlib/MIT or something similar – I'm going to use it in an indie commercial application and I don't have the money for a license.
Written in C, but C++ is fine.
Cross-platform (Windows, Linux, maybe OSX)
Able to read from some sort of audio file (I'd prefer WAV or OGG, but I will gladly use less popular formats if need be) in memory (I've seen the use of a memfile struct and user-defined I/O callbacks). I need the file to be in memory because I put all my resources into a .zip archive, and I use another library to load those archived files into memory.
Supports playing multiple sounds at the same time; a max of 8 or so is OK.
I'd really like to either have the source code or a static library (MinGW/GCC lib???.a), but if nothing else is available I will use a shared library.
I must have come across two dozen different audio libraries in my search, all of which haven't quite met these criteria...
I would recommend PortAudio + libsndfile. It's a very popular combo that meets your requirements, and it is used by many other applications, including Audacity.
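As a rough illustration of how little glue that combo needs, here is a sketch that decodes a file with libsndfile and pushes it to the default output device with PortAudio's blocking-write API (file name and buffer sizes are placeholders, and error handling is mostly omitted):

    #include <portaudio.h>
    #include <sndfile.h>
    #include <stdlib.h>

    int main(void)
    {
        SF_INFO info = {0};
        SNDFILE *snd = sf_open("sound.wav", SFM_READ, &info);   /* WAV/FLAC/OGG... */
        if (!snd) return 1;

        Pa_Initialize();
        PaStream *stream;
        /* Passing a NULL callback selects PortAudio's blocking read/write API. */
        Pa_OpenDefaultStream(&stream, 0, info.channels, paFloat32,
                             info.samplerate, 1024, NULL, NULL);
        Pa_StartStream(stream);

        float *buf = malloc(sizeof(float) * 1024 * info.channels);
        sf_count_t frames;
        while ((frames = sf_readf_float(snd, buf, 1024)) > 0)
            Pa_WriteStream(stream, buf, (unsigned long)frames);

        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        sf_close(snd);
        free(buf);
        return 0;
    }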
Some of the candidates that immediately spring to my mind are:
SDL (there is a tutorial that demonstrates how to play a .wav format sound)
libav
ffmpeg
libao
OpenAL Soft
Jack Audio
You may have already looked at these and eliminated them, though. Can you give some more detail about the libraries that you have eliminated from consideration and why? This will help narrow down our recommendations.
You might want to look into SDL and SDL_mixer. Here is a good tutorial.
I've used SDL_mixer, and it makes it easy to play background sounds or music plus multiple simultaneous sounds, without needing to write your own sample mixer.
I ended up using PortAudio (very low-level, flexible license) and wrote a mixer myself. See this topic I made on the C++ forums for some other people's tips on writing a custom mixer. It's not hard at all, really; I'm surprised that there are so many mixer libraries out there. For a breakdown of the WAV format (ready-to-stream raw audio data with a 44-byte header) see this.
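For what it's worth, the heart of such a hand-rolled mixer is just a saturating sum of samples; everything else (voice management, resampling) is bookkeeping. A sketch for signed 16-bit PCM, with the function name and types assumed:

    #include <stdint.h>
    #include <stddef.h>

    /* Mix `count` samples from `src` into `dst`, clamping to the int16 range. */
    static void mix_s16(int16_t *dst, const int16_t *src, size_t count)
    {
        for (size_t i = 0; i < count; ++i) {
            int32_t sum = (int32_t)dst[i] + (int32_t)src[i];
            if (sum >  32767) sum =  32767;
            if (sum < -32768) sum = -32768;
            dst[i] = (int16_t)sum;
        }
    }

In a PortAudio callback you would zero the output buffer once, then call something like this once per active voice.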

Starting with the Core Audio framework

For a project that I intend to start on soon, I will need to play back compressed and uncompressed audio files. To do that, I intend to use the Core Audio framework. However, I have no prior experience in audio programming, and I'm really not sure where to start. Are there any beginner level resources or sample projects that can demonstrate how to build a simple audio player using Core Audio?
A preview of a book on Core Audio just came out. I've started reading it and as a beginner myself I find it helpful.
It has a tutorial style teaching method and is very clear in its explanations. I highly recommend it.
Although the question has already been answered, I would like to add a few more tips, since I've struggled with the same issue for months:
Here is a very simple code example I created, based on sample code from the Learning Core Audio book.
The Matt Gallagher audio streaming tutorial is a definite must. In addition to providing an excellent example of streaming audio live, it also provides a simple example of multi-threading, which brings me to the next VERY IMPORTANT point:
In Apple's concurrency guide, they advise against manual multithreading and offer a host of alternatives like GCD, NSOperation, and so on. Those are NOT a good idea when it comes to Core Audio, at least for real-time audio, because real-time audio is extremely sensitive to any kind of blocking or expensive operation, more than you can imagine (sometimes even a simple NSLog statement can make the audio break up or not play at all!). Here is an indispensable read on this part of audio.
Audio programming is a different kind of programming from what most of us are used to, so take your time to understand the concepts; a lot of them take a while to sink in. For example, the difference between an audio file format and an audio streaming format, or the difference between compressed audio and PCM (uncompressed) audio. Examples abound.
One key point that took me a while to comprehend: to access audio files in the iPad's music library, the only way to read them is via the AVAssetReader API, not through other APIs like AudioFileReadPackets (although if you store a file manually in your project, then you can). AVAssetReader is a lot less user-friendly than the other APIs, but once the concepts of Core Audio sink in, you won't find much of a difference. My example uses AVAssetReader.
See the discussion I've been having with Justin here; in it you'll see a lot of pitfalls I've fallen into, and you'll get an idea of how to avoid them. Remember, especially with Core Audio, it's not about knowing how to solve the problem; it's about knowing what problem to solve in the first place.
If you or anyone else has questions regarding Core Audio, please feel free to write up a question on Stack Overflow and point it out to me by commenting on one of my own questions, just to bring it to my attention. I've been helped a lot by the community here and I really want to offer help in return.
I wrote some sample code after spending a long time trying to figure out a similar problem as yours.
The sample code allows the user to select a song from their iPod library; it then creates an uncompressed (LPCM) copy of the file (using AVAssetReader/AVAssetWriter) and plays it back using an Audio Unit (which is part of Core Audio).
If you want to play an arbitrary file, just remove the bits of my code that create the uncompressed copy (look for AVAssetReader/AVAssetWriter), and instead have the class point to some other song file.
http://www.libsdl.org/
I think you can get better support for your requirement from the link above.

How do I play a tone in Linux using C?

I'm trying to write a program to randomly generate music based on a simple set of rules. I would like the program to be able to generate its own sounds, as opposed to having a file with audio for each note. Does anyone know a simple way of doing this? It would be nice (but not essential) for the sound to be polytonal, and I would like a solution for Linux, using C.
I suggest you try the PortAudio library. It is a lean cross-platform library that abstracts the audio-output functionality.
It comes with a bunch of small examples. One of them plays a single sine wave; another plays several sine waves at the same time. Since the examples already do 90% of what you need, you should have your audio up and running in less than half an hour.
Hint: the best documentation of PortAudio is in the header file!
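In the same spirit as PortAudio's bundled sine-wave examples, here is a minimal tone sketch using the blocking API; the frequency, volume, and duration are arbitrary choices:

    /* Build with: gcc tone.c -o tone -lportaudio -lm */
    #include <math.h>
    #include <portaudio.h>

    #define SAMPLE_RATE 44100
    #define FREQ_HZ     440.0

    int main(void)
    {
        Pa_Initialize();

        PaStream *stream;
        Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, SAMPLE_RATE, 256, NULL, NULL);
        Pa_StartStream(stream);

        float buf[256];
        const double two_pi = 6.283185307179586;
        double phase = 0.0, step = two_pi * FREQ_HZ / SAMPLE_RATE;

        /* Write roughly 2 seconds of a 440 Hz sine wave, 256 frames at a time. */
        for (int n = 0; n < (2 * SAMPLE_RATE) / 256; ++n) {
            for (int i = 0; i < 256; ++i) {
                buf[i] = (float)(0.2 * sin(phase));   /* 0.2 = modest volume */
                phase += step;
            }
            Pa_WriteStream(stream, buf, 256);
        }

        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }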
Here is an ALSA example that plays a pure sine-wave tone. Perhaps unintentionally, it also demonstrates why you might not want to program directly against the ALSA library.
You can try to find a C MIDI sequencer (such as MIDI Sequencer). Also look into building .au-formatted audio files (i.e., look at the specs for the .au header and sound data format). You won't be able to use the .wav format, because it requires a length in the header to be filled in before playback.
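For reference, the .au header is just six 32-bit big-endian fields in front of the raw samples, and the data-size field may be left as "unknown", which is what makes the format handy for streaming generated audio. A sketch (the helper names and parameter choices below are purely illustrative):

    #include <stdint.h>
    #include <stdio.h>

    static void write_be32(FILE *f, uint32_t v)
    {
        uint8_t b[4] = { (uint8_t)(v >> 24), (uint8_t)(v >> 16),
                         (uint8_t)(v >> 8),  (uint8_t)v };
        fwrite(b, 1, 4, f);
    }

    static void write_au_header(FILE *f, uint32_t sample_rate, uint32_t channels)
    {
        write_be32(f, 0x2E736E64);  /* magic ".snd"                     */
        write_be32(f, 24);          /* offset to the sample data        */
        write_be32(f, 0xFFFFFFFF);  /* data size: unknown / streaming   */
        write_be32(f, 3);           /* encoding 3 = 16-bit linear PCM   */
        write_be32(f, sample_rate);
        write_be32(f, channels);
        /* Samples follow, also big-endian for 16-bit PCM. */
    }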

What's a good API for creating music via programming?

I'm looking into playing around with procedurally generating music. I'm hoping to find a really simple API where I can just call out instrument, note, and duration and string together a song (I'll take anything, of course, but that would be my preference). Is there any library that does this?
Your best bet is a music programming environment, of which there are several.
Csound is one of the best known ones. Here is their website.
Max MSP is also another widely used option, and it provides a visual programming interface too. It is, however, commercial.
Another well known option (and widely used by experimental electronic musicians) is SuperCollider. This is its webpage.
Here's a Wikipedia article describing similar languages/environments.
You can also use a general programming language with the right libraries to do audio/music work. Java, for one, provides the Java Sound API.
JFugue was developed specifically to support procedural generation of music. It's a free, open-source (LGPL) Java API.
It's hard to give specific recommendations, since you didn't specify a language. Most languages have a decent MIDI library, though; that would be the first place I would look, unless you need something heavier than the MIDI format allows.
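If you do end up working at the MIDI level, the Standard MIDI File format itself is simple enough to emit by hand. A hedged sketch in C that writes a format-0 file containing one middle C (all constants assumed; there is no tempo event, so the default 120 BPM applies):

    #include <stdio.h>

    int main(void)
    {
        /* Header chunk: format 0, one track, 96 ticks per quarter note. */
        static const unsigned char header[] = {
            'M','T','h','d', 0,0,0,6,  0,0,  0,1,  0,96
        };
        /* Track chunk: program change, note on, note off after 96 ticks, end of track. */
        static const unsigned char track[] = {
            'M','T','r','k', 0,0,0,15,
            0x00, 0xC0, 0x00,        /* program change: channel 0, program 0 (piano) */
            0x00, 0x90, 0x3C, 0x64,  /* note on: middle C (60), velocity 100         */
            0x60, 0x80, 0x3C, 0x40,  /* after 96 ticks, note off                     */
            0x00, 0xFF, 0x2F, 0x00   /* end-of-track meta event                      */
        };

        FILE *f = fopen("one_note.mid", "wb");
        if (!f) return 1;
        fwrite(header, 1, sizeof header, f);
        fwrite(track, 1, sizeof track, f);
        fclose(f);
        return 0;
    }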
Maybe Generative music is a good start.
Googling turns up a couple of interesting links, too. Brian Eno created procedurally generated music for Spore.
You might want to look at Common Music.
It's a music composition system that transforms high-level algorithmic representations of musical processes and structure into a variety of control protocols for sound synthesis and display.
