I'm looking for a simple-ish library for outputting audio. I'd like it to meet these criteria:
Licensed under LGPL/zlib/MIT or something similar – I'm going to use it in an indie commercial application and I don't have the money for a license.
Written in C, but C++ is fine.
Cross-platform (Windows, Linux, maybe OSX)
Able to read some sort of audio file (I'd prefer WAV or OGG, but I will gladly use less popular formats if need be) from memory (I've seen the use of a memfile struct and user-defined I/O callbacks). The file needs to be in memory because I put all my resources into a .zip archive, and I use another library to load the archived files into memory.
Supports playing multiple sounds at the same time; a max of 8 or so is OK.
I'd really like to either have the source code or a static library (MinGW/GCC lib???.a), but if nothing else is available I will use a shared library.
I must have come across two dozen different audio libraries in my search, none of which has quite met these criteria...
I would recommend PortAudio + libsndfile. It's a very popular combo that meets your requirements and is used by many applications, including Audacity.
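To address the in-memory requirement specifically: libsndfile can read from a buffer through its SF_VIRTUAL_IO callback interface. A minimal sketch (the memfile struct and helper names are my own; error handling omitted):

    /* Sketch: reading an audio file that is already in memory through
       libsndfile's virtual I/O interface. The memfile struct and helper
       names are my own; SF_VIRTUAL_IO and sf_open_virtual are real API. */
    #include <sndfile.h>
    #include <stdio.h>   /* SEEK_SET etc. */
    #include <string.h>

    typedef struct { const char *data; sf_count_t size, pos; } memfile;

    static sf_count_t mf_len(void *u)  { return ((memfile *)u)->size; }
    static sf_count_t mf_tell(void *u) { return ((memfile *)u)->pos; }

    static sf_count_t mf_seek(sf_count_t off, int whence, void *u)
    {
        memfile *m = u;
        if (whence == SEEK_SET)      m->pos = off;
        else if (whence == SEEK_CUR) m->pos += off;
        else if (whence == SEEK_END) m->pos = m->size + off;
        return m->pos;
    }

    static sf_count_t mf_read(void *dst, sf_count_t n, void *u)
    {
        memfile *m = u;
        if (n > m->size - m->pos) n = m->size - m->pos;
        memcpy(dst, m->data + m->pos, (size_t)n);
        m->pos += n;
        return n;
    }

    /* No write callback is needed for read-only access. */
    SNDFILE *open_from_memory(memfile *m, SF_INFO *info)
    {
        static SF_VIRTUAL_IO io = { mf_len, mf_seek, mf_read, NULL, mf_tell };
        memset(info, 0, sizeof *info);
        return sf_open_virtual(&io, SFM_READ, info, m);
    }

From there you pull decoded samples with sf_readf_short() and hand them to PortAudio.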
Some of the candidates that immediately spring to my mind are:
SDL (there is a tutorial that demonstrates how to play a .wav format sound)
libav
ffmpeg
libao
OpenAL Soft
Jack Audio
You may have already looked at these and eliminated them, though. Can you give some more detail about the libraries that you have eliminated from consideration and why? This will help narrow down our recommendations.
You might want to look into SDL and SDL_mixer. Here is a good tutorial.
I've used SDL_mixer, and it makes it easy to play background music and multiple simultaneous sounds without needing to write your own sample mixer.
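Since you need in-memory loading: SDL_mixer can decode from a buffer through SDL's RWops. A rough sketch, assuming the WAV bytes are already loaded from your .zip (buffer names are placeholders; error handling minimal):

    /* Sketch: SDL_mixer decoding a WAV straight from memory and playing
       it on one of 8 channels. wav_data/wav_size are placeholders for a
       buffer you pulled out of your archive. */
    #include <SDL.h>
    #include <SDL_mixer.h>

    int play_from_memory(const void *wav_data, int wav_size)
    {
        if (SDL_Init(SDL_INIT_AUDIO) != 0) return -1;
        if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) != 0) return -1;

        Mix_AllocateChannels(8);                  /* cap at 8 simultaneous sounds */

        /* Wrap the in-memory buffer so SDL_mixer can decode it directly. */
        SDL_RWops *rw = SDL_RWFromConstMem(wav_data, wav_size);
        Mix_Chunk *chunk = Mix_LoadWAV_RW(rw, 1); /* 1 = free the RWops for us */
        if (!chunk) return -1;

        Mix_PlayChannel(-1, chunk, 0);            /* -1 = first free channel */
        SDL_Delay(2000);                          /* demo only: let it play */

        Mix_FreeChunk(chunk);
        Mix_CloseAudio();
        SDL_Quit();
        return 0;
    }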
I ended up using PortAudio (very low-level, flexible license) and wrote a mixer myself. See this topic I made on the C++ forums for some other people's tips on writing a custom mixer. It's not hard at all, really; I'm surprised that there are so many mixer libraries out there. For a breakdown of the WAV format (ready-to-stream raw audio data with a 44-byte header) see this.
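To show why it's not hard: the heart of a mixer is just summing samples and clamping. A minimal sketch, assuming signed 16-bit PCM voices of equal length (the function name and signature are my own):

    /* Sketch: the core loop of a software mixer for signed 16-bit PCM.
       Sum the active voices into a wider accumulator, then clamp. */
    #include <stddef.h>
    #include <stdint.h>

    void mix_s16(int16_t *out, const int16_t **voices, int nvoices, size_t nsamples)
    {
        for (size_t i = 0; i < nsamples; i++) {
            int32_t acc = 0;
            for (int v = 0; v < nvoices; v++)
                acc += voices[v][i];
            if (acc >  32767) acc =  32767;   /* clamp rather than wrap */
            if (acc < -32768) acc = -32768;
            out[i] = (int16_t)acc;
        }
    }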
I'm encoding images as video with FFmpeg, using custom C code rather than Linux shell commands, because I am developing for an embedded system.
I am currently working through the first dranger tutorial and the code provided in the following question.
How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?
I have found some "less abstract" code at the following GitHub location.
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/encode_video.c
And I plan to use it as well.
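For reference, the core of that example boils down to the avcodec_send_frame/avcodec_receive_packet loop; a condensed sketch (setup and error handling omitted):

    /* Condensed from the pattern in doc/examples/encode_video.c:
       feed frames to the encoder, drain packets as they come out. */
    #include <libavcodec/avcodec.h>
    #include <stdio.h>

    static void encode(AVCodecContext *ctx, AVFrame *frame, AVPacket *pkt, FILE *out)
    {
        avcodec_send_frame(ctx, frame);           /* frame == NULL flushes */
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            fwrite(pkt->data, 1, pkt->size, out); /* raw bitstream to file */
            av_packet_unref(pkt);
        }
    }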
My end goal is simply to save video on an embedded system using embedded C source code, and I am coming up the curve too slowly. So, in summary, my question is: does it seem like I am following the correct path here? I know that my system does not come with hardware for video codec conversion, which means I need to do it in software, but I am unsure whether FFmpeg is even a feasible option for embedded work because I have yet to compile it.
The biggest red flag for me thus far is that FFmpeg uses dynamic memory allocation. I am unfamiliar with how to assess the amount of dynamic memory that it uses. This is very important information to me, and if anyone is familiar with the amount of memory used or how to assess it before compiling, I would greatly appreciate the input.
After further research, it seems to me that encoding video is often a hardware-intensive task that can use multiple processors and gigabytes of RAM. In order to avoid this, I am performing a minimal amount of compression by using the AVI format.
I have found that FFmpeg can't readily be used for bare-metal embedded systems because the initial configure/make of the library sets up configuration specific to the machine doing the compiling, which conflicts with the need to cross-compile. I can see that there are cross-compilation flags available, but I have not found any documentation describing how to use them. Either way, I want to avoid big heaps and multi-threading, so I moved on.
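For anyone who gets further than I did: the flags appear to be passed to configure along these lines (untested on my part; the toolchain prefix and encoder selection are just examples):

    ./configure --enable-cross-compile \
                --cross-prefix=arm-none-eabi- \
                --arch=arm --target-os=none \
                --disable-pthreads --disable-network \
                --disable-everything --enable-encoder=mjpeg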
I decided to look for more basic source code elsewhere. mikekohn.net/file_formats/libkohn_avi.php is a great resource for very basic encoding without any complicated library dependencies or multi-threading. I have yet to implement it, so no guarantees, but best of luck. This is actually one of the only understandable pieces of encoding source code that I have found for image-to-video applications, other than https://www.jonolick.com/home/mpeg-video-writer. However, Jon Olick's source code uses lossy encoding and a minimum framerate (inherent to MPEG), both of which I am trying to avoid.
I've got a need to write a Linux application that does the following:
1- Continuously play a WAV file in the background, so that the entire time the application is running this background music plays.
2- Be able to play short sounds when certain events happen while the background music continues to play.
What is required to mix the additional event sounds in with the background music as they happen, so that both are heard at the same time?
I've never written Linux sound code, so this is ALL new to me. I'm assuming that I will need to write to the ALSA API? Or some other library that will facilitate this?
If somebody could provide sample code to get me started I would greatly appreciate it. After a few days I will create a bounty and provide a good deal of reputation for sample code that does what is needed.
You usually don't want to use ALSA API directly. It's hard to use, and not really portable (since ALSA is specific to Linux).
If you are using some specific framework in your application (like Qt or something like that), there may already be a counterpart sound library for playing sounds.
If you are looking for a good, general-use sound library, I suggest you take a look at SDL. It's quite nice, small and portable; very popular for games. They have quite a nice example code on their site to get you started.
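In fact SDL's companion library SDL_mixer handles your exact use case: Mix_PlayMusic loops the background track and Mix_PlayChannel mixes event sounds on top of it. A minimal sketch (file names are placeholders; error handling omitted):

    /* Sketch: looping background WAV plus event sounds with SDL_mixer,
       which does the mixing for you. File names are placeholders. */
    #include <SDL.h>
    #include <SDL_mixer.h>

    int main(void)
    {
        SDL_Init(SDL_INIT_AUDIO);
        Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024);

        Mix_Music *bg    = Mix_LoadMUS("background.wav");
        Mix_Chunk *event = Mix_LoadWAV("event.wav");

        Mix_PlayMusic(bg, -1);             /* -1 = loop forever */

        /* ...later, whenever an event happens: */
        Mix_PlayChannel(-1, event, 0);     /* mixed on top of the music */

        SDL_Delay(5000);                   /* demo only */
        Mix_FreeChunk(event);
        Mix_FreeMusic(bg);
        Mix_CloseAudio();
        SDL_Quit();
        return 0;
    }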
For playing sounds, one library I have used that is easy to learn and has a good example in its documentation is FMOD. The documentation that comes with the download has a very easy-to-understand example which you can modify to get your sounds playing very quickly.
Both OpenAL and libSoX have bindings for C, and both can play various formats.
Which one is superior in terms of simplicity, performance, overhead, and memory footprint?
Also, which one is better at handling multiple streams?
I have not programmed with either of those, but I believe that OpenAL has been designed to render and output multiple-channel audio for games, with real-time performance as a requirement.
libSoX is more for input and output from audio files, as well as for format conversions. There are lots of plugins but AFAIK it has not been designed for real-time audio output. It seems significantly simpler to use, though.
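To give a feel for the OpenAL side: playback boils down to uploading a PCM buffer to a source and playing it. A minimal sketch (the function and its parameters are my own placeholders):

    /* Sketch: minimal OpenAL playback — upload a PCM buffer, play it.
       pcm/nbytes/rate are placeholders for your own audio data. */
    #include <AL/al.h>
    #include <AL/alc.h>

    void play_pcm(const short *pcm, int nbytes, int rate)
    {
        ALCdevice  *dev = alcOpenDevice(NULL);      /* default output device */
        ALCcontext *ctx = alcCreateContext(dev, NULL);
        ALuint buf, src;

        alcMakeContextCurrent(ctx);
        alGenBuffers(1, &buf);
        alBufferData(buf, AL_FORMAT_MONO16, pcm, nbytes, rate);
        alGenSources(1, &src);
        alSourcei(src, AL_BUFFER, buf);
        alSourcePlay(src);
        /* ...wait for playback, then delete sources/buffers and
           tear down the context and device... */
    }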
You might also want to have a look at libsndfile.
What exactly is it that you want to do?
While I have never used OpenAL, I've heard a lot of bad things about it that make it sound unprofessional and basically a dead end. From Wikipedia:
OpenAL was originally developed by Loki Software in order to help them in their business of porting Windows games to Linux. After the demise of Loki, the project was maintained for a time by the free software/open source community, and implemented on NVIDIA nForce sound cards and motherboards. It is now hosted (and largely developed) by Creative Technology with on-going support from Apple, Blue Ripple Sound, and free software/open source enthusiasts.
While the OpenAL charter says that there will be an "Architecture Review Board" (ARB) modeled on the OpenGL ARB, no such organization has ever been formed and the OpenAL specification is generally handled and discussed via email on its public mailing list.
Since 1.1, the implementation by Creative has turned proprietary, with the last releases in free licenses still accessible through the project's subversion. However, OpenAL Soft is a widespread Open Source alternative.
There was also an issue with it messing up the state of the calling application: I believe just linking it caused some global constructors to run before the invocation of main, in a way that altered the program's initial environment and broke some programs (MPlayer, perhaps?). It's unclear to me whether this issue was ever fixed, but it screams bad library, and I would be skeptical of ever trusting a library that historically contained such abuses.
I need to create a WPF control that will play an RTP stream, with the requirement that the latency needs to be as low as possible.
I've looked at the following two projects:
http://vlcdotnet.codeplex.com/
http://wpfmediakit.codeplex.com/
As far as I know, I can't use VLC because we're shipping a commercial application with a more restrictive license than GPL (i.e. we can't ship our source).
WPF Media Kit is nice, but I can't seem to find a good, free RTP DirectShow source filter, and I wanted to ask whether there is a simpler solution out there that I'm missing before I jump into writing my own.
Any ideas?
VLC uses the LIVE555 library for the RTP/RTSP side of things, so perhaps that will be useful to you; it's licensed under the LGPL. It is a C++ library, so you'd have to break out P/Invoke, and since I haven't ever used the library I can't say how difficult that would be.
There is pjsip.net, but it looks like it's GPL, since that's what the underlying pjsip and pjmedia are.
Here's a handy list of RTP stacks.
There's no simple solution that I've come across. I have made RTSP filters in the past using LIVE555, but I don't think that falls into the realm of "easy".
I did see this on SourceForge, but I read comments questioning whether it even works.
I'm trying to write a program to randomly generate music based on a simple set of rules. I would like the program to be able to generate its own sounds, as opposed to having a file with audio for each note. Does anyone know a simple way of doing this? It would be nice (but not essential) for the sound to be polytonal, and I would like a solution for Linux, using C.
I suggest you try the PortAudio library. It is a lean cross-platform library that abstracts the audio-output functionality.
It comes with a bunch of small examples. One of them plays a single sine wave; another plays several sine waves at the same time. Since the examples already do 90% of what you need, you should have your audio up and running in less than half an hour.
Hint: the best documentation of PortAudio is in the header file!
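In the spirit of PortAudio's bundled sine-wave example, here is a condensed sketch of the callback approach (not the example verbatim; error handling omitted):

    /* Sketch: a callback fills the output buffer with a sine wave;
       the main thread just opens and starts the stream. */
    #include <portaudio.h>
    #include <math.h>

    #define SAMPLE_RATE 44100
    #define FREQ_HZ     440.0

    static int sine_cb(const void *in, void *out, unsigned long frames,
                       const PaStreamCallbackTimeInfo *t,
                       PaStreamCallbackFlags flags, void *user)
    {
        static double phase = 0.0;
        float *buf = (float *)out;
        (void)in; (void)t; (void)flags; (void)user;

        for (unsigned long i = 0; i < frames; i++) {
            buf[i] = (float)(0.2 * sin(phase));             /* mono */
            phase += 6.283185307179586 * FREQ_HZ / SAMPLE_RATE;
        }
        return paContinue;
    }

    int main(void)
    {
        PaStream *stream;
        Pa_Initialize();
        Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, SAMPLE_RATE,
                             256, sine_cb, NULL);
        Pa_StartStream(stream);
        Pa_Sleep(2000);                 /* play the tone for two seconds */
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }

Adding polyphony is then just a matter of summing several oscillators inside the callback.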
Here is an ALSA example that plays a pure sine-wave tone. Incidentally, I guess, it also demonstrates why you might not want to program directly against the ALSA library.
You can try to find a C MIDI sequencer (such as MIDI Sequencer). Also look into building .au-formatted audio files (i.e. look at the specs for the .au header and sound data format). You won't be able to use the .wav format, because it requires a length field in the header to be filled in before playback.
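The .au header is only 24 bytes, all big-endian, and the data-size field may be left as "unknown", which is exactly why it streams where .wav doesn't. A sketch of writing one (encoding 3 is 16-bit linear PCM):

    /* Sketch: the 24-byte .au header. All fields are big-endian. */
    #include <stdint.h>
    #include <stdio.h>

    static void write_be32(FILE *f, uint32_t v)
    {
        fputc(v >> 24, f); fputc(v >> 16, f); fputc(v >> 8, f); fputc(v, f);
    }

    void write_au_header(FILE *f, uint32_t sample_rate, uint32_t channels)
    {
        write_be32(f, 0x2E736E64);   /* magic ".snd" */
        write_be32(f, 24);           /* offset to audio data */
        write_be32(f, 0xFFFFFFFF);   /* data size unknown: fine for streaming */
        write_be32(f, 3);            /* encoding 3 = 16-bit linear PCM */
        write_be32(f, sample_rate);
        write_be32(f, channels);
        /* ...big-endian 16-bit PCM samples follow... */
    }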