How do you play sounds at different tones in C without using any external library? I know there are dozens of sound libraries for C that allow you to play sounds, but what I want to know is how that works behind the scenes. How do you tell the computer to play a certain note at a certain tone/frequency?
I know it's possible on Windows using the sound() function, but I can't find any documentation about Linux. All I found is the beep() function (or write(1, "\a", 1)), which outputs the default terminal beep, but I can't figure out how to play different tones.
The Linux kernel's native audio API is ALSA (Advanced Linux Sound Architecture).
Example of raw audio playback with ALSA:
https://gist.github.com/ghedo/963382/815c98d1ba0eda1b486eb9d80d9a91a81d995283
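To give a flavor of what that path looks like, here is a minimal sketch (not production code) that plays one second of a 440 Hz sine wave through alsa-lib; it assumes the "default" device and leaves out real error handling. Compile with gcc tone.c -lasound -lm.

    #include <alsa/asoundlib.h>
    #include <math.h>

    int main(void) {
        snd_pcm_t *pcm;
        /* Open the default playback device. */
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;
        /* Mono, 16-bit signed LE, 44100 Hz, resampling allowed, 500 ms max latency. */
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               1, 44100, 1, 500000) < 0)
            return 1;

        /* One second of a 440 Hz sine wave. */
        short buf[44100];
        for (int i = 0; i < 44100; i++)
            buf[i] = (short)(32767 * sin(2 * M_PI * 440.0 * i / 44100.0));

        snd_pcm_writei(pcm, buf, 44100); /* frames == samples for mono */
        snd_pcm_drain(pcm);              /* wait for playback to finish */
        snd_pcm_close(pcm);
        return 0;
    }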
However, ALSA is a low-level API, and higher-level applications are generally discouraged from using it directly.
A modern system audio API for GNU/Linux would be either PulseAudio (the current default on Ubuntu), or the newer and arguably better PipeWire (the default on Fedora).
Example of raw audio playback with PipeWire that generates audio "from scratch":
https://docs.pipewire.org/page_tutorial4.html
How do you tell the computer to play a certain note at a certain tone/frequency?
Sound is a mechanical vibration that propagates through the air (or another medium). It can be represented digitally as a sequence of numerical values representing air pressure, taken at a given sampling rate. To play a given tone/frequency, generate a sine wave of that frequency at the playback sampling rate, i.e. sample[n] = amplitude * sin(2 * pi * frequency * n / sample_rate), and use the sound API of your choice to play it.
See the PipeWire tutorial above for an example generating a 440Hz tone.
About PulseAudio/PipeWire:
These libraries are typically part of the OS and exposed as system APIs (so they are not "external libraries" in the sense of something you ship with your program or ask users to install), and they are what applications should use to play audio.
Behind the scenes, these libraries handle audio routing, mixing, echo cancellation, recording, and playback to the kernel through ALSA (or over Bluetooth, etc.): everything that users and developers expect from the system audio layer.
Until recently, PulseAudio was the de facto universal desktop system audio API, and many apps still use the PulseAudio API to play audio on GNU/Linux.
PipeWire provides PulseAudio compatibility, so apps using the PulseAudio API will keep working for the foreseeable future.
Example of raw audio playback with PulseAudio:
https://freedesktop.org/software/pulseaudio/doxygen/pacat-simple_8c-example.html
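For comparison, here is a minimal sketch of the same one-second 440 Hz tone through the PulseAudio "simple" API; the application and stream names ("tone-demo", "playback") are arbitrary, and error handling is reduced to bare exits. Compile with gcc tone.c -lpulse-simple -lm.

    #include <pulse/simple.h>
    #include <math.h>

    int main(void) {
        /* Mono, 16-bit signed LE, 44100 Hz. */
        pa_sample_spec ss = { .format = PA_SAMPLE_S16LE, .rate = 44100, .channels = 1 };
        int error;
        pa_simple *s = pa_simple_new(NULL, "tone-demo", PA_STREAM_PLAYBACK,
                                     NULL, "playback", &ss, NULL, NULL, &error);
        if (!s)
            return 1;

        /* One second of a 440 Hz sine wave. */
        short buf[44100];
        for (int i = 0; i < 44100; i++)
            buf[i] = (short)(32767 * sin(2 * M_PI * 440.0 * i / 44100.0));

        if (pa_simple_write(s, buf, sizeof(buf), &error) < 0) /* blocks until written */
            return 1;
        pa_simple_drain(s, &error); /* wait for playback to finish */
        pa_simple_free(s);
        return 0;
    }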
Related
I would like to use C to find out the last time the sound card was playing a file. Is there a way I could do that?
None of the components you are using (tools, libraries, sound servers, drivers, kernel) logs the time when a sound is played.
If you are using one specific tool to play sounds, you could modify it to log the time.
Otherwise, you have to actively monitor the current status of the sound device.
(With ALSA, you could poll /proc/asound/card*/pcm*/sub*/status.)
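A rough sketch of such a poll loop is below. The card/device/substream numbers in the path are an assumption and will differ per system; the loop simply looks for the RUNNING state that the status file reports while a stream is playing.

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        /* Adjust to your card/device/substream. */
        const char *path = "/proc/asound/card0/pcm0p/sub0/status";
        int was_running = 0;
        for (;;) {
            char buf[512] = {0};
            FILE *f = fopen(path, "r");
            if (!f)
                return 1;
            fread(buf, 1, sizeof(buf) - 1, f);
            fclose(f);
            /* The file contains "state: RUNNING" while playback is active. */
            int running = strstr(buf, "RUNNING") != NULL;
            if (running && !was_running)
                printf("playback started at %ld\n", (long)time(NULL));
            was_running = running;
            usleep(100000); /* poll every 100 ms */
        }
    }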
I think it's not possible, because ALSA (Advanced Linux Sound Architecture) is just the kernel component that provides device drivers for the sound card. I don't know whether user-space APIs and libraries like alsa-utils can do that. It may be better to check the logs of sound-player applications (VLC, etc.).
I am attempting to turn my four-wire apartment buzzer into a VoIP phone using a Raspberry Pi and a custom circuit. The problem is that two-way communication is not supported: I can either be listening or speaking. I want to use a standard SIP setup with Asterisk, but do VAD (voice activity detection) on the sound output of the Raspberry Pi in order to send a digital signal switching the intercom to "speak mode" whenever there is a voice on the audio output. Is there any pre-existing C function or include that listens to the ALSA mixer and outputs a 1 for speech and a 0 for absence of speech, with sufficiently low latency to be used in this walkie-talkie-like system?
Once again, I would prefer pre-existing libraries and, because this is live, low latency.
ALSA's mixer interface only consists of mixer-related controls; it's meant to abstract away the hardware driver. What you will be able to do is get the audio data from ALSA in real time; however, you will need to implement your own voice activity detection.
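Reading the audio data out of ALSA in real time could look roughly like the sketch below (capture at 16 kHz mono in 20 ms blocks, with minimal error handling; the rate and block size are assumptions you would tune):

    #include <alsa/asoundlib.h>

    int main(void) {
        snd_pcm_t *pcm;
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
            return 1;
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               1, 16000, 1, 100000) < 0)
            return 1;
        short block[320]; /* 20 ms of audio at 16 kHz */
        for (;;) {
            snd_pcm_sframes_t n = snd_pcm_readi(pcm, block, 320);
            if (n < 0)
                snd_pcm_recover(pcm, (int)n, 0); /* recover from overruns */
            /* feed `block` to the voice activity detector here */
        }
    }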
This question on Signal Processing SE has a few good suggestions for libraries and codec implementations to get you started.
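If you want a stopgap while evaluating those, a naive energy-based detector is only a few lines. This is not real speech detection (any loud sound triggers it), and the threshold is something you would have to tune empirically, but it shows where a proper VAD would plug in:

    #include <math.h>
    #include <stddef.h>

    /* Returns 1 if the RMS energy of a block of 16-bit samples
     * exceeds the threshold, else 0. */
    int naive_vad(const short *samples, size_t n, double rms_threshold) {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += (double)samples[i] * (double)samples[i];
        return sqrt(sum / (double)n) > rms_threshold;
    }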
Can I detect insertion and removal of a headset using the ALSA API? Which API should I use? My kernel is Linux 3.0, running on ARM.
Having wanted precisely this functionality for an embedded project, I did some investigating and came to the conclusion (about 6 months ago) that there isn't any generic support in ALSA for jack insertion detection.
Interestingly, I did find headphone (and microphone) detection support in the codec driver I was using (tlv320aic3xxx), but it didn't seem to be plumbed into any upper layers. I suspect that the reason this exists is Android.
There are essentially two ways of adding this support:
Add support to the codec driver - probably exposing a sysfs node that something in user-space can then block on (a user-space sketch of that blocking pattern follows below).
Force access to the I2C bus on which the codec is hung (the codec driver usually 'owns' the device) and programme the relevant registers from user-space.
You may face an additional architectural problem: whilst the codec can detect insertion events, it needs some way of interrupting the CPU. The tlv320aic3xxx devices have programmable GPIO pins, which can be connected to an interrupt line on the main CPU (on an embedded platform, this will be another GPIO line). Without this, you'll need to poll it.
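For the first option, the usual user-space counterpart is to block on the sysfs attribute with poll(2). Here is a sketch; the node path is made up, since a real driver would define its own:

    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Hypothetical node; a real driver would choose its own path. */
        int fd = open("/sys/class/sound/jack0/state", O_RDONLY);
        if (fd < 0)
            return 1;
        char buf[16];
        read(fd, buf, sizeof(buf)); /* initial read arms the notification */
        struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };
        for (;;) {
            if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLPRI)) {
                lseek(fd, 0, SEEK_SET); /* re-read the attribute after a change */
                ssize_t n = read(fd, buf, sizeof(buf) - 1);
                if (n > 0) {
                    buf[n] = '\0';
                    printf("jack state: %s", buf);
                }
            }
        }
    }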
I am building a small OS, and I want it to play audio. I can use ports and interrupts; I have no restrictions, as I am building the OS itself.
So how can I play an audio file using a C program? Please remember that I cannot, and don't want to, use any ready-made library.
If you're building your own OS, you also need to care about the physical details of your audio hardware.
Differences in hardware are why operating systems introduced the concept of the device driver.
If your audio hardware is sound blaster compatible, have a look here:
http://www.phatcode.net/res/243/files/sbhwpg.pdf
There's an archive with lots of close-to-the-hardware audio code (for various hardware platforms) here:
http://www.dcee.net/Files/Programm/Sound/
Here's a chapter on programming sound devices:
http://www.oreilly.de/catalog/multilinux/excerpt/ch14-01.htm
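If you just want a first audible result from nothing but port I/O before tackling a real codec or Sound Blaster DSP, the legacy PC speaker is the simplest target on x86: program PIT channel 2 as a square-wave generator and gate it onto the speaker. This sketch assumes your code runs with ring-0 I/O access, and it only produces square-wave beeps, not sampled audio:

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val) {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port) {
        uint8_t val;
        __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    void speaker_tone(uint32_t freq_hz) {
        if (freq_hz == 0)
            return;
        uint16_t divisor = (uint16_t)(1193182u / freq_hz); /* PIT base clock / f */
        outb(0x43, 0xB6);                  /* PIT channel 2, lo/hi byte, mode 3 */
        outb(0x42, divisor & 0xFF);        /* divisor low byte  */
        outb(0x42, (divisor >> 8) & 0xFF); /* divisor high byte */
        outb(0x61, inb(0x61) | 0x03);      /* gate PIT ch.2 onto the speaker */
    }

    void speaker_off(void) {
        outb(0x61, inb(0x61) & ~0x03);
    }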
I am researching a project in which I need to play back a multi-track audio source (>30 mono channels) simultaneously. The audio on all channels needs to start at the same time and be sustained for hours of playback.
What is the best audio API to use for this? WDM and ASIO have come up in my searches. I will be using a MOTU PCI Audio interface to get this many channels. The channels show up as normal audio channels in the host PC.
ASIO is definitely the way to go about this. It will keep everything properly in sync, with low latency, and is the de facto industry-standard way to do it. Any pro audio interface supports ASIO, and for interfaces that don't, there is a wrapper capable of syncing multiple devices.