Reading WAV files for mono conversion (Minix 3) - C

I'm about to start working on a project for Minix 3 (in C).
My idea is to create some kind of music player. I want to be able to read WAV files and then convert them to a stream of frequencies sent to Timer 2.
Since, as far as I know, there is no easy way to play real music files, I thought of approximating the real frequencies in each block with a simple mono curve sent to Timer 2.
Ok, issues:
I have read up on how to parse WAV headers, but I can't find anywhere what the data in the data chunk means. How should I interpret it?
My initial idea was to make a real music player, but in my classes we didn't learn how to work with the sound card in Minix 3. Is there a tutorial, or anything else, where I can learn this?
As far as I can tell, C already has a library for managing sound (BASS). Can I install it in Minix 3, and if so, how?
Finally, is there a way to make all this simpler?

A WAV file is not a "stream of frequencies". It contains a series of samples formatted according to the information written in the header.
In the best of worlds you just set up your sound card to handle the data format specified in the header, and then you only have to keep feeding the raw data from the "data" chunk into your sound card's data buffers.
How this is done in Minix 3 is outside the scope of this answer (I just don't know how Minix handles sound at all), but I'm sure it will be of great help for understanding the basics of digital audio.
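To make "samples" concrete, here is a minimal sketch (my own illustration, not Minix-specific: it assumes a canonical 44-byte header, 16-bit PCM and a little-endian machine). The data chunk is just a list of signed amplitude values, interleaved per channel, so a stereo file can be mixed down to mono by averaging the two channels of every sample frame:

    #include <stdint.h>
    #include <stdio.h>

    /* Canonical 44-byte WAV header (plain PCM, no extra chunks). Real files
       can carry additional chunks before "data" that a robust reader skips. */
    struct wav_header {
        char     riff[4];        /* "RIFF" */
        uint32_t chunk_size;
        char     wave[4];        /* "WAVE" */
        char     fmt[4];         /* "fmt " */
        uint32_t fmt_size;       /* 16 for PCM */
        uint16_t audio_format;   /* 1 = PCM */
        uint16_t num_channels;
        uint32_t sample_rate;
        uint32_t byte_rate;
        uint16_t block_align;    /* bytes per sample frame */
        uint16_t bits_per_sample;
        char     data[4];        /* "data" */
        uint32_t data_size;      /* number of sample bytes that follow */
    };                           /* fields are little-endian, like x86 */

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s file.wav\n", argv[0]); return 1; }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        struct wav_header h;
        if (fread(&h, sizeof h, 1, f) != 1 || h.audio_format != 1 ||
            h.bits_per_sample != 16 || h.num_channels > 2 || h.block_align == 0) {
            fprintf(stderr, "expected canonical 16-bit PCM, mono or stereo\n");
            return 1;
        }

        /* The data chunk is interleaved amplitude values:
           L R L R ... for stereo, one value per sample frame for mono. */
        uint32_t frames = h.data_size / h.block_align;
        for (uint32_t i = 0; i < frames; i++) {
            int16_t ch[2] = { 0, 0 };
            if (fread(ch, sizeof(int16_t), h.num_channels, f) != h.num_channels)
                break;
            int16_t mono = (h.num_channels == 2)
                         ? (int16_t)(((int32_t)ch[0] + ch[1]) / 2)  /* average L+R */
                         : ch[0];
            /* 'mono' is one amplitude at h.sample_rate Hz; feed it to whatever
               output you have (a sound card buffer, or a timer-driven speaker). */
            (void)mono;
        }
        fclose(f);
        return 0;
    }

That loop is also where you would later downsample or rescale the values to whatever your output stage expects.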

Related

How to convert JPEGs to video with fixed fps?

I have a series of JPEGs that I would like to pack and compress into a video.
I use the tool MPEG Streamclip, but it doubles the whole play time.
If I have 300 JPEGs and set a fixed fps of 30, I expect to get a video 10 s long, but using Streamclip I get a 20 s video.
One answer is to get someone who understands programming. The programming APIs (application programming interfaces, the way client programs call libraries) of the big libraries like ffmpeg have ways in which the frame rate can be controlled, and it's usually quite a simple matter to modify a program to produce fewer intermediate frames if you are creating a video from a list of JPEGs.
But the best answer is probably to find a tool that supports what you want to do. That's not especially a question for a programmer; ask someone who knows about video editing. (It would take me about two days to write such a tool from scratch on top of my own JPEG codec and ffmpeg, so obviously I can't do it in response to this question, but that's roughly the level of work you're looking at.)
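If you do go the programming route, the amount of code can be tiny when you let the ffmpeg command-line tool do the heavy lifting. A rough sketch (assuming ffmpeg is installed and the images are named frame0001.jpg, frame0002.jpg, ...); the -framerate option on the image input is what decides whether 300 frames become a 10 s or a 20 s video:

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: drive the ffmpeg CLI from C with an explicit input frame rate.
       Assumes ffmpeg is on PATH and the images are named frame0001.jpg, ... */
    int main(void)
    {
        int fps = 30;  /* 300 JPEGs at 30 fps -> a 10 s video */
        char cmd[512];

        snprintf(cmd, sizeof cmd,
                 "ffmpeg -framerate %d -i frame%%04d.jpg "
                 "-c:v libx264 -pix_fmt yuv420p out.mp4", fps);

        return system(cmd) == 0 ? 0 : 1;
    }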

How to filter specific frequencies from an Audio file in C

After searching on various search engines, and also here, I found very little information applicable to my situation.
Basically I want to make a program in C that does the following:
Open an audio file (FLAC, MP3 and WAV, to represent a bit of variety)
Filter and cut out a specific set of frequencies (for example 4000-5200 Hz; the frequencies should be entered at a prompt)
Save the new file (without the filtered frequencies) in the same format as the input file.
Things that would be of interest to me:
Open-Source examples of software that does the same or a similar thing, preferably in C
ANY literature on audio programming in C
Explanations on how the different formats are structured, any sources appreciated
PS: I apologise if some parts of the question can be easily googled, but I tried, and I couldn't find anything that describes this well in detail.
Thanks a lot!!
Answers:
FFmpeg does a lot of audio slicing and dicing, and it's written in pure C. It's pretty big, though, and might be difficult to digest in one go.
"Audio programming" is a bit vague. But from the rest of your question, it sounds like you want to open an audio file from disk, apply some transformations to the audio, and write the data to a new file. (Other areas under the "audio programming" umbrella would include accessing platform-specific APIs to read from a microphone and write audio to an output device).
Broad topic again, but we'll start simple.
I suggest getting (or generating) a .WAV file to start with. WAV files are probably the simplest audio files to read and write manually. Here is a page that describes what you need to know about the WAV format.
Pulse code modulation (PCM) is the simplest audio format to work with since you don't need to worry about decompressing it first. Here is a page (that I wrote) describing different PCM formats.
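If you don't have a test file handy, a small sketch like this (my own illustration; the file name tone.wav and the 440 Hz tone are arbitrary choices) writes a one-second 16-bit mono PCM WAV you can experiment with (build with -lm for sin()):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* WAV fields are little-endian; for brevity these helpers assume a
       little-endian host and just write the values as stored in memory. */
    static void w16(FILE *f, uint16_t v) { fwrite(&v, 2, 1, f); }
    static void w32(FILE *f, uint32_t v) { fwrite(&v, 4, 1, f); }

    int main(void)
    {
        const uint32_t rate = 44100, nsamples = 44100;  /* 1 second */
        const uint32_t data_size = nsamples * 2;        /* 16-bit mono */
        const double   pi = 3.14159265358979323846;

        FILE *f = fopen("tone.wav", "wb");
        if (!f) return 1;

        fwrite("RIFF", 1, 4, f);  w32(f, 36 + data_size);  fwrite("WAVE", 1, 4, f);
        fwrite("fmt ", 1, 4, f);  w32(f, 16);              /* PCM fmt chunk size  */
        w16(f, 1);                w16(f, 1);               /* PCM, mono           */
        w32(f, rate);             w32(f, rate * 2);        /* sample & byte rate  */
        w16(f, 2);                w16(f, 16);              /* block align, bits   */
        fwrite("data", 1, 4, f);  w32(f, data_size);

        for (uint32_t i = 0; i < nsamples; i++) {
            /* 440 Hz sine at a comfortable amplitude */
            int16_t s = (int16_t)(20000.0 * sin(2.0 * pi * 440.0 * i / rate));
            fwrite(&s, sizeof s, 1, f);
        }
        fclose(f);
        return 0;
    }

Generating your own input like this also gives you a known signal, which makes it much easier to tell whether a filter is actually doing what you think it is.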
As for filtering and cutting different frequencies, I think what you're looking for would be low-pass, high-pass, or band-pass filters.
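As a starting point, a single-pole low-pass over the decoded 16-bit samples takes only a few lines (a sketch, not a production filter; removing a band such as 4000-5200 Hz would normally be done with biquad sections or in the frequency domain):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch: one-pole (RC) low-pass over 16-bit PCM samples, in place.
       Attenuates content above cutoff_hz. */
    void lowpass_inplace(int16_t *samples, size_t n,
                         double sample_rate, double cutoff_hz)
    {
        const double pi = 3.14159265358979323846;
        double rc = 1.0 / (2.0 * pi * cutoff_hz);
        double dt = 1.0 / sample_rate;
        double alpha = dt / (rc + dt);

        if (n == 0)
            return;

        double y = samples[0];
        for (size_t i = 1; i < n; i++) {
            y += alpha * ((double)samples[i] - y);  /* y[n] = y[n-1] + a*(x[n] - y[n-1]) */
            samples[i] = (int16_t)y;
        }
    }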
I hope that helps you get started. Ask more questions here on Stack Overflow as needed.

Midi C Library for creating a Game

I'm creating a simple MIDI-based game in C, and I am wondering if there are any libraries to load a MIDI file, play it, and also manipulate it by getting the note values.
Thanks in advance!
Check out SDL. More specifically, SDL_mixer.
Description:
SDL_mixer is a sample multi-channel audio mixer library.
It supports any number of simultaneously playing channels of 16 bit stereo audio, plus a single channel of music, mixed by the popular MikMod MOD, Timidity MIDI, Ogg Vorbis, and SMPEG MP3 libraries.
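If plain playback is all you need, the SDL_mixer calls involved are few. A rough sketch (SDL 1.2-style includes shown; the same calls exist in SDL2_mixer under the SDL2/ headers):

    #include <SDL/SDL.h>
    #include <SDL/SDL_mixer.h>

    /* Sketch: play a MIDI file with SDL_mixer.
       Build roughly as: gcc play.c `sdl-config --cflags --libs` -lSDL_mixer */
    int main(int argc, char **argv)
    {
        if (argc != 2) return 1;

        if (SDL_Init(SDL_INIT_AUDIO) != 0) return 1;
        if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) != 0) return 1;

        Mix_Music *music = Mix_LoadMUS(argv[1]);   /* .mid handled via Timidity */
        if (!music) return 1;

        Mix_PlayMusic(music, 1);                   /* loop count of 1 */
        while (Mix_PlayingMusic())
            SDL_Delay(100);                        /* wait until playback ends */

        Mix_FreeMusic(music);
        Mix_CloseAudio();
        SDL_Quit();
        return 0;
    }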
If you want to manipulate the MIDI and process its contents yourself rather than just playing it, SDL_mixer might not be what you want. In that case I would just read the spec and write your own code. MIDI is an extremely simple format, and you can probably write whatever code you need in 15 minutes or so... :-)
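To give a feel for how small the format is: a standard MIDI file starts with an MThd chunk holding the format, the track count and the time division, and the notes themselves live in the MTrk chunks that follow as delta-time/event pairs. A sketch of reading the header (error handling kept minimal):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* MIDI files are big-endian, so read 16-bit values byte by byte. */
    static uint16_t read_be16(FILE *f)
    {
        int hi = fgetc(f), lo = fgetc(f);
        return (uint16_t)((hi << 8) | (lo & 0xFF));
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;

        char tag[4];
        if (fread(tag, 1, 4, f) != 4 || memcmp(tag, "MThd", 4) != 0) {
            fprintf(stderr, "not a standard MIDI file\n");
            return 1;
        }
        fseek(f, 4, SEEK_CUR);               /* header chunk length, always 6 */

        uint16_t format   = read_be16(f);    /* 0, 1 or 2 */
        uint16_t ntracks  = read_be16(f);
        uint16_t division = read_be16(f);    /* ticks per quarter note (if top bit clear) */

        printf("format %u, %u track(s), division %u\n", format, ntracks, division);

        /* The MTrk chunks that follow hold the actual notes, as variable-length
           delta times followed by events (0x9n = note on, 0x8n = note off). */
        fclose(f);
        return 0;
    }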
I realize this question has already been answered, but I would have to agree with @R.. in that SDL is probably overkill for what you are trying to do. You should also take a look at jdksmidi, a much smaller MIDI library written in C++.

Audio (mp3) editing in C

What I wish to do is to "split" an .mp3 into two separate files, taking duration as an argument to specify how long the first file should be.
However, I'm not entirely sure how to achieve this in C. Is there a particular library that could help with this? Thanks in advance.
I think you should use the GStreamer framework for this purpose. You can write an application that uses existing plugins to do the job. I am not sure of this, but you can give it a try. Check out http://gstreamer.freedesktop.org/
For queries related to gstreamer: http://gstreamer-devel.966125.n4.nabble.com/
If you don't find any library in the end, then understand the header of an MP3 file, divide the original file into any number of parts, and add individual headers to them. It's not going to be easy, but it's possible.
It can be as simple as cutting the file at a position determined by the MP3 bitrate and the duration requested (a quick sketch of the arithmetic follows below). But:
your file should be CBR - that method won't work on ABR or VBR files, because their frames have different densities
your player should be robust enough not to break if it gets a partial MP3 frame at the start. Most playback libraries will handle MP3s split that way very gracefully.
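Here is that back-of-the-envelope arithmetic as a sketch (CBR assumed, as above, and any leading ID3v2 tag ignored); the cut point is simply bytes-per-second times the requested duration:

    #include <stdio.h>

    /* Sketch: byte offset at which to cut a CBR MP3, ignoring any leading
       ID3v2 tag. bitrate is in kbit/s, duration in seconds. */
    static long cut_offset(int bitrate_kbps, double duration_s)
    {
        double bytes_per_second = bitrate_kbps * 1000.0 / 8.0;
        return (long)(bytes_per_second * duration_s);
    }

    int main(void)
    {
        /* e.g. a 128 kbit/s file, first part should be 60 s long */
        printf("cut at byte %ld\n", cut_offset(128, 60.0));
        return 0;
    }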
If you need more info, just ask, I am on the mobile and can get you more info later.
If you want to be extra precise when cutting, you can use a library to parse the MP3 frame headers and then write out only the frames you need. That way, as with the method above, the best you'll get is frame alignment, and you'll have to live with that; a frame is a few tens of milliseconds.
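A sketch of that frame-level approach, assuming MPEG-1 Layer III (other versions and layers use different tables and constants): read the 4-byte header, decode the bitrate and sample rate, compute the frame length, and copy whole frames until you pass the requested duration:

    #include <stdint.h>

    /* MPEG-1 Layer III tables: bitrate index -> kbit/s, sample rate index -> Hz. */
    static const int kbitrate[16] = { 0, 32, 40, 48, 56, 64, 80, 96,
                                      112, 128, 160, 192, 224, 256, 320, 0 };
    static const int ksamplerate[4] = { 44100, 48000, 32000, 0 };

    /* Sketch: size in bytes of an MPEG-1 Layer III frame, from its 4-byte
       header. Returns 0 if the header doesn't look valid. */
    int mp3_frame_size(const uint8_t h[4])
    {
        if (h[0] != 0xFF || (h[1] & 0xE0) != 0xE0)    /* 11-bit frame sync */
            return 0;
        if ((h[1] & 0x18) != 0x18 || (h[1] & 0x06) != 0x02)
            return 0;                                 /* MPEG-1, Layer III only */

        int bitrate    = kbitrate[(h[2] >> 4) & 0x0F] * 1000;
        int samplerate = ksamplerate[(h[2] >> 2) & 0x03];
        int padding    = (h[2] >> 1) & 0x01;
        if (bitrate == 0 || samplerate == 0)
            return 0;

        return 144 * bitrate / samplerate + padding;  /* 1152 samples per frame */
    }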
If that isn't enough, you must decode the MP3 to PCM, split at a sample boundary, and then recompress to MP3 again.
Good luck
P.S.
When you say split, I hope you don't expect the parts to play one after another with no audible artifacts. MP3 frames aren't self-contained; they carry information from the frame before.

How are files (especially audio files) organized internally?

I'm trying to grok this: Apple talks about "packets" in audio files, and there is a fancy function called AudioFileReadPackets which takes a lot of arguments. One of them specifies the "start packet", and another one the number of packets you want to read.
So I imagine an audio file to look like this internally: it's made up of a lot of packets. If it's an audio file in a variable bit rate format, then every packet may have a different size. If the file is in a constant bit rate format, then every packet is the same size. So an audio file is like a truck full of boxes, and every box contains some interesting stuff.
Is that correct? Does it apply to any kind of file? Is this what files actually look like?
The question (even with the "especially audio files" qualification) is far too broad; different file formats are, well, different!
So to answer the question you will first have to specify a particular file type; then the answer will invariably be to look at its specification. Proprietary formats may not have a publicly available specification.
Specifications for many files (official and reverse engineered) can be found at the brilliant Wotsit's Format site.
AAC used by Apple iTunes and others is defined by ISO/IEC 13818-7:2006. The document will cost you 252 Swiss Francs (about US$233)! You'd have to be really interested (commercially) to pay that rather than use an existing AAC Codec.
"Packet" is a term commonly used in data transmission, so may be more applicable to audio streaming than audio files, where a "frame" may be more appropriate, or for data files in general a "record", but the terminology is flexible because it means whatever the person that wrote it thought it meant! If enough people misuse a term, it essentially becomes redefined (or multiply defined) to mean that, so I would not get too hung up on that. The author was do doubt using it to define a unit that has a defined format within a file that has multiple such units repeated sequentially.
"Packet" looks to me like Apple-specific terminology. I just did a lot of reading and coding to process WAV and MP3 files and I don't believe I saw the term "packet" once.
Files contain whatever the application that created them chose to place in them. Files are essentially a sequence of bytes. Any further organisation is a semantic distinction made by the program that created them. It is wrong to think of all files as having the same structure.
That said, certain data storage problems are similar enough to be solved in similar ways, and patterns start to emerge. Splitting data into records or packets is an example of that.
That's pretty much what audio files look like: a series of chunks of data, or frames. AudioFileReadPacketData and AudioFileReadPackets shield you from the details of, for instance, how big a frame might be in bytes (because you might be reading from a WAV file, which has a different structure to an MP3 file, or your MP3 file uses a variable bit rate).
The concept of frames doesn't apply in general to any file, but then you wouldn't be using the Audio File Services API to access just any old file.
For MP3 (and MP1, MP2) the file consists of frames. And yes, your understanding is correct - in VBR files the packets have different sizes. In WAV files the packets have the same length, if memory serves (I wrote a decoder/player 11 years ago).
