Here's an interesting question:
Suppose I have an audio recording of a C major chord (C-E-G) played on the piano, which I would like to split into three separate audio files - one with only C, one with only E, and one with only G playing (even a single audio file of a single note would suffice). I know software like Melodyne is capable of doing such things, but would love for anyone to guide me in the right direction here.
What I've already tried: writing a neural network to do this (but honestly that feels like overkill; this shouldn't be that challenging), and playing with the STFT to try to figure something out from there.
Any help would be greatly appreciated!
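As a sketch of the frequency-analysis side of this (not a full separation - the three notes share overlapping harmonics, which is exactly what makes tools like Melodyne non-trivial): the Goertzel algorithm can measure the energy near each note's fundamental in a block of samples. The buffer, sample rate and note frequencies below are assumptions for illustration.

    #include <math.h>
    #include <stdio.h>
    #include <stddef.h>

    /* Goertzel algorithm: energy of signal x (n samples, sample rate fs)
     * at a single target frequency. Cheaper than a full FFT when only a
     * few frequencies are of interest. */
    static double goertzel_power(const float *x, size_t n, double freq, double fs)
    {
        const double pi = 3.14159265358979323846;
        double k = floor(0.5 + (n * freq) / fs);   /* nearest DFT bin */
        double w = 2.0 * pi * k / (double)n;
        double coeff = 2.0 * cos(w);
        double s1 = 0.0, s2 = 0.0;

        for (size_t i = 0; i < n; i++) {
            double s = x[i] + coeff * s1 - s2;
            s2 = s1;
            s1 = s;
        }
        return s2 * s2 + s1 * s1 - coeff * s1 * s2;
    }

    /* Example: probe the fundamentals of C4, E4 and G4 in one analysis block.
     * 'block' is assumed to hold mono float samples at 44.1 kHz. */
    void probe_chord(const float *block, size_t n)
    {
        const double fs = 44100.0;
        const double notes[3] = { 261.63, 329.63, 392.00 };  /* C4, E4, G4 */
        const char *names[3]  = { "C4", "E4", "G4" };

        for (int i = 0; i < 3; i++)
            printf("%s: %g\n", names[i], goertzel_power(block, n, notes[i], fs));
    }

Actual separation would then mean building a time-frequency mask around each note's partials (fundamental plus harmonics) in the STFT and resynthesizing each masked spectrogram, which is where it stops being simple.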
I would like to get into the field of FPGAs a bit. I currently have a PolarFire Everest Dev Board and would like to try something small on it for testing purposes. My current level is very low, i.e. complete beginner. My first working project was a counter that counts in binary up to 15 and outputs the value on the board's LEDs. Now I want to play with RISC-V. Unfortunately, I can't find anything on the internet that meets my expectations, and almost nothing is "beginner friendly". My current goal is really just to implement something at the level of a Hello World program in C via SoftConsole. Unfortunately, I have no idea how to go about it. Can anyone help me or recommend a good introduction on the internet? Most of the material is either unusable, requires licenses I can't get, or is simply no longer available (which happens to me quite often with PDFs from Microsemi).
Since I don't really know yet what I could start with, I don't have any code that I would like to include. The plan is really to create something where I also get feedback from the board that something has happened. Later, when I understand more, it should also manage SRAMs.
I have a series of JPEGs that I would like to pack and compress into a video.
I use the tool MPEG Streamclip, but it doubles the whole play time.
If I have 300 JPEGs and set a fixed frame rate of 30 fps, I expect to get a video 10 s long, but using Streamclip I get a 20 s video.
One answer is to get someone who understands programming. The programming APIs (application programming interfaces, the way client programs call libraries) of the big libraries like FFmpeg have ways in which the frame rate can be controlled, and it's usually quite a simple matter to modify a program to produce fewer intermediate frames if you are creating a video from a list of JPEGs.
But the best answer is probably to find a tool that supports what you want to do. That's not really a question for a programmer; ask someone who knows about video editing. (It would take me about two days to write such a tool from scratch on top of my own JPEG codec and FFmpeg, so obviously I can't do it in response to this question, but that's roughly the level of work you're looking at.)
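For reference, the FFmpeg command-line tool can already do this directly. Something along these lines (the file-name pattern and codec choice are just illustrative, assuming frames named frame001.jpg, frame002.jpg, ...) should turn 300 frames at 30 fps into a 10-second clip:

    ffmpeg -framerate 30 -i frame%03d.jpg -c:v libx264 -pix_fmt yuv420p output.mp4

The -framerate option sets the rate at which the input images are read, which is usually where tools that "double the play time" get it wrong (they assume a slower input rate and duplicate frames to reach the output rate).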
[wmv2 @ 0xb42400] warning, clipping 1 dct coefficients to -255..255
I'm modifying some code in a C API that sits between FFmpeg and an AS3 AIR application to encode a video after creating something.
WMV was working okay earlier, but now that I've set things back I'm getting this very peculiar warning from the C library. It doesn't make any sense to me, and Google isn't providing many answers.
I was wondering if anyone out there knows what this warning is about. When the file comes back to me it's totally empty, with 0 frames. I must have changed something, so I'm comparing the file from a few days ago with this one, looking for anything that may have caused it to stop working. But I was wondering if anyone had any better ideas than blindly looking through old and new code.
This particular warning comes from the FFmpeg core code. It is just warning that it had to perform an adjustment on some bits of the video stream in order to successfully decode it. It might help to understand that the WMV2 algorithm (being decoded, per your error message) was reverse engineered from binary code and reimplemented in FFmpeg, which is why things like this slip through the cracks.
After searching various search engines, and also here, I've found very little information applicable to my situation.
Basically I want to make a program in C that does the following:
Open an audio file (FLAC, MP3, and WAV, to represent a bit of variety)
Filter out a specific range of frequencies (for example 4000-5200 Hz; the frequencies should be entered when the program asks for them)
Save the new file (without the filtered frequencies) in the same format as the input file.
Things that would be of interest to me:
Open-Source examples of software that does the same or a similar thing, preferably in C
ANY literature on audio programming in C
Explanations of how the different formats are structured; any sources appreciated
P.S.: I apologise if some parts of the question can be easily googled, but I tried, and there wasn't anything that described this well in detail.
Thanks a lot!!
Answers:
FFmpeg does a lot of audio slicing and dicing, and it's written in pure C. It's pretty big, though, and might be difficult to digest in one go.
"Audio programming" is a bit vague. But from the rest of your question, it sounds like you want to open an audio file from disk, apply some transformations to the audio, and write the data to a new file. (Other areas under the "audio programming" umbrella would include accessing platform-specific APIs to read from a microphone and write audio to an output device).
Broad topic again, but we'll start simple.
I suggest getting (or generating) a .WAV file to start with. WAV files are probably the simplest audio files to read and write manually. Here is a page that describes what you need to know about the WAV format.
Pulse code modulation (PCM) is the simplest audio format to work with since you don't need to worry about decompressing it first. Here is a page (that I wrote) describing different PCM formats.
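As a sketch of what the start of a canonical PCM WAV file looks like in C (this assumes the simple 44-byte layout with a 16-byte "fmt " chunk followed immediately by "data", and a little-endian host; real files can carry extra chunks, so a robust reader should walk the chunks instead of using a fixed struct):

    #include <stdint.h>
    #include <stdio.h>

    /* Canonical 44-byte PCM WAV header (assumes no extra chunks). */
    #pragma pack(push, 1)
    typedef struct {
        char     riff[4];         /* "RIFF" */
        uint32_t riff_size;       /* file size - 8 */
        char     wave[4];         /* "WAVE" */
        char     fmt[4];          /* "fmt " */
        uint32_t fmt_size;        /* 16 for PCM */
        uint16_t audio_format;    /* 1 = PCM */
        uint16_t num_channels;    /* 1 = mono, 2 = stereo */
        uint32_t sample_rate;     /* e.g. 44100 */
        uint32_t byte_rate;       /* sample_rate * num_channels * bits/8 */
        uint16_t block_align;     /* num_channels * bits/8 */
        uint16_t bits_per_sample; /* e.g. 16 */
        char     data[4];         /* "data" */
        uint32_t data_size;       /* number of bytes of sample data */
    } WavHeader;
    #pragma pack(pop)

    int read_wav_header(FILE *f, WavHeader *h)
    {
        return fread(h, sizeof *h, 1, f) == 1 ? 0 : -1;
    }

The sample data then follows immediately: data_size bytes of interleaved PCM samples in exactly the format the header describes.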
As for filtering and cutting out specific frequencies, what you're looking for is a digital filter: low-pass, high-pass, band-pass, or, for removing a range like 4000-5200 Hz, a band-stop (notch) filter.
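A minimal sketch of such a band-stop filter, using the well-known RBJ "Audio EQ Cookbook" biquad coefficients; the centre frequency, Q, and sample rate in the usage comment are just illustrative values for the 4000-5200 Hz example:

    #include <math.h>

    /* Biquad band-stop (notch) filter, direct form I.
     * Coefficient formulas from the RBJ Audio EQ Cookbook. */
    typedef struct {
        double b0, b1, b2, a1, a2;   /* normalised coefficients */
        double x1, x2, y1, y2;       /* filter state */
    } Notch;

    void notch_init(Notch *n, double fs, double f0, double Q)
    {
        const double pi = 3.14159265358979323846;
        double w0    = 2.0 * pi * f0 / fs;
        double alpha = sin(w0) / (2.0 * Q);
        double a0    = 1.0 + alpha;

        n->b0 = 1.0 / a0;
        n->b1 = -2.0 * cos(w0) / a0;
        n->b2 = 1.0 / a0;
        n->a1 = -2.0 * cos(w0) / a0;
        n->a2 = (1.0 - alpha) / a0;
        n->x1 = n->x2 = n->y1 = n->y2 = 0.0;
    }

    double notch_process(Notch *n, double x)
    {
        double y = n->b0 * x + n->b1 * n->x1 + n->b2 * n->x2
                 - n->a1 * n->y1 - n->a2 * n->y2;
        n->x2 = n->x1; n->x1 = x;
        n->y2 = n->y1; n->y1 = y;
        return y;
    }

    /* Example: a notch centred at 4600 Hz with Q ~ 4, at 44.1 kHz,
     * roughly covering 4000-5200 Hz (illustrative values only):
     *   Notch n; notch_init(&n, 44100.0, 4600.0, 4.0);
     *   for each sample s: s = notch_process(&n, s);
     */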
I hope that helps you get started. Ask more questions here on Stack Overflow as needed.
I'm about to start working on a project for Minix 3 (in C).
My idea is to create some kind of music player. I want to be able to read files (WAV) and then convert them to a stream of frequencies sent to Timer 2.
Since, as far as I know, there is no easy way to play real music files, I thought of approximating the real frequencies in each block with a simple mono curve sent to Timer 2.
Ok, issues:
I read up and learned how to read WAV headers, but I can't find anywhere what the data in the data chunk means. How should I interpret it?
My initial idea was to make a real music player, but in my classes we didn't learn how to work with the sound card in Minix 3. Is there a tutorial, or anything else where I can learn that?
As far as I can tell, C already has a library for managing sound (BASS). Can I install it in Minix 3, and how?
Finally, is there a way to make all this simpler?
A WAV file is not a "stream of frequencies". It contains a series of samples formatted according to the information written in the header.
In the best of worlds you just set up your sound card to handle the data format specified in the header; then you only have to keep feeding the raw data from the "data" chunk into your sound card's data buffers.
How this is done in Minix 3 is out of scope for this answer (I just don't know how Minix handles sound at all), but I'm sure it will be of great help to understand the basics of digital audio first.
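To make "series of samples" concrete for question 1: in the common 16-bit PCM case, the data chunk is just interleaved signed 16-bit little-endian values, one per channel per sample frame. A sketch (assuming 16-bit stereo; the function and buffer names are purely illustrative) of turning that into the simple mono curve you describe:

    #include <stdint.h>
    #include <stddef.h>

    /* Convert interleaved 16-bit stereo PCM (as found in the WAV "data"
     * chunk) into a mono curve of floats in the range [-1.0, 1.0]. */
    void pcm16_stereo_to_mono(const int16_t *pcm, size_t frames, float *mono)
    {
        for (size_t i = 0; i < frames; i++) {
            int left  = pcm[2 * i];       /* channel 0 of frame i */
            int right = pcm[2 * i + 1];   /* channel 1 of frame i */
            mono[i] = (left + right) / (2.0f * 32768.0f);
        }
    }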