Create an Audio Graphic Equalizer for Windows - winforms

I need to create an audio graphic equalizer with the commonly used presets for a Windows application. I need to apply the equalization effects globally, across all applications in Windows (e.g., DFX Audio Enhancer v11.1 applies effects at the system level).
Currently I can get at the frames of the system audio using the sAPO (system effects Audio Processing Object) samples provided by Microsoft, but I still need to apply graphic equalization to them.
Does Microsoft provide any API or sample code for creating a graphic equalizer in Windows?
Please let me know if there are any other libraries or open-source projects that I can use for this purpose.

You didn't say whether your sticking point is creating the audio filters or injecting your audio into the system's audio stream. I can only offer some insight into the filtering part.
In an abstract sense, a graphic equalizer is a bank of peaking (boost/cut) filters, often loosely called notch filters, each one tuned to a specific center frequency. Center your EQ on 1 kHz and go up and down in factors of two, for example: 31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000 Hz.
The best EQ system is a set of parametric EQs. A parametric EQ lets you set the specific frequency (and usually the gain and bandwidth) of each filter, and a good parametric EQ plugin will let you use as many or as few filters as you need.
So what you need to build is a programmable peaking/notch filter, then stack several of them to get as many bands as you need.
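For the filtering part, here is a minimal sketch of one such band in C, using the well-known peaking-EQ biquad formulas from Robert Bristow-Johnson's "Audio EQ Cookbook". The struct and function names are my own, not from any Microsoft API:

    /* One peaking-EQ band (RBJ cookbook biquad). Not Windows-specific. */
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct {
        double b0, b1, b2, a1, a2;   /* normalized coefficients */
        double x1, x2, y1, y2;       /* previous input/output samples */
    } EqBand;

    /* fs: sample rate, f0: center frequency (Hz), q: bandwidth, gain_db: boost/cut */
    static void eq_band_init(EqBand *f, double fs, double f0, double q, double gain_db)
    {
        double A     = pow(10.0, gain_db / 40.0);
        double w0    = 2.0 * M_PI * f0 / fs;
        double alpha = sin(w0) / (2.0 * q);
        double a0    = 1.0 + alpha / A;

        f->b0 = (1.0 + alpha * A) / a0;
        f->b1 = -2.0 * cos(w0)    / a0;
        f->b2 = (1.0 - alpha * A) / a0;
        f->a1 = -2.0 * cos(w0)    / a0;
        f->a2 = (1.0 - alpha / A) / a0;
        f->x1 = f->x2 = f->y1 = f->y2 = 0.0;
    }

    /* Run one sample through the band (Direct Form I). */
    static double eq_band_process(EqBand *f, double x)
    {
        double y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
                 - f->a1 * f->y1 - f->a2 * f->y2;
        f->x2 = f->x1;  f->x1 = x;
        f->y2 = f->y1;  f->y1 = y;
        return y;
    }

For a 10-band graphic EQ, create one band per slider frequency (31, 62, ..., 16000 Hz) and run every sample through all bands in series, with each slider driving that band's gain_db.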
I would start by seeing what the open-source programs do. Audacity is one: you can explore its equalization and audio filter plugins to see if they meet your requirements.
Some Google searching also turned up this resource: http://music.columbia.edu/cmc/music-dsp/
I hope that helps get you started (at least on the filtering part). If you figure out how to write a real-time filter that can inject itself directly into the Windows sound architecture, let us know.

Related

Is it possible with Codename One to record microphone input and play it back simultaneously?

I am using Codename One to record the microphone input and play it back to the connected earphones.
First of all, if I record audio from the mic to a file and play it back when the recording is over, it works as expected. That's why, based on this 2014 question, I implemented 2 periodic tasks (Timer and TimerTask) as well as 2 files: one for recording, one for playing. I set the periodic tasks' period to values between 100 ms and a few seconds, but the result was awful on the Android device: there were random gaps, and it was not smooth at all, nor understandable.
I assume the overhead of writing to a file every period is too high and is causing that behaviour, so using the regular high-level Codename One methods does not seem to be the way to go.
In the same question from 2014, the asker suggests creating an InputStream from the recording Media and using it as input for the playing Media. However, the method MediaManager.createMediaRecorderStream() does not seem to be available anymore. I tried to use the recording file as the InputStream for the playing Media through fs.openInputStream(recFilepath), but it did not output any sound or error on the device.
So my question is whether I can achieve my goal with bare Codename One, or whether I have to use a native interface. Moreover, Shai (in the above-mentioned 2014 question) wrote that the second approach with MediaManager.createMediaRecorderStream() might work on some platforms: is the Android platform among these, or was only iOS targeted?
Any help appreciated, and sorry for not posting code: I deleted each attempt as soon as it did not appear to work, so my current code no longer does anything I originally intended.
Cheers,
As far as I recall, Android back in the day didn't support an input stream for media, and later only allowed capturing input directly as uncompressed WAV, which makes full-duplex usage impractical. This might have changed since, as I recall they did some overhaul of their media libraries.
I'm not sure if this is exposed in our higher-level code. Besides using native interfaces, you can also help us improve Codename One by forking and hacking it; e.g., this is the relevant code in the Android project:
https://github.com/codenameone/CodenameOne/blob/master/Ports/Android/src/com/codename1/impl/android/AndroidImplementation.java#L2804-L2858
This is the contribution guide to Codename One; it covers running in the simulator, but that's a good start: https://www.codenameone.com/blog/how-to-use-the-codename-one-sources.html
You can test your changes on an Android device with instructions here: https://www.codenameone.com/blog/debug-a-codename-one-app-on-an-android-device.html

iOS 6 Audio multi-route - use external microphone AND internal speaker simultaneously

This presentation on Core Audio in iOS 6, http://www.slideshare.net/invalidname/core-audioios6portland, seems to suggest (slide 87) that it is possible to override the automatic output/input routing of audio devices using AVAudioSession.
So, specifically, is it possible to have an external mic plugged into an iOS 6 device and output sound through the internal speaker? I've seen this asked before on this site (iOS: Route audio-IN thru jack, audio-OUT thru inbuilt speaker) but no answer was forthcoming.
Many thanks!
According to Apple's documentation:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html#//apple_ref/occ/instm/AVAudioSession/overrideOutputAudioPort:error:
https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html#//apple_ref/doc/c_ref/AVAudioSessionPortOverride
You can override the output to the speaker, but if you look more closely at the reference for the C-based Audio Session Services (which are actually being deprecated, but still contain helpful information):
https://developer.apple.com/library/ios/documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html#//apple_ref/doc/constant_group/Audio_Session_Property_Identifiers
If a headset is plugged in at the time you set this property's value to kAudioSessionOverrideAudioRoute_Speaker, the system changes the audio routing for input as well as for output: input comes from the built-in microphone; output goes to the built-in speaker.
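For reference, this is roughly what that override looks like with the deprecated C Audio Session Services; it is a sketch only, the function name is mine, and the caveat quoted above still applies:

    /* Sketch: force output to the built-in speaker using the (deprecated)
     * C-based Audio Session Services. */
    #include <AudioToolbox/AudioToolbox.h>

    static void force_speaker_output(void)
    {
        AudioSessionInitialize(NULL, NULL, NULL, NULL);

        /* Play-and-record category so both input and output are active. */
        UInt32 category = kAudioSessionCategory_PlayAndRecord;
        AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                sizeof(category), &category);

        /* Route output to the built-in speaker; per the docs quoted above,
         * with a headset plugged in this also moves input to the built-in mic. */
        UInt32 route = kAudioSessionOverrideAudioRoute_Speaker;
        AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                                sizeof(route), &route);

        AudioSessionSetActive(1);
    }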
I would suggest looking at the documentation for iOS 7 to see if they've added any new functionality. I'd also suggest running tests with external devices like an iRiffPort or USB-based inputs (if you have an iPad with the Camera Connection Kit).

Hardware Acceleration with Multiple Monitors

I grabbed this multi window test code, changed it to use D3DXCreateTeapot instead of D3DXLoadMeshFromX (I couldn't find a teapot.x file), moved the EndScene call below the DrawText call and set NUM_WINDOWS to 1. With those minor changes, the test works and creates two windows, each with its teapot.
I built the test and deployed it on a machine that has an Intel HD Graphics onboard GPU with two heads, each attached to a monitor. Then I moved one window to each monitor and enlarged both windows to take up about 80% of each monitor's space.
With this setup, which is quite close to what my app needs, the window in the secondary monitor always goes too slow. If I swap the windows, it's the same: the one in the secondary monitor starts crawling, and slows down the whole system.
I googled around and some sources (albeit dated) state that only the primary monitor can use hardware acceleration when not in full-screen mode. I cannot use full-screen mode because the Direct3D 9 rendering in my app is done inside a user control embedded in a WinForms GUI.
Is it really impossible to get hardware acceleration for both monitors in windowed mode? The legacy version of our application uses MFC + DirectDraw and manages to perform fast enough, but those are obsolete technologies and we'd abhor going back there.
You have 3 options:
Try many combinations, like D3DPRESENT_INTERVAL_IMMEDIATE + (D3DSWAPEFFECT_FLIP or D3DSWAPEFFECT_COPY) + D3DCREATE_ADAPTERGROUP_DEVICE; maybe some will offer better performance (see the sketch after this list).
Render to a surface, convert it to a bitmap, and use it like any other bitmap in your form, for example via D3DXSaveSurfaceToFileInMemory.
Change your code to Direct3D 11. You have more options for GDI interaction there, better rendering behavior, and maybe better drivers.
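As a rough illustration of the first option, here is a sketch of creating a windowed device with D3DSWAPEFFECT_COPY and no vsync wait; it is written against the plain C interface of d3d9.h (drop the macro prefix in C++), with all error handling omitted. D3DCREATE_ADAPTERGROUP_DEVICE is another flag to experiment with, although as far as I know it is mainly aimed at full-screen multi-head setups:

    /* Sketch: windowed D3D9 device with copy swap effect and immediate present. */
    #include <windows.h>
    #include <d3d9.h>

    IDirect3DDevice9 *create_device(IDirect3D9 *d3d, HWND hwnd)
    {
        D3DPRESENT_PARAMETERS pp;
        IDirect3DDevice9 *dev = NULL;

        ZeroMemory(&pp, sizeof(pp));
        pp.Windowed             = TRUE;
        pp.SwapEffect           = D3DSWAPEFFECT_COPY;            /* try FLIP too */
        pp.BackBufferFormat     = D3DFMT_UNKNOWN;                 /* current desktop format */
        pp.hDeviceWindow        = hwnd;
        pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;  /* don't wait for vsync */

        IDirect3D9_CreateDevice(d3d, D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &dev);
        return dev;
    }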
Some years ago I wrote some multi-window, multi-device code using DirectX 10/11. I can't say whether DirectX 9 really has this issue (it seems absurd to me), but it could be your Windows version or the Intel driver.

Produce video from OpenGL C program

I have a C program that runs a scientific simulation and displays a visualisation in an OpenGL window. I want to make this visualisation into a video, which will eventually go on YouTube.
Question: What's the best way to make a video from a C / OpenGL program?
The way I've done it in the past is to use a screen capture program, but this is very labour-intensive (have to start/stop the screen capture program, save the video file, etc...). It seems like there should be a way to automate the process of making a video from within the C program. Then I can leave it running overnight and have 20 videos to look through in the morning, and choose the best one to put on YouTube.
YouTube recommend "MPEG4 (Divx, Xvid) format at 640x480 resolution".
I'm using GLUT 3.7.6_3, if that makes a difference. I can change windowing system if there's a good reason.
I'm running Windows (XP), so would prefer answers that work on Windows, but Linux answers are ok too. I can use Linux if it's not possible to do the video stuff easily on Windows. I have a friend who makes a .png image for each frame of the video and then stitches them together using "mencoder" on Linux.
You can use the glReadPixels function to read each rendered frame back from the framebuffer; a sketch follows below.
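A minimal sketch (the function name is mine, and it assumes you know the window size) that dumps each rendered frame to a numbered PPM file with glReadPixels; you can then stitch the files into a video offline, much like your friend's PNG + mencoder workflow:

    /* Dump the current frame to frame_NNNNN.ppm after rendering it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <GL/glut.h>   /* pulls in the OpenGL headers */

    static void save_frame_ppm(int frame, int width, int height)
    {
        unsigned char *pixels = malloc((size_t)width * height * 3);
        char name[64];
        FILE *fp;
        int y;

        if (!pixels)
            return;

        glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

        sprintf(name, "frame_%05d.ppm", frame);
        fp = fopen(name, "wb");
        if (fp) {
            fprintf(fp, "P6\n%d %d\n255\n", width, height);
            /* OpenGL's origin is bottom-left, so write rows bottom-up. */
            for (y = height - 1; y >= 0; --y)
                fwrite(pixels + (size_t)y * width * 3, 1, (size_t)width * 3, fp);
            fclose(fp);
        }
        free(pixels);
    }

Call it at the end of your display callback with a running frame counter; ffmpeg, for example, can read the sequence directly with -i frame_%05d.ppm and encode it to MPEG-4.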
But if the stuff you are trying to display is made of simple objects (e.g. spheres, rods, etc.), I would "export" each frame into a POV-Ray file, render those, and then make a video out of the resulting pictures. You will reach a much higher quality that way.
Use a 3rd party application like FRAPS to do the job for you.
Fraps can capture audio and video up to 2560x1600 with custom frame rates from 1 to 120 frames per second! All movies are recorded in outstanding quality.
They have video samples on the site. They seem good.
EDIT:
You could execute a tool that records the screen from your C application by calling it like system("C:\\screen_recorder_app.exe -params"). Check CamStudio; it has a command-line version.

Which is the best way to encode batch videos on server side?

I am asking a general question since I am a developer with no advanced experience in video processing. I have to prepare a web application that allows video files to be uploaded to our company server and then processed by the server on user command. The purpose of the web application is to let the user run certain processing on a video, depending on the action launched from the web app:
(the server has to) convert the video to different formats (mp4, flv, ...)
extract keyframes from the video and save them in JPEG format
extract the audio from the video
automatic audio & video quality control (black-frame and silence detection)
scene-change detection and keyframe extraction
.....
This is what my bosses want from the web-based application (with server support, obviously). I understand only the first 3 points of this list; the rest was all Greek to me...
My question is: which is the best and fastest server-side application for this work, one that can support multiple batch video conversions from the command line (command line for PHP-SOAP-socket interaction, or something else)?
Is Adobe Media Server suitable for batch video conversion?
Which Adobe products can be used for this purpose?
Note: I have experience with InDesign Server scripting (sending XML with PHP and SOAP calls...), and I am looking for something similar for video processing.
I will appreciate any answers.
THANKS ALL
I suggest you start with the open source project FFmpeg. You can call the program from the command line and via a series of arguments specify the desired output types, thumbnails, etc.
As an aside, when you start looking around at video-related projects (MediaShare, for example) you will find they are all using FFmpeg for their video processing.
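To give a flavour of the command-line usage, here is a hedged sketch of a small C batch driver; the file names are placeholders, and the exact flags should be checked against the FFmpeg version installed on your server:

    /* Hypothetical batch driver around the ffmpeg command-line tool. */
    #include <stdlib.h>

    int main(void)
    {
        /* 1) convert the video to another format (here: MP4) */
        system("ffmpeg -y -i input.avi output.mp4");

        /* 2) save thumbnails, roughly one JPEG every 10 seconds
         *    (older builds: use "-r 0.1" instead of the fps filter) */
        system("ffmpeg -y -i input.avi -vf fps=1/10 thumb_%04d.jpg");

        /* 3) extract the audio track, re-encoded to MP3 (or use
         *    "-acodec copy" with a matching extension to avoid re-encoding) */
        system("ffmpeg -y -i input.avi -vn audio.mp3");

        return 0;
    }

In a web application you would typically queue these jobs (e.g. from PHP) and run them one at a time or a few in parallel, since each conversion is CPU-heavy.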
As Nathan suggested, FFmpeg is the first choice. You can also check MEncoder.
Just to elaborate:
1) (the server has to) convert the video to different formats (mp4, flv, ...)
both FFmpeg and MEncoder do this well
2) extract keyframes from the video and save them in JPEG format
as far as I know this is not possible with FFmpeg's command-line interface alone (not sure about MEncoder); however, both can save all frames as separate images
3) extract the audio from the video
both FFmpeg and MEncoder do this well
4) automatic audio & video quality control (black-frame and silence detection)
you will need to code this yourself, using the FFmpeg libraries or MEncoder
5) scene-change detection and keyframe extraction
it's not clear what your boss requires here
I have converted a lot of videos on the server side using the Xuggler API libraries.
Xuggler is a free open-source library for Java developers which can be used to uncompress, manipulate, and compress recorded or live video in real time. Xuggler uses the very powerful FFmpeg media-handling libraries under the hood, essentially playing the role of a Java wrapper around them. It is the easy way to uncompress, modify, and re-compress any media file (or stream) from Java.
Web links:
1) http://www.xuggle.com/ - official website
2) http://www.javacodegeeks.com/2011/02/introduction-xuggler-video-manipulation.html - example
