I want an animation to be driven by the audio that will be played. Is this possible to do?
Basically, I have a pulse animation that needs to respond to the audio. How can I do this? I hope you get my idea.
This is possible. A detailed implementation of audio capture and processing is available in AudioGraph (https://github.com/tkzic/audiograph). The same demo also contains a very basic example of animating based on audio (it just displays a level meter using progress bar controls).
For better animations, you can use OpenGL ES together with the code above. I have done this in one of my projects.
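Whatever the platform, the pattern in that demo is the same: tap the audio as it plays, reduce each buffer to a level (peak or RMS), and scale your pulse by that level on every UI tick. Since the rest of this page is C#/NAudio, here is a minimal hedged sketch in that stack; the file name and the level callback are placeholders, and NAudio's built-in MeteringSampleProvider gives you much the same thing out of the box.

```csharp
using System;
using NAudio.Wave; // NuGet package: NAudio

// Pass-through sample provider that reports the RMS level of each block
// it serves, so metering stays in sync with playback.
class LevelTap : ISampleProvider
{
    private readonly ISampleProvider source;
    private readonly Action<float> onLevel; // hook your pulse animation here

    public LevelTap(ISampleProvider source, Action<float> onLevel)
    {
        this.source = source;
        this.onLevel = onLevel;
    }

    public WaveFormat WaveFormat => source.WaveFormat;

    public int Read(float[] buffer, int offset, int count)
    {
        int read = source.Read(buffer, offset, count);
        double sum = 0;
        for (int i = 0; i < read; i++)
            sum += buffer[offset + i] * buffer[offset + i];
        if (read > 0)
            onLevel((float)Math.Sqrt(sum / read)); // RMS, roughly 0..1
        return read;
    }
}

class Program
{
    static void Main()
    {
        using var reader = new AudioFileReader("pulse.mp3"); // placeholder file
        using var output = new WaveOutEvent();

        // Scale a pulse animation by the reported level instead of printing it.
        var tap = new LevelTap(reader, rms => Console.WriteLine($"level: {rms:F3}"));

        output.Init(tap);
        output.Play();
        Console.ReadLine(); // keep the process alive while the audio plays
    }
}
```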
Related
I have a program that plays an mp4 video and uses the video's audio track to plot its waveform. How do I best denote the progress of the waveform plot (there are two samples per second) so that it stays in sync with the video?
I would like the waveform, and whatever I use to denote the current time on the waveform plot, to remain in sync with the video playing, using the Timeline and Media Clock that I have implemented.
Other programs denote it by shading the area that has been played, or by keeping a bar that the waveform moves across, where the bar denotes the current time. I need some help deciding which to implement and where to begin with the implementation.
If I was not clear, I would be happy to clarify the question; just let me know.
I am coding in C#, in a WPF program, using NAudio as well.
Well it's up to you which you think looks best. The easier one to implement is a cursor on top of the waveform, since you can just move it as you are playing.
If you want to shade in the early part of the waveform, it very much depends on how you implemented the waveform drawing. If it is based on a series of vertical lines, then you can just change the colour of the lines before the now-playing time. But if you've constructed the waveform as a polygon, it will be harder. There are some clever tricks you could use, such as drawing a rectangle and telling WPF to clip it to the waveform, but generally you'll find the vertical-line drawing approach easier.
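For the cursor option, a rough sketch (the names waveformCanvas, cursor and getPosition are mine, not from the question): poll the playback position on a DispatcherTimer and slide a vertical Line across the waveform.

```csharp
using System;
using System.Windows.Controls;
using System.Windows.Shapes;
using System.Windows.Threading;

// Moves a vertical cursor over a waveform canvas in step with playback.
// 'getPosition' stands in for whatever your Media Clock exposes.
class WaveformCursor
{
    private readonly DispatcherTimer timer = new DispatcherTimer
    {
        Interval = TimeSpan.FromMilliseconds(50) // ~20 updates/sec is plenty
    };

    public WaveformCursor(Canvas waveformCanvas, Line cursor,
                          Func<TimeSpan> getPosition, TimeSpan totalDuration)
    {
        timer.Tick += (s, e) =>
        {
            double fraction = getPosition().TotalSeconds / totalDuration.TotalSeconds;
            cursor.X1 = cursor.X2 = fraction * waveformCanvas.ActualWidth;
        };
    }

    public void Start() => timer.Start();
    public void Stop() => timer.Stop();
}
```

Shading instead of a cursor would replace the Tick body with a loop that recolours the vertical lines left of the current position.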
I have a video playing of lines being drawn on the screen. Is it possible to create a pixel shader (for WPF) that turns newly colored pixels a certain color for N milliseconds?
That way, there can be some indication to the user of movement on the screen when the lines don't move often and the user isn't always looking at the screen.
You can use DirectShow. It's written in unmanaged code, so you need the DirectShow.NET wrapper in order to use it from your C# application, which runs in a managed environment (samples are included, even with the EVR, the Enhanced Video Renderer, which gives MUCH better video quality). When you pass a control handle to the wrapper method that sets the video output, you need a WinForms control, because only WinForms controls give you the handle you need. You can then host that WinForms control in your WPF application using the WindowsFormsHost control, which is provided for exactly these situations where you need WinForms control(s) inside a WPF application. It's just theory, so I don't know if it's the ultimate solution for you.
BTW: the whole idea is based on the fact that DirectShow builds a graph from separate filters. The renderer is a filter (EVR, VMR-7, VMR-9). The sound player is a filter. They are connected through their pins; it's like a diagram, an electronic schematic or something like that. You can, for example, put a greyscale filter in there, and voila, the video output will be greyscale. There are a bunch of tutorials for that, and complete simple filters as well. Unfortunately, filters must be written in C++. :(
PS: I never said it's gonna be easy. :D
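To make the hosting part concrete, here is a hedged sketch of a WinForms Panel inside a WindowsFormsHost; the Panel's Handle is what you would hand to the DirectShow.NET graph (for example via IVideoWindow.put_Owner, as the DirectShow.NET samples do). Building the filter graph itself is omitted.

```csharp
using System;
using System.Windows;
using System.Windows.Forms.Integration; // reference WindowsFormsIntegration.dll

// Hosts a WinForms Panel (for its HWND) inside a WPF window.
class VideoHostWindow : Window
{
    public VideoHostWindow()
    {
        var panel = new System.Windows.Forms.Panel(); // System.Windows.Forms.dll
        Content = new WindowsFormsHost { Child = panel };

        Loaded += (s, e) =>
        {
            IntPtr hwnd = panel.Handle;
            // Pass 'hwnd' to the DirectShow.NET wrapper here, e.g. through
            // IVideoWindow.put_Owner(hwnd), following the bundled samples.
        };
    }
}
```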
I'm developing an application that shall receive images from a camera device and display them in a GTK window.
The camera delivers raw RGB images (3 bytes per pixel, no alpha channel, fixed size) at a varying frame rate (1-50 fps).
I've already done all that hardware stuff and now have a callback function that gets called with every new image captured by the camera.
What is the easiest way, that is still fast enough, to display those images in my window?
Here's what I already tried:
using gdk_draw_rgb_image() on a GTK drawing area: basically worked, but rendered so slowly that the drawing processes overlapped and the application crashed after the first few frames, even at a 1 fps capture rate.
allocating a GdkPixbuf for each new frame and calling gtk_image_set_from_pixbuf() on a GTK image widget: only displays the first frame, then I see no change in the window. It may be a bug in my code, but I don't know if that approach would be fast enough anyway.
using Cairo (cairo_set_source_surface(), then cairo_paint()): seemed pretty fast, but the image looked striped; I don't know if the image format is compatible.
Currently I'm thinking about trying something like GStreamer and treating those images as a video stream, but I'm not sure whether that's overkill for my simple mechanism.
Thanks in advance for any advice!
The entire GdkRGB API seems to be deprecated, so that's probably not the recommended way to solve this.
The same goes for the call to render a pixbuf. The documentation there points at Cairo, so the solution seems to be to continue investigating why your image looked incorrect when rendered by Cairo.
unwind is right: Cairo is the way to go if you want something that will work in GTK2 and GTK3. As your samples are RGB without alpha, you should use the CAIRO_FORMAT_RGB24 format, and make sure the surface you paint is in that format. Also, try to make sure you're not constantly allocating/destroying the surface buffer if the input image stays the same size.
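One hedged guess about the striped output: CAIRO_FORMAT_RGB24 actually stores each pixel in 32 bits (the top byte unused), and each row is padded out to the surface stride, so painting a tightly packed 3-bytes-per-pixel camera buffer directly will shear the image. The fix is to repack each frame; the arithmetic is sketched below in C# only to match the rest of this page, and carries over directly to C (where the stride comes from cairo_format_stride_for_width()).

```csharp
// Repacks a tightly packed 24-bit RGB frame into the layout that
// CAIRO_FORMAT_RGB24 expects: 4 bytes per pixel, rows padded to 'stride'.
// Cairo stores pixels as native-endian 32-bit 0x00RRGGBB values, so on a
// little-endian machine the bytes land in memory as B, G, R, unused.
static byte[] PackRgbForCairo(byte[] rgb, int width, int height, int stride)
{
    var dst = new byte[stride * height];
    for (int y = 0; y < height; y++)
    {
        int src = y * width * 3;
        int row = y * stride;
        for (int x = 0; x < width; x++)
        {
            dst[row + x * 4 + 0] = rgb[src + x * 3 + 2]; // blue
            dst[row + x * 4 + 1] = rgb[src + x * 3 + 1]; // green
            dst[row + x * 4 + 2] = rgb[src + x * 3 + 0]; // red
            // fourth byte stays zero; RGB24 ignores it
        }
    }
    return dst;
}
```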
I'm using VB.NET 2010 and WPF 4. I need a smooth transition between two videos played on a MediaElement. I absolutely cannot use anything that requires a WindowsFormsHost in the WPF window, as that would make my project impossible (the video is full screen, and the controls sit on top of the video).
Basically, I need for the video to play through, and then smoothly go to another video specified in code behind. I cannot splice the two videos together - they must be separate.
How do I have the videos transition smoothly, with no "blink"?
I'm guessing without testing here. You're probably going to need some CPU cores and a good video card.
If you have the memory, use two MediaElements.
Queue up both videos, one on each element.
Set the opacity of the second one to completely transparent. They're UIElements so this should work...
Use timers of some kind keyed from the start of playback on the first one so that you get an event a couple of seconds before playback ends.
With that event delegate, start the video in the second MediaElement and animate the first one's opacity to zero while simultaneously animating the second one to fully opaque.
If you need to do it again, set up the timer again and make sure your delegate animates things the other way (a sketch of the crossfade follows).
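A hedged sketch of the crossfade step itself, in C# (the question is VB.NET, but the calls translate one-to-one); 'from' is assumed to be playing and opaque, 'to' queued up and transparent:

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media.Animation;

static class MediaCrossfade
{
    // Fades 'from' out and 'to' in over half a second, starting 'to' first
    // so there is never a frame with no video visible.
    public static void Crossfade(MediaElement from, MediaElement to)
    {
        to.Play(); // requires LoadedBehavior="Manual" on the element

        var fadeOut = new DoubleAnimation(1.0, 0.0, TimeSpan.FromMilliseconds(500));
        var fadeIn  = new DoubleAnimation(0.0, 1.0, TimeSpan.FromMilliseconds(500));

        from.BeginAnimation(UIElement.OpacityProperty, fadeOut);
        to.BeginAnimation(UIElement.OpacityProperty, fadeIn);
    }
}
```

Since MediaElement has no "almost ended" event, the couple-of-seconds-early trigger can be a DispatcherTimer tick that compares Position against NaturalDuration.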
I'm trying to be able to change SpeedRatio on a MediaElement while having the media play in a continuous loop.
This is possible through code-behind: I can reset the position of the media once it has ended, but that creates a seam in the playback.
For seamless playback I use a MediaTimeline, but when I use a MediaTimeline, I can't change the SpeedRatio.
Has anyone got a different approach to looping playback in a MediaElement, or to handling SpeedRatio?
UPDATE:
If I stop the timeline and change its SpeedRatio, I can produce the result I'm looking for. The only remaining problem is getting the timeline to restart from the position where it was stopped; pausing the timeline does not allow the SpeedRatio to be changed.
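For what it's worth, here is a hedged sketch of that stop/retune/resume sequence through the clock's controller, assuming the MediaElement is driven by a looping MediaTimeline; whether ClockController.Seek resumes seamlessly enough is exactly the open question above.

```csharp
using System;
using System.Windows.Controls;
using System.Windows.Media;            // MediaClock
using System.Windows.Media.Animation;  // ClockController, TimeSeekOrigin

static class SpeedChanger
{
    // Changes playback speed on a clock-driven MediaElement, then seeks back
    // to where playback was stopped. 'media.Clock' is assumed to come from a
    // MediaTimeline with RepeatBehavior="Forever".
    public static void ChangeSpeed(MediaElement media, double newRatio)
    {
        MediaClock clock = media.Clock;
        ClockController controller = clock.Controller;

        TimeSpan resumeAt = clock.CurrentTime ?? TimeSpan.Zero;

        controller.Stop();                // per the update above, SpeedRatio
        controller.SpeedRatio = newRatio; // is only settable while stopped
        controller.Begin();               // restarts from time zero...
        controller.Seek(resumeAt, TimeSeekOrigin.BeginTime); // ...so jump back
    }
}
```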
I still welcome any alternatives.
My MediaUriElement in my open-source project has a "Loop" property that can provide seamless looping. Get it from the source, because that's the newest version.
The only solution I've found for this is to use XNA to control audio. It's a lot more responsive.