Creating media visualization in Silverlight

I'd like to add some custom visualization effects to a sound wave in Silverlight. As of Silverlight 3 there's the MediaElement class, which does a great job of playing sounds and videos.
To visualize the audio, however, I would need some kind of event callback with information on the currently playing segment of the sound. Does the framework have any support for achieving this?

I wanted the same thing, so I created exactly that.
You can see a live demo at http://prefix.teddywino.com/post/SilverlightMediaKitLiveDemo.aspx
The library and demo source code are available at http://salusemediakit.codeplex.com/
The demo also shows how to alter the raw audio data to create effects.
It currently works only with MP3s and is still under development.

Sadly this is not possible in Silverlight unless you go the whole way and create your own MediaStreamSource to decode the audio (e.g. from MP3) yourself.
Can you get away with cheating? A lot of web players show a fake graphic equaliser which just has bars going up and down randomly during playback. I seem to remember that MySpace and SoundClick used to do this (may still do).
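If cheating is acceptable here, a minimal sketch of such a fake equalizer follows: a UserControl with a row of bars whose heights are simply randomized on a DispatcherTimer while playback runs. Everything in it (bar count, colors, timing) is arbitrary and has no relation to the real audio data.

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;
using System.Windows.Threading;

// Sketch of a "fake" equalizer: a row of bars whose heights are randomized
// on a timer while the MediaElement plays. Purely cosmetic.
public class FakeEqualizer : UserControl
{
    private readonly DispatcherTimer _timer = new DispatcherTimer();
    private readonly Random _random = new Random();
    private readonly StackPanel _bars = new StackPanel
    {
        Orientation = Orientation.Horizontal,
        VerticalAlignment = VerticalAlignment.Bottom
    };

    public FakeEqualizer()
    {
        for (int i = 0; i < 16; i++)
        {
            _bars.Children.Add(new Rectangle
            {
                Width = 8,
                Height = 5,
                Margin = new Thickness(2, 0, 2, 0),
                Fill = new SolidColorBrush(Colors.Green),
                VerticalAlignment = VerticalAlignment.Bottom
            });
        }
        Content = _bars;

        _timer.Interval = TimeSpan.FromMilliseconds(100);
        _timer.Tick += (s, e) =>
        {
            foreach (var child in _bars.Children)
            {
                var bar = child as Rectangle;
                if (bar != null)
                    bar.Height = _random.Next(5, 100); // purely random height
            }
        };
    }

    // Hook these to the MediaElement's play/stop handlers.
    public void Start() { _timer.Start(); }
    public void Stop() { _timer.Stop(); }
}
```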

If you implement a custom MediaStreamSource, you could potentially inspect/analyze the data being generated by it, but you will immediately run into UI threading issues if you try to update the UI directly from the custom MediaStreamSource, or vice versa.
One way to get this to work might be to implement a custom MediaStreamSource that writes (or duplicates) the audio data into a thread-safe buffer where your UI can access it.
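A minimal sketch of that buffer idea, assuming you already have a custom MediaStreamSource that decodes PCM samples: the decoder thread appends samples into a locked queue, and the UI drains it (for example on a DispatcherTimer tick) to drive the visualization. The class and member names are illustrative, not part of the Silverlight API.

```csharp
using System.Collections.Generic;

// Hypothetical shared buffer: the custom MediaStreamSource writes decoded
// samples into it (e.g. from GetSampleAsync), and the UI thread reads them
// out on its own timer tick to drive the visualization.
public class SampleBuffer
{
    private readonly object _gate = new object();
    private readonly Queue<short> _samples = new Queue<short>();
    private const int MaxSamples = 44100; // keep roughly one second of audio

    // Called on the media pipeline thread.
    public void Write(short[] samples)
    {
        lock (_gate)
        {
            foreach (var s in samples)
                _samples.Enqueue(s);
            while (_samples.Count > MaxSamples)
                _samples.Dequeue(); // drop old data instead of growing forever
        }
    }

    // Called on the UI thread; returns whatever has accumulated so far.
    public short[] ReadAll()
    {
        lock (_gate)
        {
            var result = _samples.ToArray();
            _samples.Clear();
            return result;
        }
    }
}
```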

Related

Using an external realtime renderer in Wpf with Blend for VS

I plan to make a project where the user can view the selected items in real time and rotate them around. I would like someone experienced to tell me which sort of renderer could be used for this. The way I imagine it, upon a click another window appears where one can rotate the object around, fully textured and lit.
What I could think of is Marmoset, which has a decent online real-time viewer, Unreal Engine (if that's possible), or Unity, where I can even use coded .shader files for my materials. Any ideas on which to use and how to achieve the effect I'm after?
Marmoset has a decent viewer that you can integrate into WPF seamlessly by using a Mongoose HTTP server and CefSharp (with WebGL enabled).
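A rough sketch of that setup follows: serve an exported Marmoset Viewer page from a local Mongoose instance and host it in a CefSharp.Wpf ChromiumWebBrowser. The file paths, port, page name, and Mongoose command-line options are assumptions for illustration, and depending on the CefSharp version you may need to call Cef.Initialize explicitly first.

```csharp
using System.Diagnostics;
using System.Windows;
using CefSharp.Wpf;

// Sketch: host the exported Marmoset Viewer in WPF via CefSharp, served by a
// local Mongoose HTTP server. Paths, port, and options are illustrative only.
public partial class ViewerWindow : Window
{
    private Process _mongoose;

    public ViewerWindow()
    {
        InitializeComponent();

        // Serve the folder containing the exported viewer files (assumed layout
        // and assumed Mongoose options).
        _mongoose = Process.Start("mongoose.exe",
            @"-document_root C:\MarmosetExport -listening_port 8080");

        // Point an embedded Chromium (WebGL-capable) browser at the local server.
        Content = new ChromiumWebBrowser
        {
            Address = "http://localhost:8080/viewer.html" // hypothetical page name
        };

        Closed += (s, e) =>
        {
            if (_mongoose != null && !_mongoose.HasExited)
                _mongoose.Kill();
        };
    }
}
```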

ESRI Silverlight print map without using PrintTask

I have a Silverlight application with different panels. One of the panels contains the ESRI map, and I want to print what is shown on the map panel. It's more like a print screen for the map (but it should not include the rest of the controls of the client application).
On investigation I found that we can use PrintTask, but it uses a GP server. I do not want to invoke the ESRI service for printing.
Is there any other way to print what is shown on the screen (inside the map panel) in Silverlight?
Atul Sureka
If you are using the latest version of the Esri Silverlight API, you have access to client side printing. See their example here:
https://developers.arcgis.com/silverlight/sample-code/start.htm#ClientPrinting
It is nice in that it gives you a proper WYSIWYG interface for printing, lets you visibly see the extent, and handles custom markers far better than the print service. The downside is that unless your source map has a high enough resolution, you'll end up with quite low-res maps unless you perform some kind of map switching when the user triggers the print interface. You'll also need to define print templates in XAML rather than in ArcMap.
It basically boils down to cloning your map and copying all the layers across.
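A very rough sketch of that idea, using Silverlight's PrintDocument to send a cloned, off-screen Map to the printer without touching any GP/print service. The sizes are arbitrary, and CloneLayer only handles one layer type here; the ESRI sample linked above shows the full layer-copying details.

```csharp
using System.Windows.Printing;
using ESRI.ArcGIS.Client;

// Sketch: clone the on-screen map into a print-only Map control and hand it
// to Silverlight's PrintDocument. No server-side print/GP service involved.
public class MapPrinter
{
    public void PrintMapPanel(Map sourceMap)
    {
        var printMap = new Map
        {
            Width = 800,   // size to match your print template
            Height = 600,
            Extent = sourceMap.Extent
        };

        foreach (Layer layer in sourceMap.Layers)
        {
            var clone = CloneLayer(layer);
            if (clone != null)
                printMap.Layers.Add(clone);
        }

        var doc = new PrintDocument();
        doc.PrintPage += (s, e) => { e.PageVisual = printMap; };
        doc.Print("Map panel");
    }

    // A layer instance cannot live in two maps at once, so recreate it.
    // Only tiled layers are handled in this sketch.
    private Layer CloneLayer(Layer source)
    {
        var tiled = source as ArcGISTiledMapServiceLayer;
        if (tiled != null)
            return new ArcGISTiledMapServiceLayer { Url = tiled.Url };
        return null; // other layer types left as an exercise
    }
}
```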

Microsoft UI Automation Library Vs Coded UI Test

I'm very new to this test automation kind of thing. Recently I've been assigned to a project where I have to write an application (or a script, maybe; I'm not sure) that will automate the UI testing of a CAD-like WPF application which is missing lots of AutomationIds.
After doing a little searching on MSDN and other sources, I'm a bit confused about whether I should use the Microsoft UI Automation Library or the new Coded UI Test feature included in VS2010. I'm not getting a clear picture of which of these two applies in which scenarios, what advantages one has over the other, and which one suits my purpose.
Please shed some light if you have experience/knowledge on the matter. Thanks in advance.
Basically, Microsoft UIA is the new accessibility library in .NET 4.0. WPF applications and controls have built-in support for UIA through the AutomationPeer class.
Coded UI Test is a record-and-playback automation tool which uses the Microsoft UIA library underneath. Being a tool, as opposed to writing code in C#, it improves QA productivity for recording more test cases.
For applications with automation support planned in, Coded UI should be sufficient. If the AutomationIds are missing, make sure the controls have some other unique property, like Name. Use UIVerify or Inspect to check for this.
If no unique property is available, there are other techniques, listed below, that you can use in combination with Coded UI (see the sketch after the list).
From an Event:
When your application receives a UI Automation event, the source object passed to your event handler is an AutomationElement. For example, if you have subscribed to focus-changed events, the source passed to your AutomationFocusChangedEventHandler is the element that received the focus. For more information, see Subscribe to UI Automation Events.
From a Point:
If you have screen coordinates (for example, a cursor position), you can retrieve an AutomationElement by using the static FromPoint method.
From a Window Handle:
To retrieve an AutomationElement from an HWND, use the static FromHandle method.
From the Focused Control:
You can retrieve an AutomationElement that represents the focused control from the static FocusedElement property.
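A small sketch of those four lookups with the managed UI Automation library (System.Windows.Automation); the window handle and screen coordinates below are placeholders.

```csharp
using System;
using System.Windows;
using System.Windows.Automation;

// Sketch of the four AutomationElement lookups described above.
public static class ElementLookups
{
    public static void Demo(IntPtr someWindowHandle)
    {
        // From a point: the element under given screen coordinates (e.g. the cursor).
        AutomationElement fromPoint = AutomationElement.FromPoint(new Point(200, 150));

        // From a window handle (HWND).
        AutomationElement fromHandle = AutomationElement.FromHandle(someWindowHandle);

        // From the focused control.
        AutomationElement focused = AutomationElement.FocusedElement;

        // From an event: the source passed to the handler is the element itself.
        Automation.AddAutomationFocusChangedEventHandler(
            new AutomationFocusChangedEventHandler((sender, e) =>
            {
                var fromEvent = sender as AutomationElement;
                Console.WriteLine("Focus moved to: " + fromEvent.Current.Name);
            }));

        Console.WriteLine(fromPoint.Current.Name);
        Console.WriteLine(fromHandle.Current.Name);
        Console.WriteLine(focused.Current.Name);
    }
}
```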
If you can leverage the Coded UI Test, then go that route. Make sure to verify that your given configuration is supported.
The UI Automation Library resolves everything in code-behind. This forces you to use a tool like UISpy to gain access to a control's internals so that you can then build out your test.
A Coded UI Test, on the other hand, still has code behind it, but it allows you to record steps through the application you are testing, which will greatly increase the number of tests you can create.
UI Automation library is a low-level library. Usually, you don't want to write tests against it directly as it requires a pretty decent amount of work.
I would recommend looking at more high-level libraries. You mentioned one of them, Coded UI; another good choice would be White from TestStack. They suit different kinds of projects. Coded UI is good when you don't want to invest a lot of effort in your test suite. At the same time, it doesn't scale much, so if you are going to write a lot of tests, you are better off choosing White.
Here I compare the two frameworks in more detail: Coded UI vs White
To complement the above responses, please look at CUITe, which helps quite a bit and may be an appropriate approach for you.
I began rolling my own semi-framework using the CodedUITest library and devised a paradigm for separating the details of automation from the (C#) code.
Basically, I am creating a driver that reads what needs to be done from spreadsheets, where each line is a test step (or a pointer to a scenario in a different worksheet).
At present it is incomplete but promising; I have it working against a WPF application with partial success.
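As a rough illustration of the shape such a driver might take (the CSV format, column meanings, and action names are invented for the example and are not part of Coded UI or CUITe):

```csharp
using System;
using System.IO;

// Sketch of a spreadsheet/CSV-driven step runner: each row is
// "action,target,value". The mapping to real Coded UI calls is only hinted
// at in comments; control lookup is left abstract.
class StepDriver
{
    static void Main()
    {
        foreach (var line in File.ReadAllLines("scenario.csv"))
        {
            var parts = line.Split(',');
            var action = parts[0].Trim();
            var target = parts.Length > 1 ? parts[1].Trim() : "";
            var value  = parts.Length > 2 ? parts[2].Trim() : "";

            switch (action)
            {
                case "Click":
                    // e.g. Mouse.Click(FindControl(target)) in Coded UI
                    Console.WriteLine("Click {0}", target);
                    break;
                case "Type":
                    // e.g. Keyboard.SendKeys(FindControl(target), value)
                    Console.WriteLine("Type '{0}' into {1}", value, target);
                    break;
                default:
                    throw new NotSupportedException("Unknown action: " + action);
            }
        }
    }
}
```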
One of the main problems is that the developers neglected to identify controls uniquely and consistently.
Bey

How to control MFC application from another program?

I have a binary application on Windows (train timetable software) which contains a lot of interesting data that I need for my project (nothing illegal, just some weird optimization algorithm). But the application has no API, and the data files have an undocumented binary format.
So my idea is to control the application from my own code. I would like to send keystrokes to it to fill in a form, run a query, and save the result to a file (there are buttons and menu items for this in the app). And repeat this many times.
Is there a library for this? Or an example? I have a general idea of how to do it, but I am lazy and I do not want to reinvent the wheel.
Also, the same data is available on the web. Is there some solution for the same task with ASP.NET (Web Forms) web applications? I could probably handle parsing the results, but I do not know how to fill in the values of the web form controls.
Thanks in advance.
You can use simple Win32 APIs to do this.
FindWindowEx, and then once you have the window handle you can send any message (such as WM_KEYDOWN) to it using SendMessage.
A good tool that helps with this process is Spy++, because it lets you see the window hierarchy more easily and also which messages are being used internally by the application you are monitoring.
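For example, a minimal P/Invoke sketch along those lines; the window caption, control class, and the use of WM_SETTEXT to fill the edit box are placeholders you would confirm with Spy++.

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch: locate a control inside another application's window and drive it
// with window messages. Class and caption names are placeholders from Spy++.
static class AppDriver
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr FindWindowEx(IntPtr hwndParent, IntPtr hwndChildAfter,
                                      string lpszClass, string lpszWindow);

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, string lParam);

    const uint WM_SETTEXT = 0x000C;
    const uint WM_KEYDOWN = 0x0100;
    const int VK_RETURN = 0x0D;

    static void Main()
    {
        // Placeholders: use Spy++ to find the real caption and control class.
        IntPtr main = FindWindow(null, "Train Timetable");
        IntPtr edit = FindWindowEx(main, IntPtr.Zero, "Edit", null);

        // Fill the edit box, then simulate pressing Enter.
        SendMessage(edit, WM_SETTEXT, IntPtr.Zero, "Prague");
        SendMessage(edit, WM_KEYDOWN, (IntPtr)VK_RETURN, IntPtr.Zero);
    }
}
```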
As for web form controls, you will probably have to do more work than this, because typically the web page will be rendered as one canvas that things are drawn onto in a custom way depending on the browser. Perhaps going through some kind of proxy, where you actually filter the HTML pages, is a better approach.

Drawing directly to the screen via GTK or GDK

I am working on a demo application for a library that two colleagues and I are writing, which lets GNOME applications that play audio events through libCanberra offer users visual events in their place. This is an accessibility-minded effort to help both visually and aurally impaired users gain the benefits of audio alerts and the like.
For our first demo we're simply trying to make the entire screen flash with a color when a button is pressed in our simple GTK sample app. I've been looking at the GTK documentation and all drawing that I've seen has had to do with drawing directly to a window or other widget. I want to control the entire screen's hue. Would this be a GDK thing? Am I completely off base?
Any links/help will be much appreciated! Thanks.
PS: This is being written in C, though functions should be the same between languages with proper bindings, I assume.
You cannot. Your application has access only to its own window, and does not (and should not) know anything about other windows, or the screen. The "screen" is managed by whatever back-end GTK uses (X? Win32? DirectFB?).
That said, you could try to create a "full-screen" window that covers the entire screen area. That is the way full-screen apps are implemented in most windowing systems.
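A minimal sketch of that full-screen-window approach is below. The question is in C, but to stay consistent with the other examples here it is shown with the GTK# bindings; the corresponding C calls (gtk_window_fullscreen, gtk_widget_modify_bg, g_timeout_add) map one-to-one. The color and timeout are arbitrary.

```csharp
using Gtk;

// Sketch: flash the whole screen by showing an undecorated, full-screen,
// solid-color window for a short moment, then hiding it again.
class ScreenFlash
{
    static void Main()
    {
        Application.Init();

        var flash = new Window(WindowType.Toplevel) { Decorated = false };
        flash.ModifyBg(StateType.Normal, new Gdk.Color(255, 0, 0)); // arbitrary color
        flash.Fullscreen();
        flash.KeepAbove = true;
        flash.ShowAll();

        // Hide after 150 ms so it reads as a flash rather than a takeover.
        GLib.Timeout.Add(150, () =>
        {
            flash.Hide();
            Application.Quit();
            return false; // do not repeat the timeout
        });

        Application.Run();
    }
}
```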
GTK doesn't have such an option AFAIK; you probably want to use the backend, Xlib (or XCB), for that.
