Is there a way to animate the edges of an image in WPF?

We have a few icons in our WPF application. We want to do an animation, pretty much like a small beacon of light going around the edges of each icon, endlessly circling it and following the silhouette. We found a way to do it by manually creating a path around each icon and having the beacon follow that path (which matches the silhouette), but it's too much manual work because we have a lot of differently shaped icons. We're wondering if there's a way for WPF to do this automatically, so we only have to program it once and can then reuse it on the rest of the icons.
Any suggestion is very welcome.
Thanks.
Edit
Something like this.

Gee. Isn't it overkill to use WPF's animation capabilities for that? Can't you just create a bunch of small animations in Photoshop or some other tool and just drop them in?
Like animated GIFs. The only problem is that, if I'm remembering it right, WPF has problems with animated GIFs as embedded resources, so you have to load them from disk. Or you can keep them as embedded resources, but then you have to extract them temporarily to disk and load them into your app's window from there.
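If you go the extract-to-disk route, it could look roughly like this (a sketch only; the resource name is whatever your embedded GIF is called, and the buffer loop is used so it also works on .NET 3.5):

// Sketch: copy an embedded GIF resource to a temp file so it can be loaded from disk.
using System.IO;
using System.Reflection;

public static class GifExtractor
{
    public static string ExtractToTempFile(string resourceName)
    {
        // e.g. "MyApp.Icons.beacon.gif" (made-up name), as listed by GetManifestResourceNames().
        string tempPath = Path.Combine(Path.GetTempPath(), resourceName);
        using (Stream resource = Assembly.GetExecutingAssembly()
                                         .GetManifestResourceStream(resourceName))
        using (FileStream file = File.Create(tempPath))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = resource.Read(buffer, 0, buffer.Length)) > 0)
                file.Write(buffer, 0, read);
        }
        return tempPath;
    }
}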

If you are using .NET 3.5 SP1 or greater and you require a code solution instead of animated GIFs, my suggestion would be a Pixel Shader. You would need to write your own Pixel Shader that does the following:
Detects the edges: http://www.codeproject.com/KB/openGL/EdgeDetection.aspx
Takes an input parameter, animatable with a storyboard, that indicates the position of the beacon: http://www.codeproject.com/KB/dialog/WpfParentWindowShader.aspx
Highlights the edge indicated by the beacon position parameter and returns the original color for all other points in the image.
If you haven't worked with Pixel Shaders, I would recommend downloading the Shazzam tool (http://shazzam-tool.com/). It includes an interactive development environment to create and test your shader on simple images, and it also ships with a decent number of Pixel Shaders with source code to help you learn about them.
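For the animatable input parameter, the C# side of such an effect could look roughly like the sketch below: a ShaderEffect wrapper whose BeaconPosition dependency property is mapped to a shader constant, so a storyboard or BeginAnimation can drive it. The class name, the compiled BeaconEdge.ps file and the register numbers are all assumptions, and the HLSL that actually does the edge detection and highlighting is not shown.

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Media.Effects;

public class BeaconEdgeEffect : ShaderEffect
{
    // Hypothetical compiled HLSL that does the edge detection + beacon highlight.
    private static readonly PixelShader Shader = new PixelShader
    {
        UriSource = new Uri("pack://application:,,,/Shaders/BeaconEdge.ps")
    };

    public BeaconEdgeEffect()
    {
        PixelShader = Shader;
        UpdateShaderValue(InputProperty);
        UpdateShaderValue(BeaconPositionProperty);
    }

    // The sampler the effect is applied to (the icon), bound to sampler register s0.
    public static readonly DependencyProperty InputProperty =
        RegisterPixelShaderSamplerProperty("Input", typeof(BeaconEdgeEffect), 0);
    public Brush Input
    {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }

    // 0..1 position of the beacon along the silhouette, bound to constant register c0.
    public static readonly DependencyProperty BeaconPositionProperty =
        DependencyProperty.Register("BeaconPosition", typeof(double), typeof(BeaconEdgeEffect),
            new UIPropertyMetadata(0.0, PixelShaderConstantCallback(0)));
    public double BeaconPosition
    {
        get { return (double)GetValue(BeaconPositionProperty); }
        set { SetValue(BeaconPositionProperty, value); }
    }

    // Loop the beacon endlessly around the edge.
    public void StartBeacon()
    {
        var sweep = new DoubleAnimation(0.0, 1.0, TimeSpan.FromSeconds(3))
        {
            RepeatBehavior = RepeatBehavior.Forever
        };
        BeginAnimation(BeaconPositionProperty, sweep);
    }
}

The effect would then be assigned to each icon's Effect property, so the same code works for any icon shape.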

Related

Setting D3DImage to Image and Rendering in WPF

I would like to render to a surface using DX12 and present it through WPF. There are claims on the web that this is possible. To me it seems simple. I can render to a traditional image (i.e. a .tga) on the hard drive, or to some mapped memory, and then present that image through D3DImage, or simply through an image brush in WPF. I would think the most straightforward approach would be to render to a DXGI surface and then copy that over to an IDirect3DSurface9. I don't see how I can map from the source to the destination in any scenario, whether it be presenting a .tga or passing the rendered surface from DXGI to D3D9. Microsoft's solution on GitHub is broken, and the part with the passing of the image is shrouded in darkness. Previous MS examples have been deleted, and CodeProject has examples from the last decade. I have no code to date because I don't know what to put into the relevant section. I have no real interest in using the managed solutions that are available.
It seems I may have been a little impatient. The solution to my problem would seem to be ID3D##Device::OpenSharedResource(). When rendering, render to a texture created with CreateTexture() and pSharedHandle; this allows the texture resource to be shared with D3D9. Then I can consume the texture with a basic D3D9 pipeline that renders it onto a basic rectangle in WPF through D3DImage. If I'm missing something, let me know.
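On the WPF side, feeding the shared IDirect3DSurface9 into a D3DImage each frame would look roughly like this (a sketch only; GetSharedSurfacePointer stands in for a hypothetical P/Invoke into the native renderer, not an existing API):

using System;
using System.Windows;
using System.Windows.Interop;
using System.Windows.Media;

public class D3DImagePresenter
{
    private readonly D3DImage _d3dImage = new D3DImage();

    public D3DImage ImageSource
    {
        get { return _d3dImage; }   // bind an Image.Source to this
    }

    public void Start(Func<IntPtr> getSharedSurfacePointer)
    {
        CompositionTarget.Rendering += (sender, e) =>
        {
            IntPtr surface = getSharedSurfacePointer();   // native IDirect3DSurface9*
            if (surface == IntPtr.Zero || !_d3dImage.IsFrontBufferAvailable)
                return;

            _d3dImage.Lock();
            _d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surface);
            _d3dImage.AddDirtyRect(new Int32Rect(0, 0, _d3dImage.PixelWidth, _d3dImage.PixelHeight));
            _d3dImage.Unlock();
        };
    }
}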

Implementing true Pinch and Zoom in the OxyPlot 2D library with MonoTouch

For plotting graphs I used the coreplot library for a while in my MonoTouch-based iPhone app, but with iOS 6.0 the already annoying binding problems became so numerous that I decided to drop it for a library natively written in C#.
Searching around I found the excellent OxyPlot 2D library, and more specifically the MonoTouch port made by dvkwong.
The library works fine and has tons of useful features, but its output is just a rendered bitmap UIImage.
This means that I need to add the pinch and zoom features to the library myself.
The current implementation, based on the dvkwong preliminary example, uses a UIScrollView to zoom in and out on the resulting bitmap image added to a simple subview.
This is not a good solution, because when zooming, the aliasing of the bitmap becomes visible, and if the resolution is increased, the text fonts become unreadable because they are not optimized for the current zoom resolution.
I need to render the image each time at the correct resolution, without using UIScrollView, just by overriding the DrawRect() call in a custom UIView.
But how do I reproduce the pinch-zoom gesture of Apple's UIScrollView and draw the correct sub-rect of the OxyPlot plot model?
I tried to implement this method suggested here:
position the pinched view between the two fingers
But it doesn't work, because I need to know the sub-rect to draw, not apply a transformation matrix. Also, there is no "draw sub rect" method in the OxyPlot library, so I would need to set a clip rect in the image context, draw a bigger image first and then clip it. This is clearly too slow, because at some zoom levels the image can become huge (and I need the user to be able to zoom indefinitely into any part of the graph).
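Roughly, the bookkeeping I'm trying to implement is something like the sketch below, where RenderVisibleRect is just a placeholder (OxyPlot has no such method, which is exactly the problem):

using System.Drawing;
using MonoTouch.UIKit;

public class PinchZoomPlotView : UIView
{
    RectangleF visibleRect;   // current window into the full plot, in plot units
    float lastScale = 1f;

    public PinchZoomPlotView(RectangleF fullPlotRect)
    {
        visibleRect = fullPlotRect;
        AddGestureRecognizer(new UIPinchGestureRecognizer(OnPinch));
    }

    void OnPinch(UIPinchGestureRecognizer pinch)
    {
        if (pinch.State == UIGestureRecognizerState.Began)
            lastScale = 1f;

        // Incremental zoom factor since the last callback (Scale is cumulative).
        float factor = lastScale / pinch.Scale;
        lastScale = pinch.Scale;

        // Map the pinch centre from view coordinates into plot coordinates.
        PointF centre = pinch.LocationInView(this);
        float cx = visibleRect.X + visibleRect.Width * (centre.X / Bounds.Width);
        float cy = visibleRect.Y + visibleRect.Height * (centre.Y / Bounds.Height);

        // Scale the visible rect about that centre point.
        visibleRect = new RectangleF(
            cx - (cx - visibleRect.X) * factor,
            cy - (cy - visibleRect.Y) * factor,
            visibleRect.Width * factor,
            visibleRect.Height * factor);

        SetNeedsDisplay();   // triggers Draw, which should re-render at native resolution
    }

    public override void Draw(RectangleF rect)
    {
        // Placeholder: re-render only visibleRect of the plot model at the view's
        // pixel size, so fonts stay crisp at any zoom level.
        // RenderVisibleRect(visibleRect, rect);
    }
}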
Any help is appreciated.
Thanks in advance.
I solved the problem myself.
I've created another MonoTouch port of the OxyPlot 2D library, this time supporting both Pan & Pinch-Zoom gestures. I've also added iPad support.
Now we have a native C# plot library for MonoTouch.
You can download it under the MIT Licence here:
https://github.com/Emasoft/OxyPlot.2DGraphLib.MonoTouch

WPF-DirectX Interop Problem (D3DImage)

I'm writing a video application utilizing D3DImage. Frames come from memory and are rendered as textures in native code with DirectX 9, and are finally exposed by D3DImage to the WPF GUI. I have some overlays on top, created with WPF's drawing framework (text, shapes etc.). Up to this point, it works like a charm.
Now I would like to encode the composited image from my underlying native C++ code. The video is 640x480 BGR at 25 FPS and has to be rendered and encoded in parallel, also on older hardware with Windows versions down to XP/SP3.
The problem is, I cannot find any documentation describing the composition process between WPF and D3DImage. They 'blend' in some sense, but what does that mean exactly? And is it possible to get a handle to WPF's part of the drawing, or even to the composited image, in my native C++ code?
P.S.: I'm also open to managed solutions, but haven't found anything performant enough so far.
There is a static event called CompositionTarget.Rendering. Add a handler to it, and every time WPF renders, that handler will be called before WPF presents (the FPS can vary, though). So just update your render target accordingly.
There might be a better way, but I'm not aware of it.
NOTE: For D3DImage on Windows XP you use a D3D9 device with a lockable render target, while on Vista/7 you use a D3D9Ex device with a non-lockable render target.
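A small sketch of that hook; casting the event args to RenderingEventArgs lets you skip duplicate callbacks for the same frame time, and updateRenderTarget stands in for your own native update call:

using System;
using System.Windows.Media;

public static class RenderHook
{
    private static TimeSpan _lastRender;

    public static void Attach(Action updateRenderTarget)
    {
        CompositionTarget.Rendering += (sender, e) =>
        {
            // The event can fire more than once per frame; RenderingTime identifies the frame.
            var args = (RenderingEventArgs)e;
            if (args.RenderingTime == _lastRender)
                return;
            _lastRender = args.RenderingTime;

            updateRenderTarget();   // update/lock your D3D render target here
        };
    }
}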

Display 360 Image in Silverlight 3.0 (Not Panorama)

I have a lot of images taken from a 360 camera which I would like to be able to display in Silverlight 3. They are NOT regular panorama images. The camera which took the image actually creates a distorted jpeg that becomes undistorted once wrapped around a sphere as a texture. I have desktop software that will allow viewing of the image (not just side-to-side, but straight up, down, etc.) and I need to try to get the same functionality in Silverlight. It is very similar to Google StreetView.
What I think I need is to create a sphere, wrap the jpeg on the sphere as a texture, then put the "camera" inside the sphere. I doubt this is possible in Silverlight, but perhaps there is a way to simulate this?
So far, Google searches aren't bringing anything up. Can anyone point me in the right direction to figure out how to do this? Are there any existing projects that do this?
An example of a typical image is here.
These might help you out (though probably not). They are 3D engines for Silverlight, but they will probably wrap the image around the outside of the sphere instead of the inside, which is what you actually need.
Kit3D http://www.codeplex.com/Kit3D
Balder http://www.codeplex.com/Balder
Another, possibly more promising, option would be to use JavaScript. So far you've probably been researching how to do this in Silverlight, but you might do some similar searching for doing it in JavaScript. There may be an option out there already, and since Silverlight can interop with JavaScript, you might be in luck.
You're going to have to map the texture to a sphere then, like you said. But AFAIK Silverlight 3 doesn't support hardware-accelerated 3D.
So your options are:
Try to find a Silverlight software 3D library (like this)
Write your own software rasterizer (multi-page guide)
Hope this helps
You might want to try cropping a window from the image and displaying it. If the user wants to go right, move the window right and crop; if the user wants to go left, move the window left and crop. To zoom out, expand the window; to zoom in, make the window smaller. If you move the frame far to the right, stitch in the image data from the left side.
You might need to modify the image to eliminate the distortion; this shouldn't be too hard and depends on the camera lens focal length.
Don't try mapping the image to a sphere; it is much harder.
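A rough sketch of that crop-window idea using Silverlight 3's WriteableBitmap (purely illustrative; the source is assumed to be the full 360-degree strip, and only horizontal wrap-around is handled):

using System.Windows.Media.Imaging;

public static class PanoCrop
{
    public static WriteableBitmap CropWindow(WriteableBitmap source,
                                             int windowX, int windowY,
                                             int windowWidth, int windowHeight)
    {
        var result = new WriteableBitmap(windowWidth, windowHeight);
        int srcW = source.PixelWidth;
        int srcH = source.PixelHeight;

        for (int y = 0; y < windowHeight; y++)
        {
            int sy = windowY + y;
            if (sy < 0 || sy >= srcH)
                continue;                      // no vertical wrap
            for (int x = 0; x < windowWidth; x++)
            {
                int sx = (windowX + x) % srcW; // wrap around the 360-degree seam
                if (sx < 0) sx += srcW;
                result.Pixels[y * windowWidth + x] = source.Pixels[sy * srcW + sx];
            }
        }
        result.Invalidate();                   // push the pixel changes to the screen
        return result;
    }
}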
At https://hdviewsl.codeplex.com it says that HD View SL (Silverlight version) supports
"orthographic (2D), with wrapping for 360-degree panoramas"
Also, you could try to port the PtViewer source code from Java to Silverlight if no one else has.
UPDATE:
VRLight might be the solution in your case:
http://vrlight.thecloudsite.net/
http://vrlight.thecloudsite.net/tutorial.html
http://ivrpa.org/blog/3651/vrlight_vredit_20
Its author (Jurgen Eidt) is also making cPicture (http://cpicture.thecloudsite.net/index.en.html). If you can't find him through the VRLight site, try the cPicture one, or try his blog on the IVRPA website (http://ivrpa.org/blog/3651), which seems to have recent posts.

Sprite / Character animation in Silverlight (v2)

We have a Silverlight 2 project (a game) that will require a lot of character animation. Can anyone suggest a good way to do this? Currently we plan to build the art in Illustrator and import it into Silverlight via Mike Snow's plug-in, as this matches the skills our artists have.
Is keyframing the animations our only option here? And if it is, what's the best way to do it? Hundreds of individual PNGs, or is there some way in Silverlight to draw just a portion of a larger image?
You can use the Clip property on the image itself, or on a container for the image, to display a specific piece of a larger image, like a sprite sheet. This may or may not be more performant than swapping PNGs. You could also use an ImageBrush on a Rectangle to show just the part you want; this would probably be a bit more efficient than the Clip property.
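Here is a small sketch of the Clip approach (assuming the sprite sheet is a single horizontal strip of equally sized frames; SpriteView is just an illustrative name):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public class SpriteView : Canvas
{
    private readonly TranslateTransform _offset = new TranslateTransform();
    private readonly double _frameWidth;

    public SpriteView(BitmapImage spriteSheet, double frameWidth, double frameHeight)
    {
        _frameWidth = frameWidth;
        Width = frameWidth;
        Height = frameHeight;

        // Only one frame's worth of the sheet is ever visible.
        Clip = new RectangleGeometry { Rect = new Rect(0, 0, frameWidth, frameHeight) };

        var sheet = new Image { Source = spriteSheet, RenderTransform = _offset };
        Children.Add(sheet);
    }

    // Slide the sheet left so frame N lines up with the clip window.
    public void ShowFrame(int frameIndex)
    {
        _offset.X = -frameIndex * _frameWidth;
    }
}

Calling ShowFrame from a timer, or animating the transform's X with keyframes, steps through the animation without hundreds of separate PNGs.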
I just posted some code using Bill's suggestion regarding the Rectangle and ImageBrush.
Silverlight at this time does not support bitmap effects, nor does it have any libraries to manipulate images. Your option for now is to use keyframe animations from one PNG to another.
Now you can get at the raw bytes of an image. If you have your own image-processing libraries, you can compile them against the Silverlight DLLs and then use the library in your Silverlight app.
