What is causing this OpenTK GLControl texture mapping problem? - winforms

I'm working on a WinForms project using GLControl 3.10, and I'm trying to reuse some legacy code written for mobile devices with Xamarin and OpenTK 1.0. It's almost working, but I'm having a problem with texture mapping.
I have created a test cube textured with transparent PNG images. The first screenshot below shows how it appears on the mobile devices. The second shows how it appears in the new WinForms project.
I've tried adjusting pretty much every GL function call I can find, and I've tried modifying the shaders, but I have no idea where the problem might lie. Does anyone have any suggestions on where to look?
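For context, this is roughly the standard desktop OpenTK setup for uploading a transparent PNG and enabling alpha blending. It is not my actual code, just a simplified sketch (placeholder names and values) of the kind of calls I've been adjusting:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using OpenTK.Graphics.OpenGL;

static class TextureHelper
{
    // Upload a transparent PNG as a 2D texture (desktop GL path).
    public static int LoadTransparentPng(string path)
    {
        using (var bmp = new Bitmap(path))
        {
            BitmapData data = bmp.LockBits(
                new Rectangle(0, 0, bmp.Width, bmp.Height),
                ImageLockMode.ReadOnly,
                System.Drawing.Imaging.PixelFormat.Format32bppArgb);

            int tex = GL.GenTexture();
            GL.BindTexture(TextureTarget.Texture2D, tex);

            // GDI+ gives BGRA bytes; desktop GL accepts PixelFormat.Bgra directly.
            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
                data.Width, data.Height, 0,
                OpenTK.Graphics.OpenGL.PixelFormat.Bgra,
                PixelType.UnsignedByte, data.Scan0);

            bmp.UnlockBits(data);

            // Without these, the default min filter expects mipmaps and the
            // texture can sample as blank on some drivers.
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.ClampToEdge);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.ClampToEdge);
            return tex;
        }
    }

    // Alpha blending for the transparent parts of the PNG.
    public static void EnableAlphaBlending()
    {
        GL.Enable(EnableCap.Blend);
        GL.BlendFunc(BlendingFactor.SrcAlpha, BlendingFactor.OneMinusSrcAlpha);
    }
}
```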

Related

Implementing true Pinch and Zoom in the OxyPlot 2D library with MonoTouch

For plotting graphs I used the Core Plot library for a while in my MonoTouch-based iPhone app, but with iOS 6.0 the already annoying binding problems became so numerous that I decided to drop it for a library natively written in C#.
Searching around I found the excellent OxyPlot 2D library, and more specifically the MonoTouch port made by dvkwong.
The library works fine and has tons of useful features, but its output is just a rendered bitmap UIImage.
This means that I need to add the pinch and zoom features to the library myself.
The current implementation, based on dvkwong's preliminary example, uses a UIScrollView to zoom and unzoom the resulting bitmap image added to a simple subview.
This is not a good solution, because zooming makes the aliasing of the bitmap visible, and if the resolution is increased the text becomes unreadable, because the fonts are not optimized for the current zoom resolution.
I need to render the image each time at the correct resolution, without using UIScrollView, by overriding the DrawRect() call in a custom UIView.
But how do I reproduce the pinch-zoom gesture of Apple's UIScrollView and draw the correct sub-rect of the OxyPlot plot model?
I tried to implement this method suggested here:
position the pinched view between the two fingers
But it doesn't work, because I need to know the sub-rect to draw, not apply a transformation matrix. Also, there is no "draw sub-rect" method in the OxyPlot library, so I would have to set a clip rect in the image context, draw a bigger image first and then clip it. This is clearly too slow, because at some zoom levels the image can become huge (and I need the user to be able to zoom indefinitely on any part of the graph).
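To make it concrete, the computation I'm after is roughly this (just a sketch; ZoomSubRect is a hypothetical helper, and wiring it up to UIPinchGestureRecognizer's Scale and LocationInView is assumed). The plot would then be re-rendered at full resolution for the returned rect inside DrawRect():

```csharp
using System.Drawing; // MonoTouch-era RectangleF/PointF/SizeF

static class PinchZoomMath
{
    // Given the currently visible sub-rect of the plot (in plot coordinates),
    // the pinch scale reported by the gesture, and the pinch centre expressed
    // in the view's coordinates, compute the new sub-rect to render.
    public static RectangleF ZoomSubRect(RectangleF visible, float pinchScale,
                                         PointF pinchCentreInView, SizeF viewSize)
    {
        // Map the pinch centre from view space into plot space.
        float focusX = visible.X + (pinchCentreInView.X / viewSize.Width) * visible.Width;
        float focusY = visible.Y + (pinchCentreInView.Y / viewSize.Height) * visible.Height;

        // Zooming in by 'pinchScale' shrinks the visible rect by the same factor.
        float newWidth = visible.Width / pinchScale;
        float newHeight = visible.Height / pinchScale;

        // Keep the point under the fingers fixed on screen.
        float newX = focusX - (pinchCentreInView.X / viewSize.Width) * newWidth;
        float newY = focusY - (pinchCentreInView.Y / viewSize.Height) * newHeight;

        return new RectangleF(newX, newY, newWidth, newHeight);
    }
}
```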
Any help is appreciated.
Thanks in advance.
I solved the problem myself.
I've created another MonoTouch port of the OxyPlot 2D library, this time supporting both Pan & Pinch-Zoom gestures. I've also added iPad support.
Now we have a native C# plot library for MonoTouch.
You can download it under the MIT Licence here:
https://github.com/Emasoft/OxyPlot.2DGraphLib.MonoTouch

WPF-DirectX Interop Problem (D3DImage)

I'm writing a video application utilizing D3DImage. Frames come from memory and are rendered as textures in native code with DirectX 9, finally exposed by D3DImage to the WPF GUI. I have some overlays on top, created with WPF's painting framework (text, shapes, etc.). Up to this point, it works like a charm.
Now I would like to encode the composited image from my underlying native C++ code. The video is 640x480 BGR at 25 FPS and has to be rendered and encoded in parallel, including on older hardware with Windows versions down to XP/SP3.
The problem is, I cannot find any documentation describing the composition process between WPF and D3DImage. They 'blend' in some sense, but what does that mean exactly? And is it possible to get a handle to WPF's part of the drawing, or even the composited image, in my native C++ code?
P.S.: I'm also open to managed solutions, but I haven't found anything sufficiently performant so far.
There is a static event called CompositionTarget.Rendering. Add a handler to it, and every time WPF renders, that handler will be called before WPF presents (the FPS can vary, though). So just update your render target accordingly.
There might be a better way, but I'm not aware of it.
Note: for D3DImage on Windows XP you use a D3D9 device with a lockable render target, while on Vista/7 you use a D3D9Ex device with a non-lockable render target.
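A rough sketch of that pattern, assuming the native side can hand back the IDirect3DSurface9 pointer it renders into (the getNativeSurface delegate and the class name here are made up, not part of D3DImage):

```csharp
using System;
using System.Windows;
using System.Windows.Interop;
using System.Windows.Media;

// Pushes a native D3D9 surface into a D3DImage on every WPF frame.
// The delegate stands in for however the native C++ side exposes the
// IDirect3DSurface9* it renders each video frame into.
public sealed class D3DImagePresenter
{
    readonly D3DImage image;
    readonly Func<IntPtr> getNativeSurface;   // hypothetical interop hook
    IntPtr currentSurface = IntPtr.Zero;

    public D3DImagePresenter(D3DImage image, Func<IntPtr> getNativeSurface)
    {
        this.image = image;
        this.getNativeSurface = getNativeSurface;
        CompositionTarget.Rendering += OnRendering;   // fires once per WPF frame
    }

    void OnRendering(object sender, EventArgs e)
    {
        IntPtr surface = getNativeSurface();
        if (surface == IntPtr.Zero || !image.IsFrontBufferAvailable)
            return;

        image.Lock();
        if (surface != currentSurface)
        {
            // Only rebind when the native render target actually changes.
            image.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surface);
            currentSurface = surface;
        }
        // Tell WPF the whole frame is dirty so it re-composites it.
        image.AddDirtyRect(new Int32Rect(0, 0, image.PixelWidth, image.PixelHeight));
        image.Unlock();
    }
}
```

The D3DImage itself is just assigned to an Image control's Source in XAML as usual.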

Image Application in WPF and Performance

I am planning to build an image processing application using WPF. Brightness/contrast and histogram are the main operations of this application. I have downloaded the sample application "Foundations: Bitmaps and Pixel Bits" from
http://msdn.microsoft.com/en-us/magazine/cc534995.aspx
But when I try to open images larger than 1200x1600, it is very slow. How can I increase the performance? Has anyone worked on image processing in WPF?
Please suggest how to solve this performance issue in WPF for operations on images larger than 1600x1200.
Thank you,
Harsha
After a week searching the net I got some useful information. People are using COM DLLs for all the image-related calculations and then updating the WPF application. Here is the link to the MSDN sample: Custom BitmapEffect Sample - RGBFilter
http://msdn.microsoft.com/en-us/library/ms771475(VS.90).aspx
But the problem with this is that you have to register the COM DLL.
I have also found sample code where registration of the COM DLL is not required.
http://johnmelville.spaces.live.com/cns!79D76793F7B6D5AD!115.entry
I have opened an image of size 3000x3500 and changed the RGB values. It is very smooth.
But I don't understand how the slider in the XAML interacts with this COM DLL, or how to write such a COM DLL.
If anyone understands this code, please explain. It would be very helpful for all.
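For comparison, the same kind of per-pixel RGB change can also be done in purely managed code with WriteableBitmap, without any COM registration. This is only a sketch (the brightness offset is an arbitrary example), not the COM BitmapEffect approach from the sample above:

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static class BrightnessFilter
{
    // Apply a simple brightness offset to every pixel of an image.
    public static BitmapSource AdjustBrightness(BitmapSource source, int offset)
    {
        // Work in a known 32-bit BGRA format so the byte layout is predictable.
        var converted = new FormatConvertedBitmap(source, PixelFormats.Bgra32, null, 0);
        var bitmap = new WriteableBitmap(converted);

        int stride = bitmap.PixelWidth * 4;
        var pixels = new byte[stride * bitmap.PixelHeight];
        bitmap.CopyPixels(pixels, stride, 0);

        for (int i = 0; i < pixels.Length; i += 4)
        {
            // Bytes are B, G, R, A; leave alpha untouched.
            pixels[i]     = (byte)Math.Max(0, Math.Min(255, pixels[i]     + offset));
            pixels[i + 1] = (byte)Math.Max(0, Math.Min(255, pixels[i + 1] + offset));
            pixels[i + 2] = (byte)Math.Max(0, Math.Min(255, pixels[i + 2] + offset));
        }

        bitmap.WritePixels(new Int32Rect(0, 0, bitmap.PixelWidth, bitmap.PixelHeight),
                           pixels, stride, 0);
        return bitmap;
    }
}
```

A slider in XAML would then call something like this with its current value and reassign the result to the Image control's Source.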
Thanks and regards
Harsha

Fast WPF Image Control

I'm looking for an Image control for WPF which can rapidly change images. The built-in WPF one is quite slow for the image sizes I'm using (scaled). I only need about 3 FPS. I have considered dropping to WinForms and even D3D, but I'm not sure that's the best way.
Can anyone suggest something?
WPF's Image control uses the native "Windows Imaging" and Direct3D subsystems of Windows to do all its dirty work, so if used with the right parameters it will be pretty much as fast as anything you will find.
I suspect the problem is that your settings are causing Windows Imaging to load the image at full resolution and then having Direct3D scale it. The solution is to do the scaling as you load the image by setting DecodePixelHeight and DecodePixelWidth on the BitmapImage you are using as an ImageSource.
Another technique that many graphics apps use to speed things up is to preload the images in the background. For example, the Windows picture viewer automatically starts loading the next image as soon as the current image is shown.
If you are preloading images, consider doing it in a separate thread. Also make sure you use BitmapCacheOption.OnLoad when you create the BitmapImage or the preloading won't actually occur (the default is OnDemand).
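Putting both suggestions together, a minimal loading helper might look like this (the path and target width are placeholders):

```csharp
using System;
using System.Windows.Media.Imaging;

static class ScaledImageLoader
{
    // Decode the image at the size it will actually be shown, and do it
    // eagerly so the work can happen on a background thread.
    public static BitmapImage LoadScaled(string path, int targetWidth)
    {
        var image = new BitmapImage();
        image.BeginInit();
        image.UriSource = new Uri(path, UriKind.RelativeOrAbsolute);
        image.DecodePixelWidth = targetWidth;           // scale while decoding
        image.CacheOption = BitmapCacheOption.OnLoad;   // read the file now, not on demand
        image.EndInit();
        image.Freeze();                                 // allow handing it to the UI thread
        return image;
    }
}
```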

Display 360 Image in Silverlight 3.0 (Not Panorama)

I have a lot of images taken from a 360 camera which I would like to be able to display in Silverlight 3. They are NOT regular panorama images. The camera which took the image actually creates a distorted jpeg that becomes undistorted once wrapped around a sphere as a texture. I have desktop software that will allow viewing of the image (not just side-to-side, but straight up, down, etc.) and I need to try to get the same functionality in Silverlight. It is very similar to Google StreetView.
What I think I need is to create a sphere, wrap the jpeg on the sphere as a texture, then put the "camera" inside the sphere. I doubt this is possible in Silverlight, but perhaps there is a way to simulate this?
So far, Google searches aren't bringing anything up. Can anyone point me in the right direction to figure out how to do this? Are there any existing projects that do this?
An example of a typical image is here.
These might help you out (though probably not). They are 3D engines for Silverlight, but they will probably wrap the image around the outside of the sphere instead of the inside, which is what you need.
Kit3D http://www.codeplex.com/Kit3D
Balder http://www.codeplex.com/Balder
Another, possibly more promising, option would be to use JavaScript. So far you've probably researched how to do this in Silverlight, but you might do some similar searching for doing it with JavaScript. There may be an option out there already, and since Silverlight can interop with JavaScript, you might be in luck.
You're going to have to map the texture to a sphere then, like you said. But as far as I know, Silverlight 3 doesn't support hardware-accelerated 3D.
So your options are:
Try to find a Silverlight software 3D library (like this)
Write your own software rasterizer (multi-page guide)
Hope this helps
You might want to try cropping a window from the image and displaying it. If the user wants to go right, move the window right and crop; if the user wants to go left, move the window left and crop. To zoom out, expand the window; to zoom in, make the window smaller. If you move the frame far enough to the right, stitch in the image data from the left side.
You might need to modify the image to eliminate the distortion; this shouldn't be too hard and depends on the camera lens focal length.
Don't try mapping the image to a sphere; it is much harder.
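For what it's worth, the wrap-around part of that idea reduces to a small piece of rectangle math. This is only a sketch under those assumptions (parameter names are made up, and vertical panning and zoom clamping are left out):

```csharp
using System.Windows;

static class PanoramaCrop
{
    // Given the full 360-degree source image size, a pan position in degrees
    // and the window width as a fraction of the full image (smaller = more zoom),
    // return the one or two source rectangles to crop and stitch side by side.
    public static Rect[] CropWindows(double imageWidth, double imageHeight,
                                     double panDegrees, double windowFraction)
    {
        double windowWidth = imageWidth * windowFraction;
        double left = imageWidth * (((panDegrees % 360.0) + 360.0) % 360.0) / 360.0;

        if (left + windowWidth <= imageWidth)
        {
            // The window fits inside the image: a single crop is enough.
            return new[] { new Rect(left, 0, windowWidth, imageHeight) };
        }

        // The window runs off the right edge: take the rest from the left side
        // and stitch the two crops together when drawing.
        double rightPart = imageWidth - left;
        return new[]
        {
            new Rect(left, 0, rightPart, imageHeight),
            new Rect(0, 0, windowWidth - rightPart, imageHeight)
        };
    }
}
```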
At https://hdviewsl.codeplex.com it says that HD View SL (Silverlight version) supports
"orthographic (2D), with wrapping for 360-degree panoramas"
Also, you could try to port the PtViewer source code from Java to Silverlight, if no one else has already.
UPDATE:
VRLight might be the solution in your case:
http://vrlight.thecloudsite.net/
http://vrlight.thecloudsite.net/tutorial.html
http://ivrpa.org/blog/3651/vrlight_vredit_20
Its author (Jurgen Eidt) is also making cPicture (http://cpicture.thecloudsite.net/index.en.html). If you can't reach him through the VRLight site, try the cPicture one, or try his blog on the IVRPA website (http://ivrpa.org/blog/3651), which seems to have recent posts.
