Multisampling (MSAA) for DirectX11/DirectX10 with D3DImage shared resource - wpf

I am trying to get MSAA in DX11 using D3DImage, but it seems this is not possible, since shared multisampled textures are not allowed, as stated here: http://msdn.microsoft.com/en-us/library/windows/desktop/ff476531(v=vs.85).aspx
Actually, I use the SharpDX implementation of D3DImage, which works fine for DX11 and DX10 as long as one can live without anti-aliasing.
Approaches to solving this are described in this thread: http://sharpdx.org/forum/5-api-usage/1000-d3d11-problem-with-usage-of-texture2d but none of them are successful. There is yet another thread asking a similar question: Multisampling and Direct3D10 / D3DImage interop
So, finally, the question: can anyone confirm that it is definitely NOT possible to use D3DImage to render anti-aliased content from DX10/DX11?

As said in the Microsoft link (I tried this several times as well), multisampled shared textures are not allowed (additionally, the texture must have exactly one mip level).
The only way to share the texture is to create a non-multisampled version with the same format/parameters, then use
DeviceContext.ResolveSubresource
in SharpDX to resolve the MSAA texture into the non-multisampled one; you can then share the result of that.
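For reference, here is a minimal sketch of the same idea in native D3D11 (SharpDX's DeviceContext.ResolveSubresource wraps ID3D11DeviceContext::ResolveSubresource); device, context and msaaTexture are assumed to already exist:
#include <d3d11.h>
// Assumes ID3D11Device* device, ID3D11DeviceContext* context and
// ID3D11Texture2D* msaaTexture (the multisampled render target).
D3D11_TEXTURE2D_DESC desc;
msaaTexture->GetDesc(&desc);
desc.SampleDesc.Count = 1;                    // no multisampling
desc.SampleDesc.Quality = 0;
desc.MipLevels = 1;                           // shared resources need exactly one mip level
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;  // make the resolve target shareable
ID3D11Texture2D* resolved = NULL;
device->CreateTexture2D(&desc, NULL, &resolved);
// Collapse the MSAA samples into the single-sampled texture, then
// share `resolved` (via its DXGI shared handle) with D3DImage.
context->ResolveSubresource(resolved, 0, msaaTexture, 0, desc.Format);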

Is CreateCompatibleDC() necessary when working with windows on one display?

This example code manually reads a bitmap file, uses CreateDIBSection() to make GDI allocate memory for it and create an HBITMAP handle, and then uses a memory DC to draw the bitmap to a window DC:
ftp://ftp.oreilly.com/examples/9781572319950/cd_contents/Chap15/DibSect/DibSect.c
hdc = BeginPaint (hwnd, &ps) ;
...
hdcMem = CreateCompatibleDC (hdc) ;
Why can't we use GetDC() with NULL or with hwndDesktop instead? Why can't we cache the device context instead of repeatedly creating it?
If the machine has only one display device and we are only drawing to windows, why do we need to constantly harmonize bitmaps and device contexts? Once the pixel data is copied to the buffer provided by GDI, does GDI update it when that HBITMAP is selected into a DC and drawn on? If the user also wishes to draw on it, is it necessary to synchronize access (by calling GdiFlush() first)?
It's hard to figure this out when almost all of the object properties are opaque and abstracted. I've read almost all of the related MSDN documentation, a lot of Petzold's book, and some articles:
Display Device Contexts
CreateCompatibleDC()
CreateDIBSection()
Memory Device Contexts
Guide to Win32 Memory DC
Guide to WIN32 Paint for Intermediates
Programming Windows®, Fifth Edition
Edit:
I think my question boils down to this:
Is a device context a TYPE of display, or is it an INSTANCE of graphical data that is able to be displayed? A computer typically has only a handful of displays, but it could have hundreds of things to display on them.
GetDC(NULL) returns the screen HDC, and the screen is a shared resource, so you should only do read/query operations on this HDC. Writing to this HDC is not a good idea on Vista and later because of the DWM.
Since an HDC can only contain one bitmap, one brush, and one pen at a time, Windows/applications obviously need more than one HDC provided by the graphics engine.
You can count on CreateCompatibleDC being a relatively cheap operation, and I believe Windows keeps a cache of DCs it can hand out. If you are creating a game/animation-type application you might want to cache some of these graphics objects yourself, but a normal application should not.
You don't generally call GdiFlush unless you are sharing GDI objects across several threads. You can use SetDIBits if you want to mix raw pixel-byte access with GDI.
I don't really get the one-screen argument; Windows has supported multiple monitors since Windows 98, and there is not much you can do to prevent the user from connecting another monitor.
I think your problem is that you are getting hung up on Microsoft's names for things: the name "device context" and the names of calls like CreateCompatibleDC.
"Device Context" is a bad name. The Win32 documentation will tell you that a device context is a data structure for storing the state of a particular device used for rendering graphics commands. This is only partially true. Look at the different kinds of DCs that exist: (1) screen device contexts, (2) printer device contexts, (3) the device context used by a bitmap in memory, and (4) metafile device contexts. Of these only (1) or (2) are actually doing exactly what the documentation claims they are doing. In the other cases device contexts serve as a target for drawing calls but not as containers for the state of some physical device. (This is really noticeably true in the case of metafile DC's: metafiles were an old Win32 thing that basically just cache the GDI calls going in to them to be replayed later, kind of a crude vector format.)
In a hypothetical object-oriented version of Win32, device contexts would be instances of some class that implements an interface exposing graphics drawing calls. A better name for such a class would be something like "Graphics", and indeed in GDI+ that is what the analogous construct is actually called. When we "Create" one -- via CreateDC, CreateCompatibleDC, etc. -- we create one of these objects. When we GetDC, we grab such an object that already exists.
To answer your questions:
Is a device context a TYPE of display or is it an INSTANCE of graphical data that is able to be displayed?
They are in no sense types of displays. You can think of them as instances of a class of objects with private implementations that expose a public interface of drawing commands.
Why can't we use GetDC() with NULL or with hwndDesktop instead?
You can't use GetDC(NULL) as the device context into which you are going to select an in-memory bitmap, because in that situation you need to create a device context that does not already exist; GetDC(NULL) is like a singleton instance that is already in use.
So instead you usually call CreateCompatibleDC(NULL) or CreateCompatibleDC(hdcScreen). Again, CreateCompatibleDC(...) is a confusing name. Imagine the hypothetical object-oriented version of what is going on here. Say there is an IGraphics interface that is implemented by RasterGraphics, PrinterGraphics, and MetafileGraphics, and the RasterGraphics class is used both for the screen and for in-memory bitmaps. Then CreateCompatibleDC(...) would be like a factory call Graphics.CreateFrom(IGraphics g) that returns a new instance of the same concrete type, with perhaps some state variables initialized.
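To make the pattern concrete, here is a minimal sketch of the usual memory-DC idiom (names and sizes are illustrative; the HWND and HBITMAP are assumed to exist already):
#include <windows.h>
// Create a DC "like" the window's, select our bitmap into it, then blit.
void PaintBitmap(HWND hwnd, HBITMAP hbm, int w, int h)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    HDC hdcMem = CreateCompatibleDC(hdc);      // the "factory" call: a new raster DC
    HGDIOBJ old = SelectObject(hdcMem, hbm);   // the memory DC now targets our bitmap
    BitBlt(hdc, 0, 0, w, h, hdcMem, 0, 0, SRCCOPY);  // copy the pixels to the window
    SelectObject(hdcMem, old);                 // restore before deleting
    DeleteDC(hdcMem);
    EndPaint(hwnd, &ps);
}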
Why can't we cache the device context instead of repeatedly creating it?
You can. You do not need to delete device contexts across function calls. The only reason people often do is that they are a shared, finite resource and creating them is cheap. They actually used to be very limited under old versions of Windows, so old Win32 programmers tend not to cache them, out of muscle memory from the Windows 95 days.
If the machine has only one display device and we are only drawing to windows why do we need to constantly harmonize bitmaps and device contexts?
Don't think of the "compatible" in CreateCompatibleDC(...) as being about "harmonizing with the screen"; think of it as meaning "Okay Windows, I want to create one of your graphics interface objects, and I want the kind like this one, i.e. a normal raster-graphics one, not one for printers or metafiles."

How can I take a programmatic screenshot of an OpenGL ES 2.0 scene using GLKit (in iOS 6)?

I've found numerous posts regarding this, but I haven't been able to work out a solution, largely due to the fact that I don't have a very thorough understanding of OpenGL or GLKit.
I added the method described here to my project.
They specifically mention:
Important: You must call glReadPixels before calling
EAGLContext/-presentRenderbuffer: to get defined results unless you're
using a retained back buffer.
I tried unsuccessfully to set up a retained back buffer and given that doing so has 'adverse performance implications' I would rather avoid it.
The problem is, according to a comment in another post:
In GLKit, the GLKView will automatically present itself and discard unneeded renderbuffers at the end of each rendering cycle.
That being the case, how can I call the 'Snapshot' method at the appropriate time when using GLKit?
To date, in iOS 5 I get a weirdly yellow-coloured version of the scene (as though there were no other colours), and in iOS 6 I get a pure white image (I imagine because I am using white as the clear colour).
Further, I have no idea what they (Apple) are talking about in this comment:
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
so I have commented out that call in my app. If it matters, my objects use VBOs with position, texture coordinates, and colour.
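For what it's worth, the core of the snapshot technique is a plain glReadPixels call issued at the end of GLKView's draw callback, while the framebuffer still holds the frame and before the view presents it. A minimal sketch of the readback itself (w, h and an RGBA8 colour buffer are assumptions):
#include <OpenGLES/ES2/gl.h>  // iOS; use your platform's GL ES 2 header
#include <vector>
// Reads back the currently bound framebuffer; must run before the
// renderbuffer is presented/discarded, i.e. at the end of drawing.
std::vector<unsigned char> ReadFramebuffer(int w, int h)
{
    std::vector<unsigned char> pixels(4 * (size_t)w * (size_t)h);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
    // Rows come back bottom-up; flip vertically before building an image.
    return pixels;
}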

EGL_BAD_CONFIG When Creating A PixelBuffer

I am having trouble creating a PixelBuffer in OpenGL ESv2.
If my config specifies EGL_WINDOW_BIT I can successfully call eglCreateContext. However, when using EGL_PBUFFER_BIT I am getting an EGL_BAD_CONFIG.
I am working with an embedded system where I will be calling OpenGL ES 2 to do some GPGPU work. I do not have a windowing system to render to, so I feel that I must use pixel buffers. My rendering calls render directly to an FBO with an attached Texture2D as the color buffer.
I am out of ideas on what's wrong with my configuration or how I can adjust it. Any advice would be great. Thank you.
I ended up writing a function to print out all of my possible configurations. It turns out that even though eglChooseConfig was returning EGL_TRUE, it wasn't returning any configurations at all.
The OpenGL ES emulator apparently does not support pbuffers for OpenGL ES 2, only ES 1.
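A sketch of the kind of check that exposes this, assuming an initialized EGLDisplay (the attribute list is illustrative):
#include <EGL/egl.h>
// EGL_TRUE from eglChooseConfig only means the call succeeded;
// you must also check how many configs actually matched.
EGLConfig ChoosePbufferConfig(EGLDisplay display)
{
    EGLint attribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig config = NULL;
    EGLint numConfigs = 0;
    if (!eglChooseConfig(display, attribs, &config, 1, &numConfigs) || numConfigs == 0) {
        // No matching config: passing the never-filled config on to
        // eglCreateContext is what later yields EGL_BAD_CONFIG.
        return NULL;
    }
    return config;
}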

Automatically tile/arrange windows created with OpenCV (HighGUI)

I'm creating windows for debugging like this:
cvNamedWindow("a",0); cvShowImage("a", imageA);
cvNamedWindow("b",0); cvShowImage("b", imageB);
cvNamedWindow("c",0); cvShowImage("c", imageC);
OpenCV creates all these windows in the exact same spot, which is not very practical, since only one of them is visible unless you move them around.
Can I make OpenCV automatically arrange the windows so that all of them can be seen (like a tiling window manager)? Or do I have to do this myself using cvMoveWindow?
No, this is impossible - there's no such feature in HighGUI (I was also wondering about this functionality a month ago).
You have to set the window positions manually by calling cvMoveWindow - this is the only solution.
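A minimal sketch of doing that by hand, with the tile size and grid layout as assumptions (imageA/imageB/imageC as in the question):
// Tile the debug windows in a simple grid via cvMoveWindow.
const char* names[] = { "a", "b", "c" };
const IplImage* imgs[] = { imageA, imageB, imageC };
const int w = 400, h = 300, cols = 2;  // assumed tile size and grid width
for (int i = 0; i < 3; ++i) {
    cvNamedWindow(names[i], 0);
    cvResizeWindow(names[i], w, h);
    cvMoveWindow(names[i], (i % cols) * w, (i / cols) * h);
    cvShowImage(names[i], imgs[i]);
}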

How to find the overlapping region between images in OpenCV?

I am trying to implement alpha blending of two images for image stitching.
(The first image, the second image, and the result image were attached here; images omitted.)
As you can see, the result is not correct. I think I first have to find the overlapping region between the two images and then apply alpha blending to that overlapping part.
First of all, have you seen the new "stitching" module introduced in OpenCV 2.3?
It provides a set of building blocks for the stitching pipeline, including the blending and "finding an overlap" (i.e. registration) steps. Here is the documentation: http://docs.opencv.org/modules/stitching/doc/stitching.html and an example of a stitching application: stitching_detailed.cpp
I recommend you study the code of this sample for a better understanding of the details.
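As a starting point, later 2.x releases also expose a high-level wrapper around this pipeline; a minimal sketch (file names are illustrative):
#include <opencv2/opencv.hpp>
#include <vector>
int main()
{
    // cv::Stitcher bundles registration, seam finding and blending.
    std::vector<cv::Mat> images;
    images.push_back(cv::imread("left.jpg"));
    images.push_back(cv::imread("right.jpg"));
    cv::Mat pano;
    cv::Stitcher stitcher = cv::Stitcher::createDefault();
    if (stitcher.stitch(images, pano) == cv::Stitcher::OK)
        cv::imwrite("pano.jpg", pano);
    return 0;
}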
Regarding finding the overlap, there are several common approaches in computer vision:
optical flow
template matching
feature matching
For your case I recommend the last one - it works very well on photos. This approach is already implemented in OpenCV - explore the OpenCV source and see how cv::detail::BestOf2NearestMatcher works.
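As an illustration of the feature-matching approach (a sketch, not the library's stitching code), the following matches keypoints between the two images and estimates the homography that describes their overlap; ORB is used as a stand-in detector since SIFT is non-free in some builds:
#include <opencv2/opencv.hpp>
#include <vector>
// Detect keypoints in both images, match their descriptors, and
// estimate the homography mapping image A onto image B (OpenCV 2.4-era API).
cv::Mat FindOverlapHomography(const cv::Mat& a, const cv::Mat& b)
{
    cv::ORB orb;
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    orb(a, cv::Mat(), kpA, descA);
    orb(b, cv::Mat(), kpB, descB);
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(descA, descB, matches);
    std::vector<cv::Point2f> ptsA, ptsB;
    for (size_t i = 0; i < matches.size(); ++i) {
        ptsA.push_back(kpA[matches[i].queryIdx].pt);
        ptsB.push_back(kpB[matches[i].trainIdx].pt);
    }
    // RANSAC rejects bad matches; the homography tells you where
    // image A lands in image B, i.e. the overlapping region.
    return cv::findHomography(ptsA, ptsB, CV_RANSAC);
}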
I think the most common approach is SIFT: find a few keypoints in both images, match them, then warp the images to get your result. See this
Here are explanations about SIFT and panorama stitching.
