Is CreateCompatibleDC() necessary when working with windows on one display? - c

This example code manually reads a bitmap file, uses CreateDIBSection() to make GDI allocate memory for it and create an HBITMAP handle, and then uses a memory DC to draw the bitmap to a window DC:
ftp://ftp.oreilly.com/examples/9781572319950/cd_contents/Chap15/DibSect/DibSect.c
hdc = BeginPaint (hwnd, &ps) ;
...
hdcMem = CreateCompatibleDC (hdc) ;
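For context, the surrounding pattern in the book's example looks roughly like this (a sketch of the usual WM_PAINT flow, not the verbatim DibSect.c code; hbitmap is the HBITMAP returned earlier by CreateDIBSection, and cxDib/cyDib are the dimensions read from the file header):
case WM_PAINT:
    hdc = BeginPaint (hwnd, &ps) ;

    hdcMem = CreateCompatibleDC (hdc) ;      // memory DC to hold the DIB section
    SelectObject (hdcMem, hbitmap) ;         // make hbitmap the DC's drawing surface
    BitBlt (hdc, 0, 0, cxDib, cyDib,         // copy the pixels to the window DC
            hdcMem, 0, 0, SRCCOPY) ;
    DeleteDC (hdcMem) ;

    EndPaint (hwnd, &ps) ;
    return 0 ;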
Why can't we use GetDC() with NULL or with hwndDesktop instead? Why can't we cache the device context instead of repeatedly creating it?
If the machine has only one display device and we are only drawing to windows, why do we need to constantly harmonize bitmaps and device contexts? Once the pixel data is copied to the buffer provided by GDI, does GDI update it when that HBITMAP is selected into a DC and drawn on? If the user also wishes to draw on it, is it necessary to synchronize access (by calling GdiFlush() first)?
It's hard to figure this out when almost all of the object properties are opaque and abstracted. I've read almost all of the related MSDN documentation, a lot of Petzold's book, and some articles:
Display Device Contexts
CreateCompatibleDC()
CreateDIBSection()
Memory Device Contexts
Guide to Win32 Memory DC
Guide to WIN32 Paint for Intermediates
Programming Windows®, Fifth Edition
Edit:
I think my question boils down to this:
Is a device context a TYPE of display, or is it an INSTANCE of graphical data that is able to be displayed? A computer typically has only a handful of displays, but it could have hundreds of things to display on them.

GetDC(NULL) is the screen HDC and the screen is a shared resource, therefore you should only do read/query operations on this HDC. Writing to this HDC is not a good idea on Vista and higher because of the DWM.
Since an HDC can only contain one bitmap, one brush, and one pen at a time, Windows/applications obviously need more than one HDC provided by the graphics engine.
You can count on CreateCompatibleDC being a relatively cheap operation, and I believe Windows keeps a cache of DCs it can hand out. If you are creating a game/animation type application you might want to cache some of these graphics objects on your own, but a normal application should not.
You don't generally call GdiFlush unless you are sharing GDI objects across several threads. You can use SetDIBits if you want to mix raw pixel-byte access and GDI.
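For example, a minimal sketch of the SetDIBits route (the buffer size, format, and names like pixels/bmi/hbm are just placeholders): fill a raw pixel buffer yourself, then let GDI copy it into an HBITMAP that you can select into a memory DC and use with normal GDI calls.
BITMAPINFO bmi = {0} ;
bmi.bmiHeader.biSize        = sizeof (BITMAPINFOHEADER) ;
bmi.bmiHeader.biWidth       = 64 ;
bmi.bmiHeader.biHeight      = -64 ;          // negative height = top-down rows
bmi.bmiHeader.biPlanes      = 1 ;
bmi.bmiHeader.biBitCount    = 32 ;
bmi.bmiHeader.biCompression = BI_RGB ;

DWORD pixels[64 * 64] ;                      // raw 32-bit BGRA values you computed yourself
/* ... fill pixels[] ... */

HDC     hdc = GetDC (hwnd) ;
HBITMAP hbm = CreateCompatibleBitmap (hdc, 64, 64) ;
SetDIBits (hdc, hbm, 0, 64, pixels, &bmi, DIB_RGB_COLORS) ;   // push the raw bytes into the bitmap
ReleaseDC (hwnd, hdc) ;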
I don't really get the one-screen argument; Windows has supported multiple monitors since Windows 98, and there is not much you can do to prevent the user from connecting another monitor.

I think your problem is that you are getting hung up on Microsoft's names for things: the name "device context" and the names of calls like CreateCompatibleDC.
"Device Context" is a bad name. The Win32 documentation will tell you that a device context is a data structure for storing the state of a particular device used for rendering graphics commands. This is only partially true. Look at the different kinds of DCs that exist: (1) screen device contexts, (2) printer device contexts, (3) the device context used by a bitmap in memory, and (4) metafile device contexts. Of these, only (1) and (2) are actually doing exactly what the documentation claims. In the other cases, device contexts serve as a target for drawing calls but not as containers for the state of some physical device. (This is especially noticeable with metafile DCs: metafiles were an old Win32 thing that basically just caches the GDI calls going into them to be replayed later, a kind of crude vector format.)
In a hypothetical object oriented programming version of Win32, device contexts could be instances of some class that implements an interface that exposes graphics drawing calls. A better name for such a class would be something like "Graphics" and indeed in GDI+ this is what the analogous construct is actually called. When we "Create" -- via CreateDC, CreateCompatibleDC, etc. -- we create one of these objects. When we GetDC we grab such an object that already exists.
To answer your questions:
Is a device context a TYPE of display or is it an INSTANCE of graphical data that is able to be displayed?
They are in no sense types of displays. You can think of them as instances of a class of objects with private implementations that expose a public interface of drawing commands.
Why can't we use GetDC() with NULL or with hwndDesktop instead?
You can't use GetDC(NULL) as the device context into which you are going to select an in-memory bitmap, because in that situation you need to create a device context that does not already exist; GetDC(NULL) is like a singleton instance that is already in use.
So instead you usually CreateCompatibleDC(NULL) or CreateCompatibleDC(hdcScreen). Again, CreateCompatibleDC(...) is a confusing name. Imagine the hypothetical object-oriented version of what is going on here. Say there is an IGraphics interface that is implemented by RasterGraphics, PrinterGraphics, and MetafileGraphics, and imagine the RasterGraphics class is used for both the screen and for in-memory bitmaps. Then CreateCompatibleDC(...) would be like a factory call Graphics.CreateFrom(IGraphics g) that returns a new instance of the same concrete type, perhaps with some state variables initialized.
Why can't we cache the device context instead of repeatedly creating it?
You can. You do not need to delete device contexts across function calls. The only reason people often do is that they are a shared, finite resource and creating them is cheap. I think they actually used to be very limited under old versions of Windows, so old Win32 programmers tend not to cache them, out of muscle memory from the Windows 95 days.
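For instance, a sketch of that caching (assuming the HBITMAP has already been created, and that hdcMem, hbitmap, cxDib, and cyDib live in statics or per-window data): create the memory DC once, reuse it in every WM_PAINT, and destroy it when the window goes away.
case WM_CREATE:
    hdcMem = CreateCompatibleDC (NULL) ;     // NULL is fine: "a raster DC like the screen's"
    SelectObject (hdcMem, hbitmap) ;
    return 0 ;

case WM_PAINT:
    hdc = BeginPaint (hwnd, &ps) ;
    BitBlt (hdc, 0, 0, cxDib, cyDib, hdcMem, 0, 0, SRCCOPY) ;
    EndPaint (hwnd, &ps) ;
    return 0 ;

case WM_DESTROY:
    DeleteDC (hdcMem) ;
    PostQuitMessage (0) ;
    return 0 ;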
If the machine has only one display device and we are only drawing to windows why do we need to constantly harmonize bitmaps and device contexts?
Don't think of the "compatible" in CreateCompatibleDC(...) as being about "harmonizing with the screen"; think of it as meaning "Okay Windows, I want to create one of your graphics interface objects, and I want the kind like this one, which is a normal raster-graphics one and not one for printers or for metafiles."

Related

Multisampling (MSAA) for DirectX11/DirectX10 with D3DImage shared resource

I am trying to get MSAA in DX11 using D3DImage, but it seems it is not possible, since shared multisampled textures are not allowed, as stated here: http://msdn.microsoft.com/en-us/library/windows/desktop/ff476531(v=vs.85).aspx
Actually, I use the SharpDX implementation of the D3DImage, which works fine for DX11 and DX10 as long as one can live without anti-aliasing.
Approaches to solving it are described in this thread: http://sharpdx.org/forum/5-api-usage/1000-d3d11-problem-with-usage-of-texture2d but they were not successful. There is yet another thread asking a similar question: Multisampling and Direct3D10 / D3DImage interop
Finally, the question is: can anyone confirm that it is definitely NOT possible to use D3DImage to render anti-aliased content from DX10/DX11?
As stated in the Microsoft link (and confirmed by several attempts here as well), multisampled shared textures are not allowed (additionally, the texture must have no mip levels).
The only way to share the texture is to create a non-multisampled version (same format/parameters), then use
DeviceContext.ResolveSubresource
in SharpDX to convert the MSAA texture into a non-multisampled one; then you can share the result of that.
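In native terms, the SharpDX call above wraps ID3D11DeviceContext::ResolveSubresource. A minimal C-style sketch of that step (assuming COBJMACROS, an existing device/context, and an MSAA render target; error handling and the D3DImage-specific format requirements are omitted):
#define COBJMACROS
#include <d3d11.h>

ID3D11Texture2D *CreateSharedResolveTarget(ID3D11Device *device,
                                           ID3D11DeviceContext *ctx,
                                           ID3D11Texture2D *msaaTex)
{
    D3D11_TEXTURE2D_DESC desc;
    ID3D11Texture2D *resolved = NULL;

    /* Same format/parameters as the MSAA texture, but no multisampling,
       a single mip level, and marked shareable. */
    ID3D11Texture2D_GetDesc(msaaTex, &desc);
    desc.SampleDesc.Count   = 1;
    desc.SampleDesc.Quality = 0;
    desc.MipLevels          = 1;
    desc.MiscFlags          = D3D11_RESOURCE_MISC_SHARED;

    if (FAILED(ID3D11Device_CreateTexture2D(device, &desc, NULL, &resolved)))
        return NULL;

    /* Copy the multisampled image into the plain texture; repeat this after
       each frame you render, then hand the resolved texture to D3DImage. */
    ID3D11DeviceContext_ResolveSubresource(ctx,
        (ID3D11Resource *)resolved, 0,
        (ID3D11Resource *)msaaTex, 0,
        desc.Format);

    return resolved;
}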

Identifying that a resolution is virtual on an X11 screen via its API (or extensions)

I'm working on an embedded application on Linux that can be used with different PC hardware (displays, specifically).
This application should set the environment to the highest allowed resolution (obtained via the function XRRSizes from libXrandr).
The problem is: with some hardware, trying to set the highest option creates a virtual desktop, i.e. a desktop where the real resolution is smaller and you have to scroll with the mouse at the edges of the screen to access all of it.
Is there a way to detect within Xlib (or one of its siblings) that I am working with a virtual resolution (in other words, that the resize didn't go as expected)?
Hints for a work around for this situation would also be appreciated...
Thanks
Read this: http://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt
You need to learn the difference between "screen", "output" and "crtc". You need to check the modes available for each of the outputs you want to use, and then properly set the modes you want on the CRTCs, associate the CRTCs with the outputs, and then make the screen size fit the values you set on each output.
Take a look at the xrandr source code for examples: http://cgit.freedesktop.org/xorg/app/xrandr/tree/xrandr.c
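As one concrete heuristic for the original question (a sketch only: it ignores per-CRTC panning, assumes RandR 1.2+, and links with -lX11 -lXrandr): if the logical screen is larger than the area actually covered by the active CRTCs, the remaining area can only be reached by scrolling, i.e. the resolution is virtual.
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    Window root = DefaultRootWindow(dpy);
    int screen_w = DisplayWidth(dpy, DefaultScreen(dpy));
    int screen_h = DisplayHeight(dpy, DefaultScreen(dpy));

    XRRScreenResources *res = XRRGetScreenResources(dpy, root);
    int covered_w = 0, covered_h = 0;

    /* Compute the bounding box of every CRTC that has a mode set. */
    for (int i = 0; i < res->ncrtc; ++i) {
        XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, res->crtcs[i]);
        if (crtc->mode != None) {
            if ((int)(crtc->x + crtc->width)  > covered_w) covered_w = crtc->x + crtc->width;
            if ((int)(crtc->y + crtc->height) > covered_h) covered_h = crtc->y + crtc->height;
        }
        XRRFreeCrtcInfo(crtc);
    }
    XRRFreeScreenResources(res);

    printf("screen %dx%d, covered %dx%d -> %s\n",
           screen_w, screen_h, covered_w, covered_h,
           (covered_w < screen_w || covered_h < screen_h) ? "virtual (panning)" : "real");

    XCloseDisplay(dpy);
    return 0;
}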

Performance issue with "Measure"

I'm encountering a performance issue in my application. Essentially I click a button, and a list is populated with databound data (this is virtualized because of the large amount of data), and then click another button which will add a row to an associated list view. I'm being vague because I think it's just necessary to illustrate how little is actually going on in the UI.
Here's what I know:
I don't see the issue on my beefy dev computer running Win 7 Pro, nor on XP SP3 machines with decent specs. I only see it on a certain brand of laptop (Lenovo ThinkPads) running Win 7 Enterprise with 4 GB RAM and a Core i5 CPU (much beefier than the XP desktops).
Because of the aforementioned findings, I'm not thinking this is an issue with code.
I profiled with Microsoft's PerfView tool and noticed what I would assume to be an incredibly large number of calls to UIElement.Measure (not ever invoked directly by our code), something I don't see when I profile on the other machines.
The laptop has a 1360x780 resolution, so I thought that perhaps the small resolution was causing the GPU to unnecessarily render the controls because of some data binding that I might be doing (which might explain the large number of calls to Measure()). I extended the laptop's display to my 24" monitor and didn't see any improvement.
Right now I'm assuming that the issue is with the GPU. I've updated the driver with no improvements.
Even though I don't think it's an issue with code, is there a WPF equivalent to "SuspendLayout()"?
Is there a way to profile GPU performance to see if it is being hammered during certain processes?
(Long shot) Has anyone had similar performance issues that seem to be computer-specific, and suggestions on how to track them down?
Sorry if this is a vague question. I tried to make it comply with SO's usage reqs. Let me know if you want any more info.
Just as an addendum: The program is using WPF, C# 4.0, the issue seems to be around Telerik controls (though I don't think they're suspect since we use them elsewhere without issue).
Turns out it's caused by a known Microsoft issue. I’d try to explain, but I won’t. Mainly because I can’t.
Article talking about fix (see post by Viðar on 3 August 2010):
Microsoft Hotfix site: http://support.microsoft.com/kb/2484841/en-us
Fix: http://archive.msdn.microsoft.com/KB2484841/Release/ProjectReleases.aspx?ReleaseId=5583
1. Answer
To prevent rampant MeasureOverride calls originating from the WPF ContextLayoutManager:
protected override void OnChildDesiredSizeChanged(UIElement el)
{
    /* base.OnChildDesiredSizeChanged(el); */  // avoid rampant remeasuring
}
2. Relevant citation
UIElement.OnChildDesiredSizeChanged(UIElement) Method   ...
The OnChildDesiredSizeChanged(UIElement) method has the default implementation of calling InvalidateMeasure() on itself. A typical implementation would be: do whatever optimization your own element supports, and then typically call base OnChildDesiredSizeChanged(UIElement) from a̲t̲ l̲e̲a̲s̲t̲ o̲n̲e̲ of the code branches...
...the implication (and fact-of-the-matter) being that, for any single parent layout pass originated by any one of its children, the parent's MeasureOverride will be called additionally—and likely extraneously—once for each of its children whose size(s) have changed as well.
3. Discussion
In the case where multiple children change their sizes "at the same time", the parent will typically detect and account for the new overall layout amongst all of its children entirely during just the first of these calls. This is standard practice in WPF, and is encouraged by MeasureOverride(…) deliberately excluding any indication of some specific triggering child. Besides the fact that in the most common cases there is no such child (see above link for details), it makes code for attempting any sort of "partial layout" onerous. But mostly, why would any layout calculation ever want to proceed without first obtaining all the very latest available measurements anyway?
So we see that after a single MeasureOverride call, triggered by whichever child happened to be "first" (it shouldn't matter), the layout of the parent should actually be final regarding all of its latest child size information. But this doesn't mean that any queued OnChildDesiredSizeChanged notifications—for other children whose size(s) had also changed—have gone away. Those calls are still pending with the parent, and unless the virtual base call is explicitly abandoned (as shown in bullet #1), each will generate one additional, now-extraneous MeasureOverride call.
4. Caveat
The code shown here disables a̲l̲l̲ child-initiated measure invalidations, which is appropriate for cases where the parent either willfully forbids such changes, or is inherently already aware of them. This is not uncommon; for example, it includes any parent that always fully determines and enforces the size of its children, or more generally, any parent that only adopts DesiredSize values from its children during its own measure pass. What's important is that the measuring of the parent be sufficiently guaranteed by its own parent only.
The situation where a parent might wish to cancel some of the child-initiated measure notifications while preserving/allowing others will naturally depend on additional particular circumstances. It seems more obscure, so it is not addressed here.

How to handle data from an external program on Mac OSx

I would like to make a program (preferably in C, but even in Cocoa) that can take data from an external program (such as iTunes or Adium) and use it. For example, I would like to take the data of a list box or the text of a chat so as to manipulate it. I need a place to start. In Windows I think it is possible with some APIs that find the hWnd of a window and then get a pointer to the list box or text box. Please give me some info on how to start. Thank you in advance.
It's not clear exactly what you want to do. It's either impossible or severely restricted.
For one thing, different applications use different ways of constructing a “listbox”—Cocoa applications use NSTableView, Carbon applications use DataBrowser, and GTK, Qt, and Java applications use even more different APIs. These do not all go through some common kind of list box thingy; each is an independent implementation.
(You could hope that either NSTableView or DataBrowser would be based on the other, but don't count on it.)
For another, it is impossible to obtain a pointer to that control. You cannot access another application's NSTableView or DataBrowser view or GTK/Qt/Java equivalent unless (and this only works for NSTableView) that application deliberately serves it up to you. It doesn't sound like that's your situation.
The closest you can get to that is Accessibility, which may be pretty close, but is unlikely to work with most applications not based on Cocoa.
Even then, the view may not be showing you all the data. A table view may be lazily populated, and a table view designed in imitation of the iOS UITableView may even never have all the data (because it only has what it can show).
(All of the above applies to every kind of view, not just table views. Collection views, text fields, buttons—same deal for all of them.)
The only way to get at the true, complete copy of the data is to ask the controller that owns it. And, again, that's impossible if the application is not specifically offering it to you. Not to mention, the application might not even have a controller (not object-oriented, not MVC, or just sloppily made).
… so as to manipulate it.
Getting the data in the first place is the easy part. It is nigh-impossible to mess with data in another application—for good reason.
The closest you're going to get to either of these goals is the Accessibility interfaces.
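Since Accessibility is the route being recommended, here is a minimal C sketch of what that looks like (compile with -framework ApplicationServices; the pid is a placeholder, and the calling process must be granted Accessibility/Universal Access permission or the calls return nothing): it asks another process for the title of its focused window.
#include <stdio.h>
#include <unistd.h>
#include <ApplicationServices/ApplicationServices.h>

int main(void)
{
    pid_t pid = 12345;                 /* placeholder: pid of iTunes, Adium, ... */

    if (!AXIsProcessTrusted()) {
        fprintf(stderr, "enable Accessibility access for this tool first\n");
        return 1;
    }

    AXUIElementRef app = AXUIElementCreateApplication(pid);
    CFTypeRef window = NULL, title = NULL;

    /* Walk from the application element to its focused window, then read the
       window's title attribute. Other attributes (kAXChildrenAttribute,
       kAXValueAttribute, ...) let you dig further into the UI tree. */
    if (AXUIElementCopyAttributeValue(app, kAXFocusedWindowAttribute, &window) == kAXErrorSuccess &&
        AXUIElementCopyAttributeValue((AXUIElementRef)window, kAXTitleAttribute, &title) == kAXErrorSuccess) {
        CFShow(title);
        CFRelease(title);
    }

    if (window) CFRelease(window);
    CFRelease(app);
    return 0;
}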

Are there any good software development "patterns" for memory intensive .net programs?

Basically I'm working on a program that processes a lot of large video and image files, and I'm struggling with the memory management side of it because I've never dealt with anything quite like this before.
For instance, it stores all these images in a database, and loads a list of videos, and then you can switch between the videos and view images from the video. Right now, it's keeping all of those images in memory all the time, which is eating up a lot of space. I know I can lazy load the images, but once you've switched back and forth you get all of them stuck in memory.
I want to take advantage of the WPF databinding functionality and MVVM as much as possible, but if I need to look at a different architecture I will.
I'm just looking for general advice, tips, links to articles, or anything that could help.
One of the things you could look at is data virtualization, which is not provided in WPF by default (they provide UI virtualization instead). Data virtualization can say 'load and bind the data for an item / range of items while visible, then unload when not visible'.
Here's a great article that describes a concrete implementation that you may be able to use as-is or adapt:
http://www.codeproject.com/KB/WPF/WpfDataVirtualization.aspx
It sounds like the main problem you're having is not so much the performance-intensiveness of the application (which things like fixed-size buffers and static allocation will help with) but its overall memory footprint. The way to control that is with virtualization.
Lazy loading gets you halfway there: you don't actually create the object until something needs it. That's fine, but the longer the user works with the application and the more objects he visits in the UI, the more objects get created, and eventually the application runs out of memory.
So you want to throw away objects that the user doesn't need anymore. Figuring out which objects the user doesn't need can be a hard problem, but it can also be as easy as assuming that the user doesn't need the object that he used least recently. You use a least-recently-used (LRU) cache to do this.
This is totally consistent with the MVVM pattern. In your view class, you make your property getter for the object use this pseudocode:
if object hasn't been loaded
    load object
add object to the LRU cache (whether you loaded it or not)
return object
The LRU cache I wrote keeps a simple queue of the objects it contains. When you add an object to the cache, if it's not already in the queue it gets added to the back, and if it is already in the queue it gets moved to the back.
If the queue's at its capacity when you add an object, it pops off whatever is at the front of the queue (which is the one that was used least recently) and raises the DiscardingOldestItem event.
This event is the object's chance to tell anything that holds a reference to it (i.e. the view object that it's a property of) that it needs to be discarded (probably by raising an event of its own). The view object's event handler should first raise the PropertyChanged event. If the property getter gets called when it does this, there's a binding somewhere that's still looking at the property, so it shouldn't be discarded yet. (Also, since the getter was called, the object just got moved to the back of the queue.) Otherwise, it can be thrown away.
(Note that if you have more objects visible in the UI than the cache can hold, this little dance becomes an infinite loop and you'll get a stack overflow.)
A more sophisticated approach would have the LRU cache start discarding old items when the application started running low on memory (it uses a fixed capacity right now). That's a straightforward change, but if you make that change, the scenario described in the previous paragraph is something you need to give more thought to; one very large object could result in the whole UI going kablooey.
It seems that to increase raw performance you would actually want to avoid patterns. They have their uses, don't get me wrong, but if you're trying to blast video at the highest performance possible, the last thing you need is to introduce abstraction layers that are designed to help you write higher-quality code, not to increase application performance.
This article on InformIT has a lot of good info on the subject, although it is oriented more toward C and C++.
It suggests the following patterns (the first two are sketched in C after this list):
Static Allocation Pattern: Allocates memory up front
Pool Allocation Pattern: Preallocates pools of needed objects
Fixed Sized Buffer Pattern: Allocates memory in same-sized blocks
Smart Pointer Pattern: Makes pointers reliable
Garbage Collection Pattern: Automatically reclaims lost memory
Garbage Compactor Pattern: Automatically defragments and reclaims memory
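To make the first two of those patterns concrete, here is a tiny C sketch in the spirit of that article (sizes and names are made up for illustration): all frame memory is reserved up front, and "allocating" a frame is just handing out a free slot, so nothing is allocated or fragmented while video is being processed.
#include <stddef.h>

#define FRAME_SIZE  (640 * 480 * 4)   /* bytes per frame, chosen for the example */
#define POOL_FRAMES 8                 /* how many frames live in the pool */

static unsigned char pool[POOL_FRAMES][FRAME_SIZE];   /* static allocation pattern */
static int           in_use[POOL_FRAMES];

/* Pool allocation pattern: grab a free slot, or NULL if the pool is exhausted. */
unsigned char *frame_alloc(void)
{
    for (int i = 0; i < POOL_FRAMES; ++i) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return pool[i];
        }
    }
    return NULL;
}

void frame_free(unsigned char *frame)
{
    for (int i = 0; i < POOL_FRAMES; ++i) {
        if (pool[i] == frame) {
            in_use[i] = 0;
            return;
        }
    }
}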
"I know I can lazy load the images,
but once you've switched back and
forth you get all of them stuck in
memory."
This is not true to my understanding. The images can get garbage collected just like anything else, by removing all references. Are you sure you don't have a reference to them somewhere? Try a memory profiler like memprofiler or ANTS to see what's happening.
To those who have found this question looking for general patterns (not WPF) to reduce memory, the famous one (which I have never seen used!) is The Flyweight pattern
