Is there any functional difference between creating a GDI pattern brush from a bitmap and then filling a rect with that brush, versus blitting directly from a device-independent bitmap (DIB)?
For clarification, what I mean by the first scenario is creating a pattern brush from a bitmap, then filling the entire screen with a PatBlt using PATCOPY. Certainly, blitting directly from the source bitmap using BitBlt seems a lot more efficient, but I'm not sure whether the two are functionally the same. (Very new to Windows, so sorry if this is a little vague or hard to understand.)
Method 1: Create a pattern brush, select it into the DC, and use PatBlt with PATCOPY.
Method 2: Select a DIB section into a memory DC and use BitBlt.
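For concreteness, here is a minimal sketch of both methods, written as C# P/Invoke for illustration (the same GDI calls apply directly from C/C++); hdc, hbmPattern, hbmSource, width, and height are assumed to already exist:

using System;
using System.Runtime.InteropServices;

static class GdiBlitSketch
{
    const uint PATCOPY = 0x00F00021; // raster op: copy pattern to destination
    const uint SRCCOPY = 0x00CC0020; // raster op: copy source to destination

    [DllImport("gdi32.dll")] static extern IntPtr CreatePatternBrush(IntPtr hbm);
    [DllImport("gdi32.dll")] static extern IntPtr SelectObject(IntPtr hdc, IntPtr h);
    [DllImport("gdi32.dll")] static extern bool DeleteObject(IntPtr h);
    [DllImport("gdi32.dll")] static extern bool PatBlt(IntPtr hdc, int x, int y, int w, int h, uint rop);
    [DllImport("gdi32.dll")] static extern IntPtr CreateCompatibleDC(IntPtr hdc);
    [DllImport("gdi32.dll")] static extern bool DeleteDC(IntPtr hdc);
    [DllImport("gdi32.dll")] static extern bool BitBlt(IntPtr hdcDest, int x, int y, int w, int h,
                                                       IntPtr hdcSrc, int xSrc, int ySrc, uint rop);

    // Method 1: build a pattern brush from the bitmap and fill; GDI tiles it for you.
    static void FillWithPatternBrush(IntPtr hdc, IntPtr hbmPattern, int width, int height)
    {
        IntPtr brush = CreatePatternBrush(hbmPattern);
        IntPtr old = SelectObject(hdc, brush);
        PatBlt(hdc, 0, 0, width, height, PATCOPY);
        SelectObject(hdc, old);
        DeleteObject(brush);
    }

    // Method 2: select the bitmap into a memory DC and blit it once (no tiling).
    static void BlitFromMemoryDC(IntPtr hdc, IntPtr hbmSource, int width, int height)
    {
        IntPtr memDC = CreateCompatibleDC(hdc);
        IntPtr old = SelectObject(memDC, hbmSource);
        BitBlt(hdc, 0, 0, width, height, memDC, 0, 0, SRCCOPY);
        SelectObject(memDC, old);
        DeleteDC(memDC);
    }
}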
The main differences between these methods are:
Method 1 will tile the image for you if the destination rectangle is larger than the source. With Method 2, you'd have to call BitBlt repeatedly.
With Method 2, you have to create and manage a memory DC.
In terms of performance, they are probably about the same on modern versions of Windows. The mapping of the DIB colors to the destination's color format happens just once, when the bitmap is selected into the DC. Given enough memory on the card, the image should be transferred over the graphics bus just once. Both methods probably have optimized paths for special cases.
With PatBlt, you can re-use a monochrome pattern brush and set different colors, just by changing the text and background colors in the DC. With BitBlt, you'd have to update the bitmap in the memory DC first.
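A short sketch of that recoloring trick, again as C# P/Invoke (hdc is assumed to already have a monochrome pattern brush selected; note that GDI maps a monochrome bitmap's 0 bits to the DC's text color and its 1 bits to the background color):

using System;
using System.Runtime.InteropServices;

static class MonoBrushRecolor
{
    const uint PATCOPY = 0x00F00021;

    [DllImport("gdi32.dll")] static extern uint SetTextColor(IntPtr hdc, uint color);
    [DllImport("gdi32.dll")] static extern uint SetBkColor(IntPtr hdc, uint color);
    [DllImport("gdi32.dll")] static extern bool PatBlt(IntPtr hdc, int x, int y, int w, int h, uint rop);

    // COLORREF is laid out as 0x00BBGGRR.
    static uint Rgb(byte r, byte g, byte b) => (uint)(r | (g << 8) | (b << 16));

    // Tile the same monochrome brush twice in two different color schemes.
    static void TileInTwoColorSchemes(IntPtr hdc, int w, int h)
    {
        SetTextColor(hdc, Rgb(255, 0, 0));   // 0 bits of the pattern -> red
        SetBkColor(hdc, Rgb(255, 255, 255)); // 1 bits of the pattern -> white
        PatBlt(hdc, 0, 0, w, h / 2, PATCOPY);

        SetTextColor(hdc, Rgb(0, 0, 255));   // same brush, now blue on gray
        SetBkColor(hdc, Rgb(192, 192, 192));
        PatBlt(hdc, 0, h / 2, w, h / 2, PATCOPY);
    }
}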
If I recall correctly, in the old days, pattern brushes were limited in size to something pretty small (like hatch brushes). Pattern brushes were often monochrome (1 bit per pixel) and used to fill backgrounds by setting the text and background colors and quickly tiling them with PatBlt.
Recently, while working on a project, I was introduced to sprites stored as byte arrays.
Unfortunately, I haven't been able to find any information that explains what a sprite is and how it works.
I'd really appreciate any information and examples about sprites.
A sprite is basically an image with a transparent background color or alpha channel which can be positioned on the screen and moved (usually involving redrawing the background over the old position). In the case of an animated sprite, the sprite may consist of several actual images making up the frames of the animation. The format of the image depends entirely on the hardware and/or technology being used to draw or render it. For speed, the dimensions are usually powers of two (8, 16, 32, 64, etc.), but this may not be necessary on modern hardware.
Traditionally (read: back in my day), you might have a 320x200x256 screen resolution and a 16x16x256 sprite with color 0 being transparent. Each refresh of the screen would begin with redrawing the background under the sprites, taking a copy of the background under their new positions, and then redrawing only the visible colors of each sprite in its new position.
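A minimal sketch of that per-pixel, color-key style of sprite drawing over an 8-bit byte-array framebuffer (all names here are hypothetical; saving and restoring the background under the sprite is omitted for brevity):

// Copy only the non-transparent pixels of an 8bpp sprite into an 8bpp screen buffer.
static void DrawSprite(byte[] screen, int screenWidth,
                       byte[] sprite, int spriteWidth, int spriteHeight,
                       int destX, int destY)
{
    for (int y = 0; y < spriteHeight; y++)
    {
        for (int x = 0; x < spriteWidth; x++)
        {
            byte color = sprite[y * spriteWidth + x];
            if (color != 0) // palette index 0 is the transparent color
                screen[(destY + y) * screenWidth + (destX + x)] = color;
        }
    }
}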
With modern hardware, however, it is more efficient to pass data in a format that the driver can handle (hopefully in the graphics accelerator) rather than do everything by hand.
I don't fully understand the basics of bit-blitting bitmaps.
I'm using the WriteableBitmapEx framework (WPF). My bitmap represents a map, and what I want to achieve is to copy a (moving) symbol onto that map.
For the actual copying, I use the Blit function:
_bitmap.Blit(myObject.Value.Location.ToWindowsPoint(), symbol, rect, Colors.Cyan,
WriteableBitmapExtensions.BlendMode.Additive);
where symbol is a PNG image (transparent background).
This works in principle, but I do not understand how the color (Colors.Cyan) is applied by the blend mode. I've tried all the available blend modes, but I've not succeeded in getting cyan as the color of the symbol; either that, or I got the color but the transparent background was also copied to the destination bitmap (as a black background).
Is blitting the wrong approach for my use case?
Thanks.
A much easier approach is to use an Image (the corresponding WPF UI element) and layer it above the bitmap. This also has the advantage that you can move the image without redrawing the bitmap at all.
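A minimal sketch of that idea, assuming a Canvas named canvas already hosts the map and symbolUri points at your PNG (both names are placeholders):

using System.Windows.Controls;
using System.Windows.Media.Imaging;

// Add the symbol as its own element layered above the map; PNG alpha is respected.
var symbolImage = new Image
{
    Source = new BitmapImage(symbolUri),
    Width = 32,
    Height = 32
};
canvas.Children.Add(symbolImage);

// Moving the symbol is just a property change; the map bitmap is never touched.
Canvas.SetLeft(symbolImage, newPosition.X);
Canvas.SetTop(symbolImage, newPosition.Y);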
The System.Drawing.Graphics class has a CompositingMode property with two options: SourceOver (which blends whatever is drawn with the existing background, based on the alpha component) and SourceCopy (which simply overwrites the background with whatever is being drawn).
Does something similar exist in WPF?
In WPF, when I draw a Polyline, for example, on top of another one, the new Polyline always alpha-blends with the background. I think that is independent of the container being used. I am using a Canvas but could not find a blend-mode property anywhere. What I want is what the SourceCopy compositing mode mentioned above does, i.e. the new Polyline should simply overwrite whatever is already on the Canvas.
Is there a simple way to do that, short of using pixel shaders (which, as far as I understand, wouldn't work anyway because I don't have access to the Canvas back buffer)?
I am not stuck with a Canvas and would be happy to use any container that supports overwrite mode.
I currently have a solution based on a WriteableBitmap, for which I obtain a System.Drawing.Graphics context and then manipulate the CompositingMode. It works, but since my window is fullscreen, that solution has a serious performance impact.
Clarification and example:
The WPF window is fully transparent and so is the Canvas (background color (0,0,0,0)). Now I draw a Polyline with Color.FromArgb(128, 128, 0, 0). I now have a semi-transparent red polyline. Next I draw the same Polyline with Color.FromArgb(0, 0, 0, 0). The result is the same as before because of the alpha blending taking place: with source-over blending, result = srcAlpha * src + (1 - srcAlpha) * dst, so drawing with srcAlpha = 0 leaves the destination unchanged. What I want, however, is for the red polyline to be erased by the second polyline (which is exactly what the SourceCopy mode in the Graphics class does).
I think all you need to do is make sure that the brushes used to fill/stroke the Polyline have fully opaque alpha values (i.e. 255). Then the background won't be blended into it.
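For example, a sketch of an overwriting second Polyline with a fully opaque stroke (points and canvas are assumed to exist):

using System.Windows.Media;
using System.Windows.Shapes;

// Alpha 255 means source-over blending contributes nothing from the background.
var line = new Polyline
{
    Stroke = new SolidColorBrush(Color.FromArgb(255, 128, 0, 0)),
    StrokeThickness = 2,
    Points = points
};
canvas.Children.Add(line);

That said, since WPF is a retained-mode system, the more idiomatic way to "erase" the first Polyline is simply to remove it from canvas.Children rather than painting over it.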
You could apply a clipping mask; that way you can provide the path to clip over the elements below it, but it might be tough to maintain once a lot of elements need to be clipped.
I am looking to render a DrawingVisual (visual in the example) to a bitmap using RenderTargetBitmap, with a view to setting this bitmap as the background of a Canvas, as below:
var bmp = new RenderTargetBitmap(2000, 50, 120, 96, PixelFormats.Indexed2);
bmp.Render(visual);
var brush = new ImageBrush(bmp) { Stretch = Stretch.Fill };
Canvas.Background = brush;
When using PixelFormats.Default as the last argument to RenderTargetBitmap, the image renders as expected. However, when I choose PixelFormats.Indexed2 (or any of the PixelFormats.IndexedX), my code seems to exit the method without an exception; the bmp.Render line is never reached, and hence the image is not displayed on the Canvas.
How can I use the IndexedX pixel formats with RenderTargetBitmap? Or are there other ways to reduce the memory footprint of the image? It only uses three colors, so using a palette rather than 32-bit RGB seemed the way to go.
You can't. RenderTargetBitmap only supports the Pbgra32 pixel format. That's because WPF's rendering system works entirely in 32 bits per pixel. That's the format in which it generates images, and it's also the format in which it prefers images to be in if you want to render them. (If you provide it with a bitmap in any other format, it'll need to convert it into a 32 bit per pixel representation first.)
What are you planning to do with this bitmap? If you want to render it in a WPF application, it'll need to be converted to a 32bpp format first in any case, so you risk using more memory if you attempt to hold it internally in any other format. (You'll have your supposedly memory-efficient representation and the version WPF's actually able to work with.) Not to mention the extra CPU time spent converting between your chosen format and a format WPF can work with.
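If the goal is a small palettized copy for storage anyway, one workaround (a sketch of the idea; the three palette colors below are placeholders) is to render at Pbgra32 and convert afterwards with FormatConvertedBitmap:

using System.Collections.Generic;
using System.Windows.Media;
using System.Windows.Media.Imaging;

// Render in the only format RenderTargetBitmap supports...
var bmp = new RenderTargetBitmap(2000, 50, 120, 96, PixelFormats.Pbgra32);
bmp.Render(visual);

// ...then convert to 2bpp with an explicit three-color palette. WPF will
// convert this back to 32bpp whenever it actually draws it, so this mainly
// pays off if you hand 'indexed' to an encoder (e.g. PngBitmapEncoder).
var palette = new BitmapPalette(new List<Color> { Colors.White, Colors.Black, Colors.Red });
var indexed = new FormatConvertedBitmap(bmp, PixelFormats.Indexed2, palette, 0);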
Imagine a document window in an MDI application that contains a WPF child window, say a sidebar for example. How can one get a bitmap containing both the WPF pixels AND the GDI (non-WPF) pixels?
I've discovered that when making my thumbnail preview for the Windows 7 taskbar icon hover, I get black in the parts of the preview where the WPF pixels should be. My current method simply grabs a bitmap capture of the document window. Then I get a DC for the preview, make a memory DC from it, and select my bitmap into it. Then I do some size adjustments and BitBlt the memory DC to the real DC. I'm guessing that the BitBlt operation doesn't take into account the fact that the WPF pixels are hardware accelerated and therefore need to be grabbed from the graphics hardware. All the GDI content is handled just fine, though, and when there are no WPF child windows, the preview image looks fine.
I'm wondering if it's at all possible to grab a bitmap of the WPF window from native C++. Then I can blt that onto the black area of the preview.
Maybe I'm not understanding your current approach correctly, but could you do a BitBlt() from the screen DC to your memory DC? You'd need to get the screen rect of your window, but that shouldn't be too bad.
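A sketch of that suggestion, in C# P/Invoke form for brevity (the same user32/gdi32 calls apply from native C++; memDC is the memory DC you already create for the preview):

using System;
using System.Runtime.InteropServices;

static class ScreenCapture
{
    const uint SRCCOPY = 0x00CC0020;

    [StructLayout(LayoutKind.Sequential)]
    struct RECT { public int Left, Top, Right, Bottom; }

    [DllImport("user32.dll")] static extern IntPtr GetDC(IntPtr hwnd);
    [DllImport("user32.dll")] static extern int ReleaseDC(IntPtr hwnd, IntPtr hdc);
    [DllImport("user32.dll")] static extern bool GetWindowRect(IntPtr hwnd, out RECT rect);
    [DllImport("gdi32.dll")] static extern bool BitBlt(IntPtr hdcDest, int x, int y, int w, int h,
                                                       IntPtr hdcSrc, int xSrc, int ySrc, uint rop);

    // Copy the window's on-screen pixels (including composited WPF content)
    // from the screen DC into the caller's memory DC.
    static void CaptureWindowFromScreen(IntPtr hwnd, IntPtr memDC)
    {
        GetWindowRect(hwnd, out RECT r);
        IntPtr screenDC = GetDC(IntPtr.Zero); // DC for the entire screen
        BitBlt(memDC, 0, 0, r.Right - r.Left, r.Bottom - r.Top,
               screenDC, r.Left, r.Top, SRCCOPY);
        ReleaseDC(IntPtr.Zero, screenDC);
    }
}

The obvious caveat is that this only works while the window is actually visible on screen.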
To solve this, I had to create an abstract class in native code containing a virtual method to get the bitmap that was implemented in C++/CLI. In the managed implementation, I used .NET's RenderTargetBitmap class to get a bitmap capture of the WPF window and then I filled up the passed in CBitmap object (see How to get an BITMAP struct from a RenderTargetBitmap in C++/CLI?). In the unmanaged caller routine, I used the virtual method to obtain the Bitmap.
In short, there was no way to get the bitmap by simply using unmanaged C++ since WPF and GDI really don't work together for all practical purposes.
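The managed half of that approach looks roughly like this (a sketch; wpfRoot stands for the root Visual of the WPF child window, widthPx/heightPx are its pixel dimensions, and the raw pixel buffer is what gets copied into the native CBitmap):

using System.Windows.Media;
using System.Windows.Media.Imaging;

// Render the WPF visual tree into an offscreen bitmap...
var rtb = new RenderTargetBitmap(widthPx, heightPx, 96, 96, PixelFormats.Pbgra32);
rtb.Render(wpfRoot);

// ...and pull out the raw BGRA bytes for the native side.
int stride = widthPx * 4; // 4 bytes per Pbgra32 pixel
var pixels = new byte[stride * heightPx];
rtb.CopyPixels(pixels, stride, 0);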