How to use PixelFormats.IndexedX with RenderTargetBitmap? - wpf

I am looking to render a DrawingVisual (visual in the example) to a bitmap using RenderTargetBitmap, with a view to setting this bitmap as the background of a Canvas, as below:
var bmp = new RenderTargetBitmap(2000, 50, 120, 96, PixelFormats.Indexed2);
bmp.Render(visual);
var brush = new ImageBrush(bmp) { Stretch = Stretch.Fill };
Canvas.Background = brush;
When using PixelFormats.Default as the last argument to RenderTargetBitmap, the image renders as expected. However, when I choose PixelFormats.Indexed2 (or any of the PixelFormats.IndexedX formats), my code seems to exit the method without an exception: the bmp.Render line is never reached, and hence the image is not displayed on the Canvas.
How do I use the IndexedX pixel formats with RenderTargetBitmap? Or are there other ways to reduce the memory footprint of the image? It only uses three colors, so a palette rather than 32-bit RGB seemed the way to go.

You can't. RenderTargetBitmap only supports the Pbgra32 pixel format. That's because WPF's rendering system works entirely in 32 bits per pixel. That's the format in which it generates images, and it's also the format it prefers images to be in when you want to render them. (If you provide it with a bitmap in any other format, it'll need to convert it into a 32-bit-per-pixel representation first.)
What are you planning to do with this bitmap? If you want to render it in a WPF application, it'll need to be converted to a 32bpp format first in any case, so you risk using more memory if you attempt to hold it internally in any other format. (You'll have your supposedly memory-efficient representation and the version WPF's actually able to work with.) Not to mention the extra CPU time spent converting between your chosen format and a format WPF can work with.
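If the goal is simply a smaller copy of the rendered image (for example to encode it to disk), one option is to render at the supported format and then convert the result down with FormatConvertedBitmap. A rough sketch, assuming a three-color palette (the palette colors here are placeholders):

// Render at the only format RenderTargetBitmap supports, then convert down.
var bmp = new RenderTargetBitmap(2000, 50, 120, 96, PixelFormats.Pbgra32);
bmp.Render(visual);

// Build a palette for the three colors actually used (example colors assumed).
var palette = new BitmapPalette(new List<Color> { Colors.White, Colors.Black, Colors.Red });
var indexed = new FormatConvertedBitmap(bmp, PixelFormats.Indexed2, palette, 0);

// The indexed copy is suitable for encoding (e.g. to PNG), but WPF will convert it
// back to 32 bpp internally when it is actually drawn on screen.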

Related

Windows GDI: Difference Between a Pattern Brush & BitBlt of a Bitmap

Is there any functional difference between creating a GDI pattern brush from a bitmap and then filling a rect with that brush, versus blitting directly from a device-independent bitmap?
For clarification, what I mean by the first scenario is creating a pattern brush from a bitmap, then just filling the entire screen with a PatBlt using PATCOPY. Certainly, blitting directly from the source bitmap using BitBlt seems a lot more efficient, but I'm not sure if they are functionally the same (I'm very new to Windows, so sorry if this is a little vague or hard to understand).
Method 1: Create a pattern brush, select it into the DC, and use PatBlt with PATCOPY.
Method 2: Select a DIB section into a memory DC and use BitBlt.
The main differences between these methods are:
Method 1 will tile the image for you if the destination rectangle is larger than the source. With Method 2, you'd have to call BitBlt repeatedly.
With Method 2, you have to create and manage a memory DC.
In terms of performance, they are probably approximately the same in modern versions of Windows. The mapping of the DIB colors to the destination's color format happens just once when selected into the DC. Given enough memory on the card, the image should be transferred over the graphics bus just once. Both methods probably have optimized paths for special cases.
With PatBlt, you can re-use a monochrome pattern brush and set different colors, just by changing the text and background colors in the DC. With BitBlt, you'd have to update the bitmap in the memory DC first.
If I recall correctly, in the old days, pattern brushes were limited in size to something pretty small (like hatch brushes). Pattern brushes were often monochrome (1 bit per pixel) and used to fill backgrounds by setting the text and background colors and quickly tiling them with PatBlt.
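For concreteness, here is a rough C# P/Invoke sketch of the two methods (the GDI signatures and ROP codes are the standard ones; the device context and bitmap handles are assumed to come from elsewhere in your code):

using System;
using System.Runtime.InteropServices;

static class GdiFillSketch
{
    [DllImport("gdi32.dll")] static extern IntPtr CreatePatternBrush(IntPtr hbm);
    [DllImport("gdi32.dll")] static extern IntPtr SelectObject(IntPtr hdc, IntPtr h);
    [DllImport("gdi32.dll")] static extern bool PatBlt(IntPtr hdc, int x, int y, int w, int h, uint rop);
    [DllImport("gdi32.dll")] static extern IntPtr CreateCompatibleDC(IntPtr hdc);
    [DllImport("gdi32.dll")] static extern bool BitBlt(IntPtr hdc, int x, int y, int w, int h,
                                                       IntPtr hdcSrc, int xSrc, int ySrc, uint rop);
    [DllImport("gdi32.dll")] static extern bool DeleteDC(IntPtr hdc);
    [DllImport("gdi32.dll")] static extern bool DeleteObject(IntPtr h);

    const uint PATCOPY = 0x00F00021;
    const uint SRCCOPY = 0x00CC0020;

    // Method 1: pattern brush + PatBlt -- GDI tiles the bitmap across the rectangle for you.
    public static void FillWithPatternBrush(IntPtr hdcDest, IntPtr hbmPattern, int width, int height)
    {
        IntPtr brush = CreatePatternBrush(hbmPattern);
        IntPtr old = SelectObject(hdcDest, brush);
        PatBlt(hdcDest, 0, 0, width, height, PATCOPY);
        SelectObject(hdcDest, old);
        DeleteObject(brush);
    }

    // Method 2: memory DC + BitBlt -- the caller creates the memory DC and tiles manually.
    public static void FillWithBitBlt(IntPtr hdcDest, IntPtr hbmSource, int srcW, int srcH, int width, int height)
    {
        IntPtr memDc = CreateCompatibleDC(hdcDest);
        IntPtr old = SelectObject(memDc, hbmSource);
        for (int y = 0; y < height; y += srcH)
            for (int x = 0; x < width; x += srcW)
                BitBlt(hdcDest, x, y, srcW, srcH, memDc, 0, 0, SRCCOPY);
        SelectObject(memDc, old);
        DeleteDC(memDc);
    }
}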

Convert YUV to RGB on DeckLink using hardware

I'm currently ingesting HD1080p video at 59.94 FPS from a camcorder via the HDMI input on the DeckLink 4K Extreme.
My goal is to replicate the incoming image in a WPF UI element. To accomplish this I'm using the DeckLink SDK in a C# WPF application.
In this program I've implemented the VideoInputFrameArrived callback. In this callback I'm copying the bytes from each frame into a WriteableBitmap which I've set as the source for an Image.
All this works as it should, and when I run the program, the Image is indeed updated in real time as the frames arrive.
My problem then, is that the only two supported Pixel Formats for the video input are 8BitYUV and 10BitYUV, neither of which can be natively displayed on computer monitors.
The WriteableBitmap can only take in various RGB, Black and White, and CMYK formats.
Here is what I've tried so far.
I've tried to convert each frame using the IDeckLinkVideoConversion::ConvertFrame()
Problem: ConvertFrame() requires a destination frame to be rendered on the DeckLink using IDeckLinkOutput::CreateVideoFrame(). As I currently understand it, the DeckLink cannot act as both an input (to capture the video feed) and an output (to render the destination frame).
I've set the incoming stream to 8BitYUV, and copied each frame into the WriteableBitmap with a format of BGR32.
Problem: As I mentioned earlier, this will display an image, but the color is incorrect and the picture is only half the width that it needs to be.
The reason for this is that the incoming stream of 8BitYUV is 16 bits/pixel, whereas the Bitmap expects 32 bits/pixel, and so the Bitmap treats each incoming MacroPixel (4 bytes) as one pixel instead of the 2 pixels it really is.
Currently I'm using a pixel shader to fix the color and a RenderTransform to scale the Image horizontally by a factor of 2 to "fix" the aspect ratio. The problem is that I'm left with only half of the original horizontal resolution.
I don't believe this is a hardware limitation, because when I hook up another monitor to the HDMI output on the DeckLink, the incoming picture displays in full 1080p in perfect color. Would it be possible to capture that outgoing stream somewhere in memory?
TL;DR
What is the best way to convert 4:2:2 YUV (UYVY) into an RGB or CMYK pixel format in real time? (1080p @ 59.94 FPS)
Preferably a hardware solution i.e. DeckLink or GPU.
You have several options here.
First of all, you can display UYVY directly. Most video adapters will accept UYVY data through DirectDraw, DirectShow and DirectX APIs up to version 9, so you won't need real-time conversion of the video frames. Integrating this into a WPF application might require some effort; perhaps the most popular way is to use DirectShow through the DirectShow.NET library and WPF Media Kit. Going this route you could also capture video using DeckLink's video capture DirectShow filter, which would let you connect all the parts together faster. However, you already capture using the DeckLink SDK, and that way you have more control and flexibility over the capture process, so you might not want to go back to DirectShow.
The second option is to convert to RGB as you wanted. I don't think the DeckLink can do it for you. GPU-based conversion definitely exists (the conversion formula is well known, simple and easy to parallelize), but it is hardware dependent or otherwise not immediately available. Instead, Microsoft ships the Color Converter DSP, which can do the conversion (from 8-bit, though not 10-bit) very efficiently. The API is native, and you might need Media Foundation .NET to access it from your app. An alternative efficient software conversion can also be done using FFmpeg's libswscale (from a managed app, through the respective wrappers).
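For reference, this is roughly what such a software conversion does per macropixel. A plain C# sketch using fixed-point BT.601-style coefficients; it is a functional illustration only and would need SIMD/parallelization (and likely the correct video-range matrix) before it could keep up with 1080p at 59.94 FPS:

// Converts one UYVY frame (width*height*2 bytes) to BGRA (width*height*4 bytes).
// Each 4-byte macropixel U0 Y0 V0 Y1 yields two output pixels sharing the same U/V.
static void UyvyToBgra(byte[] src, byte[] dst, int width, int height)
{
    int si = 0, di = 0;
    for (int i = 0; i < width * height / 2; i++)
    {
        int u = src[si] - 128;
        int y0 = src[si + 1];
        int v = src[si + 2] - 128;
        int y1 = src[si + 3];
        si += 4;

        WritePixel(dst, ref di, y0, u, v);
        WritePixel(dst, ref di, y1, u, v);
    }
}

static void WritePixel(byte[] dst, ref int di, int y, int u, int v)
{
    int r = y + ((91881 * v) >> 16);                 // 1.402 * V
    int g = y - ((22554 * u + 46802 * v) >> 16);     // 0.344 * U + 0.714 * V
    int b = y + ((116130 * u) >> 16);                // 1.772 * U
    dst[di++] = Clamp(b);
    dst[di++] = Clamp(g);
    dst[di++] = Clamp(r);
    dst[di++] = 255;                                 // opaque alpha
}

static byte Clamp(int x) => (byte)(x < 0 ? 0 : x > 255 ? 255 : x);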
I just did this with the DeckLink API, because the card I have can act as both an input and an output, and the output does not need to be in playback mode to access this part of the API:
com_ptr<IDeckLinkOutput> m_deckLinkOutput;
if (SUCCEEDED(m_deckLink->QueryInterface(IID_IDeckLinkOutput, (void **)&m_deckLinkOutput)))
{
    IDeckLinkMutableVideoFrame *pRGBFrame;
    // Create a BGRA destination frame with the same dimensions as the captured frame.
    if (SUCCEEDED(m_deckLinkOutput->CreateVideoFrame(videoFrame->GetWidth(), videoFrame->GetHeight(),
                                                     videoFrame->GetWidth() * 4, bmdFormat8BitBGRA,
                                                     videoFrame->GetFlags(), &pRGBFrame)))
    {
        m_deckLinkVideoConversion->ConvertFrame(videoFrame, pRGBFrame);
        // use the RGB frame
        pRGBFrame->Release();
    }
}
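Back in the C# callback, the converted BGRA frame can then be copied into the WriteableBitmap that backs the Image. This is a sketch, not code from the answer above; it assumes rgbFrame is the converted IDeckLinkVideoFrame exposed through the C# interop and that m_writeableBitmap was created with a 32 bpp format and matching dimensions:

// rgbFrame: the bmdFormat8BitBGRA frame produced by ConvertFrame (via the C# interop).
IntPtr buffer;
rgbFrame.GetBytes(out buffer);

// Marshal back to the UI thread that owns the WriteableBitmap.
m_writeableBitmap.Dispatcher.Invoke(() =>
{
    var rect = new Int32Rect(0, 0, rgbFrame.GetWidth(), rgbFrame.GetHeight());
    m_writeableBitmap.WritePixels(
        rect,
        buffer,
        rgbFrame.GetRowBytes() * rgbFrame.GetHeight(),  // buffer size in bytes
        rgbFrame.GetRowBytes());                        // stride
});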

WPF RenderTargetBitmap Missing Elements

I have a TreeView with small icons displayed in the data template. I'm trying to save the Treeview as a PNG using RenderTargetBitmap.
The image saves correctly on small data sets. However, if the data set becomes too large, some of the icons are excluded from the final image. The magic number seems to be 200 items. It doesn't seem to matter if the tree is deep or wide, after 200 items, the icons are not rendered.
Added Code
So here is my code that I'm using to create an image.
RenderTargetBitmap targetBitmap = new RenderTargetBitmap(
    (int)_treeView.ActualWidth,
    (int)_treeView.ActualHeight,
    96, 96, PixelFormats.Default);
targetBitmap.Render(_treeView);
Added Screen Shot
Notice the missing icons way over on the right side of the tree.
Now if I collapse a few branches, thus hiding some of the other icons, then those icons are included. It's almost like RenderTargetBitmap.Render doesn't have the power to render all of the icons, or it may have something to do with virtualizing panels.
Here is a closer look.
What I immediately noticed is that you have a HUGE image: width 12000. I am surprised that you even got that far.
As MSDN states, the texture width/height are limited by DirectX texture limits.
The maximum rendered size of a XAML visual tree is restricted by the maximum dimensions of a Microsoft DirectX texture; for more info see Resource Limits (Direct3D). This limit can vary depending on the hardware where the app runs. Very large content that exceeds this limit might be scaled to fit. If scaling limits are applied in this way, the rendered size after scaling can be queried using the PixelWidth and PixelHeight properties. For example, a 10000 by 10000 pixel XAML visual tree might be scaled to 4096 by 4096 pixels, an example of a particular limit as forced by the hardware where the app runs.
http://msdn.microsoft.com/library/windows/apps/dn298548
I suspect these things:
Virtualization cutting off some things - I've had this exact problem in the past with DataGrid, and the cause was virtualization. Your case doesn't seem like one, though.
A texture that is too big can cause undefined behaviour.
You can try disabling hardware acceleration. It causes quite a few hardcore bugs. http://msdn.microsoft.com/en-us/library/system.windows.media.renderoptions.processrendermode.aspx
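Disabling hardware acceleration for the whole process is a one-liner; set it early, before anything is rendered:

// Forces WPF to use its software rasterizer for the whole process.
RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;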
Other than that - it will be tricky, but I am pretty sure it will work beautifully:
1) Start with the root object and traverse its children recursively until you find an object that is smaller than 1000 x 1000. Take a picture of it using RenderTargetBitmap and merge it into an in-memory bitmap. Do this for each child.
You should be able to calculate all this stuff.
For the record: there's a workaround.
Instead of rendering your Visual directly with RenderTargetBitmap, use an interim DrawingVisual. Paint your Visual into the DrawingVisual using a VisualBrush and then use RenderTargetBitmap with the DrawingVisual.
Like this:
public BitmapSource RenderVisualToBitmap(Visual visual)
{
    var contentBounds = VisualTreeHelper.GetContentBounds(visual);
    var drawingVisual = new DrawingVisual();
    using (var drawingContext = drawingVisual.RenderOpen())
    {
        var visualBrush = new VisualBrush(visual);
        drawingContext.DrawRectangle(visualBrush, null, contentBounds);
    }
    var renderTargetBitmap = new RenderTargetBitmap((int)contentBounds.Width, (int)contentBounds.Height, 96, 96, PixelFormats.Default);
    renderTargetBitmap.Render(drawingVisual);
    return renderTargetBitmap;
}
Note however that as your VisualBrush gets bigger the resulting image gets more and more fuzzy (when rendering with high DPI). To work around this problem use a series of smaller VisualBrush "tiles" as described here:
https://srndolha.wordpress.com/2012/10/16/exported-drawingvisual-quality-when-using-visualbrush/
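To make that concrete, here is a rough sketch of the tiled variant (my own adaptation of the linked approach, not code from that post; tileSize is an arbitrary choice and the visual's content bounds are assumed to start near the origin):

public BitmapSource RenderVisualToBitmapTiled(Visual visual, int tileSize = 500)
{
    var bounds = VisualTreeHelper.GetContentBounds(visual);
    var drawingVisual = new DrawingVisual();
    using (var dc = drawingVisual.RenderOpen())
    {
        for (double y = bounds.Top; y < bounds.Bottom; y += tileSize)
        {
            for (double x = bounds.Left; x < bounds.Right; x += tileSize)
            {
                // One tile: a VisualBrush restricted to this region of the source visual.
                var tile = new Rect(x, y,
                    Math.Min(tileSize, bounds.Right - x),
                    Math.Min(tileSize, bounds.Bottom - y));
                var brush = new VisualBrush(visual)
                {
                    ViewboxUnits = BrushMappingMode.Absolute,
                    Viewbox = tile
                };
                dc.DrawRectangle(brush, null, tile);
            }
        }
    }
    var rtb = new RenderTargetBitmap(
        (int)Math.Ceiling(bounds.Width), (int)Math.Ceiling(bounds.Height),
        96, 96, PixelFormats.Default);
    rtb.Render(drawingVisual);
    return rtb;
}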

Create and resize an image in WPF from System.Drawing.Bitmap areas

I'm trying to implement a function that takes a System.Drawing.Bitmap object and renders it on a WPF Canvas. The bitmap has to be cropped and joined a few times before rendering.
Environment: WPF application running on .NET 3.5 SP1
Input: System.Drawing.Bitmap object, of size 800x600 and pixel format RGB24
Goal: to display an image which is composed of two stripes of the input bitmap (on one line). The stripes are two bitmap halves - (0,0,800,300) and (0,300,800,600). Later on I want to be able to scale the image up or down.
I've already implemented a solution with GDI and Graphics.DrawImage (that renders into a Bitmap object), but I want to improve performance (this function could be called 30 times per second).
Is there a faster way to implement this with WPF, assuming I want to render the image on a WPF window?
The best solution I found so far is using WriteableBitmap, something like this:
void Init()
{
    m_writeableBitmap = new WriteableBitmap(DesiredWidth, DesiredHeight, DesiredDpi, DesiredDpi, PixelFormats.Pbgra32, null);
}

void CopyPixels(System.Drawing.Bitmap frame, Rectangle source, Point destBegin)
{
    // Note: the source bitmap's pixel format (and therefore bmpData.Stride) must match
    // the WriteableBitmap's 32 bpp format for WritePixels to interpret the buffer correctly.
    var bmpData = frame.LockBits(source, ImageLockMode.ReadOnly, frame.PixelFormat);
    m_writeableBitmap.Lock();
    var dest = new Int32Rect(destBegin.X, destBegin.Y, bmpData.Width, bmpData.Height);
    m_writeableBitmap.WritePixels(dest, bmpData.Scan0, bmpData.Stride * bmpData.Height, bmpData.Stride);
    m_writeableBitmap.Unlock();
    frame.UnlockBits(bmpData);
}
CopyPixels would be called twice for the use case I described in my question (two stripes).
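For the two-stripe layout described in the question, the calls would look something like this (assuming the WriteableBitmap is 1600x300 and interpreting the stripes as (left, top, width, height) rectangles):

// Left half of the output: top stripe of the source frame.
CopyPixels(frame, new Rectangle(0, 0, 800, 300), new Point(0, 0));
// Right half of the output: bottom stripe of the source frame.
CopyPixels(frame, new Rectangle(0, 300, 800, 300), new Point(800, 0));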

Font graphics routines

How do you do your own fonts? I don't want a heavyweight algorithm (freetype, truetype, adobe, etc) and would be fine with pre-rendered bitmap fonts.
I do want anti-aliasing, and would like proportional fonts if possible.
I've heard I can use Gimp to do the rendering (with some post processing?)
I'm developing for an embedded device with an LCD. It's got a 32 bit processor, but I don't want to run Linux (overkill - too much code/data space for too little functionality that I would use)
C. C++ if necessary, but C is preferred. Algorithms and ideas/concepts are fine in any language...
-Adam
In my old demo-scene days I often drew all characters in the font in one big bitmap image. In the code, I stored the (X,Y) coordinates of each character in the font, as well as the width of each character. The height was usually constant throughout the font. If space isn't an issue, you can put all characters in a grid, that is - have a constant distance between the top-left corner of each character.
Rendering the text then becomes a matter of copying one letter at a time to the destination position. At that time, I usually reserved one color as being the "transparent" color, but you could definitely use an alpha-channel for this today.
A simpler approach, that can be used for small b/w fonts, is to define the characters directly in code:
LetterA db 01111100b
db 11000110b
db 11000110b
db 11111110b
db 11000110b
db 11000110b
The XPM file format is actually a file format with C syntax that can be used as a hybrid solution for storing the characters.
Pre-rendered bitmap fonts are probably the way to go. Render your font using whatever, arrange the characters in a grid, and save the image in a simple uncompressed format like PPM, BMP or TGA. If you want antialiasing, make sure to use a format that supports transparency (BMP and TGA do; PPM does not).
In order to support proportional widths, you'll need to extract the width of each character from the grid. There's no single simple way to do this; it depends on how you generated the grid. You could write a short program to analyze each character and find its minimal bounding box. Once you have the width data, put it in an auxiliary file that contains the coordinates and sizes of each character.
Finally, to render a string, you look up each character and bitblit its rectangle from the font bitmap onto your frame buffer, advancing the raster position by the width of the character.
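Putting the pieces together, the render loop is essentially this (sketched here in C# against plain byte buffers; the glyph metrics, the 8-bit alpha atlas and the framebuffer layout are assumptions, and the same structure translates directly to C on the device):

struct Glyph { public int X, Y, Width; }   // position in the atlas + advance width

// Draws a string by copying each glyph's rectangle from the font atlas
// (8-bit coverage values, atlasStride bytes per row) into an 8-bit framebuffer.
static void DrawText(string text, Glyph[] glyphs, byte[] atlas, int atlasStride,
                     byte[] frame, int frameStride, int penX, int penY, int glyphHeight)
{
    foreach (char c in text)
    {
        Glyph g = glyphs[c];
        for (int row = 0; row < glyphHeight; row++)
        {
            int src = (g.Y + row) * atlasStride + g.X;
            int dst = (penY + row) * frameStride + penX;
            for (int col = 0; col < g.Width; col++)
            {
                byte a = atlas[src + col];
                // Alpha-blend the anti-aliased coverage value (white text) over the destination.
                frame[dst + col] = (byte)((a * 255 + (255 - a) * frame[dst + col]) / 255);
            }
        }
        penX += g.Width;   // proportional advance
    }
}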
We have successfully used the SRGP package for fonts. We only used fixed-pitch fonts, so I'm not sure whether it can handle proportional fonts.
We're using bitmap fonts generated by AngelCode's Bitmap Font Generator:
http://www.angelcode.com/products/bmfont/
This is very usable, as it has XML output which is easy to convert to any data format you need.
AngelCode's BMFont also adds kerning and better packing compared to the old alternative, MudFont.
