C, GTK: display stream of RGB images at < 60 fps

I'm developing an application that shall receive images from a camera device and display them in a GTK window.
The camera delivers raw RGB images (3 bytes per pixel, no alpha channel, fixed size) at a varying frame rate (1-50 fps).
I've already done all that hardware stuff and now have a callback function that gets called with every new image captured by the camera.
What is the easiest way that is still fast enough to display those images in my window?
Here's what I already tried:
- Using gdk_draw_rgb_image() on a GTK drawing area: basically worked, but rendered so slowly that the drawing operations overlapped and the application crashed after the first few frames, even at a 1 fps capture rate.
- Allocating a GdkPixbuf for each new frame and calling gtk_image_set_from_pixbuf() on a GTK image widget: only displays the first frame, then I see no change in the window. It may be a bug in my code, but I don't know if this approach would be fast enough anyway.
- Using Cairo (cairo_set_source_surface(), then cairo_paint()): seemed pretty fast, but the image looked striped; I don't know if the image format is compatible.
Currently I'm thinking about trying something like GStreamer and treating those images as a video stream, but I'm not sure whether that is overkill for my simple mechanism.
Thanks in advance for any advice!

The entire GdkRGB API seems to be deprecated, so that's probably not the recommended way to solve this.
The same goes for the call to render a pixbuf. The documentation there points at Cairo, so the solution seems to be to continue investigating why your image looked incorrect when rendered by Cairo.

unwind is right: Cairo is the way to go if you want something that will work in both GTK2 and GTK3. As your samples are RGB without alpha, you should use the CAIRO_FORMAT_RGB24 format, and make sure the surface you paint is in that format. Also try to make sure that you're not constantly allocating and destroying the surface buffer if the input image size stays the same.
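The striped output is the classic symptom of a stride/format mismatch: CAIRO_FORMAT_RGB24 stores each pixel in 32 bits (xRGB, native endian), and each row is padded out to the surface stride, so the camera's tightly packed 3-byte pixels cannot be handed to Cairo directly. A minimal sketch of the repacking step, assuming packed RGB input; in real code the stride must come from cairo_format_stride_for_width() or cairo_image_surface_get_stride(), as it can be larger than width*4:

```c
#include <stdint.h>

/* Repack tightly packed RGB (3 bytes/pixel) into the 32-bit xRGB layout
 * that CAIRO_FORMAT_RGB24 expects. 'stride' is the destination row size
 * in bytes; obtain it from cairo_format_stride_for_width() (or
 * cairo_image_surface_get_stride()) -- it may exceed width * 4. */
static void rgb24_to_xrgb32(const uint8_t *src, uint8_t *dst,
                            int width, int height, int stride)
{
    for (int y = 0; y < height; y++) {
        uint32_t *row = (uint32_t *)(dst + y * stride);
        const uint8_t *s = src + y * width * 3;
        for (int x = 0; x < width; x++) {
            uint8_t r = s[3 * x + 0], g = s[3 * x + 1], b = s[3 * x + 2];
            row[x] = (uint32_t)r << 16 | (uint32_t)g << 8 | b; /* top byte unused */
        }
    }
}
```

After filling the surface's buffer this way, call cairo_surface_mark_dirty() on the surface and gtk_widget_queue_draw() on the drawing area, and keep reusing the same surface for as long as the frame size doesn't change.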


Rendering video using WriteableBitmap causes choppy animation

So here's my setup:
Camera images coming in at 1920x1080 @ 25 FPS
Writing image data to WriteableBitmap on UI thread (simple copy, no processing)
Two Image controls in two different windows on two different monitors have their Source property set to the WriteableBitmap
Some generic UI stuff goes over the camera images
This works great and uses about 4% CPU on an old laptop (8 logical processors). The video is as smooth as can be. However, my UI has some animations (stuff moving around), and when the camera display is running, those animations get choppy.
Right now, the camera image is in Gray8 format, so it will be converted (I guess when calling WritePixels?). I also tried forcing one of the animations to 25 FPS too, no change.
Where should I start to resolve this? Is the issue that I'm locking the bitmap for too long, or is there something else going on? From what I can see locking the bitmap will cause the render thread to block, so moving that to another thread seems pointless. And it does feel like that somewhat defeats the purpose of WriteableBitmap.
This is always going to be tricky because you're capturing at 25 FPS whilst WPF tries to update at 60. It's difficult to offer any meaningful advice without seeing a testable project, but I'd probably start by doing the updates in a CompositionTarget.Rendering handler.

Convert YUV to RGB on DeckLink using hardware

I'm currently ingesting HD1080p video at 59.94 FPS from a camcorder via the HDMI input on the DeckLink 4K Extreme.
My goal is to replicate the incoming image in a WPF UI element. To accomplish this I'm using the DeckLink SDK in a C# WPF application.
In this program I've implemented the VideoInputFrameArrived callback. In this callback I'm copying the bytes from each frame into a WriteableBitmap which I've set as the source for an Image.
All this works as it should, and when I run the program, the Image is indeed updated in real time as the frames arrive.
My problem then, is that the only two supported Pixel Formats for the video input are 8BitYUV and 10BitYUV, neither of which can be natively displayed on computer monitors.
The WriteableBitmap can only take in various RGB, Black and White, and CMYK formats.
Here is what I've tried so far.
I've tried to convert each frame using IDeckLinkVideoConversion::ConvertFrame().
Problem: ConvertFrame() requires a destination frame to be rendered on the DeckLink using IDeckLinkOutput::CreateVideoFrame(). As I currently understand it, the DeckLink cannot act as both an input (to capture the video feed) and an output (to render the destination frame).
I've set the incoming stream to 8BitYUV, and copied each frame into the WriteableBitmap with a format of BGR32.
Problem: As I mentioned earlier, this will display an image, but the color is incorrect and the picture is only half the width that it needs to be.
The reason for this is that the incoming stream of 8BitYUV is 16 bits/pixel, whereas the Bitmap expects 32 bits/pixel, and so the Bitmap treats each incoming MacroPixel (4 bytes) as one pixel instead of the 2 pixels it really is.
Currently I'm using a pixel shader to fix the color and a RenderTransform to scale the Image horizontally by a factor of 2 to "fix" the aspect ratio. The problem is that I'm left with half of the original resolution.
I don't believe this is a hardware limitation, because when I hook up another monitor to the HDMI output on the DeckLink, the incoming picture displays in full 1080p in perfect color. Would it be possible to capture that outgoing stream somewhere in memory?
TL;DR
What is the best way to convert 4:2:2 YUV (UYVY) into an RGB or CMYK pixel format in real time? (1080p @ 59.94 FPS)
Preferably a hardware solution i.e. DeckLink or GPU.
You have several options here.
First of all, you can display UYVY directly. Most video adapters will accept UYVY data through DirectDraw, DirectShow, and DirectX APIs up to version 9, so you won't need any real-time conversion of the video frames. Integrating this into a WPF application might require some effort; perhaps the most popular way is to use DirectShow through the DirectShow.NET library and the WPF Media Kit. On this path you could also capture video using DeckLink's video capture DirectShow filter, which would let you connect all the parts together faster. However, you already capture using the DeckLink SDK, which gives you more control and flexibility over the capture process, so you might not want to go back to DirectShow.
The second option is to convert to RGB as you wanted. I don't think the DeckLink can do it for you. GPU-based conversion definitely exists (the conversion formula is well known, simple, and easy to parallelize), but it is hardware-dependent or otherwise not immediately available. Instead, Microsoft ships the Color Converter DSP, which can do the conversion (from 8 bits, though not from 10) very efficiently. The API is native, and you might need Media Foundation .NET to access it from your app. An alternative efficient software conversion can be done with FFmpeg's libswscale (through the respective wrappers for a managed app).
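To give an idea of how simple that formula is, here is a hedged C sketch of the per-macropixel math, using the common BT.601 integer approximation (BT.709 coefficients would arguably be more correct for an HD source). Each 4-byte UYVY macropixel expands to two BGRA pixels, which is exactly why the naive copy came out at half width:

```c
#include <stdint.h>

static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* Expand one UYVY macropixel (U Y0 V Y1) into two BGRA pixels.
 * Uses the common BT.601 integer approximation; for HD material,
 * BT.709 coefficients are the more correct choice. */
static void uyvy_to_bgra(const uint8_t m[4], uint8_t out[8])
{
    int d = m[0] - 128;                /* U, shared by both pixels */
    int e = m[2] - 128;                /* V, shared by both pixels */
    for (int i = 0; i < 2; i++) {
        int c = m[1 + 2 * i] - 16;     /* Y0 or Y1 */
        out[4 * i + 0] = clamp8((298 * c + 516 * d + 128) >> 8);           /* B */
        out[4 * i + 1] = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8); /* G */
        out[4 * i + 2] = clamp8((298 * c + 409 * e + 128) >> 8);           /* R */
        out[4 * i + 3] = 255;                                              /* A */
    }
}
```

Run per scanline over the source buffer, this is the kind of loop libswscale or a pixel shader parallelizes for you.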
I just did this with the DeckLink API, because the card I have can act as both an input and an output, and the output does not need to be in playback mode to access this part of the API:
com_ptr<IDeckLinkOutput> m_deckLinkOutput;
if (SUCCEEDED(m_deckLink->QueryInterface(IID_IDeckLinkOutput, (void **)&m_deckLinkOutput)))
{
    IDeckLinkMutableVideoFrame *pRGBFrame;
    if (SUCCEEDED(m_deckLinkOutput->CreateVideoFrame(videoFrame->GetWidth(), videoFrame->GetHeight(),
                                                     videoFrame->GetWidth() * 4, bmdFormat8BitBGRA,
                                                     videoFrame->GetFlags(), &pRGBFrame)))
    {
        m_deckLinkVideoConversion->ConvertFrame(videoFrame, pRGBFrame);
        // use the RGB frame
        pRGBFrame->Release();
    }
}

Displaying pixel data with GDI+

I am writing a simple 3D rendering engine.
The end result of my 3D processing is pixel data. Next I need to display it on the screen with GDI+.
I am using WinForms and Visual Basic. I am drawing directly on form's ClientRectangle.
I have some questions.
After I process a pixel, should I be writing pixel data to a buffer first, instead of sending each pixel to GDI+ individually?
- If so, how much of a screen should I buffer at one time? Full screen, half, quarter, eighth? I think there may be RAM usage / performance trade-offs here.
- What is the best data structure for the pixel buffer?
- Which GDI+ command do I use to render the pixel buffer (or the individual pixel)? Is it possible to avoid creating the bitmap as an intermediate step and send pixel data directly to screen?
Maximum screen size I anticipate is 1600x1200. RAM could be as low as 1GB.
Thanks.
Hope you can find some of your answers here.
Write the data into a buffer of RGBA structs first. This will make it easy if, for example, you want to render multiple "layers" and then composite them, and also if you want to perform any deferred processing at some point. Once a full (tile?) render is complete, you can flush it to the output bitmap/file.
This depends on what resolutions you allow the user to render to. If you want to render gigapixel images, you will need to tile it at some reasonable size. I would recommend that the tile size be configurable and then you can set it at a reasonable default after testing.
I would recommend starting out with a simple RGBA buffer if you're not looking to perform any deferred shading.
If you are NOT doing tiled rendering (i.e. the images fit in memory), you can simply use Bitmap.LockBits and write the data that way. If you are using tiled rendering, you will need to either find a library that lets you write a scanline at a time (and make that a "tile"), or fix the file format you want to write (TGA, PNG, etc.) and seek/write directly into the file. Dumping the image as a RAW file and then using a command-line tool to convert it would be another option.
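The question is WinForms/VB, but the buffer-of-RGBA-structs idea from the first point is language-neutral. A minimal sketch in C (the Layer type and composite_over are illustrative names, not part of any API), doing a simple source-over composite of two layers:

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct { uint8_t r, g, b, a; } Rgba;

typedef struct {
    int w, h;
    Rgba *px;          /* w * h pixels, row-major */
} Layer;

/* Allocate a zero-initialized (fully transparent) layer. */
static Layer layer_new(int w, int h)
{
    Layer l = { w, h, calloc((size_t)w * h, sizeof(Rgba)) };
    return l;
}

/* Source-over composite 'top' onto 'bottom' (straight alpha, integer math).
 * Both layers are assumed to have the same dimensions. */
static void composite_over(Layer *bottom, const Layer *top)
{
    for (int i = 0; i < bottom->w * bottom->h; i++) {
        Rgba *d = &bottom->px[i];
        const Rgba *s = &top->px[i];
        int a = s->a, ia = 255 - a;
        d->r = (uint8_t)((s->r * a + d->r * ia) / 255);
        d->g = (uint8_t)((s->g * a + d->g * ia) / 255);
        d->b = (uint8_t)((s->b * a + d->b * ia) / 255);
        d->a = (uint8_t)(a + d->a * ia / 255);
    }
}
```

Once a tile or frame is complete, the finished buffer is what you would hand to Bitmap.LockBits (or write to a file) in a single operation rather than per pixel.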
Hope this helps!

OpenGL, Showing Text and getting values using C

This has been my problem since I started using openGL.
What code am I going to use to show text and read a value? I cannot use printf and scanf, and my only header file is glut.h.
This has been my problem since I started using openGL.
What code am I going to use to show text
Difficult subject, because OpenGL itself doesn't deal with text output. You can:
render text to an image and display that
create a texture atlas from the glyphs of a font, then render from that font texture
draw the font glyph outlines as geometry
If you Google "OpenGL font rendering" you'll get a large number of results of papers on the topic. Recent and old ones alike.
and get value.
Not with OpenGL. OpenGL is a drawing API: you send it points, lines and triangles, and it draws nice pictures for you. User input is outside the scope of OpenGL; that's the job of the GUI system. Most likely one of:
Windows GDI
MacOS Cocoa
X11
Standard user input event processing applies. Usually one uses a toolkit like Qt, GTK or similar. Those toolkits deal with user input processing through their event mechanism.
http://linux.die.net/man/3/glutstrokestring
How about this? Note that glutStrokeString() is a freeglut/OpenGLUT extension, not part of the original GLUT:
#include <openglut.h>
glutStrokeString(GLUT_STROKE_ROMAN, (const unsigned char *)"I will draw this string at the origin of the model");

Saving xlib XImage to PNG

I am using xlib.
I have an XImage structure filled with information from an XGetImage() call. Is there a popular method to get from an XImage to something more meaningful, namely PNG?
I have looked at libpng, but have heard from pretty much everyone that it's a beast to tame. Would this still be the recommended path to take?
See also How to save XImage as bitmap? though that person had the constraint that they couldn't use a library.
If you can use a library, Cairo is a good one that I believe will do this for you. It handles PNG saving (dealing with the libpng mess for you), and it has code to get the pixels from X. However, it may be hard to get pixels from an XImage; Cairo will want to get them from a window or pixmap. If you can just replace your XGetImage() call with Cairo, it might work fine. Roughly, the way you would do things in Cairo is:
create an Xlib surface pointed at your drawable
save to PNG with cairo_surface_write_to_png() (http://cairographics.org/manual/cairo-PNG-Support.html)
You could also use the Xlib surface as source to draw to an image surface, and then do other stuff with the image surface (scale or paint on it or whatever) if you wanted, before saving as PNG.
If you're using any kind of UI toolkit, it probably has code for this too, e.g. GTK+ has gdk_pixbuf_get_from_drawable() etc.
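Whichever library ends up encoding the PNG, the fiddly part of going through an XImage directly is decoding its pixel layout. A hedged sketch for the common 32-bits-per-pixel ZPixmap case, driven by the red_mask/green_mask/blue_mask fields that the XImage structure carries (pixel_to_rgb and mask_shift are illustrative helpers, not Xlib functions):

```c
#include <stdint.h>

/* Number of trailing zero bits in a channel mask (e.g. 0x00ff0000 -> 16). */
static int mask_shift(unsigned long mask)
{
    int s = 0;
    while (mask && !(mask & 1)) { mask >>= 1; s++; }
    return s;
}

/* Extract 8-bit R, G, B from one pixel of a 32-bpp ZPixmap, using the
 * red_mask / green_mask / blue_mask fields of the XImage. Assumes each
 * channel mask covers 8 contiguous bits, as on typical TrueColor visuals. */
static void pixel_to_rgb(unsigned long px,
                         unsigned long rmask, unsigned long gmask, unsigned long bmask,
                         uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (uint8_t)((px & rmask) >> mask_shift(rmask));
    *g = (uint8_t)((px & gmask) >> mask_shift(gmask));
    *b = (uint8_t)((px & bmask) >> mask_shift(bmask));
}
```

In a real program you would loop over the image, fetch each pixel with XGetPixel(ximage, x, y), and hand the resulting rows to Cairo (CAIRO_FORMAT_RGB24) or libpng for encoding.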
