Displaying pixel data with GDI+ (WinForms)

I am writing a simple 3D rendering engine.
The end result of my 3D processing is pixel data. Next I need to display it on the screen with GDI+.
I am using WinForms and Visual Basic, and I am drawing directly on the form's ClientRectangle.
I have some questions.
After I process a pixel, should I be writing pixel data to a buffer first, instead of sending each pixel to GDI+ individually?
- If so, how much of the screen should I buffer at a time? Full screen, half, quarter, eighth? I think there may be RAM usage / performance trade-offs here.
- What is the best data structure for the pixel buffer?
- Which GDI+ command do I use to render the pixel buffer (or an individual pixel)? Is it possible to avoid creating a bitmap as an intermediate step and send the pixel data directly to the screen?
Maximum screen size I anticipate is 1600x1200. RAM could be as low as 1GB.
Thanks.

Hopefully you can find some of those answers here.
Write the data into a buffer of RGBA structs first. This will make it easy if, for example, you want to render multiple "layers" and then composite them. It will also make it easy if you want to perform any deferred processing at some point. Once a full (tile?) render is complete, you can flush it to the output bitmap/file.
This depends on what resolutions you allow the user to render to. If you want to render gigapixel images, you will need to tile it at some reasonable size. I would recommend that the tile size be configurable and then you can set it at a reasonable default after testing.
I would recommend starting out with a simple RGBA buffer if you're not looking to perform any deferred shading.
If you are NOT performing tiled rendering (i.e. the whole image fits in memory), you can simply use Bitmap.LockBits and write the data that way. If you are using tiled rendering, you will need to either find a library that allows you to write a scanline at a time (making each scanline a "tile"), or fix the file format you want to write (TGA, PNG, ...) and seek/write directly to the file. Dumping the image as a RAW file and then using a command-line tool to convert it would be another option.
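For a sense of scale, here is a rough sketch of such a pixel buffer, written as plain C purely for illustration (in VB.NET the equivalent would be an array of a small Structure, or a byte array that you later copy into a Bitmap via LockBits); the names are made up:

```c
#include <stdint.h>
#include <stdlib.h>

/* One pixel: 8 bits per channel, 4 bytes total. */
typedef struct { uint8_t r, g, b, a; } PixelRGBA;

/* A full 1600 x 1200 frame is 1600 * 1200 * 4 = 7,680,000 bytes (about 7.3 MB),
   so even with 1 GB of RAM a whole-screen buffer is not a problem. */
PixelRGBA *create_frame_buffer(int width, int height)
{
    return calloc((size_t)width * height, sizeof(PixelRGBA));
}

/* Write each processed pixel into the buffer instead of sending it to the
   display API one pixel at a time. */
void put_pixel(PixelRGBA *buf, int width, int x, int y, PixelRGBA p)
{
    buf[(size_t)y * width + x] = p;
}
```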
Hope this helps!

Related

C, GTK: display stream of RGB images at < 60 fps

I'm developing an application that shall receive images from a camera device and display them in a GTK window.
The camera delivers raw RGB images (3 bytes per pixel, no alpha channel, fixed size) at a varying frame rate (1-50 fps).
I've already done all that hardware stuff and now have a callback function that gets called with every new image captured by the camera.
What is the easiest way to display those images in my window that is still fast enough?
Here's what I already tried:
- Using gdk_draw_rgb_image() on a GTK drawing area: basically worked, but rendered so slowly that the drawing processes overlapped and the application crashed after the first few frames, even at a 1 fps capture rate.
- Allocating a GdkPixbuf for each new frame and calling gtk_image_set_from_pixbuf() on a GTK image widget: only displays the first frame, then I see no change in the window. May be a bug in my code, but I don't know if that approach would be fast enough anyway.
- Using Cairo (cairo_set_source_surface(), then cairo_paint()): seemed pretty fast, but the image looked striped; I don't know whether the image format is compatible.
Currently I'm thinking about trying something like GStreamer and treating those images as a video stream, but I'm not sure whether that is overkill for my simple mechanism.
Thanks in advance for any advice!
The entire GdkRGB API seems to be deprecated, so that's probably not the recommended way to solve this.
The same goes for the call to render a pixbuf. The documentation there points at Cairo, so the solution seems to be to continue investigating why your image looked incorrect when rendered by Cairo.
unwind is right: Cairo is the way to go if you want something that will work in both GTK2 and GTK3. As your samples are RGB without alpha, you should use the CAIRO_FORMAT_RGB24 format. Make sure the surface you paint is in that format. Also try to make sure that you're not constantly allocating and destroying the surface buffer if the input image stays the same size.
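A minimal sketch of that approach, assuming the "striped" look came from a stride/format mismatch (CAIRO_FORMAT_RGB24 uses 4 bytes per pixel, not 3, and rows are padded to the surface stride); the function names below are the standard Cairo API, the rest is illustrative:

```c
#include <cairo.h>
#include <stdint.h>

/* Create the surface once for the fixed camera size and reuse it. */
static cairo_surface_t *surface;

void init_surface(int width, int height)
{
    surface = cairo_image_surface_create(CAIRO_FORMAT_RGB24, width, height);
}

/* Repack one 3-bytes-per-pixel camera frame into the surface. */
void upload_frame(const uint8_t *rgb, int width, int height)
{
    unsigned char *dst = cairo_image_surface_get_data(surface);
    int stride = cairo_image_surface_get_stride(surface);

    cairo_surface_flush(surface);          /* before touching the pixels */
    for (int y = 0; y < height; y++) {
        uint32_t *row = (uint32_t *)(dst + y * stride);
        const uint8_t *src = rgb + (size_t)y * width * 3;
        for (int x = 0; x < width; x++)
            row[x] = (src[3*x] << 16) | (src[3*x + 1] << 8) | src[3*x + 2];
    }
    cairo_surface_mark_dirty(surface);     /* after modifying them */
}

/* In the widget's draw/expose handler: */
void paint(cairo_t *cr)
{
    cairo_set_source_surface(cr, surface, 0, 0);
    cairo_paint(cr);
}
```

After upload_frame() you would call gtk_widget_queue_draw() on the drawing area so GTK schedules a repaint.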

Low level C - Display text, pixel by pixel

I am working on a small project where I have to write a low-level app. I'd like to display text in that app, and I would even like it to be anti-aliased (à la ClearType). No libraries allowed; I have to draw each character pixel by pixel.
What is the best way to do this? Can you recommend some known algorithms? How should I store/read the fonts?
Thanks!
You mean you just want to smooth the edges of an existing bitmapped font? This is easy if your original font is 16x32 and you want to render it at 8x16 or something like that, but if you don't have a higher-resolution bitmap to begin with, smoothing is a highly nontrivial operation involving a lot of guesswork. In that case, I would look up the 2xSaI algorithm (which gives visually pleasing results for this kind of thing) and first use it to upscale the font to double resolution, then scale it back down with an area-averaging algorithm (i.e. take each destination pixel as the average of a 4-pixel square).
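A minimal sketch of the area-averaging downscale step, assuming an 8-bit greyscale glyph bitmap (names are illustrative):

```c
#include <stdint.h>

/* Halve a greyscale glyph bitmap: each destination pixel is the average of a
   2x2 block of source pixels. */
void downscale_half(const uint8_t *src, int src_w, int src_h, uint8_t *dst)
{
    int dst_w = src_w / 2, dst_h = src_h / 2;
    for (int y = 0; y < dst_h; y++)
        for (int x = 0; x < dst_w; x++) {
            int sum = src[(2 * y)     * src_w + 2 * x] + src[(2 * y)     * src_w + 2 * x + 1]
                    + src[(2 * y + 1) * src_w + 2 * x] + src[(2 * y + 1) * src_w + 2 * x + 1];
            dst[y * dst_w + x] = (uint8_t)(sum / 4);
        }
}
```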
I would also recommend saving your final "anti-aliased" bitmap font and simply using it in your program, rather than performing all this work at runtime.
Putting all together:
There are two main types of fonts:
1) Monospaced: all the characters have a fixed size, and you define a bitmap for each. No need for anti-aliasing at runtime (you can hardcode the grey levels in the bitmap). They look horrible when resized.
2) TrueType: each letter is defined by a set of parameters for Bézier curves. Can be easily scaled to any size, but requires lots of program logic (and processing power!) for that. Anti-aliasing is useful here (and especially the sub-pixel rendering techniques).
As I understand it, you want to use a bitmapped font with rescaling? You could just precompute several sizes, thus avoiding complex runtime logic.
As R. suggested, keeping the bitmaps at a higher resolution in greyscale instead of black-and-white will help. I'd suggest using a size that is divisible by most small numbers, so that the bitmap can be downscaled easily. Also, if this resolution is high enough, you can keep it in black-and-white and downscale to greyscale (using an area average / surface integral).
EDIT: feel free to edit this, and please don't vote; I just put all those comments together.
It is hard to build a good font engine, especially if you need to do scaling and anti-aliasing. So I suggest you take the easy path:
Decide on the fonts and sizes you want to use.
Generate a bitmap font for every font/size combination you need to use. This can be done with a tool like Bitmap Font Generator.
Use the bitmap fonts in your program. Blitting bitmaps should be relatively easy.
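A minimal sketch of the blitting step, assuming each glyph is stored as an 8-bit coverage (alpha) bitmap and the target is a greyscale framebuffer; names are illustrative and no clipping is done:

```c
#include <stdint.h>

/* Blend one pre-rendered, anti-aliased glyph over the framebuffer at
   (dst_x, dst_y): coverage 0 keeps the background, 255 gives full text colour. */
void blit_glyph(uint8_t *fb, int fb_width,
                const uint8_t *glyph, int glyph_w, int glyph_h,
                int dst_x, int dst_y, uint8_t text_level)
{
    for (int y = 0; y < glyph_h; y++)
        for (int x = 0; x < glyph_w; x++) {
            uint8_t a = glyph[y * glyph_w + x];              /* 0..255 coverage */
            uint8_t *p = &fb[(dst_y + y) * fb_width + (dst_x + x)];
            *p = (uint8_t)((text_level * a + *p * (255 - a)) / 255);
        }
}
```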
If you want more features, I suggest you look into using an engine like FreeType before trying to make your own solution.
Well, reading a TTF (or any other) font and rendering some glyphs into a bitmap isn't that hard, given that you know some stuff about rasterization and Bézier curves. The catch is that if you want the text to look good, it's going to take a huge amount of code. Anti-aliased text is pretty hard to render well, and that's not even talking about hinting. There needs to be a routine for kerning, for multi-character sequences, something that decides which glyphs map to your characters, plus all the encoding stuff, ...
You might want to use a bitmap font, which comes pre-rendered; then the whole rendering operation is a simple image copy, possibly with some resampling or rotation, but you lose the vector font features.
My advice is to take FreeType and live with it, it's a nice library just for this, and can be statically linked and stripped of unnecessary bloat very easily.

Can I read a specific image row using libjpeg?

Using libjpeg, if possible, I would like to read a row from the middle of a JPEG image without reading all the preceding rows. Can this be done?
The answer is almost certainly "yes you can, but it will take more effort than you want".
A JPEG image is a stream of markers that contain either information global to the whole compressed image, or information related to specific portions of the image. The compression works by breaking the image into color planes, possibly changing color spaces to one where the color information can be down-sampled, and within each plane operating on 8x8 pixel blocks.
For instance, it is possible to rotate a compressed image by 90 degrees if it is sized such that it is made up of only whole blocks by only transposing the basic blocks and the coefficients inside each block; i.e. without uncompressing, rotating the real image, and recompressing.
Given that, your approach would be to parse the marker stream on the way into the library, passing all the markers that are global to the image, modifying any related to image size, and dropping markers containing coefficients that lie outside your cropping rectangle.
You will likely need to further crop the result if the restriction of cropping to complete basic blocks is too coarse.
What isn't clear to me is whether there is any real win over the alternative, which is to crop the results as it comes out of the library. The library is highly configurable, so you can provide an uncompressed data consumer function that discards all pixels outside your cropping rectangle and only saves pixels you want to keep.
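A hedged sketch of that simpler route using plain libjpeg (the calls below are the standard libjpeg decompression API; the output handling is illustrative). Note that the rows before the one you want still get decompressed, they just aren't kept:

```c
#include <stdio.h>
#include <string.h>
#include <jpeglib.h>

/* Decode scanlines in order and keep only the wanted (0-based) row.
   out_row must be large enough for output_width * output_components samples. */
int read_row(const char *path, int wanted_row, JSAMPLE *out_row, JDIMENSION *out_width)
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    JSAMPARRAY buf = (*cinfo.mem->alloc_sarray)((j_common_ptr)&cinfo, JPOOL_IMAGE,
                        cinfo.output_width * cinfo.output_components, 1);

    /* Read (and discard) rows until buf[0] holds the row we want. */
    while ((int)cinfo.output_scanline <= wanted_row &&
           cinfo.output_scanline < cinfo.output_height)
        jpeg_read_scanlines(&cinfo, buf, 1);

    memcpy(out_row, buf[0], cinfo.output_width * cinfo.output_components);
    *out_width = cinfo.output_width;

    jpeg_abort_decompress(&cinfo);   /* we are deliberately stopping early */
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return 0;
}
```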

How do I use OpenGL 3.x VBOs to render a dynamic world?

Although there seem to be very few up-to-date references for OpenGL 3.x itself, the actual low-level manipulation of OpenGL is relatively straightforward. However, I am having serious trouble trying to even conceptualise how one would manipulate VBOs in order to render a dynamic world.
Obviously the immediate-mode ways of old are not applicable, but from there, where do I go? Do I write some kind of scene structure and then convert that to a set of vertices and stream those to the VBO? How would I store translation data? And how would that look, code-wise?
Basically, I'm really unsure how to continue.
If your entire world is truly dynamic, you can use the GL_STREAM_DRAW_ARB usage flag and reset the data on each frame. Don't bother manipulating it; just try to stream it as efficiently as possible.
However, I assume that you have a scene that consists of multiple rigid objects that move relative to each other. In this case, use one VBO for each object and specify the GL_STATIC_DRAW_ARB usage flag. You can then set the modelview transform for each instance of an object and render them using one draw call per instance.
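A minimal sketch of both cases (using the core-profile names, which in 3.x drop the _ARB suffix, e.g. GL_STATIC_DRAW; the loader header is whatever your project already uses):

```c
#include <glad/glad.h>   /* or any other OpenGL 3.x function loader */

/* One VBO per rigid object, uploaded once. */
GLuint make_static_vbo(const GLfloat *vertices, GLsizeiptr bytes)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, vertices, GL_STATIC_DRAW);
    return vbo;
}

/* Fully dynamic geometry: re-specify the store every frame. Orphaning it with
   a NULL pointer first lets the driver pipeline the upload. */
void stream_vbo(GLuint vbo, const GLfloat *vertices, GLsizeiptr bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, vertices);
}
```

Per instance of a static object you would then set its modelview uniform and issue one draw call, as described above.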
A rule of thumb (on the PC) is to issue not more than one draw call per MHz of your CPU. This is a crude estimate, but there's some truth to it. Don't worry about putting multiple independent objects into a single VBO or other performance tricks if you stay below this limit.
Short answer:
Use glMapBufferRange and only update the subrange that needs modification.
Long answer:
The trick is to map the already existing buffer with glMapBufferRange, mapping only the range you need. Given these assumptions:
- Your geometry uses per-vertex animation morphing
- The vertex count for each model is constant during the animation
Then you can use glMapBufferRange to update only the changing parts and leave the rest of the data alone. Full uploads using glBufferData are slow as a turtle, because they delete the old memory store and allocate a new one, on top of uploading the new data. glMapBufferRange only lets you read/write existing data; it does no allocation or deallocation.
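A minimal sketch of such a partial update; the offset and length are whatever byte subrange your morph actually touched:

```c
#include <string.h>
#include <glad/glad.h>   /* or any other OpenGL 3.x function loader */

/* Rewrite only the given byte range of the VBO; the rest of the buffer's
   contents stay as they are. */
void update_vbo_subrange(GLuint vbo, GLintptr offset, GLsizeiptr length, const void *src)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    void *dst = glMapBufferRange(GL_ARRAY_BUFFER, offset, length,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
    if (dst) {
        memcpy(dst, src, (size_t)length);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}
```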
However, if you use skeletal animation, it is better to pass the vertex transformations (4x4 bone matrices) to the vertex shader and do the calculations there. The per-vertex data (such as bone indices and weights) is, of course, specified with glVertexAttribPointer.
Also, remember that you can read texture data in the vertex shader, and that OpenGL 3.1 introduced some new instanced draw calls: glDrawArraysInstanced and glDrawElementsInstanced. Combined, these can be used for instance-specific lookups, i.e. you can issue instanced draw calls with the same geometry data bound, but fetch positions or whatever per-instance data you need from textures or texture arrays. This can save you from mixing and matching different vertex array data sets.
Imagine if you want to render 100 instances of the same model, but with different positions or color schemes. Or even texture maps.
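For example, a hedged sketch of one such instanced call (the VAO, vertex count, and any per-instance lookup texture are assumed to be set up elsewhere):

```c
#include <glad/glad.h>   /* or any other OpenGL 3.x function loader */

/* Draw many instances of one model with a single call. In the vertex shader,
   gl_InstanceID (0 .. instances-1) can index a texture, buffer texture, or
   uniform array holding per-instance positions or colours. */
void draw_instances(GLuint modelVao, GLsizei vertexCount, GLsizei instances)
{
    glBindVertexArray(modelVao);
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instances);
}
```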
Using VBOs doesn't mean you have to render your entire scene with only a single draw call. You can still issue multiple draw calls, and set up different transformation matrices along the way.
For example, if you're using a scenegraph, each model in the scenegraph can correspond to a single draw call. In such a case, the easiest way to use VBOs is creating a separate VBO for each model.
As an optimization, you might be able to combine several models into a single VBO, then pass in non-zero offsets when making your draw calls; this plucks out the correct model from the VBO. It's also desirable to combine multiple draw calls into a single draw call, but that's not possible if they need to have independent transforms. (Actually it is possible in certain situations if you use instancing or vertex blending, but I suggest getting the basics out of the way first.)
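A minimal sketch of the offset approach, with illustrative names; two models share one VBO/VAO, and each gets its own transform and its own first-vertex offset (the shader program is assumed to be in use already):

```c
#include <glad/glad.h>   /* or any other OpenGL 3.x function loader */

void draw_packed(GLuint vao, GLint modelviewLoc,
                 const GLfloat *mvA, GLint firstA, GLsizei countA,
                 const GLfloat *mvB, GLint firstB, GLsizei countB)
{
    glBindVertexArray(vao);

    /* Model A: its own transform, its own range of the shared buffer. */
    glUniformMatrix4fv(modelviewLoc, 1, GL_FALSE, mvA);
    glDrawArrays(GL_TRIANGLES, firstA, countA);

    /* Model B. */
    glUniformMatrix4fv(modelviewLoc, 1, GL_FALSE, mvB);
    glDrawArrays(GL_TRIANGLES, firstB, countB);
}
```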

How can I manage a cache texture in OpenGL?

I am writing a text renderer for an OpenGL application. Size, colour, font face, and anti-aliasing can be twiddled at run time (and so multiple font faces can appear on the screen at once). There are too many combinations to allocate one texture to each combination of string and attributes. However, only a small subset of the entire database of strings will be on the screen at any given time.
This leads me into the opportunity to create a cache for the strings that are being printed frame after frame. It has been mandated that I use only one texture for the entire operation, as creating a cache of many textures would incur a texture swapping penalty for every different string printed from the cache.
So I have before me a 2048x2048 texture, into which I can place whatever strings I can fit as they are being requested by the application for caching purposes. I have quickly realized that tracking the free space available in a two dimensional space is not trivial.
I have been looking at things like Best Fit and Next Fit, but those seem to be suited to one-dimensional spaces.
How can I manage this cache texture in OpenGL?
Edit: I have since learned that this is an instance of a "2d packing problem".
What you have is the bin-packing problem.
Bad news first: it's NP-hard, so it's not worth trying to find the optimal solution.
I've done such texture caching for fonts as well. I didn't cache entire words, just the glyph images. That makes things a lot easier because all your images are roughly square-shaped. A simple grid-based approach to keeping track of the texture memory worked pretty well.
In cases where a glyph was larger than one of my grid boxes, I just allocated two or more boxes using a brute-force search (it didn't happen that often). If I didn't find any suitable block, I just randomly removed some glyphs from the cache to make free space.
That was much easier than keeping things in a least-recently-used cache and performed nearly as well.
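A minimal sketch of such a grid-based cache, assuming a single-channel cache texture (GL_RED/GL_R8 in core profiles, where GL_ALPHA is gone) that is already bound; the cell size and names are illustrative:

```c
#include <glad/glad.h>   /* or any other OpenGL 3.x function loader */

#define CACHE_SIZE 2048
#define CELL       64                       /* must hold the largest glyph */
#define CELLS      (CACHE_SIZE / CELL)      /* 32 x 32 grid of cells */

static unsigned char cell_used[CELLS][CELLS];

/* Place one glyph's 8-bit coverage bitmap into a free cell of the (already
   bound) cache texture; returns 1 and the pixel position on success. */
int cache_glyph(const unsigned char *coverage, int w, int h, int *out_x, int *out_y)
{
    if (w > CELL || h > CELL)
        return 0;                            /* would need 2+ cells, not handled here */

    for (int cy = 0; cy < CELLS; cy++)
        for (int cx = 0; cx < CELLS; cx++)
            if (!cell_used[cy][cx]) {
                cell_used[cy][cx] = 1;
                *out_x = cx * CELL;
                *out_y = cy * CELL;
                glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are tightly packed */
                glTexSubImage2D(GL_TEXTURE_2D, 0, *out_x, *out_y, w, h,
                                GL_RED, GL_UNSIGNED_BYTE, coverage);
                return 1;
            }

    return 0;   /* cache full: evict something (e.g. at random, as above) */
}
```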
Btw, you will always waste some texture memory with such a cache. Unless you're very tight on memory, that shouldn't be a problem. You should use a small texture format (8-bit alpha works well for fonts).
Also: if you make your grid blocks a multiple of 8 pixels and can drop your anti-aliasing to 4 bits, you can compress the glyphs into one of the compressed DXT/S3TC formats on the fly. The wasted texture space becomes a non-issue that way.
If you are short on texture memory you could take a look at the "distance field" or "signed distance field" font rendering technique. You could use a single 512x512 texture per font family and render perfectly anti-aliased text at any size.
For that algorithm you need to generate a special texture which stores, for each texel, the distance to the nearest edge of the glyph. Take a look at the original paper by the Valve guys: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf . There are some frameworks which use this; for instance, recent versions of Qt use signed distance fields for text rendering.
I have opted to use a simple approach. Divide the texture into variable-height rows. The first string placed in a row decides the height of that row. If a new entry fits into an existing row by height, check whether there is enough width remaining and, if so, place it there. Otherwise start a new row. If a new row cannot be started, do not cache the string.
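A minimal sketch of that row-based allocator (limits and names are illustrative):

```c
#define TEX_SIZE 2048
#define MAX_ROWS 128

typedef struct { int y, height, x_used; } Row;

static Row rows[MAX_ROWS];
static int row_count = 0;
static int next_free_y = 0;

/* Returns 1 and a position (out_x, out_y) for a w x h entry, or 0 if the
   cache is full and the string should simply not be cached. */
int cache_alloc(int w, int h, int *out_x, int *out_y)
{
    /* First, try to append to an existing row that is tall enough and has room. */
    for (int i = 0; i < row_count; i++) {
        if (h <= rows[i].height && rows[i].x_used + w <= TEX_SIZE) {
            *out_x = rows[i].x_used;
            *out_y = rows[i].y;
            rows[i].x_used += w;
            return 1;
        }
    }
    /* Otherwise start a new row; this entry's height becomes the row height. */
    if (row_count < MAX_ROWS && next_free_y + h <= TEX_SIZE) {
        rows[row_count].y      = next_free_y;
        rows[row_count].height = h;
        rows[row_count].x_used = w;
        *out_x = 0;
        *out_y = next_free_y;
        next_free_y += h;
        row_count++;
        return 1;
    }
    return 0;
}
```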
