I'm using Cairo for text rendering on an embedded device. I've evaluated the 'toy' text API (i.e. cairo_show_text) and it works very well and is efficient. Unfortunately it only supports the most basic operations and always discards the shape immediately.
What I need to do is draw simple text with fill and stroke. When I do this using the slightly more complicated API (cairo_text_path) it works but performance drops to unacceptable levels.
It's a bit difficult to find documentation but I did find this hint:
Be aware cairo_show_text() caches glyphs so is much more efficient if you work with a lot of text.
Where can I read about this glyph caching, and how can I get it for cairo_text_path as well? Ideally, is there a code example of this being done? I only need to support this simple use case.
cairo_text_path converts the text, glyph by glyph, into a path and adds it to the context. Rendering this path is expensive because of its many segments: dozens of moves, lines, and curves for every single glyph.
Glyph caching in cairo_show_text means that repeated glyphs/characters are rendered once and saved in a much cheaper form (such as scanlines, triangles, or a bitmap) for later occurrences. Because the font doesn't change in between, this recycling isn't a problem.
You could do this caching yourself, rendering glyphs to image surfaces and using them as patterns, or simply use bitmap fonts from the beginning.
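A rough sketch of that idea for a single fixed string, assuming an ARGB32 image surface and made-up sizes, colours, and function names (build_text_cache and draw_cached_text are not Cairo API): do the expensive fill and stroke once into an offscreen surface, then just blit that surface on later frames.

    #include <cairo.h>

    /* Render the filled + stroked text once into an offscreen image surface. */
    static cairo_surface_t *build_text_cache(const char *text, int w, int h)
    {
        cairo_surface_t *surf = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, w, h);
        cairo_t *cr = cairo_create(surf);

        cairo_select_font_face(cr, "Sans", CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_BOLD);
        cairo_set_font_size(cr, 32.0);
        cairo_move_to(cr, 4.0, h - 8.0);

        cairo_text_path(cr, text);               /* the expensive path work happens only here */
        cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);
        cairo_fill_preserve(cr);
        cairo_set_source_rgb(cr, 0.0, 0.0, 0.0);
        cairo_set_line_width(cr, 1.5);
        cairo_stroke(cr);

        cairo_destroy(cr);
        return surf;
    }

    /* Every later frame: paint the cached surface, no path rendering at all. */
    static void draw_cached_text(cairo_t *cr, cairo_surface_t *cache, double x, double y)
    {
        cairo_set_source_surface(cr, cache, x, y);
        cairo_paint(cr);
    }

For text that changes at runtime you would cache individual glyph images instead of whole strings and blit them per character, which is closer to what cairo_show_text's internal cache does.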
I'm developing a WPF app but I've noticed that at certain font sizes the text doesn't render as nicely as the samples you see in Control Panel -> Fonts. I'm using large Segoe UI fonts (FontSize="36"), and the effect is more noticeable on the upright lines, e.g. a letter "U" might be slightly thicker on one side than the other.
The font quality improves at certain font sizes, e.g. FontSize="48" (which I believe is the equivalent of 36pt), but using a limited number of font sizes isn't always practical.
I can improve the font quality by applying the following properties to the TextBlock:
TextOptions.TextFormattingMode="Display" TextOptions.TextRenderingMode="ClearType"
Given the improvement in quality I'm curious to know why WPF doesn't do this for all text, or is it down to performance? I was thinking of creating a global style to apply this to all controls, or will this cause a problem?
(I tried uploading a screenshot but SO must store images at a low quality, and you couldn't really make out the font problem).
Here is the blog post that the WPF Text team wrote about this feature.
Note on TextFormattingMode:
Ideal: Ideal text metrics are the metrics which have been used to format text since the introduction of WPF. These metrics result in glyphs’ shapes maintaining high fidelity with their outlines from the font file. The glyphs’ final placement is not taken into account when creating glyph bitmaps or positioning the glyphs relative to each other.
Display: In this new formatting mode, WPF uses GDI compatible text metrics. This ensures that every glyph has a width of multiple whole pixels and is positioned on whole pixels. The use of GDI compatible text metrics also means that glyph sizes and line breaking is similar to GDI based frameworks. That said, glyph sizes are not the only input into the line breaking algorithm used by WPF. Even though we use the same metrics as GDI, our line breaking will not be exactly the same.
Since these properties are new in .NET 4.0, they kept the original WPF algorithm as the default, which is Ideal mode.
For TextRenderingMode:
Auto: This mode will use ClearType unless system settings have been set to specifically disable ClearType on the machine.
Aliased: No antialiasing will be used to draw text.
Grayscale: Grayscale antialiasing will be used to draw text.
ClearType: ClearType antialiasing will be used to draw text.
Since Auto is default, you will generally get ClearType rendering.
Now, because these are attached properties, and they inherit, you can just set them at the root Window. No need to create a bunch of Styles.
I have noticed small performance issues when dealing with large amounts of data (upwards of 10,000 items) when ClearType is turned on. Changing TextFormattingMode to Display has no visible performance impact.
That said, in all of my WPF apps I use global styles to improve text rendering, unless the performance impact is large enough to make the UI feel sluggish.
As the title says.
I just want to know which is better: using image files or drawing vector shapes (paths).
I know that vectors are better for appearance, but what about performance?
And does this depend on the case? Can anyone explain?
(This question may apply to WP7, Silverlight, WPF, or even general cases.)
Here is a general answer comparing the pros/cons of bitmaps (what I think you mean by "image file") vs. vectors.
Bitmap-based images (GIF, TIFF, JPEG, PNG, BMP) essentially map colours (and other data, such as an alpha layer) to a pixel grid. Different file formats offer variations in what is supported and levels of compression, but this is the high-level concept. The complete map of pixels and data is stored in the file as a matrix/table.
Vector-based images, as you say, are path based. Instead of storing information by pixels, the file format will store geometric points and data.
The pros for bitmaps are:
They usually render faster than a vector. This is because there is minimal computation involved in presenting the image (just take the pixel map and display).
They handle "photographic" content better than a vector.
They are more portable than vectors: GIF, JPEG, PNG, and BMP are more standard than any vector format (where Adobe usually has the market)
The cons for bitmaps are:
They don't scale without degradation (pixelization)
Manipulation (i.e. resizing, blurring, lighting, etc) of a bitmap is more processor expensive than a vector
The files are usually much larger than vector-based files
The pros for vectors are:
Flexible for scaling and manipulation
Smaller files than bitmap-based formats
Ideal for print and animation (i.e. manipulating a shape to produce the animation effect)
The cons for vectors are:
Render time, depending on the complexity of the vector, can be longer
Portability: most formats are highly proprietary
They work for "graphic"-style images but are not useful for photorealism
Hope this helps.
Jeremiah Morrill gave a great overview of WPF rendering that basically shows a vector will always be more expensive to render than an image. An image gets treated as a DirectX texture: no matter the size, scaling or whatever, there is a set constant cost for rendering an image. As Jer's overview shows, even the simplest vector image takes a number of operations to render in WPF. The moral of the story is that when given the option, go for the image instead of the vector.
Based on our experience with Windows Phone 7 (non-Mango) apps, we find that using images instead of drawing produces far better responsiveness, and hence UX performance, for continuous animation in pages. (YMMV)
I would initially say that images render faster than vectors. The more complex the vector, the more time it takes to render. The bigger the image, the more time it takes to render.
I'm going to speculate that (in Silverlight terms) most current video hardware is capable of handling image rendering directly, giving a boost in performance. I'm not sure whether the calculations for vectors can be done at the video-hardware level.
From the point of view of Windows Phone 7, you'll typically get faster rendering of images/bitmaps than of paths/vectors. As a general rule for mobile development, due to the constrained resources on the device and the increased need to consider performance, if you can do something once, such as preparing an image, at design (or compile) time, that is definitely preferable to doing it multiple times on each client.
Be very careful about applying rules across platforms (WPF, Silverlight & WP7), as they are used for different things in different situations and are under different constraints. Things you have to consider on the phone may not be as much of an issue in a WPF app running on a high-powered PC.
I am working on a small project where I have to write a low-level app. I'd like to display text in that app, and I would even like it to be anti-aliased (à la ClearType). No libraries allowed; I have to draw each char pixel by pixel.
What is the best way to do this? Can you recommend some known algorithms? How should I store/read the fonts?
Thanks!
You mean you just want to smooth the edges of an existing bitmapped font? This is easy if your original font is 16x32 and you want to render it at 8x16 or something like that, but if you don't have a higher-resolution bitmap to begin with, smoothing is a highly nontrivial operation involving a lot of guesswork. In that case, I would look up the 2xSaI algorithm (which gives visually pleasing results for this kind of thing), first use it to upscale the font to double resolution, then scale it back down with an area-averaging algorithm (i.e. take each destination pixel from the average of a 4-pixel square).
I would also recommend saving your final "anti-aliased" bitmap font and simply using it in your program, rather than performing all this work at runtime.
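For the downscaling step, here is a minimal sketch of the area-averaging idea, assuming an 8-bit greyscale bitmap and halving in each dimension (downscale_half is an illustrative name, not a library function):

    /* Each destination pixel is the rounded average of a 2x2 block in the source. */
    void downscale_half(const unsigned char *src, int src_w, int src_h, unsigned char *dst)
    {
        int dst_w = src_w / 2, dst_h = src_h / 2;
        for (int y = 0; y < dst_h; ++y) {
            for (int x = 0; x < dst_w; ++x) {
                int sum = src[(2 * y)     * src_w + 2 * x] + src[(2 * y)     * src_w + 2 * x + 1]
                        + src[(2 * y + 1) * src_w + 2 * x] + src[(2 * y + 1) * src_w + 2 * x + 1];
                dst[y * dst_w + x] = (unsigned char)((sum + 2) / 4);
            }
        }
    }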
Putting it all together:
There are two main types of fonts:
1) Monospaced: all the characters have a fixed size, and you define a bitmap for each. No need for anti-aliasing (you can hardcode the grey levels in the bitmap). They look horrible when resized.
2) TrueType: each letter is defined by a set of parameters for Bézier curves. It can easily be scaled to any size, but requires lots of program logic (and processing power!) for that. Anti-aliasing is useful here (and especially sub-pixel rendering techniques).
As I see it, you want to use a bitmapped font and rescale it? You could just precompute several sizes, thus avoiding complex runtime logic.
As R. suggested, keeping the bitmaps at a higher resolution in greyscale instead of B/W will help. I'd suggest using a size that is divisible by most small numbers, so that the bitmap can be downscaled easily. Also, if this resolution is high enough, you can keep it in B/W and downscale to greyscale (using a surface integral).
EDIT: feel free to edit this and please don't vote; I just put all those comments together.
It is hard to build a good font engine, especially if you need to do scaling and anti-aliasing. So I suggest you take the easy path:
Decide on the fonts and sizes you want to use.
Generate a bitmap font for every font/size combination you need to use. This can be done with a tool like Bitmap Font Generator.
Use the bitmap fonts in your program. Blitting bitmaps should be relatively easy.
If you want more features, I suggest you look into using an engine like FreeType before trying to make your own solution.
Well, reading a TTF (or any other) font and rendering some glyphs into a bitmap isn't that hard, given that you know some stuff about rasterization and Bézier curves. The bad part is that if you want the text to look good, it's going to take a huge amount of code. An anti-aliased font is pretty hard to render, and that's not even talking about hinting. There need to be routines for kerning, multi-character sequences, something that decides which glyphs map to your characters, encoding handling, and so on.
You might want to use a bitmap font, which comes pre-rendered; then the whole rendering operation is a simple image copy, possibly with some resampling or rotation, but you lose the vector-font features.
My advice is to take FreeType and live with it, it's a nice library just for this, and can be statically linked and stripped of unnecessary bloat very easily.
I am writing a text renderer for an OpenGL application. Size, colour, font face, and anti-aliasing can be twiddled at run time (and so multiple font faces can appear on the screen at once). There are too many combinations to allocate one texture to each combination of string and attributes. However, only a small subset of the entire database of strings will be on the screen at any given time.
This leads me into the opportunity to create a cache for the strings that are being printed frame after frame. It has been mandated that I use only one texture for the entire operation, as creating a cache of many textures would incur a texture swapping penalty for every different string printed from the cache.
So I have before me a 2048x2048 texture, into which I can place whatever strings I can fit as they are being requested by the application for caching purposes. I have quickly realized that tracking the free space available in a two dimensional space is not trivial.
I have been looking at things like best fit and next fit, but those seem to be suited to 1D spaces.
How can I manage this cache texture in OpenGL?
Edit: I have since learned that this is an instance of a "2d packing problem".
What you have is the bin-packing problem.
Bad news first: it's NP-hard, so it's not worth trying to find the optimal solution.
I've done such texture caching for fonts as well. I didn't cache entire words, just the glyph images. That makes things a lot easier because all your images are roughly square-shaped. A simple grid-based approach to keep track of the texture memory worked pretty well.
If a glyph was larger than one of my grid boxes, I just allocated two or more boxes using a brute-force search (it didn't happen that often). If I didn't find any suitable block, I just randomly removed some glyphs from the cache to make free space.
That was much easier than keeping things in a least-recently-used cache, and it performed nearly as well.
Btw, you will always waste some texture memory with such a cache. Unless you're very tight on memory, that shouldn't be a problem. You should use a small texture format (8-bit alpha works well for fonts).
Also: if you make your grid blocks a multiple of 8 pixels, and you can drop your antialiasing to 4 bits, you can compress the glyphs into one of the compressed DXT/S3TC formats on the fly. The wasted texture space becomes a non-issue that way.
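A minimal sketch of that kind of grid bookkeeping, assuming a 2048x2048 atlas, fixed 32-pixel boxes, and glyphs that fit in a single box (the sizes and the name grid_alloc are made up for illustration):

    #define ATLAS_SIZE 2048
    #define BOX_SIZE   32
    #define GRID_DIM   (ATLAS_SIZE / BOX_SIZE)

    static unsigned char box_used[GRID_DIM][GRID_DIM];   /* 1 = box occupied */

    /* Find a free box for one glyph; returns 1 and its pixel position on success. */
    int grid_alloc(int *px, int *py)
    {
        for (int y = 0; y < GRID_DIM; ++y)
            for (int x = 0; x < GRID_DIM; ++x)
                if (!box_used[y][x]) {
                    box_used[y][x] = 1;
                    *px = x * BOX_SIZE;
                    *py = y * BOX_SIZE;
                    return 1;
                }
        return 0;   /* cache full: evict a random glyph (clear its box) and retry */
    }

Oversized glyphs and eviction are left out here; as described above, they can be handled with a brute-force search for adjacent boxes and by clearing randomly chosen boxes.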
If you are short on texture memory you could take a look at the "distance field" or "signed distance field" font rendering technique. You could use a 512x512 texture per font family and render perfectly antialiased text of any size.
For that algorithm you need to generate a special texture which contains, for each texel, the distance to the nearest glyph edge. Take a look at the original paper by the Valve guys: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf . There are some frameworks which use this; for instance, the latest version of Qt uses signed distance fields for text rendering.
I have opted to use a simple approach. Divide the texture into variable-height rows. The first texture to be placed in a row decides the height of the row. If a texture can fit into an existing row by height, check whether there is enough width remaining and place it there. Otherwise start a new row. If a new row cannot be started, do not cache the string. A sketch of this allocator follows.
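Assuming a 2048x2048 texture and a fixed upper bound on the number of rows, a minimal version of that allocator might look like this (CacheRow and cache_alloc are illustrative names):

    #define ATLAS_SIZE 2048
    #define MAX_ROWS   128

    typedef struct { int y, height, next_x; } CacheRow;

    static CacheRow rows[MAX_ROWS];
    static int row_count  = 0;
    static int next_row_y = 0;

    /* Returns 1 and fills (x, y) on success, 0 if the string cannot be cached. */
    int cache_alloc(int w, int h, int *x, int *y)
    {
        /* Try to fit into an existing row that is tall enough. */
        for (int i = 0; i < row_count; ++i) {
            if (h <= rows[i].height && rows[i].next_x + w <= ATLAS_SIZE) {
                *x = rows[i].next_x;
                *y = rows[i].y;
                rows[i].next_x += w;
                return 1;
            }
        }
        /* Otherwise open a new row; its height is set by this first entry. */
        if (row_count < MAX_ROWS && next_row_y + h <= ATLAS_SIZE) {
            rows[row_count] = (CacheRow){ next_row_y, h, w };
            *x = 0;
            *y = next_row_y;
            next_row_y += h;
            ++row_count;
            return 1;
        }
        return 0;   /* no room: do not cache the string */
    }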
How do you do your own fonts? I don't want a heavyweight solution (FreeType, TrueType, Adobe, etc.) and would be fine with pre-rendered bitmap fonts.
I do want anti-aliasing, and would like proportional fonts if possible.
I've heard I can use Gimp to do the rendering (with some post processing?)
I'm developing for an embedded device with an LCD. It's got a 32 bit processor, but I don't want to run Linux (overkill - too much code/data space for too little functionality that I would use)
C. C++ if necessary, but C is preferred. Algorithms and ideas/concepts are fine in any language...
-Adam
In my old demo-scene days I often drew all the characters of the font in one big bitmap image. In the code, I stored the (X,Y) coordinates of each character in the font, as well as the width of each character. The height was usually constant throughout the font. If space isn't an issue, you can put all the characters in a grid, that is, have a constant distance between the top-left corners of the characters.
Rendering the text then becomes a matter of copying one letter at a time to the destination position (a rough sketch follows). Back then I usually reserved one color as the "transparent" color, but today you could definitely use an alpha channel for this.
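The copy loop, assuming a 32-bit framebuffer, a constant glyph height, and made-up names (GlyphInfo, draw_text, FONT_PITCH, and the colour key are purely illustrative):

    typedef struct { int x, y, width; } GlyphInfo;     /* position of the glyph in the font bitmap */

    #define FONT_HEIGHT 16
    #define FONT_PITCH  256                            /* width of the font bitmap in pixels */
    #define TRANSPARENT 0x00FF00FFu                    /* reserved "transparent" colour key */

    extern GlyphInfo    glyphs[128];                   /* indexed by ASCII code */
    extern unsigned int font_bitmap[];                 /* the big bitmap with all characters */

    void draw_text(unsigned int *fb, int fb_pitch, int dst_x, int dst_y, const char *text)
    {
        for (; *text; ++text) {
            const GlyphInfo *g = &glyphs[(unsigned char)*text];
            for (int y = 0; y < FONT_HEIGHT; ++y)
                for (int x = 0; x < g->width; ++x) {
                    unsigned int px = font_bitmap[(g->y + y) * FONT_PITCH + (g->x + x)];
                    if (px != TRANSPARENT)             /* skip the colour-keyed pixels */
                        fb[(dst_y + y) * fb_pitch + (dst_x + x)] = px;
                }
            dst_x += g->width;                         /* advance by the proportional width */
        }
    }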
A simpler approach, which can be used for small b/w fonts, is to define the characters directly in code:
LetterA db 01111100b        ; each set bit is one lit pixel of the glyph
        db 11000110b
        db 11000110b
        db 11111110b        ; the six rows together form an 'A'
        db 11000110b
        db 11000110b
The XPM file format is actually an image format with C syntax, which can be used as a hybrid solution for storing the characters.
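For example, an XPM-style image can live directly in a C source file; the 7x6 "A" below is made up purely for illustration:

    /* XPM */
    static const char *letter_A_xpm[] = {
        "7 6 2 1",           /* width, height, number of colours, chars per pixel */
        ". c #000000",       /* background */
        "# c #FFFFFF",       /* foreground */
        ".#####.",
        "##...##",
        "##...##",
        "#######",
        "##...##",
        "##...##"
    };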
Pre-rendered bitmap fonts are probably the way to go. Render your font using whatever, arrange the characters in a grid, and save the image in a simple uncompressed format like PPM, BMP or TGA. If you want antialiasing, make sure to use a format that supports transparency (BMP and TGA do; PPM does not).
In order to support proportional widths, you'll need to extract the width of each character from the grid. There's no simple way to do this; it depends on how you generate the grid. You could probably write a short program to analyze each character and find its minimal bounding box (a rough sketch of that scan is below). Once you have the width data, put it in an auxiliary file which contains the coordinates and sizes of each character.
Finally, to render a string, you look up each character and bitblit its rectangle from the font bitmap onto your frame buffer, advancing the raster position by the width of the character.
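A rough sketch of that width scan, assuming an 8-bit greyscale grid where 0 means background and fixed-size cells (glyph_width is an illustrative name):

    /* Return the used width of one glyph cell: the rightmost column containing ink, plus one. */
    int glyph_width(const unsigned char *grid, int grid_pitch,
                    int cell_x, int cell_y, int cell_w, int cell_h)
    {
        int width = 0;
        for (int x = 0; x < cell_w; ++x)
            for (int y = 0; y < cell_h; ++y)
                if (grid[(cell_y + y) * grid_pitch + (cell_x + x)] != 0) {
                    width = x + 1;
                    break;              /* this column has ink; move on to the next column */
                }
        return width;
    }

The resulting widths go into the auxiliary file alongside each character's grid coordinates.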
We have successfully used the SRGP package for fonts. We used fixed-pitch fonts, so I'm not sure whether it can handle proportional fonts.
We're using bitmap fonts generated by AngelCode's bitmap font generator:
http://www.angelcode.com/products/bmfont/
This is very usable, as it has XML output which is easy to convert to whatever data format you need.
AngelCode's BMFont also adds kerning and better packing compared to the old alternative, MudFont.