Check if a Unicode character is displayed or tofu (C)

My question is similar to this one, but goes a step further.
In my Win32 program I have some menu buttons with Unicode characters above the BMP, such as U+1F5A4 🖤 (UTF-16 surrogate pair 0xD83D 0xDDA4).
In Windows 10 the system font Segoe UI doesn't have this glyph: it is automagically replaced with a glyph from the font Segoe UI Symbol and displayed correctly in the button, thanks to a process called font linking (or font fallback, still not clear to me).
But in Windows 7 font linking leads to a font that doesn't have this glyph either, and the surrogate pair appears as two empty boxes ▯▯. The same happens in Windows XP with the Tahoma font.
I want to avoid these replacement boxes, by parsing the text before or after the assignment to the button, and replacing the missing glyph with some common ASCII character.
I tried GetGlyphOutline, ScriptGetCMap, GetFontUnicodeRanges and GetGlyphIndices but they don't support surrogate pairs.
I also tried GetCharacterPlacement and Uniscribe ScriptItemize+ScriptShape, which do support surrogate pairs, but all these functions look only in the base font of the HDC (Segoe UI); they don't consider the eventual fallback font (Segoe UI Symbol), which is the one that provides the 🖤 glyph.
I also looked at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontLink\SystemLink, but I really don't think that is where the system takes the fonts to link to.
The question is: how can I know if the system font-linking produces the correct glyph or tofu boxes instead?
Edit
I found some kind of solution by copying from this code and adding the final GetCharacterPlacement.
#include <windows.h>
#include <usp10.h>
#include <stdio.h>
#include <wchar.h>

int CALLBACK metaFileEnumProc( HDC hdc, HANDLETABLE *table, const ENHMETARECORD *record,
                               int tableEntries, LPARAM logFont );

wchar_t *checkGlyphExist( HWND hwnd, wchar_t *sUnicode, wchar_t *sLimited ) {
    // Create a metafile DC, so the font selection done by Uniscribe can be recorded
    HDC hdc = GetDC( hwnd );
    HDC metaFileDC = CreateEnhMetaFile( hdc, NULL, NULL, NULL );

    // Select the menu font
    NONCLIENTMETRICSW ncm;
    ncm.cbSize = sizeof(ncm);
    SystemParametersInfoW( SPI_GETNONCLIENTMETRICS, ncm.cbSize, &ncm, 0 );
    HFONT hFont = CreateFontIndirectW( &(ncm.lfMenuFont) );
    SelectObject( metaFileDC, hFont );
    wprintf( L"%s\n", ncm.lfMenuFont.lfFaceName ); // 'Segoe UI' in Win 10 and 7 (ok)
                                                   // 'Tahoma' in Win XP (ok)

    // Use the metafile to intercept the fallback font chosen by Uniscribe
    SCRIPT_STRING_ANALYSIS ssa;
    ScriptStringAnalyse( metaFileDC, sUnicode, (int)wcslen(sUnicode), 0, -1,
                         SSA_METAFILE | SSA_FALLBACK | SSA_GLYPHS | SSA_LINK,
                         0, NULL, NULL, NULL, NULL, NULL, &ssa );
    ScriptStringFree( &ssa );
    HENHMETAFILE metaFile = CloseEnhMetaFile( metaFileDC );
    DeleteObject( hFont );

    LOGFONTW logFont = {0};
    EnumEnhMetaFile( NULL, metaFile, metaFileEnumProc, &logFont, NULL );
    DeleteEnhMetaFile( metaFile );
    wprintf( L"%s\n", logFont.lfFaceName );
    // 'Segoe UI Symbol' in Win 10 (ok)
    // 'Microsoft Sans Serif' in Win 7 (wrong, should be 'Segoe UI Symbol')
    // 'Tahoma' in Win XP for characters above 0xFFFF (wrong, should be 'Microsoft Sans Serif', I guess)

    // Get glyph indices for the 'sUnicode' string with the intercepted font
    hFont = CreateFontIndirectW( &logFont );
    HGDIOBJ oldFont = SelectObject( hdc, hFont );
    GCP_RESULTSW infoStr = {0};
    infoStr.lStructSize = sizeof(GCP_RESULTSW);
    wchar_t tempStr[wcslen(sUnicode) + 1];         // C99 VLA; +1 for the terminating NUL
    wcscpy( tempStr, sUnicode );
    infoStr.lpGlyphs = tempStr;
    infoStr.nGlyphs = (UINT)wcslen(tempStr);
    GetCharacterPlacementW( hdc, tempStr, (int)wcslen(tempStr), 0, &infoStr, GCP_GLYPHSHAPE );
    SelectObject( hdc, oldFont );
    DeleteObject( hFont );
    ReleaseDC( hwnd, hdc );

    // Return one string
    if( infoStr.lpGlyphs[0] == 3 ||                // missing-glyph index observed on Windows 7 and 10
        infoStr.lpGlyphs[0] == 0 )                 // missing-glyph index observed on Windows XP
        return sLimited;
    else
        return sUnicode;
}

// Callback function to intercept the font creation recorded in the metafile
int CALLBACK metaFileEnumProc( HDC hdc, HANDLETABLE *table, const ENHMETARECORD *record,
                               int tableEntries, LPARAM logFont ) {
    if( record->iType == EMR_EXTCREATEFONTINDIRECTW ) {
        const EMREXTCREATEFONTINDIRECTW *fontRecord = (const EMREXTCREATEFONTINDIRECTW *)record;
        *(LOGFONTW *)logFont = fontRecord->elfw.elfLogFont;
    }
    return 1;
}
You can call it with checkGlyphExist( hWnd, L"🖤", L"<3" );
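As an aside, the hard-coded glyph indices 3 and 0 could probably be replaced by asking the font itself for its missing-glyph index via Uniscribe; a sketch of my own (the isMissingGlyph helper is hypothetical, untested on XP):

#include <windows.h>
#include <usp10.h>

// Compare a glyph index against the missing-glyph ("tofu") index of the font
// currently selected into hdc.
BOOL isMissingGlyph( HDC hdc, WORD glyphIndex ) {
    SCRIPT_FONTPROPERTIES sfp = { sizeof(SCRIPT_FONTPROPERTIES) };
    SCRIPT_CACHE cache = NULL;
    if( FAILED( ScriptGetFontProperties( hdc, &cache, &sfp ) ) )
        return FALSE;                       // be conservative if the query fails
    ScriptFreeCache( &cache );
    return glyphIndex == sfp.wgDefault;     // wgDefault is the font's default (missing) glyph
}

The final test would then be if( isMissingGlyph( hdc, infoStr.lpGlyphs[0] ) ) return sLimited;, called right after GetCharacterPlacementW while the fallback font is still selected into the DC.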
I tested on Windows 10 and on two virtual machines: Windows 7 Professional, Windows XP SP2.
It works quite well, but two problems still remain with the fallback font that EnumEnhMetaFile retrieves when a glyph is missing in the base font:
in Windows 7 it is always Microsoft Sans Serif, but the real fallback font should be Segoe UI Symbol;
in Windows XP it is Tahoma instead of Microsoft Sans Serif, but only for surrogate-pair characters (for BMP characters it is Microsoft Sans Serif, which is correct, I guess).
Can someone help me to solve this?

First you have to make sure you're using the same API on both Windows 7 and Windows 10. The lower-level gdi32 API is not supposed to support surrogate pairs in general, I think, while the newer DirectWrite does, on every level. The next thing to keep in mind is that font fallback data (font linking is a different thing) differs from release to release, and it's not something the user has access to or can modify.
The second thing to check is whether Windows 7 provides a font for the symbol at U+1F5A4 in the first place; it's possible it was introduced only in later versions.
Basically, if you're using system rendering functionality, older or newer, you're not supposed to control fallback most of the time; if it doesn't work for you, it usually means it won't work. DirectWrite allows custom fallback lists, where you can, for example, explicitly assign U+1F5A4 to any font that supports it, including custom fonts that you can bundle with your application.
If you want a more detailed answer, you'll need to show some source excerpts that don't work for you.
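For illustration only, a rough sketch of such a custom fallback list, written by me for this answer: it requires IDWriteFactory2 (Windows 8.1 or later), error handling is omitted, and 'Segoe UI Symbol' is just an assumed target family.

#include <dwrite_2.h>
#pragma comment(lib, "dwrite.lib")

// Map U+1F5A4 to an explicit family, keep the system fallback for the rest.
IDWriteFontFallback *createCustomFallback( IDWriteFactory2 *factory )
{
    IDWriteFontFallbackBuilder *builder = NULL;
    factory->CreateFontFallbackBuilder( &builder );

    // Explicit mapping for the black heart code point.
    DWRITE_UNICODE_RANGE range = { 0x1F5A4, 0x1F5A4 };
    const WCHAR *families[] = { L"Segoe UI Symbol" };   // assumed to carry the glyph
    builder->AddMapping( &range, 1, families, 1, NULL, NULL, NULL, 1.0f );

    // Append the regular system mappings for everything else.
    IDWriteFontFallback *systemFallback = NULL;
    factory->GetSystemFontFallback( &systemFallback );
    builder->AddMappings( systemFallback );
    systemFallback->Release();

    IDWriteFontFallback *fallback = NULL;
    builder->CreateFontFallback( &fallback );
    builder->Release();
    return fallback;    // apply with IDWriteTextFormat1::SetFontFallback
}

The returned object would be set on an IDWriteTextFormat1 via SetFontFallback; mappings are consulted in the order they are added, so the explicit range takes precedence over the copied system entries.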

I believe the high and low 16-bit words are well defined for surrogate pairs. You should be able to identify surrogate pairs by checking the range of values for each of the 16-bit words.
For the high word it should be in the range of 0xd800 to 0xdbff
For the low word it should be in the range of 0xdc00 to 0xdfff
If any pair of "characters" meets these criteria, it is a surrogate pair.
See the wikipedia article on UTF-16 for more information.
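For completeness, a small sketch of that check, plus the recombination back into a code point (0xD83D/0xDDA4 yields U+1F5A4):

#include <cstdint>

bool isSurrogatePair( wchar_t hi, wchar_t lo )
{
    return hi >= 0xD800 && hi <= 0xDBFF &&   // high (leading) surrogate
           lo >= 0xDC00 && lo <= 0xDFFF;     // low (trailing) surrogate
}

uint32_t codePointOf( wchar_t hi, wchar_t lo )
{
    return 0x10000u + ( ( (uint32_t)( hi - 0xD800 ) << 10 ) | (uint32_t)( lo - 0xDC00 ) );
}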

Related

Win32 GDI color palette transparency bug

This question could be considered more of a bug report on an unsightly and time-wasting issue I've recently encountered while using Win32/GDI:
That is, loading a bitmap image into a static control (a bitmap static control, not an icon). I'll demonstrate with the following code (this follows the creation of the main window):
HBITMAP hbmpLogo;
/* Load the logo bitmap graphic, compiled into the executable file by a resource compiler */
hbmpLogo = (HBITMAP)LoadImage(
    wc.hInstance,             /* <-- derived from GetModuleHandle(NULL) */
    MAKEINTRESOURCE(ID_LOGO), /* <-- ID_LOGO defined in a header */
    IMAGE_BITMAP,
    0, 0,
    LR_CREATEDIBSECTION | LR_LOADTRANSPARENT);
/* We have a fully functioning handle to a bitmap at this line */
if (!hbmpLogo)
{
    /* Thus this statement is never reached */
    abort();
}
We then create the control, which is a child of the main window:
/* Add static control */
m_hWndLogo = CreateWindowExW(
    0,                 /* Extended styles, not used */
    L"STATIC",         /* Class name, we want a STATIC control */
    (LPWSTR)NULL,      /* Would be window text, but we would instead pass an integer identifier
                        * here, formatted (as a string) in the form "#100" (let 100 = ID_LOGO) */
    SS_BITMAP | WS_CHILD | WS_VISIBLE, /* Styles specified. SS = Static Style. We select
                                        * bitmap, rather than other static control styles. */
    32,                /* X */
    32,                /* Y */
    640,               /* Width. */
    400,               /* Height. */
    hMainParentWindow,
    (HMENU)ID_LOGO,    /* hMenu parameter, repurposed in this case as an identifier for the
                        * control, hence the obfuscatory use of the cast. */
    wc.hInstance,      /* Program instance handle appears here again ( GetModuleHandle(NULL) ) */
    NULL);
if (!m_hWndLogo)
{
    abort();           /* Also never called */
}
/* We then arm the static control with the bitmap by the, once more quite obfuscatory, use of
 * a 'SendMessage'-esque interface function: */
SendDlgItemMessageW(
    hMainParentWindow,    /* Window containing the control */
    ID_LOGO,              /* The identifier of the control, passed in via the HMENU parameter
                           * of CreateWindow(...). */
    STM_SETIMAGE,         /* The action we want to effect, which is, arming the control with the
                           * bitmap we've loaded. */
    (WPARAM)IMAGE_BITMAP, /* Specifying a bitmap, as opposed to an icon or cursor. */
    (LPARAM)hbmpLogo);    /* Passing in the bitmap handle. */
/* At this line, our static control is sufficiently initialised. */
What is not impressive about this segment of code is the mandated use of LoadImage(...) to load the graphic from the program resources, where it is otherwise seemingly impossible to specify that our image will require transparency. Both flags LR_CREATEDIBSECTION and LR_LOADTRANSPARENT are required to effect this (once again, very ugly and not very explicit behavioural requirements. Why isn't LR_LOADTRANSPARENT good on its own?).
I will elaborate: the bitmap has been tried at different bit depths, each less than 16 bits per pixel (i.e. using colour palettes), and this produces distractingly inconsistent results between them. [Edit: See further discoveries in my answer]
What exactly do I mean by this?
A bitmap loaded at 8 bits per pixel, thus having a 256-length colour palette, renders with the first colour of the bitmap deleted (that is, set to the window class background brush colour); in effect, the bitmap is now 'transparent' in the appropriate areas. This behaviour is expected.
I then recompile the executable, now loading a similar bitmap but at (a reduced) 4 bits per pixel, thus having a 16-length colour palette. All is good and well, except I discover that the transparent region of the bitmap is painted with the WRONG background colour, one that does not match the window background colour. My wonderful bitmap has an unsightly grey rectangle around it, revealing its bounds.
What should the window background colour be? All documentation leads back, very explicitly, to this (HBRUSH)NULL-inclusive eyesore:
WNDCLASSEX wc = {}; /* Zero initialise */
/* initialise various members of wc
* ...
* ... */
wc.hbrBackground = (HBRUSH)(COLOR_WINDOW+1); /* Here is the eyesore. */
Where a certain colour preset must be incremented, then cast to a HBRUSH typename, to specify the desired background colour. 'Window colour' is an obvious choice, and a fragment of code very frequently recurring and reproducible.
You may note that when this is not done, the Window instead assumes the colour of its preceding number code, which on my system happens to be the 'Scroll' colour. Indeed, and alas, if I happen to forget the notorious and glorious +1 appended to the COLOR_WINDOW HBRUSH, my window will become the unintended colour of a scroll bar.
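(For what it's worth, the same background can be specified without the +1 arithmetic by handing the class a real brush handle; a minimal sketch of the equivalent intent, mirroring the fragment above:)

/* Equivalent background specification without the +1 trick: a cached system
 * brush rather than a colour index disguised as an HBRUSH. */
wc.hbrBackground = GetSysColorBrush(COLOR_WINDOW);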
And it seems this mistake has propagated within Microsoft's own library. Evidence? A 4-bpp bitmap, when loaded, will also erase the bitmap's transparent areas to the wrong background colour, where an 8-bpp bitmap does not.
TL;DR
It seems the programmers at Microsoft themselves do not fully understand their own Win32/GDI interface jargon, especially regarding the peculiar design choice behind adding 1 to the Window Class WNDCLASS[EX] hbrBackground member (supposedly to support (HBRUSH)NULL).
This is unless, of course, anyone can spot a mistake on my part?
Shall I submit a bug report?
Many thanks.
As though to patch over a hole in a parachute, there is a solution that produces consistency, implemented in the window callback procedure:
LRESULT CALLBACK WndProc(HWND hWnd, UINT uiMsg, WPARAM wp, LPARAM lp)
{
    /* ... */
    switch (uiMsg)
    {
    /* This message is sent to us as a 'request' for the background colour
     * of the static control. */
    case WM_CTLCOLORSTATIC:
        /* WPARAM will contain the handle of the DC */
        /* LPARAM will contain the handle of the control */
        if (lp == (LPARAM)g_hLogo)
        {
            SetBkMode((HDC)wp, TRANSPARENT);
            return (LRESULT)GetSysColorBrush(COLOR_WINDOW); /* Here's the magic */
        }
        break;
    }
    return DefWindowProc(hWnd, uiMsg, wp, lp);
}
It turns out the problem was not reproducible when other transparent bitmaps of varying sizes (not only bit depths) were loaded.
This is horrible. I am not sure why this happens. Insights?
EDIT: All classes have been removed to produce a neat 'minimal reproducible example'.

Font layout & rendering with cairo and freetype

I have a system that has only the freetype2 and cairo libraries available. What I want to achieve is:
getting the glyphs for a UTF-8 text
laying out the text, storing position information (by myself)
getting cairo paths for each glyph for rendering
Unfortunately the documentation doesn't really explain how it should be done, as they expect one to use a higher level library like Pango.
What I think could be right is: create a scaled font with cairo_scaled_font_create and then retrieve the glyphs for the text using cairo_scaled_font_text_to_glyphs. cairo_glyph_extents then gives the extents for each glyph. But how can I then get things like kerning and the advance? Also, how can I then get paths for each glyph?
Are there some more resources on this topic? Are these functions the expected way to go?
Okay, so I found what's needed.
You first need to create a cairo_scaled_font_t, which represents a font at a specific size. To do so, one can simply use cairo_get_scaled_font after setting a font; it creates a scaled font for the current settings in the context.
Next, you convert the input text using cairo_scaled_font_text_to_glyphs, this gives an array of glyphs and also clusters as output. The cluster mappings represent which part of the UTF-8 string belong to the corresponding glyphs in the glyph array.
To get the extents of glyphs, cairo_scaled_font_glyph_extents is used. It gives dimensions, advances and bearings of each glyph/set of glyphs.
Finally, the paths for glyphs can be put in the context using cairo_glyph_path. These paths can then be drawn as wished.
The following example converts an input string to glyphs, retrieves their extents and renders them:
const char* text = "Hello world";
int fontSize = 14;
cairo_font_face_t* fontFace = ...;

// get the scaled font object
cairo_set_font_face(cr, fontFace);
cairo_set_font_size(cr, fontSize);
auto scaled_face = cairo_get_scaled_font(cr);

// get glyphs for the text
cairo_glyph_t* glyphs = NULL;
int glyph_count;
cairo_text_cluster_t* clusters = NULL;
int cluster_count;
cairo_text_cluster_flags_t clusterflags;
auto stat = cairo_scaled_font_text_to_glyphs(scaled_face, 0, 0, text, strlen(text),
                                             &glyphs, &glyph_count, &clusters, &cluster_count, &clusterflags);

// check if conversion was successful
if (stat == CAIRO_STATUS_SUCCESS) {
    // text paints on bottom line
    cairo_translate(cr, 0, fontSize);

    // draw each cluster
    int glyph_index = 0;
    int byte_index = 0;
    for (int i = 0; i < cluster_count; i++) {
        cairo_text_cluster_t* cluster = &clusters[i];
        cairo_glyph_t* clusterglyphs = &glyphs[glyph_index];

        // get extents for the glyphs in the cluster
        cairo_text_extents_t extents;
        cairo_scaled_font_glyph_extents(scaled_face, clusterglyphs, cluster->num_glyphs, &extents);
        // ... for later use

        // put paths for current cluster to context
        cairo_glyph_path(cr, clusterglyphs, cluster->num_glyphs);

        // draw black text with green stroke
        cairo_set_source_rgba(cr, 0.2, 0.2, 0.2, 1.0);
        cairo_fill_preserve(cr);
        cairo_set_source_rgba(cr, 0, 1, 0, 1.0);
        cairo_set_line_width(cr, 0.5);
        cairo_stroke(cr);

        // glyph/byte position
        glyph_index += cluster->num_glyphs;
        byte_index += cluster->num_bytes;
    }

    // release the buffers allocated by cairo
    cairo_glyph_free(glyphs);
    cairo_text_cluster_free(clusters);
}
Those functions seem to be the best way, considering Cairo's text system. It just shows even more that Cairo isn't really meant for text. It won't be able to do kerning or paths really. Pango, I believe, would have its own complex code for doing those things.
For best advancement of Ghost, I would recommend porting Pango, since you (or someone else) will probably eventually want it anyway.
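If kerning is ever needed without pulling in Pango, one possible route, sketched here under the assumption that cairo's FreeType backend is in use (the glyph indices are the cairo_glyph_t index values from the example above), is to lock the underlying FT_Face and ask FreeType directly:

#include <cairo-ft.h>
#include <ft2build.h>
#include FT_FREETYPE_H

// Horizontal kerning, in pixels, between two glyph indices of a scaled font.
double kerning_between(cairo_scaled_font_t* scaled_face,
                       unsigned int left_glyph, unsigned int right_glyph)
{
    FT_Face face = cairo_ft_scaled_font_lock_face(scaled_face);
    if (!face)
        return 0.0;                              // not a FreeType-backed font

    double kern = 0.0;
    if (FT_HAS_KERNING(face)) {
        FT_Vector delta;
        if (FT_Get_Kerning(face, left_glyph, right_glyph,
                           FT_KERNING_DEFAULT, &delta) == 0)
            kern = delta.x / 64.0;               // FreeType reports 26.6 fixed-point values
    }
    cairo_ft_scaled_font_unlock_face(scaled_face);
    return kern;
}

The returned delta would simply be added to your own advance bookkeeping between neighbouring glyphs.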

GDI Monochrome Bitmap flips bits on each creation of HBITMAP

I've been trying to load a 16-bit (A1R5G5B5) BMP from file and use its alpha channel as a bit mask. I have gotten everything to work fine except for one problem that's been troubling me for the past week: when I use CreateDIBitmap to make the 1-bit channel from a buffer of bytes, the bitmap created uses the inverse of all its bits, but only on the first draw. On the next paint the bits flip to match the data supplied and remain that way for all the draws after. This behavior is very strange and occurs on all Windows versions. I've tracked it down to some sort of setting of the HDC and possibly CreateDIBitmap. I've tried many things, including setting the foreground and background color of both HDCs to many values before and after, but everything I have tried still keeps this behavior.
Here is a POC to try:
BITMAPINFOHEADER bmih;
BITMAPINFO bmi;
HBITMAP mask;
PBYTE data;
PBYTE alpha;
SIZE dimension;

void WhenCreated() // WM_CREATE
{
    dimension.cx=3;
    dimension.cy=1;
    // 1bpp DIB scanlines are padded to DWORD boundaries, so allocate 4 bytes per row
    alpha=(PBYTE)malloc(4);
    data=(PBYTE)malloc(4);
    alpha[0]=0xA0; // 0b10100000
}

#define BIN_SCAPE(B,A) (((B)[0]&(1<<(A)))?1:0)

void WhenPresenting(HDC H) // WM_PAINT
{
    printf(
        "ALPHA:\t%i %i %i\n",
        BIN_SCAPE(alpha,7),
        BIN_SCAPE(alpha,6),
        BIN_SCAPE(alpha,5)
    );
    HDC memory;
    HBITMAP matter;
    memory=CreateCompatibleDC(NULL);
    memset(&bmi,0x0,sizeof(BITMAPINFO));
    bmi.bmiHeader.biSize=sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth=dimension.cx;
    bmi.bmiHeader.biHeight=dimension.cy;
    bmi.bmiHeader.biPlanes=1;
    bmi.bmiHeader.biBitCount=1;
    bmi.bmiHeader.biCompression=BI_RGB;
    memset(&bmih,0x0,sizeof(BITMAPINFOHEADER));
    bmih.biSize=sizeof(BITMAPINFOHEADER);
    bmih.biWidth=bmi.bmiHeader.biWidth;
    bmih.biHeight=bmi.bmiHeader.biHeight;
    mask=CreateDIBitmap(
        memory,
        &bmih,
        CBM_INIT,
        alpha,
        &bmi,
        DIB_RGB_COLORS
    );
    SelectObject(memory,mask);
    GetDIBits(memory,mask,0,1,data,&bmi,DIB_RGB_COLORS);
    printf(
        "DATA:\t%i %i %i\n",
        BIN_SCAPE(data,7),
        BIN_SCAPE(data,6),
        BIN_SCAPE(data,5)
    );
    StretchBlt(
        H,
        0,0,128,128,
        memory,
        0,0,dimension.cx,dimension.cy,
        SRCCOPY
    );
    DeleteDC(memory);
    DeleteObject(mask);
}
When the program loads, the data displayed is the inverse of what is given; subsequent paints render the data as supplied, as seen in the console output, so there is definitely a flipping of bits happening. My guess is that the first HDC supplied may use a different palette than all the ones after it, which causes this behavior?
Now it all makes sense: it's the palette that is changing.
"biBitCount member is less than 16, the biClrUsed member specifies the actual number of colors the graphics engine or device driver accesses." (from msdn)
If you use a color HDC in CreateDIBitmap you'll get some color paired with black, and this color will change on each repaint, which will start freaking you out until you understand it is because you have not set a palette for the HBITMAP: when each HDC is made, its color palette is undefined unless specified. You could use SetDIBits, but if you want to have it done during CreateDIBitmap, try this:
PBITMAPINFO pbmi;
RGBQUAD palette[2];
{
    // this will give you white (1) and black (0)
    palette[0].rgbBlue=0x00;
    palette[0].rgbGreen=0x00;
    palette[0].rgbRed=0x00;
    palette[1].rgbBlue=0xFF;
    palette[1].rgbGreen=0xFF;
    palette[1].rgbRed=0xFF;
    // using a PBITMAPINFO in order to allocate room for the palette
    pbmi=(PBITMAPINFO)LocalAlloc(LPTR,sizeof(BITMAPINFO)+sizeof(RGBQUAD)*2); // this technically allocates an extra RGBQUAD
    pbmi->bmiHeader.biSize=sizeof(BITMAPINFOHEADER);
    pbmi->bmiHeader.biWidth=dimension.cx;
    pbmi->bmiHeader.biHeight=dimension.cy;
    pbmi->bmiHeader.biPlanes=1;
    pbmi->bmiHeader.biBitCount=1;
    pbmi->bmiHeader.biCompression=BI_RGB;
    pbmi->bmiHeader.biClrUsed=2; // palette is two colors long
    pbmi->bmiHeader.biClrImportant=2;
    memcpy(pbmi->bmiColors,palette,sizeof(RGBQUAD)*2);
    mask=CreateDIBitmap(
        memory,
        &bmih,
        CBM_INIT,
        alpha,
        pbmi,
        DIB_RGB_COLORS
    );
}

Rendering on a WPF Control with DirectX 11

I am trying to create a map editor based on WPF. Currently I'm using a hack to render DirectX contents. I created a WinFormsHost and rendered on a WinForms-Panel.
This is all because DirectX (I'm using DirectX 11 with feature level 10) wants a handle (alias IntPtr) to render to. I don't know how I can initialize and use the DX device without a handle.
But a WPF control has no handle. So I just found out there is an interop class called "D3DImage". But I don't understand how to use it.
My current system works like this:
The inner loop goes through a list of "IGameloopElement"s. For each, it renders its content by calling "Draw()". After that, it calls "Present()" on the swap chain to show the changes. Then it resets the device to switch the handle to the next element (mostly there is only one element).
Now, because D3DImage doesn't have a handle, how do I render onto it? I just know I have to use "Lock()" then "SetBackBuffer()", "AddDirtyRect()" and then "Unlock()".
But how do I render onto a DirectX11.Texture2D object without specifying a handle for the device?
I'm really lost... I just found the "DirectX 4 WPF" sample on CodePlex, but it implements all versions of DirectX, manages the device itself, and has a huge overhead.
I want to stay with my current system. I'm managing the device myself. I don't want the WPF control to handle it.
The loop should just call "Render()" and then pass the backbuffer texture to the WPF control.
Could anyone tell me how to do this? I'm totally stuck...
Thanks a lot :)
R
WPF's D3DImage only supports Direct3D 9/Direct3D 9Ex; it does not support Direct3D 11. You need to use DXGI surface sharing to make it work.
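A rough native sketch of that route, written by me for this answer (it assumes the D3D11 texture is DXGI_FORMAT_B8G8R8A8_UNORM and was created with MiscFlags = D3D11_RESOURCE_MISC_SHARED; error handling omitted):

#include <d3d11.h>
#include <d3d9.h>

// Open a shared D3D11 render target on a D3D9Ex device; the resulting surface
// is what D3DImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, ...) accepts.
IDirect3DSurface9* OpenSharedSurfaceForD3DImage(ID3D11Texture2D* d3d11Tex,
                                                IDirect3DDevice9Ex* d3d9Device)
{
    D3D11_TEXTURE2D_DESC desc;
    d3d11Tex->GetDesc(&desc);

    // Retrieve the DXGI shared handle of the D3D11 texture.
    IDXGIResource* dxgiRes = nullptr;
    d3d11Tex->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
    HANDLE shared = nullptr;
    dxgiRes->GetSharedHandle(&shared);
    dxgiRes->Release();

    // Passing the existing handle to CreateTexture on a D3D9Ex device opens
    // the shared resource instead of allocating a new one.
    IDirect3DTexture9* d3d9Tex = nullptr;
    d3d9Device->CreateTexture(desc.Width, desc.Height, 1, D3DUSAGE_RENDERTARGET,
                              D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                              &d3d9Tex, &shared);

    IDirect3DSurface9* surface = nullptr;
    d3d9Tex->GetSurfaceLevel(0, &surface);
    d3d9Tex->Release();
    return surface;
}

Each frame you would then Lock() the D3DImage, call SetBackBuffer with this surface, render with D3D11, and finish with AddDirtyRect and Unlock(), which matches the Lock/SetBackBuffer/AddDirtyRect/Unlock sequence mentioned in the question.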
Another answer wrote, "D3DImage only supports Direct3D9/Direct3D9Ex"... which is perhaps not entirely true for the last few years anyway. As I summarized in a comment here, the key appears to be that Direct3D11 with DXGI has a very specific interop compatibility mode (D3D11_SHARED_WITHOUT_MUTEX flag) which makes the ID3D11Texture2D1 directly usable as a D3DResourceType.IDirect3DSurface9, without copying any bits, which just so happens to be exactly (and only) what WPF D3DImage is willing to accept.
This is a rough sketch of what worked for me, to create a D3D11 SampleAllocator that produces ID3D11Texture2D1 that are directly compatible with WPF's Direct3D9. Because all the .NET interop shown here is of my own design, this will not be totally ready-to-run code to drop in your project, but the method, intent, and procedures should be clear for easy adaptation.
1. preliminary helper
static D3D_FEATURE_LEVEL[] levels =
{
    D3D_FEATURE_LEVEL._11_1,
    D3D_FEATURE_LEVEL._11_0,
};

static IMFAttributes GetSampleAllocatorAttribs()
{
    MF.CreateAttributes(out IMFAttributes attr, 6);
    attr.SetUINT32(in MF_SA_D3D11_AWARE, 1U);
    attr.SetUINT32(in MF_SA_D3D11_BINDFLAGS, (uint)D3D11_BIND.RENDER_TARGET);
    attr.SetUINT32(in MF_SA_D3D11_USAGE, (uint)D3D11_USAGE.DEFAULT);
    attr.SetUINT32(in MF_SA_D3D11_SHARED_WITHOUT_MUTEX, (uint)BOOL.TRUE);
    attr.SetUINT32(in MF_SA_BUFFERS_PER_SAMPLE, 1U);
    return attr;
}

static IMFMediaType GetMediaType()
{
    MF.CreateMediaType(out IMFMediaType mt);
    mt.SetUINT64(in MF_MT_FRAME_SIZE, new SIZEU(1920, 1080).ToSwap64());
    mt.SetGUID(in MF_MT_MAJOR_TYPE, in WMMEDIATYPE.Video);
    mt.SetUINT32(in MF_MT_INTERLACE_MODE, (uint)MFVideoInterlaceMode.Progressive);
    mt.SetGUID(in MF_MT_SUBTYPE, in MF_VideoFormat.RGB32);
    return mt;
}
2. the D3D11 device and context instances go somewhere
ID3D11Device4 m_d3D11_device;
ID3D11DeviceContext2 m_d3D11_context;
3. initialization code is next
void InitialSetup()
{
    D3D11.CreateDevice(
        null,
        D3D_DRIVER_TYPE.HARDWARE,
        IntPtr.Zero,
        D3D11_CREATE_DEVICE.BGRA_SUPPORT,
        levels,
        levels.Length,
        D3D11.SDK_VERSION,
        out m_d3D11_device,
        out D3D_FEATURE_LEVEL _,
        out m_d3D11_context);

    MF.CreateDXGIDeviceManager(out uint tok, out IMFDXGIDeviceManager m_dxgi);
    m_dxgi.ResetDevice(m_d3D11_device, tok);

    MF.CreateVideoSampleAllocatorEx(
        ref REFGUID<IMFVideoSampleAllocatorEx>.GUID,
        out IMFVideoSampleAllocatorEx sa);
    sa.SetDirectXManager(m_dxgi);
    sa.InitializeSampleAllocatorEx(
        PrerollSampleSink.QueueMax,
        PrerollSampleSink.QueueMax * 2,
        GetSampleAllocatorAttribs(),
        GetMediaType());
}
4. use sample allocator to repeatedly generate textures, as needed
ID3D11Texture2D1 CreateTexture2D(SIZEU sz)
{
    var vp = new D3D11_VIEWPORT
    {
        TopLeftX = 0f,
        TopLeftY = 0f,
        Width = sz.Width,
        Height = sz.Height,
        MinDepth = 0f,
        MaxDepth = 1f,
    };
    m_d3D11_context.RSSetViewports(1, ref vp);

    var desc = new D3D11_TEXTURE2D_DESC1
    {
        SIZEU = sz,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI_FORMAT.B8G8R8X8_UNORM,
        SampleDesc = new DXGI_SAMPLE_DESC { Count = 1, Quality = 0 },
        Usage = D3D11_USAGE.DEFAULT,
        BindFlags = D3D11_BIND.RENDER_TARGET | D3D11_BIND.SHADER_RESOURCE,
        CPUAccessFlags = D3D11_CPU_ACCESS.NOT_REQUESTED,
        MiscFlags = D3D11_RESOURCE_MISC.SHARED,
        TextureLayout = D3D11_TEXTURE_LAYOUT.UNDEFINED,
    };
    m_d3D11_device.CreateTexture2D1(ref desc, IntPtr.Zero, out ID3D11Texture2D1 tex2D);
    return tex2D;
}

Problem with TextRenderer.MeasureText

Hi, I am using the TextRenderer.MeasureText() method to measure the text width for a given font. I use the Arial Unicode MS font for measuring the width, which is a Unicode font containing characters for all languages. The method returns different widths on different servers. Both machines have Windows Server 2003 and .NET 3.5 SP1 installed.
Here is the code we used
using (Graphics g = Graphics.FromImage(new Bitmap(1, 1)))
{
    width = TextRenderer.MeasureText(g, word, textFont, new Size(5, 5), TextFormatFlags.NoPadding).Width;
}
Any idea why this happens?
I use C# 2.0
//--------------------------------------------------------------------------------------
// MeasureText always adds about 1/2 em width of white space on the right,
// even when NoPadding is specified. It returns zero for an empty string.
// To get the precise string width, measure the width of a string containing a
// single period and subtract that from the width of our original string plus a period.
//--------------------------------------------------------------------------------------
public static Size MeasureText(string Text, Font Font) {
    TextFormatFlags flags
        = TextFormatFlags.Left
        | TextFormatFlags.Top
        | TextFormatFlags.NoPadding
        | TextFormatFlags.NoPrefix;
    Size szProposed = new Size(int.MaxValue, int.MaxValue);
    Size sz1 = TextRenderer.MeasureText(".", Font, szProposed, flags);
    Size sz2 = TextRenderer.MeasureText(Text + ".", Font, szProposed, flags);
    return new Size(sz2.Width - sz1.Width, sz2.Height);
}
MeasureText is not known to be accurate.
Here's a better way:
protected int _MeasureDisplayStringWidth ( Graphics graphics, string text, Font font )
{
    if ( text == "" )
        return 0;

    StringFormat format = new StringFormat ( StringFormat.GenericDefault );
    RectangleF rect = new RectangleF ( 0, 0, 1000, 1000 );
    CharacterRange[] ranges = { new CharacterRange ( 0, text.Length ) };
    Region[] regions = new Region[1];

    format.SetMeasurableCharacterRanges ( ranges );
    format.FormatFlags = StringFormatFlags.MeasureTrailingSpaces;
    regions = graphics.MeasureCharacterRanges ( text, font, rect, format );
    rect = regions[0].GetBounds ( graphics );

    return (int)( rect.Right );
}
We had a similar problem several years back. In our case, for some reason we had different versions of the same font installed on two different machines. The OS version was the same, but the font was different.
Since you normally don't deploy a system font with your application setup, measuring and output results may vary from one machine to another, based on the font version.
Since you say ...
And not all the machines return different values only some of them..!
...this is something I'd check for.
