CoreText. How Do I Calculate the Bounding Box of an Attributed String? - core-text

In CoreText it is easy to ask: "for a given rectangle, how much of this attributed string will fit?"
CTFrameGetVisibleStringRange(frame).length
will tell you where in the string the next run of text should begin.
My question is: "given an attributed string and a width, what rect height do I need to completely bound the attributed string?".
Does the CoreText framework provide tools to do this?
Thanks,
Doug

What you need is CTFramesetterSuggestFrameSizeWithConstraints(); you can use it like so:
CTFramesetterRef frameSetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)(attributedString)); /* Create your framesetter based on your NSAttributedString */
CGFloat widthConstraint = 500; // Your width constraint, using 500 as an example
CGSize suggestedSize = CTFramesetterSuggestFrameSizeWithConstraints(
    frameSetter,                              /* Framesetter */
    CFRangeMake(0, attributedString.length),  /* String range (entire string) */
    NULL,                                     /* Frame attributes */
    CGSizeMake(widthConstraint, CGFLOAT_MAX), /* Constraints (CGFLOAT_MAX indicates unconstrained height) */
    NULL                                      /* Gives the range of the string that fits; not needed here */
);
CGFloat suggestedHeight = suggestedSize.height;
EDIT
//IMPORTANT: Release the framesetter, even with ARC enabled!
CFRelease(frameSetter);
ARC releases only Objective-C objects, and Core Text is a C API, so it is very likely you have a memory leak here. If your NSAttributedString is small and you do this once, you shouldn't see any bad consequences. But if you have a loop that calculates, say, 50 heights of big/complex NSAttributedStrings and you don't release the CTFramesetterRef, you can have serious memory leaks. You can confirm this with the Leaks tool in Instruments.
So the solution for this problem is to add CFRelease(frameSetter);
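If you measure strings in several places, here is a minimal sketch that wraps the above into a helper. The function name MeasureAttributedStringHeight is my own; it takes the string as a CFAttributedStringRef (pass your NSAttributedString through a __bridge cast), and it is only an illustration of the pattern above, not canonical API usage.
// Sketch: suggested height for an attributed string laid out at a fixed width.
// Core Foundation objects are not managed by ARC, hence the explicit CFRelease.
static CGFloat MeasureAttributedStringHeight(CFAttributedStringRef attrString, CGFloat width)
{
    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(attrString);
    CGSize suggestedSize = CTFramesetterSuggestFrameSizeWithConstraints(
        framesetter,
        CFRangeMake(0, CFAttributedStringGetLength(attrString)),
        NULL,                            /* frame attributes */
        CGSizeMake(width, CGFLOAT_MAX),  /* unconstrained height */
        NULL);                           /* fitting range, unused */
    CFRelease(framesetter);
    return suggestedSize.height;
}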

Related

supercollider - access buffer information inside a `Pbind` that uses a buffer array

In brief
I have an array of buffers; they are passed to a synth at random using a Pbind. I need to access info on the current buffer from within the Pbind, but I need help doing that!
Explanation of the problem
I have loaded an array of buffers containing samples. Those samples must be played in a random order (and at random intervals, but that's for later). To do so, I pass the buffers to a synth inside a Pbind. I want to set the \dur key to the length of the buffer currently being played. The thing is, I can't find a way to access info on the current buffer from within the Pbind. I have tried using Pkey, Pfset and Plambda, but with no success.
Does somebody know how to do this?
Code
The sounds are played using:
SynthDef(\player, {
    /*
    play a file from a buffer
    out: the output channel
    bufnum: the buffer to play
    */
    arg out=0, bufnum;
    Out.ar(
        out,
        PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), doneAction: Done.freeSelf) ! 2
    );
}).add;
The buffers are loaded into an array:
path = PathName.new("/path/to/files");
bufferArray = Array.new(100);
path.filesDo({
    arg file;
    bufferArray.add( Buffer.read(s, file.fullPath) );
});
My Pbind pattern works like this:
I define a \buffer value, which is a single buffer from the array.
I pass this \buffer to my synth.
I then try to calculate its duration (\dur) by dividing the buffer's number of frames by its sample rate. This is the part I can't seem to get right.
p = Pbind(
\buffer, Prand(bufferArray, inf),
\instrument, \player,
\bufnum, Pkey(\buffer),
\dur, (Pkey(\buffer.numFrames) / Pkey(\buffer.sampleRate))
)
Thanks in advance for your help!
Solution to the problem: how to access buffer information inside a Pbind pattern
After hours of searching, I've found a solution to this problem on the SuperCollider forum, and I'm posting my own solution here in case others are looking, like I was!
Define a global array of buffers
This isn't compulsory, but it means the buffer array only has to be created once. The buffers are loaded asynchronously using the action parameter of Buffer.read(), which triggers a function once each buffer has been loaded:
var path;
Buffer.freeAll; // avoid using up all the buffers on the server
path = PathName.new("/path/to/sound/files");
~bufferArray = Array.new(100);
path.filesDo({
    // add each buffer to `~bufferArray` asynchronously
    arg file;
    Buffer.read(s, file.fullPath, action: {
        arg buffer;
        ~bufferArray.add( buffer );
    })
});
Play the synth and use Pfunc to access buffer information inside the Pbind
This is the solution per se:
Define a Pbind pattern which triggers a synth to play the buffer.
Inside it, define a \buffer key holding the current buffer.
Then access data on that buffer inside a Pfunc. The Pfunc's function is passed the event being built, so keys defined earlier in the Pbind (such as \buffer) can be read from it.
p = Pbind(
    \buffer, Prand(~bufferArray, inf), // randomly pick one buffer from the array
    \instrument, \player,
    \bufnum, Pfunc { arg event; event[\buffer] }, // read the `\buffer` key from the event being built
    \dur, Pfunc { arg event; event[\buffer].numFrames / event[\buffer].sampleRate } // duration
);
p.play;
See the original answer on the SuperCollider forum for more details!

Why calling `free(malloc(8))`?

The Objective-C runtime's hashtable2.mm file contains the following code:
static void bootstrap (void) {
    free(malloc(8));
    prototypes = ALLOCTABLE (DEFAULT_ZONE);
    prototypes->prototype = &protoPrototype;
    prototypes->count = 1;
    prototypes->nbBuckets = 1; /* has to be 1 so that the right bucket is 0 */
    prototypes->buckets = ALLOCBUCKETS(DEFAULT_ZONE, 1);
    prototypes->info = NULL;
    ((HashBucket *) prototypes->buckets)[0].count = 1;
    ((HashBucket *) prototypes->buckets)[0].elements.one = &protoPrototype;
};
Why does it allocate and release the 8-bytes space immediately?
Another source of confusion is this method from objc-os.h:
static __inline void *malloc_zone_malloc(malloc_zone_t z, size_t size) { return malloc(size); }
It only uses one of its parameters, so why does the signature ask for two?
For the first question I can only guess. My bet is that it was done to avoid or reduce memory churn, or to segment the memory for some other reason. You can find it briefly discussed in the ChangeLog of bmalloc (which is a different allocator, so not quite relevant, but I could not find a better reference):
2017-06-02 Geoffrey Garen <ggaren@apple.com>
...
Updated for new APIs. Note that we cache one free chunk per page
class. This avoids churn in the large allocator when you
free(malloc(X))
It's unclear, however, whether the memory churn is caused by this technique or whether the technique was meant to address it.
For the second question: the Objective-C runtime used to work with "zones", so that everything allocated in a zone could be destroyed simply by destroying that zone, but this proved error-prone and it was eventually agreed not to use zones anymore. The API, however, still takes a zone parameter for historical reasons (backward compatibility, I assume), but the documentation says that zones are ignored:
Zones are ignored on iOS and 64-bit runtime on OS X. You should not use zones in current development.

How to figure out the shell's drag/drop icon size for use with SHDoDragDrop?

How do I figure out the right icon size to use so that the icon matches Explorer's default drag-and-drop icon?
(I'm trying to use it with SHDoDragDrop if that matters.)
The size depends on what the drag source put into the data object; when the drag comes from the Shell, it's 96x96.
You can check this: if you drag and drop a file onto any valid drop target, the data object will contain the "DragImageBits" format, and its data is an SHDRAGIMAGE structure:
typedef struct SHDRAGIMAGE {
SIZE sizeDragImage; // will contain 96x96 when dragged from the Shell
POINT ptOffset;
HBITMAP hbmpDragImage;
COLORREF crColorKey;
} SHDRAGIMAGE, *LPSHDRAGIMAGE;
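As a rough illustration (not a definitive recipe), here is a minimal C sketch of how a drop target could inspect that format, assuming the drag image travels as an HGLOBAL under the "DragImageBits" clipboard format; pDataObject stands for the IDataObject your drop target receives, and it needs <shlobj.h> and <ole2.h>:
// Sketch: read the SHDRAGIMAGE the drag source stored in the data object.
// Error handling trimmed; only sizeDragImage is read here.
// (In C++ you would call pDataObject->GetData(&fmt, &medium) directly.)
FORMATETC fmt = { 0 };
STGMEDIUM medium = { 0 };
fmt.cfFormat = (CLIPFORMAT)RegisterClipboardFormat(TEXT("DragImageBits"));
fmt.dwAspect = DVASPECT_CONTENT;
fmt.lindex = -1;
fmt.tymed = TYMED_HGLOBAL;
if (SUCCEEDED(pDataObject->lpVtbl->GetData(pDataObject, &fmt, &medium)))
{
    SHDRAGIMAGE *dragImage = (SHDRAGIMAGE *)GlobalLock(medium.hGlobal);
    if (dragImage)
    {
        SIZE size = dragImage->sizeDragImage; // 96x96 when dragged from the Shell
        GlobalUnlock(medium.hGlobal);
    }
    ReleaseStgMedium(&medium);
}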
If you're looking for a more static way, here is some code that seems to work, using the UxTheme API. Note that although it uses documented APIs and defines, I don't think the drag image size is documented as such.
...
#include <uxtheme.h>
#include <vsstyle.h>
#include <vssym32.h>
...
// note: error checking omitted
auto theme = OpenThemeData(NULL, VSCLASS_DRAGDROP);
SIZE size = {};
GetThemePartSize(theme, NULL, DD_IMAGEBG, 0, NULL, THEMESIZE::TS_DRAW, &size);
MARGINS margins = {};
GetThemeMargins(theme, NULL, DD_IMAGEBG, 0, TMT_CONTENTMARGINS, NULL, &margins);
// final size
size.cx -= margins.cxLeftWidth + margins.cxRightWidth;
size.cy -= margins.cyTopHeight + margins.cyBottomHeight;
CloseThemeData(theme);
As for DPI settings, I understand you want to mimic Explorer. In that case you'll have to do some computation yourself, depending on your needs and screen context, as the image size extracted by the Shell is itself fixed.
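For example, one hedged way to scale the size computed above for the monitor your window is on (assuming Windows 10 1607 or later for GetDpiForWindow, and hWnd being your drag source window; older systems would need GetDeviceCaps instead):
// Sketch: scale the theme size (or the Shell's nominal 96x96) to the window's DPI.
UINT dpi = GetDpiForWindow(hWnd);     // 96 at 100% scaling
size.cx = MulDiv(size.cx, dpi, 96);   // 96 DPI is the baseline the fixed size refers to
size.cy = MulDiv(size.cy, dpi, 96);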

SDL2 Texture sometimes empty after loading multiple 8 bit surfaces

I'll try to make this question as concise as possible, but don't hesitate to ask for clarification.
I'm dealing with legacy code, and I'm trying to load thousands of 8 bit images from disk to create a texture for each.
I've tried multiple things, and I'm at the point where I'm trying to load my 8 bit images into a 32 bit surface, and then create a texture from that surface.
The problem: while loading an 8 bit image onto a 32 bit surface works, when I then call SDL_CreateTextureFromSurface, I end up with a lot of textures that are completely blank (full of transparent pixels, 0x00000000).
Not all textures are wrong, though. Each time I run the program, I get different "bad" textures. Sometimes there are more, sometimes fewer. And when I step through the program in a debugger, I always end up with a correct texture (is that a timing problem?).
I know that the loading into the SDL_Surface is working, because I'm saving all the surfaces to disk and they're all correct. But I inspected the textures using NVIDIA Nsight Graphics, and more than half of them are blank.
Here's the offending code:
int __cdecl IMG_SavePNG(SDL_Surface*, const char*);

SDL_Texture* Resource8bitToTexture32(SDL_Renderer* renderer, SDL_Color* palette, int paletteSize, void* dataAddress, int Width, int Height)
{
    u32 uiCurrentOffset;
    u32 uiSourceLinearSize = (Width * Height);
    unsigned char* pSourceData = (unsigned char*)dataAddress; // raw 8 bit palette indices
    SDL_Color *currentColor;
    char strSurfacePath[500];

    // The texture we're creating
    SDL_Texture* newTexture = NULL;

    // Load image at specified address
    SDL_Surface* tempSurface = SDL_CreateRGBSurface(0x00, Width, Height, 32, 0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000);
    SDL_SetSurfaceBlendMode(tempSurface, SDL_BLENDMODE_NONE);

    if(SDL_MUSTLOCK(tempSurface))
        SDL_LockSurface(tempSurface);

    for(uiCurrentOffset = 0; uiCurrentOffset < uiSourceLinearSize; uiCurrentOffset++)
    {
        currentColor = &palette[pSourceData[uiCurrentOffset]];
        if(pSourceData[uiCurrentOffset] != PC_COLOR_TRANSPARENT)
        {
            ((u32*)tempSurface->pixels)[uiCurrentOffset] = (u32)((currentColor->a << 24) + (currentColor->r << 16) + (currentColor->g << 8) + (currentColor->b << 0));
        }
    }

    if(SDL_MUSTLOCK(tempSurface))
        SDL_UnlockSurface(tempSurface);

    // Create texture from surface pixels
    newTexture = SDL_CreateTextureFromSurface(renderer, tempSurface);

    // Save the surface to disk for verification only
    sprintf(strSurfacePath, "c:\\tmp\\surfaces\\%s.png", GenerateUniqueName());
    IMG_SavePNG(tempSurface, strSurfacePath);

    // Get rid of old loaded surface
    SDL_FreeSurface(tempSurface);

    return newTexture;
}
Note that in the original code, I'm checking for boundaries, and for NULL after the SDL_Create*. I'm also aware that it would be better to have a spritesheet for the textures instead of loading each texture individually.
EDIT:
Here's a sample of what I'm observing in Nsight if I capture a frame and use the Resources view.
The first 3186 textures are correct. Then I get 43 empty textures. Then 228 correct textures. Then 100 bad ones. Then 539 correct ones. Then 665 bad ones. It goes on randomly like that, and it changes each time I run my program.
Again, the surfaces saved by IMG_SavePNG are correct every time. This seems to indicate that something happens when I call SDL_CreateTextureFromSurface, but at this point I don't want to rule anything out, because it's a very weird problem and it smells of undefined behaviour all over the place. I just can't find it.
With the help of @mark-benningfield, I was able to find the problem.
TL;DR
There's a bug (or at least an undocumented feature) in SDL with the DX11 renderer. There's a workaround; see the end of this answer.
CONTEXT
I'm trying to load around 12,000 textures when my program starts. I know it's not a good idea, but I was planning on using that as a stepping stone to a more sane system.
DETAILS
What I realized while debugging the problem is that the SDL renderer for DirectX 11 does this when it creates a texture:
result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice,
&textureDesc,
NULL,
&textureData->mainTexture
);
Microsoft's documentation for the ID3D11Device::CreateTexture2D method indicates that:
If you don't pass anything to pInitialData, the initial content of the memory for the resource is undefined. In this case, you need to write the resource content some other way before the resource is read.
If we're to believe that article:
Default Usage
The most common type of usage is default usage. To fill a default texture (one created with D3D11_USAGE_DEFAULT) you can:
[...]
After calling ID3D11Device::CreateTexture2D, use ID3D11DeviceContext::UpdateSubresource to fill the default texture with data from a pointer provided by the application.
So it looks like D3D11_CreateTexture is using the second method of the default usage to initialize a texture and its content.
But right after that, SDL calls SDL_UpdateTexture (without checking the return value; I'll get to that later). If we dig down to the D3D11 renderer, we find this:
static int
D3D11_UpdateTextureInternal(D3D11_RenderData *rendererData, ID3D11Texture2D *texture, int bpp, int x, int y, int w, int h, const void *pixels, int pitch)
{
    ID3D11Texture2D *stagingTexture;
    [...]

    /* Create a 'staging' texture, which will be used to write to a portion of the main texture. */
    ID3D11Texture2D_GetDesc(texture, &stagingTextureDesc);
    [...]
    result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice, &stagingTextureDesc, NULL, &stagingTexture);
    [...]

    /* Get a write-only pointer to data in the staging texture: */
    result = ID3D11DeviceContext_Map(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0, D3D11_MAP_WRITE, 0, &textureMemory);
    [...]

    /* Commit the pixel buffer's changes back to the staging texture: */
    ID3D11DeviceContext_Unmap(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0);

    /* Copy the staging texture's contents back to the texture: */
    ID3D11DeviceContext_CopySubresourceRegion(rendererData->d3dContext, (ID3D11Resource *)texture, 0, x, y, 0, (ID3D11Resource *)stagingTexture, 0, NULL);

    SAFE_RELEASE(stagingTexture);

    return 0;
}
Note : code snipped for conciseness.
This seems to indicate, based on that article I mentioned, that SDL is using the second method of the Default Usage to allocate the texture memory on the GPU, but uses the Staging Usage to upload the actual pixels.
I don't know that much about DX11 programming, but that mixing up of techniques got my programmer's sense tingling.
I contacted a game programmer I know and explained the problem to him. He told me the following interesting bits :
The driver gets to decide where it stores staging textures. They usually end up in CPU RAM.
It's much better to specify a pInitialData pointer, as the driver can then decide to upload the textures asynchronously.
If you load too many staging textures without committing them to the GPU, you can fill up the RAM.
I then wondered why SDL didn't return an "out of memory" error when I called SDL_CreateTextureFromSurface, and I found out why (again, snipped for concision):
SDL_Texture *
SDL_CreateTextureFromSurface(SDL_Renderer * renderer, SDL_Surface * surface)
{
    [...]
    SDL_Texture *texture;
    [...]
    texture = SDL_CreateTexture(renderer, format, SDL_TEXTUREACCESS_STATIC,
                                surface->w, surface->h);
    if (!texture) {
        return NULL;
    }
    [...]
    if (SDL_MUSTLOCK(surface)) {
        SDL_LockSurface(surface);
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
        SDL_UnlockSurface(surface);
    } else {
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
    }
    [...]
    return texture;
}
If the creation of the texture is successful, it doesn't care whether or not updating the texture succeeded (there is no check on SDL_UpdateTexture's return value).
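If you want to detect that failure yourself, here is a minimal sketch (my own, not SDL's code) that creates the texture manually and checks SDL_UpdateTexture's return value, assuming an ARGB8888 surface like the one built in Resource8bitToTexture32 above:
// Sketch: create the texture explicitly so SDL_UpdateTexture's result can be checked.
SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STATIC, tempSurface->w, tempSurface->h);
if (texture == NULL)
{
    SDL_Log("SDL_CreateTexture failed: %s", SDL_GetError());
}
else if (SDL_UpdateTexture(texture, NULL, tempSurface->pixels, tempSurface->pitch) != 0)
{
    SDL_Log("SDL_UpdateTexture failed: %s", SDL_GetError());
    SDL_DestroyTexture(texture);
    texture = NULL;
}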
WORKAROUND
The poor man's workaround for this problem is to call SDL_RenderPresent each time you call SDL_CreateTextureFromSurface.
It's probably fine to do it only once every hundred textures or so, depending on your texture size. But just be aware that calling SDL_CreateTextureFromSurface repeatedly without presenting the renderer will actually fill up the system RAM, and SDL won't return any error condition you can check for.
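As a rough sketch of that workaround (loadSurfaceForIndex(), textures and textureCount are placeholders of my own, not part of any API):
// Sketch: present the renderer periodically while bulk-loading textures so the
// staging memory used by SDL_UpdateTexture gets flushed to the GPU.
for (int i = 0; i < textureCount; i++)
{
    SDL_Surface* surface = loadSurfaceForIndex(i);   // hypothetical loader
    textures[i] = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);

    if ((i % 100) == 0)
    {
        SDL_RenderPresent(renderer);                 // let the driver commit pending staging textures
    }
}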
The irony of this is that had I implemented a "correct" loading loop with a completion percentage on screen, I would never have had this problem. But fate had me implement it the quick-and-dirty way, as a proof of concept for a bigger system, and I got sucked into this problem.

SetProp problem

Can anybody tell me why the following code doesn't work? I don't get any compiler errors.
short value = 10;
SetProp(hCtl, "value", (short*) value);
The third parameter is typed as a HANDLE, so in my opinion, to meet the explicit contract of the function you should store the property as a HANDLE by allocating an HGLOBAL memory block. However, as noted in the comments below, MSDN states that any value can be specified, and indeed when I try it on Windows 7 using...
SetProp(hWnd, _T("TestProp"), (HANDLE)(10)); // or (HANDLE)(short*)(10)
...
(short)GetProp(hWnd, _T("TestProp"));
... I get back 10 from GetProp. I suspect that somewhere between your SetProp and GetProp one of two things happens: (1) the value of hWnd is different, so you're checking a different window, or (2) there is a timing issue: the property hasn't been set yet or has already been removed.
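For completeness, here is a minimal sketch of that direct-value approach with the casts spelled out (the property name "TestProp" is just the placeholder used above):
// Sketch: store a small integer value directly in the property, relying on
// MSDN's note that any value can be specified for the HANDLE parameter.
SetProp(hWnd, _T("TestProp"), (HANDLE)(INT_PTR)10);

// ... later ...
short value = (short)(INT_PTR)GetProp(hWnd, _T("TestProp"));

// Remove the property before the window is destroyed.
RemoveProp(hWnd, _T("TestProp"));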
If you wanted to use an HGLOBAL instead to follow the specific types of the function signature, you can follow this example in MSDN.
Even though a HANDLE is just a pointer, it's a specific data type that is allocated by calls into the Windows API. Lots of things have handles: icons, cursors, files, ... Unless the documentation explicitly states otherwise, to use a blob of data such as a short when the function calls for a HANDLE, you need a memory handle (an HGLOBAL).
The sample code linked above copies data as a string, but you can instead set it as another data type:
// TODO: Add error handling
hMem = GlobalAlloc(GPTR, sizeof(short));
lpMem = GlobalLock(hMem);
if (lpMem != NULL)
{
    *((short*)lpMem) = 10;
    GlobalUnlock(hMem);
}
SetProp(hwnd, _T("TestProp"), hMem); // store the memory handle as the property value
To read it back, when you call GetProp to get the HANDLE, you must lock it to read the memory:
// TODO: Add error handling
short val;
hMem = (HGLOBAL)GetProp(hwnd, ...);
if (hMem)
{
    lpMem = GlobalLock(hMem);
    if (lpMem)
    {
        val = *((short*)lpMem);
        GlobalUnlock(hMem); // balance the lock once the value has been read
    }
}
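One thing the snippets above don't show is cleanup. A hedged sketch, reusing the same placeholder property name, typically run in response to WM_DESTROY:
// Sketch: remove the property and free the HGLOBAL it referred to.
hMem = (HGLOBAL)RemoveProp(hwnd, _T("TestProp"));
if (hMem)
{
    GlobalFree(hMem);
}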
I would create the short on the heap, so that it continues to exist, or perhaps make it global, which may be what you did. Also, the cast for the short's address needs to be void * or HANDLE.
