How do I use SDL_LockTexture() to update dirty rectangles? - c

I'm migrating an application from SDL 1.2 to 2.0, and it keeps an array of dirty rectangles to determine which parts of its SDL_Surface to draw to the screen. I'm trying to find the best way to integrate this with SDL 2's SDL_Texture.
Here's how the SDL 1.2 driver is working: https://gist.github.com/nikolas/1bb8c675209d2296a23cc1a395a32a0d
And here's how I'm getting changes from the surface to the texture in SDL 2:
void *pixels;
int pitch;

SDL_LockTexture(_sdl_texture, NULL, &pixels, &pitch);
memcpy(pixels, _sdl_surface->pixels, pitch * _sdl_surface->h);
SDL_UnlockTexture(_sdl_texture);

for (int i = 0; i < num_dirty_rects; i++) {
    SDL_RenderCopy(_sdl_renderer, _sdl_texture, &_dirty_rects[i], &_dirty_rects[i]);
}
SDL_RenderPresent(_sdl_renderer);
I'm just updating the entire surface, but then taking advantage of the dirty rectangles in the RenderCopy(). Is there a better way to do things here, only updating the dirty rectangles? Will I run into problems calling SDL_LockTexture and UnlockTexture up to a hundred times every frame, or is that how they're meant to be used?
SDL_LockTexture accepts an SDL_Rect param which I could use here, but then it's unclear to me how to get the appropriate rect from _sdl_surface->pixels. How would I copy out just a small rect from this pixel data of the entire screen?
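Something like this per-rect copy is what I'm imagining; it's a rough sketch, assuming the surface and the texture share the same pixel format (update_dirty_rect is my own helper, not an SDL call). Since SDL documents the locked area as write-only and each dirty rect gets fully overwritten here, I'd expect locking per rect to be safe:

#include <SDL.h>
#include <string.h>

/* Lock only the dirty rect and copy it row by row out of the surface.
   The pitch returned by SDL_LockTexture is the texture's row stride,
   which can differ from the surface's pitch, hence the per-row memcpy. */
static void update_dirty_rect(SDL_Texture *texture, SDL_Surface *surface, const SDL_Rect *rect)
{
    void *pixels;
    int pitch;
    int bpp = surface->format->BytesPerPixel;

    SDL_LockTexture(texture, rect, &pixels, &pitch);

    /* pixels points at the top-left corner of the locked rect */
    const Uint8 *src = (const Uint8 *)surface->pixels
                       + rect->y * surface->pitch
                       + rect->x * bpp;
    Uint8 *dst = (Uint8 *)pixels;
    for (int row = 0; row < rect->h; row++) {
        memcpy(dst, src, (size_t)rect->w * bpp);
        src += surface->pitch;
        dst += pitch;
    }

    SDL_UnlockTexture(texture);
}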

Related

What is SceneKit doing between calls to didApplyConstraints and willRenderScene?

The SceneKit rendering loop is well documented here https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However neither of these documents explains what SceneKit does between calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work which has to be done there. Any insight into what SceneKit is doing would be very helpful.
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal, the second uses the depth buffer from the first but draws just a subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, gaussian blurred, scaled back up and then blended over the top of the first scene (all with custom Metal shaders).
The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each scene I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code which switches opacity but keep everything else (so there are two rendering passes but they both draw everything) the extra 5ms disappears and the overall frame rate is actually faster even though much more rendering is happening.
I'm writing Swift targeting macOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However, I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                                constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}
By changing the opacity of nodes you're invalidating parts of the scene graph which can result in additional work for the renderer.
It would be interesting to see if setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).

best way in linux to display gif gif87a image from C

What would be the best way, in Linux from GNU C (not C++), to display a GIF87a file on screen and redisplay it in the same location on the screen, so the user can observe changes made on the fly to the dataset? This is not an animated GIF.
In some old code (Fortran 77) there is a C wrapper that takes an image displayed on the screen and writes it to a GIF file; a comment cites X Window Applications Programming, 2nd Ed., Johnson & Reichard, as the reference used to write the C code that displays image data on screen and writes the GIF87a file. That code was written around 1995. The on-screen display of the image no longer works (just a black window), but the creation of the GIF file still works. What I would like to do, from the existing C code on SLES 11.4 with the libraries available there, is open the GIF file and display it on screen. The image, a contour plot, has a color bar whose min/max values the user sets to display the image to their liking, and it would be preferable to make it as easy and efficient as possible for the user to adjust those min/max values and then redraw the image (re-write the GIF, then redisplay it on screen in the same location). There are also a handful of other knobs the user can turn, such as windowing of the data (Hamming or Hann), and it would be best if the user could quickly and easily run through about five or more ways of looking at the image before settling on the one considered correct, then use that final GIF in PowerPoint, Excel, etc.
Writing an X11 application is non-trivial. You can display a GIF (or any one of around 200 image formats) using ImageMagick which is included in most Linux distros and is available for macOS. Windows doesn't count.
You can create and manipulate images from the command line, or in C if you want. So, let's create a GIF that is 1024x768 and full of random colours:
convert -size 1024x768 xc:blue +noise random -pointsize 72 -gravity center -annotate 0 "10" image.gif
Now we can display it, using ImageMagick's display program:
display image.gif &
Now we can get its X11 "window-id" with:
xprop -root
...
_NET_ACTIVE_WINDOW(WINDOW): window id # 0x600011
...
Now you can change the image, however you like with filters and blurs and morphology and thresholds and convolutions:
convert image.gif -threshold 80% -morphology erode diamond -blur 0x3 -convolve "3x3: -1,0,1, -2,0,2, -1,0,1" ... image.gif
And then tell the display program to redraw the window with:
display -window 0x600011 image.gif
Here is a little script that generates images with a new number in the middle of each frame and updates the screen:
for ((t=0; t<100; t++)); do
    convert -size 640x480 xc:blue +noise random -pointsize 72 -fill white -gravity center -annotate 0 "$t" image.gif
    display -window 0x600011 image.gif
done
Now all you need to do is find a little Python or Tcl/Tk library that draws some knobs and dials, reads their positions and changes the image accordingly and tells the screen to redraw.
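If you would rather drive that from your existing C code than from a script, you can shell out to the same display command; a rough sketch, reusing the window id found with xprop above (redisplay is my own helper, not a library call):

#include <stdio.h>
#include <stdlib.h>

/* Ask the already-running "display" program to redraw its window
   with the freshly re-written GIF. */
static void redisplay(const char *gif_path)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd, "display -window 0x600011 %s", gif_path);
    system(cmd); /* fine for a user-driven tool; check the return value in production */
}

int main(void)
{
    /* ... user adjusts min/max, windowing, etc., and the code re-writes image.gif ... */
    redisplay("image.gif");
    return 0;
}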
As a result of the lack of enthusiasm for my other answer, I thought I'd have another attempt. I had a quick look at Processing, which is a very simple language, very similar to C but much easier to program.
Here is a screen shot of it loading a GIF and displaying a couple of twiddly knobs - one of which I attached to do a threshold on the image.
Here's the code - it is not the prettiest in the world because it is my first ever code in Processing but you should be able to see what it is doing and adapt to your needs:
import controlP5.*;

ControlP5 cp5;

int myColorBackground = color(0, 0, 0);
int knobValue = 100;
float threshold = 128;

Knob myKnobA;
Knob myKnobB;

PImage src, dst; // source and destination images

void setup() {
  size(800, 900);
  // Make a new instance of a PImage by loading an image file
  src = loadImage("image.gif");
  // The destination image is created as a blank image the same size as the source
  dst = createImage(src.width, src.height, RGB);
  smooth();
  noStroke();

  cp5 = new ControlP5(this);
  myKnobA = cp5.addKnob("some knob")
      .setRange(0, 255)
      .setValue(50)
      .setPosition(130, 650)
      .setRadius(100)
      .setDragDirection(Knob.VERTICAL);

  myKnobB = cp5.addKnob("threshold")
      .setRange(0, 255)
      .setValue(220)
      .setPosition(460, 650)
      .setRadius(100)
      .setNumberOfTickMarks(10)
      .setTickMarkLength(4)
      .snapToTickMarks(true)
      .setColorForeground(color(255))
      .setColorBackground(color(0, 160, 100))
      .setColorActive(color(255, 255, 0))
      .setDragDirection(Knob.HORIZONTAL);
}

void draw() {
  background(0);
  src.loadPixels();
  dst.loadPixels();
  for (int x = 0; x < src.width; x++) {
    for (int y = 0; y < src.height; y++) {
      int loc = x + y * src.width;
      // Test the brightness against the threshold
      if (brightness(src.pixels[loc]) > threshold) {
        dst.pixels[loc] = color(255); // white
      } else {
        dst.pixels[loc] = color(0);   // black
      }
    }
  }
  // We changed the pixels in the destination
  dst.updatePixels();
  // Display the destination
  image(dst, 100, 80);
}

void knob(int theValue) {
  threshold = color(theValue);
  println("a knob event. setting threshold to " + theValue);
}

void keyPressed() {
  switch(key) {
    case('1'): myKnobA.setValue(180); break;
    case('2'): myKnobB.setConstrained(false).hideTickMarks().snapToTickMarks(false); break;
    case('3'): myKnobA.shuffle(); myKnobB.shuffle(); break;
  }
}
Here are some links I used - image processing, P5 library of widgets and knobs.

OpenGL model, view, projection matrices

I am trying to understand how cameras in OpenGL use matrices.
I've written a simple shader that looks like this:
#version 330 core

layout (location = 0) in vec3 a_pos;
layout (location = 1) in vec4 a_col;

uniform mat4 u_mvp_mat;
uniform mat4 u_mod_mat;
uniform mat4 u_view_mat;
uniform mat4 u_proj_mat;

out vec4 f_color;

void main()
{
    vec4 v = u_mvp_mat * vec4(0.0, 0.0, 1.0, 1.0);
    gl_Position = u_mvp_mat * vec4(a_pos, 1.0);
    //gl_Position = u_proj_mat * u_view_mat * u_mod_mat * vec4(a_pos, 1.0);
    f_color = a_col;
}
It's a bit verbose, but that's because I'm testing two approaches: passing in the model, view, and projection matrices and doing the multiplication on the GPU, or doing the multiplication on the CPU, passing in the MVP matrix, and just doing the mvp * position multiplication in the shader.
I understand that the latter can offer a performance increase, but drawing one quad I don't really see any performance issues at this point.
Right now I use this code to get the locations from my shader and create the model view and projection matrices.
pos_loc = get_attrib_location(ce_get_default_shader(), "a_pos");
col_loc = get_attrib_location(ce_get_default_shader(), "a_col");
mvp_matrix_loc = get_uniform_location(ce_get_default_shader(), "u_mvp_mat");
model_mat_loc = get_uniform_location(ce_get_default_shader(), "u_mod_mat");
view_mat_loc = get_uniform_location(ce_get_default_shader(), "u_view_mat");
proj_matrix_loc = get_uniform_location(ce_get_default_shader(), "u_proj_mat");

float h_w = (float)ce_get_width() * 0.5f;  // width = 320
float h_h = (float)ce_get_height() * 0.5f; // height = 480

model_mat = mat4_identity();
view_mat = mat4_identity();
proj_mat = mat4_identity();

point3* eye = point3_new(0, 0, 0);
point3* center = point3_new(0, 0, -1);
vec3* up = vec3_new(0, 1, 0);

mat4_look_at(view_mat, eye, center, up);
mat4_translate(view_mat, h_w, h_h, -20);

mat4_ortho(proj_mat, 0, ce_get_width(), 0, ce_get_height(), 1, 100);
mat4_scale(model_mat, 30, 30, 1);
mvp_mat = mat4_identity();
After this I set up my VAO and VBOs, then get ready to render.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(ce_get_default_shader()->shader_program);
glBindVertexArray(vao);
mvp_mat = mat4_multi(mvp_mat, view_mat, model_mat);
mvp_mat = mat4_multi(mvp_mat, proj_mat, mvp_mat);
glUniformMatrix4fv(mvp_matrix_loc, 1, GL_FALSE, mat4_get_data(mvp_mat));
glUniformMatrix4fv(model_mat_loc, 1, GL_FALSE, mat4_get_data(model_mat));
glUniformMatrix4fv(view_mat_loc, 1, GL_FALSE, mat4_get_data(view_mat));
glUniformMatrix4fv(proj_matrix_loc, 1, GL_FALSE, mat4_get_data(proj_mat));
glDrawElements(GL_TRIANGLES, quad->vertex_count, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
Assuming all the matrix math is correct, I would like to abstract the view and projection matrices out into a camera struct, and the model matrix into a sprite struct, so that I can avoid all this matrix math and make things easier to use.
The matrix multiplication order is:
Projection * View * Model * Vector
so the camera would hold the projection and view matrices while the sprite holds the model matrix.
Do all your camera transformations and your sprite transformations; then, right before you send the data to the GPU, do your matrix multiplications.
If I remember correctly, matrix multiplication isn't commutative, so view * projection * model would produce the wrong matrix.
Pseudo code:
glClearxxx(....);
glUseProgram(..);
glBindVertexArray(..);

mvp_mat = mat4_identity();
proj_mat = camera_get_proj_mat();
view_mat = camera_get_view_mat();
mod_mat = sprite_get_transform_mat();

mat4_multi(mvp_mat, view_mat, mod_mat); // mvp holds view * model
mat4_multi(mvp_mat, proj_mat, mvp_mat); // mvp holds proj * view * model

glUniformMatrix4fv(mvp_matrix_loc, 1, GL_FALSE, mat4_get_data(mvp_mat));
glDrawElements(...);
glBindVertexArray(0);
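For example, the camera and sprite structs might look like this (camera_t, sprite_t and camera_build_mvp are just names I'm considering, reusing my mat4_multi(dest, a, b) convention from above):

typedef struct camera {
    mat4 *proj_mat;
    mat4 *view_mat;
} camera_t;

typedef struct sprite {
    mat4 *model_mat;
} sprite_t;

/* Build proj * view * model into out, once per drawn object. */
void camera_build_mvp(camera_t *cam, sprite_t *spr, mat4 *out)
{
    mat4_multi(out, cam->view_mat, spr->model_mat); /* out = view * model */
    mat4_multi(out, cam->proj_mat, out);            /* out = proj * view * model */
}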
Is that a performant way to go about doing this that is scalable?
Is that a performant way to go about doing this that is scalable?
Yes, unless you have a very exotic use case of some sort which is very unlike the norm.
The last thing you should typically worry about is the performance of retrieving modelview and projection matrices from a camera.
That's because those matrices typically only need to be fetched once per frame per viewport. Millions of iterations' worth of other work can occur in a frame while scanline-rasterizing primitives, and pulling matrices out of a camera is a simple constant-time operation.
So typically you want to just make it as convenient as you like. In my case, I go all the way through an abstract interface of function pointers in a central SDK, at which point the functions then compute the proj/mv/ti_mv matrix on the fly out of user-defined properties associated with the camera. In spite of this, it never shows up as a hotspot -- it doesn't even show up in the profiler at all.
There's far more expensive things to worry about. Scalability implies scale -- the complexity of retrieving matrices out of camera doesn't scale. The number of triangles or quads or lines or other primitives you want to render could scale, the number of fragments processed in a frag shader can scale. Cameras typically don't scale except with respect to the number of viewports, and no one should ever have use for a million viewports.
I haven't checked it bit by bit, but what you're doing generally looks OK.
I would like to abstract view and projection matrix out into a camera struct
That's a most appropriate idea; I can hardly imagine a serious GL application without such an abstraction.
Is that a performant way to go about doing this that is scalable?
General constraints of scalability are:
diffuse and specular BRDFs (which, by the way, also require a light uniform, a normal attribute, and a normal-matrix calculation if the model's scaling is non-uniform), which need per-pixel illumination for quality rendering
the same with multiple lights (e.g. the sun and a close spotlight)
shadow maps! shadow maps? (one for each light source?)
transparency
reflections (mirrors, glass, water)
textures
As you can take from the list, you will not get very far with just an MVP uniform and a vertex coordinate attribute.
But the mere number of uniforms is by far not the most crucial point for performance. Seeing your code, I'm positive you will not recompile your shaders unnecessarily, will update your uniforms only when needed, will use Uniform Buffer Objects, etc.
The issue is the data that is plugged into those uniforms and VBOs. Or not.
Consider a humanoid mesh "Alice" running (that's a mesh morph + translation) across a city square on a windy (the water will have ripples) evening (more than one relevant light source), passing a fountain.
Let's assume we compute it all on the CPU, old-school, and only plug ready-to-render data into the shaders:
Alice's mesh is morphed, thus her VBOs need an update
Alice's mesh will move; thus all affected shadow maps will need an update (OK, granted, they are generated by shadow-illumination loops on the GPU, but if you do it the wrong way you will shove a lot of data around)
Alice's reflection in the fountain will come and go
Alice's hair will be swirled - the CPU may have quite a busy time, to say the least
(in fact the latter is so difficult that you will hardly see any halfway-realistic real-time animation of long open hair, but amazingly (no, not really) many ponytails and short haircuts)
And we've not yet talked about Alice's attire; let's just hope she's wearing a t-shirt and jeans (not a wide shirt and a skirt, which would require fall-of-the-folds and collision calculations).
As you may have guessed, that old-school approach doesn't take us far, and thus a balance has to be found between CPU and GPU operations.
In addition, one should think about parallelizing calculations at an early stage. It is advantageous to have the data as flat as possible, in chunks as large as reasonable, so one just puts a pointer and a size into a GL call and bids that data farewell without any copying, re-arranging, looping, or further ado.
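To illustrate that "pointer and size" hand-off, here is a minimal sketch of uploading a flat, interleaved vertex array to the GL in a single call (the layout is an arbitrary example, and I assume a loader such as glad or GLEW provides the GL entry points):

#include <stddef.h>

/* Flat, interleaved vertex data: position (3 floats) + colour (4 floats). */
typedef struct vertex {
    float pos[3];
    float col[4];
} vertex;

/* One pointer and one size handed to the GL; no copying, re-arranging or looping. */
void upload_vertices(GLuint vbo, const vertex *data, size_t count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)(count * sizeof(vertex)), data, GL_STATIC_DRAW);
}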
That's my 2 cents of wisdom for today about GL performance and scalability.

DirectX9 Use Geometry Instancing for a Mesh with multiple materials

I am trying to have a flexible Geometry Instancing code able to handle meshes with multiple materials. For a mesh with one material everything is fine. I manage to render as many instances as I want with a single draw call.
Things get a bit more complicated with multiple materials. My mesh comes from an .x file. It has one vertex buffer and one index buffer, but several materials. The indices to render for each subset (material) are stored in an attribute array.
Here is the code I use:
d3ddev->SetVertexDeclaration( m_vertexDeclaration );
d3ddev->SetIndices( m_indexBuffer );

d3ddev->SetStreamSourceFreq(0, (D3DSTREAMSOURCE_INDEXEDDATA | m_numInstancesToDraw));
d3ddev->SetStreamSource(0, m_vertexBuffer, 0, D3DXGetDeclVertexSize( m_geometryElements, 0 ));

d3ddev->SetStreamSourceFreq(1, (D3DSTREAMSOURCE_INSTANCEDATA | 1ul));
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ));

m_effect->Begin(NULL, NULL); // begin using the effect
m_effect->BeginPass(0);      // begin the pass

for( DWORD i = 0; i < m_numMaterials; ++i ) // loop through each subset
{
    d3ddev->SetMaterial(&m_materials[i]); // set the material for the subset
    if(m_textures[i] != NULL)
    {
        d3ddev->SetTexture( 0, m_textures[i] );
    }
    d3ddev->DrawIndexedPrimitive(
        D3DPT_TRIANGLELIST,            // Type
        0,                             // BaseVertexIndex
        m_attributes[i].VertexStart,   // MinIndex
        m_attributes[i].VertexCount,   // NumVertices
        m_attributes[i].FaceStart * 3, // StartIndex
        m_attributes[i].FaceCount      // PrimitiveCount
    );
}

m_effect->EndPass();
m_effect->End();

d3ddev->SetStreamSourceFreq(0,1);
d3ddev->SetStreamSourceFreq(1,1);
This code works for the first material only. When I say the first, I mean the one at index 0, because if I start my loop with the second material, it is not rendered. However, by debugging the vertex buffer in PIX, I can see all my materials being processed properly. So something happens after the vertex shader.
Another weird issue: all my materials are rendered if I set the stream source containing the instance data to a vertex size of zero.
So Instead of this:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, D3DXGetDeclVertexSize( m_instanceElements, 1 ) );
I replace it by:
d3ddev->SetStreamSource(1, m_instanceBuffer, 0, 0 );
But of course, with this code, all my instances are rendered at the same position since I reuse the same instance data over and over again.
One last point: everything works fine if I create my device with D3DCREATE_SOFTWARE_VERTEXPROCESSING. Only hardware vertex processing has the issue, but unfortunately DirectX does not report any problem in debug mode.
See the Shader Model 3 docs
If you are implementing shaders in hardware, you may not use vs_3_0 or ps_3_0 with any other shader versions, and you may not use either shader type with the fixed function pipeline. These changes make it possible to simplify drivers and the runtime. The only exception is that software-only vs_3_0 shaders may be used with any pixel shader version.
I had the same problem, and in my case it was the memory pool of the instancing mesh. I originally had this mesh in system memory (D3DPOOL_SYSTEMMEM) but the instanced mesh in D3DPOOL_DEFAULT. When I changed the instancing mesh to sit in default memory, everything worked as desired.
Hope it helps.

How do I convert a WPF size to physical pixels?

What's the best way to convert a WPF (resolution-independent) width and height to physical screen pixels?
I'm showing WPF content in a WinForms Form (via ElementHost) and trying to work out some sizing logic. I've got it working fine when the OS is running at the default 96 dpi. But it won't work when the OS is set to 120 dpi or some other resolution, because then a WPF element that reports its Width as 96 will actually be 120 pixels wide as far as WinForms is concerned.
I couldn't find any "pixels per inch" settings on System.Windows.SystemParameters. I'm sure I could use the WinForms equivalent (System.Windows.Forms.SystemInformation), but is there a better way to do this (read: a way using WPF APIs, rather than using WinForms APIs and manually doing the math)? What's the "best way" to convert WPF "pixels" to real screen pixels?
EDIT: I'm also looking to do this before the WPF control is shown on the screen. It looks like Visual.PointToScreen could be made to give me the right answer, but I can't use it, because the control isn't parented yet and I get InvalidOperationException "This Visual is not connected to a PresentationSource".
Transforming a known size to device pixels
If your visual element is already attached to a PresentationSource (for example, it is part of a window that is visible on screen), the transform is found this way:
var source = PresentationSource.FromVisual(element);
Matrix transformToDevice = source.CompositionTarget.TransformToDevice;
If not, use HwndSource to create a temporary hWnd:
Matrix transformToDevice;
using (var source = new HwndSource(new HwndSourceParameters()))
    transformToDevice = source.CompositionTarget.TransformToDevice;
Note that this is less efficient than constructing using a hWnd of IntPtr.Zero but I consider it more reliable because the hWnd created by HwndSource will be attached to the same display device as an actual newly-created Window would. That way, if different display devices have different DPIs you are sure to get the right DPI value.
Once you have the transform, you can convert any size from a WPF size to a pixel size:
var pixelSize = (Size)transformToDevice.Transform((Vector)wpfSize);
Converting the pixel size to integers
If you want to convert the pixel size to integers, you can simply do:
int pixelWidth = (int)pixelSize.Width;
int pixelHeight = (int)pixelSize.Height;
but a more robust solution would be the one used by ElementHost:
int pixelWidth = (int)Math.Max(int.MinValue, Math.Min(int.MaxValue, pixelSize.Width));
int pixelHeight = (int)Math.Max(int.MinValue, Math.Min(int.MaxValue, pixelSize.Height));
Getting the desired size of a UIElement
To get the desired size of a UIElement you need to make sure it is measured. In some circumstances it will already be measured, either because:
You measured it already
You measured one of its ancestors, or
It is part of a PresentationSource (eg it is in a visible Window) and you are executing below DispatcherPriority.Render so you know measurement has already happened automatically.
If your visual element has not been measured yet, you should call Measure on the control or one of its ancestors as appropriate, passing in the available size (or new Size(double.PositiveInfinity, double.PositiveInfinity) if you want to size to content):
element.Measure(availableSize);
Once the measuring is done, all that is necessary is to use the matrix to transform the DesiredSize:
var pixelSize = (Size)transformToDevice.Transform((Vector)element.DesiredSize);
Putting it all together
Here is a simple method that shows how to get the pixel size of an element:
public Size GetElementPixelSize(UIElement element)
{
    Matrix transformToDevice;
    var source = PresentationSource.FromVisual(element);
    if (source != null)
        transformToDevice = source.CompositionTarget.TransformToDevice;
    else
        using (var hwndSource = new HwndSource(new HwndSourceParameters()))
            transformToDevice = hwndSource.CompositionTarget.TransformToDevice;

    if (element.DesiredSize == new Size())
        element.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));

    return (Size)transformToDevice.Transform((Vector)element.DesiredSize);
}
Note that in this code I call Measure only if no DesiredSize is present. This provides a convenient method to do everything but has several deficiencies:
It may be that the element's parent would have passed in a smaller availableSize
It is inefficient if the actual DesiredSize is zero (it is remeasured repeatedly)
It may mask bugs in a way that causes the application to fail due to unexpected timing (e.g. the code being called at or above DispatcherPriority.Render)
Because of these reasons, I would be inclined to omit the Measure call in GetElementPixelSize and just let the client do it.
Simple proportion between Screen.WorkingArea and SystemParameters.WorkArea:
private double PointsToPixels(double wpfPoints, LengthDirection direction)
{
    if (direction == LengthDirection.Horizontal)
    {
        return wpfPoints * Screen.PrimaryScreen.WorkingArea.Width / SystemParameters.WorkArea.Width;
    }
    else
    {
        return wpfPoints * Screen.PrimaryScreen.WorkingArea.Height / SystemParameters.WorkArea.Height;
    }
}

private double PixelsToPoints(int pixels, LengthDirection direction)
{
    if (direction == LengthDirection.Horizontal)
    {
        return pixels * SystemParameters.WorkArea.Width / Screen.PrimaryScreen.WorkingArea.Width;
    }
    else
    {
        return pixels * SystemParameters.WorkArea.Height / Screen.PrimaryScreen.WorkingArea.Height;
    }
}

public enum LengthDirection
{
    Vertical,  // |
    Horizontal // ——
}
This works fine with multiple monitors as well.
I found a way to do it, but I don't like it much:
using (var graphics = Graphics.FromHwnd(IntPtr.Zero))
{
    var pixelWidth = (int)(element.DesiredSize.Width * graphics.DpiX / 96.0);
    var pixelHeight = (int)(element.DesiredSize.Height * graphics.DpiY / 96.0);
    // ...
}
I don't like it because (a) it requires a reference to System.Drawing, rather than using WPF APIs; and (b) I have to do the math myself, which means I'm duplicating WPF's implementation details. In .NET 3.5, I have to truncate the result of the calculation to match what ElementHost does with AutoSize=true, but I don't know whether this will still be accurate in future versions of .NET.
This does seem to work, so I'm posting it in case it helps others. But if anyone has a better answer, please, post away.
Just did a quick lookup in the Object Browser and found something quite interesting, you might want to check it out.
System.Windows.Forms.AutoScaleMode has a value called Dpi. Here are the docs; it might be what you are looking for:
public const System.Windows.Forms.AutoScaleMode Dpi = 2
Member of System.Windows.Forms.AutoScaleMode
Summary: Controls scale relative to the display resolution. Common resolutions are 96 and 120 DPI.
Apply that to your form, it should do the trick.
{enjoy}
