I have a program where I draw a bunch of custom shapes and objects on a canvas that sits on top of an image.
Image sizes vary hugely, and the images are stretched proportionally to fit the view.
When I draw the shapes, I also create some editing elements (frames and handles etc) for them so they can be scaled/rotated and the like.
The problem is that, due to the differing dimensions/DPI of the images, these editing elements can be either massive or tiny depending on the image.
I have tried different methods like scaling the editing elements based on a scale factor calculated from the image sizes, but I'm not terribly happy with the result.
Can anyone suggest a good way to draw these editor elements at a consistent size regardless of the size of the image beneath them, so that a 1 px (or 1-unit) line appears exactly the same on a 100 x 100 image as on a 10000 x 10000 one?
I dabbled with adorners hoping they might be the solution, but they scale exactly the same as my custom shapes.
Any suggestions would be awesome, thank you!
A designer provided the attached two DAE files created in Cinema 4D.
Both assets are of comparable size inside of Cinema 4D. Both DAE files were produced with the same export process.
Importing the DAE files into a SceneKit scene, however, produces different results.
Chango.dae imports at a "normal" size with a bounding box of ~3x4x3.
Tiki.dae imports at a huge size with a bounding box of ~155x325x140. Its dimensions inside C4D are ~122x283x142.
Questions:
1) How do you make sure assets "fit" into a SceneKit scene? Are you supposed to scale down assets with the "scale" property of the SCNNode, or are you supposed to ask the designer to make the asset a certain size? In SpriteKit and UIKit, this is straightforward: the asset size correlates directly to its screen size (e.g., a 20x20 icon takes about that much screen space, depending on resolution). What's the analog for SceneKit, though? If you want an asset to fit into a 1x1x1 SCNNode, what size do you ask the designer to make the asset?
2) If your asset is too large for a scene, how do you scale it down? In UIKit, for instance, you can scale an image to fit a UIView with something like ScaleAspectFit. There doesn't seem to be an analog for a SCNNode. Using the SCNNode's scale property changes the appearance, but doesn't change the asset's bounding box. And even changing the appearance isn't precise. For instance, with Tiki.dae, the original height of the asset (as shown by the bounding box) is 324.36. If you set the Y-scale to 0.01, however, the height doesn't become ~3.24. It becomes smaller than 3, which you can prove by fitting it comfortably within a sphere of height 3 (radius of 1.5).
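As a sanity check on the numbers in the question, the uniform scale needed to fit an asset into a target box can be computed from its bounding box. A minimal sketch in plain Python (the bounding-box values are taken from the question; `fit_scale` is a hypothetical helper, not a SceneKit API):

```python
def fit_scale(bbox_size, target=1.0):
    """Uniform scale factor that fits a (w, h, d) bounding box
    inside a cube of side `target`."""
    return target / max(bbox_size)

# Tiki.dae's imported bounding box from the question: ~155 x 325 x 140
tiki = (155.0, 325.0, 140.0)
s = fit_scale(tiki, target=1.0)

# Applying the same factor on every axis preserves proportions
scaled = tuple(round(d * s, 3) for d in tiki)
print(scaled)  # the tallest dimension becomes 1.0
```

In SceneKit terms, applying this one factor to all three components of the node's scale shrinks the asset to fit without distorting it.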
If you open the Collada files you'll find that in one case distances are expressed in meters:
Chango.dae
<unit name="meter"/>
and in the other case they are expressed in centimeters:
Tiki.dae
<unit meter="0.01" name="centimeter"/>
So a value of 1 means 1m in one file and 1cm in the other.
This is an asset issue that you can probably fix in Cinema 4D, or by manually editing the Collada file. You can also use the convertUnitsToMeters option to convert units at load time.
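To illustrate the unit math, here is a small sketch in plain Python, with the `<unit>` lines copied from the question (the actual `convertUnitsToMeters` conversion happens on the SceneKit side at load time; this only shows what the attribute means):

```python
import xml.etree.ElementTree as ET

def metres_per_unit(unit_xml):
    """Read a Collada <unit> element; the `meter` attribute says how
    many metres one file unit represents (it defaults to 1.0)."""
    elem = ET.fromstring(unit_xml)
    return float(elem.get("meter", "1.0"))

chango = metres_per_unit('<unit name="meter"/>')                  # 1 unit = 1 m
tiki = metres_per_unit('<unit meter="0.01" name="centimeter"/>')  # 1 unit = 1 cm

# Tiki's height of 324.36 file units is really 324.36 cm:
height_m = round(324.36 * tiki, 4)
print(chango, tiki, height_m)  # 1.0 0.01 3.2436
```

That 3.24 m figure matches the asker's observation that the scaled model fits inside a sphere of height 3 only approximately.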
My aim is to have 3 images shrink, grow, and move along a horizontal axis depending on selection. Using Auto Layout seems to make the images jump about as they try to fulfil the Top space to superview / Bottom space to superview constraints.
So, to combat this, I have put each image inside its own UIView. Each UIView is set to the maximum size its image can grow to and is centred on the horizontal axis. Now all each image has to do is stay centred inside its corresponding UIView. This has fixed my problem: the UIViews perform the horizontal translation, while the images shrink/grow inside them while remaining centred. My question is: is this the correct way to do this? It feels very long-winded, as if I am misusing Auto Layout. I have to perform similar tasks with more images, so any advice is welcome! Thanks.
I've just written a little essay on this topic here:
How do I adjust the anchor point of a CALayer, when Auto Layout is being used?
Basically, Auto Layout does not play at all well with any kind of view transform. The easiest solution is to take your view out of Auto Layout's control altogether; alternatively, you can give it only constraints that won't fight back against the particular kind of transform you intend to apply. That second solution sounds like just the sort of thing you're doing.
So I realize that I am venturing outside of the intended use of a Canvas here and will likely have to come up with a more manual solution. However, not being overly experienced in WPF I was hoping that there may be some solution which would allow me to continue using a Canvas control and the features it gives you for free.
The issue revolves around a Canvas which is used to zoom in and out of an image and some number of child controls that belong to the Canvas. These child controls are to be placed at various positions on the image and, as such, the Canvas works nicely in that it handles all of the layout/positioning for me as I zoom in or out.
However, one drawback is that the Canvas scales these child controls up as I zoom into the image, causing them to become too large to be usable in practice. What I am looking for is a solution that allows me to zoom into an image that belongs to a canvas without also zooming up the size of the child controls, preferably handling the layout for me.
I have tried modifying the width and height of these child controls as the zoom factor increases or decreases, but there is a slight lag time and it all looks a bit 'jerky'.
If it comes down to it I will simply do all of the zooming/panning/layout myself, but I thought I would ask first just to make sure that I am not missing anything that would allow me to tell the Canvas to not scale the size of certain controls. Thanks in advance.
You can bind the children's RenderTransform to the inverse of the Canvas' transform, see my answer to this similar question on rotation.
This is more of a thought than an answer, but what if you set a transform on the element you did not want scaled that is the opposite of the canvas's own transform? For example, if the canvas has a scale transform of 2.0, give the element a scale transform of 0.5. You could probably accomplish this by binding the transform values together using a value converter.
You'll probably want to make sure the element has a render transform origin of 0.5,0.5 so that it scales from the center.
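The arithmetic behind the inverse-transform idea can be sketched in plain Python (the real binding and value converter would be WPF/C#; this only verifies that the two scales cancel):

```python
def on_screen_size(element_size, canvas_zoom):
    """Size at which an element renders after the Canvas'
    scale transform is applied."""
    return element_size * canvas_zoom

def inverse_scale(canvas_zoom):
    """What the value converter would return: the reciprocal of the
    Canvas' zoom, applied as the element's own RenderTransform."""
    return 1.0 / canvas_zoom

element = 24.0  # nominal size in canvas units
for zoom in (0.5, 1.0, 2.0, 4.0):
    compensated = element * inverse_scale(zoom)
    print(on_screen_size(compensated, zoom))  # always 24.0
```

Because the product `zoom * (1/zoom)` is always 1, the element's on-screen size stays constant no matter how far the canvas is zoomed.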
In my Silverlight application I display text (TextBlocks on a canvas) as well as rectangles and lines (again, shapes drawn on the canvas) over Deep Zoom images. I handle zoom in/out, pan, and tilt. What is not really cool, in my opinion, is the way my vector objects look at different zoom factors: of course, they become bigger or smaller.
Would you have any suggestions for how to keep some of the objects' dimensions looking the same at any zoom? Let's say a line's StrokeThickness should always be 10 screen pixels, or a text block should always be 100 by 300 screen pixels.
Thanks,
Val
It depends on how/where your objects are defined that you want to remain at 1:1 scale.
The two options I can think of are:
Render those objects in a canvas above the deep zoom (this means you need to work out the positions again yourself).
Apply reciprocal scaling to those objects: work out what scale the item is going to be displayed at and apply a 1/x scaling factor to it. That way the deep zoom shrinks an object that you have scaled up to compensate, and the two cancel out.
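Option 2 applied to the asker's StrokeThickness example, as a sketch in plain Python (in Silverlight the zoom factor would come from the Deep Zoom viewport, which is an assumption here):

```python
def local_stroke_thickness(desired_screen_px, zoom):
    """Stroke thickness to set in image coordinates so the line always
    draws `desired_screen_px` pixels wide on screen: the deep zoom
    multiplies sizes by `zoom`, so pre-dividing by `zoom` cancels it."""
    return desired_screen_px / zoom

# A 10-screen-pixel line at three different zoom factors:
for zoom in (0.25, 1.0, 8.0):
    local = local_stroke_thickness(10.0, zoom)
    print(local * zoom)  # rendered width: always 10.0
```

The same division works for the text block: set its local width/height to the desired screen size divided by the current zoom, and recompute whenever the zoom changes.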
Hope this helps.
I am a bit new to imaging and want to understand the following:
What is the bounding box of a dataset, and why is it needed? Does it represent a measurement in the real world, or is it just for the computer screen where the dataset is displayed? How is it related to the image size specified in pixels?
What do a WMTS layer's zoom levels and matrix sets mean? I understand that WMTS works by fetching tiles of the dataset. I also see that a GetCapabilities request for a specific WMTS dataset returns matrix sets in the XML, which I don't understand.
What do the matrix sets and zoom levels signify, and how can I understand them as a layman?
I have tried googling a bit, but the articles seem to assume technical background that I am still trying to gather.
A bounding box is the (imaginary) rectangle that you can draw around a dataset (or feature) that touches its maximums and minimums in both the X and Y directions. It is measured in the same units as the geometry. It relates to the image size in pixels through the resolution (or scale), which is bbox.width/image.width (or its inverse), in units of metres per pixel or pixels per metre (or degrees, or feet).
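To make the bbox/pixel relationship concrete, here is a small sketch in Python (the numbers are made up for illustration):

```python
def resolution(bbox_width, image_width_px):
    """Ground distance covered by one pixel, in map units per pixel
    (e.g. metres/pixel if the bounding box is in metres)."""
    return bbox_width / image_width_px

def scale_px_per_unit(bbox_width, image_width_px):
    """The inverse: how many pixels represent one map unit."""
    return image_width_px / bbox_width

# A bounding box 1000 m wide rendered into a 500 px wide image:
print(resolution(1000.0, 500))        # 2.0 metres per pixel
print(scale_px_per_unit(1000.0, 500)) # 0.5 pixels per metre
```

So the same dataset rendered into a wider image simply gets a finer resolution; the bounding box itself never changes.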
A WMTS layer is a set of pre-rendered tiles that have been produced at a fixed set of scales over a fixed area. Those scales are described by the matrix sets of the WMTS layer; the zoom level is how far down that set of scales you have traversed, with 0 being the top and an arbitrary number (usually between 15 and 20 for global datasets) being the lowest (most detailed) level.
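The usual arrangement (a common convention in tile pyramids, though a given matrix set is free to define its scales differently) is that each zoom level doubles the detail of the one above it:

```python
def level_resolution(base_resolution, zoom_level):
    """Resolution at `zoom_level`, assuming the common power-of-two
    pyramid where each level halves the map units per pixel."""
    return base_resolution / (2 ** zoom_level)

# e.g. a top-level tile matrix at 1024 m/pixel:
for z in range(4):
    print(z, level_resolution(1024.0, z))
# 0 1024.0
# 1 512.0
# 2 256.0
# 3 128.0
```

Each entry in the matrix set of a GetCapabilities response is essentially one row of this table: a scale plus the number of tile rows and columns needed to cover the layer's extent at that scale.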
See 2 above: you should not really need to understand them in detail, as your client library will handle all of that for you.