Is it possible to drag-snap a point of a shape being edited by the drawing manager to another shape's point location? - azure-maps

I'd like a user to be able to draw a polygon with the Azure Maps Drawing Manager, drag one of its points close to a point of another polygon, and have the dragged point snap to that point's location so that the two points end up identical.
I know there is a snapping capability with a grid, but I don't see a sample for this behaviour.
The ultimate goal is to prevent polygons from overlapping, assuming the shared edge of adjoining shapes is excluded when determining which polygon a point lies within.
Of course, I can let the user draw manually and get as close as possible, and then run a check to confirm that no polygons overlap, but I would additionally like a nice snap-to-point experience if possible.

You can find hundreds of samples for Azure Maps here: https://samples.azuremaps.com/
As you noted, the snapping grid is likely the best place to start in your scenario. Here are some samples that demonstrate it:
https://samples.azuremaps.com/?sample=use-a-snapping-grid
https://samples.azuremaps.com/?sample=snap-grid-options
The following sample is an example of a custom snapping scenario where the routing service is used to snap a drawn line to a route (the route part can be swapped out for custom logic): https://samples.azuremaps.com/?sample=snap-drawn-line-to-roads
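For point-to-point snapping specifically, the Web SDK doesn't expose a built-in option, but the custom logic can be fairly small. Below is a minimal sketch, assuming the global `atlas` namespace from the map control and drawing tools scripts (as in the samples) and existing `map`, `drawingManager`, and `otherPolygonsSource` objects; `snapTolerancePx` and the helper names are illustrative, not SDK features:

```typescript
// Hypothetical tolerance: snap when a dragged vertex is within 10 screen pixels.
const snapTolerancePx = 10;

// Gather the vertices of every polygon except the one being edited.
function getSnapTargets(source: atlas.source.DataSource, editing: atlas.Shape): atlas.data.Position[] {
    const targets: atlas.data.Position[] = [];
    for (const shape of source.getShapes()) {
        if (shape !== editing && shape.getType() === 'Polygon') {
            const rings = shape.getCoordinates() as atlas.data.Position[][];
            rings.forEach(ring => targets.push(...ring));
        }
    }
    return targets;
}

// Pull each vertex of the edited polygon onto the nearest target within tolerance.
// (For simplicity this ignores keeping the ring closed; production code should
// mirror a change to the first vertex onto the last one and vice versa.)
function snapShape(map: atlas.Map, shape: atlas.Shape, targets: atlas.data.Position[]): void {
    const ring = (shape.getCoordinates() as atlas.data.Position[][])[0];
    let changed = false;

    ring.forEach((pos, i) => {
        for (const target of targets) {
            // Compare in screen space so the tolerance is zoom independent.
            const [a, b] = map.positionsToPixels([pos, target]);
            if (Math.hypot(a[0] - b[0], a[1] - b[1]) <= snapTolerancePx) {
                ring[i] = target;
                changed = true;
                break;
            }
        }
    });

    if (changed) {
        shape.setCoordinates([ring]);
    }
}

// The drawing tools raise 'drawingchanged' as coordinates are edited.
map.events.add('drawingchanged', drawingManager, (shape: atlas.Shape) => {
    snapShape(map, shape, getSnapTargets(otherPolygonsSource, shape));
});
```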

Related

Azure Maps Line Layer and Pop Ups

I am creating a web page that has an Azure Maps control on it; its purpose is to show a sort of snail trail of movement. I have the map rendering, and I am using a LineLayer with a SymbolLayer to draw a line from point to point and then put an arrow on the line to show movement.
Another requirement is that we be able to hover over the points on the map to see information about that specific point, but I can't find much online about "points" on a line.
Any idea how to access individual points in the LineString and add attributes to them in order to show a popup?
To accomplish what you are asking, you need a second data source that contains the individual points with the attribute information, and then connect that data source to a bubble or symbol layer. That way you can add an event to the layer, as sketched below.
Another, less elegant approach is to give your LineString a property holding an array of the attribute information for each point. Then, when the user hovers over the line, loop through all points in the line, calculate the distance from the mouse pointer to each point, and use the index of the closest point to look up its attributes in your array.
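A minimal sketch of the first approach, assuming the global `atlas` namespace and an existing `map` (the trail coordinates and attributes are made up for illustration):

```typescript
const lineSource = new atlas.source.DataSource();
const pointSource = new atlas.source.DataSource();
map.sources.add([lineSource, pointSource]);

// Example trail positions ([longitude, latitude]).
const positions: atlas.data.Position[] = [
    [-122.33, 47.6], [-122.32, 47.61], [-122.31, 47.615]
];

lineSource.add(new atlas.data.LineString(positions));

// One point feature per position, carrying the attributes to show on hover.
positions.forEach((pos, i) => {
    pointSource.add(new atlas.data.Feature(new atlas.data.Point(pos), {
        title: `Point ${i}` // illustrative attribute
    }));
});

map.layers.add(new atlas.layer.LineLayer(lineSource, null, { strokeColor: 'DodgerBlue' }));

const pointLayer = new atlas.layer.BubbleLayer(pointSource);
map.layers.add(pointLayer);

const popup = new atlas.Popup();

// Show the hovered point's attributes in a popup.
map.events.add('mouseover', pointLayer, e => {
    const feature = e.shapes && e.shapes[0];
    if (feature instanceof atlas.Shape) {
        popup.setOptions({
            content: `<div style="padding:10px">${feature.getProperties().title}</div>`,
            position: feature.getCoordinates() as atlas.data.Position
        });
        popup.open(map);
    }
});
map.events.add('mouseout', pointLayer, () => popup.close());
```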

How to find out if the surface detected by ARKit is no longer available?

I am working on an application with the ARKit and SceneKit frameworks. In my application I have enabled surface detection (I followed the placing-objects sample provided by Apple). How do I find out that a detected surface is no longer available? That is, I only allow the user to place a 3D object once a surface has been detected in the ARSession.
But if the user moves rapidly or points the camera elsewhere, the detected surface is lost. In that case, if the user tries to place another object, I shouldn't allow it until he scans the floor again and a surface is re-detected.
Is there any delegate available to let us know that a detected surface is no longer available?
There are delegate functions you can use; the relevant delegate is ARSCNViewDelegate.
It has a renderer(_:didRemove:for:) method that fires when an ARAnchor has been removed. You can use this method to perform some operation when a surface is removed.
See the ARSCNViewDelegate documentation for details.
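A minimal sketch of wiring that up (the plane-count bookkeeping is illustrative, not part of ARKit):

```swift
import ARKit
import SceneKit
import UIKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    // Track how many detected planes are currently in the session.
    private var detectedPlaneCount = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        detectedPlaneCount += 1
    }

    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        detectedPlaneCount -= 1
        if detectedPlaneCount == 0 {
            // No surfaces left: disable placement until a new plane is detected.
        }
    }
}
```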
There are two ways to “lose” a surface, so there’s more than one approach to dealing with such a problem.
As noted in the other answer, there’s an ARSCNViewDelegate method that ARKit calls when an anchor is removed from the AR session. However, ARKit doesn’t remove plane anchors during a running session — once it’s detected a plane, it assumes the plane is always there. So that method gets called only if:
You remove the plane anchor directly by passing it to session.remove(anchor:), or
You reset the session by running it again with the .removeExistingAnchors option.
I’m not sure the former is a good idea, but the latter is important to handle, so you probably want your delegate to handle it well.
You can also “lose” a surface by having it pass out of view — for example, ARKit detects a table, and then the user turns around so the camera isn’t pointed at or near the table anymore.
ARKit itself doesn’t offer you any help for dealing with this problem. It gives you all the info you need to do the math yourself, though. You get the plane anchor’s position, orientation, and size, so you can calculate its four corner points. And you get the camera’s projection matrix, so you can check for whether any point is in the viewing frustum.
Since you’re already using SceneKit, though, there are also ways to get SceneKit to do the math for you... Working backwards:
SceneKit gives you an isNode(_:insideFrustumOf:) test, so if you have a SCNNode whose bounding box matches the extent of your plane anchor, you can pass that along with the camera (view.pointOfView) to find out if the node is visible.
To get a node whose bounding box matches a plane anchor, implement the ARSCNViewDelegate didAdd and didUpdate callbacks to create/update an SCNPlane whose position and dimensions match the ARPlaneAnchor’s center and extent. (Don’t forget to flip the plane sideways, since SCNPlane is vertically oriented by default.)
If you don’t want that plane visible in the AR view, set its materials to be transparent.

Efficiently display multiple markers on WPF image

I need to display many markers on a WPF image. The markers can be lines, circles, squares, etc. and there can be several hundreds of them.
Both the image source and the marker data are updated every few seconds. The markers are associated with specific pixels on the image, and their size should be absolute relative to the screen (i.e. when I pan the image the markers should move along with it, but if I zoom in, they should occupy the same screen space as before).
Currently, I've implemented this using the AdornerLayer. This solution has several problems, but the most significant one is that the UI doesn't cope well with the load, even for just 120 such markers.
I wanted to ask what would be the best way to go about implementing this? I thought of two solutions:
Inherit from Canvas and make sure it is invalidated not for every added marker but for a range of markers at once
Create a control that holds an image and change its OnDraw to draw all the markers
I would appreciate some pointers from someone with experience with a similar problem.
Your use case looks quite specialized, so a specialized solution seems in order. I'd try a variant of your second option — extend Image, overriding its OnRender method.
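A rough sketch of that idea, with hypothetical `Markers` and `RefreshMarkers` members (marker shape and styling are illustrative):

```csharp
using System.Collections.Generic;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

// One control, one OnRender pass for all markers, so WPF manages a single
// visual instead of hundreds of adorner elements.
public class MarkerImage : Image
{
    // Marker positions in control coordinates (illustrative representation;
    // you might store richer marker objects instead).
    public IReadOnlyList<Point> Markers { get; set; } = new List<Point>();

    private static readonly Pen MarkerPen = CreateMarkerPen();

    private static Pen CreateMarkerPen()
    {
        var pen = new Pen(Brushes.Red, 1.5);
        pen.Freeze(); // frozen pens/brushes render noticeably faster
        return pen;
    }

    protected override void OnRender(DrawingContext dc)
    {
        base.OnRender(dc); // draw the image itself first

        // If the image is zoomed via a ScaleTransform, divide the radius by the
        // current scale here to keep markers a constant size on screen.
        const double radius = 4;
        foreach (var p in Markers)
            dc.DrawEllipse(null, MarkerPen, p, radius, radius);
    }

    // Update Markers, then call this once to trigger a single repaint.
    public void RefreshMarkers() => InvalidateVisual();
}
```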

Making WPF User Control transformations internal

I am developing my system using WPF with MVVM, and I am having trouble finding the best way to solve the following problem:
I have a screen on which many components (User Controls) are drawn at specific positions. All components on the screen are rotated, translated and scaled according to bound variables calculated by the screen's VM.
However, each of these components can have a different center of rotation, a different origin for the translation and a different scale, depending on internal variables and the screen scale.
How can these transformations be calculated internally in the User Control? I think the easiest approach would be a converter; however, since I have many different User Controls with different behaviours, I would have to create multiple converters that are very similar to each other, which would not be ideal.
Thank you very much for the help!
A UIElement has only one RenderTransformOrigin.
Some transformations allow you to set the origin for that specific transformation, but in coordinates relative to the control bounds (e.g. 125, 34), not in proportional coordinates like RenderTransformOrigin (e.g. 0.5, 0.75).
So if you can use the coordinates you're good to go.
If not, you can compose the transformations by creating a TransformGroup that first translates the control, then performs the transformation, and then translates the control back; see the sketch below.
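For example, a small sketch of that composition (the `TransformHelper` name and the values are illustrative):

```csharp
using System.Windows.Media;

static class TransformHelper
{
    // Compose "translate pivot to origin -> transform -> translate back" so the
    // given transform is applied about an arbitrary point in control coordinates.
    public static Transform AboutPoint(Transform inner, double cx, double cy)
    {
        var group = new TransformGroup();
        group.Children.Add(new TranslateTransform(-cx, -cy)); // move pivot to origin
        group.Children.Add(inner);                            // e.g. a RotateTransform or ScaleTransform
        group.Children.Add(new TranslateTransform(cx, cy));   // move back
        return group;
    }
}

// Usage (illustrative): rotate a control 30 degrees about the point (125, 34).
// myControl.RenderTransform = TransformHelper.AboutPoint(new RotateTransform(30), 125, 34);
```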
If you need more help, please post an example of what you are trying to achieve.

Show two polygons (wrap them) at low zoom, when showing more than one complete earth

How can I wrap shapes around the world, so that a shape is shown more than once at low zoom?
Example:
I draw a polygon over the USA.
I zoom out so that I can see two USAs.
I only see one polygon :( I want to see two!
The map data effectively shows 2 USAs, which implies you actually want 2 polygons, one of which will be hidden most of the time.
Might as well cater for the worst case and treat a single USA as the exception rather than the rule.
You can't.
As others have already pointed out, the fact that certain features get repeated on either side of the map at far zoom levels is an unwanted but inevitable side effect of a projected surface that enables continuous scrolling. This has only been an issue in recent versions of the Bing Maps control; the earlier v6.x control prevented the map from panning across the 180th meridian.
I cannot think of any possible reason why you'd ever want to show two USAs, let alone target data to be positioned on each one. So the solution is to modify either the zoom level at which the map is displayed, or the size of the application window in which it is being displayed so that this situation doesn't occur.
