Show two polygons (wrapped around the world) at low zoom, when more than one complete earth is showing - Silverlight

How can I wrap shapes around the world, so that a shape is shown more than once at low zoom?
Example:
I draw a polygon over USA.
I zoom out so that I can see two USAs.
I only see one polygon :( I want to see two!

The map data effectively has 2 USAs. That implies you actually want 2 polygons, one of which will be hidden most of the time.
Might as well cater for the worst case and treat a single USA as the exception rather than the rule.
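A minimal sketch of that duplication idea (framework-neutral Swift for illustration; the GeoPoint type is hypothetical, and in the Silverlight map control each shifted copy would become its own polygon):

```swift
struct GeoPoint {
    var latitude: Double
    var longitude: Double
}

/// Produces shifted copies of a polygon, one per repeated earth, so a
/// wrapped world at low zoom shows the shape on each visible copy.
func wrappedCopies(of polygon: [GeoPoint], worlds: ClosedRange<Int> = -1...1) -> [[GeoPoint]] {
    return worlds.map { world in
        polygon.map { point in
            GeoPoint(latitude: point.latitude,
                     longitude: point.longitude + Double(world) * 360)
        }
    }
}
```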

You can't.
As others have already pointed out, the repetition of certain features on either side of the map at far zoom levels is an unwanted but inevitable side effect of a projected surface that enables continuous scrolling. This has only been an issue in recent versions of the Bing Maps control; the earlier v6.x control prevented the map from panning across the 180th meridian.
I cannot think of any reason why you'd ever want to show two USAs, let alone position data on each one. So the solution is to modify either the zoom level at which the map is displayed, or the size of the application window in which it is displayed, so that this situation doesn't occur.


How to find out if the surface detected by ARKit is no longer available?

I am working on an application with the ARKit and SceneKit frameworks. In my application I have enabled surface detection (I followed the placing-objects sample provided by Apple). How do I find out whether a detected surface is no longer available? That is, initially I only allow the user to place a 3D object once a surface has been detected in the ARSession.
But if the user moves rapidly or points the camera elsewhere, the detected surface area can be lost. In that case, if the user tries to place another object, I shouldn't allow it until he scans the floor again and the surface is detected once more.
Is there any delegate available to let us know that a detected surface is no longer available?
There are delegate functions you can use. The delegate is ARSCNViewDelegate.
It has a function, renderer(_:didRemove:for:), that fires when an ARAnchor has been removed. You can use this function to perform some operation when a surface gets removed.
See the ARSCNViewDelegate documentation.
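A minimal sketch of that callback (the canPlaceObjects flag is illustrative app state, not part of the API):

```swift
import ARKit
import SceneKit

class SurfaceDelegate: NSObject, ARSCNViewDelegate {
    var canPlaceObjects = false  // illustrative app state

    // Fires when an anchor (e.g. a detected plane) is removed from the session.
    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        canPlaceObjects = false  // the surface backing this node is gone
    }
}
```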
There are two ways to “lose” a surface, so there’s more than one approach to dealing with such a problem.
As noted in the other answer, there’s an ARSCNViewDelegate method that ARKit calls when an anchor is removed from the AR session. However, ARKit doesn’t remove plane anchors during a running session — once it’s detected a plane, it assumes the plane is always there. So that method gets called only if:
You remove the plane anchor directly by passing it to session.remove(anchor:), or
You reset the session by running it again with the .removeExistingAnchors option.
I’m not sure the former is a good idea, but the latter is important to handle, so you probably want your delegate to handle it well.
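For reference, those two cases might look like this (a sketch; resetDetection is a hypothetical helper, and the configuration assumes horizontal plane detection):

```swift
import ARKit

func resetDetection(in sceneView: ARSCNView, removing planeAnchor: ARAnchor? = nil) {
    if let planeAnchor = planeAnchor {
        // Case 1: remove one specific anchor; renderer(_:didRemove:for:) fires once.
        sceneView.session.remove(anchor: planeAnchor)
        return
    }
    // Case 2: re-run the session; renderer(_:didRemove:for:) then fires for
    // every existing anchor, including all detected planes.
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```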
You can also “lose” a surface by having it pass out of view — for example, ARKit detects a table, and then the user turns around so the camera isn’t pointed at or near the table anymore.
ARKit itself doesn’t offer you any help for dealing with this problem. It gives you all the info you need to do the math yourself, though. You get the plane anchor’s position, orientation, and size, so you can calculate its four corner points. And you get the camera’s projection matrix, so you can check for whether any point is in the viewing frustum.
Since you’re already using SceneKit, though, there are also ways to get SceneKit to do the math for you... Working backwards:
SceneKit gives you an isNode(_:insideFrustumOf:) test, so if you have a SCNNode whose bounding box matches the extent of your plane anchor, you can pass that along with the camera (view.pointOfView) to find out if the node is visible.
To get a node whose bounding box matches a plane anchor, implement the ARSCNViewDelegate didAdd and didUpdate callbacks to create/update an SCNPlane whose position and dimensions match the ARPlaneAnchor’s center and extent. (Don’t forget to flip the plane sideways, since SCNPlane is vertically oriented by default.)
If you don’t want that plane visible in the AR view, set its materials to be transparent.
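Putting those steps together, a minimal sketch (node bookkeeping is simplified, and the visibility check at the end assumes sceneView is your ARSCNView):

```swift
import ARKit
import SceneKit

class PlaneTracker: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        // An SCNPlane sized and positioned to match the anchor's estimate.
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.transparency = 0  // invisible in the AR view
        let planeNode = SCNNode(geometry: plane)
        planeNode.simdPosition = planeAnchor.center
        planeNode.eulerAngles.x = -.pi / 2  // SCNPlane is vertical by default
        node.addChildNode(planeNode)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let planeNode = node.childNodes.first,
              let plane = planeNode.geometry as? SCNPlane else { return }
        // Track ARKit's refined estimate of the plane's center and extent.
        plane.width = CGFloat(planeAnchor.extent.x)
        plane.height = CGFloat(planeAnchor.extent.z)
        planeNode.simdPosition = planeAnchor.center
    }
}

// Then, to check whether a detected surface is still in view:
// let visible = sceneView.isNode(planeNode, insideFrustumOf: sceneView.pointOfView!)
```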

How to measure a horizontal plane surface (visible in the camera) using ARKit/SceneKit before placing objects?

I want to measure the horizontal plane surface to find out whether it fits the object I am going to place. For example, if I am going to place a cot 3D model (with a fixed size) in a room using iOS 11 ARKit:
First, I want to detect whether the room surface is sufficient to place my 3D model by measuring the surface area (width, height, etc.).
Second, if the user tries to place it without sufficient space, I should not allow him to place the cot and should show him an error message.
I created a sample POC by following https://developer.apple.com/sample-code/wwdc/2017/PlacingObjects.zip, using which I am able to detect the horizontal plane and place the cot. But the issue is that whatever the surface may be, the user is able to place the cot, which shouldn't be allowed.
I saw a couple of demos in which they say we can measure the size of a room or a horizontal plane (https://www.curbed.com/2017/6/29/15894556/ar-measure-app-augmented-reality-ruler-measuring-tape-ios).
I am using ARKit with SceneKit to achieve this, and I am new to AR and SceneKit. I need to know if this is doable, and if so, how to achieve it.
You could estimate the size of a detected plane by inspecting its dimensions. But you shouldn't.
ARKit has plane estimation, not scene reconstruction. That is, it'll tell you there's a flat surface at (some point) and that said surface probably extends at least (some distance) from that point. It doesn't know exactly how big the surface is (it's even refining its estimate over time), and it doesn't tell you where there are interruptions in that continuous surface, much less the size and shape of such interruptions.
In fact, if you're looking at the floor and moving around, and you see one patch of floor, then another patch of floor on the other side of a solid wall from the first, ARKit will happily recognize that those two patches are coplanar and merge them into the same anchor. At the same time, neither detected patch may cover the entire extent of the floor around it.
If you attempt to restrict where the user can place virtual objects in AR based on plane estimates, you're likely to frustrate them with two kinds of error: you'll have areas where it looks to the user like they can place something but that don't allow it, and you'll have areas that look like they should be off-limits that do allow placing things.
Instead, design your experience to involve the user in deciding where the sensible places for content are. See this demo for example — ARKit detects the level of the floor (not its boundaries), then uses that to show UI indicating the size/shape of objects to be placed. It's up to the user to make sure there's enough room for the couch, etc.
As for the technical how-to on what you probably shouldn't do: The docs for ARPlaneAnchor.extent say that the x and z coordinates of that vector are the width and length of the estimated plane. And all units in ARKit are meters. (Which is width and which is length? It's a matter of perspective. And of the rotation encoded in the anchor's transform.)
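So, for whatever the estimate is worth, a fit check might look like this (planeCanFit is a hypothetical helper; all sizes in metres):

```swift
import ARKit

/// Rough check of whether a model's footprint fits on an estimated plane.
func planeCanFit(_ plane: ARPlaneAnchor, footprintWidth: Float, footprintLength: Float) -> Bool {
    let planeWidth = plane.extent.x
    let planeLength = plane.extent.z
    // Which dimension is "width" and which is "length" depends on the
    // anchor's rotation, so accept either orientation of the footprint.
    return (footprintWidth <= planeWidth && footprintLength <= planeLength)
        || (footprintWidth <= planeLength && footprintLength <= planeWidth)
}
```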

Efficiently display multiple markers on WPF image

I need to display many markers on a WPF image. The markers can be lines, circles, squares, etc., and there can be several hundred of them.
Both the image source and the marker data are updated every few seconds. The markers are associated with specific pixels on the image, and their size should be absolute in relation to the screen (i.e. when I move the image the markers should move along with it, but if I zoom in, they should take up the same space on the screen as before).
Currently, I've implemented this using the AdornerLayer. This solution has several problems, but the most significant one is that the UI doesn't fare well under the load, even for just 120 such markers.
I wanted to ask what would be the best way to go about implementing this? I thought of two solutions:
Inherit from Canvas and make sure it is invalidated not for every added marker but for a range of markers at once
Create a control that holds an image and override its OnRender to draw all the markers
I would appreciate some pointers from someone with experience with a similar problem.
Your use case looks quite specialized, so a specialized solution seems in order. I'd try a variant of your second option — extend Image, overriding its OnRender method.

WPF tab switch / render takes too much time

I have a WPF application with many tabs.
In one tab, I make a very complex vector drawing consisting of thousands of drawing visuals (this represents a machine, and all elements need to be interactable).
It takes 3-4 seconds to draw this the first time. After the first draw it should be done.
The problem is that if I switch to another tab and come back, it takes at least 2-3 seconds to show the tab page with the drawing again. Since there is no redraw, why should it take so much time?
If the component is not going to change, you could call Freeze() on it to mark it as done. Without trying it out I don't know if that would help, but you could give it a shot.
Not all objects are Freezable. Check out the MSDN documentation for more info:
http://msdn.microsoft.com/en-us/library/ms750509.aspx
Another thing you could try would be rendering the vector art to a bitmap, and displaying that. Maybe it makes you feel icky to lose the vector precision, but if you know it's not going to change and it will look the same, what's the harm? (If you support printing or something that will require a hi-res version, you could always switch back for that operation.) For info on how to convert a UIElement to a bitmap, check out:
http://msdn.microsoft.com/en-us/library/system.windows.media.imaging.rendertargetbitmap.aspx
Another possible solution: You don't really explain what kind of interaction you are doing with the elements, but if all you want to do is zoom and pan, a RenderTransform may be good enough (which is more efficient than a LayoutTransform and/or moving all the elements individually). I haven't played around with combining Freeze() and a RenderTransform, but you may be able to get the desired zooming while reducing the amount of layout WPF has to do.

Evenly distributed ScatterViewItems that don't overlap

I have an app that creates a variable number of ScatterViewItems based on which tagged object is placed on the Surface table.
The ScatterViewItems are added programmatically to the ScatterView based on info looked up in a DB.
The ScatterView does a good job of displaying this info.
However, I would like them to be evenly distributed across the table, without any items overlapping.
Any ideas how to do that?
Sounds like you need collision detection.
There are two parts to this problem: detection and resolution. Detection is seeing if any item's bounds intersect with any other item's bounds. If the items are rectangular or circular this is pretty straightforward. It can get complex if you're dealing with other geometries.
Resolution is what to do once you've detected a collision. Google will help you find the myriad algorithms for this. Here are a couple of links to Stack Overflow discussions: WPF: Collision Detection with Rotated Squares, Applying Coefficient of Restitution in a collision resolution method, Best way to detect collision between sprites?.
You can implement collision so that items bounce off of each other as they scatter. Depending on the number of items, this might cause so much collision that the items don't scatter well. If this happens, just run the collision detection once items have stopped moving.
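For the detection half, the math for simple shapes is tiny. A framework-neutral sketch (Swift here for illustration; the same checks port directly to WPF's Rect and Point types):

```swift
import CoreGraphics

// Rectangles: axis-aligned bounds collide iff they intersect.
func collides(_ a: CGRect, _ b: CGRect) -> Bool {
    return a.intersects(b)
}

// Circles: collide iff the distance between centers is at most the sum
// of the radii (compared squared, to avoid taking a square root).
func collides(center a: CGPoint, radius ra: CGFloat,
              center b: CGPoint, radius rb: CGFloat) -> Bool {
    let dx = a.x - b.x
    let dy = a.y - b.y
    return dx * dx + dy * dy <= (ra + rb) * (ra + rb)
}
```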
UniformGrid?
You can also create your own panel by inheriting from Panel.
You will find some uber-valuable info in the Dr. WPF ItemsControl How-To series : http://drwpf.com/blog/itemscontrol-a-to-z/
That's a must-read, period.
ScatterViewItem has properties Center and Orientation which you can use to programmatically position items. If you know the size of each item you should be able to use these properties to position them in whichever way is ideal for your situation. By hooking into the Loaded event of each and checking ActualWidth/ActualHeight, you can get the dimensions. If you can use a fixed initial size for all of your SVIs, that's even easier.
You could lay them out by calculating a simple grid (plus some randomness to make it look more natural), or you may be looking for what's called a "force directed layout", which gives each object a repellent force relative to its size. After a while the elements will naturally be evenly spaced from one another, though they may still overlap if they run out of room. I haven't seen a WPF example of this, but see flare.prefused.org/demo (layout > force) for what I mean in Flash.
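A minimal sketch of one relaxation pass of that idea (Swift here for illustration, treating each item as a circle; run it repeatedly until items settle):

```swift
import CoreGraphics

struct Item {
    var center: CGPoint
    var radius: CGFloat
}

/// One pass of a naive force-directed layout: every overlapping pair is
/// pushed apart along the line between their centers.
func relax(_ items: inout [Item], strength: CGFloat = 0.5) {
    for i in items.indices {
        for j in items.indices where j > i {
            let dx = items[j].center.x - items[i].center.x
            let dy = items[j].center.y - items[i].center.y
            let distance = max((dx * dx + dy * dy).squareRoot(), 0.001)
            let overlap = items[i].radius + items[j].radius - distance
            guard overlap > 0 else { continue }
            // Move each item half the overlap apart, scaled by strength.
            let push = strength * overlap / 2
            let ux = dx / distance, uy = dy / distance
            items[i].center.x -= ux * push
            items[i].center.y -= uy * push
            items[j].center.x += ux * push
            items[j].center.y += uy * push
        }
    }
}
```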
