I'm developing an AR app for iOS that lets the user place a model in the physical world, using ARKit and SceneKit.
I've looked at this Apple sample code for inspiration. In the sample project, they use tracked raycasts to position 3D models in a scene. This is close to what I want, which is what led me to assume I need to do the same to achieve the most accurate positioning.
However, when I use a tracked raycast to position my model, the model drifts around the scene a lot as ARKit updates the position of the raycast.
I get much more stable positioning when using a non-tracked raycast.
That makes me ask: what actually is the intended use case for a tracked raycast? Am I misunderstanding this API?
I've tried:
Positioning the model using an image anchor. This is very stable.
Positioning the model using a non-tracked raycast. This is about as stable as the image anchor.
Positioning the model using a tracked raycast. This drifts all over the scene.
I also understand what an AR raycast in general is for: finding the intersection of a 2D point on the screen with the 3D geometry that ARKit is tracking, as this post has already explained.
In Apple's example app you mentioned, raycasting is used to continuously update the FocusSquare. You don't really need it for placing your model. You can get a fixed (real-world) position (using the FocusSquare) to place a model at that exact location: just fetch the static position data from the FocusSquare at the moment you add your model to the scene. I hope I understood correctly what you want.
Here is my case: I have one or two points on the map. While I have only one point, it's easy; I can use any zoom level I want, centered on that point. But I don't know how to make both points appear on the screen, or how to calculate the zoom level from them. For example, when the first point is in London and the second one is in Africa, the map should be zoomed out far enough that we can see both points (but not zoomed all the way out).
You can do this by calculating the extent of your data (i.e. the bounding box your points fall in) and then using this extent to calculate an appropriate center coordinate and zoom level.
Fortunately, both these steps are solved problems and open source packages already exist for each:
Extent: https://github.com/mapbox/geojson-extent
Zoom + Center: https://github.com/mapbox/geo-viewport
I would recommend either using these packages directly, or reading through their solutions to get a sense of how to implement each step in your application.
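If you'd rather see the underlying math than pull in the packages, the two steps can be sketched directly. This is a minimal Web Mercator sketch, assuming a 256px tile size and points given as [lng, lat]; the linked packages handle edge cases this ignores (antimeridian crossing, padding, configurable tile size):

```javascript
// Step 1: extent — the bounding box of the points.
function extent(points) {
  const lngs = points.map((p) => p[0]);
  const lats = points.map((p) => p[1]);
  // [west, south, east, north]
  return [Math.min(...lngs), Math.min(...lats), Math.max(...lngs), Math.max(...lats)];
}

// Project a latitude to Mercator y in [0, 1], 0.5 at the equator.
function mercatorY(lat) {
  const s = Math.sin((lat * Math.PI) / 180);
  return 0.5 - Math.log((1 + s) / (1 - s)) / (4 * Math.PI);
}

// Step 2: center + zoom for a given pixel viewport.
function viewport([west, south, east, north], [width, height]) {
  const fracX = (east - west) / 360 || 1e-9; // share of world width
  const fracY = Math.abs(mercatorY(south) - mercatorY(north)) || 1e-9;
  // Largest integer zoom at which the extent fits both dimensions
  // (the world is 256 * 2^zoom pixels wide), clamped to 0..20.
  const fit = Math.min(
    Math.log2(width / (256 * fracX)),
    Math.log2(height / (256 * fracY))
  );
  const zoom = Math.max(0, Math.min(20, Math.floor(fit)));
  return { center: [(west + east) / 2, (south + north) / 2], zoom };
}
```

For the London/Africa example, an 800×600 viewport comes out around zoom 3, centered midway between the two points.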
⚠️ disclaimer: I currently work at Mapbox ⚠️
I have created a very simple leaflet map displaying an array of markers as a square, which I want to rotate through any angle around its center, or relative to its bottom-left vertex (which is also its origin and position[0][0]).
https://codesandbox.io/s/react-leaflet-grid-with-markers-ds9yl?file=/src/index.js
I don't wish to rotate individual markers (there are plugins for that) but the entire grid, which should maintain its shape, with all relative marker spacings remaining the same, but with the whole thing rotated through any angle. As an added complexity, each marker is rendered within a grid cell, which is just a leaflet rectangle, but which also needs to maintain its position relative to adjacent cells.
It would be a big bonus to be able to apply the transform when generating the grid, but applying the transform to the grid shown is also a great start.
Leaflet already provides path transforms but I need to transform an entire array of markers and their path representations. It looks like a geometry/maths problem but I'm hoping that leaflet already has it covered.
Any help or advice much appreciated.
Ok my friend. This was not trivial. But it was fun.
Working codesandbox
There is nothing built in to leaflet to make this happen. I am a big fan of leaflet-geometryutil for doing GIS calculations and transformations in leaflet. While leaflet-path-transform exists, I found it to be too narrow in scope for what you need. Rather, let's invent our own wheel.
The absolute simplest way to approach this is to take every coordinate that's involved, and rotate it around a given latlng. That can be done with leaflet-geometryutil's rotatePoint function. Note the documentation there is not 100% correct - it takes 4 arguments, the first being the map instance. So I had to get a ref to the map by tagging the <Map> component with a ref.
I threw a bit of UI on there so you can manually change the angle, and toggle the point of rotation from the center of the grid to the bottom left corner. That part is a bit trivial, and you can check out the code in the sandbox. The most crucial part is being able to rotate every point involved in the grid (squares and markers alike) around a rotation point. This function does that:
const rotatePoint = React.useCallback(
  (point) => {
    const { lat, lng } = L.GeometryUtil.rotatePoint(
      mapRef.current.leafletElement,
      L.latLng(point[0], point[1]),
      transformation.degrees,
      L.latLng(axis[0], axis[1])
    );
    return [lat, lng];
  },
  [mapRef, transformation, axis]
);
The axis parameter is the rotation origin we want to use. transformation.degrees is simply the number from a user input giving the number of degrees you want to rotate. I'm not sure how you plan on implementing this (UI vs programmatically), but this function enables you to rotate however you want.
When the map is created, we take your gridBottomLeft and gridSize, and calculate an initial grid, called baseGrid, and save that to state. Really this doesn't need to be a state variable, because we don't expect it to change unless gridBottomLeft or gridSize changes, in which case the component will rerender anyway. However I kept it as a state var just to keep the same logic you had. It might also make sense to keep it as a state var, because as you see, when you toggle between different rotation origins, things may not behave as you expect, and you may want to reset the baseGrid when you toggle rotation origin points.
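The answer doesn't show the grid construction itself; here is a hypothetical sketch of what a baseGrid builder could look like (the cell size and helper name are my assumptions; the square shape, with an id, corner positions, and a marker position, matches what the rotation code expects):

```javascript
// Hypothetical baseGrid builder: an n × n grid of square cells
// starting at gridBottomLeft, each cell `size` degrees on a side.
function buildBaseGrid([lat0, lng0], n, size = 0.01) {
  const squares = [];
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      const sw = [lat0 + i * size, lng0 + j * size]; // cell's SW corner
      squares.push({
        id: `${i}-${j}`,
        // 4 corners, counterclockwise from the SW corner
        positions: [
          sw,
          [sw[0] + size, sw[1]],
          [sw[0] + size, sw[1] + size],
          [sw[0], sw[1] + size]
        ],
        // marker sits at the cell's center
        position: [sw[0] + size / 2, sw[1] + size / 2]
      });
    }
  }
  return squares;
}
```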
We keep a separate state var for the current state of the grid, grid, which is a variation of the baseGrid. When the user changes the degree input, a useEffect is fired, which creates a transformation of baseGrid based on the degree number, and sets it to the grid state var:
useEffect(() => {
  const rotatedGrid = baseGrid.map((square) => {
    return {
      id: square.id,
      positions: square.positions.map((coord) => {
        return rotatePoint(coord);
      }),
      center: rotatePoint(square.position)
    };
  });
  setGrid(rotatedGrid);
}, [transformation.degrees, bottomLeft, rotatePoint, baseGrid]);
As you can see, what I'm doing is applying our rotation transformation function to all points in the square (renamed to positions), as well as the marker's position (renamed center).
And voila, the entire grid rotates, along with its markers, around the axis point that you define.
Note that you were using the Rectangle component, with 2 points as bounds to define the rectangle. This no longer works once you rotate the rectangle, so I converted it to a Polygon, with the 4 corners of the square as its positions prop.
Also note this will not work in react-leaflet version 3. v3 has an absolute ton of breaking changes (which IMO have greatly improved the performance and UX of the library), but the management of refs and changing props is completely different, and as such, will need to be accounted for in v3. If you have questions about that, comment and I can write a bit more, but this answer is already long.
I've been trying to get coordinates from Minecraft and put them on a WPF canvas for a map viewer. However, most of it does not work, and it is not centered. How would you go about doing this?
Help is appreciated!
Update #1:
I'll be a bit more descriptive about what I want. I'd like to show positions (not images or terrain and stuff) on a WPF Canvas. What I am trying to do is make a navigation system showing lines from certain coordinates to other coordinates. One thing I've tried was making the canvas as big as a Minecraft map, but that didn't work well. https://www.codeproject.com/Articles/85603/A-WPF-custom-control-for-zooming-and-panning I've been using that with no problems, but the thing is that I want to translate in-game coordinates to coordinates on the map for waypoints and such.
Sorry for the lack of detail at first!
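The core translation being asked about here is a linear mapping from a world-space bounding box to canvas pixels. A minimal sketch (shown in JavaScript purely to illustrate the math; the names are hypothetical, and the two lines of arithmetic port directly to C#):

```javascript
// Map a Minecraft (x, z) position into canvas pixel coordinates.
// world: the region of the Minecraft world the canvas should show;
// canvas: its size in pixels. Minecraft +z points south and canvas
// +y points down, so a north-up map needs no axis flip.
function worldToCanvas([x, z], world, canvas) {
  const u = (x - world.minX) / (world.maxX - world.minX);
  const v = (z - world.minZ) / (world.maxZ - world.minZ);
  return [u * canvas.width, v * canvas.height];
}
```

With a world box of -1000..1000 on both axes and a 500×500 canvas, the in-game origin (0, 0) lands at the canvas center (250, 250), which also addresses the centering problem.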
I need to display many markers on a WPF image. The markers can be lines, circles, squares, etc. and there can be several hundreds of them.
Both the image source and the markers data are updated every few seconds. The markers are associated with specific pixels on the image and their size should be absolute in relation to the screen (i.e. when I move the image the markers should move along with it, but if i zoom in, they should take the same space of the screen as before).
Currently, I've implemented this using the AdornerLayer. This solution has several problems but the most significant one is that the UI doesn't fare well under the load even for 120 such markers.
I wanted to ask what would be the best way to go about implementing this? I thought of two solutions:
Inherit from Canvas and make sure it is invalidated not for every added marker but for a range of markers at once
Create a control that holds an image and change its OnDraw to draw all the markers
I would appreciate some pointers from someone with experience with a similar problem.
Your use case looks quite specialized, so a specialized solution seems in order. I'd try a variant of your second option — extend Image, overriding its OnRender method.