How to draw an underground layer in harp.gl and set the camera to it

How can we draw underground features like pipelines and subways? I would like them to appear underground, not merely at road level. Can we also set the map viewport to an underground level? In the documentation, the camera pitch angle can only be set from 0 to 90 degrees.

Related

Visualization for Indoor Positioning System in Real Time

I want to visualize moving assets on an indoor map of a facility.
I think the first step would be to trace the floor plan image into some form of accurate vector drawing, with precise lengths of all the structures, to create a digitized version of the facility.
The hardware setup gives me relative x,y positioning, for example within a 50x50 meter bounding box where coordinates run from 0,0 (bottom left) to 50,50 (top right).
Accuracy of the indoor map drawing is critical for the application, as I need to plot moving objects. I came across OpenStreetMap's indoor map libraries like openindoor6, which look good for static maps showing the internal structure of buildings, but I have doubts about the measurement accuracy of the structures (lengths of walls, room sizes, etc.), as I'll have to manually georeference the floor plan and then map the x,y coordinates I obtain from the hardware to lat/lon.
In short, I need tools that will help me draw accurate indoor maps with a reliable coordinate system and do some layering: placing markers, marking zones, and indoor geofencing.
I'm looking for open source tools if possible to achieve all this. Any suggestions? TIA.
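As an illustration of that last mapping step, here is a minimal sketch (Python, and every number in it is a made-up placeholder): pin the local 0,0 corner to a known lat/lon, rotate the local axes to true north, and apply a small-area approximation, which is plenty accurate over a 50 m box.

```python
import math

# Minimal sketch, assuming made-up anchor values: convert local x,y
# meters inside the 50x50 m box to lat/lon.
ORIGIN_LAT = 52.5200      # hypothetical lat of the local (0, 0) corner
ORIGIN_LON = 13.4050      # hypothetical lon of the local (0, 0) corner
ROTATION_DEG = 15.0       # hypothetical bearing of the local +y axis
EARTH_RADIUS = 6378137.0  # meters (WGS84 semi-major axis)

def local_to_latlon(x, y):
    # Rotate local axes into east/north meters.
    theta = math.radians(ROTATION_DEG)
    east = x * math.cos(theta) + y * math.sin(theta)
    north = -x * math.sin(theta) + y * math.cos(theta)
    # Small-area equirectangular approximation: fine for a 50 m box.
    dlat = math.degrees(north / EARTH_RADIUS)
    dlon = math.degrees(east / (EARTH_RADIUS * math.cos(math.radians(ORIGIN_LAT))))
    return ORIGIN_LAT + dlat, ORIGIN_LON + dlon

print(local_to_latlon(25.0, 25.0))  # center of the box
```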

How are camera views handled in fps multiplayer games?

Hi, I was completing the movement system in my hobby engine, where I have an FPS camera, and I was curious how multiplayer games handle the movement of all players. Imagine there are 10 people on a server, 8 playing and moving and 2 spectating: they have 10 different cameras, each driven by that player's mouse movement, right? Or am I missing something? So, for example, if I die and I want to switch my camera to another player who is still playing, do I simply swap my view for his? Does that make sense?
Here's a great resource for coordinate systems and transformations.
The camera is just a matrix (or a combination of matrices) that bends the universe into a single box (whatever ends up outside the box is outside the field of view) by multiplying the coordinates of the triangles by itself; the box is then drawn as if its x,y corners were the corners of the screen (viewport, actually), with z as the depth (if you need it).
Just 16 numbers: a 4x4 matrix is passed to the rendering engine (or more, if you pass the view and projection matrices separately), and it puts the triangles where they end up on the screen. In my own engine, I pre-multiply the view matrix (the one that rotates and shifts coordinates so the camera becomes the origin, the 0;0;0 point of view space) with the projection matrix (the one that packs everything in the field of view into the screen box, clip space).
Obviously, if you want a different camera, you just build the matrices from that camera's position, orientation, FoV, etc.
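To make that concrete, here's a rough sketch in Python/numpy, not code from the engine above; the right-handed, column-vector convention is my assumption:

```python
import numpy as np

# Sketch of the two matrices described above: a view matrix that moves
# the world so the camera sits at the origin, and a perspective
# projection that packs the field of view into clip space.

def look_at(eye, target, up):
    f = target - eye
    f = f / np.linalg.norm(f)                        # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s)   # right
    u = np.cross(s, f)                               # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye                # shift camera to origin
    return view

def perspective(fov_y_deg, aspect, near, far):
    t = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    proj = np.zeros((4, 4))
    proj[0, 0] = t / aspect
    proj[1, 1] = t
    proj[2, 2] = (far + near) / (near - far)
    proj[2, 3] = 2 * far * near / (near - far)
    proj[3, 2] = -1.0                                # perspective divide by -z
    return proj

# One camera = one such matrix; switching to another player's view is
# just rebuilding it from their position and orientation.
eye = np.array([0.0, 1.8, 5.0])
view_proj = perspective(70.0, 16 / 9, 0.1, 100.0) @ look_at(
    eye, np.array([0.0, 1.8, 0.0]), np.array([0.0, 1.0, 0.0]))
point = np.array([0.0, 1.8, 0.0, 1.0])               # a point in world space
clip = view_proj @ point
ndc = clip[:3] / clip[3]                             # screen-box coordinates
print(ndc)
```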

How to transform a 3D bounding box from point clouds to matched RGB frames in ROS?

I am doing SLAM with a RealSense D455 depth camera on ROS (https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i) and I am able to create 3D point clouds of the environment.
I am new to ROS, and my goal is to place a 3D bounding box around a region of the global point cloud, then transform the same box onto the RGB frames that match those points (at the same coordinates) in the global point cloud.
I now have RGB, depth, and point cloud topics, plus tf, but since I am new to this field I do not know how to find the RGB frames that match each point in the global point cloud, nor how to do the same operation from the point cloud to the RGB frames.
I would be grateful if someone could help me with that.
Get the 3D bounding box points.
Transform them into the camera's frame (z is depth).
Generate your camera's calibration model (the intrinsic and extrinsic parameters of the camera, focal length, etc.) -> check the OpenCV model, the Scaramuzza model, or another stereo camera calibration model.
Project the 3D points to 2D pixels (OpenCV has a projectPoints() function); a sketch follows below.
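Here's a minimal Python sketch of those last steps with cv2.projectPoints. The intrinsics are made-up placeholders from a hypothetical calibration; rvec/tvec would come from your tf transform if the points are not already in the camera frame:

```python
import cv2
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy) from calibration.
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume distortion is already corrected

# Eight corners of a 3D box in the camera frame (z = depth, meters).
box = np.array([[x, y, z]
                for x in (-0.5, 0.5)
                for y in (-0.5, 0.5)
                for z in (2.0, 3.0)], dtype=np.float64)

# rvec/tvec are zero because the points are already in the camera frame;
# otherwise pass the rotation/translation from tf here.
rvec = np.zeros(3)
tvec = np.zeros(3)
pixels, _ = cv2.projectPoints(box, rvec, tvec, K, dist)
print(pixels.reshape(-1, 2))  # (u, v) pixel coordinates of each corner
```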

Mapping Lat And Lon on a blank page

VB.net, WPF
I've been researching for a while and have come across nothing that is exactly what I'm looking for.
Basically, I am required to make a program that can map the area covered by a machine. This map can either be a polygon covering a white screen or a polygon overlaying an offline satellite/street map.
I have the NMEA serial GPS set up and running perfectly; however, I now need to store the lat and lon of the machine at each one-second tick and then overlay this as a polygon map of the area covered.
The machine width will be set by the user, and the area covered is mapped with the GPS.
My question:
Is there a way I can overlay a GPS-generated coverage map on either a white screen or an offline street map?
I think I understand what you're trying to achieve here.
You can draw the coverage of your machine as a circle of the machine's width on a canvas. See here for a tutorial.
When it comes to converting your lat/long to pixels for the centre of your machine, you need to account for the projection of the chart. Lat/long describes a location on the surface of a sphere, but your map is a flat 2D shape. To plot these points, maps are squished using a projection, the most common of which is the Mercator projection.
Here's an example of transforming lat/long for Mercator: Convert latitude/longitude point to pixels (x,y) on Mercator projection
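For reference, the standard Web Mercator tile math looks like this; it's a Python sketch, but the formula ports straight to VB.net:

```python
import math

# Map lat/lon to pixel x,y on a square world image of
# 256 * 2**zoom pixels (the slippy-map tiling scheme).
def latlon_to_pixels(lat_deg, lon_deg, zoom):
    size = 256 * (2 ** zoom)      # world width in pixels
    x = (lon_deg + 180.0) / 360.0 * size
    lat = math.radians(lat_deg)
    y = (1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * size
    return x, y

# Two GPS fixes a second apart become two pixel centres that you can
# join with a machine-width-thick stroke on the canvas.
print(latlon_to_pixels(51.5074, -0.1278, 12))
```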

Getting Relative Position of a Rotating Camera

I have a Viewport3D with a 3D model composed of multiple smaller components centered at the origin. I'm animating the PerspectiveCamera to rotate about the Y-axis, using an AnimationClock created from a DoubleAnimation, to create a rotating effect on the model. In addition, I have another RotateTransform3D assigned to the camera's Transform3DGroup so the user can orbit around the model and zoom in and out with the mouse.
I want to be able to translate each component, as it is selected, so that it moves in front of the rotating camera. However, I don't know how to get the position of the camera relative to the 3D model, because the camera's coordinate system is being animated and transformed by the user's input.
Is there a way to get the offset between two coordinate systems?
Any help or suggestions would be appreciated.
Thanks,
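One general approach, hedged since the exact transform setup above isn't shown: WPF's Transform3DGroup exposes its currently composed matrix through the Value property (a Matrix3D), including animated values at the moment you read it, so you can push the camera's local Position through that matrix to get the camera's position in model space. A language-neutral sketch of the matrix composition, with numpy standing in for Matrix3D and made-up angles:

```python
import numpy as np

# The camera's world-space position is its local position pushed
# through every transform applied to it, in the same order as the
# camera's transform group.
def rot_y(deg):
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [        0, 1,         0],
                     [-np.sin(a), 0, np.cos(a)]])

camera_local = np.array([0.0, 0.0, 5.0])  # hypothetical camera Position
animated = rot_y(30.0)                    # current animation angle
user_orbit = rot_y(-10.0)                 # current mouse-orbit rotation

camera_world = user_orbit @ animated @ camera_local
print(camera_world)  # translate the selected component toward this point
```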
