Can one use encoded polylines to render maps on timeline cards? If not, what is the practical limit on coordinate count?
The current implementation does not support encoded polylines yet. Please feel free to file a feature request on our issue tracker to request support for it.
Regarding the number of coordinates, there is currently no known upper limit, but we will probably cap it at a few hundred to limit resource consumption.
Most speakers have a volume knob, but is there a way to adjust the volume for each audio file? My biggest concern is speaker integrity: I don't want to send a sound that is too loud. I can't find a way to get the speaker's data so I can decide whether it's safe to play a file, and I intend to make this work with any speaker. It's a broad question, but I don't even know if this is possible, so I don't know where to start.
Audio files should ideally be at the maximum level that does not distort. It may be worth preprocessing all your recordings so that they're at a consistent sound level. That might be enough to address the concern you raise about client speaker integrity.
If you want the database to store a number that reflects how "hot" a particular audio file is, then it seems to me that the effort required to get that number might be comparable to, or even greater than, the effort to preprocess the sounds to a consistent maximum level without clipping.
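If you do decide to compute such a number, a minimal sketch using the standard Web Audio API (decodeAudioData plus a scan of the raw samples) might look like this; peakAmplitude is a hypothetical helper name, and peak amplitude is just one possible measure of "hotness":

```typescript
// Sketch only: compute a simple "hotness" value (peak amplitude) for one
// audio file so it could be stored alongside the file in the database.
// AudioContext, decodeAudioData and getChannelData are standard Web Audio API.
async function peakAmplitude(url: string): Promise<number> {
  const ctx = new AudioContext();
  const data = await (await fetch(url)).arrayBuffer();
  const buffer = await ctx.decodeAudioData(data);

  let peak = 0;
  for (let ch = 0; ch < buffer.numberOfChannels; ch++) {
    const samples = buffer.getChannelData(ch); // Float32Array in [-1, 1]
    for (const s of samples) {
      peak = Math.max(peak, Math.abs(s));
    }
  }

  await ctx.close();
  return peak; // 1.0 means the file already reaches full scale
}
```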
Once you have that number, how do you make use of it? The HTML <audio> tag does not have an amplitude property, so you would have to use something in your client such as the Web Audio API, or one of the many libraries built on top of it. With the Web Audio API, the relevant place to change the volume is a gain node.
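For the playback side, here is a rough sketch of wiring an <audio> element through a GainNode (standard Web Audio API); the element id "player" and the 0.6 gain factor are placeholders:

```typescript
// Sketch only: route an existing <audio> element through a GainNode so the
// volume can be scaled per file.
const audioCtx = new AudioContext();
const element = document.getElementById("player") as HTMLAudioElement;

const source = audioCtx.createMediaElementSource(element);
const gainNode = audioCtx.createGain();
gainNode.gain.value = 0.6; // 1.0 = unchanged, < 1.0 attenuates

// audio element -> gain -> speakers
source.connect(gainNode).connect(audioCtx.destination);
element.play(); // browsers may require a user gesture before playback starts
```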
I started exploring Overpass Turbo and Mapbox with hopes of building my travel app. I can query some data in Overpass Turbo and get towns or islands, no problem; I understand the whole process of querying and exporting as GeoJSON.
But for learning purposes, I always do queries within a small area so I don't get too much data back.
Also, various resources mention that OSM data for the whole planet is huge. For example, https://wiki.openstreetmap.org/wiki/Downloading_data says: "The entire planet is a huge amount of data. Start with a regional extract to make sure your setup works properly. Common tools like Osmosis or various import tools for database imports and converters take hours or days to import data, depending largely on disk speed."
But when I go to apps like AllTrails, Maps.me or Mapbox, they seem to be showing a huge amount of data, definitely the main POIs.
Here's an example screenshot from AllTrails.
Can someone briefly explain how this is done? Do they actually download all of the data, or fetch it little by little depending on the current bounding box? Any info I can research further would be appreciated!
Thanks
P.S. I am hoping to build my app with Node.js, if that makes a difference.
Several reasons:
They don't always display everything. You will only ever see a limited region, never the whole world in full detail. If you zoom in, you will see a smaller region but with more detail. If you zoom out, you will see a larger region but with reduced detail (fewer or no POIs, smaller roads and waterways disappear, etc.).
They don't contain all the available data. OSM data is very diverse. OSM contains roads, buildings, landuse, addresses, POI information and much more, and for each of these elements there is additional information available. Roads, for instance, carry maxspeed information, lane count, surface information, whether they are lit and whether they have sidewalks or cycleways. Buildings may have information about the number of building levels, the building color, roof shape and color and so on. Not all of this information is required for the apps you listed, so it can be removed from the data.
They perform simplifications. It isn't always necessary to show roads, buildings, waterways and landuse in full detail. Instead, special algorithms reduce the polygon count so that the data becomes smaller while retaining sufficient detail. This is often coupled with the zoom level, i.e. roads and lakes become less detailed when zoomed out.
They never ship the whole world offline. Depending on the app, the map is available online, offline, or both. If online, the server has to store the huge amount of data, not the client device. If offline, the map is split into smaller regions that the client can handle. This usually means that a given map only covers a state, a region or sometimes a city, but rarely a whole country except for small countries. If you want to store whole countries offline, you will need a significant amount of storage.
They never access OSM directly. Apps and websites that display OSM maps don't obtain this information live from OSM. Instead, they either maintain their own database containing the required data, which is periodically updated from the main OSM database via planet dumps, or they use a third-party map provider (such as Mapbox in your screenshot) to display a base map with their own layers on top. In the latter case they don't have to store much information on their server, just the things they want to show on top of OSM.
None of the above is specific to OSM. You will find similar mechanisms in other map apps and for other map sources.
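If you take the online route with your own Node.js backend, a minimal sketch of fetching only the data inside the current bounding box from the public Overpass API could look like the following; the coordinates and the "tourism" tag filter are arbitrary examples, not a recommendation:

```typescript
// Sketch only (Node.js 18+ with built-in fetch): ask the public Overpass API
// for the POIs inside one bounding box instead of downloading a planet file.
const bbox = "47.35,8.50,47.40,8.56"; // south,west,north,east
const query = `
  [out:json][timeout:25];
  node["tourism"](${bbox});
  out body;
`;

const response = await fetch("https://overpass-api.de/api/interpreter", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: "data=" + encodeURIComponent(query),
});
const result = await response.json();
console.log(`${result.elements.length} POIs in the requested bounding box`);
```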
I have been working on a project using Azure Indoor Maps and started using the Azure Maps Web SDK. I have been looking for a way to loop over all features that are loaded automatically by the SDK, without making a request to the WFS API https://learn.microsoft.com/en-us/rest/api/maps/v2/wfs/get-feature.
Since the map is loaded, I think this information should be accessible directly through the SDK and that I shouldn't need to make another request. But maybe I am wrong.
I have found a method that does something similar to what I need, getRenderedShapes, but it only returns the features that are visible when the method is called, and I need all the features in the indoor map, or at least on one floor.
Does anybody know if this is possible? On one hand, I think there should be something similar to getRenderedShapes; on the other hand, I think the front end only has the visual information, and that Azure Indoor Maps uses the vector tile source, which is optimized on the back end and only serves the front end the information it needs.
https://learn.microsoft.com/en-us/azure/azure-maps/web-sdk-best-practices#optimize-data-sources
The Web SDK has two data sources:

GeoJSON source: known as the DataSource class, it manages raw location data in GeoJSON format locally. Good for small to medium data sets (upwards of hundreds of thousands of features).

Vector tile source: known as the VectorTileSource class, it loads data formatted as vector tiles for the current map view, based on the map's tiling system. Ideal for large to massive data sets (millions or billions of features).
As you noted, the map SDK only loads the indoor maps via vector tiles, which are a condensed version of the data set clipped to the area of the viewport. This only loads a small subset of the data, which makes it possible to build a large, scalable indoor map platform that in theory could support every building in the world in real time. As you also noted, the getRenderedShapes function can retrieve data from the vector tiles, but only the features in the current viewport (plus a small buffer). I believe the only way to get the data as GeoJSON is via the WFS GetFeatures service: https://learn.microsoft.com/en-us/rest/api/maps/v2/wfs/get-features
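For completeness, here is a rough sketch of calling that GetFeatures endpoint, based on the URL pattern in the linked documentation; the geography, dataset id, collection name and subscription key are placeholders you would need to replace with the values of your own Creator resource:

```typescript
// Sketch only: request one collection of an indoor map dataset as GeoJSON
// via the WFS GetFeatures REST endpoint referenced above.
const geography = "us";                        // location of the Creator resource
const datasetId = "<your-dataset-id>";         // placeholder
const collectionId = "unit";                   // e.g. the rooms/units collection
const subscriptionKey = "<your-subscription-key>";

const url =
  `https://${geography}.atlas.microsoft.com/wfs/datasets/${datasetId}` +
  `/collections/${collectionId}/items` +
  `?api-version=2.0&subscription-key=${subscriptionKey}`;

const featureCollection = await (await fetch(url)).json();
console.log(`Received ${featureCollection.features.length} features as GeoJSON`);
```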
I remember reading that SceneKit has a polygon limit of 200k. However, I haven't been able to find the source where I read it, so I have no idea whether that information is current or even correct at all.
It may have been a WWDC session. Either way, what I need to know is:
Is this correct?
Is this limitation on the entire scene or just what is being rendered at any one time?
I don't think SceneKit has a hard limit on the number of polygons it will render. Instead, you'll see a gradual performance falloff as that number (and other measures of CPU and GPU usage) goes up. 200,000 is probably well into that falloff on some devices and perfectly fine on others.
You might be thinking of some of the advice in the WWDC 2014 session "Building a Game with SceneKit". That talk shows several metrics you can use to gauge a SceneKit app's performance, and strategies for dealing with different kinds of bottlenecks. I'd recommend watching the video if you haven't already.
I was wondering whether it is possible to get the coordinates of the 121 face points given by the Kinect Face Tracking SDK, but from a local image.
I mean, I have some local images on the hard disk and I want to extract those points from them.
Is this possible, or does the face tracking algorithm only work with data provided by the Kinect camera?
Thanks
The algorithm used in the Microsoft Kinect FaceTracking example uses multiple inputs that come directly from the Kinect sensor, the depth data being the primary one you will be lacking. As a result, it is not possible to simply plug an image into the algorithm to obtain the data points from a flat image.
There are multiple examples around the web for extracting facial features from both flat images and standard video (i.e., no depth data included). Some standard image processing libraries (e.g., OpenCV) may even have them baked in (though I can't confirm this).