Does a tile server render every request? - maps

I am trying to understand how a tile server handles requests.
I read about Mapnik, which "creates" the map using OSM data.
Does "creates" mean that every request is rendered and the result returned? If so, what is the performance hit (for a global map)?
Or does it mean that Mapnik creates all the tiles it needs at initialization and then serves the right one upon request? If so, what is the total storage needed for all tiles (for a global map)?
I am trying to find a way to build my own tile server (global map) while rendering it only once.

With typical tile-based maps such as OSM's default Mapnik style, there is a cache that stores previously rendered tiles, but it contains just a small subset of all possible tiles. Other tiles are rendered on the fly when a client requests them. Only a small percentage of the tiles are ever actually requested, as most of them are not particularly interesting to users.
The OpenStreetMap wiki's page on Tile Disk Usage has some numbers with a breakdown by zoom level. According to this source, you are looking at roughly 50 TB of data if you want to store all tiles for a global map with 18 zoom levels. Other estimates range into the hundreds of TB, though.
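To make that render-on-demand-plus-cache flow concrete, here is a minimal sketch of such a server in TypeScript with Express. The cache directory and the renderTile stub are assumptions, not a real API; an actual server would call into a renderer such as Mapnik there.

```typescript
import express from "express";
import * as fs from "fs";
import * as path from "path";

const CACHE_DIR = "/var/cache/tiles"; // assumed cache location

// Hypothetical renderer stand-in; a real server would invoke Mapnik
// (or another renderer) with the tile's bounding box here.
async function renderTile(z: number, x: number, y: number): Promise<Buffer> {
  return Buffer.alloc(0); // placeholder
}

const app = express();

app.get("/tiles/:z/:x/:y.png", async (req, res) => {
  const { z, x, y } = req.params;
  const cached = path.join(CACHE_DIR, z, x, `${y}.png`);

  if (fs.existsSync(cached)) {
    // Cache hit: serve the previously rendered tile directly.
    res.type("png").sendFile(cached);
    return;
  }

  // Cache miss: render on the fly, store for next time, then serve.
  const tile = await renderTile(Number(z), Number(x), Number(y));
  fs.mkdirSync(path.dirname(cached), { recursive: true });
  fs.writeFileSync(cached, tile);
  res.type("png").send(tile);
});

app.listen(8080);
```

This is why only a small subset of tiles ever needs disk space: most cache entries are created lazily, on first request.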

Related

Efficient retrieval of lat-lon points that are within a square boundary

I have a react-native application that populates a map with pins submitted by users. The front end gets the corners of the window, and then the back end goes through each pin to check whether it falls within that boundary, returning the ones that do.
This is taking too long on the backend and I want to ask the community for ideas, because I doubt I have the best one.
My idea is to store tables of pins grouped by quadrant, effectively a cache, so that I can return the pins from the relevant quadrants in almost constant time.
Is there a simpler way to do this?
Maybe using NoSQL?
🙏🏻
A month later: it seems geohashing is probably the best way, and AWS has a library that handles this automatically with DynamoDB. Apparently it takes the corners of the screen (lat/lon) and returns the items from the DB that fall within the view, in (I assume) near-constant time, since that's the whole point of geohashing: performance that works at scale.
https://www.npmjs.com/package/dynamodb-geo
https://aws.amazon.com/blogs/compute/implementing-geohashing-at-scale-in-serverless-web-applications/
Otherwise, a geohashing library built for serving mobile apps likely already exists.
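For intuition, here is a minimal sketch of what a geohash encoder does (the libraries above implement this plus the bounding-box query logic): interleave longitude and latitude bits and base32-encode them, so nearby points share a common prefix.

```typescript
// Minimal geohash encoder sketch: interleave longitude/latitude bits
// and map each 5-bit group to the standard geohash base32 alphabet.
const BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

function encodeGeohash(lat: number, lon: number, precision = 9): string {
  const latRange = [-90, 90];
  const lonRange = [-180, 180];
  let hash = "";
  let bits = 0;
  let bitCount = 0;
  let evenBit = true; // geohash bit 0 refines longitude

  while (hash.length < precision) {
    const range = evenBit ? lonRange : latRange;
    const value = evenBit ? lon : lat;
    const mid = (range[0] + range[1]) / 2;
    if (value >= mid) {
      bits = (bits << 1) | 1;
      range[0] = mid; // value is in the upper half
    } else {
      bits = bits << 1;
      range[1] = mid; // value is in the lower half
    }
    evenBit = !evenBit;
    if (++bitCount === 5) {
      hash += BASE32[bits]; // emit one base32 character per 5 bits
      bits = 0;
      bitCount = 0;
    }
  }
  return hash;
}

// Nearby points share a prefix, so a bounding-box query reduces to a
// handful of prefix scans, which is what makes it fast at scale.
console.log(encodeGeohash(40.7484, -73.9857)); // Empire State Building area
```

A prefix scan over a sorted index (or a DynamoDB hash/range key, as in the library above) is what replaces the per-pin boundary check.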

How to use React-Leaflet with Markers from API with large dataset

We have an application where we need to plot ~13K+ markers on a map, depending on the user's current filters. I have found that react-leaflet does not handle this many markers very well. I even offloaded all of the popup data to a separate query made on popup open to reduce the initial data load, but to no avail.
I did find that clustering helps improve the performance, so I have enabled it (even though it is not in the following example code). But even with clustering, on the initial map load a user might have to wait 20 seconds for the map to render the markers, even though the actual data fetch from the GraphQL API takes less than a second.
I am trying to find a way to only fetch the markers that are present within the bounding box of the page. I am doing this successfully, but the problem now is that all of the markers flash (re-render) when the map is moved or zoomed, even if the new bounds are just a subset of the original.
CodeSandbox: https://codesandbox.io/s/graphql-markers-t409f?file=/src/App.js
Is there a way for react-leaflet to only re-render the markers that are new, and not all the markers? I am passing the unique id of the marker from the API to the Marker component's key prop, but that doesn't seem to provide any improvement.
I am thinking that maybe the best way to handle it would be to track the bounds of the area already fetched (the min/max north, south, east, and west) and only refetch if the new bounds fall outside the currently fetched area, combined with fetching an area 10-20% larger than the viewport so that slightly moving the map in any direction doesn't trigger a refetch.
Note: I am not guaranteeing that this GraphQL backend will be available after the question has been answered. There is a 1 MB/day data transfer limit on this backend, in case this question blows up and you cannot see the markers. See the README in the CodeSandbox for steps to set up the backend for free online in under 2 minutes.
Some related issues that may help if no other solution is found:
Limit rendering Popups
This is not the same question as Plotting 140K points in leafletjs. I am specifically looking for a way to diff the markers: remove the ones missing from the new data fetch and add the ones not present in the previous fetch, without re-rendering the existing markers.
Take a look at the example code and you will see the markers flash as they all get re-rendered when new data is fetched. Fixing that might give a better balance between performance and UX, as I really don't want to use CircleMarker, and I already added clustering but still see lag.
You should use the React Canvas Markers plugin. The poor performance is because the default behavior is to create an individual div for each marker. The plugin avoids that performance hit by writing your map and markers to a canvas element. Here's a working example from SO.
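On the refetch side, the padded-bounds idea from the question is sound and simple to sketch. Below is a hypothetical version: the Bounds shape and the fetchMarkers callback are assumptions for illustration; wire maybeRefetch into the map's moveend/zoomend events.

```typescript
// Refetch only when the viewport leaves the previously fetched area,
// and over-fetch by ~20% so small pans stay inside it.
interface Bounds {
  north: number;
  south: number;
  east: number;
  west: number;
}

function pad(b: Bounds, ratio = 0.2): Bounds {
  const latPad = (b.north - b.south) * ratio;
  const lonPad = (b.east - b.west) * ratio;
  return {
    north: b.north + latPad,
    south: b.south - latPad,
    east: b.east + lonPad,
    west: b.west - lonPad,
  };
}

function contains(outer: Bounds, inner: Bounds): boolean {
  return (
    inner.north <= outer.north &&
    inner.south >= outer.south &&
    inner.east <= outer.east &&
    inner.west >= outer.west
  );
}

let fetchedArea: Bounds | null = null;

function maybeRefetch(viewport: Bounds, fetchMarkers: (b: Bounds) => void): void {
  if (fetchedArea && contains(fetchedArea, viewport)) {
    return; // still inside the over-fetched area: no refetch, no flash
  }
  fetchedArea = pad(viewport);
  fetchMarkers(fetchedArea);
}
```

Because no fetch fires while the viewport stays inside the padded area, the marker list doesn't change and React has nothing to re-render.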

TileMill export in MBTiles format showing an infinite export time

I'm attempting to export my map in MBTiles format, and it is telling me that there are thousands of days remaining, even though my map does not contain enormous quantities of data. I've tried several different exports, and none of them have exported any data after several hours.
This is my first time exporting in MBTiles format, and I hope to eventually upload my map to Mapbox. Is there a step that I am missing?
Make sure you are cropping and setting zoom levels.
https://www.mapbox.com/tilemill/docs/crashcourse/exporting/
Select the map "Bounds". This is the area of the map to be exported. By default, the entire world is selected. If your map covers only a smaller region of the globe, you can save processing time and disk space by cropping to that area.
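To see why cropping matters, here is a rough estimate of how many tiles an export covers, using standard slippy-map tile math (this is not TileMill code; the bounding boxes in the usage lines are arbitrary examples):

```typescript
function lonToTileX(lon: number, z: number): number {
  const x = Math.floor(((lon + 180) / 360) * 2 ** z);
  return Math.min(Math.max(x, 0), 2 ** z - 1); // clamp the 180° edge
}

function latToTileY(lat: number, z: number): number {
  const rad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(rad) + 1 / Math.cos(rad)) / Math.PI) / 2) * 2 ** z
  );
  return Math.min(Math.max(y, 0), 2 ** z - 1);
}

function tileCount(
  west: number, south: number, east: number, north: number,
  minZoom: number, maxZoom: number
): number {
  let total = 0;
  for (let z = minZoom; z <= maxZoom; z++) {
    const xSpan = lonToTileX(east, z) - lonToTileX(west, z) + 1;
    const ySpan = latToTileY(south, z) - latToTileY(north, z) + 1; // y grows southward
    total += xSpan * ySpan;
  }
  return total;
}

// The whole world to zoom 18 versus a single city to zoom 18:
console.log(tileCount(-180, -85, 180, 85, 0, 18)); // on the order of 9e10 tiles
console.log(tileCount(-0.5, 51.3, 0.3, 51.7, 0, 18)); // orders of magnitude fewer
```

Each zoom level roughly quadruples the tile count, so tightening the bounds and lowering the maximum zoom are the two levers that turn "thousands of days" into a reasonable export.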

Create a card with realtime distance to coordinates

I understand that I can render maps on timeline cards. Can I create a card with a set of coordinates that displays a live-updating distance to those coordinates?
No map is displayed, just a number (meters, feet) that is updated as the user moves.
From what I can tell, that is not possible at the moment.
All you can do with the location queries is get and list. From what I understand, the location updates every ten minutes, as documented at: https://developers.google.com/glass/location
Yes, you can create a card that contains this information, but it requires that you use some other APIs in addition to the Mirror API. The Google Distance Matrix API appears to be a good fit for this use case.
Every time you'd like to update the timeline card, make a request to the Distance Matrix API to determine the travel distance to that set of coordinates. Convert this into a human-readable format and include it in the timelineItem.text or timelineItem.html field of the card.
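A hedged sketch of that flow, in TypeScript: query the Distance Matrix API for the travel distance, then patch it into the timeline card via the Mirror API. API_KEY, ACCESS_TOKEN, and CARD_ID are placeholders you'd supply.

```typescript
const API_KEY = "YOUR_MAPS_API_KEY";
const ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN";
const CARD_ID = "YOUR_TIMELINE_ITEM_ID";

async function updateDistanceCard(
  userLat: number,
  userLon: number,
  targetLat: number,
  targetLon: number
): Promise<void> {
  // 1. Travel distance between the user's location and the target.
  const url =
    "https://maps.googleapis.com/maps/api/distancematrix/json" +
    `?origins=${userLat},${userLon}` +
    `&destinations=${targetLat},${targetLon}` +
    `&key=${API_KEY}`;
  const matrix = await (await fetch(url)).json();
  const distanceText = matrix.rows[0].elements[0].distance.text; // e.g. "1.2 km"

  // 2. Patch the human-readable distance into the card's text field.
  await fetch(`https://www.googleapis.com/mirror/v1/timeline/${CARD_ID}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text: distanceText }),
  });
}
```

Given the ten-minute location refresh noted above, you'd trigger this from a location subscription callback rather than on a tight timer.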

Showing 1 million rows in a browser

Our utility has one single table with 10 million to 50 million rows. There may be a case where we need to show 50 million rows in a single HTML client page. We use jQuery to render the rows in the UI.
To retrieve the rows we use Hibernate, and Spring for MVC. I am looking for the best practice for retrieving the rows and showing them in the UI. Should I retrieve a batch of one or two thousand rows at a time in Hibernate and buffer them to the web client, or is there a better practice?
The best practice is not to do this. It will blow up the browser's memory and rendering engine, and it will take far too long to load.
Add a search form to your webapp, make the end user search for what they're interested in, and only display the first N search results, just like Google does.
Nobody is able to do anything meaningful with 50 million rows without searching anyway.
I think you should use scroll pagination: when the user gets near the bottom of the page, make an AJAX call and load more data (a rough sketch follows below).
Just as a quick example, a Google search turns up an example and demo.
And if your data is tabular, you can use jQGrid.
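Here is a sketch of that scroll-pagination idea; the /api/rows endpoint and the #rows container are assumptions for illustration.

```typescript
const PAGE_SIZE = 200;
let nextOffset = 0;
let loading = false;

async function fetchPage(offset: number, limit: number): Promise<string[]> {
  const res = await fetch(`/api/rows?offset=${offset}&limit=${limit}`);
  return res.json();
}

async function loadMore(container: HTMLElement): Promise<void> {
  if (loading) return; // avoid overlapping requests while one is in flight
  loading = true;
  const rows = await fetchPage(nextOffset, PAGE_SIZE);
  for (const r of rows) {
    const div = document.createElement("div");
    div.textContent = r;
    container.appendChild(div);
  }
  nextOffset += rows.length;
  loading = false;
}

window.addEventListener("scroll", () => {
  // Within ~200px of the bottom of the page: load the next batch.
  const nearBottom =
    window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
  if (nearBottom) loadMore(document.getElementById("rows") as HTMLElement);
});
```

Note that plain infinite scroll still accumulates DOM nodes; for millions of rows you'll eventually want the virtualization approach in the next answer.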
Handling larger quantities of data in an application must be done via virtualization. While it's true that the user will be overwhelmed by millions of records, it's not exactly true that they can't do anything with them, nor that such quantities of data are unfathomable.
In practice and depending on what you're doing you'll note that this limit crops up on you with just thousands of records. Which frankly is very little data. Data centric apps just need a different approach, altogether, if they are going to work in a browser and perform well.
The way we do this is quite simple but not all that straightforward.
It helps to decide on a fixed height, because you will need to know the max height of a scrollable container. You then render into this container the subset of records that can be visible at any given moment and position them accordingly (based on scroll events). There are more or less efficient ways of doing this.
The end goal remains the same: you need to cull everything that isn't directly visible on screen, so that the browser isn't paying the memory and layout cost and the app stays responsive. This is common practice in game development, where only the part of the world currently visible on screen is ever present at any given moment. That's what you need to do to get large quantities of stuff to behave well.
In the context of a browser, anything that contributes to memory use or layout/render cost needs to go away if it isn't absolutely vital.
You can also stagger and smear recalculations so that you don't incur the full cost of whatever is causing the app to degrade on every small update. The user can afford to wait a second, as long as the app remains responsive.
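A minimal plain-DOM sketch of that windowing approach, assuming a fixed row height (the row height, row count, and container size are arbitrary; a real app would look up record i in its data source):

```typescript
const ROW_HEIGHT = 24; // px; a fixed height keeps offsets simple multiplication
const TOTAL_ROWS = 1_000_000;

const viewport = document.createElement("div");
viewport.style.cssText = "height:600px;overflow-y:auto;position:relative;";

const spacer = document.createElement("div");
spacer.style.height = `${TOTAL_ROWS * ROW_HEIGHT}px`; // fakes the full list height
viewport.appendChild(spacer);
document.body.appendChild(viewport);

function renderVisibleRows(): void {
  const first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
  const count = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;

  spacer.textContent = ""; // cull everything that scrolled off screen
  for (let i = first; i < Math.min(first + count, TOTAL_ROWS); i++) {
    const row = document.createElement("div");
    row.style.cssText =
      `position:absolute;left:0;right:0;top:${i * ROW_HEIGHT}px;height:${ROW_HEIGHT}px;`;
    row.textContent = `Row ${i}`;
    spacer.appendChild(row);
  }
}

viewport.addEventListener("scroll", renderVisibleRows);
renderVisibleRows();
```

Only the ~25 rows intersecting the viewport exist in the DOM at any moment, so the browser's memory and layout cost stays flat no matter how large the dataset grows.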
