I am building a map application with the Silverlight Bing Maps control.
In the map control I want to show all of the subscribed customers.
The number of customers is somewhere between 5,000 and 7,000, which means I can't show them all at once; I'd guess that would crash the control.
How would you solve this issue?
I've read about zoom-level events, tile layers, and spatial SQL,
but I have no idea what the right solution is in this situation or where to begin.
This seems like a pretty basic problem when working with maps, but there is little to no information on how to handle large amounts of data with Bing Maps.
Can anyone explain or point me to a good tutorial?
You can use a space-filling curve or a spatial index to nest those points by the zoom level of your map application and achieve a clustering effect: http://blog.notdot.net/2009/11/Damn-Cool-Algorithms-Spatial-indexing-with-Quadtrees-and-Hilbert-Curves. There are many implementations of space-filling curves and Hilbert curves; I've uploaded my own at phpclasses.org (hilbert-curve, BSD licence) together with a quadkey function for clustering, and I've successfully used it for several customers. The idea is to search the quadkey from left to right to fetch only a portion of the POIs; www.maptiler.org uses a quadkey with a Z-curve. You would probably get better answers at gis.stackexchange.com. Note that a space-filling curve usually has a power-of-two constraint.
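The answer's implementation is PHP; below is a minimal TypeScript sketch of the same quadkey idea, using the well-known Bing Maps tile-system bit interleaving. The function names and the clustering helper are illustrative, not the answerer's actual code.

```ts
// Quadkey sketch (Bing Maps tile-system convention): interleave the
// tile X/Y bits so that a quadkey prefix identifies one map cell, and
// every point sharing that prefix falls inside the same tile.
function latLonToQuadKey(lat: number, lon: number, zoom: number): string {
  const sinLat = Math.sin((lat * Math.PI) / 180);
  const x = (lon + 180) / 360; // normalized Mercator x in [0, 1)
  const y = 0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI);
  const n = 1 << zoom; // power-of-2 grid: the SFC constraint mentioned above
  const tileX = Math.min(n - 1, Math.max(0, Math.floor(x * n)));
  const tileY = Math.min(n - 1, Math.max(0, Math.floor(y * n)));
  let quadKey = "";
  for (let i = zoom; i > 0; i--) {
    const mask = 1 << (i - 1);
    let digit = 0;
    if (tileX & mask) digit += 1;
    if (tileY & mask) digit += 2;
    quadKey += digit.toString();
  }
  return quadKey;
}

// Cluster 5,000-7,000 customers by grouping them on the quadkey for the
// current zoom level; shorter keys (lower zoom) give coarser clusters.
function clusterByQuadKey(points: { lat: number; lon: number }[], zoom: number) {
  const clusters = new Map<string, { lat: number; lon: number }[]>();
  for (const p of points) {
    const key = latLonToQuadKey(p.lat, p.lon, zoom);
    const bucket = clusters.get(key);
    if (bucket) bucket.push(p);
    else clusters.set(key, [p]);
  }
  return clusters; // render one pushpin per cluster instead of per customer
}
```

Searching the quadkey "from left to right", as the answer puts it, then maps to a plain string prefix match in the database, e.g. `WHERE quadkey LIKE '0231%'`, which fetches only the POIs inside that tile and its children.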
I have a React Native application that populates pins on a map that have been submitted by users. The front end sends the corners of the window, and the back end goes through each pin to check whether it falls within that boundary, returning the ones that do.
This is taking too long on the backend and I want to ask the community for ideas, because I doubt I have the best one.
My idea is to store tables of pins grouped by quadrant, effectively a cache, so that I can return the pins from the quadrants involved in close to constant time (see the sketch below).
Is there a simpler way to do this?
Maybe using NoSQL?
🙏🏻
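A minimal sketch of the quadrant-bucket idea from the question, i.e. a fixed-grid spatial hash; the cell size, types, and function names are made up for illustration.

```ts
// Fixed-grid spatial hash: pins are bucketed by integer cell coordinates,
// so a viewport query scans only the cells it overlaps, not every pin.
type Pin = { id: string; lat: number; lon: number };

const CELL = 0.5; // cell size in degrees; tune to your pin density
const buckets = new Map<string, Pin[]>();

const cellKey = (lat: number, lon: number) =>
  `${Math.floor(lat / CELL)}:${Math.floor(lon / CELL)}`;

function addPin(pin: Pin): void {
  const key = cellKey(pin.lat, pin.lon);
  const bucket = buckets.get(key);
  if (bucket) bucket.push(pin);
  else buckets.set(key, [pin]);
}

// Return the pins inside the window by checking only overlapping cells.
function queryViewport(minLat: number, minLon: number, maxLat: number, maxLon: number): Pin[] {
  const result: Pin[] = [];
  for (let r = Math.floor(minLat / CELL); r <= Math.floor(maxLat / CELL); r++) {
    for (let c = Math.floor(minLon / CELL); c <= Math.floor(maxLon / CELL); c++) {
      for (const pin of buckets.get(`${r}:${c}`) ?? []) {
        if (pin.lat >= minLat && pin.lat <= maxLat &&
            pin.lon >= minLon && pin.lon <= maxLon) {
          result.push(pin);
        }
      }
    }
  }
  return result;
}
```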
A month later: it seems geohashing is probably the best way, and AWS has a library for handling this automatically with DynamoDB. It takes the corners of the screen (lat/lon) and returns the items from the DB that fall within the view in, I assume, constant time, since that's the whole point of geohashing: performance that works at scale.
https://www.npmjs.com/package/dynamodb-geo
https://aws.amazon.com/blogs/compute/implementing-geohashing-at-scale-in-serverless-web-applications/
Otherwise, a geohashing library built for serving mobile apps likely already exists.
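For reference, a rough sketch of how the dynamodb-geo package is used, based on its README; the table name, hash key length, and coordinates are placeholders, so check the package docs for the exact API.

```ts
// Query the pins inside the viewport rectangle; the library fans the
// request out into geohash range queries against DynamoDB for you.
import * as AWS from "aws-sdk";
import * as ddbGeo from "dynamodb-geo";

const ddb = new AWS.DynamoDB();
const config = new ddbGeo.GeoDataManagerConfiguration(ddb, "PinsTable"); // placeholder table
config.hashKeyLength = 5; // longer hash key => smaller, more numerous cells
const manager = new ddbGeo.GeoDataManager(config);

// The corners of the map window, as sent by the front end.
manager
  .queryRectangle({
    MinPoint: { latitude: 40.7, longitude: -74.0 },
    MaxPoint: { latitude: 40.8, longitude: -73.9 },
  })
  .then((pins) => console.log(`found ${pins.length} pins in view`));
```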
TileStache doesn't serve polygons that are self-intersecting, like the one in the picture below.
I checked whether the polygon is stored in my PostgreSQL database, and it is. The problem therefore occurs during serving; I suspect TileStache can't handle self-intersecting polygons. Any ideas?
That is because a self-intersecting polygon is invalid under the OGC simple features model. I can't think of a single program that will display one correctly.
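If you need to serve such geometries anyway, repair them before TileStache sees them; in PostGIS, `ST_MakeValid(geom)` does this in the database. Below is a hedged client-side sketch of the same idea using Turf.js; the bow-tie coordinates are an example, not the asker's data.

```ts
// Detect self-intersections and split a "bow-tie" ring into valid polygons.
import { polygon } from "@turf/helpers";
import kinks from "@turf/kinks";
import unkinkPolygon from "@turf/unkink-polygon";

// A bow-tie: the ring crosses itself at (1, 1).
const bowTie = polygon([[[0, 0], [2, 2], [2, 0], [0, 2], [0, 0]]]);

const crossings = kinks(bowTie); // FeatureCollection of self-intersection points
if (crossings.features.length > 0) {
  const repaired = unkinkPolygon(bowTie); // FeatureCollection of simple polygons
  console.log(`split into ${repaired.features.length} valid polygons`);
}
```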
We need "blending of hits from different sources", and per your documentation the recommendation is to write a custom Searcher in Java. Is there a demo of this written somewhere on GitHub? I wouldn't even know where to start. I understand I can create search "chains", preferably asynchronous, and then blend the results in Java before returning them, but how would I handle pagination, limits, etc.? This all seems very complicated for someone who doesn't know Java that well, so I'm hoping someone has already written a demo. Please? Anyone?
Thank you so much
EDIT, to make my question clearer:
We are writing a search engine that fetches data from various websites. Some websites have 10 million indexable items, other websites only 100,000. When we present the results to the end user, we want to include results from all our sources (when they match): say, 10 results from each of the websites we crawl, so that they all get an equal amount of attention on the page. If we don't do custom blending, the largest website with the most items wins all our traffic.
I understand that we can send 10 separate queries to Vespa and blend the results in our front end, but that seems very inefficient. Hence the question about a custom Searcher. Thank you so much!
That documentation covers some very advanced use cases that you don't have. Are your sources different Vespa schemas or content clusters? If so, Vespa will by default blend the hits returned from each according to their relevance scores, so there's nothing you need to do.
The two other most common use-cases are:
Some (or all) of the data sources are external, so you need to write a Searcher component to fetch the external data and turn it into a Result.
You want the data to be blended in some custom way (rather than by relevance score). If so you need to exclude the default blending Searcher (com.yahoo.prelude.searcher.BlendingSearcher) and write your own.
If you provide some more information about your use cases I can give you some code examples.
EDIT: Use grouping to solve the need explained under "EDIT" in the question (a query sketch follows these steps):
Create a "siteid" field when feeding (e.g in document processing).
Use the grouping expression all(group(siteid) each(max(10) output(summary())))
See http://docs.vespa.ai/documentation/grouping.html
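As a hedged illustration of that grouping in use: the expression is appended to the YQL sent to Vespa's HTTP search API. The endpoint, the `userquery` parameter name, and the query text below are placeholders; only the `siteid` field and the grouping expression come from the steps above.

```ts
// One Vespa query that returns up to 10 hits per site, instead of
// 10 separate queries blended in the front end.
const yql =
  "select * from sources * where userInput(@userquery) " +
  "| all(group(siteid) each(max(10) output(summary())))";

const params = new URLSearchParams({
  yql,
  userquery: "cheap flights to london", // placeholder end-user query
  hits: "0", // plain hits aren't needed; the groups carry the results
});

const response = await fetch(`http://localhost:8080/search/?${params}`);
const result = await response.json();
// result.root.children holds one group per siteid, each with up to 10 summaries.
```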
I'm planning to use Leaflet.draw as part of a special wiki with an embedded map. Users should be able to draw geo-objects that are related to one or more pages in the wiki. Like the wiki pages, the objects are saved in a database and can be modified by every user.
Problems:
How can I limit the number of editable objects to only one at a time?
How do I keep the database consistent if two users edit the same object at the same time?
How can I generate multi-objects, i.e. combine several objects (e.g. polygons) into a super-object (multi-polygon)?
Does anybody know of similar approaches to my idea?
Thanks.
You will have a single FeatureGroup for Leaflet.draw's editable objects. Simply work out which objects should be editable and which shouldn't, and add them to separate FeatureGroups, as in the sketch below.
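A minimal sketch of that setup with Leaflet and Leaflet.draw; the map setup and layer contents are placeholders. Only the group handed to the draw control's `edit` option becomes editable.

```ts
// Two FeatureGroups: one editable, one read-only background layer.
import * as L from "leaflet";
import "leaflet-draw";

const map = L.map("map").setView([51.5, -0.09], 13);

const editableItems = new L.FeatureGroup().addTo(map); // the user's current object
const readOnlyItems = new L.FeatureGroup().addTo(map); // everyone else's objects

const drawControl = new L.Control.Draw({
  edit: { featureGroup: editableItems }, // only this group is editable
});
map.addControl(drawControl);

// Newly drawn shapes go into the editable group.
map.on("draw:created", (e: any) => {
  editableItems.addLayer(e.layer);
});
```

To keep only one object editable at a time, move a layer from `readOnlyItems` into `editableItems` when the user selects it, and move it back when they finish.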
This can be handled in a few ways; have a look around at general database consistency strategies. One common option is sketched below.
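That common pattern is optimistic locking. A hedged sketch, assuming a hypothetical `geo_objects` table with a `version` column and a node-postgres client:

```ts
// Optimistic locking: the UPDATE succeeds only if nobody else has
// saved a newer version since this user loaded the object.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the environment

async function saveObject(id: number, loadedVersion: number, geojson: string): Promise<boolean> {
  const result = await pool.query(
    `UPDATE geo_objects
        SET geojson = $1, version = version + 1
      WHERE id = $2 AND version = $3`,
    [geojson, id, loadedVersion]
  );
  // 0 rows updated => another user saved first; reject and let this user re-merge.
  return result.rowCount === 1;
}
```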
I'm not sure what you mean; maybe have a look at Well-Known Text (WKT), it might help you with storage here.
We are at a decision point: which technology will be used for our highly loaded flight-deals map?
There is a simple test at http://buruki.com/gmap, but if I choose London or Moscow (they have ~200-300 flight destinations), most browsers (Firefox 3.5 and IE for sure :-) ) become extremely slow.
Right now these are simple markers and simple polylines; MarkerManager or similar tools are not used.
I would like to ask Google Maps experts: is it possible to get an almost immediate response with ~200-300 polylines and markers on the map? If yes, are there any live examples from existing projects?
PS: we already have a Silverlight implementation (http://buruki.com/map); it has great speed but also great disadvantages :-( a plugin is required, and Linux users are out of business. Is it possible to achieve the same speed (or close to it) as Silverlight with Google Maps?
Answer: yes, it is possible, not only for 200-300 but for more than 500. I worked on an airline site much like yours and attached more than 200 markers and polylines within about 5 milliseconds using Google Maps v3. I made a JavaScript file with an array holding the whole dataset (lat/long), then used a for loop to place marker images at those coordinates (the data coming from the array). No bugs have appeared so far, and everything works fine with heavy data. Thanks.
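For completeness, a minimal sketch of that approach with the Maps JavaScript API v3; the coordinates and the hub airport are placeholder data, not the answerer's file.

```ts
// Place one marker per destination and one polyline per route, all from
// a preloaded array, so there are no per-point network round trips.
declare const google: any; // loaded via the Maps JavaScript API <script> tag

const destinations = [
  { lat: 51.5074, lng: -0.1278 }, // London (placeholder data)
  { lat: 55.7558, lng: 37.6173 }, // Moscow
  { lat: 48.8566, lng: 2.3522 },  // Paris
];

const map = new google.maps.Map(document.getElementById("map"), {
  center: destinations[0],
  zoom: 4,
});

const origin = { lat: 52.52, lng: 13.405 }; // hypothetical hub airport
for (const dest of destinations) {
  new google.maps.Marker({ position: dest, map });
  new google.maps.Polyline({ path: [origin, dest], strokeWeight: 1, map });
}
```

For hundreds of markers, a clustering helper such as MarkerClusterer (the successor to the MarkerManager mentioned in the question) also keeps the map responsive.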