Efficient retrieval of lat-lon points that are within a square boundary - database

I have a react-native application that populates pins on a map that have been submitted by users. The front end gets the corners of the window and then the back end goes through each pin to check if it falls within the boundary, and returns the ones that do.
This is taking too long on the backend and I want to ask the community for ideas, because I doubt I have the best one.
My idea is to store tables of pins grouped by quadrant, effectively a cache, so that I can return the pins from the relevant quadrants in near-constant time.
Is there a simpler way to do this?
Maybe using NoSQL?
🙏🏻

A month later: it seems geohashing is probably the best way, and AWS has a library for handling this automatically with DynamoDB. It takes the lat/lon corners of the screen and returns the items from the DB that fall within the view, in what I assume is near-constant time, since that's the whole point of geohashing: performance that works at scale.
https://www.npmjs.com/package/dynamodb-geo
https://aws.amazon.com/blogs/compute/implementing-geohashing-at-scale-in-serverless-web-applications/
Otherwise, a geohashing library built for serving mobile apps likely already exists for your stack.
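For illustration, a bounding-box query with the dynamodb-geo package looks roughly like this sketch (the table name "Pins" and the hash key length are illustrative choices, not prescriptions):

import * as AWS from "aws-sdk";
import * as ddbGeo from "dynamodb-geo";

const ddb = new AWS.DynamoDB();
const config = new ddbGeo.GeoDataManagerConfiguration(ddb, "Pins"); // table name assumed
config.hashKeyLength = 5; // must match how the table was written; tune for pin density

const manager = new ddbGeo.GeoDataManager(config);

// The front end sends the visible corners of the map window.
async function pinsInView(sw: { lat: number; lng: number },
                          ne: { lat: number; lng: number }) {
    return manager.queryRectangle({
        MinPoint: { latitude: sw.lat, longitude: sw.lng },
        MaxPoint: { latitude: ne.lat, longitude: ne.lng },
    });
}

The library turns the rectangle into a handful of indexed range queries over geohash cells, so the cost scales with the cells covered and the items returned rather than with the total number of pins.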

Related

REST API for a game

I am developing a distributed web application to let any user play the Klondike (solitaire) game. I want to develop a REST API, but I am not sure what the resource URLs should look like.
At the server, I have the 'game', 'stock', 'waste', 'tableau' and 'foundation' classes, among others. The game class has the methods moveFromStockToWaste, moveFromTableauToTableau..., which implement the moves of the game.
What I have read about REST APIs is that the resource URLs should form a hierarchy of nouns, while the operations (verbs) on those nouns are the HTTP methods (GET, PUT, POST, PATCH).
I am not sure the way to move a card from one tableau pile to another via the REST API should look like this, though, since the moveFromTableauToTableau resource is a verb and not a noun:
UPDATE /player/{playerId}/game/{gameId}/moveFromTableauToTableau
Another way I have thought of is having tableau pile resources like this:
URL: /player/{playerId}/game/{gameId}/TableauPile/1/
which in turn have resources such as the number of face-down cards and the upturned cards (all the information needed about the tableaus).
Then update this tableau pile resource by deleting the last card:
DELETE /player/{playerId}/game/{gameId}/TableauPile/1/upTurnedCard/3
And then put the deleted card in the target pile, passing the card's suit and value:
POST /player/{playerId}/game/{gameId}/TableauPile/3/upTurnedCard
But this way the REST API would allow moving a card from the tableau to the waste, and that is not a valid move.
I always find designing a REST API a pretty tricky task.
The second approach seems cleaner in naming-convention terms, but I think you should never compromise the integrity of your target system to be compliant with such things. If you allow one atomic operation to be performed as two HTTP calls, it is in fact no longer atomic, and you expose your system to an unpredictable state in case of network failure or if either call fails for some reason. Avoiding this kind of problem must be the top priority.
One idea could be to think of moves in terms of a moves collection. So for a game you have a moves resource, and you can then refine the nature of the move with some additional request parameters, such as:
POST /players/{playerId}/games/{gameId}/moves?type=STOCK_TO_WASTE
Body:
{
"src": "stock",
"dest": "waste"
}
This way you should have enough flexibility to handle the different types of move.
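For illustration, a single-call move endpoint might look like this sketch (Express; the Game interface and loadGame are hypothetical stand-ins for your domain logic):

import express from "express";

// Hypothetical domain types; the real rules live server-side.
type Move = { type: string; src: string; dest: string };
interface Game {
    isLegalMove(m: Move): boolean;
    applyMove(m: Move): void;
    playerView(): unknown; // the open information only
}
declare function loadGame(playerId: string, gameId: string): Game;

const app = express();
app.use(express.json());

// One HTTP call per move keeps the operation atomic.
app.post("/players/:playerId/games/:gameId/moves", (req, res) => {
    const move: Move = req.body;
    const game = loadGame(req.params.playerId, req.params.gameId);
    if (!game.isLegalMove(move)) {
        res.status(409).json({ error: "illegal move" }); // state untouched
        return;
    }
    game.applyMove(move);
    res.status(201).json(game.playerView());
});

Because the move either succeeds or is rejected in one request, a network failure can never leave the game half-moved.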
Besides, may I suggest you use the plural form for resource naming:
player -> players
game -> games
so that
GET /players naturally means "give me all values of the player resource"
GET /players/1 means "give me the player from the players resource with the restriction playerId=1"
I am developing a distributed web application to let any user play the Klondike (solitaire) game. I want to develop a REST API, but I am not sure what the resource URLs should look like.
Oracle: how would you implement this as a website?
At the server, I have the 'game', 'stock', 'waste', 'tableau' and 'foundation' classes, among others. The game class has the methods moveFromStockToWaste, moveFromTableauToTableau..., which implement the moves of the game.
One important thing to realize is that the classes in your implementation don't matter; REST is about manipulating documents, the changes that happen to the game are side effects of manipulating the "documents".
In other words, the REST API is a mask that your game wears so that it looks like a web site.
See Jim Webber's talk DDD In the Large.
Klondike is effectively a state machine; any given tableau has some limited number of legal moves to make, each of which takes you to a new position.
So one way you might model the API is a representation of the tableau plus affordances (links) for each move, and the game progresses from one state to the next as you follow the links that describe a possible legal move.
There are "only" 8*10^67 or so deals to worry about, and for each of them you effectively have a graph of all of the reachable positions, and order them by traversal order, and then just link them all together.
/76543210987654321098765432109876543210987654321098765432109876543210/0
/76543210987654321098765432109876543210987654321098765432109876543210/1
/76543210987654321098765432109876543210987654321098765432109876543210/2
/76543210987654321098765432109876543210987654321098765432109876543210/3
And so on.
It's not an impossible arrangement, although it may be impractical, and since the URL describes the entire state of the game, the player has access to hidden state.
I'd suggest first trying this approach on something less complicated, like tic-tac-toe.
Hiding the state is relatively straightforward, because the mapping of the current game to a specific seed can be done on the server. That is, you send a POST to the start-game endpoint, which generates some random identifier, maps that identifier to a seed position, and off you go.
But a potential problem in this design is that HTTP is a stateless protocol; there's no way for the server to "know", when the player requests GET /games/000/152, that the client was previously in a position that could legally move to position 152. You can make the URI hard to guess, but that's about it.
What you likely want is the ability to ensure that the moves made by the player are legal, which means that the server needs to be tracking the current state of the game, and the player gets a view of the open information only.
The simplest HTML model of this would have the representation of the game show the information that the player is allowed, and a form with a list of the legal moves. The player selects one move and submits the form, which is a POST back to the game resource (directly back to the same resource, because we want the cache invalidation properties). Your implementation could then check that the received move is legal, refresh its own local state, and send an appropriate response.
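As a rough sketch (the shapes are invented for illustration), the representation returned by GET might carry the open state plus the legal moves as affordances:

// Hypothetical shape of the representation returned by GET on the game:
// the open information plus the legal moves as affordances.
interface GameView {
    tableaus: { faceDownCount: number; faceUp: string[] }[];
    wasteTop: string | null;
    legalMoves: { type: string; src: string; dest: string; href: string }[];
}

const example: GameView = {
    tableaus: [ { faceDownCount: 2, faceUp: ["7H"] } ],
    wasteTop: "KS",
    legalMoves: [
        { type: "TABLEAU_TO_TABLEAU", src: "1", dest: "3",
          href: "/players/42/games/7/moves" },
    ],
};

The client only ever submits a move it was offered, and the server still re-validates on POST, since requests can be forged.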
That's the basic pattern we should be considering; GET the game, then send an unsafe request to modify the server's copy of the game.
The basic plan isn't all that different if you want to use a remote authoring approach. GET fetches a representation of the revealed information, the client makes legal edits to that representation, and PUTs the new representation to the same URL. The server verifies that the new position is reachable from the old position, and accepts the move, using its own copy of the hidden information to update the representation of the player's view.
(Pay careful attention to the metadata used in the response to PUT; the server is supposed to communicate carefully whether the new representation is adopted as is, or whether the server has transformed the proposed representation to make it consistent with its own constraints.)
You could, of course, also use PATCH to communicate the changes made to the representation by the client.
If messages were lost or duplicated, the client's view and the server's view might not be aligned. So you may want your representation of the game to include a clock/timer/turn number, so that the server can be certain that the player's move is intended for the current state of the game.
EDIT: As Roman notes, HTTP already has the concept of validators built in, which lets you lift data from your domain-specific clock into the headers, so that generic components can understand conditional requests and act on them appropriately.
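For illustration, the client side of that conditional PUT might look like this sketch (the URL shape and representation are assumed; ETag and If-Match are standard HTTP):

// The client GETs the game, edits the representation locally, and PUTs it
// back with If-Match so a stale or replayed move is rejected by the server.
async function submitEdit(gameUrl: string, edited: unknown): Promise<Response> {
    const current = await fetch(gameUrl);
    const etag = current.headers.get("ETag");

    // If the server's state has moved on, this PUT fails with
    // 412 Precondition Failed; re-GET and retry against the fresh state.
    return fetch(gameUrl, {
        method: "PUT",
        headers: {
            "Content-Type": "application/json",
            ...(etag ? { "If-Match": etag } : {}),
        },
        body: JSON.stringify(edited),
    });
}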
Another way of thinking about the game is to consider event sourcing; the client and the server are taking turns appending entries to a log, and the view of the game itself is computed by applying the events in the log. The client's moves would be limited to the set that manipulate the open information, the server's moves would reveal previously hidden information.
So you could use Atom Pub, or something very similar to it, to write new entries into the log. This in effect gives you two different representations of the game: the view, which shows you what you see when you look at the tableau, and the feed, which shows you the moves made to reach that point.
(If you squint, you'll see that this is really just a variation on "let the client pick a legal move".)
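As a sketch of that log-replay idea (the event shapes are invented for illustration; a real Klondike log would carry more detail):

// The view of the game is a fold over the log of events.
type GameEvent =
    | { kind: "MOVE"; src: string; dest: string }     // appended by the client
    | { kind: "REVEAL"; pile: string; card: string }; // appended by the server

function replay(log: GameEvent[]): Record<string, string[]> {
    const piles: Record<string, string[]> = { stock: [], waste: [] };
    for (const e of log) {
        if (e.kind === "MOVE") {
            const card = piles[e.src]?.pop();
            if (card) (piles[e.dest] ??= []).push(card);
        } else {
            (piles[e.pile] ??= []).push(e.card); // hidden information revealed
        }
    }
    return piles;
}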
You could, I suppose, treat each of the elements in your domain model as a resource, and try to design an API to allow the client to manipulate those directly, but it isn't at all clear to me what benefit you get from that.

How to make JSON loads faster with large data (on HTTP or WebPage)

Requesting the page (over HTTP or as a web page) is very slow or even crashes unless I load my JSON with less data. I really need to solve this, since sooner or later I will be using large amounts of data frequently. Here is my JSON data:
Notes:
1. The JSON holds only strings and integers.
2. I view my JSON as a tree with the JSONView plugin for Google Chrome.
I am using Angular and Node.js. Thanks.
A quick summary of all the things that come to my mind:
I had a similar issue once. My solutions may require changes to the UI.
Pagination
I doubt you can display that much data at one time, so the strategy should be to divide your data into small amounts and then only load more when the client asks for it.
This way, the whole dataset is no longer held in RAM as it is currently. This is how forums work (only 20 topics at a time).
Just imagine if Stack Overflow made you load the whole history of questions on the main page; how many GB would your browser need just for that?
You can use pagination in a classic way (buttons with page numbers, like Google) or with infinite scroll, as you prefer.
For that you need to adapt your API and keep track, in your front end, of the index of the pages you have already loaded. There are plenty of examples in AngularJS.
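For example, the back end of such pagination might look like this minimal sketch (Express; fetchRows is a hypothetical data-access function):

import express from "express";

declare function fetchRows(offset: number, limit: number): Promise<unknown[]>; // hypothetical data access

const app = express();

app.get("/api/items", async (req, res) => {
    const page = Math.max(0, Number(req.query.page) || 0);
    const size = Math.min(100, Number(req.query.size) || 20); // hard cap per request
    const rows = await fetchRows(page * size, size);
    res.json({ page, size, rows }); // the front end asks for page + 1 on scroll or click
});

The hard cap matters: the server should never ship the whole dataset, no matter what the client asks for.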
Only show the beginning of the data
When you look at Facebook comments, you may see a "show more" button. In their case, maybe it's there so as not to break the UI, but it can also be used to avoid loading too much data.
You can display only the main lines of your data (titles or the like) and add a button so the user can load more details on demand.
In your data model, the cost seems to be in the second level of "C". Just load data down to that second level, and download the remaining part (for that object) only if the user asks for it.
Once again, no need to overload: your client's RAM will be thankful, and your client's mobile 3G too.
Optimize your data structure
If this is still not enough:
As StefanArya said in a comment, remove the "I" attribute, which is redundant with the JSON key; you can use Object.keys() to get the key names instead.
You also may not need that much precision on your floats.
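As a sketch of both ideas, assuming records keyed by id that carry a redundant "I" copy of the key and one over-precise float (the shape is assumed, since the original JSON isn't shown):

// Records keyed by id, each carrying a redundant "I" copy of its own key
// and an over-precise float (shape assumed).
type Fat = Record<string, { I: string; value: number; label: string }>;

function slim(data: Fat): Record<string, { value: number; label: string }> {
    const out: Record<string, { value: number; label: string }> = {};
    for (const key of Object.keys(data)) {
        const { value, label } = data[key]; // "I" is simply not copied over
        out[key] = { value: Math.round(value * 100) / 100, label }; // 2 decimals
    }
    return out;
}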
If I see any other ideas, I'll edit this post later.

Fail to improve pose of point cloud with ADF origin

I save the point clouds of a scene and their quaternions in a PCD file.
First, I only used the pose w.r.t. the device start (see second image) to get the quaternion. I discovered a drifting problem, which I mentioned here.
Therefore, I learned the scene with area learning (see first image) by walking around the table.
After that, I'm loading the area description file (ADF) to overcome the drifting. I wait for the first loop closure/localization in the onPoseAvailable callback.
Then in the onXyzIjAvailable callback I use its timestamp to get a valid pose w.r.t to the ADF origin (baseFrame = COORDINATE_FRAME_AREA_DESCRIPTION and targetFrame = COORDINATE_FRAME_DEVICE).
And I save the poseAtTime(xyzIj.timestamp) and the xyzIj in a *.PCD file.
But the drifting seems to get even worse (see third image). The result is better oriented to the origin, but the point clouds aren't as close together as in the image without the ADF.
Am I doing something wrong?
Is there any way to improve this?
You should set up the pose callback so that only poses with respect to the ADF base frame are returned; poses with respect to start-of-service should not be returned. The drift will not go away, but it will become minimal and will autocorrect. Note that the ADF needs to be well trained to keep up the pose return rate.

Showing 1 million rows in a browser

Our utility has one single table with 10 to 50 million rows. There may be a case where we need to show 50 million rows in a single HTML client page. To show the rows in the browser we use jQuery in the UI.
To retrieve the rows we use Hibernate, and we use Spring for MVC. I am looking for the best practice for retrieving the rows and showing them in the UI. Should I retrieve a bulk of a thousand or two thousand rows at a time in Hibernate and buffer them to the web client, or is there a better practice?
The best practice is not to do this. It will explode the browser memory and rendering engine, and will take too much time to load.
Add a search form to your webapp, make the end user search for what they're interested in, and only display the first N search results, just like Google does.
Nobody is able to do anything meaningful with 50 million rows without searching anyway.
I think you should use scroll pagination (when the user gets close to the bottom of the page, make an AJAX call and load more data).
A quick web search will turn up examples and demos of this.
If your data is tabular, you can use jqGrid.
Handling a large quantity of data in an application must be done via virtualization. While it's true that the user will be overwhelmed by millions of records, it's not exactly true that they can't do anything with them, nor that such quantities of data are unfathomable.
In practice, and depending on what you're doing, you'll find that this limit creeps up on you with just thousands of records, which frankly is very little data. Data-centric apps simply need a different approach altogether if they are going to work in a browser and perform well.
The way we do this is quite simple but not all that straightforward.
It helps to decide on a fixed row height, because you then know the total height of the scrollable content. You render into the container only the subset of records that can be visible at any given moment and position them accordingly (based on scroll events). There are more and less efficient ways of doing this.
The end goal remains the same. You basically need to cull everything which isn't directly visible on screen in such a way that the browser isn't paying the cost of memory and layout logic for the app to be responsive. This is common practice in game development, only the world that is visible right now on screen is ever present at any given moment. So that's what you need to do to get large quantities of stuff to behave well.
In the context of a browser, anything which contributes to memory use and layout/render cost needs to go away if it isn't absolutely vital.
You can also stagger and smear recalculations so that you don't incur the full cost of whatever is causing the app to degrade on every small update. The user can afford to wait a second, as long as the app remains responsive.
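A minimal sketch of that culling idea, assuming a fixed row height and plain DOM (the element ids and the allRows source are illustrative):

declare const allRows: string[]; // your data source

const ROW_HEIGHT = 24; // px; a fixed height keeps the offsets simple math
const container = document.getElementById("grid")!;  // scrollable, fixed height
const canvas = document.getElementById("rows")!;     // tall, positioned inner element

function render() {
    // Keep the scrollbar honest: the inner element is as tall as all rows.
    canvas.style.height = `${allRows.length * ROW_HEIGHT}px`;
    const first = Math.floor(container.scrollTop / ROW_HEIGHT);
    const visible = Math.ceil(container.clientHeight / ROW_HEIGHT) + 1;

    canvas.innerHTML = ""; // cull everything that isn't on screen
    allRows.slice(first, first + visible).forEach((text, i) => {
        const row = document.createElement("div");
        row.textContent = text;
        row.style.cssText =
            `position:absolute;top:${(first + i) * ROW_HEIGHT}px;height:${ROW_HEIGHT}px;`;
        canvas.appendChild(row);
    });
}

container.addEventListener("scroll", render);
render();

Only a screenful of DOM nodes ever exists, so the browser's memory and layout cost stays flat no matter how many rows the dataset holds.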

time to create a bot for second life using linden scripting language?

How much time does it take to create a simple dance-performing bot in Second Life using Linden Scripting Language? I have no prior knowledge of LSL, but I know various object-oriented and event-driven programming languages.
Well, it's pretty simple to animate an avatar: you'll need a dance animation (those are pretty easy to find, or you could create your own), put it in a prim (the basic building object in SL), and then create a simple script which first requests permission to animate the desired avatar:
llRequestPermissions(agent_key, PERMISSION_TRIGGER_ANIMATION);
You receive the response in the run_time_permissions event and trigger your animation by name:
run_time_permissions(integer perm)
{
    // only animate once the permission has actually been granted
    if (perm & PERMISSION_TRIGGER_ANIMATION)
        llStartAnimation("my animation");
}
Those are the essentials; you'll probably want to request permission when an avatar touches your object and stop the animation on a second touch... or you could request permission for each avatar within a certain range.
As for the "bot" part, Second Life viewer code is open source; if you don't want to build it yourself, there are several customizable bots available. Or you could simply run an official SL viewer and leave it on; there is a way to run several instances of SL viewer at the same time. It all depends on what exactly you need it for.
The official LSL portal is a good reference, though I prefer the slightly outdated LSL wiki.
Slight terminology mismatch: making an object perform a dance is currently known as "puppetry" in a Second Life context; the term "bot" currently means control of an avatar by an external script API.
Anyway, in one instance it took me about two hours to write, when I did one for a teddy bear a few weeks back, but that was after learning a lot from taking apart some old ones. I never did finish making the dance; it just waggles the legs or hugs with the arms, but the script can handle however many steps and parts you can cram into memory.
Puppetry of objects has not improved much in the past decade. It is highly restricted by movement update rates and script limitations. Movement is often delayed under server load, and the client doesn't always get updates, which produces pauses and skips in varying measure. Scripts have a maximum size of 64k, which should be plenty but in actuality runs out fast with the convolutions necessary in LSL. Moving each linked prim in an object separately used to require a script in each prim, until new functions were introduced to apply attributes by link number; still, many objects use old scripts which may never be updated. Laggy puppets make for a pitiful show, but most users would not know how to identify a good puppetry script.
The popular way to start making a puppet script is to find an older open-source puppet script online and update it to work from one script. Some arcane versions are given as 'master' and 'slave' scripts, which need to be merged by placing the slave actions as a function inside the master, swapping llMessageLinked( ) for the function name. Others use an identical script in each prim. I said popular, not the simplest or easiest, and it will never be the best.
Scripting from scratch, the active flow needs to go in a timer event with nothing else in it. Use a different state for animating if a timer is also needed while waiting, because animating is a heavy activity and you don't need any more ifs or buts.
The most crucial task is to create a loop that builds the parameters from a list of link numbers, positions, and rotations into one parameter list before the llSetLinkPrimitiveParamsFast( ) call. Yes, that's what they called it, because it's a heavy list-based function; you may call it SLPPF in-world, but not in the script. Because SLPPF requires certain constants for each parameter, the parameter list for each step needs to include PRIM_LINK_TARGET, link number, PRIM_POS_LOCAL, position, PRIM_ROT_LOCAL, rotation for each linked part in the animation step.
Here's an example of putting the list for a single puppetry step into SLPPF.
list currentstep; // triplets of link number, local position, local rotation for this step
list parameters;
integer index;
integer total = llGetListLength( currentstep ); // total number of elements in the step list
while ( index < total ) {
    // build one PRIM_LINK_TARGET block per linked part
    parameters += [
        PRIM_LINK_TARGET, llList2Integer( currentstep, index ),
        PRIM_POS_LOCAL, llList2Vector( currentstep, index + 1 ),
        PRIM_ROT_LOCAL, llList2Rot( currentstep, index + 2 )
    ];
    index += 3;
}
// the leading link number is overridden by each PRIM_LINK_TARGET block
llSetLinkPrimitiveParamsFast( LINK_THIS, parameters );
How you create your currentstep list is another matter, but the smoothest movement over many linked parts will only be possible if the script isn't shuffling lists around. So if running the timer at 0.2 seconds isn't any improvement over 0.3, it's because you've told LSL to shovel too many lists. This loop with its three list calls may handle about 20 links at 0.1 seconds if the weather's good.
That's actually the worst of it over; just be careful of memory if cramming too many step lists into memory at once. Oh, and if your object just vanishes completely, it will be hanging around near <0,0,0>, because a 1 landed in the PRIM_LINK_TARGET slot.
