Firstly, let me explain what I'm building:
I have a D3.js Force Layout graph which is rooted at the center and has a bunch of nodes spread around it. The center node is an Entity of some sort; the nodes around it are other Entities which are somehow related to the root, and the edges are the actual relations (i.e. how the two are related).
The outer nodes can be clicked to center the target Entity and load its relations.
This graph is "Egocentric" in the sense that every time a node is clicked, it becomes the center, and only relations directly involved with itself are displayed.
My Setup, in case any of it matters:
I'm serving an API through Node.js, which translates requests into queries to a CouchDB server with huge data sets.
D3.js is used for layout, and aside from jQuery and Bootstrap, I'm not using any other client-side libraries. If any would help with this caching task, I'm open to suggestions :)
My Ideas:
I could easily grab a few levels of the graph each time (recursing through the process of listing and expanding children a few times), but since clicking on any given node loads completely unrelated data, there is no guarantee the prefetched data will overlap much with what was loaded for the root. That seems like a complete waste, and actually a step in the opposite direction -- I'd end up doing more processing this way!
I can easily maintain a hash table of Entities that have already been retrieved, and check it before requesting data for an entity from the server. I'll probably end up doing this regardless of the caching strategy I implement, since it's a really simple way of reducing queries.
Now, how do you suggest I cache this data?
Is there any super-effective strategy you can think of for doing this kind of caching? Both server- and client-side options are greatly welcomed. A ton of data is involved in this process, and any reduction in querying/processing puts me miles ahead of the game.
Thanks!
On the client side I would have nodes, and have their children either be an array of children or a function that serves as a promise of those children. When you click a given node: if you have its data, display it immediately; else send off an AJAX request that will fill it.
Whenever you display a node (not centered), build an asynchronous list of AJAX requests for the children of the displayed nodes and start issuing them. That way, when the user clicks, there is a chance that you already have it cached. And if not, well, you tried, and it cost them nothing.
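A minimal sketch of that idea in TypeScript (not the asker's code; the /api/relations/:id endpoint and GraphNode shape are assumptions). The same map doubles as the hash table of already-retrieved Entities mentioned in the question, since a pending promise is cached exactly like a finished one, so duplicate requests never go out:

interface GraphNode { id: string; name: string; }

// Hypothetical endpoint wrapping the Node.js API in front of CouchDB.
async function fetchRelations(id: string): Promise<GraphNode[]> {
  const res = await fetch(`/api/relations/${id}`);
  return res.json();
}

// Promise-of-children cache: also serves as the "already retrieved" table.
const childrenCache = new Map<string, Promise<GraphNode[]>>();

function childrenOf(id: string): Promise<GraphNode[]> {
  let pending = childrenCache.get(id);
  if (!pending) {
    pending = fetchRelations(id); // the request is kicked off exactly once
    childrenCache.set(id, pending);
  }
  return pending;
}

// When non-centered nodes are rendered, warm the cache speculatively.
function prefetchVisible(nodes: GraphNode[]): void {
  for (const node of nodes) {
    childrenOf(node.id).catch(() => childrenCache.delete(node.id)); // allow retry
  }
}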
Once you have it working, decide how many levels deep it makes sense to go. My guess is that the magic number is likely to be 1. Beyond that the return in responsiveness falls off rapidly, while the server load rises rapidly. But having clicks come back ASAP is a pretty big UI win.
I think you need to do two things:
Reduce the number of requests you make
Reduce the cost of requests
As btilly points out, you're probably best off requesting the related nodes for each visible node, so that if one is clicked the visualisation responds immediately -- you don't want query-plus-transit time as response lag.
However, if you feel a great need to reduce the number of requests, that suggests your requests themselves are too costly, since the total load is requestCost * numRequests. Consider finding ways to pre-calculate the set of nodes related to each node, so that a request becomes a simple read rather than a full DB query. If that sounds hard, it's what Google does every time you search for something new; they can't search the internet every time you start typing, so they do it ahead of time and cache.
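In CouchDB specifically, that pre-calculation could be a map view over the relation documents, so each request becomes an indexed read. A sketch, assuming relation docs shaped like {source, target} (your actual document shape may differ):

// Design document: CouchDB evaluates the map function string server-side,
// emitting one row per direction of each edge, keyed by entity id.
const designDoc = {
  _id: '_design/relations',
  views: {
    neighbors: {
      map: `function (doc) {
        if (doc.source && doc.target) {
          emit(doc.source, doc.target);
          emit(doc.target, doc.source);
        }
      }`,
    },
  },
};

// After PUTting the design doc once, each lookup is an indexed read:
//   GET /mydb/_design/relations/_view/neighbors?key="<entityId>"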
This may mean some amount of denormalisation; with both a cache and a query, there is no guarantee the two stay in sync. The question there is whether your dataset changes: is it write once, read many?
To minimise the space needed by all these nodes and their relations, treat it more like a particle-interaction problem: by partitioning the space you may be able to group nodes so that you only need to query a group of nodes for its aggregate neighbours, and store that. That way each request does a much smaller filter rather than a full DB query. If it's O(n log n) and you make n a hundred times smaller, it's more than 100x faster.
Related
I'm in the design stage of my App DB, running on Firebase.
I wonder what the best approach is to store my data.
I know that Firebase only returns the first layer of the document unless you explicitly query for a sub-collection, and I want to reduce extra reads as much as possible to save $.
In short, my DB holds devices, and each device can have messages (call-backs) that the server needs to send back to it, one at a time, in FIFO order.
The user sees the waiting call-backs when watching a device in the App.
My first thought was to hold a collection of messages for each device, but this approach will require two reads per call-back per device when the server wants to know which call-back to send next.
My second approach is to hold the call-backs in an array in each device document, but I'm not sure it's the right approach; it sounds messy to me for some reason.
My third option is to hold a collection of call-backs with a deviceID field in each call-back. The drawback I see in this approach is that I need to perform some kind of "join" when viewing a device and its call-backs in the App, and the search time for the next call-back increases, although it stays reasonable (log(n) time).
Talking numbers: theoretically the system can store tens of thousands of devices, each device can have 10-100 call-backs in line, and each call-back can fire every 15 seconds or more.
My first thought was to hold a collection of messages for each device, but this approach will require two reads per call-back per device when the server wants to know which call-back to send next.
If you need to query the messages that correspond to a device, then storing them in a collection is a good option to go ahead with, because you can perform simple and compound queries.
My second approach is to hold the call-backs in an array in each device document, but I'm not sure it's the right approach; it sounds messy to me for some reason.
If you only need to list them, and not query them, then that's a really good idea in my opinion, but only if the size of the document can stay below the maximum limit. There are limits on how much data you can put into a single document. According to the official documentation regarding usage and limits:
Maximum size for a document: 1 MiB (1,048,576 bytes)
As you can see, you are limited to 1 MiB of data in a single document. For text that's quite a lot, but as your array grows, be careful about this limitation.
My third option is to hold a collection of call-backs with a deviceID field in each call-back. The drawback I see in this approach is that I need to perform some kind of "join" when viewing a device and its call-backs in the App, and the search time for the next call-back increases, although it stays reasonable (log(n) time).
Talking about collections again brings us back to the first point. Remember, we usually structure a Firestore database according to the queries we want to perform. So it's up to you to decide whether this approach is best for you.
Always keep in mind that you are billed according to the number of documents a query returns.
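For illustration, option 3's "next call-back" fetch could look like this with the Firestore Admin SDK (a sketch: the callbacks collection and the deviceId/createdAt fields are my assumptions, not your schema, and this where + orderBy combination needs a composite index):

import { initializeApp } from 'firebase-admin/app';
import { getFirestore } from 'firebase-admin/firestore';

initializeApp();
const db = getFirestore();

async function nextCallback(deviceId: string) {
  const snap = await db
    .collection('callbacks')
    .where('deviceId', '==', deviceId)
    .orderBy('createdAt', 'asc') // FIFO: oldest waiting call-back first
    .limit(1)                    // only one document is read...
    .get();
  return snap.empty ? null : snap.docs[0].data();
}

With limit(1), each poll is billed as a single document read, which keeps the per-call-back cost down.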
I'm starting out with Firebase Database for the first time, and I think I've got a decent structure planned out, but I'm worried about the number of child nodes I might end up with.
Is there a recommended limit, or a known-good number of child values that can be added to a node without running into noticeable performance problems? I haven't had much database experience, and I haven't been able to find any information on what an acceptable value would be, so I have no idea whether my planned structure will scale well.
As a rough estimate, I'm expecting a maximum of around 30,000 children all-in. I'll only be requesting data from around 10 of those, but as far as I know Firebase retrieves the entire node before filtering out any results, which is why I'm worried about the performance impact of retrieving the entire node. Any help with this would be massively appreciated! Thanks!
As a rough estimate, I'm expecting a maximum of around 30,000 children all-in.
That's not really a very large number of child nodes.
as far as I know Firebase retrieves the entire node before filtering out any results
If you query the database using a field with an index, the nodes will be filtered on the server. You can create an index to avoid performance problems for larger numbers of child nodes.
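For example (a sketch, not your actual schema): with ".indexOn": ["category"] declared under /items in your security rules, a query like this filters on the server and downloads only the ten matching children ("<project>" is a placeholder):

import { initializeApp } from 'firebase-admin/app';
import { getDatabase } from 'firebase-admin/database';

const app = initializeApp({ databaseURL: 'https://<project>.firebaseio.com' });
const db = getDatabase(app);

async function tenMatchingItems(category: string) {
  const snap = await db
    .ref('items')
    .orderByChild('category') // needs ".indexOn": ["category"] under /items
    .equalTo(category)
    .limitToFirst(10)         // only these children cross the wire
    .once('value');
  return snap.val();
}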
In my application I run a cron job to loop over all users (2,500 users) and choose an item for every user out of 4k items, considering that:
- choosing the item is based on some user info,
- I need to make sure that each user takes a unique item that wasn't taken by anyone else, so the relation is one-to-one
To achieve this I have to run the cron job and loop over the users one by one, sequentially picking an item for each, removing it from the list (so it can't be chosen by the next user(s)), then moving to the next user.
In my system the number of users/items is getting bigger every single day, and this cron job now takes 2 hours to set items for all users.
I need to improve this. One of the things I've thought about is using threads, but I can't do that since I'm using automatic scaling, so I started thinking about push queues. When the cron job runs, it would loop like this:
for (User user : users) {
    getMyItem(user.getId());
}
where getMyItem pushes a task to a servlet that handles it and chooses the best item for this user based on their data.
Let's say I start doing that: what would be the best/most robust solution to avoid setting an item to more than one user?
Since I'm using basic scaling with 8 instances, I can't rely on static variables.
One of the things that came to mind is to create a table in the DB that accepts only unique items, and insert the taken items into it; if the insertion succeeds, it means nobody else took this item, so I can assign it to that user. But this lowers performance a bit, because I'd need a DB write operation with every call (which I want to avoid).
I also thought about Memcache. It's really fast but not robust enough: if I save a Set of items into it and more than one thread tries to update that Set at the same time, only one thread will be able to save its data, and the other threads' data might be overwritten and lost.
I hope you guys can help me find a solution to this problem. Thanks in advance :)
First, I would advise against using memcache alone for such an algorithm -- the key thing to remember about memcache is that it is volatile and might disappear at any time, breaking the algorithm.
From Service levels:
Note: Whether shared or dedicated, memcache is not durable storage. Keys can be evicted when the cache fills up, according to the cache's LRU policy. Changes in the cache configuration or datacenter maintenance events can also flush some or all of the cache.
And from How cached data expires:
Under rare circumstances, values can also disappear from the cache prior to expiration for reasons other than memory pressure. While memcache is resilient to server failures, memcache values are not saved to disk, so a service failure can cause values to become unavailable.
I'd suggest adding a property to the item entities, let's call it assigned, unset by default (or set to null/None) and, when an item is assigned to a user, set to that user's key or key ID. This allows you:
to query for unassigned items when you want to make assignments
to skip items recently assigned but still showing up in the query results due to eventual consistency, so no need to struggle for consistency
to be certain that an item can uniquely be assigned to only a single user
to easily find items assigned to a certain user if/when you're doing per-user processing of items, eventually setting the assigned property to a known value signifying done when its processing completes
Note: you may need a one-time migration task to update this assigned property for any existing entities when you first deploy the solution, to have these entities included in the query index, otherwise they would not show up in the query results.
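A hedged sketch of how that could look, shown here with the Node.js Datastore client for brevity (the asker is on Java, and the Item kind, assigned field, and helper name are illustrative, not the actual schema). The transaction's strongly consistent re-read is what guarantees an item is assigned to only one user:

import { Datastore } from '@google-cloud/datastore';

const datastore = new Datastore();

async function claimItem(userId: string): Promise<string | null> {
  // Eventually consistent query for a few unassigned candidates.
  const [candidates] = await datastore.runQuery(
    datastore.createQuery('Item').filter('assigned', '=', null).limit(10));
  for (const candidate of candidates) {
    const key = candidate[datastore.KEY];
    const tx = datastore.transaction();
    await tx.run();
    try {
      const [item] = await tx.get(key);   // strongly consistent re-read
      if (item.assigned) {                // stale query result: already taken
        await tx.rollback();
        continue;
      }
      item.assigned = userId;
      tx.save({ key, data: item });
      await tx.commit();                  // fails if another worker raced us
      return key.name ?? String(key.id);
    } catch {
      await tx.rollback().catch(() => {}); // contention: try the next candidate
    }
  }
  return null;                            // no free item found this round
}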
As for the growing execution time of the cron jobs: just split the work into multiple fixed-size batches (as many as needed) to be performed in separate requests, typically push tasks. The usual approach for splitting is using query cursors. The cron job would only trigger enqueueing the initial batch processing task, which would then enqueue an additional such task if there are remaining batches for processing.
To get a general idea of how such a solution works, take a peek at Google appengine: Task queue performance (it's Python, but the general idea is the same).
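Continuing the sketch above, the cursor-based batching could look like this; enqueueContinuation is a hypothetical helper that enqueues the next push task carrying the cursor:

declare function enqueueContinuation(cursor: string): Promise<void>; // hypothetical

const BATCH_SIZE = 100;

async function processBatch(startCursor?: string): Promise<void> {
  let query = datastore.createQuery('User').limit(BATCH_SIZE);
  if (startCursor) query = query.start(startCursor); // resume where we left off
  const [users, info] = await datastore.runQuery(query);
  for (const user of users) {
    await claimItem(String(user[datastore.KEY].id)); // one unique item per user
  }
  if (info.moreResults !== Datastore.NO_MORE_RESULTS && info.endCursor) {
    await enqueueContinuation(info.endCursor);       // hand off the next batch
  }
}

The cron job then only enqueues processBatch() with no cursor, so each request stays short regardless of how much the user base grows.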
If you are planning to enqueue push jobs from a cron and you want the jobs to update key-value pairs as an add-on to improve speed and performance, you can split the users and the items into multiple key -> (list of values) pairs, so that each push job picks a key at random (logic to pick one key out of 4 or 5), removes an item from that key's list, and updates the key again; try to take a lock before working on that part. Example key-value pairs:
Userlist1: ["vijay",...]
Userlist2: ["ramana",...]
I'm developing a web application which displays a list of, let's say, "threads". The list can be sorted by the number of likes a thread has, and there can be thousands of threads in one list.
The application needs to work in a scenario where the likes of a thread can change more than 10x in a second, and the application is distributed over multiple servers.
I can't figure out an efficient way to enable paging for this sort of list, and I can't transmit the whole list, sorted by likes, to a user at once.
As soon as a user goes to page 2 of this list, the list has likely changed and may contain threads already listed on page one.
Solutions which don't work:
Storing the seen threads on the client side (could be too many on mobile)
Storing the seen threads on the Server side (too many users and threads)
Snapshotting the list in a temp database table (the data changes too frequently, and it needs to be current)
(If it matters I'm using MongoDB+c#)
How would you solve this kind of problem?
Interesting question. Unless I'm misunderstanding you, and by all means let me know if I am, it sounds like the best solution would be to implement a system that, instead of page numbers, uses timestamps. It would be similar to what many of the main APIs already do. I know Tumblr even does this on the dashboard, where this is, of course, not an unreasonable case: there can be tons of posts added in a small amount of time at peak hours, depending on how many people the user follows.
So basically, your "next page" button could just link to /threads/threadindex/1407051000, which could translate to "all the threads that were created before 2014-08-02 17:30". That makes your query super easy to implement: when you pull down the next elements, you just look for anything that occurred before the last element on the page.
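The asker is on C#, but the query has the same shape in any driver; a minimal sketch with the Node.js MongoDB driver (collection and field names are assumptions):

import { Db } from 'mongodb';

async function threadsBefore(db: Db, before: Date, pageSize = 20) {
  return db.collection('threads')
    .find({ createdAt: { $lt: before } }) // older than the last item shown
    .sort({ createdAt: -1 })              // newest first
    .limit(pageSize)
    .toArray();
}

// "Next page" then links to the createdAt of the last element on the current page.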
The downside, of course, is that it's hard to know how many new elements have been added since the user started browsing, but you could always log the start time and know that anything after it is new. It's also difficult for users to type in their own page numbers, but that's not a problem in most applications. You also need to store a timestamp for every record in your thread, but that's probably already being done, and if it's not, it's certainly not hard to implement. You'll pay something like eight extra bytes per record, but that's better than having to store anything about "seen" posts.
It's also nice because, and again this might not apply to you, but a user could bookmark a page in the list, and it would last unchanged forever since it's not relative to anything else.
This is typically handled using an OLAP cube. The idea is that you add a natural time dimension. Cubes may be too heavy for this application, but here's a summary in case someone else needs it.
OLAP cubes start with the fundamental concept of time. You have to know what time you care about to be able to make sense of the data.
You start off with a "Time" table:
Time {
    timestamp     long (PK)
    created       datetime
    last_queried  datetime
}
This basically tracks snapshots of your data. I've included a last_queried field. This should be updated with the current time any time a user asks for data based on this specific timestamp.
Now we can start talking about "Threads":
Threads {
    id             long (PK)
    identifier     long
    last_modified  datetime
    title          string
    body           string
    score          int
}
The id field is an auto-incrementing key; this is never exposed. identifier is the "unique" id for your thread. I say "unique" because there's no uniqueness constraint, and as far as the database is concerned it is not unique. Everything else in there is pretty standard... except... when you do writes you do not update this entry. In OLAP cubes you almost never modify data. Updates and inserts are explained at the end.
Now, how do we query this? You can't just directly query Threads. You need to include a star table:
ThreadStar {
    timestamp          long (FK -> Time.timestamp)
    thread_id          long (FK -> Threads.id)
    thread_identifier  long (matches Threads[thread_id].identifier)
    ((timestamp, thread_identifier) should be unique)
}
This table gives you a mapping from what time it is to what the state of all of the threads are. Given a specific timestamp you can get the state of a Thread by doing:
SELECT Threads.*
FROM Threads
JOIN ThreadStar ON Threads.id = ThreadStar.thread_id
WHERE ThreadStar.timestamp = {timestamp}
  AND Threads.identifier = {thread_identifier}
That's not too bad. How do we get a stream of threads? First we need to know what time it is. Basically you want to get the largest timestamp from Time and update Time.last_queried to the current time. You can throw a cache up in front of that that only updates every few seconds, or whatever you want. Once you have that you can get all threads:
SELECT Threads.*
FROM Threads
JOIN ThreadStar ON Threads.id = ThreadStar.thread_id
WHERE ThreadStar.timestamp = {timestamp}
ORDER BY Threads.score DESC
Nice. We've got a list of threads and the ordering is stable as the actual scores change. You can page through this at your leisure... kind of. Eventually data will be cleaned up and you'll lose your snapshot.
So this is great and all, but now you need to create or update a Thread. Creation and modification are almost identical: both are handled with an INSERT; the only difference is whether you use an existing identifier or create a new one.
So now you've inserted a new Thread and you need to update ThreadStar. This is the crazy expensive part: basically you make a copy of all of the ThreadStar entries that have the most recent timestamp, except you update the thread_id for the Thread you just modified. That's a crazy amount of duplication. Fortunately it's pretty much only foreign keys, but still.
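A hedged sketch of that snapshot-copy step, assuming any SQL client exposing query(text, params) with Postgres-style placeholders (parameter syntax varies by driver); the table names follow the schema above:

async function publishSnapshot(db: { query(text: string, params: unknown[]): Promise<unknown> },
                               newTs: number, prevTs: number,
                               changedIdentifier: number, newThreadId: number) {
  // Copy the previous snapshot's rows, except the thread being modified...
  await db.query(
    `INSERT INTO ThreadStar (timestamp, thread_id, thread_identifier)
     SELECT $1, thread_id, thread_identifier
       FROM ThreadStar
      WHERE timestamp = $2 AND thread_identifier <> $3`,
    [newTs, prevTs, changedIdentifier]);
  // ...then point the modified identifier at the newly inserted Threads row.
  await db.query(
    `INSERT INTO ThreadStar (timestamp, thread_id, thread_identifier)
     VALUES ($1, $2, $3)`,
    [newTs, newThreadId, changedIdentifier]);
}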
You don't do DELETEs either; mark a row as deleted, or just exclude it when you update ThreadStar.
Now you're humming along, but you've got crazy amounts of data growing. You'll probably want to clean it out, unless you've got a big storage budget, but even then things will start slowing down (aside: this will actually perform shockingly well, even with crazy amounts of data).
Cleanup is pretty straightforward: it's just a matter of some cascading deletes and scrubbing for orphaned data. Delete entries from Time whenever you want (e.g. anything that isn't the latest entry and whose last_queried is null or older than your cutoff). Cascade those deletes to ThreadStar. Then find any Threads with an id that isn't in ThreadStar and scrub those.
This general mechanism also works if you have more nested data, but your queries get harder.
Final note: you'll find that your inserts get really slow because of the sheer amount of data. Most places build this with appropriate constraints in development and testing environments, but then disable constraints in production!
Yeah. Make sure your tests are solid.
But at least you aren't sensitive to re-ordered data mid-paging.
For constantly changing data such as likes I would use a two-stage approach: keep the frequently changing data in an in-memory DB to keep up with the change rate, and flush it periodically to the "real" DB.
Once you have that, querying the constantly changing data is easy:
1. Query the db.
2. Query the in-memory db.
3. Merge the frequently changing data from the in-memory db with the "slow" db data.
4. Remember which results you have already displayed, so pressing the next button won't display an already-displayed value twice on different pages because its rank has changed.
If many people look at the same data, it might also help to cache the results of step 3 to reduce the load on the real db even further.
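A rough sketch of steps 1-4, assuming MongoDB as the "slow" store, an in-memory map of unflushed like deltas (e.g. fed from Redis), and a per-session set of already-shown ids (all names are assumptions):

import { Db } from 'mongodb';

async function pageThreads(db: Db, hotLikes: Map<string, number>,
                           seen: Set<string>, pageSize = 20) {
  const candidates = await db.collection('threads')        // 1. query the db
    .find().sort({ likes: -1 }).limit(pageSize * 3).toArray();
  for (const t of candidates) {                            // 2 + 3. merge hot deltas
    t.likes += hotLikes.get(String(t._id)) ?? 0;
  }
  candidates.sort((a, b) => b.likes - a.likes);            // re-rank after the merge
  const page = candidates
    .filter(t => !seen.has(String(t._id)))                 // 4. skip already-shown rows
    .slice(0, pageSize);
  page.forEach(t => seen.add(String(t._id)));
  return page;
}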
Your current architecture has no caching layers (the bigger the site, the more things are cached). You will not get away with a single DB and efficient queries against it if things become too massive.
I would cache all 'thread' results on the server when the user first hits the database, then return the first page of data to the user and serve each subsequent next-page call from the cached results.
To minimize memory usage you can cache only record ids and fetch the full data when the user requests it.
The cache can be evicted each time the user exits the current page. If it isn't a ton of data I would stick with this solution, because the user won't get annoyed by data constantly changing.
I'm trying to understand the data pipelines talk presented at google i/o:
http://www.youtube.com/watch?v=zSDC_TU7rtc
I don't see why fan-in work indexes are necessary if I'm just going to batch through input-sequence markers.
Can't the optimistically-enqueued task grab all unapplied markers, churn through as many of them as possible (repeatedly fetching a batch of say 10, then transactionally update the materialized view entity), and re-enqueue itself if the task times out before working through all markers?
Do the work indexes have something to do with the efficiency of querying for all unapplied markers? I.e., is it better to query for "markers with work_index = " than for "markers with applied = False"? If so, why is that?
For reference, the question+answer which led me to the data pipelines talk is here:
app engine datastore: model for progressively updated terrain height map
A few things:
My approach assumes multiple workers (see ShardedForkJoinQueue here: http://code.google.com/p/pubsubhubbub/source/browse/trunk/hub/fork_join_queue.py), where the inbound rate of tasks exceeds the amount of work a single thread can do. With that in mind, how would you use a simple "applied = False" to split work across N threads? Probably by assigning another field on your model to a worker's shard_number at random; then your query would be "shard_number=N AND applied=False" (requiring another composite index). Okay, that should work.
But then how do you know how many worker shards/threads you need? With the approach above you need to statically configure them so your shard_number parameter is between 1 and N. You can only have one thread querying for each shard_number at a time or else you have contention. I want the system to figure out the shard/thread count at runtime. My approach batches work together into reasonably sized chunks (like the 10 items) and then enqueues a continuation task to take care of the rest. Using query cursors I know that each continuation will not overlap the last thread's, so there's no contention. This gives me a dynamic number of threads working in parallel on the same shard's work items.
Now say your queue backs up. How do you ensure the oldest work items are processed first? Put another way: how do you prevent starvation? You could assign another field on your model to the time of insertion -- call it add_time. Now your query would be "shard_number=N AND applied=False ORDER BY add_time DESC". This works fine for low-throughput queues.
What if your work-item write rate goes up a ton? You're going to be writing many, many rows with roughly the same add_time. This requires a Bigtable row prefix for your entities, something like "shard_number=1|applied=False|add_time=2010-06-24T9:15:22". That means every work-item insert hits the same Bigtable tablet server -- the server that currently owns the lexical head of the descending index. So fundamentally you're limited to the throughput of a single machine for each work shard's Datastore writes.
With my approach, your only Bigtable index row is prefixed by the hash of the incrementing work sequence number. This work_index value is scattered across Bigtable's lexical rowspace each time the sequence number is incremented. Thus, each sequential work-item enqueue will likely go to a different tablet server (given enough data), spreading the load of my queue beyond a single machine. With this approach the write rate should effectively be bound only by the number of physical Bigtable machines in a cluster.
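An illustrative sketch (not Brett's actual code) of how such a scattered work_index could be derived: hashing the sequence number turns consecutive values into row keys spread across the lexical keyspace, so consecutive writes land on different tablet ranges.

import { createHash } from 'crypto';

function workIndex(sequenceNumber: number): string {
  const scatter = createHash('sha1')
    .update(String(sequenceNumber))
    .digest('hex')
    .slice(0, 8);                  // short hash prefix scatters the row key
  return `${scatter}-${sequenceNumber}`;
}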
One disadvantage of this approach is that it requires an extra write: you have to flip the flag on the original marker entity when you've completed the update, which is something Brett's original approach doesn't require.
You still need some sort of work index too, or you'll encounter the race conditions Brett talked about, where the task that should apply an update runs before the update transaction has committed. In your system the update would still get applied -- but it could be an arbitrary amount of time before the next update runs and applies it.
Still, I'm not the expert on this (yet ;). I've forwarded your question to Brett, and I'll let you know what he says - I'm curious as to his answer, too!