Efficiently search/exist() Firestore without exhausting free quota - reactjs

I'm working on a side project and want to let my users check if their friends have accounts.
Currently I've implemented it like this:
Read phone contacts for emails
Loop through the emails
Make a .get() query on the user database [1] for users with that email
If data comes back, the friend is on the platform and an invite button is displayed
Free quota [2] exceeded within an hour
The thing is that any .get() is considered a read operation, even if no data comes back. doc.exists can only be checked after a .get(), so a document read is needed just to check for existence.
I'm sure I'm overlooking something obvious; what I want is, in essence, an .exists()-like query that does not 'cost' a read.
[1]: I'm not actually storing emails in Firestore but their hashes, and am querying those. Same effect, but it allows me to query a secondary user database that doesn't expose true emails and other data.
[2]: Not trying to be cheap per se, but if this app turns commercial this would make the billing a nightmare.

According to your comment, you keep the contacts in memory and, for each contact (email address), you search your existing Firestore database for matches.
Free quota exceeded within an hour
It means that you are searching the Firestore database for a huge number of contacts.
The thing is that any .get is considered a read operation, even if no data comes back.
That's correct. The official documentation regarding Firestore pricing clearly states:
Minimum charge for queries
There is a minimum charge of one document read for each query that you perform, even if the query returns no results.
So if you have, for example, 1,000 contacts and you query the database for each one of them, then even if your queries return no results, you're still charged for 1,000 read operations.
I'm sure I'm overlooking something obvious; what I want is, in essence, an .exists()-like query that does not 'cost' a read.
That's not how Firestore works. Every query incurs a cost of at least one document read, no matter the results.
[1]: I'm not actually storing emails in Firestore but their hashes, and am querying those. Same effect, but it allows me to query a secondary user database that doesn't expose true emails and other data.
As you already noticed, it doesn't matter whether you store the actual email address or its corresponding hash; the result is the same.
[2]: Not trying to be cheap per se, but if this app turns commercial this would make the billing a nightmare.
For this feature, try the Firebase Realtime Database instead; believe me, the two databases work very well together in the same project.
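If it helps, here is a minimal sketch of how such an existence check could look with the Realtime Database, assuming you use the Python Admin SDK on a backend and maintain a node like /emailHashes/<hash> alongside your Firestore user documents (the path, node shape, and project URL are assumptions, not something from the original question):

import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project-id.firebaseio.com",
})

def friend_exists(email_hash):
    # Realtime Database is billed by bandwidth and storage rather than per
    # document read, so this lookup does not consume Firestore read quota.
    return db.reference("emailHashes/%s" % email_hash).get() is not None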

Related

How can I poll the Salesforce API to find records that meet criteria and have not been seen by my app before?

I am working on a Salesforce integration for a high-traffic app where we want to automate the process of importing records from Salesforce into our app. To be clear, I am not working from the Salesforce side (i.e. Apex), but rather using the Salesforce REST API from within the other app.
The first idea was to use a cutoff time based on when the record was created, increasing that time on each poll according to the creation time of the records imported in the previous poll. We quickly realized this wouldn't work. There can be other filters in the query, for example a status field in Salesforce where the record should only be imported after a certain status is set. That makes checking the creation time (or anything like it) unreliable, since an older record could later become relevant to our auto-importing.
My next idea was to poll the Salesforce API for records every few hours. In order to avoid importing the same record twice, the only way I could think of was to keep track of the IDs we have already attempted to import and use them in a NOT IN condition:
SELECT #{columns} FROM #{sobject_name}
WHERE Id NOT IN #{ids_we_already_imported} AND #{other_filters}
My big concern at this point was whether or not Salesforce had a limitation on the length of the WHERE clause. Through some research I see there are actually several limitations:
https://developer.salesforce.com/docs/atlas.en-us.salesforce_app_limits_cheatsheet.meta/salesforce_app_limits_cheatsheet/salesforce_app_limits_platform_soslsoql.htm
The next thing I considered was querying for all of the IDs in Salesforce that meet the conditions of the other filters, without filtering on the ID itself. We could then take that list of IDs, remove the ones we already track on our end, and end up with a smaller IN condition to fetch the full data for the records we actually need.
This still doesn't seem completely reliable, though. I see a single query can only return 2000 rows and can only be offset by up to 2000. If we have already imported 2000 records, the first query might not contain any rows we actually want to import, but we can't offset past them because of these limitations.
With these limitations I can't figure out a reliable way to find the relevant records to import as the number of records we have already imported grows. I feel like this would be a common use of a Salesforce integration, but I can't find anything on it. How can I do this without having to worry about issues when we reach a high volume?
Not sure what all of your requirements are or if the solution needs to be generic, but you could do a few things:
Flag records that have been imported. That means making a call back to Salesforce to update the records, but the update can be bulkified to reduce the number of calls, and you can modify your query to exclude the flag (a rough sketch follows this list).
Reverse the flow so Salesforce pushes instead of your app pulling: have a workflow rule with outbound messages push records to your app whenever they meet the criteria.
Use the Streaming API to set up a PushTopic that your app can subscribe to, so it gets notified whenever a record meets the criteria.
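For the first option, a rough sketch using the REST API from Python might look like the following; the custom flag field Imported_To_App__c, the API version, and the helper names are assumptions for illustration, not part of the original setup:

import requests

INSTANCE_URL = "https://yourInstance.salesforce.com"
ACCESS_TOKEN = "00D..."  # obtained via your OAuth flow
HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN}

def fetch_unimported(sobject, columns, other_filters):
    # Query only records that have not been flagged as imported yet.
    soql = "SELECT %s FROM %s WHERE Imported_To_App__c = false AND %s" % (
        ", ".join(columns), sobject, other_filters)
    resp = requests.get(INSTANCE_URL + "/services/data/v52.0/query",
                        headers=HEADERS, params={"q": soql})
    resp.raise_for_status()
    return resp.json()["records"]

def mark_imported(sobject, record_id):
    # One PATCH per record shown for clarity; in practice you would bulkify
    # this (e.g. via the composite / sObject Collections endpoints) to cut call volume.
    resp = requests.patch(
        "%s/services/data/v52.0/sobjects/%s/%s" % (INSTANCE_URL, sobject, record_id),
        headers=HEADERS, json={"Imported_To_App__c": True})
    resp.raise_for_status()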

Is google Datastore recommended for storing logs?

I am investigating what might be the best infrastructure for storing log files from many clients.
Google App Engine offers a nice solution that doesn't make the process an IT nightmare: load balancing, sharding, servers, user authentication - all in one place with almost zero configuration.
However, I wonder if the Datastore model is the right one for storing logs. Each log entry would be saved as a single document; each client uploads its documents on a daily basis and can produce 100K log entries per day.
Plus, there are some limitations and questions that could break the requirements:
60-second timeout on bulk transactions - how many log entries per second will I be able to insert? If 100K won't fit into the 60-second window, this will affect the design and the work that needs to be put into the server.
5 inserts per entity per second - is a transaction considered a single insert?
Post analysis - text search, searching for similar log entries across clients. How flexible and efficient is Datastore with these queries?
Real-time data fetch - getting all the recent log entries.
The other option is to deploy an Elasticsearch cluster on Google Compute and write our own server, which fetches data from ES.
Thanks!
It's a bad idea to use Datastore, and even worse if you use entity groups with parent/child relationships, as a comment mentions when comparing performance.
Those numbers do not apply, but Datastore is simply not designed for what you want.
BigQuery is what you want. It's designed for this, especially if you later want to analyze the logs in a SQL-like fashion. Any more detail requires that you ask a specific question, as it seems you haven't read much about either service.
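As a rough illustration of the streaming-insert path, assuming the google-cloud-bigquery client and a table you have already created (project, dataset, and column names here are made up):

from google.cloud import bigquery

client = bigquery.Client()
table_id = "your-project.logs.entries"

rows = [
    {"client_id": "client-42", "timestamp": "2023-01-01T00:00:00Z",
     "severity": "INFO", "message": "example log line"},
]
# Streaming insert; returns a list of per-row errors (empty on success).
errors = client.insert_rows_json(table_id, rows)
if errors:
    raise RuntimeError(errors)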
I do not agree. Datastore is a fully managed NoSQL document database; you can store the logs you want in this type of storage and query them directly in Datastore. The benefit of using it instead of BigQuery is the schemaless part: in BigQuery you have to define the schema before inserting the logs, which is not necessary with Datastore. Think of Datastore as the MongoDB-style log analysis use case on Google Cloud.
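For what it's worth, a minimal sketch of writing such a schemaless log entry with the google-cloud-datastore client could look like this (the kind and property names are illustrative):

from google.cloud import datastore

client = datastore.Client()

def store_log_entry(client_id, message, severity, timestamp):
    # Incomplete key: Datastore assigns the numeric id on put.
    entity = datastore.Entity(key=client.key("LogEntry"))
    # No schema to declare up front; properties can vary per entity.
    entity.update({
        "client_id": client_id,
        "message": message,
        "severity": severity,
        "timestamp": timestamp,
    })
    client.put(entity)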

Best implementation of turn-based access on App Engine?

I am trying to implement a 2-player turn-based game with a GAE backend. The first thing this game requires is a very simple match making system that operates like this:
User A asks the backend for a match. The backend tells him to come back later.
User B asks the backend for a match. He will be matched with A.
User C asks the backend for a match. The backend tells him to come back later.
User D asks the backend for a match. He will be matched with C.
and so on...
(edit: my assumption is that if I can figure this one out, most other operations in a turn-based game can use the same implementation)
This can be done quite easily in Apple Game Center and Xbox Live; however, I would rather implement it on an open, platform-independent backend like GAE. After some research, I have found the following options for a GAE implementation:
Use memcache. However, there is no guarantee that memcache is synchronized across different instances. I did some tests and could actually see match requests disappearing due to memcache mis-synchronization.
Harden memcache with sharding counters. This does not always solve the multiple-instance problem and may result in high memcache quota usage.
Use memcache with compare-and-set. This does not solve the multiple-instance problem when used as a mutex.
Task queues. I have no idea how to use these, but someone mentioned them as a possible solution. However, I am afraid that queues will eat my GAE quota very quickly.
Push queues. Same as above.
Transactions. Same as above. Also probably very expensive.
Channels. Same as above. Also probably very expensive.
Given that the match making is a very basic operation in online games, I cannot be the first one encountering this. Hence my questions:
Do you know of any safe mechanism for match making?
If multiple solutions exist, which is the cheapest (in terms of GAE quota usage) solution?
You could accomplish this using a cron task in a scheme like this:
class MatchRequest(db.Model):
    requestor = db.StringProperty()
    opponent = db.StringProperty(default='')
User A asks for a match, a MatchRequest entity is created with A as the requestor and the opponent blank.
User A polls to see when the opponent field has been filled.
User B asks for a match, a MatchRequest entity is created with B as the requestor.
User B polls to see when the opponent field has been filled.
A cron job runs every 20 seconds or so and does the following (a sketch follows this list):
Grab all MatchRequest where opponent == ''
Make all appropriate matches
Put all the MatchRequests as a transaction
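A minimal sketch of that cron handler, assuming the MatchRequest model above, webapp2, and the old db API (the handler path and the pairing policy are just illustrative choices):

import webapp2
from google.appengine.ext import db

class MatchMakerCron(webapp2.RequestHandler):
    def get(self):
        # Grab all requests that still have no opponent.
        pending = MatchRequest.all().filter("opponent =", "").fetch(1000)
        updated = []
        # Pair them off two at a time; any leftover request waits for the next run.
        for a, b in zip(pending[0::2], pending[1::2]):
            a.opponent = b.requestor
            b.opponent = a.requestor
            updated.extend([a, b])
        # Batch put of all matched requests (a strict transaction across them
        # would need an XG transaction, since each request is its own entity group).
        if updated:
            db.put(updated)

app = webapp2.WSGIApplication([("/cron/matchmaker", MatchMakerCron)])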
Now when A and B poll next, they will see that they have an opponent.
According to the GAE docs on crons, free apps can have up to 20 free cron tasks. The computation required for these crons with a small number of users should be small.
This would be a safe way but I'm not sure if it is the cheapest way. It's also pretty easy to implement.

Google app engine excessive small datastore operations

I'm having some trouble with the Google App Engine datastore. Ever since the new pricing model was introduced, the cost of running my app has increased massively.
The culprit appears to be "Datastore small operations", which come in at more than 20 million ops per day!
Has anyone had this problem? I don't think I'm doing an excessive number of key lookups, and I only have 5000 users, with roughly 10-20 requests per minute.
Thanks in advance!
Edit
OK, got some stats; these are after about 3 hours. Here is what I am seeing in my dashboard, in the billing section:
And here are some of the stats:
Obviously there are quite a lot of calls to datastore.get. I am starting to think that my design is causing the problem. Those gets correspond to accounts. Every user has an account, but an account can be one of two types; for this I use composition, so each account entity has a link to its sub-account entity.
As a result, a search for nearby users involves fetching the accounts with a query and then doing a get on each account to fetch its sub-account. The top request in the stats picture is a call that gets 100 accounts and then has to do a get on each one. I would have thought that this was a very light query, but I guess not. And I am still confused by the number of datastore small ops being recorded in my dashboard.
Definitely use appstats as Drew suggests; regardless of what library you're using, it will tell you what operations your handlers are doing. The most likely culprits are keys-only queries and count operations.
My advice would be to use AppStats (Python / Java) to profile your traffic and figure out which handler is generating the most datastore ops. If you post the code here we can potentially suggest optimizations.
Don't scan your datastore; use get(key), get_by_id(id), or get_by_key_name(keyname) as much as you can.
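For example, a tiny sketch of direct lookups instead of query scans (the model and key names are purely illustrative):

from google.appengine.ext import db

class Account(db.Model):
    name = db.StringProperty()

acct = Account.get_by_key_name("user-123")   # direct lookup by key name, no query
other = Account.get_by_id(42)                # direct lookup by numeric id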
Do you have lots of ReferenceProperty properties in your models? Accessing them triggers a db.get for each referenced entity unless you prefetch them. The snippet below ends up making roughly 101 datastore calls: the query plus one db.get per dereferenced user.
class Foo(db.Model):
    user = db.ReferenceProperty(User)

foos = Foo.all().fetch(100)
for f in foos:
    print f.user.name  # each dereference triggers a separate db.get for the referenced User
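One common way around this (not from the original answer, just a sketch of the usual prefetch pattern) is to collect the referenced keys without dereferencing them and resolve them with a single batch get:

foos = Foo.all().fetch(100)
# Pull the raw keys out of the ReferenceProperty without triggering any gets.
user_keys = [Foo.user.get_value_for_datastore(f) for f in foos]
# One batch db.get instead of 100 individual ones.
users = db.get(user_keys)
for f, u in zip(foos, users):
    print u.name if u else "<missing user>"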

About Youtube views count

I'm implementing an app that keeps track of how many times a post is viewed, but I'd like to keep track in a 'smart' way. That is, I don't want to increase the view counter just because a user refreshes his browser.
So I decided to only increase the view counter if IP and user agent (browser) are unique. Which is working so far.
But then I thought: if YouTube is doing it this way, and they have many videos with thousands or even millions of views, their views table in the database would be overly populated with IPs and user agents...
Which brings me to the assumption that their video table has a counter cache for views (i.e. views_count). This means that when a user clicks on a video, the IP and user agent are stored, and the counter cache column in the video table is increased.
Every time a video is clicked, YouTube would need to query the views table and count the number of entries. Won't this affect performance drastically?
Is this how they do it? Or is there a better way?
I would leverage client-side browser fingerprinting to uniquely identify viewers for counting. This library seems to be getting significant traction:
https://github.com/Valve/fingerprintJS
I would also recommend using Redis for anything to do with counts. Its atomic increment commands are easy to use and guarantee your counts never get messed up by race conditions.
This would be the command you would want to use for incrementing your counters:
http://redis.io/commands/incr
The key in this case would be the browser fingerprint hash sent to you from the client. You could then have a Redis "set" that would contain a list of all browser fingerprints known to be associated with a given user_id (the key for the set would be the user_id).
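A small sketch of what that could look like with redis-py, following the key scheme described above (the key names and exact layout are just one possible interpretation):

import redis

r = redis.Redis(decode_responses=True)

def record_view(user_id, fingerprint):
    # The counter key is derived from the browser fingerprint hash sent by the client.
    r.incr("views:%s" % fingerprint)
    # Remember which fingerprints are associated with this user_id.
    r.sadd("user:%s:fingerprints" % user_id, fingerprint)

def total_views_for_user(user_id):
    # Summed periodically (e.g. by the async job mentioned below) into the
    # counter cache field of the relational database.
    fingerprints = r.smembers("user:%s:fingerprints" % user_id)
    return sum(int(r.get("views:%s" % fp) or 0) for fp in fingerprints)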
Finally, if you really need to, you can run a cron job or other async process that dumps the view counts for each user into the counter cache field of your relational database.
You could also take the approach where you store user_id, browser fingerprint, and timestamps in a relational database (mysql?) and counter cache them into your user table periodically (probably via cron).
First of all, AFAIK YouTube uses BigTable, so do not worry about querying the count; we don't know the exact structure of their database anyway.
Assuming that you are on a relational model, create a view_count column but do not update it on every refresh. Record the visits and periodically update the cache.
Also, you can generate a hash from the IP, browser, date, and any other information you use to detect whether a view is unique, rather than storing the whole data (see the sketch below).
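For instance, a tiny sketch of building such a hash (the fields included are just examples):

import hashlib

def view_fingerprint(ip, user_agent, date):
    raw = "%s|%s|%s" % (ip, user_agent, date)
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()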
Also, you can use a session/cookie to record which videos have been viewed. Since it expires, it won't be much of a memory problem; I don't believe anyone views thousands of videos in one session.
If you want to store all the IPs and browsers, then make sure you have enough DB storage space, add an index, and that's it.
If not, then you can use the Rails session to store the list of videos a user has visited, and only increment the view_count attribute of a video when they visit a new video.
