Per operation
The GAE Quotas article states that Datastore quota is measured in read/write operations (for example, when you put an entity, several write operations occur per property):
The same is stated on the GAE home page:
Per entity
At the same time, the App Engine Pricing article states that quota is measured in reads/writes per entity (with a note that this changed recently):
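To make the difference concrete, here is a small sketch comparing the two billing models. The old per-operation formula below is an approximation based on the pre-2016 docs (roughly two writes for the entity plus two per indexed property value); the exact constants may have differed, so treat the numbers as illustrative only.

```python
def write_ops_old(indexed_properties):
    """Approximate write *operations* charged for putting a new entity
    under the old per-operation model: ~2 for the entity plus 2 per
    indexed property value (an approximation, not an exact formula)."""
    return 2 + 2 * indexed_properties

def writes_new(indexed_properties):
    """Under the post-July-2016 per-entity model, a put is billed as a
    single entity write regardless of how many properties are indexed."""
    return 1

# An entity with 5 indexed properties:
print(write_ops_old(5))  # 12 operations under the old model
print(writes_new(5))     # 1 entity write under the new model
```

This is why the two quota pages can show such different numbers for what looks like the same workload.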
Experimenting
In the Google Cloud Console, the App Engine Quotas tab shows details for both Datastore Write Operations and Datastore Entity Writes; putting an entity still works when the first one exceeds its limit (I reached the limit and then put a new entity to check; billing is not enabled for the project):
My question is: what are the real quota restrictions in the GAE Datastore?
As of July 1, 2016, Cloud Datastore quotas and billing are based on entity reads and writes (and deletes) instead of read and write operations. The third link you provided in your question shows the correct information.
The other pages are out of date and should be updated shortly. Thanks for pointing them out!
Related
I'm a little confused about this, because the docs say I can use Stackdriver for "Request logs and application logs for App Engine applications". Does that mean web requests? As in, millions of web requests?
Stackdriver's pricing is per resource, so does that mean I can log all of my web servers' request logs (which would be HUGE) at no extra cost (i.e., I would not be charged for the volume of storage the logs use)?
Does Stackdriver use GCP Cloud Storage as a backend, and do I have to pay for that storage? It looks like I can get hundreds of gigabytes of log aggregation for virtually no money; I just want to make sure I'm understanding this.
I bring up ELK because Elastic just partnered with Google, so Stackdriver must not do everything Elasticsearch does (for almost no money), otherwise it would be a competitor?
Things definitely seem to be moving quickly at Google's cloud division and documentation does seem to suffer a bit.
Having said that, the document you linked to also details the limitations:
The request and application logs for your app are collected by a Cloud
Logging agent and are kept for a maximum of 90 days, up to a maximum
size of 1GB. If you want to store your logs for a longer period or
store a larger size than 1GB, you can export your logs to Cloud
Storage. You can also export your logs to BigQuery and Pub/Sub for
further processing.
It should work out of the box for small to medium sized projects. The built-in log viewer is also pretty basic.
From your description, it sounds like you may have specific needs, so you should not assume this will be free. You should factor in costs for Cloud Storage for the logs you want to retain and BigQuery depending on your needs to crunch the logs.
My application currently runs on an App Engine server and continuously writes records (for logging and reporting).
Scenario: counting views on the website. When someone opens the website, it hits the server to add a record with the time and type of view. These counts are shown in the users' dashboard.
These requests have become huge: currently about 40/sec. Google App Engine writes are getting heavy and the cost keeps increasing.
Is there any way to reduce this, or any other DB to log the views?
Google App Engine's Datastore is NOT suitable for such a requirement, where you continuously write to the datastore and read much less often.
You need to offload this task to a third-party service (either write your own or use an existing one).
A better option for user tracking and analytics is Google Analytics (although you won't be able to show hit counters directly on the website using Analytics).
If you want to show your user page hit count use a page hit counter: https://www.google.com/search?q=hit+counter
In this case you should avoid Datastore.
For this kind of analytics it's best to do the following:
Dump data to the GAE log (yes, this sounds counter-intuitive, but it's actually advice from Google engineers). The GAE log is persistent and is guaranteed not to lose data you write to it.
Periodically parse the log for your data and then export it to BigQuery.
BigQuery has a quite powerful query language so it's capable of doing complex analytics reports.
Luckily this was already done before: see the Mache framework. Also see related video.
Note: there is now a new BigQuery feature called streaming inserts, which could potentially replace the cumbersome middle step (files on Cloud Storage) used in Mache.
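The pipeline above can be sketched in a few lines. This is a self-contained simulation: an in-memory list stands in for the persistent GAE log, and a `Counter` stands in for the BigQuery aggregation step; the record layout is an assumption for illustration, not Mache's actual format.

```python
from collections import Counter
from datetime import datetime

log = []  # stands in for the persistent GAE request log

def record_hit(website_id, when):
    # Cheap append at request time -- no datastore write per hit.
    log.append({"website_id": website_id, "ts": when})

def aggregate_daily_counts(entries):
    # The periodic batch job: parse the log entries and count hits per
    # (site, day). In production this would be a BigQuery GROUP BY over
    # the exported log data.
    counts = Counter()
    for e in entries:
        counts[(e["website_id"], e["ts"].date())] += 1
    return counts

record_hit("site-1", datetime(2016, 7, 1, 10, 0))
record_hit("site-1", datetime(2016, 7, 1, 11, 30))
record_hit("site-2", datetime(2016, 7, 1, 12, 0))
daily = aggregate_daily_counts(log)
print(daily[("site-1", datetime(2016, 7, 1).date())])  # 2
```

The key property is that the hot path (`record_hit`) does no billed datastore writes at all; all the expensive work moves into an offline batch.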
I have an app which was facing recurring server errors when on M/S datastore and have since migrated to HR datastore. The old application was aliased to redirect users to the new app and all is well for the new app and my users.
Now, I am trying to delete the old data in the M/S datastore, so that I can disable billing for the old application, but finding it difficult due to the following reasons :
The Datastore Admin cannot be enabled because the application has been aliased.
The Datastore Viewer throws up Server Errors -- possibly because
the viewer page is trying to load the list of all entities in the database and fails in the process because of the large number of entities in my app (the app is a meta-data driven multi-tenant online database application, with entities added dynamically and hence has more entities than the typical Google App Engine application) (or)
due to the unreliable M/S datastore (or)
a combination of both (or)
other issues
The remote_api is not working out because the request is likely redirected to the new application.
I have already removed almost all composite indexes and vacuumed them to reduce the size to an extent. Most of the current usage is for built-in indexes, as shown in the latest Datastore statistics below :
             Entities     Built-in Indexes   Composite Indexes   Total
Total Size:  189 MBytes   1 GByte            3 MBytes            1 GByte
Entry Count: 203,793      9,506,340          20,797
The total storage used is around 1.27 GB, and I can identify the couple of entities taking up most of the storage. If I can delete records from those entities, my datastore will fall within the 1 GB free quota.
Resource                 Usage         Billable   Price              Cost
Datastore Stored Data    1.27 GBytes   0.27       $0.008/GByte-day   $0.01
I do not want to fully delete the old-application as I have users already mapping the application to their Google Apps domain and the alias to the new application helps.
I would like to hear suggestions on how I can delete the data from the M/S datastore of my now-aliased application.
You should be able to disable the application (and billing) without deleting your data.
I'm building an application on Google App Engine that uses the datastore to store information about the current state of the server. When an Android device queries the server, a servlet gets an Entity from the datastore, modifies it, and puts it back into the datastore to update the datastore entry.
However, sometimes while one instance of the servlet has gotten the data from the datastore, another instance of the servlet does the same before the first instance puts updated data back in. This is causing synchronization issues in my application.
Is there any way to "lock" the datastore so that nothing can operate on it until the lock is released?
Thanks.
Transactions are what you're after.
Read the docs carefully though: there are strict limitations on what you can do within a transaction. Specifically, you can only query within a single entity group - that is, the set of entities with the same ancestor.
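To see why the unprotected read-modify-write fails, here is a toy, self-contained illustration. The fake datastore below uses a version number to detect concurrent updates, loosely mirroring Datastore's optimistic concurrency inside transactions; the class and function names are made up for this sketch and are not the real App Engine API.

```python
class ConcurrentModification(Exception):
    pass

class FakeDatastore:
    """A single 'entity' with a version stamp, standing in for Datastore."""
    def __init__(self):
        self.value = 0
        self.version = 0

    def get(self):
        return self.value, self.version

    def put(self, value, expected_version):
        # Reject the write if someone else committed since our read,
        # which is roughly what a Datastore transaction does for you.
        if expected_version != self.version:
            raise ConcurrentModification()
        self.value = value
        self.version += 1

def transactional_increment(store, retries=3):
    # Read, modify, write back; retry if another request got there first.
    for _ in range(retries):
        value, version = store.get()
        try:
            store.put(value + 1, version)
            return
        except ConcurrentModification:
            continue
    raise ConcurrentModification("gave up after retries")

store = FakeDatastore()
for _ in range(5):
    transactional_increment(store)
print(store.value)  # 5
```

With real Datastore transactions the detect-and-retry machinery is handled for you; the point of the sketch is that the read and the write must succeed or fail as one unit.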
Simple Task: keep track of web traffic (hits) so that I can graph the number of hits per day for the last 30 days.
Current Datastore Model (2 fields): 1) Website ID 2) Timestamp of Hit
Problem: I'm using Google App Engine's datastore and don't have the ability to do a group-by or count.
Can anyone offer a simple way to structure my Google Datastore database to achieve this task? Returning all of the hits and then grouping them in my code seems like a performance hog. Any ideas?
I would use sharding counters for this specific task; have a look at the last example in this documentation.
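The idea from the linked docs, sketched in a self-contained form: spread increments across N shard counters so concurrent hits rarely contend on the same entity, then sum the shards when you need the total. A plain dict stands in for the datastore here, and the shard count is an arbitrary choice for the sketch.

```python
import random

NUM_SHARDS = 20
# One counter "entity" per shard; in Datastore each would be its own entity.
shards = {i: 0 for i in range(NUM_SHARDS)}

def increment():
    # Pick a random shard, so concurrent writes land on different
    # entities instead of all contending on a single counter.
    shards[random.randrange(NUM_SHARDS)] += 1

def get_count():
    # Reading the total means summing all shards (N small reads).
    return sum(shards.values())

for _ in range(1000):
    increment()
print(get_count())  # 1000
```

The trade-off is a slightly more expensive read (N gets instead of one) in exchange for write throughput that scales with the number of shards, which fits hit counting, where writes vastly outnumber reads of the total.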