I want to use the Multitenancy feature of GAE in my app's datastore. The docs say:
Using the Namespaces API, you can easily partition data across tenants simply by specifying a unique namespace string for each tenant.
So my questions are:
1. How many partitions (of one datastore) can be created using the Namespaces API?
2. Is there any limit on the size of each partition?
3. How would I know if the size of a partition has grown beyond the GAE free quota?
I'll really appreciate any help clearing this up.
Thanks in advance!
How many partitions (of one datastore) can be created using the Namespaces API?
Namespaces help you increase your app's scalability; there is no limit on their number.
Is there any limit on the size of each partition?
App Engine's free quota is fixed, and it is the only limit. If you need more, you'll have to activate billing and set a budget limit. App Engine offers very high scalability.
How would I know if the size of a partition has grown beyond the GAE free quota?
As with the second question, the quota applies to the whole application, not per namespace. If you consume all the free resources, your app will throw an error instead of serving the appropriate handler until the quota is replenished.
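To make the partitioning idea concrete: on first-gen App Engine standard the real mechanism is `google.appengine.api.namespace_manager` (you set the namespace before issuing datastore calls). Outside that runtime, the concept can be sketched with a hypothetical in-memory stand-in, just to show how one namespace string per tenant isolates data:

```python
# A minimal local sketch of namespace-based partitioning. The real App
# Engine API is google.appengine.api.namespace_manager; this class is a
# hypothetical in-memory stand-in used only to illustrate the idea.

class FakeMultitenantStore:
    """In-memory stand-in for a namespaced datastore."""

    def __init__(self):
        self._partitions = {}  # namespace -> {key: value}

    def put(self, namespace, key, value):
        self._partitions.setdefault(namespace, {})[key] = value

    def get(self, namespace, key):
        # A tenant only ever sees entities in its own namespace.
        return self._partitions.get(namespace, {}).get(key)

store = FakeMultitenantStore()
store.put("tenant-a", "greeting", "hello from A")
store.put("tenant-b", "greeting", "hello from B")
```

In a real first-gen Python app you would call `namespace_manager.set_namespace(tenant_id)` at the start of a request; all subsequent datastore reads and writes then go to that tenant's partition.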
Related
I am a cloud noob.
I have a Node application on GAE. I am using basic scaling to serve requests, with instance class B4_1G, which has a memory limit of 2048 MB (https://cloud.google.com/appengine/docs/standard#second-gen-runtimes).
The application performs DOM scraping using Cheerio on some extremely large HTML files. This works well until the HTML I need to scrape is especially large. Then I start seeing a memory error in the logs:
Exceeded hard memory limit of 2048 MB with 2052 MB after servicing 1 requests total. Consider setting a larger instance class in app.yaml.
Is there any way I can override the memory limit to say 4096mb or even more?
Setting resources in app.yaml did not seem to help either.
Any help or pointers appreciated. Thank you.
The link you provided shows the supported instance sizes.
If you need more than 2 GB of memory you will need to switch to App Engine Flexible or a Compute Engine instance.
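For the Flexible environment route, a minimal app.yaml sketch might look like the following; `memory_gb: 4` is a hypothetical value chosen to get past the 2 GB standard cap, and you would tune cpu and disk to your workload:

```yaml
runtime: nodejs
env: flex

resources:
  cpu: 1
  memory_gb: 4
  disk_size_gb: 10

manual_scaling:
  instances: 1
```

Note that moving to Flex changes billing to per-instance-hour with no scale-to-zero, so a single manually scaled instance runs (and bills) continuously.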
I've almost completed migrating based on Google's instructions.
It's very nice to not have to call into the app-engine libraries whatsoever.
However, now I must replace my calls to app-engine-standard memcached.
Here's what the guide says: "To use a memcache service on App Engine, use Redis Labs Memcached Cloud instead of App Engine Memcache."
So is a third party my only option? They don't even list pricing on their page when GCE is selected.
I also see in the standard environment how-to guides there is a guide on Connecting to internal resources in a VPC network.
That link mentions Cloud Memorystore. I can't find any examples of whether this is advisable or possible on GAE standard. Of course it wasn't previously possible, but now that GAE standard has become much more "standard", I think it should be possible?
Thanks for any advice on the best way forward.
Memorystore appears to be Google's replacement:
https://cloud.google.com/memorystore/
You connect to it using this guide:
https://cloud.google.com/appengine/docs/standard/go/using-memorystore
Alas, it costs about $1.20/GB per day with no free quota.
Thus, if your data doesn't change and requires less than 100 MB of cache at a time, the first answer might be better (free). Also, your data won't blow up the instance, as you can control the max size of the cache.
However, if your data changes or you need more cache, Memorystore is a more direct replacement for Memcache; it just costs money.
I've been thinking about this. Second-gen instances have twice the RAM, so if a global cache isn't required (i.e., items don't change once created; name items using their SHA-256), you can run your own local thread-safe memcache (such as https://github.com/dgraph-io/ristretto) and allocate some of the extra RAM to it. It'll be faster than Memcache was, so requests can be serviced even faster, keeping the number of instances low.
You could make it global for data that does change, by using pub/sub between instances, but I think that's significantly more work.
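The per-instance cache idea above can be sketched in a few lines. This is a hedged, minimal illustration (the answer's Go suggestion is ristretto; here is a comparable thread-safe, size-bounded LRU in Python using only the standard library):

```python
import threading
from collections import OrderedDict

class LocalMemcache:
    """A small thread-safe LRU cache: a per-instance stand-in for the
    deprecated App Engine memcache, with a hard cap on item count so it
    cannot blow past the instance's RAM."""

    def __init__(self, max_items=1024):
        self._max_items = max_items
        self._lock = threading.Lock()
        self._items = OrderedDict()

    def get(self, key):
        with self._lock:
            if key not in self._items:
                return None
            self._items.move_to_end(key)  # mark as recently used
            return self._items[key]

    def set(self, key, value):
        with self._lock:
            self._items[key] = value
            self._items.move_to_end(key)
            while len(self._items) > self._max_items:
                self._items.popitem(last=False)  # evict least recently used

cache = LocalMemcache(max_items=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touch "a", so "b" becomes the eviction candidate
cache.set("c", 3)  # exceeds max_items: evicts "b"
```

Capping by item count is a simplification; a production cache would bound by estimated byte size instead, as ristretto does.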
To ease the migration to 1.12, I have been thinking of using this solution:
create a dedicated app using the 1.11 runtime.
set up twirp endpoints to act as a proxy for all the deprecated App Engine services (memcache, mail, search...)
In the App Engine 1.9.0 release notes it is stated:
"The size limit on the Search API is now computed and enforced on a per-index basis, rather than for the app as a whole. The per-index limit is now 10GB. There is no fixed limit on the number of indexes, or on the total amount of Search API storage an application may use."
The Search API is currently in an experimental state, but I would like to know whether the 10 GB per-index limit will be removed when the Search API leaves experimental (or at least be replaced with a much larger one).
As indicated in Search API quotas, the current quota is still 10 GB per index. There are no public plans to increase this quota at the time of this writing.
Should this quota increase be desired, feel free to file a new feature request on the App Engine public issue tracker detailing a thorough business case. You may also want to consider supporting this existing and related issue: Issue 10667: Search API multiple index search.
I'm building a prototype backend, and in the near future I expect little traffic, but while testing I consumed all of my $300 free trial.
How can I configure my app to consume the fewest possible resources? I need things like limiting the number of instances to 1, using a cheap machine, and sleeping whenever possible. I've read something about Client vs Backend instances.
With time I'll learn the config that best suits me, but now I need the CHEAPEST config to get going.
BTW: I am using managed-vms with Dart.
EDIT
I've been recommended to configure my app.yaml file, what options would you recommend to confront this issue?
There are two trains of thought for your issue.
1) Optimization of code: This is very difficult for us to advise on, as we are not privy to your app's usage, client base, and architecture. In general, it depends on which Google App Engine product you use most, for example Datastore API calls (fetch, write, delete, etc.), BigQuery, or Cloud SQL. Even after optimization, you can still incur significant cost depending on traffic.
2) Enforcing cheap operation: This is easier, and I think it is what you want. You can manually enforce a daily budget (in your billing setup page) so the app never costs more than a certain amount per day. You can also artificially lower the maximum number of idle instances to 0 and use the smallest instance possible (F1 for the frontend).
For pricing details see this article - https://cloud.google.com/appengine/pricing#Billable_Resource_Unit_Costs
If you use managed VMs, you'll be billed at Compute Engine instance prices, not App Engine instance prices. As far as I know, the smallest instance usable as a managed VM is g1-small, which costs $0.023 per hour at full sustained usage. If it stays on all month, your minimum bill will be 0.023 * 24 * 30 = $16.56 just for instance hours, excluding disk and traffic. With a minimal number of Datastore operations you may stay within the free quota.
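The instance-hour arithmetic above, as a quick sanity check (the rate is taken from the answer and may have changed since):

```python
# g1-small sustained-usage price, as quoted in the answer above.
HOURLY_RATE = 0.023        # USD per instance-hour
HOURS_PER_MONTH = 24 * 30  # one instance running all month

monthly_instance_cost = HOURLY_RATE * HOURS_PER_MONTH
print(round(monthly_instance_cost, 2))  # 16.56
```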
Every application consumes resources differently. To minimize your cost, you need to know what resources used the majority of your expenses and go from there.
If it is spent on extra instances that were just sitting there - then trim the number of instances to the minimum required and use a lower class instance. If you are seeing a lot of expense on datastore calls - then look at optimizing your entities and take advantage of memcache.
Lowest Cost for a simple app:
Use App Engine Standard. It scales to zero instances, so it will not cost anything if there is no traffic. With App Engine Flex you pay for instance hours, and the Flex (GCE) instances are bigger.
Use autoscaling with max instances, F1 instance class:
With autoscaling you do not need to guess how many instances you need. F1 is the smallest instance class. Set max instances in case you get DoS'd or receive more traffic than you can afford.
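As an app.yaml sketch of that setup (standard environment; the runtime and the `max_instances` value here are placeholders you would set for your own app and budget):

```yaml
runtime: python39
instance_class: F1  # smallest class; also the default for automatic scaling

automatic_scaling:
  max_instances: 2  # cap cost if you get DoS'd or a traffic spike
```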
Stop Instances:
You can stop App Engine versions when you do not expect the app to be used. There will be no charge for instance hours for either Standard or Flex (for Flex there will still be disk charges). The app will be ready to go when you need it again.
App Engine Version Cleanup:
Versions are easy to create and harder to remove. See this post on App Engine project cleanup:
https://medium.com/google-cloud/app-engine-project-cleanup-9647296e796a
Amazon S3 has a limit of 100 buckets per account:
http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
Does Google Cloud Storage have any such limit? I cannot find any mention of one, but wanted to know before I made a design decision.
There are no limits on the number of buckets you can create in Google Cloud Storage.
Keep in mind, though, that bucket names are a global namespace, so if you create them programmatically, make sure the names won't conflict with others.
There is, however, a rate limit on how quickly you can create buckets. See the Quotas & limits page.
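Because bucket names share one global namespace, a common pattern is to derive names from something you already control (such as your project ID) plus a random suffix. A hedged sketch; the prefix-plus-suffix scheme is just one convention, not a GCS requirement beyond the basic naming rules (3-63 characters: lowercase letters, digits, hyphens, and a few others):

```python
import re
import uuid

def unique_bucket_name(prefix):
    """Build a GCS-style bucket name from a prefix plus a random
    suffix, normalized to lowercase letters, digits, and hyphens,
    and clamped to the 63-character maximum."""
    suffix = uuid.uuid4().hex[:12]
    name = "%s-%s" % (prefix.lower(), suffix)
    # Replace anything outside the allowed character set.
    name = re.sub(r"[^a-z0-9-]", "-", name)
    return name[:63]

name = unique_bucket_name("my-project-backups")
```

With a name in hand you would then create the bucket programmatically (e.g. with the google-cloud-storage client) and retry with a fresh suffix if creation fails because the name is already taken.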