GAE basic scaling override memory limit - google-app-engine

I am a cloud noob.
I have a Node application on GAE. I am using basic scaling to serve the requests. I have specified instance class B4_1G, which has a memory limit of 2048 MB (https://cloud.google.com/appengine/docs/standard#second-gen-runtimes).
The application is supposed to perform DOM scraping using Cheerio on some extremely large HTML files. This works well until the HTML I need to scrape is beyond huge; then I start getting a memory error in the logs:
Exceeded hard memory limit of 2048 MB with 2052 MB after servicing 1 requests total. Consider setting a larger instance class in app.yaml.
Is there any way I can override the memory limit to, say, 4096 MB or even more?
Setting the resources additionally in app.yaml did not seem to help.
Any help or pointers appreciated. Thank you.

The link you provided shows the supported instance sizes.
If you need more than 2 GB of memory, you will need to switch to App Engine Flexible or a Compute Engine instance.
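In the Flexible environment, memory is configured through a resources block in app.yaml rather than an instance class. A minimal sketch of what that might look like for a Node app; the values are illustrative, not a recommendation:

```yaml
# app.yaml for App Engine Flexible (sketch; values are illustrative)
runtime: nodejs
env: flex
resources:
  cpu: 2
  memory_gb: 4      # above the 2 GB ceiling of the standard B4_1G class
  disk_size_gb: 10
```

The resources block is a Flexible-only setting, which would explain why adding it to a standard environment app.yaml did not seem to help.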

Related

What does App Engine do to prevent instances running out of memory?

I have a few requests that need to use an extensive amount of memory, i.e. 40 MB more than other requests.
At the default of 10 max concurrent requests on an F1 auto-scaling instance, that can potentially use 400+ MB, which is way more than the roughly 130 MB of system memory it has available. There is no memory utilization setting in the yaml file, so I wonder what can be done to prevent such situations.
Google App Engine doesn't do any memory management beyond Python garbage collection.
My advice is:
Try to release memory as soon as the response is sent.
Try to optimize memory usage in that part of the code; you may need another service to help with the memory problem, e.g. serving files via Google Cloud Storage.
Scale the instance up to F2, which is more suitable for production, but you will still need to optimize your memory usage as traffic grows (see the app.yaml sketch after this list).
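One setting that does exist in app.yaml and bears on this is the per-instance concurrency cap the question mentions. A hedged sketch (the runtime line and the value 5 are assumptions for illustration):

```yaml
# app.yaml (standard environment) - sketch; values are illustrative
runtime: python27
instance_class: F2              # more memory headroom than the default F1
automatic_scaling:
  max_concurrent_requests: 5    # fewer concurrent requests means lower peak memory per instance
```

Lowering max_concurrent_requests below the default of 10 trades per-instance throughput for a smaller worst-case memory footprint.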

Datastore partition using namespace api

I want to use the multitenancy feature of GAE in my app's datastore. The docs say:
Using the Namespaces API, you can easily partition data across tenants simply by specifying a unique namespace string for each tenant.
So my questions are:
1. How many partitions (of one datastore) can be created using the Namespaces API?
2. Is there any limit on the size of each partition?
3. How would I know if the size of a partition has grown beyond the GAE free quota?
I'll really appreciate any help clearing this up.
Thanks in advance!
How many partitions (of one datastore) can be created using the Namespaces API?
Namespaces help you increase your app's scalability; there is no limit on their number.
Is there any limit on the size of each partition?
The App Engine free quota is fixed, and it is the only limit. If you activate billing, you set the budget limit yourself. App Engine offers very high scalability.
How would I know if the size of a partition has grown beyond the GAE free quota?
As with the second question, the quota applies to the whole application, not per namespace. If you consume all the free resources, your app will throw an error instead of serving the appropriate handler until the quota is replenished.
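For reference, switching partitions is just a matter of setting the namespace before touching the datastore. A minimal sketch using the legacy Python runtime's namespace_manager; the Record model and tenant_id parameter are hypothetical:

```python
# Sketch: partition datastore data per tenant by switching namespaces.
from google.appengine.api import namespace_manager
from google.appengine.ext import ndb

class Record(ndb.Model):  # hypothetical per-tenant entity
    payload = ndb.StringProperty()

def save_for_tenant(tenant_id, payload):
    previous = namespace_manager.get_namespace()
    try:
        # Every datastore operation from here on targets this tenant's partition.
        namespace_manager.set_namespace(tenant_id)
        Record(payload=payload).put()
    finally:
        # Restore the previous namespace so later operations are unaffected.
        namespace_manager.set_namespace(previous)
```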

Memory usage of Google App Engine instance

I am developing an application using App Engine to collect, store and deliver data to users.
During my tests, I have 4 data sources which send HTTP POST requests to the server every 5s (all requests are exactly uniform).
The server stores received data to the datastore using Objectify.
At the beginning, all requests are managed by one instance (class F1) with 0.8 QPS, a latency of 80 ms and 80 MB of memory.
But over the following hours, the used memory increases and goes over the limit of the F1 instance.
However, the scheduler doesn't start another instance, and when I stop all traffic, the average memory never decreases.
Now I have 150 MB of memory instead of 128 MB (the F1 class limit), even though I have stopped all traffic.
I tried setting the performance settings both manually and automatically, and disabling Appstats, without any improvement.
I use Memcache and the datastore, have no cron jobs or task queues, and the traffic is always the same.
What are the possible reasons the average memory increases?
Is it a bug in the admin console?
What determines the amount of memory used per request?
Another question:
Does Google have a special discount for datastore reads/writes (>30 million ops/day)?
Thank you,
Joel
Regarding the special price, I don't think there is one. If your app needs that amount of read/write quota, you should look into optimizing to minimize writes, and perhaps implement some sort of bulk writing if possible.
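To illustrate the bulk-writing suggestion: the question uses Objectify, which has an equivalent batch save, but the idea in Python ndb terms looks like this (the Sample model and its fields are hypothetical):

```python
# Sketch: batching datastore writes into one RPC instead of one put() per entity.
from google.appengine.ext import ndb

class Sample(ndb.Model):  # hypothetical model for the posted data
    source = ndb.StringProperty()
    value = ndb.FloatProperty()

def store_batch(samples):
    # samples: a list of dicts like {'source': ..., 'value': ...}
    entities = [Sample(source=s['source'], value=s['value']) for s in samples]
    ndb.put_multi(entities)  # one batched round trip instead of len(samples) puts
```

Note that batching saves RPC overhead and latency; each entity write is still billed individually, so minimizing writes (e.g. aggregating samples before storing) is what actually reduces quota usage.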
On the memory issue: you should post your code in order to get a straight answer, since there are too many things to consider when discussing memory usage. Knowing more about your case will help.
Cheers,
Kjartan

Silverlight with Isolated Storage - Port from Windows Forms with SQLite

I am about to port a Windows Forms app to WPF or Silverlight. The current application uses a cache to store SQL responses temporarily, as well as for later use, so that the queries don't have to be run again. The local cache should be able to handle 1 to 4 GB.
1) Is Isolated Storage capable of handling this amount of data? A search has not given me a clear answer so far; many talk about a 1 MB limit, some say storage is of size long.
2) SQLite has a C# managed-code port, but I am not sure whether it is stable enough to use in a professional application. Any experience or opinions?
3) Is it possible to use the SQLite ADO.NET provider with Isolated Storage, or would it be an idea to run a local server that is responsible for the cache only? Or is there any way to achieve this through COM access?
4) Is there any file-based database system you can recommend as a substitute for SQLite, in case nothing else works?
Any other ideas are welcome; I need the local cache. Otherwise I would have to build the application in both Silverlight AND WPF, and I would like to avoid that.
Thanks!
Regarding your first question:
Is Isolated Storage capable of handling this amount of data? A search has not given me a clear answer so far; many talk about a 1 MB limit, some say storage is of size long.
Basically, by default Silverlight apps are granted 1 MB of storage, but they can request an increase in their storage quota (see here and here for more details).
Hope this helps

app engine - how can i increase the datastore item size limit

How can I increase the datastore item size limit, which is currently only 1 MB, in App Engine?
If I buy more storage, what will happen to this limit?
Thanks
Not exactly. Enabling billing won't remove the 1 MB entity size limit; however, it will allow you to use the Blobstore API, where you can store blobs up to 2 GB in size. These are typically uploaded files, but could just as easily be pickled Python objects.
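For context, a minimal Blobstore sketch in the Python runtime (handler names, routes, and the form field are assumptions for illustration):

```python
# Sketch: uploading and serving files through the Blobstore API (legacy Python runtime).
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers
import webapp2

class FormHandler(webapp2.RequestHandler):
    def get(self):
        # The form must POST to a one-time URL generated by Blobstore.
        upload_url = blobstore.create_upload_url('/upload')
        self.response.write(
            '<form action="%s" method="POST" enctype="multipart/form-data">'
            '<input type="file" name="file"><input type="submit"></form>' % upload_url)

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        blob_info = self.get_uploads('file')[0]  # BlobInfo for the stored blob
        self.redirect('/serve/%s' % blob_info.key())

class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, blob_key):
        self.send_blob(blob_key)  # streams blobs far larger than the 1 MB entity limit

app = webapp2.WSGIApplication([
    ('/', FormHandler),
    ('/upload', UploadHandler),
    ('/serve/([^/]+)', ServeHandler),
])
```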
While each entity is limited to 1 MB, HTTP requests and responses can be up to 10 MB, so you can handle slightly larger files by spanning the blob contents across multiple entities, splitting the file on upload and stitching it back together on download.
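And a hedged sketch of the entity-spanning idea described above, using ndb; the model, chunk size, and key scheme are all assumptions:

```python
# Sketch: spanning a payload larger than 1 MB across multiple datastore entities.
from google.appengine.ext import ndb

CHUNK = 900 * 1024  # stay safely under the 1 MB entity size limit

class BlobChunk(ndb.Model):
    data = ndb.BlobProperty()

def save_large(name, payload):
    # Split on upload: one entity per chunk, keyed "<name>:<index>".
    chunks = [BlobChunk(id='%s:%d' % (name, i // CHUNK), data=payload[i:i + CHUNK])
              for i in range(0, len(payload), CHUNK)]
    ndb.put_multi(chunks)
    return len(chunks)

def load_large(name, count):
    # Stitch back together on download, in index order.
    keys = [ndb.Key(BlobChunk, '%s:%d' % (name, i)) for i in range(count)]
    return ''.join(chunk.data for chunk in ndb.get_multi(keys))
```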
You can use the Blobstore API to handle objects that are larger than 1 MB.
I don't know exactly which 1 MB limit you are talking about, but for GAE, if you want to do anything above the free quota, enable billing for your application :)
