I have an application that uses Vault Transit Engine to encrypt/decrypt some data. Unfortunately, the data size depends on the user and can be huge: 100 MB or more. The "transit/encrypt" API takes a base64-encoded string as input, so I need to hold that entire string in memory. In the end, the service's memory consumption is unpredictable (it depends on the user's input).
Is there some method to reduce the memory consumption of this service?
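A common way around this (independent of which Vault client you use) is envelope encryption: ask Transit for a data key via its transit/datakey/plaintext/<key-name> endpoint, encrypt the large payload locally with a streaming cipher, and store Vault's wrapped copy of the key alongside the ciphertext, so only the small key ever travels to Vault. A minimal Java sketch, assuming a hypothetical fetchDataKeyFromVault() helper that performs the HTTP call:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class EnvelopeEncryptSketch {

    // Hypothetical helper: POST to /v1/transit/datakey/plaintext/<key-name>,
    // base64-decode the "plaintext" field of the response, and persist the
    // "ciphertext" field next to the data so Vault can unwrap it later.
    static byte[] fetchDataKeyFromVault() {
        return new byte[32]; // placeholder: a real call returns a random key
    }

    public static void main(String[] args) throws Exception {
        byte[] keyBytes = fetchDataKeyFromVault();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(keyBytes, "AES"),
                new GCMParameterSpec(128, iv));

        // Stream the payload through the cipher: memory use stays constant
        // no matter how large the user's data is. In real code, also persist
        // the IV (e.g., prepend it to the output) for later decryption.
        try (InputStream in = new FileInputStream("big-user-upload.bin");
             OutputStream out = new CipherOutputStream(
                     new FileOutputStream("big-user-upload.enc"), cipher)) {
            in.transferTo(out);
        }
    }
}

To decrypt later, you would send the stored wrapped key to transit/decrypt to recover the data key, then run the same stream with the cipher in DECRYPT_MODE. Memory use stays flat because the payload never has to be base64-encoded in full.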
I am building an IoT device that will produce 200Kb of data per second, and I need to save this data to storage. I currently have about 500 devices, and I am trying to figure out the best way to store the data, and the best database for this purpose. In the past I have stored data in GCP's BigQuery and done processing using Compute Engine instance groups, but the size of the data was much smaller.
This is my best answer based upon the limited information in your question.
The first step is to document/describe what type of data you are processing. Is it structured (SQL) or unstructured (NoSQL)? What types of queries do you need to make? How long do you need to store the data, and what is the expected total data size? This will determine the choice of the backend performing the query processing and analytics.
Next you need to look at the rate of data being transmitted. At 200 Kbits (or is it 200 KBytes?) times 500 devices, this is 100 Mbits (or 800 Mbits) per second. How valuable is the data, and how tolerant is your design of data loss? What is the data transfer rate for each device (cellular, wireless, etc.), and how reliable is the connection?
To push the data into the cloud I would use Pub/Sub. Then process the data to merge, combine, compress, purge, etc., and push it to Google Cloud Storage or to BigQuery (though other options may be better, such as Cloud SQL or Cloud Datastore / Bigtable). The answer for the intermediate processor depends on the previous questions, but you will need some horsepower to process that rate of data stream. Options might be Google Cloud Dataproc running Spark, or Google Cloud Dataflow.
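To make the ingestion step concrete, here is a minimal publisher sketch using the Java google-cloud-pubsub client; the project ID, topic, and attribute names are made up for the example:

import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

public class DeviceReadingPublisher {
    public static void main(String[] args) throws Exception {
        // Hypothetical project and topic names.
        TopicName topic = TopicName.of("my-iot-project", "device-readings");
        Publisher publisher = Publisher.newBuilder(topic).build();
        try {
            byte[] reading = new byte[200 * 1024]; // one 200 KB sample
            PubsubMessage message = PubsubMessage.newBuilder()
                    .setData(ByteString.copyFrom(reading))
                    .putAttributes("deviceId", "device-042")
                    .build();
            // The client batches and retries under the hood; blocking on the
            // future here is only for simplicity of the sketch.
            String messageId = publisher.publish(message).get();
            System.out.println("published " + messageId);
        } finally {
            publisher.shutdown();
        }
    }
}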
There is a lot to consider for this type of design. My answer has raised a bunch of questions; hopefully they will help you architect a suitable solution.
You could also look at IoT Core as a possible way to handle the load-balancing piece (it auto-scales). There would be some up-front overhead registering all your devices, but it then also handles the secure connection for you (TLS stack plus JWT-based authentication for devices using IoT Core).
With 500 devices at 200KB/s, that sounds well within the capabilities of the system. Pub/Sub would be the limiting component, and it handles on the order of 1-2 million messages per second, so it should be fine.
I have a project hosted on Google App Engine, standard runtime.
I have defined a servlet which responds to a GET request with up to 200 KB of data. The problem is that the servlet buffers a very large amount of the data before actually writing it out.
I tried to put an upper limit on the buffering by doing,
resp.setBufferSize(1024);
But this makes no difference. An immediate log of the buffer size,
LOGGER.info("Using buffer of size " + resp.getBufferSize());
tells me that the buffer is of size 1024, but once the data is written, I log the buffer size again and it has grown to a very large size, depending on the data written out.
Now, if I increase the output to a sufficiently large amount, I get an exception while writing out that data. The data itself is not stored anywhere; it is generated and written to the "ServletOutputStream" directly.
Error for /sizeTestServlet
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at java.io.OutputStream.write(OutputStream.java:75)
at com.google.apphosting.runtime.jetty.RpcResponseGenerator.addContent(RpcResponseGenerator.java:65)
at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:644)
at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:589)
Is there some way to disable output buffering?
App Engine instances do not support data streaming - they always return an entire response. If you need to stream data, you can use the flexible environment.
From "How Requests are Handled", section "Streaming Responses":

"App Engine does not support streaming responses where data is sent in incremental chunks to the client while a request is being processed. All data from your code is collected as described above and sent as a single HTTP response."
Note, however, that for small responses like yours (up to 200 KB), streaming may well be less efficient and slower than returning all the data at once.
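For reference, this is roughly what chunked writing looks like in a servlet (a sketch, with a hypothetical fillChunk() generator). On a normal servlet container this caps heap usage, but on App Engine standard the flush() has no effect for the reason quoted above, so the practical workaround is to keep each response small, for example by letting the client fetch the data across several requests:

import java.io.IOException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SizeTestServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("application/octet-stream");
        ServletOutputStream out = resp.getOutputStream();
        byte[] chunk = new byte[8 * 1024];
        for (int i = 0; i < 25; i++) {  // ~200 KB total
            fillChunk(chunk, i);        // hypothetical data generator
            out.write(chunk);
            out.flush();                // ignored by the GAE standard runtime
        }
    }

    // Placeholder for whatever produces the response data.
    private void fillChunk(byte[] chunk, int sequence) {
        java.util.Arrays.fill(chunk, (byte) sequence);
    }
}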
I am looking at using a spell checker for my GAE app. We already have a spell-checking algorithm, but I'm trying to figure out how best to store and load dictionary files for performance.
I am considering the following strategies:
Place the dictionary data in one or more text files in local App Engine storage and load/read them using standard IO methods (open(), read(), etc.)
Place the dictionary data in GCS and load/read it using GCS IO methods
Place the dictionary data in an ndb.Model and load/cache the information
One cache I don't quite understand is the context cache -- is this a cache attached to a given instance? I.e., if I have a resident instance that is spun up, can I load the dictionary data into the instance's RAM so that accessing it is extremely fast (microsecond vs. millisecond seek/get times)? The dictionary data will probably be a sharded list of some sort that we'll optimize for performance. Are there other data storage methods/structures I'm not considering here that may be more appropriate? Thanks.
The cache (its full name is memcache) isn't exactly RAM, but it is similar. When used with NDB it acts like a buffer: on writes, data goes to memcache first and then to the DB. Though this may sound slower, it's not, as writes to the DB take a while before they are accessible. On reads it checks memcache; if the data exists there it uses that, otherwise it pulls from the DB, stores the result in memcache, and then gives you the data. Like RAM, though, it's volatile, so you cannot guarantee the information is always available; it's limited (depending on what type of instance you have) and can be flushed with no warning or reason. You can read more here:
https://developers.google.com/appengine/docs/python/memcache/
https://developers.google.com/appengine/articles/scaling/memcache
Ultimately memcache will be the fastest and most accessible option, as it is shared amongst all your instances: if one instance pulls some data from the datastore, then all of them can access it quickly. Even counting misses it is still the fastest of the options, as the other ones will fill up your instance memory and may cause errors and performance issues.
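The read-through behaviour described above looks roughly like this in the Java runtime (the Python API linked above is equivalent); loadDictionaryShard() is a hypothetical loader for whichever backing store you choose:

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;
import java.util.Arrays;
import java.util.List;

public class DictionaryCache {
    private static final MemcacheService cache =
            MemcacheServiceFactory.getMemcacheService();

    @SuppressWarnings("unchecked")
    public static List<String> getShard(String shardKey) {
        List<String> shard = (List<String>) cache.get(shardKey);
        if (shard == null) {                 // miss: evicted or never loaded
            shard = loadDictionaryShard(shardKey);
            cache.put(shardKey, shard);      // may be evicted again at any time
        }
        return shard;
    }

    // Hypothetical loader: read the shard from the datastore, GCS, or a file.
    private static List<String> loadDictionaryShard(String shardKey) {
        return Arrays.asList("example", "words");
    }
}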
So basically, I am trying to decide if I should go for dedicated memcache.
My scenario is as follows:
I am working on an application that provides real time analysis for some public data.
I am going to maintain a total of 15 KB of key/value data in memcache (20 keys, variable values)
Meanwhile the values are constantly changing (around 20 updates across all keys every 3 seconds)
Hits to the website will perform requests for these keys (also around one request every 3 seconds per user)
I am assuming that 10,000 users hit the website at once; this will produce about 20 * 10,000 = 200,000 key requests every 3 seconds.
Considering the size of the memcache data (relatively small), but also the number of requests produced (roughly 66,000 memcache key/value accesses per second), would dedicated memcache be the more "risk-averse" deal for this situation?
Thanks,
The cached data seems crucial to the correct operation of your web application. If you lost data it might be a disservice to thousands of users! Hopefully your app also periodically persists the cached data and automatically recovers from cache erasure.
Although the size of the data is small, shared memcache still carries more risk than dedicated memcache of evicting some or all of the data at unpredictable times. The design must also deal correctly with partial data loss. With shared memcache, not only your own memory pressure but also pressure from other applications and cloud operational factors make it more likely that App Engine discards the cache.
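To make "deal correctly with partial data loss" concrete, here is a small sketch using the Java memcache API (with a hypothetical recomputeValue() fallback) that batch-reads all 20 keys in one call and refills only the ones that were evicted:

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AnalyticsSnapshot {
    private static final MemcacheService cache =
            MemcacheServiceFactory.getMemcacheService();

    public static Map<String, Object> readAll(List<String> keys) {
        // One round trip for all keys; entries missing from the result
        // were evicted (or never written).
        Map<String, Object> found = new HashMap<>(cache.getAll(keys));
        for (String key : keys) {
            if (!found.containsKey(key)) {
                Object value = recomputeValue(key); // hypothetical fallback
                cache.put(key, value);
                found.put(key, value);
            }
        }
        return found;
    }

    // Placeholder: rebuild the value from the persisted source of truth.
    private static Object recomputeValue(String key) {
        return 0L;
    }
}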
With this data size you will not get any benefit from using dedicated memcache.
The rate at which you access memcache is not relevant for this decision.
I am curious as to how caching works in Google App Engine or any cloud-based application. Since there is no guarantee that requests are sent to the same server, does that mean that if data is cached on the 1st request on Server A, then the 2nd request, which is processed by Server B, will not be able to access the cache?
If that's the case (cache only local to a server), won't it be unlikely (depending on the number of users) that a request uses the cache? E.g., Google probably has thousands of servers.
With App Engine you cache using memcached. This means that a cache server holds the data in memory (rather than each application server). The application servers (for a given application) all talk to the same cache server (conceptually; there could be sharding or replication going on under the hood).
In-memory caching on the application server itself will potentially not be very effective, because there is more than one such server (although for a given application only a few instances are active; it is not spread out over all of Google's servers), and also because Google is free to shut instances down at any time (which is a real problem for Java apps that take some time to boot up again; that is why you can now pay to keep idle instances alive).
In addition to these performance/effectiveness issues, in-memory caching on the application server could lead to consistency problems (every refresh shows different data when the caches are not in sync).
It depends on the type of caching you want to achieve.
Caching on the application server itself can be interesting if you have a complex in-memory object structure that takes time to rebuild from data loaded from the database. In that specific case, you may want to cache the result of the computation. If the structure is large, loading it from a local cache will be faster than from a shared memcache.
If having consistent values between memory and the database is paramount, you can do a checksum/timestamp check against a value stored in the datastore every time you use the cached value. Keeping the checksum/timestamp on a small entity or in a global cache will speed up the check.
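A rough sketch of that timestamp check, using the Java memcache API with a hypothetical rebuildFromDatastore() and a "structure-version" counter that writers bump (e.g., via memcache increment()) whenever the underlying data changes:

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class VersionedLocalCache {
    private static final MemcacheService shared =
            MemcacheServiceFactory.getMemcacheService();

    private Object localStructure;  // the expensive in-memory object
    private Long localVersion;      // version it was built from

    public synchronized Object get() {
        // Small, fast read of the shared version; writers increment it
        // (shared.increment("structure-version", 1)) after each update.
        Long current = (Long) shared.get("structure-version");
        if (localStructure == null || current == null
                || !current.equals(localVersion)) {
            localStructure = rebuildFromDatastore(); // hypothetical, slow
            localVersion = current;
        }
        return localStructure;
    }

    private Object rebuildFromDatastore() {
        return new Object(); // placeholder
    }
}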
One big issue when using a global memcache is ensuring proper synchronization when "refilling" it, after a value is not yet present or has been flushed. If multiple servers do the check at the exact same time, you may end up with several distinct servers doing the refill simultaneously. If the operation is idempotent, this is not a problem; if not, it is a potential and very hard-to-trace bug.
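One way to mitigate that with App Engine's memcache is its atomic "add" semantics: a put() with ADD_ONLY_IF_NOT_PRESENT succeeds for exactly one caller, so it can serve as a short-lived lock while the other instances briefly fall back to the database. A sketch, with hypothetical key names and loader:

import com.google.appengine.api.memcache.Expiration;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheService.SetPolicy;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class GuardedRefill {
    private static final MemcacheService cache =
            MemcacheServiceFactory.getMemcacheService();

    public static Object getOrRefill(String key) {
        Object value = cache.get(key);
        if (value != null) return value;

        // Atomic "add": true for exactly one concurrent caller.
        boolean iAmTheRefiller = cache.put(
                key + ":lock", Boolean.TRUE,
                Expiration.byDeltaSeconds(10),       // lock expires on its own
                SetPolicy.ADD_ONLY_IF_NOT_PRESENT);

        value = loadFromDatabase(key);               // hypothetical loader
        if (iAmTheRefiller) {
            cache.put(key, value);
            cache.delete(key + ":lock");
        }
        return value;
    }

    private static Object loadFromDatabase(String key) {
        return "value-for-" + key; // placeholder
    }
}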