ndb.BlobProperty vs BlobStore: which is more private and more secure - google-app-engine

I have been reading all over Stack Overflow about the datastore vs. the Blobstore for storing and retrieving image files. Everything points towards the Blobstore, except for one thing: privacy and security.
In the datastore, my users' photos are private: I have full control over who gets a blob. In the Blobstore, however, anyone who knows the URL can conceivably access my users' photos. Is that true?
Here is a quote that is supposed to give me peace of mind, but it's still not clear to me: can anyone with the blob key still access the photos? (from Store Photos in Blobstore or as Blobs in Datastore - Which is better/more efficient/cheaper?)
the way you serve a value out of the Blobstore is to accept a request
to the app, then respond with the X-AppEngine-BlobKey header with the
key. App Engine intercepts the outgoing response and replaces the body
with the Blobstore value streamed directly from the service. Because
app logic sets the header in the first place, the app can implement
any access control it wants. There is no default URL that serves
values directly out of the Blobstore without app intervention.
All of this is to ask: which is more private and more secure for serving images, and why: the datastore or the Blobstore? Or, hey, Google Cloud Storage (which I currently know nothing about)?

If you use google.appengine.api.images.get_serving_url then yes, the URL returned is public. However, the URL is not guessable from a blob's key, nor does it even exist before you call get_serving_url (or after you call delete_serving_url).
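For reference, a minimal sketch of that lifecycle (blob_key is assumed to be a valid Blobstore key from an earlier upload):

    from google.appengine.api import images

    # blob_key is assumed to be a BlobKey from an earlier upload.
    url = images.get_serving_url(blob_key)   # the public URL now exists
    # ... the URL serves the image until you remove it ...
    images.delete_serving_url(blob_key)      # the URL stops working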
If you need access control on top of the data in the Blobstore, you can write your own handlers and enforce the access control there.
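For example, a minimal webapp2 sketch of such a handler, assuming a hypothetical UserPhoto model that records which user owns which blob:

    import webapp2
    from google.appengine.api import users
    from google.appengine.ext import ndb
    from google.appengine.ext.webapp import blobstore_handlers

    class UserPhoto(ndb.Model):
        """Hypothetical model linking an uploaded blob to its owner."""
        owner_id = ndb.StringProperty()
        blob_key = ndb.BlobKeyProperty()

    class PhotoHandler(blobstore_handlers.BlobstoreDownloadHandler):
        def get(self, photo_id):
            user = users.get_current_user()
            photo = UserPhoto.get_by_id(int(photo_id))
            # Only the owner gets the photo; everyone else gets a 404.
            if not user or not photo or photo.owner_id != user.user_id():
                self.abort(404)
            # send_blob sets the X-AppEngine-BlobKey header mentioned in the
            # quote above; App Engine streams the blob in the response body.
            self.send_blob(photo.blob_key)

    app = webapp2.WSGIApplication([(r'/photo/(\d+)', PhotoHandler)])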

BlobProperty is just as private and secure as the Blobstore; it all depends on the application that serves the requests. Your application can implement any permission check before sending the contents to the user, so I don't see any difference, as long as you serve all the images yourself and don't intentionally create publicly available URLs.
Actually, I would not even think about storing photos in a BlobProperty, because that way the data ends up in the datastore instead of the Blobstore, and it costs significantly more to store data in the datastore. The Blobstore, on the other hand, is cheap and convenient.
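As an illustration of the first point, here is a rough sketch of serving an ndb.BlobProperty behind the same kind of permission check (the Avatar model and its fields are hypothetical):

    import webapp2
    from google.appengine.api import users
    from google.appengine.ext import ndb

    class Avatar(ndb.Model):
        """Hypothetical model keeping the image bytes in the datastore."""
        owner_id = ndb.StringProperty()
        image = ndb.BlobProperty()       # entity size is capped at ~1 MB
        content_type = ndb.StringProperty(default='image/jpeg')

    class AvatarHandler(webapp2.RequestHandler):
        def get(self, avatar_id):
            user = users.get_current_user()
            avatar = Avatar.get_by_id(int(avatar_id))
            # The app decides who may see the bytes before writing them out.
            if not user or not avatar or avatar.owner_id != user.user_id():
                self.abort(404)
            self.response.headers['Content-Type'] = str(avatar.content_type)
            self.response.out.write(avatar.image)

    app = webapp2.WSGIApplication([(r'/avatar/(\d+)', AvatarHandler)])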

Related

Limit upload size for appengine interface to cloud store

Consider an image (avatar) uploader to Google Cloud Storage which starts in the user's web browser, then passes through a Go App Engine instance that handles standard compression/cropping etc., and then stores the resulting image as an object in Cloud Storage.
How can I ensure that the App Engine instance isn't overloaded by too much data, or by bad data? In other words, I think I'm asking two questions (or possibly not):
How can I limit the amount of data allowed to be sent to an appengine instance in a single request, or is there already a default safe limit?
How can I validate the data to make sure it's proper jpg/png/gif before attempting to process it with standard go image libraries?
All App Engine requests are limited to 32MB.
You can check the size of the file being uploaded before the upload starts.
You can verify the file's mime-type and only allow correct files to be uploaded.
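The question targets the Go runtime, but as a rough sketch of the last two checks in Python (the language used elsewhere in this thread), with an arbitrary 8 MB cap and magic-byte sniffing for the three formats mentioned:

    # Arbitrary example limit; pick whatever fits your app (requests are
    # capped at 32 MB by App Engine anyway).
    MAX_UPLOAD_BYTES = 8 * 1024 * 1024

    # Leading magic bytes for the three formats the question mentions.
    SIGNATURES = {
        'image/jpeg': ['\xff\xd8\xff'],
        'image/png':  ['\x89PNG\r\n\x1a\n'],
        'image/gif':  ['GIF87a', 'GIF89a'],
    }

    def validate_upload(data):
        """Return the detected mime type, or None if the data is too large
        or is not a JPEG/PNG/GIF."""
        if len(data) > MAX_UPLOAD_BYTES:
            return None
        for mime, signatures in SIGNATURES.items():
            if any(data.startswith(sig) for sig in signatures):
                return mime
        return None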

Using azure mobile services how do I download a blob from a private container?

I am using Azure Mobile Services to store images for a web application.
I have managed to successfully upload images to a private container. I've followed the logic in this introductory guide (http://code.msdn.microsoft.com/windowsapps/Upload-File-to-Windows-c9169190), i.e. when uploading the file to the database an SAS is generated by a node script called when inserting a record into a table.
One of the reasons to use this approach from mobile apps is so that the storage key is not stored within the application source itself.
Conforming with that idea I am now struggling to find an example of how to download the images.
Perhaps I should update the read function for the same table and have it return a SAS which can be used to access the image.
Does this sound reasonable, or are there better approaches?
Any assistance is greatly appreciated.
It sounds to me like you are on the right track. If you are storing the image in a private container and want the mobile device to read it back then yes, you will want to produce a SAS that allows reading and get that back to the device. The device code can then make a call directly against BLOB storage using that SAS URL to retrieve the image.
This applies only if you want the container private. If the container is public then just returning the URL (like they have in the article you link to) should be fine.
It also depends on how private you need the image to be. For example, let's say you have a container created per user. If the container has a Shared Access Signature policy on it with a really far-off expiration date, then technically someone still needs the URL with the SAS to view it, but you can create that SAS once and store it, like the sample does. The mobile app can then be given the URL when it reads data from your service and get to the BLOB directly without your service creating an additional SAS. In my opinion this option only really works if the images aren't going to be around very long, or you don't really care that someone who sniffs the URL from the network traffic can access them.
If you want it fairly secure and do not know how long the images will be around, then you should go with your stated approach of getting a SAS for read when the app reads from the related table data. The SAS can have a fairly short expiry on it and the mobile device can cache the result.
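As a rough illustration of generating such a short-lived read SAS server-side (shown with the modern azure-storage-blob Python package rather than the Mobile Services node API the question uses; the account name and key are placeholders):

    from datetime import datetime, timedelta
    from azure.storage.blob import BlobSasPermissions, generate_blob_sas

    # Placeholders -- in a real service these come from configuration and
    # are never shipped to the client application.
    ACCOUNT_NAME = 'myaccount'
    ACCOUNT_KEY = '<storage account key>'

    def read_url_for(container_name, blob_name, minutes=10):
        """Return a short-lived, read-only URL for a blob in a private container."""
        sas = generate_blob_sas(
            account_name=ACCOUNT_NAME,
            container_name=container_name,
            blob_name=blob_name,
            account_key=ACCOUNT_KEY,
            permission=BlobSasPermissions(read=True),
            expiry=datetime.utcnow() + timedelta(minutes=minutes),
        )
        return 'https://%s.blob.core.windows.net/%s/%s?%s' % (
            ACCOUNT_NAME, container_name, blob_name, sas)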

How long do blob URLs served by App Engine remain valid?

I was wondering if anyone knew how long image URLs served from the Google App Engine Blobstore remain valid.
I have been tracking one URL from which I served an image out of the Blobstore on 1/3/13, and it's still there.
I am asking specifically so I can cache the image URL instead of requesting it repeatedly. If I did this I would still check whether the image is there, but how often would I need to check that?
Thanks!
They remain valid until you either
a. call delete_serving_url, or
b. delete the underlying blob.
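Since the serving URL stays valid until one of those two things happens, it is safe to cache it and regenerate it lazily. A minimal sketch (the memcache key prefix is arbitrary; get_serving_url returns the same URL for the same blob, so regenerating is harmless):

    from google.appengine.api import images, memcache

    def serving_url_for(blob_key):
        """Return a cached serving URL, regenerating it when not cached."""
        cache_key = 'serving-url:%s' % blob_key   # arbitrary key prefix
        url = memcache.get(cache_key)
        if url is None:
            url = images.get_serving_url(blob_key)
            memcache.set(cache_key, url)
        return url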

Google App Engine: keep state of an object between HTTP-requests (Java)

A user makes an HTTP request to the server. This request is processed by an object of some class, let's call it "Processor". Two minutes later the same user makes another HTTP request, and I want it to be processed by the same instance of Processor as the first one. So basically I want to keep the state of some object across several requests.
I know that I can save it to the datastore each time and then load it back, but this approach seems very slow. Is there a way to keep objects in RAM somewhere?
How about using memcache?
You can't ensure that consecutive requests to your app will go to the same instance, but memcache can help reduce or eliminate the overhead of accessing the datastore for each request.
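A rough sketch of that idea, shown in Python for consistency with the rest of this thread (the Java MemcacheService API is analogous; the key prefix and the datastore fallback are hypothetical):

    from google.appengine.api import memcache

    def load_processor_state(user_id):
        """Fetch cached per-user state, falling back to the datastore."""
        state = memcache.get('processor-state:%s' % user_id)
        if state is None:
            state = load_state_from_datastore(user_id)  # hypothetical fallback
            memcache.set('processor-state:%s' % user_id, state, time=600)
        return state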
It sounds like what you are describing is a session.
I am not sure which language runtime and web framework you are using, but it is sure to include support for sessions. (If you are using Java you will need to enable it.)
The standard session mechanism puts a small ID in a cookie that is stored in the user's browser. On every request, each of which could go to a different application server, this ID is used as a key to read and write persistent information from the datastore.
If the datastore accesses are too slow for you, I would suggest not using memcache for this session storage, because memcache is by design unreliable, so the user's session information could disappear at any time, which would be a bad experience for them.
If the amount of data you want to store is less than about a few kilobytes, then I recommend doing what Play Framework does, which is to encrypt your session data and store it directly in a cookie stored in the user's browser. This is fast and truly stateless.
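A minimal sketch of the tamper-evident part of that approach, using a signed (HMAC) cookie value; a real implementation would also encrypt the payload if it has to stay confidential, and the secret is a placeholder:

    import base64, hashlib, hmac, json

    SECRET = 'application secret key'   # placeholder; keep out of source control

    def encode_session(data):
        """Serialize and sign session data for storage in a cookie."""
        payload = base64.urlsafe_b64encode(json.dumps(data))
        signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return '%s.%s' % (payload, signature)

    def decode_session(cookie_value):
        """Verify the signature and return the session data, or None."""
        try:
            payload, signature = cookie_value.rsplit('.', 1)
        except ValueError:
            return None
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            return None
        return json.loads(base64.urlsafe_b64decode(payload))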
If you have more data than can be stored in a cookie, and you don't want to use the datastore, you could use JavaScript local storage in the browser and use AJAX calls to communicate with the server. (If you want to support older browsers you may need the jStorage wrapper library.)
If memcache isn't enough, you could use backends to maintain state. Use a resident backend (or a set of them) and route incoming requests from the frontend to the backend machine that has the state.
Docs: Python Java

Uploading an image to a static directory in my app engine project?

Is it possible to copy images into a static directory under my app engine project domain?
For example, when a user signs up for my app, I want them to supply an image for themselves, and I would copy it to a static directory but rename the image using their username, like:
www.mysite.com/imgs/username.jpg
www.mysite.com/imgs/john.jpg
www.mysite.com/imgs/jane.jpg
but I don't know where to start with this, since the JDO API doesn't really deal with this sort of thing (I think with JDO they'd want me to store the image data as a blob associated with my User objects). Can I just upload the images to a static directory like this?
Thanks
No. App Engine has a provision for static files, but only for static files you upload along with your code. If users can upload the data, it is not really "static" in the App Engine context. Depending on how large a picture you want users to be able to upload, you will want to use either the regular datastore (for storing up to 1 MB) or the Blobstore for bigger files (up to 2 GB).
I'm almost certain you need to use the Blobstore for dynamic uploads. Even if you don't strictly need to, you probably want to for reasons of session independence. Since Blobstore operations are expensive relative to serving a static file, you could have a task queue move the (now static) images into static storage.
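For reference, a minimal Python sketch of that Blobstore upload flow (the Java Blobstore API mirrors it; the URL paths and field name are arbitrary):

    import webapp2
    from google.appengine.ext import blobstore
    from google.appengine.ext.webapp import blobstore_handlers

    class UploadFormHandler(webapp2.RequestHandler):
        def get(self):
            # Blobstore hands out a one-time upload URL; the browser posts
            # the file there and App Engine then calls /upload_done.
            upload_url = blobstore.create_upload_url('/upload_done')
            self.response.out.write(
                '<form action="%s" method="POST" enctype="multipart/form-data">'
                '<input type="file" name="avatar"><input type="submit">'
                '</form>' % upload_url)

    class UploadDoneHandler(blobstore_handlers.BlobstoreUploadHandler):
        def post(self):
            blob_info = self.get_uploads('avatar')[0]
            # Store blob_info.key() on the user entity, then serve it later
            # through an access-controlled handler or get_serving_url().
            self.redirect('/')

    app = webapp2.WSGIApplication([
        ('/upload_form', UploadFormHandler),
        ('/upload_done', UploadDoneHandler),
    ])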
