Google Cloud Storage public link does not become invalid when unchecking - google-app-engine

I am using Google Cloud Storage to upload images, and I am now testing it from the Cloud Console.
After I upload a picture, if I check the Share publicly checkbox to obtain a public link, I get (obviously) a publicly accessible URL: https://storage.googleapis.com/bucket_name/pictureName .
Then, if I uncheck the Share Publicly checkbox, it makes the following request:
Request URL:https://clients6.google.com/storage/v1_internal/b/bucketName/o/pictureName.jpg/acl/allUsers?key=AIzaSyCI-yuie5UVOi0DjtiCwWBwQ1djkiuo1g
Request Method:DELETE
The DELETE request succeeds, but the public URL remains accessible. I thought it might stay valid for a while, but after an hour it is still available.
So what is the right way to remove the public URL? How do I restrict access to a stored file after I have made it public?

See the documentation on cache control and consistency. In particular:
Note: If you do not specify a cache lifetime, a publicly accessible object can be cached for up to 60 minutes.
So I'm guessing this is working as intended and your object is cached. Have you tried waiting a little longer?

In Sharing your data publicly, it's shown that there are two ways to stop sharing an object publicly:
Deselect the checkbox under Shared Publicly, as you've mentioned already.
Edit the object permissions and remove the entry with ID allUsers.
The reason you are still able to access the object publicly is indeed caching, as mentioned by @jterrace. The Cache control and consistency article referenced there explains the effect of this eventual consistency.
You can test this behavior by sharing an object publicly and unsharing it immediately after. In most cases the object will remain publicly accessible for the cache duration. You can shorten this window by setting Cache-Control headers on the object, such as max-age.
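If you manage the object programmatically, you can both remove the allUsers grant and set a short cache lifetime so the permission change is not masked by cached copies. A minimal sketch using the Python google-cloud-storage client, assuming the bucket_name and pictureName.jpg names from the question:

from google.cloud import storage

client = storage.Client()
blob = client.bucket("bucket_name").blob("pictureName.jpg")  # assumed names

# Short cache lifetime so ACL changes take effect quickly
blob.cache_control = "private, max-age=0, no-cache"
blob.patch()

# Drop the allUsers read grant, equivalent to unchecking Share publicly
blob.acl.reload()
blob.acl.all().revoke_read()
blob.acl.save()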

If your publicly shared URL looks like https://storage.googleapis.com/bucket_name/pictureName and you then delete the file or uncheck the Share Publicly checkbox, the object can remain available for up to 60 minutes, which is the default cache time in Google Cloud.
To avoid the issue, pass a query parameter, for example:
https://storage.googleapis.com/bucket_name/pictureName?avoidCache=1
Pass a different random number in the query string on every request so cached copies are bypassed.
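A tiny Python illustration of that cache-busting idea, with the bucket and object names assumed from the question:

import random

base_url = "https://storage.googleapis.com/bucket_name/pictureName"
# A fresh random value per request makes each URL look unique to intermediate caches
url = "%s?avoidCache=%d" % (base_url, random.randint(0, 10**9))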

Related

CDN serving private images / videos

I would like to know how CDNs serve private data such as images and videos. I came across a Stack Overflow answer, but it seems to be specific to Amazon CloudFront.
As a popular example, let's say the problem in question is serving content inside Facebook: there is access-controlled content at the individual-user level and at the group level, plus some publicly accessible data.
All logic of what can be served to whom resides on the server!
The first request through the CDN goes to the application server and is validated for access rights. But there is a catch to keep in mind:
Assume that first request succeeds; after that, anyone will be able to access the image with that CDN URL. I tested this with a restricted, user-uploaded Facebook image, and it was accessible via the CDN URL by others too, even after I logged out. So the image stays accessible until the CDN cache expires.
I believe this should work: all requests first go to the main application server. After it determines whether access is allowed, it either redirects to the CDN URL or returns an access-denied error.
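A rough sketch of that check-then-redirect flow, here using Flask; the CDN base URL, the route, and the user_may_view() helper are placeholders for your own setup:

from flask import Flask, abort, redirect

app = Flask(__name__)

CDN_BASE = "https://cdn.example.com/media"  # hypothetical CDN host

def user_may_view(user_id, media_id):
    # Replace with your real per-user / per-group / public rules.
    return True

@app.route("/media/<media_id>")
def serve_media(media_id):
    user_id = "current-user"  # stand-in for your session lookup
    if not user_may_view(user_id, media_id):
        abort(403)  # access denied before the CDN URL is ever revealed
    # Access granted: hand the client off to the cached CDN copy.
    return redirect("%s/%s" % (CDN_BASE, media_id))

Keep in mind the caveat above: once a client has the CDN URL, the object stays reachable there until the CDN cache expires.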
Each CDN works differently, so unless you specify which CDN you are looking at, it's hard to say more.

Multiple user profiles / sessions in one CEF instance

Is it possible to have multiple user profiles—with separate cookies, history, local storage, etc.—running at the same time in one CEF (Chromium Embedded Framework) instance? The goal is to allow multiple browsing "sessions" side-by-side in one window (it's actually an OpenGL app).
There are two possible solutions I've looked into, each with its own problems:
Using CefCookieManager
Cookies alone can be separated by creating multiple CefCookieManagers. However, there does not seem to be a similar API for history and local storage, which remain shared.
Using CefSettings::cache_path
CefSettings settings;
CefString(&settings.cache_path).FromASCII("C:\\CefCache");
CefInitialize(args, settings, app, nullptr);
The problem here is that CefSettings is associated with the global CEF instance rather than with each browser/client.
Is there a way to do this that I have not discovered?
If it's only about cookies and local storage, and you host content using a custom scheme handler or request interception, then you could use different domains/subdomains for each profile. See this topic for reference: http://www.magpcss.org/ceforum/viewtopic.php?f=6&t=11695 .
Regarding history, you could implement history on your own by using the OnBeforeBrowse callback.
In the topic referenced above it is also mentioned that it's technically possible to specify a different cache path per CefRequestContext (which can be provided during browser creation). So working on a patch for CEF may be another option.
EDIT: CEF revision 2040 adds support for complete isolation of storage and permissions per request context, see comment #7 in Issue 1044: https://code.google.com/p/chromiumembedded/issues/detail?id=1044#c7

Google Storage Image Serving Cache

When I use the GoogleStorageTools class's CloudStorageTools::getImageServingUrl and then replace the storage object with another image of the same name, the old image is still displayed on subsequent calls to getImageServingUrl.
I tried using CloudStorageTools::deleteImageServingUrl and then CloudStorageTools::getImageServingUrl again, but this doesn't work.
Is there any way to interact with Cloud Storage and tell it to refresh the image or the image URL? I'm guessing not, and am going to ensure the filenames are unique, instead, but it feels like there ought to be a way.
If you refresh the image, does the new image show up? It's possible there's a cache-control policy set on the image. Google Cloud Storage allows users to specify what Cache-Control headers should be sent to browsers, but I'm not sure whether App Engine's getImageServingUrl respects that value.
As an experiment, could you try going to console.developers.google.com, heading over to "storage > cloud storage > storage browser", choosing the appropriate object, choosing "edit metadata," and then seeing whether there's a Cache-Control policy on the object? Try changing the cache-control section to "max-age=0,no-cache".
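A quick way to see which caching policy a URL is actually served with is a HEAD request, for example in Python with the requests library (serving_url is a placeholder for the URL returned by getImageServingUrl):

import requests

serving_url = "https://example.invalid/your-serving-url"  # placeholder
resp = requests.head(serving_url)
print(resp.status_code, resp.headers.get("Cache-Control"))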

ndb.BlobProperty vs BlobStore: which is more private and more secure

I have been reading all over Stack Overflow about Datastore vs. Blobstore for storing and retrieving image files. Everything points towards Blobstore, except for one thing: privacy and security.
In the Datastore, my users' photos are private: I have full control over who gets a blob. In the Blobstore, however, can anyone who knows the URL conceivably access my users' photos? Is that true?
Here is a quote that is supposed to give me peace of mind, but it's still not clear to me. So anyone with the blob key can still access the photos? (from Store Photos in Blobstore or as Blobs in Datastore - Which is better/more efficient/cheaper?)
the way you serve a value out of the Blobstore is to accept a request to the app, then respond with the X-AppEngine-BlobKey header with the key. App Engine intercepts the outgoing response and replaces the body with the Blobstore value streamed directly from the service. Because app logic sets the header in the first place, the app can implement any access control it wants. There is no default URL that serves values directly out of the Blobstore without app intervention.
All of this is to ask: which is more private and more secure for serving images, and why: Datastore or Blobstore? Or, hey, Google Cloud Storage (which I know nothing about at present)?
If you use google.appengine.api.images.get_serving_url then yes, the URL returned is public. However, the URL is not guessable from a blob's key, nor does it even exist before you call get_serving_url (or after you call delete_serving_url).
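For reference, the calls involved look roughly like this in a Python App Engine app (the blob key is a placeholder):

from google.appengine.api import images
from google.appengine.ext import blobstore

blob_key = blobstore.BlobKey("your-blob-key")  # placeholder

serving_url = images.get_serving_url(blob_key)  # public, but not guessable from the key
# ... later, to make that URL stop working:
images.delete_serving_url(blob_key)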
If you need access control on top of the data in the Blobstore, you can write your own handlers and add the access control there.
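A minimal sketch of such a handler with webapp2, where user_may_view() stands in for whatever access check your app needs:

from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers
import webapp2

def user_may_view(blob_info):
    # Hypothetical check: compare the requesting user against your ownership records.
    return True

class PhotoHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, photo_key):
        blob_info = blobstore.BlobInfo.get(photo_key)
        if blob_info is None or not user_may_view(blob_info):
            self.error(403)
            return
        # send_blob sets X-AppEngine-BlobKey; App Engine streams the blob into the response.
        self.send_blob(blob_info)

app = webapp2.WSGIApplication([('/photo/([^/]+)', PhotoHandler)])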
BlobProperty is just as private and secure as the Blobstore; it all depends on the application that serves the requests. Your application can implement any permission checks before sending the content to the user, so I don't see any difference, as long as you serve all the images yourself and don't intentionally create publicly available URLs.
That said, I would not even think about storing photos in a BlobProperty, because that way the data ends up in the database instead of the Blobstore, and it costs significantly more to store data in the database. The Blobstore, on the other hand, is cheap and convenient.

Disable Session.checkAgent for one action

I have built a controller that uses the media view to stream videos to users. When someone accesses the controller from an iOS device, the user agent being sent does not match and the session logs out.
I am using the iPad plugin for Flow Player, and I have seen other posts about Flash not sending the correct user agent strings, so instead of messing with that I'd like to disable Session.checkAgent for that specific action. I have tried adding it to beforeFilter(), but the check clearly happens before that point.
Is there some other method I can override to implement this?
I haven't tested it, but if you know (part of) the URL, you can check $_GET['url'] inside your app/Config/core.php and modify the session configuration based on its value; for example, when $_GET['url'] starts with '/videos/view'.
You need to do this inside the configuration file, otherwise the session has already been started, as you discovered.
Note that $_GET['url'] is only used in older versions of CakePHP. For newer versions, you may need to use $_SERVER['REQUEST_URI'] or another $_SERVER environment variable.
