Collections and Folders in Google Docs v3 APIs? - filesystems

The Google Docs v3 APIs do not deal in folders, but rather collections, which allow a many-to-many relationship. However, the API can be used to access files and folders on Google Drive. Because Google Drive has to emulate a file system on the user's hard disk, does this mean that, if I'm using the Google Docs v3 APIs to access files or folders on Google Drive, each folder and file will have only one parent? And if not, can I find an object's local-filesystem parent using the Google Docs v3 APIs?

First, the Google Docs v3 API is officially deprecated and has been replaced by the Google Drive SDK.
Second, Google Drive doesn't fully emulate a file system, and one file can have multiple parents.
Third, I'm not 100% sure what you mean by "local filesystem", but you can access the list of an object's parents with Parents.list(). However, you cannot access any information about the user's local filesystem that is synchronized by the Drive sync client.
Fourth, most of the time each object has only one parent. If your app needs a single parent, adopt a convention such as always choosing the first one, e.g. Parents.list()[0].
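As a sketch of that first-parent convention (assuming the Drive v2 API via google-api-python-client; the `service` object and file ID below are hypothetical placeholders):

```python
def first_parent_id(parents_response):
    """Pick the first parent from a Parents.list() response dict.

    Returns None for orphaned files (empty 'items' list).
    """
    items = parents_response.get("items", [])
    return items[0]["id"] if items else None

# Hypothetical usage with a built Drive v2 service object:
#   parents = service.parents().list(fileId=file_id).execute()
#   folder_id = first_parent_id(parents)

# Example response shape (multiple parents, as collections allow):
sample = {"items": [{"id": "folderA"}, {"id": "folderB"}]}
print(first_parent_id(sample))  # -> folderA
```

The helper deliberately returns None for orphans, since a file shared directly with the user may have no parent at all.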

Related

Permission denied on external access to Google Cloud Datastore

I want to access the datastore (and storage) data of an App Engine project via google-cloud-datastore and google-cloud-storage from a Python program on my own server.
This works with my App Engine staging server: I created a service account and gave it owner access to the project.
Doing the same thing with the production AppEngine instance fails with
google.api_core.exceptions.PermissionDenied: 403 Missing or insufficient permissions.
Part of the problem might be that I'm using the wrong project to create the service account. There is more than one project with the same name in my cloud console. How do I identify the correct one?
How do I get more details about the problem?
First, note that Datastore and Cloud Storage are two different products with two different access methods.
The Datastore is closely tied to the GAE project - each project has its own datastore. The external access procedure in general is captured in How do I use Google datastore for my web app which is NOT hosted in google app engine?.
When switching the project (staging to production in your case) there are 2 things to keep in mind:
as you observed, you need to change the project you're accessing.
you also need to change the credentials you load and use for access to match the project you select, as each project has its own service account key, configured in the above-mentioned procedure
For the google-cloud-datastore library, both of these are configured together via the datastore.Client() call parameters:
class google.cloud.datastore.client.Client(project=None, namespace=None, credentials=None, _http=None, _use_grpc=None)
project (str) – (Optional) The project to pass to proxied API methods.
credentials (Credentials) – (Optional) The OAuth2 Credentials to use for this client. If not passed (and if no _http object is passed), falls back to the default inferred from the environment.
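A minimal sketch of keeping the two in sync (the key-file name is hypothetical; the commented-out lines assume google-cloud-datastore and google-auth, so treat the exact wiring as an assumption to verify against your library versions):

```python
def project_for_key(key_info):
    """Given a parsed service-account key file (a dict), return the project
    it belongs to, so Client(project=...) always matches the credentials."""
    return key_info["project_id"]

# Hypothetical usage:
#   import json
#   from google.cloud import datastore
#   from google.oauth2 import service_account
#
#   with open("prod-key.json") as f:       # key file name is an assumption
#       info = json.load(f)
#   creds = service_account.Credentials.from_service_account_info(info)
#   client = datastore.Client(project=project_for_key(info),
#                             credentials=creds)

print(project_for_key({"project_id": "my-prod-project"}))  # -> my-prod-project
```

Deriving the project from the same key file you load the credentials from makes it impossible to mix the staging key with the production project, which is exactly the mismatch that produces the 403 above.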
Cloud Storage is completely independent of GAE; the GAE project/credentials you use (if any) have no bearing on bucket/object access restrictions whatsoever. There's nothing you need to do from the google-cloud-storage library's perspective when switching from one GAE project to another.
To eliminate the confusion created by multiple projects having the same name, go to the IAM & admin Settings page, select each project from the drop-down list in the top blue bar, and rename it to something meaningful (click in the Project name box to edit the name, then click SAVE). Then re-check that you're using the right keys for the desired project.

GWT how to store information on google App Engine?

In my GWT application, a 'root' user uploads a specific text file with data, and that data should be available to anyone who has access to the app (using GAE).
What's the classic way to store data that should be available to all users? I don't want to use any database (Objectify!?) since this is a relatively small amount of information and it is changed from time to time by root.
I was wondering if there is some static MAP at the 'engine level' (not the user's session) where this info can be stored (and if the server goes down, no biggie, root will upload it again).
Thanks
You have three primary options:
Add this file to your /war/ directory and deploy it with the app. This is what we typically do with static files that rarely change (.css files, images, etc.). The file will be available to all users, whether they are authenticated or not.
Add this file to your /war/WEB-INF/ directory and deploy it with the app. The file will be available to your server-side code, so you can read it on the server side and show it to a user. This way you can decide which users can see the file and which users should not have access to it.
Upload this file to Google Cloud Storage. You can do it through an app, or you can simply upload it manually to a bucket using the GCS console or the gsutil command-line tool. Then you simply provide a link to your users. The advantage of this option is that you do not have to redeploy your app when the file changes.
The only reason to go with the first two options is to have this file under version control. If you don't need that, I would recommend going with the GCS option.
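The second option boils down to "bundle the file with the deployment and gate access in your handler". A minimal sketch of that idea (shown in Python for brevity, though the question is GWT/Java; file names are hypothetical, and the file is created inline here only so the sketch is runnable):

```python
def load_shared_data(path):
    """Read the root-uploaded data file bundled with the deployment.

    Re-reading on each request means a redeployed copy is picked up
    without restarting anything.
    """
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Stand-in for a file deployed under WEB-INF/ (in a real app the
# deploy step ships it; created here so the sketch runs standalone):
with open("shared-data.txt", "w") as f:
    f.write("alpha\nbeta\n")

print(load_shared_data("shared-data.txt"))  # -> ['alpha', 'beta']
```

In the Java/GWT equivalent, the server-side read would go through the servlet context rather than a bare file path, but the access-control point is the same: the handler decides who sees the data.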

Google Realtime API - creating and removing shortcut file in Google Drive

Looking at the Realtime API quickstart example, a shortcut file is used to store the realtime document model. I'm assuming that this is a file that holds the realtime document model's state.
Question: do I need to create and clean up this shortcut file for each collaboration session?
Note: Eventually I want to persist data to my database, not Google Drive.
From an API perspective, the realtime documents are designed to be persistent storage. The files are long lived, and there is no need to ever recreate them or store data elsewhere.
If you want to copy data elsewhere, how and when to do that sounds like a design decision you need to make given whatever makes sense for your app.

How to manage asymmetric keys without checking them into source control?

I have a Google App Engine application which needs to be given a public/private key pair. I don't want to check this into source control because it will be accessible by too many people. Since this is GAE, I can't use the build system to write the keys to the server's file system.
Is there a known best practice for this?
My first thought was: does Jenkins provide a way to manage keys securely? I know I can just copy the keys to a location on the Jenkins server and copy them into the build, but this project will be used by third-party teams, so I need to provide a UI-based solution in Jenkins. I did not find any relevant plugin, but I would like to make sure there isn't a better way before writing my own.
There are, of course, several approaches to this. I believe certificates are a concern of admins, not developers.
What we do is have custom admin pages where we upload certificates to blobstore under a separate namespace. Then we have an internal "service" (just a simple factory) so that other pieces of code can retrieve certs.
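That internal "service" can be as small as a factory with a cache. A minimal sketch of the idea, with an in-memory dict standing in for blobstore (the actual blobstore read and the admin upload page are assumptions elided here):

```python
class CertService:
    """Hand out certificates uploaded via the admin pages.

    `backend` is any mapping-like store; in the real app it would wrap
    blobstore reads under the dedicated namespace. A plain dict is used
    here so the sketch is runnable.
    """

    def __init__(self, backend, namespace="certs"):
        self._backend = backend
        self._namespace = namespace
        self._cache = {}

    def get(self, name):
        key = "%s/%s" % (self._namespace, name)
        if key not in self._cache:
            # One backend (blobstore) read per cert, then served from cache.
            self._cache[key] = self._backend[key]
        return self._cache[key]

store = {"certs/signing": b"-----BEGIN CERTIFICATE-----..."}
svc = CertService(store)
print(svc.get("signing"))  # -> b'-----BEGIN CERTIFICATE-----...'
```

The point of the factory is that callers never know where the cert lives; swapping blobstore for another store later touches only this class.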
If you are happy to use a cloud based Jenkins, we (CloudBees) have an oauth based solution at appengine.cloudbees.com
You could roll your own. It is not excessively tricky. You will need to:
Register with Google's API console to get a client key and secret, and define the endpoints your app will show up as
Write some way of feeding those credentials to your Jenkins build. I would recommend using the credentials plugin
Either write a build wrapper that exposes the refresh token to your build (Python SDK deployment) or one that exposes the access token (Java SDK... got to love that the two SDKs do the same thing in different ways)
Or use our free service ;-)

What is the 2-legged oauth scope described as "Download PDFs and arbitrary files"

The "Download PDFs and Arbitrary Files" scope name would seem to refer to downloading any non-native Google file types, but I can't find any more information about it. I am able to use the Documents List API doc feed entry to obtain a download link, which usually works, but I have recently encountered some unexpected auth failures while attempting to download, which led me to discover this additional application manifest OAuth scope.
The defined scopes are described here and another list (which omits the "download" scope) here. What is this scope and where can I find more information about it?
EDIT
I am using various Google Data APIs within the context of Google App Engine, and it is the App Engine instance manifest that declares its required scopes. Since different scope lists are documented differently depending on the contexts in which the Google Data APIs are being used, and I am referring to the documentation for an App Engine application manifest, I am including the app engine tag.
EDIT
The scope is additionally mentioned (by URL – https://docs.googleusercontent.com/) in this documentation, and would seem to be required in order to encompass all potential download sources.
I just did some quick testing, and it seems that the scope https://docs.google.com/feeds/ gives you access to the Documents List API itself (listing files, reading and writing metadata), while https://docs.googleusercontent.com/ allows you to download non-native Google Docs formats, i.e. files such as PDFs, images, etc. that have been uploaded by the user to Drive. To me it seems that https://docs.googleusercontent.com/feeds/download/ is an alias for https://docs.googleusercontent.com/.
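If an app needs both listing and downloading, the usual pattern is to request both scopes together; in OAuth they are sent as a space-delimited string (the scope URLs are copied from the testing above, and how your library accepts them is an assumption to check against its docs):

```python
# Scopes observed above: feed access vs. non-native-format downloads.
SCOPES = [
    "https://docs.google.com/feeds/",        # Documents List API itself
    "https://docs.googleusercontent.com/",   # downloads of uploaded files
]

# Many OAuth libraries accept either the list or the joined form:
scope_param = " ".join(SCOPES)
print(scope_param)
```

Requesting both up front avoids the auth failures described in the question, where a download link resolves to docs.googleusercontent.com but the token only covers the feeds scope.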
