I am writing files to a Google App Engine managed VM (flexible environment). I have deployed the code to the cloud. The code opened the file and wrote to it without showing any errors (such as permission errors). However, another piece of my code that tries to open the file fails with "No such file or directory".
I printed the directory the file was written to, then SSHed into the managed VM, but I cannot find the file there either. I also cannot find any documentation about writing files to the local storage of a managed VM.
So how do I write a file to the managed VM? What is the default storage location? Why can't I find the file?
The reason you can't find the file inside the VM is that the VM is running a Docker container, and your app actually lives inside that container. If you really wanted to, you could run docker exec -it <container-name> /bin/bash to poke around inside the container, and there you should see your file (use docker ps to find the container name or ID).
On App Engine flexible (the new name for Managed VMs), your code can be scaled across many containers, so there's no guarantee that a file you write in one will be accessible later. You can get away with writing some temporary data to files under /tmp, but not much more. You will be much better off writing any data to something like Cloud Datastore, Cloud SQL, Memcache, or Cloud Storage.
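For example, here is a rough sketch of persisting a record in Cloud Datastore instead of a local file (this assumes the google-cloud-datastore client library; the kind and property names are just placeholders):
from google.cloud import datastore

client = datastore.Client()

def save_record(content):
    # Incomplete key: Datastore assigns a numeric ID on put()
    key = client.key('Record')
    entity = datastore.Entity(key=key)
    entity['content'] = content
    client.put(entity)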
I'm running a local instance of a PHP App Engine project. I have some buckets set up in GCP specifically for the local dev version; however, instead of the data that I write to the bucket appearing online, it is being saved locally into the dev_appserver Datastore. I can see the files in the local admin interface (localhost:8000) under Datastore.
This is an issue as the application I'm developing has a companion app which needs to also access the bucket files.
The --support_datastore_emulator=[true|false] flag is documented under dev_appserver.py -h, but it doesn't seem to have any effect when using =false.
So my question is: How do I stop the dev_appserver from using the local Datastore and make it use the 'real' buckets on the web?
Try setting the --default_gcs_bucket_name flag documented here to establish the default GCS bucket to use:
dev_appserver.py app.yaml --default_gcs_bucket_name gs://BUCKET-NAME
I managed to get the "Quickstart for Python 3 in the App Engine Standard Environment" example up and running, and I thought I'd try and further my knowledge a little, perhaps by attempting to get a cron job running.
So I updated the python code, adding another endpoint, counter, like this:
@app.route('/counter')
def counter():
    with open('counter.txt', 'a') as the_file:
        the_file.write('Hello\n')
    return 'Counter incremented'
I intend to have the cron job periodically hit /counter. When this endpoint is hit, it will open the file and add a line to it. The /counter endpoint works on my local machine. After I deploy this updated code to the Google Cloud, if I go to my blahblah.appspot.com/counter URL it should update this 'counter.txt' file.
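For reference, the cron.yaml I plan to deploy alongside the app looks roughly like this (the schedule is just a placeholder):
cron:
- description: "periodically hit the counter endpoint"
  url: /counter
  schedule: every 5 minutes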
My question is: How do I see that file to know if it is being updated or not? How do I view that file in the cloud? Thanks.
It is not possible to write files in the Google App Engine Standard Python 3 environment except in the /tmp directory. As stated in the official Python 3 GAE documentation:
The runtime includes a full filesystem. The filesystem is read-only except for the location /tmp, which is a virtual disk storing data in your App Engine instance's RAM.
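So, just for illustration, the snippet from the question would only work if it wrote under /tmp, and anything written there lives in the instance's RAM and disappears when the instance goes away:
# Writable in the Python 3 standard runtime, but backed by RAM: the data
# is lost whenever the instance is recycled or scaled down.
with open('/tmp/counter.txt', 'a') as the_file:
    the_file.write('Hello\n')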
I agree with @Josh J's answer: you should use Google Cloud Storage instead.
You shouldn't write to the local file system in Google App Engine. It may or may not be visible to your app when it scales.
Google Cloud Storage is the preferred method of file storage.
https://cloud.google.com/appengine/docs/standard/python3/using-cloud-storage
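As a rough sketch of what the counter endpoint could write instead, assuming the google-cloud-storage client library (the bucket name is a placeholder):
from google.cloud import storage

BUCKET_NAME = 'your-bucket-name'  # placeholder

def increment_counter():
    client = storage.Client()
    blob = client.bucket(BUCKET_NAME).blob('counter.txt')
    # Read the current contents (if any), append a line, and write it back.
    existing = blob.download_as_text() if blob.exists() else ''
    blob.upload_from_string(existing + 'Hello\n')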
For Python 2.7 you can import a module from a string instead of a file; see this answer:
https://stackoverflow.com/a/7548190/8244338
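The gist of the idea, roughly (the module name and source string are just placeholders):
import types

source = "def greet():\n    return 'hello'\n"

# Build a module object directly from a source string; no file is involved.
mod = types.ModuleType('dynamic_module')
exec(source, mod.__dict__)
print(mod.greet())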
Apologies for the seemingly obvious question, but I figure the answer might help others. I can't for the life of me find documentation on the file path within the Google App Engine VM (Cloud Shell) from which the static files are being served. I need to pull the latest upstream changes from a private GitHub repo.
Note that I navigated elsewhere in the VM and even restarting the session didn't put me in a default project root path within the VM as I expected it to.
There are several issues to address here:
The Cloud Shell is a virtual shell
Google Cloud Shell is an interactive shell environment for Google Cloud Platform.
The environment where you're working is a container running in a VM in a Google-owned project inside GCP.
You can verify this by checking the metadata server (only available for GCP VMs):
curl -H 'Metadata-Flavor:Google' "http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=text"
In the metadata provided you'll see how this container is created and configured.
The Cloud Shell is tied to the user, so you'll always access the same environment if you access it with the same credentials, no matter the project. However, if you access with a different user, you'll get a different environment.
You can't access GAE standard instances
GAE is a fully managed environment, and you won't be able to access it. This means you won't be able to find the root of the running App Engine project.
However, because of the way GAE deploys your code, it uses a staging bucket to gather the code before building it. You can find your staging bucket through the App Engine Admin API. This is usually staging.<PROJECT_ID>.appspot.com, although you can change this configuration. You can get your files from there.
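For instance, a quick way to peek at what's in there from Cloud Shell, assuming the default staging bucket name and the google-cloud-storage client library (the project ID is a placeholder):
from google.cloud import storage

PROJECT_ID = 'your-project-id'  # placeholder

client = storage.Client(project=PROJECT_ID)
bucket = client.bucket('staging.{}.appspot.com'.format(PROJECT_ID))
# List the deployed source files gathered in the staging bucket.
for blob in bucket.list_blobs():
    print(blob.name)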
You can access GAE flex apps
However, the deployment in flex gets your files, builds a Docker container with them, and then deploys that container inside a VM.
As per the docs, you can connect directly to your container by running:
gcloud app instances ssh [INSTANCE-NAME] --service [SERVICE] --version [VERSION]
docker exec -it gaeapp /bin/bash
Regarding your issue
According to what you say in the comments on the question, your issue could come from a myriad of places: from changing the shell you're connecting to, to resetting your shell environment (which deletes all the files), to a thousand other possible problems.
The best way to think about it is to regard Cloud Shell as a temporary environment in which to run commands, not as a persistent virtual machine.
Knowing that, you could mount a persistent filesystem (GCS through GCSFuse, Cloud Filestore, ...) to persist your work, or simply use Git to keep your work synced to a repo.
GAE Flex has some nice CI integrations, so that's a plus for going the Git route.
Is there a way to update selected files when using the App Engine Flexible env?
I'm facing an issue whenever I make a small change in the app.yaml file: to test it I would need to deploy the whole application, which takes ~5 minutes.
Is there a way to update only the config file? Or is there a way to test these files locally?
Thanks!
The safe/blanket answer would be no, as the flex env Docker image would need to be updated regardless of how tiny the changes are; see How can I speed up Rails Docker deployments on Google Cloud Platform?
However, there might be something to try (YMMV).
From App Engine Flexible Environment:
You always have root access to Compute Engine VM instances. SSH access to VM instances in the flexible environment is disabled by default. If you choose, you can enable root access to your app's VM instances.
So you might be able to log in as root on your GAE instance VM and try to manually modify a particular app artifact. Of course, you'd need to locate the artifact first.
Some artifacts might not even be present in the VM image itself (those used exclusively by the GAE infra, queue definitions, for example). But it should be possible to update these artifacts without updating the Docker image, since they aren't part of the flex env service itself.
Other artifacts might be read-only and it might not be possible to change them to read-write.
Even if possible, such manual changes would be volatile: they would not survive an instance reload (which would use the unmodified Docker image), and a reload might be required for some changes to take effect.
Lots of "might"s, lots of risks (manual fiddling with the app code could negatively impact its functionality), so it's up to you to determine whether it's really worth a try.
Update: it seems this is actually documented and supported, see Accessing Google App Engine Python App code in production
I have uploaded several files into the same folder on Google Cloud Storage using the Google Cloud Console. I would now like to move several of the files to a newly created folder in Google Cloud Storage, and I cannot see how to do that via the Google Cloud Console. I found instructions to move the files via the command line using gsutil. However, I am not comfortable with command-line interfaces and have not been able to get gsutil to work on my machine.
Is there a way to move files in Google Cloud Storage from one folder to another via the Google Cloud Console?
Update: Google Cloud Shell provides a terminal within the Google Cloud Console site without having to manually create VMs; it comes with gsutil and Google Cloud SDK pre-installed and pre-authenticated.
Prior answer: If you're having issues installing gsutil on your computer, consider the following approach:
Spin up an f1-micro instance with the Google-provided Debian image which will have gsutil preinstalled.
Use the SSH button to connect to it using the browser interface (you can also use gcutil or gcloud commands, if you have those installed and available).
Run gcloud auth login --no-launch-browser within the instance. It will give you a URL to open with your browser. Once you open it, grant the OAuth permissions, and it will display a code. Paste that code back into the command-line window where you ran the command so that it gets the authentication token.
Run the gsutil mv command, as suggested by Travis Hobrla:
gsutil mv gs://bucket/source-object gs://bucket/dest-object
Once you're done with gsutil, delete the instance by clicking on the Delete button at the top of the VM instance detail page. Make sure that the box marked "Delete boot disk when instance is deleted" on the same VM instance page is checked, so that you don't leave an orphaned disk around, which you will be charged for.
You can also browse your persistent disks on the "Disks" tab right below the "VM instances" tab, and delete disks manually there, or make sure there aren't any orphaned disks in the future.
Given the current price of $0.013/hr for an f1-micro instance, this should cost you less than a penny to do this, as you'll only be charged while the instance exists.
There is not currently a way to do this via the Google Cloud Console.
Because folders in Google Cloud Storage are really just placeholder objects in a flat namespace, it's not possible to do an atomic move or rename of a folder, which is why this scenario is more complex than doing a folder move in a local filesystem (with a hierarchical namespace). That's why a more complex tool like gsutil is needed.
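Under the hood, a "move" in a flat namespace is a copy to the new name followed by a delete of the original; there is no atomic rename. A rough sketch of the same operation with the google-cloud-storage Python client (bucket and object names are placeholders):
from google.cloud import storage

client = storage.Client()
bucket = client.bucket('your-bucket')

# Copy the object under its new "folder" prefix, then delete the original.
source = bucket.blob('old-folder/file.txt')
bucket.copy_blob(source, bucket, 'new-folder/file.txt')
source.delete()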
Google Cloud Storage now has the functionality to move files from one folder/bucket to another using the Cloud Console. To do this, simply select the file(s), click on the 3 vertical dots to get the Move option, then select the target folder/bucket to move the file to.