Context: We are using GAE with Python 3, so the GAE APIs package isn't available and we are using the google-cloud-* packages for interacting with GAE services,
i.e. google-cloud-tasks for push queues and google-cloud-datastore for Datastore.
Problem: There is no way to test things in the development environment, because the google-cloud-* packages act directly on production services.
I.e. if I push a task using google-cloud-tasks, it is pushed to the production queue; similarly, if I create or update an entity from the development environment, it updates the entity in the production Datastore.
Earlier, with the GAE APIs packages, the local system had a local Cloud Tasks queue and Datastore for development purposes.
I see this as a big and very common issue, and I wonder whether someone else has faced it and found a solution.
For Cloud Datastore you can follow the instructions at https://cloud.google.com/datastore/docs/tools/datastore-emulator to use the local emulator instead of your production Datastore database.
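For example, a minimal sketch of pointing the Python client at the emulator (assuming the emulator runs on localhost:8081; the project ID, kind, and entity below are placeholders):
import os

from google.cloud import datastore

# DATASTORE_EMULATOR_HOST is normally exported in your shell (see
# `gcloud beta emulators datastore env-init`); set here only for illustration.
os.environ["DATASTORE_EMULATOR_HOST"] = "localhost:8081"

# With the variable set, the client talks to the local emulator instead of
# the production Datastore.
client = datastore.Client(project="my-dev-project")

entity = datastore.Entity(key=client.key("Album", "test-album"))
entity["title"] = "Local test"
client.put(entity)  # writes to the emulator, not production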
As noted in https://cloud.google.com/tasks/docs/migrating, Cloud Tasks is not currently supported in an emulator.
I built an in-process emulator for Python development.
See also some emulators that run as a separate process on localhost: Potato London’s gcloud-tasks-emulator, mentioned in the answer above, and Aert van de Hulsbeek’s cloud-tasks-emulator.
This Local Emulator for Google Cloud Tasks worked for me.
pip install gcloud-tasks-emulator
gcloud-tasks-emulator start --port=9090
Note: by default the gcloud-tasks-emulator command is not available globally. In that case, switch to the installation directory, e.g.
/Users/{userName}/Library/Python/3.7/bin
and run it from there:
./gcloud-tasks-emulator start --port=9090
Now we can make the code changes to support Cloud Tasks in the local environment.
import grpc
from google.cloud.tasks_v2 import CloudTasksClient
from google.cloud.tasks_v2.gapic.transports.cloud_tasks_grpc_transport import CloudTasksGrpcTransport

# Point the client at the local emulator instead of the production endpoint.
client = CloudTasksClient(
    transport=CloudTasksGrpcTransport(
        channel=grpc.insecure_channel("127.0.0.1:9090")
    )
)
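For illustration, a short sketch of exercising the emulator with that client (assuming the google-cloud-tasks 1.x signatures that match the transport import above; project, location, queue, and handler names are placeholders):
# Create a queue on the emulator, then enqueue a task to it.
parent = client.location_path("my-dev-project", "us-central1")
queue_name = client.queue_path("my-dev-project", "us-central1", "test-queue")
client.create_queue(parent, {"name": queue_name})

client.create_task(queue_name, {
    "app_engine_http_request": {
        "http_method": "POST",
        "relative_uri": "/example_task_handler",  # placeholder handler
    }
})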
Visit this link for complete instructions: https://pypi.org/project/gcloud-tasks-emulator/
Related
The Go package, google.golang.org/appengine, provides IsDevAppServer which reports whether an App Engine app is running in the development App Server (e.g. localhost:8080). However, this does not work unless the (deprecated) standalone SDK is used. See appengine.go#L57 for the implementation.
New GAE apps written in Go are basically regular web servers that can be compiled and started locally like any Go program:
old: dev_appserver.py
new: go run main.go
Detecting a development server can be useful to prevent CORS issues when running locally:
func setDevHeaders(w http.ResponseWriter) {
w.Header().Set("Access-Control-Allow-Origin", "http://localhost:4200")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
w.Header().Set("Access-Control-Request-Method", "POST, GET")
w.Header().Set("Access-Control-Expose-Headers", "Content-Disposition")
}
When needed I can then branch:
if appengine.IsDevAppServer() {
setDevHeaders(w)
}
What is the recommended way to achieve this in a standalone Go server running on App Engine?
The App Engine standard Go environment sets a number of environment variables automatically. You can have a look at the list here.
You can check whether they are set; if they aren't, your code is running locally (or at least not deployed). For example, GAE_ENV is documented to be set to standard when running on App Engine. Alternatively, you can set an environment variable of your own to development in the shell where you run your app locally (not in the app.yaml file) and check for its value.
It sounds tricky, but I found that the metadata server also responds on App Engine standard.
Further, I used the cloud.google.com/go/compute/metadata package, with
metadata.OnGCE()
as a quick check for whether the code is running on my local machine or on Google's machines (yes, it returns true on App Engine).
Apologies for the seemingly obvious question, but I figure the answer might help others. I can't for the life of me find documentation on the file path within the Google App Engine VM (Cloud Shell) from which the static files are served. I need to pull the latest upstream changes from a private GitHub repo.
Note that I navigated elsewhere in the VM, and even restarting the session didn't put me in a default project root path within the VM, as I expected it to.
There are several issues to address here:
The Cloud Shell is a virtual shell
Google Cloud Shell is an interactive shell environment for Google Cloud Platform.
The environment where you're working is a container running in a VM in a Google-owned project inside GCP.
You can verify this by checking the metadata server (only available for GCP VMs):
curl -H 'Metadata-Flavor:Google' "http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=text"
In the metadata provided you'll see how this container is created and configured.
The Cloud Shell is tied to the user, so you'll always access the same environment if you access it with the same credentials, no matter the project. However, if you access with a different user, you'll get a different environment.
You can't access GAE standard instances
GAE is a fully managed environment, and you won't be able to access its instances; thus, you won't be able to find the root of the running App Engine project.
However, because of the way GAE deploys your code, it uses a staging bucket to gather the code before compiling it. You can find your staging bucket through the App Engine Admin API. It is usually staging.<PROJECT_ID>.appspot.com, although you can change this configuration. You can get your files from there (see the example below).
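For example, to browse and fetch the staged files (assuming the default bucket name; substitute your own project ID and path):
gsutil ls gs://staging.<PROJECT_ID>.appspot.com
gsutil cp -r gs://staging.<PROJECT_ID>.appspot.com/<PATH> .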
You can access GAE flex apps
However, the deployment in flex takes your files, builds a Docker container with them, and then deploys this container inside a VM.
As per the docs, you can connect directly to your container by running:
gcloud app instances ssh [INSTANCE-NAME] --service [SERVICE] --version [VERSION]
docker exec -it gaeapp /bin/bash
Regarding your issue
According to what you say in the comments on the question, your issue could come from a myriad of places: from changing the shell you're connecting to, to resetting your shell environment (deleting all the files), to a thousand other possible problems.
The best way to think about it is to regard the Cloud Shell as a temporary environment to run commands, not as a virtual machine.
Knowing that, you could mount a persistent filesystem (GCS through GCSFuse, Cloud Filestore, ...) to persist your work, or simply use Git to have your work always synced on a repo.
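For instance, persisting your work by mounting a bucket with GCSFuse might look like this (bucket name and mount point are placeholders):
gcsfuse my-work-bucket /home/user/persistent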
GAE Flex has some nice CI integrations, so that's a plus for going the Git route.
I have been tweaking my environment in Google's App Engine to have several instances of my app for testing and production. However, I am uncertain whether this is the intended use of versions.
App background:
- Node.js Express app configured on App Engine, using Cloud SQL.
- 2 modules:
- - default/main - front-end code and API
- - workers - separate app with a variety of workers
- Redis to keep track of the queue, kue.js for the implementation
I was under the impression that I could use versions here, so that I keep only 2 modules, my default one and a workers module. Each will then have 2 versions, staging and production. The commands to push each one would then be:
gcloud preview app deploy --version staging --no-promote
gcloud preview app deploy --version production --promote --no-stop-previous-version
That is all separated well, and perhaps the intended use of versions. However, what I can't achieve with this is zero downtime. What seems to happen is that the old machine is torn down and then the new one is built up, resulting in 3-4 minutes of downtime during deploys, as opposed to keeping the old one until the new one is finished and then just rerouting. Note that the production version in this case should always have 100% of the traffic.
What I found to work well is to keep a module for each version, so I end up with 4 modules (default, default-staging, workers, workers-staging) and no real versions specified during deploys. When deploying with this, there is no downtime, but old versions are kept running:
gcloud preview app deploy --promote
I have a helper script to delete all versions that get 0% of the traffic (the commands below show the equivalent). Is this the correct approach for setting up separate environments? I'm just looking for some feedback in case I am missing something obvious.
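For reference, with the current gcloud CLI (rather than the preview commands above), that cleanup boils down to listing the versions and deleting those whose TRAFFIC_SPLIT shows 0.00 (service and version names are placeholders):
gcloud app versions list --service=default
gcloud app versions delete <VERSION_ID> --service=default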
I lead a web/mobile project, and I still need to decide which tools we will be using for development.
We have 6 months of access to IBM Bluemix, and its security check tools, Cloud Foundry, and others may prove really useful.
However, we don't want to rely on a solution that would trap our project with no possibility of migrating if needed.
I looked up on the internet how to export a project from Bluemix as a Docker image, including elements created from IBM services. I didn't find anything relevant (I might be bad at googling, but all I can find is "how to export to Bluemix" / "how to work locally").
Does Bluemix allow exporting the entire project to another host, or does it depend on the services we used in the project?
Thank you in advance.
If you package your application in a container, you can run it on any provider that supports Docker. That could be another cloud, a local datacenter, or your own laptop.
If you are planning to use Bluemix services as part of that application, then you will have two options when moving your application off Bluemix:
1. Keep using the services in Bluemix, but connect to them remotely from wherever you're now hosting your application. This will require internet connectivity, and you'll have to hard-code the service credentials into your application (not good practice).
2. Migrate the services as well as the application. This will only be possible for the non-unique services IBM offers, e.g. Redis, Mongo, Elasticsearch etc. You'll need to refactor your application to accept the new provider of these services.
If your service/app is dockerized and hosted as a container on Bluemix, you can pull its image into your own Docker-enabled cloud or local environment. The following steps can be followed (see the example after this list):
1. Install the Bluemix container CLI package: https://www.ng.bluemix.net/docs/containers/container_cli_ov.html
2. Do cf ic login using your Bluemix credentials.
3. Check for your images using the cf ic images command.
4. Pull the image into your environment using docker pull <image-registry-url>.
5. Run the container with the required parameters using docker run.
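For illustration, the last two steps might look like this (registry URL, namespace, image name, and ports are placeholders):
docker pull registry.ng.bluemix.net/<NAMESPACE>/my-app:latest
docker run -d -p 8080:8080 registry.ng.bluemix.net/<NAMESPACE>/my-app:latest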
Hope it helps. Thanks.
Is there a way to export the data in my App Engine database to the development server (for testing purposes etc.)?
Yes! Check out Google's "Uploading and Downloading Data"
If you'd like to test how your data works with the app before uploading it, you can load it into the development server. Use the --url option to point the tool at the development server URL. For example:
appcfg.py upload_data --config_file=album_loader.py --filename=album_data.csv --kind=Album --url=http://localhost:8080/remote_api <app-directory>
The subsection on uploading and downloading all data is also worth looking at.
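The download half mirrors this; a sketch with the same bulkloader config (the production URL is a placeholder):
appcfg.py download_data --config_file=album_loader.py --filename=album_data.csv --kind=Album --url=http://<app-id>.appspot.com/remote_api <app-directory>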
Not yet, it seems.
Of course, you can go pulling the data yourself, one batch at a time...
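A rough sketch of that batch-at-a-time approach with the legacy Python SDK (Album is a placeholder model; assumes the old google.appengine.ext.db cursor API):
from google.appengine.ext import db

class Album(db.Model):
    title = db.StringProperty()

# Fetch entities in batches of 100, resuming from a query cursor each time.
query = Album.all()
batch = query.fetch(100)
while batch:
    for entity in batch:
        pass  # serialize/export the entity here
    query = Album.all().with_cursor(query.cursor())
    batch = query.fetch(100)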
Yes, we can download all data from the real Google App Engine servers and upload it to the datastore, but uploading data to the local development server is sometimes painful because of errors. App Engine SDK versioning differences cause problems like this. For example, I developed an app a year ago; today I wanted to update it. I downloaded all the data from the real Google App Engine servers, but I couldn't upload it to the local development server. You know, we use an EntityLoader class for this operation; the entity class imports the db module, but the SDK throws "no module named db".
My suggestion for App Engine lovers: save your initial test data for the future. Don't assume you will be able to download all the data for testing later. Save your own test data with SQLite support, and save your development environment version for the future as well; SDK version updates sometimes cause painful times for developers.