I can ping a VM started with the 'create cluster' command from the App Engine VM. However, the names given to the cluster VMs are not user friendly (such as gke-mongo-cluster-default-pool-37e9b787-k7wl). I would like to assign a cleaner name to the VM, such as mongo-1. When I deploy with Deployment Manager (say mongodb), I do get such cleaner names.
So the question is,
(1) How can I assign names to the VMs created under a gcloud cluster created with create-cluster command?
(2) Is there another way to map a name to an IP address within the project?
Thanks.
(1) How can I assign names to the VMs created under a gcloud cluster created with create-cluster command?
This isn't possible. The names are chosen for you by Google Container Engine so that the number of nodes in the cluster can be scaled up and down dynamically without creating naming conflicts.
It isn't clear why you are trying to ping a node directly. You can create a Kubernetes service running inside your GKE cluster that has a stable name that you can address.
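For example, here is a minimal sketch of exposing a workload behind a stable name (this assumes a Deployment named mongo already exists in the cluster; the names and port are placeholders):

kubectl expose deployment mongo --name=mongo --port=27017
# pods inside the cluster can now reach it at a stable DNS name such as mongo.default.svc.cluster.local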
Related
Our AppEngine app is connecting to a remote service which requires a VPN and also required me to add entries to the hosts file on my local machine in order to connect to their endpoints.
e.g.
10.200.30.150 foo.bar.com
This is working fine when running the app locally, but I can't figure out how to set this up on Google Cloud to work once deployed.
I can't use the IP addresses directly because it errors that the IP is not on the cert's list.
How do I map the host names to the IPs in Google Cloud so that AppEngine can use them?
From the error mentioned in the comment, I suspect connecting directly through the IP fails because the certificate doesn't recognize the IP-to-DNS mapping as valid, and therefore the secure connection setup breaks. Based on the requirements of connecting to the API over a VPN and tweaking the hosts mapping, there are a few things you may try.
The simplest approach that may work would be using a Google Compute Engine VM instance, since there you would be able to manipulate the /etc/hosts file and replicate the local machine setup. This VM could be used either as the main app service or as a proxy from App Engine to the 3rd party API endpoint. To go that route, I would suggest taking a look at these two posts, which explain how to change the /etc/hosts file on GCE (changing the file once wouldn't work, as the VM periodically overwrites it; see the posts for a cron-like workaround).
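As a rough sketch of that workaround (the IP and hostname are taken from the example above; the one-minute schedule is only illustrative), a cron entry on the GCE VM could re-apply the mapping whenever it gets removed:

# /etc/cron.d/custom-hosts -- re-add the mapping in case GCE rewrites /etc/hosts
* * * * * root grep -q 'foo.bar.com' /etc/hosts || echo '10.200.30.150 foo.bar.com' >> /etc/hosts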
Separately, as your app runs in the App Engine flexible environment, you have the option of providing a Docker container with the app packaged inside. It may be possible to apply the workaround above in the container and have it working in App Engine too.
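If that approach works for you, a hypothetical entrypoint wrapper for the flex container might look roughly like this (the host entry must be added at container start, and the java command is only a placeholder for however your app is actually launched):

#!/bin/sh
# entrypoint.sh -- add the VPN host mapping, then start the app
echo '10.200.30.150 foo.bar.com' >> /etc/hosts
exec java -jar /app/app.jar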
I am not able to create an instance on gcloud compute, nor is my application on gcloud app engine able to pull instances. Our application has been down since yesterday. I observed that this may be specific to my project: when I created a new account and tried from it, everything worked fine. Please advise. I don't want to move to a different account as this one already has my company funds.
Create an instance first without an external IP address. One way of doing this is to add the flag --no-address to the command:
gcloud compute instances create instance-name --no-address
Step 2.
Go to VPC network -> External IP addresses. Create an external IP address and associate it with the instance you're attempting to make. Once they're created, you're good to go. The operation may fail, but it is easier to repeat than manually launching instances over and over.
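The equivalent gcloud commands would be roughly as follows (the region, zone, address name and instance name are placeholders; look up the reserved IP before attaching it):

gcloud compute addresses create my-address --region=us-central1
gcloud compute addresses describe my-address --region=us-central1   # note the reserved IP
gcloud compute instances add-access-config instance-name --zone=us-central1-a --address=<RESERVED_IP>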
This will not resolve issues with Flex Deployments/Dataflow etc.
Apologies for the seemingly obvious question, but I figure the answer might help others. I can't for the life of me find documentation on the file path within the Google App Engine VM (Cloud Shell) where the static files are being served from. I need to pull the latest upstream changes from a private GitHub repo.
Note that I navigated elsewhere in the VM and even restarting the session didn't put me in a default project root path within the VM as I expected it to.
There are several issues to address here:
The Cloud Shell is a virtual shell
Google Cloud Shell is an interactive shell environment for Google
Cloud Platform.
The environment where you're working is a container running in a VM in a Google-owned project inside GCP.
You can verify this by checking the metadata server (only available for GCP VMs):
curl -H 'Metadata-Flavor:Google' "http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=text"
In the metadata provided you'll see how this container is created and configured.
The Cloud Shell is tied to the user, so you'll always access the same environment if you access it with the same credentials, no matter the project. However, if you access with a different user, you'll get a different environment.
You can't access GAE standard instances
GAE standard is a fully managed environment, and you won't be able to access its instances, so you won't be able to find the root of the running App Engine project there.
However, because of the way GAE deploys your code, it uses a staging bucket to gather the code before compiling it. You can find your staging bucket through the App Engine Admin API. It is usually staging.<PROJECT_ID>.appspot.com, although you can change this configuration. You can get your files from there.
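For example, a minimal sketch using gsutil (replace <PROJECT_ID> with your project ID; the bucket name assumes the default configuration hasn't been changed):

gsutil ls gs://staging.<PROJECT_ID>.appspot.com
# copy the staged sources locally for inspection
gsutil -m cp -r gs://staging.<PROJECT_ID>.appspot.com ./staged-sources/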
You can access GAE flex apps
However, the deployment in flex takes your files, builds a Docker container with them, and then deploys this container inside a VM.
As per the docs, you can connect directly to your container by running:
gcloud app instances ssh [INSTANCE-NAME] --service [SERVICE] --version [VERSION]
docker exec -it gaeapp /bin/bash
Regarding your issue
According to what you say in the comments of the question, your issue could come from a myriad of places: from changing the shell you're connecting to, to resetting your shell environment (which deletes all the files), to a thousand other possible problems.
The best way to think about it is to regard Cloud Shell as a temporary environment to run commands in, not as a virtual machine.
Knowing that, you could mount a persistent filesystem (GCS through GCSFuse, Cloud Filestore, ...) to persist your work, or simply use Git to keep your work synced to a repo.
GAE Flex has some nice CI integrations, so that's a plus for going the Git route.
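For the persistent filesystem option mentioned above, a minimal GCSFuse sketch inside Cloud Shell could look like this (the bucket name is a placeholder, and gcsfuse should already be available in Cloud Shell):

mkdir -p ~/my-bucket
gcsfuse my-bucket ~/my-bucket     # mount the GCS bucket
# ... work under ~/my-bucket ...
fusermount -u ~/my-bucket         # unmount when done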
I'm trying to implement a simple design in google cloud using app engine standard and flexible with datastore. App1 lives in GAE standard environment. When a user interacts with this app, it writes some data to datastore and queues a task. The target of the queued task is App2 that lives in app engine flexible environment (the task can take a longer time to complete than standard environment allows). The idea is for App2 to read the data from datastore, perform the task using the data, once complete it should write a report entity to datastore. I've attached a simple diagram.
In App1 I've set up a Service Account named flexKey with Owner permissions, downloaded the json file.
When I run App2 locally, I first export the path to the credentials JSON file as an environment variable:
export GOOGLE_APPLICATION_CREDENTIALS="path/to/flexKey.json"
then launch the app with mvn jetty:run-exploded, and everything works fine: App2 is able to authenticate with the live Datastore (not a local emulator) and read the data written by App1. When I unset the environment variable, I get an 'Unauthenticated' error (expected).
To use the same service account when App2 is deployed, I've added the following to app.yaml for App2, setting the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the service account JSON file flexKey.json (this is the path to the file on the deployed instance):
env: flex
env_variables:
GOOGLE_APPLICATION_CREDENTIALS: "/var/lib/jetty/webapps/root/WEB-INF/classes/flexKey.json"
runtime: java
However, when I deploy App2 to the App Engine flexible environment, there is an error authenticating with Datastore when trying to do the read query (the same query works fine with the same credentials from a locally running instance of App2):
com.google.cloud.datastore.DatastoreException: Missing or insufficient permissions.
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.translate(HttpDatastoreRpc.java:129)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.translate(HttpDatastoreRpc.java:114)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.runQuery(HttpDatastoreRpc.java:182)
at com.google.cloud.datastore.DatastoreImpl$1.call(DatastoreImpl.java:178)
at com.google.cloud.datastore.DatastoreImpl$1.call(DatastoreImpl.java:174)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:89)
at com.google.cloud.RetryHelper.run(RetryHelper.java:74)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:51)
at com.google.cloud.datastore.DatastoreImpl.runQuery(DatastoreImpl.java:173)
.....
The error code is PERMISSION_DENIED.
If I leave out the GOOGLE_APPLICATION_CREDENTIALS environment variable from the app.yaml file for App2, I get an 'Unauthenticated' error instead, so I think the file is being read; I'm not sure what the issue is.
I'm using Objectify v6
I'm not able to see why the same credentials (created with the Owner role) work fine when querying Datastore from a locally running instance of the app but don't work when Datastore is queried from the deployed version of the same app (deployed to the flexible environment). Setting the path to the credentials file via an environment variable in app.yaml is the method recommended in the documentation, unless I am mistaken.
Is the GOOGLE_APPLICATION_CREDENTIALS environment variable not properly set in app.yaml?
Is there something conceptually problematic about my design?
All help appreciated.
You have a blocking issue in your design: it is not possible for one application to enqueue tasks into a push queue targeting a service of another application. From the <target> (push queues) rows in the Syntax tables of the queue.yaml and queue.xml references:
The string is prepended to the domain name of your app when
constructing the HTTP request for a task. For example, if your app ID
is my-app and you set the target to my-version.my-service, the
URL hostname will be set to
my-version.my-service.my-app.appspot.com.
If you want to use the task queue, then you have to make the two services part of the same application. As a (positive) side effect, you no longer have to worry about setting up authentication for Datastore access: both services can directly access the app's datastore.
I would like to debug my Google App Engine (GAE) app locally but without using localhost. Since my application is made up of microservices, the URLs in a production environment would be along the lines of:
https://my-service.myapp.appspot.com/
But code in one service can call another service, and that means the URLs are hardcoded. I could of course use a mechanism in code to determine whether the app is running locally or on GAE and use different URLs, although I don't see how a local URL would handle this, since the only way to run an app locally is to use localhost. Hence:
http://localhost:8080/some-service
Notice that "some-service" maps to a servlet, whereas "my-service" is a name assigned to a service when the app is uploaded. These are really two different things.
The only possible solution I was able to find was to use a reverse proxy that would map one URL to a different one. Still, it isn't clear whether the GAE development SDK even supports this.
Personally, I chose to detect the local development vs GAE environment and build my inter-service URLs accordingly. I feel it was a worthwhile effort; I've been (re)using it a lot. No reverse proxy or any other additional ops necessary, it just works.
Granted, I'm using Python, so I'm not 100% sure a complete similar Java solution exists. But maybe it can point you in the right direction.
To build the per-service URLs I used modules.get_hostname() (the implementation is presented in Resolve Discovery path on App Engine Module). I believe the Java equivalent would be getInstanceHostname() from com.google.appengine.api.modules.
This method, when executed on the local server, automatically provides the particular port the server listens to for each service.
BTW, all my services for an app are executed by a single development server process, which listens on multiple ports (this is, I guess, how it can provide the modules.get_hostname() info). See Running multiple services using dev_appserver.py on different ports. This is part I'm unsure about: if/how the java local dev server can simultaneously run multiple services. Apparently this used to be supported some time ago (when services were still called modules):
Serving multiple GAE modules from one development server?
GAE modules on development server
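To illustrate the multi-port behaviour mentioned above, this is roughly how the Python dev server is started with several services (the yaml paths are placeholders; each additional service is assigned the next port after --port, and the admin server listens on 8000 by default):

dev_appserver.py default/app.yaml my-service/app.yaml --port=8080
# default service -> http://localhost:8080, my-service -> http://localhost:8081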
This can be accomplished with the following steps:
Create an entry in the hosts file
Run the App Engine Dev server from a Terminal using certain options
Use IntelliJ with Remote debugging to attach the App Engine Dev server.
To edit the hosts file on a Mac, edit the file /etc/hosts and add the domain that corresponds to your service. Example:
127.0.0.1 my-service.myapp.com
After you save this, you need to restart your computer for the changes to take place.
Run the App Engine Dev server manually:
dev_appserver.sh --address=0.0.0.0 --jvm_flag=-Xdebug \
  --jvm_flag=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000 \
  [path_to_exploded_war_directory]
In IntelliJ, create a debug configuration using the Remote template. Set the host to the URL you set in the hosts file and set the port to 8000.
You can set a breakpoint and run the app in IntelliJ. IntelliJ will attach to the running instance of App Engine Dev server.
Because a port is used during local debugging and no port is used when the app runs on GAE in production, you need to add code that identifies when the app is running locally and when it's running on GAE. This can be done as follows:
import com.google.appengine.api.utils.SystemProperty;

private String mServiceUrl = "my-service.my-app.appspot.com";
...
// Outside production (i.e. on the local dev server), append the local port.
if (SystemProperty.environment.value() != SystemProperty.Environment.Value.Production) {
    mServiceUrl += ":8000";
}
See https://cloud.google.com/appengine/docs/standard/java/tools/using-local-server
An improved solution is to avoid including the port altogether and not have to use code to determine whether your app is running locally or on the production server. One way to do this is to use Charles (an application for monitoring and interacting with requests) and its Remote Mapping feature, which lets you map one URL to another. When enabled, you could map something like:
https://my-service.my-app.appspot.com/
to
https://localhost:8080
You would then enable the option to include the original host, so that this gets delivered to the local dev server. As far as your code is concerned it only sees:
https://my-service.my-app.appspot.com/
although the IP address will be 127.0.0.1:8080 when remote mapping is enabled. Using https on localhost, however, does require that you enable SSL certificates for Charles.
For a complete overview on how to setup and debug microservices for a GAE Java app in IntelliJ, see:
https://github.com/JohannBlake/gae-microservices