I am deploying a Docker service to Google Cloud. The service is internal and needs external public access disabled. The only approach I can find is to use handlers for the service, but I don't see any way to use handlers with a Docker service.
From the Google App Engine example:
handlers:
- url: /_ah/push-handlers/.*
  login: admin
  script: main.app
This doesn't appear to work for Docker, since none of the scripts are directly accessible and the URL is set inside the Docker container.
How do I set up a handler (or at least disable external access to the service) for a Docker container?
I've scoured the App Engine documentation for an explanation of what an entry point is, and I've frankly hit a wall. I was hoping someone on SO could explain what an entry point is and what its purpose is.
An entrypoint is a Docker command that is executed when the container starts, allowing you to configure a container that will run as an executable.
For App Engine, the entrypoint is specified in the app.yaml file. The command in the entrypoint field is included as the entrypoint of your app's Dockerfile, so it determines how the application is started when you deploy it. The entrypoint should start a web server that listens on port 8080, which is the port App Engine uses to send requests to the deployed container. App Engine provides the PORT environment variable for convenience.
For example:
entrypoint: gunicorn -b :$PORT main:app
With this entrypoint you specify how the app should be started, in this case with gunicorn, and on which port it should keep listening.
By default, this gunicorn command is the entrypoint used by App Engine when you do not explicitly set one in the app.yaml file.
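For example, a minimal app.yaml that sets this entrypoint explicitly might look like the following sketch for the flexible environment (the runtime value and the main:app module name are illustrative):
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app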
You always need an entrypoint because all App Engine apps are deployed as Docker containers. Even if you only deploy a file with your code, App Engine builds a Docker container with the parameters set in app.yaml; internally, every App Engine deployment is a build, and the image is provided by App Engine.
Also, when you deploy an app with App Engine, you can find the related build in the Cloud Build section of your GCP console, including all the steps and information for the build of the Docker container in which your App Engine app is deployed.
In conclusion, App Engine uses the Docker entrypoint because internally, during deployment, App Engine uses the Cloud Build service to build a container image for your app with the information given in the entrypoint.
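As a rough illustration of that mapping (this is not the actual Dockerfile App Engine generates; the base image, build steps and dependencies here are assumptions for the sketch), the effect is comparable to a Dockerfile whose last instruction sets the same command:
# Illustrative sketch only; App Engine / Cloud Build produces the real Dockerfile.
FROM python:3.11-slim
WORKDIR /app
COPY . .
# Dependencies assumed for the sketch (gunicorn to serve, flask for main:app)
RUN pip install gunicorn flask
# The app.yaml entrypoint ends up as the container's entrypoint
ENTRYPOINT gunicorn -b :8080 main:app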
The entrypoint tells the container what to do when it is run. I see it most frequently with Docker, but other container formats will have something equivalent.
For App Engine, the key thing the entrypoint setting does is start the HTTP server which listens for requests. Here is the Python documentation describing the entrypoint, but there are also links for other runtimes at the top of the page.
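For example, for a Node.js app the entrypoint might simply be the command that starts your server (the server.js filename is an assumption):
entrypoint: node server.js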
Deploying an application to Google App Engine using the 'Custom Runtimes Flexible Environment' option requires a Dockerfile to build the Docker image Google-side. I want to specify an image from my private Docker registry in the Dockerfile FROM clause. However, I cannot find any documentation or see any obvious options explaining where I would specify credentials for a private registry, or invoke a docker login. Without this, gcloud app deploy fails, of course, attempting to pull the image Google-side.
For example:
$ gcloud app deploy
...
Beginning deployment of service
...
Sending build context to Docker daemon 3.072kB
Step 1/1 : FROM registry.gitlab.com/my/private/registry/image:latest
Get https://registry.gitlab.com/v2/my/private/registry/image/manifests/latest: denied: access forbidden
The Dockerfile in this case would simply be:
FROM registry.gitlab.com/my/private/registry/image:latest
Does anyone out there know if this is possible with Google App Engine, and if so, how to configure it?
There is this Stack Overflow post that already covers the topic and provides an answer.
In fact, if you upload your image to Google Container Registry, it will be private and you will be able to control access to it using IAM permissions. After that you can use it in your App Engine deployments:
gcloud app deploy --image-url $GCR_IMAGE_PATH
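A rough sketch of that workflow, with placeholder project and image names:
# Allow docker to push to gcr.io with your GCP credentials
gcloud auth configure-docker
# Re-tag the private image for Google Container Registry and push it
docker tag registry.gitlab.com/my/private/registry/image:latest gcr.io/my-project/my-image:latest
docker push gcr.io/my-project/my-image:latest
# Deploy App Engine using the image hosted in GCR
gcloud app deploy --image-url gcr.io/my-project/my-image:latest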
I have a Node.js App Engine project. I also have an Apache website on another server which hosts the project dashboard. That site is the one that uses the Node API.
I would like to host both projects together on this Google Cloud project.
Can this be achieved simply by using services in the app.yaml?
I also have an Apache website on another server which hosts the project dashboard.
What does this other server actually do? If it's serving static files, you could easily do this by adding a static_dir handler in your app.yaml
handlers:
# All URLs beginning with /dashboard are treated as paths to
# static files in the web-dashboard/ directory.
- url: /dashboard
  static_dir: web-dashboard
If there is actual web server code running, you could set up an App Engine flex service with a custom runtime and Dockerfile to run Apache:
https://cloud.google.com/appengine/docs/flexible/custom-runtimes/
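For example, a minimal app.yaml for such a custom-runtime service might look like this (the service name "dashboard" is an assumption):
runtime: custom
env: flex
service: dashboard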
But an easier lift would be just rewriting your web server code to work with one of App Engine's existing flex runtimes: https://cloud.google.com/appengine/docs/flexible/
Once you do that, you can route traffic between the two services with a dispatch.yaml:
https://cloud.google.com/appengine/docs/standard/python/reference/dispatch-yaml
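For example, a minimal dispatch.yaml might look like this (the /dashboard path and the service name are assumptions):
dispatch:
  - url: "*/dashboard/*"
    service: dashboard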
I am currently using the App Engine Maven plugin, which seems to trigger a Google Cloud build to build a Docker image and then push it to App Engine.
Is it possible for me to just push an existing Docker image from Docker Hub or Google Container Registry?
You can deploy to App Engine using a specific Docker image hosted on Google Container Registry by using the --image-url flag like this:
gcloud app deploy --image-url=[HOSTNAME]/[PROJECT-ID]/[IMAGE]
See doc here for more info on the hostname options.
It is also possible to do this through the Dockerfile in your app directory.
I noticed this while searching for ways to customize Google's own NGINX container in the App Engine instance (this is what is used to serve your app).
The first line of the Nginx Dockerfile is FROM nginx. This references the 'nginx' image in the default image repository, so it could be any image in the default registry, referenced by name. The default registry appears to be the Docker Hub registry (I did not investigate whether Google is mirroring it or similar).
In this way, your app directory only needs to contain two files: app.yaml and a Dockerfile.
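As a rough sketch of that approach (the image path is a placeholder, and this assumes App Engine can pull the image, e.g. from your project's Container Registry or a public Docker Hub repository), the two files could be as minimal as:
app.yaml:
runtime: custom
env: flex
Dockerfile:
FROM gcr.io/my-project/my-image:latest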
I would like to know how I can use a secure connection (HTTPS) with Dart and Managed VMs, both on localhost and when deployed.
Thank you.
When an application is deployed using gcloud preview app deploy, the default is that the App Engine application is served over both HTTP and HTTPS. If you have an application at
http://project.appspot.com
you can access it using HTTPS on
https://project.appspot.com
If you are not accessing the default version, the URLs are:
http://version.project.appspot.com
and HTTPS on
https://version-dot-project.appspot.com
Note the first . changing to -dot-.
You can specify the following in the app.yaml to serve the application only over HTTPS:
handlers:
- url: /.*
  script: dummy
  secure: always
This will also redirect HTTP to HTTPS, but unfortunately it does not rewrite . to -dot- if you are not using the default version.
For local development using gcloud preview app run it is not possible to use HTTPS. The following quote is from the App Engine documentation:
The development web server does not support HTTPS connections. It ignores the secure parameter, so paths intended for use with HTTPS can be tested using regular HTTP connections to the development web server.
See https://github.com/dart-lang/appengine/issues/16 and https://cloud.google.com/appengine/docs/python/config/appconfig#Python_app_yaml_Secure_URLs.