I am deploying a binary on the Google App Engine flexible environment for two different services, so I have {app-service1.yaml, Dockerfile-service1} and {app-service2.yaml, Dockerfile-service2}, and I use the "gcloud app deploy" command to deploy them.
Is it possible to pass a parameter from app-service[1|2].yaml to a single Dockerfile, so that I can maintain only one Dockerfile?
I tried two things, but neither worked with the "gcloud app deploy" command:
"entrypoint:" in app.yaml -- it does not override what is set by CMD in the Dockerfile.
"env_variables:" in app.yaml -- the Dockerfile's ENV and ARG instructions do not see any variables defined under env_variables:.
There's currently no way (that I can think of) to pass parameters into the Docker build process while using gcloud app deploy. If the Dockerfiles you're using are similar, you may want to consider creating a base Dockerfile, building a base image, and pushing it to gcr.io. Then you can extend the base image with your other Dockerfile(s).
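For example, a rough sketch of that approach (the image path, project ID, and gunicorn module are just placeholders):

# Dockerfile.base -- the layers both services share, built and pushed once
FROM gcr.io/google-appengine/python
ADD requirements.txt /app/
RUN pip install -r requirements.txt

# build the base image locally and push it to your project's registry
docker build -t gcr.io/YOUR_PROJECT/app-base -f Dockerfile.base .
docker push gcr.io/YOUR_PROJECT/app-base

# Dockerfile-service1 -- extends the base; only the final CMD differs per service
FROM gcr.io/YOUR_PROJECT/app-base
ADD . /app/
CMD exec gunicorn -b :$PORT service1:app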
Hope this helps!
I've scoured the App Engine documentation for an explanation of what an entry point is, and I've frankly hit a wall. I was hoping someone on SO could explain what an entry point is and what its purpose is.
An entrypoint is a Docker command that is executed when the container starts, allowing you to configure a container that will run as an executable.
For App Engine, the entrypoint is specified in the app.yaml file. The command in the entrypoint field is included as the entrypoint of your app's Dockerfile, so it determines how the application is started when you deploy it. The entrypoint should start a web server that listens on port 8080, which is the port App Engine uses to send requests to the deployed container. App Engine provides the PORT environment variable for convenience.
For example:
entrypoint: gunicorn -b :$PORT main:app
With this entrypoint you are specifying how you want the app to be started, in this case with gunicorn, and on which port you want it to listen.
By default, this gunicorn command is the entrypoint used by App Engine when you do not explicitly set one in the app.yaml file.
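For context, a minimal app.yaml with an explicit entrypoint could look roughly like this (the runtime and module name are illustrative):

runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app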
You always need an entrypoint because all App Engine apps are deployed as Docker containers. Even if you only deploy a file with your code, App Engine will build a Docker container using the parameters set in app.yaml, because deploying an app to App Engine internally kicks off a build in which the image is produced by App Engine.
Also, when you deploy an app with App Engine you can find the related build in the Cloud Build section of your GCP console, where you'll find all the steps and information for the build of the Docker container in which your App Engine app is deployed.
In conclusion, App Engine uses the entrypoint from Docker because, internally, what App Engine does during deployment is use the Cloud Build service to build a container image for your app with the information given in the entrypoint.
The entrypoint tells the container what to do when it is run. I see it most frequently with Docker, but other container formats will have something equivalent.
For App Engine, the key thing the entrypoint setting does is start the HTTP server which listens for requests. Here is the Python documentation describing the entrypoint, but there are also links for other runtimes at the top of the page.
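To tie the pieces together, here is a minimal sketch of the main:app module that an entrypoint like gunicorn -b :$PORT main:app would serve (Flask is just one possible framework):

# main.py -- 'app' is the WSGI object that gunicorn resolves as main:app
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from App Engine!'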
Deploying an application to Google App Engine using the 'Custom Runtimes Flexible Environment' option requires a Dockerfile to build the docker image Google-side. I want to specify an image from my private Docker registry in the Dockerfile FROM clause. However, I cannot find any documentation or see any obvious options explaining where I would specify credentials for a private registry, or invoke a docker login. Without this, gcloud app deploy fails, of course, attempting to pull the image Google-side.
For example:
$ gcloud app deploy
...
Beginning deployment of service
...
Sending build context to Docker daemon 3.072kB
Step 1/1 : FROM registry.gitlab.com/my/private/registry/image:latest
Get https://registry.gitlab.com/v2/my/private/registry/image/manifests/latest: denied: access forbidden
The Dockerfile in this case would simply be:
FROM registry.gitlab.com/my/private/registry/image:latest
Does anyone out there know if this is possible with Google App Engine, and if so, how to configure it?
There is a Stack Overflow post that already covers this topic and provides an answer.
In fact, if you upload your image to Google Container Registry, it will be private and you will be able to control who has access to GCR by using IAM permissions. After that you can use it in your App Engine deployments:
gcloud app deploy --image-url $GCR_IMAGE_PATH
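In practice that means re-tagging the private image into GCR and pointing the deployment at it, roughly like this (the GCR path is a placeholder):

docker pull registry.gitlab.com/my/private/registry/image:latest
docker tag registry.gitlab.com/my/private/registry/image:latest gcr.io/YOUR_PROJECT/image:latest
docker push gcr.io/YOUR_PROJECT/image:latest
gcloud app deploy --image-url gcr.io/YOUR_PROJECT/image:latest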
I'm using Google App Engine flexible with the Python environment. Right now I have two services: default and worker, which share the same codebase and are configured by app.yaml and worker.yaml. Now I need to install a native C++ library, so I had to switch to a custom runtime and added a Dockerfile.
Here is the Dockerfile generated by the gcloud beta app gen-config --custom command:
FROM gcr.io/google-appengine/python
LABEL python_version=python3.6
RUN virtualenv --no-download /env -p python3.6
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
CMD exec gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi
Previously my app.yaml and worker.yaml each had its own entrypoint: config that specified the command needed to start the service.
So, my question is how can I use two different commands to start the services?
EDIT 1
So far I have been able to solve this by rewriting the CMD line in the Dockerfile for each deploy of each service. However, I'm not quite satisfied with this solution.
The gcloud app deploy command has an --image-url flag that allows setting the image URL from GCR. I haven't researched that yet, but it seems I could just upload images to GCR and use the URLs, since they don't change that often.
Yes, as you mentioned, I think using the --image-url flag is a good option here.
Specify a custom runtime.
Build the image locally, tag it, and push it to Google Container Registry (GCR).
Then deploy your service, specifying a custom service file and the remote image on GCR via the --image-url option.
Here's an example that accomplishes different entrypoints in 2 services that share the same code:
...this is assuming that the "flex" and not "standard" app engine offering is being used.
Let's say you have a project called my-proj
with a default service that is not important
and a second service called queue-processor which uses much of the same code from the same directory.
Create a separate Dockerfile for it called QueueProcessorDockerfile
and a separate app.yaml called queue-processor-app.yaml to tell Google App Engine what I want to happen.
QueueProcessorDockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./
RUN npm install -g yarn
RUN yarn
# Bundle app source
COPY . .
CMD [ "yarn", "process-queue" ]
*Of course, I have a "process-queue" script in my package.json.
queue-processor-app.yaml
runtime: custom
env: flex
... other stuff...
...
Build and tag the Docker image.
Check out Google's guide here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
docker build -t eu.gcr.io/my-proj/queue-processor -f QueueProcessorDockerfile .
Push it to GCR.
docker push eu.gcr.io/my-proj/queue-processor
Deploy the service, specifying which YAML config file Google should use, as well as the image URL you have pushed:
gcloud app deploy queue-processor-app.yaml --image-url eu.gcr.io/my-proj/queue-processor
Since the Dockerfile name cannot be changed, the only way to avoid modifying the Dockerfile would be to store each service in its own, separate directory. Clean separation: each service has its own Dockerfile and/or startup configuration.
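For instance, a layout along these lines (names are purely illustrative):

my-proj/
  default/
    app.yaml
    Dockerfile
    main.py
  worker/
    worker.yaml      # deployed with: gcloud app deploy worker/worker.yaml
    Dockerfile
    worker.py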
But this raises a question: how to deal with the code shared by multiple services? Using symlinks (which works great for sharing code across standard env services) doesn't work for the flexible env services, see Sharing code between flexible environment modules in a GAE project.
I see a few possible approaches, none really ideal, but maybe more appealing than what you currently have:
hard-link each and every shared source code file (since hard-linking directories is not possible). A bit tedious and error-prone, but you only have to do it once per file; see the shell sketch after this list.
package and publish your shared code as an external library, added to the requirements.txt file of each service using it
split the shared code into a separate repository and have a copy of that repository in each service using it (maybe as a git submodule if using git?). You just need to ensure at service deployment time that the shared repository is pulled at the proper version - this can be done quite reliably through automation. It's a bit more complicated if you have uncommitted changes in this repo - you'd have to patch the same changes in all services.
have multiple copies of the Dockerfile with different names which you simply copy over instead of always editing the same file. Symlinking instead of copying might work as well: since the symlink doesn't need to be followed outside of the service directory, if it's just replicated as a symlink it'll still work.
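As a small illustration of the hard-linking option, something like this run from the repository root might do it (directory and file names are made up):

# replicate the shared Python files into each service directory as hard links
mkdir -p default/shared worker/shared
for f in shared/*.py; do
    ln "$f" "default/$f"
    ln "$f" "worker/$f"
done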
So I had a very similar issue with my Java applications. We were looking to migrate from Heroku to GAE and were attempting to simulate the Heroku Procfile with GAE services. Effectively, what we did was create separate directories in our application, src/main/appengine/web and src/main/appengine/worker, where each directory contained the app.yaml and Dockerfile specific to the process. Then, using the mvn appengine:deploy capabilities, we specified -Dapp.stage.dockerDirectory and -Dapp.stage.appEngineDirectory respectively for each service we wanted to deploy. Then, using just some parameters, we were able to basically script out parallel deployments of each service from the same code base. Not sure if this works in your situation, but it was very useful for us. Here are the two example commands in their entirety:
Web Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/web -Dapp.stage.appEngineDirectory=src/main/appengine/web -Dapp.stage.stagingDirectory=target/appengine-web -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
Worker Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/worker -Dapp.stage.appEngineDirectory=src/main/appengine/worker -Dapp.stage.stagingDirectory=target/appengine-worker -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
My problem is that I want to create dev, stage, and prod environments using different GCP projects.
Basically they run the same code, just in different, isolated environments.
I'm using gcloud app deploy on the command line to deploy the app right now.
How can I efficiently deploy an app to different projects?
Do I have to run gcloud init to change my default-project configuration every time?
There must be some better practices.
Or, is there a better way for me to set up dev... environments in the context of app engine?
Thanks.
Instead of using gcloud init to change your configuration each time, use this command to deploy to multiple projects faster:
gcloud app deploy -q --project [YOUR_PROJECT_ID]
If your projects have similar IDs, let's say test1, test2, test3, test4, you can deploy to all four projects with one command:
for i in {1..4}; do gcloud app deploy -q --project test${i}; done
The "standard" approach is to use versions, e.g.
qa.myApp.appspot.com
Once a version is ready for the next step, you deploy it with a different version ID.
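For example (the version IDs are arbitrary):

# deploy a candidate version without making it the default serving version
gcloud app deploy --version qa --no-promote
# once qa checks out, deploy the same code under the next version ID
gcloud app deploy --version prod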
One problem with using multiple projects is that you have to maintain a different data set for each project.
My preference is to have the different environments managed via the same version control as the code - one branch for each environment, keeping the deployments perfectly aligned with the natural flow of code changes, promoted via branch merges: dev -> stage -> production.
To minimize the risk of human error I try as much as possible to keep the deployment configs in the code itself (i.e. - have the app IDs, versions, etc. picked up from the .yaml files, not passed to the deploy cmd as args). The deployment cmds themselves are kept in a cheat-sheet file (too simple to warrant a full-blown script at this time), also git-controlled. Illustrated in this answer: https://stackoverflow.com/a/34111170/4495081
Deployments are done from separate, dedicated workspaces - one for each environment, based on the corresponding git branch (I never switch the branches in these workspaces). I just update the workspace corresponding to the desired environment to the version needed and copy-paste the deployment cmd from the workspace's cheat-sheet.
This model is IMHO CI/CD-ready and can easily be entirely automated.
For Python applications, you can set application in the app.yaml file. This lets you use different data for each project. This applies when you deploy using the appcfg.py command.
application: myproject
version: alpha-001
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /
script: home.app
If you don't want to change the application value in this file for each project, you can run the following:
appcfg.py -A <YOUR_PROJECT_ID> -V v1 update myapp/
https://cloud.google.com/appengine/docs/python/config/appref
If you do not specify the application in the file, use the --application option in the appcfg command when you deploy. This element is ignored when you deploy using the gcloud app deploy command.
Is it possible to pass arguments when building a managed VM, so I can use the 'ARG' Docker instruction?
In the Dockerfile I set a default value...
ARG env="dev"
When building the Docker container I can change this value...
docker build -t test/app --build-arg env=pr .
I have two environments and I want to deploy the managed VM with different configuration files chosen during the Dockerfile build process.
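To make the intent concrete, this is the kind of thing I'd like the Dockerfile to do (the file names are just examples):

ARG env="dev"
# pick the environment-specific config file at build time
COPY config.${env}.yaml /app/config.yaml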
Thanks.
Sadly, this isn't currently supported. All of our docker build stuff right now is kind of magically integrated. There are a few options here, though none of them are quite what you're looking for.
You can build your Docker container locally, push it to gcr.io, and then use the --image-url flag on gcloud app deploy.
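Roughly (the GCR path is a placeholder):

docker build -t gcr.io/YOUR_PROJECT/your-app --build-arg env=pr .
docker push gcr.io/YOUR_PROJECT/your-app
gcloud app deploy --image-url gcr.io/YOUR_PROJECT/your-app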
In the next few weeks, we're going to start using our container builder service by default for docker builds with Managed VMs. While we don't have a plan right now to expose the setting, there's a config setting that allows you to define environment variables via the container builder API. It's going to be easier to support something like this in the future.
Hope this helps!