Is there a way to automatically run a list of Linux commands in the Docker container after the deployment finishes, like the lifecycle hooks available for Kubernetes in the YAML file?
I do not want to have to SSH into the instance and run my commands.
I need to install ssh-client, and sometimes vim and other packages.
For those who are looking for a solution to this problem:
With App Engine using runtime: python or another default runtime in app.yaml, there isn't much room for customization.
To be able to create your own build you have to use runtime: custom and add a Dockerfile in the same (root) directory.
This is what it looks like:
app.yaml:
Only the first line changes.
runtime: custom
# the PROJECT-DIRECTORY is the one with settings.py and wsgi.py
entrypoint: gunicorn -b :$PORT mysite.wsgi # specific to a Gunicorn HTTP server deployment
env: flex # for the Google Cloud Flexible App Engine environment

# any environment variables you want to pass to your application,
# accessible through os.environ['VARIABLE_NAME']
env_variables:
  # the secret key used for the Django app (from PROJECT-DIRECTORY/settings.py)
  SECRET_KEY: 'lkfjop8Z8rXWbrtdVCwZ2fMWTDTCuETbvhaw3lhwqiepwsfghfhlrgldf'
  DEBUG: 'False' # always False for deployment
  DB_HOST: '/cloudsql/app-example:us-central1:example-postgres'
  DB_PORT: '5432' # PostgreSQL port
  DB_NAME: 'example-db'
  DB_USER: 'mysusername' # either 'postgres' (default) or one you created on the PostgreSQL instance page
  DB_PASSWORD: 'sgvdsgbgjhrhytriuuyokkuuywrtwerwednHUQ'
  STATIC_URL: 'https://storage.googleapis.com/example/static/' # this is the URL that you sync static files to

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2

handlers:
- url: /static
  static_dir: static

beta_settings:
  cloud_sql_instances: app-example:us-central1:example-postgres

runtime_config:
  python_version: 3 # enter the Python base version only: 2 for 2.7.9 or 3 for 3.6.4
Dockerfile:
FROM gcr.io/google-appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN virtualenv /env -p python3
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Install custom Linux packages (python-qt4 is needed for OpenCV)
RUN apt-get update -y && apt-get install vim python-qt4 ssh-client git -y
# Add the application source code and install all dependencies into the virtualenv
ADD . /app
RUN pip install -r /app/requirements.txt
# Add my SSH key for GitHub
RUN mv /app/.ssh / && \
chmod 600 /.ssh/*
# Run a WSGI server to serve the application.
EXPOSE 8080
# gunicorn must be declared as a dependency in requirements.txt.
WORKDIR /app
# Apply the database migrations first, then exec the WSGI server so it runs as PID 1
# (an exec'd gunicorn never returns, so it must come last in the chain).
CMD python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    exec gunicorn --bind :$PORT --workers 1 --threads 8 Blacks.wsgi:application --timeout 0 --preload
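With both files in the project root, the deployment itself stays the usual command, run from that directory:

gcloud app deploy

The packages installed in the Dockerfile are then baked into every instance, and the CMD runs on each instance start, so nothing needs to be done over SSH afterwards.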
App Engine is a serverless solution. In the product description you may find:
Fully managed
A fully managed environment lets you focus on code while
App Engine manages infrastructure concerns.
This means that if you choose App Engine you don't have to care about the server. By design, it is for those who do not want any access to the server and prefer to focus on code, leaving all server maintenance to GCP. I think the main feature is automatic scaling of the app.
We do not know what you intend to do, but you can review the app.yaml reference to find all the available features. The configuration differs depending on the language environment you want to use.
If you want access to the environment you should use a Kubernetes solution or even Compute Engine.
I hope this helps somehow!
Another simple workaround would be to create a separate URL handler that runs your shell script, for example /migrate. Then, after deployment, you can use curl to trigger that URL.
Please note that in such a case anybody who goes looking for secret URLs on your backend may find it and trigger it as many times as they want. So if you need to ensure that only trusted people can trigger it, you should either:
come up with a more secret URL than just /migrate
check permissions inside this view (but in that case it will be harder to call it via curl, because you'll also need to pass some auth data)
Example basic view (using Python + Django REST Framework):
from io import StringIO
from django.core.management import call_command
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.response import Response
from rest_framework.views import APIView
class MigrateView(APIView):
    permission_classes = ()
    renderer_classes = [StaticHTMLRenderer]

    def post(self, request, *args, **kwargs):
        out = StringIO()
        # --no-color because ANSI symbols (used for colors)
        # render incorrectly in browser/curl
        call_command('migrate', '--no-color', stdout=out)
        return Response(out.getvalue())
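Assuming the view is routed at /migrate, triggering it after each deployment is a single request (the hostname is a placeholder for your real App Engine URL):

curl -X POST https://your-app.appspot.com/migrate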
After deploying Metabase on Google Cloud, the GAE app URL shows an error page.
I followed all the instructions in this link https://www.cloudbooklet.com/install-metabase-on-google-cloud-with-docker-app-engine/ to deploy Metabase on GAE.
I have tried with both MySQL and PostgreSQL databases, but the result is always an error page.
Here is my app.yaml:
env: flex
manual_scaling:
  instances: 1
env_variables:
  MB_JETTY_PORT: 8080
  MB_DB_TYPE: postgres
  MB_DB_DBNAME: metabase
  MB_DB_PORT: 5432
  MB_DB_USER: root
  MB_DB_PASS: password
  MB_DB_HOST: 127.0.0.1
beta_settings:
  cloud_sql_instances: <sql_instance>=tcp:5432
Here is my Dockerfile:
FROM gcr.io/google-appengine/openjdk
EXPOSE 8080
ENV PORT 8080
ENV MB_PORT 8080
ENV MB_JETTY_PORT 8080
ENV MB_DB_PORT 5432
ENV METABASE_SQL_INSTANCE <sql_instance>=tcp:5432
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 ./cloud_sql_proxy
ADD http://downloads.metabase.com/v0.33.2/metabase.jar /metabase.jar
RUN chmod +x ./cloud_sql_proxy
CMD ./cloud_sql_proxy -instances=$METABASE_SQL_INSTANCE=tcp:$MB_DB_PORT & java -jar ./metabase.jar
Following is the error I get in the console log:
INFO metabase.driver :: Registered abstract driver :sql ?
Also, the error message on the App Engine URL says the following:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
I tried all the options I could find; please help me with a working solution.
First, start by following the instructions on the Connecting from App Engine page. Make sure that the SQL Admin API is enabled, and that the service account being used has the Cloud SQL Connect IAM role.
Second, you don't need to run the proxy in the Docker container. When you specify the instance in the app.yaml, App Engine lets you access it on 172.17.0.1:<PORT>. (Although if you are using a container, I would highly suggest you try Cloud Run instead.)
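For reference, that is just the beta_settings block already shown in the question (keep your own instance connection name in place of <sql_instance>):

beta_settings:
  cloud_sql_instances: <sql_instance>=tcp:5432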
Finally, according to the Metabase setup instructions, you need to provide environment variables to the container to specify which database it should use. These env vars all have the form MB_DB_*.
Here is what a Dockerfile without the proxy might look like:
FROM gcr.io/google-appengine/openjdk
ENV MB_JETTY_PORT 8080
ENV MB_DB_TYPE postgres
ENV MB_DB_HOST 172.17.0.1
ENV MB_DB_PORT 5432
ENV MB_DB_USER <your-username>
ENV MB_DB_PASS <your-password>
ENV MB_DB_DBNAME <your-database>
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
# the jar itself still has to be added to the image
ADD http://downloads.metabase.com/v0.33.2/metabase.jar /metabase.jar
ENTRYPOINT java -jar /metabase.jar
For bonus points, you might consider using the distroless container (gcr.io/distroless/java:11) as a base instead (especially if you switch to Cloud Run).
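A rough sketch of that distroless variant, reusing the same jar download as above (untested; the MB_DB_* variables would still need to be set as in the Dockerfile above):

FROM gcr.io/distroless/java:11
ENV MB_JETTY_PORT 8080
ADD http://downloads.metabase.com/v0.33.2/metabase.jar /metabase.jar
# exec-form ENTRYPOINT, since distroless images ship no shell
ENTRYPOINT ["java", "-jar", "/metabase.jar"]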
There is a lot of documentation, but nothing specific about Dockerfiles (or I missed it).
My app.yaml file:
runtime: custom
env: flex
env_variables:
  MYSQL_DSN: mysql:unix_socket=/cloudsql/project-name:europe-west1:test001;dbname=db001
  MYSQL_USER: root
  MYSQL_PASSWORD: 'qwerty'
My Dockerfile:
FROM ubuntu:16.04
ARG dbuser
ENV dbuser ${MYSQL_USER}
ARG dbpass
ENV dbpass ${MYSQL_PASSWORD}
ARG dbhost
ENV dbhost ${MYSQL_DSN}
RUN apt-get update
RUN apt-get install mysql-client
RUN mysql -h ${dbhost} -u ${dbuser} -p${dbpass} -e "CREATE DATABASE 'test';"
Documentation followed:
https://cloud.google.com/appengine/docs/flexible/nodejs/configuring-your-app-with-app-yaml#Node.js_app_yaml_Defining_environment_variables
https://cloud.google.com/appengine/docs/php/cloud-sql/
The mysql command line does not understand the DSN syntax you are passing; the socket and the database name must be passed in separately.
Additionally, RUN entries in your Dockerfile are executed when the Docker image is built, before it is actually run in your app. As a result they don't have the runtime environment available to them. Moreover, you probably don't want to be configuring or accessing a remote database while building an image.
Here's an alternative:
app.yaml
runtime: custom
env: flex
env_variables:
  MYSQL_SOCK: /cloudsql/project-name:europe-west1:test001
  MYSQL_DB: db001
  MYSQL_USER: root
  MYSQL_PASSWORD: 'qwerty'
your_program.sh
#!/bin/sh
mysql -S $MYSQL_SOCK -u $MYSQL_USER -p$MYSQL_PASSWORD $MYSQL_DB -e "CREATE DATABASE test;"
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y mysql-client
# the script has to be copied into the image and be executable
COPY your_program.sh /your_program.sh
RUN chmod +x /your_program.sh
CMD ["/your_program.sh"]
I am trying to show on my browser the webapp I've created for a school project.
First of all, I've put my Dockerfile and my .war file in the same folder, /home/giorgio/Documenti/dockerProject. In my Dockerfile I've written the following:
# Pull base image
From tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz#email.com>"
# Copy to images tomcat path
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY file.war /home/giorgio/Documenti/apache-tomcat-7.0.72/webapps/
Then I've built the image with the command from the ubuntu shell:
docker build -t myName /home/giorgio/Documenti/dockerProjects
Finally, I've run on the shell:
docker run --rm -it -p 8080:8080 myName
Now, everything works fine and there are no errors; however, when I open localhost:8080 in my browser nothing shows up, even though Tomcat has started perfectly fine.
Any thoughts about a possible problem which I can't see?
Thank you!
Is this your whole Dockerfile?
Because you just remove all the ROOT content (step #3),
then copy the war file with your application (step #4) - probably just a wrong folder in the question (it should be /usr/local/tomcat/webapps/).
But I don't see any entrypoint or foreground application being started.
I suppose you need to add:
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
which simply runs Tomcat. It is also routine to EXPOSE the port, although when you run with -p Docker publishes the port anyway.
So your Dockerfile should look like:
# Pull base image
FROM tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz#email.com>"
# Copy the war into the image's Tomcat path
RUN rm -rf /usr/local/tomcat/webapps/ROOT
# fixed path for copying
COPY file.war /usr/local/tomcat/webapps/
# Routine for me - optional for your case
EXPOSE 8080
# And run Tomcat
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
I would like to customise a (Python) Standard Runtime Managed VM.
In theory, this should be possible by adding some extra commands to the VM Dockerfile.
Google's documentation states that a VM Dockerfile is automatically generated when the app is first deployed:
If you are using a standard runtime, the SDK will create a Dockerfile for you the first time you run the gcloud preview app deploy commands. The file will exist in a predetermined location:
If you are developing in Java, the Dockerfile appears in the root of the compiled Web Application Archive directory (WAR)
If you are developing in Python or Go, the Dockerfile appears in the root of your application directory.
And that extra commands can indeed be added:
You can add more docker commands to this file, while continuing to run and deploy your app with the standard runtime declaration.
However, in practice the Dockerfile is automatically deleted immediately after deployment completes, preventing any customisation.
Has anyone managed to add Dockerfile commands to a Managed VM with a Standard Runtime? Any help would be gratefully appreciated.
I tried the same thing and did not succeed. There is however an equivalent way of doing this that I fell back to.
You can create a custom runtime that mimics the standard runtime.
You can do this because Google provides the Docker base images for all the standard runtimes. Mimicking a standard runtime is therefore simply a matter of selecting the right base image in the Dockerfile of the custom runtime. For the standard Python App Engine VM the Dockerfile is:
FROM gcr.io/google_appengine/python-compat
ADD . /app
Now that you have recreated the standard runtime as a custom runtime, you can modify the Dockerfile to make any customizations you need.
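As a sketch, such a customized Dockerfile might look like this (the extra apt packages are placeholders for whatever your app actually needs):

FROM gcr.io/google_appengine/python-compat
# your customizations go here, e.g. extra OS packages
RUN apt-get update && apt-get install -y vim git
ADD . /app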
Important Note
The development server does not support custom Dockerfiles (you will get an error about --custom-entrypoint), so you have to move your test environment to App Engine servers if you are doing this. I think this is true regardless of whether you are using a standard runtime and customizing the Dockerfile or using a custom runtime. See this answer.
A note about the development server not working with custom runtimes - dev_appserver.py doesn't deal with Docker or Dockerfiles, which is why it complains about needing you to specify --custom_entrypoint. However as a workaround you can manually set up the dependencies locally. Here's an example using 'appengine-vm-fortunespeak' which uses a custom runtime based on python-compat:
$ git clone https://github.com/GoogleCloudPlatform/appengine-vm-fortunespeak-python.git
$ cd appengine-vm-fortunespeak-python
# Local dependencies from Dockerfile must be installed manually
$ sudo pip install -r requirements.txt
$ sudo apt-get update && sudo apt-get install -y fortunes libespeak-dev
# We also need gunicorn since it's used by python-compat to serve the app
$ sudo apt-get install gunicorn
# This is straight from dev_appserver.py --help
$ dev_appserver.py app.yaml --custom_entrypoint="gunicorn -b localhost:{port} main:app"
Note that if you are using any of the non-compat images, you can run your app directly using Docker since they don't need to emulate the legacy App Engine API. For example, using 'getting-started-python', which uses the python runtime:
$ git clone https://github.com/GoogleCloudPlatform/getting-started-python.git
$ cd getting-started-python/6-pubsub
# (Configure the app according to the tutorial ...)
$ docker build .
$ docker images # (note the IMAGE_ID)
$ docker run -p 127.0.0.1:8080:8080 -t IMAGE_ID
Try the above with any -compat images and you will have problems - for example on python-compat you'll see initialization errors in runtime/google/appengine/tools/vmboot.py. It needs to be run on a real Managed VM instance.
In Google App Engine's Python runtime for Managed VMs, I want to install the Splinter (Selenium) Chromedriver. According to the documentation for Linux, I have the following in my Dockerfile:
# Dockerfile extending the generic Python image with application files for a
# single application.
FROM gcr.io/google_appengine/python-compat
RUN apt-get update && apt-get install -y apt-utils zip unzip wget
ADD requirements.txt /app/
RUN pip install -r requirements.txt
RUN cd $HOME/
RUN wget https://chromedriver.googlecode.com/files/chromedriver_linux64_20.0.1133.0.zip
RUN unzip chromedriver_linux64_20.0.1133.0.zip
RUN mkdir -p $HOME/bin
RUN mv chromedriver /bin
ENV PATH "$PATH:$HOME/bin"
ADD . /app
I can't get the web application to start Splinter with the Chrome webdriver, as it does not find it in the PATH.
WebDriverException: Message: 'chromedriver' executable needs to be
available in the path. Please look at
http://docs.seleniumhq.org/download/#thirdPartyDrivers
and read up at
http://code.google.com/p/selenium/wiki/ChromeDriver
And if I run docker exec -it <container id> chromedriver, as expected, it doesn't work.
Also, the environment variables printed out in Python are:
➜ ~ docker exec -it f4d9541c4ba6 python
Python 2.7.3 (default, Mar 13 2014, 11:03:55)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> print os.environ
{'GAE_MODULE_NAME': 'parsers', 'API_HOST': '10.0.2.2', 'GAE_SERVER_PORT': '8082', 'MODULE_YAML_PATH': 'parsers.yaml', 'HOSTNAME': 'f4d9541c4ba6', 'SERVER_SOFTWARE': 'Development/2.0', 'GAE_MODULE_INSTANCE': '0', 'DEBIAN_FRONTEND': 'noninteractive', 'GAE_MINOR_VERSION': '580029170989395749', 'API_PORT': '59768', 'GAE_PARTITION': 'dev', 'GAE_LONG_APP_ID': 'utix-app', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'GAE_MODULE_VERSION': 'parsers-0-0-1', 'HOME': '/root'}
What would be the correct way of making chromedriver available in the PATH, or is there any workaround?
Thanks a lot!
You need to check the ENTRYPOINT and CMD associated with that image (do a docker inspect on the container you launched)
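For example, both values can be read with a Go template (the container ID below is the one from the question):

docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' f4d9541c4ba6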
If the image is set to open a new bash session, the profile or .bashrc associated with the account running that session might redefine $PATH, overriding the Dockerfile ENV PATH "$PATH:$HOME/bin" directive.
If that is the case, making sure the profile or .bashrc defines the right PATH is easier (with a COPY of a custom .bashrc, for instance) than modifying the ENV.
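A minimal sketch of that idea (the file contents are an assumption; point the PATH at wherever chromedriver actually ends up):

# in the Dockerfile: ship a .bashrc for the root account
COPY .bashrc /root/.bashrc
# where .bashrc ends with a line like: export PATH="$PATH:$HOME/bin"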