Is it possible to pass an environment variable into app.yaml at deploy time? Here is my app.yaml:
runtime: nodejs8
env_variables:
  NODE_ENV: production
  PORT: 8080
  API_KEY: ${API_KEY}
${API_KEY} is a placeholder.
When I run the command API_KEY=xdfj212c gcloud app deploy app.yaml, I want API_KEY=xdfj212c to be passed into app.yaml so the placeholder is replaced with xdfj212c.
Expected result:
runtime: nodejs8
env_variables:
  NODE_ENV: production
  PORT: 8080
  API_KEY: xdfj212c
Or, after I run
export API_KEY=xdfj212c
gcloud app deploy
I want the same behavior.
Does this make sense for the Google App Engine deployment workflow?
In app.yaml you can include another YAML config:
includes:
- extra_env_vars.yaml
which you can create on the fly while inserting the environment variables:
# Unix-like OS
export DB_PASSWORD=your_password
export DB_HOST=your_host
echo -e "env_variables:\n DB_PASSWORD: $DB_PASSWORD\n DB_HOST: $DB_HOST" > extra_env_vars.yaml
# Windows
set DB_PASSWORD=your_password
set DB_HOST=your_host
(echo env_variables: & echo. DB_PASSWORD: %DB_PASSWORD% & echo. DB_HOST: %DB_HOST%) > extra_env_vars.yaml
The resulting extra_env_vars.yaml looks like this:
env_variables:
 DB_PASSWORD: your_password
 DB_HOST: your_host
Finally, make sure extra_env_vars.yaml is ignored by your version control system, so the secrets are never committed.
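Putting it together, a minimal deploy wrapper might look like this (a sketch based on the snippets above; the variable names and file layout are assumptions):

#!/usr/bin/env bash
set -euo pipefail
# Generate the extra env vars file from the current shell environment,
# then deploy; app.yaml pulls it in via its includes section.
echo -e "env_variables:\n DB_PASSWORD: $DB_PASSWORD\n DB_HOST: $DB_HOST" > extra_env_vars.yaml
gcloud app deploy app.yaml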
You could always use sed:
$ sed -i 's/${API_KEY}/xdfj212c/g' app.yaml && gcloud app deploy
The 'bad' thing is that this writes the key back into the file, but you can always append another sed command to swap the key back out for the placeholder, or use your VCS to reset the change to the file.
Another option is saving your app.yaml file as something like app_template.yaml and do this for your deployments:
$ sed 's/${API_KEY}/xdfj212c/g' app_template.yaml | tee app.yaml; gcloud app deploy
This will do the replacement in a new file, app.yaml, and then do the deployment.
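If the gettext package is available, envsubst can do the same templating without a hand-written sed expression (a sketch; app_template.yaml is the template file from above):

# Substitute only ${API_KEY} from the environment into the template,
# write the result to app.yaml, then deploy.
export API_KEY=xdfj212c
envsubst '${API_KEY}' < app_template.yaml > app.yaml && gcloud app deploy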
I'm new to React development. I created a .env file (in the project root) with some URLs for my application. After publishing the application to Azure, it does not pick up the URL values. I also stored a copy of the .env file inside my public folder, but it is still not getting the values.
.env file (in the root):
REACT_APP_SERVICE_BASE_URL = https://localhost:44385/
REACT_APP_CONFIG_BASE_URL = https://localhost:44354/
JS code:
require('dotenv').config()
let SERVICE_BASE_URL = process.env.PUBLIC_URL.REACT_APP_SERVICE_BASE_URL;
Does anyone have an idea how to fix my issue? On localhost it works fine; after publishing, the changed URLs are not picked up.
My customers have different URLs, so they need to change these variables themselves. I thought that if I added a .env file inside the public folder, they could change their URL there and use it.
I also tried the following, but it does not read the .env in the public folder either; it still takes the root folder's .env:
require('dotenv').config(process.env.PUBLIC_URL+ '/.env')
As mentioned, create-react-app creates a static app, so nothing can be read from environment variables dynamically after the build. Instead, values from your .env file are copied into the static website during the build; any change afterwards won't affect your app.
If you're using Azure App Service: Rather than building the app locally, then publishing the pre-built app bundle, you can instead publish the source of the app and have Azure App Service build the app. This way the customer-specific environment variables (App Settings) are present during build and will be set appropriately in your app.
There are two approaches:
Use a custom build script that you publish with your source code to Azure App Service. The documentation on this isn't great, but it works if you prefer to deploy from git or from a zip file. Kudu is the engine behind both of these deployment scenarios; see the wiki for details and this example deploy script.
(recommended) Deploy your app using containers, and use an entrypoint script to replace the environment variable placeholders with the customer-specific App Service environment variable values.
Example of #2 (recommended):
Some code examples are below. You can also reference this project as a working example of this approach.
React App code to get the environment variable:
export const getServiceBaseUrl = () => {
  if (process.env.NODE_ENV !== 'production') {
    if (!process.env.REACT_APP_SERVICE_BASE_URL) throw new Error('Must set env variable $REACT_APP_SERVICE_BASE_URL');
    return process.env.REACT_APP_SERVICE_BASE_URL;
  }
  // In production builds, this placeholder is rewritten by the container's
  // entrypoint script with the real value.
  return '__REACT_APP_SERVICE_BASE_URL__';
};
Entrypoint script run by the container:
#!/usr/bin/env bash
# Get environment variables to show up in SSH session
eval $(printenv | sed -n "s/^\([^=]\+\)=\(.*\)$/export \1=\2/p" | sed 's/"/\\\"/g' | sed '/=/s//="/' | sed 's/$/"/' >> /etc/profile)
pushd /home/site/wwwroot/static/js > /dev/null
# Find the fingerprinted main bundle produced by the React build
# (compgen -G performs the filename glob match).
pattern="main.*.js"
files=( $(compgen -G "$pattern") )
mainFile=${files[0]}
# Replace the build-time placeholders with this App Service's env var values.
sed -i 's|__REACT_APP_SERVICE_BASE_URL__|'"$REACT_APP_SERVICE_BASE_URL"'|g' "$mainFile"
sed -i 's|__REACT_APP_CONFIG_BASE_URL__|'"$REACT_APP_CONFIG_BASE_URL"'|g' "$mainFile"
popd > /dev/null
Dockerfile:
FROM nginx
RUN mkdir -p /home/LogFiles /opt/startup /home/site/wwwroot \
&& echo "root:Docker!" | chpasswd \
&& echo "cd /home" >> /etc/bash.bashrc \
&& apt-get update \
&& apt-get install --yes --no-install-recommends \
openssh-server \
openrc \
yarn \
net-tools \
dnsutils
# init_container.sh is in react app's deploy/startup folder
COPY deploy/startup /opt/startup
COPY build /home/site/wwwroot
RUN chmod -R +x /opt/startup
ENV PORT 8080
ENV SSH_PORT 2222
EXPOSE 2222 8080
ENV WEBSITE_ROLE_INSTANCE_ID localRoleInstance
ENV WEBSITE_INSTANCE_ID localInstance
ENV PATH ${PATH}:/home/site/wwwroot
WORKDIR /home/site/wwwroot
ENTRYPOINT ["/opt/startup/init_container.sh"]
let SERVICE_BASE_URL = process.env.REACT_APP_SERVICE_BASE_URL
As mentioned at https://create-react-app.dev/docs/adding-custom-environment-variables/, the .env file is for development purposes.
I suppose you use create-react-app to build your application; in that case the environment variables are injected into your application at build time.
When you develop locally, .env variables are automatically injected into your code.
For a deployment on Azure, you should define your environment variables in that environment and build your application there.
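To illustrate the build-time injection, a REACT_APP_* variable can be supplied directly to the build command (the URL is a placeholder):

# create-react-app inlines REACT_APP_* variables into the bundle at build time.
REACT_APP_SERVICE_BASE_URL=https://api.example.com npm run build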
Is there a way to automatically run a list of Linux commands in the Docker container after the deployment finishes, like the lifecycle hooks available in Kubernetes YAML files?
I do not want to have to SSH into the instance and run my commands manually.
I need to install ssh-client, and sometimes vim and other tools.
For those who are looking for a solution to this problem: with App Engine and runtime: python or another default runtime in app.yaml, there isn't much room for customization.
To be able to create your own build, you have to use runtime: custom and add a Dockerfile in the same (root) directory.
This is what it looks like:
app.yaml:
Only the first line changes.
runtime: custom
# the PROJECT-DIRECTORY is the one with settings.py and wsgi.py
entrypoint: gunicorn -b :$PORT mysite.wsgi # specific to a Gunicorn HTTP server deployment
env: flex # for Google Cloud Flexible App Engine

# any environment variables you want to pass to your application.
# accessible through os.environ['VARIABLE_NAME']
env_variables:
  # the secret key used for the Django app (from PROJECT-DIRECTORY/settings.py)
  SECRET_KEY: 'lkfjop8Z8rXWbrtdVCwZ2fMWTDTCuETbvhaw3lhwqiepwsfghfhlrgldf'
  DEBUG: 'False' # always False for deployment
  DB_HOST: '/cloudsql/app-example:us-central1:example-postgres'
  DB_PORT: '5432' # PostgreSQL port
  DB_NAME: 'example-db'
  DB_USER: 'mysusername' # either 'postgres' (default) or one you created on the PostgreSQL instance page
  DB_PASSWORD: 'sgvdsgbgjhrhytriuuyokkuuywrtwerwednHUQ'
  STATIC_URL: 'https://storage.googleapis.com/example/static/' # this is the url that you sync static files to

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2

handlers:
- url: /static
  static_dir: static

beta_settings:
  cloud_sql_instances: app-example:us-central1:example-postgres

runtime_config:
  python_version: 3 # enter your Python version BASE ONLY here. Enter 2 for 2.7.9 or 3 for 3.6.4
Dockerfile:
FROM gcr.io/google-appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN virtualenv /env -p python3
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Install custom linux packages (python-qt4 for open-cv)
RUN apt-get update -y && apt-get install vim python-qt4 ssh-client git -y
# Add the application source code and install all dependencies into the virtualenv
ADD . /app
RUN pip install -r /app/requirements.txt
# add my ssh key for github
RUN mv /app/.ssh / && \
chmod 600 /.ssh/*
# Run a WSGI server to serve the application.
EXPOSE 8080
# gunicorn must be declared as a dependency in requirements.txt.
WORKDIR /app
# Run the migrations first, then exec gunicorn; exec replaces the shell,
# so it must come last or anything after it would never run.
CMD python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    exec gunicorn --bind :$PORT --workers 1 --threads 8 Blacks.wsgi:application --timeout 0 --preload
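With runtime: custom the deployment command itself is unchanged; from the directory containing app.yaml and the Dockerfile:

gcloud app deploy app.yaml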
App Engine is a serverless solution. The product description says:
Fully managed
A fully managed environment lets you focus on code while
App Engine manages infrastructure concerns.
This means that if you choose App Engine, you don't have to care about the server. By design, it's for those who do not want any access to the server and prefer to focus on code, leaving all server maintenance to GCP. I think the main feature is automatic scaling of the app.
We do not know what you intend to do; you can review the app.yaml reference to find all the available features. The configuration differs depending on the language environment you want to use.
If you want full access to the environment, you should use a Kubernetes solution or even Compute Engine.
I hope this helps somehow!
Another simple workaround would be to create a separate URL handler that runs your shell command, for example /migrate, and to trigger that URL with curl after deployment.
Please note that in such a case anybody who goes looking for secret URLs on your backend may find it and trigger it as many times as they want. So if you need to ensure only trusted people can trigger it, you should either:
come up with a more secret URL than just /migrate, or
check permissions inside this view (but in that case it will be harder to call it via curl, because you'll also need to pass some auth data).
An example basic view (using Python + Django REST Framework):
from io import StringIO

from django.core.management import call_command
from rest_framework.renderers import StaticHTMLRenderer
from rest_framework.response import Response
from rest_framework.views import APIView


class MigrateView(APIView):
    permission_classes = ()
    renderer_classes = [StaticHTMLRenderer]

    def post(self, request, *args, **kwargs):
        out = StringIO()
        # --no-color because ANSI symbols (used for colors)
        # render incorrectly in browser/curl
        call_command('migrate', '--no-color', stdout=out)
        return Response(out.getvalue())
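After deployment, the view can then be triggered with curl (the domain is a placeholder for your app's URL):

curl -X POST https://your-project.appspot.com/migrate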
After deploying Metabase on Google Cloud, the GAE app URL shows an error page.
I followed all the instructions in this link https://www.cloudbooklet.com/install-metabase-on-google-cloud-with-docker-app-engine/ to deploy Metabase on GAE.
I have tried with both MySQL and Postgres databases, but the result is always an error page.
Here is my app.yaml:
env: flex
manual_scaling:
  instances: 1
env_variables:
  MB_JETTY_PORT: 8080
  MB_DB_TYPE: postgres
  MB_DB_DBNAME: metabase
  MB_DB_PORT: 5432
  MB_DB_USER: root
  MB_DB_PASS: password
  MB_DB_HOST: 127.0.0.1
beta_settings:
  cloud_sql_instances: <sql_instance>=tcp:5432
Here is my Dockerfile:
FROM gcr.io/google-appengine/openjdk
EXPOSE 8080
ENV PORT 8080
ENV MB_PORT 8080
ENV MB_JETTY_PORT 8080
ENV MB_DB_PORT 5432
ENV METABASE_SQL_INSTANCE <sql_instance>=tcp:5432
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 ./cloud_sql_proxy
ADD http://downloads.metabase.com/v0.33.2/metabase.jar /metabase.jar
RUN chmod +x ./cloud_sql_proxy
CMD ./cloud_sql_proxy -instances=$METABASE_SQL_INSTANCE=tcp:$MB_DB_PORT & java -jar ./metabase.jar
The following is the error I get in the console log:
INFO metabase.driver :: Registered abstract driver :sql ?
Also, the error message at the App Engine URL says the following:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
I have tried all the options I could find; please help me with a working solution.
First, start by following the instructions on the Connecting from App Engine page. Make sure that the SQL Admin API is enabled, and that the service account being used has the Cloud SQL Client IAM role.
Second, you don't need to run the proxy in the Docker container. When you specify the instance in app.yaml, it is reachable at 172.17.0.1:<PORT>. (Although if you are using a container, I would highly suggest you try Cloud Run instead.)
Finally, according to the Metabase setup instructions, you need to provide environment variables to the container to specify which database it should use. These env vars all follow the format MB_DB_*.
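Under these assumptions, app.yaml keeps only the Cloud SQL declaration (which is what makes the proxy reachable on the Docker bridge address), while the MB_* settings move into the image; a minimal sketch:

env: flex
manual_scaling:
  instances: 1
beta_settings:
  # keeps the built-in Cloud SQL proxy listening on 172.17.0.1:5432
  cloud_sql_instances: <sql_instance>=tcp:5432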
Here is what a Dockerfile without the proxy might look like:
FROM gcr.io/google-appengine/openjdk
ENV MB_JETTY_PORT 8080
ENV MB_DB_TYPE postgres
ENV MB_DB_HOST 172.17.0.1
ENV MB_DB_PORT 5432
ENV MB_DB_USER <your-username>
ENV MB_DB_PASS <your-password>
ENV MB_DB_DBNAME <your-database>
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ENTRYPOINT java -jar ./metabase.jar
For bonus points, you might consider using the distroless container (gcr.io/distroless/java:11) as a base instead (especially if you switch to Cloud Run).
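A minimal sketch of that distroless variant (assuming the jar is fetched at build time as above; distroless images have no shell, so the exec form of ENTRYPOINT is required):

FROM gcr.io/distroless/java:11
ADD http://downloads.metabase.com/v0.33.2/metabase.jar /metabase.jar
# No shell in distroless images, so use the exec (JSON) form.
ENTRYPOINT ["java", "-jar", "/metabase.jar"]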
I used to configure the application name in app.yaml. But I just re-read the latest docs, and they say:
The recommended approach is to remove the application element from your app.yaml file and instead, use a command-line flag to specify your application ID:
To use the gcloud app deploy command, you must specify the --project flag:
gcloud app deploy --project [YOUR_PROJECT_ID]
To use the appcfg.py update command, you specify the -A flag:
appcfg.py update -A [YOUR_PROJECT_ID]
But what about dev_appserver.py? How do I configure it with the project name?
dev_appserver.py also supports the -A flag to set the application id.
From the output for dev_appserver.py -h:
-A APP_ID, --application APP_ID
Set the application, overriding the application value
from the app.yaml file. (default: None)
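For example, to run the local development server against a specific project ID (my-project-id is a placeholder):

dev_appserver.py -A my-project-id app.yaml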
I'm trying to deploy a Golang app to App Engine. I'm able to do it via the gcloud CLI on my Mac, and it works fine (running gcloud app deploy app.yaml). However, I'm getting the following error in Bitbucket Pipelines:
+ gcloud --quiet --verbosity=error app deploy app.yaml --promote
You are about to deploy the following services:
- some-project/default/20171128t070345 (from [/go/src/bitbucket.org/acme/some-app/app.yaml])
Deploying to URL: [https://project-url.appspot.com]
Beginning deployment of service [default]...
ERROR: (gcloud.app.deploy) Staging command [/tmp/google-cloud-sdk/platform/google_appengine/goroot/bin/go-app-stager /go/src/bitbucket.org/acme/some-app/app.yaml /tmp/tmpLbUCA5] failed with return code [1].
------------------------------------ STDOUT ------------------------------------
------------------------------------ STDERR ------------------------------------
2017/11/28 07:03:45 failed analyzing /go/src/bitbucket.org/acme/some-app: cannot find package "github.com/gorilla/context" in any of:
($GOROOT not set)
/go/src/github.com/gorilla/context (from $GOPATH)
GOPATH: /go
--------------------------------------------------------------------------------
Here's my bitbucket-pipelines.yml content:
image: golang:onbuild

pipelines:
  branches:
    develop:
      - step:
          script: # Modify the commands below to build your repository.
            # Downloading the Google Cloud SDK
            - curl -o /tmp/google-cloud-sdk.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-155.0.0-linux-x86_64.tar.gz
            - tar -xvf /tmp/google-cloud-sdk.tar.gz -C /tmp/
            - /tmp/google-cloud-sdk/install.sh -q
            - source /tmp/google-cloud-sdk/path.bash.inc
            - PACKAGE_PATH="${GOPATH}/src/bitbucket.org/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}"
            - mkdir -pv "${PACKAGE_PATH}"
            - tar -cO --exclude-vcs --exclude=bitbucket-pipelines.yml . | tar -xv -C "${PACKAGE_PATH}"
            - cd "${PACKAGE_PATH}"
            - go get -v
            - go get -u github.com/golang/dep/cmd/dep
            - go build -v
            - go install
            - go test -v
            - echo $GOOGLE_CLIENT_SECRET | base64 --decode --ignore-garbage > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud components install app-engine-go
            #- GOROOT="/tmp/go"
            # Linking to the Google Cloud project
            - gcloud config set project $CLOUDSDK_CORE_PROJECT
            # Deploying the application
            - gcloud --quiet --verbosity=error app deploy app.yaml --promote
            - echo $GCLOUD_API_KEYFILE | base64 --decode --ignore-garbage > ./gcloud-api-key.json
            #- gcloud auth activate-service-account --key-file gcloud-api-key.json
And, though it shouldn't be an issue since deploying from my machine works fine, here is my app.yaml file as well:
runtime: go
api_version: go1

handlers:
- url: /.*
  script: _go_app

nobuild_files:
- vendor

skip_files:
- |
  ^(.*/)?(
  (#.*#)|
  (.*\.mapping)|
  (.*\.po)|
  (.*\.pot)|
  (.*\.py[co])|
  (.*\.sw?)|
  (.*\.yaml)|
  (.*_test\.go)|
  (.*~)|
  (LICENSE)|
  (Makefile.*)|
  (\..*)|
  (vendor/.*)|
  )$
I'm fairly certain the issue is with my Bitbucket YAML file or the Docker image I'm starting from, but I'm stuck. Any thoughts?
Is github.com/gorilla/context only used within your test files?
go get will not fetch test dependencies by default.
You can explicitly add go get github.com/gorilla/context to your pipeline script.
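For instance, the pipeline's script step could fetch it before building, or use the -t flag so test dependencies are pulled in as well:

go get -v github.com/gorilla/context
# or, equivalently, fetch all dependencies including test-only ones:
go get -v -t ./...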