Multiple services with different dockerfiles on GAE Flexible - google-app-engine

I'm using Google App Engine Flexible with the Python environment. Right now I have two services, default and worker, that share the same codebase and are configured by app.yaml and worker.yaml. Now I need to install a native C++ library, so I had to switch to a custom runtime and add a Dockerfile.
Here is the Dockerfile generated by the gcloud beta app gen-config --custom command:
FROM gcr.io/google-appengine/python
LABEL python_version=python3.6
RUN virtualenv --no-download /env -p python3.6
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
CMD exec gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi
Previously my app.yaml and worker.yaml each had its own entrypoint: config that specified the command to run to start the service.
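For example, the entrypoint lines looked roughly like this (the worker command here is just illustrative, not my actual one):
# app.yaml
runtime: python
env: flex
entrypoint: gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi

# worker.yaml (hypothetical worker command)
runtime: python
env: flex
entrypoint: python worker.py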
So, my question is how can I use two different commands to start the services?
EDIT 1
So far I have been able to solve this by rewriting the CMD line in the Dockerfile before each deploy of each service. However, I'm not quite satisfied with this solution.
The gcloud app deploy command has an --image-url flag that allows setting the image URL from GCR. I haven't researched that yet, but it seems that I could just upload images to GCR and use the URLs, since they don't change that often.

Yes, as you mentioned, I think using the --image-url flag is a good option here.
Specify a custom runtime.
Build the image locally, tag it, and push it to Google Container Registry (GCR).
Then deploy your service, specifying a custom service file and specifying the remote image on GCR using the --image-url option.
Here's an example that accomplishes different entrypoints in 2 services that share the same code:
...this assumes that the "flex" and not the "standard" App Engine offering is being used.
Let's say you have a project called my-proj,
with a default service that is not important,
and a second service called queue-processor which uses much of the same code from the same directory.
Create a separate Dockerfile for it called QueueProcessorDockerfile,
and a separate app.yaml called queue-processor-app.yaml to tell Google App Engine what you want to happen.
QueueProcessorDockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./
RUN npm install -g yarn
RUN yarn
# Bundle app source
COPY . .
CMD [ "yarn", "process-queue" ]
*Of course, I have a "process-queue" script in my package.json (a sketch is below).
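For reference, a minimal sketch of what that script entry might look like in package.json (the worker entry-point path is purely an assumption):
{
  "scripts": {
    "start": "node src/server.js",
    "process-queue": "node src/queueProcessor.js"
  }
}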
queue-processor-app.yaml
runtime: custom
env: flex
... other stuff...
...
Build and tag the Docker image
Check out Google's guide here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
docker build -t eu.gcr.io/my-proj/queue-processor -f QueueProcessorDockerfile .
Push it to GCR
docker push eu.gcr.io/my-proj/queue-processor
Deploy the service, specifying which YAML config file Google should use, as well as the image URL you have pushed:
gcloud app deploy queue-processor-app.yaml --image-url eu.gcr.io/my-proj/queue-processor

Since the Dockerfile name cannot be changed, the only way to avoid modifying the Dockerfile would be to store each service in its own, separate directory. This gives clean separation: each service has its own Dockerfile and/or startup configuration.
But this raises a question: how to deal with the code shared by multiple services? Using symlinks (which works great for sharing code across standard env services) doesn't work for the flexible env services, see Sharing code between flexible environment modules in a GAE project.
I see a few possible approaches, none really ideal, but maybe more appealing than what you currently have:
hard-link each and every shared source code file (since hardlinking directories is not possible). A bit tedious and error-prone, but you only have to do that once per file
package and publish your shared code as an external library, added to the requirements.txt file of each service using it
split the shared code into a separate repository and have a copy of that repository in each service using it (maybe as a git submodule if using git?). You just need to ensure at service deployment time that the shared repository is pulled at the proper version, which can be done quite reliably through automation. It is a bit more complicated if you have uncommitted changes in this repo: you'd have to patch the same changes in all services.
have multiple copies of the Dockerfile with different names which you simply copy over instead of always editing the same file (see the sketch after this list). Symlinking instead of copying might work as well: the symlink doesn't need to be followed outside of the service directory, so if it's just replicated as a symlink it'll work.
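A minimal sketch of that copy-over approach (the Dockerfile.default and Dockerfile.worker names are assumptions):
# copy the service-specific Dockerfile into place, then deploy that service (hypothetical file names)
cp Dockerfile.default Dockerfile
gcloud app deploy app.yaml

cp Dockerfile.worker Dockerfile
gcloud app deploy worker.yaml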

So I had a very similar issue with my Java applications. We were looking to migrate from Heroku to GAE and were attempting to simulate the Heroku Procfile with GAE services. Effectively what we did was create separate directories in our application, src/main/appengine/web and src/main/appengine/worker, where each directory contained the app.yaml and Dockerfile specific to the process. Then, using the mvn appengine:deploy capabilities, we specified -Dapp.stage.dockerDirectory and -Dapp.stage.appEngineDirectory respectively for each service we wanted to deploy. Then, using just some parameters, we were able to basically script out parallel deployments of each service from the same code base. Not sure if this works in your situation, but it was very useful for us. Here are the two example commands in their entirety:
Web Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/web -Dapp.stage.appEngineDirectory=src/main/appengine/web -Dapp.stage.stagingDirectory=target/appengine-web -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
Worker Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/worker -Dapp.stage.appEngineDirectory=src/main/appengine/worker -Dapp.stage.stagingDirectory=target/appengine-worker -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}

Related

How can I make cloud foundry read from specific .env file in react app?

I have a React app and it has .env.development, .env.test, and .env.production files. They each have only the REACT_APP_PROFILE variable set, with values dev, test and prod respectively.
I build my app using the npm run build command, push it to Cloud Foundry using the cf push command, and print out the process.env.REACT_APP_PROFILE and process.env.NODE_ENV variables on the screen.
I have only one space in VMware Apps Manager, called 'dev'. I want this space to read the .env.test file, but it reads from the .env.production file (probably because I ran the npm run build command).
My questions are:
1- How can I make the space 'dev' in apps manager read from my .env.development or .env.test file?
2- Should I create a different space in apps manager for each environment? If yes, then can I define different manifest.yml files such as manifest-test.yml, manifest-prod.yml? I only have one manifest.yml file right now and its content is like the following:
---
applications:
- name: myreactapp
  instances: 1
  memory: 64M
  path: build/
  timeout: 120
  routes:
  - route: myreactapp.apps.xxx.xxx
  buildpack: staticfile_buildpack
How can I make the space 'dev' in apps manager read from my .env.development or .env.test file?
I don't think that this is specific to CF. It's just caused by the files you are pushing to CF.
(probably because I ran npm run build command) .
That would be my thought as well. You are using cf push to deploy the output of that command. If the output is built using production mode, you're only going to see production until you change the build process to produce something in debug mode.
2- Should I create a different space in apps manager for each environment? If yes, then can I define different manifest.yml files such as manifest-test.yml, manifest-prod.yml? I only have one manifest.yml file right now and its content is like the following:
It's entirely up to you and how you want to organize and control access to your applications. I think controlling access is typically what affects the design of spaces the most.
You can invite different developers to each space. If, for example, you have dev and prod in different spaces, then you can effectively limit access to prod. You can invite the whole team to dev, but perhaps only your SRE team to prod. If you put all of your apps in the same space, then you can't make that type of distinction.
At the same time, more spaces are more work to manage so if you don't need to make that type of access control distinction, it's probably easier to have fewer spaces.
A specific env file is used according to the script (docs):
npm start: .env.development.local, .env.local, .env.development, .env
npm run build: .env.production.local, .env.local, .env.production, .env
npm test: .env.test.local, .env.test, .env (note .env.local is missing)
That said, you can configure your script to pick any file you want when building the application.
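For example, a minimal sketch using the env-cmd package (an assumption here; any similar tool works) to force the test values into the build that gets pushed to the 'dev' space:
# build with the test env file explicitly, then push the result (assumes env-cmd is installed as a dev dependency)
npx env-cmd -f .env.test npm run build
cf push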

Consume Cloud Run environment variables inside Nextjs app

I've recently built a Nextjs app, which I am hosting on Google Cloud Run.
My app makes some requests to external APIs from the getStaticProps() method.
I would like to be able to point to a different API host depending on the environment (e.g prod or dev) using environment variables which would be set differently for each environment.
I know I can store these variables in environment specific files like .env.development and .env.production however I would like to be able to store these environment variables in the environment variables field in the Google Cloud console for the cloud run service and skip storing them in the files altogether.
I have tried adding the variables to Cloud Run, but it does not work. I have also tried prefixing the variables with NEXT_PUBLIC_..., with no luck.
Does anyone have any tips on how to accomplish this?
Ok... I think I have figured it out now.
I was using Cloud Build to build my container, and the container runs npm run build before it runs npm run start.
I assume that my Cloud Run variables aren't available at the point in time when Cloud Build is building the project.
So, I think my solution is probably to inject the variables at build time, using substitution variables.
EDIT: Confirmed. If I start Nextjs in dev mode, such that the page is rendered on the server for each request, then the Cloud Run environment variables are used.
To build the Nextjs app for production, I include the environment variables in the Dockerfile that is built by Cloud Build.
EDIT: as requested, an example of a Dockerfile setting the environment variable:
FROM node:16.13-alpine
RUN mkdir -p /usr/src
WORKDIR /usr/src
COPY . /usr/src
ENV NEXT_PUBLIC_MY_API_HOST='https://some.host.com'
RUN npm install --only=production
RUN npm run build
EXPOSE 3000
CMD npm run start
Then you can just reference the environment variable from within your code using process.env.NEXT_PUBLIC_MY_API_HOST.
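If you would rather not hard-code the value, a minimal sketch of the substitution-variable idea mentioned above is to pass the value into docker build as a build argument from cloudbuild.yaml (the _API_HOST substitution and image name are assumptions, not from the original answer):
# In the Dockerfile, accept the value as a build argument instead of hard-coding it:
#   ARG NEXT_PUBLIC_MY_API_HOST
#   ENV NEXT_PUBLIC_MY_API_HOST=$NEXT_PUBLIC_MY_API_HOST

# cloudbuild.yaml (sketch): pass the substitution through to the image build
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg', 'NEXT_PUBLIC_MY_API_HOST=${_API_HOST}', '-t', 'gcr.io/$PROJECT_ID/my-nextjs-app', '.']
images:
- 'gcr.io/$PROJECT_ID/my-nextjs-app'
substitutions:
  _API_HOST: 'https://some.host.com'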

How to use environment variables in React app hosted in Azure

I'm pretty new to React, and exploring Azure in general as well.
I come from an ERP background, but that background did include using tools like VSTS and CI/CD.
I've heavily relied upon using the 'libraries' in VSTS to specify variables per environment, and then specifying these upon deployment.
But! I've been reading around on the internet and playing with settings, and to my understanding I can only 'embed' parameters in the actual code that is generated by NPM. This would basically mean that I'd need to create a separate build per environment, which I'm not used to. I've always been taught (and tell others) that what you ship to production should be exactly the same as what has been on pre-prod, or staging, or ... . Is there really no other way to use environment variables? I was thinking of using the Application Settings in Azure App Service, but I can't get them to even pop up in the console.
As for the libraries in VSTS, I haven't found out how to use these in my deployment either, as there's just one step.
And reading the docs at https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#adding-custom-environment-variables doesn't make me feel comfortable putting .env files in source control either. I even tried the approach of putting
{process.env.NODE_ENV}
in my code, but in Azure it just shows up as 'Development', even though I run npm run build (which should be production)...
So, I'm a bit lost here! How can I use environment variables specified in Azure App Service, in my React app?
Thanks!
The Good Options
I had this problem as well. You can customize which env variables are used by using different build scripts for your environments.
I found this CRA documentation:
https://create-react-app.dev/docs/deployment/#customizing-environment-variables-for-arbitrary-build-environments
You can also set your variables in your YAML. https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-variables-in-pipeline
But what if I need a single build?
I haven't solved this yet for the case where you use a single build with release stages for different environments (dev, staging, prod). Since everything is already built, React has whatever env variables you provided at build time. Alternatives I've considered:
Separating the React build from the .NET build, so that the React build can be done for each deploy
Defining all env variables with a suffix appended, e.g. REACT_APP_SOME_KEY_..., then picking the specific one at runtime based on the subdomain, e.g. https://dev.yoursite.com vs https://yoursite.com, but this option seems non-canonical (see the sketch after this list)
It might simply be a limitation of React that you need to build for every environment; accept that you need separate builds.
Add the variable directly to the build pipeline Variables. This will add it to the Azure environment variables and the app can use it.
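A rough sketch of that subdomain idea (host names and variable names are assumptions), picking the value at runtime instead of at build time:
// sketch: both suffixed variables are baked in at build time; the subdomain decides which one is used
const API_HOST = window.location.hostname.startsWith('dev.')
  ? process.env.REACT_APP_SOME_KEY_DEV
  : process.env.REACT_APP_SOME_KEY_PROD;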
When you do the deployment to Azure using VSTS, you can give your environment variables in the build pipeline, which will automatically include them in the ReactJS project.
It's now the end of 2019 and I am still facing the issue with env variables in Node.js and Azure DevOps.
I didn't find a solution, but I use a workaround: pseudo "env vars".
I created an "env.json" file, with the same structure as a ".env" file, in the project's root, added it to ".gitignore", imported it explicitly into the files where I need env vars, and used it as a regular object instead of process.env.***.
Example:
We have a ".env" that we need to replace:
REACT_APP_SOMW_KEY=KEY
The next steps for the project itself are:
Create "env.json":
{"REACT_APP_SOMW_KEY":"KEY"}
Add it to ".gitignore".
If you are using TypeScript, add the following setting to tsconfig.json:
"resolveJsonModule": true,
In files where process.env.REACT_APP_SOMW_KEY is used, change process.env.REACT_APP_SOMW_KEY to config.REACT_APP_SOMW_KEY and add const config = require("../pathTo/env.json") as an import at the beginning of the file.
In the case of TypeScript, you can also create an interface just to have autocomplete:
export interface IEnvConfig{
REACT_APP_SOMW_KEY?: string;
}
const config: IEnvConfig = require("../pathTo/env.json");
The result will be something like this:
const reactSomeKey = /*process.env.REACT_APP_SOMW_KEY*/ config.REACT_APP_SOMW_KEY;
Next steps for Azure DevOps:
Add your keys to Azure "key vault" or "variables".
In the CI pipeline, before the project build step, add a PowerShell task which will create the "env.json" file. This is the same thing we do locally when we create the ".env" file, since the repository is cloned without it.
I put the YAML task here (at the end you can see 2 debug commands, just to be sure that the file is created and exists in the project):
- powershell: |
    New-Item -Path $(System.DefaultWorkingDirectory) -Name "env.json" -Force -Value @'
    {
      "REACT_APP_SOMW_KEY": "$(REACT_APP_SOMW_KEY)"
    }
    '@
    Get-Content -Path $(System.DefaultWorkingDirectory)\env.json
    Get-ChildItem -Path $(System.DefaultWorkingDirectory)
  displayName: 'Create "env.json" file'
Outcome: you have almost the same flow with JSON object keys as you usually have with ".env". You can also have both ".env" and "env.json" in the project.
I used a YAML build and wrote the variable to the .env file. The package I was using to do the transforms in ReactJS was dotenv, version 8.2.0.
So here is my YAML build file, with tasks added to accomplish this
variables:
- group: myvariablegroup

trigger:
  batch: true
  branches:
    include:
    - develop
    - release/*

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: dev
  condition: eq(variables['build.sourceBranch'], 'refs/heads/develop')
  jobs:
  - job: DevelopmentDeployment
    steps:
    - task: CmdLine@2
      inputs:
        script: 'echo APP_WEB_API = $(myvariable-dev) > Web/.env'
      displayName: 'Setting environment variables'
    - script: |
        cd Web
        npm install
        npm run build
      displayName: 'npm install and build'

- stage: prod
  condition: eq(variables['build.sourceBranch'], 'refs/heads/master')
  jobs:
  - job: ProductionDeployment
    steps:
    - task: CmdLine@2
      inputs:
        script: 'echo APP_WEB_API = $(myvariable-prod) > Web/.env'
      displayName: 'Setting environment variables'
    - script: |
        cd Web
        npm install
        npm run build
      displayName: 'npm install and build'
This route is only applicable if you are using Azure DevOps.
Azure DevOps has a section in Pipelines called Library.
Create a new Variable Group and add your env variables.
Associate the newly created Variable Group with your build process.
Also remember to name your env variables starting with REACT_APP_.
All the proposed solutions are way too complex, because others have already solved this problem during the package and build process.
To deploy this to Azure, 2 things have to be done. First, remove the ignore rule that excludes the .env* files. NOTE: this ASSUMES you do not put secrets here!
Most of the config in the .env file is visible online anyway during the auth flow. So, why panic about this file in git? Especially in a private Git repository I don't see any problem with those .env files.
So, I have a .env.dev and a .env.prod...
These contain e.g.:
REACT_APP_AUTH_URL=https://auth.myid4.info
REACT_APP_ISSUER=https://auth.myid4.info
REACT_APP_IDENTITY_CLIENT_ID=myclientid
REACT_APP_REDIRECT_URL=https://myapp.info/signin-oidc
REACT_APP_AUDIENCE=
REACT_APP_SCOPE=openid profile email roles mysuperapi
REACT_APP_SILENT_REDIRECT_URL=https://myapp.info/silent-renew
REACT_APP_LOGOFF_REDIRECT_URL=https://myapp.info/logout
API_URL=/
the following must be done.
npm i --save-dev env-cmd
Now, modify your package.json like this. You may have other scripts as well, but essentially, just add the correct .env file for your environment:
env-cmd -f .env.prod
So in my case, in package.json:
"start": "env-cmd -f .env.dev rimraf ./build && react-scripts start",
"build": "env-cmd -f .env.prod react-scripts build"
Now I deploy my React JS app to Azure. FYI, I use the .NET Core SPA feature.
I had the same problem: my environment variables didn't load on Azure build and deploy, and after hours of googling and hitting my head against the wall it just occurred to me that maybe the blanks before and after the equals sign ("=") were not supposed to be there.
So I changed:
REACT_APP_API_URL = https://some_url
to:
REACT_APP_API_URL=https://some_url
And it worked alright!
Many of the proposed solutions here did not work (and should not work), but I solved it the following way. First, however, let me explain why the other solutions may not (should not) work (please correct me if I am wrong):
Adding pipeline variables (even though they are environment variables) should not work since a react app is run on the client side and there is no server side code that can inject environment variables to the react app.
Installing environment variable task on the classic pipeline should not work for the same reason.
Adding to Application Settings in azure app service should not work for the same reason.
Having a .env, .env.development or .env.production file in a git repo is not good practice, as it may compromise API keys and other sensitive information.
So here is my solution:
Step 1: Add all those .env files to the Azure DevOps library as secure files. You can download these secure files on the build machine using a DownloadSecureFile@1 pipeline task (YAML). This way we make sure the correct .env file is available on the build machine before the yarn build --mode development task runs in the pipeline.
Step 2:
Add the following tasks to your Azure YAML pipeline in the appropriate place. I have created a GitHub repo https://github.com/mail4hafij/react-yarn-azure-pipeline if you want to see a complete example.
# Download secure file from azure library
- task: DownloadSecureFile@1
  inputs:
    secureFile: '.env.development'

# Copy the .env file
- task: CopyFiles@2
  inputs:
    sourceFolder: '$(Agent.TempDirectory)'
    contents: '**/*.env.development'
    targetFolder: '$(YOUR_DEFINED_PROJECT_ROOT_FOLDER_VARIABLE)'
    cleanTargetFolder: false
Keep in mind that secure files can't be edited, but you can always re-upload them.
It's not exactly what you are looking for, but maybe this is an alternative solution for your problem (it substitutes the process.env.x references with real values during the build step):
https://github.com/babel/minify/tree/master/packages/babel-plugin-transform-inline-environment-variables
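A minimal sketch of enabling that plugin in a .babelrc (assuming the plugin has been installed as a dev dependency):
{
  "plugins": ["transform-inline-environment-variables"]
}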
As others have said, add the variable to your Azure pipeline. However, some corrections to what others have posted, possibly leveraging newer functionality that was not available when their responses were written:
if your variable in your .env file is named REACT_APP_MY_VARIABLE, then the variable you need to add to your Azure pipeline should also be named REACT_APP_MY_VARIABLE (not process.env.REACT_APP_MY_VARIABLE)
when setting up the Azure pipeline variable, you can leave the value empty and check the box for "Let users override this value when running this pipeline". This seems to be the trick to letting react still process the .env file content to retrieve your desired values.
As an update: it's a bit different from my original approach, but I've gone down the route of using DotEnv and thus using .env files, which I generate on the fly in VSTS using the library variables, and thus do NOT store them in source control.
To use DotEnv, I updated the webpack.config:
const Dotenv = require('dotenv-webpack');

module.exports = {
  ...
  plugins: [
    new Dotenv()
  ],
  ...
};
Then basically, I created a .env file containing my parameters
MD_API_URL=http://localhost:7623/api/
And to be able to consume them in my TSX files I just use process.env;
static getCustomer(id) {
  return fetch(process.env.MD_API_URL + 'customers/' + id, { mode: 'cors' })
    .then(response => {
      return response.json();
    }).catch(error => {
      return error;
    });
}

create-react-app + docker = QA and PROD Deploy

I'm using create-react-app for my projects, with Docker as my dev environment.
Now I would like to know the best practice for deploying my project to AWS (I'll deploy the Docker image).
Maybe my question is a dumb one, but I'm really stuck on it.
My Dockerfile has the command yarn start... For dev it is enough: I don't need to build anything, my bundle runs in memory. But for QA or PROD I would like to build using npm run build, which as far as I know creates a new folder with the files that should be used in the prod environment.
That said, my question is: what is the best practice for this kind of situation?
Thanks.
This is what I did:
Use npm run build to build all static files.
Use the official nginx image (_/nginx) to build a customized HTTP server which serves those static files (a minimal Dockerfile sketch follows this list).
Upload the customized image to Amazon EC2 Container Service (ECS).
Load the image in ECS task. Then use ELBv2 to start a load balance server to forward all outside requests to ECS.
(Optional) Enable HTTPS in ELBv2.
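Here is a minimal sketch of such a Dockerfile (paths are assumptions; it only copies the npm run build output into the default nginx web root):
# sketch: serve the static build output with nginx (assumes `npm run build` has already been run)
FROM nginx:alpine
COPY build/ /usr/share/nginx/html/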
One time things:
Figure out the mechanism of ECS. You need to create at least one host server for ECS. I used the Amazon ECS-Optimized AMI.
Create a Docker repository on ECS so you can upload your customized Docker image.
Create ECS task definition(s) for your service.
Create ECS cluster(s) and add task(s).
Configure ELBv2 so it can forward the traffic to your internal ECS dynamic port.
(Optional) Write script to automate everyday deployment.
I would get paid if someone wants me to do those things for her/him. Or you can figure it out by yourself following those clues.
However, if your website is a simple static site, I recommend using GitHub Pages: it's free and simple. My solution is for multiple static + dynamic applications which may involve other services (e.g. Redis, ElasticSearch) and require daily/hourly deployments.
You would have to run npm run build and then copy the resulting files into your container. You could use a separate Dockerfile.build to build the files, extract them and add them to your final container. Your final container should be able to serve the files. You can base it on nginx or another server. You can also use it as a data volume container in your existing server container.
Recent versions of Docker make this process easier by allowing you to combine the two Dockerfiles. You can have a build container and then the final container both be defined in the same file.
Here's a simple example for your use case:
# Builder stage: node:onbuild copies the app source and installs dependencies via its ONBUILD triggers
FROM node:onbuild AS builder
RUN npm run build

# Final stage: serve the static build output with nginx
FROM nginx:latest
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
You'd probably want to include your own nginx configuration file.
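For instance, a small sketch of what that could look like in the final stage (assuming a custom nginx.conf sits next to the Dockerfile):
# sketch: replace the default server config with your own (hypothetical nginx.conf in the build context)
COPY nginx.conf /etc/nginx/conf.d/default.conf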
More on multistage builds here:
https://docs.docker.com/engine/userguide/eng-image/multistage-build/

app.yaml to send params to Dockerfile

I am deploying a binary on Google Cloud flexible App Engine for two different services. So I have {app-service1.yaml, Dockerfile-service1} and {app-service2.yaml, Dockerfile-service2}, and use the "gcloud app deploy" command to deploy them.
Is it possible to send a param from app-service[1|2].yaml to a single Dockerfile, so that I can maintain only one Dockerfile?
I tried two things, but they didn't work with the "gcloud app deploy" command:
"entrypoint:" in app.yaml -- It does not override what is set in CMD in Dockerfile.
"env_variables:" in app.yaml -- Dockerfile's ENV or ARG do not see any variables defined in env_variables:.
There's currently no way (that I can think of) to pass parameters into the Docker build process while using gcloud app deploy. If the Dockerfiles you're using are similar, you may want to consider creating a base Dockerfile, building a base image, and pushing it to gcr.io. Then you can extend the base image with your other Dockerfile(s).
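A minimal sketch of that base-image approach (image, file and binary names are all assumptions):
# Dockerfile.base - shared setup, built and pushed once (hypothetical names):
#   docker build -t gcr.io/my-proj/my-base -f Dockerfile.base .
#   docker push gcr.io/my-proj/my-base

# Dockerfile-service1 - only the service-specific command differs
FROM gcr.io/my-proj/my-base
CMD exec /app/my-binary --mode=service1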
Hope this helps!
