dotenv preventing build in production - reactjs

I am using create-react-app, Travis CI and Netlify. I have a config file that looks like this:
require('dotenv').load();

module.exports = {
  API_BASE_URL: process.env.REACT_APP_DATABASE_URL || 'http://localhost:8080'
}
When I try to deploy to Netlify, I get this error in Travis:
Creating an optimized production build...
Failed to compile.
Failed to minify the code from this file:
./node_modules/dotenv/lib/main.js:23
If I remove the require('dotenv').load(); part, it loads, but then the app tries to go to localhost, which is obviously not what I want.
This GitHub issue: https://github.com/motdotla/dotenv/issues/261
brings up the same problem, but doesn't offer a solution. I'm stuck. Help!

Disclaimer: I work for Netlify.
TL;DR: Unix shells complicate things.
The preferred way to use environment variables in Netlify is to set them in our environment - be that in the Build Environment Variables configuration widget (second on the "Build & Deploy settings" page, under the repo + build command section), or via netlify.toml (https://www.netlify.com/docs/continuous-deployment/#deploy-contexts). The latter is a bit more flexible, as you can set different values for different contexts - e.g. staging uses a staging DATABASE_URL and production uses a production one.
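For reference, here is a hedged sketch of what context-specific variables in netlify.toml could look like (the URLs are placeholders, not values from the question):
[context.production.environment]
  REACT_APP_DATABASE_URL = "https://db.example.com"
[context.deploy-preview.environment]
  REACT_APP_DATABASE_URL = "https://staging-db.example.com"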
So - those variables are "available" in the build environment - if your build command were env, you'd see them, in addition to $PATH and $NODE_VERSION and some other stuff Netlify sets automatically. However, depending on how your build pipeline works, they may or may not be available inside of it. If your build command is node -p "process.env", that will show you what Node sees for environment variables - and it should show the same thing as env shows (which is what the shell run by the build script sees).
Unfortunately, many of the build pipelines that folks use DON'T automatically import/inherit variables from the parent shell. This thread shows such an example: https://github.com/theintern/intern/issues/136#issue-26148596 . So the best practice is not necessarily to use something like dotenv (though that has worked for folks who aren't trying to minify it :) ), but instead to use a build process that appropriately passes the environment variables we expose in the shell into the build environment. How you do that is kind of up to you and your tools.
A further PS: unless your build pipeline DOES something with the environment variable, it's not going to be much use in the code that gets published and served to the browser - which doesn't understand $REACT_APP_DATABASE_URL; that's just a string to the browser. I know you're not trying to do that, but I wanted to point it out for folks who might see this answer later - it's a common misunderstanding among newer developers of static sites.
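To make that concrete, a rough sketch of what "doing something with the environment variable" means for a CRA/webpack-style build (the URL is a placeholder):
// Source, before the build:
const apiBase = process.env.REACT_APP_DATABASE_URL || 'http://localhost:8080';
// After a production build, the bundle served to the browser contains roughly
// the inlined literal (assuming the variable was set in the build environment):
//   const apiBase = 'https://api.example.com' || 'http://localhost:8080';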

Related

Continuous Deployment - Modify backend url in the frontend before deployment

My current situation:
I have a Jenkins pipeline to dockerize my Node/Express backend and build + dockerize my React frontend after every commit to GitHub. This works so far. I am using Docker and Jenkins on Ubuntu 18.
Problem:
My frontend (of course) can't connect to the backend when on the live server (because the route to the backend is http://127.0.0.1:8080). My first idea was to use environment variables, but this is not working since React can't read env variables after the build (because it's pure HTML/CSS/JS). What are common solutions to this problem? I don't want to change the backend URL to the actual domain every time before I push to the repository and change it back to 127.0.0.1 to work on it again.
I researched some more, and environment variables CAN be replaced by their values at build time (which is what I want) when you don't use an npm package like dotenv, but rather define variables that start with REACT_APP_.
More Information
"The environment variables are embedded during the build time" - should have read that before.
You can use .env files to define different variables based on the environment.
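A hedged sketch of that approach (the file names follow CRA conventions; the URLs are placeholders): .env.development is read by npm start and .env.production by npm run build, and whichever value applies is baked into the bundle at build time.
# .env.development (used by npm start)
REACT_APP_API_URL=http://127.0.0.1:8080
# .env.production (used by npm run build)
REACT_APP_API_URL=https://api.example.com
In the React code, process.env.REACT_APP_API_URL then resolves to whichever value was embedded during the build.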

create-react-app build/serve environment variables

Relatively new to working with React. I have an application that is working fine in local Docker. I populate a bunch of REACT_APP_ environment variables by exporting them to the environment before starting the Docker container.
Now I'm trying to deploy this to a Kubernetes pod by running a yarn build and then serving up the build. I see that the environment variables are available on the pod itself by looking at printenv, but the application doesn't appear to be picking them up.
Is there something special about serving a production build of a React app to get it to see the exported environment variables that I'm missing?
I don't want to embed a .env file into the built Docker image for security reasons, so I'm hoping that a React build served via serve can still pick up exported REACT_APP_ environment variables that are set via Kubernetes secrets.
So apparently, whenever you build a React application with npm, static files are produced that don't know anything about any environment variables you may try to inject at runtime with Kubernetes.
The article below does a good job of explaining this, and why they chose to attach the environment variables to the JavaScript window object, since it has application scope.
Making a React application container environment-aware at Kubernetes deployment
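The gist of that approach, as a minimal sketch (the env.js file name and the window._env_ key are illustrative, not prescribed by the article): the container entrypoint writes a small script from the pod's real environment at startup, index.html loads it before the bundle, and the app reads from the window object instead of process.env.
// public/env.js - regenerated by the container entrypoint at startup (the value is a placeholder):
window._env_ = {
  REACT_APP_API_URL: "https://api.example.com"
};
// In the app, prefer the runtime value and fall back to the build-time one:
const apiUrl = (window._env_ && window._env_.REACT_APP_API_URL) || process.env.REACT_APP_API_URL;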

How to use environment variables in React app hosted in Azure

I'm pretty new to React, and exploring Azure in general as well.
I come from an ERP background, but that background did include using tools like VSTS and CI/CD.
I've heavily relied upon using the 'libraries' in VSTS to specify variables per environment, and then specifying these upon deployment.
But! I've been reading around on the internet and playing with settings, and to my understanding, I can only 'embed' parameters in the actual code that is generated by npm. This would basically mean that I'd need to create a separate build per environment, which I'm not used to. I've always been taught (and tell others) that what you ship to production should be exactly the same as what has been on pre-prod, or staging, or ... Is there really no other way to use environment variables? I was thinking of using the Application Settings in Azure App Service, but I can't get them to even pop up in the console.
As for the libraries in VSTS, I haven't found how to use these in my deployment either, as there's just one step.
And reading the docs at https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#adding-custom-environment-variables doesn't make me feel comfortable putting .env files in source control either. I even tried the approach of putting
{process.env.NODE_ENV}
in my code, but in Azure it just shows up as 'Development', even though I do npm run build (which should be production)...
So, I'm a bit lost here! How can I use environment variables specified in Azure App Service, in my React app?
Thanks!
The Good Options
I had this problem as well. You can customize which env variables are used by using different build scripts for your environments.
Found this CRA documentation
https://create-react-app.dev/docs/deployment/#customizing-environment-variables-for-arbitrary-build-environments
You can also set your variables in your YAML. https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-variables-in-pipeline
But what if I need a single build?
I haven't solved this yet for the case where you use a single build with release stages for different envs (dev, staging, prod). Since everything is already built, React has whatever env variables you provided at build time. Alternatives I've considered:
Separating the React build from the .NET build, so that you could run the React build for each deploy.
Defining all env variables with a prefix, e.g. REACT_APP_SOME_KEY_, and then picking the specific value at runtime based on the subdomain, e.g. https://dev.yoursite.com vs https://yoursite.com (see the sketch after this list), but this option seems non-canonical.
Accepting that this might simply be a limitation of React needing a build for every environment, i.e. you need separate builds.
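A rough sketch of the subdomain idea from the list above (the key names are hypothetical): every environment's value is baked into the single build, and the hostname decides which one is used at runtime.
// All variants are embedded at build time; pick one by hostname at runtime.
const byHost = {
  'dev.yoursite.com': process.env.REACT_APP_SOME_KEY_DEV,
  'yoursite.com': process.env.REACT_APP_SOME_KEY_PROD,
};
const someKey = byHost[window.location.hostname];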
Add the variable directly to the build pipeline Variables. This will add it to the Azure environment variables, and the app can use it.
When you do the deployment to Azure using VSTS, you can provide your environment variables in the build pipeline, which will automatically include them in the ReactJS project.
It's now the end of 2019 and I am still facing the issue with env variables in Node.js and Azure DevOps.
I didn't find a solution, but I use a workaround: pseudo "env vars".
I created an "env.json" file with the same structure as the ".env" file in the project's root, added it to the ".gitignore" file, and imported it explicitly in the files where I need the env vars, using it as a regular object instead of process.env.***
Example:
we have ".env", that we need to replace:
REACT_APP_SOMW_KEY=KEY
The next steps for the project itself are:
Create "env.json":
{"REACT_APP_SOMW_KEY":"KEY"}
Add it to ".gitignore".
In case you're using TypeScript, add the following setting to tsconfig.json:
"resolveJsonModule": true,
In the files where process.env.REACT_APP_SOMW_KEY is used, change process.env.REACT_APP_SOMW_KEY to config.REACT_APP_SOMW_KEY and add const config = require("../pathTo/env.json") as an import at the beginning.
In case of TypeScript you can also create an interface just to have autocomplete:
export interface IEnvConfig {
  REACT_APP_SOMW_KEY?: string;
}

const config: IEnvConfig = require("../pathTo/env.json");
The result will be something like this:
const reactSomeKey = /*process.env.REACT_APP_SOMW_KEY*/ config.REACT_APP_SOMW_KEY;
Next steps for Azure DevOps:
Add your keys to Azure "key vault" or "variables".
In the CI pipeline, before the step that builds the project, you can add a PowerShell task which creates the "env.json" file - the same way we create the ".env" file locally, since the git clone doesn't bring along the ignored ".env" file.
I put the yml task here (at the end you can see 2 debug commands, just to be sure the file is created and exists in the project):
- powershell: |
    New-Item -Path $(System.DefaultWorkingDirectory) -Name "env.json" -Force -Value @'
    {
      "REACT_APP_SOMW_KEY": "$(REACT_APP_SOMW_KEY)"
    }
    '@
    Get-Content -Path $(System.DefaultWorkingDirectory)\env.json
    Get-ChildItem -Path $(System.DefaultWorkingDirectory)
  displayName: 'Create "env.json" file'
Outcome: you have almost the same flow with the JSON object keys as you usually have with ".env". You can also have both ".env" and "env.json" in the project.
I used a YAML build and wrote the variable to the .env file. The package I was using to do the transforms in ReactJS was dotenv, version 8.2.0.
So here is my YAML build file, with the tasks added to accomplish this:
variables:
  - group: myvariablegroup

trigger:
  batch: true
  branches:
    include:
      - develop
      - release/*

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: dev
  condition: eq(variables['build.sourceBranch'], 'refs/heads/develop')
  jobs:
  - job: DevelopmentDeployment
    steps:
    - task: CmdLine@2
      inputs:
        script: 'echo APP_WEB_API = $(myvariable-dev) > Web/.env'
      displayName: 'Setting environment variables'

    - script: |
        cd Web
        npm install
        npm run build
      displayName: 'npm install and build'

- stage: prod
  condition: eq(variables['build.sourceBranch'], 'refs/heads/master')
  jobs:
  - job: ProductionDeployment
    steps:
    - task: CmdLine@2
      inputs:
        script: 'echo APP_WEB_API = $(myvariable-prod) > Web/.env'
      displayName: 'Setting environment variables'

    - script: |
        cd Web
        npm install
        npm run build
      displayName: 'npm install and build'
This route is only applicable if you are using Azure DevOps.
Azure DevOps has Section in Pipeline called Library.
Create a new Variable Group and add your env variables.
Associate the newly created Variable Group with your build process.
Also remember to name your env variables starting with REACT_APP_.
All the proposed solutions are way too complex, because others have already solved this problem during the package and build process.
To deploy this to Azure, 2 things have to be done. First, remove the .gitignore rule that excludes the .env* files. NOTE: this assumes you do not put secrets there!
Most of the config in the .env file is visible online anyway during the auth flow. So why panic about this file in git? Especially in a private repo I don't see any problem with those .env files.
So, I have .env.dev and a .env.prod...
These contain, e.g.:
REACT_APP_AUTH_URL=https://auth.myid4.info
REACT_APP_ISSUER=https://auth.myid4.info
REACT_APP_IDENTITY_CLIENT_ID=myclientid
REACT_APP_REDIRECT_URL=https://myapp.info/signin-oidc
REACT_APP_AUDIENCE=
REACT_APP_SCOPE=openid profile email roles mysuperapi
REACT_APP_SILENT_REDIRECT_URL=https://myapp.info/silent-renew
REACT_APP_LOGOFF_REDIRECT_URL=https://myapp.info/logout
API_URL=/
The following must be done:
npm i --save-dev env-cmd
Now, modify package.json like this. You may have other scripts, but essentially, just add the correct .env file for your environment:
env-cmd -f .env.prod
So in my case, in package.json:
"start": "rimraf ./build && env-cmd -f .env.dev react-scripts start",
"build": "env-cmd -f .env.prod react-scripts build"
Now I have deployed my React JS app to Azure. FYI, I use the .NET Core SPA feature.
I had the same problem: my environment variables didn't load on the Azure build and deploy, and after hours of googling and hitting my head against the wall, it just occurred to me that maybe the spaces before and after the equals sign ("=") were not supposed to be there.
So I changed:
REACT_APP_API_URL = https://some_url
For:
REACT_APP_API_URL=https://some_url
And it worked alright!
Many of the proposed solutions here did not work (and should not work), but I solved it the following way. First, let me explain why the other solutions may not (should not) work (please correct me if I am wrong):
Adding pipeline variables (even though they are environment variables) should not work, since a React app runs on the client side and there is no server-side code that can inject environment variables into the React app.
Installing an environment variable task on the classic pipeline should not work for the same reason.
Adding them to Application Settings in Azure App Service should not work for the same reason.
Having a .env or .env.development or .env.production file in a git repo is not good practice, as it may compromise API keys and other sensitive information.
So here is my solution:
Step 1: Add all those .env files to the Azure DevOps library as secure files. You can download these secure files on the build machine using a DownloadSecureFile@1 pipeline task (yml). This way we make sure the correct .env file is available on the build machine before the yarn build --mode development task runs in the pipeline.
Step 2:
Add the following tasks to your Azure yml pipeline in the appropriate place. I have created a GitHub repo https://github.com/mail4hafij/react-yarn-azure-pipeline if you want to see a complete example.
# Download secure file from azure library
- task: DownloadSecureFile@1
  inputs:
    secureFile: '.env.development'

# Copy the .env file
- task: CopyFiles@2
  inputs:
    sourceFolder: '$(Agent.TempDirectory)'
    contents: '**/*.env.development'
    targetFolder: '$(YOUR_DEFINED_PROJECT_ROOT_FOLDER_VARIABLE)'
    cleanTargetFolder: false
Keep in mind that secure files can't be edited, but you can always re-upload them.
It's not exactly what you are looking for, but maybe this is an alternative solution for your problem (it substitutes process.env.X with real values during the build step):
https://github.com/babel/minify/tree/master/packages/babel-plugin-transform-inline-environment-variables
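For reference, a hedged guess at how that plugin would be wired up (assuming Babel's usual plugin-name resolution; check the plugin's README for the exact options):
// babel.config.js - sketch only; the plugin inlines process.env.X at build time
module.exports = {
  plugins: ['transform-inline-environment-variables'],
};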
As others have said, in your Azure pipeline, add the variable to the pipeline. However, some corrections on what others have posted, possibly leveraging newer functionality since their responses were written:
If your variable in your .env file is named REACT_APP_MY_VARIABLE, then the variable you need to add to your Azure pipeline should also be named REACT_APP_MY_VARIABLE (not process.env.REACT_APP_MY_VARIABLE).
When setting up the Azure pipeline variable, you can leave the value empty and check the box for "Let users override this value when running this pipeline". This seems to be the trick to letting React still process the .env file content to retrieve your desired values.
As an update: it's a bit different than my original approach, but I've gone the route of using DotEnv and thus .env files, which I now generate on the fly in VSTS using the library variables, and thus do NOT store in source control.
To use DotEnv, I updated the webpack.config:
const Dotenv = require('dotenv-webpack');

module.exports = {
  ...
  plugins: [
    new Dotenv()
  ],
Then basically, I created a .env file containing my parameters
MD_API_URL=http://localhost:7623/api/
And to be able to consume them in my TSX files, I just use process.env:
static getCustomer(id) {
  return fetch(process.env.MD_API_URL + 'customers/' + id, { mode: 'cors' })
    .then(response => {
      return response.json();
    }).catch(error => {
      return error;
    });
}

angular app with docker - production & development

I have a simple AngularJS application. The backend can be treated like a service (external API), so no server side is needed at all. I would like to run it in Docker; however, I'm not sure what the best practice is here.
What I'm expecting to achieve is the following:
the Docker setup should be able to run everything I was doing locally with Node.js - using webpack/grunt/gulp - without the need to install anything on my local machine, plus making sure every team member is working with the same version of basically everything.
the Docker image should be easy to deploy to production and run as lightly as possible (it's just static content!)
The real issue is that, as far as I understand, the dev image should be based on Node.js with a mounted volume and everything; however, the production image should be a super simple nginx server that serves static content. So I might end up with 2 separate Docker setups that use the same code base. Not sure if this is the right way to go.
Can anyone shed some light on this topic? Thanks.
Your ideas seem OK. I generally create a bash script (for me it's flexible enough) to deploy different environments according to the requirement (dev & prod).
Assuming you've created a bash script deployApp.sh:
sh deployApp.sh {dev or prod}
So you can also create (or switch) the Dockerfile on the fly according to your environment and build your app with that Dockerfile. This way you can manage what your prod environment requires (e.g. only deploy to nginx the bundles webpack created) and what your dev environment requires, respectively.
An example of creating deployApp.sh:
#!/bin/sh
# deployApp.sh - the first parameter selects the environment: prod or dev
ENVIRONMENT="${1:-dev}"

webpack   # creates bundle.js etc. (add other required parameters here)

# After the webpack step, choose the Dockerfile for prod or dev:
# ./prod/Dockerfile or ./dev/Dockerfile
docker build -f "./${ENVIRONMENT}/Dockerfile" .   # prod builds the nginx-based container and copies the needed files & folders
That is just one approach based on your idea; I use a similar approach myself. You only have to create that setup once, and you can apply it to other projects too if it's suitable.
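As an illustration of the prod variant described above, a minimal sketch of what ./prod/Dockerfile might contain (the dist/ path is an assumption about where webpack writes the bundles): a plain nginx image that only serves the prebuilt static files.
# ./prod/Dockerfile - sketch: serve the webpack output with nginx
FROM nginx:alpine
COPY dist/ /usr/share/nginx/html/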

AppEngine: Multiple configurations/environments

I have a Google App Engine app (Go, if that matters) that I would like to deploy more than once, with slightly different setups. Think production vs. QA.
env_variables in app.yaml seemed promising, but it seems I can only have one such file. For example, I don't see a way to call "goapp deploy" with app-qa.yaml.
How can I tweak deployment configuration? Is it possible to have more than one app.yaml, without custom script that copies files to a directory and manipulates app.yaml? Any other way to configure this?
My preference is to have the delta between the staging/QA and production reflected in (and controlled via) the VCS (git in my case):
my main development happens on the main branch, where the staging environment config is reflected in the .yaml files' contents
my production branch is pulled off the main branch and contains the production environment config - the .yaml files are modified accordingly
When I perform a deployment I do it from a workspace based on the appropriate branch depending on which config I need to deploy.
The delta between the production branch and the main branch is pretty much just deployment config changes. Whenever I'm happy with the staging results and I'm ready to deploy in production I just sync the production branch to the main refpoint corresponding to the OK'd staging deployment and deploy.
Another possible approach is to directly use the SDK's appcfg.py tool, which is what goapp deploy ultimately invokes anyway:
goapp deploy wraps the appcfg.py python tool provided in the SDK. You
can also invoke this tool directly if you need greater control over
the deployment:
The appcfg.py tool allows you to actually have and use alternate .yaml files residing in the same app/module directory (you'd probably have to use it anyway if you go with multiple or non-standard module configurations, since auto-detection from the app's dir won't work anymore):
appcfg.py update app-qa.yaml
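For example, a hedged sketch of an alternate QA config living next to app.yaml (the runtime fields depend on your app, and the env_variables values are placeholders):
# app-qa.yaml - same structure as app.yaml, but with QA-specific env_variables
runtime: go
api_version: go1
env_variables:
  DATABASE_URL: 'qa-database-url'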
App Engine Modules could be used for such purposes.
