Multiple env files: choosing which one in Docker - create-react-app - reactjs

I am building a React app using CRA. The problem is that the application only has client-side code, which means there is no Node.js server part.
I have two different environments, development and production. As the CRA docs explain, there is an order of preference:
.env
.env.development
.env.production
So if a .env.production file is in the repo, CRA will pick it up and use that config based on the script that I run: npm run build uses .env.production, and npm start uses .env.development if the file is there.
So I can add .env, .env.development and .env.production, but when I build the image in Docker I can give only one command, either npm start or npm run build. How should I solve this?
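For illustration, the environment files might contain something like this (REACT_APP_API_URL is just an example variable name):
# .env.development – picked up by npm start
REACT_APP_API_URL=http://localhost:5000
# .env.production – picked up by npm run build
REACT_APP_API_URL=https://api.example.com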

Install a local development environment; typically your only host dependency will be Node itself. Use the .env.development file there, via something like the Webpack dev server, with a command like yarn start.
Use Docker principally as a deployment mechanism. Your Dockerfile can build your application using the .env.production file, and then copy it into something like an Nginx container that doesn't need Node at all. It should follow the pattern in the CRA Creating a Production Build docs. Loosely,
FROM node:lts AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
ENV NODE_ENV=production
RUN npm run build
FROM nginx
COPY --from=build /app/build /usr/share/nginx/html
# Base image provides default EXPOSE, CMD
This pattern gets around all of the difficulties of trying to make Docker act like a local development environment (filesystem permissions, node_modules not updating, live reloading being flaky, ...) by just using an actual local development environment; but at deployment time you get the benefits of a self-contained Docker image with no host dependencies.
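Building and running the production image then looks something like this (a sketch; the tag name is arbitrary, and the Nginx base image listens on port 80):
docker build -t my-app .
docker run --rm -p 8080:80 my-app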

Related

Where does React put the continuous build files when using create-react-app

I'm using create-react-app. When I run npm start (react-scripts start) it continuously builds my changes and does its magic. But what is the output folder for that? I know where the files go when I build manually.
I want to use the Firebase emulator to serve the current version (the continuous build) of my React app, but I don't understand where the output folder is or how to achieve this.
You could try this package: https://github.com/Nargonath/cra-build-watch
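It is installed as a dev dependency, e.g. (a sketch assuming npm; yarn add --dev works equivalently):
npm install --save-dev cra-build-watch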
then add the script to your package.json:
{
  "scripts": {
    "watch": "cra-build-watch"
  }
}
and run it:
npm run watch
More info here:
https://ibraheem.ca/writings/cra-write-to-disk-in-dev/
If you go to the React repo issue linked in the article, you will find more workarounds.
tl;dr
run npm run build, not npm run start
More Detail
react-scripts start runs webpack-dev-server internally. By default, webpack-dev-server serves bundled files from memory and does not write them to the output directory.
If you want to write files with webpack-dev-server, you can set the writeToDisk option to true in your dev server configuration.
However, I don't think this is what you want to serve on the Firebase emulator. webpack-dev-server does not build an optimized app for production, and you would also need react-app-rewired to customize the dev server configuration in a CRA template.
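For reference, such an override could look roughly like this (a sketch assuming react-app-rewired and the writeToDisk option name from webpack-dev-server 3.x, which CRA used at the time):
// config-overrides.js, read by react-app-rewired
module.exports = {
  devServer: (configFunction) => (proxy, allowedHost) => {
    // Start from the standard CRA dev-server config...
    const config = configFunction(proxy, allowedHost);
    // ...and tell webpack-dev-server to write bundles to disk instead of keeping them in memory
    config.writeToDisk = true;
    return config;
  },
};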
What you want is npm run build, which runs react-scripts build and builds an optimized production app in the /build directory.

App settings from Azure not accessible in the backend or frontend of my Next.js application

I have a Next.js application that I want to host on Azure App Service. I've set up a Dockerfile for running the application, which looks like the following:
# Install dependencies only when needed
FROM node:14.15.5-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Rebuild the source code only when needed
FROM node:alpine AS builder
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
RUN yarn build:testing
# Production image, copy all the files and run next
FROM node:alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./next.config.js
COPY --from=builder /app/next-i18next.config.js ./next-i18next.config.js
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /app/.next
USER nextjs
EXPOSE 3000
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry.
# RUN npx next telemetry disable
CMD ["yarn", "start:testing"]
The Docker image is built and pushed to Azure from a Bitbucket pipeline. The image works fine and the push goes well. I am pushing using Microsoft's own supplied code: https://bitbucket.org/microsoft/azure-web-apps-containers-deploy/src/master/.
My issue is that the app settings configured in the App Service are not available in the application hosted on Azure. I am trying to access the variables as environment variables as described in https://learn.microsoft.com/en-us/azure/app-service/configure-language-nodejs?pivots=platform-linux
but with no luck. The variable is just undefined. I've tried accessing it from both the client and the server side in Next.js (it should only be available server side, as far as I understand).
I have a staging and a production deployment slot, which need to point at different backends, and I want to switch between them by swapping staging into production. As far as I can see, app settings are one of the things that change when swapping. I've checked in Kudu that the variable I am trying to access is set.
To repeat the question: how do I access the app settings variables in my Next.js application?
Best
Drachon
PS: I made an error, an error with the deployment flow of Docker. Now I can fetch the app settings from the server side of Next.js.
Thomas's comment is right:
You can't access app settings from the front end. The front end runs in the browser, not on the server side.
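To illustrate the distinction, a minimal sketch (MY_APP_SETTING is a hypothetical App Service setting, not one from the question):
// pages/index.js
export async function getServerSideProps() {
  // Runs on the server, where App Service app settings are visible as environment variables
  return { props: { value: process.env.MY_APP_SETTING || null } };
}

export default function Home({ value }) {
  // In the browser, process.env.MY_APP_SETTING would be undefined,
  // so the value has to come down from the server as a prop
  return <p>Setting from the server: {value}</p>;
}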
Related Posts:
1. I am unable to read environment variable from azure app service configuration from my REACT app
2. Azure WebApps React: the environment variables exist but are not present when I call process.env
3. Azure app service externalize variables environment into my Angular web application
Workaround:
You can use the REST API to access the app settings variables, rather than process.env.Your_Params.
Related post: Accessing Azure Application Settings with JavaScript
Official doc: Web Apps - List Application Settings

Unable to start React with TypeScript in Docker container

I'm trying to npm run start a React application which was created with --template typescript.
TypeScript is therefore installed (as a React dependency), but my Docker container complains with a generic error message that TypeScript isn't installed. I'm therefore unable to start the application inside a Docker container.
Everything works, when I start the application (with the same package.json) outside the container.
Error
> frontend@0.1.0 start /app
> react-scripts start
It looks like you're trying to use TypeScript but do not have typescript installed.
Please install typescript by running npm install typescript.
npm ERR! Exit status 1
I added TypeScript via npm install typescript and rebuilt the Docker container. But it still shows the error message.
Even after adding typescript manually as a dependency (even inside the container with a direct call to npm install typescript there!), the container complained about not being able to find TypeScript (which doesn't seem to be true, as I can verify that TypeScript was installed inside the container: tsc --version shows the correct output).
Code
My Dockerfile looks like this:
FROM node:15.4.0-alpine3.12
# set working directory
ARG app_path=/app
WORKDIR ${app_path}
# add `node_modules/.bin` to $PATH
ENV PATH ${app_path}/node_modules/.bin:$PATH
# set Docker port
EXPOSE 3000
# copy configs, no need to copy src files as they get bind mounted later on (see docker-compose)
COPY package*.json ./
COPY tsconfig.json ./
# install all app dependencies
RUN npm install --silent
# validate typescript installation
RUN tsc --version
My docker-compose.yaml file looks like this:
version: '3.8'
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: ../Dockerfile
    image: frontend:dev
    container_name: dev_frontend_react
    ports:
      - 4000:3000
    command: ['npm', 'run', 'start']
    volumes:
      - ${HOST_PROJECT_PATH}/frontend:/app
      # add a virtual volume to overwrite the copied node_modules folder as the
      # container installs its own node_modules
      - node_modules:/app/node_modules
    environment:
      - NODE_ENV=development
    restart: unless-stopped
volumes:
  node_modules:
    name: frontend_node_modules
Same question, non-working solution
I found another solution to the same question here on Stack Overflow:
Asked to install Typescript when already installed when building Docker image
But that solution hard-copies the project into the container and creates an actual build, which is not acceptable for me as it prevents React's hot-reload feature from working.
During the Docker image build (i.e. in the Dockerfile) you are installing the dependencies into the node_modules folder inside the container (RUN npm install --silent). TypeScript (tsc --version) works because it gets installed there.
Later, in the docker-compose file, you are replacing the node_modules folder with a folder from the host machine. So, effectively, the commands from the Dockerfile have no effect.
I'm not sure what your goal is, but I can see several options:
if you want the docker container to install dependencies into the folder on the host machine (?), you need to do it when the container is being started, not when the image is being built
if you just want to use dependencies from the host folder, make sure that the typescript is there
if your goal is to run the app on Docker but still be able to edit and hot-reload the app, try mounting the src folder alone (but then you won't be able to add new dependencies)
UPDATE:
Actually, your solution should work. docker-compose should copy files from the node_modules inside the container into the mounted volume. There is one gotcha though: it only does this when the volume is newly created (see populate a volume using a container). So, to solve the problem, try removing the volume and rerunning docker-compose:
docker volume rm frontend_node_modules
Unfortunately, you need to do this every time there are any changes to the dependencies. Alternatively you can use:
docker-compose down -v
which will remove volumes as well (by default docker-compose down does not remove volumes, thus the -v switch)
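In practice, the refresh cycle after changing dependencies might look like this (a sketch):
docker-compose down -v
docker-compose build
docker-compose up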

Using Docker for create react app without using Node.js to serve the files

Without using Node.js to serve the static files, I am trying to build a Docker image for a create-react-app project with the folder structure below:
sampleapp -
client
src
...
Dockerfile
So client was created with create-react-app client; the application just consumes services and renders the results.
Dockerfile:
FROM node:10.15.1-alpine
COPY . /var/panda/client
WORKDIR /var/panda/client
RUN npm install --no-cache && npm run build
EXPOSE 4000
CMD ["npm", "start"]
How can I start the Docker container locally and in prod, and is the above Dockerfile fine for running a production build of the application?
If you're asking how to run an image, you simply do docker build -t client ., and then docker run client. Your Dockerfile is not fine for a prod environment because it runs as root. You should add the following lines just before the last line.
RUN adduser -D user
USER user
Once you've run npm run build you will have a set of static files you can serve. Anything that serves static files will work fine. One typical setup is to use a multi-stage build to first build the application, then serve it out of an Nginx image:
FROM node:10.15.1-alpine AS build
COPY . /var/panda/client
WORKDIR /var/panda/client
RUN npm install --no-cache && npm run build
FROM nginx:1.17
COPY --from=build /var/panda/client/build /usr/share/nginx/html
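Note that the nginx stage serves on port 80 rather than the 4000 exposed in your original Dockerfile, so a local test run would look something like this (a sketch; the image tag is arbitrary):
docker build -t panda-client .
docker run --rm -p 4000:80 panda-client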
For day-to-day development, just use the ordinary CRA tools like npm run start directly on your desktop system. You'll get features like live reloading, and you won't have to mess with Docker volumes or permissions. Since the application ultimately runs in the browser it can't participate in things like Docker networking; except for pre-deployment testing there's not really any advantage to running it in Docker.

AngularJS on nginx in Docker container

We have an app written in Angular.
We will use an nginx container to host the Angular app,
but the problem is where we should perform the npm install that creates Angular's /dist folder.
Do we have to perform it in the Dockerfile of our nginx web server, or is this against the rules?
You are obviously using Node as your dev server and want to use NGINX as your prod server? We have a similar setup.
This is how we do it:
1. In our dev environment, we have /dist in .gitignore.
2. On a push to Git, a Jenkins job does a build (this runs the npm install inside a Jenkins build server).
3. On a successful Jenkins job we do a docker build (a downstream job); the docker build copies the /dist files into the Docker image.
4. We then do a docker push.
5. The resulting Docker image can be pulled from any server.
Hope that helps; would welcome your thoughts :)
PS: The problem with doing the npm install during the docker build is that your Docker image becomes messy. You end up installing loads of software inside it just for setup purposes.
All you really want in your Docker image is NGINX serving up your build files.
This is why we do not do an npm install during the docker build.
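For what it's worth, the Dockerfile for that final step can then stay very small (a sketch assuming the Jenkins job has already produced dist/ in the build context):
FROM nginx
COPY dist/ /usr/share/nginx/html/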
