LoopBack 4 served static files not showing on Kubernetes - ReactJS

Hi everyone. I have been banging my head against this for a while, and maybe someone else has a better idea of what my issue is. I have a React and LB4 application and I want to host it on our Kubernetes cluster. I build the React project, put it into a build folder in my LB4 project, and serve those files while also using the LB4 backend for APIs. I put everything in a Docker container, and when I run the container locally, it works as I would expect. When I put the container on Kubernetes, I am able to hit the APIs from the LoopBack project but get a 404 when trying to hit the GUI.
In my LB4 project, I have this to serve the static files:
constructor(options: ApplicationConfig = {}) {
  super(options);

  // Set up the custom sequence
  this.sequence(MySequence);

  // Set up default home page
  this.static('/', path.join(__dirname, '../build'));

  // Customize @loopback/rest-explorer configuration here
  this.bind(RestExplorerBindings.CONFIG).to({
    path: '/explorer',
  });
  this.component(RestExplorerComponent);

  this.projectRoot = __dirname;
}
And here is the Dockerfile I'm using:
RUN mkdir -p /src/app
COPY . /src/app
WORKDIR /src/app
ARG env_name
ENV NODE_ENV=$env_name
ENV PUBLIC_PATH="/"
RUN npm install
RUN npm run build:client
COPY /src/client/build /src/server/
EXPOSE 3001
CMD ["npm", "run", "start"]
Anyone notice anything that might be the issue? Would appreciate it greatly. Thanks.
Edit: Kind of found the issue, I think. It looks like the copying of the static files at the COPY step in my Dockerfile doesn't quite work as I intended, so it's probably looking at an empty folder on the Kubernetes cluster. Now just to see why that isn't working.
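My current suspicion: COPY in a Dockerfile copies from the build context on the host, not from a path inside the image, so the output of RUN npm run build:client never makes it into the server folder. An in-image copy would need RUN cp instead; a sketch, with the in-image paths assumed:

# copy the built client inside the image instead of from the build context (paths assumed)
RUN cp -r /src/app/client/build /src/app/server/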

Related

Is there any way to configure the API_URL for web-app API calls at runtime or while deploying rather than at build time?

I am using Webpack to bundle my React application.
In production, I am using a Dockerfile to serve the static files generated by npm run build, using Nginx.
This Docker image is then deployed using Ansible.
Right now I have to specify the API_URL for requests to the server at build time, using an environment file or Webpack build-time settings.
But, I want to be able to specify this URL while deploying as I will only know the server URL then.
What I have tried:
I can mount Docker volumes to change variables, but I want to avoid changing the minified code.
I tried using a config.json file in assets, but on importing that file in my JS code, Webpack resolves it and inlines it into the minified JS, so I cannot just replace the config.json file in assets.
Fetching the config.json file at runtime from the public URL makes me wait before I can make any API calls; before that file is resolved, all API requests would go to an undefined URL.
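A rough sketch of that gating approach, with assumed names, to show where the waiting comes in:

// index.js (sketch): defer rendering until config.json has loaded; App and
// window.appConfig are assumed names
import React from "react";
import ReactDOM from "react-dom";
import App from "./App";

fetch("/config.json")
  .then((res) => res.json())
  .then((config) => {
    window.appConfig = config; // e.g. { "apiBaseURL": "http://localhost:3000" }
    ReactDOM.render(<App />, document.getElementById("root"));
  });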
Another approach I tried was to use Webpack code splitting to put config.json into its own chunk.
The minified file, which I can then edit, is as below:
!function(e){var t={};function r(n){if(t[n])return t[n].exports;var o=t[n]={i:n,l:!1,exports:{}};return e[n].call(o.exports,o,o.exports,r),o.l=!0,o.exports}r.m=e,r.c=t,r.d=function(e,t,n){r.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:n})},r.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},r.t=function(e,t){if(1&t&&(e=r(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var n=Object.create(null);if(r.r(n),Object.defineProperty(n,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var o in e)r.d(n,o,function(t){return e[t]}.bind(null,o));return n},r.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return r.d(t,"a",t),t},r.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},r.p="/",r(r.s=164)}({164:function(e){e.exports=JSON.parse('{"apiBaseURL":"http://localhost:3000"}')}});
Is there a simpler approach to solving this, which I might be missing?
Here is my Dockerfile for reference:
## Build the app
FROM node:12-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm ci --silent
COPY . /app
RUN npm run build
## Expose port and start the app using nginx
FROM nginx:1.16.0-alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY --from=build /app/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
For me, I solved it by adding environment variables.
The trick is to inject a JS script into the head tag:
<script src="/config.js"></script>
You can serve this file config.js with content:
window._env_ = {
  API_URL: "http://localhost:3000",
};
This way you expose the variable _env_ on the JS window object and can use it anywhere in your code.
The following resource is extremely helpful:
https://www.freecodecamp.org/news/how-to-implement-runtime-environment-variables-with-create-react-app-docker-and-nginx-7f9d42a91d70/
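A minimal sketch of that idea, with assumed file and variable names: an entrypoint script regenerates config.js from the container's environment on every start, so the same image can point at a different API server per deployment.

#!/bin/sh
# entrypoint.sh (assumed name): write config.js from env vars, then start nginx
cat > /usr/share/nginx/html/config.js <<EOF
window._env_ = {
  API_URL: "${API_URL}",
};
EOF
exec nginx -g 'daemon off;'

Application code then reads the value as window._env_.API_URL, and the URL is supplied at deploy time, e.g. docker run -e API_URL=https://api.example.com ...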

Unable to access react app running in docker container

I have seen this question asked different ways on SO. However, I have not been able to find an answer that works for me. Perhaps I haven't done the right search. So here I go. I'm brand new to Docker and have deployed a simple React app using it. I am able to hit the React app when I run it locally on my host, but when I try to access it from the host while it's running in the container, my luck runs out.
I understand that the issue is that the container is listening on its loopback interface, when it should listen on all interfaces (0.0.0.0). My issue is that I am not sure how to do that. I've seen instructions on how to do it for a Node.js app, for Python's http.server, etc. But not for a React app.
My app is super straightforward. I've created an app using create-react-app. I am able to run it locally and see the React page (http://localhost:3000). I've created a standard Dockerfile for a React app:
FROM node:12.15.0-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
I then built and ran it using the following commands:
docker build -t sampleapp .
docker run -p 3000:3000 -d sampleapp
And, as mentioned, I am not able to see the app on http://localhost:3000.
Any help would be very much appreciated. Thanks in advance.
Try adding this to your .env.development:
PORT=3000
If you don't have a .env.development file, create it, add this env var, and see if it works.
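Given the question's own point about the dev server needing to listen on all interfaces, a sketch of that file (HOST is the CRA dev server's bind-address variable; values are illustrative):

# .env.development: make the CRA dev server listen on all interfaces inside the container
HOST=0.0.0.0
PORT=3000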

Multiple env files choosing which one on docker - create react app

I am building a React app using CRA; the application only has client-side code, which means there is no Node.js part.
I have two different environments, one development and one production. As CRA documents, there is an order of preference:
.env
.env.development
.env.production
So if a .env.production file is in the repo, CRA will use it depending on the script I run: npm run build uses .env.production, and npm start uses .env.development if that file is there.
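For example, with hypothetical contents, npm start picks up the development file and npm run build bakes in the production one (CRA only exposes variables prefixed with REACT_APP_):

# .env.development — read by npm start
REACT_APP_API_URL=http://localhost:3000

# .env.production — read by npm run build
REACT_APP_API_URL=https://api.example.com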
So I can add .env, .env.development, and .env.production, but when I build the image in Docker I can only give one command, either npm start or npm run build. How should I solve this?
Install a local development environment; typically your only host dependency will be Node itself. Use the .env.development file there, via something like the Webpack dev server, with a command like yarn start.
Use Docker principally as a deployment mechanism. Your Dockerfile can build your application using the .env.production file, and then copy it into something like an Nginx container that doesn't need Node at all. It should follow the pattern in the CRA Creating a Production Build docs. Loosely,
FROM node:lts AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
ENV NODE_ENV=production
RUN npm run build
FROM nginx
COPY --from=build /app/build /usr/share/nginx/html
# Base image provides default EXPOSE, CMD
This pattern gets around all of the difficulties of trying to make Docker act like a local development environment (filesystem permissions, node_modules not updating, live reloading being flaky, ...) by just using an actual local development environment; but at deployment time you get the benefits of a self-contained Docker image with no host dependencies.
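Under that split, Docker only enters the picture at deployment time; a quick example of building and running the production image (the image name is illustrative):

docker build -t my-cra-app .
docker run -d -p 8080:80 my-cra-app
# the production bundle is now served by nginx on http://localhost:8080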

Using Docker for create-react-app without using Node.js to serve the files

Without using Node.js to serve the static files, I am trying to build a Docker image for a create-react-app project with the below folder structure:
sampleapp
  client
    src
    ...
  Dockerfile
So client was built with create-react-app client; the application just consumes services and renders them.
Dockerfile:
FROM node:10.15.1-alpine
COPY . /var/panda/client
WORKDIR /var/panda/client
RUN npm install --no-cache && npm run build
EXPOSE 4000
CMD ["npm", "start"]
How can I start the Docker container locally and in prod, and is the above Dockerfile fine for running the application as a production build?
If you're asking how to run an image: you simply do docker build -t client . and then docker run client. Your Dockerfile is not fine for a prod environment because it runs as root. You should add the following lines just before the last line:
RUN adduser -D user
USER user
Once you've run npm run build you will have a set of static files you can serve. Anything that serves static files will work fine. One typical setup is to use a multi-stage build to first build the application, then serve it out of an Nginx image:
FROM node:10.15.1-alpine AS build
COPY . /var/panda/client
WORKDIR /var/panda/client
RUN npm install --no-cache && npm run build
FROM nginx:1.17
COPY --from=build /var/panda/client/build /usr/share/nginx/html
For day-to-day development, just use the ordinary CRA tools like npm run start directly on your desktop system. You'll get features like live reloading, and you won't have to mess with Docker volumes or permissions. Since the application ultimately runs in the browser it can't participate in things like Docker networking; except for pre-deployment testing there's not really any advantage to running it in Docker.

AngularJS on nginx in Docker container

We have an app written in Angular, and we will use an nginx container to host it.
The problem is where we have to perform the npm install that creates Angular's /dist folder.
Do we have to perform it in the Dockerfile of our nginx web server, or is this against the rules?
You are obviously using Node as your dev server and want to use NGINX as your prod server? We have a similar setup.
This is how we do it:
In our dev environment, we have /dist in .gitignore.
On a push to Git, a Jenkins job does a build (this runs the npm install inside a Jenkins build server).
On a successful Jenkins job we run a docker build (a downstream job); the docker build copies the /dist files into the Docker image.
We then do a docker push.
The resulting Docker image can be pulled from any server. Hope that helps.
Would welcome your thoughts :)
PS: The problem with doing the npm install during the docker build is that your Docker image becomes messy; you end up installing loads of software inside it just for setup purposes.
All you really want in your Docker image is NGINX serving up your build files.
This is why we do not do an npm install during the docker build.
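A minimal sketch of the resulting Dockerfile under this pattern, assuming the CI job has already placed the built files in dist/ in the build context:

FROM nginx:alpine
# dist/ was produced by the Jenkins build, so no npm install happens here
COPY dist/ /usr/share/nginx/html/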
