Dockerize your Angular NodeJS application

We have a front-end application.
It's written in Angular (HTML + CSS + JavaScript) and needs to be hosted by a web server (nginx).
The Angular app communicates with a Node.js server, which in turn talks to the backend.
Now we have to run this in Docker.
We want to use two Docker containers, one with Node.js and one with nginx, and let them work together.
So is it possible to have two Dockerfiles in the one repository?
The main idea is to have one Dockerfile for Node.js, which also runs bower install, npm install, etc.,
and will look like this:
# Node base image
FROM node
# Create app directory
RUN mkdir -p /usr/src/www
WORKDIR /usr/src/www
RUN npm install -g bower
RUN npm install -g gulp
# Install app dependencies
COPY . /usr/src/www/
RUN bower install
RUN npm install
RUN gulp build
EXPOSE port
CMD [ "node", "server.js" ]
And one Dockerfile in which we run an nginx web server, but which also includes an nginx.conf so it can point to the right /dist folder in our Node.js container.
The nginx Dockerfile will look like this:
# Set nginx base image
FROM nginx
# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf
An example of the nginx.conf
location ~* /dist {
    proxy_pass http://nodejs:port;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
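To answer the "two Dockerfiles in one repository" question in Compose terms: yes, and a minimal docker-compose.yml could wire the two containers together roughly like this (service names, build paths, and the port are assumptions; the nodejs service name matches the proxy_pass host in the nginx.conf above):

```yaml
version: "3"
services:
  nodejs:
    build: .                  # directory containing the Node.js Dockerfile
    expose:
      - "3000"                # stands in for "port" in the snippets above
  nginx:
    build: ./nginx            # directory containing the nginx Dockerfile
    ports:
      - "80:80"
    depends_on:
      - nodejs
```

Inside the Compose network, nginx can reach the Node.js container by its service name, which is exactly what the proxy_pass line relies on.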

Using two Docker containers is the best option in my opinion; the single-responsibility-per-container design is worth following.
It's very common to need more than one container per project:
database
backend server
frontend server
One approach is to create a folder for Docker definitions and, for each Docker context, a script docker_build.sh that prepares the context (copies all the required artifacts: libs, source code, etc.) and finally runs the docker build.
project_root/
|----src/
|----docker/
|----|----angular/
|----|----|-----Dockerfile
|----|----|-----docker_build.sh
|----|----nodejs/
|----|----|-----Dockerfile
|----|----|-----docker_build.sh
An example of docker_build.sh
#!/bin/bash
# create temp directory for building
mkdir DockerBuildTempPath/
# copy files to temp directory
cp -arv Dockerfile DockerBuildTempPath/
cp -arv ../../src/ DockerBuildTempPath/
# ... etc
cd DockerBuildTempPath
#build image
docker build -t myapp .
# remove temp directory
cd ..
rm -r ./DockerBuildTempPath/
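A slightly more defensive variant of that script is sketched below, assuming bash: mktemp avoids clashes between concurrent builds, and the trap removes the temp directory even if a step fails. The stand-in Dockerfile and the docker availability check are only there to keep the sketch self-contained; the image name "myapp" is illustrative, as before.

```shell
#!/bin/bash
set -euo pipefail

# Unique temp directory for the build context
BUILD_DIR=$(mktemp -d)
# Clean up the temp directory on exit, even after a failure
trap 'rm -rf "$BUILD_DIR"' EXIT

# Stage the build context (stand-in Dockerfile so the sketch runs on its own)
printf 'FROM nginx\n' > Dockerfile.example
cp -a Dockerfile.example "$BUILD_DIR/Dockerfile"
# cp -arv ../../src/ "$BUILD_DIR"/   # project sources, as in the original

# Run the build only when docker is actually available
if command -v docker >/dev/null 2>&1; then
  docker build -t myapp "$BUILD_DIR"
fi

STAGED=$(ls "$BUILD_DIR")
echo "staged: $STAGED"
```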

Try jwilder/nginx-proxy (https://github.com/jwilder/nginx-proxy). I'm currently using it to host a main Docker nginx that proxies all my other Docker services.

Related

how to update docker images and restart it for ReactJS app

I have a ReactJS app, and I dockerized it with the following Dockerfile:
# get the base node image
FROM node:alpine as builder
# set the working dir for container
WORKDIR /frontend
# copy the json file first
COPY ./package.json /frontend
# install npm dependencies
RUN yarn install
ENV REACT_APP_API_ADDRESS=http://localhost:7001
# copy other project files
COPY . .
# build the folder
RUN yarn build
# Handle Nginx
FROM nginx
COPY --from=builder /frontend/build /usr/share/nginx/html
COPY ./docker/nginx/default.conf /etc/nginx/conf.d/default.conf
and my nginx conf is simple like below:
server {
    listen 7001;
    server_name _;
    index index.html;
    root /usr/share/nginx/html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    location / {
        try_files $uri /index.html =404;
    }
}
I then build it and push it to Docker Hub. When I publish a new image (i.e. a new latest), the ReactJS application should pop up a window or a notification for the user to click (I can make that happen so far).
Then, if the user clicks the update notification button, another server, which I call hook_upgrade_server and which runs in another container, should receive a signal, execute docker pull reactjs, and restart the container with the latest image via docker run -d -p7001:7001 reactjs.
Which means I want to install two containers on the user's host:
reactjs does the business logic
hook_upgrade_server waits for the user's click signal, pulls the latest image, and restarts the container
So, how can I make this happen?

How to disable NGINX logging with file

I am very new to nginx and noticed that whenever I hit my server locally it logs the request. What config files do I need to create (and where do I put them), and what do I put into them, to disable that behavior (I am trying to prevent log spew)? I am running my application on AWS and am getting a lot of log lines of the form '172.31.22.19 - - [23/Jun/2021:23:38:33 +0000] "GET / HTTP/1.1" 200 3022 "-" "ELB-HealthChecker/2.0" "-"'. Is there a way to disable that, or do I need to disable everything?
My docker file is:
# pull official base image
FROM node:16 AS builder
# set working directory
WORKDIR /app
# install app dependencies
#copies package.json and package-lock.json to Docker environment
COPY package.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . ./
RUN npm run build
#Stage 2
#######################################
#pull the official nginx:1.19.0 base image
FROM nginx:1.19.0
#copies React to the container directory
# Set working directory to nginx resources directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static resources
RUN rm -rf ./*
# Copies static resources from builder stage
COPY --from=builder /app/build .
EXPOSE 80
# Containers run nginx with global directives and daemon off
ENTRYPOINT ["nginx", "-g", "daemon off;"]
I can successfully run the above with 'docker run -p 80:80 my-app'
If you are using a docker run command to run the container, add the flag --log-driver none to the run command.
Looking at your Dockerfile, you're running node and nginx in a single container. I would advise against this and separate them into separate containers using docker-compose.
If you do this, add a logging driver: none entry to the service running the nginx container.
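In docker-compose terms that fragment would look something like this (the service name is illustrative):

```yaml
services:
  nginx:
    image: nginx
    logging:
      driver: none   # discard all container logs for this service
```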
I fixed the issue by adding an nginx.conf file (see below) and changing the value of access_log to off. The steps I took:
get the nginx.conf file:
Per "What is the 'default' nginx.conf that comes with the Docker Nginx image?", do the following:
# Create a temporary container
docker run -d --rm --name mynginx nginx
# Copy the nginx configuration in the container
docker cp mynginx:/etc/nginx .
Create an nginx.conf file in the root of the project. Mine was:
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log off;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
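A less drastic alternative, if only the ELB health checks are noisy, is to keep the access log but skip those requests using a map plus the if= parameter of access_log (the user-agent pattern is an assumption based on the log line quoted in the question):

```nginx
http {
    # 1 = log the request, 0 = skip it, keyed on the User-Agent header
    map $http_user_agent $loggable {
        ~ELB-HealthChecker  0;
        default             1;
    }

    access_log /var/log/nginx/access.log main if=$loggable;
}
```

That way real traffic still shows up in the log while the health-check spew is dropped.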
Modify the Dockerfile to the following (note the 'COPY nginx.conf /etc/nginx/nginx.conf' line):
# pull official base image
FROM node:16 AS builder
# set working directory
WORKDIR /app
# install app dependencies
#copies package.json and package-lock.json to Docker environment
COPY package.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . ./
RUN npm run build
#Stage 2
#######################################
#pull the official nginx:1.19.0 base image
FROM nginx:1.19.0
COPY nginx.conf /etc/nginx/nginx.conf
#copies React to the container directory
# Set working directory to nginx resources directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static resources
RUN rm -rf ./*
# Copies static resources from builder stage
COPY --from=builder /app/build .
EXPOSE 80
# Containers run nginx with global directives and daemon off
ENTRYPOINT ["nginx", "-g", "daemon off;"]

Container internal communication [duplicate]

I have the following docker-compose file:
version: "3"
services:
  scraper-api:
    build: ./ATPScraper
    volumes:
      - ./ATPScraper:/usr/src/app
    ports:
      - "5000:80"
  test-app:
    build: ./test-app
    volumes:
      - "./test-app:/app"
      - "/app/node_modules"
    ports:
      - "3001:3000"
    environment:
      - NODE_ENV=development
    depends_on:
      - scraper-api
Which builds the following Dockerfiles:
scraper-api (a Python Flask application):
FROM python:3.7.3-alpine
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./app.py"]
test-app (a test react application for the api):
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:/app/src/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
RUN npm install axios -g
# start app
CMD ["npm", "start"]
Admittedly, I'm a newbie when it comes to Docker networking, but I am trying to get the React app to communicate with the scraper-api. For example, the scraper-api has the following endpoint: /api/top_10. I have tried various permutations of the following URL: http://scraper-api:80/api/test_api. None of them have worked for me.
I've been scavenging the internet and I can't really find a solution.
The React application runs in the end user's browser, which has no idea this "Docker" thing exists at all and doesn't know about any of the Docker Compose networking setup. For browser apps that happen to be hosted out of Docker, they need to be configured to use the host's DNS name or IP address, and the published port of the back-end service.
A common setup (Docker or otherwise) is to put both the browser apps and the back-end application behind a reverse proxy. In that case you can use relative URLs without host names like /api/..., and they will be interpreted as "the same host and port", which bypasses this problem entirely.
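A sketch of that reverse-proxy layout in nginx, with the static bundle and the API behind one host (the port and the upstream service name are assumptions):

```nginx
server {
    listen 80;

    # the built front-end bundle
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # forward API calls to the backend container by its Compose service name
    location /api/ {
        proxy_pass http://scraper-api:5000;
    }
}
```

With this in place, the browser app can call /api/top_10 with no host name at all.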
As a side note: when no network is specified inside docker-compose.yml, a default network is created for you, named [directory containing docker-compose.yml]_default. For example, if docker-compose.yml is in the app folder, the network will be named app_default.
Now, inside this network, containers are reachable by their service names. So scraper-api host should resolve to the right container.
It could be that you are using the wrong endpoint URL. In the question you mentioned /api/top_10 as an endpoint, but the URL you tested was http://scraper-api:80/api/test_api, which is inconsistent.
Also, you may have confused the order of the ports in docker-compose.yml for the scraper-api service:
ports:
  - "5000:80"
5000 is the port exposed to the host where Docker is running; 80 is the internal app port. Normally, Flask apps listen on 5000, so I thought you might have meant to say:
ports:
  - "80:5000"
In which case, between containers you have to use :5000 as the destination port in URLs, e.g. http://scraper-api:5000 (plus the endpoint suffix, of course).
To check connectivity, you might want to open a shell inside the client container and see if things connect:
docker-compose exec test-app bash
wget http://scraper-api
wget http://scraper-api:5000
etc.
If you get a response, then you have connectivity, just need to figure out correct endpoint URL.

How to run Ionic serve permanently?

I am using the Ionic framework for an application. The code is on a Linux server, and I run the application with the ionic serve command through PuTTY.
The problem is that if I close PuTTY, the application stops. Is there a way to run ionic serve permanently, as a daemon process?
I suspect you're trying to do this because you want to serve your Ionic app as a web app, correct?
In that case - you don't have to run ionic serve permanently. All you have to do is take all the code from the www folder and place it in the http folder (or any other which is valid for your system) of your web server.
So, basically, spin up apache (or nginx) and serve the code from the Ionic's www folder. Basically, ionic serve command does the same thing - it spins up a local web server and serves the content from the www folder. It does that for faster local testing.
You can take a look at this SO question for more info on how to deploy Ionic as a website.
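As a sketch, a minimal nginx server block serving the built www folder could look like this (the domain and path are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    # the Ionic build output
    root /var/www/my-ionic-app/www;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```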
I wanted to test on my server using Ionic and Capacitor, but serving the www folder alone wasn't running the third-party apps.
Although I haven't tested it, the same setup should technically work for Ionic with the other frameworks, e.g. Vue, React, etc.
Using nginx and supervisor I got it to work.
Nginx config
sudo apt-get install nginx
sudo apt-get install nano
Create a conf file
sudo nano /etc/nginx/sites-available/ionic
Add the following inside the file.
server {
    listen 8100;
    server_name your-domain.com your-ip-address;

    charset utf-8;
    client_max_body_size 10M;

    location / {
        proxy_pass http://127.0.0.1:8101;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Note: I'm listening on port 8100, but you could use any other port, e.g. 80. Also, proxy_pass is set to 8101 because ionic will be running on that port; see below under the Supervisor config.
Supervisor config
sudo apt-get install supervisor
Then create a conf file
sudo nano /etc/supervisor/conf.d/ionic.conf
Add the following inside
[program:ionic]
command=ionic serve --port=8101
directory=/path/to/your/ionic/folder
autostart=true
autorestart=true
startretries=3
stderr_logfile=/var/log/supervisor/ionic/error.log
stdout_logfile=/var/log/supervisor/ionic/out.log
As described earlier in the Nginx config, I'm serving ionic on port 8101.
Note: to avoid an error, create the ionic log folder first:
sudo mkdir /var/log/supervisor/ionic
Then enable and restart the services
sudo ln -s /etc/nginx/sites-available/ionic /etc/nginx/sites-enabled
sudo systemctl restart nginx
sudo systemctl restart supervisor
sudo supervisord
Before opening your website, check in the log output file that ionic is running on the correct port:
tail -80 /var/log/supervisor/ionic/out.log
sudo systemctl enable supervisor
sudo systemctl enable nginx
Then browse to http://your-domain.com:8100 or http://your-ip-address:8100.

how to deploy yeoman angular-fullstack project?

I want to deploy a simple Angular project made with angular-fullstack:
https://github.com/DaftMonk/generator-angular-fullstack
I tried:
yo angular-fullstack test
grunt build
Then, in dist, I got two folders: server and public.
How do I deploy them on a Linux server? With forever/node and nginx?
I want to self-host my project.
Thanks
1.) Install nginx.
2.) Proxy-forward nginx to your node port. See Digital Ocean's How-To.
nginx.conf
server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:9000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
3.) Start app.js with node in your dist folder with the correct environment variables:
$ export NODE_ENV=production; export PORT=9000; node dist/server/app.js
4.) Browse to the hostname configured in nginx in step 2.
In case you get many 404s, you are most likely using Angular in HTML5 mode and need to re-wire your routes to serve static Angular content. I described this, and how to tackle many other bugs you may face, in my blog article "Continuous Integration with Angular Fullstack".
You can also try pm2, which is straightforward and easy and comes with lots of useful features:
https://github.com/Unitech/pm2
// Start new node process
$ pm2 start dist/server/app.js
// list all process
$ pm2 list
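To make pm2 survive reboots, you can describe the app in an ecosystem file and register a startup script. A hypothetical ecosystem.config.js (the name and env values are examples matching the deployment above):

```javascript
// ecosystem.config.js - pm2 process file
module.exports = {
  apps: [{
    name: "fullstack-app",
    script: "dist/server/app.js",
    env: {
      NODE_ENV: "production",
      PORT: 9000
    }
  }]
};
```

Start it with `pm2 start ecosystem.config.js`, then run `pm2 save` and `pm2 startup` so the process list is restored on boot.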
Also, if you're running a Mongo DB with a host, you will need to change the URI in /server/config/environment/production.js to match development.js, and it should work.
With MongoLab you have something to this effect:
mongodb://user:pass@XXXXX.mongolab.com:XXXXX/yourdatabase
Then run the grunt serve:dist command in your app directory.
This worked for me.
Install generator-angular-fullstack:
npm install -g generator-angular-fullstack
Make a new directory, and cd into it:
mkdir my-new-project && cd $_
Run yo angular-fullstack, optionally passing an app name:
yo angular-fullstack [app-name]
Run grunt for building, grunt serve for preview, and grunt serve:dist for a preview of the built app.
