How to use AWS Lightsail for my React build - reactjs

I'm trying to use Lightsail to host a website.
It almost works, but I have to browse to example.com:5000 and I don't know what to do to remove this :5000.
I used npm run build to create a production build, and I use pm2 to serve it automatically on this port.

Since you're using PM2 to serve the React application, you can serve it directly on port 80 by doing the following:
Connect to your server (note: only root can bind ports lower than 1024, which is why we're going to use authbind, which allows this port binding for non-root users)
Bind port 80 using authbind by executing the following commands:
sudo apt-get install authbind              # install the authbind package
sudo touch /etc/authbind/byport/80         # create a "binding file" to bind port 80
sudo chown YOUR-USER /etc/authbind/byport/80   # make your user the owner of this file (replace YOUR-USER with your username)
chmod 755 /etc/authbind/byport/80          # set the access rights on this file
Start the app by prefixing PM2 with authbind: authbind --deep pm2 start your-app (or alias pm2='authbind --deep pm2' so every PM2 command goes through authbind)
You can view more information about these steps via the official PM2 documentation: https://pm2.keymetrics.io/docs/usage/specifics/
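For example, once the authbind setup above is done, serving the production build on port 80 could look like this (a minimal sketch; the build path and app name are placeholders, and pm2 serve assumes a reasonably recent PM2 version):
# serve the static build on port 80 through authbind
authbind --deep pm2 serve ~/your-app/build 80 --spa --name react-app
# http://example.com now works without the :5000 suffix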
Also, if you're just serving a React application, you can host it on S3 instead, since that's pretty cheap, and pairing the bucket with CloudFront gives you advantages such as a CDN. If you do that, just make sure to enable CORS on your S3 bucket.
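As a sketch, deploying the build to S3 with the AWS CLI might look like this (the bucket name is a placeholder):
# upload the production build and enable static website hosting
aws s3 sync build/ s3://your-bucket-name --delete
aws s3 website s3://your-bucket-name --index-document index.html --error-document index.html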

Related

Build, npm serve, npm forever: how to keep a deployed site alive (React + Node.js)

I deployed a website (React + Node.js) using a VDS (hostvds).
I installed Apache 2, npm serve and npm forever.
The problem:
I can't keep the frontend and the backend alive at the same time after I quit PuTTY.
What I did to deploy the application:
-To run the backend, I use: forever server.js (using the VDS console)
-To run the frontend, in the /var/www/html folder, where I moved my front build folder, I use: serve build (using PuTTY)
Everything works perfectly, but when I quit PuTTY the frontend stops working.
Could someone tell me how to run the frontend and keep it alive?
Thanks
The problem you're facing is that the command you run for the frontend is attached to the tty, and when you close the connection the command dies as well. This is not happening on the backend because the forever tool detaches it, so it can effectively run forever. Your question can be summarized as "How do I run multiple commands in detached mode?" A quick search gives some results that can achieve what you are looking for, for example using screen. You have multiple approaches:
Op1: Using Screen
# run the backend command in a detached session
screen -dmS backend bash -c "forever server.js"
# run the frontend command in a detached session
screen -dmS frontend bash -c "serve build"
Note that the screen command creates new sessions and detaches them from the tty; nohup could handle your issue as well.
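A minimal sketch of the nohup alternative, run from the relevant project directories (the log file names are just illustrative):
# run the backend detached from the tty, logging output to a file
nohup forever server.js > backend.log 2>&1 &
# run the frontend the same way
nohup serve build > frontend.log 2>&1 &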
Op2: Using a systemd service
Another, more robust way is using systemd services and handling the lifecycle with the systemctl command. This way you can define restart policies (automatic restart on failure) as well as autostart when the machine reboots. You would have to create two different units, one for the back and one for the front.
Create the files
/etc/systemd/system/backend.service
[Unit]
Description=My backend
[Service]
Type=simple
Restart=always
User=nobody
Group=nobody
WorkingDirectory=/your/back/dir
ExecStart=/usr/bin/npm start
[Install]
WantedBy=multi-user.target
/etc/systemd/system/frontend.service
[Unit]
Description=My frontend
[Service]
Type=simple
Restart=always
User=nobody
Group=nobody
WorkingDirectory=/your/front/dir
ExecStart=/usr/bin/npm start
[Install]
WantedBy=multi-user.target
Once the files are created you can handle the service lifecycle with systemctl:
Run the apps:
systemctl start [backend|frontend]
Stop the apps:
systemctl stop [backend|frontend]
Check status:
systemctl status [backend|frontend]
To enable autostart on boot just enable the service(s) using systemctl enable [backend|frontend]. You can disable it using systemctl disable [backend|frontend].
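Put together, a typical first run might look like this:
# make systemd pick up the new unit files
sudo systemctl daemon-reload
# start both services now and enable them at boot
sudo systemctl start backend frontend
sudo systemctl enable backend frontend
# verify that everything is running
systemctl status backend frontend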
Op3: Static frontend
Options 1 and 2 will solve your issue, but keep in mind that you are serving the frontend with npm when it could be built to static files and served by apache2 directly, which reduces CPU/memory consumption and is much faster. This only concerns the frontend; the backend is dynamic and still needs option 1 or 2.
Since you mention apache2, I assume you know how it works, so just build the frontend application to generate plain HTML, CSS and JS files, then move them to the apache2 folder and it will serve the files to your users for you.
cd /your/front/folder
npm run build
sudo cp -r build/* /var/www/html/
More info on how to build the static files here
Summary
Running commands in a shell attaches them to it, and if you close the shell they die unless you detach them. You can use detaching tools like screen or nohup, or you can change the approach for this specific scenario and use services to handle the lifecycle (apache2 is also a service).
Why don't you try using forever for the front-end as well? If I remember well, the whole point of forever is to keep the command running even if you close the terminal. I would try something like forever start -c "npm start" ./ (forever's -c flag needs the working directory as a final argument).

Cannot open a React app in the browser after dockerising

I'm trying to dockerise a React app. I'm using the following Dockerfile to achieve this.
# base image
FROM node:9.4
# set working directory
WORKDIR /usr/src/app
# install and cache app dependencies
COPY package*.json ./
ADD package.json /usr/src/app/package.json
RUN npm install
# Bundle app source
COPY . .
# Specify port
EXPOSE 8081
# start app
CMD ["npm", "start"]
Also, in my package.json the start script is defined as
"scripts": {
"start": "webpack-dev-server --mode development --open",
....
}
I build the image as:
docker build . -t myimage
And I finally run the image, as
docker run IMAGE_ID
This command then runs the image, however when I go to localhost:8080 or localhost:8081 I don't see anything.
However, when I go into the Docker container for myimage and do curl -X GET http://localhost:8080, I'm able to access my React app.
I also deployed this on google-kubernetes and exposed a load-balancer service on it. The same thing happened: I cannot access the React app on the exposed endpoint, but when I logged into the container and made the curl request, I got back the index.html.
So how do I run this Docker image so that I can access the application through a browser?
When you use EXPOSE in a Dockerfile it simply states that the service is listening on the specified port (in your case 8081), but it does not actually create any port forwarding.
To actually forward traffic from the host machine to the service, you must use the -p flag to specify a port mapping.
For example:
docker run -d -p 80:8080 myimage would start a container and forward requests to localhost:80 to the container's port 8080
More about EXPOSE here https://docs.docker.com/engine/reference/builder/#expose
UPDATE
Usually when you develop Node applications locally and run the webpack dev-server, it listens on 127.0.0.1, which is fine since you intend to visit the site from the same machine it is hosted on. But a Docker container can be thought of as a separate instance, which means you need to be able to access it from the "outside" world, so it is necessary to reconfigure the dev-server to listen on 0.0.0.0 (which basically means all IP addresses assigned to the instance).
So by updating the dev-server config to listen on 0.0.0.0 you should be able to visit your application from your host machine.
Link to documentation: https://webpack.js.org/configuration/dev-server/#devserverhost
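A minimal sketch combining both fixes (the port matches the EXPOSE 8081 in the Dockerfile above; the --host and --port flags assume a webpack-dev-server version that accepts them on the command line):
# in package.json: "start": "webpack-dev-server --mode development --host 0.0.0.0 --port 8081"
docker build . -t myimage
docker run -d -p 8081:8081 myimage
# the app is now reachable at http://localhost:8081 on the host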

How to create a react-app directly using only Docker instead of the host?

I am creating a new React.js application using Docker and I want to create the new instance without installing Node.js on the host system. I have seen many tutorials, but every time the first step was to install Node.js on the host, init the app and then set up Docker. The problem I ran into is that the official Node.js Docker images are designed to run an application, not to run as a detached container, so I cannot use the container command line for the initial install. I was about to create an image based on some Linux distro and install Node.js on my own, but with that approach I cannot use the advantages of the prepared official Node.js images.
Does any option exist to init a React app using Docker without installing Node.js on the host system?
Thank You
EDIT: Based on @David Maze's answer I decided to use docker-compose: just mount the project directory into the container and put command: ["sleep", "infinity"] in the docker-compose file. That way I didn't need to install Node.js on the host and I can manage everything from the container command line as usual in the project folder. I haven't set up any shared global cache, but I am not really sure it is needed; if I end up with several containerized Node versions, their differently-versioned npm packages would conflict anyway. Maybe one day I'll try to mount it as a volume into the containers from some global place on the host, but disk space is not so big a problem ...
You should be able to run something like:
sudo docker run \
--rm \
-it \
-u$(id -u):$(id -g) \
-w/ \
-v"$PWD":/app \
node:10 \
npx create-react-app app
You will have to repeat this litany of Docker options every time you want to do anything with a Docker-packaged version of Node.
Ultimately this sequence of things starts in the container root directory (-w/) and uses create-react-app to create an app directory; the -v option has that backed by the current directory on the host, and the -u option is needed to make filesystem permissions line up. The -it options make it possible to answer interactive questions, and --rm causes the container to clean up after itself.
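For instance, a sketch of what running the dev server afterwards might look like (port 3000 is create-react-app's default; note the working directory and mount now point at the generated app folder):
sudo docker run \
--rm \
-it \
-u$(id -u):$(id -g) \
-w/app \
-v"$PWD/app":/app \
-p3000:3000 \
node:10 \
npm start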
I suspect you will find it much easier to just install Node.

Connect to a Docker SQL Server via SSH

I've created a Docker container that contains an MSSQL database. On the command line, ip a gives an IP address for the container; however, trying to ssh into it with username@docker_ip_address yields ssh: connect to host ip_address port 22: Connection refused. So I'm wondering whether I can even ssh into the container, so I don't always have to use the docker exec ... tool, and if so, how I would go about doing that?
To ssh into a container you should fulfil the following (see the sketch after this list):
An SSH server (OpenSSH) should be installed within the container and the ssh service should be running
Port 22 should be published from the container (when you run it). More info here: Publish ports on Docker
The docker ps command should display the mapped port 22
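A sketch of what that looks like end to end (the image name and host port are placeholders, and the image must already contain a running SSH server):
# publish the container's SSH port on host port 2222
docker run -d -p 2222:22 your-mssql-ssh-image
# then connect from the host
ssh -p 2222 root@localhost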
Hope the above information helps you to understand the situation...
If your container contains a database server, the normal way to interact with it will be through a SQL client that connects to it; Google suggests SQL Server Management Studio, and connector libraries exist for popular languages. I'm not clear what you would do given a shell in the container, and my main recommendation here would be to focus on working with the server in the normal way.
Docker containers normally run a single process, and that's normally the main server process. In this case, the container runs only SQL Server. As some other answers here suggest, you'd need to significantly rearchitect the container to even have it be possible to run an ssh daemon, at which point you need to worry about a bunch of other things like ssh host keys and user accounts and passwords that a typical Docker image doesn't think about at all.
Also note that the Docker-internal IP address (what you got from ip addr; what docker inspect might tell you) is essentially useless. There are always better ways to reach a container (using inter-container DNS to communicate between containers; using the host's IP address or DNS name to reach published ports from the same or other hosts).
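As a sketch of that normal way (the image tag and password here are placeholders):
# run SQL Server with its database port published
docker run -d -p 1433:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD='YourStrong!Passw0rd' mcr.microsoft.com/mssql/server:2019-latest
# connect from the host with any SQL client, e.g. sqlcmd
sqlcmd -S localhost,1433 -U sa -P 'YourStrong!Passw0rd'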
Basically, alter your Dockerfile to something like the following; it will install openssh-server, replace the prohibitive default config and start the service:
# FROM a-image-with-mssql
RUN echo "root:toor" | chpasswd
RUN apt-get update
RUN apt-get install -y openssh-server
COPY entrypoint.sh .
RUN cd /;wget https://gist.githubusercontent.com/spekulant/e04521d6c6e1ccffbd3455c673518c5b/raw/1e4f6f2cb32caf3a4a9f73b02efdcbd5dde4ba7a/sshd_config
RUN rm /etc/ssh/sshd_config; cp sshd_config /etc/ssh/
ENTRYPOINT ["./entrypoint.sh"]
# further commands
Now you've got yourself an image with an SSH server inside. All you have to do is start the service. You can't do RUN service ssh start because it won't work (a Docker specific; refer to the documentation). You have to use an entrypoint like the following:
#!/bin/bash
set -e
# start the SSH daemon before handing off to the main process
service ssh start
# run whatever command was passed to the container
exec "$@"
Put it in a file entrypoint.sh next to your Dockerfile (remember to chmod 755 entrypoint.sh it). There's one more thing to mention here: you still wouldn't be able to ssh into the container, because the default SSH server configuration doesn't allow logging into the root account with a password. So you either change the config yourself and provide it to the image, or you can trust me and use the file I created; inspect it with the link from the Dockerfile, there's nothing malicious in it, only a change from prohibit-password to yes.
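If you'd rather make that change yourself instead of downloading the gist, a one-line sketch for the Dockerfile (GNU sed syntax) would be:
# allow root login with a password (the same change the gist makes)
RUN sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config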
Fortunately for us, the official MSSQL images are based on Ubuntu, so all the commands above fit perfectly into that environment.
Edit
Be sure to ask if something is unclear or if I'm jumping ahead too fast.

How do I customise a Google App Engine Managed VM with a Standard Runtime?

I would like to customise a (Python) Standard Runtime Managed VM.
In theory, this should be possible by adding some extra commands to the VM Dockerfile.
Google's documentation states that a VM Dockerfile is automatically generated when the App is first deployed;
If you are using a standard runtime, the SDK will create a Dockerfile for you the first time you run the gcloud preview app deploy commands. The file will exist in a predetermined location:
If you are developing in Java, the Dockerfile appears in the root of the compiled Web Application Archive directory (WAR)
If you are developing in Python or Go, the Dockerfile appears in the root of your application directory.
And that extra commands can indeed be added;
You can add more docker commands to this file, while continuing to run and deploy your app with the standard runtime declaration.
However, in practice the Dockerfile is automatically deleted immediately after deployment completes, preventing any customisation.
Has anyone managed to add Dockerfile commands to a Managed VM with a Standard Runtime? Any help would be gratefully appreciated.
I tried the same thing and did not succeed. There is however an equivalent way of doing this that I fell back to.
You can create a custom runtime that mimics the standard runtime.
You can do this because Google provides the Docker base images for all the standard runtimes. Mimicking a standard runtime is therefore simply a matter of selecting the right base image in the Dockerfile of the custom runtime. For the standard Python App Engine VM the Dockerfile is:
FROM gcr.io/google_appengine/python-compat
ADD . /app
Now that you have recreated the standard runtime as a custom runtime, you can modify the Dockerfile to make any customizations you need.
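For example, a sketch of such a customization (the extra apt package is purely illustrative):
FROM gcr.io/google_appengine/python-compat
# custom additions on top of the standard runtime image
RUN apt-get update && apt-get install -y libespeak-dev
ADD . /app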
Important Note
The development server does not support custom Dockerfiles (you will get an error about --custom_entrypoint), so you have to move your test environment to App Engine servers if you are doing this. I think this is true regardless of whether you are using a standard runtime and customizing the Dockerfile or using a custom runtime. See this answer.
A note about the development server not working with custom runtimes - dev_appserver.py doesn't deal with Docker or Dockerfiles, which is why it complains about needing you to specify --custom_entrypoint. However as a workaround you can manually set up the dependencies locally. Here's an example using 'appengine-vm-fortunespeak' which uses a custom runtime based on python-compat:
$ git clone https://github.com/GoogleCloudPlatform/appengine-vm-fortunespeak-python.git
$ cd appengine-vm-fortunespeak-python
# Local dependencies from Dockerfile must be installed manually
$ sudo pip install -r requirements.txt
$ sudo apt-get update && sudo apt-get install -y fortunes libespeak-dev
# We also need gunicorn since it's used by python-compat to serve the app
$ sudo apt-get install gunicorn
# This is straight from dev_appserver.py --help
$ dev_appserver.py app.yaml --custom_entrypoint="gunicorn -b localhost:{port} main:app"
Note that if you are using any of the non -compat images, you can run your app directly using Docker since they don't need to emulate the legacy App Engine API. For example, using 'getting-started-python', which uses the python runtime:
$ git clone https://github.com/GoogleCloudPlatform/getting-started-python.git
$ cd getting-started-python/6-pubsub
# (Configure the app according to the tutorial ...)
$ docker build .
$ docker images # (note the IMAGE_ID)
$ docker run -p 127.0.0.1:8080:8080 -t IMAGE_ID
Try the above with any of the -compat images and you will have problems; for example, on python-compat you'll see initialization errors in runtime/google/appengine/tools/vmboot.py. It needs to run on a real Managed VM instance.
