I have a React-Go application set up as follows:
--root -> (client & server)
Is there a way to launch both applications (npm run start & go run .) from the root directory, maybe using a shell script?
Go and React app in their subdirectories
If the layout of your project is:
root
|-client (React)
|-server (Go)
you can add a shell script or Makefile to simply run both apps. A super naive shell script running two subshells, one for each component, would look like:
#!/bin/bash
# run the React dev server in the background and the Go server in the foreground
(cd client/ && npm start) &
(cd server/ && go run .)
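The Makefile variant is essentially the same idea; a minimal sketch (the target name dev is my own choice, and note that the recipe line must be indented with a tab):
.PHONY: dev
dev:
	(cd client/ && npm start) & (cd server/ && go run .)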
Starting Go server using npm start
Assuming the layout of the project is something like a typical React app:
root
|-node_modules
|-public
|-src
|-server (Go code)
|-package.json
|-go.mod
...
You can add your Go command to the package.json start script like this:
"start": "react-scripts start & go run ./server"
where ./server is the path to the main package of the Go application. Note that it can be literally anywhere (even a completely different directory, above the React app, etc.).
Running npm start will then bring up both the React and Go apps.
In both cases you will have to send SIGINT (press CTRL+C) multiple times to terminate Node.js and then the Go app. You can overcome this by writing a better shell script.
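For example, a slightly better sketch that lets a single CTRL+C terminate both processes (assuming the same client/server layout as above):
#!/bin/bash
# kill the whole process group when the script exits or is interrupted
trap 'kill 0' SIGINT SIGTERM EXIT
(cd client/ && npm start) &
(cd server/ && go run .) &
wait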
NOTE: This is only suitable for development; for production you probably want to use a pre-built Go binary and run both parts under a proper service manager that handles crashes of either part gracefully.
Related
I deployed a website (React + Node.js) using a VDS (hostvds).
I installed Apache 2, npm serve, and npm forever.
The problem:
I can't keep the frontend and backend alive at the same time when I quit PuTTY.
What I did to deploy the application:
- To run the backend, I use forever server.js (using the VDS console)
- To run the frontend, in the /var/www/html folder (where I moved my frontend build folder), I use serve build (using PuTTY)
Everything works perfectly, but when I quit PuTTY the frontend stops working.
Could someone tell me how to run the frontend and keep it alive?
Thanks
The problem you're facing is that the command you run for the frontend is attached to the tty, and when you close the connection the command dies as well. This is not happening on the backend because the forever tool detaches it, so it can effectively run forever. Your question can be summarized as "How do I run multiple commands in detached mode?" A quick search gives some results that can achieve what you are looking for, for example using screen. You have multiple approaches:
Op1: Using Screen
# run backend command (from the backend directory)
screen -dmS backend npm start
# run frontend command (from the frontend directory)
screen -dmS frontend npm start
Note that the screen command is used to create new sessions and detach them from the tty; nohup could handle your issue as well.
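For example, using nohup with the frontend command from the question:
# run the frontend detached from the terminal; output goes to frontend.log
cd /var/www/html
nohup serve build > frontend.log 2>&1 &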
Op2: Using systemd service
Another, more robust way is using systemd services and handling the lifecycle with the systemctl command. This way you can define restart policies (auto-restart on failure) and also autostart when the machine reboots. You would have to create two different units, one for the backend and one for the frontend.
Create the files
/etc/systemd/system/backend.service
[Unit]
Description=My backend
[Service]
Type=simple
Restart=always
User=nobody
Group=nobody
WorkingDirectory=/your/back/dir
ExecStart=/usr/bin/npm start
[Install]
WantedBy=multi-user.target
/etc/systemd/system/frontend.service
[Unit]
Description=My frontend
[Service]
Type=simple
Restart=always
User=nobody
Group=nobody
WorkingDirectory=/your/front/dir
ExecStart=/usr/bin/npm start
[Install]
WantedBy=multi-user.target
Once the files are created you can handle the service lifecycle with systemctl:
Run the apps:
systemctl start [backend|frontend]
Stop the apps:
systemctl stop [backend|frontend]
Check status:
systemctl status [backend|frontend]
To enable autostart on boot just enable the service(s) using systemctl enable [backend|frontend]. You can disable it again using systemctl disable [backend|frontend].
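Note that after creating or changing the unit files you have to make systemd re-read them, for example:
# pick up the new unit files
sudo systemctl daemon-reload
# start both services now and on every boot
sudo systemctl enable --now backend frontend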
Op3: Static frontend
Doing option 1 or 2 will solve your issue, but keep in mind that you are serving the frontend using npm when it could be built to static files and served by apache2 directly, which will reduce CPU/memory consumption and be much faster. This only applies to the frontend; the backend is dynamic and needs option 1 or 2.
Since you mention apache2, I assume you know how it works, so just build the frontend application to generate plain HTML, CSS, and JS files, then move them to the apache2 folder and it will serve the files to the users for you.
cd /your/front/folder
npm run build
# copy the contents of the build folder into the apache2 document root
cp -r build/* /var/www/html/
More info on how to build the static files here
Summary
Running commands in a shell attaches them to it, and if you close the shell they will die unless you detach them. You can use detaching tools like screen or nohup, or you can change the approach for this specific scenario and use services to handle the lifecycle (apache2 is also a service).
Why don't you try to use forever for the frontend as well? If I remember correctly, the whole point of forever is to keep the command running even after you close the terminal. I would try something like forever start -c "npm start".
I need a test.sh (bash) / test.ps1 (PowerShell) script of the following structure:
<build preparation>
npx react-scripts start
<build cleanup>
The build cleanup step should run after I stop the localhost development server.
Currently, if I send Ctrl+C to the terminal, it stops both the dev server as well as the parent script, so the build cleanup step is not executed.
I did not find similar questions online (I am not sure what to search for). Preferably, I need both PowerShell and Linux solutions. My only guess is that the react-scripts command should somehow "trap" the shell's Ctrl+C invocation and exit safely on its own, but I do not know how to do that unless react-scripts implements it itself.
In PowerShell you can use a try{}finally{} construct to ensure the cleanup is run on interruption:
# <build preparation>
try {
    npx react-scripts start
}
finally {
    # <build cleanup>
}
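On the bash side, a trap on EXIT gives the same guarantee; a minimal sketch of test.sh (the cleanup body is a placeholder):
#!/bin/bash
# <build preparation>
cleanup() {
    # <build cleanup>
    echo "cleaning up"
}
trap cleanup EXIT          # run cleanup on any exit
trap 'exit 130' INT TERM   # make Ctrl+C go through the EXIT trap
npx react-scripts start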
I have built an application that uses ReactJS as the frontend and Flask as middleware. I'm able to execute it using the following sequence of commands.
npm start
cd Equation-Solver
python -m flask run
After executing the first command, I have to open another terminal window and execute the next 2 commands. I would like to execute them together using a single shell script. Any help would be really appreciated.
Create a shell script. Add an ampersand at the end of the npm start line; this backgrounds that process, so the other commands will then run. In this case the React app will show on the terminal, but the Python app will be running as well.
npm start &
cd Equation-Solver
python3 -m flask run
I am creating a new ReactJS application using Docker and I want to create the new instance without installing Node.js on the host system. I have seen many tutorials, but every time the first step was to install Node.js on the host, init the app, and then set up Docker. The problem I ran into was that the official Node.js Docker images are designed to run an application rather than to run as a detached container, so I cannot use the container command line for the initial install. I was about to create an image based on some Linux distro and install Node.js on my own, but with that approach I cannot take advantage of the prepared official Node.js images.
Is there any way to init a React app using Docker without installing Node.js on the host system?
Thank You
EDIT: Based on @David Maze's answer I decided to use docker-compose: just mount the project directory into the container and put command: ["sleep", "infinity"] in the docker-compose file. That way I didn't need to install Node.js on the host and I can manage everything from the container command line, as usual, in the project folder. I haven't solved any shared global cache, but I am not really sure it is needed if I end up with several Node versions containerized, because of conflicts between npm packages of different versions. Maybe I will try to mount it as a volume into the containers from some global place on the host one day, but disk space is not such a big problem ...
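A minimal docker-compose.yml along those lines might look like this (the service name node and the image tag are assumptions):
version: "3"
services:
  node:
    image: node:10
    working_dir: /app
    volumes:
      - ./:/app                      # mount the project directory into the container
    command: ["sleep", "infinity"]   # keep the container alive so you can exec into it
You can then bring it up with docker-compose up -d and run npm/npx inside it with docker-compose exec node sh.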
You should be able to run something like:
sudo docker run \
--rm \
-it \
-u$(id -u):$(id -g) \
-w/ \
-v"$PWD":/app \
node:10 \
npx create-react-app app
You will have to repeat this litany of Docker options every time you want to do anything to use a Docker-packaged version of Node.
Ultimately this sequence of things starts in the container root directory (-w/) and uses create-react-app to create an app directory; the -v option has that backed by the current directory on the host, and the -u option is needed to make filesystem permissions line up. The -it options make it possible to answer interactive questions, and --rm causes the container to clean up after itself.
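If you do go this route, one way to avoid retyping the options is a small wrapper script (a hypothetical helper, mirroring the options above):
#!/bin/sh
# docker-node.sh: forward any command to a throwaway Node container
exec sudo docker run --rm -it \
    -u"$(id -u):$(id -g)" \
    -w/ \
    -v"$PWD":/app \
    node:10 \
    "$@"
Then ./docker-node.sh npx create-react-app app does the same thing as the command above.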
I suspect you will find it much easier to just install Node.
I am on a Windows machine trying to create a Dart server. I had success building an image with my files with ADD and running the container. However, it is painful to build an image every time I want to test my code, so I thought it would be better to mount my files with the -v option, since they are then accessed live from my host machine at runtime.
The problem is that Dart's packages folder at /bin/packages is really a symlink (if it's called a symlink in Windows), and docker or boot2docker or whatever doesn't seem able to follow it, so I get
Protocol error, errno = 71
I've used Dart with GAE, and the gcloud command somehow created the container, got your files in there, and reacted to changes in your host files. I don't know if they used the -v option (as I am trying) or if they have some auto-builder that created a new image with your files using ADD and then ran it; in any case, that seemed to work.
More Info
I've been using this Dockerfile that I modified from google/dart
FROM google/dart
RUN ln -s /usr/lib/dart /usr/lib/dart/bin/dart-sdk
WORKDIR /app
# ADD pubspec.* /app/
# RUN pub get
# ADD . /app
# RUN pub get --offline
WORKDIR /app/bin
ENTRYPOINT ["dart"]
CMD ["server.dart"]
As you can see, most of it is commented out because instead of ADD I'd like to use -v. However, you can notice that in this script they do pub get twice, and that effectively creates the packages inside the container.
Using -v it can't reach those packages because they are behind host symlinks. However, pub get actually takes a while, as it installs the standard packages plus your added dependencies. Is this the only way?
As far as I know, you need to add the Windows folder as a shared folder in VirtualBox to be able to mount it with -v when using boot2docker.
gcloud doesn't use -v; it uses these Dockerfiles: https://github.com/dart-lang/dart_docker.
See also https://www.dartlang.org/server/google-cloud-platform/app-engine/setup.html, https://www.dartlang.org/server/google-cloud-platform/app-engine/run.html
gcloud monitors the source directory for changes and rebuilds the image.