Is using nohup in production a bad practice? (How to keep a server running forever) - reactjs

I have a Linux Droplet on DigitalOcean, and I want to run some services on it, like a Spring Boot web app and React.js.
Clearly I need the servers to run all the time, without depending on the terminal being on or off (I'm using PuTTY), and I am planning to do that by using nohup.
I saw other methods like those in Spring Boot (see 3. Installing Spring Boot Applications) and in npm.
But for now I prefer nohup since it's easier and simpler.
Is there a problem with that approach, and is it considered "bad practice" for production?
(And if it is, what is considered good practice?)
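The plan with nohup would be something like this (the jar and log file names are made up):
$ nohup java -jar myapp.jar > spring.log 2>&1 &   # Spring Boot, detached from the terminal
$ nohup npm start > react.log 2>&1 &              # React, same idea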
Edit
Now seeing that nohup does not keep React running after closing the PuTTY console,
I also found this idea for deploying React on nginx (the DigitalOcean droplet runs nginx).
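A minimal nginx server block for serving a React production build would be something like this (the paths are illustrative):
server {
    listen 80;
    root /var/www/myapp/build;       # output of npm run build (made-up path)
    index index.html;
    location / {
        try_files $uri /index.html;  # fall back to index.html for client-side routing
    }
}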

There's nothing wrong with it, but you would still need to create some sort of init script to start your app on boot and stop it on shutdown.
So on a Linux system you would typically want to use systemd unit files for this, and have the init system handle the lifecycle of your server application. The reference guide mentions it here, or refer to this as a more complete example.
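A minimal unit file sketch for the Spring Boot jar could look like this (the service name, user, and paths are illustrative, not prescriptive):
# /etc/systemd/system/myapp.service (hypothetical name)
[Unit]
Description=My Spring Boot application
After=network.target

[Service]
User=appuser
ExecStart=/usr/bin/java -jar /opt/myapp/myapp.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
systemd then starts the app on boot and restarts it on failure:
$ sudo systemctl enable --now myapp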

Related

Running Vespa outside Docker

I'd like to run an instance of Vespa outside of a container (e.g. Docker). The Docker path is definitely quite convenient and works great, but I would like to go through the process of setting up an instance on macOS by hand and see more of the 'nuts and bolts' of Vespa.
It appears there are nice docs which outline a path to building RPMs for CentOS, etc. Would walking through that process and adapting it to macOS be my best bet?
Unfortunately, running Vespa directly on macOS is not yet supported. I'd suggest instead running a CentOS VM or cloud instance and experimenting there.

WebkitGtk application is not loading file URL

I am building a kiosk application using webkitgtk on the Raspberry Pi 4.
This application will not be connected to the internet, and all the HTML, CSS, and JavaScript for the UI are located on the local filesystem.
I am using buildroot to set up the Linux system, starting with the Pi 4 defconfig provided in buildroot.
I have enabled all the packages needed to get webkitgtk running.
Also, the kiosk application has been tested on my desktop, using the same software stack, and it works.
However, when I try to launch the application on the Raspberry Pi, a blank page pops up. I have played around with the WebKitWebSettings object associated with my WebKitWebView by enabling local file access. It still shows a blank screen.
Also included in my Pi 4 application bundle is a simple GTK3+ application. This launches successfully!
I would really appreciate some pointers as to why this is happening, as I have sort of reached a dead end.
UPDATE
I enabled the MiniBrowser app that comes with the Webkitgtk package.
When I enter the local URL, the page does not load. It only gives me a message at the top saying "Successfully downloaded".
It seems to be treating my input as a download.
UPDATE 2
After some more experimenting, I was finally able to get webkitgtk working on the Pi 4.
The problem seems to originate from using the webkit_web_view_load_uri() API.
It does not seem to recognize my HTML document as a web page.
I got around it using the webkit_web_view_load_html() call. This involved some hacks: first reading the contents of the HTML document into a character buffer, then passing that to webkit_web_view_load_html().
You also have to provide a base path to this function call so that all the URLs (scripts, CSS, images, etc.) in your HTML document can be resolved.
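For illustration, here is the same workaround sketched with the WebKit2GTK Python bindings (the C calls are analogous; the kiosk path is made up):
import gi
gi.require_version("Gtk", "3.0")
gi.require_version("WebKit2", "4.0")
from gi.repository import Gtk, WebKit2

win = Gtk.Window()
win.connect("destroy", Gtk.main_quit)
view = WebKit2.WebView()
# read the local document into a buffer ourselves ...
with open("/opt/kiosk/index.html") as f:
    html = f.read()
# ... and hand it to load_html() together with a base URI, so relative
# script/css/image URLs inside the document can be resolved
view.load_html(html, "file:///opt/kiosk/")
win.add(view)
win.show_all()
Gtk.main()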
Another problem I haven't been able to work around is that SVG images are not loading in webkitgtk. I have used JPG formats and they work. I suspect this may be due to a configuration switch when building webkitgtk.
It's hard for me to figure out what might be happening without having access to your environment and settings. My gut feeling is that pages are showing blank because perhaps some shared libraries are missing. You can check that with:
$ ldd WebKitBuild/GTK/Release/bin/MiniBrowser
I am using buildroot to setup the Linux system, starting with the pi 4 defconfig provided in buildroot.
There's a buildroot repository for building WPE for the RPi. WPE (Web Platform for Embedded) is like WebKitGTK but doesn't depend on the GTK toolkit. Another important difference is that WPE runs natively on Wayland.
If you're interested in having a webapp embedded in a browser running in a device with limited capabilities, WPE is a better choice than WebKitGTK. The buildroot repo for building WPE for RPi is here:
https://github.com/WebPlatformForEmbedded/buildroot
There is also this very interesting step-by-step guide on how to build WPE for the RPi3:
https://samdecrock.medium.com/building-wpe-webkit-for-raspberry-pi-3-cdbd7b5cb362
I'm not sure whether the buildroot recipe will work for the RPi4. It seems to work for all previous versions, so you might be stepping into new territory if you try to build WPE on the RPi4.
If you have an RPi3 available, I'd try to build WPE for the RPi3 first and make sure that works. Then try the RPi4.

How to initiate Aldebaran ServiceManager?

I would like to stop and start the ALTactileGesture service through ServiceManager during my app. I'm using Choregraphe and Python boxes. I have tried different options to initiate ServiceManager, but none of them work. Is there any way of doing this?
Edit:
I have already tried self.sm = session.service('ServiceManager'), but it did not work.
The idea is to stop ALTactileGesture as soon as the app has started:
(1) ServiceManager.stopService('ALTactileGesture') (see this)
and start/restart ALTactileGesture before the application ends:
(2) ServiceManager.startService('ALTactileGesture')
My question is how to reach ServiceManager so I can then use (1) and (2)?
You have to understand that the word "service" actually means two different things in NAOqi. See an explanation here:
NAOqi services (also called "modules"), that expose an API and are
registered to the ServiceDirectory. You can call them with qicli,
subscribe to their signals, etc.
systemd services, that are standalone executables packaged in an
Application Package, declared in its manifest with a tag.
These are managed by ALServiceManager, which can start and stop them
(they will have their own process). For clarity's sake, these are
called "Executables" in this doc.
The confusion between the two is increased by the fact that a common
pattern is to write an executable whose sole purpose is to run a NAOqi
service, and sometimes to identify both with the same name (e.g. both
are called “ALFuchsiaBallTracker”).
Your problem here is that the NAOqi service ALTactileGesture is run by the executable registered under the ID ALTactileGesture-serv. So you need to do
ALServiceManager.stop("ALTactileGesture-serv")
(I just tested it, it works fine)
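With the qi Python SDK, that would look roughly like this (the robot address is made up, and the start() call is my assumed counterpart for restarting it later):
import qi

session = qi.Session()
session.connect("tcp://127.0.0.1:9559")   # hypothetical robot address
sm = session.service("ALServiceManager")
sm.stop("ALTactileGesture-serv")          # stop the executable
# ... later, before the application ends (assumed counterpart call):
sm.start("ALTactileGesture-serv")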
(edit) by the way, I'm not sure that actually stopping and starting ALTactileGesture is the best way of doing what you're trying to do (it seems a bit hacky to me), but if you want to do it that way, this is how :)
Just try this in a robot shell (old-style proxy connection):
$ python
>>> import naoqi
>>> s = naoqi.ALProxy("ALServiceManager", "localhost", 9559)
>>> s.stopService('ALTactileGesture')
False
>>> s.startService('ALTactileGesture')
False  # (a bit weird, but ...)
So I think it's not completely working, but at least you can connect to the ServiceManager as requested...

Using nix in a continuous delivery workflow

Can nix be used in a continuous-delivery workflow?
We're using semaphore as our continuous integration service, and now I'm looking into building packages upon a successful build. For this I'm considering using nix.
I don't know what would be the right way of setting up a continuous delivery pipeline with this package manager. It seems that such an automated process would involve:
Making a branch of the nixpkgs repository (in the CI server).
Updating the rev field of fetchFromGitHub.
(automatically) submitting a pull-request.
But I don't know if this makes sense, and I'm also concerned that the continuous-delivery process would involve a manual step (having a human approve the pull request).
Can nix be used in a continuous-delivery workflow?
Yes. It's typically done with Hydra, a CI system built with Nix. But it may also be possible to do this with Semaphore.
Semaphore CI provides build environments that are language specific, but... it's running Ubuntu, so theoretically you can do something like this:
Install Nix as if it were a dependency. See this article.
Add your Nix package, which I suppose you can do with Git. You don't really need to clone Nixpkgs.
Use nix-build to build your package. This will create a result symbolic link to the build output.
Deploy using git-deploy.
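A rough sketch of what the build job could run, assuming a single-user Nix install and a default.nix at the repository root:
# install Nix inside the build VM and load its environment
$ curl https://nixos.org/nix/install | sh
$ . $HOME/.nix-profile/etc/profile.d/nix.sh
# build the package described by ./default.nix; the output appears at ./result
$ nix-build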
If you do something like this with your package you can call it directly from nix-build because you won't have to provide the package dependencies as arguments:
{ pkgs ? import <nixpkgs> {} }:

let
  stdenv = pkgs.stdenv;
  ...
in
  stdenv.mkDerivation {
    ...
  }
Optimization
Installing Nix for every build is wasteful, but perhaps you can cache the Nix store. See this article.

Simple Tensorflow with Custom Packages on Google Cloud

The task: Run a tensorflow train.py script I wrote in the cloud with at least 32GB of memory.
Requirements: The script has some dependencies like numpy, scipy, and mkt. I need to be able to install these. I just want a no-nonsense, SSH-shell-like experience. I want to put all my files, including the training data, in a directory, pip install the packages if necessary, then just hit python train.py and let it run. I'm not looking to run a web app or have Google's machine learning platform do it for me.
All the tutorials around seem needlessly complicated, like they're meant for scaled deployments with HTTP requests and all that. I'm looking for a simple way to run code on a server, since my computer is too weak for machine learning.
Don't use App Engine -- use Compute Engine instead. Almost the same thing, but very simple, and you are completely in control of what you run, what you install, etc.
Simple steps that should work for you:
-Create a Compute Engine instance
-Choose an operating system (Ubuntu xx, but you can choose others instead)
-Choose how many CPUs and how much memory you want (select Customize in order to set the CPU/memory ratio yourself rather than getting the default options)
-Enable HTTP/HTTPS in order to be able to use TensorBoard later
-Once created, SSH into the machine. Python is already pre-installed (2.7 by default, but 3.x is also available as the python3 alias)
-Install TensorFlow, numpy, pandas, and whatever else you want with a simple pip install
-You can also install Bazel if you want to build TensorFlow from source and speed up the CPU operations
-Install gcsfuse if you want to copy/paste stuff quickly from Cloud Storage buckets
-Use tmux if you want to run several TensorFlow sessions in parallel (e.g. to try different hyperparameters)
This is all very clean and simple and works really well. Don't forget to shut the instance down after you finish. You can also create a preemptible instance to make it super cheap (it can be shut down at any time without warning, but that happens rarely).
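For concreteness, the whole flow from a local terminal might look roughly like this (instance name, machine shape, and image are illustrative, not prescriptive):
# create a custom-shape instance with 8 vCPUs and 32 GB of RAM
$ gcloud compute instances create tf-box \
    --custom-cpu 8 --custom-memory 32GB \
    --image-family ubuntu-1804-lts --image-project ubuntu-os-cloud
# SSH in, install dependencies, and run the training script under tmux
$ gcloud compute ssh tf-box
$ pip install tensorflow numpy scipy
$ tmux new -s train
$ python train.py
# when training is done, stop the instance so it stops billing
$ gcloud compute instances stop tf-box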
