I'm no React developer, and I've been doing a Docker course that uses a multi-stage build Dockerfile with node and nginx to dockerize a React app. Why is nginx needed? And why can't we simply use npm start in production? Doesn't it already start a server and expose the port for React to run?
You are correct: there is really nothing stopping you from just doing npm start even in production. For development purposes, using an Nginx server is kind of overkill. However, the story is different for production environments, where there are many reasons to use a "proper" web server. Here are some points:
Performance and obfuscation when doing a production build of the React code: to improve performance you should run npm run build to obtain minified and optimized code. This reduces the file size of your application, which in turn reduces storage, memory, processing and networking resources. The result of npm run build is a bunch of static files which can be served from any web server.
This also obfuscates your code, making it harder for others to see what the code does and harder to exploit potential bugs and weaknesses.
Obfuscation of infrastructure: a front-facing web server can act as a "protective layer" towards the internet; hiding your infrastructure from the outside is good for security.
More performance and security: a battle-tested production web server such as Nginx is highly performant and has HTTPS support built in. A dev server usually won't have the same abilities; it performs worse and offers nowhere near the same level of security.
Convenience: handling a production environment can be significantly different from a dev environment. Nginx provides built-in logging, and you can easily restrict/allow/redirect calls to your server, do load balancing, caching, A/B testing and much more, all of which can be essential in production.
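To make that concrete, a minimal multi-stage Dockerfile along these lines might look like the sketch below (assuming a Create React App project whose production output lands in build/; image tags and paths are examples, adjust for your setup):

```dockerfile
# Build stage: install dependencies and produce the optimized static bundle
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Serve stage: copy only the static files into a plain Nginx image
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```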
I guess your course just sets up example cases that are also relevant for people wanting to create production-ready systems.
When your React app is ready and you publish it (npm run build), you get a bunch of static HTML and JS files, and you need some way to serve them. This is where a web server such as nginx comes in. (You can use any other web server, for example serve.)
You can also serve the files with Node if you create a server that serves static files. npm start is for development and debugging: it uses far more resources (RAM, CPU) and the file size is bigger, so it loads much slower.
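For example, a bare-bones Node server for the build output could look roughly like this (just a sketch, assuming the express package and CRA's default build/ folder):

```js
// Minimal static file server for the production build (sketch, not hardened)
const express = require("express");
const path = require("path");

const app = express();

// Serve everything produced by `npm run build`
app.use(express.static(path.join(__dirname, "build")));

// Fall back to index.html so client-side routing still works
app.get("*", (req, res) => {
  res.sendFile(path.join(__dirname, "build", "index.html"));
});

app.listen(process.env.PORT || 3000);
```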
Struggling with a collision of technical terms (especially the term "plugin", which has about seven different meanings within the React development stack).
Short question:
Is there a way to pre-compile static webpack modules that can be installed separately from a main static React web application, while still sharing modules contained in the main web application? (That's the question as best I can formulate it with my relatively naïve React developer skills.) I'd like the ability to plug in web user interface components supplied by 3rd-party developers after the fact, i.e. installable runtime React UI components that don't require React compilation at install time.
Details:
I have a static React web app that allows remote control of audio plugins (specifically LV2 audio plugins). It's a single-page static React app (which communicates via web sockets with the running application), hosted by a static C++ web server. Realtime and IoT agility requirements make a Python-hosted dynamic web server and runtime compilation an unattractive prospect (https://github.com/rerdavies/pipedal).
What I want to do is allow extension of the web UI using separate bundles provided by 3rd-party LV2 plugins. The ideal solution would be to allow static webpack bundles pre-compiled by the LV2 plugins and placed in /usr/lib/lv2/<pluginname.lvw>/resource directories to be consumed by the web app at runtime. I'm using a custom C++ web server, so redirecting URLs into the /usr/lib/lv2/xx/resource directories is straightforward.
The main app would be distributed as one apt package. LV2 plugins would be compiled (potentially by 3rd-party developers) against an "SDK package" provided by the main app build, after the main app was built, and then distributed in separate packages. Ideally, I'd like to pre-compile the UI code for the plugins to static webpack modules before their installers are built.
I more or less understand how I would do this if I were using raw CLI tools and configuration files (tsc, webpack, babel). But I can't help thinking I would be reinventing the wheel. (And I do have concerns that I'm going to incur serious version-dependency problems.)
I would like to code-share the base modules (react, @mui controls, and a limited set of app-supplied components and interfaces).
I see a path through the various tools to make this happen using my own custom build script, I think. I can get the TypeScript compiler to do code-splitting; I can probably figure out how to get the Babel transpiler to do the right thing. I think I understand how to write webpack configuration files that share modules from the main app (roughly sketched below). I can see a likely path to building and distributing an npm package that does the setup and build of LV2 plugin projects, and how to write supporting CMake build rules for building and installing such packages, &c. But I'm concerned that I'm going to go down a large rabbit hole trying to reinvent something that surely must exist already. And I can imagine seven thousand ways for this to go horribly wrong. :-P
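For what it's worth, the kind of hand-written webpack configuration I have in mind for a plugin bundle is roughly the following (purely a sketch; the global names are hypothetical and would be whatever the main app actually exposes):

```js
// webpack.config.js for a 3rd-party plugin UI bundle (hypothetical sketch)
module.exports = {
  entry: "./src/PluginUi.tsx",
  output: {
    filename: "plugin-ui.js",
    // Expose the bundle as a library the host app can load at runtime
    library: { type: "umd", name: "PluginUi" },
  },
  // Don't bundle the shared modules; resolve them from globals the main app provides
  externals: {
    react: "React",
    "react-dom": "ReactDOM",
    "@mui/material": "MaterialUI",
  },
  // ...loaders for TypeScript/Babel omitted
};
```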
So far, I have implemented the TypeScript compiler portion of the build procedure. And writing various bits to dynamically intercept and service resource requests in the web server is trivial. But it has become painfully obvious that I also need Babel and webpack build steps as well.
I haven't yet looked at the react-scripts package contents to see if I can steal code to build what I want there. Perhaps that's a viable path.
Is there a way to do this with off-the-shelf npm packages and off-the-shelf npm build procedures? I can find all kinds of bits to get me part way; but the integration of all the bits is rather daunting. Should I just do the deed, and start writing my own custom build scripts to make this happen?
I'm developing an application in ReactJS where I quite often push new changes to the application.
When users load up the application they do not always get the newest version, which causes breaking changes and errors against the Express backend I have.
From what I have researched, you can invalidate the cache using "cache busting" or a similar method. However, the questions I have seen on Stack Overflow show no clear consensus on how to do it, and the latest update was sometime in 2017.
How would one, in a modern-day ReactJS application, invalidate the browser's cache in an efficient and automatic way when deploying?
If it's relevant, I'm using Docker and docker-compose to deploy my application.
There's no one-size-fits-all solution. A pretty common approach is adding a hash to the bundle filename, which forces the browser to fetch the file from the server again.
Something like app.js?v=435893452 instead of app.js. Most modern bundlers like webpack can do all of that automatically, but it's hard to give you direction without knowing your setup.
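If your setup exposes the webpack config, the usual pattern is content-hashed filenames, roughly like this sketch:

```js
// webpack.config.js (excerpt) – content-hashed filenames change whenever the code changes,
// so browsers are forced to download the new bundle after a deploy
module.exports = {
  output: {
    filename: "[name].[contenthash].js",
    chunkFilename: "[name].[contenthash].js",
  },
  // html-webpack-plugin (if used) injects the hashed names into index.html,
  // which itself should be served with no or very short caching
};
```

Create React App already produces hashed bundle names out of the box; the part that usually needs attention is making sure index.html itself is served with no (or very short) caching so browsers pick up the new file names.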
I've got an app that I'm working on that consists of a Frontend - written using CRA - and a Backend - written in Java. We would like to support deploying this in multiple environments throughout our process, namely:
Local development machine - This is easy. Just run "npm start" and use the "development" environment
Local End-to-end tests - these build an infrastructure in Docker consisting of the Frontend, Backend and Database, and runs all of the tests against that
Development CI system
QA CI System
Pre-production System
Production System
The problem that I've got is - in the myriad of different systems, what is the best way for the Frontend to know where the Backend is? I'm also hoping that the Frontend can be deployed onto a CDN, so if it can be static files with the minimal complexity that would be preferred.
I've got the first two of these working so far - and they work by using .env files and a hard-coded hostname to call.
Once I get into the third and beyond - which is my next job - this falls down, because each environment has a different hostname to call, while the same output of npm run build is doing the work.
I've seen suggestions of:
Hard-code every option and determine it based on the current browser location in a big switch statement. That just scares me - not only do I have to maintain my deployment properties inside my source code, but the Production source will know the internal hostnames, which is arguably a security risk.
Run different builds with provided environment variables - e.g. REACT_APP_BACKEND=http://localhost:8080/backend npm run build. This will work, but then the code going into QA, PreProd and Prod are actually different and could arguably invalidate testing.
Adding another JavaScript file that gets deployed separately onto the server and is loaded from index.html (roughly as sketched after this list). That again means that the different test environments are technically running different code.
Use DNS Tricks so that the hostnames are the same everywhere. That's just a mess.
Serve the files from an active web server, and have it dynamically write the URL out somehow. That would work, but we're hoping that the Production one can just be a simple CDN serving up static files.
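For reference, the "separate JavaScript file" option I've seen sketched looks roughly like this (names are hypothetical; the file would be pulled in by a plain script tag in index.html before the bundle):

```js
// config.js – deployed per environment alongside the static files (hypothetical names)
window.APP_CONFIG = {
  backendUrl: "https://backend.qa.example.com",
};

// In the React code, read the runtime value and fall back to the build-time one
const backendUrl =
  (window.APP_CONFIG && window.APP_CONFIG.backendUrl) ||
  process.env.REACT_APP_BACKEND;
```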
So - what is the best way to manage this?
Cheers
How can Docker help automation testers?
I know it provides Linux containers, which are similar to virtual machines, but how can I use those containers in software automation testing?
Short answer
You can use Docker to easily create an isolated, reproducible and portable environment for testing. Every dependency goes into an image, and whenever you need an environment to test your application you just run those images.
Long answer
Applications have a lot of dependencies
A typical application has a lot of dependencies on other systems. You might have a database, an LDAP server, a Memcache instance or many other things your system depends on. The application itself needs a certain runtime (Java, Python, Ruby) in a specific version (Java 7 or Java 8). You might also need a server (Tomcat, Jetty, NGINX) with settings for your application, a special folder structure for your application, and so on.
Setting up a test environment becomes complicated
All these things make up the environment your application needs. You need this environment to run your application in production, to develop it and to test it (manually or automated). This environment can become quite complicated, and maintaining it will cost you a lot of time and trouble.
Dependencies become images
This is where Docker comes into play: Docker lets you put your database (with your application's initial data already set up) into a Docker image. The same goes for your LDAP, your Memcache and all the other applications you depend on. Docker even lets you package your own application into an image that provides the correct runtime, server, folder structure and configuration.
Images make your environment easily reproducible
Those images are self-contained, isolated and portable. This means you can pull them on any machine and just run them as they are. Instead of installing a database, LDAP and Memcache and configuring all of them, you just pull the images and run them. This makes it super easy to spin up a new and fresh environment in seconds whenever you need one.
Testing becomes easier
And that's the basis for your tests, because you need a clean, fresh and reproducible environment to run tests against. "Reproducible" and "fresh" are especially important. If you run automated tests (locally on the developer machine or on your build server) you must use the same environment, otherwise your tests are not reliable. Fresh is important because it means you can just stop all containers when your tests are finished, and every data mess your tests created is gone. When you run your tests again you just spin up a new environment which is clean and in its initial state.
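As a concrete sketch (image and service names are just examples), a compose file for such a throwaway test environment might look like this:

```yaml
# docker-compose.test.yml – disposable environment for an automated test run
services:
  db:
    image: postgres:15           # dependency, with test data seeded on start
    environment:
      POSTGRES_PASSWORD: test
  app:
    image: my-app:latest         # your application, packaged as an image
    depends_on:
      - db
  tests:
    image: my-tests:latest       # image containing the automated test suite
    depends_on:
      - app
```

You would run it with something like docker compose -f docker-compose.test.yml up --abort-on-container-exit, and tear it down (including any data the tests created) with docker compose -f docker-compose.test.yml down -v.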
TL;DR Is there a way to deploy App Engine modules in parallel?
I've built a go application using Google's App Engine SDK for Go. This application defines multiple modules. These modules are self-contained, and do not require any sort of dependency across other modules.
When I attempt to deploy the modules to the Google Cloud, I can't help but notice that the modules are uploaded sequentially. This would be fine if deployment were relatively quick, but each module requires its own redundant compilation of the Go binary. Hence, on top of the regular upload time, I have to wait for my app to compile [module count] x [compilation time] every time I want to deploy.
The obvious (quick) solution is to deploy in parallel, so I created a simple bash script to deploy each module independently. The problem I immediately encountered with this "solution" was an HTTP 500 response from the App Engine API. The whole umbrella application, spanning all the modules, seems to "lock" whenever any individual module is updated. This creates a race condition under which only the first module to trigger a deploy succeeds and the others fail.
I fear that this is a holdover from the legacy languages in App Engine. Since every module uses the same Go binary, there's no real need to compile the same code multiple times; the repeated compilation is redundant, and there seems to be no way to circumvent the lock.
One hypothetical solution, which I have only a vague understanding of, is to compile in parallel and deploy in series. I imagine that this approach would involve taking apart the configuration tool and reworking it to execute in the aforementioned manner, though I can't say for sure (yet).
Any help here would be much obliged. Thanks!
You could deploy to another "version" of your App Engine app, and then, when all modules are deployed, do a very fast version switch.
Versions also allow for traffic splitting if you need/want that kind of thing.
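With today's gcloud tooling the flow would look roughly like this (service and version names are examples):

```sh
# Deploy all services to a new, non-default version (traffic keeps hitting the old one)
gcloud app deploy service1.yaml service2.yaml --version v2 --no-promote

# Once everything is up, switch traffic to the new version in one step
gcloud app versions migrate v2
```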