React clarifications

I am trying to understand React, with the goal of possibly creating an app for production purposes.
I am used to creating a Node Express app, and with Express, if we want HTTPS, we require the https module, and in https.createServer() we provide the path to our PEM file from our CA and the passphrase to access it.
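For context, my current Express setup looks roughly like this (paths and the passphrase are placeholders):

    // Sketch of the HTTPS setup I'm used to in Express.
    const fs = require("fs");
    const https = require("https");
    const express = require("express");

    const app = express();
    app.get("/ping", (req, res) => res.send("pong"));

    https
      .createServer(
        {
          cert: fs.readFileSync("/path/to/ca-issued-cert.pem"),
          key: fs.readFileSync("/path/to/private-key.pem"),
          passphrase: "my-passphrase",
        },
        app
      )
      .listen(443);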
Do we not have the option to use a CA certificate file with React? Or, since it is just a UI, does it not need one?
Say I have an Express API that I created using HTTPS and that I can get information from using Postman with no problems, and I want my React app to access that API. Is the only thing I need in order to get info from my API the proxy line in my React package.json file pointing to https://api.to.call.com? I thought it was bad practice to POST or GET to an HTTPS server from HTTP.
Plus, I just read that Create-React-App should not be used for production. Then what am I supposed to use for production? Is there a Create-React-Production-App? Or is this something else I need to add to my Express server? I am very confused about the entire setup.

React is a front-end library. As such, it has nothing to do with SSL certificates, since in the end it's just JavaScript code that runs on the client (i.e. in a browser).
React is also agnostic when it comes to your back-end. It can be Node/Express, Rails, Python, or PHP; your front-end really doesn't care. All it will see is an API that provides data (usually as JSON) upon request.
You do not have to serve your API and your front-end files from the same server - you can, but it's a matter of preference. To access your API from your front-end, you can use an HTTP client such as Axios (a great npm package) or the browser's built-in Fetch API; all you have to do is hook up your API calls, using the same URLs you used in Postman, in the components that require them.
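For example, a minimal sketch of such a call with Axios (the component and endpoint path are made up; the URL would be whatever you already hit from Postman):

    // Hypothetical component fetching from the same HTTPS endpoint used in Postman.
    import { useEffect, useState } from "react";
    import axios from "axios";

    function Users() {
      const [users, setUsers] = useState([]);

      useEffect(() => {
        axios
          .get("https://api.to.call.com/users") // placeholder path
          .then((response) => setUsers(response.data))
          .catch((error) => console.error("API call failed:", error));
      }, []);

      return (
        <ul>
          {users.map((user) => (
            <li key={user.id}>{user.name}</li>
          ))}
        </ul>
      );
    }

    export default Users;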
As for Create-React-App, it's a nice helper tool, but I would recommend learning a bit about the React ecosystem rather than relying blindly on it. The trickiest part is probably the Webpack configuration, but once you get the hang of it, everything becomes a lot easier. There are also newer tools like poi that can do a lot of the grunt work for you. With Webpack, you can have a dev configuration (hot-reloading dev server, etc.) and a production configuration (one that outputs your production JS, HTML, and CSS and can optimize with chunking, etc.).
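A very reduced sketch of what such a dev/production split can look like (loaders, port, and paths are only illustrative, not a complete setup):

    // webpack.config.js - switches behaviour on the mode passed on the CLI,
    // e.g. `webpack serve --mode development` or `webpack --mode production`.
    const path = require("path");

    module.exports = (env, argv) => {
      const isProduction = argv.mode === "production";

      return {
        entry: "./src/index.js",
        output: {
          path: path.resolve(__dirname, "dist"),
          // Content hashes in production help with long-term caching.
          filename: isProduction ? "[name].[contenthash].js" : "[name].js",
          clean: true,
        },
        module: {
          rules: [
            { test: /\.jsx?$/, exclude: /node_modules/, use: "babel-loader" },
            { test: /\.css$/, use: ["style-loader", "css-loader"] },
          ],
        },
        devtool: isProduction ? "source-map" : "eval-source-map",
        devServer: { hot: true, port: 3000 }, // only used by the dev server
        optimization: {
          splitChunks: { chunks: "all" }, // basic chunking for the production build
        },
      };
    };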
I'm not very familiar with Create-React-App (I prefer to use my own boilerplate projects), but if I'm not mistaken, there is a way for you to get full control over the configuration by ejecting; see here. Be warned, however: it's a one-way operation.

Related

Is there a way to dynamically inject sensitive environment variables into a serverless React frontend application using Azure/Github Actions?

I'm sort of restricted to Azure since that is what the client is already using.
But basically, we have this React website, which is your typical react-scripts, no-server website, which means that there's nowhere in Azure Static Web Apps to set environment variables for a frontend application.
I saw this in the Azure Static Web Apps configuration docs, but it is subject to the following restrictions and won't work for my use case, because there is no backend API for my frontend application - the backend associated with the frontend is published separately to Azure App Service. And I need the secrets on the frontend to use some npm packages that require them, which I would prefer to do on the frontend instead of the backend.
Are available as environment variables to the backend API of a static web app
Can be used to store secrets used in authentication configuration
Are encrypted at rest
Are copied to staging and production environments
May only be alphanumeric characters, ., and _
I was doing some more research, and this seems to be sort of along the lines of what I'm looking for:
https://learn.microsoft.com/en-us/answers/questions/249842/inject-environment-variables-from-pipeline-to-azur.html
Essentially, I really want to avoid hardcoding secrets into the React code because that's bad practice.
I personally see a few different (potential) options:
Create an endpoint on the backend Spring Boot API that simply serves all environment variables
This is the most trivial to implement, but I have concerns about security with this approach. As my frontend has no access to any kind of secrets, there's no way for it to pass a secure token to the backend or otherwise authenticate the request, so someone could conceivably have the Chrome network tab open, see that I'm making a request to /getEnvironmentVariables, and recreate the request. The only way I can see to prevent this is to put IP restrictions on the backend API so that it only accepts incoming requests from the IP address of my frontend website, but honestly that just sounds like a lot of overhead to worry about - especially because we're building the product as more of a POC, so we don't have access to their production environments and can't just test it like that.
Have the Azure Static Webapps Github Actions workflow somehow inject environment variables
So I've actually done something similar with GCP before. In order to log in to a GCP service account to deploy the app during continuous build, the workaround was to encrypt a file (which could then be freely committed to a public repo) so that it could only realistically be decrypted using secrets set on the CI/CD pipeline - Travis CI there, or in my case GitHub Actions. I know how to set secrets on GitHub Actions, but I'm not sure how realistic a solution like that would be for my use case, because decrypting a file is not enough: it has to be formatted or rewritten in such a way that React can read it, and I know React is a nightmare when it comes to working with fs and the like, so I'm worried about the viability of going down a path like that. Maybe a more rudimentary approach would be writing some kind of bash script that runs in GitHub Actions, uses the GitHub Actions secrets to store the environment variables I want to inject, and performs a simple file edit on a small React file that is responsible for just exposing environment variables, before packaging with npm and deploying to Azure.
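To make the idea concrete, a rough sketch of that file-edit step, written as a small Node script rather than bash since the file it generates is JavaScript anyway (the file and variable names are hypothetical). It would run in the Actions job, with the secrets exposed as environment variables, right before npm run build - keeping in mind that anything injected this way still ends up readable in the shipped bundle:

    // scripts/write-env.js - hypothetical helper run in CI before `npm run build`.
    // Copies selected variables from the job environment into a module the React
    // code can import, so nothing has to be hard-coded in the repo.
    const fs = require("fs");
    const path = require("path");

    const keys = ["MY_SERVICE_API_KEY", "MY_OTHER_SETTING"]; // placeholders
    const env = {};
    for (const key of keys) {
      if (process.env[key] === undefined) {
        throw new Error(`Missing expected environment variable: ${key}`);
      }
      env[key] = process.env[key];
    }

    const target = path.join(__dirname, "..", "src", "generatedEnv.js");
    fs.writeFileSync(
      target,
      "// Generated at build time - do not edit by hand.\n" +
        `export default ${JSON.stringify(env, null, 2)};\n`
    );
    console.log(`Wrote ${keys.length} variables to ${target}`);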
TLDR: I have a window in GitHub Actions where I have access to a Linux environment and any environment variables I want, and I want to somehow inject those variables into the React app during CI/CD, before deployment.

Should I develop a separate express server, or handle all API calls in my next.js app?

My website will perform CRUD operations and will work with MongoDB and Firebase storage+auth.
What are the reasons / advantages to developing a separate Express server instead of integrating everything in my next.js app?
As far as I have seen, it can all be done in my next.js app, but I still see many projects working with a separate server.
Depends on what your app does and how you are hosting it.
Running Next.js on a standard server will make little difference whether you use Next.js's /api routes or Express.
However, if you are hosting on serverless (e.g. Vercel), I would recommend using a separate Express server if you have a lot of CRUD operations, because the warming up (cold starts) of serverless functions is a really bad user experience.
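For a sense of how close the two are in code, a minimal sketch of a Next.js API route (the resource and the database calls are made up):

    // pages/api/todos.js - roughly equivalent to an Express handler mounted at /api/todos.
    export default async function handler(req, res) {
      if (req.method === "GET") {
        // e.g. const todos = await db.collection("todos").find().toArray();
        return res.status(200).json({ todos: [] });
      }
      if (req.method === "POST") {
        // e.g. await db.collection("todos").insertOne(req.body);
        return res.status(201).json({ ok: true });
      }
      res.setHeader("Allow", ["GET", "POST"]);
      res.status(405).end(`Method ${req.method} Not Allowed`);
    }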
Build and Deployment
Next.js - If you want to edit something on the backend and push the changes, it will require you to rebuild the entire app, and depending on how big your app is, that can take a lot of time (especially with a lot of statically generated pages).
Express - If you run Express separately, you can build and deploy the frontend and backend separately. It saves time, and you can also organise your frontend/backend code better.
Choice of deployment
I have the choice to take advantage of Vercel to host my frontend, with statically generated pages and some server-side rendered pages (automatic scaling, caching, CDN, etc.), and host my backend on a separate cluster of servers.
PS: I moved from a single Next.js app to Next.js + Express.
I can think of a few reasons why they would have a different server from the one Next.js provides:
Familiarity with Express, Koa, etc. (the next-connect package helps with this)
There is an already existing API in PHP, Express, Flask, etc.
It really comes down to what you want to do. The extra interactions with MongoDB & Firebase would be the same with either technology, so unless you want to isolate those concerns separately, I don't see any harm in doing everything together in Next - given that the idea of using Next.js, as per my understanding, is to utilise server-side rendering.
I've been using Next.js with TypeScript for quite a while now, and as of now I have found one reason not to include Express in my project: Vercel.
Since I use Vercel for continuous deployment of my projects, and Vercel does not support any custom server as per their docs here, I refrain from using Express or any other custom server.
I didn't face any problems performing CRUD operations with MongoDB; I can't say about Firebase.
In the Next.js docs, I found these points worth considering:
A custom server can not be deployed on Vercel, the platform Next.js was made for.
A custom server will remove important performance optimizations, like serverless functions and Automatic Static Optimization.
But at the end of the day, it's a very personal decision whether to use a custom server or not. It might depend on the specific use case you are dealing with.
Personally, I try to keep it to just Next.js, but if I have to manage real-time data with Socket.IO, I set up a separate server, because serverless functions can do everything else except WebSockets.
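For illustration, a minimal sketch of that kind of separate real-time server (assuming the express and socket.io packages; the event name, port, and origin are placeholders):

    const express = require("express");
    const http = require("http");
    const { Server } = require("socket.io");

    const app = express();
    const server = http.createServer(app);
    const io = new Server(server, {
      cors: { origin: "https://my-next-app.example.com" }, // the Next.js frontend (placeholder)
    });

    io.on("connection", (socket) => {
      // A long-lived connection per client - the part serverless functions can't hold open.
      socket.on("chat:message", (msg) => io.emit("chat:message", msg));
    });

    server.listen(4000, () => console.log("Realtime server listening on :4000"));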

How to Develop React App on Apache Webserver

I have a problem that I have been trying to solve for weeks now.
I think it's a misunderstanding on my side, but I couldn't figure it out.
What I am trying to do:
Develop, not deploy, a React app on an Apache2 server.
I know that React is a frontend library, so it should be possible to do.
I also know that Node.js is kind of required to "npm" all the packages and to create the "simple" React app.
What I also want to do:
Use Material-UI
Build a PHP backend
Collaborate with my team members (they should also work on the React app)
Thanks in advance.
Hosting a React app is no different from hosting any other JavaScript code on any type of server - you keep it as static files on your web server and include it in the HTML returned from the server.
Depending on the way your React project is set up, you would:
use Node to build a JavaScript bundle of your React app (typically by running "npm run build" in a CLI),
include the resulting script file or files in the head element of your root or master template,
make the Apache web server return the needed page with the master template where the script tag is (a minimal virtual host sketch follows below).
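That last step, for a typical single-page React build, might look roughly like this (domain and paths are placeholders; assumes mod_rewrite is enabled):

    # Sketch of an Apache virtual host serving a built React app.
    <VirtualHost *:80>
        ServerName myapp.example.com
        DocumentRoot /var/www/myapp/build

        <Directory /var/www/myapp/build>
            Options -Indexes
            AllowOverride All
            Require all granted

            # Send unknown paths back to index.html so client-side routing works.
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.html [L]
        </Directory>
    </VirtualHost>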
Additional setup could include starting the PHP server and the React development build at the same time, but that is highly specific to your setup and would require you to add some more information on the frameworks used.
It is not possible to deploy apps with Apache and React. The Node.js-based webpack dev server is incompatible with the Apache web server. You'll have to pick one server or the other; you cannot route index.html -> index.tsx and vice versa. Besides, even if you got both servers working in tandem with SSL, it would be a technical security nightmare with no production value. It's best to avoid Java(Script) altogether, as its merit in real software development has greatly diminished. Mostly for gold diggers.

React App Docker deployment Express/PM2 vs Nginx

I have created and built (react-scripts build) a simple React application. I want to deploy it to my Ubuntu server via a Docker image and I am looking for advice. Is it better to use Express and PM2 to serve the React app, or would it be more useful to serve it via nginx? What are the advantages and disadvantages?
Thanks a lot for your advice.
When you deploy a React application, you've typically used a tool like Webpack to compile it down to a set of static files. Once you've done that, deploying it via nginx will be smaller and faster than an Express server, and since you don't have the entire Node interpreter involved, there are fewer moving parts that could potentially have security issues.
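As an illustration, a multi-stage Dockerfile for that approach might look roughly like this (image tags and paths are illustrative):

    # Build stage: compile the React app to static files.
    FROM node:18-alpine AS build
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci
    COPY . .
    RUN npm run build                 # react-scripts build -> ./build

    # Serve stage: only nginx and the static files, no Node at runtime.
    FROM nginx:alpine
    COPY --from=build /app/build /usr/share/nginx/html
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]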
In fact, even if the rest of your application is in containers, it can be beneficial to host the front-end somewhere external (if you're otherwise deploying to AWS, for example, put the built front-end into an S3 bucket). This can simplify the deployment mechanics a little bit, and with Webpack's file hashing, you can keep older versions of the code for browsers that haven't reloaded the page recently. You still probably need the nginx proxy (to serve a /index.html page, to proxy the back-end services) but you don't necessarily need to redeploy it when the front-end code changes.
The only real advantage to an Express-based deployment setup is being able to live-reload your code in production. That would come with a high risk of human error, though: make an accidental typo and save the file, and end users see an error page. I'd avoid this path.

Structuring a Rails and Angular app

I'd like to build a new single-page app using Rails 4 and Angular, with Capistrano for the deployment process.
I want all of the front end to be a static app on Amazon S3, but I'm open-minded about other suggestions.
What's important to me is a fast development process with the ability to scale up easily.
I was wondering which structure would be best:
Keep all assets in app/assets and set the Bower path to the vendor directory.
That way I can use Rails precompile methods and enjoy Rails HTML tags for index.html, but I'm not sure it will be easy to upload it to S3 and keep it separate.
Keep all assets, including Bower components, in the public/app directory, which keeps it as a completely separate application, but then I need to use Grunt or some other tool for precompiling assets.
Any other ideas?
From my experience, I found this approach to work really well:
API app (Rails/Sinatra/Grape/Node/whatever) serves only JSON APIs. It deploys to its own server, say api.yourapp.com, and serves Access-Control (CORS) headers.
Static web app: generated with Yeoman as an AngularJS, Gulp, and Bower app. It deploys to S3 using a gulp AWS deploy module.
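The deploy task can be as small as something like this (a sketch assuming the gulp-awspublish plugin; bucket, region, and paths are placeholders):

    // gulpfile.js - rough sketch of the "push the static build to S3" task.
    const gulp = require("gulp");
    const awspublish = require("gulp-awspublish");

    gulp.task("deploy", () => {
      const publisher = awspublish.create({
        region: "us-east-1",
        params: { Bucket: "www.yourapp.com" },
      });

      return gulp
        .src("./dist/**/*")
        .pipe(publisher.publish())    // upload new/changed files
        .pipe(publisher.sync())       // optionally remove files deleted locally
        .pipe(awspublish.reporter()); // log what happened
    });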
There's no real reason to have both views and APIs in the same app or built with the same technology (as in Rails).
Now there are issues:
S3 doesn't nicely support Angular's HTML5-mode URLs, so a pure S3 website isn't an option.
Facebook doesn't read OpenGraph tags that are not in the source of the page.
I couldn't figure out the state of Google/SEO with Angular apps; I didn't see the content in the search results.
So as a solution I introduced another web server app. It can be based on anything - pure Rack, Node, etc. I chose Rack.
Solutions to the problems:
The web server app was hosted on www.yourapp.com and proxied (and cached) requests to S3. It supported all URLs (html5Mode) by just proxying them to index.html.
OpenGraph meta tags - the API had an endpoint that takes a URL or ID of an object and returns meta tag information. The web server issues a request to the API once per URL (caching the response) and injects the result into the served index.html.
SEO - as middleware, I used prerender for Rack, which rendered pages on the server.
As a bonus -
Most apps today have a landing page/marketing site and the actual app, and sometimes it's better to maintain these separately. The web server knows, according to a cookie, which app to present on www.yourapp.com - the actual app or the marketing site. On sign-in, set a cookie on the client side and voilà.
First, I think there's a bit of confusion here; let me try to clear it up.
There are a couple of ways of achieving this.
Pure client -> API
When you have a static application, there's no need to go through the Rails asset pipeline; there are far better ways to manage assets when you are using the tooling for client-side applications.
For example, your client application will be an Angular application and you will manage assets with a combination of Bower (dependencies) and Grunt (build and distribution).
There's no point in deploying to S3 with Capistrano; if it's a purely static application, you can use the AWS CLI to just upload your content.
I'd go through a CDN as well. Something like Fastly works really well over Amazon S3.
I have a Rake task that uploads to S3 and then clears the cache on Fastly (if I need to).
As for your Rails application, it would act as an API; it should not have any assets.
Combined
If you have a combined application, some of the actions are served by the server (Rails) and just invoke some client-side code (Angular).
If this is the case, I would go through the Rails asset pipeline and just keep everything in line with Rails best practice, with pre-deploy compilation, etc.
It's one of those questions where "it depends" really is the answer; it all depends on what you want to achieve.
When I have a client application, I try to have a pure client and use the server only as an API, with no assets at all; this way, I separate the concerns.
EDIT 9/9/15
I'd have to say that as long as you can, I'd keep the apps separate.
It's not always possible, especially with more complex apps.
Most apps I have seen in recent months have kept the client-side and server-side code separate; I have seen less use of Rails and more use of rails-api because of that (some have even ditched Rails completely for thinner solutions).
