Do I need Nginx alongside nodejs in production (using Firebase for hosting and database) - angularjs

I have the following configuration:
Firebase - for hosting (serving static files) and storing data (database).
Node.js - for making API calls to Firebase, Twilio and Sendgrid.
AngularJS - for the frontend.
Do I need Nginx for the above configuration? Looking at answers like these makes me consider Nginx.
My app is intended to serve several hundred users.

No, you do not NEED nginx. There are zillions of node.js apps at the scale you describe that do not need to use something like nginx.
You would use Nginx if you had a specific problem in your deployment and Nginx was the easiest/best way to solve that problem. You have not described any specific problem (other than scaling to a few hundred users which node.js can do just fine by itself) so you have not described any reason that you need Nginx.
Nginx has a bunch of things it is great at, but until you identify a specific need for more than node.js offers, I would not recommend that you complicate your deployment just because Nginx helps some people. Instead, deploy your app, measure its performance, understand where your weaknesses are and then evaluate if Nginx is the best tool to help you fix any weaknesses that need fixing.

Related

How to deploy next.js frontend and separated express backend

I'm developing an app with a Next.js frontend and a separate backend server running on Express. I'm wondering about production deployment and costs; I did some research but I'm not sure what the best way to do it is.
My folder structure is as follows: I have a separate package.json on the frontend and another on the backend. The two apps also run on different ports, and I'm doing SSR on the frontend.
Next.js already includes a server like Express. API Routes allow you to build a backend deployed along with the rest of the Next.js application.
API Routes live in the /pages/api folder.
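A minimal API Route might look like the sketch below; the file name and response body are illustrative assumptions, and in a real Next.js project the handler would be the file's default export.

```javascript
// pages/api/hello.js -- a minimal Next.js API Route handler (illustrative).
// Next.js calls this function for requests to /api/hello.
function handler(req, res) {
  res.status(200).json({ message: 'Hello from the API route' });
}

module.exports = handler; // in a Next.js project: `export default handler`
```

With routes like this, the Express backend can often be folded into the Next.js app, so both are deployed as one unit.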
Think long-term. If in the future it will need to scale to accommodate traffic, or must have separate domains, then separating the backend from the frontend is the best way to go. That way each team can focus on its part without disturbing the organization of the entire project, and you have a clean interface for your clients (frontend, CLI and SDK). If not, then having the backend in Next.js should be fine.

Deploying react app with nestjs server on cloud service for better SEO

I am aware that my topic already has many answers, but I cannot find what I need and would like to hear about the current state of the art.
I am building a React application with create-react-app, with NestJS as the backend server. After deployment I found there was some trouble with SEO on my website, and I suspect my deployment structure might be wrong.
Here is my structure.
S3 for react app hosting.
Cloudfront
Nestjs server on EC2
RDS for database
Route53
So my React app's URL is https://myapp.com and the server URL is https://server.myapp.com. I call the server's APIs on the client using axios with the server URL, e.g. https://server.myapp.com/v1/users.
I found that many people deploy both their client and server on a single EC2 instance using tools like Nginx or Apache. The reason I did not adopt this was that the benefit of using cloud services is not having to worry about such things, but after deploying the application, it seems that deploying on the same instance is better in many ways.
Could you suggest some ways of structuring a web app deployment with a server? And is my structure the reason for the poor SEO?
It is up to you how you deploy and host your frontend and backend, whether on the same instance or on different instances; that depends on your app's traffic and other factors.
Now for the SEO part. In your case, the first SEO factor is the content you serve, and the second is the performance of your website. The content part depends entirely on how you design and write it, but there are some strategies I can share for improving performance so that this factor doesn't work against you.
Since your content is generated dynamically when a user requests a particular resource from your server, caching can help you optimize the initial server response time. You can cache your content using Nginx, Varnish, or a service like Cloudflare.
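As an illustration, a short-lived micro-cache in Nginx might look like the following sketch; the upstream port, cache-zone name, domain, and TTL are all assumptions, not values from the question.

```nginx
# Goes in the http {} block of nginx.conf: define a small on-disk cache.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=100m;

server {
    listen 80;
    server_name myapp.com;

    location / {
        proxy_pass http://127.0.0.1:3000;          # the NestJS server (port assumed)
        proxy_cache app_cache;
        proxy_cache_valid 200 60s;                 # cache successful responses for 60s
        add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS for debugging
    }
}
```

Even a 60-second cache can absorb most repeated requests for the same dynamic page, which directly improves the initial server response time that search engines measure.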

Routing between App Engine Standard Environment and Flexible Environment projects in production

I have two App Engine projects in Go that comprise a single user-facing app. One project is a Standard Environment project and has the bulk of the functionality and also serves the React frontend static bundle. The second project is a Flexible Environment project and serves a specific purpose of communicating with and transferring large files to a third-party API (it is a Flexible Environment project b/c we ran into size limitations using urlfetch).
Now that I am deploying the app, I am running into a problem with api requests from the frontend. In development, our frontend node server would proxy requests, e.g. /api/project and /api/user to the appropriate App Engine services running on different ports, but in production, my standard environment project is at something like https://my-project-std.appspot.com, and the flexible env project is at something like https://my-project-flex.appspot.com.
I use a dispatch.yaml file with the standard environment project to route API requests (e.g. /api/project and /api/user) to the appropriate service, but I am not sure of the best way to route requests that should go to the flexible environment service (e.g. /api/model). Should I route them through the standard environment project and redirect? Set up a reverse proxy? Some other approach?
Thanks!
I went with a reverse proxy approach, and it is now behaving as I'd hoped. This blog post was very helpful in arriving at a solution https://blog.semanticart.com/2013/11/11/a-proper-api-proxy-written-in-go/
What do your dispatch.yaml and your services' yaml files look like?
Did you mean that you have two services in the same project (not two separate projects)?
If so, you can do this:
https://cloud.google.com/appengine/docs/standard/python/how-requests-are-routed
https://[SERVICE_ID]-dot-[MY_PROJECT_ID].appspot.com
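If both environments are deployed as services of one project, a dispatch.yaml along these lines could route the flexible-environment paths without any proxying; the service names below are illustrative assumptions.

```yaml
# dispatch.yaml -- deployed with `gcloud app deploy dispatch.yaml`
dispatch:
  # Large-file/model requests go to the flexible environment service.
  - url: "*/api/model/*"
    service: flex-api
  # Everything else stays with the default (standard environment) service.
  - url: "*/*"
    service: default
```

Rules are matched top to bottom, so the more specific /api/model pattern must come before the catch-all.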

Hosting React/Redux Web Apps (front end)

One thing I do not quite understand yet is this. Say I have a website or single page app, made with React/Redux. Say I have no API calls or anything like this, so it's just the front-end I am talking about.
Do I have any restrictions when it comes to hosting?
People speak of Heroku & AWS here a lot, but as I understand it this concerns only the backend (?) Could I just host my React site with a 'traditional' provider or would I have to look out for something more specific?
There are no specific requirements for hosting a React webapp.
Let's say you're using webpack to build your webapp: it will produce static JavaScript files that need to be served just like regular static assets.
So serve it just the way you'd serve any other website with static HTML/CSS/JS assets.
That is the case when you have no special requirements. If you wish to build on the server, you'd expect the server to run Node.js and have enough memory to support the build process. If you need caching, load balancing, or rate limiting, you'll have to bring in the appropriate dedicated solutions.

Using Docker compose within Google App Engine

I am currently experimenting with the Google App Engine flexible environment, especially the feature allowing you to build custom runtimes by providing a Dockerfile.
Docker provides a really nice feature called docker-compose for defining and running multi-container Docker applications.
Now the question is, is there any way one can use the power of docker-compose within GAE? If the answer is no, what would be the best approach for deploying a multi-container application (for instance Nginx + PHP-FPM + RabbitMQ + Elasticsearch + Redis + MongoDB, ...) within GAE flexible environment using Docker?
It is not possible at this time to use docker-compose to run multiple application containers within a single App Engine instance. This does, however, seem to be by design.
Scaling application components independently
If you would like to have multiple application containers, you would need to deploy them as separate App Engine services. There would still only be a single application container per service instance but there could be multiple instances of each service. This would grant you the flexibility you seek of scaling each application component independently. In addition, if the application in a container were to hang, it could not affect other services as they would reside in different VMs.
An added benefit of deploying each component as a separate service is that one need not use the flexible environment for every service. For some very small tasks such as API backends or serving relatively slow-changing web content, the standard environment may suffice and may be less expensive at low resource levels.
Communication between components
Since one of your comments mentions getting instance IPs, I thought you might find inter-service communication useful. I'm not certain for what reason you wish to use VM instance IPs, but a typical use case is communicating between instances or services. To do this without instance IPs, your best bet is to issue HTTP requests from one service to another using the appropriate URL. If you have a service called web and one called api, the web service can issue a request to api.mycustomdomain.com where your application is hosted, and the api service will receive a request with the X-Appengine-Inbound-Appid header set to your project ID. This can serve as a way of identifying the request as coming from your own application.
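As a sketch, checking that header in a Node.js service might look like this; the header name is the one described above, while the project ID is a placeholder.

```javascript
// Accept inter-service requests only when they originate from our own
// App Engine app. Node lower-cases incoming header names.
function isInternalRequest(headers, projectId) {
  return headers['x-appengine-inbound-appid'] === projectId;
}

// Inside a request handler one might do:
// if (!isInternalRequest(req.headers, 'my-project-id')) {
//   res.writeHead(403); res.end(); return;
// }
```

App Engine strips this header from external requests, which is what makes the check trustworthy.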
Multicontainer application using Docker
You mention many examples of applications including Nginx, PHP-FPM, RabbitMQ, etc. With App Engine using custom runtimes, you can deploy any container to handle traffic as long as it responds to requests on port 8080. Keep in mind that the primary purpose of the application is to serve responses. The instances should be designed to start up and shut down quickly to be horizontally scalable. They should not be used to store any application data; that should remain outside of App Engine, using tools like Cloud SQL, Cloud Datastore, BigQuery, or your own Redis instance running on Compute Engine.
I hope this clarifies a few things and answers your questions.
You can follow these steps to deploy a container built with a docker-compose file to Google App Engine:
Build your custom image using the docker-compose file:
docker-compose build
Tag the local build:
docker tag [SOURCE_IMAGE] [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Push the image to the Google registry:
docker push [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Deploy the container:
gcloud app deploy --image-url=[HOSTNAME]/[PROJECT-ID]/[IMAGE]
You must authenticate Docker with the registry (e.g. by running gcloud auth configure-docker) before the push will succeed.
