I am starting to learn Angular and I am using the MEAN stack. The grey area in my mind is what happens when my Angular app is finished and ready to deploy to a server.
Would I still need to use Apache or Nginx to route a domain or subdomain to my app?
I guess Node/Express is my main question. I have used it when working locally, but when I deploy, would that be what runs my app on the server side?
Thanks in advance.
You can run Node and your app on the server as-is, single-threaded...provided you took care of telling DNS where to reach the app.
To riff off your nginx question, here are a few other deployment/config questions you may want to consider:
server crashes: Node isn't at 1.0 yet, and apps sometimes do unexpected things and die. Tools like forever, supervisor, and similar process managers can auto-restart the server.
logging: tools like morgan, winston, and others provide logging so you can see what was happening on the server before big events (everybody hit the same page, the server crashed every time page XYZ was hit, etc.).
load balancing: a Node server is single-threaded and single-instance. If you have a very busy site, or you are stuck with synchronous work (blah!), you'll want to consider how to spin up multiple Node instances. nginx and Node clustering would be things to consider, but if your app is small this is probably a lower priority than handling crashes and logging.
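If you do go the multiple-instance route, a minimal nginx reverse-proxy sketch might look like the following. This is an illustration only: the ports, upstream name, and domain are all placeholders, not something from your setup.

```nginx
# Hypothetical nginx config: round-robin requests across two local Node instances.
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        proxy_pass http://node_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

nginx also gives you a natural place to later add TLS termination and static-file serving in front of the Node processes.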
Related
I'm aware Heroku sleeps apps after 30 mins of inactivity, and that's fine.
I have a React front-end hosted on Vercel and an Express back-end hosted on Heroku, and it's very likely the user won't need to make a MongoDB call (all this server does) before the Heroku server has had time to "wake up". So if the app could somehow "poke" the server when it loads, most users wouldn't even know it was sleeping in the first place.
Is this possible without making an intentionally redundant CRUD request?
As per the comment from @jonrsharpe under the question, it appears the simplest and easiest way to do this is to just call get('/wakeUp') on the initial page load and let it return a 404. Not pretty, but it works.
There's no reason this can't also return something more meaningful, but apparently it doesn't need to.
I have created and built (react-scripts build) a simple React application. I want to deploy it to my Ubuntu server via a Docker image, and I am looking for advice. Is it better to use Express and pm2 to serve the React app, or would it be more useful to serve it via nginx? What are the advantages and disadvantages?
Thanks a lot for your advice.
When you're going to deploy a React application, you've typically used a tool like Webpack to compile it down to a set of static files. Once you've done that, serving it via nginx will be smaller and faster than an Express server, and since the Node interpreter isn't involved at all, there are fewer parts that could have security issues.
In fact, even if the rest of your application is in containers, it can be beneficial to host the front-end somewhere external (if you're otherwise deploying to AWS, for example, put the built front-end into an S3 bucket). This can simplify the deployment mechanics a little bit, and with Webpack's file hashing, you can keep older versions of the code for browsers that haven't reloaded the page recently. You still probably need the nginx proxy (to serve a /index.html page, to proxy the back-end services) but you don't necessarily need to redeploy it when the front-end code changes.
The only real advantage to an Express-based deployment setup is being able to live-reload your code in production. That would come with a high risk of human error, though: make an accidental typo and save the file, and end users see an error page. I'd avoid this path.
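A sketch of the nginx-based Docker approach might look like the following multi-stage Dockerfile. The image tags are assumptions, and the `build/` output path assumes create-react-app's `react-scripts build`:

```dockerfile
# Hypothetical multi-stage build: compile the React app with Node, then
# serve the static output with nginx -- no Node runtime in the final image.
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build          # react-scripts build -> ./build

FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
```

The final image contains only nginx plus static files, which keeps it small and removes pm2/Node from the production attack surface.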
Initial connections to my website are extremely slow (100+ seconds). How can I diagnose the issue?
Using the Chrome dev tool network tab, I see that the issue is "initial connection" and not things like SSL or Waiting/TTFB.
This only happens for the first page visit to the website for a given device; after the first page loads, everything on the website is very fast. This consistently happens for new devices, on the same device if I don't visit the website for X days, and on the same device if I clear the cache and browsing history.
The website is a Django app hosted on Google Cloud App Engine with 2 flexible instances.
User traffic to the website is low, so I doubt the issue is load balancing or traffic spikes.
Thanks!
Yesterday I tried opening the page and it took 1.8 min to load the main page and 2.1 min to make a search; later attempts were faster, as you mentioned. I also tried accessing the page today and it loaded quite fast.
From my understanding, the high latency of the first connection might be related to session handling, database connections, SSL certificate problems, a huge amount of uncached data, or expensive operations run before the server sends the response. It's nearly impossible for us to determine without access to your code, logs, and database configuration.
As for how to narrow down the issue I might suggest the following:
Examine your logs for possible causes.
Add timeit logging interleaved with each of the statements that handle the requests and watch for bottlenecks or long-running code.
Deploy the same endpoint without logos, images and other data that would be browser-cached and see if it makes a difference.
Create a simple hello-world endpoint and check its latency. Keep slowly evolving the endpoint to resemble your actual handling code, in the hope of finding the issue.
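The second suggestion (timing each step of request handling) can be sketched as a wrapper that logs elapsed time per request. This is shown in generic Node style to keep one language across the examples; the same pattern maps onto a Django middleware. All names here are illustrative:

```javascript
// Wrap a request handler so each call logs its elapsed time,
// making slow endpoints stand out in the logs.
function withTiming(handler, log = console.log) {
  return (req) => {
    const start = process.hrtime.bigint();
    const result = handler(req);
    const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
    log(`${req.url} took ${elapsedMs.toFixed(1)} ms`);
    return result;
  };
}
```

Wrapping each handler this way and watching the log for outliers narrows the search to the specific statements that are slow.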
If only the first connection is slow, it might be because the instance is starting and you do not have minimum idle instances and warmup requests enabled. With that configuration you will have instances ready to take traffic, and the first-connection latency will be lower.
As it is stated in the documentation:
If you set a minimum number of idle instances, pending latency will have less effect on your application's performance. Because App Engine keeps idle instances in reserve, it is unlikely that requests will enter the pending queue except in exceptionally high load spikes. You will need to test your application and expected traffic volume to determine the ideal number of instances to keep in reserve.
You can also find more information about warmup requests in the documentation on Configuring Warmup Requests to Improve Performance.
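As an illustration, enabling warmup requests and a minimum of idle instances is an app.yaml change in the standard environment (the value 1 here is only an example; the docs quoted above say to tune it against expected traffic):

```yaml
# Hypothetical app.yaml fragment (App Engine standard environment):
# keep one idle instance warm and enable warmup requests.
inbound_services:
- warmup
automatic_scaling:
  min_idle_instances: 1
```

Note that warmup requests apply to the standard environment; on the flexible environment the closest knob is `automatic_scaling: min_num_instances`.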
I resolved this issue by removing and recreating custom domain settings for my App Engine project, and removing and recreating the corresponding DNS records in domains.google, following these instructions:
https://cloud.google.com/appengine/docs/standard/python/mapping-custom-domains
I'm still not sure what the underlying issue was, but this fixed it. Hope this can help anyone encountering a similar issue.
I had this exact issue; it was killing our load performance once we switched to a load balancer.
What it ended up being was the instance group port setting. We're using SSL certs for the site, but I had specified both port 80 and 443.
Once I removed port 80 from the instance group that the load balancer refers to, all the pages loaded immediately.
I have a servlet hosted within Google App Engine. As part of the business logic, it performs an HTTPS request to another web service. I started getting SSL connection reset exceptions, but they appear to happen in a random fashion. I haven't been able to reproduce the exception, and they start happening more often over time but can then fade away and come back. I've stress-tested the remote service directly from other machines and never received a single issue. The only way I've been able to deal with this is by redeploying the application, but it can take a few redeployments before the issue goes away. I am confident it isn't an issue with the remote service but something happening within Google App Engine.
Thank you for your comments, we just got to the bottom of this yesterday. It was a dynamic application being deployed, so the instances should be managing their resources by themselves. However, what was happening is that an instance was running low on memory, which resulted in SSL connection resets. This is why they appeared in a random manner and degraded even worse over time. The way this was resolved was to upgrade the resources, changing the deployment configuration to use an F2 instance rather than the default F1 instance. Since doing so there hasn't been a connection dropped. Thanks again for your responses, and I hope this helps anyone who may experience something similar.
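For reference, the instance-class change described above amounts to a one-line app.yaml setting in the standard environment:

```yaml
# app.yaml: use an F2 instance instead of the default F1
instance_class: F2
```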
I followed this guide: Quickstart for Python. After deploying the "hello, world" app to App Engine (flex), I went to [project].appspot.com and noticed that it is very slow. I tried testing it on different devices and network conditions and I still have the same issue. I went to Cloud Trace and can't build a report due to a lack of traces. It is also slow over both http and https. I deployed to us-central and I am in Texas.
I have attached some logs from Logging and a snippet from Google Chrome's Dev Tools to show the slowness.
Logs from Logging:
Chrome Dev Tools:
The images don’t show anything especially unexpected. Response time will vary depending on the location of the client and its distance to the region of the App Engine Flex instance. Responses that take an especially long time are likely due to a cold boot.
You are probably using a free instance of App Engine. Because it's free, the lifespan is very short, so it shuts down after a short amount of time without requests. When you send a new request after some time, the instance has to spin up before it can process the request, which takes time. You can keep pinging the app to keep the instance alive. A similar question is answered here.