AWS EC2 and GCP e2 instances constantly become unresponsive - reactjs

I've deployed a Django app on an AWS EC2 micro instance and a React app on a GCP e2-micro instance before, and I encountered almost exactly the same problem: the server randomly becomes unresponsive and unreachable during heavy I/O operations. It happens almost every time I try to install a large package such as tesseract, but it sometimes freezes even when I'm just trying to run a React app with npm start. I've looked at the monitoring, and the incidents all have one thing in common: very high CPU usage. Even after the server becomes unreachable, the CPU meters continue to rise: the AWS EC2 instance usually reaches almost 100%, while the GCP e2 instance goes beyond 100% to something like 140%. At some point the CPU usage stabilizes at about 50%, but the server is still unreachable over SSH.
The server sometimes recovers by itself after hours of being unreachable, but usually I end up having to force-stop and restart it. This causes the public IPv4 address to change, which I really don't like, so I want to find out why my server keeps becoming unresponsive.
Here is what I've installed on my server:
ssh-server
vscode-server
And on the GCP e2 instance I've also installed npm, React, and some UI packages. A simple React app should not involve enough I/O to make the server unresponsive outright, so I'm starting to think something is misconfigured, but I have no clue what it could be. Please help me. Thank you!

I had the same issue. I used the free-tier t2.micro and it could not keep up with all the processes that needed to be handled when executing npx create-react-app react-webapp. I had to reboot it at least twice to be able to SSH into it again.
Upgrading the instance type to c5a.large solved the problem; hope this helps.
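If you go the same route, the resize can be done with the AWS CLI while the instance is stopped; a sketch, where the instance ID and target type are placeholders for your own values (note that on stop/start a non-Elastic public IPv4 address will change):

```shell
# Stop the instance, change its type, then start it again.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --instance-type "{\"Value\": \"c5a.large\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```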

Related

Do I need to stop my localhost server when pushing, pulling or merging code to/from a GitHub repository?

I have two terminals open in VS Code for React development: one for running the server and one for running bash commands. Now I've been told this is bad practice, as it creates problems in the project build; if you want to commit, pull, push, or merge, you should stop the server, perform those actions, and then restart the server. Stopping/restarting the server takes time and seems like a hassle to me. If anyone could answer, thanks.
Largely speaking, this should be fine. Assuming you are using some kind of development server (which is monitoring files for changes), any changes that are applied because of a git pull/merge/etc will cause your app to reload (usually in some optimised way).
An exception to this can be if files that configure the development server change: it probably read its configuration on start, so those changes might not take effect.
Another exception is adding/changing dependencies. If some new/different packages were added to your package.json, then you'd need to rerun npm install again before starting the development server again.
Pushing code to git while the server is running is not problematic either.
So in summary, it's usually fine to do this and it's not destructive in any way. If something unpredictable happens, try restarting the dev server (and possibly rerunning npm install).
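The dependency check can even be scripted; a minimal sketch, where `needs_install` is a hypothetical helper that inspects the list of files changed by a pull:

```shell
#!/bin/sh
# needs_install: reads changed file paths (one per line) on stdin and
# prints "install" when the dependency manifest changed, "ok" otherwise.
needs_install() {
  if grep -q -E '^package(-lock)?\.json$'; then
    echo install
  else
    echo ok
  fi
}

# Typical use after a pull; HEAD@{1} is the commit you were on beforehand:
#   git diff --name-only 'HEAD@{1}' HEAD | needs_install
```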

DNN site running slow in DotNetNuke version 9.4.1

I have recently upgraded my website to DotNetNuke version 9.4.1, but I am now seeing a performance issue: the website runs slow. I have searched for this, applied the performance configuration in the server settings, and also set up cache configuration at the page level.
I have minified the files (JS and CSS) and updated the setting value in the host settings table.
Thanks a lot in advance.
Check the DNN Scheduler to see if there are any active jobs that are taking longer than they should. For example, if the Site Crawler scheduler is constantly running, check the Portals folder to make sure all of the files located there should actually be there. The crawler rebuilds the search index, and if you have a lot of files it can take hours to complete. If the files should be there, disable the crawler scheduler and then run it during your slowest time of day (1:00 AM?). I ran into this problem on a server that had hundreds of thousands of documents in the Portals folder, and ended up solving it by running the crawler between 1:00 AM and 5:00 AM for a few days until it had indexed all of the files. Once the files are indexed, it only has to index changed and new files, so it should only be a burden the first time it runs.
Another possible cause is exceptions. If your site is throwing a large number of exceptions, it will slow down: the handling of the exceptions and then the logging of them (to the DNN EventLog table in the database and the Log4Net files) can be brutal if your site is constantly throwing them. If your site is also running in DEBUG mode, the performance hit is multiplied by at least 30 times, due to .NET collecting all of the additional information about each exception while running in debug mode. That will be brutal for your site's performance.
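Debug mode is controlled by the compilation element in web.config; in production it should be off, something like:

```xml
<!-- web.config: debug="false" skips the extra per-exception bookkeeping -->
<configuration>
  <system.web>
    <compilation debug="false" />
  </system.web>
</configuration>
```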
Check the server logs to see how often IIS is recycling the application pool for your DNN site. If it's happening often, that is also a sign of a large number of exceptions being thrown, if you are using the default IIS application pool settings: by default, IIS will recycle your application pool if too many exceptions are thrown within a short period of time. If you also have the option set to bring up a new instance of your site and run it side by side before IIS terminates the existing instance, then a site that is throwing exceptions can cause a bottleneck and cripple performance. For this situation, I usually stop IIS from recycling the application pool when too many exceptions are thrown within a short period. That may not be the best option for you, but if you stay on top of the exceptions being thrown on the site, you can disable it and let IIS run instances side by side after an app recycle. (This is nice to have when you recycle during active periods: all existing traffic completes on the old instance while all new traffic is sent to the new instance, and once all traffic is hitting the new instance, IIS terminates the older one.)
If none of the above helps, run SQL Profiler on your database to see if there is any extreme database activity going on. Also check for any DB locks.
There are a lot of possible causes that can slow down DNN. The best way to find out what is going on is to run a profiler on the server (Redgate ANTS Profiler or Telerik (Progress) JustTrace).

Error restarting Zeppelin interpreters and saving their parameters

I have Zeppelin 0.7.2 installed and connected to Spark 2.1.1 standalone cluster.
It had been running fine for quite a while until I changed the Spark workers' settings to double the workers' cores and executor memory. I also tried changing the SPARK_SUBMIT_OPTIONS and ZEPPELIN_JAVA_OPTS parameters in zeppelin-env.sh to make it request more "Memory per node" on the Spark workers, but it kept requesting only 1 GB per node, so I removed them.
I had an issue while developing a paragraph, so I tried setting zeppelin.spark.printREPLOutput to true in the web interface. But when I tried to save that setting, I only got a small transparent red box at the right side of my browser window, so saving the setting fails. I also got that small red box when I tried to restart the Spark interpreter. The same happens when I try to change the parameters of any other interpreter or restart it.
There is nothing in the log files, so I am quite puzzled by this issue. Has any of you ever experienced it? If so, what solution did you apply to fix it? If not, do you have any suggestions on how to debug it?
I am sorry for the noise. The issue was actually due to my silly mistake.
I have Zeppelin behind nginx. I recently played around with a new CMS and didn't separate the configuration of the CMS from the proxy to Zeppelin, so any access to a location containing /api/ (like restarting Zeppelin interpreters or saving the interpreters' settings) got blocked. Separating the nginx site configuration of the CMS from the proxy to Zeppelin solved the problem.
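For reference, the separated setup might look roughly like this; the server names and ports here are hypothetical, and the /ws location covers the websocket Zeppelin's UI uses:

```nginx
# Hypothetical names/ports; Zeppelin listens on 8080 by default.
server {
    server_name zeppelin.example.com;

    location / {                     # UI and REST calls, including /api/...
        proxy_pass http://localhost:8080;
    }
    location /ws {                   # Zeppelin's websocket endpoint
        proxy_pass http://localhost:8080/ws;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

server {
    server_name cms.example.com;     # the CMS in its own server block, so its
    location / {                     # rules can't shadow Zeppelin's /api/ routes
        proxy_pass http://localhost:8000;
    }
}
```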

How to launch app from the web/cloud

I have developed an app with Twilio which I would like to run from the cloud. I tried learning about AWS and Google App Engine, but I'm quite confused at this stage.
I have 2 questions which I hope to get your help on:
1) How can I store my scripts and database in the cloud? Right now everything runs on my local machine, but I would like to transfer the scripts and DB to another server and run my app at a predetermined time of day. What would be the best way to do this?
2) How can I write a batch file to run my app at a predetermined time of day in the cloud?
I understand this question does not include code, but I really hope someone can point me in the right direction. I have spent a lot of time trying to understand this myself but am still unsure. Thanks in advance.
Update: the application is a Twilio app that makes calls to people. The script simply applies an algorithm to make calls in a certain fashion, and the database is a MySQL DB that provides the details of the people to be called.
It is quite difficult to provide an exact answer without understanding what the application, the DB, or the script you wish to run actually are.
I can give you a couple of ideas that might be helpful in such cases.
OpsWorks (http://aws.amazon.com/opsworks/) is a managed service for managing applications. You can define your stack (multiple layers like web, workers, DB...) and which Chef recipes should run at various points in the life of the instances in each layer (startup, shutdown, app deployment, or stack modification). You can then use the ability to add instances to each layer on specific days and hours to implement the run-at-predetermined-times functionality you asked about.
In such a solution you can either keep some of your instances (like the DB) always on, or even bootstrap them with the Chef recipes every day, restoring from a snapshot on start and creating a snapshot on shutdown.
Another AWS service you can use is Data Pipeline (http://aws.amazon.com/datapipeline/). It is designed to move data periodically between data sources, for example from a MySQL database to Amazon Redshift, the data warehouse service. But you can also use it to trigger arbitrary shell scripts (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html) and schedule them to run under various conditions, like every hour/day or at specific times (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-schedules.html).
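A rough sketch of what such a pipeline definition could look like; the ids, script path, and schedule are made up for illustration, so check the ShellCommandActivity docs above for the exact fields:

```json
{
  "objects": [
    { "id": "DailySchedule", "type": "Schedule",
      "period": "1 day", "startDateTime": "2014-01-01T09:00:00" },
    { "id": "CallRunner", "type": "Ec2Resource",
      "instanceType": "t1.micro", "terminateAfter": "1 hour",
      "schedule": { "ref": "DailySchedule" } },
    { "id": "RunCalls", "type": "ShellCommandActivity",
      "command": "/home/ec2-user/run_calls.sh",
      "runsOn": { "ref": "CallRunner" },
      "schedule": { "ref": "DailySchedule" } }
  ]
}
```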
A simple path here would be just to create an EC2 instance in AWS and put the components needed to run your app there. A thorough walk-through is here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/get-set-up-for-amazon-ec2.html
Essentially you will create an EC2 virtual machine, which for most purposes you can treat just like any other Linux server. You can install MySQL on it, copy your script there, and run it. Of course, whatever container or support libraries your code requires will need to be installed as well.
You don't say what OS you are using locally, but if it is Mac or Linux, you should be able to follow almost the same process to get your script running on an EC2 instance that you used on your local machine.
As you get to know AWS, there are more sophisticated services you can use for deployment, infrastructure orchestration, database services, and so on. But just to get started, running a script from a virtual machine should be pretty straightforward.
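For the run-at-a-set-time part, a plain cron entry on that EC2 instance is usually enough; a sketch, where the script path is hypothetical:

```shell
# In `crontab -e` on the instance: run the dialer script every day at
# 09:00 server time, appending its output to a log file.
0 9 * * * /home/ec2-user/call_app/run_calls.sh >> /home/ec2-user/call_app/cron.log 2>&1
```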
I recently developed a Twilio application using Ruby on Rails for the backend and found Heroku extremely simple to set up and launch on. While Heroku does cost more than AWS, I found that the time I saved using Heroku more than made up for it. As an early-stage startup, we wanted to spend our time developing important features, not "wasting" time optimizing our AWS cloud.
However, while I believe Heroku is ideal for early-stage websites/startups, I do believe hosting should be re-evaluated once a company reaches a certain size. At some point it becomes economically viable to devote resources to optimizing an AWS cloud solution, because it will be cheaper than Heroku in the long run.

Service Unavailable in IIS

When my classic ASP application makes a bad call to SQL Server data, I get this message across my entire site: "Service Unavailable". The site stopped. It is on a remote host, and I don't know what to do. What can I tell their "support team" so they can fix it?
If you check Administrative Tools/Event Viewer - Application log, you will probably see an error message.
This should give you more information as to why the application pool died or why IIS died.
If you paste it into your question, we should be able to narrow things down a bit.
Whenever there are a number of consecutive errors in your ASP.NET pages, the application pool may shut down. There's a tolerance level, typically 5 errors in 10 minutes or so. Beyond this level, IIS will stop the service. I've run into a lot of problems due to this behavior.
What you can do is either fix all your websites (which will take time), increase the tolerance level, or just disable the auto-shutdown system. Here's how:
Run IIS Manager.
Right-click the 'Application Pools' node in the left sidebar and open its properties.
Click the 'Health' tab.
Uncheck 'Enable Rapid-Fail Protection',
or change the tolerance level.
Hope that helped.
One reason you can get this is if the application pool has stopped.
Application pools can stop if they error. Usually after 5 errors in 5 minutes, IIS shuts down the AppPool. This is part of rapid-fail protection; it can be disabled for an AppPool, otherwise the AppPool has to be restarted every time it happens.
These settings can be changed by the IIS administrator. It looks like you can set up a script to restart an app pool, so you should be able to set up a new web application (in a different app pool) to restart your stopped app pool. The hoster might not like that, though.
The best result for you would be to catch all the exceptions before they make it out into IIS.
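On IIS 7 and later, the rapid-fail settings can also be changed from the command line with appcmd; a sketch, assuming an application pool named "MyAppPool":

```
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /failure.rapidFailProtection:false

REM or raise the crash tolerance instead of disabling protection outright:
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /failure.rapidFailProtectionMaxCrashes:20
```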
It could be a SQL exception in your Application_Start (or similar) method in Global.asax. If the application (the ASP.NET worker process) can't start, it can't run, so the worker process has to shut down.
