When a bad SQL Server call is made from my classic ASP application, my entire site starts returning this message: Service Unavailable. The site stops working. It is hosted on a remote server and I don't know what to do. What can I tell their support team so they can fix it?
If you check Administrative Tools / Event Viewer - Application log, you will probably see an error message.
This should give you more information as to why the application pool or IIS died.
If you paste this into your question we should be able to narrow things down a bit.
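If you can run code on the box but cannot open the Event Viewer GUI, a small C# sketch like the one below can dump recent errors from the Application log; the 50-entry window and the warning filter are illustrative choices, not anything from the original answer.

// Sketch: print recent errors/warnings from the Application event log.
// The 50-entry window is arbitrary; requires permission to read the log.
using System;
using System.Diagnostics;

class RecentAppErrors
{
    static void Main()
    {
        using (var log = new EventLog("Application"))
        {
            int count = log.Entries.Count;
            for (int i = Math.Max(0, count - 50); i < count; i++)
            {
                EventLogEntry entry = log.Entries[i];
                if (entry.EntryType == EventLogEntryType.Error ||
                    entry.EntryType == EventLogEntryType.Warning)
                {
                    Console.WriteLine(entry.TimeGenerated + " [" + entry.Source + "] " + entry.Message);
                }
            }
        }
    }
}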
Whenever your ASP.NET application throws a number of consecutive errors, the application pool may shut down. There is a tolerance level (by default, 5 failures within 5 minutes); beyond it, IIS stops the pool. I've run into a lot of problems because of this.
What you can do is either fix the errors on your websites (which takes time), increase the tolerance level, or disable the automatic shutdown entirely. Here's how:
Open IIS Manager.
Right-click the 'Application Pools' node (or the specific pool) in the left sidebar and choose 'Properties'.
Go to the 'Health' tab.
Clear the 'Enable Rapid-Fail Protection' checkbox,
or change the tolerance level (the number of failures and the time period).
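If you manage the pool from a script or deployment tool instead of the GUI, the same settings can be changed programmatically. Below is a minimal C# sketch using the Microsoft.Web.Administration API (IIS 7 and later); the pool name "MyAppPool" and the limits shown are placeholders, not values taken from the question.

// Sketch: raise or disable Rapid-Fail Protection for one application pool.
// Requires a reference to Microsoft.Web.Administration.dll and admin rights;
// "MyAppPool" and the limits are illustrative placeholders.
using System;
using Microsoft.Web.Administration;

class RapidFailConfig
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];

            // Option 1: raise the tolerance level (failures per time window).
            pool.Failure.RapidFailProtectionMaxCrashes = 20;                    // default is 5
            pool.Failure.RapidFailProtectionInterval = TimeSpan.FromMinutes(5); // default window

            // Option 2: disable Rapid-Fail Protection entirely.
            // pool.Failure.RapidFailProtection = false;

            serverManager.CommitChanges();
        }
    }
}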
Hope that helped.
One reason you can get this is if the application pool has stopped.
Application pools can stop if they error. By default, after 5 failures in 5 minutes IIS shuts down the AppPool. This is part of Rapid-Fail Protection; it can be disabled per AppPool, otherwise the AppPool has to be restarted every time it trips.
These settings can be changed by the IIS administrator. It looks like you can set up a script to restart an app pool, so you should be able to create a small web application (in a different app pool) that restarts your stopped one; see the sketch below. Your host might not like that, though.
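As a rough illustration of that idea, a small helper using the Microsoft.Web.Administration API could detect a stopped pool and start it again. This is only a sketch, assuming you can run it with sufficient rights on the server; "MyAppPool" is a placeholder name.

// Sketch: start an application pool that Rapid-Fail Protection has stopped.
// Needs Microsoft.Web.Administration.dll and admin rights on the server;
// "MyAppPool" is a placeholder.
using System;
using Microsoft.Web.Administration;

class AppPoolRestarter
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"];

            if (pool.State == ObjectState.Stopped)
            {
                pool.Start(); // bring the pool back up
                Console.WriteLine("Pool restarted at " + DateTime.Now);
            }
        }
    }
}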
The best result for you, though, would be to catch all the exceptions before they escape to IIS.
It could be a SQL exception in your Application_Start (or similar) method in Global.asax. If the application (the ASP.NET worker process) can't start, it can't run, so the worker process shuts down.
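If the failure really is happening at startup, one approach is to keep exceptions from escaping Application_Start and to log anything that reaches Application_Error, so repeated crashes don't trip Rapid-Fail Protection. The following Global.asax.cs sketch assumes ASP.NET; the log file path is an illustrative placeholder.

// Global.asax.cs sketch: stop startup/runtime exceptions from escaping to IIS.
// The log file path is a placeholder; use whatever logging you already have.
using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        try
        {
            // e.g. warm up the SQL Server connection, load configuration, etc.
        }
        catch (Exception ex)
        {
            Log(ex); // don't let a startup failure take down the worker process
        }
    }

    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();
        if (ex != null)
        {
            Log(ex);
            Server.ClearError(); // stop the exception from propagating to IIS
        }
    }

    private static void Log(Exception ex)
    {
        File.AppendAllText(@"C:\logs\app-errors.txt",
            DateTime.UtcNow + " " + ex + Environment.NewLine);
    }
}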
I've deployed a Django app on an AWS EC2 micro instance and a React app on a GCP e2-micro instance before, and I ran into almost exactly the same problem: the server randomly becomes unresponsive and unreachable during heavy I/O operations. It happens almost every time I try to install a large package such as tesseract, and it sometimes freezes even when I'm just trying to run a React app with npm start. I've looked at the monitoring, and the incidents all have one thing in common: very high CPU usage. Even after the server becomes unreachable the CPU meter keeps rising; the AWS EC2 instance usually reaches almost 100%, while the GCP e2 instance goes beyond 100% to something like 140%. At some point the CPU usage stabilizes at about 50%, but the server is still unreachable over SSH.
The server sometimes recovers on its own after hours of being unreachable, but usually I end up having to force-stop and restart it. That changes the public IPv4 address, which I'd rather avoid, so I want to find out why my server keeps becoming unresponsive.
Here is what I've installed on my server:
ssh-server
vscode-server
And then on the GCP e2 instance I've also installed npm, React, and some UI packages. A simple React app should not generate enough I/O to make the server unresponsive, so I'm starting to wonder whether I have something configured wrong, but I have no clue what that would be. Please help me. Thank you!
I had the same issue: I was using the free-tier t2.micro and it could not keep up with all the processes spawned by npx create-react-app react-webapp. I had to reboot it at least twice before I could SSH into it again.
Upgrading the instance type to c5a.large solved the problem; hope this helps.
I recently upgraded my website to DotNetNuke (DNN) version 9.4.1, but I'm now having a performance issue: the website runs slowly. I have searched for this, applied the performance configuration in the server settings, and also set up caching at the page level.
I have minified the JS and CSS files and updated the setting value in the host settings table.
Thanks a lot in advance.
Check the DNN Scheduler to see if any active jobs are taking longer than they should. For example, if the Site Crawler job is constantly running, check the Portals folder to make sure everything in it actually belongs there. The crawler rebuilds the search index, and if you have a lot of files that can take hours to complete. If the files do belong there, disable the crawler job and run it during your slowest time of day (1:00 AM?). I ran into this problem on a server that had hundreds of thousands of documents in the Portals folder, and ended up solving it by running the crawler between 1:00 AM and 5:00 AM for a few days until it had indexed all of the files. Once the files are indexed, only changed and new files need to be processed, so it should only be a burden the first time it runs.
Another possible cause is exceptions. If your site is throwing a large number of exceptions, it will slow down. The handling of the exceptions and then the logging of them (to the DNN EventLog table in the database and the Log4Net files) can be brutal if your site is constantly throwing exceptions. If your site is also running in DEBUG mode, the performance hit is multiplied many times over (the author cites at least 30x) because .NET collects additional information about each exception while in debug mode. That will be brutal for your site's performance.
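As a quick sanity check, you can confirm at runtime whether the site really is compiled in debug mode; here is a minimal sketch (the class name is just an example).

// Minimal runtime check: returns true when <compilation debug="true"> is set
// in web.config. Call it from any code-behind or DNN module; the class name
// is only an example.
using System.Web;

public static class DebugModeCheck
{
    public static bool IsSiteInDebugMode()
    {
        HttpContext context = HttpContext.Current;
        return context != null && context.IsDebuggingEnabled;
    }
}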
Check the server logs to see how often IIS is recycling the application pool for your DNN site. If it happens often, that is also a sign of a large number of exceptions being thrown, assuming you are using the default IIS application pool settings; by default, IIS recycles the application pool if too many failures occur within a short period of time. If you also have the option enabled that brings up a new instance of your site and runs it side by side before IIS terminates the existing instance, doing that while your site is throwing exceptions can cause a bottleneck and cripple performance. In this situation I usually stop IIS from recycling the application pool when too many failures occur in a short period. That may not be the best option for you, but if you stay on top of the exceptions being thrown on the site, you can disable that and let IIS run instances side by side after an app recycle. (This is nice to have when you recycle during active periods: all existing traffic completes on the old instance while new traffic goes to the new instance, and once all traffic is hitting the new instance IIS terminates the older one.)
If none of the above helps, run SQL Profiler against your database to see if there is any extreme database activity going on. Also check for any database locks.
There are a lot of possible causes that can slow down DNN. The best way to find out what is going on is to run a profiler on the server (Red Gate ANTS Profiler or Telerik/Progress JustTrace).
I have scheduled a DNN job in the Host => Schedule section and it can take 3 to 4 hours to complete. But the process never manages to finish, because a "Web Server Updated" message pops up randomly in the Event Viewer section, my application restarts, the scheduled process is stopped, and the scheduler is forced to restart. I'm using DNN version 07.03.02.
Does anyone know the reason for this "Web Server Updated" message? Should I contact my hosting provider, or is it a DNN problem?
Please review the screenshots below.
https://www.dropbox.com/s/fjni9an5ajwghcq/2017-04-03%2011_16_44-Journal.png?dl=0
https://www.dropbox.com/s/kzhzv6tcvrq3z7b/2017-04-03%2011_16_44-Journal_1.png?dl=0
This issue is due to the worker process restarting; DNN inserts the "Web Server Updated" log entry when the application starts up.
For processes running that long, I'd recommend moving them outside of DNN, given the inherent nature of web applications. But if the job must stay inside DNN, make sure you have Always On enabled in IIS. I'd also recommend using a monitoring or similar solution to ensure the site receives traffic all the time, at least every 10-15 minutes, so you stay ahead of the default 20-minute idle shutdown (see the sketch after the note below).
Note: Even with the best configuration possible, it is NOT guaranteed that your process will run for 3-4 hours without interruption.
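One way to keep traffic hitting the site every few minutes is a small external keep-alive pinger run as a scheduled task or service. The sketch below is only an illustration; the URL and the 10-minute interval are placeholders you would adjust to stay under your idle timeout.

// Sketch of an external keep-alive pinger, run outside DNN (for example as a
// Windows scheduled task or a long-running console app). The URL and the
// 10-minute interval are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class KeepAlive
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            while (true)
            {
                try
                {
                    HttpResponseMessage response = await client.GetAsync("https://www.example.com/");
                    Console.WriteLine(DateTime.Now + ": " + (int)response.StatusCode);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(DateTime.Now + ": ping failed - " + ex.Message);
                }

                await Task.Delay(TimeSpan.FromMinutes(10)); // ping before the idle timeout hits
            }
        }
    }
}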
Hello, I am using the IBM Worklight application development platform v6.0.1. I am having a problem with the Worklight server, which was working fine until now. Whenever I try to start the server, it fails to start and displays the following error:
Worklight server was unable to start within 120 seconds. If the server requires more time, try increasing the timeout in the server editor.
I increased the timeout many times but the problem still persists. Can anyone help?
Thanks in advance.
Please read the following documentation IBM has provided for the issue described above:
http://www-01.ibm.com/support/docview.wss?uid=swg21668175
A solution is provided below:
To resolve this problem, you can apply one or both of the following workarounds.
Complete the following steps to increase the timeout default value:
1. Open the Servers view.
2. Double-click the Worklight Development Server to open the Overview pane.
3. Expand the Timeouts section.
4. Increase the value in the Start (in seconds) field. Consider doubling the default value; that is, set it to 120 seconds.
Complete the following steps to remove unnecessary applications from the Worklight Development Server.
1. Open the Servers view.
2. Right-click the Worklight Development Server.
3. Select the Add and Remove option.
4. Remove all applications that you do not intend to work on.
After you have made any of these changes to the configuration, restart the server.
As you can see, having a lot of applications deployed to your server can increase the startup time. Have you tried increasing the timeout beyond 120 seconds, as your error message suggests?
My WPF application currently only shows a screen with some controls; it doesn't connect to a database or have any other functionality. It's a simple UI screen.
When I tested it on some computers (Windows XP SP2), I found it took more than 15 seconds to start up. They were all in our domain.
I grabbed a similar computer with only Windows installed, and the application took 2 seconds to start up.
Then I added that computer to our domain, and testing with a domain user showed that it also took 15 seconds to start up. I tested again with the previous (local) user and it was still fast. I created another local user, but it takes the same 15 seconds as the domain user.
I've added other local users but they were also slow.
To summarize: the application starts fast (2 sec) in only one user, the first one I tested. All other users (domain or local) are slow (15 sec).
I've been looking at Improving WPF applications startup time, but my problem seems to need a different approach. Can anyone figure out what might be happening?
I found another solution to this problem in this documentation from Microsoft.
Adding the following configuration to the app.config file will also solve the problem:
<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>
This way you don't need to change any settings on the computers; it's purely application configuration.
UPDATE:
It seems that .NET 4.0 fixed this issue, as documented on MSDN.
Is the system connected to a network but unable to reach the internet, for example because the proxy is not configured? If so, go to Internet Options (i.e. Internet Explorer properties), open the Advanced tab, and look in the tree view under Security for a checkbox along the lines of "check revoked certificates" (I'm using a German Windows, so I don't have the exact English label at hand). Uncheck it and test again.
If this fixed the problem, you have a signed assembly that is not from Microsoft, for which the .NET Framework checks for certificate revocations and times out after 15 seconds. If you disable the checking or configure the internet connection properly, you won't have to wait.
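If you want to track down which assembly is triggering the check, a small hedged sketch like this one can list which DLLs in the application folder carry an Authenticode signature; scanning the base directory for *.dll files is just an example approach, not part of the original answer.

// Sketch: list assemblies in the application folder that carry an Authenticode
// signature (candidates for the publisher-evidence / revocation check).
using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;

class SignedAssemblyScan
{
    static void Main()
    {
        string folder = AppDomain.CurrentDomain.BaseDirectory;

        foreach (string file in Directory.GetFiles(folder, "*.dll"))
        {
            try
            {
                X509Certificate cert = X509Certificate.CreateFromSignedFile(file);
                Console.WriteLine(Path.GetFileName(file) + " signed by " + cert.Subject);
            }
            catch (Exception)
            {
                // no Authenticode signature (or file not readable) - not a candidate
            }
        }
    }
}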
Does it open a file or interact with the network in some way? If not, I would suggest that whether you're logged into a domain or running as a local user is probably a red herring.
Are you building in debug or release mode? It's worth trying release mode if you haven't already, because running in debug does a load of extra error checking.
Have you checked if there are any domain policies that can affect this scenario?
I still had this problem (.NET 4.5). In my case the issue was that the computer was not connected to the internet, but there were other devices (cameras etc.) connected via GigE. The startup of every .NET application was delayed by about 20 seconds.
The solution was quite easy: just connect the computer to the internet once and start any .NET application (the first startup took about 7 seconds); after that, every startup was fast, even when the computer was no longer connected to the internet. In addition, I had to disable the TCP/IPv6 protocol (it caused a 3-5 second delay).
Another possible solution is to open the Properties of "Internet Protocol Version 4 (TCP/IPv4)", select Advanced, open the "WINS" tab, and set "Disable NetBIOS over TCP/IP".