I am using Solr (v4) and am sporadically getting the following exception:
Timeout occured while waiting response from server at: http://localhost:8983/solr
I am assuming that I can change the timeout parameter through a config file or via my code in Solr (which I believe I already reduced a few weeks ago). Besides changing the timeout period in the config/code (and checking why my code, or perhaps Solr, is taking so long on the connection), is there anything else I can look into to troubleshoot this issue?
Update:
This seems to occur around the time I commit a few documents to Solr (which are well defined). However, a few of them might already be in Solr, and I'm not sure whether that is causing any issues with Solr.
Edit:
What I mentioned in my first update seems to be the case, though I am not entirely certain.
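For reference, the indexing path boils down to something like the sketch below. This assumes a SolrJ 4.x HttpSolrServer client, and the URL, field names, and timeout values are illustrative rather than exact; as far as I understand, re-adding a document whose uniqueKey already exists just overwrites the old copy rather than failing.

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class IndexSketch {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        server.setConnectionTimeout(5000); // time allowed to open the TCP connection, in ms (illustrative)
        server.setSoTimeout(30000);        // read timeout, in ms -- the one behind "Timeout occured while waiting response"

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");        // hypothetical uniqueKey value
        doc.addField("title_s", "example"); // hypothetical field
        server.add(doc);     // a document whose uniqueKey already exists is overwritten, not duplicated
        server.commit();     // this is roughly where the timeout shows up for me

        server.shutdown();   // release the underlying HttpClient resources
    }
}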
Are you caching or pooling the HttpClient or its connections? If so, do you by any chance have a firewall between your client and the Solr server?
It might be a case of a long-running connection being shut down by the firewall, in which case all subsequent attempts to use that connection will either have their packets dropped (leading to a timeout) or get an RST packet back.
If you can replicate it, try running Wireshark on the client and see what happens to the connection. But if it is the firewall, it is usually very hard to replicate (you need to create a connection gap of X hours).
I've deployed a Django app on an AWS EC2 micro instance and a React app on a GCP e2-micro instance, and I ran into almost exactly the same problem on both: the server randomly becomes unresponsive and unreachable during heavy I/O operations. It happens almost every time I try to install a large package such as tesseract, but it sometimes freezes even when I'm just trying to run a React app with npm start. The monitoring graphs all have one thing in common: very high CPU usage. Even after the server becomes unreachable, the CPU meter continues to rise; the EC2 instance usually reaches almost 100%, while the e2 instance goes beyond 100% to something like 140%. At some point CPU usage stabilizes at about 50%, but the server is still unreachable over SSH.
The server sometimes recovers by itself after hours of being unreachable, but usually I end up having to force-stop and restart it. This causes the public IPv4 address to change, which I really don't like, so I want to find out why my server keeps becoming unresponsive.
Here is what I've installed on my server:
ssh-server
vscode-server
On the GCP e2 instance I've also installed npm, React, and some UI packages. A simple React app should not involve so much I/O that it makes the server unresponsive, so I'm beginning to think I have something configured wrong, but I have no clue what that might be. Please help me. Thank you!
I had the same issue: I used the free-tier t2.micro and it was not keeping up with all the processes that needed to be handled when executing npx create-react-app react-webapp. I had to reboot it at least twice to be able to SSH into it again.
Upgrading the instance type to c5a.large solved the problem; hope this helps.
From what I can tell, if I make a bad query or restart the server while a connection is open, PQresultStatus() returns PGRES_FATAL_ERROR in both cases (when handling the result of PQexec). I'd like to be able to tell the difference because I'm working on a retry mechanism. Is that possible?
PQstatus() looks like it's the answer, and from my tests it seems to be, but the documentation is a bit mealy-mouthed about its reliability.
I'm running PostgreSQL 9.4 on Debian 8.2.
I am running Tomcat 6 and have a connectionTimeout set to 3000 (3 seconds).
I am trying to find out: on the Tomcat server itself, is there any way for me to know how many connection timeouts are occurring?
In Tomcat 6 there isn't anything that explicitly counts connection timeouts. (In later versions you could enable debug logging, which would provide details of every failure.) What you can do is use JMX to look at the error count. Every failed request increments the error count. Be aware that any response code >= 400 increments it, so there is plenty of legitimate traffic that could trigger this.
In a JMX browser (e.g. JConsole) you need to look at the errorCount attribute of Catalina:type=GlobalRequestProcessor,name=http-8080 (assuming an http connector on port 8080).
For a more accurate figure you could use the access log to count up the number of requests with >=400 response codes. The errorCount less that figure should be close to the number of connection timeouts in 6.0.x. In later versions that won't work because the access logging was improved to capture more failed requests.
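If you would rather poll that counter from code than through JConsole, a small standalone JMX client along the following lines should work. This is only a sketch: it assumes remote JMX has been enabled on the Tomcat JVM (the port 9010 used here is arbitrary), and it reads the same MBean and attribute named above; adjust the connector name if yours is not http-8080.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TomcatErrorCount {
    public static void main(String[] args) throws Exception {
        // Assumes the Tomcat JVM was started with remote JMX enabled,
        // e.g. -Dcom.sun.management.jmxremote.port=9010 (port is illustrative).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Same MBean as in JConsole; change http-8080 if your connector differs.
            ObjectName processor = new ObjectName(
                    "Catalina:type=GlobalRequestProcessor,name=http-8080");
            Object errorCount = connection.getAttribute(processor, "errorCount");
            System.out.println("errorCount = " + errorCount);
        } finally {
            connector.close();
        }
    }
}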
I am trying to use MongoDB on Ubuntu 11.10 with CakePHP 1.3 using the cakephp-mongodb driver from ichikaway, but I am seeing persistent connection problems.
My model is very simple; I am attempting a connection and a read:
$this->loadModel('Mongo.Mpoint');
$data = $this->Mpoint->find('first');
However, the result is inconsistent. A significant amount of the time the server returns:
Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
However, issuing a refresh, or several refreshes in quick succession, will eventually deliver the expected data. It feels like the server is going to sleep and needs to be woken up, as repeatedly hitting return does not generate errors, but this is subjective. The crash occurs in the find, not in the connection itself.
I have rockmongo installed on the server, and it never fails to connect. Also, I see the same behaviour if I point the connection at a different server (same version of Mongo, but on CentOS), so I do not believe the issue is with MongoDB itself.
I have attempted setting the connection to persistent and directly setting the timeout, all without success.
My colleague, who also has a copy of the app running directly on the centos server, says he saw this problem initially, but 'it went away'.
From what I can see, therefore, the issue is most likely in the CakePHP layer, since connections to different servers yield the same result and a direct connection in PHP is trouble-free; but adding diagnostics does not reveal anything of immediate use. It is rather odd to see a complete PHP crash with nothing useful returned from the server. Has anyone else seen this behaviour before and fixed it?
Try updating the MongoDB PHP driver. I had the same problem, and I just upgraded from 1.2.7 to 1.2.9; it seems to have been solved.
Just for reference, I am using:
Ubuntu 11.10 x86
XAMPP (LAMPP) 1.7.7 with php-mongo-driver 1.2.9
MongoDB 2.0.2
CakePHP 2.0.6
cakephp-mongodb driver from ichikaway (branch: cake2.0)
When my classic ASP application makes a bad call to SQL Server, I get this message across my entire site: Service Unavailable. It stopped. My site is on a remote host and I don't know what to do. What can I tell their "support team" so they can fix this?
If you check Administrative Tools / Event Viewer - Application log, you will probably see an error message.
This should give you more information as to why the application pool died or why IIS died.
If you paste this into your question we should be able to narrow things down a bit.
Whenever there are a number of consecutive errors in your ASP.NET page, the application pool may shut down. There's a tolerance level, typically 5 errors in 10 minutes or so; beyond this level, IIS will stop the service. I've run into a lot of problems due to this error.
What you can do is either fix all your websites (which will take time), increase the tolerance level, or just disable the automatic shutdown. Here's how:
Run IIS
Right click on the node 'Application Pools' in your left sidebar.
Click on the tab 'Health'
Remove the check on 'Enable Rapid Fail Protection'
or change the tolerance level.
Hope that helped.
One reason you can get this is if the application pool has stopped.
Application pools can stop if they error. Usually, after 5 errors in 5 minutes, IIS shuts down the AppPool. This is part of Rapid-Fail Protection; it can be disabled for an AppPool, otherwise the AppPool has to be restarted every time this happens.
These settings can be changed by the IIS administrator. It looks like you can set up a script to restart an app pool, so you should be able to set up a new web application (in a different app pool) to restart your closed app pool. Your hoster might not like that, though.
The best result for you would be to catch all the exceptions before they get out into IIS.
It could be a SQL exception in your Application_Start (or similar) method in Global.asax. If the application (the ASP.NET worker process) can't start, it can't run, so the worker process has to shut down.