I am trying to use MongoDB on Ubuntu 11.10 with CakePHP 1.3, using the cakephp-mongodb driver from ichikaway, but I am running into ongoing connection problems.
My model is very simple. I am attempting a connect and a read:
$this->loadModel('Mongo.Mpoint');
$data = $this->Mpoint->find('first');
However, the result is inconsistent. A significant amount of the time the server returns:
Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
However, issuing a refresh, or several refreshes in quick succession, will eventually deliver the expected data. It feels as though the server is going to sleep and needs to be woken up, as repeatedly hitting return after that does not generate errors, but this is subjective. The crash occurs in the find, not in the connection itself.
I have RockMongo installed on the server, which never fails to connect. Also, I see the same behaviour if I point the connection at a different server (same version of Mongo, but on CentOS), so I do not believe the issue is with MongoDB itself.
I have attempted setting the connection to persistent and directly setting the timeout, all without success.
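In case it helps, the sort of datasource configuration I have been experimenting with looks roughly like this (only a sketch from memory: the datasource naming follows my reading of the ichikaway README for CakePHP 1.3, and the host, database and timeout values are just placeholders):
// app/config/database.php (CakePHP 1.3) -- sketch of the attempted settings
class DATABASE_CONFIG {
    // Datasource handled by the ichikaway cakephp-mongodb plugin
    var $mongo = array(
        'driver'     => 'mongodb.mongodbSource', // plugin datasource, as I read the README
        'host'       => 'localhost',             // placeholder host
        'port'       => 27017,
        'database'   => 'mydb',                  // placeholder database name
        'persistent' => true,                    // the persistent setting I tried
        'timeout'    => 5000,                    // the timeout (ms) I tried; value illustrative
    );
}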
My colleague, who also has a copy of the app running directly on the centos server, says he saw this problem initially, but 'it went away'.
From what I can see, therefore, the issue is most likely in the CakePHP layer, since connections to different servers yield the same result and a direct connection in PHP is trouble-free; yet adding diagnostics does not reveal anything of immediate use. It is rather odd to see a complete PHP crash with nothing useful returned from the server. Has anyone else seen this behaviour before and fixed it?
Try updating the MongoDB PHP driver. I had the same problem, and I just upgraded from 1.2.7 to 1.2.9 and it seems to have been solved.
Just for reference, I am using:
Ubuntu 11.10 x86
XAMPP (LAMPP) 1.7.7 with php-mongo driver 1.2.9
MongoDB 2.0.2
CakePHP 2.0.6
cakephp-mongodb driver from ichikaway (branch: cake2.0)
From what I can tell, if I make a bad query or I restart the server while a connection is open, in both cases PQresultStatus() will return PGRES_FATAL_ERROR (when handling the result of PQexec). I'd rather know the difference, because I'm working on a retry mechanism. Is that possible?
PQstatus() looks like it's the answer, and from my tests it seems like it is, but the documentation is a bit mealy-mouthed about its reliability.
I'm running PostgreSQL 9.4 on Debian 8.2.
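For what it's worth, the check I am experimenting with looks roughly like this (only a sketch of the idea; it relies on the standard libpq calls PQstatus, PQresultStatus and PQreset, and the helper name is mine):
/* Sketch: distinguish a bad query from a lost connection after PQexec.
 * Assumes conn was connected successfully at some earlier point. */
#include <stdio.h>
#include <libpq-fe.h>

/* Returns 1 if the statement should be retried (the connection had died
 * and has been re-established), 0 otherwise. */
static int exec_with_check(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    ExecStatusType st = PQresultStatus(res);

    if (st == PGRES_COMMAND_OK || st == PGRES_TUPLES_OK) {
        PQclear(res);
        return 0;                         /* success, nothing to retry */
    }

    fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    PQclear(res);

    if (PQstatus(conn) == CONNECTION_BAD) {
        /* The whole connection is gone (e.g. server restart), not just
         * this statement: try to re-establish it and signal a retry. */
        PQreset(conn);
        return PQstatus(conn) == CONNECTION_OK;
    }

    /* The connection still looks healthy, so this was most likely a bad
     * query; retrying the same SQL would not help. */
    return 0;
}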
I have Zeppelin 0.7.2 installed and connected to a Spark 2.1.1 standalone cluster.
It had been running fine for quite a while, until I changed the Spark workers' settings to double the workers' cores and executor memory. I also tried to change the SPARK_SUBMIT_OPTIONS and ZEPPELIN_JAVA_OPTS parameters in zeppelin-env.sh to make it request more "Memory per node" on the Spark workers, but it always requests only 1GB per node, so I removed them.
I had an issue while developing a paragraph, so I tried to set zeppelin.spark.printREPLOutput to true in the web interface. But when I tried to save that setting, I only got a small transparent red box at the right side of my browser window, so the setting failed to save. I also got that small red box when I tried to restart the Spark interpreter. The same happens when I try to change the parameters of any other interpreter or to restart it.
There is nothing in the log files, so I am quite puzzled by this issue. Has any of you ever experienced it? If so, what kind of solution did you apply to fix it? If not, do you have any suggestions on how to debug it?
I am sorry for the noise. The issue was actually due to my silly mistake.
I actually have Zeppelin behind nginx. I recently played around with a new CMS and did not separate the configuration of the CMS from the proxy to Zeppelin. So any access to a location containing /api/, such as restarting Zeppelin interpreters or saving the interpreters' settings, got blocked. Separating the site configuration of the CMS from the proxy to Zeppelin in nginx solved the problem.
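For anyone who runs into the same thing, the working setup is roughly the following: a dedicated server block that proxies everything, including /api/ and the websocket endpoint, straight to Zeppelin, kept entirely separate from the CMS site. A sketch (the hostname and the Zeppelin port are placeholders for my setup):
# nginx: separate vhost that proxies everything to Zeppelin,
# so no CMS rule ever touches /api/ (hostname/port are placeholders)
server {
    listen 80;
    server_name zeppelin.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # Zeppelin's default port
        proxy_set_header Host $host;
    }

    # Zeppelin's websocket endpoint needs the upgrade headers
    location /ws {
        proxy_pass http://127.0.0.1:8080/ws;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}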
I am having problems with the Bluemix Monitoring and Analytics service.
I have 2 applications with bindings to a single Monitoring and Analytics service. Every ~1 minute I get the following log line in both apps:
ERR [Resource Monitoring][ERROR]: JsonSender request error: Error: unsupported certificate purpose
When I remove the bindings, the log message does not appear. I also grepped my code for anything related to "JsonSender" or "Resource Monitoring" and did not find anything.
I am doing some major refactoring work on our server, which might have broken things. However, our code does not use the Monitoring service directly (we don't have a package that connects to the monitoring server or anything like that), so I would be very surprised if the problem were due to the refactoring changes. I did not check the logs before making the changes.
Any ideas would help.
Bluemix has 3 production environments: ng, eu-gb, and au-syd. I tested ng and eu-gb, each with 2 applications bound to the same M&A service, and tested with multiple instances. They all work fine.
Meanwhile, I received a similar problem report from someone who says they are using Node.js 4.2.6.
So there is some more information we need in order to identify the problem:
1. Which version of Node.js are you using? (the Bluemix default or another one)
2. Which production environment are you using? (ng, eu-gb, au-syd)
3. Are there any environment variables you are using in your application? (either created in code or set as user-defined variables)
4. One more thing: could you please try deleting the M&A service and creating it again, in case we are trapped in a previous fault of M&A:
cf ds <your M&A service name>
cf cs MonitoringAndAnalytics <plan> <your M&A service name>
Node.js versions 4.4.* all appear to work.
Node.js uses OpenSSL, and apparently it did/does not like how one of the M&A server certificates was constructed.
Unfortunately, Node.js does not expose the OpenSSL verify-purpose API.
Please consider upgrading to 4.4 while we consider how to change the server's certificates in the least disruptive manner, as there are other application types that do not have an issue with them (e.g. Liberty and Ruby).
Setting the Node.js version to 4.2.4 in package.json worked for me; however, this is a workaround that bypasses the problem rather than fixing it. The actual fix is being handled by the core team. Thanks.
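In case it is useful, the only change needed was the engines entry in package.json (a minimal sketch; 4.2.4 is simply the version that worked for me, and it assumes the Node.js buildpack reads this field to pick the runtime version):
{
  "engines": {
    "node": "4.2.4"
  }
}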
The Microsoft Node.js SQL Server driver (https://github.com/Azure/node-sqlserver) has not had any commits for 11 months. Does anyone know what's going on with this effort? My company is using it actively, but we have run across some issues that led me to the repo and to the discovery that it seems to have been abandoned. There are also lots of open bugs.
Should we give up on this driver and try another? Any recommendations?
Microsoft, please weigh in here.
I emailed the main Microsoft contributor and he was very helpful, although he did admit that officially MS has never declared one way or the other whether they are going to continue support. Guess we'll wait and see.
In regard to my original problem, this info may help someone.
I was using queryRaw and listening for events to build the response. This method allows the user to submit multiple SQL queries in one request (just separate them with ;). A large text datatype field was getting truncated and I couldn't figure out why. It turns out that the 'more' parameter supplied by the driver means that you must concatenate the returned data yourself.
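In case it saves someone the same trial and error, the event handling ended up looking roughly like this (a sketch: the connection string and query are placeholders, and the 'column' event signature is as I understood it from the driver's documentation):
// Sketch: consuming queryRaw via events and concatenating large text
// columns, which the driver can deliver in several chunks per row.
var sql = require('node-sqlserver');

var connStr = "Driver={SQL Server Native Client 11.0};Server=myserver;Database=mydb;Trusted_Connection={Yes}";
var rows = [];
var currentRow;

var stmt = sql.queryRaw(connStr, "SELECT id, bigTextColumn FROM myTable");

stmt.on('row', function () {
    currentRow = [];        // a new row has started
    rows.push(currentRow);
});

stmt.on('column', function (idx, data, more) {
    // 'more' is true when another chunk for this same column follows.
    if (currentRow[idx] === undefined) {
        currentRow[idx] = data;
    } else {
        // We have already seen data for this column in this row, so the
        // driver is streaming a large value: concatenate, don't overwrite.
        currentRow[idx] += data;
    }
});

stmt.on('error', function (err) {
    console.error(err);
});

stmt.on('done', function () {
    console.log('got %d rows', rows.length);
});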
Lots of trial and error when figuring out this driver.
I am using Solr (v4), and I am sporadically getting the following exception:
Timeout occured while waiting response from server at: http://localhost:8983/solr
I am assuming that I can change the timeout parameter through a config file or via my code (and I believe I already reduced it a few weeks ago). Besides changing the timeout period in the config/code, and checking why my code or perhaps Solr is taking so long to respond, is there anything else I can look into to troubleshoot this issue?
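For reference, the timeouts are set in my indexing code roughly like this (a sketch only; it assumes the SolrJ HttpSolrServer client, and the URL and timeout values shown are just examples):
// Sketch: SolrJ 4.x client with explicit connection/read timeouts
// (the URL and the timeout values are illustrative).
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class SolrClientFactory {
    public static HttpSolrServer create() {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        server.setConnectionTimeout(5000); // ms allowed to establish the TCP connection
        server.setSoTimeout(30000);        // ms allowed to wait for a response (socket read)
        return server;
    }
}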
Update:
This seems to occur around the time when I try to commit a few documents to Solr (documents which are well defined); however, a few of them might already be in Solr, and I'm not sure whether that is causing any issues.
Edit:
What I mentioned in my first update seems to be the case, though I am not entirely certain.
Are you caching or pooling the HttpClient or the connection? If so, do you by any chance have a firewall between your client and the Solr server?
It might be a case of a long-running connection being shut down by the firewall, in which case all subsequent attempts to use that connection will either have their packets dropped (leading to a timeout) or get an RST packet back.
If you can replicate it, try running Wireshark on the client and seeing what happens to the connection. But if it is the firewall, it is usually very hard to replicate (you need to create a connection gap of X hours).