I am getting the following error message, and as a result metrics are not being collected for SQL Server when using New Relic.
I am a little confused, as the basic server monitor for this box is working, and it looks like metrics are being collected based on previous messages in the log file.
Any thoughts or solutions are much appreciated. I've been waiting for days for help from New Relic support, but it hasn't been forthcoming.
>2013-08-21 10:25:09,974 [8] ERROR - Error sending data to connector
>System.Net.WebException: The operation has timed out
>at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
>at System.Net.HttpWebRequest.GetRequestStream()
>at NewRelic.Platform.Binding.DotNET.Request.Send()
>at NewRelic.Microsoft.SqlServer.Plugin.Communication.SqlRequest.SendData()
>at NewRelic.Microsoft.SqlServer.Plugin.MetricCollector.SendComponentDataToCollector(ISqlEndpoint endpoint)
>2013-08-21 10:25:09,975 [SqlPoller] INFO - Recorded 186 metrics
>2013-08-21 10:25:09,975 [SqlPoller] DEBUG - SqlPoller: Sleeping for 00:01:00
While most New Relic Platform plugins are supported by the plugin author, this particular plugin is supported by New Relic. Please follow up with our support team so we can help you achieve a resolution.
You can reach New Relic support here: http://support.newrelic.com/
The response from New Relic on this issue concerns the use of proxies in our network, and the lack of proxy configuration support in the SQL Server plugin at this stage.
Related
Apache Flink/Ververica Community Edition - Question
I am trying to add a custom connector to Ververica Community Edition, and it keeps giving me the following error:
"The jar contains multiple connector. Please choose one." However, it doesn't let me choose one of the jars. I am testing with the custom connectors generated from the following repo: https://github.com/deadwind4/slink/tree/master/connector-es6
My specific question: is there anything missing from this repo that we should add in order to signal to Ververica that it contains a custom connector?
The error message is misleading, and the issue is that no connector was found.
This is because Ververica Platform only supports the new connector interfaces.
Factory discovery also requires an entry in META-INF/services, which appears to be missing.
For examples of connectors that implement these interfaces, see https://github.com/Airblader/flink-connector-imap and https://github.com/knaufk/flink-faker.
(This was answered on the mailing list by Ingo Bürk; I've paraphrased his response.)
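Purely as an illustration (not part of the mailing-list answer), here is a minimal sketch of what a factory for the new interfaces looks like; the package, class name, and identifier are made up:

```java
// Minimal sketch of a connector factory using the new interfaces.
// The package, class name, and identifier below are hypothetical.
package com.example.connector;

import java.util.Collections;
import java.util.Set;

import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.factories.DynamicTableSourceFactory;

public class MyConnectorFactory implements DynamicTableSourceFactory {

    @Override
    public String factoryIdentifier() {
        // Referenced as 'connector' = 'my-connector' in the table DDL.
        return "my-connector";
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        return Collections.emptySet();
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        return Collections.emptySet();
    }

    @Override
    public DynamicTableSource createDynamicTableSource(Context context) {
        // Build and return the actual DynamicTableSource here.
        throw new UnsupportedOperationException("sketch only");
    }
}
```

For discovery, the jar also needs a file META-INF/services/org.apache.flink.table.factories.Factory containing the line com.example.connector.MyConnectorFactory; without that service-loader entry, the platform cannot find the connector.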
I am trying to figure out whether automated deployments are possible for the on-prem version of Archer GRC.
Currently it is deployed manually.
The latest versions of Archer (v6.8, v6.9) provide a limited API to allow package deployments, BUT last time I checked it doesn't allow mapping or partial installs (I could be wrong, so double-check).
The API is there, but its functionality is limited to the point that I don't see how package installation can be automated through it. I hope that in future Archer versions it will be extended to replicate the functionality available with manual package deployment (mapping, partial installs, and other options).
Technically, if you like complex and time-consuming tasks, you can decode/parse the package installation page and then write an application that replays the HTTP requests sent to the Archer server, simulating the package installation.
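To be clear, the following is only a rough sketch of that idea; every URL, header, and file name below is invented, and the real requests would have to be reverse-engineered from the installation page:

```java
// Hypothetical sketch: replaying the HTTP call behind Archer's package
// installation page. None of these URLs, headers, or fields are real
// Archer endpoints; they stand in for whatever the page actually sends.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class ArcherPackagePush {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest upload = HttpRequest.newBuilder()
                .uri(URI.create("https://archer.example.com/PackageInstall/Upload")) // invented URL
                .header("Cookie", "ASP.NET_SessionId=...")                           // captured session, placeholder
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("MyPackage.zip")))   // package file, placeholder
                .build();

        HttpResponse<String> response = client.send(upload, HttpResponse.BodyHandlers.ofString());
        System.out.println("Upload returned HTTP " + response.statusCode());
    }
}
```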
I'm not aware of any company doing something like this as of today.
If you write a product to implement proper Code/Configuration Version Control for RSA Archer, then you may be able to sell it as well :)
Good luck!
I am having problems with the Bluemix Monitoring and Analytics service.
I have 2 applications with bindings to a single Monitoring and Analytics service. Every ~1 minute I get the following log line in both apps:
ERR [Resource Monitoring][ERROR]: JsonSender request error: Error: unsupported certificate purpose
When I remove the bindings, the log message does not appear. I also grepped my code for anything related to "JsonSender" or "Resource Monitoring" and did not find anything.
I am doing some major refactoring work on our server, which might have broken things. However, our code does not use the Monitoring service directly (we don't have a package that connects to the monitoring server or anything like that), so I would be very surprised if the problem were due to the refactoring changes. I did not check the logs before making the changes.
Any ideas would help.
Bluemix has 3 production environments: ng, eu-gb, and au-syd. I tested ng and eu-gb, both with 2 applications bound to the same M&A service, and also tested with multiple instances. They all work fine.
Meanwhile, I received a report of a similar problem from someone who claims to be using Node.js 4.2.6.
So there is some more information we need in order to identify the problem:
1. Which version of Node.js are you using (the Bluemix default or another one)?
2. Which production environment are you using (ng, eu-gb, au-syd)?
3. Are there any environment variables you are using in your application (either created in code or via USER-DEFINED variables)?
4. One more thing: could you please try deleting the M&A service and creating it again, in case we are trapped in a previous fault of M&A?
cf ds <your M&A service name>
cf cs MonitoringAndAnalytics <plan> <your M&A service name>
Node.js versions 4.4.* all appear to work.
Node.js uses OpenSSL, and apparently it did/does not like how one of the M&A server certificates was constructed.
Unfortunately, Node.js does not expose the OpenSSL certificate-purpose verification API.
Please consider upgrading to 4.4 while we consider how to change the server's certificates in the least disruptive manner, as there are other application types that do not have an issue with them (e.g. Liberty and Ruby).
Setting the Node.js version to 4.2.4 in package.json worked for me; however, this is a workaround rather than a fix. The actual fix is being handled by the core team. Thanks.
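In case it helps anyone, the pinning is done through the engines field of package.json, roughly like this (a sketch; the name and version fields are placeholders):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "engines": {
    "node": "4.2.4"
  }
}
```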
The Microsoft Node.js SQL Server driver (https://github.com/Azure/node-sqlserver) has not had any commits for 11 months. Does anyone know what's going on with this effort? My company is using it actively, but we have run across some issues that led me to the repo and to the discovery that it seems to have been abandoned. There are lots of open bugs as well.
Should we give up on this driver and try another? Any recommendations?
Microsoft, please weigh in here.
I emailed the main Microsoft contributor and he was very helpful, although he did admit that officially MS has never declared one way or the other whether they will continue support. I guess we'll wait and see.
Regarding my original problem, this info may help someone.
I was using queryRaw and listening for events to build the response. This method allows the user to submit multiple SQL queries in one request (just separate them with ;). A large text datatype field was getting truncated and I couldn't figure out why. It turns out that when the driver supplies the 'more' parameter, it means more data is coming and you must concatenate the returned data yourself.
There was a lot of trial and error in figuring out this driver.
I am trying to parse an Excel 2007 (.xlsx) file using the Apache POI library on Google App Engine, but while doing so I am getting an exception (see below).
java.lang.IllegalAccessException: Class com.google.appengine.tools.development.agent.runtime.Runtime$21 can not access a member of class org.apache.poi.xssf.usermodel.XSSFSheet with modifiers "protected"
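For context, even a minimal read like the following sketch (the file name is just a placeholder) is enough to hit the failing XSSFFactory.createDocumentPart path:

```java
// Minimal sketch of reading an .xlsx with POI; opening the workbook goes
// through XSSFFactory.createDocumentPart, where the exception is thrown.
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class PoiSmokeTest {
    public static void main(String[] args) throws Exception {
        try (InputStream in = new FileInputStream("sample.xlsx")) { // placeholder file
            XSSFWorkbook workbook = new XSSFWorkbook(in);
            XSSFSheet sheet = workbook.getSheetAt(0);
            System.out.println("First sheet: " + sheet.getSheetName());
        }
    }
}
```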
So I checked with the Apache POI team, but they claim that it's an App Engine issue. I am not sure what the right place for App Engine questions is, but I know a lot of App Engine developers monitor Stack Overflow, so I'm posting this question here.
Bug filed with the Apache POI team: https://issues.apache.org/bugzilla/show_bug.cgi?id=55665
The bug report includes a sample Maven project and instructions to reproduce the issue.
I am not sure how to attach that zip file here.
If anyone knows how to fix this, or the right place to file a bug, please let me know.
The key part of the stacktrace is:
java.lang.IllegalAccessException: Class com.google.appengine.tools.development.agent.runtime.Runtime$21 can not access a member of class org.apache.poi.xssf.usermodel.XSSFSheet with modifiers "protected"
at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:105)
at com.google.appengine.tools.development.agent.runtime.Runtime$22.run(Runtime.java:488)
at java.security.AccessController.doPrivileged(Native Method)
at com.google.appengine.tools.development.agent.runtime.Runtime.checkAccess(Runtime.java:485)
at com.google.appengine.tools.development.agent.runtime.Runtime.checkAccess(Runtime.java:479)
at com.google.appengine.tools.development.agent.runtime.Runtime.newInstance_(Runtime.java:123)
at com.google.appengine.tools.development.agent.runtime.Runtime.newInstance(Runtime.java:135)
at org.apache.poi.xssf.usermodel.XSSFFactory.createDocumentPart(XSSFFactory.java:60)
I've run into the same issue. I think this is only an issue with the development server. Admittedly, this doesn't fully answer your question, but I guess the situation at least isn't as bad as you'd think. To get around the issue, I've been developing my POI code in a standard Java project (using dummy data) and then copying it into the App Engine project.
I've logged the issue with Google: https://code.google.com/p/googleappengine/issues/detail?id=11752
If you're interested: in the process of logging the issue, I created a sample project, which is also deployed on App Engine (where it works, since it's running in the production environment).
Sample project: https://bitbucket.org/bronze/jakarta-poi-issue
App running on production environment: http://bronze-gae-poi-issue.appspot.com/