I am having problems with the Bluemix Monitoring and Analytics service.
I have 2 applications with bindings to a single Monitoring and Analytics service. Every ~1 minute I get the following log line in both apps:
ERR [Resource Monitoring][ERROR]: JsonSender request error: Error: unsupported certificate purpose
When I remove the bindings, the log message does not appear. I also grepped my code for anything related to "JsonSender" or "Resource Monitoring" and did not find anything.
I am doing some major refactoring work on our server, which might have broken things. However, our code does not use the Monitoring service directly (we don't have a package that connects to the monitoring server or anything like that), so I would be very surprised if the problem were due to the refactoring changes. I did not check the logs before making the changes.
Any ideas will help.
Bluemix has three production environments: ng, eu-gb, and au-syd. I tested ng and eu-gb, both with two applications bound to the same M&A service, and also tested with multiple instances. They all work fine.
Meanwhile, I received a similar problem report from someone claiming to be using Node.js 4.2.6.
So there is some more information we need in order to identify the problem:
1. Which version of Node.js are you using (the Bluemix default or another one)?
2. Which production environment are you using? (ng, eu-gb, au-syd)
3. Are you using any environment variables in your application
(either created in code, or via USER-DEFINED variables)?
4. One more thing: could you please try deleting the M&A service and creating it again, in case we are caught by a previous fault in M&A:
cf ds <your M&A service name>
cf cs MonitoringAndAnalytics <plan> <your M&A service name>
NodeJS versions 4.4.* all appear to work
NodeJS uses openssl, and apparently it did/does not like how one of the M&A server certificates was constructed.
Unfortunately NodeJS does not expose the openssl verify purpose API.
Please consider upgrading to 4.4 while we consider how to change the server's certificates in the least disruptive manner, as there are other application types that do not have an issue with them (e.g. Liberty and Ruby).
Setting the Node.js version to 4.2.4 in package.json (via the "engines" field) worked for me; however, this is just a workaround that bypasses the problem. The actual fix is being handled by the core team. Thanks.
Related
Following the golang library instructions: if you write logs with the client library, where can one see those logs when running your server locally during development (e.g. via go run main.go)?
In my case (not sure if it's relevant) I'm using the library as part of golang logic on App Engine, and even the relevant-looking instructions on "viewing logs" for those docs don't mention local development explicitly. Is that because it (running gcloud app logs tail and seeing local server logs) should "just work", or because there's no way to see logs from a local logging SDK interaction?
It's a good question. The Cloud Logging libraries do appear to be bound to Google's Cloud Logging service but, for local development (your question) and for loose coupling as a generally good principle, these libraries really ought to be pluggable. Why shouldn't a service running on e.g. GCP route logs to e.g. AWS?
With OpenTelemetry (née OpenCensus), Google (and others) promote the ability to disconnect metric and trace production from the consuming services, and logs aren't distinctly different.
Logrus, a popular logging library in Go, supports pluggable logging via Hooks, and an old (!) Stackdriver Logging implementation exists; this should be straightforward to upgrade to the current API version.
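For illustration only, here is a minimal sketch of a custom Logrus hook (assuming the standard github.com/sirupsen/logrus Hook interface); the destination here is just stderr, but the same shape could wrap a Cloud Logging client, a local file, or anything else:

package main

import (
	"fmt"
	"os"

	"github.com/sirupsen/logrus"
)

// sinkHook forwards every log entry to a destination of your choosing.
// Here it writes to stderr; a real hook might call a Cloud Logging client.
type sinkHook struct{}

// Levels tells Logrus which severities this hook should receive.
func (h *sinkHook) Levels() []logrus.Level {
	return logrus.AllLevels
}

// Fire is invoked once per log entry that matches Levels.
func (h *sinkHook) Fire(entry *logrus.Entry) error {
	line, err := entry.String()
	if err != nil {
		return err
	}
	_, err = fmt.Fprint(os.Stderr, "sink: "+line)
	return err
}

func main() {
	log := logrus.New()
	log.AddHook(&sinkHook{})
	log.Info("hello from a pluggable logger")
}

Swapping destinations then means registering a different hook, with no changes at the call sites.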
In the meantime, I think your question would benefit from being posted to Google's public issue tracker for Stackdriver (sic) Logging (link), and I'm going to ask someone who's very familiar with Cloud Logging, as she may have some insight into this for us.
Update
I emailed with some former colleagues at Google and learned that OpenTelemetry will eventually encompass logging. This is mentioned briefly on the project's About page.
tl;dr Tentatively answering myself: that's not supported; instead, one has to conditionally swap out calls to a regular logger when the environment (e.g. an empty GAE_INSTANCE env variable) indicates you're on localhost.
Walking through the code under the NewClient(...) call in the logging package, I end up at the spot where the upstream API is really being called (note the RPC context used by the very last turtle; as I walked through, I never saw any logic that seemed to switch to something for local development), so I suspect there really is no local emulation capturing the logs.
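As a minimal sketch of that workaround (assuming the cloud.google.com/go/logging package; the project ID and log name are placeholders):

package main

import (
	"context"
	"log"
	"os"

	"cloud.google.com/go/logging"
)

func main() {
	// GAE_INSTANCE is set by the App Engine runtime; empty means localhost.
	if os.Getenv("GAE_INSTANCE") == "" {
		log.Println("local development: using the standard library logger")
		return
	}
	// Deployed: send logs to Cloud Logging instead.
	ctx := context.Background()
	client, err := logging.NewClient(ctx, "my-project-id") // placeholder project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	client.Logger("my-log").StandardLogger(logging.Info).Println("running on App Engine")
}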
EDIT: See DazWilkin's helpful answer below for more context
I've been working on a small Google App Engine (standard environment) project that uses Cloud Endpoints v2. My code is largely based on the quickstart provided by Google.
Everything was working fine, but I re-deployed today after having not looked at it for a few weeks, and I'm getting the following error when I attempt to call the endpoint:
error: An error occured while connecting to the server: DNS lookup failed for URL: metadata.google.internal
This wasn't happening before. It seems to be happening when the endpoints package is being imported by Python.
My endpoint doesn't do anything fancy - I haven't changed the source from the sample EchoApi. The error ends up in the GCP Logging console whether I try to access the API through the API Explorer or via curl.
I don't get any errors during deployment.
Edit #1
Some further information:
The error originates from within Google's code that is included with the google-endpoints package which I've included in my lib folder, per
the documentation. Specifically, the error occurs on line 54 of google/api/control/wsgi.py.
Basically, it's making a request to metadata.google.internal using urllib2.
I'm guessing this address is only available from within the Google Cloud, and that for whatever reason, the instance that's hosting my app can't do a DNS lookup on it.
Edit #2
Dug a bit further.
It seems that the error originates in the google-endpoints-api-management package. Changes committed to that package on October 19th seem to have introduced additional platform reporting: metadata.google.internal is queried to check whether the code is running within Google Container Engine, and the query then blows up because the metadata address doesn't resolve.
Here's the commit:
https://github.com/cloudendpoints/endpoints-management-python/commit/0a37d0e443091053ed03e455e06d3a0ae770999f
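For context, this kind of platform detection typically probes the metadata endpoint and treats any failure as "not on Google Cloud". A rough sketch of the idea (the package itself does this in Python; the Go below is purely illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// onGCE reports whether the GCE/GKE metadata server is reachable.
// Off Google Cloud, the DNS lookup for metadata.google.internal fails,
// which is exactly the error the endpoints package was surfacing.
func onGCE() bool {
	client := &http.Client{Timeout: 2 * time.Second}
	req, err := http.NewRequest("GET", "http://metadata.google.internal/computeMetadata/v1/", nil)
	if err != nil {
		return false
	}
	req.Header.Set("Metadata-Flavor", "Google")
	resp, err := client.Do(req)
	if err != nil {
		return false // DNS failure or timeout: not on Google Cloud
	}
	defer resp.Body.Close()
	return resp.Header.Get("Metadata-Flavor") == "Google"
}

func main() {
	fmt.Println("on GCE:", onGCE())
}

The difference from the committed change is that any lookup failure is handled gracefully rather than raised.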
The google-endpoints package only requires google-endpoints-api-management >=1.0.0b1. On my end, things were working fine on version 1.0.0b2, but then I built a new lib folder, which brought down 1.0.0b5, and things went sideways. Required packages haven't changed between b2 and b5, so I'm thinking I may be able to just downgrade back to b2 for the time being. Haven't tried it yet.
Sent the Google Dev an email. Perhaps he'll chime in with further tips.
Edit: 2016-11-07
Tested downgrading the google-endpoints-api-management package to 1.0.0b2. It seems to be working, kludgy as the fix is. If you're using the lib folder, the following will scrub the newer, error-prone wsgi.py file and put back the older one:
pip install -t lib google-endpoints-api-management==1.0.0b2 --upgrade
Not pretty, but it may just get you back in business.
On a side note, the Google engineer promptly replied saying that he would take a look at this issue soon. With luck, endpoints v2 will eventually come out of beta, 'cause I'm really liking it so far.
This will be fixed in an upcoming patch to the google-endpoints-api-management package (which will be 1.0.0b6). It will probably be released sometime on Monday, 11/6.
If you'd like to continue testing right away and this error is blocking you, you can go back to 1.0.0b4 until 1.0.0b6 comes out. Everything should still work as normal with that version.
Thanks for bringing this to our attention! We're doing our best to iron out all of these wrinkles now during beta in preparation for our first general release.
EDIT: 1.0.0b6 has been released and resolves this issue. Thanks for your patience during our beta phase!
(Posted solution on behalf of the OP).
Google has released version 1.0.0b6 of the google-endpoints-api-management package to address this issue. It solved the problem for me. For anyone who is encountering this problem, clean out your lib folder and re-install the google-endpoints package. This will bring down the new google-endpoints-api-management package with it.
Thanks to Brad at Google for really quick action on this.
I'm using MSpec to drive some automated UI tests with Selenium WebDriver, much like the examples I found online. I'm having problems getting it to take a screenshot when a test fails.
I saw a comment on another issue where it works because they have a ResultSupplementer in the sample web specs. However, ResultSupplementer does not seem to exist in the latest version of MSpec (0.9.1).
Is there a different way to do this in the latest version of MSpec? Ultimately, I'm going to generate HTML reports as TeamCity artifacts and include the screenshot on any failing specs.
I've updated the samples for the latest version of MSpec (in short, you need to implement ISupplementSpecificationResults yourself).
I've also merged the solutions and converted the MVC project to Nancy. You'll find that there's a bit more infrastructure-related code that grew over the last couple of years and works around various things, like:
- status codes 4xx and 5xx logged by IIS Express
- IIS and Chrome Driver ports bound by other processes
- page objects accessing the web driver through a high-level API
- I use Paket for dependency management because it's far more powerful than plain NuGet
All that said, you need to run msbuild.exe mspec-samples.sln and then All-Specs.cmd. I've also checked that a TeamCity build creates screenshots.
I have developed a web application that uses an embedded Solr server for indexing. I deployed it on Tomcat 6 on Windows XP, and everything was OK. Next, I tried to deploy my web application on Amazon AWS; my platform is Linux + MySQL. When I deployed, I got the following exception related to embedded Solr.
[ WARN] 19:50:55 SolrCore - [] Solr index directory 'solrhome/./data/index' doesn't exist. Creating new index...
[ERROR] 19:50:55 CoreContainer - java.lang.RuntimeException: java.io.IOException: Cannot create directory: /usr/share/tomcat6/solrhome/./data/index
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:403)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:552)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:480)
How can I fix this problem? I am a novice at Linux.
My guess is that the user you are running Solr under does not have permission to access that directory.
Also, which version of Solr are you using? It looks like 3+. The latest version is 4, so it may make sense to try using that from the start. It's probably a bit more troubleshooting up front, but a much better payoff than starting with a legacy configuration.
I found the solution. It was a permissions issue on Amazon Linux with the ec2-user. I changed the permissions as follows:
sudo chmod -R ugo+rw /usr/share/tomcat6
http://wiki.apache.org/solr/SolrOnAmazonEC2
It should allow access to ports 22 and 8983 for the IP you're working from, with routing prefix /32 (e.g., 4.2.2.1/32). This will limit access to your current machine. If you want wider access to the instance to collaborate with others, you can specify that, but make sure you only allow as much access as needed. A Solr instance should not be exposed to general Internet traffic. If you need help figuring out what your IP is, you can always use whatismyip.com. Please note that production security on AWS is a wide-ranging topic and is beyond the scope of this tutorial.
I would like my web app to log using SLF4J and logback. However, I am using ActiveMQ, which requires that some of its jars go in /usr/share/tomcat6/lib (this is because the queues are defined outside of the web app, so the classes to support them must be at container level).
ActiveMQ 5.5+ requires slf4j-api, so that jar has to go in too. Because SLF4J is now starting at container level, it needs a logging backend added or it will simply no-op. Thus, logback-core and logback-classic go in too.
After quite some frustration I got this working well enough that I can tidy it up shortly. I needed to configure logback to use a JNDI lookup to get the logging context. It can then look up logback-kenobi.xml in my web app and use a separate configuration there.
However, I'm wondering if this is the best way to do this. For one, the JNDI context handling appears not to support the Groovy format. I did have a logback.groovy in my web app that logged to the console when I was developing locally (which means that Eclipse WTP works nicely) but logged to file and to Splunk Storm everywhere else. I'm going to want to do something similar with this setup, but I'm not sure whether I should do that by overwriting logback-kenobi.xml or by some other method.
Note that I don't currently need Tomcat itself to log with SLF4J, although I am planning to do that. Nor do I really need ActiveMQ to log with SLF4J, but I did need it to stop spewing debug messages every 30s as it was doing. I am aware of tomcat-slf4j-logback, but I don't believe it is directly useful, as it is ActiveMQ's logging that is the issue.
However, I'm wondering if this is the best way to do this.
Best is an opinion, working is a fact.