How to get AppDynamics to detect Apache Camel business transactions

Has anybody gotten the AppDynamics Java agent to detect Apache Camel business transactions? The scenario is picking up files from a directory (polling) and then sending them off to ActiveMQ.
Another case is Camel deployed on Apache Karaf, where I need to track outgoing HTTP calls using AppDynamics.

AFAIK the critical point for AppDynamics (and profilers like it) is that it is essential to find an entry point. Usually the preferred way is to have a Servlet "endpoint" that starts a thread which can be followed.
For the scenario you are describing this wouldn't work, as it's missing the "trigger" that starts the trace. Most likely you'll need to build your own AppDynamics monitoring extension for it.

By default, much of the Apache stack is excluded from instrumentation. Try adding Call Graph Settings (Configure >> Instrumentation >> Call Graph Settings) to include specific transports, such as org.apache.camel.component.file.*, in the "Specific sub-packages / classes from the Excluded Packages to be included in call graphs" section. Do not include org.apache.camel.* wholesale: that instruments all of the Camel code, which is very expensive. You may want to do that at first to discover what you want to watch, but make sure to change it back.

Edit AppServerAgent\conf\app-agent-config.xml and, under the TransactionMonitoringService section, add the enable-async-correlation-for property:

<app-agent-configuration>
  <agent-services>
    <agent-service name="TransactionMonitoringService" enable="true">
      <configuration-properties>
        <property name="enable-async-correlation-for" value="camel"/>
      </configuration-properties>
    </agent-service>
  </agent-services>
</app-agent-configuration>
From the Controller web site:
1. Configure >> Instrumentation >> Call Graph Settings: add org.apache.camel.* as an Always Shown Package/Class.
2. Servers >> App Servers >> {tiername} >> {nodename} >> Agents >> App Server Agent >> Configure >> Use Custom Configuration: set find-entry-points: true.

Related

Idempotency in a camel application running in Kubernetes cluster

I am using Apache Camel as the integration framework in my microservice, which I am deploying as multiple pods in a Kubernetes cluster. I have written a route for reading files from a directory and writing them to another. But I am facing an issue: different pods are picking up the same file. I need to avoid that; I want only one of the pods to pick up and process each file, but currently all the pods are picking up and processing the same file. Can someone help with this? Please suggest some examples available on GitHub or elsewhere.
Thanks in advance.
Camel recently introduced some interesting clustering capabilities - see here.
In your particular case, you could model a route which is taking the leadership when starting the directory polling, preventing thereby other nodes from picking the (same or other) files.
Setting it up is very easy: all you need to do is prefix singleton endpoints according to the master component syntax:
master:namespace:delegateUri
This would result in something like this:
from("master:mycluster:file://...")
.routeId("clustered-route")
.log("Clustered file polling !");

Bluemix Monitoring and Analytics: Resource Monitoring - JsonSender request error

I am having problems with the Bluemix Monitoring and Analytics service.
I have 2 applications with bindings to a single Monitoring and Analytics service. Every ~1 minute I get the following log line in both apps:
ERR [Resource Monitoring][ERROR]: JsonSender request error: Error: unsupported certificate purpose
When I remove the bindings, the log message does not appear. I also grepped my code for anything related to "JsonSender" or "Resource Monitoring" and did not find anything.
I am doing some major refactoring work on our server, which might have broken things. However, our code does not use the Monitoring service directly (we don't have a package that connects to the monitoring server or something like that) - so I will be very surprised if the problem is due to the refactoring changes. I did not check the logs before doing the changes.
Any ideas will help.
Bluemix has 3 production environments: ng, eu-gb, and au-syd. I tested ng and eu-gb, both with 2 applications bound to the same M&A service, and tested with multiple instances. They all work fine.
Meanwhile, I received a similar problem report claiming Node.js 4.2.6 is in use.
So there is some more information we need in order to identify the problem:
1. Which version of Node.js are you using? (the Bluemix default or any other one)
2. Which production environment are you using? (ng, eu-gb, au-syd)
3. Are there any environment variables in use in your application?
(either ones created in code, or USER-DEFINED variables)
4. One more thing: could you please try to delete the M&A service and create it again, in case we are trapped in a previous fault of M&A:
cf ds <your M&A service name>
cf cs MonitoringAndAnalytics <plan> <your M&A service name>
Node.js versions 4.4.* all appear to work.
Node.js uses OpenSSL, and it apparently did/does not like how one of the M&A server certificates was constructed.
Unfortunately, Node.js does not expose the OpenSSL verify-purpose API.
Please consider upgrading to 4.4 while we consider how to change the server's certificates in the least disruptive manner, as there are other application types that do not have an issue with them (e.g. Liberty and Ruby).
Setting Node.js version 4.2.4 in package.json worked for me; however, this is a workaround rather than a fix. The actual fix is being handled by the core team. Thanks.
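
For reference, pinning the runtime is just an engines entry in package.json (a minimal sketch; the Bluemix Node.js buildpack reads the version from this field):

{
  "engines": {
    "node": "4.2.4"
  }
}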

how to use cacti to monitor remote hosts

I have Nagios installed on a server, and it is monitoring different remote hosts using different plugins. But I am not able to view the state of each system in graph form. Is it possible to use Cacti for that purpose? I just installed Cacti on the same machine, but I am not sure how to install plugins and monitor different servers. Also, can I use Cacti as a frontend for Nagios? How does Cacti work?
Can someone help me with this, please.
Thanks
I'm not sure how Cacti interacts with Nagios, but I do have the pnp4nagios plugin/extension installed and configured for one of my Nagios instances, and it gives me a great overview in graphs for the services I monitor (not all of them, only those that are variable and useful to see in a graph). It's a really nice tool and not that hard to set up. I compiled it from source, and its install.php gives you great feedback on what to do next in the installation procedure. One thing they didn't mention is that you have to enable Includes in your Nagios instance's Apache config file. (This is necessary if you want to use the SSI include in the Nagios CGI files; this SSI file contains jQuery JavaScript definitions that enable the pop-up PNG graphs when you mouse over a graph in Nagios.)
It also uses rrdtool (Round Robin Database files), which uses fixed-size storage (this could be beneficial if you have little space on your hard drive).
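"Fixed size" here means the whole database is preallocated when it is created. A minimal sketch (the data source and archive sizes are placeholders):

# One gauge sampled every 300s, keeping 288 five-minute averages (one day).
# The resulting .rrd file never grows beyond what this command allocates.
rrdtool create load.rrd --step 300 DS:load:GAUGE:600:0:100 RRA:AVERAGE:0.5:1:288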
For Nagios there is also nagiosgraph; it generates a graph for each service defined in Nagios, and you just have to add the nagiosgraph configuration.
As for Cacti, there is a plugin called NPC; it adds a new tab to Cacti which contains the services defined in Nagios.

Configuring logging for ActiveMQ 5.5 on Tomcat 6 with web app using SLF4j and logback

I would like my web app to log using SLF4J and logback. However, I am using ActiveMQ, which requires that some of its jars go in /usr/share/tomcat6/lib (this is because the queues are defined outside of the web app, so the classes to support them must be at container level).
ActiveMQ 5.5+ requires slf4j-api, so that jar has to go in too. Because SLF4J is now starting at container level, it needs a logging backend added or it will simply no-op. Thus, logback-core and logback-classic go in too.
After quite some frustration, I got this working well enough that I can tidy it up shortly. I needed to configure logback to use a JNDI lookup to get the logger context; it can then look up logback-kenobi.xml in my web app and use a separate configuration there.
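Concretely, the JNDI part boils down to starting the container with -Dlogback.ContextSelector=JNDI and adding an env-entry to the web app's web.xml (this mirrors the logback context selector documentation; the "kenobi" value is what makes logback-kenobi.xml resolve):

<env-entry>
    <env-entry-name>logback/context-name</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>kenobi</env-entry-value>
</env-entry>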
However, I'm wondering if this is the best way to do this. For one, the context handling appears not to support the Groovy format. I did have a logback.groovy in my web app that logged to the console when developing locally (which means that Eclipse WTP works nicely) but logs to file and to Splunk Storm everywhere else. I'm going to want to do something similar with this setup, but I'm not sure whether I should do that by overwriting logback-kenobi.xml or by some other method.
Note that I don't currently need Tomcat itself to log with SLF4J, although I am planning to do that. Nor do I really need ActiveMQ to log with SLF4J, but I did need it to stop spewing debug messages every 30s as it was doing. I am aware of tomcat-slf4j-logback, but I don't believe it is directly useful here, as it is ActiveMQ's logging that is the issue.
"However, I'm wondering if this is the best way to do this."
Best is an opinion, working is a fact.

How can I process http responses with XSLT in Apache webserver?

I have a PHP application that I want to also publish with a different look and feel. We've chosen to do that using XSLT; that way we don't have to touch the PHP application and don't run the risk of introducing instability in the original. That's important, since we're close to production.
I've looked into ways of doing XSLT processing in the Apache web server, and it seems that the only available XSLT module hasn't been updated since 2005. I was hoping to use an XSLT mod in a filter chain to accomplish what I want, but an unsupported module won't do.
Another option I can think of is to do the XSLT processing using a servlet filter in a Java application server. But it seems rather roundabout to have an HTTP request arrive at the Apache web server, be forwarded to a Java application server, be forwarded back to the Apache web server for the PHP processing, and then travel the reverse way back for the response...
So my question is: is there a way to do XSLT processing in the Apache web server? Or is there another way to do this?
Thank you in advance.
I do not know of a good way to do that in Apache. You could do it with PHP using its XSL(T) extension, though.
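For completeness, a rough sketch with PHP's ext/xsl (the file names are placeholders; assumes the XSL extension is enabled):

<?php
// Load the source document and the stylesheet (placeholder paths)
$xml = new DOMDocument();
$xml->load('page.xml');

$xsl = new DOMDocument();
$xsl->load('new-look.xsl');

// Compile the stylesheet and run the transformation
$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);
echo $proc->transformToXml($xml);

This keeps the alternative look and feel entirely in the PHP layer, so no extra hop through a Java application server is needed.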
