Capturing camel exchange data into database - apache-camel

How would you recommend in Camel to define key/value expressions in routes for things you want to save for auditing, and have them be picked up and written to a database transparently?
i.e. the route contains an array or set of expressions for things to save for auditing, but doesn't know how it actually gets picked up and written to a DB.
This would be like Mule's auditing feature, where you can put <flow> elements in the Mule XML and define expressions to save to Mule's DB for tracking.
I have looked at Interceptor, Event Notifiers, Tracers, WireTaps, MDC Logging - I am sure the answer lies in one or a combination of these elements but it's not clear to me.
I'm using this example of Mule auditing XML from its documentation as a comparison:
<flow name="bizFlow">
<tracking:custom-event event-name="Retrieved Employee" doc:name="Custom Business Event">
<tracking:meta-data key="Employee ID" value="#[payload['ID']]"/>
<tracking:meta-data key="Employee Email" value="#[payload['Email']]"/>
<tracking:meta-data key="Employee Git ID" value="#[payload['GITHUB_ID']]"/>
</tracking:custom-event>
</flow>
Thanks very much

For auditing I used wireTap to send the exchange to a dedicated audit route, where I do whatever is needed for auditing. In my case it goes to a JMS queue rather than a database, but that doesn't matter.
There is only one restriction: whatever goes for auditing must not be changed by the main route after the wireTap (both routes run in parallel), so I copied the auditing data into a dedicated Exchange property before the wireTap, to be used in the audit route.
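A minimal sketch of that wire-tap approach, assuming the message body is a Map with the keys from the Mule example; the endpoint URIs, property names and AUDIT_EVENT table are illustrative assumptions, not taken from the answer above:

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch of the wire-tap auditing approach described above.
// Endpoint URIs, property names and the audit table are assumptions.
public class AuditRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:employees")
            // Snapshot the values to audit BEFORE the wire tap, so the main
            // route cannot mutate them while the audit route runs in parallel.
            .setProperty("auditEmployeeId", simple("${body[ID]}"))
            .setProperty("auditEmployeeEmail", simple("${body[Email]}"))
            .wireTap("direct:audit")       // copies the exchange, runs concurrently
            .to("bean:employeeService");   // main processing continues here

        from("direct:audit")
            // Write the captured key/value pairs to a database via the SQL component.
            .to("sql:insert into AUDIT_EVENT (EMP_ID, EMP_EMAIL) "
              + "values (:#${exchangeProperty.auditEmployeeId}, :#${exchangeProperty.auditEmployeeEmail})");
    }
}
```

Because wireTap copies the exchange (including its properties), the audit route sees the snapshot taken before the tap, which satisfies the restriction mentioned above.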

Related

How to gather complete object schema with Mulesoft Salesforce connector (Mule 4)

I'm using the Mule Salesforce connector (for Mule Runtime 4.4.2) in Anypoint Studio (7.4.2).
The Salesforce query language does not allow the * operator to gather all keys from an object, so I'm looking for another means to retrieve a sample object and create a model record that I could use for updates and creation.
Using the Task object (documented here: https://developer.salesforce.com/docs/atlas.en-us.object_reference.meta/object_reference/sforce_api_objects_task.htm) as an example, I find that the describeLayout() and retrieve() methods look promising.
However, when I try to invoke retrieve(), I'm required to submit a list of fields to retrieve.
I don't see the describeLayout() method exposed in the connector, so I haven't seen what it returns.
Have I missed a general purpose approach to allow me to gather every field from a sample object?
[edited for clarity]
See if there's describe support. describeLayout() is primarily useful if you need to recreate a Salesforce page, in a mobile app for example; it won't tell you much about field types and will list only the fields the end user can see, while more can be hidden in the background.
You could have some luck with the REST API describe: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_sobject_describe.htm
Or the metadata API: https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describesobjects_describesobjectresult.htm
I don't know what's available to you, but I'd expect the Mule connector to do this for you as part of some connection wizard: pull info about all queryable tables and, after you pick one, about all the fields you can see in it. Or maybe you're overcomplicating something: do you need a truly dynamic SELECT * equivalent that keeps working when an admin adds new fields, without having to refresh the connection?
Metadata can also be queried; it's stored in tables just like actual data. See if https://stackoverflow.com/a/60067076/313628 helps.
So it turns out that the Mule 4 Salesforce connector does support describing an SObject.
To the Anypoint Studio developer it appears as an operation in the connector (the original post showed a screenshot here).
The XML definition offers little further insight.
Update: After further investigation, it turns out that an additional DataWeave expression is needed to get a simple list of fields. Once you have the SObject structure in the payload, apply:
payload.fields.*name
This yields an array of the field names.

Enabling Replay mechanism with camel from messages from DB

I am trying to implement a replay mechanism with Camel, i.e. I have to retrieve all the messages already persisted and forward them to the appropriate Camel route for reprocessing. This is triggered by a Quartz scheduler.
I achieved this as follows:
1) Once the Quartz scheduler fires, forward to a processor which queries the DB and sets the resulting messages as a List in the Camel exchange properties.
2) Use a Camel loop in which a LoopProcessor sets the appropriate XML for the current iteration in the exchange.
3) Forward it to ActiveMQ, from where it is forwarded to the appropriate Camel route for reprocessing.
Everything works, but I see the following TWO issues:
a) There might be a large number of messages (10,000+) held as a List in the Camel exchange properties. I can remove each message once it has been sent for processing, which I think will help performance and memory usage.
b) I don't want to forward all 10,000+ messages to ActiveMQ at once, which I guess would be too taxing. Is there a better mechanism for forwarding 10,000+ messages to ActiveMQ?
I am thinking of using SEDA/VM (with different Camel contexts). How well would that work, considering the above questions?
Thanks.
Regards,
Senthil Kumar Sekar
If the number of messages is a problem, then not all messages should be loaded at once.
Process as follows (see also my answer to your other SO question):
Limit the number of results when querying the DB.
Set a marker (e.g. processedFlag) for the DB entries that have been processed.
Go back to 1. and query only the not-yet-processed entries, until all records are processed.
However, you should also test the ActiveMQ approach, to see whether 10,000+ messages are really a problem or not.
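The batching loop above can be sketched in plain Java. The DB is simulated here with an in-memory list; in a real solution the fetch maps to a JDBC query with a LIMIT clause, the processing step to the send to ActiveMQ, and the marker to an UPDATE of the processedFlag column. Batch size and names are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the batched replay loop described above.
// The in-memory "table" stands in for the real DB.
public class BatchedReplay {
    static class Row {
        final String xml;
        boolean processed;              // the processedFlag marker
        Row(String xml) { this.xml = xml; }
    }

    // Step 1: query a limited number of unprocessed entries.
    static List<Row> fetchBatch(List<Row> table, int limit) {
        List<Row> batch = new ArrayList<>();
        for (Row r : table) {
            if (!r.processed) {
                batch.add(r);
                if (batch.size() == limit) break;
            }
        }
        return batch;
    }

    // Steps 2-3: process each entry (e.g. send it to ActiveMQ), set the
    // marker, then loop back to step 1 until nothing unprocessed remains.
    static int replayAll(List<Row> table, int batchSize) {
        int sent = 0;
        List<Row> batch;
        while (!(batch = fetchBatch(table, batchSize)).isEmpty()) {
            for (Row r : batch) {
                // sendToQueue(r.xml);  // hypothetical send to ActiveMQ
                r.processed = true;     // marker: this row is not re-read
                sent++;
            }
        }
        return sent;
    }
}
```

This keeps at most one batch in memory at a time instead of the full 10,000+ message list in exchange properties.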

Which design pattern to use for my usecase?

My core usecase is to read/write from database and directory server.
Eg.
createUser,
modifyUser,
associateGroup,
changePassword etc
I have several other functionalities to be done in several of these use cases.
1.) audit start of operation
2.) audit failure in case of exception/error
3.) validate data
4.) persist in db
5.) persist in directory server (LDAP)
6.) notify in somecases like password change
7.) audit success
8.) future something else
I am thinking of implementing this with some kind of decorator design pattern. Are there better suggestions?
Thanks,
Vignesh
The data should have been validated long before you persist it. Move that code up to where the service receives the request.
"Persist in DB" and "persist in directory server" are the same kind of thing: persistence. You're overcomplicating it. Your comment suggests you'd prefer wording that distinguishes between a relational database and a directory, but my conclusion doesn't change.
Notify for password change is a separate use case.
The only thing that qualifies is logging the start, completion, and error. You can do this easily with an around aspect if you use a language or framework that supports AOP, like Spring. It's a middle tier feature.
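An around interceptor of the kind suggested above can be sketched without Spring using a JDK dynamic proxy; with Spring AOP this would be an @Around advice instead. The UserService interface and the audit sink are illustrative assumptions:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Sketch of an "around" auditing interceptor using a JDK dynamic proxy.
// It logs start, success and failure around every call, leaving the
// business code (the real service) free of auditing concerns.
public class AuditProxy {
    public interface UserService {
        void createUser(String name);
    }

    @SuppressWarnings("unchecked")
    public static <T> T audit(T target, Class<T> iface, StringBuilder auditLog) {
        InvocationHandler handler = (proxy, method, args) -> {
            auditLog.append("START ").append(method.getName()).append('\n');       // audit start
            try {
                Object result = method.invoke(target, args);                       // delegate to the real service
                auditLog.append("SUCCESS ").append(method.getName()).append('\n'); // audit success
                return result;
            } catch (Exception e) {
                auditLog.append("FAILURE ").append(method.getName()).append('\n'); // audit failure
                throw e;
            }
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }
}
```

Wrapping the real service once (audit(realService, UserService.class, log)) applies the same start/success/failure auditing to every operation, which is exactly what an around aspect gives you declaratively.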

Apache Camel JMS to MySql

I have a requirement to write a code whenever there is a new entry in the JMS queue, I want that entry to be persisted in the MySql database. I read that this is possible using Apache Camel project. Could any one point out to the examples or some documentation related to the same.
Lokesh
Yes, it's rather straightforward, at least for the JMS and database parts.
from("jms:queue:someQueue")
    .bean(SomeTransformerBean.class)           // transform the message with custom code
    .to("sql:insert into FOO (X) values (#)"); // must be a valid SQL statement; # is the parameter placeholder
Read more here
http://camel.apache.org/sql-component.html
and here
http://camel.apache.org/jms

BizTalk 2006 - Copy a received file to a new directory

I want to be able to copy the file I have which comes in as XML into a new folder location on the server. Essentially I want to hold a back up of the input files in a new folder.
What I have done so far is try to follow what has been said on this forum post - link text
At first I tried the last method, which didn't do anything (file renaming while reading). So I tried one of the other options: I altered the orchestration and put a Send shape just after the Receive shape, so the same message that comes in is sent out to the logical port. I exported the MSI and created a Send Port in the Admin console pointing to my copy location. It copies the file, but it keeps creating a new one every second, and the Event Viewer reports warnings saying "The file exists". I have set the Copy Mode of the port to both 'Overwrite' and 'Create New'; neither works.
I have looked on Google but nothing helps. BTW, I support BizTalk but have no idea how pipelines and ports work, so any help would be appreciated.
Thanks for the quick responses.
As David has suggested I want to be able to track the message off the wire before BizTalk does any processing with it.
I have tried the CodePlex link that Ben supplied, and it points to Atomic-Scope's BizTalk Message Archiving Pipeline Component, which it looks like my client will have to pay for. I have downloaded the trial and will see if I have any luck.
David - I agree that the orchestration should represent the business flow and making a copy of a file isn't part of the business process. I just assumed when I started tinkering around I could do it myself in the orchestration as suggested on the link I posted.
I'd also rather not rely on the BizTalk tracking within the message box database as I suppose the tracked messages will need to be pruned on a regular basis. Is that correct or am I talking nonsense?
However is there a way I can do what Atomic-Scope have done which may be cheaper?
Update: I have figured it out from David's original post. As indicated, I also created a Send Port which just has a "Filter" expression like BTS.ReceivePortName == ReceivePortName.
Thanks all
As the post you linked to suggests there are several ways of achieving this sort of result.
The first question is: What do you need to track?
It sounds like there are two possible answers to that question in your case, which I'll address separately.
You need to track the message as received off the wire before BizTalk touches it
This scenario often arises where you need to be able to prove that your BizTalk solution is not the source of any message corruption or degradation being seen in messages.
There are two common approaches to this:
Use a pipeline component such as the one as Ben Runchey suggests
There is another example of a pipeline component for archiving here on codebetter.com. It looks good; just be careful, if you use other components, about where you place this component, so that you still follow proper BizTalk streaming-model practices. BizTalk pipelines are all forward-only streaming, meaning that your stream is read once only, and all the work on it happens in an eventing manner.
This is a good approach, but with the following caveats:
You need to be careful about the streaming employed within the pipeline component
You are not actually tracking the on the wire message - what your pipeline actually sees is the message after it has gone through the BizTalk adapter (e.g. HTTP adapter, File etc...)
Rely upon BizTalk's out of the box tracking
BizTalk automatically persists all messages to the message box database and if you turn on BizTalk tracking you can make BizTalk keep these messages around.
The main downside here is that enabling this tracking will result in some performance degradation on your server. Depending on the exact scenario this may not be a huge hit, but it can be significant.
You can track the message after it has gone through the initial receive pipeline
With this approach there are two main options: use a pure messaging send port subscribing to the receive port, or use an orchestration send port.
I personally do not like the idea of using an orchestration send port. Orchestrations are generally best used to model the business flow needed. Unless this archiving is part of the business flow as understood by standard users, it could simply confuse what does what in your solution.
The approach I tend to use is to create a messaging send port in the BizTalk admin console that subscribes to your receive port. The send port will then just use a standard BizTalk file adapter, with a pass through pipeline.
I think you should look at the Biztalk Message Archiving pipeline component. You can find it on Codeplex (http://www.codeplex.com/btsmsgarchcomp).
You will have to create a new pipeline and deploy it to your biztalk group. Then update your receive pipeline to archive the file to a location that the host this receive location is running under has access to.