How to migrate an FTP route from XML to Java - apache-camel

I began to refactor/rebuild an XML-based Camel project into a Java-based one (I need to strictly separate configuration from functional code).
I am new to Camel, so I am stumbling over the very first route, an FTP route. The FTP URL and credentials are configuration, but all the rest should be set in Java.
At the moment the URI looks as follows:
ftp://<fromConfig>&stepwise=true&delay=1000&move=${file:name}.trans&recursive=true&binary=true&filter=#doneFilter&maxMessagesPerPoll=200&eagerMaxMessagesPerPoll=false&sorter=#pcrfSorter
So how do I do this in Java, especially the parts that reference beans with "#"?
Thanks in advance.

The URI is the same in the Java and XML DSL. The only difference is that in XML you need to XML-escape the &, so it becomes &amp;, etc.
The # is a lookup in the registry; see more here: http://camel.apache.org/how-do-i-configure-endpoints.html
The lookup happens in the Camel registry, which can be a facade for JNDI, Spring, etc., so it depends on which container you run Camel in.
You can find a bit more detail about the Camel registry at: https://camel.apache.org/registry.html
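In Java, then, nothing about the URI itself changes; you only need to bind beans under the names the URI references with "#". A minimal standalone sketch, assuming Camel 2.x (SimpleRegistry; in Camel 3 you would use context.getRegistry().bind(...) instead) and an ftp.properties file holding the configured part of the URI; the filter and sorter bodies are placeholders:

import java.util.Comparator;

import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.file.GenericFileFilter;
import org.apache.camel.component.properties.PropertiesComponent;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;

public class FtpRouteSetup {
    public static void main(String[] args) throws Exception {
        // Bind the beans that the endpoint URI references by name via "#".
        SimpleRegistry registry = new SimpleRegistry();
        registry.put("doneFilter", (GenericFileFilter<Object>) file ->
                !file.getFileName().endsWith(".tmp"));                // placeholder filter logic
        registry.put("pcrfSorter", (Comparator<Exchange>) (e1, e2) ->
                e1.getIn().getHeader(Exchange.FILE_NAME, String.class)
                  .compareTo(e2.getIn().getHeader(Exchange.FILE_NAME, String.class)));

        CamelContext context = new DefaultCamelContext(registry);

        // Keep URL and credentials in configuration; ftp.uri would hold
        // something like ftp://user@host/path?password=secret
        context.addComponent("properties",
                new PropertiesComponent("classpath:ftp.properties"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Same URI as in XML, but no &amp; escaping is needed in Java.
                from("{{ftp.uri}}"
                        + "&stepwise=true&delay=1000&move=${file:name}.trans"
                        + "&recursive=true&binary=true&filter=#doneFilter"
                        + "&maxMessagesPerPoll=200&eagerMaxMessagesPerPoll=false"
                        + "&sorter=#pcrfSorter")
                    .to("log:incoming");
            }
        });
        context.start();
    }
}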

Related

Apache Camel with Kafka Schema registry

I am building a Camel application to read messages from Confluent Kafka. The messages are in Avro format, and I added the route configuration below to read the Avro messages using the schema registry. When I enable valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer,
I do not get any messages from the Kafka topic. I tested the route without the schema registry and was able to consume the messages.
Route definition:
from("kafka:topic1?sslTruststoreLocation=<jks file>
&valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
&brokers=host1:9092,host2:9092,host3:9092
&sslKeystoreType=JKS
&groupId=grp1
&allowManualCommit=true
&consumersCount=10
&sslKeyPassword=<password>
&autoOffsetReset=earliest
&sslKeystorePassword=<password>
&securityProtocol=SSL
&sslTruststorePassword=<password>
&sslEndpointAlgorithm=HTTPS
&maxPollRecords=10
&sslTruststoreType=JKS
&sslKeystoreLocation=<keystore_path>
&autoCommitEnable=false
&additionalProperties.schema.registry.url=https://localhost:8081
&additionalProperties.basic.auth.user.info=abc:xyz
&additionalProperties.basic.auth.credentials.source=USER_INFO");
Can you please let me know what is wrong in the above configuration for the schema registry? I also tried with EndpointRouteBuilder and saw the same issue. However, the producer application, which is also Camel-based and uses the schema registry for publishing Avro messages, works fine.
I figured out how to configure basic auth with the Confluent schema registry. It needs to be configured as below:
from("kafka:topic1?sslTruststoreLocation=<jks file>
&valueDeserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
&brokers=host1:9092,host2:9092,host3:9092
&sslKeystoreType=JKS
&groupId=grp1
&allowManualCommit=true
&consumersCount=10
&sslKeyPassword=<password>
&autoOffsetReset=earliest
&sslKeystorePassword=<password>
&securityProtocol=SSL
&sslTruststorePassword=<password>
&sslEndpointAlgorithm=HTTPS
&maxPollRecords=10
&sslTruststoreType=JKS
&sslKeystoreLocation=<keystore_path>
&autoCommitEnable=false
&additionalProperties.schema.registry.url=https://localhost:8081
&additional-properties[basic.auth.user.info]=abc:xyz
&additional-properties[basic.auth.credentials.source]=USER_INFO");
Note that we need to use the additional-properties bracket syntax for basic.auth.user.info and basic.auth.credentials.source, as shown above.
My issue was that the schema registry password contained special characters, such as +.
So I had to wrap the property value in RAW, as described in the documentation [1].
Given the above example, it would then result in:
&additional-properties[basic.auth.user.info]=RAW(abc:xyz+)
[1] https://camel.apache.org/manual/faq/how-do-i-configure-endpoints.html#HowdoIconfigureendpoints-Configuringparametervaluesusingrawvalues

Storing a .jks file in Fabric profile

In our Apache Camel project, we are consuming a REST service which requires a .jks file.
Currently we store the .jks file in a physical location and refer to it from the Camel project. But this can't always be used, as we may have access only to the Fuse Management Console and not to a physical location accessible from it.
Another option is to store the key file within the bundle, which can't be employed because the certificate may change depending on the environment.
In this scenario, what would be a better solution for storing the key file?
Note
One option I thought about was storing the .jks file within a Fabric profile, but I couldn't find any way to do that. Is it possible to store a file in a Fabric profile?
What about storing the .jks in a Java package and reading it as a resource?
Your bundle imports org.niyasc.jks and loads the file from there. The bundle does not need to change between environments.
Then you write two bundles that provide the same package org.niyasc.jks, one with the production file and one with the test file.
Production env:
RestConsumerBundle + ProductionJksProviderBundle
Test env:
RestConsumerBundle + TestJksProviderBundle
Mind that both of them may end up deployed at the same time, in which case RestConsumerBundle will be bound to whichever bundle was deployed first. You can optionally use OSGi directives to give priority to one of them.
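For the classpath approach, a minimal sketch of the consumer side (the resource file name and keystore password are illustrative, not from the question):

import java.io.InputStream;
import java.security.KeyStore;

public class KeyStoreLoader {
    public KeyStore load() throws Exception {
        // The resource resolves against whichever provider bundle exports
        // org.niyasc.jks, so this consumer code is identical in every environment.
        try (InputStream in = getClass()
                .getResourceAsStream("/org/niyasc/jks/keystore.jks")) { // hypothetical file name
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(in, "changeit".toCharArray()); // hypothetical password
            return ks;
        }
    }
}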
EDIT:
A more elegant solution would be creating an OSGi service which exposes the .jks as an InputStream or byte[]. You can even use JNDI if you like.
From Blueprint, declare the dependency as mandatory, so your bundle will not start if the service is not available.
<!-- RestConsumerBundle -->
<reference id="jksProvider"
           interface="org.niyasc.jks.Provider"
           availability="mandatory"/>
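A sketch of what that service interface might look like; only the org.niyasc.jks.Provider name comes from the snippet above, the method is hypothetical:

package org.niyasc.jks;

import java.io.InputStream;

/** Implemented by the environment-specific provider bundle. */
public interface Provider {
    /** Returns a fresh stream over the keystore for this environment. */
    InputStream openKeyStore();
}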
Storing the JKS files in the Fuse profile could be a good idea.
If you have a broker profile created, such as "mq-broker-Group.BrokerName", take a look at it via the Fuse Web Console.
You can then access the .jks file as a resource in the properties file, as in truststore.file=profile:truststore.jks
And also check the "Customizing the SSL keystore.jks and truststore.jks file" section of this chapter:
https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/fabric_guide/mq#MQ-BrokerConfig
It has some good pointers.
Regarding how to add files to a Fabric profile, you can store any resources under src/main/fabric8 and use the fabric8 Maven plugin. For more, see:
https://fabric8.io/gitbook/mavenPlugin.html
-Codrin

XPages and Apache CXF: what's the best place for WSDL files?

I am currently testing Apache CXF (2.7.11). The purpose is to build a web service client. I am roughly following Martin Vereecken's blog post (http://www.bizzybee.be/2013/01/23/creating-a-java-webservice-client-in-domino-using-apache-cxf/#more-451). I have a WSDL file and I created sample code with the wsdl2java tool.
My first thought was to store the WSDL file in the NSF (e.g. WebContent\WEB-INF\resources\wsdl). However, the generated code does not seem to find the WSDL file. The code looks something like this (the class name Session comes from the WSDL):
Session.java:
URL url = Session.class.getResource("WEB-INF/wsdl/twinfield/session.wsdl");
if (url == null) {
    url = Session.class.getClassLoader().getResource("WEB-INF/wsdl/twinfield/session.wsdl");
}
I tried both WEB-INF and /WEB-INF, but neither seems to work.
If I put the WSDL file on the web (e.g. in the domino/html/wsdl folder), the URL above works, but the code breaks later (it seems to use java.io.File when trying to load the WSDL).
A local reference (e.g. C:\temp\wsdl) could work but does not really sound like a robust option.
The final Java code will be in WebContent\WEB-INF\src, not in Code\Java.
So, what is the "best practice" for storing and referencing WSDL files in a Domino environment?
UPDATE
I went with #stwissel's proposal and noticed that the wsdl2java tool can actually create the whole JAR for you. Just specify the -clientJar option and the resulting JAR file will contain all class files plus the WSDL file.
When you generate the Java classes from the WSDL, you should pack them into a JAR file. Put the WSDL into the JAR file too, so it never gets lost. This blog article and its comments explain it.
A potential issue could be access rights (Java execution permissions) when you keep that JAR inside the NSF.
The blog entry contains the sample code, so check it out!
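For illustration, a sketch of how the generated client can then resolve the WSDL from inside the JAR; the resource path and service QName below are placeholders for whatever your generated Session class actually uses:

import java.net.URL;
import javax.xml.namespace.QName;

public class SessionClientFactory {
    public Session create() {
        // The WSDL travels inside the client JAR, so the classloader can always
        // find it; no java.io.File or NSF path is involved.
        URL wsdlUrl = Session.class.getResource("/wsdl/twinfield/session.wsdl"); // illustrative path
        // Generated JAX-WS service classes take the WSDL location plus the
        // service QName; the namespace here is a placeholder for your WSDL's.
        return new Session(wsdlUrl, new QName("http://www.twinfield.com/", "Session"));
    }
}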

How do I obscure a password in a Camel configuration file

I am looking at using the Camel crypto tool for processing PGP data, but have a requirement that the password to the keys used be either encrypted in the configuration file or sourced from a secure server elsewhere. Is this possible without writing my own PGP processor?
Yes, see the security menu on the Apache Camel web site: http://camel.apache.org/security.html
There is a section about configuration security, where you can use camel-jasypt for that: http://camel.apache.org/jasypt.html
This allows you to store encrypted usernames/passwords etc. in a .properties file, and then refer to these properties from Camel crypto using Camel's property placeholder: http://camel.apache.org/using-propertyplaceholder.html
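To make that concrete, a minimal sketch of wiring camel-jasypt into a Camel context in Java (the property name, file location, and environment variable are illustrative):

import org.apache.camel.CamelContext;
import org.apache.camel.component.jasypt.JasyptPropertiesParser;
import org.apache.camel.component.properties.PropertiesComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class SecureConfig {
    public static CamelContext create() throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Decrypts ENC(...) values; the master password comes from an
        // environment variable rather than the configuration file itself.
        JasyptPropertiesParser jasypt = new JasyptPropertiesParser();
        jasypt.setPassword("sysenv:CAMEL_ENCRYPTION_PASSWORD");

        // secret.properties would contain e.g.
        //   pgp.password=ENC(bsW9uV37gQ0QHFu7KO03Ww==)
        // with the value produced by the camel-jasypt command-line tool.
        PropertiesComponent pc = new PropertiesComponent("classpath:secret.properties");
        pc.setPropertiesParser(jasypt);
        context.addComponent("properties", pc);

        // Routes can now refer to {{pgp.password}} and get the decrypted value.
        return context;
    }
}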

Apache Camel: Keeping routing information completely independent of the Java Code

First of all, thanks to the folks who are currently involved in the development of Camel; I am grateful for all the hard work they have put in.
I am looking for some design advice.
The architecture is something like this:
I have a bunch of Java classes which, when instantiated, are required to connect to each other and send messages using Apache Camel. The design constraints require me to create a framework such that all routing information, producers, consumers, endpoints, etc. are part of camel-context.xml.
An individual should be able to modify such a file and completely change the existing route without having the Java code available (the Java code would not be provided, only the compiled JAR).
For example, in one setup:
Bean A -> Bean B -> Bean C -> file -> email
and in another:
Bean B -> Bean A -> Bean C -> ftp -> file -> email
We have tried various approaches, but if the originating bean is not implemented via the Java DSL, the message rate is very high because Camel constantly invokes Bean A in the first example and Bean B in the second (they being the sources).
Bean A and Bean B originate messages and are event-driven: when the required event occurs, the bean sends out a notification message.
My transformations are very simple and I do not require the power of the Java DSL at all.
To summarize, I have the following questions:
1) Considering the above constraints, how do I ensure that all routing information, including destination addresses, is part of the Camel context file?
2) Are there examples I can look at for keeping the routing information completely independent of the Java code?
3) How do I ensure Camel does not constantly invoke the originating bean?
4) Does Camel constantly invoke just the originating bean, or any bean it sends messages to, irrespective of the bean's position in the route?
I have run out of options trying various ways to set this up. Any help would be appreciated.
Read about hiding the middleware on the Camel wiki pages. This lets clients use an interface to send/receive messages while remaining totally unaware of Camel (no Camel API used at all).
Even better consider buying the Camel in Action book and read chapter 14 which talks about this.
http://www.manning.com/ibsen/
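For illustration, a minimal sketch of the hiding-the-middleware pattern (the interface and endpoint name are made up for this example):

import org.apache.camel.Produce;

// Plain interface: no Camel types leak into the client code.
interface Notifications {
    void send(String body);
}

public class EventSource {
    // Camel injects a proxy that forwards calls to this endpoint; the route
    // starting at direct:notifications lives entirely in camel-context.xml,
    // so it can be rewired without touching the Java code.
    @Produce(uri = "direct:notifications")
    private Notifications notifications;

    public void onEvent(String event) {
        // Event-driven: a message is created only when an event actually
        // occurs, so Camel never has to poll the bean.
        notifications.send(event);
    }
}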
If you consider using ServiceMix or FuseESB, you might want to separate your routes into two parts.
The first part would be the event-driven bean that triggers the route. It could push messages to the NMR (see http://camel.apache.org/nmr.html).
The other part would be left to the framework users, using the Spring DSL. It would just listen for messages on the NMR (pushed by the other route) and do whatever they want with them.
Of course, endpoint definitions could be externalized as properties using the ServiceMix configuration service (see http://camel.apache.org/properties.html#Properties-UsingBlueprintpropertyplaceholderwithCamelroutes).
