I am trying to add a custom SFTP component in Apache Camel that wraps the username, host, port and password in a configuration object, which is then passed to the SftpComponent.
Below is the code that I have tried:
@Configuration
class SftpConfig {
    @Bean("sourceSftp")
    public SftpComponent getSourceSftpComponent(
            @Qualifier("sftpConfig") SftpConfiguration sftpConfig) throws Exception {
        SftpComponent sftpComponent = new SftpComponent();
        // no obvious way to set the configuration here
        return sftpComponent;
    }
    @Bean("sftpConfig")
    public SftpConfiguration getSftpConfig(
            @Value("${host}") String host,
            @Value("${port}") int port,
            @Value("${applicationUserName}") String applicationUserName,
            @Value("${password}") String password) {
        SftpConfiguration sftpConfiguration = new SftpConfiguration();
        sftpConfiguration.setHost(host);
        sftpConfiguration.setPort(port);
        sftpConfiguration.setUsername(applicationUserName);
        sftpConfiguration.setPassword(password);
        return sftpConfiguration;
    }
}
// In another class
from("sourceSftp:<path of directory>") // custom component
A similar approach works fine with JmsComponent, where I created a bean for sourcejms, but I am not able to do the same for SFTP because SftpComponent doesn't have a setter for SftpConfiguration.
The Camel maintainers seem to be moving away from giving individual components a "setXXXConfiguration" method for configuring their properties. The "approved" way of providing properties -- which works with SFTP -- is to specify them on the endpoint URI:
from("sftp://host:port/foo?username=foo&password=bar")
    .to(....)
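If you don't want the credentials hard-coded in the route, this combines well with Camel's property placeholders; a minimal sketch, assuming you define sftp.* properties of your own (the property names here are illustrative, not required ones):
// The {{...}} placeholders are resolved by Camel's properties component.
from("sftp://{{sftp.host}}:{{sftp.port}}/foo?username={{sftp.user}}&password={{sftp.password}}")
    .to("log:downloaded");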
An alternative approach is to instantiate an endpoint and set its properties, and then use a reference to the endpoint in the from() call. There's a gazillion ways of configuring Camel -- this works for me for XML-based configuration:
<endpoint id="fred" uri="sftp://acme.net/test/">
    <property key="username" value="xxxxxxx"/>
    <property key="password" value="yyyyyyy"/>
</endpoint>
<route>
    <from uri="fred"/>
    <to uri="log:foo"/>
</route>
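If you prefer Java configuration over XML, a roughly equivalent sketch (the bean name fred and the URI options are illustrative, not a verified recipe):
// Register a fully configured endpoint as a bean ...
@Bean("fred")
public Endpoint fred(CamelContext camelContext) throws Exception {
    // Credentials are supplied as URI options when the endpoint is resolved once.
    return camelContext.getEndpoint("sftp://acme.net/test/?username=xxxxxxx&password=yyyyyyy");
}
// ... and reference it from the route by bean name.
from("ref:fred")
    .to("log:foo");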
You can customize it by extending the SftpComponent. This allows you to define multiple endpoints without providing the username/password for each endpoint definition.
Step 1: Extend SftpComponent and give your component a custom name, e.g. customSftp
@Component("customSftp")
public class CustomSftpComponent extends SftpComponent {
    private static final Logger LOG = LoggerFactory.getLogger(CustomSftpComponent.class);
    @Value("${sftp.username}")
    private String username;
    @Value("${sftp.password}")
    private String password;
    @SuppressWarnings("rawtypes")
    @Override
    protected void afterPropertiesSet(GenericFileEndpoint<SftpRemoteFile> endpoint) throws Exception {
        // Inject the shared credentials into every endpoint created by this component.
        SftpConfiguration config = (SftpConfiguration) endpoint.getConfiguration();
        config.setUsername(username);
        config.setPassword(password);
    }
}
Step 2: Create a Camel route that polls two different folders using your custom component name.
@Component
public class PollSftpRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("{{sftp.endpoint1}}").routeId("pollSftpRoute1")
            .log(LoggingLevel.INFO, "Downloaded file from input folder 1.")
            .to("file:data/out1");
        from("{{sftp.endpoint2}}").routeId("pollSftpRoute2")
            .log(LoggingLevel.INFO, "Downloaded file from input folder 2.")
            .to("file:data/out2");
    }
}
Step 3: Place this in application.properties
camel.springboot.main-run-controller=true
sftp.endpoint1=customSftp://localhost.net/input/1?delay=30s
sftp.endpoint2=customSftp://localhost.net/input/2?delay=30s
sftp.username=sftp_user1_l
sftp.password=xxxxxxxxxxxx
With this you don't have to repeat the username/password for each endpoint.
Note: with this approach you won't be able to set the username/password via the endpoint URI; anything you set in the URI will be overwritten in afterPropertiesSet.
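If you would rather let credentials given on the URI win, you can guard the override; a small variation on the class above (getUsername/getPassword are the existing getters on SftpConfiguration):
@Override
protected void afterPropertiesSet(GenericFileEndpoint<SftpRemoteFile> endpoint) throws Exception {
    SftpConfiguration config = (SftpConfiguration) endpoint.getConfiguration();
    // Only apply the shared defaults when nothing was set on the endpoint URI.
    if (config.getUsername() == null) {
        config.setUsername(username);
    }
    if (config.getPassword() == null) {
        config.setPassword(password);
    }
}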
Related
I'm deploying a Spring Batch job triggered by a Camel route. Here is the Spring Batch config:
@Configuration
@EnableBatchProcessing
public class JobConfig
{
    ...
    @Bean(name = "personJob")
    public Job personJob(JobCompletionNotificationListener personListener, Step personStep)
    {
        return jobBuilderFactory
            .get(...)
            .incrementer(new RunIdIncrementer())
            .listener(...)
            .flow(...)
            .end()
            .build();
    }
    ...
The Camel route looks like this:
@ApplicationScoped
public class MyRouteBuilder extends RouteBuilder
{
    @Override
    public void configure() throws Exception
    {
        from("file://...")
            ...
            .to("spring-batch:personJob?jobLauncherRef=jobLauncher");
    }
}
Running the route above raises the following exception:
[ERROR] Caused by: org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: spring-batch://personJob?jobLauncherRef=jobLauncher due to: No JobLauncher named jobLauncher found in the registry.
[ERROR] Caused by: java.lang.IllegalStateException: No JobLauncher named jobLauncher found in the registry.
However, the documentation clearly states:
The @EnableBatchProcessing works similarly to the other @Enable* annotations in the Spring family. In this case, @EnableBatchProcessing provides a base configuration for building batch jobs. Within this base configuration, an instance of StepScope is created in addition to a number of beans made available to be autowired:
JobRepository: bean name "jobRepository"
JobLauncher: bean name "jobLauncher"
...
So, there should be a bean named "jobLauncher" of the type JobLauncher. Why isn't it found in the registry?
Many thanks in advance,
Seymour
I am creating an application using Apache Camel to transfer messages from AMQP to Kafka. Code can also be seen here - https://github.com/prashantbhardwaj/qpid-to-kafka-using-camel
I thought of creating it as a standalone Spring Boot app using the Spring, AMQP and Kafka starters. I created a route like this:
@Component
public class QpidToKafkaRoute extends RouteBuilder {
    public void configure() throws Exception {
        from("amqp:queue:destinationName")
            .to("kafka:topic");
    }
}
And the Spring Boot application configuration is:
@SpringBootApplication
public class CamelSpringJmsKafkaApplication {
    public static void main(String[] args) {
        SpringApplication.run(CamelSpringJmsKafkaApplication.class, args);
    }
    @Bean
    public JmsConnectionFactory jmsConnectionFactory(@Value("${qpidUser}") String qpidUser, @Value("${qpidPassword}") String qpidPassword, @Value("${qpidBrokerUrl}") String qpidBrokerUrl) {
        // Constructor takes username, password, broker URL in that order.
        JmsConnectionFactory jmsConnectionFactory = new JmsConnectionFactory(qpidUser, qpidPassword, qpidBrokerUrl);
        return jmsConnectionFactory;
    }
    @Bean
    @Primary
    public CachingConnectionFactory jmsCachingConnectionFactory(JmsConnectionFactory jmsConnectionFactory) {
        CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(jmsConnectionFactory);
        return cachingConnectionFactory;
    }
}
The jmsConnectionFactory bean, which is created with the Spring @Bean annotation, should be picked up by the AMQP starter and injected into the route. But that is not happening. When I start this application, I get the following exception:
org.apache.camel.FailedToStartRouteException: Failed to start route route1 because of Route(route1)[From[amqp:queue:destinationName] -> [To[kafka:.
Caused by: java.lang.IllegalArgumentException: connectionFactory must be specified
If I am not wrong, the connectionFactory should be configured automatically if I pass the right properties in the application.properties file.
My application.properties file looks like this:
camel.springboot.main-run-controller = true
camel.component.amqp.enabled = true
camel.component.amqp.connection-factory = jmsCachingConnectionFactory
camel.component.amqp.async-consumer = true
camel.component.amqp.concurrent-consumers = 1
camel.component.amqp.map-jms-message = true
camel.component.amqp.test-connection-on-startup = true
camel.component.kafka.brokers = localhost:9092
qpidBrokerUrl = amqp://localhost:5672?jms.username=guest&jms.password=guest&jms.clientID=clientid2&amqp.vhost=default
qpidUser = guest
qpidPassword = guest
Could you please suggest why the connectionFactory object is not being used during auto-configuration? When I debug the code, I can clearly see that the connectionFactory bean is being created.
I can even see one more log line:
CamelContext has only been running for less than a second. If you intend to run Camel for a longer time then you can set the property camel.springboot.main-run-controller=true in application.properties or add spring-boot-starter-web JAR to the classpath.
However, as you can see in my application.properties file, the required property is present on the very first line.
There is one more log line at the beginning of application startup:
[main] trationDelegate$BeanPostProcessorChecker : Bean 'org.apache.camel.spring.boot.CamelAutoConfiguration' of type [org.apache.camel.spring.boot.CamelAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
Is this log line suggesting anything?
Note: interestingly, exactly the same code was running fine last night; I just restarted my desktop, not a single word has changed, and now it throws this exception.
This just refers to an interface:
camel.component.amqp.connection-factory = javax.jms.ConnectionFactory
Instead it should refer to an existing factory instance, such as:
camel.component.amqp.connection-factory = #myFactory
which you can set up via the Spring Boot @Bean annotation style.
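Applied to the configuration in the question, that means pointing the property at the CachingConnectionFactory bean that is already defined there (the bean name comes from the jmsCachingConnectionFactory method name):
camel.component.amqp.connection-factory = #jmsCachingConnectionFactory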
When I use camel-snmp to receive SNMP messages of version 3, the exchange never reaches the process method.
@Component
public class SnmpCollect extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("snmp:0.0.0.0:162?protocol=udp&type=TRAP&snmpVersion=3&securityName=test").process(new Processor() {
            @Override
            public void process(Exchange arg0) throws Exception {
            }
        });
    }
}
Camel XML config:
<camelContext id="camelContext" xmlns="http://camel.apache.org/schema/spring">
    <routeBuilder ref="snmpCollect"/>
</camelContext>
But when SNMP messages of version 1 or 2 arrive, they do reach the process method.
What is wrong, and how can I make it work for "snmpVersion=3" messages?
The Camel version is 2.20.1.
Let me try to answer your question by providing some info based on what I've found.
It seems that the requirements and interfaces of v1 and v2 differ from v3, so it doesn't work by just bumping the version. The main difference, from what I've seen, is that you need to provide a security model for v3. I saw that you are passing it via parameters, but have you had a chance to check the security requirements?
When I use the TrapTest that is in the camel-snmp GitHub repo ("github.com/apache/camel/blob/master/components/camel-snmp/src/…"), it's OK. But when I change the snmpVersion to SnmpConstants.version3, it also fails.
That's because the interface changed and the test should rely on ScopedPDU model instead of the base class PDU. Also the security model isn't set up in this test:
org.snmp4j.MessageException: Message processing model 3 returned error: Unsupported security model
Unfortunately there isn't any example using camel-snmp with v3, but you could take a look at this example using the underlying snmp4j library.
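For reference, the extra setup v3 needs looks roughly like this in plain snmp4j (a sketch only, outside Camel; the user name and passphrases are placeholders and must match what the trap sender uses):
import org.snmp4j.CommandResponder;
import org.snmp4j.CommandResponderEvent;
import org.snmp4j.Snmp;
import org.snmp4j.mp.MPv3;
import org.snmp4j.security.AuthMD5;
import org.snmp4j.security.PrivDES;
import org.snmp4j.security.SecurityModels;
import org.snmp4j.security.SecurityProtocols;
import org.snmp4j.security.USM;
import org.snmp4j.security.UsmUser;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.UdpAddress;
import org.snmp4j.transport.DefaultUdpTransportMapping;
public class SnmpV3TrapReceiver {
    public static void main(String[] args) throws Exception {
        // Listen for traps on UDP port 162, like the Camel endpoint in the question.
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping(new UdpAddress("0.0.0.0/162")));
        // v3 requires the User-based Security Model plus a registered user (the "securityName").
        SecurityProtocols.getInstance().addDefaultProtocols();
        USM usm = new USM(SecurityProtocols.getInstance(), new OctetString(MPv3.createLocalEngineID()), 0);
        SecurityModels.getInstance().addSecurityModel(usm);
        usm.addUser(new OctetString("test"),
            new UsmUser(new OctetString("test"), AuthMD5.ID, new OctetString("authpass"), PrivDES.ID, new OctetString("privpass")));
        snmp.addCommandResponder(new CommandResponder() {
            public void processPdu(CommandResponderEvent event) {
                System.out.println("Received trap: " + event.getPDU());
            }
        });
        snmp.listen();
        Thread.currentThread().join();
    }
}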
I am new to Camel, so this may be a simple problem to solve.
I have a Spring Boot application with Camel components which interacts with the GitLab API.
My problem is that I need to keep the endpoint URIs in my Camel routes encoded, for example:
from("direct:start")
.setHeader("PRIVATE-TOKEN",constant("myToken"))
.to("https://gitlab.com/api/v4/projects/12345/repository/files/folder%2Ffile%2Eextension/raw?ref=master")
When the route starts, the message is sent to
"https://gitlab.com/api/v4/projects/12345/repository/files/folder/file.extension/raw?ref=master"
which returns 404, because the file_path parameter has to be encoded, as stated in the GitLab docs (I've checked with a GET from curl: with the first URI a JSON document is returned, with the second a 404).
I tried to pass the last part of the URI as HTTP_QUERY, but in that case a "?" ends up between it and the URI and I get 404 again:
https://gitlab.com/api/v4/projects/12345/repository/files/?folder%2Ffile%2Eextension/raw?ref=master
I tried setting the URI with the HTTP_URI header: this time the URI is reached correctly, but I get a null body instead of the JSON answer.
Any idea how to solve this issue?
I see that you already tried using the HTTP_URI header. How did you set it? Try this:
from("direct:start")
.setHeader("PRIVATE-TOKEN", constant("myToken"))
.setHeader(Exchange.HTTP_URI, simple("https://gitlab.com/api/v4/projects/12345/repository/files/folder%2Ffile%2Eextension/raw?ref=master"))
.to("http:dummy");
This way you set the URI during route execution, not in the endpoint definition. According to the docs:
Exchange.HTTP_URI: URI to call. Will override existing URI set directly on the endpoint. This URI is the URI of the HTTP server to call. Its not the same as the Camel endpoint URI, where you can configure endpoint options such as security etc. This header does not support that, its only the URI of the HTTP server.
Don't forget the dependency:
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-http</artifactId>
</dependency>
The test:
@Override
protected RoutesBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("direct:start")
                .setHeader("PRIVATE-TOKEN", constant("myToken"))
                .setHeader(Exchange.HTTP_URI, simple("http://0.0.0.0:8080?param=folder%2Ffile%2Eextension/raw&ref=master"))
                .to("http:dummy");
            from("jetty:http://0.0.0.0:8080?matchOnUriPrefix=true")
                .setBody(constant("{ key: value }"))
                .setHeader(Exchange.CONTENT_TYPE, constant(MediaType.APPLICATION_JSON_VALUE))
                .to("mock:result");
        }
    };
}
@Test
public void test() throws InterruptedException {
    getMockEndpoint("mock:result").expectedHeaderReceived(Exchange.HTTP_QUERY, "param=folder%2Ffile%2Eextension/raw&ref=master");
    final Exchange response = template.send("direct:start", new Processor() {
        public void process(Exchange exchange) throws Exception {
            // nothing
        }
    });
    assertThat(response, notNullValue());
    assertThat(response.getIn().getHeader(Exchange.HTTP_URI).toString(), containsString("folder%2Ffile%2"));
    assertThat(response.getOut().getBody(String.class), containsString("{ key: value }"));
    assertMockEndpointsSatisfied();
}
I tried setting the URI with the HTTP_URI header: this time the URI is reached correctly, but I get a null body instead of the JSON answer.
Keep in mind that the response is stored in the OUT body:
Camel will store the HTTP response from the external server on the OUT body. All headers from the IN message will be copied to the OUT message, so headers are preserved during routing. Additionally Camel will add the HTTP response headers as well to the OUT message headers.
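In the Java DSL that means reading the reply from the OUT message, as the test above already does; a one-line reminder:
String json = response.getOut().getBody(String.class);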
I'm trying to run a simple echo server for performance tests. I set up a netty4 TCP endpoint and a ByteArrayDecoder for this purpose. Everything works fine as long as only one socket is created. When I want to connect a second client, or reconnect the first one, I continuously get the following error:
2015-12-03 14:58:08,218 | WARN | yServerTCPWorker | ChannelInitializer | 175 - io.netty.common - 4.0.27.Final | Failed to initialize a channel. Closing: [id: 0xe9f9fb16, /127.0.0.1:60563 => /127.0.0.1:1542]
io.netty.channel.ChannelPipelineException: io.netty.handler.codec.bytes.ByteArrayDecoder is not a @Sharable handler, so can't be added or removed multiple times.
at io.netty.channel.DefaultChannelPipeline.checkMultiplicity(DefaultChannelPipeline.java:464)[178:io.netty.transport:4.0.27.Final]
at io.netty.channel.DefaultChannelPipeline.addLast0(DefaultChannelPipeline.java:136)[178:io.netty.transport:4.0.27.Final]
at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:129)[178:io.netty.transport:4.0.27.Final]
at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:120)[178:io.netty.transport:4.0.27.Final]
at org.apache.camel.component.netty4.DefaultServerInitializerFactory.addToPipeline(DefaultServerInitializerFactory.java:118)[83:org.apache.camel.camel-netty4:2.16.0]
at org.apache.camel.component.netty4.DefaultServerInitializerFactory.initChannel(DefaultServerInitializerFactory.java:100)[83:org.apache.camel.camel-netty4:2.16.0]
at io.netty.channel.ChannelInitializer.channelRegistered(ChannelInitializer.java:69)[178:io.netty.transport:4.0.27.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRegistered(AbstractChannelHandlerContext.java:162)[178:io.netty.transport:4.0.27.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRegistered(AbstractChannelHandlerContext.java:148)[178:io.netty.transport:4.0.27.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRegistered(DefaultChannelPipeline.java:734)[178:io.netty.transport:4.0.27.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:450)[178:io.netty.transport:4.0.27.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$100(AbstractChannel.java:378)[178:io.netty.transport:4.0.27.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:424)[178:io.netty.transport:4.0.27.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)[175:io.netty.common:4.0.27.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)[178:io.netty.transport:4.0.27.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)[175:io.netty.common:4.0.27.Final]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_25]
I dug into ByteArrayDecoder to see whether it is @Sharable anyway. I also created a factory which should return new instances of ByteArrayDecoder, but that didn't help either. I compared the versions of the dependent modules on Karaf and they seem to be the same.
Below is my blueprint:
<!--bean id="decoder" class="io.netty.handler.codec.bytes.ByteArrayDecoder"/-->
<!--bean id="decoder" class="com.company.feature.ChannelHandlerFactoryByteArrayDecoder" factory-method="newChannelHandler"/-->
<bean id="factory" class="com.company.feature.ChannelHandlerFactoryByteArrayDecoder" />
<bean id="decoder" class="io.netty.handler.codec.bytes.ChannelInboundHandlerAdapter" factory-ref="factory" factory-method="newChannelHandler"/>
<bean id="process" class="com.company.feature.Process"/>
<camelContext id="camel_netty_tcp_test" xmlns="http://camel.apache.org/schema/blueprint" allowUseOriginalMessage="false">
    <route id="featureRoute">
        <from uri="{{feature.in_route}}"/>
        <process ref="process"/>
        <log message="Received"/>
    </route>
</camelContext>
And the factory class which I use:
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.bytes.ByteArrayDecoder;
import org.apache.camel.component.netty4.ChannelHandlerFactory;
public class ChannelHandlerFactoryByteArrayDecoder implements ChannelHandlerFactory {
    @Override
    public ChannelHandler newChannelHandler() {
        // Hand out a fresh decoder instance for every new channel/pipeline.
        return new ByteArrayDecoder();
    }
    @Override
    public void handlerAdded(ChannelHandlerContext chc) throws Exception {
        throw new UnsupportedOperationException("Not supported yet.");
    }
    @Override
    public void handlerRemoved(ChannelHandlerContext chc) throws Exception {
        throw new UnsupportedOperationException("Not supported yet.");
    }
    @Override
    public void exceptionCaught(ChannelHandlerContext chc, Throwable thrwbl) throws Exception {
        throw new UnsupportedOperationException("Not supported yet.");
    }
}
To avoid this error, for now I inherited from the ByteArrayDecoder class and implemented it as follows:
import io.netty.channel.ChannelHandler;
import io.netty.handler.codec.bytes.ByteArrayDecoder;
@ChannelHandler.Sharable
public class MyByteArrayDecoder extends ByteArrayDecoder {
}
After replacing the return type in the factory, everything started to work.
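With that change the factory only needs to hand out the sharable subclass (same factory class as above, only the returned instance differs):
public ChannelHandler newChannelHandler() {
    // The @Sharable subclass can safely be added to multiple channel pipelines.
    return new MyByteArrayDecoder();
}
Alternatively, since the subclass is marked @Sharable, a single MyByteArrayDecoder bean could be referenced directly from the blueprint instead of going through the factory.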