I'm running Camel 2.21.2 in Karaf on Windows Server 2016.
I have a File Producer that is using the directory
file://ServerName/ShareName/DirectoryName
and this works fine. However, when I try to create a File Consumer in a pollEnrich with
file://ServerName/ShareName/DirectoryName?fileName=MyFileName.ext&noop=true
it is not reading anything and timing out, despite the file being present and accessible.
Is there something in the file consumer that prevents this form of remote access from working, or have I done something wrong in my URI? If the file is local to the server, the pollEnrich works fine.
Thanks for looking!
You should use the jcifs component:
https://cwiki.apache.org/confluence/display/CAMEL/JCIFS
<dependency>
  <groupId>org.apache-extras.camel-extra</groupId>
  <artifactId>camel-jcifs</artifactId>
  <version>${camel.version}</version>
</dependency>
Example:
smb://login:password@192.168.10.33/inbox?sortBy=file:name&include=.*[.](xml|XML)&delete=true&delay=180000&preMove=inprogress&consumer.bridgeErrorHandler=true
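For the pollEnrich case from the question, a minimal sketch might look like the following (the triggering route, the 10-second timeout, and the log endpoint are assumptions; the share path mirrors the UNC path from the question):

from("direct:start")
    // poll one named file from the Windows share via camel-jcifs;
    // the timeout keeps pollEnrich from blocking forever if the file is absent
    .pollEnrich("smb://user:password@ServerName/ShareName/DirectoryName?fileName=MyFileName.ext&noop=true", 10000)
    .to("log:gotFile");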
Using JMX (the Java console), I am trying to restart a route with a file component consumer endpoint.
from("file:<some dir>?noop=true")
I am using the wiretap pattern to record the intermediate data transformations through other file endpoints.
On the first start of the Camel application, everything is fine, and all the files already present in the input directory are polled and processed.
But when I try to restart the route via JMX, nothing happens.
I tried manually removing the .camel directory (created, I guess, by the default FileIdempotentRepository) before restarting the route, in vain.
I also tried to change the kind of IdempotentRepository to a MemoryIdempotentRepository:
from("file:<somedir>?noop=true").idempotentConsumer(header("CamelFileName"), MemoryIdempotentRepository.memoryIdempotentRepository(1000))
Even if I trigger the clear() operation of this MemoryIdempotentRepository in the Java console before restarting the route, nothing is polled from the input directory after the restart.
If I add a new file, it works. Everything behaves as if there were a persistent history of the files already polled once.
I wonder whether the "noop=true" option creates an unmanaged idempotent repository that I cannot control through JMX. The documentation says:
If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again.
Any idea? (I am using camel-core 2.21.)
I found the solution to my issue.
I was misusing idempotentConsumer; I needed to configure the idempotent repository in the endpoint URI parameter list.
First, create an entry in a bean registry:
registry.put("myIdempotentRepository", MemoryIdempotentRepository.memoryIdempotentRepository(1000));
Then, refer to this idempotentRepository in the endpoint:
from("file:<somedire>noop=true&initialDelay=10&delay=1000&idempotentRepository=#myIdempotentRepository")
By doing this, GenericFileEndpoint:
will not create a default idempotent repository
will add the idempotentRepository given in the endpoint options to the services of the Camel context, which means it can be managed via JMX
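Putting it together, a minimal standalone sketch (the directory, delays, and cache size are placeholders) of registering the repository and referencing it from the endpoint:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;
import org.apache.camel.processor.idempotent.MemoryIdempotentRepository;

public class FilePollApp {
    public static void main(String[] args) throws Exception {
        // bind the repository under the name referenced by #myIdempotentRepository
        SimpleRegistry registry = new SimpleRegistry();
        registry.put("myIdempotentRepository", MemoryIdempotentRepository.memoryIdempotentRepository(1000));

        CamelContext context = new DefaultCamelContext(registry);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("file:data/inbox?noop=true&initialDelay=10&delay=1000&idempotentRepository=#myIdempotentRepository")
                    .log("Polled ${header.CamelFileName}");
            }
        });
        context.start();
        Thread.sleep(60000); // let the consumer poll for a while
        context.stop();
    }
}

Because the repository is now a managed service of the CamelContext, its operations (such as clear()) become reachable from the JMX console.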
I think it would be useful to be able to manage the default idempotent repository in the FileEndpoint class of camel-core.
I see weird behavior with Apache Camel SFTP: even after setting the delete=true attribute, it doesn't delete the file after receiving it. I am using version 3.0.0-M3 of camel-ftp.
The following is my SFTP configuration:
"sftp://<<HOST_NAME>>:<<PORT>>/<<PATH>>?username=<<USERNAME>>" +
"&password=<<PASSWORD>>" +
"&preferredAuthentications=password" +
"&readLock=changed" +
"&readLockMinAge=30000" +
"&delay=20000" +
"&delete=true";
Now Camel is able to read the file, but it doesn't delete it after reading. The docs say:
delete (consumer) - If true, the file will be deleted after it is processed successfully.
How does Camel determine whether a file was processed successfully? Do we need to set any exchange property for Camel to mark it as processed?
After receiving the file, all I am doing is passing it to another route, like the following:
from(endpointUri).to("direct:procesSftpFile");
Should I change it from direct to vm or seda?
It looks like nobody else has faced this issue, and I somehow figured out where it started happening.
The issue was not with the Camel SFTP component, but with the piece of code I was calling.
The second part of my flow looks like this:
from("direct:procesSftpFile")
.log("...")
// logging and other regular processing
....
// sending to vm InOnly
.to("vm:queue1?exchangePattern=InOnly")
.. some more processing..
.to("vm:queue2?exchangePattern=InOnly")
So the issue was with calling queue1 and queue2 in the above snippet.
Commenting them out fixed it, and SFTP started deleting the files. For calling the VM endpoints, instead of to() I used producerTemplate.asyncSend as a workaround.
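A minimal sketch of that workaround (the queue names come from the snippet above; asyncSendBody is used here as the fire-and-forget variant of asyncSend):

import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;

public class SftpForwardRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:procesSftpFile")
            .process(exchange -> {
                // send asynchronously so this exchange completes right away,
                // letting the SFTP consumer's on-completion step (the delete) run
                ProducerTemplate template = exchange.getContext().createProducerTemplate();
                template.asyncSendBody("vm:queue1", exchange.getIn().getBody());
                template.asyncSendBody("vm:queue2", exchange.getIn().getBody());
            });
    }
}

In a real route you would create the ProducerTemplate once and reuse it; creating one per exchange is expensive.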
One thing I am still confused about: if we are using the InOnly exchange pattern, why does it affect the SFTP behavior at all? Probably I should ask this in a separate question.
In our Apache Camel project, we are consuming a REST service which requires a .jks file.
Currently we store the .jks file in a physical location and refer to it from the Camel project. But this can't always be used, as we may have access only to the Fuse Management Console and not to a physical location reachable from it.
Another option is to store the key file within the bundle, but this can't be employed either, because the certificate may change based on the environment.
In this scenario, what can be a better solution to store key file?
Note
One option I thought about was storing the .jks file within a Fabric profile, but I couldn't find any way to do that. Is it possible to store a file in a Fabric profile?
What about storing the .jks in a Java package and reading it as a resource?
Your bundle imports org.niyasc.jks and loads the file from there. The bundle need not change between environments.
Then you write two bundles which provide the same package org.niyasc.jks, one with the production file and one with the test file.
Production env:
RestConsumerBundle + ProductionJksProviderBundle
Test env:
RestConsumerBundle + TestJksProviderBundle
Mind that deploying both of them at the same time may be possible, in which case RestConsumerBundle will be bound to whichever was deployed first. You can play with OSGi directives to give priority to one of them.
EDIT:
A more elegant solution would be to create an OSGi service which exposes the .jks as an InputStream or byte[]. You can even play with JNDI if you like.
From Blueprint declare the dependency as mandatory, so your bundle will not start if the service is not available.
<!-- RestConsumerBundle -->
<reference id="jksProvider"
interface="org.niyasc.jks.Provider"
availability="mandatory"/>
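As a rough sketch, such a service could look like this (the org.niyasc.jks.Provider name comes from the Blueprint snippet above; the method name and resource path are assumptions):

package org.niyasc.jks;

import java.io.InputStream;

public interface Provider {
    // returns a fresh stream over the keystore bytes
    InputStream getKeyStore();
}

// in a separate source file, packaged in ProductionJksProviderBundle:
package org.niyasc.jks.internal;

import java.io.InputStream;
import org.niyasc.jks.Provider;

public class ProductionProvider implements Provider {
    @Override
    public InputStream getKeyStore() {
        // the .jks ships inside this bundle's own resources
        return getClass().getResourceAsStream("/production-truststore.jks");
    }
}

The test bundle would register its own Provider implementation pointing at a test keystore.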
Storing the JKS files in the Fuse profile could be a good idea.
If you have a broker profile created, such as "mq-broker-Group.BrokerName", take a look at it via the Fuse Web Console.
You can then access the jks file as a resource in the property file, as in "truststore.file=profile:truststore.jks"
And also check the "Customizing the SSL keystore.jks and truststore.jks file" section of this chapter:
https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/fabric_guide/mq#MQ-BrokerConfig
It has some good pointers.
Regarding how to add files to a Fabric profile, you can store any resources under src/main/fabric8 and use the fabric8 Maven plugin. For more, see:
https://fabric8.io/gitbook/mavenPlugin.html
-Codrin
In my current project we are using the lighttpd server, and I am trying to upload a file. I get two response headers: the first with status code 301 (Moved Permanently) and the second with 200 (OK).
But when I check the folder, I cannot find any file (that is, no file was uploaded).
I have tried both of the approaches given in the links below:
http://jsfiddle.net/danialfarid/0mz6ff9o/135/
ngFileUpload
https://jsfiddle.net/JeJenny/ZG9re/
Both ways I get the same response.
So I have a couple of questions:
1) Is file upload possible using AngularJS alone (no server-side script)?
2) If it is possible, is there a config problem with lighttpd?
Thanks!
The server side (on any web server) must be configured to handle POST and PUT requests. You can write CGI, FastCGI, or SCGI scripts, or proxy to another backend. For simple file uploads, lighttpd also provides mod_webdav, which you can configure (and protect with mod_auth) to upload files without having to write any server-side code.
https://redmine.lighttpd.net/projects/lighttpd/wiki/Docs_ModWebdav
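As a rough illustration (the URL prefix and the choice of directives are assumptions, not from the original answer), enabling mod_webdav in lighttpd.conf might look like:

server.modules += ( "mod_webdav" )

# make /uploads writable via WebDAV (PUT, DELETE, ...)
$HTTP["url"] =~ "^/uploads($|/)" {
    webdav.activate = "enable"
    webdav.is-readonly = "disable"
}

A client (including AngularJS via $http.put) can then upload with a plain HTTP PUT, with no server-side script required.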
When putting a file to a remote server through the Camel FTP component, I want to customize the PUT behavior, as I did for getting files from the remote server.
However, I cannot find which method actually performs the put to the remote server in the FTP component.
Camel uses the commons-net library as the FTP client. You can look at that project to see how to write your own code to upload a file to an FTP server and run any commands before/after.
https://commons.apache.org/proper/commons-net/
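For reference, a minimal upload with commons-net looks roughly like this (the host, credentials, and file names are placeholders):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpUpload {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com", 21);
        ftp.login("user", "secret");
        ftp.setFileType(FTP.BINARY_FILE_TYPE);
        ftp.enterLocalPassiveMode();

        // this call performs the actual PUT (an FTP STOR command)
        try (InputStream in = Files.newInputStream(Paths.get("local.dat"))) {
            boolean stored = ftp.storeFile("remote.dat", in);
            System.out.println("stored: " + stored);
        }

        ftp.logout();
        ftp.disconnect();
    }
}

If you want a starting point in the Camel source itself, the upload logic lives in the FtpOperations class of camel-ftp.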