Can't set azure blobname in application.properties with camel placeholder - apache-camel

I'm trying to set a filename in my application.properties file.
It works well when I use the file component, as explained here: Smallrye doc
Note that I'm in a Quarkus context, so I have to double the $ as explained in the doc.
However, I don't need to write to a file but to an Azure blob, and that is where things get more complicated.
Here is my configuration:
mp.messaging.outgoing.water.endpoint-uri=azure-blob://xxxxxx/xxxxx/xxxx/xxxxx-${date:now:ddMMyyyy-hh:mm:ss}.json?credentials=#credentials&operation=updateBlockBlob
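Doubling the $ (the Quarkus escape for a literal ${...}) gives this variant of the same line:
mp.messaging.outgoing.water.endpoint-uri=azure-blob://xxxxxx/xxxxx/xxxx/xxxxx-$${date:now:ddMMyyyy-hh:mm:ss}.json?credentials=#credentials&operation=updateBlockBlob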
With the doubled $ I get the following stack trace:
2020-06-26 19:21:02,204 WARN [org.apa.cam.com.rea.str.ReactiveStreamsConsumer] (Camel (camel-1) thread #0 - reactive-streams://87687045129B977-0000000000000000) Error processing exchange. Exchange[87687045129B977-0000000000000001]. Caused by: [java.lang.IllegalArgumentException - Illegal character in path at index 86: https://zstalrsdsmdatarec01.blob.core.windows.net/materiel/MTD-niveaudeau/niveaudeau-${date:now:ddMMyyyy-hh:mm:ss}.json]: java.lang.IllegalArgumentException: Illegal character in path at index 86: https://zstalrsdsmdatarec01.blob.core.windows.net/materiel/MTD-niveaudeau/niveaudeau-${date:now:ddMMyyyy-hh:mm:ss}.json
at java.net.URI.create(URI.java:852)
at org.apache.camel.component.azure.blob.BlobServiceUtil.prepareStorageBlobUri(BlobServiceUtil.java:214)
at org.apache.camel.component.azure.blob.BlobServiceUtil.prepareStorageBlobUri(BlobServiceUtil.java:196)
at org.apache.camel.component.azure.blob.BlobServiceUtil.createBlockBlobClient(BlobServiceUtil.java:135)
at org.apache.camel.component.azure.blob.BlobServiceProducer.updateBlockBlob(BlobServiceProducer.java:140)
at org.apache.camel.component.azure.blob.BlobServiceProducer.process(BlobServiceProducer.java:80)
at org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:67)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:174)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:396)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:153)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:60)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:147)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:286)
at org.apache.camel.component.reactive.streams.ReactiveStreamsConsumer.lambda$doSend$3(ReactiveStreamsConsumer.java:100)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.URISyntaxException: Illegal character in path at index 86: https://zstalrsdsmdatarec01.blob.core.windows.net/materiel/MTD-niveaudeau/niveaudeau-${date:now:ddMMyyyy-hh:mm:ss}.json
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parseHierarchical(URI.java:3105)
at java.net.URI$Parser.parse(URI.java:3053)
at java.net.URI.<init>(URI.java:588)
at java.net.URI.create(URI.java:850)
... 16 more
So I tried not doubling the $. Then I get no error, but the filename ends up looking like this in Azure:
MTD-niveaudeau/niveaudeau-now:ddMMyyyy-hh:mm:ss.json
I also tried the $simple{ } placeholder, with no success.
I would really like to use the @Outgoing annotation instead of writing it like this:
@Incoming("xxxxxxxx")
public void consume(final String message) {
    log.info("Incoming message : {}", message);
    final String now = LocalDateTime.now().format(DateTimeFormatter.ofPattern("ddMMyyyy-HH:mm:ss"));
    Multi
        .createFrom()
        .completionStage(camel
            .createProducerTemplate()
            .asyncSendBody(
                "azure-blob://xxxxxxx/xxxxx/xxxxxx/xxxxx-" + now + ".json?credentials=#credentials&operation=updateBlockBlob",
                message))
        .subscribe()
        .with(e -> log.info("Message successfully sent to Azure"));
}
How can I do this?
Thank you!
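For comparison, here is a minimal sketch of the same send as a plain Camel route with a dynamic endpoint; toD evaluates simple-language expressions such as ${date:...} per exchange, unlike a static endpoint URI resolved once from application.properties. The channel and endpoint names are the question's placeholders, and bridging the channel through the reactive-streams component is an assumption:

import org.apache.camel.builder.RouteBuilder;

public class BlobRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // "water" mirrors the channel name from application.properties (assumption)
        from("reactive-streams://water")
            // toD resolves ${date:now:...} at send time, so each message
            // gets a fresh timestamp in the blob name
            .toD("azure-blob://xxxxxx/xxxxx/xxxx/xxxxx-${date:now:ddMMyyyy-HH:mm:ss}.json"
                    + "?credentials=#credentials&operation=updateBlockBlob");
    }
}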

Related

cannot read kinesis stream with flink, getting SdkClientException: Unable to execute HTTP request: Current token (VALUE_STRING)

I am trying to read a Kinesis stream (actually running against a local mock, kinesa) from Flink. This is my consumer configuration:
val consumerConfig = new Properties()
consumerConfig.put(AWSConfigConstants.AWS_REGION, "us-east-1")
consumerConfig.put(AWSConfigConstants.AWS_ACCESS_KEY_ID, "x")
consumerConfig.put(AWSConfigConstants.AWS_SECRET_ACCESS_KEY, "x")
consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST")
val kinesis = env
  .addSource(new FlinkKinesisConsumer[String](inputStreamName, new SimpleStringSchema(), consumerConfig))
  .print()
but when I place a record on the stream:
aws --endpoint-url=http://localhost:4567 kinesis put-record --data "hello" --partition-key 0 --stream-name ExampleInputStream
I get the following error:
org.apache.flink.kinesis.shaded.com.amazonaws.SdkClientException: Unable to execute HTTP request: Current token (VALUE_STRING) not VALUE_EMBEDDED_OBJECT, can not access as binary
at [Source: (org.apache.flink.kinesis.shaded.com.amazonaws.event.ResponseProgressInputStream); line: -1, column: 304]
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1201)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1147)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:796)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.doInvoke(AmazonKinesisClient.java:2809)
at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2776)
at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.invoke(AmazonKinesisClient.java:2765)
at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.executeGetRecords(AmazonKinesisClient.java:1292)
at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.getRecords(AmazonKinesisClient.java:1263)
at org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.getRecords(KinesisProxy.java:247)
at org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer.getRecords(ShardConsumer.java:401)
at org.apache.flink.streaming.connectors.kinesis.internals.ShardConsumer.run(ShardConsumer.java:244)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.fasterxml.jackson.core.JsonParseException: Current token (VALUE_STRING) not VALUE_EMBEDDED_OBJECT, can not access as binary
at [Source: (org.apache.flink.kinesis.shaded.com.amazonaws.event.ResponseProgressInputStream); line: -1, column: 304]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:712)
at com.fasterxml.jackson.dataformat.cbor.CBORParser.getBinaryValue(CBORParser.java:1660)
at com.fasterxml.jackson.core.JsonParser.getBinaryValue(JsonParser.java:1484)
at org.apache.flink.kinesis.shaded.com.amazonaws.transform.SimpleTypeCborUnmarshallers$ByteBufferCborUnmarshaller.unmarshall(SimpleTypeCborUnmarshallers.java:198)
at org.apache.flink.kinesis.shaded.com.amazonaws.transform.SimpleTypeCborUnmarshallers$ByteBufferCborUnmarshaller.unmarshall(SimpleTypeCborUnmarshallers.java:196)
at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.model.transform.RecordJsonUnmarshaller.unmarshall(RecordJsonUnmarshaller.java:61)
at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.model.transform.RecordJsonUnmarshaller.unmarshall(RecordJsonUnmarshaller.java:29)
at org.apache.flink.kinesis.shaded.com.amazonaws.transform.ListUnmarshaller.unmarshallJsonToList(ListUnmarshaller.java:92)
at org.apache.flink.kinesis.shaded.com.amazonaws.transform.ListUnmarshaller.unmarshall(ListUnmarshaller.java:46)
at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.model.transform.GetRecordsResultJsonUnmarshaller.unmarshall(GetRecordsResultJsonUnmarshaller.java:53)
at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.model.transform.GetRecordsResultJsonUnmarshaller.unmarshall(GetRecordsResultJsonUnmarshaller.java:29)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:118)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:43)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:69)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1714)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleSuccessResponse(AmazonHttpClient.java:1434)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1356)
at org.apache.flink.kinesis.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1139)
... 20 more
In my case I added these system properties:
// AWS SDK v1 (fully-qualified constant)
System.setProperty(com.amazonaws.SDKGlobalConfiguration.AWS_CBOR_DISABLE_SYSTEM_PROPERTY, "true")
// AWS SDK v1 (same property, with the constant imported)
System.setProperty(SDKGlobalConfiguration.AWS_CBOR_DISABLE_SYSTEM_PROPERTY, "true")
// AWS SDK v2 equivalent
System.setProperty(SdkSystemSetting.CBOR_ENABLED.property(), "false")
From the stack trace it looks like the Flink Kinesis consumer expects CBOR while your record is written as a simple string.
It seems like this post contains some hints on how to align your local setup with the consumer side.
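If setting the property in code is awkward, the same switch can go on the JVM command line; assuming the AWS SDK v1 property name behind that constant:
-Dcom.amazonaws.sdk.disableCbor=true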

Error integration of solr 5.5.0 with nutch 1.13: 'Connection pool shut down'

I had a problem when I tried to integrate Solr with Nutch:
Nutch version: 1.13
Solr version: 5.5.0 (as recommended by the official documentation: https://wiki.apache.org/nutch/NutchTutorial#Verify_your_Nutch_installation)
The error is:
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance
solr.zookeeper.hosts : URL of the Zookeeper quorum
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : username for authentication
solr.auth.password : password for authentication
Indexer: number of documents indexed, deleted, or skipped:
Indexer: finished at 2017-11-30 01:34:49, elapsed: 00:00:01
Cleaning up index if possible
apache-nutch-1.13/bin/nutch clean -Dsolr.server.url=http://localhost:8983/solr/nutch crawling_dir/crawldb
SolrIndexer: deleting 1/1 documents
ERROR CleaningJob: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
at org.apache.nutch.indexer.CleaningJob.delete(CleaningJob.java:174)
at org.apache.nutch.indexer.CleaningJob.run(CleaningJob.java:197)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.CleaningJob.main(CleaningJob.java:208)
Error running:
apache-nutch-1.13/bin/nutch clean -Dsolr.server.url=http://localhost:8983/solr/nutch crawling_dir/crawldb
Failed with exit value 255.
In the log file:
2017-11-30 01:34:50,851 WARN output.FileOutputCommitter - Output Path is null in cleanupJob()
2017-11-30 01:34:50,851 WARN mapred.LocalJobRunner - job_local531807742_0001
java.lang.Exception: java.lang.IllegalStateException: Connection pool shut down
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.IllegalStateException: Connection pool shut down
at org.apache.http.util.Asserts.check(Asserts.java:34)
at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:169)
at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:202)
at org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:184)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:481)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:482)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.commit(SolrIndexWriter.java:191)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.close(SolrIndexWriter.java:179)
at org.apache.nutch.indexer.IndexWriters.close(IndexWriters.java:117)
at org.apache.nutch.indexer.CleaningJob$DeleterReducer.close(CleaningJob.java:122)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2017-11-30 01:34:51,458 ERROR indexer.CleaningJob - CleaningJob: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
at org.apache.nutch.indexer.CleaningJob.delete(CleaningJob.java:174)
at org.apache.nutch.indexer.CleaningJob.run(CleaningJob.java:197)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.CleaningJob.main(CleaningJob.java:208)
Do you have any idea?
I had the same problem, and yours is probably due to the same reason:
https://issues.apache.org/jira/browse/NUTCH-2269
Try patching it and the error should be gone.
From my findings, it appears to be a bug. Here's a blog post that explains it well: https://reformatcode.com/code/apache-configuration/apache-nutch-112-with-apache-solr-621-give-an-error

Flow has failed with error Shutting down because of violation of the Reactive Streams specification

It seems I can never get the error handling right when using Akka Streams.
So this is my code:
var db = Database.forConfig("oracle")
var mysqlDb = Database.forConfig("mysql_read")
var mysqlDbWrite = Database.forConfig("mysql_write")

implicit val actorSystem = ActorSystem()

val decider: Supervision.Decider = {
  case _: Exception =>
    println("got an exception restarting connections")
    // let us restart our connections
    db.close()
    mysqlDb.close()
    mysqlDbWrite.close()
    db = Database.forConfig("oracle")
    mysqlDb = Database.forConfig("mysql_read")
    mysqlDbWrite = Database.forConfig("mysql_write")
    Supervision.Restart
}

implicit val materializer = ActorMaterializer(ActorMaterializerSettings(actorSystem).withSupervisionStrategy(decider))
and I have a flow like this:
val alreadyExistsFilter: Flow[Foo, Foo, NotUsed] = Flow[Foo].mapAsync(10) { foo =>
  try {
    val existsQuery = sql"""SELECT id FROM foo WHERE id = ${foo.id}""".as[Long]
    mysqlDbWrite.run(existsQuery).map(v => (foo, v))
  } catch {
    case e: Throwable =>
      println(s"Lookup failed for ${foo}")
      throw e // will restart the stream
  }
}.collect { case (f, v) if v.isEmpty => f }
So basically if the foo already exists in MySQL then the record should not be processed any further by the stream.
My hope with this code was that if anything fails with the mysql lookup (the mysql machine is pretty bad and timeouts are common), the record will be printed and discarded and the stream will continue with the remaining records courtesy of the supervision.
When I run this code, I see errors like:
[error] (mysql_write network timeout executor) java.lang.RuntimeException: java.sql.SQLException: Invalid socket timeout value or state
java.lang.RuntimeException: java.sql.SQLException: Invalid socket timeout value or state
at com.mysql.jdbc.ConnectionImpl$12.run(ConnectionImpl.java:5576)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Invalid socket timeout value or state
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:998)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:937)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:926)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:872)
at com.mysql.jdbc.MysqlIO.setSocketTimeout(MysqlIO.java:4852)
at com.mysql.jdbc.ConnectionImpl$12.run(ConnectionImpl.java:5574)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at com.mysql.jdbc.MysqlIO.setSocketTimeout(MysqlIO.java:4850)
at com.mysql.jdbc.ConnectionImpl$12.run(ConnectionImpl.java:5574)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
and
[error] (mysql_write network timeout executor) java.lang.NullPointerException
java.lang.NullPointerException
at com.mysql.jdbc.MysqlIO.setSocketTimeout(MysqlIO.java:4850)
at com.mysql.jdbc.ConnectionImpl$12.run(ConnectionImpl.java:5574)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
One thing which surprises me here is that these exceptions don't come from my catch block; I never see its println statement. The stack trace doesn't show me where the error originated, but since it says mysql_write I can assume it is the flow above, because only this flow uses mysql_write.
Finally the entire stream crashes with the error
[trace] Stack trace suppressed: run last compile:runMain for the full output.
flow has failed with error Shutting down because of violation of the Reactive Streams specification.
14:51:06,973 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncKafkaAppender] - Worker thread will flush remaining events before exiting.
[success] Total time: 3480 s, completed Sep 26, 2017 2:51:07 PM
14:51:07,603 |-INFO in ch.qos.logback.core.hook.DelayingShutdownHook#2320545b - Sleeping for 1 seconds
I don't know what I did to violate the reactive streams specification!!
A first stab at getting a more predictable solution would be removing the blocking behaviour (Await.result) and using mapAsync. A rewrite of the alreadyExistsFilter flow could be:
val alreadyExistsFilter: Flow[Foo, Foo, NotUsed] = Flow[Foo].mapAsync(3) { foo =>
  val existsQuery = sql"""SELECT id FROM foo WHERE id = ${foo.id}""".as[Long]
  mysqlDbWrite.run(existsQuery).map(res => foo -> res) // no Await: return the Future itself
}.collect {
  case (foo, res) if res.isEmpty => foo // keep only records not already in MySQL
}
More info on blocking in Akka can be found in the docs.
The answer given by Stefano is correct: the error was indeed coming from blocking code in the flow.
However, my initial program was running against Scala 2.11, and even after switching to mapAsync the problem persisted.
Since this is a command-line tool, it was easy for me to switch to Scala 2.12 and try again.
With Scala 2.12 it worked perfectly.
One thing which greatly helped me is having "ch.qos.logback" % "logback-classic" % "1.2.3" in the dependencies. This shows each and every SQL statement being executed, so you can easily see if something is going wrong.
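For instance, with logback on the classpath, a logger entry like this in logback.xml surfaces the statements (assuming Slick's documented statement logger name):

<configuration>
    <!-- Logs every SQL statement Slick executes -->
    <logger name="slick.jdbc.JdbcBackend.statement" level="DEBUG" />
</configuration>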

Apache Camel move file after processing

Is there a way to move a file named 'yyyy-x.zip' after processing to a folder 'yyyy/yyyy-x.zip' with the file component? I thought of the following:
from("file://directory?preMove=working&move=${${file:onlyname}.substring(0,3)}/${file:onlyname}&moveFailed=error")
.doStuff(...);
But I always get the following exception:
org.apache.camel.language.simple.types.SimpleIllegalSyntaxException: Unknown function: ${file:onlyname}.toString().subString(0,3) at location 0 ${${file:onlyname}.toString().subString(0,3)}/${file:onlyname}
at org.apache.camel.language.simple.ast.SimpleFunctionStart$1.evaluate(SimpleFunctionStart.java:107)
at org.apache.camel.builder.ExpressionBuilder$75.evaluate(ExpressionBuilder.java:1795)
at org.apache.camel.support.ExpressionAdapter.evaluate(ExpressionAdapter.java:36)
at org.apache.camel.component.file.strategy.GenericFileExpressionRenamer.renameFile(GenericFileExpressionRenamer.java:37)
at org.apache.camel.component.file.strategy.GenericFileRenameProcessStrategy.commit(GenericFileRenameProcessStrategy.java:87)
at org.apache.camel.component.file.GenericFileOnCompletion.processStrategyCommit(GenericFileOnCompletion.java:127)
at org.apache.camel.component.file.GenericFileOnCompletion.onCompletion(GenericFileOnCompletion.java:83)
at org.apache.camel.component.file.GenericFileOnCompletion.onComplete(GenericFileOnCompletion.java:57)
at org.apache.camel.util.UnitOfWorkHelper.doneSynchronizations(UnitOfWorkHelper.java:104)
at org.apache.camel.impl.DefaultUnitOfWork.done(DefaultUnitOfWork.java:230)
at org.apache.camel.util.UnitOfWorkHelper.doneUow(UnitOfWorkHelper.java:65)
at org.apache.camel.processor.CamelInternalProcessor$UnitOfWorkProcessorAdvice.after(CamelInternalProcessor.java:674)
at org.apache.camel.processor.CamelInternalProcessor$UnitOfWorkProcessorAdvice.after(CamelInternalProcessor.java:629)
at org.apache.camel.processor.CamelInternalProcessor$InternalCallback.done(CamelInternalProcessor.java:246)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:109)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:197)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:460)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:227)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:191)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:175)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:102)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.camel.language.simple.types.SimpleParserException: Unknown function: ${file:onlyname}.toString().subString(0,3)
at org.apache.camel.language.simple.ast.SimpleFunctionExpression.createSimpleExpression(SimpleFunctionExpression.java:230)
at org.apache.camel.language.simple.ast.SimpleFunctionExpression.createExpression(SimpleFunctionExpression.java:45)
at org.apache.camel.language.simple.ast.SimpleFunctionStart$1.evaluate(SimpleFunctionStart.java:104)
... 27 common frames omitted
EDIT:
Thanks to the tip from Claus Ibsen I managed to get the following solution:
from("file://directory?preMove=working&move=${bean:myBean.myMethod(${file:onlyname})}&moveFailed=error")
.doStuff(...);
With the following Bean:
import org.springframework.stereotype.Service;

@Service("myBean")
public class MyBeanImpl implements MyBean {
    @Override
    public String myMethod(String fileName) {
        return ...create the filename...;
    }
}
You cannot use complex functions with substring and the like directly in the URI.
You can use a bean to compute the name: use move=${bean:myBean.myMethod}, then register a bean with id myBean in the registry which computes the name.
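A minimal sketch of such a method body, assuming the target folder is simply the year prefix of a name like 'yyyy-x.zip' (four characters rather than the three in the original attempt):

public String myMethod(String fileName) {
    // hypothetical example: "2017-x.zip" -> "2017/2017-x.zip"
    return fileName.substring(0, 4) + "/" + fileName;
}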

Deleting records from Salesforce via Mulesoft ESB

I am trying to delete a list of records from Salesforce via Mulesoft ESB.
I have already created an example project on GitHub, and the flow XML can be found here.
The XML snippet for deleting records is below:
<sfdc:delete config-ref="Salesforce__Basic_authentication" doc:name="Salesforce">
<sfdc:ids ref="#[payload]" />
</sfdc:delete>
where the payload's data type is a list of strings.
While deleting the records I get the exception below:
ERROR 2015-11-05 23:39:39,755 [[tutorial].HTTP_Listener_Configuration.worker.01] org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message : Could not serialize object (org.mule.api.serialization.SerializationException)
Type : org.mule.api.transformer.TransformerException
Code : MULE_ERROR--2
JavaDoc : http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transformer/TransformerException.html
Transformer : ObjectToByteArray{this=7be7e15, name='_ObjectToByteArray', ignoreBadInput=false, returnClass=SimpleDataType{type=[B, mimeType='*/*', encoding='null'}, sourceTypes=[SimpleDataType{type=java.io.Serializable, mimeType='*/*', encoding='null'}, SimpleDataType{type=java.io.InputStream, mimeType='*/*', encoding='null'}, SimpleDataType{type=java.lang.String, mimeType='*/*', encoding='null'}, SimpleDataType{type=org.mule.api.transport.OutputHandler, mimeType='*/*', encoding='null'}]}
********************************************************************************
Exception stack is:
1. com.sforce.soap.partner.DeleteResult (java.io.NotSerializableException)
java.io.ObjectOutputStream:1184 (null)
2. java.io.NotSerializableException: com.sforce.soap.partner.DeleteResult (org.apache.commons.lang.SerializationException)
org.apache.commons.lang.SerializationUtils:111 (null)
3. Could not serialize object (org.mule.api.serialization.SerializationException)
org.mule.serialization.internal.AbstractObjectSerializer:68 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/serialization/SerializationException.html)
4. Could not serialize object (org.mule.api.serialization.SerializationException) (org.mule.api.transformer.TransformerException)
org.mule.transformer.simple.SerializableToByteArray:66 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transformer/TransformerException.html)
********************************************************************************
Root Exception stack trace:
java.io.NotSerializableException: com.sforce.soap.partner.DeleteResult
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.commons.lang.SerializationUtils.serialize(SerializationUtils.java:108)
at org.apache.commons.lang.SerializationUtils.serialize(SerializationUtils.java:133)
at org.mule.serialization.internal.JavaObjectSerializer.doSerialize(JavaObjectSerializer.java:44)
at org.mule.serialization.internal.AbstractObjectSerializer.serialize(AbstractObjectSerializer.java:64)
at org.mule.transformer.simple.SerializableToByteArray.doTransform(SerializableToByteArray.java:62)
at org.mule.transformer.simple.ObjectToByteArray.doTransform(ObjectToByteArray.java:78)
at org.mule.transformer.AbstractTransformer.transform(AbstractTransformer.java:415)
at org.mule.DefaultMuleMessage.getPayload(DefaultMuleMessage.java:425)
at org.mule.DefaultMuleMessage.getPayload(DefaultMuleMessage.java:373)
at org.mule.DefaultMuleMessage.getPayloadAsBytes(DefaultMuleMessage.java:714)
at org.mule.module.http.internal.listener.HttpResponseBuilder.build(HttpResponseBuilder.java:175)
at org.mule.module.http.internal.listener.HttpMessageProcessorTemplate.sendResponseToClient(HttpMessageProcessorTemplate.java:97)
at org.mule.execution.AsyncResponseFlowProcessingPhase.runPhase(AsyncResponseFlowProcessingPhase.java:83)
at org.mule.execution.AsyncResponseFlowProcessingPhase.runPhase(AsyncResponseFlowProcessingPhase.java:38)
at org.mule.execution.PhaseExecutionEngine$InternalPhaseExecutionEngine.phaseSuccessfully(PhaseExecutionEngine.java:65)
at org.mule.execution.PhaseExecutionEngine$InternalPhaseExecutionEngine.phaseSuccessfully(PhaseExecutionEngine.java:69)
at com.mulesoft.mule.throttling.ThrottlingPhase.runPhase(ThrottlingPhase.java:185)
at com.mulesoft.mule.throttling.ThrottlingPhase.runPhase(ThrottlingPhase.java:1)
at org.mule.execution.PhaseExecutionEngine$InternalPhaseExecutionEngine.process(PhaseExecutionEngine.java:114)
at org.mule.execution.PhaseExecutionEngine.process(PhaseExecutionEngine.java:41)
at org.mule.execution.MuleMessageProcessingManager.processMessage(MuleMessageProcessingManager.java:32)
at org.mule.module.http.internal.listener.DefaultHttpListener$1.handleRequest(DefaultHttpListener.java:126)
at org.mule.module.http.internal.listener.grizzly.GrizzlyRequestDispatcherFilter.handleRead(GrizzlyRequestDispatcherFilter.java:83)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.mule.module.http.internal.listener.grizzly.ExecutorPerServerAddressIOStrategy.run0(ExecutorPerServerAddressIOStrategy.java:102)
at org.mule.module.http.internal.listener.grizzly.ExecutorPerServerAddressIOStrategy.access$100(ExecutorPerServerAddressIOStrategy.java:30)
at org.mule.module.http.internal.listener.grizzly.ExecutorPerServerAddressIOStrategy$WorkerThreadRunnable.run(ExecutorPerServerAddressIOStrategy.java:125)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
You are actually deleting successfully. The problem is that your Mule flow ends after making the Salesforce call, and by default it tries to return the current payload. That payload is the result of the SF call, which is of type com.sforce.soap.partner.DeleteResult, and Mule doesn't know how to serialize it.
To start, try simply adding a "Set payload" component after the SF call and set payload to anything you want, say "hello world". Once you understand what's happening, you can add a groovy script to process the DeleteResult and actually determine if the deletion was successful or not, and possibly do something if it wasn't.
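A sketch of that first step in the flow XML; the set-payload value is an arbitrary placeholder:

<sfdc:delete config-ref="Salesforce__Basic_authentication" doc:name="Salesforce">
    <sfdc:ids ref="#[payload]" />
</sfdc:delete>
<set-payload value="hello world" doc:name="Set Payload" />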
Alternatively, if you don't want to explicitly change the payload and want the exact payload coming out of the SFDC connector, try putting an Object to JSON transformer after the sfdc connector and a logger to log the payload.
