Deleting records from Salesforce via Mulesoft ESB

I am trying to delete a list of records from Salesforce via Mulesoft ESB.
I have already created an example project on GitHub, and the flow XML can be found here.
The XML snippet for deleting the records is below:
<sfdc:delete config-ref="Salesforce__Basic_authentication" doc:name="Salesforce">
    <sfdc:ids ref="#[payload]"/>
</sfdc:delete>
where the payload's data type is List<String>.
While deleting the records I get the exception below:
ERROR 2015-11-05 23:39:39,755 [[tutorial].HTTP_Listener_Configuration.worker.01] org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message : Could not serialize object (org.mule.api.serialization.SerializationException)
Type : org.mule.api.transformer.TransformerException
Code : MULE_ERROR--2
JavaDoc : http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transformer/TransformerException.html
Transformer : ObjectToByteArray{this=7be7e15, name='_ObjectToByteArray', ignoreBadInput=false, returnClass=SimpleDataType{type=[B, mimeType='*/*', encoding='null'}, sourceTypes=[SimpleDataType{type=java.io.Serializable, mimeType='*/*', encoding='null'}, SimpleDataType{type=java.io.InputStream, mimeType='*/*', encoding='null'}, SimpleDataType{type=java.lang.String, mimeType='*/*', encoding='null'}, SimpleDataType{type=org.mule.api.transport.OutputHandler, mimeType='*/*', encoding='null'}]}
********************************************************************************
Exception stack is:
1. com.sforce.soap.partner.DeleteResult (java.io.NotSerializableException)
java.io.ObjectOutputStream:1184 (null)
2. java.io.NotSerializableException: com.sforce.soap.partner.DeleteResult (org.apache.commons.lang.SerializationException)
org.apache.commons.lang.SerializationUtils:111 (null)
3. Could not serialize object (org.mule.api.serialization.SerializationException)
org.mule.serialization.internal.AbstractObjectSerializer:68 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/serialization/SerializationException.html)
4. Could not serialize object (org.mule.api.serialization.SerializationException) (org.mule.api.transformer.TransformerException)
org.mule.transformer.simple.SerializableToByteArray:66 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transformer/TransformerException.html)
********************************************************************************
Root Exception stack trace:
java.io.NotSerializableException: com.sforce.soap.partner.DeleteResult
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.commons.lang.SerializationUtils.serialize(SerializationUtils.java:108)
at org.apache.commons.lang.SerializationUtils.serialize(SerializationUtils.java:133)
at org.mule.serialization.internal.JavaObjectSerializer.doSerialize(JavaObjectSerializer.java:44)
at org.mule.serialization.internal.AbstractObjectSerializer.serialize(AbstractObjectSerializer.java:64)
at org.mule.transformer.simple.SerializableToByteArray.doTransform(SerializableToByteArray.java:62)
at org.mule.transformer.simple.ObjectToByteArray.doTransform(ObjectToByteArray.java:78)
at org.mule.transformer.AbstractTransformer.transform(AbstractTransformer.java:415)
at org.mule.DefaultMuleMessage.getPayload(DefaultMuleMessage.java:425)
at org.mule.DefaultMuleMessage.getPayload(DefaultMuleMessage.java:373)
at org.mule.DefaultMuleMessage.getPayloadAsBytes(DefaultMuleMessage.java:714)
at org.mule.module.http.internal.listener.HttpResponseBuilder.build(HttpResponseBuilder.java:175)
at org.mule.module.http.internal.listener.HttpMessageProcessorTemplate.sendResponseToClient(HttpMessageProcessorTemplate.java:97)
at org.mule.execution.AsyncResponseFlowProcessingPhase.runPhase(AsyncResponseFlowProcessingPhase.java:83)
at org.mule.execution.AsyncResponseFlowProcessingPhase.runPhase(AsyncResponseFlowProcessingPhase.java:38)
at org.mule.execution.PhaseExecutionEngine$InternalPhaseExecutionEngine.phaseSuccessfully(PhaseExecutionEngine.java:65)
at org.mule.execution.PhaseExecutionEngine$InternalPhaseExecutionEngine.phaseSuccessfully(PhaseExecutionEngine.java:69)
at com.mulesoft.mule.throttling.ThrottlingPhase.runPhase(ThrottlingPhase.java:185)
at com.mulesoft.mule.throttling.ThrottlingPhase.runPhase(ThrottlingPhase.java:1)
at org.mule.execution.PhaseExecutionEngine$InternalPhaseExecutionEngine.process(PhaseExecutionEngine.java:114)
at org.mule.execution.PhaseExecutionEngine.process(PhaseExecutionEngine.java:41)
at org.mule.execution.MuleMessageProcessingManager.processMessage(MuleMessageProcessingManager.java:32)
at org.mule.module.http.internal.listener.DefaultHttpListener$1.handleRequest(DefaultHttpListener.java:126)
at org.mule.module.http.internal.listener.grizzly.GrizzlyRequestDispatcherFilter.handleRead(GrizzlyRequestDispatcherFilter.java:83)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.mule.module.http.internal.listener.grizzly.ExecutorPerServerAddressIOStrategy.run0(ExecutorPerServerAddressIOStrategy.java:102)
at org.mule.module.http.internal.listener.grizzly.ExecutorPerServerAddressIOStrategy.access$100(ExecutorPerServerAddressIOStrategy.java:30)
at org.mule.module.http.internal.listener.grizzly.ExecutorPerServerAddressIOStrategy$WorkerThreadRunnable.run(ExecutorPerServerAddressIOStrategy.java:125)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

You are actually deleting successfully. The problem is that your Mule flow ends after making the Salesforce call, and by default it tries to return the current payload. That payload is the result of the Salesforce call, which is of type com.sforce.soap.partner.DeleteResult, and Mule doesn't know how to serialize it into an HTTP response.
To start, try simply adding a "Set Payload" component after the Salesforce call and set the payload to anything you want, say "hello world". Once you understand what's happening, you can add a Groovy script to process the DeleteResult, determine whether the deletion actually succeeded, and possibly do something if it didn't.
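As a minimal sketch (the flow name and the replacement value are illustrative, not from the original project):

<flow name="deleteRecordsFlow">
    <sfdc:delete config-ref="Salesforce__Basic_authentication" doc:name="Salesforce">
        <sfdc:ids ref="#[payload]"/>
    </sfdc:delete>
    <!-- Replace the non-serializable DeleteResult before the HTTP
         listener tries to serialize the payload into a response -->
    <set-payload value="Records deleted" doc:name="Set Payload"/>
</flow>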

Alternatively, if you don't want to explicitly change the payload and want to see exactly what comes out of the SFDC connector, try putting an Object to JSON transformer after the SFDC connector, followed by a logger that logs the payload.
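A rough sketch of that variant, assuming Mule 3's standard JSON module is available:

<sfdc:delete config-ref="Salesforce__Basic_authentication" doc:name="Salesforce">
    <sfdc:ids ref="#[payload]"/>
</sfdc:delete>
<!-- Serializes the DeleteResult list to JSON, which is both loggable
     and safe to return over HTTP -->
<json:object-to-json-transformer doc:name="Object to JSON"/>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>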

Related

Flink: Could not find a suitable table factory for 'org.apache.flink.table.factories.DeserializationSchemaFactory' in the classpath

I am using Flink's Table API. I receive data from Kafka, register it as a table, process it with a SQL statement, and finally convert the result back to a stream and write it to a directory. The code looks like this:
def main(args: Array[String]): Unit = {
  val sEnv = StreamExecutionEnvironment.getExecutionEnvironment
  sEnv.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
  val tEnv = TableEnvironment.getTableEnvironment(sEnv)
  tEnv.connect(
      new Kafka()
        .version("0.11")
        .topic("user")
        .startFromEarliest()
        .property("zookeeper.connect", "")
        .property("bootstrap.servers", ""))
    .withFormat(
      new Json()
        .failOnMissingField(false)
        .deriveSchema() // derive the format from the table's schema
    )
    .withSchema(
      new Schema()
        .field("username_skey", Types.STRING)
    )
    .inAppendMode()
    .registerTableSource("user")
  val userTest: Table = tEnv.sqlQuery(
    """select ** from ** join **""".stripMargin)
  val endStream = tEnv.toRetractStream[Row](userTest)
  endStream.writeAsText("/tmp/sqlres", WriteMode.OVERWRITE)
  sEnv.execute("Test_New_Sign_Student")
}
The job runs fine in my local test, but when I submit it to the cluster, I get the following error:
=======================================================
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
    at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
    at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
    at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
    at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
    at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
    at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.DeserializationSchemaFactory' in the classpath.
Reason: No factory implements 'org.apache.flink.table.factories.DeserializationSchemaFactory'.
The following properties are requested:
connector.properties.0.key=zookeeper.connect
....
schema.9.name=roles
schema.9.type=VARCHAR
update-mode=append
The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
org.apache.flink.streaming.connectors.kafka.Kafka011TableSourceSinkFactory
    at org.apache.flink.table.factories.TableFactoryService$.filterByFactoryClass(TableFactoryService.scala:176)
    at org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:125)
    at org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:100)
    at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.scala)
    at org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactoryBase.getDeserializationSchema(KafkaTableSourceSinkFactoryBase.java:259)
    at org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactoryBase.createStreamTableSource(KafkaTableSourceSinkFactoryBase.java:144)
    at org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:50)
    at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:44)
    at org.clay.test.Test_New_Sign_Student$.main(Test_New_Sign_Student.scala:64)
    at org.clay.test.Test_New_Sign_Student.main(Test_New_Sign_Student.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
===================================
Can someone tell me what is causing this? I am very confused.
If you are using the maven-shade-plugin, make sure the SPI transformer is in place.
Flink uses the Java Service Provider Interface (SPI) to discover source/sink connectors.
Without this transformer you are guaranteed to encounter "org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory", which is what happened to me.
https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html#update-mode
Flink officially points this out; search for "SPI" on that page.
<transformers>
    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
</transformers>
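For context, a sketch of where that transformer sits in a full shade-plugin configuration (the plugin version is illustrative):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.1</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <!-- Merges META-INF/services files from all JARs so
                         Flink's SPI lookup can see every TableFactory -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>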
You have to add the JAR dependencies of the connectors (Kafka) and formats (JSON) that you are using to the classpath of your program: either build a fat JAR that includes them, or provide them to the classpath of the Flink cluster by copying them into its ./lib folder.
Check the Flink documentation for links to download the respective dependencies.
I met the same problem; adding the parameter --connector.type kafka when running the application solved it for me.

BSP:R5417: Any SIG_KEY_INFO MUST contain a SECURITY_TOKEN_REFERENCE child element

The SOAP server (which I don't control) sent me back a response which contains the following section:
<ds:KeyInfo>
    <ds:X509Data>
        <ds:X509SubjectName>
            EMAILADDRESS=***#******, CN=*********, OU=***, O=*****, L=****, ST=***, C=**
        </ds:X509SubjectName>
    </ds:X509Data>
</ds:KeyInfo>
The Web Services Security X.509 Certificate Token Profile 1.1 specification says in section 3.2 that <ds:X509Data> must be a subelement of <wsse:SecurityTokenReference>.
Am I looking at the right docs, and am I right that the server is sending an incorrect response?
Is there a way to fix this on the client side?
P.S. I tried to change the WSS4JInInterceptor and set some properties to change the key type, but I think I did this incorrectly.
P.P.S. The error stack trace is below:
org.apache.cxf.binding.soap.SoapFault: BSP:R5417: Any SIG_KEY_INFO MUST contain a SECURITY_TOKEN_REFERENCE child element
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.createSoapFault(WSS4JInInterceptor.java:809)
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.handleMessage(WSS4JInInterceptor.java:313)
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.handleMessage(WSS4JInInterceptor.java:93)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.endpoint.ClientImpl.onMessage(ClientImpl.java:798)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1636)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1525)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1330)
at org.apache.cxf.io.CacheAndWriteOutputStream.postClose(CacheAndWriteOutputStream.java:56)
at org.apache.cxf.io.CachedOutputStream.close(CachedOutputStream.java:215)
at org.apache.cxf.io.CacheAndWriteOutputStream.postClose(CacheAndWriteOutputStream.java:56)
at org.apache.cxf.io.CachedOutputStream.close(CachedOutputStream.java:215)
at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:638)
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.endpoint.ClientImpl.doInvoke(ClientImpl.java:514)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:423)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:326)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:279)
at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:96)
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:137)
Caused by: org.apache.wss4j.common.ext.WSSecurityException: BSP:R5417: Any SIG_KEY_INFO MUST contain a SECURITY_TOKEN_REFERENCE child element
at org.apache.wss4j.dom.bsp.BSPEnforcer.handleBSPRule(BSPEnforcer.java:57)
at org.apache.wss4j.dom.processor.SignatureProcessor.handleToken(SignatureProcessor.java:158)
at org.apache.wss4j.dom.WSSecurityEngine.processSecurityHeader(WSSecurityEngine.java:427)
at org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor.handleMessage(WSS4JInInterceptor.java:257)
... 25 more
After some research I simply removed the WSS4JInInterceptor and developed my own custom validator.
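Another option sometimes suggested, sketched here as an untested assumption rather than the answerer's actual fix, is to relax WSS4J's Basic Security Profile enforcement on the CXF client instead of removing the interceptor entirely:

import java.util.HashMap;
import java.util.Map;

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor;
import org.apache.wss4j.dom.handler.WSHandlerConstants;

// Sketch: disable BSP compliance checking so rules like R5417 are not
// treated as faults. 'port' is your JAX-WS proxy.
Map<String, Object> inProps = new HashMap<>();
inProps.put(WSHandlerConstants.ACTION, WSHandlerConstants.SIGNATURE);
inProps.put(WSHandlerConstants.IS_BSP_COMPLIANT, "false");

Client client = ClientProxy.getClient(port);
client.getInInterceptors().add(new WSS4JInInterceptor(inProps));

The signature verification properties (keystore configuration, etc.) from the existing setup would still be needed alongside this.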

java.io.IOException: Authorization loop detected on Conduit in Apache Camel

org.apache.cxf.interceptor.Fault: Could not send Message.
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:64)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
at org.apache.cxf.endpoint.ClientImpl.doInvoke(ClientImpl.java:514)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:416)
at org.apache.camel.component.cxf.CxfProducer.process(CxfProducer.java:120)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:163)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)
at org.apache.camel.spring.spi.TransactionErrorHandler.processByErrorHandler(TransactionErrorHandler.java:218)
at org.apache.camel.spring.spi.TransactionErrorHandler.process(TransactionErrorHandler.java:99)
Caused by: java.io.IOException: Authorization loop detected on Conduit "webserviceURL here..."
Camel version: 2.16.1.
I am trying to call a WS, published from "route id=1" (defined in "routeContext id=1"), from "route id=2" (defined in "routeContext id=2"). The WS is defined using a CXF endpoint. Both routeContexts have an "http:conduit" defined. I tried removing one of them, without success.
The solution is to narrow the scope of the http:conduit. This can be achieved by giving the configured conduit a specific name, e.g. http:conduit name="abcd.http", which makes it apply only to 'abcd' (where abcd corresponds to the web service endpoint).
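A minimal sketch of a scoped conduit (the service namespace and port name are illustrative; CXF matches the conduit name against the endpoint's port QName):

<http:conduit name="{http://example.com/ns}MyServicePort.http-conduit">
    <!-- These settings apply only to the matching endpoint, so the
         conduits in the two routeContexts no longer collide -->
    <http:client ReceiveTimeout="30000" ConnectionTimeout="10000"/>
</http:conduit>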

Mule 3.6 with jTDS insertion into SQL Server

It's a basic Mule Database connector flow where I am using the jTDS driver to insert into the database. I observed that if I hard-code a value it gets inserted, but if I try to send it dynamically with payload:'column name' I get the error below. I have tried everything, so I may be using it in the wrong way. Please let me know if I am doing something wrong.
INFO 2015-07-28 14:47:15,461 [[mule_edesk_integration].Pricing_Integration.stage1.02] org.mule.api.processor.LoggerMessageProcessor: ==========>Inserted Data : 22733.0,102520 ,FRR ,1082838 ,USD,28.0
ERROR 2015-07-28 14:47:15,583 [[mule_edesk_integration].Pricing_Integration.stage1.02] org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message : Unable to convert value 22733.0 to NULL. (java.sql.SQLException). Message payload is of type: CaseInsensitiveHashMap
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. Unable to convert value 22733.0 to NULL. (java.sql.SQLException)
net.sourceforge.jtds.jdbc.Support:742 (null)
2. Unable to convert value 22733.0 to NULL. (java.sql.SQLException). Message payload is of type: CaseInsensitiveHashMap (org.mule.api.MessagingException)
org.mule.module.db.internal.processor.AbstractDbMessageProcessor:93 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MessagingException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.sql.SQLException: Unable to convert value 22733.0 to NULL.
at net.sourceforge.jtds.jdbc.Support.convert(Support.java:742)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.setObjectBase(JtdsPreparedStatement.java:590)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.setObject(JtdsPreparedStatement.java:913)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
Use the "from Template" option instead. A little more tedious but works.

Camel with RabbitMQ exception only occurs on second message - mis-spelt exchange name

I'm using Camel within a Spring Boot application, integrating with RabbitMQ, but I am encountering strange behaviour.
My app has RESTful endpoints which convert the HTTP request to a RabbitMQ message and publish it to a predefined exchange. A separate consumer app listens to a queue and processes the messages.
I deliberately entered an incorrect RabbitMQ exchange name (invalidxchangename) to check that the application would fail if the exchange does not exist; however, the Camel context starts without error, and when I send in a first request it does not report any error either. That message gets lost, as there is no matching RabbitMQ exchange. When I submit a second request I receive the following exception, which I would have expected on route startup.
com.rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'invalidxchangename' in vhost
EDIT:
I've tried a simpler example to show the issue in Camel.
I've created a simple route as follows:
from("file:in?fileName=in.txt").log(LoggingLevel.DEBUG, "in here!").to("rabbitmq://localhost:5762/invalidexchange?declare=false");
where there is an existing RabbitMQ exchange called validexchange (so I have deliberately made a typo in the RabbitMQ URI). I would expect the Camel route to fail at startup since the exchange doesn't exist, or at least the first time it tries to process a new in.txt file.
What I actually see in the logs is that it reports no error on startup, and only on the second invocation of the route does it report an error.
2015-03-11 16:17:04.356 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-2 >>> (route2) from(file://in?fileName=in.txt) --> log[in here!] <<< Pattern:InOnly, Headers:...
2015-03-11 16:17:04.360 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-2 >>> (route2) log[in here!] --> rabbitmq://localhost:5762/customerchannel.exchang?declare=false <<< Pattern:InOnly, Headers:...
2015-03-11 16:17:45.073 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-4 >>> (route2) from(file://in?fileName=in.txt) --> log[in here!] <<< Pattern:InOnly, Headers: ...
2015-03-11 16:17:45.079 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-4 >>> (route2) log[in here!] --> rabbitmq://localhost:5762/customerchannel.exchang?declare=false <<< Pattern:InOnly, Headers:...
2015-03-11 16:17:45.092 ERROR 9756 : Failed delivery for (MessageId: ID-SBMELW7W-06220-59960-1426051020468-0-3 on ExchangeId: ID-SBMELW7W-06220-59960-1426051020468-0-4). Exhausted after delivery attempt: 1 caught: com.rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'customerchannel.exchang' in vhost '/', class-id=60, method-id=40)
It looks like the first request causes an error which closes the channel and logs the reason; when you try to use the channel the second time, it returns an AlreadyClosedException carrying the message that caused the channel to close on the first call.
You can test this by publishing the second message to a different exchange name on the same channel and checking which exchange appears in the error. E.g. publish the second message to invalidxchangename2 and you should still see invalidxchangename as the exchange in the error.
To fix it, you should handle the publish result when you publish, and re-establish the connection if there's an error.
If you want to be sure that a message got delivered to a RabbitMQ queue, then you have to use publisher confirms: https://www.rabbitmq.com/confirms.html
Being able to publish a message doesn't mean that the message will reach a queue. You could go to a mailbox and leave a letter inside, but between the time you left the letter there and a postman picked it up, many things could have happened; for example, the mailbox could have caught fire.
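A minimal sketch of publisher confirms with the plain RabbitMQ Java client (the exchange and routing key are placeholders):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConfirmedPublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.confirmSelect(); // enable publisher confirms on this channel
            channel.basicPublish("myexchange", "my.routing.key", null,
                    "hello".getBytes("UTF-8"));
            // Blocks until the broker confirms; throws if the message is
            // nacked, the wait times out, or the channel was closed
            // (e.g. a 404 for a missing exchange)
            channel.waitForConfirmsOrDie(5000);
        }
    }
}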
