I am trying to set up an mllp listener for hl7v2.x messages using camel.
My environment
apache camel and components version 2.18.3
Also, I would like to avoid the HAPI library and use a custom parser for the received and generated messages, since each of my clients uses a different version of the standard and uses the fields quite differently. That's why there is no unmarshalling to the HL7 datatype in the following route, just to string; I'll write the parser myself.
And my route (all the beans and variables are defined elsewhere in the code, I think they are not relevant)
from("netty4:tcp://0.0.0.0:3333?
encoder=#encoderHl7&decoder=#decoderHl7&sync=true")
.log("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
.unmarshal().string()
.to("file://" + rutaSalidaFichero)
;
First, as a proof of concept, I am just trying to copy all the received messages into a file system directory. The messages are correctly received and written to the directory. But I do not know how to generate and send the ACK; an incorrect one is being generated and sent automatically.
If I send an HL7 message from an outer/sending system, the Camel component sends back the same message as the ACK, so the sending system reports an error in return because it is not the ACK it expects. I have sent the HL7 message using Mirth, dcm4chee, HAPI... all of them with the same result.
For instance, if I send the following message from an outer/sender system
MSH|^~\&|LIS|LIS|HIS|HIS|20170412131105||OML^O21|0000000001|P|2.5|||AL|||8859/1|||1.0
PID|1||123456||APELLIDO1&APELLIDO2^NOMBRE|19200101
ORC|RP|009509452919|317018426||||||20170412000000
OBR|1|317018426|317018426|CULT^CULTIVO
I receive that very same message back as the ACK on the sending system. This is the ACK Camel generates, identical to the received message:
MSH|^~\&|LIS|LIS|HIS|HIS|20170412131105||OML^O21|0000000001|P|2.5|||AL|||8859/1|||1.0
PID|1||123456||APELLIDO1&APELLIDO2^NOMBRE|19200101
ORC|RP|009509452919|317018426||||||20170412000000
OBR|1|317018426|317018426|CULT^CULTIVO
I have not found anything in the Camel docs about how the ACK is generated, or whether I can plug in a custom "something" to generate it. I would like to change this default behaviour.
This is what I have done on my project:
<bean id="hl7Processor" class="com.mediresource.MessageRouting.HL7.HL7Processor" />
<route>
<from uri="mina2:tcp://10.68.124.140:2575?sync=true&codec=#hl7codec" />
<onException>
<exception>org.apache.camel.RuntimeCamelException</exception>
<exception>ca.uhn.hl7v2.HL7Exception</exception>
<redeliveryPolicy maximumRedeliveries="0" />
<handled>
<constant>true</constant>
</handled>
<bean ref="hl7Processor" method="sendACKError" />
</onException>
<bean ref="hl7Processor" method="sendACK" />
</route>
In the HL7Processor class I have this:
public Message sendACK(Message message, Exchange exchange) throws HL7Exception, IOException {
    logger.debug("Entering");
    Message ack = message.generateACK();
    logger.info("(10-4), End - ACK sent for " + exchange.getExchangeId());
    return ack;
}

public Message sendACKError(Message message, Exception ex) throws HL7Exception, IOException {
    try {
        logger.warn("Internal Error:" + ex);
        Message ack = message.generateACK(AcknowledgmentCode.AE, new HL7Exception("Internal Error"));
        logger.warn("(10-4), End - NACK");
        return ack;
    } catch (Exception ex1) {
        logger.error("Fatal error on processError! ", ex1);
    }
    return null;
}
As the Camel HL7 component docs say (http://camel.apache.org/hl7.html, "HL7 Acknowledgement expression"), you can generate the default ACK just by using
import static org.apache.camel.component.hl7.HL7.ack;
...
from("direct:test1")
// acknowledgement
.transform(ack())
Here "ack()" is a call for "org.apache.camel.component.hl7.HL7#ack()". But you can check that "org.apache.camel.component.hl7.HL7" contains some other helpful methods like
org.apache.camel.component.hl7.HL7#ack(ca.uhn.hl7v2.AcknowledgmentCode code)
or
org.apache.camel.component.hl7.HL7#ack(ca.uhn.hl7v2.AcknowledgmentCode code, java.lang.String errorMessage, ca.uhn.hl7v2.ErrorCode errorCode)
You can use them to customise the actual ACK response.
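For instance, a minimal sketch based on the docs example above (the specific error text and error code here are just illustrative):

import static org.apache.camel.component.hl7.HL7.ack;

import ca.uhn.hl7v2.AcknowledgmentCode;
import ca.uhn.hl7v2.ErrorCode;

from("direct:test1")
    // reply with an application-reject ACK carrying a custom error message and code
    .transform(ack(AcknowledgmentCode.AR,
                   "Unsupported message type",
                   ErrorCode.UNSUPPORTED_MESSAGE_TYPE))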
If you go deeper, you can see that the "org.apache.camel.component.hl7.HL7#ack" methods are just wrappers for
new ValueBuilder(new AckExpression(...))
and most parameters of the "ack" methods go directly to org.apache.camel.component.hl7.AckExpression. The actual ACK generation is done in "org.apache.camel.component.hl7.AckExpression#evaluate" and looks like this:
public Object evaluate(Exchange exchange) {
    Throwable t = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Throwable.class);
    Message msg = exchange.getIn().getBody(Message.class);
    try {
        HL7Exception hl7e = generateHL7Exception(t);
        AcknowledgmentCode code = acknowledgementCode;
        if (t != null && code == null) {
            code = AcknowledgmentCode.AE;
        }
        return msg.generateACK(code == null ? AcknowledgmentCode.AA : code, hl7e);
    } catch (Exception e) {
        throw ObjectHelper.wrapRuntimeCamelException(e);
    }
}
If you want deeper customisation, you can write your own MyCustomAckExpression that extends org.apache.camel.component.hl7.AckExpression and implements the required logic instead of
return msg.generateACK(code == null ? AcknowledgmentCode.AA : code, hl7e);
and use it like
...
from("direct:test1")
// acknowledgement
.transform(new ValueBuilder(new MyCustomAckExpression()))
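A minimal sketch of such a class (MyCustomAckExpression is hypothetical; here it builds the ACK from the raw message text instead of calling generateACK(), which would also fit the original wish to avoid HAPI for the reply):

import org.apache.camel.Exchange;
import org.apache.camel.component.hl7.AckExpression;

public class MyCustomAckExpression extends AckExpression {

    @Override
    public Object evaluate(Exchange exchange) {
        String inbound = exchange.getIn().getBody(String.class);
        // parse the MSH segment of the inbound message with your own parser
        // and assemble the MSH/MSA segments of the ACK here
        return buildAckFrom(inbound);
    }

    private String buildAckFrom(String inbound) {
        // placeholder for the custom ACK-building logic
        return inbound;
    }
}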
Related
In Camel,
ProducerTemplate producerTemplate = exchange.getContext().createProducerTemplate();
producerTemplate.sendBody("endpointqueue?includeSentJMSMessageID=true", ExchangePattern.InOnly, body);
I need to get the JMSMessageID that is returned from IBM MQ/ActiveMQ. I am looking at the exchange values in debug mode but cannot find it; I can only find the sessionID. Where is it stored and how do I get it?
The Camel documentation says:
includeSentJMSMessageID - only applicable when sending to jms destination using InOnly. enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination.
Is includeSentJMSMessageID different from what I need, or am I missing something?
The doc says the JMS MessageID is available as a header.
So you should be able to return it like this:
from("direct:queueMessage")
.to("jms://myqueue?includeSentJMSMessageID=true")
.setBody().simple("${header.JMSMessageID}");
And then send your message using:
String msgId = producerTemplate.requestBody("direct:queueMessage", body, String.class);
I'm trying to download files with dynamic filenames using pollEnrich in a loop. When there is a connection exception at the pollEnrich, it is not handled by the onException block, and I cannot use doTry/doCatch after the pollEnrich statement either.
I also tried using throwExceptionOnConnectFailed=true in the endpoint URI, to no avail.
Is there any workaround for this?
onException(Exception.class)
.log( "${exception.stacktrace}")
.end();
from("direct:DownloadFiles")
.loop(exchangeProperty("FileCount"))
.pollEnrich().simple("sftp://testeruser:password#localhost:24?
move=Processed&antInclude=*${property.soNumber}*.*").timeout(30000)
.to("TARGET SFTP endpoint")
.end();
By default, Camel ignores connection problems, as this excerpt from the Camel (S)FTP consumer code shows:
} catch (Exception e) {
loggedIn = false;
// login failed should we thrown exception
if (getEndpoint().getConfiguration().isThrowExceptionOnConnectFailed()) {
throw e;
}
}
Therefore you have to enable the option throwExceptionOnConnectFailed on the SFTP consumer. In your case this would be
.pollEnrich()
    .simple("sftp://testeruser:password@localhost:24?move=Processed&throwExceptionOnConnectFailed=true&antInclude=*${property.soNumber}*.*")
    .timeout(30000)
I know you write in your question that you tried that option without success, but in my test it is this option that decides (per the Camel code above) whether the ConnectException reaches the error handler or is ignored.
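Put together with the onException block from your question, something like this should route connection failures into the error handler (a sketch; credentials and the target endpoint are placeholders from your question):

onException(Exception.class)
    .log("${exception.stacktrace}")
    .end();

from("direct:DownloadFiles")
    .loop(exchangeProperty("FileCount"))
        // throwExceptionOnConnectFailed=true lets the ConnectException propagate
        // to onException instead of being silently ignored
        .pollEnrich()
            .simple("sftp://testeruser:password@localhost:24?move=Processed&throwExceptionOnConnectFailed=true&antInclude=*${property.soNumber}*.*")
            .timeout(30000)
        .to("TARGET SFTP endpoint")
    .end();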
I'm writing a simple application deployed on Karaf 4.1.0. Its role is to send REST requests to a REST API. When I start my bundle I get this error:
javax.ws.rs.ProcessingException: org.apache.cxf.interceptor.Fault: No message body writer has been found for class package.QueueSharedDTO, ContentType: application/json
at org.apache.cxf.jaxrs.client.WebClient.doResponse(WebClient.java:1149)
at org.apache.cxf.jaxrs.client.WebClient.doChainedInvocation(WebClient.java:1094)
at org.apache.cxf.jaxrs.client.WebClient.doInvoke(WebClient.java:894)
at org.apache.cxf.jaxrs.client.WebClient.doInvoke(WebClient.java:865)
at org.apache.cxf.jaxrs.client.WebClient.invoke(WebClient.java:428)
at org.apache.cxf.jaxrs.client.WebClient$SyncInvokerImpl.method(WebClient.java:1631)
at org.apache.cxf.jaxrs.client.WebClient$SyncInvokerImpl.method(WebClient.java:1626)
at org.apache.cxf.jaxrs.client.WebClient$SyncInvokerImpl.post(WebClient.java:1566)
at org.apache.cxf.jaxrs.client.spec.InvocationBuilderImpl.post(InvocationBuilderImpl.java:145)
at package.worker.service.implementation.ConnectionServiceImpl.postCheckRequest(ConnectionServiceImpl.java:114)
at package.worker.service.implementation.ConnectionServiceImpl.sendCheck(ConnectionServiceImpl.java:103)
at package.worker.module.QueueSharedListener.run(QueueSharedListener.java:37)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.cxf.interceptor.Fault: No message body writer has been found for class package.QueueSharedDTO, ContentType: application/json
at org.apache.cxf.jaxrs.client.WebClient$BodyWriter.doWriteBody(WebClient.java:1222)
at org.apache.cxf.jaxrs.client.AbstractClient$AbstractBodyWriter.handleMessage(AbstractClient.java:1091)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:649)
at org.apache.cxf.jaxrs.client.WebClient.doChainedInvocation(WebClient.java:1093)
... 11 more
Caused by: javax.ws.rs.ProcessingException: No message body writer has been found for class com.emot.dto.QueueSharedDTO, ContentType: application/json
at org.apache.cxf.jaxrs.client.AbstractClient.reportMessageHandlerProblem(AbstractClient.java:780)
at org.apache.cxf.jaxrs.client.AbstractClient.writeBody(AbstractClient.java:494)
at org.apache.cxf.jaxrs.client.WebClient$BodyWriter.doWriteBody(WebClient.java:1217)
... 15 more
Initialization of the WebTarget:
private ConnectionServiceImpl() {
client = ClientBuilder.newClient();
client.property(
ClientProperties.CONNECT_TIMEOUT,
snifferProperties.getProperty(SnifferProperties.PARAM_REST_API_CONNECTION_TIMEOUT));
client.property(
ClientProperties.READ_TIMEOUT,
snifferProperties.getProperty(SnifferProperties.PARAM_REST_API_READ_TIMEOUT));
System.out.println(2);
webTarget = client.target(buildUrl());
}
Sending requests:
private synchronized boolean postCheckRequest(String path, Object content) {
boolean result = true;
try {
Response response = webTarget
.path("check")
.path("add/one")
.request(MediaType.APPLICATION_JSON)
.post(Entity.json(content));
result = (response.getStatus() == 200);
} catch (Exception e) {
System.out.println("Error but working");
e.printStackTrace();
result = false;
}
return result;
}
I always have problems with Karaf... I don't understand why it doesn't work correctly...
The issue you are facing is most likely not a Karaf issue, but a typical one you may face while working with a JAX-RS implementation in a non-JavaEE environment.
The exception literally says that your setup is missing a message body writer. A message body writer is a class that implements the javax.ws.rs.ext.MessageBodyWriter interface and is responsible for serializing your data objects to some format (like JSON). There is a counterpart interface, javax.ws.rs.ext.MessageBodyReader, which does the opposite. All these classes are registered with the JAX-RS framework as providers, extending its capabilities. Details are here: https://jersey.java.net/documentation/latest/message-body-workers.html
So, generally, you must decide what you use for serializing/deserializing between your data objects and the HTTP media type, and register a proper JAX-RS provider.
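To illustrate what such a provider looks like, here is a hypothetical hand-rolled writer for the DTO from your stack trace (in practice you would normally use one of the Jackson providers described next):

import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.nio.charset.StandardCharsets;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class QueueSharedDTOWriter implements MessageBodyWriter<QueueSharedDTO> {

    @Override
    public boolean isWriteable(Class<?> type, Type genericType,
                               Annotation[] annotations, MediaType mediaType) {
        return QueueSharedDTO.class.isAssignableFrom(type);
    }

    @Override
    public long getSize(QueueSharedDTO dto, Class<?> type, Type genericType,
                        Annotation[] annotations, MediaType mediaType) {
        return -1; // size not known in advance
    }

    @Override
    public void writeTo(QueueSharedDTO dto, Class<?> type, Type genericType,
                        Annotation[] annotations, MediaType mediaType,
                        MultivaluedMap<String, Object> httpHeaders,
                        OutputStream entityStream) throws IOException {
        // serialize the DTO to JSON however you like, e.g. with an ObjectMapper
        entityStream.write(toJson(dto).getBytes(StandardCharsets.UTF_8));
    }

    private String toJson(QueueSharedDTO dto) {
        return "{}"; // placeholder for the real serialization
    }
}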
With Jackson, for example, your problem can easily be solved by using one of its standard implementations: either com.fasterxml.jackson.jaxrs.json.JacksonJaxbJsonProvider, if you use JAXB annotations, or com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider, if you prefer Jackson annotations. Add this class to the providers section of your Blueprint descriptor:
<jaxrs:server id="restServer" address="/rest">
<jaxrs:serviceBeans>
....
</jaxrs:serviceBeans>
<jaxrs:providers>
....
<bean class="com.fasterxml.jackson.jaxrs.json.JacksonJaxbJsonProvider"/>
....
</jaxrs:providers>
</jaxrs:server>
Whenever there is a normal flow in my Camel routes, I am able to get the body in the next component. But whenever there is an exception (HTTP 401 or 500), I am unable to get the exception body; I just get a Java exception in my server logs.
I have also tried onException(). With that, the flow goes into it on error, but I still do not get the error response body sent by the web service (which I do get when calling it directly with Postman); I only get the request body that I had sent to the web service.
Also adding the route:
from("direct:contractUpdateAds")
.to("log:inside_direct:contractUpdateAds_route_CompleteLog?level=INFO&showAll=true&multiline=true")
.streamCaching()
.setHeader(Exchange.HTTP_METHOD, constant("POST"))
.setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
.log("before calling ADS for ContractUpdate:\nBody:${body}")
.to("{{AdsContractUpdateEndpoint}}")
.log("after calling ADS for ContractUpdate:\nBody:${body}")
.convertBodyTo(String.class)
.end();
Option 1: handle failure status codes yourself
The throwExceptionOnFailure=false endpoint option (available at least for camel-http and camel-http4 endpoints) is probably what you want. With this option, camel-http will no longer consider an HTTP Status >= 300 as an error, and will let you decide what to do - including processing the response body however you see fit.
Something along these lines should work:
from("...")
.to("http://{{hostName}}?throwExceptionOnFailure=false")
.choice()
.when(header(Exchange.HTTP_RESPONSE_CODE).isLessThan(300))
// HTTP status < 300
.to("...")
.otherwise()
// HTTP status >= 300 : would throw an exception if we had "throwExceptionOnFailure=true"
.log("Error response: ${body}")
.to("...");
This is an interesting approach if you want special handling for certain status codes, for example. Note that the logic can be reused in several routes by using direct endpoints, just like any other piece of Camel route logic, as shown in the sketch below.
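For example, a small sketch of such a reusable sub-route (the endpoint names are placeholders):

// Reusable sub-route: callers send to "direct:callBackend" and always get the
// HTTP response back, whether the status code indicated success or failure.
from("direct:callBackend")
    .to("http://{{hostName}}?throwExceptionOnFailure=false")
    .choice()
        .when(header(Exchange.HTTP_RESPONSE_CODE).isLessThan(300))
            .log("Success response: ${body}")
        .otherwise()
            .log("Error response: ${body}")
    .end();

// Any route can reuse that logic:
from("direct:contractUpdateAds")
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .to("direct:callBackend");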
Option 2: access the HttpOperationFailedException in the onException handler
If you want to keep the default error handling, but you want to access the response body in the exception handling code for some reason, you just need to access the responseBody property on the HttpOperationFailedException.
Here's an example:
onException(HttpOperationFailedException.class)
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            // e won't be null because we only catch HttpOperationFailedException;
            // otherwise, we'd need to check for null.
            final HttpOperationFailedException e =
                    exchange.getProperty(Exchange.EXCEPTION_CAUGHT, HttpOperationFailedException.class);
            // Do something with the responseBody
            final String responseBody = e.getResponseBody();
        }
    });
We are using camel 2.13.2 - I have a multicast route with an AggregationStrategy.
And in each multicast branch, we have a custom camel component that returns huge data (around 4 MB) and writes to Stream Cache (Cached Output Stream) and we need to aggregate the data in the multicast (Aggregation Strategy).
In the Aggregation strategy, I need to do XPath evaluation using camel XPathBuilder.
Hence, I try to read the body and convert from StreamCache to byte[] to avoid 'Error during type conversion from type: org.apache.camel.converter.stream.InputStreamCache.' in the XPathBuilder.
When I try to read the body in the beginning of the Aggregation Strategy, I get the following error.
/tmp/camel/camel-tmp-4e00bf8a-4a42-463a-b046-5ea2d7fc8161/cos6047774870387520936.tmp (No such file or directory), cause: FileNotFoundException: /tmp/camel/camel-tmp-4e00bf8a-4a42-463a-b046-5ea2d7fc8161/cos6047774870387520936.tmp (No such file or directory).
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.apache.camel.converter.stream.FileInputStreamCache.createInputStream(FileInputStreamCache.java:123)
at org.apache.camel.converter.stream.FileInputStreamCache.getInputStream(FileInputStreamCache.java:117)
at org.apache.camel.converter.stream.FileInputStreamCache.writeTo(FileInputStreamCache.java:93)
at org.apache.camel.converter.stream.StreamCacheConverter.convertToByteArray(StreamCacheConverter.java:102)
at com.sap.it.rt.camel.aggregate.strategies.MergeAtXPathAggregationStrategy.convertToByteArray(MergeAtXPathAggregationStrategy.java:169)
at com.sap.it.rt.camel.aggregate.strategies.MergeAtXPathAggregationStrategy.convertToXpathCompatibleType(MergeAtXPathAggregationStrategy.java:161)
Following is the line of code where it is throwing an error:
Object body = exchange.getIn().getBody();
if (body instanceof StreamCache) {
    StreamCache cache = (StreamCache) body;
    xml = new String(convertToByteArray(cache, exchange));
    exchange.getIn().setBody(xml);
}
By stopping the stream cache from writing to file (setting a threshold of 10 MB in the multicast-related routes), we were able to make the aggregation strategy work. But we do not want to do that, as we may have incoming data that is bigger.
<camel:camelContext id="multicast_xml_1" streamCache="true">
<camel:properties>
<camel:property key="CamelCachedOutputStreamCipherTransformation" value="RC4"/>
<camel:property key="CamelCachedOutputStreamThreshold" value="100000000"/>
</camel:properties>
....
</camel:camelContext>
Note: the FileNotFound issue does not appear if we have the StreamCache-based Camel component in a route with other processors, but without multicast + aggregation.
After debugging, I could understand the issue with aggregating huge data from StreamCache with MulticastProcessor.
In MulticastProcessor.java, doProcessParallel() is called, and on completion of the multicast branch exchange the CachedOutputStream deletes/cleans up the temporary file.
This happens even before the multicast branch exchange reaches the AggregationStrategy, which tries to read the data from the branch exchange. With huge data in the StreamCache, the temporary file is already deleted, leading to the FileNotFound issue.
public CachedOutputStream(Exchange exchange, boolean closedOnCompletion) {
    this.strategy = exchange.getContext().getStreamCachingStrategy();
    currentStream = new CachedByteArrayOutputStream(strategy.getBufferSize());
    if (closedOnCompletion) {
        // add on completion so we can cleanup after the exchange is done such as deleting temporary files
        exchange.addOnCompletion(new SynchronizationAdapter() {
            @Override
            public void onDone(Exchange exchange) {
                try {
                    if (fileInputStreamCache != null) {
                        fileInputStreamCache.close();
                    }
                    close();
                } catch (Exception e) {
                    LOG.warn("Error deleting temporary cache file: " + tempFile, e);
                }
            }

            @Override
            public String toString() {
                return "OnCompletion[CachedOutputStream]";
            }
        });
    }
}

public void close() throws IOException {
    currentStream.close();
    cleanUpTempFile();
}
I was able to circumvent the issue by setting closedOnCompletion=false when writing to the CachedOutputStream in the components of the multicast branches.
But this is a leaky solution, because the stream cache temporary file(s) may then never get cleaned up... hence I try to close and clean up the cache stream after reading the data in the AggregationStrategy.
Can the MulticastProcessor be adjusted so that the multicast branch exchanges reach 'completion' status only after they have been aggregated at the end of the multicast?
Please help/advise on this issue, as I am new to using Camel multicast.
Thanks,
Lakshmi
I have a similar exception thrown when trying to send a larger-than-1MB JSON response to a Restlet request (yes, I know 1 MB of JSON is too big):
java.io.FileNotFoundException: C:\Users\me\AppData\Local\Temp\camel\camel-tmp-7ad6e098-538d-4d4c-9357-2b7addb1f19d\cos6725022584818060586.tmp (The system cannot find the file specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at org.apache.camel.converter.stream.FileInputStreamCache.createInputStream(FileInputStreamCache.java:123)
at org.apache.camel.converter.stream.FileInputStreamCache.getInputStream(FileInputStreamCache.java:117)
at org.apache.camel.converter.stream.FileInputStreamCache.read(FileInputStreamCache.java:112)
at java.io.InputStream.read(InputStream.java:170)
at java.io.InputStream.read(InputStream.java:101)
at org.restlet.engine.io.BioUtils.copy(BioUtils.java:81)
at org.restlet.representation.InputRepresentation.write(InputRepresentation.java:148)
at org.restlet.engine.adapter.ServerCall.writeResponseBody(ServerCall.java:510)
at org.restlet.engine.adapter.ServerCall.sendResponse(ServerCall.java:454)
at org.restlet.ext.servlet.internal.ServletCall.sendResponse(ServletCall.java:426)
at org.restlet.engine.adapter.ServerAdapter.commit(ServerAdapter.java:196)
at org.restlet.engine.adapter.HttpServerHelper.handle(HttpServerHelper.java:153)
at org.restlet.ext.servlet.ServerServlet.service(ServerServlet.java:1089)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1496)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:102)
The same workaround works for me:
getContext().getProperties().put(CachedOutputStream.THRESHOLD, "" + THREE_MEGABYTE_TRESHOLD_BEFORE_FILE_CACHE);
I don't use multicast in this route, just plain
restlet request -> Service -> Jackson marshall => error
I use Camel 2.14.0 & Restlet 2.2.2 with JDK 7 and Spring-boot 1.0.2 / Jetty
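For reference, the same threshold can also be raised through the stream-caching strategy API (a sketch, assuming Camel 2.12+ where the StreamCachingStrategy SPI is available; the 3 MB value mirrors the property-based workaround above):

// Keep payloads up to ~3 MB in memory before spooling to a temp file.
CamelContext context = getContext();
context.getStreamCachingStrategy().setSpoolThreshold(3 * 1024 * 1024);
context.setStreamCaching(true);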
The question "Camel reverse proxy - no response stream caching" might be related to my issue.