Problems with Mina / Netty TCP endpoint in Camel (TCP socket server endpoint) - apache-camel

I want to connect a SOAP web service to a TCP endpoint. The TCP endpoint has to be a TCP socket server that accepts clients.
Now I have, for example, this route:
<from uri="cxf:bean:myendpoint" />
<to uri="netty:tcp://localhost:port" />
This doesn't work because, as far as I have found out:
<from uri="netty:tcp://localhost:port" /> configures it as a server socket that clients can connect to
<to uri="netty:tcp://localhost:port" /> configures it as a client that connects to a server socket
Is there any way to configure Netty/Mina etc. as a server socket, rather than a client, with the <to /> tag?
Or does anyone have an idea for a workaround?
Someone else already had a similar problem according to https://issues.apache.org/jira/browse/CAMEL-1077 ("tcp client mode / server mode determined by "to" or "from" elements limits usability"), but I don't think anything has happened since then.

It looks like you just want to send the response from the SOAP service to a TCP server.
You can set up the route like this:
from("direct:start").to("cxf:xxx").to("netty:xxx");

Related

Produce messages to IBM MQ using REST API. Apache Camel

I have to send messages to IBM MQ by hitting a REST service. Below is the code I came up with, using the Camel XML DSL.
<rest path="/basePath">
<post uri="/path" consumes="application/xml" produces="application/xml">
<to uri="ibmmq:QUEUE.NAME"/>
</post>
</rest>
When I try to post the message, I get the following exception:
org.apache.camel.RuntimeExchangeException: Failed to resolve replyTo destination on the exchange
Is the POST method expecting a response back from the queue, so that it can respond to the REST client?
I only need the POST service to reply with 200 if the message is successfully produced to the queue, and 500 otherwise.
How can I solve this problem?
The pattern of your exchange is InOut, so this is the default behavior for your JMS producer. Try changing it for the specific endpoint like this:
<to uri="ibmmq:QUEUE.NAME" pattern="InOnly"/>

Camel-netty4: how to wait for a response before sending the next request

I have created a route that accepts requests from multiple producers and sends them to a remote server using netty4 with request-response. However, while Camel is sending a request to the remote server and waiting for the response, the next incoming request arrives and is sent to the remote server, and I get an IOException because Camel cannot receive the response.
So, how do I make Camel-Netty4 send a request and wait for its response before sending the next one?
The route configuration:
from("direct:DirectProcessOut?block=true")
.to("netty4:tcp://192.168.1.2:8000?sync=true&synchronous=true&reuseChannel=true")
I actually ran into a similar issue trying to send out several messages at a time based on rows in a database table. The calls wouldn't wait for each other to complete, and essentially stepped on each other and either hung or blew up.
The solution I eventually found was to use a message queue: take in your requests and route them through a single ActiveMQ queue.
So something like:
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
<route>
<from uri="direct:direct:DirectProcessOut"/>
<to uri="activemq://processOutQueue"/>
</route>
</camelContext>
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
<route>
<from uri="activemq://processOutQueue"/>
<to uri="netty4:tcp://192.168.1.2:8000?sync=true&synchronous=true&reuseChannel=true"/>
</route>
</camelContext>
My case was a little different, so I'm not sure if this will preserve your message you want to send. But hopefully it gives you a place to start.

How to get a camel-mqtt endpoint to locate (resolve) my ActiveMQ on OpenShift/Fuse

I'm kind of new to this, so I might have missed the obvious.
I have an OpenShift gear with JBoss Fuse. I have started an ActiveMQ broker with an MQTT connector and created a Camel route (using OSGi Blueprint) consuming from the ActiveMQ MQTT connector on the same OpenShift gear. Everything works perfectly when I use the ip-address:port of the MQTT connector, but that is not what I want. I would like some other solution (a resolver) that doesn't force me to point at a specific IP address in the MQTT endpoint, so I can move the Camel route around without reconfiguring it.
ActiveMQ connector config:
<transportConnectors>
<transportConnector name="openwire" publishedAddressPolicy="#addressPolicy" uri="tcp://${OPENSHIFT_FUSE_IP}:${OPENSHIFT_FUSE_AMQ_PORT}"/>
<transportConnector name="mqtt" publishedAddressPolicy="#addressPolicy" uri="mqtt://${OPENSHIFT_FUSE_IP}:1883"/>
</transportConnectors>
Camel-Route when it works:
<camelContext trace="false" id="blueprintContext" xmlns="http://camel.apache.org/schema/blueprint">
<route id="mqttToLog">
<from uri="mqtt:iot?host=tcp://127.4.22.139:1883&subscribeTopicName=mytesttopic&userName=admin&password=xxxxxxx" id="iot_endpoint">
<description>The MQTT endpoint for consuming data sent from the devices.</description>
</from>
<log message="The message contains ${body}" loggingLevel="INFO" id="iot_log">
<description>Logs all the incoming MQTT messages. This is just for verification purpouses.</description>
</log>
<to uri="mock:result" id="iot_mock">
<description>Final sink for the MQTT message flow. Kept for verification.</description>
</to>
</route>
</camelContext>
My camel-routes profile has feature-camel as parent and features camel and camel-mqtt.
So how do I avoid having to specify the host in the endpoint, using for instance the MQ group, some other registry (Fabric), or similar?
Thanks,
Tomas
If you are running a fabric, then the ActiveMQ clustering feature works like this: the broker is part of a so-called "broker group". The default broker is part of the group "default", which means there is a profile called mq-client-default. This profile will register a pre-configured ActiveMQ ConnectionFactory in the OSGi service registry. It's configured to connect to wherever your broker is located, and will fail over automatically to other brokers in the same group.
To make use of the above in Fuse 6.1 do the following:
Create a new child container
Add the mq-client-default profile to it
Add the feature "mq-fabric-camel" to the mq-client-default profile. This will install a camel component called "amq" that automatically uses the connectionFactory from the mq-client-default profile.
Deploy a camel route like the following and witness the awesomeness of JBoss Fuse :)
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
<route id="myRoute">
<from uri="timer://foo?fixedRate=true&period=5000"/>
<setBody>
<simple>Hello from Camel route to ActiveMQ</simple>
</setBody>
<to uri="amq:queue:timermessages"/>
</route>
</camelContext>
The messages generated by this route will end up on the broker, no matter where the broker is, or where the camel route is.
Good luck!
P.S. The amq component uses OpenWire to communicate with the broker, but any other client can use whatever protocol you've enabled to consume or produce messages on your queues.

Timeout when connecting through a TCP-Connector

I would like to route traffic through Mule for an MSSQL database. The database runs at "internalserverurl" on port 1433.
Mule shall act as a TCP server/proxy and simply re-route the incoming TCP traffic on port 1433 to "internalserverurl" on port 1433, handle the response, and return it.
Example Code:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:tcp="http://www.mulesoft.org/schema/mule/tcp" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" version="CE-3.3.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/tcp http://www.mulesoft.org/schema/mule/tcp/current/mule-tcp.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd ">
<tcp:connector name="TCP_C_L" validateConnections="false" receiveBufferSize="102400" sendBufferSize="102400" doc:name="TCP connector">
<tcp:streaming-protocol/>
</tcp:connector>
<tcp:connector name="TCP_C_81" validateConnections="false" receiveBufferSize="102400" sendBufferSize="102400" doc:name="TCP connector">
<tcp:streaming-protocol/>
</tcp:connector>
<flow name="IncomingEndpoint" doc:name="IncomingEndpoint">
<tcp:inbound-endpoint exchange-pattern="request-response" responseTimeout="10000" doc:name="TCP-Proxy" host="localhost" port="1433" connector-ref="TCP_C_L" />
<tcp:outbound-endpoint exchange-pattern="request-response" host="internalserverurl" port="1433" responseTimeout="10000" doc:name="TCP" connector-ref="TCP_C_81" />
</flow>
</mule>
If I run this code, the Mule application starts fine, and I can also connect via JDBC to port 1433 through localhost.
But the DB connection is not successful.
Mule throws a SocketException:
Exception stack is:
1. Socket is closed (java.net.SocketException)
java.net.Socket:864 (null)
2. Failed to route event via endpoint: DefaultOutboundEndpoint{endpointUri=tcp://internalserverurl:1433, connector=TcpConnector
{
name=TCP_C_81
lifecycle=start
this=7ffba3f9
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[tcp]
serviceOverrides=<none>
}
, name='endpoint.tcp.internalserverurl.1433', mep=REQUEST_RESPONSE, properties={}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=false}. Message payload is of type: TcpMessageReceiver$TcpWorker$1 (org.mule.api.transport.DispatchException)
org.mule.transport.AbstractMessageDispatcher:109 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
Why is there a socket timeout? When I make the JDBC connection directly (from the same machine that runs this Mule application), the connection is fine.
If I use
<tcp:direct-protocol payloadOnly="true"/>
instead of
<tcp:streaming-protocol/>
Then I can see the TCP packet incoming on the MSSQL server, but the SQL server will log a message like:
08/18/2014 12:16:41,Logon,Unknown,The login packet used to open the connection is structurally invalid; the connection has been closed. Please contact the vendor of the client library. [CLIENT: 10.2.60.169]
08/18/2014 12:16:41,Logon,Unknown,Error: 17832 Severity: 20 State: 2.
Thanks,
Sebastian
Take a look at the TCP Connector protocol table: http://www.mulesoft.org/documentation/display/current/TCP+Transport+Reference#TCPTransportReference-ProtocolTables
The streaming-protocol has this Read property:
All bytes sent until the socket is closed
If the client doesn't disconnect, Mule will keep reading forever with this protocol.
Keep in mind Mule is a message-oriented middleware: if you want your TCP bridge to work, you need to use a protocol that is compatible with the MSSQL protocol.
This means that the protocol must recognize whatever character or sequence of characters the SQL client uses to mark the end of a request, so Mule can cut a message out of the bytes received so far and then route it down the flow.
It's possible that none of the provided protocols allow this, meaning that you would have to create your own protocol...
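For illustration, here is a minimal sketch of the framing logic such a custom protocol would need. It assumes the TDS wire format used by MSSQL, where every packet starts with an 8-byte header whose bytes 2-3 hold the total packet length (big-endian, header included); the class and method names are hypothetical, and wiring this into Mule's protocol mechanism is left out:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class TdsFramer {

    /** Reads exactly one TDS packet (header plus payload) from the stream. */
    public static byte[] readPacket(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        byte[] header = new byte[8];
        data.readFully(header);                      // 8-byte TDS packet header
        int totalLength = ((header[2] & 0xFF) << 8)  // bytes 2-3: total length,
                        | (header[3] & 0xFF);        // big-endian, includes header
        if (totalLength < 8) {
            throw new IOException("Invalid TDS packet length: " + totalLength);
        }
        byte[] packet = new byte[totalLength];
        System.arraycopy(header, 0, packet, 0, 8);
        data.readFully(packet, 8, totalLength - 8);  // read the rest of the packet
        return packet;                               // one complete message to route
    }
}

Reading packet-by-packet like this would let Mule cut a complete TDS message out of the stream instead of waiting for the socket to close.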

Apache Camel VM queue between servlets

I'm trying to set up a simple VM queue test between two servlets, without success. The problem is that the request always times out, as there is no response (OUT message) within the expected timeframe:
"org.apache.camel.ExchangeTimedOutException: The OUT message was not received within: 30000 millis."
The servlets are running in Tomcat and both deploy Apache Camel. Both apps define a Camel context and simple routes. The basic setup should be fine, as simple routes like the following work:
<route>
<from uri="servlet:///hello?servletName=app1" />
<transform>
<simple>Hello world</simple>
</transform>
</route>
<route>
<from uri="servlet:///hello?servletName=app2" />
<transform>
<simple>Hello world</simple>
</transform>
</route>
First of all, I'm not sure the message ever reaches app2, as the same timeout happens even if the requested route isn't defined at all (i.e. app2 is missing the VM route). So the problem could be in how the route between the two servlets is defined using the VM queue.
If the route between the servlets is fine, then the problem should be in the missing/incorrect reply. I understand that the receiving end should return a reply, as the incoming request from the web server is InOut, but I don't know how to achieve that.
The route in app1 receiving the web request:
<route>
<from uri="servlet:///test?servletName=app1" />
<to uri="vm:test">
</route>
and the other end in servlet app2:
<route>
<from uri="vm:test" />
// Tested here: output with <simple>, 'To', 'inOut'... the result is always timeout
</route>
As I'm new to Apache Camel, the root cause is most likely very simple. Any help would be highly appreciated.
The question is simply: how do I set up a VM queue between two servlet apps?
The vm component only works within the same classloader, which is essentially what the vm documentation page says: http://camel.apache.org/vm.html
This component differs from the SEDA component in that VM supports communication across CamelContext instances - so you can use this mechanism to communicate across web applications (provided that camel-core.jar is on the system/boot classpath).
So if you use Apache Tomcat, you need to have the camel-core JAR as a shared JAR, i.e. somewhere on the system/boot classpath.
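Once camel-core is shared, the reply side is straightforward: for an InOut exchange, the body set in the consuming route automatically becomes the OUT reply. A minimal sketch of the app2 side in the Java DSL (the vm:test name is taken from the question):

import org.apache.camel.builder.RouteBuilder;

public class App2ReplyRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("vm:test")
            // For an InOut exchange, whatever ends up in the message body
            // here is sent back to the caller in app1 as the OUT reply.
            .transform().simple("Reply from app2: ${body}");
    }
}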
