Hazelcast / Apache Camel integration

I am new to Hazelcast and Camel.
While creating a MapLoader using Camel, I call a Camel route from within the "load" method. The container shows that the "dataSyncLoad" route is present, but while loading data on startup of the application it gives the error listed below.
Hazelcast MapLoader

@Produce(uri = "direct:dataSyncLoad")
private ProducerTemplate dataSyncLoad;

@Override
public synchronized DataSyncServiceRequest load(CellDataSyncKey key) {
    DataSyncServiceRequest dataSynchTemplateVO = dataSyncLoad.requestBody("direct:dataSyncLoad", key, DataSyncServiceRequest.class);
    if (null != dataSynchTemplateVO) {
        LOGGER.info("Cache Loaded: {}", key);
    } else {
        LOGGER.info("No data found for the key: {}", key);
    }
    return dataSynchTemplateVO;
}
Map Configuration
<hz:map name="getsToolCcaDatasyncMap" backup-count="0" max-size="100"
        eviction-percentage="25" eviction-policy="LRU" read-backup-data="false">
    <hz:map-store enabled="true" initial-mode="EAGER" write-delay-seconds="0"
                  implementation="getsToolCcaDatasyncMapLoader"/>
</hz:map>
Could not load keys from map store
org.apache.camel.CamelExecutionException: Exception occurred during execution on the exchange: Exchange[Message: CellDataSyncKey [locoid=20695, dataTemplate=3, deviceName=CCA]]
    at org.apache.camel.util.ObjectHelper.wrapCamelExecutionException(ObjectHelper.java:1379)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.util.ExchangeHelper.extractResultBody(ExchangeHelper.java:622)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.extractResultBody(DefaultProducerTemplate.java:467)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:133)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:149)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.requestBody(DefaultProducerTemplate.java:297)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.requestBody(DefaultProducerTemplate.java:327)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at com.ge.trans.loader.cell.cache.mapstore.GetsToolCcaDatasyncMap.load(GetsToolCcaDatasyncMap.java:40)[295:cell-cache-service:2.0.0]
    at com.ge.trans.loader.cell.cache.mapstore.GetsToolCcaDatasyncMap.loadAll(GetsToolCcaDatasyncMap.java:57)[295:cell-cache-service:2.0.0]
    at com.hazelcast.map.impl.MapStoreWrapper.loadAll(MapStoreWrapper.java:143)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.mapstore.AbstractMapDataStore.loadAll(AbstractMapDataStore.java:56)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.loadAndGet(BasicRecordStoreLoader.java:161)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.doBatchLoad(BasicRecordStoreLoader.java:134)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.loadValuesInternal(BasicRecordStoreLoader.java:120)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader.access$100(BasicRecordStoreLoader.java:54)[276:com.hazelcast:3.6.5]
    at com.hazelcast.map.impl.recordstore.BasicRecordStoreLoader$GivenKeysLoaderTask.call(BasicRecordStoreLoader.java:107)[276:com.hazelcast:3.6.5]
    at com.hazelcast.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:67)[276:com.hazelcast:3.6.5]
    at com.hazelcast.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:212)[276:com.hazelcast:3.6.5]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_67]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_67]
    at java.lang.Thread.run(Thread.java:745)[:1.7.0_67]
    at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)[276:com.hazelcast:3.6.5]
    at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)[276:com.hazelcast:3.6.5]
Caused by: org.apache.camel.component.direct.DirectConsumerNotAvailableException: No consumers available on endpoint: Endpoint[direct://dataSyncLoad]. Exchange[Message: CellDataSyncKey [locoid=20695, dataTemplate=3, deviceName=CCA]]
    at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:47)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.processor.UnitOfWorkProducer.process(UnitOfWorkProducer.java:73)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:378)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache$2.doInProducer(ProducerCache.java:346)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache.doInProducer(ProducerCache.java:242)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache.sendExchange(ProducerCache.java:346)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.ProducerCache.send(ProducerCache.java:201)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.send(DefaultProducerTemplate.java:128)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:132)[142:org.apache.camel.camel-core:2.12.0.redhat-610379]
    ... 19 more
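The Caused by line is the key: DirectConsumerNotAvailableException means nothing was consuming from direct:dataSyncLoad at the moment Hazelcast eagerly loaded the map store. A minimal sketch of the consumer route that has to be started before the MapLoader runs (the route body and backing bean are assumptions, not from the question):

import org.apache.camel.builder.RouteBuilder;

public class DataSyncLoadRoute extends RouteBuilder {
    @Override
    public void configure() {
        // This consumer must already be started when Hazelcast calls MapLoader.load();
        // otherwise the direct: producer throws DirectConsumerNotAvailableException.
        from("direct:dataSyncLoad")
            .routeId("dataSyncLoad")
            .to("bean:dataSyncService?method=loadByKey"); // hypothetical backing bean
    }
}

If the route is defined but started after the Hazelcast instance, starting the CamelContext first (or switching initial-mode to LAZY so the load is deferred until first map access) avoids the race.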

Related

Flink JDBC Sink part 2

I posted a question a few days back: Flink JDBC sink.
Now I am trying to use the sink provided by Flink.
I have written the code and it runs, but nothing gets saved to the DB and there are no exceptions. With the previous sink my code never finished (which is expected, since it is a streaming app), but with the following code I get no errors and nothing is saved to the DB.
public class CompetitorPipeline implements Pipeline {

    private final StreamExecutionEnvironment streamEnv;
    private final ParameterTool parameter;
    private static final Logger LOG = LoggerFactory.getLogger(CompetitorPipeline.class);

    public CompetitorPipeline(StreamExecutionEnvironment streamEnv, ParameterTool parameter) {
        this.streamEnv = streamEnv;
        this.parameter = parameter;
    }

    @Override
    public KeyedStream<CompetitorConfig, String> start(ParameterTool parameter) throws Exception {
        CompetitorConfigChanges competitorConfigChanges = new CompetitorConfigChanges();
        KeyedStream<CompetitorConfig, String> competitorChangesStream = competitorConfigChanges.run(streamEnv, parameter);

        // Add to JDBC sink
        competitorChangesStream.addSink(JdbcSink.sink(
            "insert into competitor_config_universe(marketplace_id, merchant_id, competitor_name, comp_gl_product_group_desc," +
            "category_code, competitor_type, namespace, qualifier, matching_type," +
            "zip_region, zip_code, competitor_state, version_time, compConfigTombstoned, last_updated) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",
            (ps, t) -> {
                ps.setInt(1, t.getMarketplaceId());
                ps.setLong(2, t.getMerchantId());
                ps.setString(3, t.getCompetitorName());
                ps.setString(4, t.getCompGlProductGroupDesc());
                ps.setString(5, t.getCategoryCode());
                ps.setString(6, t.getCompetitorType());
                ps.setString(7, t.getNamespace());
                ps.setString(8, t.getQualifier());
                ps.setString(9, t.getMatchingType());
                ps.setString(10, t.getZipRegion());
                ps.setString(11, t.getZipCode());
                ps.setString(12, t.getCompetitorState());
                ps.setTimestamp(13, Timestamp.valueOf(t.getVersionTime()));
                ps.setBoolean(14, t.isCompConfigTombstoned());
                ps.setTimestamp(15, new Timestamp(System.currentTimeMillis()));
                System.out.println("sql" + ps);
            },
            new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                .withUrl("jdbc:mysql://127.0.0.1:3306/database")
                .withDriverName("com.mysql.cj.jdbc.Driver")
                .withUsername("xyz")
                .withPassword("xyz#")
                .build()));
        return competitorChangesStream;
    }
}
You need to enable auto-commit mode for the JDBC sink:

new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
    .withUrl("jdbc:mysql://127.0.0.1:3306/database?autocommit=true")

It looks like SimpleBatchStatementExecutor only works in auto-commit mode. And if you need to commit and roll back batches yourself, you have to write your own JdbcBatchStatementExecutor.
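A minimal sketch of such a custom executor, assuming the org.apache.flink.connector.jdbc.internal.executor.JdbcBatchStatementExecutor interface from flink-connector-jdbc and the CompetitorConfig type from the question (parameter binding elided; this is an illustration, not the library's own implementation):

import org.apache.flink.connector.jdbc.internal.executor.JdbcBatchStatementExecutor;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CommittingStatementExecutor implements JdbcBatchStatementExecutor<CompetitorConfig> {

    private final String sql;
    private transient Connection connection;
    private transient PreparedStatement statement;

    public CommittingStatementExecutor(String sql) {
        this.sql = sql;
    }

    @Override
    public void prepareStatements(Connection connection) throws SQLException {
        this.connection = connection;
        this.statement = connection.prepareStatement(sql);
    }

    @Override
    public void addToBatch(CompetitorConfig record) throws SQLException {
        // bind the record's fields here, e.g. statement.setInt(1, record.getMarketplaceId());
        statement.addBatch();
    }

    @Override
    public void executeBatch() throws SQLException {
        statement.executeBatch();
        if (!connection.getAutoCommit()) {
            connection.commit(); // make the batch visible when auto-commit is off
        }
    }

    @Override
    public void closeStatements() throws SQLException {
        statement.close();
    }
}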
Have you tried including JdbcExecutionOptions?

dataStream.addSink(JdbcSink.sink(
    sql_statement,
    (statement, value) -> {
        /* prepared statement */
    },
    JdbcExecutionOptions.builder()
        .withBatchSize(5000)
        .withBatchIntervalMs(200)
        .withMaxRetries(2)
        .build(),
    new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
        .withUrl("jdbc:mysql://127.0.0.1:3306/database")
        .withDriverName("com.mysql.cj.jdbc.Driver")
        .withUsername("xyz")
        .withPassword("xyz#")
        .build()));
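One thing worth noting: without explicit JdbcExecutionOptions the sink falls back to its defaults, which buffer records until the batch-size threshold is reached before writing; on a low-volume stream that can look exactly like "nothing is saved and no exceptions". Setting withBatchIntervalMs, as above, forces periodic flushes regardless of how full the batch is (the exact defaults depend on the Flink version).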

Camel route test - No bean could be found in the registry for: mock:result

Hi, I have a complex Camel route, and in the middle of the route I am sending a message to MQ using a bean:
.bean("{{tp.mqservice}}")
application.yaml
mqservice: bean:mqService
application-test.yaml
mqservice: mock:result
Below is my PortfolioTncRouteTest
@ActiveProfiles("test")
@RunWith(CamelSpringBootRunner.class)
@SpringBootTest(classes = MainApplication.class, webEnvironment = WebEnvironment.RANDOM_PORT)
@MockEndpoints
public class PortfolioTncRouteTest {

    @EndpointInject(value = "{{trade-publisher.portfolio-tnc.source-endpoint}}")
    private ProducerTemplate producerTemplate;

    @EndpointInject(value = "{{trade-publisher.mqservice}}")
    private MockEndpoint mock;
}
JUnit

@Test
public void portfolioTncRouteTest() throws InterruptedException {
    data = ...
    Mockito.when(service.search(Mockito.any(....class))).thenReturn(...);
    producerTemplate.sendBody(data);
    mock.expectedMessageCount(1);
    mock.assertIsSatisfied(30000);
}
However, when I run the test I get the error below. Am I missing something?
Stacktrace
Caused by: org.apache.camel.NoSuchBeanException: No bean could be found in the registry for: mock:result
at org.apache.camel.component.bean.RegistryBean.getBean(RegistryBean.java:92)
at org.apache.camel.component.bean.RegistryBean.createCacheHolder(RegistryBean.java:67)
at org.apache.camel.reifier.BeanReifier.createProcessor(BeanReifier.java:57)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessorImpl(ProcessorReifier.java:448)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:415)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:212)
at org.apache.camel.reifier.ExpressionReifier.createFilterProcessor(ExpressionReifier.java:39)
at org.apache.camel.reifier.WhenReifier.createProcessor(WhenReifier.java:32)
at org.apache.camel.reifier.WhenReifier.createProcessor(WhenReifier.java:24)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ChoiceReifier.createProcessor(ChoiceReifier.java:54)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessorImpl(ProcessorReifier.java:448)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:415)
at org.apache.camel.reifier.TryReifier.createProcessor(TryReifier.java:38)
at org.apache.camel.reifier.ProcessorReifier.createProcessor(ProcessorReifier.java:485)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessorImpl(ProcessorReifier.java:448)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:415)
at org.apache.camel.reifier.ProcessorReifier.createOutputsProcessor(ProcessorReifier.java:212)
at org.apache.camel.reifier.ProcessorReifier.createChildProcessor(ProcessorReifier.java:231)
at org.apache.camel.reifier.SplitReifier.createProcessor(SplitReifier.java:42)
at org.apache.camel.reifier.ProcessorReifier.makeProcessorImpl(ProcessorReifier.java:536)
at org.apache.camel.reifier.ProcessorReifier.makeProcessor(ProcessorReifier.java:497)
at org.apache.camel.reifier.ProcessorReifier.addRoutes(ProcessorReifier.java:241)
at org.apache.camel.reifier.RouteReifier.addRoutes(RouteReifier.java:358)
... 56 more
Use .to instead of .bean so the message is sent to a Camel endpoint; then you can send to the mock endpoint. .bean is only for calling a POJO Java bean.
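A minimal sketch of that change, reusing the property placeholder from the question (the source endpoint is an assumption based on the test class):

import org.apache.camel.builder.RouteBuilder;

public class PortfolioTncRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("{{trade-publisher.portfolio-tnc.source-endpoint}}")
            // .to() sends to whatever endpoint URI the placeholder resolves to:
            // "bean:mqService" under application.yaml, "mock:result" under application-test.yaml
            .to("{{tp.mqservice}}");
    }
}

This works because bean:mqService is itself a valid endpoint URI, so production behaviour is unchanged while the test profile can swap in the mock.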

How to handle exception while parsing JSON in Flink

I am reading data from Kafka using Flink 1.4.2 and parsing it to ObjectNode using JSONDeserializationSchema. If an incoming record is not valid JSON, my Flink job fails. I would like to skip the broken record instead of failing the job.
FlinkKafkaConsumer010<ObjectNode> kafkaConsumer =
    new FlinkKafkaConsumer010<>(TOPIC, new JSONDeserializationSchema(), consumerProperties);

DataStream<ObjectNode> messageStream = env.addSource(kafkaConsumer);
messageStream.print();
I get the following exception if the data in Kafka is not valid JSON.
Job execution switched to status FAILING.
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'This': was expecting ('true', 'false' or 'null')
 at [Source: [B@4f522623; line: 1, column: 6]
Job execution switched to status FAILED.
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
The easiest solution is to implement your own DeserializationSchema and wrap JSONDeserializationSchema. You can then catch the exception and either ignore it or perform a custom action.

As suggested by @twalthr, I implemented my own DeserializationSchema by copying JSONDeserializationSchema and adding exception handling:
import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.IOException;

public class CustomJSONDeserializationSchema extends AbstractDeserializationSchema<ObjectNode> {

    private ObjectMapper mapper;

    @Override
    public ObjectNode deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        ObjectNode objectNode;
        try {
            objectNode = mapper.readValue(message, ObjectNode.class);
        } catch (Exception e) {
            ObjectMapper errorMapper = new ObjectMapper();
            ObjectNode errorObjectNode = errorMapper.createObjectNode();
            errorObjectNode.put("jsonParseError", new String(message));
            objectNode = errorObjectNode;
        }
        return objectNode;
    }

    @Override
    public boolean isEndOfStream(ObjectNode nextElement) {
        return false;
    }
}
In my streaming job:

messageStream
    .filter((event) -> {
        if (event.has("jsonParseError")) {
            LOG.warn("JsonParseException was handled: " + event.get("jsonParseError").asText());
            return false;
        }
        return true;
    }).print();
Flink has since improved null record handling for FlinkKafkaConsumer.
There are two possible design choices when the DeserializationSchema encounters a corrupted message: it can either throw an IOException, which causes the pipeline to be restarted, or it can return null, in which case the Flink Kafka consumer silently skips the corrupted message.
For more details, you can see this link.
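A minimal sketch of that second option, assuming the same AbstractDeserializationSchema base class used above:

import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.IOException;

public class SkippingJsonDeserializationSchema extends AbstractDeserializationSchema<ObjectNode> {

    private transient ObjectMapper mapper;

    @Override
    public ObjectNode deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        try {
            return mapper.readValue(message, ObjectNode.class);
        } catch (IOException e) {
            // returning null makes the Flink Kafka consumer skip this record
            return null;
        }
    }

    @Override
    public boolean isEndOfStream(ObjectNode nextElement) {
        return false;
    }
}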

Apache Flink JDBC InputFormat throwing java.net.SocketException: Socket closed

I am querying an Oracle database using the Flink DataSet API. For this I have customised the Flink JDBCInputFormat to return java.sql.ResultSet, as I need to perform further operations on the result set using Flink operators.
public static void main(String[] args) throws Exception {
    ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();
    environment.setParallelism(1);

    @SuppressWarnings("unchecked")
    DataSource<ResultSet> source = environment.createInput(
        JDBCInputFormat.buildJDBCInputFormat()
            .setUsername("username")
            .setPassword("password")
            .setDrivername("driver_name")
            .setDBUrl("jdbcUrl")
            .setQuery("query")
            .finish(),
        new GenericTypeInfo<ResultSet>(ResultSet.class)
    );
    source.print();
    environment.execute();
}
Following is the customised JDBCInputFormat:

public class JDBCInputFormat extends RichInputFormat<ResultSet, InputSplit> implements ResultTypeQueryable {

    @Override
    public void open(InputSplit inputSplit) throws IOException {
        try {
            Class.forName(drivername);
            dbConn = DriverManager.getConnection(dbURL, username, password);
            statement = dbConn.prepareStatement(queryTemplate, resultSetType, resultSetConcurrency);
            resultSet = statement.executeQuery();
        } catch (ClassNotFoundException | SQLException e) {
            throw new IOException("Could not open input split", e);
        }
    }

    @Override
    public void close() throws IOException {
        try {
            if (statement != null) {
                statement.close();
            }
            if (resultSet != null) {
                resultSet.close();
            }
            if (dbConn != null) {
                dbConn.close();
            }
        } catch (SQLException e) {
            throw new IOException("Could not close resources", e);
        }
    }

    @Override
    public boolean reachedEnd() throws IOException {
        try {
            isLastRecord = resultSet.isLast();
            return isLastRecord;
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }

    @Override
    public ResultSet nextRecord(ResultSet row) throws IOException {
        try {
            if (!isLastRecord) {
                resultSet.next();
            }
            return resultSet;
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }
}
This works with the query below, which limits the number of rows fetched:

SELECT a, b, c FROM xyz WHERE rownum <= 10;

But when I try to fetch all the rows (approx. 1 million), I get the exception below after a random number of rows have been fetched:
java.sql.SQLRecoverableException: Io exception: Socket closed
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:101)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:263)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:521)
at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1024)
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:314)
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:228)
at oracle.jdbc.driver.ScrollableResultSet.cacheRowAt(ScrollableResultSet.java:1839)
at oracle.jdbc.driver.ScrollableResultSet.isValidRow(ScrollableResultSet.java:1823)
at oracle.jdbc.driver.ScrollableResultSet.isLast(ScrollableResultSet.java:349)
at JDBCInputFormat.reachedEnd(JDBCInputFormat.java:98)
at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:173)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite0(Native Method)
So in my case, how can I solve this issue?
I don't think it is possible to ship a ResultSet like a regular record. It is a stateful object that internally maintains a connection to the database server. Using a ResultSet as a record transferred between Flink operators means it can be serialized, shipped over the network to another machine, deserialized, and handed to a different thread in a different JVM process. That does not work.
Depending on the connection, a ResultSet might happen to stay on the same machine in the same thread, which may be the case that worked for you. If you want to query a database from within an operator, you could implement the function as a RichMapPartitionFunction. Otherwise, I'd read the ResultSet in the data source and forward the resulting rows.
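A minimal sketch of that last suggestion: convert each row to a serializable Flink Row inside the source instead of shipping the ResultSet. A fixed three-column String query is assumed for illustration; reachedEnd() would need to be adjusted to match.

import org.apache.flink.types.Row;

import java.io.IOException;
import java.sql.SQLException;

// inside the custom input format, typed as RichInputFormat<Row, InputSplit>
@Override
public Row nextRecord(Row reuse) throws IOException {
    try {
        if (!resultSet.next()) {
            return null; // no more rows
        }
        // copy the current row into a plain, serializable record
        reuse.setField(0, resultSet.getString(1));
        reuse.setField(1, resultSet.getString(2));
        reuse.setField(2, resultSet.getString(3));
        return reuse;
    } catch (SQLException e) {
        throw new IOException("Failed to read row", e);
    }
}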

How to set receiveTimeout and connection timeout for cxfEndpoint

I am trying to set receiveTimeout and connectionTimeout for a cxfEndpoint in the code below. I found many Spring DSL related answers, but I am using the Camel DSL specifically.
void configure() throws Exception {
    super.configure()
    CamelContext context = getContext()
    String version = context.resolvePropertyPlaceholders('{{' + CommonConstants.VERSION_PROPERTY + '}}')
    String region = context.resolvePropertyPlaceholders('{{' + CommonConstants.REGION_PROPERTY + '}}')
    String getContextRoot = context.resolvePropertyPlaceholders('{{' + CommonConstants.CONTEXT_ROOT_PROPERTY + '}}')
    boolean validateResponse = getContextRoot

    // main route exposing a GET
    rest("/$version/$region/")
        .get("/$getContextRoot")
        .produces('application/json')
        .to('direct:validate')

    from('direct:validate')
        .routeId('validate')
        .bean(ValidatorSubRouteHelper.class, 'validate')
        .to('direct:get-deviceIdentification')

    from('direct:get-deviceIdentification')
        .routeId('get-deviceIdentification')
        // pre-processing closure
        .process {
            it.out.body = [it.properties[MessageReferenceConstants.USER_AGENT_HEADER], new CallContext()]
            it.in.headers[CxfConstants.OPERATION_NAME] = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_OPERATION_NAME + '}}')
            it.in.headers[Exchange.SOAP_ACTION] = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_SOAP_ACTION + '}}')
            Map<String, Object> reqCtx = new HashMap<String, Object>()
            HTTPClientPolicy clientHttpPolicy = new HTTPClientPolicy()
            clientHttpPolicy.setReceiveTimeout(10000)
            reqCtx.put(HTTPClientPolicy.class.getName(), clientHttpPolicy)
            it.in.headers[Client.REQUEST_CONTEXT] = reqCtx
        }
        .to(getEndpointURL())
        // in case of a SOAPFault from the device, handle the exception in processSOAPResponse
        .onException(SoapFault.class)
            .bean(ProcessResponseExceptionHelper.class, "processSOAPResponse")
        .end()
        // post-processing closure
        .process {
            log.info("processing the response retrieved from device service")
            MessageContentsList li = it.in.getBody(MessageContentsList.class)
            DeviceFamily deviceFamily = (DeviceFamily) li.get(0)
            log.debug('device type is ' + deviceFamily.deviceType.value)
            it.properties[MessageReferenceConstants.PROPERTY_RESPONSE_BODY] = deviceFamily.deviceType.value
        }
        .to('direct:transform')

    from('direct:transform')
        .routeId('transform')
        // transform closure
        .process {
            log.info("Entering the FilterTransformSubRoute(transform)")
            Device device = new Device()
            log.debug('device type ' + it.properties[MessageReferenceConstants.PROPERTY_RESPONSE_BODY])
            device.familyName = it.properties[MessageReferenceConstants.PROPERTY_RESPONSE_BODY]
            it.out.body = device
        }
        .choice()
            .when(simple('{{validateResponse}}'))
                .to('direct:validateResponse')

    if (validateResponse) {
        from('direct:validateResponse')
            .bean(DataValidator.getInstance('device.json'))
    }
}

/**
 * Constructs the endpoint URL.
 * Formats the endpoint URL for the device identification service call.
 * @return the endpoint URL
 */
private String getEndpointURL() {
    CamelContext context = getContext()
    def serviceURL = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.SERVICE_URL + '}}')
    def wsdlURL = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.WSDL_URL + '}}')
    boolean isGZipEnable = CommonConstants.TRUE.equalsIgnoreCase(context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.GZIP_ENABLED + '}}'))
    def serviceClass = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_SERVICE_CLASS + '}}')
    def serviceName = context.resolvePropertyPlaceholders('{{' + MessageReferenceConstants.PROPERTY_SERVICE_NAME + '}}')
    def url = "cxf:$serviceURL?" +
        "wsdlURL=$wsdlURL" +
        "&serviceClass=$serviceClass" +
        "&serviceName=$serviceName"
    if (isGZipEnable) {
        url += "&cxfEndpointConfigurer=#deviceIdentificationServiceCxfConfigurer"
    }
    log.debug("endpoint url is " + url)
    url
}
You already found the cxfEndpointConfigurer option for this.
Now you just need to implement the configurer interface, like this:
public static class MyCxfEndpointConfigurer implements CxfEndpointConfigurer {

    @Override
    public void configure(AbstractWSDLBasedEndpointFactory factoryBean) {
        // Do nothing here
    }

    @Override
    public void configureClient(Client client) {
        // reset the timeout options to override the Spring-configured ones
        HTTPConduit conduit = (HTTPConduit) client.getConduit();
        HTTPClientPolicy policy = new HTTPClientPolicy();
        // set the timeout options here
        policy.setReceiveTimeout(60000);
        policy.setConnectionTimeout(30000);
        conduit.setClient(policy);
    }

    @Override
    public void configureServer(Server server) {
        // Do nothing here
    }
}
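Then bind an instance under the name the endpoint URI references. A sketch assuming a Spring Java-config registry (the bean name is taken from getEndpointURL() in the question; in a Blueprint/OSGi setup, register a bean with the same id instead):

import org.apache.camel.component.cxf.CxfEndpointConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CxfConfigurerConfig {

    // Bean name must match the #deviceIdentificationServiceCxfConfigurer
    // reference appended to the cxf: endpoint URI in getEndpointURL() above.
    @Bean("deviceIdentificationServiceCxfConfigurer")
    public CxfEndpointConfigurer deviceIdentificationServiceCxfConfigurer() {
        return new MyCxfEndpointConfigurer();
    }
}

With that in place, Camel looks the configurer up in the registry and applies the timeouts to every client call made through the endpoint.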
