After going through the Camel in Action book, I have the following doubts.
I have the two routes below:
A.
from("file:/home/src") //(A.1)
.transacted("required") //(A.2)
.bean("dbReader", "readFromDB()") //(A.3) only read from DB
.bean("dbReader", "readFromDB()") //(A.4) only read from DB
.to("jms:queue:DEST_QUEUE") //(A.5)
Questions:
A.a. Is transacted in (A.2) really required here?
A.b. If the answer to #a is yes, then what should be the associated transaction manager of the "required" policy? Should it be JmsTransactionManager or JpaTransactionManager?
A.c. Since DEST_QUEUE is at the producer end, does the JMS component in (A.5) need to be transacted?
B.
from("jms:queue:SRC_QUEUE") //(B.1) transactional jms endpoint
.transacted("required") //(B.2)
.bean("someBean", "someMethod()") //(B.3) simple arithmetic computation
.to("jms1:queue:DEST_QUEUE") //(B.4)
SRC_QUEUE and DEST_QUEUE are queues on different JMS brokers.
Questions:
B.a. The JMS component in (B.1) is marked as transacted, so in this case does the route also need to be marked transacted as in (B.2)?
B.b. Since DEST_QUEUE is at the producer end, does the JMS component in (B.4) need to be transacted?
These are very good questions about Camel transaction handling.
General remark: a Camel transaction always means consuming transactionally from a transaction-capable system such as a database or JMS broker. The transacted statement in a route must immediately follow the from statement because it always relates to the consumption.
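As a minimal illustration in Camel's XML DSL (queue names are placeholders):
<route>
    <from uri="jms:queue:SRC"/>
    <!-- transacted comes directly after from: it governs the consumption -->
    <transacted/>
    <to uri="jms:queue:DEST"/>
</route>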
A.a. Is transacted in (A.2) really required here?
No, it is not. Since the filesystem is not transaction-capable, it can't be of any help in this route.
A.b. If the answer to #a is yes, then ... ?
There is no "filesystem transaction manager".
A.c. Since DEST_QUEUE is at the producer end, does the JMS component in (A.5) need to be transacted?
I am not sure, but I don't think so. The producer tries to hand a message over to the broker. Transactions exist to enable a rollback, but if the broker has not received the data, what would a rollback even do?
B.a. The JMS component in (B.1) is marked as transacted, so does the route also need to be marked transacted as in (B.2)?
It depends, because SRC and DEST are on different brokers.
If you want an end-to-end transaction between the brokers, you need to use an XA transaction manager, and then you have to mark the route as transacted.
If you are OK with a consumer-only transaction, you can configure the JMS component for it and omit both the Spring transaction manager and the Camel transacted statement.
To clarify the last point: if you consume with a local broker transaction, Camel does not commit the message until the route has been processed successfully. So if any error occurs, a rollback happens and the message is redelivered.
In most cases this is totally OK. What can still happen with two different brokers, however, is that the route is processed successfully and the message is delivered to the DEST broker, but Camel is no longer able to commit against the SRC broker. Then a redelivery occurs, the route is processed one more time, and the message is delivered to the DEST broker multiple times.
In my opinion the complexity of XA transactions is harder to handle than the very rare edge cases with local broker transactions. But this is a subjective opinion, and it perhaps also depends on the context or the data you are working with.
Important to note: if the SRC and DEST brokers are the same, local broker transactions are 100% sufficient! There is absolutely no need for a Spring transaction manager or Camel transacted.
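To make the consumer-transaction option from B.a concrete, here is a minimal sketch in the XML DSL (component and bean names taken from the question; the transacted endpoint option is all that is needed):
<route>
    <!-- transacted=true opens a local broker transaction; Camel commits it
         only after the whole route has completed successfully -->
    <from uri="jms:queue:SRC_QUEUE?transacted=true"/>
    <bean ref="someBean" method="someMethod"/>
    <to uri="jms1:queue:DEST_QUEUE"/>
</route>
Note that this variant needs no transacted statement in the route and no Spring transaction manager bean.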
B.b. Since DEST_QUEUE is at the producer end, does the JMS component in (B.4) need to be transacted?
Same as the answer to B.a.
Good afternoon,
I'd like to take a minute to reply to your questions. I'll address the 'B' side questions.
WRT:
B.a. The JMS component in (B.1) is marked as transacted, so in this case does the route need to be transacted as mentioned in (B.2)?
Yes. Both the source and destination components need to be marked as transacted. Marking the components as transacted will start local JMS transactions on the source and destination session. Note that these are two separate local JMS transactions that are managed by two separate JmsTransactionManagers.
Marking the route as 'transacted' will start a JTA transaction context. Note that the PlatformTransactionManager must be a JtaTransactionManager. When the 'to' component is called, the local JMS transaction for the message send will be synchronized with the local transaction for the message get. (JTA synchronized transactions). This means that the send will get a callback when the remote broker acknowledges the commit for the send. At that point, the message receive will be committed. This is 'dups OK' transactional behavior (not XA). You have a window where the message has been sent, but the receive has not been ack'ed.
Actually getting this working is tricky. Here is a sample:
<!-- ******************** Camel route definition ********************* -->
<camelContext allowUseOriginalMessage="false" id="camelContext-Bridge-Local"
    streamCache="true" trace="true" xmlns="http://camel.apache.org/schema/blueprint">
    <route id="amq-to-amq">
        <from id="from" uri="amqLoc:queue:IN"/>
        <transacted id="trans"/>
        <to id="to" uri="amqRem:queue:OUT"/>
    </route>
</camelContext>
<!-- ********************* Local AMQ configuration ************************** -->
<bean class="org.apache.activemq.camel.component.ActiveMQComponent" id="amqLoc">
    <property name="configuration">
        <bean class="org.apache.camel.component.jms.JmsConfiguration">
            <property name="connectionFactory" ref="AmqCFLocalPool"/>
            <property name="receiveTimeout" value="100000"/>
            <property name="maxConcurrentConsumers" value="3"/>
            <property name="cacheLevelName" value="CACHE_NONE"/>
            <property name="transacted" value="true"/>
        </bean>
    </property>
</bean>
<bean class="org.apache.activemq.jms.pool.PooledConnectionFactory" id="AmqCFLocalPool">
    <property name="maxConnections" value="1"/>
    <property name="idleTimeout" value="0"/>
    <property name="connectionFactory" ref="AmqCFLocal"/>
</bean>
<bean class="org.apache.activemq.ActiveMQConnectionFactory" id="AmqCFLocal">
    <property name="brokerURL" value="tcp://10.0.0.170:61616?jms.prefetchPolicy.all=0"/>
    <property name="userName" value="admin"/>
    <property name="password" value="admin"/>
</bean>
<!-- ********************* Remote AMQ configuration ************************** -->
<bean class="org.apache.activemq.camel.component.ActiveMQComponent" id="amqRem">
    <property name="configuration">
        <bean class="org.apache.camel.component.jms.JmsConfiguration">
            <property name="connectionFactory" ref="AmqCFRemotePool"/>
            <property name="transacted" value="true"/>
        </bean>
    </property>
</bean>
<bean class="org.apache.activemq.jms.pool.PooledConnectionFactory"
    destroy-method="stop" id="AmqCFRemotePool" init-method="start">
    <property name="maxConnections" value="1"/>
    <property name="idleTimeout" value="0"/>
    <property name="connectionFactory" ref="AmqCFRemote"/>
</bean>
<bean class="org.apache.activemq.ActiveMQConnectionFactory" id="AmqCFRemote">
    <property name="brokerURL" value="tcp://10.0.0.171:61616"/>
    <property name="userName" value="admin"/>
    <property name="password" value="admin"/>
</bean>
Enable DEBUG logging for org.springframework.jms.connection.JmsTransactionManager, and DEBUG/TRACE logging for the JTA transaction manager that you are using.
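For example, with a log4j 1.x XML configuration (a sketch; the second logger name is a placeholder for your JTA implementation's package):
<logger name="org.springframework.jms.connection.JmsTransactionManager">
    <level value="DEBUG"/>
</logger>
<!-- substitute the package of the JTA transaction manager you are using -->
<logger name="your.jta.transaction.manager.package">
    <level value="TRACE"/>
</logger>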
Related
I have an old application which handles JMS messages with ActiveMQ 5.8.0 and some JNDI remote topics connected to this ActiveMQ.
I have a connector like this:
<bean class="org.apache.activemq.network.jms.JmsConnector">
<property name="outboundTopicConnectionFactory" ref="jmsConnectionFactoryTo" />
<property name="outboundClientId" value="${remote.clientId}" />
<property name="jndiOutboundTemplate" ref="jndiTemplateTo" />
<property name="preferJndiDestinationLookup" value="true" />
<property name="inboundTopicBridges">
<list>
<bean class="org.apache.activemq.network.jms.InboundTopicBridge">
<property name="inboundTopicName" value="${remote.topic.to}"/>
<property name="localTopicName" value="${local.topic.to}"/>
<property name="consumerName" value="${remote.consumer.name}"/>
<property name="selector" value="${remote.selector}"/>
</bean>
</list>
</property>
</bean>
It works great, but now, for some technical reasons (strict JMS 1.1), I need to use "ConnectionFactory" instead of "TopicConnectionFactory".
With the current configuration I'm stuck, because ActiveMQ seems to require a "TopicConnectionFactory" rather than a "ConnectionFactory", and my new class "MyConnectionFactoryImpl" now implements only "ConnectionFactory":
nested exception is org.springframework.beans.ConversionNotSupportedException:
Failed to convert property value of type 'com.webmethods.jms.impl.MyConnectionFactoryImpl'
to required type 'javax.jms.TopicConnectionFactory'
for property 'outboundTopicConnectionFactory';
nested exception is java.lang.IllegalStateException:
Cannot convert value of type [com.webmethods.jms.impl.MyConnectionFactoryImpl]
to required type [javax.jms.TopicConnectionFactory] for property 'outboundTopicConnectionFactory':
no matching editors or conversion strategy found
In "org.apache.activemq.network.jms.JmsConnector" class, it use everywhere "TopicConnectionFactory", which is not recommended anymore in JMS 1.1.
EDIT :
According to #Justin Bertram, I need to use Camel instead of ActiveMQ embedded bridge. But I can't find any example of XML configuration which I can use to replace my actual two beans JMSConnector. Which is the simple way to do this keeping my XML config files ?
As the documentation for the JMS to JMS Bridge (i.e. org.apache.activemq.network.jms.JmsConnector) states:
ActiveMQ provides bridging functionality to other JMS providers that implement the JMS 1.0.2 and above specification.
In other words, the whole goal of the JMS to JMS Bridge is to use the JMS 1.0.2 interface(s). Changing it so that it only used JMS 1.1 would defeat the purpose.
The documentation also states that you should use Camel instead of the JMS to JMS Bridge:
Warning, try Camel first!
Note that we recommend you look at using Apache Camel for bridging ActiveMQ to or from any message broker (or indeed any other technology, protocol or middleware), as it's much easier to:
keep things flexible; it's very easy to map different queues/topics to one or more queues or topics on the other provider
perform content based routing, filtering and other Enterprise Integration Patterns
allows you to work with any technology, protocol or middleware, not just JMS providers
Therefore I recommend you use Camel instead of org.apache.activemq.network.jms.JmsConnector.
I would think that having your code return a TopicConnectionFactory would be the simplest solution. Even the JMS 2.0 specification provides the TopicConnectionFactory. No matter what version of ActiveMQ you are using, you certainly have the option of using the TopicConnectionFactory in your code and providing that to your bridge.
Note that the Camel route:
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route>
<from uri="mqseries:Foo.Bar"/>
<to uri="activemq:Cheese"/>
</route>
</camelContext>
has no error handling. For example, if the 'to' endpoint is down, this route will read from the 'from' endpoint and simply drop the messages on the floor. Furthermore, if the 'to' component is not configured to use a caching/pooling connection factory, a new JMS connection will be created for each message sent, which performs poorly and can leave many sockets in the TIME_WAIT state. Bottom line: beware trivial Camel routes.
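As a hedged sketch of a more defensive version of the same bridge (the redelivery numbers are illustrative, not recommendations):
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <!-- retry a few times instead of dropping messages on the floor -->
    <errorHandler id="retryHandler" type="DefaultErrorHandler">
        <redeliveryPolicy maximumRedeliveries="5" redeliveryDelay="1000"/>
    </errorHandler>
    <route errorHandlerRef="retryHandler">
        <!-- transacted=true keeps the message on the source queue until the route succeeds -->
        <from uri="mqseries:Foo.Bar?transacted=true"/>
        <to uri="activemq:Cheese"/>
    </route>
</camelContext>
On the producer side, wrapping the target connection factory in Spring's org.springframework.jms.connection.CachingConnectionFactory (or a pooled factory, as in the blueprint sample above) avoids opening a new JMS connection per message.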
In Apache Camel 2.20.2, I created a route with a split() and recipientList(). I would like the entire route and the recipients of each Exchange to run in the same transaction. I am confused about when Camel will use a separate thread and a separate transaction boundary. I've read through the Camel documentation and combed through various articles/forums on the web. I am looking for a definitive answer.
In Camel I have this route:
from("seda:process")
.transacted("TRANS_REQUIRESNEW")
.to("sql:classpath:sql/SelForUpdate.sql?dataSource=DataSource1")
.split(body())
.shareUnitOfWork()
.setHeader("transactionId", simple("${body.transactionId}"))
// Datasource 2 updates happening using "direct:xxxx" recipients
.recipientList().method(Routing.class).shareUnitOfWork().end()
.to("sql:classpath:sql/UpdateDateProcessed.sql?dataSource=DataSource1");
In the Spring context I defined the transaction management:
<jee:jndi-lookup expected-type="javax.sql.DataSource" id="Datasource1" jndi-name="jdbc/Datasource1"/>
<jee:jndi-lookup expected-type="javax.sql.DataSource" id="Datasource2" jndi-name="jdbc/Datasource2"/>
<bean id="datasource1TxManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="Datasource1" />
</bean>
<bean id="datasource2TxManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="Datasource2" />
</bean>
<bean id="TRANS_REQUIRESNEW" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
    <property name="transactionManager">
        <bean id="txMgrRouting" class="org.springframework.data.transaction.ChainedTransactionManager">
            <constructor-arg>
                <list>
                    <ref bean="datasource1TxManager" />
                    <ref bean="datasource2TxManager" />
                </list>
            </constructor-arg>
        </bean>
    </property>
    <property name="propagationBehaviorName" value="PROPAGATION_REQUIRES_NEW" />
</bean>
When I run the route, it appears that the updates to Datasource1 and Datasource2 happen in separate transactions. In addition, the SelForUpdate.sql and UpdateDateProcessed.sql statements against Datasource1 appear to run in separate transactions.
My question is: where are new threads created in this code, and where are the transaction boundaries? How would I get all of this to happen in one transaction context?
From reading the Apache Camel Developer's Cookbook, I understand that the Split and Recipient List patterns both use the same thread for all processing (unless parallel processing is used). With the SpringTransactionPolicy bean I've created, it seems all work in this route and the recipient routes should take place in the same transaction context. Am I correct?
I am using JDO (3.x, with DataNucleus 2) to persist objects in one of my apps on Google App Engine (Java). My sequence of calls is as follows:
1. Open the persistence manager in a servlet filter (servlet 1) - using a ThreadLocal
2. Call pm.findByObjectId from the DAO class (via servlet 1)
3. Call pm.deletePersistent from the DAO class (via servlet 1)
4. Call pm.newQuery to list all objects now in the DB (via servlet 1) - write to the response (JSON)
5. Close the persistence manager in the servlet filter - inside the finally block of the doFilter method
However, my objects are not being deleted until I close the pm in step 5. Also, it is not consistent; sometimes the object does get deleted! (I haven't figured out when.) I would ideally want the objects to be deleted in step 3 above, so that when my query runs in step 4 it returns the updated list.
Could anyone please let me know how I could improve this design to make inserts/deletes more atomic than this? Or is it just that the writes to the database are too slow?
Here's my jdoconfig.xml
<persistence-manager-factory name="transactions-optional">
<property name="javax.jdo.PersistenceManagerFactoryClass"
value="org.datanucleus.api.jdo.JDOPersistenceManagerFactory"/>
<property name="javax.jdo.option.ConnectionURL" value="appengine"/>
<property name="javax.jdo.option.NontransactionalRead" value="true"/>
<property name="javax.jdo.option.NontransactionalWrite" value="true"/>
<property name="javax.jdo.option.RetainValues" value="true"/>
<property name="datanucleus.appengine.autoCreateDatastoreTxns" value="true"/>
<property name="datanucleus.appengine.singletonPMFForName" value="true"/>
</persistence-manager-factory>
I suspect your GAE environment is set up to commit on close. You can control the transaction boundaries with the JDO API, e.g.:
Transaction jdoTx = pm.currentTransaction();
try {
    jdoTx.begin();
    pm.deletePersistent(obj);
    jdoTx.commit(); // the delete is flushed to the datastore here, not at pm.close()
} finally {
    if (jdoTx.isActive()) jdoTx.rollback(); // roll back if commit was never reached
}
I have an application that uses one database; for now I have this data-access-config.xml configured:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.1.xsd">
<!-- Instructs Spring to perfrom declarative transaction management on annotated classes -->
<tx:annotation-driven />
<!-- Drives transactions using local JPA APIs -->
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<!-- Creates a EntityManagerFactory for use with the Hibernate JPA provider and a simple in-memory data source populated with test data -->
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
</property>
</bean>
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost:5432/database1" />
<property name="username" value="admin1" />
<property name="password" value="some_pass" />
</bean>
</beans>
It connects fine, but now I need to configure a second database (on the same server). I tried to duplicate the EntityManagerFactory, but it throws an error saying I cannot have two entity managers at the same time, so I'm confused here. I'm using Hibernate + JPA + Spring.
Thanks!!!
Something like this should work I believe:
<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
...
</bean>
<bean id="emf1" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource1" />
...
</bean>
Then in the DAO, use:
@PersistenceContext(unitName = "emf1")
private EntityManager em;
The above will tell the DAO to use the emf1 instance.
Maybe you forgot to give your second entity manager a different name from your first?
You might need to use a "persistence unit manager", which will help manage your persistence units. See the Spring documentation on multiple persistence units. You will have the two data sources, one entity manager factory, and one persistence unit manager.
The entity manager factory will have a reference to the persistence unit manager (instead of the two data sources), and the persistence unit manager will have a reference to the two data sources.
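A minimal sketch of that wiring (the persistence unit names and persistence.xml locations below are assumptions for illustration; the data source ids match the earlier answer):
<bean id="persistenceUnitManager" class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
    <!-- one persistence.xml per unit; names and locations are hypothetical -->
    <property name="persistenceXmlLocations">
        <list>
            <value>classpath:META-INF/persistence-db1.xml</value>
            <value>classpath:META-INF/persistence-db2.xml</value>
        </list>
    </property>
    <!-- map each persistence unit to its data source -->
    <property name="dataSources">
        <map>
            <entry key="unit1" value-ref="dataSource"/>
            <entry key="unit2" value-ref="dataSource1"/>
        </map>
    </property>
    <property name="defaultDataSource" ref="dataSource"/>
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <!-- the factory references the unit manager instead of a data source -->
    <property name="persistenceUnitManager" ref="persistenceUnitManager"/>
    <property name="persistenceUnitName" value="unit1"/>
</bean>
Each DAO then selects its unit with @PersistenceContext(unitName = "unit1") or unitName = "unit2".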
How do I configure an XML file as the datasource in iBatis?
thanks,
R
If you are using Tomcat, you can configure the DataSource in context.xml and use the following definition in your configuration XML, where java:comp/env/jdbc/db is your JNDI definition in Tomcat.
<bean id="JndiDatasource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:comp/env/jdbc/db"/>
<property name="resourceRef" value="true" />
</bean>
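For reference, the corresponding Tomcat context.xml entry might look roughly like this (the JNDI name matches the bean above; driver, URL, and credentials are placeholders):
<!-- in Tomcat's context.xml; all values below are illustrative -->
<Resource name="jdbc/db" auth="Container" type="javax.sql.DataSource"
    driverClassName="oracle.jdbc.OracleDriver"
    url="jdbc:oracle:thin:@localhost:1521:XE"
    username="user" password="pass"
    maxActive="20" maxIdle="5"/>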
If it's a standalone application:
<bean id="jdbc.DataSource"
class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="oracle.jdbc.OracleDriver"/>
<property name="initialSize" value="${jdbc.initialSize}"/>
<property name="maxActive" value="${jdbc.maxActive}"/>
<property name="minIdle" value="${jdbc.minIdle}"/>
<property name="password" value="${jdbc.dbpassword}"/>
<property name="url" value="${jdbc.dburl}"/>
<property name="username" value="${jdbc.dbuser}"/>
<property name="accessToUnderlyingConnectionAllowed" value="true"/>
</bean>
You can use JndiDataSourceFactory. Here is what I got from the iBATIS documentation:
JndiDataSourceFactory - This implementation will retrieve a DataSource implementation from a JNDI context from within an application container. This is typically used when an application server is in use and a container-managed connection pool and associated DataSource implementation are provided. The standard way to access a JDBC DataSource implementation is via a JNDI context. JndiDataSourceFactory provides functionality to access such a DataSource via JNDI. The configuration parameters that must be specified in the datasource stanza are as follows:
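The parameter list itself isn't reproduced above, but a typical JNDI datasource stanza in an iBatis 2.x SqlMapConfig looks roughly like this (the JNDI name is an assumption; match it to the Resource defined in your container):
<transactionManager type="JDBC">
    <dataSource type="JNDI">
        <!-- hypothetical JNDI name; adjust to your container's definition -->
        <property name="DataSource" value="java:comp/env/jdbc/db"/>
    </dataSource>
</transactionManager>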
I used Spring to configure iBATIS with an app-server-defined data source; the Spring Framework has nice integration with iBATIS. Look at org.springframework.orm.ibatis.SqlMapClientFactoryBean to do this.
If you are looking for a complete (working) example, http://ganeshtiwaridotcomdotnp.blogspot.com/2011/05/tutorial-on-ibatis-using-eclipse-ibator_31.html might help you.
That article contains all the configuration settings for iBatis with the Ibator plugin, along with working samples and downloadable code.