Google App Engine: JDO deletePersistent not consistent

I am using JDO (3.x, with DataNucleus 2) to persist objects in one of my Google App Engine (Java) apps. My sequence of calls is as follows:
1. Open the persistence manager in a servlet filter (servlet 1), held in a ThreadLocal
2. Call pm.getObjectById from a DAO class (via servlet 1)
3. Call pm.deletePersistent from the DAO class (via servlet 1)
4. Call pm.newQuery to list all objects now in the DB (via servlet 1) and write the result to the response (JSON)
5. Close the persistence manager in the servlet filter, inside the finally of the doFilter method
However, my objects are not actually deleted until I close the pm in step 5. It is also not consistent: sometimes the object does get deleted earlier (I haven't figured out when). I would ideally want the objects to be deleted in step 3, so that when my query runs in step 4 it returns the updated list.
Could anyone let me know how I could improve this design to do inserts/deletes more atomically than this? Or is it just that the writes to the database are too slow?
Here's my jdoconfig.xml:
<persistence-manager-factory name="transactions-optional">
    <property name="javax.jdo.PersistenceManagerFactoryClass"
              value="org.datanucleus.api.jdo.JDOPersistenceManagerFactory"/>
    <property name="javax.jdo.option.ConnectionURL" value="appengine"/>
    <property name="javax.jdo.option.NontransactionalRead" value="true"/>
    <property name="javax.jdo.option.NontransactionalWrite" value="true"/>
    <property name="javax.jdo.option.RetainValues" value="true"/>
    <property name="datanucleus.appengine.autoCreateDatastoreTxns" value="true"/>
    <property name="datanucleus.appengine.singletonPMFForName" value="true"/>
</persistence-manager-factory>

I suspect your GAE environment is set up to commit on close. You can control the transaction boundaries yourself with the JDO API, e.g.:
Transaction jdoTx = pm.currentTransaction();
jdoTx.begin();
pm.deletePersistent(obj);
jdoTx.commit();
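Folding that into the DAO pattern from the question, a minimal sketch could look like this (PMF and MyEntity are placeholders for your PersistenceManagerFactory holder and persistent class; adapt to your ThreadLocal setup):
import javax.jdo.PersistenceManager;
import javax.jdo.Transaction;
import com.google.appengine.api.datastore.Key;

public void delete(Key key) {
    PersistenceManager pm = PMF.get().getPersistenceManager(); // or the pm from your ThreadLocal
    Transaction tx = pm.currentTransaction();
    try {
        tx.begin();
        MyEntity obj = pm.getObjectById(MyEntity.class, key);
        pm.deletePersistent(obj);
        tx.commit(); // the delete is applied to the datastore here, not at pm.close()
    } finally {
        if (tx.isActive()) {
            tx.rollback(); // clean up if the commit was never reached
        }
    }
}
This way the query in step 4 no longer depends on the pm being closed first.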

Related

Jms component transacted and camel route transacted

After going through the Camel in Action book, I have the following doubts.
I have the two routes below:
A.
from("file:/home/src") //(A.1)
.transacted("required") //(A.2)
.bean("dbReader", "readFromDB()") //(A.3) only read from DB
.bean("dbReader", "readFromDB()") //(A.4) only read from DB
.to("jms:queue:DEST_QUEUE") //(A.5)
Questions:
A.a. Is transacted in (A.2) really required here?
A.b. If the answer to A.a is yes, what should be the associated transaction manager of the "required" policy? Should it be JmsTransactionManager or JpaTransactionManager?
A.c. As DEST_QUEUE is at the producer end, does the JMS component in (A.5) need to be transacted?
B.
from("jms:queue:SRC_QUEUE") //(B.1) transactional jms endpoint
.transacted("required") //(B.2)
.bean("someBean", "someMethod()") //(B.3) simple arithmetic computation
.to("jms1:queue:DEST_QUEUE") //(B.4)
SRC_QUEUE and DEST_QUEUE are queues of different jms broker
Questions:
B.a. The JMS component in (B.1) is marked as transacted; in that case, does the route also need to be transacted, as in (B.2)?
B.b. As DEST_QUEUE is at the producer end, does the JMS component in (B.4) need to be transacted?
These are very good questions for discussing Camel transaction handling.
A general remark first: a Camel transaction means consuming transactionally from a transaction-capable system such as a database or JMS broker. The transacted statement in a route must immediately follow the from statement because it always relates to the consumption.
A.a. Is transacted in (A.2) really required here?
No, it is not. Since the filesystem is not transaction-capable, it can't be of any help in this route.
A.b. If the answer to A.a is yes, then ...?
There is no "filesystem transaction manager".
A.c. As DEST_QUEUE is at the producer end, does the JMS component in (A.5) need to be transacted?
Not sure, but I don't think so. The producer tries to hand a message over to the broker. Transactions exist to enable a rollback, but if the broker has not received the data, what could a rollback do?
B.a. The JMS component in (B.1) is marked as transacted; does the route also need to be transacted, as in (B.2)?
It depends, because SRC and DEST are on different brokers.
If you want an end-to-end transaction between the brokers, you need to use an XA transaction manager, and then you have to mark the route as transacted.
If you are OK with a consumer-local transaction, you can configure the JMS component for it and omit both the Spring transaction manager and the Camel transacted statement.
To clarify the last point: if you consume with a local broker transaction, Camel does not commit the message until the route has been processed successfully. So if any error occurs, a rollback happens and the message is redelivered.
In most cases this is totally OK. However, what can still happen with two different brokers is that the route is processed successfully and the message is delivered to the DEST broker, but Camel is no longer able to commit against the SRC broker. Then a redelivery occurs, the route is processed one more time, and the message is delivered to the DEST broker multiple times.
In my opinion, the complexity of XA transactions is harder to handle than these very rare edge cases with local broker transactions. But that is a very subjective opinion and perhaps also depends on the context or data you are working with.
And important to note: if the SRC and DEST brokers are the same, local broker transactions are 100% sufficient! There is absolutely no need for a Spring transaction manager or Camel transacted.
B.b. As DEST_QUEUE is at the producer end, does the JMS component in (B.4) need to be transacted?
Same as the answer to B.a.
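To make the consumer-local option concrete, here is a minimal sketch (broker URLs and queue names are placeholders, and the bean call is replaced with a log step):
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class LocalBrokerTxExample {
    public static void main(String[] args) throws Exception {
        // one connection factory per broker
        ConnectionFactory srcCf = new ActiveMQConnectionFactory("tcp://src-broker:61616");
        ConnectionFactory destCf = new ActiveMQConnectionFactory("tcp://dest-broker:61616");

        CamelContext context = new DefaultCamelContext();
        // jmsComponentTransacted() enables local JMS transactions for the consumption
        context.addComponent("jms", JmsComponent.jmsComponentTransacted(srcCf));
        context.addComponent("jms1", JmsComponent.jmsComponentAutoAcknowledge(destCf));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // no transacted() and no Spring transaction manager: if anything fails
                // before the route completes, the local JMS transaction is rolled back
                // and the SRC broker redelivers the message
                from("jms:queue:SRC_QUEUE")
                    .log("processing ${body}") // stand-in for someBean.someMethod()
                    .to("jms1:queue:DEST_QUEUE");
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the consumer alive
    }
}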
Good afternoon,
I'd like to take a minute to reply to your questions. I'll address the 'B' side questions.
WRT:
B.a. The JMS component in (B.1) is marked as transacted, so in this case does the route need to be transacted as mentioned in (B.2)?
Yes. Both the source and destination components need to be marked as transacted. Marking the components as transacted will start local JMS transactions on the source and destination session. Note that these are two separate local JMS transactions that are managed by two separate JmsTransactionManagers.
Marking the route as 'transacted' will start a JTA transaction context. Note that the PlatformTransactionManager must be a JtaTransactionManager. When the 'to' component is called, the local JMS transaction for the message send is synchronized with the local transaction for the message receive (JTA-synchronized transactions). This means the send gets a callback when the remote broker acknowledges the commit of the send; at that point, the message receive is committed. This is 'dups OK' transactional behavior (not XA): there is a window where the message has been sent but the receive has not yet been acknowledged.
Actually getting this working is tricky. Here is a sample:
<!-- ******************** Camel route definition ********************* -->
<camelContext allowUseOriginalMessage="false" id="camelContext-Bridge-Local"
        streamCache="true" trace="true" xmlns="http://camel.apache.org/schema/blueprint">
    <route id="amq-to-amq">
        <from id="from" uri="amqLoc:queue:IN"/>
        <transacted id="trans"/>
        <to id="to" uri="amqRem:queue:OUT"/>
    </route>
</camelContext>

<!-- ********************* Local AMQ configuration ************************** -->
<bean class="org.apache.activemq.camel.component.ActiveMQComponent" id="amqLoc">
    <property name="configuration">
        <bean class="org.apache.camel.component.jms.JmsConfiguration">
            <property name="connectionFactory" ref="AmqCFLocalPool"/>
            <property name="receiveTimeout" value="100000"/>
            <property name="maxConcurrentConsumers" value="3"/>
            <property name="cacheLevelName" value="CACHE_NONE"/>
            <property name="transacted" value="true"/>
        </bean>
    </property>
</bean>
<bean class="org.apache.activemq.jms.pool.PooledConnectionFactory" id="AmqCFLocalPool">
    <property name="maxConnections" value="1"/>
    <property name="idleTimeout" value="0"/>
    <property name="connectionFactory" ref="AmqCFLocal"/>
</bean>
<bean class="org.apache.activemq.ActiveMQConnectionFactory" id="AmqCFLocal">
    <property name="brokerURL" value="tcp://10.0.0.170:61616?jms.prefetchPolicy.all=0"/>
    <property name="userName" value="admin"/>
    <property name="password" value="admin"/>
</bean>

<!-- ********************* Remote AMQ configuration ************************** -->
<bean class="org.apache.activemq.camel.component.ActiveMQComponent" id="amqRem">
    <property name="configuration">
        <bean class="org.apache.camel.component.jms.JmsConfiguration">
            <property name="connectionFactory" ref="AmqCFRemotePool"/>
            <property name="transacted" value="true"/>
        </bean>
    </property>
</bean>
<bean class="org.apache.activemq.jms.pool.PooledConnectionFactory"
        destroy-method="stop" id="AmqCFRemotePool" init-method="start">
    <property name="maxConnections" value="1"/>
    <property name="idleTimeout" value="0"/>
    <property name="connectionFactory" ref="AmqCFRemote"/>
</bean>
<bean class="org.apache.activemq.ActiveMQConnectionFactory" id="AmqCFRemote">
    <property name="brokerURL" value="tcp://10.0.0.171:61616"/>
    <property name="userName" value="admin"/>
    <property name="password" value="admin"/>
</bean>
Enable DEBUG logging for the org.springframework.jms.connection.JmsTransactionManager, and DEBUG/TRACE level logging for the JTA transaction manager that you are using.

Camel Split/RecipientList Threads & Transaction Boundaries

In Apache Camel 2.20.2, I created a route with a split() and recipientList(). I would like the entire route and the recipients of each Exchange to run in the same transaction. I am confused about when Camel will use a separate thread and a separate transaction boundary. I've read through the Camel documentation and combed through various articles and forums on the web, and I am looking for a definitive answer.
In Camel I have this route:
from("seda:process")
.transacted("TRANS_REQUIRESNEW")
.to("sql:classpath:sql/SelForUpdate.sql?dataSource=DataSource1")
.split(body())
.shareUnitOfWork()
.setHeader("transactionId", simple("${body.transactionId}"))
// Datasource 2 updates happening using "direct:xxxx" recipients
.recipientList().method(Routing.class).shareUnitOfWork().end()
.to("sql:classpath:sql/UpdateDateProcessed.sql?dataSource=DataSource1");
In the Spring context I defined the transaction management:
<jee:jndi-lookup expected-type="javax.sql.DataSource" id="Datasource1" jndi-name="jdbc/Datasource1"/>
<jee:jndi-lookup expected-type="javax.sql.DataSource" id="Datasource2" jndi-name="jdbc/Datasource2"/>

<bean id="datasource1TxManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="Datasource1" />
</bean>
<bean id="datasource2TxManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="Datasource2" />
</bean>

<bean id="TRANS_REQUIRESNEW" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
    <property name="transactionManager">
        <bean id="txMgrRouting" class="org.springframework.data.transaction.ChainedTransactionManager">
            <constructor-arg>
                <list>
                    <ref bean="datasource1TxManager" />
                    <ref bean="datasource2TxManager" />
                </list>
            </constructor-arg>
        </bean>
    </property>
    <property name="propagationBehaviorName" value="PROPAGATION_REQUIRES_NEW" />
</bean>
When I run the route, it appears that the updates to Datasource1 and Datasource2 happen in separate transactions. In addition, it appears that SelForUpdate.sql and UpdateDateProcessed.sql against Datasource1 happen in separate transactions.
My question is: where are new threads created in this code, and where are the transaction boundaries? How would I get all of this to happen in one transaction context?
From reading the Apache Camel Developer's Cookbook, I understand that the Split and RecipientList patterns both use the same thread for all processing (unless parallel processing is used). With the SpringTransactionPolicy bean I've created, it seems all work in this route and the recipient routes should take place in the same transaction context. Am I correct?

need to connect two databases with Hibernate and JPA

I have an application that uses one database; for now I have this data-access-config.xml configured:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.1.xsd">

    <!-- Instructs Spring to perform declarative transaction management on annotated classes -->
    <tx:annotation-driven />

    <!-- Drives transactions using local JPA APIs -->
    <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
        <property name="entityManagerFactory" ref="entityManagerFactory" />
    </bean>

    <!-- Creates an EntityManagerFactory for use with the Hibernate JPA provider and a simple in-memory data source populated with test data -->
    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" />
        </property>
    </bean>

    <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="org.postgresql.Driver" />
        <property name="url" value="jdbc:postgresql://localhost:5432/database1" />
        <property name="username" value="admin1" />
        <property name="password" value="some_pass" />
    </bean>
</beans>
It connects fine, but now I need to configure a second database (on the same server). I tried to duplicate the EntityManagerFactory, but it throws an error saying I cannot have two entity managers at the same time, so I'm confused here. I'm using Hibernate + JPA + Spring.
Thanks!!!
Something like this should work, I believe:
<bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource" />
    ...
</bean>
<bean id="emf1" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource1" />
    ...
</bean>
Then in the DAO, use
@PersistenceContext(unitName = "emf1")
private EntityManager em;
The above will tell the DAO to use the emf1 instance.
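As a hedged illustration of that (the unit names emf and emf1 are carried over from the beans above), a single DAO can even inject both units side by side:
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class MultiDbDao {
    @PersistenceContext(unitName = "emf")
    private EntityManager em;   // bound to the first database

    @PersistenceContext(unitName = "emf1")
    private EntityManager em1;  // bound to the second database

    public <T> T findInSecondDb(Class<T> type, Object id) {
        return em1.find(type, id); // runs against the second data source
    }
}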
Maybe you forgot to give your second entity manager a different name from your first?
You might need to use a "persistence unit manager", which helps manage your persistence units. See the Spring documentation on multiple persistence units. You will have the two data sources, one entity manager factory, and one persistence unit manager.
The entity manager factory will have a reference to the persistence unit manager (instead of to the two data sources), and the persistence unit manager will have a reference to the two data sources.
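A sketch of that wiring in Spring Java config (assuming a persistence.xml that declares two units, unit1 and unit2, whose data source names match the lookup keys below):
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.lookup.MapDataSourceLookup;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager;
import org.springframework.orm.jpa.persistenceunit.PersistenceUnitManager;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Configuration
public class MultiUnitJpaConfig {

    // dataSource and dataSource1 are the two data source beans, defined elsewhere
    @Bean
    public PersistenceUnitManager persistenceUnitManager(DataSource dataSource, DataSource dataSource1) {
        DefaultPersistenceUnitManager pum = new DefaultPersistenceUnitManager();
        MapDataSourceLookup lookup = new MapDataSourceLookup();
        lookup.addDataSource("dataSource", dataSource);   // resolved for unit1
        lookup.addDataSource("dataSource1", dataSource1); // resolved for unit2
        pum.setDataSourceLookup(lookup);
        return pum;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(PersistenceUnitManager pum) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setPersistenceUnitManager(pum);  // instead of a direct dataSource reference
        emf.setPersistenceUnitName("unit1"); // pick one of the declared units
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return emf;
    }
}
A second LocalContainerEntityManagerFactoryBean pointing at unit2 can be declared the same way.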

Single Sign On (SSO): How to use Active Directory as an authentication method for CAS service?

I am developing a portal for Liferay and want to apply a Single Sign-On (SSO) mechanism to it. I am using Jasig CAS for centralized authentication of my multiple web applications. So far I know that I can use CAS as an authentication method, but the next step would be to add some more intelligence and delegate the authentication to an Active Directory server.
This should be possible by using AD as a "database" against which the authentication is performed, but I am new to these things and do not know how to do this with Jasig CAS.
Any clue how to accomplish this task?
I'm making a few assumptions here, so please let me know if I'm off target:
You're using a version of CAS between 3.3.2 and 3.4.8.
You want to tie CAS into Active Directory via LDAP (for Kerberos or SPNEGO see references below) using the Bind LDAP Handler (for FastBind see references below).
You're familiar with building CAS from source via Maven.
Prerequisite
If you're going to bind to AD via "ldaps://" (as opposed to "ldap://"), the JVM on your CAS server needs to trust the SSL certificate of your Active Directory server. If you're using a self-signed cert for AD, you'll need to import this into the JVM's trust store.
Summary
Within your CAS source tree, you'll need to make changes to the following files:
cas-server-webapp/pom.xml
cas-server-webapp/src/main/webapp/WEB-INF/deployerConfigContext.xml
Details
pom.xml:
Add the following within <dependencies>:
<!-- LDAP support -->
<dependency>
    <groupId>${project.groupId}</groupId>
    <artifactId>cas-server-support-ldap</artifactId>
    <version>${project.version}</version>
</dependency>
deployerConfigContext.xml:
Reconfigure your authentication handlers:
Look for <property name="authenticationHandlers">. Inside it is a <list>, and inside that are (probably) two <bean ...> elements.
Keep this one:
<bean class="org.jasig.cas.authentication.handler.support.HttpBasedServiceCredentialsAuthenticationHandler" p:httpClient-ref="httpClient" />
The other <bean> (again, probably) corresponds to the method of authentication you're currently using. (I'm not clear on this from the question, as there are several ways CAS can authenticate without using external services. The default is SimpleTestUsernamePasswordAuthenticationHandler, which authenticates as long as the username is equal to the password.) Replace that <bean> with:
<!-- LDAP bind Authentication Handler -->
<bean class="org.jasig.cas.adaptors.ldap.BindLdapAuthenticationHandler">
    <property name="filter" value="uid=%u" />
    <property name="searchBase" value="{your LDAP search path, e.g.: cn=users,dc=example,dc=com}" />
    <property name="contextSource" ref="LDAPcontextSource" />
    <property name="ignorePartialResultException" value="yes" /> <!-- fix because of how AD returns results -->
</bean>
Modify the "searchBase" property according to your AD configuration.
Create a Context Source for LDAP:
Add this somewhere within the root <beans> element:
<bean id="LDAPcontextSource" class="org.springframework.ldap.core.support.LdapContextSource">
<property name="pooled" value="false"/>
<property name="urls">
<list>
<value>{URL of your AD server, e.g.: ldaps://ad.example.com}/</value>
</list>
</property>
<property name="userDn" value="{your account that has permission to bind to AD, e.g.: uid=someuser, dc=example, dc=com}"/>
<property name="password" value="{your password for bind}"/>
<property name="baseEnvironmentProperties">
<map>
<entry>
<key>
<value>java.naming.security.authentication</value>
</key>
<value>simple</value>
</entry>
</map>
</property>
</bean>
Modify "urls", "userDn" and "password" accordingly.
Rebuild cas-server-webapp and try it.
References:
https://wiki.jasig.org/display/CASUM/LDAP
https://wiki.jasig.org/display/CASUM/Active+Directory

iBatis | Configuring an XML file as a datasource in iBatis

How do I configure an XML file as the datasource in iBatis?
thanks,
R
If you are using Tomcat, you can configure the DataSource in its config.xml and use the following definition (a Spring JNDI lookup), where comp/env/jdbc/db is your JNDI definition in Tomcat:
<bean id="JndiDatasource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jdbc/db"/>
    <property name="resourceRef" value="true" />
</bean>
If it's a standalone application:
<bean id="jdbc.DataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="oracle.jdbc.OracleDriver"/>
    <property name="initialSize" value="${jdbc.initialSize}"/>
    <property name="maxActive" value="${jdbc.maxActive}"/>
    <property name="minIdle" value="${jdbc.minIdle}"/>
    <property name="password" value="${jdbc.dbpassword}"/>
    <property name="url" value="${jdbc.dburl}"/>
    <property name="username" value="${jdbc.dbuser}"/>
    <property name="accessToUnderlyingConnectionAllowed" value="true"/>
</bean>
You can use JndiDataSourceFactory. Here is what I got from the iBATIS documentation:
JndiDataSourceFactory - This implementation will retrieve a DataSource implementation from a JNDI context from within an application container. This is typically used when an application server is in use and a container-managed connection pool and associated DataSource implementation are provided. The standard way to access a JDBC DataSource implementation is via a JNDI context. JndiDataSourceFactory provides functionality to access such a DataSource via JNDI. The configuration parameters that must be specified in the datasource stanza are as follows:
I used Spring to configure iBATIS with an app-server-defined data source; the Spring framework has a nice integration with iBATIS. Look at org.springframework.orm.ibatis.SqlMapClientFactoryBean to do this.
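For what it's worth, a minimal sketch of that Spring wiring in Java config (the file name sqlmap-config.xml is an assumption; point it at your own iBATIS configuration):
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.orm.ibatis.SqlMapClientFactoryBean;

@Configuration
public class IbatisConfig {

    // dataSource is the app-server-provided (e.g. JNDI) DataSource bean
    @Bean
    public SqlMapClientFactoryBean sqlMapClient(DataSource dataSource) {
        SqlMapClientFactoryBean factory = new SqlMapClientFactoryBean();
        factory.setConfigLocation(new ClassPathResource("sqlmap-config.xml"));
        factory.setDataSource(dataSource); // overrides any datasource inside the iBATIS XML
        return factory;
    }
}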
If you are looking for a complete working example, http://ganeshtiwaridotcomdotnp.blogspot.com/2011/05/tutorial-on-ibatis-using-eclipse-ibator_31.html might help you.
That article contains all the configuration settings for iBatis with the Ibator plugin, along with working samples and downloadable code.
