I have an OfyService class of this type:
/**
* Custom Objectify Service that this application should use.
*/
public class OfyService {
/**
* This static block ensures that the entities are registered.
*/
static {
factory().register(MerchantProfile.class);
factory().register(Product.class);
}
/**
* Use this static method for getting the Objectify service object in order to make sure the
* above static block is executed before using Objectify.
* @return Objectify service object.
*/
public static Objectify ofy() {
return ObjectifyService.ofy();
}
/**
* Use this static method for getting the Objectify service factory.
* @return ObjectifyFactory.
*/
public static ObjectifyFactory factory() {
return ObjectifyService.factory();
}
}
I use the factory().allocateId() method to allocate a Key (to get a Long id) before saving an entity. I have a problem where I need to transfer money from one account to another and add an entry to a Transaction table, so I use ofy().transact(new Work<...>) in the following way:
WrappedBoolean result = ofy().transact(new Work<WrappedBoolean>() {
    @Override
    public WrappedBoolean run() {
        // ... debit one account, credit the other, save both plus the Transaction entity
    }
});
I allocate an ID for the Transaction entity before entering the transact block; inside it I subtract money from one account, add it to the other, and then save both accounts and the Transaction entity.
My concerns are as follows:
What happens when two concurrent requests are handled by separate request handlers on App Engine instances? Could the same ID be allocated to both of them, depending on the state of the datastore, or is it impossible for the same ID to be allocated twice?
What is the flow of control of Work compared to the conventional synchronized block we use in Java to guard critical sections?
PS: to do the same in other frameworks such as Jersey (with JPA) I would have used a synchronized block and performed the transaction inside it. Since only one thread at a time can enter that block, and the ID is assigned only once the data is saved to the table, there would have been no issues.
Thread safety is not relevant to data consistency with either the datastore or with JPA/RDBMSes. If you are relying on synchronization, you are doing something wrong.
If you create a complete unit of work that performs your task and execute it in a transaction, the datastore will ensure that it is either applied completely or not applied at all. It will also guarantee that all transactions behave as if they were executed serially. A particular execution might abort and retry, but you don't see this as a user.
In short: just put the work in a transaction and do not worry about threading. (The same goes for the ID question: the datastore allocator behind allocateId() never hands out the same ID twice, so two concurrent requests cannot both receive it.)
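For illustration, here is a minimal sketch of such a unit of work. The Account entity, its balance accessors, and the txId/fromId/toId/amount variables are assumptions made for the example, not taken from the question:

WrappedBoolean result = ofy().transact(new Work<WrappedBoolean>() {
    @Override
    public WrappedBoolean run() {
        // Loading inside the transaction lets the datastore detect a
        // conflicting concurrent transfer and retry this work.
        Account from = ofy().load().type(Account.class).id(fromId).now();
        Account to = ofy().load().type(Account.class).id(toId).now();
        if (from.getBalance() < amount) {
            return new WrappedBoolean(false); // nothing gets saved
        }
        from.setBalance(from.getBalance() - amount);
        to.setBalance(to.getBalance() + amount);
        // The pre-allocated ID is set on the Transaction entity as usual.
        Transaction entry = new Transaction(txId, fromId, toId, amount);
        ofy().save().entities(from, to, entry).now();
        return new WrappedBoolean(true);
    }
});

If the transaction aborts because of contention, Objectify retries the Work, which is why everything the work reads and writes should happen inside run().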
I was importing ttl ontologies from DBpedia into Neo4j following the blog post http://michaelbloggs.blogspot.de/2013/05/importing-ttl-turtle-ontologies-in-neo4j.html. The post uses BatchInserters to speed up the task and mentions:
Batch insertion is not transactional. If something goes wrong and you don't shutDown() your database properly, the database becomes inconsistent.
I had to interrupt one of the batch insertion tasks because it was taking much longer than expected, which left my database in an inconsistent state. I get the following message:
db_name store is not cleanly shut down
How can I recover my database from this state? Also, for future runs, is there a way to commit after importing each file, so that reverting to the last good state would be trivial? I thought of git, but I am not sure it would help with a binary file like index.db.
There are some cases where you cannot recover from an unclean shutdown when using the batch inserter API; note that its package name, org.neo4j.unsafe.batchinsert, contains the word unsafe for a reason. The batch inserter is intended to operate as fast as possible, trading safety for speed.
If you want to guarantee a clean shutdown you should use a try/finally:
BatchInserter batch = BatchInserters.inserter(<dir>);
try {
    // perform your batch insertions here
} finally {
    batch.shutdown();
}
Another alternative for special cases is registering a JVM shutdown hook. See the following snippet as an example:
final BatchInserter batch = BatchInserters.inserter(<dir>);
// register the hook first, so it is in place before any operation can fail
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        batch.shutdown();
    }
});
// do some operations potentially throwing exceptions
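One caveat with the shutdown-hook approach: hooks only run when the JVM terminates in an orderly fashion (normal exit, Ctrl-C, SIGTERM). They do not run on kill -9 or power loss, so the try/finally variant is generally the more reliable of the two.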
I have a situation where we update one entity's data and then, for certain types of updates, we may update another entity as well.
There are situations where the update to the second entity fails for some reason and throws an exception.
The question is how to handle this situation, as we would like to roll back the changes made to the first entity.
We cannot defer the update of the first entity until after the second entity's update.
In the current situation, as soon as the code reaches the block below, the first entity's changes are committed even if the second entity's update failed. So how do we roll back? I don't think leaving the PersistenceManager open when the second update fails is the right option.
finally {
    try {
        if (pm != null && !pm.isClosed()) {
            pm.close();
        }
    } catch (Exception e) {
        log.severe("Exception in finally of execute of updateDonor");
        log.severe("Exception class is :" + e.getClass().getName());
        log.severe("Exception is :" + e.getMessage());
        throw new Exception(e.getMessage()
                + ": unable to close persistence manager");
    }
    log.info("end of updateDonor");
}
I'm not sure I fully understand your situation, but would cross-group (XG) transactions, which allow a single transaction to span entities in more than one entity group, be what you are looking for? The App Engine documentation on transactions covers them in more detail. With an XG transaction, either all changes to the entities within the transaction are applied, or none are.
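A hedged sketch of what that might look like with the JDO API the question's code suggests; Donor, AuditEntry, and their setters are placeholders, and XG transactions must also be enabled (e.g. via the datanucleus.appengine.datastoreEnableXGTransactions property in jdoconfig.xml):

Transaction tx = pm.currentTransaction(); // javax.jdo.Transaction
try {
    tx.begin();
    // First update.
    Donor donor = pm.getObjectById(Donor.class, donorId);
    donor.setStatus(newStatus);
    // Second update; if this throws, commit() is never reached.
    AuditEntry entry = pm.getObjectById(AuditEntry.class, entryId);
    entry.setState(newState);
    tx.commit(); // both updates are applied atomically
} finally {
    if (tx.isActive()) {
        tx.rollback(); // undoes the first update if the second failed
    }
    pm.close();
}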
I am using Google App Engine with Google's datastore interface as the database.
My question is this: I have a network object that I want to either update if it already exists in the db, or create if it's the first time. For this I have to catch an exception and repeat the same code twice, which seems ugly and redundant and makes me think I'm doing something wrong.
The second thing that strikes me as odd is that there is no method I can think of that copies an object to an entity or vice versa. Am I expected to implement this myself? It is very uncomfortable to use setProperty or getProperty for each property, and I am wondering why there is no objectToEntity method or something of the sort.
This is how my code currently looks:
try {
    Entity network = datastore.get(KeyFactory.stringToKey(networks.get(i)._ipDigits));
    // If I get here, no exception was thrown - the entity already exists in the db.
    Network contextNet = // fetch the network object from servlet context ...
    network.setProperty("ip", contextNet._ip); // update the fields using setProperty - no better way??
    network.setProperty("offlineUsers", contextNet._offlineUsers);
    datastore.put(network);
}
// Entity doesn't exist - create a new entity and save it (repeating the same code)...
catch (EntityNotFoundException e) {
    Entity network = new Entity("network", Long.parseLong(networks.get(i)._ipDigits));
    Network contextNet = // ...fetch the network object from servlet context
    network.setProperty("ip", contextNet._ip);
    network.setProperty("offlineUsers", contextNet._offlineUsers);
    datastore.put(network);
}
You don't have to get and put the entity in order to update it. If you know the ID of the entity, you can just put it: if it exists it will be updated, if not it will be created.
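For example, with the same fields and names as in the question, the whole try/catch collapses to a single unconditional put:

// put() is an upsert: it creates the entity if the key is new and
// overwrites the stored entity if one with that key already exists.
Entity network = new Entity("network", Long.parseLong(networks.get(i)._ipDigits));
Network contextNet = // ...fetch the network object from servlet context
network.setProperty("ip", contextNet._ip);
network.setProperty("offlineUsers", contextNet._offlineUsers);
datastore.put(network);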
Use Objectify to automatically map your classes to entities.
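A minimal sketch of what that mapping might look like; the field names are assumed from the question, and ofy() is the kind of static accessor shown in the OfyService earlier in this page:

import java.util.List;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
public class Network {
    @Id private Long ipDigits;           // becomes the datastore key
    private String ip;
    private List<String> offlineUsers;   // collections map automatically

    private Network() {}                 // Objectify needs a no-arg constructor
}

Registering the class and saving an instance then replaces all of the setProperty calls, and since save() is an upsert, the get/catch dance disappears as well:

ofy().save().entity(network).now();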
Can I do nested transactions in NHibernate, and how do I implement them? I'm using SQL Server 2008, so support is definitely in the DBMS.
I find that if I try something like this:
using (var outerTX = UnitOfWork.Current.BeginTransaction())
{
using (var nestedTX = UnitOfWork.Current.BeginTransaction())
{
... do stuff
nestedTX.Commit();
}
outerTX.Commit();
}
then by the time it reaches outerTX.Commit() the transaction has become inactive, resulting in an ObjectDisposedException on the session's AdoTransaction.
Are we therefore supposed to create nested NHibernate sessions instead? Or is there some other class we should use to wrap around the transactions (I've heard of TransactionScope, but I'm not sure what that is)?
I'm now using Ayende's UnitOfWork implementation (thanks Sneal).
Forgive any naivety in this question, I'm still new to NHibernate.
Thanks!
EDIT: I've discovered that you can use TransactionScope, such as:
using (var transactionScope = new TransactionScope())
{
using (var tx = UnitOfWork.Current.BeginTransaction())
{
... do stuff
tx.Commit();
}
using (var tx = UnitOfWork.Current.BeginTransaction())
{
... do stuff
tx.Commit();
}
transactionScope.Complete();
}
However, I'm not all that excited about this, as it locks us into using SQL Server, and I've also found that if the database is remote you have to worry about having MSDTC enabled... one more component to go wrong. Nested transactions are so useful and so easy to do in SQL that I kind of assumed NHibernate would have some way of emulating them...
NHibernate sessions don't support nested transactions.
The following assertion always passes in version 2.1.2:
var session = sessionFactory.OpenSession();
var tx1 = session.BeginTransaction();
var tx2 = session.BeginTransaction();
Assert.AreEqual(tx1, tx2);
You need to wrap it in a TransactionScope to support nested transactions.
MSDTC must be enabled or you will get this error:
{"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."}
As Satish suggested, nested transactions are not supported in NHibernate. I've not come across scenarios where nested transactions were needed, but I've certainly faced problems where I had to avoid creating a transaction when another was already active in a different unit of work.
The blog post linked below provides an example implementation for NHibernate, and it should work for SQL Server as well:
http://rajputyh.blogspot.com/2011/02/nested-transaction-handling-with.html
I've been struggling with this for a while now and am going to have another crack at it.
I want to implement transactions in individual service containers, because that makes them self-contained, but then be able to nest a bunch of those service methods within a larger transaction and roll back the whole lot if necessary.
Because I'm using Rhino Commons I'm now going to try refactoring using the With.Transaction method. Basically it allows us to write code as if transactions were nested, though in reality there is only one.
For example:
private Project CreateProject(string name)
{
var project = new Project(name);
With.Transaction(delegate
{
UnitOfWork.CurrentSession.Save(project);
});
return project;
}
private Sample CreateSample(Project project, string code)
{
var sample = new Sample(project, code);
With.Transaction(delegate
{
UnitOfWork.CurrentSession.Save(sample);
});
return sample;
}
private void Test_NoNestedTransaction()
{
var project = CreateProject("Project 1");
}
private void Test_NestedTransaction()
{
using (var tx = UnitOfWork.Current.BeginTransaction())
{
try
{
var project = CreateProject("Project 6");
var sample = CreateSample(project, "SAMPLE006");
}
catch
{
tx.Rollback();
throw;
}
tx.Commit();
}
}
In Test_NoNestedTransaction(), we are creating a project alone, outside the context of a larger transaction. In this case, a new transaction is created and committed inside CreateProject, or rolled back if an exception occurs.
In Test_NestedTransaction(), we are creating both a project and a sample. If anything goes wrong, we want both to be rolled back. In reality, the code in CreateSample and CreateProject runs just as if there were no transactions at all; it is entirely the outer transaction that decides whether to roll back or commit, and it does so based on whether an exception is thrown. That is why I'm using a manually created transaction for the outer transaction: it gives me control over whether to commit or roll back, rather than just defaulting to rollback-on-exception, commit otherwise.
You could achieve the same thing without Rhino.Commons by scattering a whole lot of this sort of thing through your code:
ITransaction tx = null;
if (!UnitOfWork.Current.IsInActiveTransaction)
{
    tx = UnitOfWork.Current.BeginTransaction();
}
_auditRepository.SaveNew(auditEvent);
if (tx != null)
{
tx.Commit();
}
... and so on. But With.Transaction, despite the clunkiness of needing to create anonymous delegates, does that quite conveniently.
An advantage of this approach over using TransactionScopes (apart from avoiding the reliance on MSDTC) is that there should be just a single flush to the database, in the final outer-transaction commit, regardless of how many methods have been called in between. In other words, we never write uncommitted data to the database as we go; we only write it to the local NHibernate cache.
In short, this solution doesn't offer ultimate control over your transactions, because it doesn't ever use more than one transaction. I guess I can accept that, since nested transactions are by no means universally supported in every DBMS anyway. But now perhaps I can at least write code without worrying about whether we're already in a transaction or not.
That implementation doesn't support nesting; if you want nesting, use Ayende's UnitOfWork implementation. Another problem with the implementation you are using (at least for web apps) is that it holds onto the ISession instance in a static variable.
I just rewrote our UnitOfWork yesterday for these reasons; it was originally based on Gabriel's.
We don't use UnitOfWork.Current.BeginTransaction(); instead we use UnitOfWork.TransactionalFlush(), which creates a separate transaction at the very end to flush all the changes at once.
using (var uow = UnitOfWork.Start())
{
var entity = repository.Get(1);
entity.Name = "Sneal";
uow.TransactionalFlush();
}