How to perform a catalog update correctly in FDT 1.x?

I have heard rumours that performing a catalog update correctly in FDT 1.x is quite complex. There seem to be more steps involved than the obvious ones, which in pseudo code are:
foreach (progid in Registry having component category "FDT DTM")
{
    dtm = CoCreateInstance(progid);
    StartDTMAccordingStateMachine(dtm);
    info = dtm.GetInformation("FDT");
    catalog.Add(info);
    ShutdownDTMAccordingStateMachine(dtm);
    Release(dtm);
}
I could not find any hints in the FDT specification that would require a more complex catalog update procedure, so are the rumours true? What makes a correct catalog update procedure so complex?

Basically the idea for the catalog update is correct. Unfortunately the rumours are also true: a correct catalog update involves several additional considerations, namely:
Frame application interface considerations
During the catalog update, the DTM is not yet part of a project. Therefore the frame application could be implemented without project-specific interfaces such as IFdtTopology or IFdtBulkData. However, many DTMs query for those interfaces immediately and throw an exception if the frame application does not support them.
Also, during the catalog update, the frame application could expect that the DTM works without user interface, because this is a batch operation which should not require user interaction. This means the frame application could be implemented without the IFdtActiveX and IFdtDialog interfaces. Unfortunately there are also DTMs that use those interfaces during catalog update time.
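In practice this means the catalog-update frame application should still expose those interfaces, even if the implementations do nothing useful. In pseudo code (the interface names are taken from the FDT 1.x specification, but the members are omitted because their signatures come from the FDT IDL; treat this as an illustration only):
class CatalogUpdateFrameApplication :
    IFdtContainer,   // always required
    IFdtTopology,    // project-specific, but queried by some DTMs anyway
    IFdtBulkData,    // project-specific, but queried by some DTMs anyway
    IFdtDialog,      // UI-related, but used by some DTMs even during batch operations
    IFdtActiveX      // UI-related, but used by some DTMs even during batch operations
{
    // Implement the members from the FDT 1.x IDL here. For the catalog update it is
    // usually better to return empty or default data than to throw E_NOTIMPL, so that
    // badly behaved DTMs do not abort the update.
}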
.NET considerations
Doing a catalog update on a system with many DTMs installed could require a lot of memory. Therefore some frame applications do the catalog update in an external process. While this is a good idea, you need to consider the FDT .NET specifications and best practice documents.
The base line here is: the external process must be a .NET 2.0 process, independent of the actual implementation technology of your frame application. If you have a C++ implementation, you'll need to load a very small .NET 2.0 assembly before any DTM is started.
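If the external update process is itself a small .NET executable, one way to pin it to the 2.0 runtime is its application configuration file (a sketch; the version string assumes a standard .NET 2.0 installation):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup>
    <!-- force the update process onto the .NET 2.0 runtime -->
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>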
Memory considerations
Since FDT 1.x is a conglomerate of COM and .NET, there will be pinned objects. This makes it likely that your application suffers from small object heap fragmentation. In addition FDT passes XMLs as strings which makes it more likely that your application suffers from large object heap fragmentation. The overall combination is very dangerous.
One solution might be to start a limited number of DTMs in the same process and then restart the process, e.g. like this:
updateProcess = StartProcess();
dtmCount = 0;
foreach (progid in Registry having component category "FDT DTM")
{
    dtmCount++;
    if (dtmCount % 10 == 0)
    {
        // restart the process to avoid out-of-memory situations
        updateProcess.SignalShutdown();
        updateProcess.WaitForExit();
        updateProcess = StartProcess();
    }
    updateProcess.StartDTM(progid);
    info = updateProcess.GetDtmInformation();
    catalog.Add(info);
    updateProcess.ShutdownDTM();
}
// don't forget to shut down the last update process as well
updateProcess.SignalShutdown();
updateProcess.WaitForExit();
In the update process you'll need to create the COM object and follow the state machine etc.
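How the frame application talks to the update process is up to you (COM, named pipes, stdin/stdout, ...). As a sketch, assuming a simple line-based stdin/stdout protocol (the protocol and the helper names are assumptions, not part of FDT):
// inside the external .NET 2.0 update process
static void Main()
{
    string progid;
    while ((progid = Console.ReadLine()) != null && progid != "quit")
    {
        // create the DTM COM object from its ProgID
        Type dtmType = Type.GetTypeFromProgID(progid);
        object dtm = Activator.CreateInstance(dtmType);

        StartDTMAccordingStateMachine(dtm);       // as in the pseudo code above
        string infoXml = GetDtmInformation(dtm);  // incl. schema handling, see below
        ShutdownDTMAccordingStateMachine(dtm);
        Marshal.ReleaseComObject(dtm);

        Console.WriteLine(infoXml);               // hand the result back to the frame application
    }
}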
FDT 1.2.1 scanning information
In FDT 1.2.1, additional information was introduced to better recognize devices during a hardware scan. Although there is no fully FDT 1.2.1 compliant DTM at the time of writing, many FDT 1.2.0 DTMs implement the additional interface IDtmInformation2 to support device detection.
For you as the frame application developer, this means that you have to extend the GetDtmInformation() method in the update process:
T GetDtmInformation()
{
    var result = new T(); // T is a type defined by you
    result.info = dtm.GetInformation();
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
Schema path updates
FDT 1.2.0 had the problem that the user needed to install XDR schema definitions manually, which was very inconvenient. FDT 1.2.1 solves this problem by allowing the DTM to bring its XDR schemas with it. The definition is in the XML returned by GetInformation(), at the XML elements <FDT>, <DtmInfo>, <DtmSchemaPaths>. The DTM publishes a directory name there. In theory, this is an easy task: to install the XDR schemas, we need to update GetDtmInformation() a little bit:
T GetDtmInformation()
{
    var result = new T(); // T is a type defined by you
    result.info = dtm.GetInformation();
    schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
    foreach (dtmSchemaPath in schemaPaths)
    {
        CopyFiles(from dtmSchemaPath to frameSchemaPath);
    }
    // *) read on, more code needed here
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
Unfortunately there is now a logical bug in the sequence. Since the DTM has already been started, it has already asked the frame application for the schema path (using IFdtContainer::GetXmlSchemaPath()) and it has already set up its schema cache for validating XMLs. The DTM cannot be notified about updates to the schema path.
Therefore you need to restart the DTM in order to be sure that it gets the latest version of the XDR schemas. In code, this means you have to update the whole method to:
T GetDtmInformation()
{
    var result = new T(); // T is a type defined by you
    result.info = dtm.GetInformation();
    schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
    schemasUpdated = false;
    foreach (dtmSchemaPath in schemaPaths)
    {
        schemasUpdated |= CopyFiles(from dtmSchemaPath to frameSchemaPath);
    }
    if (schemasUpdated)
    {
        // restart the DTM to make sure it uses the latest versions of the schemas
        ShutdownDTMAccordingStateMachine(dtm);
        Release(dtm);
        dtm = CoCreateInstance(progid);
        StartDTMAccordingStateMachine(dtm);
        result.info = dtm.GetInformation();
    }
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
XDR schema version information issue
In the section above, I used a simple CopyFiles() operation to update the XDR schema files. This operation is not as simple as it seems, because it needs to perform a version number check.
The version is given in the XDR schema like this:
<AttributeType name="schemaVersion" dt:type="number" default="1.0"/>
The default attribute defines the version number of the schema. The schemaVersion attribute itself is not used anywhere else.
Version numbers that are used at the time of writing:
1.0 // e.g. FDTCIPCommunicationSchema CIP version 1.1-02
1.1 // e.g. FDTCIPChannelParameterSchema CIP version 1.1-02
1.00 // e.g. DTMIOLinkDeviceSchema IO Link version 1.0-1
1.21 // e.g. FDTIOLinkChannelParameterSchema IO Link version 1.0-1
1.22 // e.g. FDTHART_ExtendedCommunicationSchema
Version 1.21 highly suggests that it correlates to FDT version 1.2.1, which brings up the question on how to interpret the version number. There are three possible ways of interpreting it:
a) as a simple float number as defined in the datatype of XDR (dt:type="number")
b) as a version number in format major.minor
c) as a version number in format major.minorbuild where minor and build are simply concatenated
Ok, I'll leave that puzzle up to the reader. I have suggested a document clarifying this version number issue.
Anyway, this is our CopyFiles() method:
bool CopyFiles(sourceDir, destinationDir)
{
    filesCopied = false;
    foreach (filename in sourceDir)
    {
        existingVersion = ExtractVersion(destinationDir + filename);
        newVersion = ExtractVersion(sourceDir + filename);
        if (newVersion > existingVersion)
        {
            File.Copy(sourceDir + filename, destinationDir + filename);
            filesCopied = true;
        }
    }
    return filesCopied;
}
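ExtractVersion() is not part of FDT; it is a helper you have to write yourself. A minimal sketch (plain .NET 2.0 style C#, using interpretation b) from above, i.e. major.minor; requires System, System.IO and System.Xml):
static Version ExtractVersion(string xdrFilePath)
{
    if (!File.Exists(xdrFilePath))
        return new Version(0, 0); // no existing schema: always copy

    XmlDocument doc = new XmlDocument();
    doc.Load(xdrFilePath);

    // find <AttributeType name="schemaVersion" ... default="..."/> regardless of namespace
    XmlNode node = doc.SelectSingleNode("//*[@name='schemaVersion']");
    if (node == null || node.Attributes["default"] == null)
        return new Version(0, 0);

    try { return new Version(node.Attributes["default"].Value); }
    catch (Exception) { return new Version(0, 0); }
}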
XDR schema update impact on other DTMs
In the last section we returned a flag from CopyFiles() in order to determine whether or not the DTM needs to be restarted in GetDtmInformation(). However, such an update may not only affect the current DTM, it may also affect other DTMs of the same protocol which were added to the catalog earlier.
While you could simply restart the whole catalog update from scratch, this would imply a huge performance penalty. The better way is to do it selectively.
To apply a selective approach, you need to maintain a list of protocols that were updated (in T GetDtmInformation()):
foreach (dtmSchemaPath in schemaPaths)
{
    schemasUpdated = CopyFiles(from dtmSchemaPath to frameSchemaPath);
    if (schemasUpdated)
    {
        listOfChangedProtocols.Add(ExtractProtocolId(destinationDir));
    }
}
And of course, don't forget to re-update the catalog for affected DTMs:
affectedDtms = catalog.GetDtmsForProtocols(listOfChangedProtocols);
// TODO: perform catalog update again
// NOTE: remember that this might apply recursively
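Since re-updating an affected DTM may change schemas yet again, it is safer to drive this with a work list than with plain recursion. A sketch in the same pseudo code style (GetDtmsForProtocols() and the other helpers are the ones assumed above):
pendingProgids = Queue(affectedDtms.Select(dtm => dtm.progid));
while (pendingProgids is not empty)
{
    progid = pendingProgids.Dequeue();
    info = updateProcess.GetDtmInformation(progid); // includes schema copying as above
    catalog.Update(info);
    // if this DTM changed schemas again, queue the DTMs of those protocols as well
    foreach (affected in catalog.GetDtmsForProtocols(info.changedProtocols))
    {
        if (affected.progid not in pendingProgids)
        {
            pendingProgids.Enqueue(affected.progid);
        }
    }
}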
Getting protocol groups right
Next, you need to know the concept of protocol groups. A protocol group shares XDR schema files across different protocols, where each protocol is identified by a protocol ID. A good example is the CIP protocol family, which consists of the single protocols DeviceNet, CompoNet and Ethernet/IP.
These protocols share a common set of XDR schema files, so you'll find the same file three times on your hard disk. This duplication also has some impact on the catalog update since you need to update all copies even if the DTM comes for a single protocol only.
The reason is in the way a schema cache is constructed: when adding XDR schemas to the schema cache, the first file will win. Other files with the same name will not be added any more. Therefore it is important to ensure that the first file added to the cache is the one with the highest version number. This can only be achieved by updating all copies to the latest version.
This results in an update of the CopyFiles() method:
List<protocolID> CopyFiles(sourceDir, destinationDir)
{
    protocolsChanged = new List<protocolID>();
    foreach (filename in sourceDir)
    {
        foreach (subdirectory in destinationDir)
        {
            files = GetFiles(subdirectory, pattern = filename);
            if (files.Count == 1)
            {
                if (UpdateXDRConsideringVersionNumber(sourceDir, subdirectory, filename))
                {
                    protocolsChanged.Add(ExtractProtocolId(subdirectory));
                }
            }
        }
    }
    return protocolsChanged;
}

bool UpdateXDRConsideringVersionNumber(sourceDir, destinationDir, filename)
{
    existingVersion = ExtractVersion(destinationDir + filename);
    newVersion = ExtractVersion(sourceDir + filename);
    if (newVersion > existingVersion)
    {
        File.Copy(sourceDir + filename, destinationDir + filename);
        return true;
    }
    return false;
}
Device DTMs and schema paths
For whatever reason, it is defined that only communication DTMs and gateway DTMs need to bring XDR schemas with them. The rationale behind that probably was that you cannot use a device DTM without a communication or gateway DTM.
Unfortunately, when querying the Windows Registry for DTMs, you cannot predict the order in which you get DTMs. This may lead to the case that you get a device DTM first. Starting this DTM and getting information from it may result in errors or invalid XML if there is no XDR schema for the protocol of the DTM yet.
So you need to continue the catalog update, hopefully encounter a communication or gateway DTM of the same protocol that brings the XDR schemas, and then start the device DTM again; this time it will deliver useful information.
This does not require any changes to the code. It should already work if you followed all the steps described before. The only thing to consider here is good error handling; a rough sketch of how such a retry could look follows below.
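Again in pseudo code (the validity check stands in for whatever error handling you implement): collect the DTMs whose information could not be obtained or validated during the first pass and retry them afterwards, when all schemas brought by communication and gateway DTMs are installed:
failedProgids = new List();
foreach (progid in Registry having component category "FDT DTM")
{
    info = TryGetDtmInformation(progid); // wraps the procedure above with error handling
    if (info is valid)
        catalog.Add(info);
    else
        failedProgids.Add(progid);       // probably a device DTM whose schemas are not installed yet
}
// second pass: by now all schemas from communication and gateway DTMs are installed
foreach (progid in failedProgids)
{
    info = TryGetDtmInformation(progid);
    if (info is valid)
        catalog.Add(info);
    else
        ReportToUser(progid);            // genuinely broken DTM
}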
Conclusion
Hopefully I have covered all the topics which are important to know in conjunction with the FDT 1.x catalog update. As you can see, this is not only a rumour.

Related

How to configure Ignite to work as a full distributed database?

I'm trying to manage a decentralized DB spread across a huge number of partial DB instances. Each instance has a subset of the whole data, and all of them act as both nodes and clients; so when some data is requested, the query must be spread to every instance (or group of instances), and whichever one has the data will return it.
To avoid losing data if one instance goes down, I figured the instances must replicate their contents to some others. How can this scenario be configured with Ignite?
Suppose I have a table with the name and last access datetime of users in a distributed application, like ...
class UserLogOns
{
string UserName;
DateTime LastAccess;
}
Now when the program starts I prepare Ignite to work as a decentralized DB ...
static void Main(string[] args)
{
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
// Override local port.
commSpi.LocalPort = 44444;
commSpi.LocalPortRange = 0;
IgniteConfiguration cfg = new IgniteConfiguration();
// Override default communication SPI.
cfg.CommunicationSpi = commSpi;
using (var ignite = Ignition.Start(cfg))
{
var cfgCache = new CacheConfiguration("mio");
cfgCache.AtomicityMode = CacheAtomicityMode.Transactional;
var cache = ignite.GetOrCreateCache<string, UserLogOns>(cfgCache);
cache.Put(Environment.MachineName, new UserLogOns { UserName = Environment.MachineName, LastAccess = DateTime.UtcNow });
}
}
And now ... I want to get the LastAccess of another machine ("computerB"), whenever that was.
Is this correct? How can it be implemented?
It depends on the exact use-case that you want to implement. In general, Ignite provides out of the box everything that you mentioned here.
This is a good way to start with using SQL in Ignite: https://apacheignite-sql.readme.io/docs
Create the table with "template=partitioned" instead of "replicated", as shown in the example here: https://apacheignite-sql.readme.io/docs/getting-started#section-creating-tables, configure the number of backups, select a field to be the affinity key (a field that is used to map specific entries to a cluster node), and just run some queries.
Also check out the concept of baseline topology if you are going to use native persistence: https://apacheignite.readme.io/docs/baseline-topology.
In-memory mode will trigger a rebalance between nodes automatically on each server topology change (i.e. whenever a node that can store data joins or leaves the cluster).
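If you stay with the cache API from the question instead of SQL, the same idea can be expressed through the cache configuration: make the cache partitioned and give it backup copies. A sketch based on the Ignite.NET API (verify the property names against your Ignite version):
var cfgCache = new CacheConfiguration("mio")
{
    AtomicityMode = CacheAtomicityMode.Transactional,
    // spread primary copies over the whole cluster ...
    CacheMode = CacheMode.Partitioned,
    // ... and keep one backup copy of every entry on another node
    Backups = 1
};
var cache = ignite.GetOrCreateCache<string, UserLogOns>(cfgCache);
// reading another machine's entry works the same way from any node
UserLogOns other = cache.Get("computerB");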

Connections with Entity Framework and Transient Fault Handling Block?

We're migrating SQL to Azure. Our DAL is Entity Framework 4.x based. We're wanting to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 rule (or maybe more of a 95/5, but you get the point) - we're not looking to spend weeks refactoring/rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but I don't want to touch the code written and generated against it any more than we have to, since this is only here to address a minority case. Mitigation >>> elimination of this edge case for us.
Looking at the possible options explained here at MSDN, it seems Case #3 there is the "quickest" to implement, but only at first glance. Upon pondering this solution a bit, it struck me that we might have problems with connection management, since this circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems to me that the "solution" is to make sure 100% of the Contexts we instantiate use using blocks, but with our architecture, this would be difficult.
So my question: Going with Case #3 from that link, are hanging connections a problem or is there some magic somewhere that's going on that I don't know about?
I've done some experimenting and it turns out that this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" similarly.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
const int maxRetries = 4;
const int initialDelayInMilliseconds = 100;
const int maxDelayInMilliseconds = 5000;
const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;
var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));
policy.ExecuteAction(() =>
{
try
{
Connection.Open();
var storeConnection = (SqlConnection) ((EntityConnection) Connection).StoreConnection;
new SqlCommand("declare @i int", storeConnection).ExecuteNonQuery();
//Connection.Close();
// throw new ApplicationException("Test only");
}
catch (Exception e)
{
Connection.Close();
Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
throw;
}
}
);
}
In this scenario, we forcibly open the Connection (which was the goal here). Because of this, the Context keeps it open across many calls. Because of that, we must tell the Context when to close the connection. Our primary mechanism for doing that is calling the Dispose method on the Context. So if we just allow garbage collection to clean up our contexts, then we allow connections to remain hanging open.
I tested this by toggling the comments on the Connection.Close() in the try block and running a bunch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective). By calling Close, that number hovered at ~12. I then reproduced with a small number of unit tests both with and without a using block for the Context and reproduced the same result (different numbers - I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
s.last_request_end_time, s.cpu_time,
e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
ON s.session_id = e.session_id
WHERE login_name='myuser'
ORDER BY s.login_name
Conclusion: If you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all contexts you work with, otherwise you will have problems (that with SQL Azure, will cause your database to be "throttled" and ultimately taken offline for hours!).
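To illustrate that conclusion: the pattern that keeps the connection count down is simply to make every context short-lived and deterministic about disposal, for example (MyEntities stands in for your generated ObjectContext):
using (var context = new MyEntities())
{
    // the OnContextCreated work-around has already opened the connection at this point
    var activeCustomers = context.Customers.Where(c => c.IsActive).ToList();
    // ... do work ...
} // Dispose() closes the connection that OnContextCreated opened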
The problem with this approach is it only takes care of connection retries and not command retries.
If you use Entity Framework 6 (currently in alpha) then there is some new in-built support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling block without needing to change every database call - generally you will only need to change your config file and possibly one or two lines of code.
This allows you to use it for Entity Framework or Linq To Sql.
https://github.com/robdmoore/ReliableDbProvider

Issue with getting database via Sitecore API

We noticed a slight oddity in the Sitecore API code. The code is below for your reference. The code is trying to get a database by doing new Database(database). But randomly it was failing.
This code worked for a while with Database db = new Database(database); but started failing randomly yesterday. When we changed the code to Database db = Database.GetDatabase(database);, the code started working again. What is the difference between the two approaches and what is recommended by Sitecore?
I've seen this happen in both environments now - multiple times in production and a couple of times in my development environment.
public static void DeleteItem(string id, string database)
{
//get the database
Database db = new Database(database);
//get the item
Item item = db.GetItem(new ID(id));
if (item != null)
{
using (new Sitecore.SecurityModel.SecurityDisabler())
{
//delete the item
item.Delete();
}
}
}
A common way you will see people get a specific database is:
Sitecore.Data.Database master = Sitecore.Configuration.Factory.GetDatabase("master");
This is equivalent to Sitecore.Data.Database.GetDatabase("master").
When you call either of these methods it will first check the cache for the database. If not found it will build up the database with all of the configuration values within the config file via reflection. Once the database is created it will be placed in the cache for future use.
When you use the constructor on the database, it simply creates a rather empty database object. I am rather surprised to hear it was working at all when you used this method.
The proper approach to get a specific database would be to use:
Sitecore.Configuration.Factory.GetDatabase("master");
// or
Sitecore.Data.Database.GetDatabase("master");
If you are looking to get the database used with the current request (aka context database) you can use Sitecore.Context.Database. You can also use Sitecore.Context.ContentDatabase.
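Applied to the DeleteItem() method from the question, the fix is essentially a one-line change (a sketch using the types already shown above):
public static void DeleteItem(string id, string database)
{
    // resolve the configured database instead of constructing an empty one
    Database db = Sitecore.Data.Database.GetDatabase(database);
    Item item = db.GetItem(new ID(id));
    if (item != null)
    {
        using (new Sitecore.SecurityModel.SecurityDisabler())
        {
            item.Delete();
        }
    }
}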

Provide a database packaged with the .APK file or host it separately on a website?

Here is some background about my app:
I am developing an Android app that will display a random quote or verse to the user. For this I am using an SQLite database. The size of the DB would be approximately 5K to 10K records, possibly increasing to up to 1M in later versions as new quotes and verses are added. Thus the user would need to update the DB as and when newer versions of the app or DB are released.
After reading through some forums online, there seem to be two feasible ways I could provide the DB:
1. Bundle it along with the .APK file of the app, or
2. Upload it to my app's website from where users will have to download it
I want to know which method would be better (if there is yet another approach other than these, please do let me know).
After pondering this problem for some time, I have these thoughts regarding the above approaches:
Approach 1:
Users will obtain the DB along with the app, and won't have to download it separately. Installation would thereby be easier. But, users will have to reinstall the app every time there is a new version of the DB. Also, if the DB is large, it will make the installable too cumbersome.
Approach 2:
Users will have to download the full DB from the website (although I can provide a small, sample version of the DB via Approach 1). But, the installer will be simpler and smaller in size. Also, I would be able to provide future versions of the DB easily for those who might not want newer versions of the app.
Could you please tell me from a technical and an administrative standpoint which approach would be the better one and why?
If there is a third or fourth approach better than either of these, please let me know.
Thank you!
Andruid
I built a similar app for Android which gets periodic updates with data from a government agency. It's fairly easy to build an Android compatible db off the device using perl or similar and download it to the phone from a website; and this works rather well, plus the user gets current data whenever they download the app. It's also supposed to be possible to throw the data onto the sdcard if you want to avoid using primary data storage space, which is a bigger concern for my app which has a ~6Mb database.
In order to make Android happy with the DB, I believe you have to do the following (I build my DB using perl).
$st = $db->prepare( "CREATE TABLE \"android_metadata\" (\"locale\" TEXT DEFAULT 'en_US')");
$st->execute();
$st = $db->prepare( "INSERT INTO \"android_metadata\" VALUES ('en_US')");
$st->execute();
I have an update activity which checks whether updates are available and, if so, presents an "update now" screen. The download process looks like this and lives in a DatabaseHelperClass.
public void downloadUpdate(final Handler handler, final UpdateActivity updateActivity) {
URL url;
try {
close();
File f = new File(getDatabasePath());
if (f.exists()) {
f.delete();
}
getReadableDatabase();
close();
url = new URL("http://yourserver.com/" + currentDbVersion + ".sqlite");
URLConnection urlconn = url.openConnection();
final int contentLength = urlconn.getContentLength();
Log.i(TAG, String.format("Download size %d", contentLength));
handler.post(new Runnable() {
public void run() {
updateActivity.setProgressMax(contentLength);
}
});
InputStream is = urlconn.getInputStream();
// Open the empty db as the output stream
OutputStream os = new FileOutputStream(f);
// transfer bytes from the inputfile to the outputfile
byte[] buffer = new byte[1024 * 1000];
int written = 0;
int length = 0;
while (written < contentLength) {
length = is.read(buffer);
if (length == -1) {
break; // stop if the server sends less data than announced
}
os.write(buffer, 0, length);
written += length;
final int currentprogress = written;
handler.post(new Runnable() {
public void run() {
Log.i(TAG, String.format("progress %d", currentprogress));
updateActivity.setCurrentProgress(currentprogress);
}
});
}
// Close the streams
os.flush();
os.close();
is.close();
Log.i(TAG, "Download complete");
openDatabase();
} catch (Exception e) {
Log.e(TAG, "bad things", e);
}
handler.post(new Runnable() {
public void run() {
updateActivity.refreshState(true);
}
});
}
Also note that I keep a version number in the filename of the db files, and a pointer to the current one in a text file on the server.
It sounds like your app and your db are tightly bound -- that is, the app is useless without the database and the database is useless without the app, so I'd say go ahead and put them both in the same .apk.
That being said, if you expect the db to change very slowly over time, but the app to change quicker, and you don't want your users to have to download the db with each new app revision, then you might want to unbundle them. To make this work, you can do one of two things:
Install them as separate applications, but make sure they share the same userID using the sharedUserId tag in the AndroidManifest.xml file.
Install them as separate applications, and create a ContentProvider for the database. This way other apps could make use of your database as well (if that is useful).
If you are going to store the db on your website, then I would recommend that you just make RPC calls to your web server and get the data that way, so the device never has to deal with a local database. Using a cache manager to avoid repeated lookups will help as well, so pages will not have to look up data each time they reload. Also, if you need to update the data, you do not have to send out a new app every time. Using HttpClient is pretty straightforward; if you need any examples, please let me know.

How do I do nested transactions in NHibernate?

Can I do nested transactions in NHibernate, and how do I implement them? I'm using SQL Server 2008, so support is definitely in the DBMS.
I find that if I try something like this:
using (var outerTX = UnitOfWork.Current.BeginTransaction())
{
using (var nestedTX = UnitOfWork.Current.BeginTransaction())
{
... do stuff
nestedTX.Commit();
}
outerTX.Commit();
}
then by the time it comes to outerTX.Commit() the transaction has become inactive, and this results in an ObjectDisposedException on the session's AdoTransaction.
Are we therefore supposed to create nested NHibernate sessions instead? Or is there some other class we should use to wrap around the transactions (I've heard of TransactionScope, but I'm not sure what that is)?
I'm now using Ayende's UnitOfWork implementation (thanks Sneal).
Forgive any naivety in this question, I'm still new to NHibernate.
Thanks!
EDIT: I've discovered that you can use TransactionScope, such as:
using (var transactionScope = new TransactionScope())
{
using (var tx = UnitOfWork.Current.BeginTransaction())
{
... do stuff
tx.Commit();
}
using (var tx = UnitOfWork.Current.BeginTransaction())
{
... do stuff
tx.Commit();
}
transactionScope.Commit();
}
However I'm not all that excited about this, as it locks us in to using SQL Server, and also I've found that if the database is remote then you have to worry about having MSDTC enabled... one more component to go wrong. Nested transactions are so useful and easy to do in SQL that I kind of assumed NHibernate would have some way of emulating the same...
NHibernate sessions don't support nested transactions.
The following test is always true in version 2.1.2:
var session = sessionFactory.OpenSession();
var tx1 = session.BeginTransaction();
var tx2 = session.BeginTransaction();
Assert.AreEqual(tx1, tx2);
You need to wrap it in a TransactionScope to support nested transactions.
MSDTC must be enabled or you will get this error:
{"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."}
As Satish suggested, nested transactions are not supported in NHibernate. I've not come across scenarios where nested transactions were needed, but certainly I've faced problems where I had to ignore creating transactions if other ones were already active in other units of work.
The blog link below provides an example implementation for NHibernate, but should also work for SQL server:
http://rajputyh.blogspot.com/2011/02/nested-transaction-handling-with.html
I've been struggling with this for a while now. Am going to have another crack at it.
I want to implement transactions in individual service containers - because that makes them self-contained - but then be able to nest a bunch of those service methods within a larger transaction and rollback the whole lot if necessary.
Because I'm using Rhino Commons I'm now going to try refactoring using the With.Transaction method. Basically it allows us to write code as if transactions were nested, though in reality there is only one.
For example:
private Project CreateProject(string name)
{
var project = new Project(name);
With.Transaction(delegate
{
UnitOfWork.CurrentSession.Save(project);
});
return project;
}
private Sample CreateSample(Project project, string code)
{
var sample = new Sample(project, code);
With.Transaction(delegate
{
UnitOfWork.CurrentSession.Save(sample);
});
return sample;
}
private void Test_NoNestedTransaction()
{
var project = CreateProject("Project 1");
}
private void Test_NestedTransaction()
{
using (var tx = UnitOfWork.Current.BeginTransaction())
{
try
{
var project = CreateProject("Project 6");
var sample = CreateSample(project, "SAMPLE006");
}
catch
{
tx.Rollback();
throw;
}
tx.Commit();
}
}
In Test_NoNestedTransaction(), we are creating a project alone, without the context of a larger transaction. In this case, in CreateProject a new transaction will be created and committed, or rolled back if an exception occurs.
In Test_NestedTransaction(), we are creating both a sample and a project. If anything goes wrong, we want both to be rolled back. In reality, the code in CreateSample and CreateProject will run just as if there were no transactions at all; it is entirely the outer transaction that decides whether to roll back or commit, and it does so based on whether an exception is thrown. Really, that's why I'm using a manually created transaction for the outer transaction; so I have control over whether to commit or roll back, rather than just defaulting to on-exception-rollback-else-commit.
You could achieve the same thing without Rhino.Commons by putting a whole lot of this sort of thing through your code:
if (!UnitOfWork.Current.IsInActiveTransaction)
{
tx = UnitOfWork.Current.BeginTransaction();
}
_auditRepository.SaveNew(auditEvent);
if (tx != null)
{
tx.Commit();
}
... and so on. But With.Transaction, despite the clunkiness of needing to create anonymous delegates, does that quite conveniently.
An advantage of this approach over using TransactionScopes (apart from the reliance on MSDTC) is that there ought to be just a single flush to the database in the final outer-transaction commit, regardless of how many methods have been called in-between. In other words, we don't need to write uncommitted data to the database as we go, we're always just writing it to the local NHibernate cache.
In short, this solution doesn't offer ultimate control over your transactions, because it doesn't ever use more than one transaction. I guess I can accept that, since nested transactions are by no means universally supported in every DBMS anyway. But now perhaps I can at least write code without worrying about whether we're already in a transaction or not.
That implementation doesn't support nesting; if you want nesting, use Ayende's UnitOfWork implementation. Another problem with the implementation you are using (at least for web apps) is that it holds onto the ISession instance in a static variable.
I just rewrote our UnitOfWork yesterday for these reasons, it was originally based off of Gabriel's.
We don't use UnitOfWork.Current.BeginTransaction(), we use UnitofWork.TransactionalFlush(), which creates a separate transaction at the very end to flush all the changes at once.
using (var uow = UnitOfWork.Start())
{
var entity = repository.Get(1);
entity.Name = "Sneal";
uow.TransactionalFlush();
}
