I am creating a MEAN stack application and I want to clarify the below.
From a coding-standards perspective I know that validations should be executed both on the client side and the server side. What I would like to achieve is to execute the exact same validation code so that I do not repeat the code. This would be shared code for the client and the server side.
So how can I have AngularJS and Express.js invoke the same .js file for performing validations? Is it even possible?
Thanks!
You sure can do this. This approach is used by RemObjects DataAbstract (http://old.wiki.remobjects.com/wiki/Business_Rules_Scripting_API). The principle is to define business rules that apply either on both the client and the server, or on the server only. You will almost never have to check business rules ONLY on the client, because you can never "trust" the client to check your business rules.
CQRS and DDD are two architectural principles that could help you here. Domain-Driven Design will kind of "clean" or "refine" your code, pushing the infrastructure away from the core "domain" logic. And business rules apply only in the domain, so it's a good idea to keep the domain isolated from the rest.
Command-Query-Responsibility-Segregation. I like this one a lot. Basically, you define a set of commands that are validated before they are applied. There's no more machine-like code that looks like Model.Set('a', 2). Using this principle, your code will look like MyUnderstandableBusinessObject.UnderstandableCommand(aFriendlyArgument). When it comes to applying business rules, it is very handy that your actual commands reflect the use cases of your domain.
I also always encounter this problem when I work on node.js / javascript projects. The problem is that you do not have a standardized ORM that can be understood by both the client AND the server. This is paradoxical, as node.js and the browser run the same language. When I was drawn towards Node.js, I told myself, man, both client and server run the same language, that's going to save sooo much time. But that turned out to be only partly true, as there are not that many mature and professional tools out there, even if npm is really active.
I wanted to build an ORM too that could be understood by both the client and the server, and add a relational aspect to it (so that it was compatible with SQL), but I kind of abandoned the project: https://github.com/ludydoo/affinity
But, there are a couple of other solutions. Backbone is one, and it's lightweight.
The actual implementation of your business-rule checking is what you are going to have to work on. You'll want to extract the "validation" part out of your model into another object that can be shared. Something to get you started:
https://jsfiddle.net/ludydoo/y0otcvrf/
function BusinessRuleRepository() {
    this.rules = [];
}

BusinessRuleRepository.prototype.addRule = function (aModelClass, operation, callback) {
    this.rules.push({
        class: aModelClass,
        operation: operation,
        callback: callback
    });
};

BusinessRuleRepository.prototype.validate = function (object, operation, args) {
    this.rules.forEach(function (rule) {
        if (object.constructor === rule.class && operation === rule.operation) {
            rule.callback(object, args);
        }
    });
};

function MyObject() {
    this.a = 2;
}

MyObject.prototype.setA = function (value) {
    aBusinessRuleRepo.validate(this, 'setA', arguments);
    this.a = value;
};
// Creating the repository
var aBusinessRuleRepo = new BusinessRuleRepository();
//-------------------------------
// shared.js
var shared = function (aRepository) {
    aRepository.addRule(MyObject, 'setA', function (object, args) {
        if (args[0] < 0) {
            throw 'Invalid value A';
        }
    });
};

if (typeof aBusinessRuleRepo !== 'undefined') {
    shared(aBusinessRuleRepo);
}
//-------------------------------
// Creating the object
var aObject = new MyObject();
try {
    aObject.setA(-1); // throws
} catch (err) {
    alert('Shared Error: ' + err);
}
aObject.setA(2);
//-------------------------------
// server.js
var server = function (aRepository) {
    aRepository.addRule(MyObject, 'setA', function (object, args) {
        if (args[0] > 100) {
            throw 'Invalid value A';
        }
    });
};

if (typeof aBusinessRuleRepo !== 'undefined') {
    server(aBusinessRuleRepo);
}
//-------------------------------
// on server
try {
    aObject.setA(200); // throws
} catch (err) {
    alert('Server Error: ' + err);
}
The first thing of all is to model and define your domain. You'll have a set of classes that represent your business objects, as well as methods that correspond to business operations. (I would really go with CQRS for your case.)
The model definition would be shared between the client and the server.
You would have to define two files, or two objects, kept separate: ServerRules and SharedRules. Those will be a set of Repository.addRule() calls that register your business rules in the repository. Your client will get the Shared.js business rules, and the server the Shared.js + Server.js business rules. Those business rules will always be applied to your objects this way.
The little code example I showed you is very simple, and checks business rules only before the command is applied. Maybe you could add a 'beforeCommand' and 'afterCommand' parameter to check business rules before and after changes are made. Then, if you add the possibility of checking business rules after a command is applied, you must be able to roll back the changes (Backbone has this functionality, I think).
Good luck
You could automate this a little by automatically getting the name of the method you are in (see "Can I get the name of the currently running function in JavaScript?").
function checkBusinessRules(model, args) {
    // getCalleeName() is a placeholder for whatever technique you use
    // to retrieve the name of the calling method
    businessRuleRepo.validate(model, getCalleeName(), args);
}

Model.prototype.command = function (arg) {
    checkBusinessRules(this, arguments);
    // perform logic
};
EDIT 2
A small detail I would like to correct in my first answer: do not implement your business rules on property setters! Use business operation names instead:
You must make sure that you always set your model properties through methods. If you set your model properties directly by assigning a value, you bypass the whole business-rule processor.
The cheap way is to do this through standard setters such as
SetMyProperty(value);
SetAnotherProperty(value);
This is the low-level kind of business rule logic (based on getters and setters), and your business rules will then also be low-level. Which is kind of bad.
Better, you should do this through business-understandable, high-level method names such as
RegisterClient(client);
InvalidateMandate(mandate);
Then, your business rules become way more understandable and you'll almost have a good time implementing them.
aBusinessRuleRepo.addRule(ModelClass, 'RegisterClient', function () {
    if (!Session.can('RegisterClient')) { fail('Unauthorized'); }
});
Note: Cross-posted to ServerFault, based on comments.
Intro
I need to password protect some actions in my application, such as loading/saving files, clicking check-boxes, etc. This is a standard C# .Net 4.0, WinForms application which will run on Windows 7 in a corporate network.
I was about to roll my own very basic system (read: obfuscation with wide-open backdoors) with a text file of users/passwords/permissions (hashed and salted), until after some searching I found what looks like a tantalizingly simple approach, but I'm having trouble finding a good tutorial on Roles that isn't about ASP.NET.
Question
So does anyone know of one or more tutorials that show me how to:
Create a Windows User/Group and give that User/Group a Role or Permission.
Note that I'm testing this from my company's networked laptop, but will deploy it on the customer's corporate network (Not sure if this is an issue, or how tricky this will get).
Create a WinForms/console app sample with even just a single method that prints "Hello World" if I'm authenticated, or throws an exception if I'm not?
I've never done network admin or anything related, and I keep reading about Active Directory and Local Users vs. Networked Users... I was hoping for an approach where I could build to an interface and just ask Windows if the current user has permission ABC, and not care too much about how Windows figured that out. Then I can make a concrete implementation for each Local/Network/ActiveDirectory/etc. use case as required (or if required... as I don't even know that right now).
Background (read if interested, but not required to answer the question)
Just to make sure I'm going in the right direction here: basically I need/want to test this on my development PC to make sure it's going to have a good end-user experience for my customer. The problem is that currently they run an auto-login script for each computer that runs my application, and there are several different operators that use my application throughout the day. The customer wants password protection on certain features of my app, provided only to certain operators.
I think it's worthwhile to convince my customer to give each operator their own network account and assign whatever permissions they want to that operator or group, in case they need to fire somebody, change permissions, etc. It also means I just open up several options for them and they can group those permissions however they see fit based on internal corporate policies, which I really shouldn't have to be worried about (but will be if I have to roll my own, as their IT department knows almost nothing about my application).
From what I can tell it also makes my life a lot easier by not having to deal with hashing passwords and encryption, etc., and just handling which role is required to click this or that button.
First of all, you have to determine whether you really want simple role-based authorization (you may want to read: http://lostechies.com/derickbailey/2011/05/24/dont-do-role-based-authorization-checks-do-activity-based-checks/).
If you're sure it's absolutely sufficient, you're already on the right way with the SO link you provided in your question. It's kind of confusing that there is no support for 'roles' by default in Windows, but there are groups. Groups can be local or remote (e.g. ActiveDirectory), so an admin can assign users to certain groups that are specific to your application (for an example look here: http://msdn.microsoft.com/en-us/library/ms731200(v=vs.110).aspx).
One key point: you have to prepare your application's central principal, i.e. fill it with the roles supported for the current user.
Therefore, on the very startup of your application, check the current active user and set your application-wide principal and role(s). This may look like this (just a very simple example):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security;
using System.Security.Principal;
using System.Text;
using System.Threading;

namespace WindowsPrincipalTrial
{
    public class Program
    {
        // you could also move these definitions to a config file
        private static IDictionary<string, string> _groupRoleMappings = new Dictionary<string, string>()
        {
            {"MYAPPUSERGRP", MyRoles.Standard},
            {"MYAPPSUPPORTGRP", MyRoles.Extended},
            {"MYAPPADMINGRP", MyRoles.Admin},
        };

        private static void Main(string[] args)
        {
            var windowsId = WindowsIdentity.GetCurrent();
            if (windowsId != null)
            {
                var allRoleNames = getGroupCorrespondingRoles(windowsId);
                var newPrincipal = new GenericPrincipal(windowsId, allRoleNames);
                Thread.CurrentPrincipal = newPrincipal;
            }
            else
            {
                throw new NotSupportedException("There must be a logged on Windows User.");
            }
        }

        private static string[] getGroupCorrespondingRoles(WindowsIdentity id)
        {
            // you could also do this more elegantly with LINQ
            var allMappedRoleNames = new List<string>();
            string roleName;
            foreach (var grp in id.Groups)
            {
                var groupName = grp.Translate(typeof(NTAccount)).Value.ToUpper();
                if (_groupRoleMappings.TryGetValue(groupName, out roleName))
                {
                    allMappedRoleNames.Add(roleName);
                }
            }
            return allMappedRoleNames.ToArray();
        }
    }

    public static class MyRoles
    {
        public const string Standard = "standard_role";
        public const string Extended = "extended_role";
        public const string Admin = "admin_role";
    }
}
Then your application principal is set up.
Now you can check access in your code like this:
public void DoSomethingSpecial()
{
    if (Thread.CurrentPrincipal.IsInRole(MyRoles.Extended))
    {
        // do your stuff
    }
    else
    {
        // maybe display an error
    }
}
Or more drastically:
public void DoSomethingCritical()
{
    var adminPermission = new PrincipalPermission(null, MyRoles.Admin);
    adminPermission.Demand();
    // do stuff
}
which is even possible declaratively, as known from ASP.NET:
[PrincipalPermission(SecurityAction.Demand, Role = MyRoles.Admin)]
public void DoSomethingMoreCritical()
{
    // do stuff
}
The ugly thing about the latter two examples is that they throw exceptions when the required role isn't present.
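If you don't want the exception to bubble up, the caller has to handle it. A minimal sketch of what that could look like (the reaction, here a WinForms MessageBox, is just an assumption for illustration):

try
{
    DoSomethingCritical();
}
catch (SecurityException)
{
    // the Demand() failed, i.e. the current user lacks the admin role;
    // react gracefully instead of letting the application crash
    MessageBox.Show("You are not authorized to perform this action.");
}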
So you have to do the mapping between roles and groups quite early in your app startup, according to the systems you want to use (local groups, AD groups, LDAP groups, etc.).
If, however, you prefer authorization with activities after all, have a look at Windows Identity Foundation and claims-based authorization! There are already some ready-to-use frameworks out there (e.g. https://github.com/thinktecture/Thinktecture.IdentityModel).
UPDATE:
When it comes to activity-based and thereby claims-based authorization, I will sketch briefly how you could achieve it using Thinktecture's IdentityModel.
Generally that approach still uses roles internally, but has a kind of translation layer in between. Thinktecture already encapsulates many of the things needed. Authorization checks in code are then done via claim permissions. They are technically a kind of request for access to a certain resource. For the sake of simplicity I limit my example to actions only, using one single default resource (since ClaimPermission doesn't accept an empty resource).
If you want to use action#resource pairs, you'd have to modify the code respectively.
First, you need a ClaimsAuthorizationManager:
public class MyClaimsAuthorizationManager : ClaimsAuthorizationManager
{
    private IActivityRoleMapper _actionToRolesMapper;

    public MyClaimsAuthorizationManager(IActivityRoleMapper mapper)
    {
        _actionToRolesMapper = mapper;
    }

    public override bool CheckAccess(AuthorizationContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }
        try
        {
            var action = getActionNameFromAuthorizationContext(context);
            var sufficientRoles = _actionToRolesMapper.GetRolesForAction(action)
                .Select(roleName => roleName.ToUpper());
            var principal = context.Principal;
            return CheckAccessInternal(sufficientRoles, principal);
        }
        catch (Exception)
        {
            return false;
        }
    }

    protected virtual bool CheckAccessInternal(IEnumerable<string> roleNamesInUpperCase, IClaimsPrincipal principal)
    {
        var result = principal.Identities.Any(identity =>
            identity.Claims
                .Where(claim => claim.ClaimType.Equals(identity.RoleClaimType))
                .Select(roleClaim => roleClaim.Value.ToUpper())
                .Any(roleName => roleNamesInUpperCase.Contains(roleName)));
        return result;
    }

    // I'm ignoring resources here; modify this if you need them
    private string getActionNameFromAuthorizationContext(AuthorizationContext context)
    {
        return context.Action
            .Where(claim => claim.ClaimType.Equals(ClaimPermission.ActionType))
            .Select(claim => claim.Value)
            .FirstOrDefault();
    }
}
As you may have guessed, IActivityRoleMapper is an interface for a class that returns the names of all roles that include permission for a given action.
This class is very individual, and I guess you'll find your own way of implementing it, because that's not the point here. You could do it by hardcoding, loading from XML, or querying a database. You would also have to change/extend it if you wanted to use action#resource pairs for permission requests.
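Just to illustrate, a minimal hardcoded variant could look like this. Note that the shape of IActivityRoleMapper (a single GetRolesForAction method returning role names) is an assumption inferred from its usage in CheckAccess above:

// assumed interface, inferred from the usage in CheckAccess
public interface IActivityRoleMapper
{
    IEnumerable<string> GetRolesForAction(string action);
}

// minimal hardcoded implementation, just to get you started
public class ActivityRoleMapper : IActivityRoleMapper
{
    private static readonly IDictionary<string, string[]> _rolesByAction =
        new Dictionary<string, string[]>(StringComparer.OrdinalIgnoreCase)
        {
            { "something_restricted", new[] { MyRoles.Extended, MyRoles.Admin } }
        };

    public IEnumerable<string> GetRolesForAction(string action)
    {
        if (action == null)
        {
            return Enumerable.Empty<string>();
        }
        string[] roles;
        return _rolesByAction.TryGetValue(action, out roles)
            ? roles
            : Enumerable.Empty<string>();
    }
}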
Then you'd have to change the code in the Main() method to:
using Thinktecture.IdentityModel;
using Thinktecture.IdentityModel.Claims;
using Microsoft.IdentityModel.Web;

private static void Main(string[] args)
{
    var windowsId = WindowsIdentity.GetCurrent();
    if (windowsId != null)
    {
        var rolesAsClaims = getGroupCorrespondingRoles(windowsId)
            .Select(role => new Claim(ClaimTypes.Role, role))
            .ToList();
        // just if you want, remember the username
        rolesAsClaims.Add(new Claim(ClaimTypes.Name, windowsId.Name));
        var newId = new ClaimsIdentity(rolesAsClaims, null, ClaimTypes.Name, ClaimTypes.Role);
        var newPrincipal = new ClaimsPrincipal(new ClaimsIdentity[] { newId });
        AppDomain.CurrentDomain.SetThreadPrincipal(newPrincipal);
        var roleMapper = new ActivityRoleMapper(); // you have to implement this
        // register your own authorization manager, so IdentityModel will use it by default
        FederatedAuthentication.ServiceConfiguration.ClaimsAuthorizationManager =
            new MyClaimsAuthorizationManager(roleMapper);
    }
    else
    {
        throw new NotSupportedException("There must be a logged on Windows User.");
    }
}
Finally you can check access this way:
public const string EmptyResource = "myapplication";

public void DoSomethingRestricted()
{
    if (!ClaimPermission.CheckAccess("something_restricted", EmptyResource))
    {
        // error here
    }
    else
    {
        // do your really phat stuff here
    }
}
Or again, with exceptions:
private static ClaimPermission RestrictedActionPermission =
    new ClaimPermission(EmptyResource, "something_restricted");

public void DoSomethingRestrictedDemand()
{
    RestrictedActionPermission.Demand();
    // play up, from here!
}
Declarative:
[ClaimPermission(SecurityAction.Demand, Operation = "something_restricted", Resource = EmptyResource)]
public void DoSomethingRestrictedDemand2()
{
    // do stuff
}
Hope this helps.
I am learning ASP.NET MVC and am confused as to how I can ensure unique values for columns (username & email) in a table.
Can anybody help me with sample code or a link to a tutorial which shows & explains this?
EDIT:
I know that I can apply a unique key constraint on my table columns and achieve it. However, I want to know how I can achieve it via ASP.NET MVC code.
UPDATE:
I wish to do a check in my application such that no duplicate values are passed to the DAL, i.e. perform a check before inserting a new row.
Mliya, this is something you control at the database level, not at the application level.
If you are designing a database with a users table in which you would like to constrain the username and email columns to be UNIQUE, you should create a UNIQUE INDEX on those columns.
Without knowing your backend database (MySQL, SQL Server, MS Access, Oracle...) I can't show you pictures or say much more; just create the table with the designer and add these unique constraints to those columns, and by design you will be sure no duplicate values can ever be inserted for username and email.
I also suggest you create an ID column set as the PK (primary key), which means it will automatically be NOT NULL and UNIQUE.
From your ASP.NET MVC application you should of course make sure that no duplicate values are passed to the DAL for username and email. You could do this in different ways; the easiest is probably to check, before inserting a new row, whether any user already exists with that username and/or email, and if so, show a notification message telling the user to please select another pair of values.
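Just as an illustration, the pre-insert check could be as simple as this sketch (the table/column names and the connectionString field are assumptions; it uses System.Data.SqlClient):

// returns true if the username or email is already taken
// (assumed schema: Users(UserName, Email); connectionString is up to you)
public bool UserExists(string username, string email)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT COUNT(*) FROM Users WHERE UserName = @username OR Email = @email",
        connection))
    {
        command.Parameters.AddWithValue("@username", username);
        command.Parameters.AddWithValue("@email", email);
        connection.Open();
        return (int)command.ExecuteScalar() > 0;
    }
}

Keep the UNIQUE constraints in place anyway; an application-level check alone is subject to race conditions between the check and the insert.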
In an ASP.NET MVC architecture, you should try to do most of your validation in the Model, but with low-level validation rules like these, it's sometimes impossible. What you should look to for answers then is Domain-Driven Design (DDD), where Application Services can solve such low-level needs.
Application Services will have access to the database (either directly, or better yet, indirectly through repositories) and can perform low-level validation and throw a ValidationException or something similar (with detailed information the Controller can act upon and respond to the user) when a prerequisite or business rule isn't met.
S#arp Architecture implements all of this in a best-practice framework that you can use as a basis for your ASP.NET MVC applications. It is highly opinionated towards DDD principles and NHibernate, and it will sometimes force your hand on how you do stuff, which is kind of the point. The most important part about it is that it teaches you how to deal with these kinds of problems.
To answer your question more concretely and in the spirit of DDD, this is how I would solve it:
public class UserController : Controller
{
    private readonly IUserService userService;

    public UserController(IUserService userService)
    {
        // The IUserService will be injected into the controller with
        // an "Inversion of Control" container like NInject, Castle Windsor
        // or StructureMap:
        this.userService = userService;
    }

    public ActionResult Save(UserFormModel userFormModel)
    {
        if (userFormModel.IsValid)
        {
            try
            {
                // Mapping can be performed by AutoMapper or some similar library
                UserDto userDto = Mapper.Map<UserDto>(userFormModel);
                this.userService.Save(userDto);
            }
            catch (ValidationException ve)
            {
                ViewBag.Error = ve.Detail;
            }
        }
        // Show validation errors or redirect to a "user saved" page.
        return View(userFormModel);
    }
}
public class UserService : IUserService
{
    private readonly IUserRepository userRepository;

    public UserService(IUserRepository userRepository)
    {
        // The IUserRepository will be injected into the service with
        // an "Inversion of Control" container like NInject, Castle Windsor
        // or StructureMap:
        this.userRepository = userRepository;
    }

    public UserDto Save(UserDto userDto)
    {
        using (this.userRepository.BeginTransaction())
        {
            if (!this.userRepository.IsUnique(userDto.UserName))
            {
                // The UserNameNotUniqueValidationException will inherit from ValidationException
                // and build a Detail object that contains information that can be presented to
                // a user.
                throw new UserNameNotUniqueValidationException(userDto.UserName);
            }

            userDto = this.userRepository.Save(userDto);
            this.userRepository.CommitTransaction();
            return userDto;
        }
    }
}
I use Entity Framework 4 and Self-Tracking Entities. The schema is like:

Patient -> Examinations -> LeftPictures
                        -> RightPictures

So there is a TrackableCollection for each of these two relationships (Patient 1 - * ....Pictures).
Now when loading the customers Form and browsing the details I don't need to load these image data, only when another form is loaded for Examination details!
I am using a class library as a Data Repository to get data from the database (SQL Server) and this code:
public List<Patient> GetAllPatients()
{
    try
    {
        using (OptoEntities db = new OptoEntities())
        {
            List<Patient> list = db.Patients
                .Include("Addresses")
                .Include("PhoneNumbers")
                .Include("Examinations").ToList();
            list.ForEach(p =>
            {
                p.ChangeTracker.ChangeTrackingEnabled = true;
                if (!p.Addresses.IsNull() &&
                    p.Addresses.Count > 0)
                    p.Addresses.ForEach(a => a.ChangeTracker.ChangeTrackingEnabled = true);
                if (!p.PhoneNumbers.IsNull() &&
                    p.PhoneNumbers.Count > 0)
                    p.PhoneNumbers.ForEach(a => a.ChangeTracker.ChangeTrackingEnabled = true);
                if (!p.Examinations.IsNull() &&
                    p.Examinations.Count > 0)
                    p.Examinations.ForEach(e =>
                    {
                        e.ChangeTracker.ChangeTrackingEnabled = true;
                    });
            });
            return list;
        }
    }
    catch (Exception ex)
    {
        return new List<Patient>();
    }
}
Now I need, when calling the Examination details form, to go and get all the images for the Examination relationships (LeftPictures, RightPictures). I guess that is called lazy loading, and I don't understand how to make it happen while I'm closing the entities connection immediately, which I would like to keep doing.
I use BindingSource components through the application.
What is the best method to get the desired results?
Thank you.
Self-tracking entities don't support lazy loading. Moreover, lazy loading works only when entities are attached to a live context. You don't need to close / dispose the context immediately. In a WinForms application the context usually lives for a longer time (you can follow a one-context-per-form or one-context-per-presenter approach).
A WinForms application is a scenario for normal attached entities, where all these features like lazy loading and change tracking work out of the box. STEs are supposed to be used in distributed systems where you need to serialize an entity and pass it to another application (via a web service call).
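Concretely, if you keep the short-lived context, you can run a second, explicit query when the Examination details form opens, loading the picture collections in the same style as your GetAllPatients method. A sketch (the "Id" key property name is an assumption):

public Examination GetExaminationWithPictures(int examinationId)
{
    using (OptoEntities db = new OptoEntities())
    {
        // eagerly load both picture collections for just this examination
        return db.Examinations
            .Include("LeftPictures")
            .Include("RightPictures")
            .FirstOrDefault(e => e.Id == examinationId);
    }
}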
So I want to test one of my functions in my web project, but it's not actually connected to anything in the project yet (someone else is working on that part). The function takes in an "ID" field, goes off and does some queries and gets some data, performs some calculations on it, and then writes a bunch of lines to a FileStream and returns that stream. I pretty much just want to test it by having it write the file to my own computer locally, and working with that file directly after the function completes.
So my question is mainly:
1) How do I call this function just for testing purposes, so I can test all the queries/calculations/file writes, etc., without it being connected to another part of the application just yet?
2) How can I change the 'Return fs' for the FileStream so it writes to my own computer locally, letting me view the file that has been written?
Thanks guys!
To make your function testable you need to isolate all your dependencies and replace them in your test with stubs/mocks. You can achieve this with wrappers around the file system classes and by making sure your data layer classes have interfaces. With this your code could look like:
public class Something
{
    IDataProvider provider;
    IFileSystem fileSystem;

    public Something(IDataProvider provider, IFileSystem fileSystem)
    {
        this.provider = provider;
        this.fileSystem = fileSystem;
    }

    public void DoThing(int id)
    {
        // make database call to get data
        var data = provider.GetData(id);
        fileSystem.Write("someFilePath", data);
    }
}
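The sketch above assumes two small interfaces along these lines (the exact shapes are up to you):

public interface IDataProvider
{
    // fetches the data for the given id from the database
    string GetData(int id);
}

public interface IFileSystem
{
    // writes the data to the given path
    void Write(string path, string data);
}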
With this you can write a test as such (in this case using Moq-like syntax):
void SomeTest()
{
    var mockDataProvider = new Mock<IDataProvider>();
    var mockFileSystem = new Mock<IFileSystem>();
    var something = new Something(mockDataProvider.Object, mockFileSystem.Object);
    var data = "someData";
    mockDataProvider.Setup(x => x.GetData(5)).Returns(data);

    something.DoThing(5);

    mockFileSystem.Verify(x => x.Write("someFilePath", data));
}
You need to read up on unit testing, as this solves your problem in so many ways - it would also introduce you to dependency injection and mocking, which are a great way to handle your problem.
Here is an overview...
Set up your class so it accepts the data access and file writer in the constructor. You can then pass in mock or stub versions of the data access and file writer, so you don't physically need to connect to a database or write to the file system to test your code.
In the "real world" you pass in the genuine data access and file writer.
In "test world" you use something such as MOQ or Rhino Mocks to create a pretend version of the data access, this means you can predict what will come back from the data access every time you test as it isn't the real database, it's some data you have prepared. You can also create a pretend file-writer that doesn't actually need to write a real file.
You can then test your class in isolation.
Dependency Injection: http://msdn.microsoft.com/en-us/magazine/cc163739.aspx
Moq: http://code.google.com/p/moq/
Can I do nested transactions in NHibernate, and how do I implement them? I'm using SQL Server 2008, so support is definitely in the DBMS.
I find that if I try something like this:
using (var outerTX = UnitOfWork.Current.BeginTransaction())
{
    using (var nestedTX = UnitOfWork.Current.BeginTransaction())
    {
        // ... do stuff
        nestedTX.Commit();
    }

    outerTX.Commit();
}
then by the time it comes to outerTX.Commit() the transaction has become inactive, resulting in an ObjectDisposedException on the session's AdoTransaction.
Are we therefore supposed to create nested NHibernate sessions instead? Or is there some other class we should use to wrap around the transactions (I've heard of TransactionScope, but I'm not sure what that is)?
I'm now using Ayende's UnitOfWork implementation (thanks Sneal).
Forgive any naivety in this question, I'm still new to NHibernate.
Thanks!
EDIT: I've discovered that you can use TransactionScope, such as:
using (var transactionScope = new TransactionScope())
{
    using (var tx = UnitOfWork.Current.BeginTransaction())
    {
        // ... do stuff
        tx.Commit();
    }

    using (var tx = UnitOfWork.Current.BeginTransaction())
    {
        // ... do stuff
        tx.Commit();
    }

    transactionScope.Complete();
}
However I'm not all that excited about this, as it locks us into using SQL Server, and also I've found that if the database is remote then you have to worry about having MSDTC enabled... one more component to go wrong. Nested transactions are so useful and easy to do in SQL that I kind of assumed NHibernate would have some way of emulating the same...
NHibernate sessions don't support nested transactions.
The following test is always true in version 2.1.2:
var session = sessionFactory.OpenSession();
var tx1 = session.BeginTransaction();
var tx2 = session.BeginTransaction();
Assert.AreEqual(tx1, tx2);
You need to wrap it in a TransactionScope to support nested transactions.
MSDTC must be enabled or you will get this error:
{"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."}
As Satish suggested, nested transactions are not supported in NHibernate. I've not come across scenarios where nested transactions were needed, but I've certainly faced problems where I had to avoid creating transactions if other ones were already active in other units of work.
The blog link below provides an example implementation for NHibernate, and it should also work for SQL Server:
http://rajputyh.blogspot.com/2011/02/nested-transaction-handling-with.html
I've been struggling with this for a while now, and am going to have another crack at it.
I want to implement transactions in individual service containers - because that makes them self-contained - but then be able to nest a bunch of those service methods within a larger transaction and roll back the whole lot if necessary.
Because I'm using Rhino Commons I'm now going to try refactoring using the With.Transaction method. Basically it allows us to write code as if transactions were nested, though in reality there is only one.
For example:
private Project CreateProject(string name)
{
    var project = new Project(name);

    With.Transaction(delegate
    {
        UnitOfWork.CurrentSession.Save(project);
    });

    return project;
}

private Sample CreateSample(Project project, string code)
{
    var sample = new Sample(project, code);

    With.Transaction(delegate
    {
        UnitOfWork.CurrentSession.Save(sample);
    });

    return sample;
}

private void Test_NoNestedTransaction()
{
    var project = CreateProject("Project 1");
}

private void Test_NestedTransaction()
{
    using (var tx = UnitOfWork.Current.BeginTransaction())
    {
        try
        {
            var project = CreateProject("Project 6");
            var sample = CreateSample(project, "SAMPLE006");
        }
        catch
        {
            tx.Rollback();
            throw;
        }
        tx.Commit();
    }
}
In Test_NoNestedTransaction(), we are creating a project alone, without the context of a larger transaction. In this case, in CreateProject a new transaction will be created and committed, or rolled back if an exception occurs.
In Test_NestedTransaction(), we are creating both a sample and a project. If anything goes wrong, we want both to be rolled back. In reality, the code in CreateSample and CreateProject will run just as if there were no transactions at all; it is entirely the outer transaction that decides whether to roll back or commit, and does so based on whether an exception is thrown. Really, that's why I'm using a manually created transaction for the outer transaction; so I have control over whether to commit or roll back, rather than just defaulting to on-exception-rollback-else-commit.
You could achieve the same thing without Rhino.Commons by putting a whole lot of this sort of thing through your code:
ITransaction tx = null;
if (!UnitOfWork.Current.IsInActiveTransaction)
{
    tx = UnitOfWork.Current.BeginTransaction();
}

_auditRepository.SaveNew(auditEvent);

if (tx != null)
{
    tx.Commit();
}
... and so on. But With.Transaction, despite the clunkiness of needing to create anonymous delegates, does that quite conveniently.
An advantage of this approach over using TransactionScope (apart from avoiding the reliance on MSDTC) is that there ought to be just a single flush to the database in the final outer-transaction commit, regardless of how many methods have been called in between. In other words, we don't need to write uncommitted data to the database as we go; we're always just writing it to the local NHibernate cache.
In short, this solution doesn't offer ultimate control over your transactions, because it doesn't ever use more than one transaction. I guess I can accept that, since nested transactions are by no means universally supported in every DBMS anyway. But now perhaps I can at least write code without worrying about whether we're already in a transaction or not.
That implementation doesn't support nesting; if you want nesting, use Ayende's UnitOfWork implementation. Another problem with the implementation you are using (at least for web apps) is that it holds onto the ISession instance in a static variable.
I just rewrote our UnitOfWork yesterday for these reasons, it was originally based off of Gabriel's.
We don't use UnitOfWork.Current.BeginTransaction(); we use UnitOfWork.TransactionalFlush(), which creates a separate transaction at the very end to flush all the changes at once.
using (var uow = UnitOfWork.Start())
{
    var entity = repository.Get(1);
    entity.Name = "Sneal";

    uow.TransactionalFlush();
}