Is it ever a good idea to work directly with the context? For example, say I have a database of customers and a user can search them by name, display a list, choose one, then edit that customer's properties.
It seems I should use the context to get a list of customers (mapped to POCOs or CustomerViewModels) and then immediately close the context. Then, when the user selects one of the CustomerViewModels in the list the customer properties section of the UI populates.
Next they can change the name, type, website address, company size, etc. Upon hitting a save button, I then open a new context, use the ID from the CustomerViewModel to retrieve that customer record, and update each of its properties. Finally, I call SaveChanges() and close the context. This is a LOT OF WORK.
My question is: why not just work directly with the context, leaving it open throughout? I have read that using the same context with a long lifetime is very bad and will inevitably cause problems. My assumption is that if the application will only be used by ONE person, I can leave the context open and do everything. However, if there will be many users, I want to maintain a concise unit of work and thus open and close the context on a per-request basis.
Any suggestions? Thanks.
@PGallagher - Thanks for the thorough answer.
@Brice - your input is helpful as well.
However, @Manos D., the 'epitome of redundant code' comment concerns me a bit. Let me go through an example. Let's say I'm storing customers in a database and one of my customer properties is CommunicationMethod.
[Flags]
public enum CommunicationMethod
{
    None = 0,
    Print = 1,
    Email = 2,
    Fax = 4
}
The UI for my manage customers page in WPF will contain three check boxes under the customer communication method (Print, Email, Fax). I can't bind each checkbox to that enum; it doesn't make sense. Also, what if the user clicks that customer, then gets up and goes to lunch... the context sits there for hours, which is bad. Instead, this is my thought process.
End user chooses a customer from the list. I new up a context, find that customer and return a CustomerViewModel, then the context is closed (I've left repositories out for simplicity here).
using (MyContext ctx = new MyContext())
{
    CurrentCustomerVM = new CustomerViewModel(ctx.Customers.Find(customerId));
}
Now the user can check/uncheck the Print, Email, Fax buttons as they are bound to three bool properties in the CustomerViewModel, which also has a Save() method. Here goes.
public class CustomerViewModel : ViewModelBase
{
    Customer _customer;

    public CustomerViewModel(Customer customer)
    {
        _customer = customer;
    }

    public bool CommunicateViaEmail
    {
        get { return _customer.CommunicationMethod.HasFlag(CommunicationMethod.Email); }
        set
        {
            if (value == _customer.CommunicationMethod.HasFlag(CommunicationMethod.Email)) return;

            if (value)
                _customer.CommunicationMethod |= CommunicationMethod.Email;
            else
                _customer.CommunicationMethod &= ~CommunicationMethod.Email;
        }
    }

    public bool CommunicateViaFax
    {
        get { return _customer.CommunicationMethod.HasFlag(CommunicationMethod.Fax); }
        set
        {
            if (value == _customer.CommunicationMethod.HasFlag(CommunicationMethod.Fax)) return;

            if (value)
                _customer.CommunicationMethod |= CommunicationMethod.Fax;
            else
                _customer.CommunicationMethod &= ~CommunicationMethod.Fax;
        }
    }

    public bool CommunicateViaPrint
    {
        get { return _customer.CommunicationMethod.HasFlag(CommunicationMethod.Print); }
        set
        {
            if (value == _customer.CommunicationMethod.HasFlag(CommunicationMethod.Print)) return;

            if (value)
                _customer.CommunicationMethod |= CommunicationMethod.Print;
            else
                _customer.CommunicationMethod &= ~CommunicationMethod.Print;
        }
    }

    public void Save()
    {
        using (MyContext ctx = new MyContext())
        {
            var toUpdate = ctx.Customers.Find(_customer.Id);
            // Copy the combined flags value back onto the tracked entity.
            toUpdate.CommunicationMethod = _customer.CommunicationMethod;
            ctx.SaveChanges();
        }
    }
}
Do you see anything wrong with this?
It is OK to use a long-running context; you just need to be aware of the implications.
A context represents a unit of work. Whenever you call SaveChanges, all the pending changes to the entities being tracked will be saved to the database. Because of this, you'll need to scope each context to what makes sense. For example, if you have a tab to manage customers and another to manage products, you might use one context for each, so that when a user clicks save on the customer tab, all of the changes they made to products are not also saved.
Having a lot of entities tracked by a context could also slow down DetectChanges. One way to mitigate this is by using change tracking proxies.
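For example, here is a minimal sketch of an entity shaped so that EF can create change-tracking proxies (the class must be public and not sealed, and every mapped property must be virtual):

public class Customer
{
    // All mapped properties are virtual, so EF can generate a change-tracking
    // proxy at runtime instead of relying on snapshot comparison in DetectChanges.
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual CommunicationMethod CommunicationMethod { get; set; }
}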
Since the time between loading an entity and saving that entity could be quite long, the chance of hitting an optimistic concurrency exception is greater than with short-lived contexts. These exceptions occur when an entity is changed externally between loading and saving it. Handling these exceptions is pretty straightforward, but it's still something to be aware of.
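Handling one might look roughly like this with the DbContext API (a sketch that resolves the conflict by letting the client's values win; adjust the policy to suit your application):

try
{
    context.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
    foreach (var entry in ex.Entries)
    {
        // Refresh the original values from the database so the next save
        // succeeds; the values currently on the client win the conflict.
        entry.OriginalValues.SetValues(entry.GetDatabaseValues());
    }
    context.SaveChanges();
}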
One cool thing you can do with long-lived contexts in WPF is bind to the DbSet.Local property (e.g. context.Customers.Local). This is an ObservableCollection that contains all of the tracked entities that are not marked for deletion.
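For example, something like this (a sketch; customersGrid is a hypothetical DataGrid, and Load() is the extension method from the System.Data.Entity namespace):

// Load the customers into the context, then bind to the live Local collection.
context.Customers.Load();
customersGrid.ItemsSource = context.Customers.Local;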
Hopefully this gives you a bit more information to help you decide which approach to take.
Microsoft Reference:
http://msdn.microsoft.com/en-gb/library/cc853327.aspx
They say;
Limit the scope of the ObjectContext

In most cases, you should create an ObjectContext instance within a using statement (Using…End Using in Visual Basic). This can increase performance by ensuring that the resources associated with the object context are disposed automatically when the code exits the statement block. However, when controls are bound to objects managed by the object context, the ObjectContext instance should be maintained as long as the binding is needed and disposed of manually.
For more information, see Managing Resources in Object Services (Entity Framework). http://msdn.microsoft.com/en-gb/library/bb896325.aspx
Which says;
In a long-running object context, you must ensure that the context is disposed when it is no longer required.
StackOverflow Reference:
This StackOverflow question also has some useful answers...
Entity Framework Best Practices In Business Logic?
Where a few have suggested that you promote your context to a higher level and reference it from there, thus keeping only a single Context.
My ten pence worth:
Wrapping the Context in a Using statement allows the Garbage Collector to clean up the resources and prevents memory leaks.
Obviously in simple apps this isn't much of a problem; however, if you have multiple screens, all using a lot of data, you could end up in trouble unless you are certain to Dispose of your Context correctly.
Hence I have employed a similar method to the one you have mentioned, where I've added an AddOrUpdate method to each of my Repositories; I pass in my new or modified Entity, and it updates or adds it depending upon whether it already exists.
Updating Entity Properties:
Regarding updating properties however, I've used a simple function which uses reflection to copy all the properties from one Entity to Another;
Public Shared Function CopyProperties(Of sourceType As {Class, New}, targetType As {Class, New})(ByVal source As sourceType, ByVal target As targetType) As targetType
    Dim sourceProperties() As PropertyInfo = source.GetType().GetProperties()
    Dim targetProperties() As PropertyInfo = GetType(targetType).GetProperties()

    For Each sourceProp As PropertyInfo In sourceProperties
        For Each targetProp As PropertyInfo In targetProperties
            If sourceProp.Name <> targetProp.Name Then Continue For

            ' Only try to set the property when we can read the source and write the target
            '
            ' *** Note: We detect Entity types by checking whether the PropertyType is a
            '           Collection or a member of the Context Namespace!
            '
            If sourceProp.CanRead And targetProp.CanWrite Then
                ' Leave collections and other Entities alone - only copy scalar properties
                If sourceProp.PropertyType.FullName.StartsWith("System.Collections") Or _
                   sourceProp.PropertyType.FullName.StartsWith("MyContextNameSpace.") Then
                    '
                    ' Do Not Copy
                    '
                Else
                    Try
                        targetProp.SetValue(target, sourceProp.GetValue(source, Nothing), Nothing)
                    Catch ex As Exception
                    End Try
                End If
            End If

            Exit For
        Next
    Next

    Return target
End Function
Where I do something like;
dbColour = Classes.clsHelpers.CopyProperties(Of Colour, Colour)(RecordToSave, dbColour)
This reduces the amount of code I need to write for each Repository of course!
The context is not permanently connected to the database. It is essentially an in-memory cache of records you have loaded from disk. It will only request records from the database when you request a record it has not previously loaded, when you force it to refresh, or when you're saving your changes back to disk.
Opening a context, grabbing a record, closing the context, and then copying modified properties to an object from a brand new context is the epitome of redundant code. You are supposed to leave the original context alone and use that to do SaveChanges().
If you're looking to deal with concurrency issues you should do a Google search about "handling concurrency" for your version of Entity Framework.
As an example I have found this.
Edit in response to comment:
So from what I understand you need a subset of the columns of a record to be overridden with new values while the rest is unaffected? If so, yes, you'll need to manually update these few columns on a "new" object.
I was under the impression that you were talking about a form that reflects all the fields of the customer object and is meant to provide edit access to the entire customer record. In this case there's no point to using a new context and painstakingly copying all properties one by one, because the end result (all data overridden with form values regardless of age) will be the same.
Related
My goal is to keep the session size as small as possible. (Why? That's another topic.)
What I have is a PhaseListener declared in faces-config.xml:
<lifecycle>
    <phase-listener>mypackage.listener.PhaseListener</phase-listener>
</lifecycle>
I want to save all other views, except the last one (maximum two), in some memcache. Getting the session map:
Map<String, Object> sessionMap = event.getFacesContext().getExternalContext().getSessionMap();
in the beforePhase(PhaseEvent event) method gives me access to all views. So here I could save all views to the memcache and delete them from the session. The question is: where in JSF are these views, which are still loaded in the browser, requested again, so that I can put the right view back from the cache if it's needed? Is it possible at all? Thank you.
To address the core of your question, implement a ViewHandler, within which you can take control of the RESTORE_VIEW and RENDER_RESPONSE phases/processes. You'll save the view during the RENDER_RESPONSE and selectively restore, during the RESTORE_VIEW phase. Your view handler could look something like the following
public class CustomViewHandlerImpl extends ViewHandlerWrapper {

    @Inject
    ViewStore viewStore; // hypothetical storage for the views. Could be anything, like a ConcurrentHashMap

    ViewHandler wrapped;

    public CustomViewHandlerImpl(ViewHandler toWrap) {
        this.wrapped = toWrap;
    }

    @Override
    public ViewHandler getWrapped() {
        return wrapped;
    }

    @Override
    public UIViewRoot restoreView(FacesContext context, String viewId) {
        // this assumes you've previously saved the view, using the viewId
        UIViewRoot theView = viewStore.get(viewId);
        if (theView == null) {
            theView = getWrapped().restoreView(context, viewId);
        }
        return theView;
    }

    @Override
    public void renderView(FacesContext context, UIViewRoot viewToRender) throws IOException, FacesException {
        viewStore.put(viewToRender.getId(), viewToRender);
        getWrapped().renderView(context, viewToRender);
    }
}
Simply plug in your custom viewhandler, using
<view-handler>com.you.customs.CustomViewHandlerImpl</view-handler>
Of course, you probably don't want to give this treatment to all your views; you're free to add any conditions to the logic above, to implement conditional view-saving and restoration.
You should also consider other options. It appears that you're conflating issues here. If your true concern is limiting the overhead associated with view processing, you should consider:
1) Stateless Views, new with JSF 2.2. The stateless view option allows you to exclude specific pages from the JSF view-saving mechanism, simply by specifying transient="true" on the f:view. Much cleaner than mangling the UIViewRoot by hand. The caveat here is that a stateless view cannot be backed by scopes that depend on state saving, i.e. @ViewScoped. In a stateless view, the @ViewScoped bean is going to be recreated for every postback. Ajax functionality also suffers in this scenario, because state saving is the backbone of ajax operations.
2) Selectively mark components as transient. The transient property is available on all UIComponents, which means that, on a per-view basis, you can mark specific components with transient="true", effectively giving you the same benefits as 1) but on a much smaller scope, and without the downside of losing @ViewScoped.
EDIT: For some reason, UIViewRoot#getViewId() is not returning the name of the current view (this might be a bug). Alternatively, you can use
ExternalContext extCtxt = FacesContext.getCurrentInstance().getExternalContext();
String viewName = ((HttpServletRequest)extCtxt.getRequest()).getRequestURI(); //use this id as the key to store your views instead
I know this question has been asked and discussed a lot (e.g. Here and Here and this Article). Nevertheless, I still feel confused about it. I know that DbContexts should not live for the duration of the application lifetime, and I know they should be used per Form (window) or per Presenter. The problem is that I don't have Forms or Presenters. I have a single Form (Window) with many view models, some of which live for the duration of the application, and almost all of my view models depend on a DbContext (LOB application, WPF, MVVM, Sql Server CE).
My solution is to hide the DbContext behind a factory which is injected into all view models that need access to a DbContext; those view models create/dispose the DbContext when their corresponding view is loaded/unloaded.
I would like to know if this solution has any problems, or if there is a better solution you could advise.
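A rough sketch of what I mean (all of the names here are made up):

public interface IDbContextFactory
{
    MyDbContext Create();
}

public class CustomersViewModel : ViewModelBase
{
    private readonly IDbContextFactory _contextFactory;
    private MyDbContext _context;

    public CustomersViewModel(IDbContextFactory contextFactory)
    {
        _contextFactory = contextFactory;
    }

    // Called when the corresponding view is loaded.
    public void OnViewLoaded()
    {
        _context = _contextFactory.Create();
        // ... load data into the view model using _context ...
    }

    // Called when the corresponding view is unloaded.
    public void OnViewUnloaded()
    {
        if (_context != null)
        {
            _context.Dispose();
            _context = null;
        }
    }
}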
I tend to lay my projects out as follows;
1) Presentation Layer:
Contains my Views and ViewModels
2) Business Layer:
Contains my business logic
3) Data Layer:
Contains my models
My Presentation layer calls into the business layer to populate local copies (held in the ViewModel) of the data I wish to use in my ViewModel / View.
This is achieved with a Using statement, something like;
Using DBContext As Entities = ConnectToDatabase()
Dim clsApprovalTypes As New Repositories.clsApprovalTypesRepository(DBContext)
dbResults = clsApprovalTypes.GetRecords()
End Using
Return dbResults
Here I simply pass the Context into the Repository; once the data has been returned, the "End Using" will dispose of my Context.
To update the context with changes made in my ViewModel / View, I use an AddEdit routine, which accepts a record, and updates / Adds to the context as necessary, using a similar methodology to the above, something like;
Using DBContext As CriticalPathEntities = ConnectToDatabase()
    Dim clsApprovalTypes As New Repositories.clsApprovalTypesRepository(DBContext)
    clsApprovalTypes.AddEditRecord(ApprovalTypeToSave)
    Try
        clsApprovalTypes.SaveData()
    Catch ex As Exception
        Return ex
    End Try
End Using
Where my AddEdit routine is something like;
SavedRecord = New ApprovalType 'Store the Saved Record for use later

Dim query = From c In DBContext.ApprovalTypes
            Where c.ApprovalType_ID = RecordToSave.ApprovalType_ID
            Select c

If query.Count > 0 Then
    SavedRecord = query.FirstOrDefault
End If

'
' Use Reflection here to copy all matching Properties between the Source Entity
' and the Entity to be Saved...
'
SavedRecord = Classes.clsHelpers.CopyProperties(RecordToSave, SavedRecord)

If query.Count = 0 Then
    Try
        DBContext.ApprovalTypes.Add(SavedRecord)
    Catch ex As EntityException
        Return ex
    End Try
End If
I wrote a little more about some of this here;
https://stackoverflow.com/a/15014599/1305169
I know the reason for the exception (SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.) is a non-nullable DateTime field in an Entity, so NHibernate wants to save a smaller DateTime value than MSSQL accepts.
The problem is that there are far too many entities in the project to find the right DateTime field.
The exception occurs after a SaveOrUpdate(), but it is not triggered by the entity I want to save; rather, by some other entity which was loaded in the current session and is now affected by the flush().
How can I find out which field is really responsible for the exception?
If you cast the exception to a SqlTypeException, that will expose the Data collection. Normally there is a single Key and a single Value in the collection. The value is the SQL that was attempted to be executed. By examining the DML you can then see what table was being acted upon. Hopefully that table is narrow enough to make determining the offending column trivial.
Here's some simple code I use to spit out the Key and Value of the exception.
catch (SqlTypeException e)
{
    foreach (var key in e.Data.Keys)
    {
        System.Console.Write("Key is " + key.ToString());
    }

    foreach (var value in e.Data.Values)
    {
        Console.WriteLine("Value is " + value.ToString());
    }
}
Have you tried forcing NHib to output the generated SQL and reviewing that for the rogue DateTime? It'd be easier if you were using something like NHProfiler (I don't work for them, just a satisfied customer), but really all that's doing for you is showing/isolating the SQL anyway, which you can do from the output window with a little extra effort. The trick is that if it's a really deep save, there could potentially be a lot of SQL to read through, but chances are you'll be able to spot it pretty quickly.
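If you don't have a profiler handy, you can also make NHibernate itself log the generated SQL, for example (a sketch; cfg is your NHibernate Configuration, and the same show_sql / format_sql properties can be set in hibernate.cfg.xml instead):

// Write the generated SQL to the configured logger / console output.
cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.FormatSql, "true");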
You can create a class that implements both IPreUpdateEventListener and IPreInsertEventListener as follows:
public class InsertUpdateListener : IPreInsertEventListener, IPreUpdateEventListener {
    public bool OnPreInsert(PreInsertEvent @event) {
        CheckDateTimeWithinSqlRange(@event.Persister, @event.State);
        return false;
    }

    public bool OnPreUpdate(PreUpdateEvent @event) {
        CheckDateTimeWithinSqlRange(@event.Persister, @event.State);
        return false;
    }

    private static void CheckDateTimeWithinSqlRange(IEntityPersister persister, IReadOnlyList<object> state) {
        var rgnMin = System.Data.SqlTypes.SqlDateTime.MinValue.Value;
        // There is a small but relevant difference between DateTime.MaxValue and SqlDateTime.MaxValue.
        // DateTime.MaxValue is bigger than SqlDateTime.MaxValue but still within the valid range of
        // values for SQL Server. Therefore we test against DateTime.MaxValue and not against
        // SqlDateTime.MaxValue. [Manfred, 04jul2017]
        //var rgnMax = System.Data.SqlTypes.SqlDateTime.MaxValue.Value;
        var rgnMax = DateTime.MaxValue;
        for (var i = 0; i < state.Count; i++) {
            if (state[i] != null
                && state[i] is DateTime) {
                var value = (DateTime)state[i];
                if (value < rgnMin /*|| value > rgnMax*/) { // we don't check max as SQL Server is happy with DateTime.MaxValue [Manfred, 04jul2017]
                    throw new ArgumentOutOfRangeException(persister.PropertyNames[i], value,
                        $"Property '{persister.PropertyNames[i]}' for class '{persister.EntityName}' must be between {rgnMin:s} and {rgnMax:s} but was {value:s}");
                }
            }
        }
    }
}
You also need to then register this event handler when you configure the session factory. Add an instance to Configuration.EventListeners.PreUpdateEventListeners and to Configuration.EventListeners.PreInsertEventListeners and then use the Configuration object when creating NHibernate's session factory.
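For example, the registration could look roughly like this (a sketch; the configuration variable name is assumed, and Concat keeps any listeners that are already registered):

var listener = new InsertUpdateListener();

configuration.EventListeners.PreInsertEventListeners =
    configuration.EventListeners.PreInsertEventListeners
        .Concat(new IPreInsertEventListener[] { listener }).ToArray();
configuration.EventListeners.PreUpdateEventListeners =
    configuration.EventListeners.PreUpdateEventListeners
        .Concat(new IPreUpdateEventListener[] { listener }).ToArray();

var sessionFactory = configuration.BuildSessionFactory();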
What this does is this: Every time NHibernate inserts or updates an entity it will call OnPreInsert() or OnPreUpdate() respectively. Each of these methods in turn calls CheckDateTimeWithinSqlRange().
CheckDateTimeWithinSqlRange() iterates over all property values of the entity, i.e. the object, that is being saved. If the property value is not null, it then checks whether it is of type DateTime. If that is the case, it checks that it is not less than SqlDateTime.MinValue.Value (note the additional .Value to avoid exceptions). There is no need to check against SqlDateTime.MaxValue.Value if you are using SQL Server 2012 or later. They will happily accept even DateTime.MaxValue, which is a few time ticks greater than SqlDateTime.MaxValue.Value.
If the value is outside of the allowed range this code will then throw an ArgumentOutOfRangeException with an appropriate message that includes the names of the class (entity) and property causing the problem as well as the actual value that was passed in. The message is similar to the equivalent SqlServerException for the SqlDateTime overflow exception but will make it easier to pinpoint the problem.
A couple of things to consider. Obviously this does not come for free. You will incur a runtime overhead as this logic consumes CPU. Depending on your scenario this may not be a problem. If it is, you can also consider optimizing the code given in this example to make it faster. One option could perhaps be to use caching to avoid the loop for the same class. Another option could be to use it only in test and development environments. For production you could then rely that the rest of the system operates correctly and the values will always be within valid range.
Also, be aware that this code introduces a dependency on SQL Server. NHibernate is typically used to avoid dependencies like this. Other database servers that are supported by NHibernate may have a different range of allowed values for datetime. Again, there are options for resolving this as well, e.g. by using different boundaries depending on SQL dialect.
Happy coding!
RIA Services is returning a list of Entities that won't allow me to add new items. Here are what I believe to be the pertinent details:
I'm using the released versions of Silverlight 4 and RIA Services 1.0 from mid-April of 2010.
I have a DomainService with a query method that returns List<ParentObject>.
ParentObject includes a property called "Children" that is defined as List<ChildObject>.
In the DomainService I have defined CRUD methods for ParentObject with appropriate attributes for the Query, Delete, Insert, and Update functions.
The ParentObject class has an Id property marked with the [Key] attribute. It also has the "Children" property marked with the attributes [Include], [Composition], and [Association("Parent_Child", "Id", "ParentId")].
The ChildObject class has an Id marked with the [Key] attribute as well as a foreign key, "ParentId", that contains the Id of the parent.
On the client side, data is successfully returned and I assign the results of the query to a PagedCollectionView like this:
_pagedCollectionView = new PagedCollectionView(loadOperation.Entities);
When I try to add a new ParentObject to the PagedCollectionView like this:
ParentObject newParentObject = (ParentObject)_pagedCollectionView.AddNew();
I get the following error:
" 'Add New' is not allowed for this view."
On further investigation, I found that _pagedCollectionView.CanAddNew is "false" and cannot be changed because the property is read-only.
I need to be able to add and edit ParentObjects (with their related children, of course) to the PagedCollectionView. What do I need to do?
I was just playing around with a solution yesterday and feel pretty good about how it works. The reason you can't add is that the source collection (op.Entities) is read-only. However, even if you could add to the collection, you'd still want to be adding to the EntitySet as well. I created an intermediate collection that takes care of both these things for me.
public class EntityList<T> : ObservableCollection<T> where T : Entity
{
    private EntitySet<T> _entitySet;

    public EntityList(IEnumerable<T> source, EntitySet<T> entitySet)
        : base(source)
    {
        if (entitySet == null)
        {
            throw new ArgumentNullException("entitySet");
        }
        this._entitySet = entitySet;
    }

    protected override void InsertItem(int index, T item)
    {
        base.InsertItem(index, item);
        if (!this._entitySet.Contains(item))
        {
            this._entitySet.Add(item);
        }
    }

    protected override void RemoveItem(int index)
    {
        T item = this[index];
        base.RemoveItem(index);
        if (this._entitySet.Contains(item))
        {
            this._entitySet.Remove(item);
        }
    }
}
Then, I use it in code like this.
dataGrid.ItemsSource = new EntityList<Entity1>(op.Entities, context.Entity1s);
The only caveat is this collection does not actively update off the EntitySet. If you were binding to op.Entities, though, I assume that's what you'd expect.
[Edit]
A second caveat is that this type is designed for binding. For full use of the available list operations (Clear, etc.), you'd need to override a few of the other methods to write through as well.
I'm planning to put together a post that explains this a little more in-depth, but for now, I hope this is enough.
Kyle
Here's a workaround which I am using:
Instead of using AddNew, on your DomainContext you can retrieve an EntitySet<T> by saying Context.EntityNamePlural (i.e. Context.Users is an EntitySet<User>).
You can add a new entity to that EntitySet by calling Add(), and then Context.SubmitChanges() to send it to the DB. To reflect the changes on the client you will need to reload (Context.Load()).
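Roughly like this (a sketch; the entity type and the generated GetUsersQuery() method are just placeholders for your own):

var newUser = new User { Name = "Alice" };
context.Users.Add(newUser);                  // add to the EntitySet<User>
context.SubmitChanges(op =>
{
    if (!op.HasError)
    {
        // Reload so the client picks up server-generated values (IDs etc.).
        context.Load(context.GetUsersQuery());
    }
}, null);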
I just made this work about 15mins ago after having no luck with the PCV so I am sure it could be made to work better, but hopefully this will get you moving forward.
For my particular situation, I believe the best fit is this (Your Mileage May Vary):
Use a PagedCollectionView (PCV) as a wrapper around the context.EntityNamePlural (in my case, context.ParentObjects) which is an EntitySet. (Using loadOperation.Entities doesn't work for me because it is always read-only.)
_pagedCollectionView = new PagedCollectionView(context.ParentObjects);
Then bind to the PCV, but perform add/delete directly against the context.EntityNamePlural EntitySet. The PCV automatically syncs to the changes done to the underlying EntitySet so this approach means I don't need to worry about sync issues.
context.ParentObjects.Add();
(The reason for performing add/delete directly against the EntitySet instead of using the PCV is that PCV's implementation of IEditableCollectionView is incompatible with EntitySet causing IEditableCollectionView.CanAddNew to be "false" even though the underlying EntitySet supports this function.)
I think Kyle McClellan's approach (see his answer) may be preferred by some because it encapsulates the changes to the EntitySet, but I found that for my purposes it was unnecessary to add the ObservableCollection wrapper around loadOperation.Entities.
Many thanks to Dallas Kinzel for his tips along the way!
I have a User table and a ClubMember table in my database. There is a one-to-one mapping between users and club members, so every time I insert a ClubMember, I need to insert a User first. This is implemented with a foreign key on ClubMember (UserId REFERENCES User (Id)).
Over in my ASP.NET MVC app, I'm using LinqToSql and the Repository Pattern to handle my persistence logic. The way I currently have this implemented, my User and ClubMember transactions are handled by separate repository classes, each of which uses its own DataContext instance.
This works fine if there are no database errors, but I'm concerned that I'll be left with orphaned User records if any ClubMember insertions fail.
To solve this, I'm considering switching to a single DataContext, which I could load up with both inserts then call DataContext.SubmitChanges() only once. The problem with this, however, is that the Id for User is not assigned until the User is inserted into the database, and I can't insert a ClubMember until I know the UserId.
Questions:
Is it possible to insert the User into the database, obtain the Id, then insert the ClubMember, all as a single transaction (which can be rolled back if anything goes wrong with any part of the transaction)? If yes, how?
If not, is my only recourse to manually delete any orphaned User records that get created? Or is there a better way?
You can use System.Transactions.TransactionScope to perform this all in an atomic transaction, but if you are using different DataContext instances, it will result in a distributed transaction, which is probably not what you really want.
By the sounds of it, you're not really implementing the repository pattern correctly. A repository should not create its own DataContext (or connection object, or anything else) - these dependencies should be passed in via a constructor or public property. If you do this, you'll have no problem sharing the DataContext:
public class UserRepository
{
    private MyDataContext context;

    public UserRepository(MyDataContext context)
    {
        if (context == null)
            throw new ArgumentNullException("context");
        this.context = context;
    }

    public void Save(User user) { ... }
}
Use the same pattern for ClubMemberRepository (or whatever you call it), and this becomes trivial:
using (MyDataContext context = new MyDataContext())
{
    UserRepository userRep = new UserRepository(context);
    userRep.Save(user);

    ClubMemberRepository memberRep = new ClubMemberRepository(context);
    memberRep.Save(member);

    context.SubmitChanges();
}
Of course, even this is a little bit iffy. If you have a foreign key in your database, then you shouldn't even need two repositories, because Linq to SQL manages the relationship. The code to create should simply look like this:
using (MyDataContext context = new MyDataContext())
{
    User user = new User();
    user.Name = "Bob";
    user.ClubMember = new ClubMember();
    user.ClubMember.Club = "Studio 54";

    UserRepository userRep = new UserRepository(context);
    userRep.Save(user);

    context.SubmitChanges();
}
Don't fiddle with multiple repositories - let Linq to SQL handle the relationship for you, that's what ORMs are for.
Yes, you can do it in a single transaction. Use the TransactionScope object to begin and commit the transaction (and roll back if there is an error, of course).
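For example, a sketch using the repository names from the previous answer (and assuming both repositories share one DataContext, as shown there):

using (var scope = new TransactionScope())          // System.Transactions
using (var context = new MyDataContext())
{
    var userRep = new UserRepository(context);
    userRep.Save(user);
    context.SubmitChanges();                        // user.Id is populated here

    member.UserId = user.Id;
    var memberRep = new ClubMemberRepository(context);
    memberRep.Save(member);
    context.SubmitChanges();

    scope.Complete();   // commit; disposing without Complete() rolls everything back
}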