The buzzword in LINQ these days is "unit of work", as in "only keep your DataContext in existence for one unit of work, then destroy it."
Well, I have a few questions about that.
I am creating a fat-client WPF application, so my DataContext needs to track the entire web of instantiated objects available to the user on the current screen. When can I destroy my DataContext?
I build a LINQ query over time, based on the user's actions and their interactions with objects of the first DataContext. How can I create a new DataContext and execute the query on the new context?
I hope I was clear.
Thank you
Unit of Work is not the same as "only keep your DataContext in existence for one unit of work".
Unit of Work is a design pattern that describes how to represent transactions in an abstract way. It's really only required for Create, Update, and Delete (CUD) operations.
One philosophy is that UoW is used for all CUD operations, while read-only Repositories are used for read operations.
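As a rough illustration of the idea (the interface and method names here are hypothetical, not tied to any particular ORM), the pattern boils down to collecting changes and committing them as one transaction:
// A minimal sketch of the Unit of Work pattern; names are illustrative.
public interface IUnitOfWork : IDisposable
{
    void RegisterNew(object entity);      // Create
    void RegisterDirty(object entity);    // Update
    void RegisterDeleted(object entity);  // Delete
    void Commit();                        // apply everything registered as a single transaction
}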
In any case I would recommend decoupling object lifetime from UoW or Repository usage. Use Dependency Injection (DI) to inject both into your consuming services, and let the DI Container manage lifetimes of both.
In web applications, my experience is that the object context should only be kept alive for a single request (per-request lifetime). On the other hand, for a rich client like the one you describe, keeping it alive for a long time may be more efficient (singleton lifetime).
By letting a DI Container manage the lifetime of the object context, you don't tie yourself to one specific choice.
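For example, here is a minimal sketch assuming Ninject as the container; MyDataContext and the repository types are hypothetical:
var kernel = new StandardKernel();

// Rich client: a single long-lived context may be the pragmatic choice.
kernel.Bind<MyDataContext>().ToSelf().InSingletonScope();

// Web alternative (requires Ninject.Web.Common): one context per HTTP request.
// kernel.Bind<MyDataContext>().ToSelf().InRequestScope();

kernel.Bind<ICustomerRepository>().To<CustomerRepository>();

// The consuming service just declares a constructor dependency;
// the container decides how long the context lives.
var repository = kernel.Get<ICustomerRepository>();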
I am creating a fat client WPF application.
Ok.
So my data context needs to track the entire web of instantiated objects available to the user on the current screen.
No. Those classes are database mapping classes. They are not UI presentation classes.
How can I create a new DataContext and execute the query on the new context?
// Define the query as a function of the DataContext, so it can be
// executed later against any context instance.
Func<DataContext, IQueryable<Customer>> queryWithoutADataContext =
    dc =>
        from cust in dc.Customers
        where cust.Name == "Bob"
        select cust;

// Compose additional filtering on top of the first query.
Func<DataContext, IQueryable<Customer>> moreFiltered =
    dc =>
        from cust in queryWithoutADataContext(dc)
        where cust.IsTall
        select cust;

// Execute against a fresh context (in practice, your generated DataContext subclass).
var bobs = queryWithoutADataContext(new DataContext());
var tallbobs = moreFiltered(new DataContext());
I have a WPF application with MVVM. Assuming object composition from the ViewModel down looks as follows:
MainViewModel
    OrderManager
        OrderRepository
            EFContext
        AnotherRepository
            EFContext
    UserManager
        UserRepository
            EFContext
My original approach was to inject dependencies (from the ViewModelLocator) into my view model using .InCallScope() on the EFContext and .InTransientScope() for everything else. This meant I could perform a "business transaction" across multiple business-layer objects (managers) that ultimately shared the same Entity Framework context underneath. I would simply Commit() that context at the end, in a Unit of Work style.
This worked as intended until I realized that I don't want long-lived Entity Framework contexts at the view-model level, because of the data integrity issues across multiple operations described HERE. I want to do something similar to my web projects, where I use .InRequestScope() for my Entity Framework context. In my desktop application I will define a unit of work that serves as a business transaction, if you will; typically it will wrap everything within a button click or similar event/command. It seems that Ninject's ActivationBlock can do this for me.
internal static class Global
{
    public static ActivationBlock GetNinjectUoW()
    {
        // Assume that NinjectSingleton is a static reference to the kernel
        // configured with the necessary modules/bindings.
        return new ActivationBlock(NinjectSingleton.Instance.Kernel);
    }
}
In my code I intend to use it as such:
// Inside a method that is raised by a WPF button command...
using (ActivationBlock uow = Global.GetNinjectUoW())
{
    OrderManager orderManager = uow.Get<OrderManager>();
    UserManager userManager = uow.Get<UserManager>();

    Order order = orderManager.GetById(1);
    userManager.AddOrder(order);
    ....
    userManager.SaveChanges();
}
Questions:
To me this seems to replicate the way I do business on the web; is there anything inherently wrong with this approach that I've missed?
Am I understanding correctly that all .Get<> calls using the activation block will produce "singletons" local to that block? What I mean is no matter how many times I ask for an OrderManager, it'll always give me the same one within the block. If OrderManager and UserManager compose the same repository underneath (say SpecialRepository), both will point to the same instance of the repository, and obviously all repositories underneath share the same instance of the Entity Framework context.
Both questions can be answered with yes:
Yes - this is service location, which you shouldn't do
Yes you understand it correctly
A proper unit-of-work scope, implemented in Ninject.Extensions.UnitOfWork, solves this problem.
Setup:
_kernel.Bind<IService>().To<Service>().InUnitOfWorkScope();
Usage:
using (UnitOfWorkScope.Create())
{
    // resolves, async/await, manual TPL ops, etc.
}
I have a Silverlight RIA app where I share the models and data access between the MVC web app and the Silverlight app using compiler directives. On the server, to see which context I am running under, I check whether the ChangeSet object is non-null (meaning I am running under RIA rather than MVC). Everything works all right, but I had problems with the default code generated for the domain service methods.
Let's say I have a Person entity who belongs to certain Groups (Group entities). The Person object has a collection of Groups that I add to or remove from. After making the changes, the SL app calls the server to persist them. What I noticed happening is that the Group entity records would be inserted first. That's fine, since I'm modifying an existing Person. However, since each Group entity also has a reference to the existing Person, calling AddObject would mark the whole graph - including the Person I'm trying to modify - as Added. Then, when the update statement is called, the default generated code would try to Attach the Person, which now has a state of Added, to the context, with not-so-hilarious results.
When I make the original call for an entity or set of entities in a query, all of the EntityKeys for the entities are filled in. Once on the client, the EntityKey is filled in for each object. When the entity returns from the client to be updated on the server, the EntityKey is null. I created a new RIA Services project and verified that this is the case. I'm running RIA Services SP1 and I am not using composition. I sort of understand the EntityKey problem - the change tracking is done on two separate contexts, and EF doesn't know about the change tracking done on the SL side. However, it IS passing back the object graph, including related entities, so using AddObject is a problem unless I first check the database for the existence of an object with the same key.
I have code that works. I don't know how WELL it works, but I'm doing some further testing today to see what's going on. Here it is:
/// <summary>
/// Updates an existing object.
/// </summary>
/// <typeparam name="TBusinessObject"></typeparam>
/// <param name="obj"></param>
protected void Update<TBusinessObject>(TBusinessObject obj) where TBusinessObject : EntityObject
{
    if (this.ChangeSet != null)
    {
        ObjectStateManager objectStateManager = ObjectContext.ObjectStateManager;
        ObjectSet<TBusinessObject> entitySet = GetEntitySet<TBusinessObject>();
        string setName = entitySet.EntitySet.Name;
        EntityKey key = ObjectContext.CreateEntityKey(setName, obj);
        object dbEntity;
        if (ObjectContext.TryGetObjectByKey(key, out dbEntity) && obj.EntityState == System.Data.EntityState.Detached)
        {
            // An object with the same key exists in the DB, and the entity passed
            // is marked as detached.
            // Solution: mark the object as modified; any child objects need to
            // be marked as Unchanged as long as there is no DomainOperation.
            ObjectContext.ApplyCurrentValues(setName, obj);
        }
        else if (dbEntity != null)
        {
            // In this case, TryGetObjectByKey said it failed, but the resulting object is
            // filled in, leading me to believe that it did in fact work.
            entitySet.Detach(obj); // Detach the entity
            try
            {
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes to the entity in the DB
            }
            catch (Exception)
            {
                entitySet.Attach(obj); // Re-attach the entity
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes to the entity in the DB
            }
        }
        else
        {
            // Add it..? Update must have been called mistakenly.
            entitySet.AddObject(obj);
        }
    }
    else
    {
        DirectInsertUpdate<TBusinessObject>(obj);
    }
}
Quick walkthrough: if the ChangeSet is null, I'm not under the RIA context, so I can call a different method to handle the insert/update and save immediately. That works fine as far as I can tell. For RIA, I generate a key and see if it exists in the database. If it does and the object I am working with is detached, I apply those values; otherwise, I force a detach and apply the values, which works around the Added state left by any previous Insert calls.
Is there a better way of doing this? I feel like I'm doing way too much work here.
In this kind of case, where you're adding Group entities to Person.Groups, I would think of just saving the Person and expecting RIA to handle the Groups for me.
But let's take a step back: how are you trying to persist your changes? You shouldn't be saving/updating entities one by one. All you have to do is call DomainContext.SubmitChanges and all your changes should be persisted.
I work with pretty complicated projects and I seldom ever have to touch add/update code.
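For illustration, here is a minimal sketch of that flow; the generated context, query, and entity member names are hypothetical:
// Hypothetical generated DomainContext; entity and query names are illustrative.
var context = new PersonDomainContext();
context.Load(context.GetPersonByIdQuery(personId), loadOp =>
{
    var person = loadOp.Entities.First();
    person.Groups.Add(newGroup);   // modify the object graph on the client only

    // A single call persists the whole change set; RIA works out inserts vs. updates.
    context.SubmitChanges(submitOp =>
    {
        if (submitOp.HasError)
            submitOp.MarkErrorAsHandled();
    }, null);
}, null);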
This question has been around with no solid answer, so I'll tell you what I did... which is nothing. That's how I handled it in RIA services, using the code above, since I was sharing the RIA client model and the server model.
After working with RIA services for a year and a half, I'm in the camp that believes that RIA services is good for working with smaller, less complex apps. If you can use [Composite] for your entities, which I couldn't for many of my entities, then you're fine.
RIA Services makes it really quick to throw together small applications where you want to use the entities from EF directly. But if you want to use POCOs, or you foresee your application getting complex in the future, I would stick with building POCOs on the service end, passing those through regular WCF, and sharing behavior by making your POCOs partial classes and sharing the behavior code with the client. When I tried to create models that work the same on the client and the server, I had to write a ridiculous amount of plumbing code to make it work.
It definitely IS possible to do; I've done it. But there are a lot of hoops you must jump through for everything to work well. I never fully took into consideration things like my shared model pre-loading lists for use on the client, which the server didn't need pre-loaded every time; that slowed down loading of the web page unnecessarily, and I countered by writing hacky method calls which I then had to adopt on the client. The technique I chose to use definitely had its issues.
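To sketch what I mean by sharing behavior through partial classes (the file layout and type names here are purely illustrative): the DTO half lives with the WCF service, and the behavior half sits in a file linked into both the server and Silverlight projects.
// Order.cs - plain POCO serialized over WCF (server project)
public partial class Order
{
    public int Id { get; set; }
    public decimal Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}

// Order.Behavior.cs - linked as an existing file into both the server and client projects
public partial class Order
{
    // Shared, non-serialized behavior available on both sides of the wire.
    public decimal Total
    {
        get { return Quantity * UnitPrice; }
    }
}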
I am working on a Prism desktop application and would like to know the best way to deal with lookup/reference data lists when using a WCF backend. I think this question may cover a few areas, and I would appreciate some guidance.
For example, consider a lookup that contains Products(codes and descriptions) which would be used in a lot of different input screens in the system.
Does the viewmodel call the WCF service directly to obtain the data to fill the control?
Would you create a control that solely deals with Products, with its own view model etc., and then use that in every place that needs a product lookup? Or would you re-implement, say, a combobox that repopulates the Products ItemsSource in every single form view model that uses it?
Would I create a brand new WCF service called something like LookupData service and use that to populate my lookup lists? I am concerned I will end up with lots of lookups if I do this.
What other approaches are there for going about this?
I suggest creating your lookup object/component as a proxy for the WCF service. It can work in several ways, but the simplest that comes to my mind would be:
Implement a WCF service with methods that provide all Product entities, or a requested one (e.g., based on product code).
Implement a component that uses the WCF client to get products; let's call it ProductsProvider.
Your view models take a dependency on ProductsProvider (e.g., via constructor injection).
The key element in this model is ProductsProvider - it acts as a kind of cache for Product objects. First, it asks the web service for all products (or some part of them, up to your liking) to start with. Then, whenever you need to look up a product, you ask the provider; it's the provider's responsibility to decide how the product should be looked up - maybe it's already in the local list? Maybe it needs to call the web service for an update? Example:
using System.Collections.Generic;
using System.Linq;

public class ProductsProvider
{
    private IList<Product> products;
    private IProductsService serviceClient;

    public ProductsProvider(IProductsService serviceClient)
    {
        this.serviceClient = serviceClient;
        this.products = serviceClient.GetAllProducts();
    }

    public Product LookUpProduct(string code)
    {
        // 1: check whether our local list contains a product with the given code
        //    (assumes Product exposes a Code property)
        var product = this.products.FirstOrDefault(p => p.Code == code);

        // 2: if it does not, call this.serviceClient.LookUpProduct
        if (product == null)
        {
            product = this.serviceClient.LookUpProduct(code);
            if (product != null)
                this.products.Add(product);   // cache it for next time
        }

        // 3: if the service also doesn't know such a product:
        //    throw, return null, or report an error - here we simply return null
        return product;
    }
}
Now, what this gives you is:
you only need to have one ProductsProvider instance
better flexibility with when and how your service is called
your view models won't have to deal with WCF at all
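For example, a consuming view model might look roughly like this (the view model and its members are hypothetical):
public class OrderEntryViewModel
{
    private readonly ProductsProvider productsProvider;

    // The provider is injected; the view model never touches WCF directly.
    public OrderEntryViewModel(ProductsProvider productsProvider)
    {
        this.productsProvider = productsProvider;
    }

    public void OnProductCodeEntered(string code)
    {
        var product = this.productsProvider.LookUpProduct(code);
        // bind product (description, price, ...) to the view here
    }
}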
Edit:
As for your second question: a control may not be needed, but having a view model for the Product entity is definitely a good idea.
I have created a class that inherits from DomainService, and I have a Silverlight app that uses System.ServiceModel.DomainServices.Client to get a DomainContext. I have also created POCO DataContracts that are used in the DomainService's Query, Update, Insert, and Delete methods, and a ViewModel that executes all the LoadOperations. Now I'm at the part of my app where I want to add new entities to the generated EntitySets, but I'm unsure about what happens when one user creates a new entity and sets the key value while another user creates a similar entity with the same key value.
I have seen in the documentation that an ObjectContext is used, but in my situation I was not able to use the Entity Framework model generator, so I had to create my DataContracts by hand.
So I guess my question is: is there any way I can force other Silverlight apps to update on a database change?
When you make a Save operation to your DomainContext, depending on the load behavior, it will automatically refresh.
TicketContext.Load(TicketContext.GetTicketByIdQuery(ticketId),
    LoadBehavior.RefreshCurrent,
    x =>
    {
        Ticket = x.Entities.First();
        Ticket.Load();
        ((IChangeTracking) TicketContext.EntityContainer).AcceptChanges();
    }, null);
Here I've set the LoadBehavior to RefreshCurrent. When you make a save, RIA will send the entity back across the wire to the client and merge the changes with the entity already cached in your client-side context. I don't know if that quite answers your question or not, however.
I will start a project that needs a web and desktop interface. One solution seems to be IdeaBlade (http://www.ideablade.com).
Can anyone who uses it describe its limitations and advantages? Is it testable?
Thanks,
Alex
As VP of Technology at IdeaBlade, it is not for me to comment generally on DevForce's limitations and advantages in this space. Happy to respond to specific questions, though.
Is it testable? To this I can respond with the beginnings of an answer.
It's a potentially contentious question. People have strong feelings about what makes something testable. Let me confine myself to specific testing scenarios, and then you can judge the degree to which we meet your testing requirements.
1) DevForce supports pure POCO entities if that's your preference. Most people will prefer to use the entities that derive from our base Entity class so I will confine my subsequent remarks entirely to such entities.
2) You can new-up such an entity using any ctor you please and get and set its (non-navigation) properties with no other setup.
var cust = new Customer {ID=..., Name =...}; // have fun
Assembly references are required of course.
3) To test its navigation properties (properties that return other entities), you first new an EntityManager (our Unit-of-Work, context-like container), add or attach the entities to the EM, and off you go. Navigation properties of the Entities inheriting from our base class expect to find related entities through that container.
4) In most automated tests, the EntityManager will be created in a disconnected state so that it never attempts to reach a server or database.
You might add to it an Order, a Customer, some OrderDetails; note that all of them are constructed within the context of your tests ... not retrieved from anywhere.
Now order.Customer returns the test Customer; order.OrderDetails returns your test details. Your preparation consists of creating the EM, the test entities, ensuring that these entities have unique IDs and are associated.
Here's an example sequence:
var mgr = new EntityManager(false); // create disconnected
var order = new Order {ID = ..., Quantity = 1, ...};
var customer = new Customer {ID = 42, Name = "ABC", };
mgr.AttachEntity(order);
mgr.AttachEntity(customer);
order.Customer = customer; // associate them
The EM is acting as an in-memory database.
5) You can use LINQ now
var custs = mgr.Customers.Where(c => c.Name.StartsWith("A")).ToList();
var orders = mgr.Orders.Where(o => o.Customer.Name.StartsWith("A")).ToList();
6) Of course I always create a new EntityManager before each test to eliminate cross-test pollution.
7) I often write a so-called "Data Mother" test helper class to populate an EM with a standard collection of test data, including deviant cases.
8) I can export an EntityManager's cache of test entities to file or a test project resource. When tests run, a DataMother can retrieve and restore these test entities.
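A rough sketch of such a helper, using only the attach approach shown above (the entity types and values are illustrative):
// Builds a disconnected EntityManager pre-populated with a standard,
// fully associated set of test entities.
public static class DataMother
{
    public static EntityManager CreateStandardManager()
    {
        var mgr = new EntityManager(false);   // disconnected; never touches a server

        var customer = new Customer { ID = 42, Name = "ABC" };
        var order = new Order { ID = 1, Quantity = 1 };

        mgr.AttachEntity(customer);
        mgr.AttachEntity(order);
        order.Customer = customer;            // associate the test graph

        return mgr;
    }
}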
Observe that I am moving progressively away from unit testing and toward integration testing. But (so far) my tests do not require access to a server, or Entity Framework, or the database. They run fast and they are less vulnerable to distracting setup failures.
Of course you can get to the server in deep integration tests and you can easily switch servers and databases dynamically for local, LAN, and web scenarios.
9) You can intercept query, save, change, add, remove, and other events for interaction testing.
10) Everything I've described works in both regular .NET and Silverlight and with every test framework I've encountered.
On the downside, I wouldn't describe our product as mock-friendly.
I readily concede that we are not Persistence Ignorant (PI). If you're a PI fanatic, we're the wrong choice for you.
We try to appreciate the important benefits of PI and do our best to realize them in our product. We do what we can to shunt framework concerns out of view. Still, as you see, our abstraction leaks in a few places. For example, we add these members to the public API of your entities:
EntityAspect (the gateway to persistence awareness)
ErrorsChanged
PendingEntityResolved
PropertyChanged
ToQuery<>
Personally, I would have cut this to two (EntityAspect, PropertyChanged); the others snuck by me. For what it's worth, inheriting from Object (as you must) contributes another extraneous five.
We feel that we've made good trade-offs between pure P.I. and ease-of-development.
My question is "does it give you what you need to validate expectations without a lot of friction ... along the entire spectrum from unit to deep integration testing?"
I'm certainly curious to learn how you obtain similar facility with less friction from similar products, and I'm eager to take suggestions on how we can improve our support for application testing.
Feel free to follow-up with questions about other testing scenarios that I may have overlooked.
Hope this helps
Ward