Global Entity Framework Context in WPF Application

I am in the middle of developing a WPF application that uses Entity Framework (.NET 3.5). It accesses the entities in several places throughout. I am worried about consistency throughout the application with regard to the entities. Should I be instantiating separate contexts in my different views, or should I (and is there a good way to do this) instantiate a single context that can be accessed globally?
For instance, my entity model has three sections: Shipments (with child packages and further child contents), Companies/Contacts (with child addresses and telephones), and DiskSpecs. The Shipments and EditShipment views access the DiskSpecs, and the OptionsView manages the DiskSpecs (create, edit, delete). If I edit a DiskSpec, I have to have something in the ShipmentsView to retrieve the latest specs if I have separate contexts, right?
If it is safe to have one overall context from which the rest of the app retrieves its objects, then I imagine that is the way to go. If so, where would that instance be put? I am using VB.NET, but I can translate from C# pretty well. Any help would be appreciated.
I just don't want one of those applications where the user has to hit reload a dozen times in different parts of the app to get the new data.
Update:
OK so I have changed my app as follows:
All contexts are created in Using Blocks to dispose of them after they are no longer needed.
When loaded, all entities are detached from the context before it is disposed.
A new property in the MainViewModel (ContextUpdated) raises an event that all of the other ViewModels subscribe to, which runs that ViewModel's RefreshEntities method.
After implementing this, I started getting errors saying that an entity can only be referenced by one ChangeTracker at a time. Since I could not figure out which context was still referencing the entity (there shouldn't be any context, right?), I cast the object as IEntityWithChangeTracker and called SetChangeTracker with Nothing (null).
This has led to the current problem:
When I null the change tracker on the entity and then attach it to a context, it loses its changed state and does not get updated to the database. However, if I do not null the change tracker, I can't attach. I have my own change-tracking code, so that is not a problem.
My new question is: how are you supposed to do this? A good example of entity query and entity save code would go a long way, because I am beating my head in trying to get what I once thought was a simple transaction to work.

A global static context is rarely the right answer. Consider what happens if the database is reset during the execution of this application - your SQL connection is gone and all subsequent requests using the static context will fail.
Recommend you find a way to give your entity context a much shorter lifetime: open it, do some work, dispose of it, and repeat.
As far as putting your different objects in the same EDMX: that's almost certainly the right answer. If they have any relationships between objects, you'll want them in the same EDMX.
As for reloading - the user should never have to do this. Behind the scenes you can open a new context, reload the current version of the object from the database, apply the changes they made in the UI, and then save it back.
You might also want to look at detached entities, and beware of optimistic concurrency exceptions when you try to save changes and someone else has changed the same object in the database.
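If it helps, here is a minimal sketch of the query/save round trip with short-lived contexts on EF v1 (.NET 3.5). The ShipmentEntities context, the DiskSpecs entity set, and the Id key are assumptions based on the question, not your actual names:
// Query with a throwaway context and return a detached entity.
public DiskSpec LoadSpec(int specId)
{
    using (var ctx = new ShipmentEntities())
    {
        DiskSpec spec = ctx.DiskSpecs.First(s => s.Id == specId);
        ctx.Detach(spec);   // detach before the context is disposed
        return spec;
    }
}
// Reattach to a fresh context and force an UPDATE.
public void SaveSpec(DiskSpec spec)
{
    using (var ctx = new ShipmentEntities())
    {
        ctx.AttachTo("DiskSpecs", spec);   // attaches in the Unchanged state
        ctx.ObjectStateManager.GetObjectStateEntry(spec).SetModified();
        ctx.SaveChanges();   // writes the whole row; no manual change-tracker surgery needed
    }
}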

Good question, Cory. Thumbs up from me.
Entity Framework gives you a free choice - you can either instantiate multiple contexts or have just one, static context. It will work well in both cases, and yes, both solutions are safe. The only valuable advice I can give you is: experiment with both, measure performance, delays, etc., and choose the best one for you. It's fun, believe me :)
If this is going to be a really massive application with tons of concurrent connections, I would advise using one static context, or one static core context with just a few additional ones to support the main one. But, as I wrote just a few lines above - it's up to your requirements which solution is better for you.
I especially liked this part of your question:
I just don't want one of those applications where the user has to hit reload a dozen times in different parts of the app to get the new data.
WPF is a really, really powerful tool, and basically the times when users had to press buttons to refresh data are gone forever. WPF gives you a really wide range of asynchronous, multithreading tools, such as the Dispatcher class or BackgroundWorker, to gently refresh desired data in the background. This is really great, because not only do you not have to worry about pressing various buttons, but background threads also don't block the UI, so data is refreshed transparently from the user's point of view.
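For example, a minimal BackgroundWorker sketch; LoadShipments and the Shipments property are placeholders for your own data access and binding target:
var worker = new BackgroundWorker();
worker.DoWork += (s, e) => e.Result = LoadShipments();   // runs on a thread-pool thread
worker.RunWorkerCompleted += (s, e) =>
{
    if (e.Error == null)
        Shipments = (List<Shipment>)e.Result;   // raised back on the UI thread
};
worker.RunWorkerAsync();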
WPF together with Entity Framework is really worth the effort of learning - please feel free to ask if you have any further concerns.

Related

Long living Entity Framework context - best practice to avoid data integrity issues

As a very rudimentary scenario, let's take these two operations:
// In UserManager:
public void UpdateFirstName(int userId, string firstName)
{
    User user = userRepository.GetById(userId);
    user.FirstName = firstName;
    userRepository.SaveChanges();
}
// In InventoryManager:
public void InsertOrder(Order newOrder)
{
    orderRepository.Add(newOrder);
    orderRepository.SaveChanges();
}
I have only used EF in web projects and relied heavily on the stateless nature of the web. With every request I would get a fresh copy of the context injected into my business layer facade objects (services, managers, whatever you want to call them), all of the business managers sharing the same instance of the EF context. Currently I am working on a WPF project and I'm injecting the business managers and subsequently the repositories that they use directly into the View Model.
Let's assume a user is on a complicated screen and their first button click calls the UpdateFirstName() method, and let's assume SaveChanges() fails for whatever reason. Their second button click will invoke the InsertOrder() method.
On the web this is not an issue, as the context for operation #2 is not related (new HTTP request) to the context used by operation #1. On the desktop, however, it's the same context across both actions. The issue arises from the fact that the user's first name has been modified and as such is tracked by the context. Even though the original SaveChanges() didn't take (say the db was not available at that time), the second operation calling SaveChanges() will not only insert the new order, it will also update the user's first name. In just about every case this is not desirable, as the user has long forgotten about their first action, which failed anyway.
This is obviously a silly example, but I always tend to bump into these sorts of scenarios where I wish I could just start fresh with a new context for every user action, as opposed to having a longer-lived (for the life of the WPF window, for example) context.
How do you guys handle these situations?
A very real question when coming from ad hoc desktop apps that hit the database directly.
The answer that seems to fit the philosophy of ORMs is to have a context with the same lifespan as your screen.
The data context is, in essence, a Unit of Work implementation. For a web app, it is obviously the request, and it works well.
For a desktop app, you could have a context tied to the screen, or possibly tied to the edit operation (between loading and pressing "save"). Read-only operations could use a throwaway context (using IDs to reload the object when necessary, for example when pressing a "delete" button in a grid).
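For instance, an edit-scoped context might look like this (the entity names and the dialog are placeholders, not a prescribed design):
using (var ctx = new CustomerEntities())   // created when the edit begins
{
    Customer customer = ctx.Customers.First(c => c.Id == selectedId);
    var dialog = new EditCustomerDialog(customer);   // hypothetical edit window
    if (dialog.ShowDialog() == true)
        ctx.SaveChanges();   // the same context tracked the user's edits
}   // a cancelled edit is discarded along with the context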
Forget about a context that spans the entire application's life if you want to stay aware of modifications from other users (relationship collections are cached when first loaded).
Also forget about directly sharing EF entities between different windows, because that effectively makes it an app-wide and thus long lived context.
It seems to me ORMs tend to enforce a web-like design on desktop apps because of this. There's no real way around it, I'm afraid.
Not that it is necessarily a bad thing in itself. You should normally not be hitting the database directly from the desktop if it is to be shared between multiple instances. In that case you would be using a service layer, EF would be in its element, and your problem would be shifted to the "service context"...
We have a large WPF application that calls WCF services to perform CRUD operations like ‘UpdateFirstName()’ or ‘InsertOrder()’. Every call into the service creates a new ObjectContext, therefore we don’t need to worry about inconsistent ObjectContexts hanging around.
Throw away the ObjectContext as soon as you are done with it. Nothing's wrong with generating a new ObjectContext on the fly. The overhead of creating a new ObjectContext is negligible, and you'll save yourself many future bugs and headaches.
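As a sketch of that idea, you can hand your managers a context factory instead of a shared context, so every action builds and disposes its own (MyEntities is a placeholder, and the repository layer is omitted for brevity):
public class UserManager
{
    private readonly Func<MyEntities> _contextFactory;
    public UserManager(Func<MyEntities> contextFactory)
    {
        _contextFactory = contextFactory;
    }
    public void UpdateFirstName(int userId, string firstName)
    {
        using (var ctx = _contextFactory())
        {
            User user = ctx.Users.First(u => u.Id == userId);
            user.FirstName = firstName;
            ctx.SaveChanges();   // a failure here leaves no pending state behind for later actions
        }
    }
}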
Suppose you do create a new ObjectContext each time a method is called on your repository; the snippet below would still give you transaction support by using the TransactionScope class. For example:
using (TransactionScope scope = new TransactionScope())
{
    UserManager.UpdateFirstName(userId, firstName);
    InventoryManager.InsertOrder(newOrder);
    scope.Complete();   // both operations commit together, or neither does
}
One completely different approach is to write all changes to the database directly (with short-lived database contexts) and then add some kind of draft/versioning on top of that. That way you can still offer OK/undo/cancel functionality.

SL asynchronicity at startup... Strategies?

I'm far from new to threading and asynchronous operations, but SL seems more prone to asynchronous issues, particularly ordering of operations, than most frameworks. This is most apparent at startup, when you need a few things done (e.g. identity, authorization, cache warming) before others (rendering a UI usable by your audience, presenting user-specific info, etc.).
What specific solution provides the best (or at least a good) strategy for dealing with ordering of operations at startup? To provide context, assume the UI isn't truly usable until the user's role has been determined, and assume several WCF calls need to be made before "general use".
My solutions so far involve selective enablement of UI controls until things are "ready", but this feels forced and overly sensitive to readiness conditions. It also increases coupling, which I'm not fond of (who is?).
One useful aspect of Silverlight startup to remember is that the splash xaml will continue to be displayed until Application.RootVisual is assigned. In some situations (for example, where themes are externally downloaded) it can be better to delay the assignment of the RootVisual until other outstanding async tasks have completed.
Another useful control is the BusyIndicator in the Silverlight Toolkit. Perhaps when you are ready to display some UI but haven't got data to populate it, you can assign the RootVisual using a page that has a BusyIndicator.
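A rough sketch of deferring the RootVisual in App.xaml.cs; the WCF proxy and the MainPage constructor are assumptions for illustration:
private void Application_Startup(object sender, StartupEventArgs e)
{
    var identityService = new IdentityServiceClient();   // assumed WCF proxy
    identityService.GetCurrentUserCompleted += (s, args) =>
    {
        // Until this assignment runs, the splash xaml stays on screen.
        this.RootVisual = new MainPage(args.Result);
    };
    identityService.GetCurrentUserAsync();
}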
In my opinion:
1st: Render a start-up UI that tells the user that the application registered their action and is running (a login window, maybe?).
2nd: Issue the necessary calls for warm-up procedures in the background (WCF calls and all that) and let the user know about the progress of tasks that are required to finish before the next GUI can be made operable (the main window?).
The order of operations is situation specific, but please be sure to let the user know what is happening if it blocks their input.
The selective enabling of individual controls can look interesting, but the user might go for that function first while it is disabled, so you have to make sure the user knows why, or they will be confused about why it was disabled at the start but works when they come back ten minutes later. It also depends on what the primary function of your program is and what function the disabled control would have. If the application's main function is showing a list of books, it wouldn't be nice to make that list load last.

Switching views in WPF, high level design question

So I am currently working on a UI written in WPF. One thing I really like about WPF is the way it leads you to write more decoupled, isolated UI components. One pain point for me in WPF is that it leads you to write more decoupled, isolated UI components that sometimes need to communicate with one another :). This is probably due to my relative lack of UI experience, especially in WPF (I'm not a novice, but most of my work is far more low level than UI design).
Anyway, here is the situation:
At any one time, the central area of the UI displays one of three views implemented as UserControls, let's call them Views A, B, and C.
The user will be switching between these views at various times, and there is more than one way to switch views (this works well for the customer, causes some pain in code design currently).
Right now each view switching mechanism does its own thing to transition to another view. A certain singleton class takes care of storing data and communicating between the views. I don't like this, it's messy, error prone, and the singleton class knows way too much about the details of the UI. I want to eliminate it as much as is possible.
I ran into a bug today that had to do with the timing of switching between views. To make it simple: one view needs to perform some cleanup when it is unloaded, but that cleanup erases some data that is needed by another view. If the cleanup runs after the other view is loaded, problems ensue. See what I mean? Messy.
I am trying to take a step back and imagine a different way to get these views loaded with the data they need to do their job. Some of you more experienced UI/WPF people out there must have come across a similar issue. I have a couple of ideas, but I am hoping someone will present a cleaner approach to me here. I don't like depending upon order of operations (at a high level) for my code to work properly. Thanks in advance for any help you may be able to offer.
I would recommend some kind of parent ViewModel that handles the CurrentView. I wrote an example here a while back if you're interested.
Basically, the parent ViewModel will have a List<ViewModelBase> AvailablePages, a ViewModelBase CurrentPage, and an ICommand ChangePageCommand.
How you choose to display these is up to you. My preferred method is a ContentControl with its Content bound to the CurrentPage, using DataTemplates to determine which View should be displayed based on the ViewModel stored in CurrentPage.
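In outline, the parent ViewModel looks something like this (a ViewModelBase with a RaisePropertyChanged helper is assumed):
public class MainViewModel : ViewModelBase
{
    public List<ViewModelBase> AvailablePages { get; private set; }
    private ViewModelBase _currentPage;
    public ViewModelBase CurrentPage
    {
        get { return _currentPage; }
        set { _currentPage = value; RaisePropertyChanged("CurrentPage"); }
    }
    // Bound to navigation buttons; the command parameter is the page to activate.
    public ICommand ChangePageCommand { get; private set; }
}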
Rachel's post sums up my basic approach to this quite well. However, I would like to add a few things based on your comments which you may want to consider here.
Note that this is all assuming a ViewModel-first approach, as mentioned in comments.
The user will be switching between these views at various times, and there is more than one way to switch views (this works well for the customer, causes some pain in code design currently).
This shouldn't cause pain in the design. The key here is to have a single, consistent way to request a "current ViewModel" change, and the View will follow suit automatically. The actual mechanism used in the View can be anything - changing the VM should be consistent.
Done correctly, there should be little pain in the design, and a lot of flexibility in terms of how the View actually operates.
Right now each view switching mechanism does its own thing to transition to another view. A certain singleton class takes care of storing data and communicating between the views. I don't like this, it's messy, error prone, and the singleton class knows way too much about the details of the UI. I want to eliminate it as much as is possible.
This is where a coordinating ViewModel can really ease things. It does not require a singleton, as it effectively "owns" the individual ViewModels of the views. One fairly simple option here is to implement an interface on the ViewModels that includes an event; the ViewModel can raise the event (which I would name based on the intent, not on the "view change"). The coordinating VM would subscribe to each child VM and, based on the event, change its CurrentItem property (the active VM) to the appropriate content. There are no UI details required at all.
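A bare-bones version of that wiring, with names of my own invention (a sketch, not a standard API):
public interface INavigationSource
{
    event EventHandler<NavigationRequestedEventArgs> NavigationRequested;
}
public class NavigationRequestedEventArgs : EventArgs
{
    public object Target { get; set; }   // the ViewModel to activate
}
// In the coordinating ViewModel, subscribe once per child:
foreach (INavigationSource child in children)
    child.NavigationRequested += (s, e) => CurrentItem = e.Target;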
I ran into a bug today that had to do with the timing of switching between views. To make it simple: one view needs to perform some cleanup when it is unloaded, but that cleanup erases some data that is needed by another view. If the cleanup runs after the other view is loaded, problems ensue. See what I mean? Messy.
This is screaming out for refactoring. A ViewModel should never clean up data it doesn't own. If this is occurring, it means that a VM is cleaning up data that really should be managed separately. Again, a coordinating VM could be one way to handle this, though it's very difficult to say without more information.
I don't like depending upon order of operations (at a high level) for my code to work properly.
This is the right way to think here. There should be no dependencies on order within your code if it can be avoided, as it will make life much simpler over time.
I am trying to take a step back and imagine a different way to get these views loaded with the data they need to do their job.
The approach Rachel and I are espousing here is effectively the same approach I used in my series on MVVM to implement the master-detail View. The nice thing here is that the "detail" portion of the View does not always have to be the same type of ViewModel or View - if you use a ContentPresenter bound to a property that's just an Object (or an interface that the VMs share), you can easily switch out the Views with completely different Views merely by changing the property value at runtime.
My suggestion is to have one main view model that coordinates everything (not static/singleton), and then use sub view models to transfer data around. This keeps the decoupling you are looking for, provides testability, and allows you to control when the data for each object is changed.

Architectural Design for a Data-Driven Silverlight WP7 app

I have a Silverlight Windows Phone 7 app that pulls data from a public API. I find myself doing much of the same thing over and over again:
In the UI, set a loading message or loading progress bar in place of where the content is
Get the content, which may already be in memory, cached in isolated file storage, or require an HTTP request
If the content cannot be acquired (no network connection, etc.), display an error message
If the content is acquired, display it in the UI
Keep the content in main memory for subsequent queries
The content that is displayed to the user can be taken directly from a data source, such as an ObservableCollection, or it may be a query on a data source.
I would like to factor out this repetitive process into a framework where ideally only the following needs to be specified:
Where to display the content in the UI
The UI elements to show while loading, on failure, and on success
The URI of the HTTP request
How to parse the HTTP response into the data structure that will be kept in memory
The location of the file in isolated storage, if it exists
How to parse the file contents into the data structure that will be kept in memory
It may sound like a lot, but two strings, three FrameworkElements, and two methods are less overhead than what I currently have.
Also, this needs to work for however the data is maintained in memory, and needs to work for direct collections and queries on those collections.
My questions are:
Has something like this already been implemented?
Are my thoughts about the topic above fundamentally wrong in some way?
Here is a design I'm thinking of:
There are two components, a View and a Model.
The View is given the FrameworkElements for loading, failure, and success. It is also given a reference to the corresponding Model. The View is a UserControl that is placed somewhere in the UI.
The Model is a class that is given the URI for the data, a method describing how to parse the data, and optionally a filename and how to parse the file. It is responsible for retrieving the data and notifying the View whenever the current status (loading/fail/success) changes. If the data downloaded from the network differs from the cache, the network data takes precedence. When the app closes or is tombstoned, the model writes the data to the cache.
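Roughly sketched in code (every name here is a placeholder):
public enum LoadStatus { Loading, Failed, Loaded }
public class CachedResource<T>
{
    public LoadStatus Status { get; private set; }
    public T Data { get; private set; }
    public event EventHandler StatusChanged;   // the View reacts to this
    public CachedResource(Uri source, Func<string, T> parseResponse,
                          string cacheFile, Func<string, T> parseCache)
    {
        // store the arguments; cacheFile and parseCache may be null
    }
    public void Load()
    {
        // check memory, then isolated storage, then fall back to HTTP;
        // network data wins over cached data when they differ
    }
}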
How does that sound?
I took some time to have a good read of your requirements and noted some thoughts to offer as a sounding board.
Firstly, for repetitive tasks with common behaviour, this is definitely the way to approach it. You are not alone in thinking about this problem.
People doing a lot of this sort of thing may have created similar abstractions; however, to my knowledge, none have been publicly released.
How far you go with it may depend on whether you intend it just for your own use and for those with very similar requirements, or whether you want to handle more general cases and make a product that is usable by a very wide audience.
I'm going to assume the former, but that does not preclude the possibility of releasing it as an open source project that can be developed further and/or forked.
By not trying to cater for all possibilities, you can make certain assumptions about the nature of the consuming implementation and, in particular, its UI design choices.
I think, overall, your thinking is in the right direction. While reading some of your high-level thoughts, I considered that some things could be simplified (a good thing) while still delivering a compelling UI.
On your initial points.
You could just assume a performant indeterminate progress bar is being passed in.
Do this if it's important to you, but you could be buying yourself into some complexity here handling different caching requirements - variance in duration or dirty handling. It may be sufficient to lean on the platform's built-in caching of URLs (which some people have found gets in their way).
Handling network connectivity - yep, this is repetitive and somewhat intricate. A perfect candidate for a general solution.
Updating the UI... arguably it is better to just return data and defer decisions regarding the presentation and format of the data to your individual clients.
Content in main memory - see above on caching.
On your potential inputs.
Where to display content - see above re data and defer presentation choices to client.
I would go with a UI element for the progress indicator - again, a performant progress bar. Regarding communication of failure, I would consider implementing this in a Completed event which you publish (sketched just after this list). Then through event arguments you can communicate the result and defer handling to the client, which can place that result in some presentation control/log/whatever. This is consistent with patterns used by the .NET Framework.
URI - yes, this gets passed in.
How to parse - passing in a delegate to convert a stream or string into an object whose type can be decided by the client makes sense.
Location of the cache - you could pass this in if generalising matters, or hardcode its path. It would be more useful to others if passed in (consider whether you handle folders/creation).
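For the failure-communication point above, the conventional shape would be something like this (the type names are mine, not an existing API):
public class FetchCompletedEventArgs<T> : EventArgs
{
    public T Result { get; set; }
    public Exception Error { get; set; }   // null on success
}
public class DataLoader<T>
{
    // The client subscribes and decides how to present failures.
    public event EventHandler<FetchCompletedEventArgs<T>> FetchCompleted;
}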
On the implementation.
You could go with a UserControl if it works for you to be bound by that assumption. It would be more flexible, though, and arguably equally simple/elegant, to push presentation back to the client for both the data display and status messages, and to control hide/display of the progress bar that was passed in.
Perhaps you would go so far as to assume the status messages will always be displayed in a TextBlock (if passed) and shift that housekeeping from each of your clients into your generic class.
I suspect you will still benefit from keeping the data format and the presentation decoupled.
Tombstone handling... I would recommend some testing on the platform's built-in caching of URLs here, to see whether its duration/dirty conditions work for your general cases.
Hopefully this gives you some things to think about and some reassurance you're heading down the right path. There are many ways you could go about this. Which is the best path ultimately will be driven by your goals.
I'm developing a WP7 application which is basically a client of an existing REST API. The server returns data in JSON. With the help of the JSON.NET library (http://json.codeplex.com/) I was able to deserialize it directly into my .NET C# classes.
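The deserialization itself is a one-liner; NewsItem stands in for one of my own classes:
// json is the raw response string from the REST API
List<NewsItem> items = JsonConvert.DeserializeObject<List<NewsItem>>(json);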
I store the data locally to handle the offline scenario of my application and also to avoid calling the server each time the user launches the application. I provide two ways to refresh the data: manually and/or after a period of time. To store the data I use Sterling (http://sterling.codeplex.com/); it's a simple but easy-to-use local database for Silverlight/WP7.
The biggest challenge is handling the asynchronous communication with the server. I provide clear UI feedback (progress bar and/or loading wheel) to let the user know what's going on.
On a side note, I'm using the MVVM Light toolkit and SL Unit Testing to do integration tests: View Model => my local client code => server. (http://code.google.com/p/nunit-silverlight/wiki/NunitTestsWp7)

Delphi DataModule Usage - Single or Multiple?

I am writing an application with various forms and their corresponding datamodules.
I wrote it in a way that they use each other by adding the units to each other's uses clauses (one in the implementation section and the other in the interface section, to avoid a circular reference).
Is this approach wrong? Why should or shouldn't I use it this way?
I have to agree with Ldsandon; IMHO it's way better to have more than one datamodule in your project. If you look at it as a Model-View-Controller thingie, your DB is the Model, your forms would be the Views, and your datamodules would be the Controllers.
Personally, I always have AT LEAST two datamodules in my project. One datamodule is used to share Actions, ImageLists, the DB connection, and other stuff throughout the project. Most of the time this is my main datamodule.
From there on I create a new datamodule for each "entity" in my application. For example, if my application needs to process or display orders, customers, and products, then I will have a datamodule for each and every one of those.
This way I can clearly separate functionality and easily reuse bits and pieces without having to pull in everything. If I need something related to customers, I simply use the Customers datamodule and just that.
Regards,
Stefaan
It is OK, especially if you're going to create more than one instance of the same form, each using a different instance of the related datamodule.
Just be aware of a little issue in the VCL design: if you create two instances of the same form and their datamodules, both forms will point to the same datamodule (due to the way the VCL resolves links) unless you use a little trick when creating the datamodule instance:
if FDataModule = nil then
begin
  FDataModule := TMyDataModule.Create(Self);
  FDataModule.Name := ''; // That will avoid pointing to the same datamodule
end;
A datamodule just for your table objects, if you only have a few db tables, would be a good first one.
And a data module just for your actions is another.
A datamodule just for image lists is a good third one. This datamodule only contains image lists, and then your forms that need access to image lists can use them all from this shared location.
If you have 200 table objects, maybe use more than one datamodule just for db tables. Imagine an application that has 20 tables to do with invoicing and another 20 tables to do with HR. I would think an InvoicingDataModule and an HRDataModule would be nice to keep separate if the tables inside them, and the code that works against them, don't need to know anything about each other, or even when one module has a "uses" dependency in one direction, as long as that relationship is not circular. Even then, finer-grained datamodule modularity can be beneficial.
Beyond the looking glass: I always use Forms instead of DataModules. I know it's not common ground. I call them DataMovules.
I use one such DataMovule per group of logically related tables.
I use Forms instead of DataModules because both are component containers, and both can effectively be used as containers for data-related components.
During development:
My application is easier to develop. Forms give you the opportunity to view the data components, making it easy to develop those components.
My application is easier to debug, as you may put some viewer components right on the form so you can actually see the data. I usually create a tab rack, with one page per table and one data grid on every page, for browsing the data in the table.
My application is easier to test, as you may eventually manipulate the data, for example experimenting with extreme values for stress testing.
After development:
I turn the form invisible, practically making it a DataModule, and I enjoy the very same container features that a datamodule has.
But with a bonus: the Form is still there, so I can eventually turn it visible for problem determination. I use the About box for this.
And no, I have not experienced any noticeable penalty in application size or performance.
I don't try to break the MVC paradigm; I try to adhere to it. I don't mix the Forms that constitute my View with the DataMovules that constitute my Controllers. I don't consider them part of my View; they will never be used as the user interface of my application. DataMovules just happen to be Forms. They are just convenient engineering artifacts.
