WPF binding to DbSet<T>.Local (EF Core)

Please help! I am developing a WPF application with EF Core.
After reading the Microsoft documentation, I found it convenient to bind my DataGrid to the DbSet<T>.Local collection:
it represents a local view of all Added, Unchanged, and Modified entities in this set. This local view will stay in sync as entities are added or removed from the context.
Everything works great, but there is a small problem:
Now I can only run synchronous queries against the database through the context, which often freeze the UI. Asynchronous queries throw an exception:
This type of CollectionView does not support changes to its SourceCollection from a thread different from the Dispatcher thread.
Is there a way to bind to this Local view while keeping the ability to run asynchronous queries through the context?
Here is a minimal test project (no MVVM, etc.): https://github.com/OlegFj17/TestingBindingToLocal_EF_6
Thank you.
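One possible way out, as a minimal sketch (AppDbContext, Customer, CustomersGrid, and LoadButton_Click are invented names, not from the question): register a lock with WPF via BindingOperations.EnableCollectionSynchronization so the CollectionView tolerates the changes that an async query makes to Local from a worker thread.

    using Microsoft.EntityFrameworkCore;
    using System.Windows;
    using System.Windows.Data;

    public partial class MainWindow : Window
    {
        private readonly AppDbContext _db = new AppDbContext(); // hypothetical context
        private readonly object _localLock = new object();

        public MainWindow()
        {
            InitializeComponent();

            // ToObservableCollection() is the EF Core helper for binding to Local.
            var local = _db.Customers.Local.ToObservableCollection();

            // Tell WPF which lock to take when the collection changes off the
            // dispatcher thread, instead of throwing.
            BindingOperations.EnableCollectionSynchronization(local, _localLock);

            CustomersGrid.ItemsSource = local;
        }

        private async void LoadButton_Click(object sender, RoutedEventArgs e)
        {
            // The async query populates Local from a thread-pool thread; the
            // synchronization registered above makes the CollectionView accept it.
            // Note: this does not make DbContext itself thread-safe, so avoid
            // overlapping operations on the same context.
            await _db.Customers.LoadAsync();
        }
    }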

Related

What is the real purpose of ViewModel in MVVM?

I had a talk with my team lead about this topic, and from his point of view we can just use bindings and commands, omitting the ViewModel, because we can test UI behaviour without a VM using Automation or our own mechanisms of UI testing (based on automated clicks on Views). So, if there are no real benefits, why should I spawn "redundant" entities? Besides, automated integration tests look much more indicative than VM tests. So it seems that we can mix VMs and Models.
update:
I agree that mixing VMs and Models puts into a single .cs file both the data model and the rules for transforming that data for presentation in a View. But if that is the only benefit, I don't want to create a VM for each View.
So what advantages of the VM do you know?
The VM is the logic behind your UI. Combining the UI code with the logic ends up in a mess, at least in my experience. Your view should define what you see: buttons, menus, etc. Your VM is in charge of the bindings and of handling events caused by the user.
Edit:
Not wanting to create a VM for each view doesn't sound like a software-oriented reason. Creating one will leave your view clean of logic and your model free to be the connecting layer between the core layer and your app layer.
I like the following example of the model and its role (and why it shouldn't be combined with the VM): imagine you're developing an Android app using Google Maps. Google Maps is your core. Then one fine day you really need the option to, say, color parts of the map in pink, bright pink. An email to Google asking for colorPink(Map) will probably get you nowhere. That's where your model steps in and provides the map wrapper you need to define your pinky method.
Without a separate model, you'd have to go through every VM that uses map and update it.
So, the view has a role, the model has a role, the VM is the logic between those.
Edit 2:
There are some cases where there's no need for a model layer. I tended to disagree at first but was convinced after a discussion: in relatively small applications, the model can end up being a redundant wrapper with no added functionality over the core. In such cases, you can use the core objects directly.
What is he binding to? Whatever he is binding to is effectively a view model, whether you call it that or not. If it also doubles as a data model, then you're violating the Single Responsibility Principle (SRP). Essentially, the problem here is that you're lumping together code that services different parts of the application architecture, which will lead to a convoluted mess.
UI testing is a pain, not just because you need to accommodate changes in the View, which can happen many times, but also because the UI tends to interfere with its own testing. Depending on the tests needed, you'll have to keep track of focus, resizing, and possibly other (mouse) events, which makes it extremely hard to build a representative test.
It is much easier to scope the tests to the logic in the ViewModel and leave the UI testing to humans. The human testers do not need to test the logic; their only concern should be the UI.
By breaking a ViewModel down into a hierarchy of ViewModels, you might be able to reuse a ViewModel multiple times. There is no rule that says you should have a ViewModel for each View. (After a release or two I end up there anyway, but that's beside the point :) )
It depends on the nature of your Model. Sometimes models are simple and can serve both roles, as you are suggesting. However, depending on the framework, you'll need to dirty up your model with some PropertyChanged event code, which can get messy and distracting. In more complex cases, ViewModels serve as a mediator between the view and the model. Suppose you're creating a client app that monitors a remote process or database entries: creating ViewModels for these entities lets you surface your data in a way that is convenient for a UI framework to bind to (but would be silly in a DB framework), encapsulate operations as commands, perform validation, and so on.
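To make the separation concrete, here is a minimal sketch (all names invented for illustration): the View declares {Binding Name} and {Binding SaveCommand} and nothing else, while the VM owns the state and the event handling.

    using System;
    using System.ComponentModel;
    using System.Windows.Input;

    // The View binds to Name and SaveCommand; all logic lives here.
    public class PersonViewModel : INotifyPropertyChanged
    {
        private string _name;

        public event PropertyChangedEventHandler PropertyChanged;

        public string Name
        {
            get { return _name; }
            set
            {
                _name = value;
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Name"));
            }
        }

        public ICommand SaveCommand { get; private set; }

        public PersonViewModel()
        {
            SaveCommand = new RelayCommand(Save);
        }

        private void Save()
        {
            // Call into the model/service layer here; the View never sees this.
        }
    }

    // Tiny ICommand helper for the sketch; WPF does not ship one out of the box.
    public class RelayCommand : ICommand
    {
        private readonly Action _execute;
        public RelayCommand(Action execute) { _execute = execute; }
        public event EventHandler CanExecuteChanged { add { } remove { } }
        public bool CanExecute(object parameter) { return true; }
        public void Execute(object parameter) { _execute(); }
    }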

WPF performance issue. Bindings ?

We have a financial application that we started developing 5 years ago. It is a migration of an old application, with the main purpose being to achieve a nice user interface.
Because of the UI considerations back then, the chosen technology was WPF. The application still uses .NET 3.5 with SP1.
The current application is a layered one, but the business logic and the back end use a different technology and are outside the scope of our problem.
The client is not a traditional MVVM application, but it mixes in some concepts from MVVM.
In a simple scenario, data is retrieved from an application server in the form of typed datasets. The datasets sent to the client are mapped into data objects (POCOs) implementing the INotifyPropertyChanged interface, with the properties raising the PropertyChanged event.
Those data objects are bound to the UI, and in response to user interaction some routed commands are executed, so in the end data is retrieved from the server.
A variation of the Active Record pattern is implemented: each table from a dataset received from the server is mapped to a data object class, a property of a data object represents a column of the table, and a data object instance represents a row of the table.
Because of the WPF bindings used, the retrieved data automatically appears in the UI.
The problem is that we have started to have performance problems. They appear, sometimes in a non-deterministic fashion, after the data is retrieved from the server: there is a delay between the moment the mapping from the datasets starts and the moment the UI is updated.
On slow machines these delays can go up to a couple of seconds.
On powerful machines the delays seem insignificant. The delays are also not constant: retrieving the same data, the UI is sometimes updated in a couple of milliseconds and sometimes only after seconds.
We think we have isolated the problem: it is the use of bindings in cases where a binding source (data object) has many properties (e.g. 50) and the UI contains many controls (text boxes) that use bindings.
Measurements have also been made with different tools. There seem to be no memory leaks. The processing delays were observed at the moment a property of a data object participating in a binding is updated.
The duration of setting a property value also seems to depend on the value itself (e.g. updating decimal properties with the value zero is faster; tests were made with and without converters on the bindings and with strings, and the property value "0" is updated much faster than the value "123.456.00").
So basically we think a major part of the performance issue comes from having many controls in the UI (e.g. 150 text boxes) bound to objects with many properties (e.g. 50). We have also noticed that targeting .NET 3.5 is much slower than targeting 4.0 or 4.5.
The delays can be reproduced, but they do not occur all the time, and they vary wildly, from 5 milliseconds to 2 seconds, or even tens of seconds on very slow machines.
We suspect the performance issue (the delay until the UI is updated) is related to WPF and its binding mechanism, but this is not confirmed.
So basically we do not understand why the delays appear, why they appear only sometimes (non-deterministically), and whether they are caused by the WPF binding system.
Any help on this would be appreciated. Thank you.
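Not a confirmed diagnosis for this report, but one common mitigation worth ruling out first: skip raising PropertyChanged when the incoming value equals the current one, so the bindings targeting that property are not re-evaluated needlessly. A sketch in .NET 3.5-compatible C# (PositionRow and Amount are invented names):

    using System.ComponentModel;

    public class PositionRow : INotifyPropertyChanged
    {
        private decimal _amount;

        public event PropertyChangedEventHandler PropertyChanged;

        public decimal Amount
        {
            get { return _amount; }
            set
            {
                if (_amount == value)
                    return;                       // skip redundant binding updates
                _amount = value;
                var handler = PropertyChanged;    // race-safe copy of the delegate
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Amount"));
            }
        }
    }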

Using an ObservableCollection in a service

I have a few projects. One of them will be deployed as a Windows service; another is a WPF application. There are a few other applications managing data (from external hardware or the database). The data lists are almost all ObservableCollections.
I've read around a bit, and it seems ObservableCollection is really only meant for the UI layer (the WPF application, in my case). Is that correct? Would I be better off using events (PropertyChanged) in the service/data-manager layers?
In a service, usually all objects should only live as long as it takes to process the current request, correct? So, normally you wouldn't need change notification at all in the service layer, since the service would perform all changes to the model itself. For the next request, the required objects would be read again from the backing store.
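If some change notification is still wanted below the UI layer, a coarse-grained event on the data manager is usually enough; the WPF application can then project the list into an ObservableCollection at its own boundary. A sketch, with all names invented:

    using System;
    using System.Collections.Generic;

    public class DeviceReading { /* hypothetical payload */ }

    public class DeviceDataManager
    {
        private readonly List<DeviceReading> _readings = new List<DeviceReading>();

        // Coarse notification instead of ObservableCollection's per-item events.
        public event EventHandler ReadingsChanged;

        public IList<DeviceReading> Readings
        {
            get { return _readings.AsReadOnly(); }
        }

        public void Add(DeviceReading reading)
        {
            _readings.Add(reading);
            var handler = ReadingsChanged;
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }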

Global Entity Framework Context in WPF Application

I am in the middle of developing a WPF application that uses Entity Framework (.NET 3.5). It accesses the entities in several places throughout. I am worried about consistency across the application with regard to the entities. Should I instantiate separate contexts in my different views, or should I (and is there a good way to do this) create a single context that can be accessed globally?
For instance, my entity model has three sections: Shipments (with child packages and further child contents), Companies/Contacts (with child addresses and telephones), and disk specs. The Shipments and EditShipment views access the DiskSpecs, and the OptionsView manages the DiskSpecs (create, edit, delete). If I edit a DiskSpec, I have to have something in the ShipmentsView to retrieve the latest specs if I have separate contexts, right?
If it is safe to have one overall context from which the rest of the app retrieves its objects, then I imagine that is the way to go. If so, where should that instance be put? I am using VB.NET, but I can translate from C# pretty well. Any help would be appreciated.
I just don't want one of those applications where the user has to hit reload a dozen times in different parts of the app to get the new data.
Update:
OK so I have changed my app as follows:
All contexts are created in Using blocks so they are disposed of once they are no longer needed.
When loaded, all entities are detached from the context before it is disposed.
A new property in the MainViewModel (ContextUpdated) raises an event that all of the other ViewModels subscribe to, which runs that ViewModel's RefreshEntities method.
After implementing this, I started getting errors saying that an entity can only be referenced by one ChangeTracker at a time. Since I could not figure out which context was still referencing the entity (there shouldn't be any context, right?), I cast the object to IEntityWithChangeTracker and called SetChangeTracker with Nothing (null).
This has led to the current problem:
When I null the change tracker on the entity and then attach it to a context, it loses its changed state and does not get updated in the database. However, if I do not null the change tracker, I can't attach it at all. I have my own change-tracking code, so that is not a problem.
My new question is: how are you supposed to do this? A good example of entity query and entity save code would go a long way, because I am beating my head in trying to get what I once thought was a simple transaction to work.
A global static context is rarely the right answer. Consider what happens if the database is reset during the execution of this application - your SQL connection is gone and all subsequent requests using the static context will fail.
Recommend you find a way to have a much shorter lifetime for your entity context - open it, do some work, dispose of it, ...
As far as putting your different objects in the same EDMX goes: if they have any relationships between them, you'll almost certainly want them in the same EDMX.
As for reloading: the user should never have to do this. Behind the scenes you can open a new context, reload the current version of the object from the database, apply the changes they made in the UI, and then save it back.
You might want to look at detached entities also, and beware of optimistic concurrency exceptions when you try to save changes and someone else has changed the same object in the database.
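A minimal sketch of the query-then-save shape this implies. It is written against the later DbContext API rather than the .NET 3.5 ObjectContext the question targets, and ShippingContext is an invented name (DiskSpec is from the question); the pattern, not the exact calls, is the point:

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    public class DiskSpecRepository
    {
        // Query: open, read, dispose; the entities come back detached.
        public IList<DiskSpec> LoadDiskSpecs()
        {
            using (var ctx = new ShippingContext())
            {
                return ctx.DiskSpecs.AsNoTracking().ToList();
            }
        }

        // Save: fresh context, attach the detached entity, mark it modified,
        // save, dispose. No entity is ever tracked by two contexts at once.
        public void SaveDiskSpec(DiskSpec spec)
        {
            using (var ctx = new ShippingContext())
            {
                ctx.DiskSpecs.Attach(spec);
                ctx.Entry(spec).State = EntityState.Modified;
                ctx.SaveChanges();
            }
        }
    }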
Good question, Cory. Thumbs up from me.
Entity Framework gives you a free choice: you can either instantiate multiple contexts or have just one, static. It will work well in both cases, and yes, both solutions are safe. The only valuable advice I can give you is: experiment with both, measure performance, delays, etc., and choose the one that works best for you. It's fun, believe me :)
If this is going to be a really massive application with tons of concurrent connections, I would advise one static context, or one static core context with a few additional ones to support it. But, as I wrote a few lines above, it's up to your requirements which solution is better for you.
I especially liked this part of your question:
I just don't want one of those applications where the user has to hit reload a dozen times in different parts of the app to get the new data.
WPF is a really, really powerful tool, and the times when users had to press buttons to refresh data are basically gone forever. WPF gives you a wide range of asynchronous, multithreaded tools, such as the Dispatcher class or BackgroundWorker, to refresh the desired data in the background. This is great because not only do you not have to worry about pressing various refresh buttons, but the background threads also don't block the UI, so data is refreshed transparently from the user's point of view.
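For example, a BackgroundWorker refresh might look like this sketch (ShipmentsViewModel, ShippingContext, and Shipment are invented names): the query runs off the UI thread, and the completion callback comes back on the dispatcher thread, where touching bound collections is safe.

    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.ComponentModel;
    using System.Linq;

    public class ShipmentsViewModel
    {
        // The DataGrid binds to this collection.
        public ObservableCollection<Shipment> Shipments { get; private set; }

        public ShipmentsViewModel()
        {
            Shipments = new ObservableCollection<Shipment>();
        }

        public void RefreshShipments()
        {
            var worker = new BackgroundWorker();
            worker.DoWork += (s, e) =>
            {
                // Thread-pool thread: short-lived context, plain query.
                using (var ctx = new ShippingContext())
                    e.Result = ctx.Shipments.ToList();
            };
            worker.RunWorkerCompleted += (s, e) =>
            {
                // Raised on the UI thread: safe to update the bound collection.
                Shipments.Clear();
                foreach (var shipment in (List<Shipment>)e.Result)
                    Shipments.Add(shipment);
            };
            worker.RunWorkerAsync();
        }
    }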
WPF together with Entity Framework is really worth the effort of learning. Please feel free to ask if you have any further concerns.

MVVM pattern: ViewModel updates after Model server roundtrip

I have stateless services and anemic domain objects on the server side. The model between server and client is POCO DTOs. The client should become MVVM. The model can be a graph of about 100 instances of 20 different classes. The client editor contains diverse tab pages, all of them live-connected to the model/ViewModel.
My problem is how to propagate changes after a server round trip in a nice way. It's quite easy to propagate changes from the ViewModel to the DTO. For the way back, it would be possible to throw away the old DTO and replace it wholesale with the new one, but that would cause a lot of redrawing for lists/DataTemplates.
I could gather the server-side changes and transmit them to the client. But the names of the changed fields would be domain/DTO specific, not ViewModel specific, and the mapping seems nontrivial to me. If I did it imperatively after the round trip, it would break the SoC/modularity of the ViewModels.
I'm thinking about some kind of mapping rule engine, something like AutoMapper or EmitMapper, but those solve only very plain use cases. I don't see how such a tool would map/propagate/convert the addition or removal of list items, or how it would identify instances in collections so it could merge values into existing instances. It should also propagate validation/error info.
Maybe I should implement INotifyPropertyChanged on the DTO and try to replay the server-side events on it? And then bind the ViewModel to it? Would binding solve the problems with collection merges in a nice way? Is the EventAggregator from PRISM useful for that? Is there any event record/replay component?
Is there a better client-side pattern for an architecture with server-side logic?
Typically, I've kept a reference to the DTO in my Model classes. For multiple models, I ensure each model knows how to construct itself from a DTO, as well as how to save itself using an injectable "saver" or other service-provider object. Carrying around a reference to the DTO makes it pretty easy, when you call Save() on the model, to modify the old DTO according to the Model before passing it back to the service.
Hopefully any "updates" to other objects after a Save() operation can be communicated in other DTOs, which should then be loaded into the appropriate Model classes used by your ViewModel.
The downside is that you do indeed have to write the mapping code, but that is usually the easiest part. I am not convinced this is the best way to do things, and I would appreciate reading others' responses.
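As a sketch of that shape (OrderModel, OrderDto, and IOrderSaver are invented names): the model copies values out of the DTO on construction and writes them back into the same DTO on Save().

    public class OrderModel
    {
        private readonly OrderDto _dto;
        private readonly IOrderSaver _saver;   // the injectable "saver" service

        public decimal Total { get; set; }

        public OrderModel(OrderDto dto, IOrderSaver saver)
        {
            _dto = dto;
            _saver = saver;
            Total = dto.Total;                 // construct the model from the DTO
        }

        public void Save()
        {
            _dto.Total = Total;                // modify the old DTO from the model
            _saver.Save(_dto);                 // pass it back to the service
        }
    }

    public class OrderDto { public decimal Total { get; set; } }

    public interface IOrderSaver { void Save(OrderDto dto); }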
I used another variation with a lot of success. We had a real-time GUI, so repetitive redraws on the client were not acceptable.
We made our DTOs aware of their own property changes and had them emit PropertyChanged events to the ViewModel, thereby replaying the server-side events.
The code is straightforward to write, unit test, etc. It becomes easy to browse when the PropertyChangeListeners (interfaces implemented by the ViewModels) are typed.
This boundary can also be used to switch threads onto the GUI thread.
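A sketch of that arrangement (QuoteDto and QuoteViewModel are invented names): the DTO replays server-side changes as PropertyChanged events, and the subscription boundary is where the switch to the GUI thread happens.

    using System;
    using System.ComponentModel;
    using System.Windows.Threading;

    public class QuoteDto : INotifyPropertyChanged
    {
        private decimal _price;

        public event PropertyChangedEventHandler PropertyChanged;

        public decimal Price
        {
            get { return _price; }
            set
            {
                if (_price == value) return;   // no event when nothing changed
                _price = value;
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Price"));
            }
        }
    }

    public class QuoteViewModel
    {
        public QuoteViewModel(QuoteDto dto, Dispatcher dispatcher)
        {
            // Server-side replays may arrive on any thread; marshal to the GUI.
            dto.PropertyChanged += (s, e) =>
                dispatcher.BeginInvoke(new Action(() => OnDtoChanged(e.PropertyName)));
        }

        private void OnDtoChanged(string propertyName)
        {
            // Update the VM's own bindable state / raise its own events here.
        }
    }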
