Modifying an ObservableCollection&lt;T&gt; declared as a resource at runtime - WPF

I have a bunch of ObservableCollections which are populated from a database. There's a good chance that these collections will grow during the application's lifetime, and I need them to be updated every 30 seconds or so.
I declare the collections as resources in merged dictionaries in App.xaml. I can fetch these collections fine by using Application.Current.FindResource(), but any changes I make to the resulting collection are not reflected when I call FindResource again. Maybe I'm naive to think this would be the case.
Am I right or wrong?

Got it!
A resource's value can be set through Application.Current.Resources[key].
So in my example, should anyone run into this problem, I do something like this:
MyObservableCollection coll1 = Application.Current.FindResource("resourceName") as MyObservableCollection;
foreach (Item i in coll1)
{
    if (someCondition) { i.someProperty = someValue; } // assignment ('='), not comparison ('==')
}
// coll2 does NOT reflect the above change!!!
MyObservableCollection coll2 = Application.Current.FindResource("resourceName") as MyObservableCollection;
Application.Current.Resources["resourceName"] = coll1;
// coll3 DOES reflect the above change
MyObservableCollection coll3 = Application.Current.FindResource("resourceName") as MyObservableCollection;

Related

Immutable State - Propagating Changes to the GUI Efficiently

In a previous question I asked how to idiomatically implement an observer pattern for an F# application. My application now uses a MailboxProcessor as recommended, and I've created some helper functions to create sub-MailboxProcessors, etc. However, I'm at a mental block when it comes to specific case scenarios w.r.t. GUI binding.
Let's say I have a model like this:
type Document = {
    Contents : seq<DocumentObject>
}
And the GUI (WPF, XAML) requires binding like so:
interface IMainWindowViewModel
{
    IEnumerable<Control> ContentViews { get; }
}
Each ViewModel for each Control will require a DocumentObject (its underlying model) and a way of knowing if it has changed. I supply this as a sub-MailboxProcessor<DocumentObject> so that changes may be propagated correctly; I'm moderately confident this pattern works. Essentially, it maps the service outputs and wraps modification requests (outer interface example below):
let subSvc =
    generateSubSvc svc
        (fun doc -> Seq.head doc.Contents)
        (fun f oldDoc ->
            { oldDoc with Contents = oldDoc.Contents |> Seq.mapi (fun i c -> if i = 0 then f c else c) })
let viewModel = new SomeDocObjViewModel(subSvc)
new DocObjView(viewModel)
Now, imagine a modification command deletes a DocumentObject from MyDocument. The top-level MailboxProcessor now echoes the change to IMainWindowViewModel using its IEvent<MyDocument>. And here's where my problems begin.
My IMainWindowViewModel doesn't really know which DocumentObject has been deleted, only that there's a new Document it has to deal with. There may be ways of figuring it out, but it never really knows directly. This can force me down the path of re-creating all the Controls for all DocumentObjects just to be safe (inefficient). There are additional problems (such as dangling subSvcs) which I haven't mentioned here for brevity.
Normally, these kinds of dynamic changes would be dealt with using something like an ObservableCollection<DocumentObject> which is then mapped into an ObservableCollection<Control>. This comes with all the caveats of shared mutable state and is a little 'hackish'; however, it does do the job.
Ideally, I'd like a 'pure' model, free from the trappings of PropertyChanged and ObservableCollections. What kinds of patterns in F# would satisfy this need? Where is it appropriate to draw the line between idiomatic and realistic?
Have you considered using the Reactive Extensions (and ReactiveUI further down the road) for the purpose of modelling mutable state (read: your model properties over time) in a functional way?
I don't see anything technically wrong with using an ObservableCollection in your model. After all, you need to track collection changes. You could do it on your own, but it looks like you can save yourself a lot of trouble by not reinventing the observable collection, unless you have a very specific reason to avoid the ObservableCollection class.
Also, using a MailboxProcessor seems a bit of an overkill, since you could just use a Subject (from Rx) to publish messages and expose it as an IObservable to subscribe to:
open System.Reactive.Linq
open System.Reactive.Subjects

type TheModel() =
    // the subject publishes the character count each time a document is downloaded
    let charactersCountSubject = new Subject<int>()
    let downloadDocument (* ... *) = async {
        let! text = // ...
        charactersCountSubject.OnNext(text.Length)
    }
    // expose only the read side to consumers
    member val CharactersCount = charactersCountSubject.AsObservable() with get

type TheViewModel(model : TheModel) =
    // ...
    member val IsTooManyCharacters = model.CharactersCount.Select(fun count -> count > 42)
Of course, since we're talking about WPF, the view-model should implement INPC. There are different approaches, but whichever one you take, ReactiveUI has a lot of convenient tools.
For example, the CreateDerivedCollection extension method solves one of the problems you've mentioned:
documents.CreateDerivedCollection(fun x -> (* ... map Document to Control ... *))
This will take your documents observable collection, and make another observable collection out of it (actually a ReactiveCollection) that will have documents mapped to controls.
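For comparison, here is a minimal C# sketch of the same idea; it assumes ReactiveUI's ReactiveList/CreateDerivedCollection API, and CreateControlFor is a hypothetical factory method:
// A minimal sketch, assuming ReactiveUI's ReactiveList and
// CreateDerivedCollection; CreateControlFor is a hypothetical factory.
var documents = new ReactiveList<Document>();
var controls = documents.CreateDerivedCollection(doc => CreateControlFor(doc));
// Adding or removing a Document in 'documents' now automatically adds or
// removes the corresponding Control in 'controls'.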

Setting a hierarchical property for GXT's ComboBox.setDisplayField

I have a simple extension of BaseModelData in the form of MyModel, and I can call new MyModel().getObj1().getObj2() to get obj2's string value. I have a number of MyModel instances, so I would like to populate a ComboBox with the obj2 value from each MyModel instance. First, I called ComboBox.setDisplayField("obj1.obj2"), because that hierarchical-property approach works for TextField.setName(). Then I took a store containing all the MyModel instances and set it on the ComboBox via setStore(). However, the combobox is empty. It looks as though setting the aforementioned property via ComboBox.setDisplayField() does not work the same way as it does for TextField.setName(). I tried using my own instance of ListModelPropertyEditor, but without success. What are my alternatives?
Thank you for your time!
I am not sure about accessing hierarchical data via ComboBox.setDisplayField(), but you can achieve it by adding a new method, say getObj2(), to the MyModel class, which will essentially represent obj1.obj2:
public Obj2 getObj2() {
    return getObj1().getObj2(); // with possible null checks
}
Now you can call ComboBox.setDisplayField("obj2") and get the work done.

Good way to implement transactional editing (commit/revert) when using WPF data bindings

I have a fairly standard requirement — I need to be able to open a dialog where user can change values in data-bound fields, and then choose to click OK or Cancel, where clicking Cancel reverts the changes.
I've looked at IEditableCollectionView, IEditableObject and BindingGroups, but they all seem to be meant for editing a single item at a time. My program provides a collection of objects in a list; the user selects an item from the list and edits it using SelectedItem-bound TextBoxes. Meaning that any number of items may be edited, including adding and removing them from the list, and all of those changes need to be reverted if the user presses Cancel.
At first I was simply making object backups through deep-copy (serialization) and restoring them on cancel, but now the objects must contain references to other, shared objects, making this approach problematic.
What's the best way to approach such a scenario without manually copying objects and/or values back and forth?
In this case the DataTable class would work perfectly. It can track changes, roll them back (step by step or all at once), and has many other features.
The DataTable class also supports nesting, which goes well with XML.
If you're willing to save to a database, take a look at Entity Framework.
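For illustration, here is a minimal sketch of DataTable's built-in commit/revert cycle; the table and column names are made up:
// Minimal sketch of DataTable's commit/revert; names are illustrative.
var table = new DataTable("Items");
table.Columns.Add("Name", typeof(string));
table.Rows.Add("Original");
table.AcceptChanges();            // mark the current state as the baseline

table.Rows[0]["Name"] = "Edited"; // user edits via data binding

// On Cancel: discard everything since the last AcceptChanges
table.RejectChanges();

// On OK: commit the edits as the new baseline instead
// table.AcceptChanges();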
After more thought on the matter, I have concluded that the best way, at least for small-scale implementation, is to write a "by value deep copy" method that copies values of objects fields and properties without replacing the objects themselves (so that any references to the edited objects remain intact even when data is restored).
For this purpose I have written the following extension method:
public static void CopyDataTo(this Object source, Object target) {
    // Recurse into lists: grow the target list as needed, copy each
    // item's data in place, then trim any leftover items.
    if (source is IList) {
        var targetList = (IList)target;
        var a = 0;
        foreach (var item in (IList)source) {
            if (a >= targetList.Count) {
                // Target list is shorter: create a new item of the same
                // type (requires a parameterless constructor).
                var type = item.GetType();
                var assembly = Assembly.GetAssembly(type);
                var newItem = assembly.CreateInstance(type.FullName);
                targetList.Add(newItem);
            }
            item.CopyDataTo(targetList[a]);
            a++;
        }
        // Target list is longer: drop the extras.
        while (a < targetList.Count) {
            targetList.RemoveAt(a);
        }
    }
    // Copy over fields
    foreach (var field in source.GetType().GetFields())
        field.SetValue(target, field.GetValue(source));
    // Copy writable, non-indexed properties
    foreach (var property in source.GetType().GetProperties().Where(
        property => property.CanWrite && !property.GetMethod.GetParameters().Any()))
    {
        property.SetValue(target, property.GetValue(source));
    }
}
It's no silver bullet: it only works on objects of the same type, list items have to have a parameterless constructor, and there is no way to control recursion depth. In addition, I haven't yet had a chance to test it in any long-term or more complex scenarios, but so far it does what it should (copies values between objects) and can be used for a simple backup/restore scenario:
var backup = new TypeOfVariableToEdit();
data.CopyDataTo(backup);             // snapshot the current values
var clickedOK = RunDataEditor(data); // user edits 'data' in place
if (!clickedOK)
    backup.CopyDataTo(data);         // on cancel, restore the snapshot
The best approach is not to keep editable state around at all:
if you need these items, get a fresh copy of them from the database (or whatever your data storage is), let the user make changes, and if they press Cancel, just discard the changes. If they press Save, save the data to the storage and then refresh your existing screens.
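A minimal sketch of that flow; 'repository', 'Load', 'Save' and 'RunDataEditor' are hypothetical placeholders:
// Minimal sketch of the discard-on-cancel approach; the repository and
// editor methods are hypothetical placeholders.
var item = repository.Load(id);      // fresh copy just for the dialog
var clickedOK = RunDataEditor(item); // user edits the fresh copy
if (clickedOK)
    repository.Save(item);           // persist, then refresh open screens
// on Cancel: simply drop 'item'; the stored data was never touched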

How do I avoid a memory leak with LINQ-To-SQL?

I have been having some issues with LINQ to SQL around memory usage. I'm using it in a Windows Service to do some processing, and I'm looping through a large amount of data that I'm pulling back from the context. Yes, I know I could do this with a stored procedure, but there are reasons why that would be a less than ideal solution.
Anyway, what I see is that memory is not released even after I call context.SubmitChanges(). So I end up having to do all sorts of weird things, like only pulling back 100 records at a time, or creating several contexts and having them each do separate tasks. If I keep the same DataContext and use it later for other calls, it just eats up more and more memory. Even if I call Clear() on the "var tableRows" array that the query returns to me, set it to null, and call System.GC.Collect(), it still doesn't release the memory.
Now I've read a bit about how you should use DataContexts quickly and dispose of them quickly, but it seems like there ought to be a way to force the context to dump all its data (or all its tracking data for a particular table) at a certain point, to guarantee the memory is free.
Anyone know what steps guarantee that the memory is released?
A DataContext tracks all the objects it ever fetched. It won't release this until it is garbage collected. Also, as it implements IDisposable, you must call Dispose or use the using statement.
This is the right way to go:
using (DataContext myDC = new DataContext())
{
    // Do stuff
} // DataContext is disposed
If you don't need object tracking, set DataContext.ObjectTrackingEnabled to false. If you do need it, you can use reflection to call the internal DataContext.ClearCache(), although you have to be aware that since it's internal, it's subject to disappear in a future version of the framework. And as far as I can tell, the framework itself doesn't use it, but it does clear the object cache.
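A sketch of that reflection call, assuming the internal method keeps its current name and shape:
// Sketch: invoking the internal DataContext.ClearCache() via reflection.
// Assumes the method keeps its current (internal) name and signature;
// requires System.Data.Linq and System.Reflection.
var clearCache = typeof(DataContext).GetMethod(
    "ClearCache", BindingFlags.Instance | BindingFlags.NonPublic);
if (clearCache != null)
    clearCache.Invoke(myDC, null);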
As Amy points out, you should dispose of the DataContext using a using block.
It seems that your primary concern is about creating and disposing a bunch of DataContext objects. This is how LINQ to SQL is designed: the DataContext is meant to have a short lifetime. Since you are pulling a lot of data from the database, it makes sense that there will be a lot of memory usage. You are on the right track by processing your data in chunks.
Don't be afraid of creating a ton of DataContexts. They are designed to be used that way.
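For instance, a sketch of the chunked approach with a fresh, short-lived context per batch; MyDataContext, Tables and Dept are placeholder names:
// Sketch: one short-lived DataContext per batch of 100 rows, so each
// context's tracked objects become collectible as soon as it is disposed.
// MyDataContext, Tables and Dept are placeholder names.
int skip = 0;
while (true)
{
    using (var context = new MyDataContext())
    {
        var batch = context.Tables.Where(x => x.Dept == "Dept")
                                  .Skip(skip).Take(100).ToList();
        if (batch.Count == 0)
            break; // no more rows
        foreach (var row in batch)
        {
            // make changes to row
        }
        context.SubmitChanges();
        skip += batch.Count;
    } // context disposed here, releasing everything it tracked
}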
Thanks guys, I will check out the ClearCache method. Just for clarification (for future readers), the situation in which I was seeing the memory usage was something like this:
using (DataContext context = new DataContext())
{
    int skipAmount = 0; // must live outside the loop so the paging advances
    while (true)
    {
        var rows = context.tables.Where(x => x.Dept == "Dept")
                                 .Skip(skipAmount).Take(100).ToList();
        if (rows.Count == 0)
            break; // out of rows
        foreach (table t in rows)
        {
            // make changes to t
        }
        context.SubmitChanges();
        skipAmount += rows.Count;
        rows.Clear();
        rows = null;
        // At this point, even though the rows have been cleared and the
        // changes have been submitted, the context is still holding onto a
        // reference to the fetched rows somewhere. So unless you create a
        // new context, memory usage keeps on growing.
    }
}
I just ran into a similar problem. In my case, setting DataContext.ObjectTrackingEnabled to false helped.
But it only works when iterating through the rows, as follows:
using (var db = new DataContext())
{
    db.ObjectTrackingEnabled = false;
    var documents = from d in db.GetTable<T>()
                    select d;
    foreach (var doc in documents)
    {
        ...
    }
}
If, for example, the query uses ToArray() or ToList(), it has no effect, presumably because materializing the whole result set keeps every row alive in the list regardless of the tracking setting.

Why is my WPF Treeview bound to LinqToSql classes being a memory hog?

I have a WPF app which is grinding to a halt after running out of memory.
It is basically a TreeView displaying nodes, which are instances of the LINQ to SQL O/R-designer-generated class ICTemplates.Segment. There are around 20 tables indirectly linked via associations to this class in the O/R designer.
<TreeView Grid.Column="0" x:Name="tvwSegments"
ItemsSource="{Binding}"
SelectedItemChanged="OnNewSegmentSelected"/>
<HierarchicalDataTemplate DataType="{x:Type local:Segment}" ItemsSource="{Binding Path=Children}">
...
// code behind, set the data context based on user-input (Site, Id)
KeeperOfControls.DataContext = from segment in tblSegments
where segment.site == iTemplateSite && segment.id == iTemplateSid
select segment;
I've added an explicit property called Children to the segment class which looks up another table with parent-child records.
public IEnumerable<Segment> Children
{
    get
    {
        // NOTE: a fresh DataContext on every access, never disposed;
        // this turns out to matter (see below).
        System1ConfigDataContext dc = new System1ConfigDataContext();
        return from link in this.ChildLinks
               join segment in dc.Segments
                   on new { Site = link.ChildSite, ID = link.ChildSID }
                   equals new { Site = segment.site, ID = segment.id }
               select segment;
    }
}
The rest of it is data binding coupled with data templates to display each Segment as a set of UI Controls.
I'm pretty certain that the children are being loaded on demand (when I expand the parent), going by the response time. When I expand a node with around 70 children, it takes a while before the children are loaded (Task Manager shows Mem Usage at 1,000,000K!). If I then expand the next node, with around 50 children, BOOM! OutOfMemoryException.
I ran the VS Profiler to dig deeper; here are the results (screenshots: Summary Page, Object Lifetimes, Allocation).
The top 3 allocated types are Action, DeferredSourceFactory.DeferredSource and EntitySet (all .NET/LINQ classes). The only user classes, Segment[] and Segment, come in at #9 and #10.
I can't think of a lead to pursue. What could be the reason?
Maybe a using block surrounding that DataContext?
using (System1ConfigDataContext dc = new System1ConfigDataContext()) {
    // ... ?
}
Also, have you tried using a SQL profiler? It might shed some light on the matter.
Have you tried using a global DataContext instead of one for each element?
Creating all of those DataContexts, each with their own query and results, could be the cause of your memory bloat.
I don't know the exact solution, but the new statement in the join may cause this, because a new object could be created for each relation (though, as I mentioned, I don't know if this is correct).
Could you try this:
public IEnumerable<Segment> Children
{
    get
    {
        System1ConfigDataContext dc = new System1ConfigDataContext();
        // A cross join with a where clause avoids the anonymous-type keys.
        // (Note: C#'s join clause only supports 'equals', not '==' with '&&'.)
        return from link in this.ChildLinks
               from segment in dc.Segments
               where link.ChildSite == segment.site && link.ChildSID == segment.id
               select segment;
    }
}
The issue seems to be the creation of multiple S1DataContext objects, as Sirocco referred to.
I tried the using statement to force a Dispose and make the context eligible for collection. However, it resulted in an ObjectDisposedException that I couldn't make sense of. The control flow goes:
1. The line that sets the data context of the DockPanel KeeperOfAllControls.
2. [External Code] (shown in the call stack)
3. Segment.Children.get (which has a using block around its dc)
4. Back at the line in step 1... the LINQ query uses tblSegments, which is retrieved from a local instance of S1DataContext.
Presumably the query returned from Children is deferred, so it only executes after its DataContext has already been disposed.
Anyway, I assumed there was something preventing multiple DataContexts from being created and disposed, so I tried a singleton DataContext.
And it works!
The TreeView control is significantly more responsive; every node I tried loads in 3-4 secs max.
I put in a GC.Collect (for verification) before every fetch/search, and now the memory usage stays between 200,000K and 300,000K.
The O/R-generated System.Data.Linq.DataContext doesn't seem to go away unless it is disposed explicitly (eating memory), and trying to Dispose it in my case didn't pan out, even though both functions had their own using blocks (no shared instance of DataContext). Though I dislike singletons, I'm making a small internal tool for devs, so I don't mind it for now. None of the LinqToSql samples I saw online mandated Dispose calls.
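A minimal sketch of such a singleton; the wrapper class is made up, and only System1ConfigDataContext comes from the code above:
// Minimal sketch of a singleton DataContext; the wrapper class name is
// made up, only System1ConfigDataContext comes from the code above.
public static class SharedContext
{
    private static readonly Lazy<System1ConfigDataContext> _dc =
        new Lazy<System1ConfigDataContext>(() => new System1ConfigDataContext());

    public static System1ConfigDataContext Instance
    {
        get { return _dc.Value; }
    }
}
// The Children property can then query SharedContext.Instance.Segments
// instead of newing up (and leaking) a context on every access.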
So I guess the problem has been fixed. Thanks to all the people that acted as more eyeballs to make this bug shallow.
