WPF app with multiple UserControls

I am creating a simple WPF test project which contains multiple UserControls (instead of Pages). I am using a Switcher class to navigate between the different UserControls. When I navigate between pages, I have observed that memory consumption keeps increasing on each UserControl navigation, and the GC is not invoked.
1. Am I doing something wrong in the following code?
2. Which part of the code consumes the most memory?
3. Do I need to invoke the GC to dispose of my UserControls each time a new one is created? If so, how can I invoke the GC?
public void On_Navigate_Click()
{
    UserControle newusercontrole = new UserControle();

    // Gets data from SQL via the connection class
    DataSet ds = con.getSome_Datafrom_SQL();

    // dataGrid_test is inside newusercontrole
    dataGrid_test.ItemsSource = ds.Tables[0].DefaultView;

    // Position the new UserControl and add it to the main window
    Grid.SetColumn(newusercontrole, 1);
    Grid.SetRow(newusercontrole, 1);
    Grid.SetZIndex(newusercontrole, 10);
    Container.Children.Add(newusercontrole);
}

First off, I must point out that if garbage collection really isn't happening (as you said), it's not your fault and it doesn't mean you're doing something wrong. It only means that the CLR doesn't consider your system to be under memory pressure yet.
Now, to manually invoke a garbage collection cycle anyway, you can use the static GC.Collect() method. If a garbage collection actually runs and your memory consumption is still unreasonably high, then you probably are doing something wrong: you're keeping an ever-increasing number of unnecessary object references, and the garbage collector cannot safely collect those objects. That is a kind of memory leak.
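If you want to verify this for yourself, a diagnostics-only forced collection could look like the following sketch (plain .NET API calls, not code from the question):
// Testing only: force a full, blocking collection, let finalizers run,
// then collect again to reclaim anything the finalizers freed up.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
If memory consumption stays high even after this, live references are keeping your objects reachable.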
As far as your code goes, I think that the problem is at the end of the method you posted:
Container.Children.Add(newusercontrole);
This adds a new object to the Container.Children collection on every click. If it is never removed elsewhere, this is probably the cause of your memory leak. I don't know what the suitable solution would be for your use case (since I don't know exactly how your UI should behave), but you'll likely need to find a way to remove the last UserControle added from Container.Children. If you can use LINQ, the methods OfType<T>() and Last() could help you find it.
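As a minimal sketch (assuming Container is the Grid from your snippet and only the newest control should survive), the removal could look like this:
// Requires: using System.Linq;
// Remove the previously added UserControle, if any, before adding the new one.
var previous = Container.Children.OfType<UserControle>().LastOrDefault();
if (previous != null)
{
    Container.Children.Remove(previous);
}
Container.Children.Add(newusercontrole);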
In any case, don't leave the GC.Collect() line in production code. Use it only to force a collection cycle for testing purposes, like this one.

Is it bad programming practice to store objects of type Foo in a static array belonging to Foo during their construction?

Say I wanted to store objects statically inside their own class. Like this:
public class Foo
{
    private static int instance_id = 0;
    public static List<Foo> instances = new List<Foo>();

    public Foo()
    {
        // Add() rather than indexing: assigning instances[instance_id] on an
        // empty List would throw an ArgumentOutOfRangeException.
        instances.Add(this);
        instance_id++;
    }
}
Why?
I don't need to create separate array structures outside the class (one will do).
I want to map each object to a unique id according to its time of birth.
I will only have one thread using the class, and Foo will only exist as one set in the program.
I searched, but could find no mention of this data structure. Is this bad practice? If so, why? Thank you.
(Please note: this question is not specific to any language.)
There are a couple of potential problems I can see with this setup.
First, since you only have a single array of objects, if you need to update the code so that you have lots of different groups of objects in different contexts, you'll need to do a significant rewrite so that each object ends up getting associated with a different context. Depending on your setup this may not be a problem, but I suspect that in the long term this decision may come back to haunt you.
Second, this approach assumes that you never need to dispose of any objects. Imagine that you want to update your code so that you do a number of different simulations and aggregate the results. If you do this, then you'll end up having your giant array storing pointers to objects you're not using. This means that you'll (1) have a memory leak and (2) have to update all your looping code to skip over objects you no longer care about.
Third, this approach makes it the responsibility of the class, rather than the client, to keep track of all the instances. In some sense, if the purpose of what you're doing is to make it easier for clients to have access to a global list of all the objects that exist, you may want to consider just putting a different list somewhere else that's globally accessible so that the objects themselves aren't the ones responsible for keeping track of themselves.
I would recommend using one of a number of alternate approaches:
Just have the client do this. If the client needs to keep track of all the instances, just have them always create the array they need and populate it. That way, if multiple clients need different arrays, they can do so. You also avoid the memory leak issues if you do this properly.
Have each object take, as part of its constructor, a context in which to be constructed. For example, if all of these objects are nodes in a quadtree, have them take a pointer to the quadtree in which they'll live as a constructor parameter, then have the quadtree object store the list of the nodes in it. After all, it seems like it's really the quadtree's responsibility to keep track of everything.
Keep doing what you're doing, but using something with weak references. For example, you might consider using some variation on a WeakHashMap so that you do store everything, but if the objects are no longer needed, you at least don't have a memory leak.
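For instance, a minimal C# sketch of that last idea (hypothetical, using WeakReference<T> in place of a WeakHashMap, since the question is language-agnostic):
using System;
using System.Collections.Generic;

public class Foo
{
    // Registration alone never keeps an instance alive: once a Foo becomes
    // unreachable elsewhere, the GC is free to collect it despite this list.
    private static readonly List<WeakReference<Foo>> instances =
        new List<WeakReference<Foo>>();

    public Foo()
    {
        instances.Add(new WeakReference<Foo>(this));
    }

    public static IEnumerable<Foo> LiveInstances()
    {
        foreach (var weak in instances)
        {
            if (weak.TryGetTarget(out Foo foo))
                yield return foo; // still alive
        }
    }
}
Note that the dead WeakReference wrappers themselves still accumulate, so a real implementation would also prune them occasionally.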

Using "using" statements for every object implementing IDisposable?

I'm currently skimming through some code that reads Active Directory entries and manipulates them. Since I haven't dealt with this kind of thing before, I F12'd the classes (DirectoryEntry, SearchResultCollection, ...) and found out that they all implement the IDisposable interface, but I couldn't see any using blocks in our code.
Are those even necessary in this case (i.e., should I blindly refactor them in)?
Another question of mine regarding this (there are very many instantiated IDisposable objects in the code): isn't IDisposable making things very "ugly" here? I mean, I like using statements, as they basically free my mind from worrying about things, but in many cases the code has a layout similar to the following:
using (var one = myObject.GetSomeDisposableObject())
using (var two = myObject.GetSomeOtherDisposableObject())
{
    one.DoSomething();
    using (var foo = new DisposableFoo())
    {
        MyMethod(foo);
        using (...)
        using (...)
        {
            ...
        }
    }
}
I feel that this becomes quite unreadable due to the high indentation levels (even when stacking the using statements). But extracting some of this into new methods can lead to many parameters that need to be passed, since the "inner" code naturally often needs the objects created in the using statements.
What is an elegant way to solve this without losing readability?
For the first part, this question refers to 'memory used by the task increasing constantly' when AD references are not disposed of.
For the second: a using block is syntactic sugar for a try/finally with the Dispose call in the finally block, so a try/finally of your own would be an alternative construct that lets you dispose of everything in one place and reduces the indentation.
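Roughly, the compiler expands this (names taken from the question's snippet):
using (var one = myObject.GetSomeDisposableObject())
{
    one.DoSomething();
}
into this:
var one = myObject.GetSomeDisposableObject();
try
{
    one.DoSomething();
}
finally
{
    if (one != null)
        one.Dispose();
}
So with a single hand-written try/finally you can declare several disposables up front and dispose of them all in the one finally block, at a single indentation level.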

Arrays in PowerBuilder

I have this code:
n_userobject inv_userobject[]
FOR i = 1 TO dw_1.RowCount()
    inv_userobject[i] = CREATE n_userobject
    ...
NEXT
dw_1.RowCount() returns only 210 rows, yet somewhere above 170 iterations the application stops and crashes on inv_userobject[i] = CREATE n_userobject.
My question: is there any limit on arrays or on user objects declared in arrays?
I already tried destroying the objects after the loop to check whether that was a possible solution, but it still crashes.
Or how can I somehow refresh the user object?
Has anyone out there encountered this?
Thanks for all your help.
First, your memory problem: you're definitely not running into an array limit. If I were to take a guess, one of the instance variables in n_userobject isn't being cleaned up properly (i.e., it points to a class that isn't destroyed when the parent class is destroyed, or to a class that similarly doesn't clean itself up). If you've got PB Enterprise, I'd do a profiling trace with a smaller loop and see what is being garbage collected (there's a utility called CDMatch that really helps with this process).
Secondly, let's face it, you're just doing this to avoid writing a reset method. Even if you get this functional, it will never be as efficient as writing your own reset method and reusing the same instance over again. Yes, it's another method you'll have to maintain whenever the instance variable list changes or the defaults change, but you'll easily gain that back in performance.
Good luck,
Terry.
I'm assuming the crash you're facing is at the PBVM level, and not a regular PB exception (which you can catch in your code). If I'm wrong, please add the exception details.
A loop of 170-210 iterations really isn't a large one. However, crashes within loops are usually the result of resource exhaustion. What we usually do in long loops is call GarbageCollect() occasionally. How often it should be called depends on what your code does: calling it frequently may keep memory usage lower, but it will slow down the run.
If this doesn't help, make sure the error doesn't come from non-PB code (an imported DLL or the like). You can check the stack trace at the time of the crash to see the exception's origin.
Lastly, if you're supported by Sybase (or a local representative), you can send them a crash dump. They can analyze it, and see if it's a bug in PB, and if so, let you know when it was (or will be) fixed.
What I would normally do with a DataWindow is to create an object that processes the data in a row and call it for each row.
The only suggestion I have is to move the RowCount() call out of the FOR statement (FOR i = 1 TO dw_1.RowCount()); leaving it there causes the rows to be recounted on every iteration. Get the count into a variable first and then use the variable. It should run a bit faster and be far easier to debug.

How do I avoid a memory leak with LINQ-To-SQL?

I have been having some issues with LINQ to SQL around memory usage. I'm using it in a Windows service to do some processing, and I'm looping through a large amount of data that I'm pulling back from the context. Yes, I know I could do this with a stored procedure, but there are reasons why that would be a less than ideal solution.
Anyway, what I see is basically that memory is not released even after I call context.SubmitChanges(). So I end up having to do all sorts of weird things, like only pulling back 100 records at a time, or creating several contexts and having them each do separate tasks. If I keep the same DataContext and use it later for other calls, it just eats up more and more memory. Even if I call Clear() on the "var tableRows" array that the query returns to me, set it to null, and call System.GC.Collect(), it still doesn't release the memory.
Now, I've read a bit about how you should use DataContexts quickly and dispose of them quickly, but it seems like there ought to be a way to force the context to dump all its data (or all its tracking data for a particular table) at a certain point, to guarantee the memory is free.
Anyone know what steps guarantee that the memory is released?
A DataContext tracks all the objects it has ever fetched and won't release them until it is garbage collected. Also, as it implements IDisposable, you must call Dispose or use the using statement.
This is the right way to go:
using (DataContext myDC = new DataContext())
{
    // Do stuff
} // the DataContext is disposed here
If you don't need object tracking, set DataContext.ObjectTrackingEnabled to false. If you do need it, you can use reflection to call the internal DataContext.ClearCache(), although you have to be aware that since it's internal, it's subject to disappear in a future version of the framework. As far as I can tell, the framework itself doesn't use it, but it does clear the object cache.
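If you do go the reflection route, the call might look like the sketch below; it is fragile by design, since it targets a non-public member, so treat it as testing-only code:
// Requires: using System.Reflection;
// Invokes the internal DataContext.ClearCache() to drop the tracked objects.
static void ClearCache(System.Data.Linq.DataContext context)
{
    MethodInfo clearCache = typeof(System.Data.Linq.DataContext)
        .GetMethod("ClearCache", BindingFlags.Instance | BindingFlags.NonPublic);
    clearCache?.Invoke(context, null);
}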
As Amy points out, you should dispose of the DataContext using a using block.
It seems that your primary concern is about creating and disposing a bunch of DataContext objects. This is how LINQ to SQL is designed: the DataContext is meant to have a short lifetime. Since you are pulling a lot of data from the database, it makes sense that there will be a lot of memory usage. You are on the right track by processing your data in chunks.
Don't be afraid of creating a ton of DataContexts. They are designed to be used that way.
Thanks guys, I will check out the ClearCache method. Just for clarification (for future readers), the situation in which I was seeing the memory usage was something like this:
using (DataContext context = new DataContext())
{
    int skipAmount = 0;
    while (true)
    {
        var rows = context.tables
                          .Where(x => x.Dept == "Dept")
                          .Skip(skipAmount)
                          .Take(100)
                          .ToList();

        // break out of the loop when out of rows
        if (rows.Count == 0)
            break;

        foreach (table t in rows)
        {
            // make changes to t
        }

        context.SubmitChanges();
        skipAmount += rows.Count;

        rows.Clear();
        rows = null;

        // At this point, even though the rows have been cleared and the changes
        // have been submitted, the context is still holding onto a reference
        // somewhere to the fetched rows. So unless you create a new context,
        // memory usage keeps on growing.
    }
}
I just ran into a similar problem. In my case, setting DataContext.ObjectTrackingEnabled to false helped.
But it only works when iterating through the rows, as follows:
using (var db = new DataContext())
{
    db.ObjectTrackingEnabled = false;
    var documents = from d in db.GetTable<T>()
                    select d;
    foreach (var doc in documents)
    {
        ...
    }
}
If, for example, the query uses ToArray() or ToList(), it has no effect.

Why is my WPF Treeview bound to LinqToSql classes being a memory hog?

I have a WPF app which is grinding to a halt after running out of memory...
It is basically a TreeView displaying nodes, which are instances of the LINQ to SQL OR-generated class ICTemplates.Segment. There are around 20 tables indirectly linked via associations to this class in the OR designer.
<TreeView Grid.Column="0" x:Name="tvwSegments"
          ItemsSource="{Binding}"
          SelectedItemChanged="OnNewSegmentSelected"/>
<HierarchicalDataTemplate DataType="{x:Type local:Segment}"
                          ItemsSource="{Binding Path=Children}">
    ...

// code-behind: set the data context based on user input (Site, Id)
KeeperOfControls.DataContext = from segment in tblSegments
                               where segment.site == iTemplateSite && segment.id == iTemplateSid
                               select segment;
I've added an explicit property called Children to the segment class which looks up another table with parent-child records.
public IEnumerable<Segment> Children
{
    get
    {
        System1ConfigDataContext dc = new System1ConfigDataContext();
        return from link in this.ChildLinks
               join segment in dc.Segments
                   on new { Site = link.ChildSite, ID = link.ChildSID }
                   equals new { Site = segment.site, ID = segment.id }
               select segment;
    }
}
The rest of it is data binding coupled with data templates to display each Segment as a set of UI Controls.
I'm pretty certain that the children are being loaded on demand (when I expand the parent), going by the response time. When I expand a node with around 70 children, it takes a while for them to load (Task Manager shows Mem Usage at 1,000,000K!). If I then expand the next node, with around 50 children: BOOM, OutOfMemoryException.
I ran the VS Profiler to dig deeper (Summary Page, Object Lifetimes, and Allocation views). The top three allocated types are Action, DeferredSourceFactory.DeferredSource, and EntitySet (all .NET/LINQ classes); the only user classes, Segment[] and Segment, come in at #9 and #10.
I can't think of a lead to pursue. What could be the reason?
Maybe a using block surrounding that DataContext?
using (System1ConfigDataContext dc = new System1ConfigDataContext())
{
    .... ?
}
Also, have you tried running a SQL profiler? It might shed some light on the matter.
Have you tried using a global DataContext instead of one for each element?
Creating all of those DataContexts, each with its own query and results, could be the cause of your memory bloat.
I don't know the exact solution, but the new statements in the join may cause this, because a new anonymous object could be created for each relation (though, as I mentioned, I don't know whether that's correct).
Could you try this?
public IEnumerable<Segment> Children
{
    get
    {
        System1ConfigDataContext dc = new System1ConfigDataContext();
        // A where clause instead of join ... equals, so no anonymous-type
        // keys are allocated (a join condition cannot use && directly).
        return from link in this.ChildLinks
               from segment in dc.Segments
               where link.ChildSite == segment.site && link.ChildSID == segment.id
               select segment;
    }
}
The issue seems to be the creation of multiple S1DataContext objects, as Sirocco suggested.
I tried the using statement to force a Dispose and make the context eligible for collection. However, it resulted in an ObjectDisposedException that I can't make sense of. The control flow goes:
1. The line that sets the data context of the DockPanel KeeperOfAllControls.
2. [External Code] (shown in the call stack)
3. Segment.Children.get (which has a using block around dc)
4. Back at the line in step 1... the LINQ query uses tblSegments, which is retrieved from a local instance of S1DataContext.
Anyway, I assume something prevents multiple DataContexts from being created and disposed, so I tried a singleton DataContext.
And it works!
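A minimal sketch of what such a singleton can look like (hypothetical; ContextHolder and the lazy initialization are illustrative, not my exact code):
// One shared data context for the lifetime of the tool, created on first use.
public static class ContextHolder
{
    private static System1ConfigDataContext instance;

    public static System1ConfigDataContext Instance
    {
        get
        {
            if (instance == null)
                instance = new System1ConfigDataContext();
            return instance;
        }
    }
}
The Children property then uses ContextHolder.Instance instead of newing up a System1ConfigDataContext on every call.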
The TreeView control is significantly more responsive; every node I tried loads in 3-4 seconds max.
I put in a GC.Collect (for verification) before every fetch/search, and now the memory usage stays between 200,000K and 300,000K.
The OR-generated System.Data.Linq.DataContext doesn't seem to go away unless it is disposed explicitly (eating memory), and trying to dispose of it didn't pan out in my case, even though both functions had their own using blocks (no shared instance of DataContext). Though I dislike singletons, I'm making a small internal tool for devs, so I don't mind one for now. None of the LINQ to SQL samples I saw online mandated Dispose calls.
So I guess the problem has been fixed. Thanks to all the people that acted as more eyeballs to make this bug shallow.
