Using "using" statements for every object implementing IDisposable? - active-directory

I'm currently skimming through some code that reads Active Directory entries and manipulates them. Since I haven't worked with this kind of code before, I F12'd the classes involved (DirectoryEntry, SearchResultCollection, ...) and found that they all implement the IDisposable interface, yet I couldn't find any using blocks in our code.
Are those even necessary in this case (i.e., should I blindly refactor them in)?
Another question of mine regarding this (there are very many instantiated IDisposable objects in the code): isn't IDisposable making things quite "ugly" in this case? I mean, I like using statements because they basically free my mind from worrying about these things, but in many cases the code has a layout similar to the following:
using (var one = myObject.GetSomeDisposableObject())
using (var two = myObject.GetSomeOtherDisposableObject())
{
    one.DoSomething();

    using (var foo = new DisposableFoo())
    {
        MyMethod(foo);

        using (...)
        using (...)
        {
            ...
        }
    }
}
I feel that this becomes quite unreadable due to the high indentation levels (even when stacking the using statements). But extracting some of this into new methods can lead to many parameters that need to be passed, since the "inner" code naturally often needs the objects created in the earlier using statements.
What is an elegant way to solve this without losing readability?

For the first part, this question refers to 'memory used by the task increasing constantly' when not disposing of AD references.
For the second, a using block is syntactic sugar for a try/finally with the Dispose call in the finally block; writing the try/finally out yourself is an alternative construct that lets you dispose of everything in one place, reducing indentation.
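A minimal sketch of that flattened try/finally shape, reusing the hypothetical names from the question's example (SomeDisposable and OtherDisposable are placeholder types, not real classes):

    SomeDisposable one = null;   // placeholder types from the question's pseudocode
    OtherDisposable two = null;
    DisposableFoo foo = null;
    try
    {
        one = myObject.GetSomeDisposableObject();
        two = myObject.GetSomeOtherDisposableObject();
        one.DoSomething();

        foo = new DisposableFoo();
        MyMethod(foo);
        // ... further work stays at this single indentation level
    }
    finally
    {
        // Dispose in reverse order of acquisition; the null-conditional calls
        // cover the case where an exception was thrown before an object was created.
        foo?.Dispose();
        two?.Dispose();
        one?.Dispose();
    }

Whether this reads better than stacked using blocks is a matter of taste; it trades nesting for the discipline of keeping the finally block in sync with the acquisitions.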

Related

How to populate a nested DB without hurting encapsulation?

I have the following situation (code is python, but this is not language specific):
I need to populate a tabular DB in a way equivalent to:
for i, a in enumerate(a_list):
    for j, b in enumerate(a.bs):
        for k, c in enumerate(b.cs):
            db[i, j, k] = c.data()
This method forces c to publish its private data, thus breaking its encapsulation.
I am aware that parts of this chain could be replaced by a.update_db, b.update_db and so on, but at the very least c would still have to allow access to its private data.
Another way to tackle this would be to allow C to update the DB like so:
for i, a in enumerate(a_list):
    for j, b in enumerate(a.bs):
        for k, c in enumerate(b.cs):
            c.update_db_at(i, j, k)
This would make C know about i, j, k, which it shouldn't really care about; C's responsibility is to represent some object, not to update the DB.
This looks like a very common problem to me, and I'm sure there is some standard best practice for it.
What is a good way to populate a nested DB from a corresponding nested object structure that doesn't break encapsulation?

Is it bad programming practice to store objects of type Foo into a static array of type Foo belonging to Foo during their construction?

Say I wanted to store objects statically inside their own class. Like this:
public class Foo
{
    private static int instance_id = 0;
    public static List<Foo> instances = new List<Foo>();

    public readonly int id;   // unique id in order of creation

    public Foo()
    {
        id = instance_id++;
        instances.Add(this);  // indexing into an empty List would throw
    }
}
Why?
I don't need to create unique array structures outside the class (one will do).
I want to map each object to a unique id according to their time of birth.
I will only have one thread with the class in use. Foo will only exist as one set in the program.
I searched, but could find no mention of this data structure. Is this bad practice? If so, why? Thank you.
{please note, this question is not specific to any language}
There are a couple of potential problems I can see with this setup.
First, since you only have a single array of objects, if you need to update the code so that you have lots of different groups of objects in different contexts, you'll need to do a significant rewrite so that each object ends up getting associated with a different context. Depending on your setup this may not be a problem, but I suspect that in the long term this decision may come back to haunt you.
Second, this approach assumes that you never need to dispose of any objects. Imagine that you want to update your code so that you do a number of different simulations and aggregate the results. If you do this, then you'll end up having your giant array storing pointers to objects you're not using. This means that you'll (1) have a memory leak and (2) have to update all your looping code to skip over objects you no longer care about.
Third, this approach makes it the responsibility of the class, rather than the client, to keep track of all the instances. In some sense, if the purpose of what you're doing is to make it easier for clients to have access to a global list of all the objects that exist, you may want to consider just putting a different list somewhere else that's globally accessible so that the objects themselves aren't the ones responsible for keeping track of themselves.
I would recommend using one of a number of alternate approaches:
Just have the client do this. If the client needs to keep track of all the instances, just have them always create the array they need and populate it. That way, if multiple clients need different arrays, they can do so. You also avoid the memory leak issues if you do this properly.
Have each object take, as part of its constructor, a context in which to be constructed. For example, if all of these objects are nodes in a quadtree, have them take a pointer to the quadtree in which they'll live as a constructor parameter, then have the quadtree object store the list of the nodes in it. After all, it seems like it's really the quadtree's responsibility to keep track of everything.
Keep doing what you're doing, but use weak references. For example, you might consider some variation on a WeakHashMap so that you still store everything, but if the objects are no longer needed, you at least don't have a memory leak (see the sketch below).
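A minimal C# sketch of that last option, using WeakReference<T> (the LiveInstances method and the pruning strategy are illustrative, not from the original post):

    using System;
    using System.Collections.Generic;

    public class Foo
    {
        // Weak references let the GC reclaim instances nobody else is using.
        private static readonly List<WeakReference<Foo>> instances =
            new List<WeakReference<Foo>>();

        public Foo()
        {
            instances.Add(new WeakReference<Foo>(this));
        }

        // Enumerates the instances that are still alive, pruning dead entries first.
        public static IEnumerable<Foo> LiveInstances()
        {
            instances.RemoveAll(wr => !wr.TryGetTarget(out _));
            foreach (var wr in instances)
            {
                if (wr.TryGetTarget(out var foo))
                    yield return foo;
            }
        }
    }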

WPF app with multiple usercontrols

I am creating a simple WPF test project which contains multiple UserControls (instead of Pages). I am using a Switcher class to navigate between the different UserControls. When I navigate between pages, I have observed that memory consumption keeps increasing on each UserControl navigation, and the GC is not invoked.
1. So am I doing something wrong in the following code?
2. Which part of the code is consuming more memory?
3. Do I need to invoke the GC to dispose of my UserControls on each new UserControl creation?
If so, how can I invoke the GC?
public void On_Navigate_Click()
{
    UserControle newusercontrole = new UserControle();
    DataSet ds = con.getSome_Datafrom_SQL(); // gets data from SQL via a connection class
    dataGrid_test.ItemsSource = ds.Tables[0].DefaultView; // dataGrid_test is inside newusercontrole

    // Add the user control to the main window:
    Grid.SetColumn(newusercontrole, 1);
    Grid.SetRow(newusercontrole, 1);
    Grid.SetZIndex(newusercontrole, 10);
    Container.Children.Add(newusercontrole);
}
First off, I must point out that if garbage collection really isn't happening (as you said), it's not your fault and it does not mean you're doing something wrong. It only means that the CLR doesn't think that your system is under memory pressure yet.
Now, to manually invoke a garbage collection cycle anyway, you can use the GC.Collect() static method. If a garbage collection actually starts and your memory consumption is still unreasonably high, this means that you're probably doing something wrong: You're keeping an ever increasing number of unnecessary object references and the garbage collector cannot safely collect those objects. This is a kind of a memory leak.
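For test purposes, a full collection is typically forced like this (the WaitForPendingFinalizers call lets finalizers run before the second pass):

    GC.Collect();                  // first pass: collect unreachable objects
    GC.WaitForPendingFinalizers(); // let queued finalizers run
    GC.Collect();                  // second pass: reclaim objects whose finalizers have now run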
As far as your code goes, I think that the problem is at the end of the method you posted:
Container.Children.Add(newusercontrole);
This seems to add a new object (on every click) to the collection Container.Children. If this is not removed elsewhere, this is probably the cause of your memory leak. I don't know what the suitable solution would be for your use case (since I don't know exactly how your UI should behave), but you'll likely need to find a way to remove the last UserControle added from the Container.Children. If you can use LINQ, then the methods OfType<T>() and Last() could be of use to find it.
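A hedged sketch of that idea (assuming the previous control should simply be replaced; requires using System.Linq;):

    var previous = Container.Children.OfType<UserControle>().LastOrDefault();
    if (previous != null)
    {
        Container.Children.Remove(previous);
    }
    Container.Children.Add(newusercontrole);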
In any case, don't leave the GC.Collect() line in production code. Use it only to force a collection cycle for testing purposes, like this one.

What's a strong argument against variable redundancy in C code?

I work in safety-critical application development. Recently, as a code reviewer, I complained about the coding style shown below, but couldn't make a strong case against it. So what would be a good argument against such variable redundancy/duplication? I am looking for cases where this might lead to problems, or test cases which might fail, rather than just coding-style objections.
// global data
int Block1Var;
int Block2Var;
...

// Block1
{
    ...
    Block1Var = someCondition; // someCondition is a logical expression
    ...
}

// Block2
{
    ...
    Block2Var = Block1Var; // Block2Var is an unconditional copy of Block1Var
    ...
}
I think a little more context would be helpful, perhaps.
You could argue that the value of Block1Var is not guaranteed to stay the same across concurrent access/modification. This is only valid if Block1Var ever changes (i.e., is not only read); I don't know whether you are concerned with multi-threaded applications or not.
Readability is an important issue as well. Future code maintainers don't want to have to trace through a bunch of trivial assignments.
It depends on what's done with those variables later, but one argument is that it's not future-proof: if, in the future, someone changes the code so that the value of Block1Var changes between the two blocks, but Block2Var is still used later on (without a matching change), the result is erroneous behavior.
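A hypothetical illustration of that hazard (variable names from the question; the later assignment is invented for the example, with elisions kept in the question's own style):

    Block1Var = someCondition;       // someCondition is a logical expression
    ...
    Block2Var = Block1Var;           // the unconditional copy under review
    ...
    Block1Var = someOtherCondition;  // hypothetical later change
    ...
    if (Block2Var != 0) { ... }      // silently still tests the *old* value of Block1Var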
If the function shown reaches a certain length (I'm assuming a lot of detail has been discarded to create the minimal reproducible example for this question), a good next step could be to extract Block 2 into a new (sub-)function. That subfunction would then effectively begin by assigning Block1Var (the actual parameter) to Block2Var (the formal parameter). If there were no other coupling to the rest of the function, one could cut the rest of Block 2, drop it into the function definition, and replace the assignment with the subfunction call.
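A sketch of that extraction (the function and parameter names are invented for illustration, with elisions kept in the question's own style):

    // The copy becomes the formal parameter of a subfunction.
    static void DoBlock2Work(int block2Var)
    {
        ...
        // rest of Block 2, using block2Var instead of the global Block2Var
        ...
    }

    // Call site, replacing the unconditional copy:
    DoBlock2Work(Block1Var);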
My answer is fairly speculative, but I have seen many cases where this strategy helped me to mark useful points to split a complex function later during the development. Of course, this interpretation only applies to an intermediate stage of development and not to code that is stated to be "ready for release".

How do I avoid a memory leak with LINQ-To-SQL?

I have been having some issues with LINQ-To-SQL around memory usage. I'm using it in a Windows Service to do some processing, and I'm looping through a large amount of data that I'm pulling back from the context. Yes - I know I could do this with a stored procedure but there are reasons why that would be a less than ideal solution.
Anyway, what I see basically is that memory is not being released even after I call context.SubmitChanges(). So I end up having to do all sorts of weird things, like only pulling back 100 records at a time, or creating several contexts and having them each do separate tasks. If I keep the same DataContext and use it later for other calls, it just eats up more and more memory. Even if I call Clear() on the "var tableRows" array that the query returns to me, set it to null, and call System.GC.Collect(), it still doesn't release the memory.
Now I've read some about how you should use DataContexts quickly and dispose of them quickly, but it seems like there ought to be a way to force the context to dump all its data (or all its tracking data for a particular table) at a certain point to guarantee the memory is freed.
Anyone know what steps guarantee that the memory is released?
A DataContext tracks all the objects it ever fetched. It won't release this until it is garbage collected. Also, as it implements IDisposable, you must call Dispose or use the using statement.
This is the right way to go:
using (DataContext myDC = new DataContext())
{
    // Do stuff
} // myDC is disposed here
If you don't need object tracking, set DataContext.ObjectTrackingEnabled to false. If you do need it, you can use reflection to call the internal DataContext.ClearCache(), although you have to be aware that since it's internal, it's subject to disappear in a future version of the framework. And as far as I can tell, the framework itself doesn't use it, but it does clear the object cache.
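A hedged sketch of that reflection call (with the caveat above that ClearCache() is internal and could vanish in a future framework version):

    using System.Data.Linq;
    using System.Reflection;

    static void ClearCache(DataContext context)
    {
        // Look up the internal instance method and invoke it, if it still exists.
        var clearCache = typeof(DataContext).GetMethod(
            "ClearCache", BindingFlags.Instance | BindingFlags.NonPublic);
        clearCache?.Invoke(context, null);
    }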
As Amy points out, you should dispose of the DataContext using a using block.
It seems that your primary concern is about creating and disposing a bunch of DataContext objects, but this is how LINQ to SQL is designed: the DataContext is meant to have a short lifetime. Since you are pulling a lot of data from the database, it makes sense that there will be a lot of memory usage. You are on the right track by processing your data in chunks.
Don't be afraid of creating a ton of DataContexts. They are designed to be used that way.
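A minimal sketch of that per-chunk pattern (the table and row type names are placeholders, and the parameterless DataContext constructor follows the question's own shorthand):

    int skip = 0;
    while (true)
    {
        using (var context = new DataContext()) // fresh, short-lived context per chunk
        {
            var rows = context.GetTable<MyRow>()
                              .Skip(skip).Take(100).ToList();
            if (rows.Count == 0)
                break; // out of rows

            // ... process rows ...
            context.SubmitChanges();
            skip += rows.Count;
        } // the context and everything it tracked become collectible here
    }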
Thanks guys - I will check out the ClearCache method. Just for clarification (for future readers), the situation in which I was seeing the memory usage was something like this:
using (DataContext context = new DataContext())
{
    int skipAmount = 0;
    while (true)
    {
        var rows = context.tables.Where(x => x.Dept == "Dept")
                                 .Skip(skipAmount).Take(100).ToList();
        if (rows.Count == 0)
            break; // out of rows
        foreach (table t in rows)
        {
            // make changes to t
        }
        context.SubmitChanges();
        skipAmount += rows.Count;
        rows.Clear();
        rows = null;
        // At this point, even though the rows have been cleared and the changes
        // have been submitted, the context is still holding onto a reference to
        // the removed rows somewhere. So unless you create a new context, memory
        // usage keeps on growing.
    }
}
I just ran into a similar problem. In my case, setting DataContext.ObjectTrackingEnabled to false helped.
But it only works when iterating through the rows as follows:
using (var db = new DataContext())
{
    db.ObjectTrackingEnabled = false;
    var documents = from d in db.GetTable<T>()
                    select d;
    foreach (var doc in documents)
    {
        ...
    }
}
If, for example, you use ToArray() or ToList() on the query, it has no effect.
