Static objects in a WPF app

In my WPF app I am using a lot of objects declared as static, for caching purposes.
I'm just wondering if there are any drawbacks.

I almost never use static data, because of the inherent problems that come into play when you add worker threads.
If you only want one instance of something accessible by your objects, then perhaps the Singleton pattern will help. You might want to read this helpful article on Singletons in C#.
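A minimal sketch of that pattern in C# (the type name is hypothetical), using Lazy&lt;T&gt; so construction is thread-safe and deferred until first use:

    using System;

    public sealed class SettingsCache
    {
        private static readonly Lazy<SettingsCache> _instance =
            new Lazy<SettingsCache>(() => new SettingsCache());

        // The single, lazily created instance.
        public static SettingsCache Instance => _instance.Value;

        private SettingsCache() { }   // private ctor: no other instances possible
    }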
There's also a framework that makes requesting services really easy. You can set it up to give you a new instance of a service, or the same instance every time. The problem is that I can't remember what it's called, and I'd really appreciate it if someone else could comment, because I'd like to read up on it again. I thought it was Unity or Prism, but I'm not sure; I know the latter is for structuring your application along MVVM principles.

One disadvantage is that static members on a class don't give you full lazy instantiation: the static constructor runs the first time any member of that class is accessed. This may or may not be a big concern for you.
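To illustrate with a hypothetical example: touching any static member of the class triggers its static constructor, so the whole cache gets initialized even if you only wanted one cheap field:

    using System;

    public static class StaticCache
    {
        public static readonly DateTime LoadedAt;
        public static int Hits;

        static StaticCache()   // runs once, on the first access to ANY member
        {
            LoadedAt = DateTime.UtcNow;
            // ...an expensive cache warm-up would run here too...
        }
    }

    // Reading StaticCache.Hits alone is enough to run the static
    // constructor and pay the full initialization cost.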
A much bigger problem, in my opinion, is that statics are not good for unit testing. Say you are trying to write unit tests for another class that references those static objects. You have no way of setting up mocks for them; you're forced to use the real thing, which may force you to start up a large part of your system, in which case it's no longer a unit test but an integration test.
I don't think you need to avoid the static keyword entirely; just be aware of the limitations it puts on your program. And a Singleton is not the only alternative: you may simply choose to follow the "just create one" policy. :)
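Here is a rough sketch of what "just create one" behind an interface buys you for testing (all names hypothetical): the test can hand in a fake, which is impossible with a static class:

    public interface IPriceCache
    {
        decimal? Get(string sku);
    }

    public class QuoteService
    {
        private readonly IPriceCache _cache;

        // "Just create one" cache at startup and pass it in;
        // tests pass a mock instead.
        public QuoteService(IPriceCache cache) { _cache = cache; }

        public decimal Quote(string sku) => _cache.Get(sku) ?? 0m;
    }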

Related

Is there a reason to encapsulate if I am the only one using my code?

I understand that we encapsulate data to prevent things from being accessed that don't need to be accessed by developers working with my code. However, I only program as a hobby and do not release any of my code for other people to use. I still encapsulate, but it mostly seems like I'm doing it for the sake of good policy and building the habit. So, is there any reason to encapsulate data when I know I am the only one who will be using my code?
Encapsulation is not only about hiding data.
It is also about hiding implementation details.
When those details are hidden, callers are forced to go through the class's defined API, and the class alone can change what happens inside.
Imagine you had opened all methods up to any class interested in them, and one of them performs some calculation. Now you realize you want to replace it, because the logic is wrong or you need a more complicated calculation.
In such cases you sometimes have to change every call site across your application, instead of changing it in only one place: the API you provided.
So don't make everything public; it leads to strong coupling and pain during updates.
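For example (a hypothetical C# sketch): callers can only go through Total(), so when the calculation changes, say to add discounts, only one place changes:

    using System.Collections.Generic;
    using System.Linq;

    public class Order
    {
        private readonly List<decimal> _lines = new List<decimal>();

        public void Add(decimal price) => _lines.Add(price);

        // The only way to get a total; change the logic here
        // (discounts, tax, rounding) and no caller is touched.
        public decimal Total() => _lines.Sum();
    }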
Encapsulation is not only about creating "getters" and "setters"; it is also about exposing a small API for accessing the data (when needed).
Encapsulation keeps access to the data in one place and lets you manage it in a more "abstract" way, reducing errors and making your code more maintainable.
If your personal projects are small and simple, you can do whatever feels fastest to produce what you need, but bear the consequences in mind ;)
I don't think unnecessary data access can happen only through third-party developers; it can happen through you as well, right? When you allow direct access to data via access rights on variables/properties, whoever works with it, be it you or someone else, may end up creating bugs by touching the data directly.

What is the advantage of using AngularJS dependency injection?

I have been working with Angular for some time now, and I fail to see how it improves on my previous way of coding.
First, I can't see what's wrong with having a central object to hold your project. After all, the injector is a singleton that looks up your dependencies in a central place, so Angular does have a central object; it's just hidden. Namespacing doesn't necessarily mean coupling if it's done properly, and even when it is, you don't need every single object in your code to be loosely coupled with the others. Besides, any time you create a standalone JS script, you have to wrap it in Angular to make it play nicely with the rest.
Second, it's very verbose to declare all your dependencies every time (especially with minification), so there is no readability gain compared to proper namespacing.
Third, the performance gain is minimal. It forces me to use singletons everywhere, but I can do that on my own if I need to, and most of the time, I don't (network and DOM manipulations are my bottleneck, not JS objects).
In the end, I like the "enhanced" HTML and the automatic two-way bindings, but I can't see how the injection makes anything better than the way other frameworks deal with dependencies, given that it doesn't even provide dynamic loading like require.js. I haven't seen any use case while coding where I said to myself, "oh, this is where it's so much better than before".
Could you explain to me what benefits this technical choice brings to a project?
I can see only one for now: convention and best-practice enforcement. That's a big one for building a library ecosystem, but so far I don't see the fruits of it in the Angular community.
For me there are a few ways in which Angular's dependency injection improves my projects, which I will list here. I hope this shows you how OTHERS can benefit from it; if you are a well-organised and experienced JS developer, it may well not apply to you. At some point this is simply a matter of developing your own tools and coding guidelines.
Unified, declarative dependency resolving
JS is a dynamic language (that's news, huh?), which gives a lot of power, and even more responsibility, to the programmer. Components can interact with each other in various ways by passing around all sorts of objects: regular objects, singletons, functions, etc. They can even make use of blocks of code that were never meant to be used by other components.
JS has never had (and most likely never will have) a unified way of declaring public, private, or package (module) scopes the way other languages do (Java, C, C#). Of course there are ways of encapsulating logic, but ask any newcomer to the language and they simply won't know how to use them.
What I like about DI (not only in Angular, but in general) is that you can list your component's dependencies without being troubled by how each dependency gets constructed. This is very important to me, especially since DI in Angular lets you resolve both kinds of components: those from the framework itself (like $http) and custom ones (like my favorite eventBus, which I use to wrap $on event handlers).
Very often I look at the declaration of a service and know what it does and how it does it just by looking at its dependencies!
If I had to construct and/or use all those objects deep inside the component itself, I would always have to analyze the implementation thoroughly and check it from various angles. If I see localStorage in the dependency list, I know for a fact that the component uses HTML5 local storage to save some data; I don't have to hunt for it in the code.
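A small sketch of what this looks like in practice (module and service names are hypothetical; eventBus is the custom wrapper mentioned above). The dependency list alone tells you the service does HTTP and publishes events, and the array annotation also survives minification:

    angular.module('app', [])
      .factory('eventBus', ['$rootScope', function ($rootScope) {
        // Thin wrapper around $rootScope events.
        return {
          emit: function (name, data) { $rootScope.$emit(name, data); },
          on: function (name, handler) { return $rootScope.$on(name, handler); }
        };
      }])
      .factory('profileService', ['$http', 'eventBus', function ($http, eventBus) {
        return {
          load: function (id) {
            return $http.get('/api/profiles/' + id).then(function (res) {
              eventBus.emit('profile:loaded', res.data);
              return res.data;
            });
          }
        };
      }]);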
Lifespan of components
We no longer need to worry about the initialization order of our components. If A depends on B, then DI makes sure B is ready when A needs it.
Unit testing
It helps a lot to be able to mock out components when you are using DI. For instance, if you have a controller, function Ctrl($scope, a, b, c, d), then you instantly know what it depends on. You inject proper mocks, making sure that all parties talking and listening to your controller are isolated. If you have trouble writing tests, you most likely messed up your levels of abstraction or are violating design principles (the Law of Demeter, encapsulation, etc.).
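A sketch of such a test (Jasmine with angular-mocks; it assumes a hypothetical Ctrl registered on the app module that calls profileService.load when created):

    describe('Ctrl', function () {
      var $controller, $rootScope;

      beforeEach(module('app'));
      beforeEach(inject(function (_$controller_, _$rootScope_) {
        $controller = _$controller_;
        $rootScope = _$rootScope_;
      }));

      it('loads a profile through the injected service', function () {
        // The fake satisfies the same contract as the real service,
        // so the controller under test is fully isolated.
        var fakeProfileService = { load: jasmine.createSpy('load') };
        var scope = $rootScope.$new();

        $controller('Ctrl', { $scope: scope, profileService: fakeProfileService });

        expect(fakeProfileService.load).toHaveBeenCalled();
      });
    });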
Good habits
Yes, most likely you could use namespacing to properly manage the lifespan of your objects: define a singleton where it's needed and make sure that no one messes with your private members.
But honestly, would you need that if the framework can do it for you?
I hadn't been using JS "the right way" until I learned Angular. It's not that I didn't care; I just didn't have to, since I was using JS only for some UI tweaks, primarily based on jQuery.
Now it's different: I have a nice framework that nudges me to keep up with good practices, and it also gives me great power to extend it and make use of the best features JS has.
Of course poor programmers can still break even the best tool, but from what I've learned recently reading JavaScript: The Good Parts by Douglas Crockford, people used to do really nasty stuff with the language. Thanks to great tools like jQuery, Angular, and others, we now have real help in writing good JS applications and sticking to best practices while doing so.
Conclusion
As you have pointed out, you CAN write good JS applications by doing at least these three things:
Namespacing - this avoids adding stuff to the global namespace, avoids potential conflicts, and lets you resolve the proper components easily where needed
Creating reusable components/modules - by, for instance, using the module pattern and explicitly declaring private and public members
Managing dependencies between components - by defining singletons, allowing dependencies to be retrieved from some 'registry', and disallowing certain things when certain conditions are not met
Angular simply does that by:
Having $injector, which manages dependencies between components and retrieves them when needed
Forcing you to use factory functions, which have both PRIVATE and PUBLIC APIs (see the sketch after this list)
Letting components talk to each other either directly (by depending on one another) or via the shared $scope chain
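A quick sketch of the second point (names hypothetical): the factory's closure holds private state, and only the returned object is public:

    angular.module('app').factory('counter', function () {
      var count = 0;   // private: invisible outside the closure

      return {         // public API, the only thing injectors ever see
        increment: function () { count += 1; return count; },
        current: function () { return count; }
      };
    });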

Caching TaskScheduler retrieved FromCurrentSynchronizationContext in WPF app

Does it make sense to store/cache the TaskScheduler returned from TaskScheduler.FromCurrentSynchronizationContext while loading a WPF app and use it everywhere from then on? Are there any drawbacks to this kind of usage?
What I mean by caching is storing a reference to the TaskScheduler in a singleton and making it available to all parts of my app, probably with the help of a DI/IoC container, or worst case in a bare ol' singleton.
As Drew says, there's no performance benefit to doing this. But there might be other reasons to hold onto a TaskScheduler.
One reason you might want to do it is that by the time you need a TaskScheduler it may be too late to call FromCurrentSynchronizationContext because you may no longer be able to be certain that you are in the right context. (E.g., perhaps you can guarantee that your constructor runs in the right context, but you have no guarantees about when any of the other methods of your class are called.)
Since the only way to obtain a TaskScheduler for a SynchronizationContext is through the FromCurrentSynchronizationContext method, you would need to store a reference to the TaskScheduler itself, rather than just grabbing SynchronizationContext.Current during your constructor. But I'd probably avoid calling this "caching" because that word implies that you're doing it for performance reasons, when in fact you'd be doing it out of necessity.
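A sketch of that situation (class and member names hypothetical): the constructor is known to run on the UI thread, so that is where the scheduler gets captured:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class ImageLoader
    {
        private static readonly HttpClient Client = new HttpClient();
        private readonly TaskScheduler _uiScheduler;

        public ImageLoader()
        {
            // Safe here only because we know the ctor runs on the UI thread;
            // later calls to LoadAsync may come from anywhere.
            _uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();
        }

        public Task LoadAsync(Uri uri, Action<byte[]> display)
        {
            return Client.GetByteArrayAsync(uri)
                .ContinueWith(t => display(t.Result), _uiScheduler);
        }
    }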
Another possibility is that you might have code that has no business knowing which particular TaskScheduler it is using, but which still needs a scheduler because it fires off new tasks. (If you start new tasks, you're choosing a scheduler even if you don't realise it; if you don't explicitly choose one, you'll use the default, which isn't always the right thing.) I've written code where this is the case: methods that accept a TaskScheduler object as an argument and use it. So this is another scenario where you might want to keep hold of a reference to a scheduler. (I was using it because I wanted certain IO operations to happen on a particular thread, so I was using a custom scheduler.)
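For instance, a method can take the scheduler as a parameter and stay ignorant of which one it gets (a hypothetical sketch):

    using System.IO;
    using System.Threading;
    using System.Threading.Tasks;

    public static class FileSaver
    {
        // The caller decides where the write runs: a custom IO scheduler,
        // the default pool, whatever fits.
        public static Task SaveAsync(string path, byte[] data, TaskScheduler scheduler)
        {
            return Task.Factory.StartNew(
                () => File.WriteAllBytes(path, data),
                CancellationToken.None,
                TaskCreationOptions.None,
                scheduler);
        }
    }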
Having said all that, an application-wide singleton doesn't sound like a great idea to me, because it tends to make testing harder. And it also implies that the code grabbing that shared scheduler is making assumptions about which scheduler it should be using, and that might be a code smell.
The underlying implementation of FromCurrentSynchronizationContext just instantiates an internal class named SynchronizationContextTaskScheduler, which is extremely lightweight: all it does is cache the SynchronizationContext it finds when constructed, and its QueueTask implementation simply does a Post to that SynchronizationContext to execute the Task.
So, all that said, I would not bother caching these instances at all.

Has inheritance become bad?

Personally, I think inheritance is a great tool, that, when applied reasonably, can greatly simplify code.
However, it seems to me that many modern tools dislike inheritance. Let's take a simple example: serializing a class to XML. As soon as inheritance is involved, this can easily turn into a mess, especially if you're trying to serialize a derived class using the base class serializer.
Sure, we can work around that, with something like a KnownType attribute and such. But besides being an itch in your code that you have to remember to scratch every time you add a derived class, that fails, too, if you receive a class from outside your scope that was not known at compile time. (Okay, in some cases you can still work around it, for instance using the NetDataContractSerializer in .NET. Surely a certain advancement.)
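For reference, this is the workaround being described (a minimal DataContractSerializer sketch; type names hypothetical): the base class must list every derived type it may encounter:

    using System.Runtime.Serialization;

    [DataContract]
    [KnownType(typeof(Derived))]   // must be updated for every new subclass
    public class Base
    {
        [DataMember] public string Name { get; set; }
    }

    [DataContract]
    public class Derived : Base
    {
        [DataMember] public int Extra { get; set; }
    }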
In any case, the basic principle still exists: Serialization and inheritance don't mix well. Considering the huge list of programming strategies that became possible and even common in the past decade, I feel tempted to say that inheritance should be avoided in areas that relate to serialization (in particular remoting and databases).
Does that make sense? Or am I messing things up? How do you handle inheritance and serialization?
There are indeed a few gotchas with inheritance and serialization. One is that it creates an asymmetry between serialization and deserialization: if a class is subclassed, serialization works transparently, but deserialization fails unless the deserializer is made aware of the new class. That's why we have annotations such as @XmlSeeAlso to annotate data for XML serialization.
These problems are, however, nothing new with inheritance. They are frequently discussed under the open-world/closed-world terminology: either you assume you know the whole world of classes, or you may be in a situation where new classes are added by third parties. Under a closed-world assumption, serialization isn't much of a problem; under an open-world assumption, it is more problematic.
But inheritance under the open-world assumption has other gotchas anyway. E.g., if you remove a protected method from your classes and refactor accordingly, how can you ensure that no third-party class was using it? In an open world, the public and internal API of your classes must be considered frozen once made available to others, and you must take great care when evolving the system.
There are other, more technical, internal details of how serialization works that can be surprising. These are for Java, but I'm pretty sure .NET has similar issues: e.g. "Serialization Killer" by Gilad Bracha, or the serialization and security manager bug exploit.
I ran into this on my current project, and while this might not be the best way, I created a service layer of sorts for it, with its own classes. I think it ended up being named an ObjectToSerialized translator, plus a couple of interfaces. Typically this was one-to-one (the "object" and the "serialized" version had exactly the same properties), so adding something to the interface would tell you: "hey, add this over here too".
I want to say I had an IToSerialized interface with a simple method on it for generic purposes, and I used AutoMapper for most of the conversions. Sure, it's a bit more code, but hey, it worked and doesn't gum up other things.
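Roughly, the idea looks like this (all names hypothetical, reconstructed from the description above): the domain type stays free to use inheritance, while a flat, serialization-friendly twin is produced through the interface:

    public class Customer                 // domain type, may sit in a hierarchy
    {
        public string Name { get; set; }
    }

    public class SerializedCustomer       // flat one-to-one shape for the wire
    {
        public string Name { get; set; }
    }

    public interface IToSerialized<TDto>
    {
        TDto ToSerialized();
    }

    public class CustomerTranslator : IToSerialized<SerializedCustomer>
    {
        private readonly Customer _customer;
        public CustomerTranslator(Customer customer) { _customer = customer; }

        // Hand-rolled here; AutoMapper can do the one-to-one mapping instead.
        public SerializedCustomer ToSerialized() =>
            new SerializedCustomer { Name = _customer.Name };
    }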
