Personally, I think inheritance is a great tool that, when applied reasonably, can greatly simplify code.
However, it seems to me that many modern tools dislike inheritance. Let's take a simple example: serializing a class to XML. As soon as inheritance is involved, this can easily turn into a mess, especially if you're trying to serialize a derived class using the base class's serializer.
Sure, we can work around that, with something like a KnownType attribute. But besides being an itch in your code that you have to remember to update every time you add a derived class, it fails, too, if you receive a class from outside your scope that was not known at compile time. (Okay, in some cases you can still work around that, for instance using the NetDataContractSerializer in .NET. Surely a certain advancement.)
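What I mean is something like this (made-up classes, just to show the shape of the workaround):

using System.Runtime.Serialization;

[DataContract]
[KnownType(typeof(PremiumCustomer))]   // must be kept up to date for every new derived class
public class Customer
{
    [DataMember] public string Name { get; set; }
}

[DataContract]
public class PremiumCustomer : Customer
{
    [DataMember] public int LoyaltyPoints { get; set; }
}

// With XmlSerializer the equivalent is [XmlInclude(typeof(PremiumCustomer))] on Customer.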
In any case, the basic point still stands: serialization and inheritance don't mix well. Considering the huge list of programming strategies that have become possible and even common in the past decade, I feel tempted to say that inheritance should be avoided in areas that relate to serialization (in particular remoting and databases).
Does that make sense? Or am I messing things up? How do you handle inheritance and serialization?
There are indeed a few gotchas with inheritance and serialization. One is that it leads to an asymmetry between serialization and deserialization. If a class is subclassed, serialization will work transparently, but deserialization will fail unless the deserializer is made aware of the new class. That's why we have annotations such as @XmlSeeAlso to annotate data for XML serialization.
These problems are not unique to inheritance, however. They're frequently discussed under the terminology of the open world vs. closed world assumption. Either you consider that you know the whole world and all its classes, or you might be in a situation where new classes are added by third parties. Under a closed-world assumption, serialization isn't much of a problem; it's more problematic under an open-world assumption.
But inheritance under the open-world assumption has other gotchas anyway. E.g. if you remove a protected method from your classes and refactor accordingly, how can you ensure that there isn't a third-party class that was using it? In an open world, the public and internal API of your classes must be considered frozen once made available to others, and you must take great care when evolving the system.
There are other, more technical, internal details of how serialization works that can be surprising. That's for Java, but I'm pretty sure .NET has similar issues. See, e.g., "Serialization Killer" by Gilad Bracha, or the serialization and security manager bug exploit.
I ran into this on my current project, and this might not be the best way, but I created a service layer of sorts for it with its own classes. I think it came out being named ObjectToSerialized translator, plus a couple of interfaces. Typically this was one-to-one (the "object" and the "serialized" version had the exact same properties), so adding something to the interface would let you know: "hey, add this over here too".
I want to say I had an IToSerialized interface with a simple method on it for generic purposes and used AutoMapper for most of the conversions. Sure, it's a bit more code, but hey, it worked and didn't gum up other things.
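It looked roughly like this (names made up here, not the real project's classes; newer AutoMapper versions may differ in setup details):

using AutoMapper;

// The domain object stays free of serialization concerns...
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// ...and its "serialized" twin mirrors it one-to-one.
public class OrderDto
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// The simple, generic conversion interface.
public interface IToSerialized<TDto>
{
    TDto ToSerialized();
}

public class OrderTranslator : IToSerialized<OrderDto>
{
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderDto>()).CreateMapper();

    private readonly Order order;

    public OrderTranslator(Order order) { this.order = order; }

    public OrderDto ToSerialized() => Mapper.Map<OrderDto>(order);
}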
I understand that we encapsulate data to prevent things from being accessed that don't need to be accessed by developers working with my code. However, I only program as a hobby and do not release any of my code to be used by other people. I still encapsulate, but it mostly seems like I'm just doing it for the sake of good policy and building the habit. So, is there any reason to encapsulate data when I know I am the only one who will be using my code?
Encapsulation is not only about hiding data.
It is also about hiding implementation details.
When such details are hidden, you are forced to use the class's defined API, and the class is the only one that can change what's inside it.
So imagine a situation where you have opened all methods to any class interested in them, and one of those methods performs some calculation. Then you realize that the logic is wrong and you want to replace it, or that you need to perform a more complicated calculation.
In such cases you sometimes have to change every place across your application in order to change the result, instead of changing it in only one place: the API you provided.
So don't make everything public; it leads to strong coupling and pain during the update process.
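A tiny, made-up example of that "change it in only one place" idea:

using System.Collections.Generic;
using System.Linq;

public class Invoice
{
    // Hidden implementation detail: callers never see or touch this list directly.
    private readonly List<decimal> lines = new List<decimal>();

    public void AddLine(decimal amount) => lines.Add(amount);

    // The one place to change if the calculation needs tax, discounts, rounding, etc.
    public decimal Total() => lines.Sum();
}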
Encapsulation is not only about creating "getters" and "setters", but also about exposing a sort of API to access the data (if needed).
Encapsulation lets you keep access to the data in one place and allows you to manage it in a more "abstract" way, reducing errors and making your code more maintainable.
If your personal projects are simple and small, you can do whatever you feel like in order to produce what you need quickly, but bear in mind the consequences ;)
I don't think unnecessary data access can only happen through third-party developers. It can happen through you as well, right? When you allow direct access to data via access rights on variables/properties, whoever is working with that data, be it you or someone else, may end up creating bugs by accessing it directly.
I come from a server-side language background (Symfony2). What I know is that dependency injection and service-oriented architectures are specific to object-oriented programming. From their documentation:
Structuring your application around a set of independent service classes is a well-known and trusted object-oriented best-practice.
These skills are key to being a good developer in almost any language.
I am now reading the documentation of a client-side framework (AngularJS), specifically the dependency injection chapter. Is AngularJS written with OOP? Could someone please help me understand?
Thanks for your usual help.
The concept of dependency injection is based on the following ideas:
An entity (object, module, etc.) should not programmatically create the entities it depends upon.
Those dependencies should instead be passed in as parameters (injected).
This reduces unwanted coupling and allows other implementations to be substituted more easily (e.g., alternative data sources, or stubs and mocks for testing), as sketched right after this list.
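Sketched in C# with made-up names (the same idea applies in JavaScript or any other language):

public interface IDataSource
{
    string Fetch();
}

public class DatabaseSource : IDataSource
{
    public string Fetch() => "data from the real database";
}

public class StubSource : IDataSource   // a substitute for testing
{
    public string Fetch() => "canned test data";
}

public class ReportBuilder
{
    private readonly IDataSource source;

    // The dependency is injected; ReportBuilder never creates its own data source.
    public ReportBuilder(IDataSource source) { this.source = source; }

    public string Build() => "Report: " + source.Fetch();
}

// new ReportBuilder(new DatabaseSource()) in production, new ReportBuilder(new StubSource()) in tests.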
As Doug Luce states in his answer, this concept can apply in other programming paradigms as well. The term is most common in object oriented circles because:
The origin of (programming) design patterns was object oriented programming.
Dependency Injection can alleviate the tendency of large object oriented programs to become too tightly coupled, difficult to test and brittle to change.
In some situations, such as pure functional programming, there is less need for something like dependency injection (it tends to happen naturally).
In structural languages without object oriented features, the concept is still very useful (perhaps even more so). Obvious and easily used mechanisms for dependency injection are not agreed upon in (non object oriented) structural languages.
Since JavaScript has both object oriented and functional features, I would expect AngularJS programmers to make full use of these features and use dependency injection when appropriate.
When the concept is called "dependency injection," it's almost always couched in the verbiage of object-oriented patterns. But the idea of passing in a wad of executable code that the function can use only depends on the language system having a way to do that: function types, closures, monads, promises, or whatever might do the trick.
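For instance, here's the same idea with nothing but a function type (a hedged sketch with hypothetical names, C# standing in for any language with closures):

using System;

public class PriceCalculator
{
    // The injected behaviour is just a function; no interface or class hierarchy needed.
    private readonly Func<decimal, decimal> applyDiscount;

    public PriceCalculator(Func<decimal, decimal> applyDiscount)
    {
        this.applyDiscount = applyDiscount;
    }

    public decimal FinalPrice(decimal basePrice) => applyDiscount(basePrice);
}

// var calculator = new PriceCalculator(price => price * 0.9m);   // the "wad of executable code" is a closure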
I'm writing a WPF application using a MVVM pattern and using Prism in selected places for loose coupling, and I'd like to have logging messages shown in a window and written to a file. The subset of messages going each way may not be the same.
I think I should publish a message through the EventAggregator (MS-Prism implementation of observer pattern) and have two objects subscribe: one that updates the LogWindowViewModel and one that logs using the Enterprise Library logger. Is this a good idea or am I duplicating something that's already implemented?
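Roughly what I have in mind, with made-up class names (assuming Prism's PubSubEvent; in older Prism versions it's CompositePresentationEvent):

using Prism.Events;   // older Prism versions: Microsoft.Practices.Prism.Events

public class LogEntryMessage
{
    public string Severity { get; set; }
    public string Text { get; set; }
}

// With older Prism this would derive from CompositePresentationEvent<LogEntryMessage> instead.
public class LogEntryEvent : PubSubEvent<LogEntryMessage> { }

public class LogPublisher
{
    private readonly IEventAggregator aggregator;

    public LogPublisher(IEventAggregator aggregator) { this.aggregator = aggregator; }

    public void Log(string severity, string text) =>
        aggregator.GetEvent<LogEntryEvent>().Publish(new LogEntryMessage { Severity = severity, Text = text });
}

// Each consumer subscribes independently and applies its own filtering:
// aggregator.GetEvent<LogEntryEvent>().Subscribe(m => logWindowViewModel.Append(m));
// aggregator.GetEvent<LogEntryEvent>().Subscribe(m => fileLogger.Write(m));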
The fact that the log message will be different in each output is the limiting factor.
Extending the Logging Application Block may suffice; defining a CustomTraceListener or an ILogFilter may work out for you. This would avoid needing to use the EventAggregator.
It boils down to who has the knowledge of what and where to log. Are the differences driven off values within the logging engine such as severity? Are they instead driven by the consumer of the logging engine and therefore tightly coupled to the class itself? These types of questions will dictate your choice.
Leveraging the extension points in the logging block would be my first choice before having to rely on using the EventAggregator.
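As a rough idea of what such a listener looks like (hypothetical names, shown here against the plain System.Diagnostics.TraceListener for brevity; the block's CustomTraceListener is a separate base class with a similar override shape, so check its docs for the details):

using System;
using System.Diagnostics;

// The window-bound listener decides for itself what it wants to show.
public class LogWindowTraceListener : TraceListener
{
    private readonly Action<string> appendToWindow;   // e.g. a method on the LogWindowViewModel

    public LogWindowTraceListener(Action<string> appendToWindow)
    {
        this.appendToWindow = appendToWindow;
    }

    public override void Write(string message) => appendToWindow(message);

    public override void WriteLine(string message) => appendToWindow(message + Environment.NewLine);
}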
I think the idea is fine. There is not that much functionality to be duplicated, it seems.
I used Common.Logging as the data collector, filter and distributor for something comparable, and wrote a custom appender for my own processing and UI output.
In my WPF app I am using a lot of objects declared as static, for caching purposes.
Just wondering if there are any drawbacks.
I almost never use static data, because of the inherent problems that come into play when you add worker threads.
If you only want one instance of something accessible by your objects, then perhaps the Singleton pattern will help. You might want to read this helpful article on Singletons in C#.
There's also a framework available that makes requesting services really easy. You can set up the Framework to give you a new instance of a service, or the same service every time. The problem is that I can't remember what it's called, and would really appreciate it if someone else could comment on this because I'd like to read up on it again. I thought it was Unity or Prism, but I'm not sure. I know the latter framework is for setting up your application with MVVM principles in mind.
One disadvantage is that static members on a class don't have full lazy instantiation. The static constructor will be run the first time any member of that class is accessed. This may or may not be a big concern for you.
A much bigger problem, in my opinion, is that statics are not good for unit testing. Say you are trying to write unit tests for another class that references those static objects. You have no way of setting up a mock for those objects. You're forced to use the real thing, which may end up forcing you to start up a large part of your system, in which case it's no longer a unit test but an integration test.
I don't think you need to avoid the static keyword entirely; just be aware of the limitations you're putting on your program by using it. And using a Singleton is not the only alternative. You may simply choose to follow the "just create one" policy. :)
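To make that concrete, here's a small made-up example contrasting the static approach with the "just create one" approach:

using System.Collections.Generic;

// The static approach: convenient, but every consumer is hard-wired to this exact class.
public static class ProductCache
{
    public static readonly Dictionary<int, string> Items = new Dictionary<int, string>();
}

// The "just create one" approach: same data, but consumers depend on an interface,
// so unit tests can hand them a fake cache instead of the real one.
public interface IProductCache
{
    string Get(int id);
}

public class InMemoryProductCache : IProductCache
{
    private readonly Dictionary<int, string> items = new Dictionary<int, string>();

    public string Get(int id) => items.TryGetValue(id, out var value) ? value : null;
}

public class PriceService
{
    private readonly IProductCache cache;

    public PriceService(IProductCache cache) { this.cache = cache; }   // mockable in tests
}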
I'm starting a new project and I've recently found Castle Project's ActiveRecord, which seems like a GREAT solution, but at the same time it looks like something really unconventional.
I was wondering: does this feeling come from learning something new (and I should just get used to it), or is it really bad practice?
Part of what felt weird to me about using ActiveRecord was having to inherit from ActiveRecordBase<T>, and having all those persistence methods on your object (Save and so forth).
But it turns out you don't have to! Instead of having, say:
[ActiveRecord]
class Customer : ActiveRecordBase<Customer> { }
You can just have
[ActiveRecord]
class Customer /* : whatever base class you want, or none at all */ { }
and then use ActiveRecordMediator<Customer>. It has basically the same static methods that ActiveRecordBase<T> has, but this way you don't have to clutter your object model with them. If you don't need the various protected method event hooks in ActiveRecordBase<T>, this can make things simpler.
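For example (a rough sketch from memory, so double-check the attribute details against the Castle documentation):

using Castle.ActiveRecord;

[ActiveRecord]
public class Customer   // no ActiveRecordBase<T>, no Save() on the object itself
{
    [PrimaryKey] public int Id { get; set; }
    [Property] public string Name { get; set; }
}

// Persistence goes through the mediator instead:
// ActiveRecordMediator<Customer>.Save(customer);
// var all = ActiveRecordMediator<Customer>.FindAll();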
ActiveRecord is a design pattern first named by Martin Fowler in Patterns of Enterprise Application Architecture. It is fairly common and used extensively in the popular Ruby framework Rails.
It contrasts with the more usual style of development in the .Net world which is to use DAOs, and that perhaps explains why you're uneasy.
A suggestion: read the source code for some Ruby on Rails applications which are similar to your own projects, and evaluate how you like the design style that results from heavy use of ActiveRecord.
It's not a bad solution, but it has its downsides.
In Patterns of Enterprise Application Architecture Martin Fowler describes several ways of designing applications that are built on top of a database. These methods differ in the way the application is decoupled from the database. He also describes that more decoupling makes more complex applications possible. Active Record is described as a way to design simpler applications but for applications with more complex behaviour you need a Domain Model that is independent of the database and something like an object-relational mapper in between.
ActiveRecord works very well in Ruby, but it's not easily transferable to all languages. The central feature of AR is the metaphor of table = class, row = instance. This comes out quite elegantly in Ruby, because classes are also objects. In other languages, classes are usually a special kind of construct, and then you have to jump through all sorts of hoops to make the pattern work properly. This takes away some of the natural feel it has in Ruby.
The mixing of the domain object with the service layer is the biggest bad practice (if you see it as a bad practice). You end up calling user.Save(), which means that if you want to change your ORM, you are tied to this pattern. The two alternatives are a separate layer, i.e. a set of facade classes that perform your CRUD operations, or putting this inside the domain object as something like
User.Service.Save(user);
If you're using .NET, then Castle ActiveRecord is obviously ActiveRecord-based, as are Coolstorage, SubSonic and a few others.
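The facade alternative looks roughly like this (hypothetical names):

// The domain object stays persistence-free, and swapping the ORM only means
// rewriting this service, not the domain model.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IUserService
{
    void Save(User user);
    User GetById(int id);
}

public class NHibernateUserService : IUserService   // or an implementation for any other ORM
{
    public void Save(User user) { /* ORM-specific code lives here, not on User */ }

    public User GetById(int id) { /* ORM-specific query here */ return null; }
}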
No, it's not bad practice. Even in .NET it's a fairly well-established pattern now. SubSonic (http://subsonicproject.com) and LINQ to SQL also use the pattern.
Implementations of the pattern, such as Subsonic, are great for quickly and easily creating a data access layer that manages the CRUD for your application.
That doesn't mean it's a good solution for all systems. For large, complex systems you probably want to have less coupling to the data store.
I think ActiveRecord as a pattern doesn't have much to do with Castle specifically, so the answers to the question "Does the ActiveRecord pattern follow/encourage the SOLID design principles?" could be more enlightening to many.