As a proxy for something real I am working on, consider building a model. We'd have a Parts class representing model parts, complete with methods to manufacture the part based on given parameters, to validate, etc. We have a Glue class which does much the same, and a ModelBuilder class.
I can manufacture my glue and parts with nice encapsulation. Then I pass my PartsInstance and my GlueInstance to my ModelBuilder as parameters. Here's where everything falls apart. I can tell my ModelBuilder to build, but to do so, it needs access to the data in the Parts (and maybe the Glue). I may for instance need to know where the corner or center of PartsInstance.Part[0] is in order to properly build.
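Here is a rough sketch of the shape I have in mind (C++ only for illustration; every name here is a placeholder):

#include <cstddef>
#include <vector>

struct Point { double x = 0, y = 0, z = 0; };

class Part {
public:
    Point corner() const { return corner_; }   // exposing state the builder asks for
private:
    Point corner_;
};

class Parts {
public:
    const Part& part(std::size_t i) const { return parts_.at(i); }
private:
    std::vector<Part> parts_;
};

class Glue { /* manufacture(), validate(), ... */ };

class ModelBuilder {
public:
    ModelBuilder(const Parts& parts, const Glue& glue) : parts_(parts), glue_(glue) {}
    void build() {
        // This is the part that feels wrong: ask for state, then decide out here.
        Point corner = parts_.part(0).corner();
        // ...position the part based on corner and the kind of glue...
        (void)corner;
    }
private:
    const Parts& parts_;
    const Glue& glue_;
};

int main() {
    Parts parts;             // imagine these were already manufactured and validated
    Glue glue;
    ModelBuilder builder(parts, glue);
    // builder.build();      // would ask the parts for their state, which is the question
}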
I'm in a quandary here because it seems the Tell, Don't Ask camp would say that the part should build itself somehow. But that doesn't make sense to me. A part is a part, not the whole model; the model is what should build itself. The ModelBuilder could be quite complex, and it's possible that the Parts might be used somewhere else. On the other hand, I'm inquiring as to the state of the PartsInstance object and then making a decision based on that state. Furthermore, the decision may not be something that fits solely inside the Parts concept anyway; it may involve the type of Glue I've chosen.
How can I construct or rework this example so that it doesn't violate encapsulation?
I would like to know the difference between the Visitor pattern and simply using a static method to execute code separately.
Let's take a look at an example where I might use the Visitor pattern:
new AnalyticsVisitor.accept(myClass);
and this, when called from myClass for example, would move the work into a visitor to execute. It could even be garbage collected faster if the work is memory intensive.
Now let's take a look at using a simple method to achieve more or less the same thing:
new AnalyticsManager.execute(myClass);
Have I achieved the same thing?
I have code separation.
I can apply this to several data structures
I can add info to legacy code without changing it.
So why use the Visitor pattern instead of just a class (unless for double dispatch)?
This question is still a little confused. I suspect you haven't understood the goal of the Visitor pattern.
As discussed here, the visitor pattern is useful when you have a complex data structure (such as a parse tree) that is relatively stable (in terms of development), but you want to be able to keep adding new operations on all of its elements. This is clumsy with standard OO techniques.
The technique the visitor pattern is based on is double dispatch, so when you say "Why use the Visitor pattern unless for double-dispatch?" you are effectively saying "Why use the visitor pattern at all?"
Your example code only includes the client, so it isn't clear what your new technique actually offers.
The supplied code appears to be backwards for a real visitor pattern. It should be:
my_datastructure.accept(analytics_visitor);
where analytics_visitor inherits from MyDataStructureVisitor, and supplies individual methods for each of the element types that the data structure can hold.
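To make the double dispatch concrete, here is a minimal sketch in C++; the element types (Leaf, Branch) and the counting operation are invented purely for illustration:

#include <iostream>
#include <memory>
#include <vector>

class Leaf;      // element types held by the data structure
class Branch;

class MyDataStructureVisitor {          // one method per element type
public:
    virtual ~MyDataStructureVisitor() = default;
    virtual void visit(const Leaf&) = 0;
    virtual void visit(const Branch&) = 0;
};

class Node {
public:
    virtual ~Node() = default;
    virtual void accept(MyDataStructureVisitor& v) const = 0;  // first dispatch: on the node type
};

class Leaf : public Node {
public:
    void accept(MyDataStructureVisitor& v) const override { v.visit(*this); }  // second dispatch: on the visitor
};

class Branch : public Node {
public:
    void accept(MyDataStructureVisitor& v) const override { v.visit(*this); }
};

// A new operation, added without touching the node classes.
class AnalyticsVisitor : public MyDataStructureVisitor {
public:
    void visit(const Leaf&) override   { ++leaves_; }
    void visit(const Branch&) override { ++branches_; }
    void report() const { std::cout << leaves_ << " leaves, " << branches_ << " branches\n"; }
private:
    int leaves_ = 0, branches_ = 0;
};

int main() {
    std::vector<std::unique_ptr<Node>> tree;
    tree.push_back(std::make_unique<Leaf>());
    tree.push_back(std::make_unique<Branch>());

    AnalyticsVisitor analytics_visitor;
    for (const auto& node : tree)
        node->accept(analytics_visitor);   // i.e. my_datastructure.accept(analytics_visitor);
    analytics_visitor.report();
}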
As for the achievements:
"Code separation" is a vague term. The visitor pattern allows the data structure to be defined without all the the operations (putative methods) to be defined. Instead, they can be defined separately - with a cost of poorer encapsulation.)
It isn't clear what it means to apply a visitor pattern to several data structures. Each visitor class is associated with one data structure.
The goal isn't to add 'info' to legacy code. It is to add operations to legacy code.
I have a class that handles writing data to and reading data from my database. What is a proper name for this class?
There are a couple of conventions. Assuming a Person model, you could use:
PersonDataAccessObject,
PersonDao,
PersonRepository,
PersonDataAccess,
...
It is also dependent on the technology you are using. I mean, who knows what conventions exist for the language you are using. Let us know what language and what data access framework you are using, and the answer may vary.
I used to append "Dao" because it's short and clear. But then I moved over more to Martin Fowler's vocabulary and patterns, so now I use Repository. A little more long-winded, but I'm long-winded by nature, so it fits my style. In the end, that's the key: it's stylistic, and there is no across-the-board standard that I'm aware of. What's most important is that you pick something that is clear and use it consistently. If you decide, later on, to switch to something else, have mercy on any programmers who may follow you and rename everything so that all your data access components are consistently named.
Edit: in rereading this, I realized I am assuming you are going to have multiple such classes, one for each of your model entities. Who knows what your setup is. If you aren't going to do it like that, and you're just looking for a standard name for a single point of entry to all data access, you could use:
DataMapper
Gateway
Typically, the assumption is that you are going to have several of these around, one for each of your "tables"/model entities. More than a naming convention, that is probably a standard coding convention. This way, when you change or add some aspect of how you interact with your "persons" table, you don't have to modify a class in which you have code to access the "addresses" table. Check out Martin Fowler's Patterns of Enterprise Application Architecture (PofEAA) for more:
PofEAA catalog of patterns (check out the Data Source Architectural Patterns)
Domain Driven Design Quickly (free PDF), esp. Ch. 3
Depending on the entity this class represents, it could be, for example, Person. Then you design a PersonViewModel which is passed to the GUI. So the Person you got from the database is mapped to a PersonViewModel which is passed to the UI layer to be shown in some form. The view model is just a representation of the domain model you fetched from the database, containing only the information that you need to display on the given UI.
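A rough sketch of that mapping (C++ just for illustration; the field names are made up):

#include <string>

// Domain model as fetched from the database.
struct Person {
    long id;
    std::string firstName;
    std::string lastName;
    std::string passwordHash;   // never shown in the UI
};

// Only what the given screen needs to display.
struct PersonViewModel {
    std::string displayName;
};

PersonViewModel toViewModel(const Person& person) {
    return PersonViewModel{ person.firstName + " " + person.lastName };
}
// usage: PersonViewModel vm = toViewModel(personFromDb);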
I'm supporting a server for an online card game, and while thinking about refactoring it into a better state I have found myself unable to decide what a proper object model for my needs would be.
I have a Player class which has a lot of attributes. The first problem is just that - the class is too big. The second problem is that I don't know how to refactor it. I will list some of the attributes and issues with these.
Some attributes are very tightly bound to a player: nick, email, last login &c. These, I suppose, are to be kept directly in the player class and in the same table in the DB.
Now, some attributes are a little more difficult, like the money and gold amounts. The problem with these is that they are historically stored in a different table, that there may be more currencies later on, and that they MUST be synced to the database at their own pace.
The third category of attributes is loosely coupled to the player: status string, experience, achievements, statistics &c. These are stored in different tables in the DB and MUST be stored, retrieved, cached and synchronized at their own pace.
Note that one of the big problems here is that we have to implement relatively complex database synchronization schemes, because we have a lot of online players, our game is soft-realtime, and we have to keep the load on the DB as low as possible.
My questions are:
How do I determine which attributes to store within the player class and which not to? Say, experience, nickname, money amount?
When one has attributes that could be grouped together, like (strength, agility, endurance, &c.) or (handItem, headItem, feetItem, weapon), when should they be grouped and when not?
What to do with complex database synchronization schemes? Make a separate model for each attribute that needs to be synced independently, or make some DataManager classes to take them apart and work with them?
What to do when a class needs several different "data representations" for external consumers? Like XML, JSON, another XML format for some external service, a human-readable string, &c.
I'm sorry if my questions are bogus; I'm not really good at OOP design, I'm more of an FP guy. And my English is not very good =).
There is no "limit" to what you can store in a player class. As long as it is concerning him and him only, it should be in his class. But one thing you should consider is to make several player classes. The idea is : if you don't need is, don't query it. You may have PlayerView_Small, PlayerBuying, PlayerFighting, PlayerSettings (depending on your game, they may not be fulfilling the exact same purpose)... This way for each "need" of info on a player, you only load the player data you need, and can handle it properly. Also, you may use inheritance if some class is only a more detailed version of the other.
If you are talking about the class, it may be in a sub-class PlayerAttributes of which an instance is contained into PlayerFighting and PlayerView_Detailed. In the database, it might be interesting to store it as a string (conveniently outputted by our class, and accepted in constructor), to avoid having too much fields, but you will lose the sorting ability. That's probably not a problem in our case, but might be in some others.
Blank for now, I don't understand where there is synchronization, will edit when informed.
In your PlayerViewDetailInfo (or in your PlayerAllData, depending on what you need), you place some methods such as ToXmlClient1(), ToJson(), ToHumanReadableString() (although that might be a bit confusing to the eye; you should consider HTML^^). The class having the method should be the one with the least data that is still sufficient to provide the answer. When requested, you load the Player... which has the method giving the correct output, and you write it directly to the response.
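To sketch what I mean (C++ for illustration; all class and method names are only examples):

#include <string>
#include <utility>

// Grouped attributes kept as their own small class.
struct PlayerAttributes {
    int strength = 0, agility = 0, endurance = 0;
};

// Only the data a fight needs; loaded only when a fight actually happens.
class PlayerFighting {
public:
    PlayerFighting(std::string nick, PlayerAttributes attrs)
        : nick_(std::move(nick)), attrs_(attrs) {}

    // Representation methods sit on the smallest class that still has enough data.
    std::string ToJson() const {
        return "{\"nick\":\"" + nick_ + "\",\"strength\":" + std::to_string(attrs_.strength) + "}";
    }
    std::string ToHumanReadableString() const {
        return nick_ + " (str " + std::to_string(attrs_.strength) + ")";
    }

private:
    std::string nick_;
    PlayerAttributes attrs_;
};
// usage: PlayerFighting("Alice", PlayerAttributes{12, 8, 9}).ToJson();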
Can anyone please give me an example of a situation in a database-driven application where I should use the Flyweight pattern?
How can I know when I should use the Flyweight pattern at a given point in my application?
I have learned the Flyweight pattern, but I am not able to see an appropriate place to use it in my database-driven business applications.
Except for a very specialized database application, the Flyweight might be used by your application, but probably not for any class that represents an entity which is persisted in your database. Flyweight is used when there might otherwise be a need for so many instantiations of a class that performance would suffer if you instantiated one every discrete time you needed it. So instead, you instantiate a much smaller number of them and reuse them, just changing the data values for each use. This would be useful in a situation where, for example, you might have to instantiate thousands of such objects each second, which is generally not the case for entities persisted in a database.
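To show the mechanics outside a database context, here is a minimal C++ sketch of a flyweight factory (the text/Glyph example and all names are invented):

#include <iostream>
#include <map>
#include <memory>
#include <string>

// The shared, immutable part (intrinsic state).
class Glyph {
public:
    explicit Glyph(char c) : c_(c) {}
    // Extrinsic state (the position) is passed in per use instead of stored.
    void draw(int x, int y) const {
        std::cout << c_ << " at (" << x << "," << y << ")\n";
    }
private:
    const char c_;
};

// Hands out one shared instance per distinct character.
class GlyphFactory {
public:
    std::shared_ptr<const Glyph> get(char c) {
        auto it = cache_.find(c);
        if (it == cache_.end())
            it = cache_.emplace(c, std::make_shared<const Glyph>(c)).first;
        return it->second;
    }
private:
    std::map<char, std::shared_ptr<const Glyph>> cache_;
};

int main() {
    GlyphFactory factory;
    const std::string text = "aab";
    int x = 0;
    for (char c : text)
        factory.get(c)->draw(x++, 0);   // the two 'a's reuse one Glyph instance
}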
You should apply any pattern when it naturally suggests itself as a solution to a concrete problem - not go looking for places in your application where you can apply a given pattern.
Flyweight's purpose is to address memory issues, so it only makes sense to apply it after you have profiled an application and determined that you have a ton of identical instances.
Colors and Brushes from the Base Class Library come to mind as examples.
Since a very important part of Flyweight is that the shared implementation is immutable, good candidates in a data-driven application would be what Domain-Driven Design refers to as Value Objects - but it only becomes relevant if you have a lot of identical values.
[Not a DB guy so this is my best guess]
The real bonus of the flyweight pattern is that you can reuse data if you need to. Another example is word processing, where ideally you would have an object per "character" in your document, but that would eat up way too much memory, so the flyweight pattern lets you store only one of each unique value that you need.
A second (and perhaps the simplest) way to look at it is like object pooling, only you're pooling on a "per-field" level as opposed to a "per-object" level.
In fact, now that I think about it, it's not unlike using a (comparatively small) chunk of memory in C(++) to store some raw data which you do pointer manipulation on to get stuff out of.
[See this Wikipedia article.]
In many embedded applications there is a tradeoff between making the code very efficient or isolating the code from the specific system configuration to be immune to changing requirements.
What kinds of C constructs do you usually employ to achieve the best of both worlds (flexibility and reconfigurability without losing efficiency)?
If you have the time, please read on to see exactly what I am talking about.
When I was developing embedded SW for airbag controllers, we had the problem that we had to change some parts of the code every time the customer changed their mind regarding the specific requirements. For example, the combination of conditions and events that would trigger the airbag deployment changed every couple weeks during development. We hated to change that piece of code so often.
At that time, I attended the Embedded Systems Conference and heard a brilliant presentation by Stephen Mellor called "Coping with changing requirements". You can read the paper here (they make you sign up, but it's free).
The main idea was to implement the core behavior in your code but configure the specific details in the form of data. The data is something you can change easily, and it can even be programmed into EEPROM or a different section of flash.
This idea sounded great to solve our problem. I shared this with my colleague and we immediately started reworking some of the SW modules.
When trying to use this idea in our coding, we encountered some difficulty in the actual implementation. Our code constructs got terribly heavy and complex for a constrained embedded system.
To illustrate this I will elaborate on the example I mentioned above. Instead of having a bunch of if-statements to decide if the combination of inputs was in a state that required an airbag deployment, we changed to a big table of tables. Some of the conditions were not trivial, so we used a lot of function pointers to be able to call lots of little helper functions which somehow resolved some of the conditions. We had several levels of indirection and everything became hard to understand. To make a long story short, we ended up using a lot of memory, runtime and code complexity. Debugging the thing was not straightforward either. The boss made us change some things back because the modules were getting too heavy (and he was maybe right!).
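To give a flavour of the constructs involved, here is a heavily stripped-down sketch of the table-driven approach (written as plain C; everything in it is invented and far simpler than our real code):

#include <stdint.h>
#include <stdio.h>

/* Invented sensor snapshot; the real one had far more fields. */
typedef struct {
    int     front_impact_g;   /* deceleration in g */
    int     speed_kmh;
    uint8_t seat_occupied;    /* 1 if the passenger seat is occupied */
} sensor_state_t;

typedef int (*condition_fn)(const sensor_state_t *s, int threshold);

/* Little helper predicates resolved through function pointers. */
static int front_impact_above(const sensor_state_t *s, int threshold) { return s->front_impact_g >= threshold; }
static int speed_above(const sensor_state_t *s, int threshold)        { return s->speed_kmh >= threshold; }
static int seat_occupied(const sensor_state_t *s, int threshold)      { (void)threshold; return s->seat_occupied; }

typedef struct {
    condition_fn check;
    int          threshold;   /* the tunable "data" part; could live in EEPROM */
} deploy_rule_t;

/* The customer-specific part: a table instead of hand-written ifs.  */
/* In this simplified sketch, all rules must hold for deployment.    */
static const deploy_rule_t rules[] = {
    { front_impact_above, 40 },
    { speed_above,        30 },
    { seat_occupied,       0 },
};

static int require_deployment(const sensor_state_t *s) {
    for (unsigned i = 0; i < sizeof rules / sizeof rules[0]; ++i)
        if (!rules[i].check(s, rules[i].threshold))
            return 0;
    return 1;
}

int main(void) {
    sensor_state_t s = { 55, 70, 1 };
    printf("deploy: %d\n", require_deployment(&s));
    return 0;
}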
PS: There is a similar question on SO, but it looks like the focus is different: Adapting to meet changing business requirements?
As another point of view on changing requirements ... requirements go into building the code. So why not take a meta-approach to this:
Separate out parts of the program that are likely to change
Create a script that will glue parts of source together
This way you are maintaining compatible logic-building blocks in C ... and then sticking those compatible parts together at the end:
/* {conditions_for_airbag_placeholder} */
if (require_deployment)
    trigger_gas_release();
Then maintain independent conditions:
/* VAG Condition */
if (poll_vag_collision_event())
    require_deployment = 1;
and another
/* Ford Conditions */
if (ford_interrupt(FRONT_NEARSIDE_COLLISION))
    require_deployment = 1;
Your build script could look like:
BUILD airbag_deployment_logic.c WITH vag_events
TEST airbag_deployment_blob WITH vag_event_emitter
Thinking out loud, really. This way you get a tight binary blob without reading in config at runtime.
This is sort of like using overlays http://en.wikipedia.org/wiki/Overlay_(programming) but doing it at compile-time.
Our system is subdivided into many components, with exposed configuration and test points. There is a configuration file that is read at start-up that actually helps us instantiate components, attach them to each other, and configure their behavior.
It's very OO-like, in C, with the occasional hack to implement something like inheritance.
In the defense/avionics world software upgrades are very strictly controlled, and you can't just upgrade SW to fix issues... however, for some bizarre reason you can update a configuration file without a major fight. So it's been darn useful for us to be able to specify a lot of our implementation in those configuration files.
There is no magic, just good separation of concerns when designing the system and a bit of foresight on the part of the developers.
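A toy sketch of the flavour (vastly simplified; the component names and the config syntax are invented):

#include <stdio.h>
#include <string.h>

/* Each component exposes uniform entry points; the config file decides
   which ones come up and how they are configured. */
typedef struct {
    const char *name;
    void (*create)(void);
    void (*configure)(const char *key, const char *value);
} component_t;

static void filter_create(void) { puts("filter created"); }
static void filter_configure(const char *key, const char *value) { printf("filter: %s = %s\n", key, value); }
static void logger_create(void) { puts("logger created"); }
static void logger_configure(const char *key, const char *value) { printf("logger: %s = %s\n", key, value); }

static component_t components[] = {
    { "filter", filter_create, filter_configure },
    { "logger", logger_create, logger_configure },
};

/* A config line such as "filter.cutoff_hz=50" is routed to the right component. */
static void apply_config_line(char *line) {
    char *dot = strchr(line, '.');
    char *eq  = strchr(line, '=');
    if (!dot || !eq) return;
    *dot = '\0';
    *eq  = '\0';
    for (unsigned i = 0; i < sizeof components / sizeof components[0]; ++i)
        if (strcmp(components[i].name, line) == 0)
            components[i].configure(dot + 1, eq + 1);
}

int main(void) {
    char line[] = "filter.cutoff_hz=50";   /* would normally be read from the file at start-up */
    for (unsigned i = 0; i < sizeof components / sizeof components[0]; ++i)
        components[i].create();
    apply_config_line(line);
    return 0;
}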
What are you trying to save exactly? Effort of code re-work? The red tape of a software version release?
It's possible that changing the code is reasonably straightforward, and quite possibly easier than changing data in tables. Moving your often-changing logic from code to data is only helpful if, for some reason, it's less effort to modify data than code. That might be true if the changes are better expressed in a data form (e.g. numeric parameters stored in EEPROM). Or it might be true if the customer's requests make it necessary to release a new version of software, and a new software version is a costly procedure to build (lots of paperwork, or perhaps OTP chips burned by the chip maker).
Modularity is a very good principle for these sorts of things. It sounds as though you're already doing it to some degree. It's good to aim to isolate the often-changing code to as small an area as possible, and to try to keep the rest of the code ("helper" functions) separate (modular) and as stable as possible.
I don't make the code immune to requirements changes per se, but I always tag a section of code that implements a requirement by putting a unique string in a comment. With the requirements tags in place, I can easily search for that code when the requirement needs a change. This practice also satisfies a CMMI process.
For example, in the requirements document:
The following is a list of requirements related to the RST:
[RST001] Juliet SHALL start the RST with 5 minute delay when the ignition is turned OFF.
And in the code:
/* Delay for RST when ignition is turned off [RST001] */
#define IGN_OFF_RST_DELAY 5
...snip...
/* Start RST with designated delay [RST001] */
if (IS_ROMEO_ON())
{
rst_set_timer(IGN_OFF_RST_DELAY);
}
I suppose what you could do is specify several valid behaviors based on a byte or word of data that you could fetch from EEPROM or an I/O port if necessary, and then create generic code to handle all possible events described by those bytes.
For instance, if you had a byte that specified the requirements for releasing the airbag it could be something like:
Bit 0: Rear collision
Bit 1: Speed above 55 mph (bonus points for generalizing the speed value!)
Bit 2: Passenger in car
...
Etc.
Then you pull in another byte that says what events happened and compare the two. If they're the same, execute your command, if not, don't.
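Something along these lines (a minimal sketch; the bit assignments and the EEPROM/event readers are stand-ins):

#include <stdint.h>
#include <stdio.h>

/* Invented bit assignments for the "requirements" byte. */
#define COND_REAR_COLLISION  (1u << 0)
#define COND_SPEED_ABOVE_55  (1u << 1)
#define COND_PASSENGER_IN    (1u << 2)

/* Stand-ins for reading the configured byte from EEPROM and the live event flags. */
static uint8_t eeprom_read_deploy_mask(void) { return COND_REAR_COLLISION | COND_SPEED_ABOVE_55; }
static uint8_t current_event_mask(void)      { return COND_REAR_COLLISION | COND_SPEED_ABOVE_55; }

int main(void) {
    uint8_t required = eeprom_read_deploy_mask();
    uint8_t events   = current_event_mask();

    /* "If they're the same, execute your command" -- or use
       (events & required) == required if extra event bits should be ignored. */
    if (events == required)
        printf("deploy\n");
    return 0;
}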
For adapting to changing requirements I would concentrate on making the code modular and easy to change, e.g. by using macros or inline functions for parameters which are likely to change.
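For example (a tiny sketch; the names and values are invented):

#include <stdint.h>

/* Parameters the customer keeps changing live in one obvious place. */
#define AIRBAG_TRIGGER_THRESHOLD_G  40u
#define DEPLOY_DELAY_MS             5u

/* Or wrap them in a small inline function so call sites never change,
   even if the value later has to come from EEPROM instead of a constant. */
static inline uint32_t deploy_delay_ms(void) { return DEPLOY_DELAY_MS; }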
W.r.t. a configuration which can be changed independently from the code, I would hope that the parameters which are reconfigurable are specified in the requirements, too. Especially for safety-critical stuff like airbag controllers.
Hooking in a dynamic language can be a lifesaver, if you've got the memory and processor power for it.
Have the C talk to the hardware, and then pass up a known set of events to a language like Lua. Have the Lua script parse the event and call back to the appropriate C function(s).
Once you've got your C code running well, you won't have to touch it again unless the hardware changes. All of the business logic becomes part of the script, which in my opinion is a lot easier to create, modify and maintain.
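A bare-bones sketch of that split using the standard Lua C API (assumes Lua 5.2+ and linking against the Lua library; the script name, event id, and callback are made up):

// Build with something like: g++ main.cpp -llua
#include <lua.hpp>          // C++-friendly wrapper around lua.h/lauxlib.h/lualib.h
#include <cstdio>

// C-side action the script can call back into when it decides to act.
static int l_trigger_gas_release(lua_State* L) {
    (void)L;
    std::printf("deploy!\n");
    return 0;               // number of results pushed back to Lua
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    // Expose the hardware-side action to the script.
    lua_register(L, "trigger_gas_release", l_trigger_gas_release);

    // Business logic lives in the script: e.g. logic.lua defines
    //   function handle_event(id) if id == 3 then trigger_gas_release() end end
    if (luaL_dofile(L, "logic.lua") != LUA_OK) {
        std::printf("script error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return 1;
    }

    // C detects a hardware event and passes it up to Lua.
    lua_getglobal(L, "handle_event");
    lua_pushinteger(L, 3);                 // made-up event id
    if (lua_pcall(L, 1, 0, 0) != LUA_OK)
        std::printf("handle_event failed: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}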