How to use single responsibility with functions - C

I've just read the book Clean Code. The author, Uncle Bob, talks about the single responsibility a function should have in a program: it should only do one thing.
Now I would like to understand how it is possible to reuse code that does multiple things. Let's say I have a method called runTrafficAndCheckIfItPassed and it calls two methods inside of it: runTraffic and checkIfTrafficPassed.
Now let's say that in my software I need to run traffic and check its result in a lot of places. Sometimes I need to check that the traffic has failed, and sometimes I need to check that it passed.
Why wouldn't it be right to call the runTrafficAndCheckIfItPassed function, and why is it so much better to call the inner functions separately?
As far as I can see, if there is a change in the runTraffic function, for example to receive another parameter, the change will be implemented in one place, only in runTrafficAndCheckIfItPassed, which seems fairly easy to maintain.
But if I use the functions separately, I will need to change it in every place.
But Bob says this is wrong? Does anyone have examples or tips on why it is considered wrong?
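For concreteness, here is a rough sketch in C of the kind of split being discussed; only the function names come from the question, and the traffic and result types are invented for illustration:

#include <stdbool.h>

/* Hypothetical result type, invented for illustration. */
typedef struct {
    int packets_sent;
    int packets_received;
} TrafficResult;

/* Does one thing: runs the traffic and reports what happened. */
TrafficResult runTraffic(int port)
{
    TrafficResult result = {0, 0};
    /* ... drive the traffic generator and fill in the result ... */
    return result;
}

/* Does one thing: interprets a result. */
bool checkIfTrafficPassed(TrafficResult result)
{
    return result.packets_received == result.packets_sent;
}

/* The composing function can still exist; callers that need the combined
 * behaviour use it, while callers that only need one half (for example,
 * checking that traffic FAILED) reuse the small pieces directly. */
bool runTrafficAndCheckIfItPassed(int port)
{
    return checkIfTrafficPassed(runTraffic(port));
}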

Related

What is the difference between sending an action and calling a 'setter' method of a store in Reflux data flow?

TodoActions['add'](todo)
vs
TodoStore.add(todo)
An action will trigger your store via the RefluxJS lib, but TodoStore.add() calls the add method directly.
First off, it's useful to note that Whatever.func() and Whatever['func']() are just two different syntaxes for the same thing. So the only difference here in your example is what you're calling it on.
As far as calling a method in a store directly, vs. an action which then ends up calling that method in a store, the difference is architectural, and has to do with following a pattern that is more easily scaled, works more broadly, etc. etc.
If any given event within the program (such as, in this case, adding something) emits 1 clear action that anything can listen for, then it becomes MUCH easier to build large programs, edit previously made programs, etc. The component saying that this event has happened doesn't need to keep track of everywhere that might need to know about it...it just needs to say TodoActions.add(todo), and every other part of the program that needs to know about an addition happening can manage itself to make sure it's listening for that action.
So that's why we follow the one-way looping pattern:
component -> action -> store -> back to component
Because then the flow of events is much more easily managed: each part of the program manages its own knowledge about the program state and when it needs to be changed. The component emitting the action doesn't need to know every possible part of the program that might need that action; it just needs to emit it.

What is the preferred level inside controllers to place App::uses?

I know App::uses does lazy-loading, but if we are going to use i.e. CakeTime in a small part of our code (like inside an if statement that is called 1/3 of the times the method is called) then what would be the best place to put App::uses('CakeTime', 'Utility')?
The options are:
AppController
MyController
MyController::methodName
MyController::methodName inside the if statement (where it is actually going to be used).
I'm putting it in 4, as I guess there must be some obvious overhead (even if really small) and I don't see any reason for it to be present in every call of the controller. My colleague says 2, because "we might need it elsewhere in the future and it lazy-loads so it's not a problem". My answer to that is that if we need it elsewhere, then we should decide again where to put it according to that case.
What is your opinion and why?
There is no preferred place, only a "correct" place.
Always in the file where the class actually uses it.
So if you use CakeTime in your MyController, there is only the top of this very same file where you may put it.
Besides the technical aspect, it is also wise to do that with the upcoming 3.0 version in mind.
If you ever want to migrate (and thus use namespace declarations), you will be more than grateful if you placed those (by then former) App::uses() statements in the correct places.

When to Use a Callback function?

When do you use a callback function? I know how they work, I have seen them in use and I have used them myself many times.
An example from the C world would be libcurl which relies on callbacks for its data retrieval.
A contrasting example would be OpenSSL: where I have used it, I use out parameters:
ret = somefunc(&target_value);
if (ret != 0) {
    /* error case */
}
I am wondering when to use which. Is a callback only useful for async stuff? I am currently in the process of designing my application's API and I am wondering whether to use a callback or just an out parameter. Under the hood it will use libcurl and OpenSSL as the main libraries it builds on, and the parameter "returned" is an OpenSSL data type.
I don't see any benefit of a callback over just returning. Is this only useful if I want to process the data in some way instead of just giving it back? But then I could just as well process the returned data. Where is the difference?
In the simplest case, the two approaches are equivalent. But if the callback can be called multiple times to process data as it arrives, then the callback approach provides greater flexibility, and this flexibility is not limited to async use cases.
libcurl is a good example: it provides an API that allows specifying a callback for all newly arrived data. The alternative, as you present it, would be to just return the data. But return it — how? If the data is collected into a memory buffer, the buffer might end up very large, and the caller might have only wanted to save it to a file, like a downloader. If the data is saved to a file whose name is returned to the caller, it might incur unnecessary IO if the caller in fact only wanted to store it in memory, like a web browser showing an image. Either approach is suboptimal if the caller wanted to process data as it streams, say to calculate a checksum, and didn't need to store it at all.
The callback approach allows the caller to decide how the individual chunks of data will be processed or assembled into a larger whole.
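As a concrete illustration, here is a small sketch using libcurl's write callback (CURLOPT_WRITEFUNCTION). The callback is handed each chunk as it arrives and folds it into a deliberately trivial checksum, so nothing is ever buffered or written to disk; the URL and the checksum scheme are placeholders:

#include <stdio.h>
#include <curl/curl.h>

/* Called by libcurl once per received chunk; we fold the bytes into a
 * running (toy) checksum instead of storing them anywhere. */
static size_t on_data(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    unsigned long *checksum = userdata;
    size_t total = size * nmemb;
    for (size_t i = 0; i < total; i++)
        *checksum += (unsigned char)ptr[i];
    return total;  /* tell libcurl the whole chunk was consumed */
}

int main(void)
{
    unsigned long checksum = 0;
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_data);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &checksum);

    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    if (res != CURLE_OK)
        return 1;
    printf("checksum: %lu\n", checksum);
    return 0;
}

The same callback could just as easily write each chunk to a file or append it to a growing buffer; the point is that the caller, not libcurl, decides.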
Callbacks are useful for asynchronous notification. When you register a callback with some API, you are expecting that callback to be run when some event occurs. In the same vein, you can use them as an intermediate step in a data processing pipeline (similar to an 'insert' if you're familiar with the audio/recording industry).
So, to summarise, these are the two main paradigms that I have encountered and/or implemented callback schemes for:
I will tell you when data arrives or some event occurs - you use it as you see fit.
I will give you the chance to modify some data before I deal with it.
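A minimal sketch of the second paradigm, with all names invented: the writer gives the caller's hook a chance to rewrite the buffer in place before committing it, much like an insert on a mixing desk.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* The caller's hook may rewrite the buffer in place before it is handled. */
typedef void (*pre_write_hook)(char *buf, size_t len);

/* "Deal with" the data, but let the hook touch it first. */
static void write_record(char *buf, size_t len, pre_write_hook hook)
{
    if (hook)
        hook(buf, len);
    fwrite(buf, 1, len, stdout);
}

/* Example hook: upper-case everything. */
static void shout(char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (char)toupper((unsigned char)buf[i]);
}

int main(void)
{
    char msg[] = "hello, callbacks\n";
    write_record(msg, strlen(msg), shout);
    return 0;
}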
If the value can be returned immediately then yes, there is no need for a callback. As you surmised, callbacks are useful in situations wherein a value cannot be returned immediately for whatever reason (perhaps it is just a long running operation which is better performed asynchronously).
My take on this: I see it as a question of which module has to know about which one. Let's call them Data-User and IO.
Assume you have some IO where data comes in. The IO module might not even know who is interested in the data. The Data-User, however, knows exactly which data it needs. So the IO module should provide a function like subscribe_to_incoming_data(func), and the Data-User module will subscribe to the specific data the IO module has. The alternative would be to change code in the IO module to call the Data-User. But with existing libs you definitely don't want to touch existing code that someone else has provided to you.
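A bare-bones version of that subscription idea might look like the following; everything here is invented for illustration, and the point is simply that the IO module only knows about a list of handlers, never about the Data-User module by name.

#include <stddef.h>
#include <stdio.h>

typedef void (*data_handler)(const char *data, size_t len);

/* --- IO module: knows nothing about who consumes the data --- */
#define MAX_SUBSCRIBERS 8
static data_handler subscribers[MAX_SUBSCRIBERS];
static size_t subscriber_count;

void subscribe_to_incoming_data(data_handler func)
{
    if (subscriber_count < MAX_SUBSCRIBERS)
        subscribers[subscriber_count++] = func;
}

static void on_data_arrived(const char *data, size_t len)
{
    for (size_t i = 0; i < subscriber_count; i++)
        subscribers[i](data, len);
}

/* --- Data-User module: registers interest; IO never calls it by name --- */
static void print_data(const char *data, size_t len)
{
    printf("got %zu bytes: %.*s\n", len, (int)len, data);
}

int main(void)
{
    subscribe_to_incoming_data(print_data);
    on_data_arrived("ping", 4);   /* simulate the IO layer receiving data */
    return 0;
}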

Should an API assume the proper calling sequence, or should it verify the assumptions?

I have 2 API calls to implement; let's call them "ShouldDoSomething" and "DoSomething". The first is a test to see if a particular action needs to be taken, and the other actually takes that action. Naturally, taking the action is only necessary or valid if the test returns true. There are cases where the test is needed without actually taking the action, so both are necessary.
Should the action call internally run the test and become a no-op if it's not needed, or should the implementation assume that it will only ever be called in the case where the test has already returned true? It seems simpler and safer to validate the assumptions, but then the 'action' call has 3 possible return states instead of two (success, failure, and not needed). Another approach would be to make the test an assertion and abort if the action was called unnecessarily. (this is in C, so exceptions aren't really a good idea)
But it comes down to a choice - simpler api calls or fewer api calls?
Pick a policy according to what you think is cleanest for the intended use cases. Document it clearly and carefully. The most important thing with API decisions like this is documentation, not the specifics of what convention is chosen.
If you decide to use the first option, do you actually need to add a "not needed" return state, or can it simply indicate "success" if the operation is not needed? Don't add complexity unless it's necessary.
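In C that might look roughly like the following sketch; the function names come from the question, the return codes are invented. DoSomething() performs its own check and simply reports success when there is nothing to do.

#include <stdbool.h>

/* Invented return codes, just for illustration. */
enum { DO_OK = 0, DO_FAILED = -1 };

bool ShouldDoSomething(void)
{
    /* ... the real test goes here ... */
    return true;
}

int DoSomething(void)
{
    if (!ShouldDoSomething())
        return DO_OK;   /* nothing to do counts as success */

    /* ... take the action, returning DO_FAILED on a real error ... */
    return DO_OK;
}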
I think it would be best to go with:
"Another approach would be to make the test an assertion and abort if the action was called unnecessarily."
So, for example, if "DoSomething" is called and "ShouldDoSomething" is telling you that you shouldn't take that particular action, then "DoSomething" should abort without doing anything. Do you really need to expose "ShouldDoSomething" to the client, if it's only testing whether "DoSomething" should be called?
I think it also comes down to what will be changed. Could it be fatal if called inappropriately? If so, I think you should put assertions in. If it will be harmless, then it doesn't really matter.
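A minimal sketch of that assertion-based contract in C, with the names taken from the question and everything else invented:

#include <assert.h>
#include <stdbool.h>

extern bool ShouldDoSomething(void);

int DoSomething(void)
{
    /* Contract: callers must have tested first. Misuse is a logic error
     * and aborts in a debug build (the check compiles away under NDEBUG). */
    assert(ShouldDoSomething());

    /* ... take the action ... */
    return 0;
}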
I would say safer is the way to go.
Hope this helps.
Stephen Canon is correct about consistency and documentation.
Personally I tend to err on the side of protecting people from themselves, unless it carries a serious performance penalty. In most situations I would lean toward checking ShouldDoSomething internally from DoSomething and returning an error if DoSomething wasn't required. That indicates a logic error to the user and is a lot nicer than letting them blow up or leaving something in an invalid state that might be hard to debug down the line.
The simplest API (from a user perspective) would be to have the function DoSomethingIfNeeded(), and let you take care of the conditions. Why bother the user?
The least friendly user interface is the one that says: "Now call one of these functions! No, not that one!".

Making actions get called in a Silverlight unit test with Moq

Let's say I have this:
_articlesService.SaveAsync(Model, AddressOf OnSaveCompleted)
The OnSaveCompleted method does a couple of things, obviously. It's a
Protected Overridable Sub OnSaveCompleted(ByVal asyncValidationResult As AsyncValidationResult)
In my unit test, I need to run a mocked SaveAsync and have OnSaveCompleted called anyway, because the method sends out events that I need to know have been sent.
Right now, the code just walks past that method, so it's never executed.
I need help solving this because I'm stuck right now.
If I understand your context right:
you have a class under test which uses an ArticlesService
your ArticlesService (a collaborating class) is responsible for sending some events
you want to verify that your class under test is behaving correctly
you want to do that by checking for the events.
If that's the case, you may be making your class responsible for more than it needs to be. You only need to verify that the ArticlesService was asked to SaveAsync. You don't need to worry about what the ArticlesService then went off and did.
Think of it this way. You are a Class-Under-Test. You have too much work to do, so you've asked some other people to help you. You have two choices. You can either chase them up, worrying about whether they're doing it right, or you can just trust them.
Rather than micro-managing classes, you can write a separate test which gives some examples of the way the ArticlesService will work, which will check that the ArticlesService is doing its job correctly. Your CUT's responsibility is to delegate that work effectively.
If you actually need the events to be raised so that your CUT can respond, that's a separate aspect of its behaviour, and you can do it with Moq's "Raise" method, documented in "Events", here:
http://code.google.com/p/moq/wiki/QuickStart
Edit: You can also use "CallBack", documented on the same link, to do stuff with the args being passed to you, including OnSaveCompleted. Not sure if it's going to help or not; it's tricky to see what you're doing without both the code and the failing test. Good luck anyway!
Close, but not exactly like that.
We don't actually send out an event in the ArticleService.
The method SaveAsync takes an Article to be saved, and a method to be called once the saving is complete.
The problem is that the "OnSaveCompleted" method isn't being called. (This method exists in the View Model Base class, so the service isn't sending the event, the view model is.)
But we have our own implementation of WCF service proxies, so this is probably what's messing with us, since we don't use the generated code.
I think we will have to rework our service infrastructure a bit to solve this.
So it's a special case, just wanted to throw the question out just in case. :)
Thanks anyway for the answer.
