EiffelStudio seems to skip my preconditions even though I have them enabled in the project settings. And as far as I remember, some time ago I was able to put a breakpoint inside the preconditions...
I don't understand what I'm missing here. As you can see in my example, execution passes right through the precondition: I have the same condition in the code, and it goes into the branch (attached {POWER_DEVICE} a_csv.device as l_dev).
A general rule for inherited assertions is the following:
preconditions can only be relaxed;
postconditions can only be strengthened.
In the particular example the effective precondition is
True
or else
valid_csv (a_csv) and then attached {POWER_DEVICE} a_csv.device
This is reflected by the keyword require at the beginning and require else in the middle of the combined precondition in the flat form of the feature. The expression True is inherited: it is the precondition of the feature in the parent.
A possible solution is to move valid_csv (a_csv) to the parent feature and redefine valid_csv in the descendant. If valid_csv is common to all calls, but the second test varies across descendants, it might be better to introduce a new feature is_known_csv and have two precondition subclauses in the parent:
is_valid_csv: is_valid_csv (a_csv)
is_known_csv: is_known_csv (a_csv)
The implementation of is_known_csv in the class POWER_CSV_PROCESSOR would be
is_known_csv (a_csv: ...): BOOLEAN
	do
		Result := attached {POWER_DEVICE} a_csv.device
	end
and the precondition of feature process in POWER_CSV_PROCESSOR would be empty.
The caller would then do something like
if processor.is_known_csv (csv) then
processor.process (csv)
end
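To see how the pieces fit together, here is a minimal sketch under the suggestions above; the parent class CSV_PROCESSOR and the argument type CSV_LINE (assumed to have a query device) are made up, only POWER_CSV_PROCESSOR and POWER_DEVICE come from the question:

deferred class CSV_PROCESSOR

feature

	is_valid_csv (a_csv: CSV_LINE): BOOLEAN
			-- Validity test common to all descendants.
		do
			Result := True -- Placeholder for the real test.
		end

	is_known_csv (a_csv: CSV_LINE): BOOLEAN
			-- Is the device of `a_csv` one this processor handles?
		deferred
		end

	process (a_csv: CSV_LINE)
		require
			is_valid_csv: is_valid_csv (a_csv)
			is_known_csv: is_known_csv (a_csv)
		deferred
		end

end

class POWER_CSV_PROCESSOR

inherit
	CSV_PROCESSOR

feature

	is_known_csv (a_csv: CSV_LINE): BOOLEAN
			-- Is the device of `a_csv` a POWER_DEVICE?
		do
			Result := attached {POWER_DEVICE} a_csv.device
		end

	process (a_csv: CSV_LINE)
			-- Device-specific processing; no additional precondition needed here.
		do
		end

end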
Related
Sometimes checks and contract constructions need extra setup that one would like to avoid when assertions are disabled, both to improve performance and to avoid work that exists only for the sake of the checks. I am thinking, for example, of checks that involve loops or similar computations. Sometimes having to build a function, or having to think about how to build it so that it is executed only when assertions are enabled, moves away from the intuitive way of writing the contract and its sense. I am referring in particular to the check construct.
Is there a way to do something such as
if checks_are_enabled then
do check stuff here
end
do_some_normal_job
if checks_are_enabled then
do other check stuff here
end
Assertions can be turned on and off on a class-by-class basis, with different levels: preconditions, postconditions, invariants, etc. As a result, it would be tricky and unreliable to report when they are enabled or not (consider, for example, inherited code: the checks might be on in one case and off in another). On a methodological level it would also break the idea that a correct program works the same way regardless of assertion monitoring.
What are workarounds?
If assertions are complex, they can be factored out into dedicated queries, so that the check looks like
check
is_valid: complex_query
end
An alternative is to use debug statements:
debug ("check_this", "check_that")
... some complex code, including assertions
end
where "check_this" and "check_that" are debug keys that can be turned on when compiling for debugging.
There are hacks that could work now, but not in the future:
If some complex state needs to be computed and then checked after an operation, it can be saved in an object, passed to the functions that perform the complex calculations, and used again later:
check
is_valid_before: valid_pre (state) -- The state is computed by `valid_pre`.
end
code_that_does_the_work
check
is_valid_after: valid_post (state) -- The state is checked by `valid_post`.
end
A global flag can be used to keep track of whether assertions are monitored:
check
is_monitoring_checks
end
where query is_monitoring_checks has side effects:
is_monitoring_checks: BOOLEAN
-- Record whether assertion checks are turned on.
do
is_check_on := True
Result := True
end
Then, subsequent code could be written as asked in the question:
if is_check_on then
... -- Do some complex calculations when assertions are turned on.
end
Animal
deferred class ANIMAL
inherit
ANY
redefine
default_create
end
feature
creator: like Current
guts: GUTS
default_create
do
create guts
end
make_malformed
do
default_create
end
end --class
PIG
class PIG
inherit
ANIMAL
redefine
make_malformed
end
create
default_create,
make_malformed,
make_from_insemination
feature
make_malformed
do
Precursor
set_left_eye (create {MALFORMED_EYE})
end
make_from_insemination (some_humain: HUMAIN)
do
default_create
creator := some_humain
end
end --class
In my vision of best practices, I'd say that:
if there is no particular sense in making a creation procedure (like my make_malformed example), redefine default_create;
all creation procedures should call default_create and then add their specific behavior (like my make_from_insemination example).
So what is the purpose of the make creation procedures that many Eiffel libraries add, as in create {LINKED_LIST}.make?
Correct me if I'm wrong. Thanks in advance!
Many Eiffel libraries were developed before default_create was added to ANY with the corresponding semantics. This explains why many classes of the base library do not use it.
Also, creation procedures can carry some specific sense. For example, make can create a container that compares internal objects using reference equality whereas make_equal can create a container that uses object equality instead (this is the case for HASH_TABLE, though there is an additional argument to indicate an expected number of elements, this argument could be omitted with some other design choice). In such cases, default_create and default_create_equal would be non-symmetric, whereas make and make_equal are symmetric, so that the design is more consistent.
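As an illustration of that HASH_TABLE case (the type parameters and the routine name here are arbitrary), the two creation procedures are symmetric variants of the same idea:

make_tables
		-- Create two tables that differ only in how keys are compared.
	local
		by_reference, by_value: HASH_TABLE [INTEGER, STRING]
	do
		create by_reference.make (10)   -- Keys compared by reference equality.
		create by_value.make_equal (10) -- Keys compared with object equality (`is_equal').
	end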
As you point out, default_create should not carry any specific behavior, just some basic things, expected from all descendants.
Whether default_create should be called by all other creation procedures heavily depends on the design. One example, where this is almost a rule, is the library "vision", which encodes in default_create a correct order of initialization, crucial for void safety. It is still possible to write a class (based on this library) that performs the initialization correctly without calling default_create in its creation procedure, but having a ready-to-follow pattern simplifies development.
How do I handle more than one condition (with different boolean expressions) as a guard on a UML state machine transition?
Example:
In this example I would like to add more than just the one condition (Tries < 3) to the transition from "logging in" to "Logged In", as described in the note.
How can this be handled in a UML-compliant way?
Simply put (and to focus on the needed step):
put a boolean condition like the one above in the Guard. This can be any text: you can write C-style expressions or plain text. I'm not sure about OCL here, but that's for academic purposes anyway (my opinion). A combined guard could look like the sketch below.
N.B. Your diagram shows Tries = 3, which should also be a Guard (i.e. [Tries = 3]) rather than a Name.
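For example (the trigger, conditions, and effect here are made up, written in a C-style guard), a single transition could be labelled
login pressed [tries < 3 && passwordCorrect] / show dashboard
where the part in square brackets is the guard; both conditions must hold for the transition to fire.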
There are a couple of options here:
Your guard condition can combine multiple checks within the '[]' - much like you were doing in the note.
You can have multiple transitions between the same two states, each with its own condition.
You can have states within states. So in your example the three states could be within a superstate of 'Normal Operation' - which you then further define in other documentation or via a note.
All of these are valid UML syntax. But note that just because something is valid doesn't mean it will be supported in your editor. For example it was many years before most of the features of sequence diagrams became available within editors...
I work in safety-critical application development. Recently, as a code reviewer, I complained about the coding style shown below, but couldn't make a strong case against it. So what would be a good argument against such variable redundancy/duplication? I am looking for cases where this might lead to problems, or for test cases that might fail, rather than just coding-style arguments.
// global data
int Block1Var;
int Block2Var;
...
//Block1
{
...
Block1Var = someCondition; // someCondition is a logical expression
...
}
//Block2
{
...
Block2Var = Block1Var; // Block2Var is an unconditional copy of Block1Var
...
}
I think a little more context would be helpful perhaps.
You could argue that the value of Block1Var is not guaranteed to stay the
same across concurrent access/modification. This is only valid if Block1Var
ever changes (i.e., is not only read). I don't know whether you are concerned with
multi-threaded applications or not.
Readability is an important issue as well. Future code maintainers
don't want to have to trace around a bunch of trivial assignments.
It depends on what is done with those variables later, but one argument is that it's not future-proof. If, in the future, you change the code so that the value of Block1Var changes, but the later code still uses Block2Var (without the corresponding change), this will result in erroneous behavior.
If the function shown here reaches a certain length (I'm assuming a lot of detail has been discarded to create the minimal reproducible example for this question), a good next step could be to extract Block 2 into a new (sub-)function. This subfunction would start by assigning Block1Var (the actual parameter) to Block2Var (the formal parameter). If there were no other coupling to the rest of the function, one could cut the rest of Block 2, drop it into the function definition, and simply replace the assignment with the subfunction call.
My answer is fairly speculative, but I have seen many cases where this strategy helped me mark useful points at which to split a complex function later during development. Of course, this interpretation only applies to an intermediate stage of development and not to code that is declared "ready for release".
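A minimal sketch of that refactoring in C, with made-up names (block2, run_blocks) since most of the original context is elided:

int Block1Var;                       /* global, as in the original question */

/* Block 2 extracted into its own function: the former global Block2Var
   becomes the formal parameter, initialised from the actual argument. */
static void block2(int Block2Var)
{
    /* ... rest of the original Block 2, using Block2Var ... */
    (void) Block2Var;
}

/* Hypothetical enclosing function standing in for the elided context. */
void run_blocks(int someCondition)
{
    /* Block 1 */
    Block1Var = someCondition;
    /* ... */
    /* Block 2 is now a single call; this replaces "Block2Var = Block1Var;" */
    block2(Block1Var);
}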
I'm not sure I understand how to define private functions correctly.
When I'm writing a Mathematica package, I just do this:
BeginPackage["myPackage`"]
myPublicFunction::usage="myPublicFunction blahblahblah";
Begin["Private"]
myPrivateFunction[input_]:= ... ;
myPublicFunction[input_]:= ... ;
End[]
EndPackage[]
Is this the correct way or am I missing something?
Yep, that's a correct way. It may pay off to understand some of the internal package mechanics. Mathematica contexts are similar to namespaces in other languages. They can be nested. Every symbol belongs to some context. At any given moment, some context is "current". Whenever a new symbol is created, the system must decide to which context the symbol will belong. This happens at parse time. The fundamental quantity (variable) here is $ContextPath. It is basically the search path for symbols. It is a list of contexts, and whenever the system sees a new symbol, it tests whether a symbol with the same short name (that is, the name of the symbol proper, without the context) exists in some context on the $ContextPath. If it does exist, then the given occurrence of the symbol will be associated with that existing one. If it does not, then the symbol is created in the current context. Note that this is a dynamic thing - if you change the $ContextPath at any moment, the next symbol occurrence can be associated with a different symbol.
Anyway, what BeginPackage does is simply replace the current value of $ContextPath with just {"YourPublicPackageContext`", "System`"}, plus possibly additional contexts that you publicly import through the second optional argument of BeginPackage. Therefore, all symbols that appear in the "public" section are parsed into the public context, if they are not already in "System`" or other contexts that you import. And what EndPackage does is restore the value of $ContextPath to what it was before you started loading the package (prepending your public package context, so that the exported symbols remain accessible). So, technically, the usage message is not the only way to make a symbol public in your main context - you could just as well simply type a symbol followed by a semicolon, like myFunction; (this practice is discouraged, I just mention it to clarify the mechanism). Now, what happens when you enter Begin["`Private`"] is that the current context becomes YourContext`Private` (a sub-context). The $ContextPath is not changed. Therefore, any symbol entered there that does not already exist in your public package or other imported packages (that is, contexts currently on $ContextPath) is automatically parsed into the `Private` subcontext.
What really makes these symbols private is that whenever you import your package into some other context (package), only the main package context is added to $ContextPath, but not its sub-contexts. Technically, you can break encapsulation by manually adding YourPackage`Private` to $ContextPath (say, PrependTo[$ContextPath, "YourPackage`Private`"]), and then all your private functions and other symbols become public in the particular context where you do the import. Again, this practice is discouraged, but it explains the mechanics. The bottom line is that the notion of private or public can be entirely understood once we know how symbols are parsed, and which manipulations of $ContextPath and $Context (another system variable, giving the current context) are performed by commands such as Begin and BeginPackage. To put it another way, one could, in principle, emulate the actions of BeginPackage, Begin, End and EndPackage with user-defined code. There are just a few principles operating here (which I tried to outline above), and the mechanism itself is very much exposed to the user, so that if, in some rare case, one wants some other behavior, one can make "custom" manipulations of $ContextPath and $Context to ensure some non-standard way of symbol parsing and thereby control package-scale encapsulation in a "non-standard" way. I am not encouraging this, just mentioning it to emphasize that the mechanism is in fact much simpler and much more controllable than it may seem on the surface.
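As a small illustration of these mechanics (using the names from the question; the package is assumed to be on $Path so that it can be loaded):

Needs["myPackage`"]                      (* loads the package; adds "myPackage`" to $ContextPath *)
Context[myPublicFunction]                (* "myPackage`" *)
myPackage`Private`myPrivateFunction[x]   (* the private symbol is still reachable via its full name *)
myPrivateFunction[x]                     (* parses a NEW symbol in the current context,
                                            unrelated to the one in myPackage`Private` *)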