Animal
deferred class ANIMAL
inherit
    ANY
        redefine
            default_create
        end

feature

    creator: detachable HUMAIN
            -- Who created this animal; set by `make_from_insemination' in PIG.

    guts: GUTS

    default_create
        do
            create guts
        end

    make_malformed
        do
            default_create
        end

end -- class ANIMAL
PIG
class PIG
inherit
    ANIMAL
        redefine
            make_malformed
        end

create
    default_create,
    make_malformed,
    make_from_insemination

feature

    make_malformed
        do
            Precursor
            set_left_eye (create {MALFORMED_EYE})
        end

    make_from_insemination (some_humain: HUMAIN)
        do
            default_create
            creator := some_humain
        end

end -- class PIG
In my vision of best practices, I would say that:
if there is no particular sense in making a creation procedure (like my make_malformed example), redefine default_create;
all creation procedures should call default_create and add specific behavior (like my make_from_insemination example).
So what is the purpose of the many Eiffel libraries that add a make creation procedure, as in create {LINKED_LIST}.make?
Correct me if I'm wrong. Thanks in advance!
Many Eiffel libraries were developed before default_create was added to ANY with the corresponding semantics. This explains why many classes of the base library do not use it.
Also, creation procedures can carry some specific sense. For example, make can create a container that compares internal objects using reference equality, whereas make_equal can create a container that uses object equality instead. This is the case for HASH_TABLE (there, an additional argument indicates an expected number of elements, though this argument could be omitted with some other design choice). In such cases, default_create and default_create_equal would be non-symmetric, whereas make and make_equal are symmetric, so the design is more consistent.
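A minimal sketch of that difference, assuming EiffelBase's HASH_TABLE (the integer argument is the expected number of elements mentioned above):

hash_table_demo
        -- Show how key comparison differs between `make' and `make_equal'.
    local
        by_reference, by_object: HASH_TABLE [INTEGER, STRING]
    do
        create by_reference.make (10)      -- keys compared by reference
        create by_object.make_equal (10)   -- keys compared with `is_equal'
        by_reference.put (1, "one")
        by_object.put (1, "one")
            -- A fresh manifest string is a distinct object with equal content:
        print (by_reference.has ("one"))   -- False
        print (by_object.has ("one"))      -- True
    end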
As you point out, default_create should not carry any specific behavior, just some basic things, expected from all descendants.
Whether default_create should be called by all other creation procedures heavily depends on the design. One example, where this is almost a rule, is the library "vision", which encodes in default_create a correct order of initialization, crucial for void safety. It's still possible to write a class (based on this library) that performs the initialization correctly without calling default_create in its creation procedure, but having a ready-to-follow pattern simplifies development.
EiffelStudio seems to pass right through my preconditions (require clauses) even though I have them enabled in the project settings. And as far as I remember, at some point I was able to put a breakpoint inside the preconditions...
I don't understand what I'm missing here. As you can see in my example, the precondition passes even though I have the same condition in the code, and execution goes into (attached {POWER_DEVICE} a_csv.device as l_dev).
A general rule for inherited assertions is the following:
preconditions can only be relaxed;
postconditions can only be strengthened.
In the particular example the effective precondition is
True
or else
valid_csv (a_csv) and then attached {POWER_DEVICE} a_csv.device
This is reflected in the flat form of the feature by the keyword require at the beginning and require else in the middle of the combined precondition. The inherited expression True is the precondition of the feature in the parent.
A possible solution is to move valid_csv (a_csv) to the parent feature and redefine valid_csv in the descendant. If valid_csv is common to all calls, but the second test varies across descendants, it might be better to introduce a new feature is_known_csv and have two precondition subclauses in the parent:
is_valid_csv: valid_csv (a_csv)
is_known_csv: is_known_csv (a_csv)
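For context, the parent feature could look like the following sketch (CSV_DATA is a hypothetical argument type; the question does not show the real one):

process (a_csv: CSV_DATA)
        -- Process `a_csv'.
    require
        is_valid_csv: valid_csv (a_csv)
        is_known_csv: is_known_csv (a_csv)
    deferred
    end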
The implementation of is_known_csv in the class POWER_CSV_PROCESSOR would be
is_known_csv (a_csv: ...): BOOLEAN
    do
        Result := attached {POWER_DEVICE} a_csv.device
    end
and the precondition of feature process in POWER_CSV_PROCESSOR would be empty.
The caller would then do something like
if processor.is_known_csv (csv) then
    processor.process (csv)
end
I am tasked with assisting in the design of a dynamic library (exposed through a C interface) aimed at embedded software applications on various embedded platforms (Android, Windows, Linux).
The main requirements are speed and decoupling.
For the decoupling part: one of our requirements is to facilitate integration, and so to permit backward compatibility and resilience.
My library has some entry points that should be called by the integrating software (like an initialize constructor to provide options such as where to log, how to behave, etc.) and could also call some callbacks in the application (an event to inform it when a task is finished).
So I have come up with several propositions, but as none of them seems great, I am looking for advice on better or more standard ways to achieve decoupling and backward compatibility than the three approaches below:
The first option I could think of is to have a generic interface call for my exposed entry points, for example with a hashmap of key/values for the parameters of my functions, so in pseudo-code it gives something like:
myLib.Initialize(Key_Value_Option_Array_Here);
Another option is to provide generic functions to pass all the options to the library:
myLib.SetOption(Key_Of_Option, Value_Of_Option);
myLib.SetCallBack(Key_Of_Callback, FunctionPointer);
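Rendered as a C interface, this second proposal might look like the following sketch (the enum keys and signatures are hypothetical, not an existing API):

#include <stdint.h>

typedef enum {
    MYLIB_OPT_LOG_PATH,     /* value: const char * */
    MYLIB_OPT_VERBOSITY     /* value: const uint32_t * */
} MyLib_OptionKey;

typedef enum {
    MYLIB_CB_TASK_FINISHED
} MyLib_CallbackKey;

typedef void (*MyLib_Callback)(void *userData);

/* Generic setters: new keys can be appended later without breaking the ABI. */
int MyLib_SetOption(MyLib_OptionKey key, const void *value);
int MyLib_SetCallback(MyLib_CallbackKey key, MyLib_Callback cb, void *userData);

The downside is that type checking moves from compile time to run time: the library has to validate each key/value pair itself.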
When presenting my options, my colleague asked me why not use a Google protobuf message as the interface between the library and the embedded software. That seems weird to me, as there will be a performance hit on each call for serialization and deserialization.
Are there any more efficient or standard ways that you could think of?
You could have a struct for optional arguments:
#include <stdint.h>

typedef struct {
    uint8_t optArg1;
    float   optArg2;
} MyLib_InitOptArgs_T;

void MyLib_Init(int16_t arg1, uint32_t arg2, MyLib_InitOptArgs_T const *optionalArgs);
Then you could use a compound literal at the call site:
MyLib_Init(1, 2, &(MyLib_InitOptArgs_T){ .optArg2=1.2f });
All non-specified values would have a zero-ish value (0, NULL, 0.0f) and would be considered unused. Similarly, when passing NULL for the struct pointer, all optional arguments would be considered unused.
The downside of this method is that if you expect to have many new arguments in the future, the structure could grow too big. Whether that is an issue depends on what your limits are. A leading size field, as sketched below, is one way to keep the struct extensible across versions.
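A sketch of that size-field idea (the names are hypothetical; the point is that older callers pass a smaller struct and newer fields default to zero):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    size_t  structSize;  /* caller sets this to sizeof(MyLib_InitOptArgs_T) */
    uint8_t optArg1;
    float   optArg2;
    /* new fields are only ever appended, never reordered or removed */
} MyLib_InitOptArgs_T;

void MyLib_Init(MyLib_InitOptArgs_T const *opt)
{
    MyLib_InitOptArgs_T local = {0};
    if (opt != NULL) {
        /* Copy only as much as the (possibly older) caller knows about;
           fields beyond that stay zero and count as "unused". */
        size_t n = opt->structSize < sizeof local ? opt->structSize : sizeof local;
        memcpy(&local, opt, n);
    }
    /* ... use `local' from here on ... */
}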
Another option is to simply have multiple smaller initialization functions for initializing different subsystems. This could be combined with the optional-arguments system above, as in the sketch that follows.
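A minimal sketch of that combination (the subsystem split and option fields are hypothetical):

#include <stddef.h>
#include <stdint.h>

typedef struct {
    const char *logPath;    /* NULL = unused, keep library default */
    uint32_t    maxSizeKb;  /* 0 = unused */
} MyLib_LogOptArgs_T;

typedef struct {
    uint8_t threadCount;    /* 0 = unused */
} MyLib_WorkerOptArgs_T;

/* One init per subsystem; each can evolve independently. */
void MyLib_InitLogging(MyLib_LogOptArgs_T const *opt) { /* apply options */ (void)opt; }
void MyLib_InitWorkers(MyLib_WorkerOptArgs_T const *opt) { /* apply options */ (void)opt; }

void example(void)
{
    MyLib_InitLogging(&(MyLib_LogOptArgs_T){ .logPath = "/tmp/mylib.log" });
    MyLib_InitWorkers(NULL);  /* all defaults */
}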
Is there a way (I'm sure there is, beyond a runtime check...) to specify that a parameter, or a variable in general, conforms to multiple types? To avoid doing something such as
work (a_printer: PRINTER; a_scanner: SCANNER)
    do
        a_printer.print
        a_scanner.scan
        -- OR, without the second parameter:
        if attached {SCANNER} a_printer as l_scanner then
            l_scanner.scan
        else
            throw RuntimeError -- pseudo-code for raising an exception
        end
    end
If feature work belongs to a class that may have formal generic parameters, it could be defined as taking one argument of the corresponding formal generic type:
class X [D -> {PRINTER, SCANNER}]

feature

    work (device: D)
        do
            device.scan
            device.print
        end

end
Then, at the caller site, one could make the call
x.work (multi_function_device)
where x has an appropriate type, e.g. X [MULTI_FUNCTION_PRINTER].
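Spelled out, the caller side might look like this sketch (assuming MULTI_FUNCTION_PRINTER inherits from both PRINTER and SCANNER, and that both classes can be created with default_create):

use_device
    local
        x: X [MULTI_FUNCTION_PRINTER]
        multi_function_device: MULTI_FUNCTION_PRINTER
    do
        create multi_function_device
        create x
        x.work (multi_function_device)
    end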
If work could also be declared and implemented as a class feature, the temporary variable could be avoided:
{X [like multi_function_device]}.work (multi_function_device)
If the auxiliary class X is not an option, the current version of the language provides no means to declare an argument as conforming to more than one type (e.g., work (d: {PRINTER, SCANNER})), so you would have to resort to preconditions like
work (p: PRINTER)
    require
        attached {SCANNER} p
    do
        check
            from_precondition: attached {SCANNER} p as s
        then
            s.scan
        end
        p.print
    end
I think that, if possible, you should use a common ancestor for your multiple types. If you cannot (because you are using library types), you can create descendant classes (MY_PRINTER inheriting from PRINTER and DEVICE, and MY_SCANNER inheriting from SCANNER and DEVICE). Another way is to use ANY as the type, but it is not the best solution. A sketch of the descendant idea follows.
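A related sketch (MULTI_DEVICE is hypothetical): instead of a separate DEVICE ancestor, a descendant can join the two library types directly, assuming their features do not conflict:

class MULTI_DEVICE
inherit
    PRINTER
    SCANNER
end

A feature can then take the combined type directly:

work (d: MULTI_DEVICE)
    do
        d.print
        d.scan
    end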
I'm not sure I got how to define private functions right.
When I'm writing a Mathematica package, I just do this:
BeginPackage["myPackage`"]
myPublicFunction::usage="myPublicFunction blahblahblah";
Begin["Private"]
myPrivateFunction[input_]:= ... ;
myPublicFunction[input_]:= ... ;
End[]
EndPackage[]
Is this the correct way or am I missing something?
Yep, that's a correct way. It may pay off to understand some of the internal package mechanics. Mathematica contexts are similar to namespaces in other languages. They can be nested. Every symbol belongs to some context. At any given moment, some context is "current". Whenever a new symbol is created, the system must decide to which context the symbol will belong. This happens at parse time. The fundamental quantity (variable) here is $ContextPath. It is basically the search path for symbols: a list of contexts. Whenever the system sees a new symbol, it tests whether a symbol with the same short name (that is, the name of the symbol proper, without the context) exists in some context on the $ContextPath. If it does exist, then the given occurrence is associated with that existing symbol. If it does not, the symbol is created in the current context. Note that this is a dynamic thing: if you change $ContextPath at any moment, the next symbol occurrence can be associated with a different symbol.
Anyway, what BeginPackage does is simply replace the current value of $ContextPath with just {yourPublicPackageContext, "System`"}, plus possibly additional contexts that you publicly import through the second optional argument of BeginPackage. Therefore, all symbols in the "public" section are parsed into the public context, if they are not in "System`" or other contexts that you import. And what EndPackage does is restore the value of $ContextPath to what it was before you started loading the package. So, technically, the usage message is not the only way to make a symbol public in your main context: you could just as well simply type a symbol followed by a semicolon, like myFunction; (this practice is discouraged; I just mention it to clarify the mechanism). Now, what happens when you enter Begin["`Private`"] is that the current context becomes YourContext`Private` (a sub-context). The $ContextPath is not changed. Therefore, any symbol entered there that does not exist in your public package or other imported packages (that is, contexts currently on the $ContextPath) is automatically parsed into the `Private` subcontext.
What really makes these symbols private is that, whenever you import your package into some other context (package), only the main package is added to $ContextPath, but not its sub-packages. Technically, you can break encapsulation by manually adding YourPackage`Private` to $ContextPath (say, PrependTo[$ContextPath, "YourPackage`Private`"]), and then all your private functions and other symbols become public in that particular context where you do the import. Again, this practice is discouraged, but it explains the mechanics. The bottom line is that the notion of private or public can be entirely understood once we know how symbols are parsed, and which manipulations of $ContextPath and $Context (another system variable, giving the current context) are performed by commands such as Begin and BeginPackage. To put it another way, one could in principle emulate the actions of BeginPackage, Begin, End and EndPackage with user-defined code. There are just a few principles operating here (which I tried to outline above), and the mechanism is in fact very much exposed to the user, so that in the rare cases where one wants different behavior, one can make "custom" manipulations of $ContextPath and $Context to arrange some non-standard way of symbol parsing, and thereby control package-scale encapsulation in a non-standard way. I am not encouraging this, just mentioning it to emphasize that the mechanism is much simpler and much more controllable than it may seem on the surface.
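A small session illustrating these mechanics (package and symbol names are hypothetical):

BeginPackage["myPackage`"]
myPublicFunction::usage = "myPublicFunction[x] returns a number.";
Begin["`Private`"]
$Context  (* "myPackage`Private`" *)
myPrivateFunction[x_] := 2 x;
myPublicFunction[x_] := myPrivateFunction[x] + 1;
End[]
EndPackage[]

myPublicFunction[1]                     (* 3 *)
myPrivateFunction[1]                    (* unevaluated: a new Global` symbol *)
myPackage`Private`myPrivateFunction[1]  (* 2: the full name bypasses privacy *)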
I know the basics of methods, procedures, functions and classes, but I always get confused when trying to differentiate among them in the context of object-oriented programming. Can anybody tell me the difference among those, with simple examples?
A class, in current, conventional OOP, is a collection of data (member variables) bound together with the functions/procedures that work on that data (member functions or methods). The class has no relationship to the other three terms aside from the fact that it "contains" (more properly "is associated with") the latter.
The other three terms ... well, it depends.
A function is a collection of computing statements. So is a procedure. In some very anal retentive languages, though, a function returns a value and a procedure doesn't. In such languages procedures are generally used for their side effects (like I/O) while functions are used for calculations and tend to avoid side effects. (This is the usage I tend to favour. Yes, I am that anal retentive.)
Most languages are not that anal retentive, however, and as a result people will use the terms "function" and "procedure" interchangeably, preferring one to the other based on their background. (Modula-* programmers will tend to use "procedure" while C/C++/Java/whatever will tend to use "function", for example.)
A method is just jargon for a function (or procedure) bound to a class. Indeed, not all OOP languages use the term "method". In a typical (but not universal!) implementation, methods have an implied first parameter (called something like this or self) for accessing the containing object. This is not, as I said, universal. Some languages make that first parameter explicit (and thus allow it to be named anything you'd like), while in still others there is no magic first parameter at all.
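For instance, in C++ the hidden parameter can be made visible by writing the same operation once as a method and once as a free function that takes the object explicitly (a sketch, not from the original answer):

#include <iostream>

struct Counter {
    int n = 0;
    void bump() { ++n; }  // method: receives a hidden `this' pointer
};

void bump(Counter &self) { ++self.n; }  // the same thing with an explicit parameter

int main() {
    Counter c;
    c.bump();          // compiler passes &c as the hidden `this'
    bump(c);           // caller passes the object explicitly
    std::cout << c.n;  // prints 2
}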
Edited to add this example:
The following untested and uncompiled C++-like code should show you what kind of things are involved.
#include <iostream>

class MyClass
{
    int memberVariable;

public:
    void setMemberVariableProcedure(int v)
    {
        memberVariable = v;
    }

    int getMemberVariableFunction()
    {
        return memberVariable;
    }
};

void plainOldProcedure(int stuff)
{
    std::cout << stuff;
}

int plainOldFunction(int stuff)
{
    return 2 * stuff;
}
In this code setMemberVariableProcedure and getMemberVariableFunction are both methods.
Procedures, functions and methods are generally alike: they all hold some processing statements.
The only differences I can think of between the three are the places where they are used.
"Methods" are generally used for functions defined inside a class, where access rights like public, protected and private can be specified.
"Procedures" are also functions, but they generally represent a series of functions that need to be carried out, upon the completion of one function or in parallel with another.
Classes are collections of related attributes and methods. Attributes define the object of the class, whereas the methods are the actions done by or on the class.
Hope this was helpful.
Function, method and procedure are homogeneous; each of them is a subroutine that performs some calculation.
A subroutine is:
a method when used in object-oriented programming (OOP). A method can return nothing (void) or something, and/or it can change data outside of the subroutine.
a procedure when it does not return anything, but it can change data outside of the subroutine; think of a SQL stored procedure (not considering output parameters!).
a function when it returns something (its calculated result) without changing data outside of the subroutine. This is the way SQL functions work.
After all, they are all pieces of reusable code that do something, e.g. return data, calculate, or manipulate data.
Method: has no return type (like void).
Function: has a return type.