Eiffel, multiple type conformance: a way to specify that a parameter is a descendant of both A and B?

Is there a way (other than a runtime check) to specify that a parameter, or a variable in general, conforms to multiple types? I'd like to avoid doing something such as:
work (a_printer: PRINTER; a_scanner: SCANNER)
    do
        a_printer.print
        a_scanner.scan
        -- OR, without a second parameter:
        if attached {SCANNER} a_printer as l_scanner then
            l_scanner.scan
        else
            -- raise some runtime exception
        end
    end

If feature work belongs to a class that may have formal generic parameters, it could be defined as taking one argument of the corresponding formal generic type:
class X [D -> {PRINTER, SCANNER}]
feature
    work (device: D)
        do
            device.scan
            device.print
        end
end
Then, at the call site, one could make the call
x.work (multi_function_device)
where x has an appropriate type, e.g. X [MULTI_FUNCTION_PRINTER].
If work could also be declared and implemented as a class feature, the temporary variable x could be avoided:
{X [like multi_function_device]}.work (multi_function_device)
If the auxiliary class X is not an option, the current version of the language provides no way to declare an argument as conforming to more than one type (e.g., work (d: {PRINTER, SCANNER})), so you would have to resort to preconditions like
work (p: PRINTER)
    require
        attached {SCANNER} p
    do
        check
            from_precondition: attached {SCANNER} p as s
        then
            s.scan
        end
        p.print
    end

I think that, if possible, you should use a common ancestor for your multiple types. If you cannot (for instance, because you are using library types), you can create descendant classes (MY_PRINTER inheriting from PRINTER and DEVICE, and MY_SCANNER inheriting from SCANNER and DEVICE), as sketched below. Another option is to use ANY as the type, but that is not the best solution.
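For illustration, here is a minimal sketch of the descendant-class idea in its simplest form, a single combining descendant (MULTI_FUNCTION_DEVICE is an assumed name; PRINTER and SCANNER are taken to be library classes that cannot be changed):
-- Hypothetical descendant combining the two library classes; any feature
-- name clashes between the parents would need rename/select clauses.
class MULTI_FUNCTION_DEVICE
inherit
    PRINTER
    SCANNER
end

-- The feature from the question can then take a single argument:
work (a_device: MULTI_FUNCTION_DEVICE)
    do
        a_device.print
        a_device.scan
    end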

Related

Sharing a type with a generic class?

How do you declare a variable to be the same type as a type parameter used to instantiate a generic class? The following code does not compile:
class
    TEST [G, H -> INTEGER]
feature
    f (i: INDEXABLE [G, H])
        local
            y: H
        do
            y := i.lower -- Type error here.
        end
end
The compiler says that the source of the assignment is not compatible with the target.
In the current implementation, INDEXABLE [G, H] inherits from TABLE [G, INTEGER]. As a result, lower is of type INTEGER, not H. And INTEGER does not conform to the formal generic type H of the class TEST. This explains the error.
To me, it looks like a mistake in the declaration of class INDEXABLE. It should inherit from TABLE [G, H] instead. Then, the example code would compile.
Type anchoring can be used in those cases:
feature
    f (i: INDEXABLE [G, H])
        local
            y: like i.lower
        do
            y := i.lower
        end
Sometimes a generic type is not used as the return type of any accessible feature of a class; in those cases I like to declare a fake feature specifically to allow anchoring:
class SOME_CLASS [G]
feature
    generic_type_anchor: G
        do
            check
                for_anchoring_only: False
                -- This feature should never actually be called;
                -- it exists only to serve as an anchor in type declarations.
            end
        end
end
This is particularly useful with complex inheritance trees, or when descendant classes close the generics, in which case the correct type is not apparent from the declared type; a small sketch follows. Personally, I tend to use type anchoring whenever values are semantically related: it helps express intent, simplifies refactoring (there are fewer repetitions of types that by definition must match) and facilitates covariance.
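For example, a minimal sketch building on SOME_CLASS above (DERIVED and consume are hypothetical names introduced here for illustration):
-- A descendant that closes the generic:
class DERIVED
inherit
    SOME_CLASS [STRING]
end

-- Client code: the local automatically gets the type the generic was closed with,
-- even though that type does not appear in the declaration of d.
consume (d: DERIVED)
    local
        v: like d.generic_type_anchor
    do
        v := "the anchored type resolves to STRING here"
    end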
Also, as a side note, expanded types (like INTEGER) cannot be used polymorphically; you need a reference type for that. If class A is expanded and class B (expanded or reference) inherits from A, you cannot assign a value of type B to a variable of type A, because inheritance from expanded types is implicitly non-conforming. Moreover, the compiler disallows inheriting from the basic expanded types (INTEGER, BOOLEAN, REAL_64, etc.), so the generic constraint in your example makes no sense: H could never be anything but INTEGER.

Eiffel: best practices for creation procedures

Animal
deferred class ANIMAL
inherit
    ANY
        redefine
            default_create
        end
feature
    creator: like Current
    guts: GUTS

    default_create
        do
            create guts
        end

    make_malformed
        do
            default_create
        end
end -- class
PIG
class PIG
inherit
    ANIMAL
        redefine
            make_malformed
        end
create
    default_create,
    make_malformed,
    make_from_insemination
feature
    make_malformed
        do
            Precursor
            set_left_eye (create {MALFORMED_EYE})
        end

    make_from_insemination (some_humain: HUMAIN)
        do
            default_create
            creator := some_humain
        end
end -- class
In my view of best practices, I'd say that:
If there is no particular sense in making a dedicated creation procedure (like my make_malformed example), redefine default_create.
All creation procedures should call default_create and then add their specific behavior (like my make_from_insemination example).
So what, then, is the purpose of the make creation procedures that many Eiffel libraries add, as in create {LINKED_LIST}.make?
Correct me if I'm wrong. Thanks in advance!
Many Eiffel libraries were developed before default_create was added to ANY with the corresponding semantics. This explains why many classes of the base library do not use it.
Also, creation procedures can carry some specific meaning. For example, make can create a container that compares internal objects using reference equality, whereas make_equal can create a container that uses object equality instead (this is the case for HASH_TABLE, though there is an additional argument to indicate the expected number of elements; that argument could be omitted with some other design choice). In such cases, default_create and default_create_equal would be non-symmetric, whereas make and make_equal are symmetric, so the design is more consistent.
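To make the make / make_equal distinction concrete, here is a minimal usage sketch (the key and value types and the capacity of 100 are arbitrary choices):
local
    by_reference, by_object: HASH_TABLE [INTEGER, STRING]
do
    -- Keys compared by reference equality:
    create by_reference.make (100)
    -- Keys compared by object equality, so two equal but distinct
    -- STRING objects designate the same entry:
    create by_object.make_equal (100)
end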
As you point out, default_create should not carry any specific behavior, just some basic things, expected from all descendants.
Whether default_create should be called by all other creation procedures heavily depends on the design. One example, where this is almost a rule, is the library "vision", which encodes in default_create a correct order of initialization, crucial for void safety. It is still possible to write a class (based on this library) that performs the initialization correctly without calling default_create in its creation procedure, but having a ready-to-follow pattern simplifies development.

Why does a System.Array object have an Add() method?

I fully understand that a System.Array is immutable.
Given that, why does it have an Add() method?
It does not appear in the output of Get-Member.
$a = @('now', 'then')
$a.Add('whatever')
Yes, I know this fails and I know why it fails. I am not asking for suggestions to use [System.Collections.ArrayList] or [System.Collections.Generic.List[object]].
[System.Array] implements [System.Collections.IList], and the latter has an .Add() method.
That Array implements IList, an interface that also covers resizable collections, may be surprising; it sounds like there are historical reasons for it.[1]
In C#, this surprise is hard to stumble upon, because you need to explicitly cast to IList or use an IList-typed variable in order to even access the .Add() method.
By contrast, since version 3, PowerShell surfaces even a type's explicit interface implementations as direct members of a given type's instance. (Explicit interface implementations are those referencing the interface explicitly in their implementation, such as IList.Add() rather than just .Add(); explicit interface implementations are not a direct part of the implementing type's public interface, which is why C# requires a cast / interface-typed variable to access them).
As a byproduct of this design, in PowerShell the .Add() method can be called directly on System.Array instances, which makes it easier to stumble upon the problem, because you may not realize that you're invoking an interface method. In the case of an array, the IList.Add() implementation (rightfully) throws an exception stating that the "Collection was of a fixed size"; that is an exception of type NotSupportedException, which is how types implementing an interface are expected to report non-support for parts of it.
What helps is that the Get-Member cmdlet and even just referencing a method without invoking it - simply by omitting () - allow you to inspect a method to determine whether it is native to the type or an interface implementation:
PS> (1, 2).Add # Inspect the definition of a sample array's .Add() method
OverloadDefinitions
-------------------
int IList.Add(System.Object value)
As you can see, the output reveals that the .Add() method belongs to the IList interface.
[1] Optional reading: Collection-related interfaces in .NET with respect to mutability
Disclaimer: This is not my area of expertise. If my explanation is incorrect / can stand improvement, do tell us.
The root of the hierarchy of collection-related interfaces is ICollection (non-generic, since v1) and ICollection<T> (generic, since v2).
(They in turn implement IEnumerable / IEnumerable<T>, whose only member is the .GetEnumerator() method.)
While the non-generic ICollection interface commendably makes no assumptions about a collection's mutability, its generic counterpart (ICollection<T>) unfortunately does - it includes methods for modifying the collection (the docs even state the interface's purpose as "to manipulate generic collections" (emphasis added)). In the non-generic v1 world, the same had happened, just one level below: the non-generic IList includes collection-modifying methods.
By including mutation methods in these interfaces, even read-only/fixed-size lists/collections (those whose number and sequence of elements cannot be changed, but their element values may) and fully immutable lists/collections (those that additionally don't allow changing their elements' values) were forced to implement the mutating methods, while indicating non-support for them with NotSupportedException exceptions.
While read-only collection implementations have existed since v1.1 (e.g., ReadOnlyCollectionBase), in terms of interfaces it wasn't until .NET v4.5 that IReadOnlyCollection<T> and IImmutableList<T> were introduced (the latter, along with all types in the System.Collections.Immutable namespace, is only available as a downloadable NuGet package).
However, since interfaces that derive from (implement) other interfaces can never exclude members, neither IReadOnlyCollection<T> nor IImmutableList<T> can derive from ICollection<T>; they must therefore derive directly from the shared root of enumerables, IEnumerable<T>.
Similarly, more specialized interfaces such as IReadOnlyList<T> that implement IReadOnlyCollection<T> can therefore not implement IList<T> and ICollection<T>.
More fundamentally, starting with a clean slate would offer the following solution, which reverses the current logic:
Make the major collection interfaces mutation-agnostic, which means:
They should neither offer mutation methods,
nor should they make any guarantees with respect to immutability.
Create sub-interfaces that:
add members depending on the specific level of mutability.
make immutability guarantees, if needed.
Using the example of ICollection and IList, we'd get the following interface hierarchy:
IEnumerable<T> # has only a .GetEnumerator() method
ICollection<T> # adds a .Count property (only)
IResizableCollection<T> # adds .Add/Clear/Remove() methods
IList<T> # adds read-only indexed access
IListMutableElements<T> # adds writeable indexed access
IResizableList<T> # must also implement IResizableCollection<T>
IResizableListMutableElements<T> # adds writeable indexed access
IImmutableList<T> # guarantees immutability
Note: Only the salient methods/properties are mentioned in the comments above.
Note that these new ICollection<T> and IList<T> interfaces would offer no mutation methods (no .Add() methods, ..., no assignable indexing).
IImmutableList<T> would differ from IList<T> by guaranteeing full immutability (and, as currently, offer mutation-of-a-copy-only methods).
System.Array could then safely and fully implement IList<T>, without consumers of the interface having to worry about NotSupportedExceptions.
To "Add" to @mklement0's answer: [System.Array] implements [System.Collections.IList], which specifies an Add() method.
But to answer why there is an Add() if it doesn't work: well, we haven't looked at the other properties, e.g. IsFixedSize:
PS > $a = @('now', 'then')
PS > $a.IsFixedSize
True
So, a [System.Array] is just a [System.Collections.IList] that has a fixed size. When we look back at the Add() method, its documentation explicitly says that if the list is read-only or has a fixed size, it throws NotSupportedException, which is exactly what it does.
I believe the essence is not "let's have a method that throws an error for no reason" (or, to expand on that, for no reason other than fulfilling an interface); rather, it gives you a clear warning that you are doing something you shouldn't do.
It's the typical interface idea: you can have an IAnimal type with a GetLeg() method. That method applies to 90% of all animals, which makes it a good candidate for the base interface, but it would throw an error when used against a Snake object, because you didn't check the .HasFeet property first.
The Add() method is a really good method for a list interface, because it is essential for non-read-only, non-fixed-length lists. We are the ones at fault for not checking IsFixedSize before calling an Add() method that cannot work; this falls into the same category as $null checks before trying to use things.
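For illustration, a minimal guard sketch (the values are arbitrary; for a plain array you would typically build a new array instead of calling Add()):
$a = @('now', 'then')
if (-not $a.IsFixedSize -and -not $a.IsReadOnly) {
    $a.Add('whatever')        # safe only for resizable IList implementations
} else {
    $a = $a + 'whatever'      # arrays are fixed-size, so create a new, larger array
}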

Recursive visibility of symbols in Ada packages

Let's say I have a generic vector library. To make it easier to use, I want to instantiate various common forms of the vector library and make them visible in a single package.
I'm trying this:
with GenericVector;
package Vectors is
   package Vectors3 is new GenericVector (3);
   use all type Vectors3.Vector;
   subtype Vector3 is Vectors3.Vector;

   package Vectors4 is new GenericVector (4);
   use all type Vectors4.Vector;
   subtype Vector4 is Vectors4.Vector;
end Vectors;
The end goal is that I want to be able to do with Vectors; use Vectors; and end up with Vector3 and Vector4 types directly available which Just Work.
Naturally, the code above doesn't work. It looks like the use all type statements import the definitions attached to the specified type into the package specification, but those definitions aren't then exported to the user of Vectors. I have to write with Vectors; use Vectors; use all type Vectors.Vectors3.Vector; instead. This is kind of sucky.
How can I do this?
You could simply make Vector3 and Vector4 new types rather than just subtypes. That would implicitly declare all the inherited primitive operations from GenericVector in Vectors.
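A sketch of that approach (the contents of GenericVector aren't shown in the question, so this is only illustrative):
with GenericVector;
package Vectors is
   package Vectors3 is new GenericVector (3);
   package Vectors4 is new GenericVector (4);

   --  The derived types implicitly declare, here in Vectors, the primitive
   --  operations inherited from Vectors3.Vector and Vectors4.Vector, so
   --  "use Vectors" makes those operations directly visible.
   --  (If Vector is a tagged type, write "is new ... with null record" instead.)
   type Vector3 is new Vectors3.Vector;
   type Vector4 is new Vectors4.Vector;
end Vectors;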
use Vectors gives you direct visibility to those identifiers that are declared in Vectors, including those that are declared implicitly. (Implicit declarations are things like the "+" and "-" operators when you declare a new integer type, and inherited operations when you declare a derived type.) But it doesn't give you direct visibility to anything else. In particular, use is not transitive: use all type Vectors3.Vector does not declare any new identifiers in Vectors.
You could accomplish this by declaring renaming identifiers for everything that you want a use Vectors user to see. (For types, you'd have to use subtype since there's no type renaming in Ada.) E.g. in Vectors:
function Dot_Product (V1, V2 : Vectors3.Vector) return Float
  renames Vectors3.Dot_Product;
(I'm just guessing at what the operations in GenericVectors might look like.) Now, anywhere that says use Vectors will be able to use Dot_Product directly. You'd have to do something like this for every identifier, though. If there are a lot of them, this probably isn't a viable solution. (The renaming declaration doesn't have to use the same name Dot_Product.)
Although it may seem annoying that you can't get this kind of transitive visibility, the alternative probably would turn out to be more annoying, because you can't look at Vectors and see what identifiers would be made visible by use Vectors; the result would probably be either unexpected name conflicts or other surprises.

Initialize a GObject with parameters which are not GObject properties?

I have a GObject "A" which creates an instance of another GObject "B" in its constructor.
The "B" object needs to be passed several construction-only properties. Now when creating an instance of object "A" I want to allow passing values for these properties through the constructor of object "A" on to the constructor of object "B".
The only way I have found to do that was to create identical properties for object "A" and pass their values on to the constructor of "B". These properties would have no further meaning to "A" so this seems like a kludge.
Is there a better way to do what I want?
1. Have A inherit from B. Then A has all of B's properties automatically.
2. Don't use properties in A, but instead pass B's properties (or, even better, an already-constructed B object) as parameters to A's constructor.
3. Delay construction of B until A can figure out how it needs to configure B. Add a private flag to A, b_initialized or something, that tells you whether A's internal pointer to B is valid.
Some more clarification on the second suggestion:
A's stuff is constructed in the a_init() function that is provided for by the G_DEFINE_TYPE() macro. But that's not how you get an instance of A. It's usual to write a function, which is part of the public interface of A, like this:
A *a_new(void)
{
    return (A *)g_object_new(TYPE_A, NULL);
}
You can easily extend this to include other parameters:
A *a_new(int b_param_1, int b_param_2)
{
    A *a = (A *)g_object_new(TYPE_A, NULL);
    a->priv->b = b_new(b_param_1, b_param_2);
    return a;
}
This has the disadvantage of leaving your A object in an invalid state (i.e., without a B) if you construct it using g_object_new, for example if you're trying to build it from a GtkBuilder file. If that's a problem, I still strongly suggest refactoring.
Use dependency injection: pass an already-initialized object of type B to the constructor of A.
That way the client using your class can decide whether to pass in different kinds of Bs (if it makes sense, you can even use an interface instead of a class as the B type; writing code against interfaces is generally better than writing code against implementations).
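A minimal sketch of that suggestion, reusing the a_new() shape shown earlier (A, B, TYPE_A and the priv->b field come from the question's setup; b_new() is the hypothetical constructor of B):
A *a_new(B *b)
{
    A *a = (A *)g_object_new(TYPE_A, NULL);
    a->priv->b = g_object_ref(b);   /* A keeps its own reference to the injected B */
    return a;
}

/* Client code decides how B is configured: */
B *b = b_new(b_param_1, b_param_2);
A *a = a_new(b);
g_object_unref(b);                  /* drop the client's reference; a still holds one */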
Deriving A from B only makes sense if A really is a specialization of its parent class.
From the question it isn't clear whether derivation makes sense, but inheritance is an often-overused substitute for composition.
