Eiffel: How do I compare the type of an object to a given type? - eiffel

How do I compare the type of an object to a given type (like the instanceof operator in Java)?
do_stuff (a_type: TYPE)
    local
        an_object: ANY
    do
        an_object := get_from_sky
        if an_object.instanceOf (a_type) then
            io.putstring ("Object is same type as parameter")
        else
            io.putstring ("Object is NOT same type as parameter")
        end
    end

Depending on the generality of the solution, there are different options. First, consider the case when an_object is always attached:
a_type always denotes a reference type.
For reference types one can use the feature attempted (with an alias /) of the class TYPE.
if attached (a_type / an_object) then
-- Conforms.
else
-- Does not conform.
end
a_type can denote either a reference or expanded type.
The feature attempted of the class TYPE is unusable in this case because it would always return an attached object for expanded types. Therefore, the types should be compared directly.
if an_object.generating_type.conforms_to
    ({REFLECTOR}.type_of_type ({REFLECTOR}.detachable_type (a_type.type_id)))
then
    -- Conforms.
else
    -- Does not conform.
end
If an_object could also be Void, the condition should have additional tests for voidness. Denoting the conditions from the cases above with C, the tests that handle detachable an_object would be:
if
-- If an_object is attached.
attached an_object and then C or else
-- If an_object is Void.
not attached an_object and then not a_type.is_attached
then
-- Conforms.
else
-- Does not conform.
end

If you want to check whether the type of an_object is identical to the type of a_type, use the feature {ANY}.same_type; if you only want to check type conformance, use {ANY}.conforms_to:
do_stuff (a_type: TYPE)
    local
        an_object: ANY
    do
        an_object := get_from_sky
        if an_object.same_type (a_type) then
            io.putstring ("Object an_object is same type as argument a_type")
        elseif an_object.conforms_to (a_type) then
            io.putstring ("Object an_object conforms to type of a_type")
        else
            io.putstring ("...")
        end
    end

Related

How to create a user-defined function such as ISNULL? The parameter is an expression and the return type is the type of the replacement value

How can I create a user defined function that behaves in a similar manner to the builtin ISNULL in SQLServer2017?
ISNULL ( check_expression , replacement_value )
Arguments:
check_expression is the expression to be checked for NULL. check_expression can be of any type.
Return type: returns the same type as check_expression.
How can I create my own functions with this behaviour?
I understand what you want to do - and this isn't possible.
You can't create a single user defined function dbo.Foo that returns an int when passed an int and a varchar when passed a varchar for example.
You can use sql_variant as the type of the input parameter and the return type but this isn't really the same.
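A rough sketch of the sql_variant approach could look like this (the function name dbo.MyIsNull is hypothetical); note that the result is always typed sql_variant, so callers may still have to CAST it back:
CREATE FUNCTION dbo.MyIsNull (@check_expression sql_variant, @replacement_value sql_variant)
RETURNS sql_variant
AS
BEGIN
    -- Same NULL handling as the built-in, but the return type is fixed to sql_variant.
    RETURN COALESCE(@check_expression, @replacement_value);
END;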

Eiffel: a way to evaluate a STRING expression like `eval`

Is there a way to evaluate a STRING expression in Eiffel? (A big source of error I know... but a powerful mechanism!)
Looking for a way to build a generic setting mechanism for database fields and classes, I'm trying to do something like this:
fields_mapping: HASH_TABLE [ANY, STRING]
    do
        create Result.make (10)
        Result.put (name, "name")
        Result.put (surname, "surname")
        ...
    end

set_fields
    do
        across
            fields_mapping as field
        loop
            eval ("set_" + field.key + "(" + field.item + ")")
        end
    end
I know I could do it with agents, but that seems less generic to me, as I would have to define every function twice: once in fields_mapping and once in another fields_mapping I have for JSON conversion.
Existing implementations of Eiffel are compiling rather than interpreting. Therefore, evaluation of arbitrary expressions is not supported. Updating arbitrary object fields using given values is still possible using reflection classes:
update_object (field_values: HASH_TABLE [ANY, STRING]; target: ANY)
        -- Update fields of object `target` with values from `field_values`.
    local
        object: REFLECTED_REFERENCE_OBJECT
        fields: like field_properties
        position: like {REFLECTED_OBJECT}.field_count
    do
        create object.make (target)
        fields := field_properties (object.dynamic_type)
        across
            field_values as v
        loop
            if attached fields [v.key] as f then
                position := f.position
                if {REFLECTOR}.field_conforms_to (v.item.generating_type.type_id, f.type_id) then
                    inspect object.field_type (position)
                    when {REFLECTOR_CONSTANTS}.integer_32_type then
                        object.set_integer_32_field (position, {INTEGER_32} / v.item)
                    when ... then
                        -- Other basic types.
                    when {REFLECTOR_CONSTANTS}.reference_type then
                        object.set_reference_field (position, v.item)
                    else
                        print ("Unhandled field type%N")
                    end
                else
                    print ("Field type mismatch%N")
                end
            end
        end
    end
The code above is a bit simplified because it does not handle types SPECIAL and TUPLE as well as user-defined expanded types. The code relies on a helper feature that records type information so that next time it should not be recomputed from scratch:
field_properties (type_id: like {TYPE [ANY]}.type_id):
        HASH_TABLE [TUPLE [position: INTEGER; type_id: INTEGER], STRING]
        -- Positions and types of fields indexed by their name
        -- for a specified type ID `type_id`.
    local
        i: like {REFLECTOR}.field_count_of_type
    do
        Result := field_positions_table [type_id]
        if not attached Result then
            from
                i := {REFLECTOR}.field_count_of_type (type_id)
                create Result.make (i)
                field_positions_table.force (Result, type_id)
            until
                i <= 0
            loop
                Result [{REFLECTOR}.field_name_of_type (i, type_id)] :=
                    [i, {REFLECTOR}.field_static_type_of_type (i, type_id)]
                i := i - 1
            end
        end
    end

field_positions_table: HASH_TABLE [HASH_TABLE
        [TUPLE [position: INTEGER; type_id: INTEGER], STRING], INTEGER]
    once
        create Result.make (1)
    end
Using the feature update_object and assuming an object x has fields foo and bar of types INTEGER_32 and detachable STRING_8 respectively, the following code
local
    field_values: HASH_TABLE [ANY, STRING]
do
    create field_values.make (10)
    field_values ["foo"] := {INTEGER_32} 5
    field_values ["bar"] := " bottles"
    update_object (field_values, x)
    print (x.foo)
    print (x.bar)
end
will print 5 bottles regardless of the previous state of the object x.

Is there a way to specify various types for a parameter

Is there a way to restrict the conformance of a type to be a collection of types?
Let me explain by example:
give_foo (garbage: ANY): STRING
    do
        if attached {STRING} garbage as l_s then
            Result := l_s
        elseif attached {INTEGER} garbage as l_int then
            Result := l_int.out
        elseif attached {JSON_OBJECT} garbage as l_json then
            Result := l_json.representation
        elseif attached {RABBIT} garbage as l_animal then
            Result := l_animal.name + l_animal.color
        else
            Result := ""
            check
                unchecked_type_that_compiler_should_be_able_to_check_for_me: False
            end
        end
    end
Couldn't I do something like this (similar to what a convert clause could do)?
give_foo (garbage: {STRING, INTEGER, JSON_OBJECT, RABBIT}): STRING
    do
        if attached {STRING} garbage as l_s then
            Result := l_s
        elseif attached {INTEGER} garbage as l_int then
            Result := l_int.out
        elseif attached {JSON_OBJECT} garbage as l_json then
            Result := l_json.representation
        elseif attached {RABBIT} garbage as l_animal then
            Result := l_animal.name + l_animal.color
        else
            Result := ""
            check
                unchecked_type_that_compiler_should_be_able_to_check_for_me: False
            end
        end
    end
or something like
not_garbage_hash_table: HASH_TABLE[{INTEGER, STRING, ANIMAL}, STRING]
Conformance to a collection of types is not supported for several reasons:
Calling a feature on an expression of such a type becomes ambiguous because the same name could refer to completely unrelated features.
In one case we need a sum (disjoint union) of types, in another a plain union, in yet another an intersection, and so on. And then there could be combinations. One would need an algebra built on top of the type system, which becomes too complicated.
If the requirement is to check that an argument is one of expected types, the following precondition can be used:
across {ARRAY [TYPE [detachable ANY]]}
<<{detachable STRING}, {INTEGER}, {detachable JSON_OBJECT}>> as t
some argument.generating_type.conforms_to (t.item) end
A common practice to process an expression of a potentially unknown type is a visitor pattern that deals with known cases and falls back to a default for unknown ones.
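A minimal sketch of that idea, assuming the visited classes (here a hypothetical RABBIT) are under your control and can be given an accept feature:
deferred class
    FOO_VISITOR
feature
    visit_rabbit (a_rabbit: RABBIT): STRING
            -- Handle the known case RABBIT.
        deferred
        end

    visit_default (a_value: ANY): STRING
            -- Fall back to a default for unknown cases.
        deferred
        end
end

class
    RABBIT
feature
    name: STRING = "Roger"

    color: STRING = "white"

    accept (a_visitor: FOO_VISITOR): STRING
            -- Let `a_visitor` process this object.
        do
            Result := a_visitor.visit_rabbit (Current)
        end
end
Each known type routes accept to its own visit_ feature, so the dispatch happens once in the class itself rather than in a chain of attached tests.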
Possibly place Alexander's solution in a BOOLEAN query so it can be reused?
is_string_integer_or_json_object (v: detachable ANY): BOOLEAN
        -- Does `v' conform to {STRING}, {INTEGER}, or {JSON_OBJECT}?
    do
        Result := across {ARRAY [TYPE [detachable ANY]]}
            <<{detachable STRING}, {INTEGER}, {detachable JSON_OBJECT}>> as t
            some v.generating_type.conforms_to (t.item) end
    end
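The query could then be reused, for example, in a precondition of the original routine (the body here is only illustrative):
give_foo (garbage: ANY): STRING
    require
        expected_type: is_string_integer_or_json_object (garbage)
    do
        Result := garbage.out
    end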

PostgreSQL C aggregate function: How to return multiple values in transition function [duplicate]

Is the only way to pass an extra parameter to the final function of a PostgreSQL aggregate to create a special TYPE for the state value?
e.g.:
CREATE TYPE geomvaltext AS (
geom public.geometry,
val double precision,
txt text
);
And then to use this type as the state variable so that the third parameter (text) finally reaches the final function?
Why can't aggregates pass extra parameters to the final function themselves? Is there an implementation reason?
So we could easily construct, for example, aggregates taking a method:
SELECT ST_MyAgg(accum_number, 'COMPUTE_METHOD') FROM blablabla
Thanks
You can define an aggregate with more than one parameter.
I don't know if that solves your problem, but you could use it like this:
CREATE OR REPLACE FUNCTION myaggsfunc(integer, integer, text) RETURNS integer
IMMUTABLE STRICT LANGUAGE sql AS
$f$
SELECT CASE $3
WHEN '+' THEN $1 + $2
WHEN '*' THEN $1 * $2
ELSE NULL
END
$f$;
CREATE AGGREGATE myagg(integer, text) (
SFUNC = myaggsfunc(integer, integer, text),
STYPE = integer
);
It could be used like this:
CREATE TABLE mytab
AS SELECT * FROM generate_series(1, 10) i;
SELECT myagg(i, '+') FROM mytab;
myagg
-------
55
(1 row)
SELECT myagg(i, '*') FROM mytab;
myagg
---------
3628800
(1 row)
I solved a similar issue by making a custom aggregate function that did all the operations at once and stored their states in an array.
CREATE AGGREGATE myagg(integer)
(
INITCOND = '{ 0, 1 }',
STYPE = integer[],
SFUNC = myaggsfunc
);
and:
CREATE OR REPLACE FUNCTION myaggsfunc(agg_state integer[], agg_next integer)
RETURNS integer[] IMMUTABLE STRICT LANGUAGE 'plpgsql' AS $$
BEGIN
agg_state[1] := agg_state[1] + agg_next;
agg_state[2] := agg_state[2] * agg_next;
RETURN agg_state;
END;
$$;
Then made another function that selected one of the results based on the second argument:
CREATE OR REPLACE FUNCTION myagg_pick(agg_state integer[], agg_fn character varying)
RETURNS integer IMMUTABLE STRICT LANGUAGE 'plpgsql' AS $$
BEGIN
CASE agg_fn
WHEN '+' THEN RETURN agg_state[1];
WHEN '*' THEN RETURN agg_state[2];
ELSE RETURN 0;
END CASE;
END;
$$;
Usage:
SELECT myagg_pick(myagg("accum_number"), 'COMPUTE_METHOD') FROM "mytable" GROUP BY ...
Obvious downside of this is the overhead of performing all the functions instead of just one. However when dealing with simple operations such as adding, multiplying etc. it should be acceptable in most cases.
You would have to rewrite the final function itself, and in that case you might as well write a set of new aggregate functions, one for each possible COMPUTE_METHOD. If the COMPUTE_METHOD is a data value or implied by a data value, then a CASE statement can be used to select the appropriate aggregate method. Alternatively, you may want to create a custom composite type with fields for accum_number and COMPUTE_METHOD, and write a single new aggregate function that uses this new data type.
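As a sketch of the CASE idea (the aggregates st_myagg_sum and st_myagg_mul are hypothetical, one per COMPUTE_METHOD, and compute_method is assumed to be a column of the table):
SELECT CASE compute_method
           WHEN '+' THEN st_myagg_sum(accum_number)
           WHEN '*' THEN st_myagg_mul(accum_number)
       END AS result
FROM blablabla
GROUP BY compute_method;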

Compare arrays for equality, ignoring order of elements

I have a table with 4 array columns. The results are like:
ids     | signed_ids | new_ids | new_ids_signed
{1,2,3} | {2,1,3}    | {4,5,6} | {6,5,4}
Is there any way to compare ids and signed_ids so that they come out equal, ignoring the order of the elements?
You can combine the "is contained by" (<@) and "contains" (@>) operators:
(array1 <@ array2 AND array1 @> array2)
The additional module intarray provides operators for arrays of integer, which are typically (much) faster. Install once per database with (in Postgres 9.1 or later):
CREATE EXTENSION intarray;
Then you can:
SELECT uniq(sort(ids)) = uniq(sort(signed_ids));
Or:
SELECT ids @> signed_ids AND ids <@ signed_ids;
The functions and operators used here (uniq, sort, and the specialized @> / <@) come from intarray.
In the second example, operator resolution arrives at the specialized intarray operators because both the left and the right argument are of type integer[].
Both expressions ignore order and duplicates among the elements. Further reading in the manual.
intarray operators only work for arrays of integer (int4), not bigint (int8) or smallint (int2) or any other data type.
Unlike the default generic operators, intarray operators do not accept NULL values in arrays. NULL in any involved array raises an exception. If you need to work with NULL values, you can default to the standard, generic operators by schema-qualifying the operator with the OPERATOR construct:
SELECT ARRAY[1,4,null,3]::int[] OPERATOR(pg_catalog.@>) ARRAY[3,1]::int[];
The generic operators can't use indexes with an intarray operator class and vice versa.
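For example (a sketch, assuming a table tbl with an integer[] column ids), an index that supports the intarray operators would use the gin__int_ops operator class:
CREATE INDEX tbl_ids_gin_idx ON tbl USING gin (ids gin__int_ops);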
Related:
GIN index on smallint[] column not used or error "operator is not unique"
The simplest thing to do is sort them and compare them sorted. See sorting arrays in PostgreSQL.
Given sample data:
CREATE TABLE aa(ids integer[], signed_ids integer[]);
INSERT INTO aa(ids, signed_ids) VALUES (ARRAY[1,2,3], ARRAY[2,1,3]);
The best thing to do, if the array entries are always integers, is to use the intarray extension, as Erwin explains in his answer. It's a lot faster than any pure-SQL formulation.
Otherwise, for a general version that works for any data type, define an array_sort(anyarray):
CREATE OR REPLACE FUNCTION array_sort(anyarray) RETURNS anyarray AS $$
SELECT array_agg(x order by x) FROM unnest($1) x;
$$ LANGUAGE 'SQL';
and use it to sort and compare the sorted arrays:
SELECT array_sort(ids) = array_sort(signed_ids) FROM aa;
There's an important caveat:
SELECT array_sort( ARRAY[1,2,2,4,4] ) = array_sort( ARRAY[1,2,4] );
will be false. This may or may not be what you want, depending on your intentions.
Alternately, define a function array_compare_as_set:
CREATE OR REPLACE FUNCTION array_compare_as_set(anyarray,anyarray) RETURNS boolean AS $$
SELECT CASE
WHEN array_dims($1) <> array_dims($2) THEN
'f'
WHEN array_length($1,1) <> array_length($2,1) THEN
'f'
ELSE
NOT EXISTS (
SELECT 1
FROM unnest($1) a
FULL JOIN unnest($2) b ON (a=b)
WHERE a IS NULL or b IS NULL
)
END
$$ LANGUAGE 'SQL' IMMUTABLE;
and then:
SELECT array_compare_as_set(ids, signed_ids) FROM aa;
This is subtly different from comparing two array_sorted values. array_compare_as_set will eliminate duplicates, making array_compare_as_set(ARRAY[1,2,3,3],ARRAY[1,2,3]) true, whereas array_sort(ARRAY[1,2,3,3]) = array_sort(ARRAY[1,2,3]) will be false.
Both of these approaches will have pretty bad performance. Consider ensuring that you always store your arrays sorted in the first place.
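A sketch of that idea (the trigger and function names are hypothetical, reusing array_sort and the sample table aa from above):
CREATE OR REPLACE FUNCTION aa_sort_arrays() RETURNS trigger AS $$
BEGIN
    -- Normalize the arrays on every write so later comparisons are plain equality checks.
    NEW.ids := array_sort(NEW.ids);
    NEW.signed_ids := array_sort(NEW.signed_ids);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER aa_sort_arrays_trg
    BEFORE INSERT OR UPDATE ON aa
    FOR EACH ROW EXECUTE PROCEDURE aa_sort_arrays();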
If your arrays have no duplicates and are of the same dimension:
use the array contains operator @>
AND array_length, where the length must match on both sides
select (string_agg(a,',' order by a) = string_agg(b,',' order by b)) from
(select unnest(array[1,2,3,2])::text as a,unnest(array[2,2,3,1])::text as b) A
