Passing parameters to a macro with a loop

The task is to fill a deeply nested structure with a macro, where the names of the structure components are similar and can be constructed by a simple loop over an index.
For example, the structure is root-level1-level2-level3-level4.
I want to fill it with the following nested macros:
DEFINE iterate_menges.
do &4 times.
fill &1 &2 sy-index level4.
enddo.
END-OF-DEFINITION.
DEFINE fill.
cs_root-sheet&1-&2-level&3-&4 = 'some_value'.
END-OF-DEFINITION.
But this approach doesn't work: sy-index is treated as literal text. The error
component cs_root-sheet1-level2-levelsy-index-level4 is not found
is shown; a numeric literal, however, works wonderfully.
What syntax should be used here?
ADDITION: here is an example snippet I found on SCN, and it works perfectly. Why is that so?
DEFINE ADD_MAPPING.
p_c = &1.
CONDENSE p_c.
CONCATENATE 'p_old' p_c INTO p_c.
ASSIGN (p_c) TO <fs>.
WRITE <fs>.
END-OF-DEFINITION.
DO 14 TIMES.
ADD_MAPPING sy-index.
ENDDO.
P.S. Yes, I know macros are undebuggable, unsafe and totally shouldn't be used, but I am interested in this particular problem and not best-practice advice.

Using dynamic programming, change your fill macro to:
DATA l_field TYPE string.
FIELD-SYMBOLS <l_value> TYPE any.
DEFINE fill.
l_field = &3.
CONDENSE l_field.
CONCATENATE 'cs_root-sheet&1-&2-level' l_field '-&4' INTO l_field.
ASSIGN (l_field) TO <l_value>.
IF sy-subrc = 0.
<l_value> = 'some_value'.
ELSE.
" TODO: error handling?
ENDIF.
END-OF-DEFINITION.
This will work, although you should check sy-subrc after the ASSIGN: that parameter is only known at runtime, so errors in it will not be caught at compile time the way errors in the other parameters would.
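For readers outside ABAP, the mechanics of this dynamic-assignment trick can be sketched in Python: the component path is assembled as a string at runtime and resolved step by step, with an explicit existence check standing in for the sy-subrc test after ASSIGN. The nested-dict layout and the fill helper are hypothetical stand-ins for the ABAP structure and macro.

```python
def fill(root, sheet, part, level, leaf, value):
    """Set root['sheet<sheet>'][part]['level<level>'][leaf] = value,
    reporting failure instead of raising (like checking sy-subrc)."""
    path = [f"sheet{sheet}", part, f"level{level}", leaf]
    node = root
    for comp in path[:-1]:
        if comp not in node:
            return False  # component not found (sy-subrc <> 0)
        node = node[comp]
    if path[-1] not in node:
        return False
    node[path[-1]] = value
    return True

# Hypothetical structure mirroring cs_root-sheet1-...-level4
root = {"sheet1": {"part": {"level3": {"level4": None}}}}
fill(root, 1, "part", 3, "level4", "some_value")   # succeeds
fill(root, 2, "part", 3, "level4", "x")            # fails: no sheet2
```

As in the ABAP version, the path is not validated at "compile time"; only the runtime lookup can discover a missing component.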
You can also add compile-time validation for the upper bound of your DO loop, since you then know the maximum value of sy-index. To do this, add a non-executing reference in the iterate_menges macro:
DEFINE iterate_menges.
IF 1 = 0. " compile-time boundary validation test
cs_root-sheet&1-&2-level&4-level4 = ''.
ENDIF.
DO &4 TIMES.
fill &1 &2 sy-index level4.
ENDDO.
END-OF-DEFINITION.
A second method is to add a CASE statement. This can only be used if you know there will always be a certain number of fields in that part (surely there should always be at least one...). So if you know the lower bound of your DO loop, you can code the following as an optimization:
CASE &3.
WHEN 1. " set 1
WHEN 2. " set 2
WHEN 3. " set 3
" ...
WHEN OTHERS.
" dynamic set
ENDCASE.
Since the dynamic set is slower, optimizing tight loops is always a good idea.
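The CASE optimization amounts to a static fast path for the indices you can enumerate, with the dynamic lookup only as a fallback. A hypothetical Python sketch of the same shape (the dict stands in for the structure, and the function for the macro body):

```python
def set_level_static(sheet, index, value):
    # Fast paths for indices known "at compile time": direct assignment,
    # no runtime name construction needed.
    if index == 1:
        sheet["level1"] = value
    elif index == 2:
        sheet["level2"] = value
    else:
        # Dynamic fallback: build the component name at runtime
        # (the slower path, analogous to ASSIGN with a dynamic name).
        key = f"level{index}"
        if key in sheet:
            sheet[key] = value

sheet = {"level1": None, "level2": None, "level3": None}
set_level_static(sheet, 1, "a")            # static branch
set_level_static(sheet, 3, "some_value")   # dynamic fallback
```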

The system is doing exactly what is stated in the documentation. Unfortunately, in this case, the English translation lacks some details that the German original states more precisely, IMHO. Usage of a macro is not a call of any sort; it is a textual replacement that happens before compilation. The parameters are replaced, not evaluated - they cannot be evaluated, because in most cases the value is not known at compile time, only at runtime. To do what you want, you will have to use dynamic access techniques like ASSIGN COMPONENT ... OF ...

What you are trying to do is impossible, because macros exist only at compilation time. They are not a means of modularization in ABAP.
One could even write one's own version of Brainfuck with them. Look at the following example.
REPORT zzz.
TYPES: BEGIN OF t1,
sheet1 TYPE c,
sheet2 TYPE c,
END OF t1.
DATA:
cs_root TYPE t1.
DEFINE test.
cs_root-sheet&1 = 'costam'.
END-OF-DEFINITION.
DEFINE brainfuck.
&4=>&1&3 &2.
END-OF-DEFINITION.
START-OF-SELECTION.
brainfuck IF_SYSTEM_UUID_STATIC~CREATE_UUID_X16 ) ( CL_SYSTEM_UUID.
test sy-index.
Answering your comment under the other answer. The solution could look like this.
REPORT zzz.
TYPES: BEGIN OF t4,
level4 TYPE c,
END OF t4.
TYPES: BEGIN OF t3,
level1 TYPE t4,
level2 TYPE t4,
END OF t3.
TYPES: BEGIN OF t2,
level1 TYPE t3,
level2 TYPE t3,
END OF t2.
TYPES: BEGIN OF t1,
sheet1 TYPE t2,
sheet2 TYPE t2,
END OF t1.
CLASS lcl_test DEFINITION FINAL.
PUBLIC SECTION.
CLASS-METHODS:
test
IMPORTING
i_1 TYPE i
i_2 TYPE i
i_3 TYPE i
CHANGING
cs_root TYPE t1.
ENDCLASS.
CLASS lcl_test IMPLEMENTATION.
METHOD test.
ASSIGN COMPONENT |sheet{ i_1 }| OF STRUCTURE cs_root TO FIELD-SYMBOL(<fs_sheet>).
IF sy-subrc = 0.
ASSIGN COMPONENT |level{ i_2 }| OF STRUCTURE <fs_sheet> TO FIELD-SYMBOL(<fs_level1>).
IF sy-subrc = 0.
ASSIGN COMPONENT |level{ i_3 }| OF STRUCTURE <fs_level1> TO FIELD-SYMBOL(<fs_level2>).
IF sy-subrc = 0.
ASSIGN COMPONENT 'level4' OF STRUCTURE <fs_level2> TO FIELD-SYMBOL(<fs_level3>).
IF sy-subrc = 0.
<fs_level3> = 'some_value'.
ENDIF.
ENDIF.
ENDIF.
ENDIF.
ENDMETHOD.
ENDCLASS.
DEFINE test.
lcl_test=>test(
EXPORTING
i_1 = &1
i_2 = &2
i_3 = &3
CHANGING
cs_root = &4
).
END-OF-DEFINITION.
DATA: gs_root TYPE t1.
START-OF-SELECTION.
DO 14 TIMES.
test 1 2 sy-index gs_root.
ENDDO.

Related

Optimizing custom fill of a 2d array in Julia

I'm a little new to Julia and am trying to use the fill! method to improve code performance. Currently, I read a 2d array from a file, say read_array, and perform row operations on it to get a processed_array as follows:
function preprocess(matrix)
# Initialise
processed_array= Array{Float64,2}(undef, size(matrix));
#first row of processed_array is the difference of first two row of matrix
processed_array[1,:] = (matrix[2,:] .- matrix[1,:]) ;
#last row of processed_array is difference of last two rows of matrix
processed_array[end,:] = (matrix[end,:] .- matrix[end-1,:]);
#all other rows of processed_array is the mean-difference of other two rows
processed_array[2:end-1,:] = (matrix[3:end,:] .- matrix[1:end-2,:]) .*0.5 ;
return processed_array
end
However, when I try using the fill! method I get a MethodError.
processed_array = copy(matrix)
fill!(processed_array [1,:],d[2,:]-d[1,:])
MethodError: Cannot convert an object of type Matrix{Float64} to an object of type Float64
I'll be glad if someone can tell me what I'm missing and also suggest a method to optimize the code. Thanks in advance!
fill!(A, x) is used to fill the array A with a single value x, so it's not what you want here anyway.
What you could do for a little performance gain is to broadcast the assignments. That is, use .= instead of =. If you want, you can also use the @. macro to automatically add the dots everywhere for you (for maybe cleaner/easier-to-read code):
function preprocess(matrix)
out = Array{Float64,2}(undef, size(matrix))
@views @. out[1,:] = matrix[2,:] - matrix[1,:]
@views @. out[end,:] = matrix[end,:] - matrix[end-1,:]
@views @. out[2:end-1,:] = 0.5 * (matrix[3:end,:] - matrix[1:end-2,:])
return out
end
For optimal performance, I think you probably want to write the loops explicitly and use multithreading with a package like LoopVectorization.jl for example.
PS: Note that in your code comments you wrote "cols" instead of "rows", and you wrote "mean" but take a difference. (Not sure it was intentional.)
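For comparison, the same preprocessing can be sketched in Python/NumPy, where slice assignment into a preallocated array plays the role of the broadcasted .= above. This is an illustration of the pattern, not the asker's code:

```python
import numpy as np

def preprocess(matrix):
    out = np.empty_like(matrix, dtype=float)
    # First row: forward difference of the first two rows.
    out[0, :] = matrix[1, :] - matrix[0, :]
    # Last row: backward difference of the last two rows.
    out[-1, :] = matrix[-1, :] - matrix[-2, :]
    # Interior rows: central difference of the neighbouring rows.
    out[1:-1, :] = 0.5 * (matrix[2:, :] - matrix[:-2, :])
    return out

m = np.array([[0.0, 0.0], [1.0, 2.0], [4.0, 6.0]])
p = preprocess(m)
```

Each slice assignment writes directly into `out`, so no temporary array is created for the left-hand side.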

How to create table array in MATLAB?

I would like to store multiple tables in one array. In my code below, I am creating two tables T1 and T2. I want to store these tables into one variable MyArray.
LastName = {'Sanchez';'Johnson';'Li';'Diaz';'Brown'};
Age = [38;43;38;40;49];
Smoker = logical([1;0;1;0;1]);
Height = [71;69;64;67;64];
Weight = [176;163;131;133;119];
BloodPressure = [124 93; 109 77; 125 83; 117 75; 122 80];
T1 = table(LastName,Age,Smoker);
T2 = table(Height,Weight,BloodPressure);
% The code below does not work
MyArray(1) = T1;
MyArray(2) = T2;
I know I can use a cell array but I would like to know if it is possible to create a table datatype array in MATLAB.
Because table already implements () indexing, it's not really clear to me how you would expect to index MyArray. Your example almost looks to me like MyArray = [T1, T2].
I'm not sure if it satisfies your needs, but you can have table objects with table variables, like this:
T = table(T1, T2);
You can then use indexing as normal, e.g.
T.T1.LastName{2}
There was a time when
builtin('subsref',T1,substruct('()',{1}))
(for any custom class T1*) would skip calling the class-specific overloaded subsref and use the built-in method instead. This would be equivalent to T1(1), but ignoring whatever the class defined for that syntax. Similarly for subsasgn, which is the subscripted assignment operation T1(2)=T2. This allowed the creation and use of arrays of a class.
However, this seems to no longer work. Maybe it is related to the classdef-style classes, as the last time I used the trick above was before those were introduced.
I would suggest that you use cell arrays for this (even if the above still worked, I would not recommend it).
* Note that table is a custom class, you can edit table to see the source code.
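For comparison, the cell-array recommendation maps naturally onto the generic-container pattern in other environments: keep heterogeneous tables in a plain sequence rather than forcing them into a typed array. A hypothetical Python/pandas sketch:

```python
import pandas as pd

t1 = pd.DataFrame({"LastName": ["Sanchez", "Johnson"], "Age": [38, 43]})
t2 = pd.DataFrame({"Height": [71, 69], "Weight": [176, 163]})

my_array = [t1, t2]      # like a MATLAB cell array {T1, T2}
second = my_array[1]     # like MyArray{2}: each table keeps its own columns
```

The list imposes no schema on its elements, which is exactly why it can hold tables with different columns, and exactly why a homogeneous "table array" cannot.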

Custom searchsortedfirst method

I'm kind of new to the Julia language, so I'm still struggling with the Julia documentation. Here is a piece of it; I am looking for an explanation of, specifically, the bolded part.
Base.Sort.searchsortedfirst — Function.
searchsortedfirst(a, x, [by=,] [lt=,]
[rev=false])
Returns the index of the first value in a greater than or equal to x,
according to the specified order. Returns length(a)+1 if x is greater
than all values in a. a is assumed to be sorted.
Website
My array looks like this:
A = Vector{Record}()
where
type Record
y::Int64
value::Float64
end
Now here is my problem. I would like to call the above-mentioned method on my array and obtain the Record whose y equals a given x (Record.y == x). I guess I have to write a 'by' transform or an 'lt' comparator? Or both?
Any help would be appreciated :)
@crstnbr has provided a perfectly good answer for the case of one-off uses of searchsortedfirst. I thought it worth adding that there is also a more permanent solution. If your type Record exhibits a natural ordering, then just extend Base.isless and Base.isequal to your new type. The following example code shows how this works for some new type you might define:
struct MyType ; x::Float64 ; end #Define some type of my own
yvec = MyType.(sort!(randn(10))) #Build a random vector of my type
yval = MyType(0.0) #Build a value of my type
searchsortedfirst(yvec, yval) #ERROR: this use of searchsortedfirst will throw a MethodError since julia doesn't know how to order MyType
Base.isless(y1::MyType, y2::MyType)::Bool = y1.x < y2.x #Extend (aka overload) isless so it is defined for the new type
Base.isequal(y1::MyType, y2::MyType)::Bool = y1.x == y2.x #Ditto for isequal
searchsortedfirst(yvec, yval) #Now this line works
Some points worth noting:
1) In the step where I overload isless and isequal, I preface the method definition with Base.. This is because the isless and isequal functions are originally defined in Base, where Base refers to the core julia package that is automatically loaded every time you start julia. By prefacing with Base., I ensure that my new methods are added to the current set of methods for these two functions, rather than replacing them. Note, I could also achieve this by omitting Base. but including a line beforehand of import Base: isless, isequal. Personally, I prefer the way I've done it above (for the overly pedantic, you can also do both).
2) I can define isless and isequal however I want. It is my type and my method extensions. So you can choose whatever you think is the natural ordering for your new type.
3) The operators <, <=, ==, >=, and > all actually just call isless and isequal under the hood, so all of these operators will now work with your new type, e.g. MyType(1.0) > MyType(2.0) returns false.
4) Any julia function that uses the comparative operators above will now work with your new type, as long as the function is defined parametrically (which almost everything in Base is).
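The same "teach the language your type's natural ordering once" idea exists elsewhere. A Python sketch: define __lt__ and __eq__ (with functools.total_ordering filling in the rest), and generic algorithms such as the bisect binary search work with the type from then on. MyType here is a hypothetical re-creation of the example type above:

```python
from bisect import bisect_left
from functools import total_ordering

@total_ordering
class MyType:
    def __init__(self, x):
        self.x = x
    def __lt__(self, other):          # natural ordering, defined once
        return self.x < other.x
    def __eq__(self, other):
        return self.x == other.x

# A sorted vector of MyType values.
yvec = [MyType(v) for v in (-2.0, -0.5, 0.3, 1.7)]

# Works because bisect only needs <, which MyType now provides;
# returns the index of the first element >= MyType(0.0).
idx = bisect_left(yvec, MyType(0.0))
```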
You can just define a custom less-than operation and give it to searchsortedfirst via lt keyword argument:
julia> type Record
y::Int64
value::Float64
end
julia> A = Vector{Record}()
0-element Array{Record,1}
julia> push!(A, Record(3,3.0))
1-element Array{Record,1}:
Record(3, 3.0)
julia> push!(A, Record(4,3.0))
2-element Array{Record,1}:
Record(3, 3.0)
Record(4, 3.0)
julia> push!(A, Record(5,3.0))
3-element Array{Record,1}:
Record(3, 3.0)
Record(4, 3.0)
Record(5, 3.0)
julia> searchsortedfirst(A, 4, lt=(r,x)->r.y<x)
2
Here, (r,x)->r.y<x is an anonymous function defining your custom less-than. It takes two arguments (the elements to be compared). The first will be the elements from A, the second is the fixed element to compare to.

Odd "Check_VIOLATION" failed test case in Eiffel

The main issue, shown in the picture below, is that when a "check Result end" statement is added, it automatically fails and displays a "CHECK_VIOLATION" error in the debugger.
Also, the HASH_TABLE didn't store all items given to it, but I fixed that by switching to HASH_TABLE[G, INTEGER] instead of the current HASH_TABLE[INTEGER, G].
My main problem is: why does it always throw a check violation and fail whenever a "check Result end" statement is reached? Maybe the HAS[...] function is bad?
Currently, any test case feature with "check Result end" evaluates to false and throws CHECK_VIOLATION.
code:
class
MY_BAG[G -> {HASHABLE, COMPARABLE}]
inherit
ADT_BAG[G]
create
make_empty, make_from_tupled_array
convert
make_from_tupled_array ({ARRAY [TUPLE [G, INTEGER]]})
feature{NONE} -- creation
make_empty
do
create table.make(1)
end
make_from_tupled_array (a_array: ARRAY [TUPLE [x: G; y: INTEGER]])
require else
non_empty: a_array.count >= 0
nonnegative: is_nonnegative(a_array)
do
create table.make(a_array.count)
across a_array as a
loop
table.force (a.item.y, a.item.x)
end
end
feature -- attributes
table: HASH_TABLE[INTEGER, G]
counter: INTEGER
testing code:
t6: BOOLEAN
local
bag: MY_BAG [STRING]
do
comment ("t6: repeated elements in construction")
bag := <<["foo",4], ["bar",3], ["foo",2], ["bar",0]>> -- test passes
Result := bag ["foo"] = 5 -- test passes
check Result end -- test fails (really weird but as soon as check statement comes it fails)
Result := bag ["bar"] = 3
check Result end
Result := bag ["baz"] = 0
end
Most probably, ADT_BAG stands for an abstraction of a multiset (also called a bag), which keeps items and can tell how many items equal to a given one it contains (unlike a set, where at most one such item may be present). If so, it is correct to use HASH_TABLE [INTEGER, G] as storage: its keys are the elements and its items are the element counts.
So, if we add the same element multiple times, its count should increase. In the initialization line we add 4 elements "foo", 3 elements "bar", 2 more elements "foo", and 0 more elements "bar". As a result we should have a bag with 6 elements "foo" and 3 elements "bar", and no elements "baz".
According to this analysis, either the initialization is incorrect (the numbers for "foo" should be different) or the comparison should be against 6 instead of 5.
As to the implementation of the class MY_BAG, the idea would be to have a feature add (or whatever name is specified in the interface of ADT_BAG) that:
Checks if there are items with the given key.
If there are none, adds the new element with the given count.
Otherwise, replaces the current element count with the sum of the current element count and the given element count.
For simplicity, the initialization procedure should use this feature to add new items instead of storing them in the hash table directly, so that repeated items are processed correctly.
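The accumulation semantics described above can be sketched in a few lines of Python with collections.Counter; this is a hypothetical illustration of the intended bag behaviour, not the Eiffel implementation:

```python
from collections import Counter

bag = Counter()
# Same input as the test's construction array:
for elem, n in [("foo", 4), ("bar", 3), ("foo", 2), ("bar", 0)]:
    bag[elem] += n   # repeated keys accumulate instead of overwriting

# "foo": 4 + 2 = 6, "bar": 3 + 0 = 3, absent "baz" reads as 0
```

This reproduces the analysis above: the test should compare bag["foo"] against 6, not 5.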

Matlab object array to table (or struct array)

Testing matlab2015a. I was using a struct array that at some point I converted to a table with struct2table. This gave a nice table with columns named as the fields of the struct.
Some time later, and for unrelated reasons, these structs are now classes (or objects; I'm not sure about the standard naming in MATLAB). struct2table rejects them; directly applying table(objarray) gives a one-column table with one object per row. I seem unable to find an object2table that does the obvious thing...
The closest I've come is struct2table(arrayfun(@struct, objarray)), which is a bit inelegant and spits a warning per array item. So, any nicer ideas?
Edit: example as follows
>> a.x=1; a.y=2; b.x=3; b.y=4;
>> struct2table([a;b])
ans =
x y
_ _
1 2
3 4
This is the original and desired behavior. Now create a file ab.m with contents
classdef ab; properties; x; y; end end
and execute
>> a=ab; a.x=1; a.y=2; b=ab; b.x=3; b.y=4;
trying to get a table without the arcane incantation gives you:
>> table([a;b])
ans =
Var1
________
[1x1 ab]
[1x1 ab]
>> struct2table([a;b])
Error using struct2table (line 26)
S must be a scalar structure, or a structure array with one column
or one row.
>> object2table([a;b])
Undefined function or variable 'object2table'.
And the workaround:
>> struct2table(arrayfun(@struct, [a;b]))
Warning: Calling STRUCT on an object prevents the object from hiding
its implementation details and should thus be avoided. Use DISP or
DISPLAY to see the visible public details of an object. See 'help
struct' for more information.
Warning: Calling STRUCT on an object prevents the object from hiding
its implementation details and should thus be avoided. Use DISP or
DISPLAY to see the visible public details of an object. See 'help
struct' for more information.
ans =
x y
_ _
1 2
3 4
Reading your question, I am not sure you really need to convert an object to a table. Is there any advantage to a table?
Nevertheless, your approach using struct is basically right. I would just wrap it so that it's easy to use and does not display a warning.
Wrap the functionality in a class:
classdef tableconvertible;
methods
function t=table(obj)
w=warning('off','MATLAB:structOnObject');
t=struct2table(arrayfun(@struct, obj));
warning(w);
end
end
end
And use it in your class:
classdef ab<tableconvertible; properties; x; y; end end
Usage:
table([a;b])
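The mixin idea carries over to other languages: a small base class supplies a generic "to rows" conversion built on the objects' public fields, and any class opting in simply inherits it. A hypothetical Python sketch (TableConvertible and AB are illustrative names, not a real API):

```python
class TableConvertible:
    @classmethod
    def to_rows(cls, objs):
        """Convert a list of instances into a list of dicts, one row each,
        using the instances' public attributes as columns."""
        return [vars(o) for o in objs]

class AB(TableConvertible):
    def __init__(self, x=None, y=None):
        self.x = x
        self.y = y

rows = AB.to_rows([AB(1, 2), AB(3, 4)])
```

As with the MATLAB version, the conversion logic lives in one place, and classes opt in through inheritance rather than reimplementing it.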
