Testing MATLAB R2015a. I was using a struct array that at some point I converted to a table with struct2table. This gave a nice table with columns named after the fields of the struct.
Some time later, and for unrelated reasons, these structs are now classes (or objects, I'm not sure about the standard naming in MATLAB). struct2table rejects them; directly applying table(objarray) gives a one-column table with one object per row. I seem unable to find an object2table that does the obvious thing...
The closest I've come is struct2table(arrayfun(@struct, objarray)), which is a bit inelegant and spits out a warning per array item. So, any nicer ideas?
Edit: example as follows
>> a.x=1; a.y=2; b.x=3; b.y=4;
>> struct2table([a;b])
ans =
x y
_ _
1 2
3 4
This is the original and desired behavior. Now create a file ab.m with contents
classdef ab; properties; x; y; end end
and execute
>> a=ab; a.x=1; a.y=2; b=ab; b.x=3; b.y=4;
Trying to get a table without the arcane incantation gives you:
>> table([a;b])
ans =
Var1
________
[1x1 ab]
[1x1 ab]
>> struct2table([a;b])
Error using struct2table (line 26)
S must be a scalar structure, or a structure array with one column
or one row.
>> object2table([a;b])
Undefined function or variable 'object2table'.
And the workaround:
>> struct2table(arrayfun(@struct, [a;b]))
Warning: Calling STRUCT on an object prevents the object from hiding
its implementation details and should thus be avoided. Use DISP or
DISPLAY to see the visible public details of an object. See 'help
struct' for more information.
Warning: Calling STRUCT on an object prevents the object from hiding
its implementation details and should thus be avoided. Use DISP or
DISPLAY to see the visible public details of an object. See 'help
struct' for more information.
ans =
x y
_ _
1 2
3 4
Reading your question, I am not sure whether you really should convert an object array to a table. Is there any advantage to a table?
Nevertheless, your approach using struct is basically right. I would just wrap it so that it is easy to use and does not display a warning.
Wrap the functionality in a class:
classdef tableconvertible
    methods
        function t = table(obj)
            % Temporarily suppress the struct-on-object warning, then restore it
            w = warning('off', 'MATLAB:structOnObject');
            t = struct2table(arrayfun(@struct, obj));
            warning(w);
        end
    end
end
And use it in your class:
classdef ab<tableconvertible; properties; x; y; end end
Usage:
table([a;b])
Related
I've got two new-style MATLAB classes, B and C, both concrete subclasses of an abstract parent, A. A is a subclass of hgsetget (a handle class). I'd like to put them in an array in MATLAB and treat them both as As. They are defined, roughly, as:
classdef A <hgsetget
methods
function foo(this)
%does some common stuff, then
this.fooWorker;
end
end %public Methods
methods(Abstract, Access=protected)
fooWorker(this);
end %abstract Methods;
end
classdef B < A
methods(Access=protected)
function fooWorker(this)
%implementation
end
end %protected Methods;
end
However if I do this:
arr = [b c]; % where b & c are objects of type B & C respectively.
arr(1).foo;
arr(2).foo;
MATLAB will tell me both are of type B, and if I call the method foo from A (which dispatches to the abstract fooWorker that both implement), it executes, essentially, two copies of b.
However, if I reverse the order:
arr = [c b];
It tells me that both are of type C, and if I try to execute foo on both, it executes, essentially, two copies of c.
Any ideas how to use subclasses in a polymorphic way?
I know that I can put them in a cell array and get 90% of what I need. But that is a bit of a kludge.
You can do this now in R2011a by subclassing matlab.mixin.Heterogeneous. So, for example, following the code in your question, the abstract class would be:
classdef A < matlab.mixin.Heterogeneous
methods
function foo(this)
disp('In abstract parent');
this.fooWorker;
end
end
methods(Abstract, Access=protected)
fooWorker(this);
end
end
and the subclasses would look like:
classdef B < A
methods(Access=protected)
function fooWorker(this)
disp('In B');
end
end
end
and similarly for a class 'C'. This then gives the following output from MATLAB:
>> b = B;
>> c = C;
>> arr = [b, c];
>> arr(1).foo
In abstract parent
In B
>> arr(2).foo
In abstract parent
In C
>>
Unfortunately, all the elements of an array in MATLAB must be of the same type. When you concatenate different classes, MATLAB will attempt to convert them all to the same class.
If you've defined one of your classes as being inferior or superior to the other (using the InferiorClasses attribute or the INFERIORTO/SUPERIORTO functions), then the methods of the more superior class are invoked. If you haven't specified a relationship between the classes, then the two objects have equal precedence and MATLAB calls the left-most object method. This is likely why arr = [b c]; creates an array of class B and arr = [c b]; creates an array of class C.
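For illustration, here is a minimal sketch of declaring such a precedence with the InferiorClasses attribute (class names reuse A, B and C from the question; how MATLAB then converts the inferior objects during concatenation still depends on the dominant class providing a suitable constructor or converter, as in Option 2 below):
classdef (InferiorClasses = {?C}) B < A
    % Declares C inferior to B, so B is the dominant class when b and c
    % are concatenated in either order.
    methods (Access = protected)
        function fooWorker(this)
            % implementation
        end
    end
end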
Option 1: Cell arrays
If you want to execute the foo method defined for class B on object b, and also execute the foo method defined for class C on object c, then you will likely have to use cell arrays and the function CELLFUN. If foo doesn't return a value, you could do something like this:
arr = {b,c};
cellfun(@foo,arr); % Invoke foo on each element of the cell array
Option 2: Fun with jury-rigging polymorphic behavior
For fun, I came up with a potential solution which technically works, but has some limitations. To illustrate the idea, I've put together a few sample classes similar to what you listed in the question. Here's the abstract superclass classA:
classdef classA < hgsetget
properties
stuff
end
properties (Access = protected)
originalClass
end
methods
function foo(this)
disp('I am type A!');
if ~strcmp(class(this),this.originalClass)
this = feval(this.originalClass,this);
end
this.fooWorker;
end
end
methods (Abstract, Access = protected)
fooWorker(this);
end
end
And here's an example of the subclass classB (classC is exactly the same, with every B replaced by C and vice versa):
classdef classB < classA
methods
function this = classB(obj)
switch class(obj)
case 'classB' % An object of classB was passed in
this = obj;
case 'classC' % Convert input from classC to classB
this.stuff = obj.stuff;
this.originalClass = obj.originalClass;
otherwise % Create a new object
this.stuff = obj;
this.originalClass = 'classB';
end
end
end
methods (Access = protected)
function fooWorker(this)
disp('...and type B!');
end
end
end
The constructors for classB and classC are designed such that the two classes can be converted to one another. The property originalClass is initialized at creation and indicates what the original class of the object was. This property will remain unchanged if an object is converted from one class to another.
Within the foo method, the current class of the object passed in is checked against its original class. If they differ, the object is first converted back to its original class before invoking the fooWorker method. Here's a test:
>> b = classB('hello'); % Create an instance of classB
>> c = classC([1 2 3]); % Create an instance of classC
>> b.foo % Invoke foo on b
I am type A!
...and type B!
>> c.foo % Invoke foo on c
I am type A!
...and type C!
>> arr = [b c] % Concatenate b and c, converting both to classB
arr =
1x2 classB handle
Properties:
stuff
Methods, Events, Superclasses
>> arr(1).foo % Invoke foo on element 1 (formerly b)
I am type A!
...and type B!
>> arr(2).foo % Invoke foo on element 2 (formerly c)
I am type A!
...and type C!
One key limitation (aside from being a little ugly) is the case where classB and classC each have properties that the other does not. In such a case, converting to the other class and then converting back would likely cause those properties to be lost (i.e. reset to their default values). However, if one class were a subclass of the other, such that it had all the same properties and then some, there is a solution. You could set the subclass to be superior to the superclass (see the discussion above) so that concatenating objects of the two classes always converts the superclass objects to the subclass. When they are converted back within "polymorphic" methods (like foo above), no object data will be lost.
I don't know how workable a solution this is, but maybe it will at least give you some interesting ideas. ;)
I'm kinda new to the Julia language, so I'm still struggling with reading the Julia documentation. Here is a piece of it; I am looking for an explanation of the bolded part specifically.
Base.Sort.searchsortedfirst — Function.
searchsortedfirst(a, x, [by=<transform>,] [lt=<comparison>,] [rev=false])
Returns the index of the first value in a greater than or equal to x,
according to the specified order. Returns length(a)+1 if x is greater
than all values in a. a is assumed to be sorted.
My array looks like this:
A = Vector{Record}()
where
type Record
y::Int64
value::Float64
end
Now here is my problem. I would like to call the above-mentioned method on my array and obtain the Record whose y equals a given x (Record.y == x). I guess I have to write a 'by' transform or an 'lt' comparator, or both?
Any help would be appreciated :)
@crstnbr has provided a perfectly good answer for the case of one-off uses of searchsortedfirst. I thought it worth adding that there is also a more permanent solution. If your type Record exhibits a natural ordering, then just extend Base.isless and Base.isequal to your new type. The following example code shows how this works for some new type you might define:
struct MyType ; x::Float64 ; end #Define some type of my own
yvec = MyType.(sort!(randn(10))) #Build a random vector of my type
yval = MyType(0.0) #Build a value of my type
searchsortedfirst(yvec, yval) #ERROR: this use of searchsortedfirst will throw a MethodError since julia doesn't know how to order MyType
Base.isless(y1::MyType, y2::MyType)::Bool = y1.x < y2.x #Extend (aka overload) isless so it is defined for the new type
Base.isequal(y1::MyType, y2::MyType)::Bool = y1.x == y2.x #Ditto for isequal
searchsortedfirst(yvec, yval) #Now this line works
Some points worth noting:
1) In the step where I overload isless and isequal, I preface the method definition with Base.. This is because the isless and isequal functions are originally defined in Base, where Base refers to the core julia package that is automatically loaded every time you start julia. By prefacing with Base., I ensure that my new methods are added to the current set of methods for these two functions, rather than replacing them. Note, I could also achieve this by omitting Base. but including a line beforehand of import Base: isless, isequal. Personally, I prefer the way I've done it above (for the overly pedantic, you can also do both).
2) I can define isless and isequal however I want. It is my type and my method extensions. So you can choose whatever you think is the natural ordering for your new type.
3) The operators <, <=, ==, >=, and > all actually just call isless and isequal under the hood, so all of these operators will now work with your new type, e.g. MyType(1.0) > MyType(2.0) returns false.
4) Any julia function that uses the comparative operators above will now work with your new type, as long as the function is defined parametrically (which almost everything in Base is).
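As a quick sketch of that point (assuming the MyType definitions above), generic functions from Base now work on vectors of MyType with no further code:
zvec = MyType.(randn(5))    #Another random vector of MyType
sort(zvec)                  #Sorting works, since sort falls back on isless
maximum(zvec)               #So do maximum and minimum
MyType(1.0) < MyType(2.0)   #Returns true: < dispatches to the new isless method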
You can just define a custom less-than operation and give it to searchsortedfirst via the lt keyword argument:
julia> type Record
y::Int64
value::Float64
end
julia> A = Vector{Record}()
0-element Array{Record,1}
julia> push!(A, Record(3,3.0))
1-element Array{Record,1}:
Record(3, 3.0)
julia> push!(A, Record(4,3.0))
2-element Array{Record,1}:
Record(3, 3.0)
Record(4, 3.0)
julia> push!(A, Record(5,3.0))
3-element Array{Record,1}:
Record(3, 3.0)
Record(4, 3.0)
Record(5, 3.0)
julia> searchsortedfirst(A, 4, lt=(r,x)->r.y<x)
2
Here, (r,x)->r.y<x is an anonymous function defining your custom less-than. It takes two arguments (the elements to be compared): the first will be an element from A, the second is the fixed value to compare against.
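Since the question also mentions by: an alternative sketch is to wrap the search value in a Record (the value field is just a dummy) and pass a key function, which is then applied to both sides of the comparison:
julia> searchsortedfirst(A, Record(4, 0.0), by = r -> r.y)
2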
There is a task to fill a deep structure with a macro, where the names of the structure components are similar and can be constructed by a simple loop with an index.
For example, the structure is root-level1-level2-level3-level4.
I want to fill it with the following nested macros:
DEFINE iterate_menges.
do &4 times.
fill &1 &2 sy-index level4.
enddo.
END-OF-DEFINITION.
DEFINE fill.
cs_root-sheet&1-&2-level&3-&4 = 'some_value'.
END-OF-DEFINITION.
But this concept doesn't work: sy-index is treated as literal text. The error
component cs_root-sheet1-level2-levelsy-index-level4 is not found
is shown, whereas a numeric literal works perfectly.
What syntax should be used here?
ADDITION: here is an example snippet I found on SCN and it works perfectly. Why is that so?
DEFINE ADD_MAPPING.
p_c = &1.
CONDENSE p_c.
CONCATENATE 'p_old' p_c INTO p_c.
ASSIGN (p_c) TO <fs>.
WRITE <fs>.
END-OF-DEFINITION.
DO 14 TIMES.
ADD_MAPPING sy-index.
ENDDO.
P.S. Yes, I know macros are undebuggable, unsafe and totally shouldn't be used, but I am interested in this particular problem, not in best-practice advice.
Using dynamic programming, change your fill macro to:
DATA l_field TYPE string.
FIELD-SYMBOLS <l_value> TYPE any.
DEFINE fill.
l_field = &3.
CONDENSE l_field.
CONCATENATE 'cs_root-sheet&1-&2-level' l_field '-&4' INTO l_field.
ASSIGN (l_field) TO <l_value>.
IF sy-subrc = 0.
<l_value> = 'some_value'.
ELSE.
" TODO: error handling?
ENDIF.
END-OF-DEFINITION.
This will work, although you should check sy-subrc after the ASSIGN, because that component name is only known at runtime, so errors will not be caught at compile time the way they are for the textually substituted parameters.
You can also add compile-time validation for the upper bound of your DO loop, since you then know the maximum value of sy-index. To do this, you can add a non-executing reference in the iterate_menges macro:
DEFINE iterate_menges.
IF 1 = 0. " compile-time boundary validation test
cs_root-sheet&1-&2-level&4-level4 = ''.
ENDIF.
DO &4 TIMES.
fill &1 &2 sy-index level4.
ENDDO.
END-OF-DEFINITION.
A second method is to add a CASE statement. This can only be used if you know there will always be a certain number of fields in that part (surely there should always be at least one...). So if you know the lower bound of your DO loop, you can code the following as an optimization:
CASE &3.
WHEN 1. " set 1
WHEN 2. " set 2
WHEN 3. " set 3
" ...
WHEN OTHERS.
" dynamic set
ENDCASE.
Since the dynamic set is slower, optimizing tight loops is always a good idea.
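For completeness, a sketch of how that hybrid could look in the fill macro, reusing l_field and <l_value> from above (the static branches cover the known low indices, the dynamic branch handles the rest):
DEFINE fill.
  CASE &3.
    WHEN 1.
      cs_root-sheet&1-&2-level1-&4 = 'some_value'.
    WHEN 2.
      cs_root-sheet&1-&2-level2-&4 = 'some_value'.
    WHEN OTHERS.
      " Fall back to the dynamic ASSIGN variant shown above
      l_field = &3.
      CONDENSE l_field.
      CONCATENATE 'cs_root-sheet&1-&2-level' l_field '-&4' INTO l_field.
      ASSIGN (l_field) TO <l_value>.
      IF sy-subrc = 0.
        <l_value> = 'some_value'.
      ENDIF.
  ENDCASE.
END-OF-DEFINITION.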
The system is doing exactly what is stated in the documentation. Unfortunately, in this case, the English translation is lacking some details, as opposed to the German original text, which is more to the point, IMHO. Usage of a macro is not a call of some sort; it's a textual replacement that happens before compilation. The parameters are replaced, not evaluated; they cannot be evaluated because in most cases the value is not known at compile time, only at runtime. To do what you want to do, you will have to use dynamic access techniques like ASSIGN COMPONENT ... OF ...
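A minimal sketch of that technique for one nesting level (the structure layout and component names here are only illustrative; the next answer shows a complete version for the nested structure from the question):
DATA lv_comp TYPE string.
FIELD-SYMBOLS <lv_level> TYPE any.

DO 4 TIMES.
  lv_comp = |level{ sy-index }|.
  " Resolve the component name at runtime instead of pasting sy-index as text
  ASSIGN COMPONENT lv_comp OF STRUCTURE cs_root TO <lv_level>.
  IF sy-subrc = 0.
    <lv_level> = 'some_value'.
  ENDIF.
ENDDO.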
What you are trying to do is impossible, because macros are pure compile-time text expansion. They are not a real means of modularization in ABAP.
One could even write one's own version of Brainfuck with them. Look at the following example.
REPORT zzz.
TYPES: BEGIN OF t1,
sheet1 TYPE c,
sheet2 TYPE c,
END OF t1.
DATA:
cs_root TYPE t1.
DEFINE test.
cs_root-sheet&1 = 'costam'.
END-OF-DEFINITION.
DEFINE brainfuck.
&4=>&1&3 &2.
END-OF-DEFINITION.
START-OF-SELECTION.
brainfuck IF_SYSTEM_UUID_STATIC~CREATE_UUID_X16 ) ( CL_SYSTEM_UUID.
test sy-index.
To answer your comment under the other answer: the solution could look like this.
REPORT zzz.
TYPES: BEGIN OF t4,
level4 TYPE c,
END OF t4.
TYPES: BEGIN OF t3,
level1 TYPE t4,
level2 TYPE t4,
END OF t3.
TYPES: BEGIN OF t2,
level1 TYPE t3,
level2 TYPE t3,
END OF t2.
TYPES: BEGIN OF t1,
sheet1 TYPE t2,
sheet2 TYPE t2,
END OF t1.
CLASS lcl_test DEFINITION FINAL.
PUBLIC SECTION.
CLASS-METHODS:
test
IMPORTING
i_1 TYPE i
i_2 TYPE i
i_3 TYPE i
CHANGING
cs_root TYPE t1.
ENDCLASS.
CLASS lcl_test IMPLEMENTATION.
METHOD test.
ASSIGN COMPONENT |sheet{ i_1 }| OF STRUCTURE cs_root TO FIELD-SYMBOL(<fs_sheet>).
IF sy-subrc = 0.
ASSIGN COMPONENT |level{ i_2 }| OF STRUCTURE <fs_sheet> TO FIELD-SYMBOL(<fs_level1>).
IF sy-subrc = 0.
ASSIGN COMPONENT |level{ i_3 }| OF STRUCTURE <fs_level1> TO FIELD-SYMBOL(<fs_level2>).
IF sy-subrc = 0.
ASSIGN COMPONENT 'level4' OF STRUCTURE <fs_level2> TO FIELD-SYMBOL(<fs_level3>).
IF sy-subrc = 0.
<fs_level3> = 'some_value'.
ENDIF.
ENDIF.
ENDIF.
ENDIF.
ENDMETHOD.
ENDCLASS.
DEFINE test.
lcl_test=>test(
EXPORTING
i_1 = &1
i_2 = &2
i_3 = &3
CHANGING
cs_root = &4
).
END-OF-DEFINITION.
DATA: gs_root TYPE t1.
START-OF-SELECTION.
DO 14 TIMES.
test 1 2 sy-index gs_root.
ENDDO.
I am trying to initialize a derived type using a parameter declaration. When I compile, I get the following error
Element in INTEGER(4) array constructor at (1) is CHARACTER(1).
The user-defined kind values ip and dp come from fasst_global. They are:
integer,parameter:: ip = selected_int_kind(8)
integer,parameter:: dp = selected_real_kind(15,307)
I have tried using 1_ip instead of 1 as the first element and it made no difference. What am I doing wrong?
module fasst_derived_types
use fasst_global
implicit none
type fasst_default_soil
integer(ip):: sid
character(len=2):: ssname
real(dp):: dens,pors,ssemis,ssalb,shc,smin,smax,salpha,svgn
real(dp):: sspheat,sorgan,spsand,spsilt,spclay,spgravel
end type fasst_default_soil
type(fasst_default_soil),parameter:: fasst_soil(1) = fasst_default_soil( &
(/1, 'GW',1.947_dp,0.293_dp, 0.92_dp,0.40_dp,1.1197e-2_dp, &
0.01_dp,0.293_dp,22.6125_dp, 3.45_dp, 820.0_dp, &
0.0_dp, 5.0_dp, 2.0_dp, 2.0_dp,91.0_dp/))
end module fasst_derived_types
You are trying to use two constructors here:
an array constructor;
a structure constructor.
You have the correct syntax for each, but you are using them incorrectly.
The array constructor (/.../) constructs an array. But you want an array of derived-type values (well, one value), not an array as the component list of a single derived-type value. The syntax error comes from attempting to create an array from elements of different/incompatible types.
So, instead you want
type(fasst_default_soil),parameter:: fasst_soil(1) = (/fasst_default_soil(1_ip,'GW', ...)/)
Or, as you just want a single-element array, you don't even need to construct that array of derived types.
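A sketch of that simpler form, reusing the values from the question (a scalar structure constructor conforms to the size-one named constant, so the (/.../) can be dropped; drop the (1) as well if a scalar constant is enough):
type(fasst_default_soil),parameter:: fasst_soil(1) = fasst_default_soil( &
   1_ip,'GW',1.947_dp,0.293_dp, 0.92_dp,0.40_dp,1.1197e-2_dp, &
   0.01_dp,0.293_dp,22.6125_dp, 3.45_dp, 820.0_dp, &
   0.0_dp, 5.0_dp, 2.0_dp, 2.0_dp,91.0_dp)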
I currently have a function which I am batch running. It outputs its results to a cell array. I would like to export each of the outputs from the cell array to their own variable.
As such I have a variable id which records the ID of each layer. This process is possible manually as follows:
>> output =
[300x300x2 double] [300x300x2 double] [300x300x2 double]
>> [a1,a2,a3]=deal(output{:});
Where the number after a represents the ID. Is it possible to automate this command so that the prefix (in this case a) can be set by the user and the id filled in automatically? That is, could I have variables set as below and use these in the deal command to name my new variables?
>> id =
1 2 3
>> prefix =
a
Any ideas?
Thanks.
You can construct your own custom expression as a string and then evaluate it with eval() (or evalin() if it's in a function and you want to return the output to your workspace).
function deal_output(output, id, prefix)
id = id(:);
vars = strcat(prefix, cellstr(num2str(id)))';
myexpr = ['[', sprintf('%s,', vars{1:end-1}), vars{end}, '] = deal(output{:})'];
evalin('caller', myexpr)
>> output = num2cell(4:6);
>> id = 1:3;
>> prefix = 'a';
>> deal_output(output, id, prefix)
a1 =
4
a2 =
5
a3 =
6
Also check out the join.m file on FileExchange for avoiding the ugly sprintf.
Perhaps something like:
function dealinto(prefix, cellarray)
% DEALINTO
% USAGE
% dealinto('a', {'one', 'two', 'three'})
% Causes these variables to be set in the base workspace:
% a1: 'one'
% a2: 'two'
% a3: 'three'
for i=1:numel(cellarray)
assignin('base', [prefix num2str(i)], cellarray{i});
end
If you replace 'base' with 'caller' in the above, the variables will be written into the calling function's workspace. I don't recommend doing this, though, for the same reason that I would not recommend calling LOAD with no output arguments inside a function: arbitrarily writing into a running function's workspace isn't very safe.
If you need something like this for use inside of functions but don't want it just writing variables willy nilly, you could do the same thing that LOAD does, which is to give you a structure whose fields are the variables you would otherwise produce.
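For example, a sketch of that struct-returning variant (the name dealintostruct is just illustrative):
function s = dealintostruct(prefix, cellarray)
% DEALINTOSTRUCT Return a struct whose fields prefix1, prefix2, ... hold the
% elements of cellarray, instead of creating variables in another workspace.
s = struct();
for i = 1:numel(cellarray)
    s.([prefix num2str(i)]) = cellarray{i};
end
Calling s = dealintostruct('a', output) then gives you s.a1, s.a2, s.a3, which you can pass around like any other value.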
Do you really have to output them as completely separate variables? It would be much more efficient to use dynamic field names in a structure, as this avoids the use of eval(). See the MATLAB blogs on avoiding eval().
In this case
m = length(output);
[a(1:m).data] = deal(output{:});
This will return a structure array; the length calculation is added so it will work for different sizes of the output cell array.
Each array can be accessed individually with an id number.
a(1).data
a(2).data
a(3).data
I can't seem to add a comment, so I'm adding an answer in the form of a question ^_^
Is it not better to pre-create the list of names and check them against the caller workspace using genvarname?
Names = cell(1, numel(data));
for idx = 1:numel(data)
    Names{idx} = [Prefix, num2str(idx)];
end
Names = genvarname(Names, who);
dealinto(Names, data)
This would prevent any invalid names from being created in the existing workspace; dealinto would then need to be modified as:
function dealinto(Names, Values)
% Names and Values are cell arrays of the same length
for idx = 1:length(Names)
    assignin('caller', Names{idx}, Values{idx});
end