How do we concatenate the fields of a dynamic work area? The idea is shown in the code below:
LOOP AT lt_final INTO DATA(ls_final).
  CONCATENATE ls_final-field1
              ls_final-field2
              ls_final-field3
              ls_final-field4
              ls_final-field5
         INTO ls_attachment SEPARATED BY lc_tab. "lc_tab is horizontal tab
  APPEND ls_attachment TO lt_attachment.
  CLEAR: ls_attachment.
ENDLOOP.
(This code will be used for sending an email attachment.) Now, my problem is that the internal table in the code above is a dynamic internal table, so I don't know in advance how many fields it has or what their names are.
How do I concatenate the fields? Any ideas, please help.
LOOP AT <dynamic_table> INTO DATA(ls_final).
  CONCATENATE ls_final-(?)
              ls_final-(?)
              ls_final-(?)
              ls_final-(?)
              ls_final-(?)
              "or more fields insert here depending on dynamic table
         INTO ls_attachment SEPARATED BY lc_tab. "lc_tab is horizontal tab
  APPEND ls_attachment TO lt_attachment.
  CLEAR: ls_attachment.
ENDLOOP.
FIELD-SYMBOLS: <lv_field> TYPE ANY.

LOOP AT lt_final ASSIGNING FIELD-SYMBOL(<ls_final>).
  DO.
    " Access the components of the current line generically, by position
    ASSIGN COMPONENT sy-index OF STRUCTURE <ls_final> TO <lv_field>.
    IF sy-subrc EQ 0.
      IF sy-index EQ 1.
        ls_attachment = <lv_field>.
      ELSE.
        ls_attachment = ls_attachment && lc_tab && <lv_field>.
      ENDIF.
    ELSE.
      " No component with this index exists, so the line is done
      EXIT.
    ENDIF.
  ENDDO.
  " Append and clear as in the original snippet
  APPEND ls_attachment TO lt_attachment.
  CLEAR ls_attachment.
ENDLOOP.
I hope it is self-explanatory, but:
You can use the system variable sy-index; it is incremented automatically by SAP on each pass of the DO loop.
In the first step, just copy the value, since there is nothing to concatenate yet (otherwise there would be an unnecessary lc_tab at the beginning of the string).
ASSIGN COMPONENT just reads your structure by index, so you don't need to know the field names.
data:
  lv_attachment type string,
  lv_index      type i value 1.

field-symbols:
  <lv_value> type any.

while 1 = 1.
  assign component lv_index of structure ls_final to <lv_value>.
  if sy-subrc <> 0.
    exit.
  endif.
  concatenate lv_attachment <lv_value> into lv_attachment separated by lc_tab.
  lv_index = lv_index + 1.
endwhile.
" drop the separator that was added before the very first value
shift lv_attachment left deleting leading lc_tab.
Hope it helps.
You can use the CL_ABAP_CONTAINER_UTILITIES class for that task, specifically its method FILL_CONTAINER_C.
Here is a sample that populates a dynamic table and concatenates the fields of each line into a container field:
PARAMETERS: p_tab TYPE tabname. "flat character-like type holding the table name

FIELD-SYMBOLS: <fs_tab> TYPE STANDARD TABLE.
DATA tab TYPE REF TO data.

CREATE DATA tab TYPE TABLE OF (p_tab).
ASSIGN tab->* TO <fs_tab>.

SELECT * UP TO 100 ROWS
  INTO TABLE <fs_tab>
  FROM (p_tab).

LOOP AT <fs_tab> ASSIGNING FIELD-SYMBOL(<fs_line>).
  CALL METHOD CL_ABAP_CONTAINER_UTILITIES=>FILL_CONTAINER_C
    EXPORTING
      IM_VALUE               = <fs_line>
    IMPORTING
      EX_CONTAINER           = DATA(container)
    EXCEPTIONS
      ILLEGAL_PARAMETER_TYPE = 1
      OTHERS                 = 2.
  CONDENSE container.
  " do smth
ENDLOOP.
In Lua, I have a set of tables:
Column01 = {}
Column02 = {}
Column03 = {}
ColumnN = {}
I am trying to access these tables dynamically depending on a value. So, later on in the programme, I am creating a variable like so:
local currentColumn = "Column" .. variable
Where variable is a number 01 to N.
I then try to do something to all elements in my array like so:
for i = 1, #currentColumn do
    currentColumn[i] = *do something*
end
But this doesn't work as currentColumn is a string and not the name of the table. How can I convert the string into the name of the table?
If I understand correctly, you'd like to access a variable based on its name as a string? I think what you're looking for is the table of globals, _G.
Recall that tables can have strings as keys. Think of _G as one giant table where every global variable or table you make is just a key with a value.
Column1 = {"A", "B"}
string1 = "Column".."1" --concatenate column and 1. You might switch out the 1 for a variable. If you use a variable, make sure to use tostring, like so:
var = 1
string2 = "Column"..tostring(var) --becomes "Column1"
print(_G[string2]) --prints the location of the table. You can index it like any other table, like so:
print(_G[string2][1]) --prints the 1st item of the table. (A)
So if you wanted to loop through 5 tables called Column1, Column2, etc., you could use a for loop to build each name as a string and then access it through _G.
C1 = {"A"} --I shortened the names to just C for ease of typing this example.
C2 = {"B"}
C3 = {"C"}
C4 = {"D"}
C5 = {"E"}
for i = 1, 5 do
    local v = "C"..tostring(i)
    print(_G[v][1])
end
Output
A
B
C
D
E
Edit: I'm a doofus and I overcomplicated everything. There's a much simpler solution. If you only want to access the columns within a loop instead of accessing individual columns at certain points, the easier solution here for you might just be to put all your columns into a bigger table then index over that.
columns = {{"A", "1"},{"B", "R"}} --each anonymous table is a column. If it has a key attached to it like "column1 = {"A"}" it can't be numerically iterated over.
--You could also insert on the fly.
column3 = {"C"}
table.insert(columns, column3)
for i, v in ipairs(columns) do
    print(i, v[1]) --i is the index and v is the column table. This prints which column you're on and gets its 1st item.
end
Output:
1 A
2 B
3 C
To future readers: If you want a general solution to getting tables by their name as a string, the first solution with _G is what you want. If you have a situation like the asker, the second solution should be fine.
I have a cell array called BodyData in MATLAB that has around 139 columns and 3500-odd rows of skeletal tracking data.
I need to extract all rows between two timestamp strings that I have (they mark when an event happened), e.g.
BodyData{} =
    column 1          column 2   column 3 ...
    '10:15:15.332'    'BASE05'   ...
    ...
    '10:17:33.230'    'BASE05'   ...
The two timestamps should match a value in the array but might also be within a few ms of those in the array e.g.
TimeStamp1 = '10:15:15.560'
TimeStamp2 = '10:17:33.233'
I have several questions!
How can I return an array of all the data between the two string values, plus or minus a small threshold of say 100 ms?
Also, can I add another condition saying that the string value in column 2 must match a given value, and otherwise ignore the row? For example, only return the rows between timestamps A and B if column 2 is 'BASE02'.
Many thanks,
The best approach to the first part of your problem is probably to change from strings to numeric date values. In MATLAB this can be done quite painlessly with datenum.
For the second part you can just use logical indexing... this is where you put a condition (i.e. that the second column is 'BASE02') within the indexing expression.
A self-contained example:
% some example data:
BodyData = {'10:15:15.332', 'BASE05', 'foo';...
            '10:15:16.332', 'BASE02', 'bar';...
            '10:15:17.332', 'BASE05', 'foo';...
            '10:15:18.332', 'BASE02', 'foo';...
            '10:15:19.332', 'BASE05', 'bar'};
% create column vector of numeric times, and define start/end times
dateValues = datenum(BodyData(:, 1), 'HH:MM:SS.FFF');
startTime = datenum('10:15:16.100', 'HH:MM:SS.FFF');
endTime = datenum('10:15:18.500', 'HH:MM:SS.FFF');
% select data in range, and where second column is 'BASE02'
BodyData(dateValues > startTime & dateValues < endTime & strcmp(BodyData(:, 2), 'BASE02'), :)
Returns:
ans =
'10:15:16.332' 'BASE02' 'bar'
'10:15:18.332' 'BASE02' 'foo'
References: datenum manual page, matlab help page on logical indexing.
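The ± threshold part of the question is not covered by the example above. Since datenum values are measured in days, a tolerance given in seconds has to be divided by 86400. A minimal sketch along those lines, reusing the variables from the example; the tolerance value is just the 100 ms figure mentioned in the question:
% widen the search window by a 100 ms tolerance (datenum units are days)
tolerance = 0.100 / 86400;
inRange = dateValues >= (startTime - tolerance) & dateValues <= (endTime + tolerance);
BodyData(inRange & strcmp(BodyData(:, 2), 'BASE02'), :)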
I have the following postgresql table (named "paperwork"):
paperwork_guid | name  | primary_attribute | alter_attributes
123456         | test  | {1,2,3,4,5}       | {9,8,7,6}
09876          | test2 | {1,2,3,4}         | {9,8,7,6}
I would like to return the paperwork_guid for those rows having '5' in the primary_attribute array (In the above table, the result would be '123456').
If there is another question out there on this topic, I have been unable to find it.
Perhaps something along these lines:
SELECT paperwork_guid
FROM (SELECT paperwork_guid, unnest(primary_attribute) AS attr
FROM paperwork) x
WHERE attr = 5
with a DISTINCT if an attribute can occur more than once within a primary_attribute array.
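For instance, the deduplicated variant would simply add DISTINCT to the same query:
SELECT DISTINCT paperwork_guid
FROM  (SELECT paperwork_guid, unnest(primary_attribute) AS attr
       FROM   paperwork) x
WHERE attr = 5;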
A much simpler one, using ANY for the array membership test:
SELECT paperwork_guid FROM paperwork WHERE 5 = ANY (primary_attribute);
I have an index stored in a variable lv_index. I need to get lines from the table where the index of a line is greater than lv_index. I tried this with no success.
Example:
DATA:
lt_text TYPE TABLE OF tline-tdline,
lv_text TYPE tline-tdline.
.
.
.
LOOP AT lt_text INTO lv_text WHERE row > lv_index.
* some code here
ENDLOOP.
I get this error:
Type "TDLINE" has no structure so it doesn't have attribute "ROW"
Can someone tell me what I should write instead of ROW to make it work right?
For example:
LOOP AT lt_text INTO lv_text FROM lv_index.
* some code
ENDLOOP.
Note that FROM starts at the line with index lv_index itself, so increase lv_index by 1 first if you only want the lines after it.
As far as I know, you can read the index of the current position from the system structure SY: SY-TABIX holds the current table line inside a LOOP, while SY-INDEX counts the passes of DO/WHILE loops.
Or you can loop over the whole table into a work area and compare SY-TABIX against lv_index to pick out the greater lines:
DATA:
  lt_text TYPE TABLE OF tline-tdline,
  lv_text TYPE tline-tdline.

LOOP AT lt_text INTO lv_text.
  IF sy-tabix > lv_index.
    " some code here
  ENDIF.
ENDLOOP.
I have an SQL function that returns a character varying type. The output is something like this:
'TTFFFFNN'. I need to get these characters by index. How do I convert character varying to an array?
Use string_to_array() with NULL as delimiter (pg 9.1+):
SELECT string_to_array('TTFFFFNN'::text, NULL) AS arr;
Per documentation:
In string_to_array, if the delimiter parameter is NULL, each character
in the input string will become a separate element in the resulting array.
In older versions (pg 9.0-), the call with NULL returned NULL. (Fiddle.)
To get the 2nd position (example):
SELECT (string_to_array('TTFFFFNN'::text, NULL))[2] AS item2;
Alternatives
For single characters I would use substring() directly, like @a_horse commented:
SELECT substring('TTFFFFNN'::text, 2, 1) AS item2;
SQL Fiddle showing both.
For strings with actual delimiters, I suggest split_part():
Split comma separated column data into additional columns
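A quick sketch of the split_part() call on a made-up comma-separated value (the sample string and alias are just for illustration):
SELECT split_part('TT,FF,NN', ',', 2) AS item2;  -- returns 'FF'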
Only use regexp_split_to_array() if you must. Regular expression processing is more expensive.
In addition to the solutions outlined by Erwin and a horse, you can use regexp_split_to_array() with an empty regexp instead:
select regexp_split_to_array('TTFFFFNN'::text, '');
With an index, that becomes:
select (regexp_split_to_array('TTFFFFNN'::text, ''))[2];