In SAP there is a table T552A. It has several fields like TPR, TTP, FTK, VAR, and KNF per day of the month, such as TPR01, TPR02, etc.
In a loop I would like to access those fields by determining the field name dynamically instead of hard-coding it, like below:
DATA: ld_begda LIKE sy-datum,
      ld_endda LIKE sy-datum.
DATA: lc_day(2)    TYPE c.
DATA: lc_field(10) TYPE c.
DATA: lc_value     TYPE i.

ld_begda = sy-datum.
ld_endda = ld_begda + 30.

WHILE ld_begda <= ld_endda.
  lc_day = ld_begda+6(2).
  CONCATENATE 't552a-tpr' lc_day INTO lc_field.
  lc_value = &lc_field. " Need support at this point.
  ld_begda = ld_begda + 1.
ENDWHILE.
Something like this (depending on the exact requirement):
FIELD-SYMBOLS: <lv_tpr> TYPE tprog.
DATA: ls_t552a TYPE t552a.
DATA: lv_field TYPE fieldname.

WHILE ld_begda <= ld_endda.
  lv_field = |TPR| && ld_begda+6(2). " Dynamic field name
  ASSIGN COMPONENT lv_field
         OF STRUCTURE ls_t552a
         TO <lv_tpr>.
  IF sy-subrc EQ 0.
    ... " <lv_tpr> now has the value of the corresponding field
  ENDIF.
  ld_begda = ld_begda + 1.
ENDWHILE.
To store the result of a dynamic field access you need a variable that can hold values of arbitrary types; in ABAP this is supported through field symbols.
A component of a structure (i.e. the row of the table) can then be assigned to a field symbol with ASSIGN COMPONENT:
ASSIGN COMPONENT lc_field OF STRUCTURE row_of_table TO FIELD-SYMBOL(<value>).
" work with <value> here
More recently, generic expressions were introduced (and now also support structures), which will sooner or later allow you to write this:
... row_of_table-(lc_field) ...
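Putting the pieces together, here is a minimal, self-contained sketch. It assumes ls_t552a has already been filled from T552A (e.g. via SELECT SINGLE) and that the whole date range falls within that record's month:
DATA: ls_t552a TYPE t552a,
      lv_field TYPE fieldname,
      ld_begda TYPE d,
      ld_endda TYPE d.
FIELD-SYMBOLS: <lv_tpr> TYPE tprog.

* Assumption: ls_t552a was filled beforehand, e.g. by SELECT SINGLE on T552A.
ld_begda = sy-datum.
ld_endda = ld_begda + 30.

WHILE ld_begda <= ld_endda.
  lv_field = |TPR| && ld_begda+6(2). " TPR01 .. TPR31
  ASSIGN COMPONENT lv_field OF STRUCTURE ls_t552a TO <lv_tpr>.
  IF sy-subrc EQ 0.
    WRITE: / ld_begda, <lv_tpr>. " work with the day's value here
  ENDIF.
  ld_begda = ld_begda + 1.
ENDWHILE.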
This is how the model looks:
type Board struct {
    Id            uint           `gorm:"primaryKey;autoIncrement;unique" json:"id"`
    Owner         uint           `json:"owner"`
    Name          string         `json:"name"`
    Contributors  datatypes.JSON `gorm:"type:jsonb" json:"contributors"`
    GeneratedLink string         `gorm:"default:''" json:"generated_link"`
    Todos         datatypes.JSON `gorm:"type:jsonb" json:"todos"`
}
The contributors value is stored in the PostgreSQL column as a JSON array of IDs (e.g. [20, 25, 27]).
How do I write a query that checks whether the contributors array contains, for example, 20?
I tried this: database.DB.Where("contributors IN ?", 20).Find(&contBoards)
but got this error: ERROR: syntax error at or near "$1" (SQLSTATE 42601)
Any ideas or options are welcome.
P.S. Using GORM and PostgreSQL.
You use the IN operator in the WHERE clause to check whether a value matches any value in a list.
IN expects an explicit list of values (or a subquery).
I have created a sample scenario for your case as follows:
contributors := []int{20, 25, 27}
var tmp []string // needs "fmt" and "strings" imported
for _, v := range contributors {
    tmp = append(tmp, fmt.Sprint(v))
}
query := "SELECT * FROM table_name WHERE contributors IN (" + strings.Join(tmp, ",") + ")"
OR
ANY works with arrays. This can be useful if you already have the list of values in an array.
With the ANY operator you can search for only one value at a time:
SELECT * FROM table_name WHERE value = ANY(contributors);
If you want to search for multiple values, you can use the @> operator.
@> is the "contains" operator.
It is defined for several data types as follows:
arrays: http://www.postgresql.org/docs/current/static/functions-array.html
range types: http://www.postgresql.org/docs/current/static/functions-range.html
geometric types: http://www.postgresql.org/docs/current/static/functions-geometry.html
JSON (and JSONB): http://www.postgresql.org/docs/current/static/functions-json.html
For a better understanding you can refer to this link: Postgres: check if array field contains value?
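For the JSONB column from the question, a minimal GORM sketch using @> could look like this (untested against your schema; the ::jsonb cast is an assumption so the bound text parameter compares as jsonb):
// Find boards whose contributors JSONB array contains the value 20.
// Sketch only: the ::jsonb cast converts the bound text parameter.
var contBoards []Board
database.DB.Where("contributors @> ?::jsonb", "20").Find(&contBoards)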
I have a database table defined in EA. In this table I have a column where NULL is allowed. When I use a script or the API to extract nullability from this column, I understand this should be done using the LowerBound and UpperBound values: when LowerBound is 0 the field is nullable, and when it is 1, NULL is not allowed.
However, even when I set the column to allow NULL, LowerBound is still 1.
How can I correctly extract nullability from a database column?
You have to look into Attribute.AllowDuplicates or t_attribute.AllowDuplicates (well, it's EA).
There seem to be two ways of retrieving that information:
Via SQL
Via API
Extending your example, I defined two additional columns, NotNullAttribute and NullAttribute:
Now you get the results shown below when querying the element and its attributes; AllowDuplicates comes back true for NOT NULL columns and false for nullable ones.
SQL query showing the nullable columns:
SELECT *
FROM
[t_attribute] [t]
WHERE
[t].[Name] LIKE '%NullAttribute%'
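If you only need the nullability flag itself, the same query can select it directly (AllowDuplicates, per the API property mentioned above):
SELECT [t].[Name], [t].[AllowDuplicates]
FROM
    [t_attribute] [t]
WHERE
    [t].[Name] LIKE '%NullAttribute%'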
Script for showing database table columns attributes:
function main(id)
{
    var element AS EA.Element;
    element = Repository.GetElementByID(id);
    var attributes AS EA.Collection;
    attributes = element.Attributes;
    for (var c = 0; c < attributes.Count; c++)
    {
        var attribute AS EA.Attribute;
        attribute = attributes.GetAt(c);
        Session.Output(attribute.Name + ": " + attribute.AllowDuplicates);
    }
}
main(18365);
// Output
/*
Id: true
Name: false
CustomerTypeEnumId: true
NotNullAttribute: true
NullAttribute: false
*/
Currently my code has simple tables containing the data needed for each object, like this:
infantry = {class = "army", type = "human", power = 2}
cavalry = {class = "panzer", type = "motorized", power = 12}
battleship = {class = "navy", type = "motorized", power = 256}
I use the table names as identifiers in various functions, which are simply called to access and process the values one by one.
Now I want to have this data stored in a spreadsheet (csv file) instead, which looks something like this:
Name        class   type       power
Infantry    army    human      2
Cavalry     panzer  motorized  12
Battleship  navy    motorized  256
The spreadsheet will not have more than 50 lines, and I want to be able to add more columns in the future.
I tried a couple of approaches from similar questions I found here, but due to my lacking skills I failed to access any values from the nested table. I think this is because I don't fully understand what the table structure looks like after reading each line of the csv file into it, and therefore I fail to print any values at all.
If there is a way to get the name, class, type, and power from the table and use that line just like my old simple tables, I would appreciate an educational example. Another approach could be to declare new tables from the csv that behave exactly like my old simple tables, line by line from the csv file. I don't know if this is doable.
Using Lua 5.1
You can read the csv file in as a string. I will use a multi-line string here to represent the csv.
gmatch with the pattern [^\n]+ will return each row of the csv.
gmatch with the pattern [^,]+ will return the value of each column from a given row.
If more rows or columns are added, or if the columns are moved around, we will still reliably convert the information, as long as the first row has the header information.
The only column that cannot move is the first one, the Name column; if that is moved, it will change the key used to store the row in the table.
Using gmatch and 2 patterns, [^,]+ and [^\n]+, you can separate the string into each row and column of the csv. Comments in the following code:
local csv = [[
Name,class,type,power
Infantry,army,human,2
Cavalry,panzer,motorized,12
Battleship,navy,motorized,256
]]
local items = {}   -- store the parsed rows here
local headers = {} -- column names from the first row
local first = true

for line in csv:gmatch("[^\n]+") do
    if first then -- handle the first line and capture our headers
        local count = 1
        for header in line:gmatch("[^,]+") do
            headers[count] = header
            count = count + 1
        end
        first = false -- switch off the header block
    else
        local name
        local i = 2 -- start at 2: the first field is the row's name, so data fields line up with headers[2] onward
        for field in line:gmatch("[^,]+") do
            name = name or field -- the first field of the row is its name
            if items[name] then -- name is already in items, so this is a data field
                items[name][headers[i]] = field -- store the value under its header
                i = i + 1
            else -- first time we see this name: create its row table
                items[name] = {}
            end
        end
    end
end
Here is how you can load a csv using the I/O library:
-- Example of how to load the csv.
path = "some\\path\\to\\file.csv"
local f = assert(io.open(path))
local csv = f:read("*all")
f:close()
Alternatively, you can use io.lines(path), which would take the place of csv:gmatch("[^\n]+") in the for loop above.
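For example, a minimal sketch under that assumption:
-- Stream the file line by line instead of reading it all at once.
for line in io.lines(path) do
    -- header/field handling exactly as in the loop above
end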
Here is an example of using the resulting table:
-- print the table out
print("items = {")
for name, item in pairs(items) do
    print("    " .. name .. " = {")
    for field, value in pairs(item) do
        print("        " .. field .. " = " .. value .. ",")
    end
    print("    },")
end
print("}")
The output:
items = {
    Infantry = {
        type = human,
        class = army,
        power = 2,
    },
    Battleship = {
        type = motorized,
        class = navy,
        power = 256,
    },
    Cavalry = {
        type = motorized,
        class = panzer,
        power = 12,
    },
}
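The rows can then be used just like the old simple tables. Note that every value parsed from the csv is a string, so convert with tonumber where needed:
print(items.Infantry.class)               --> army
print(tonumber(items.Infantry.power) + 1) --> 3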
I have defined an expression in one of my cells from my dataset1 (shown in my Design window), and I have to repeat it for each month's cells, but I'm getting #ERROR when I click on the Preview tab in SSRS.
My thought is: if the Activity_Month value = 1 and the Type value = "PIF", then display the value in the Data column.
=IIF(Fields!Activity_Month.Value = 1 AND Fields!Type.Value = "PIF", Fields!Data.Value, 0)
I got this WARNING from SSRS:
[rsRuntimeErrorInExpression] The Value expression for the textrun ‘Textbox1471.Paragraphs[0].TextRuns[0]’ contains an error: Input string was not in a correct format.
But it ran successfully.
From the comments and the edit history, it looks like you used the & operator, which concatenates strings, instead of the AND keyword. After editing the expression, the following looks correct to me:
=IIF(Fields!Activity_Month.Value = 1 AND Fields!Type.Value = "PIF", Fields!Data.Value, 0)
But I have two remarks:
It may cause an error due to the different data types returned by the expression (0 is an integer; Data.Value may have another data type).
If Fields!Data.Value is of type string then use the following expression:
=IIF(Fields!Activity_Month.Value = 1 AND Fields!Type.Value = "PIF", Fields!Data.Value, "0")
Another thing to mention: if the value contains NULL, it may throw an exception, so you have to check whether the field is NULL:
SSRS expression replace NULL with another field value
isnull in SSRS expressions
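A minimal sketch of such a check, assuming Data should fall back to 0 when it is NULL (IsNothing is the standard SSRS null test):
=IIF(Fields!Activity_Month.Value = 1 AND Fields!Type.Value = "PIF",
     IIF(IsNothing(Fields!Data.Value), 0, Fields!Data.Value), 0)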
I'm using Solr 5.2. Is there a parameter that lets you sort the returned results by specific field values? For example, in MySQL I use ORDER BY FIELD to sort the result in a specific order:
SELECT id, txt FROM `review`
ORDER BY FIELD(id, 2, 3, 5, 7);
I have read the sort section in the documentation, but it doesn't seem to mention a similar parameter.
I'm not sure Solr can do exactly what you want. The closest you might get is a range query. A range query looks like this:
your_field:[valueA TO valueB]
You can achieve a custom sort in Solr using ^=.
See "Constant Score with ^=" in https://cwiki.apache.org/confluence/display/solr/The+Standard+Query+Parser
q=id:(2^=4 3^=3 5^=2 7^=1)
To build that query string you can run:
var array = [2, 3, 5, 7];
var parts = [];
for (var i = 0; i < array.length; i++) {
    parts.push(array[i] + "^=" + (array.length - i));
}
var string = "q=id:(" + parts.join(" ") + ")";
// string => q=id:(2^=4 3^=3 5^=2 7^=1)