How to update internal table without using MODIFY? - loops

I have created internal tables and want to update the age of each employee in one internal table by calculating it from another table. I have already done the arithmetic to get the age, but how can I update the table in some alternate way instead of using MODIFY?
WRITE : / 'FirstName','LastName', ' Age'.
LOOP AT gt_items1 INTO gwa_items1.
  READ TABLE gt_header INTO gwa_header WITH KEY empid = gwa_items1-empid.
  gwa_items1-age = gv_date+0(4) - gwa_header-bdate+0(4).
  MODIFY gt_items1 FROM gwa_items1 TRANSPORTING age WHERE empid = gwa_items1-empid.
  WRITE : / gwa_items1-fname , gwa_items1-lname , gwa_items1-age .
ENDLOOP.

Use field symbols (instead of work areas) when LOOPing over internal tables:
WRITE : / 'FirstName','LastName', ' Age'.
LOOP AT gt_items1 ASSIGNING FIELD-SYMBOL(<ls_item1>).
  READ TABLE gt_header
       ASSIGNING FIELD-SYMBOL(<ls_header>)
       WITH KEY empid = <ls_item1>-empid.
  IF sy-subrc EQ 0.
    <ls_item1>-age = gv_date+0(4) - <ls_header>-bdate+0(4).
    WRITE : / <ls_item1>-fname , <ls_item1>-lname , <ls_item1>-age .
  ENDIF.
ENDLOOP.
Field symbols have two advantages:
They modify the internal table directly, so no separate MODIFY is necessary.
They are somewhat faster than work areas.

Besides József Szikszai's answer, you could also use references:
write : / 'FirstName','LastName', ' Age'.
sort gt_header by empid. " <------------- Sort for binary search
loop at gt_items1 reference into data(r_item1).
  read table gt_header reference into data(r_header)
       with key empid = r_item1->empid binary search. " <------------- Faster read
  check sy-subrc eq 0.
  r_item1->age = gv_date+0(4) - r_header->bdate+0(4).
  write : / r_item1->fname , r_item1->lname , r_item1->age .
endloop.
I also added some enhancements to your code.
For more info check this link.

Related

Return Parts of an Array in Postgres

I have a column (text) in my Postgres DB (v.10) with a JSON format. As far as I know, it has an array format.
Here is a fiddle example: Fiddle
If table1 = Persons and change_type = Create, then I only want to return the name and firstname concatenated as one field and clear the rest of the text.
Output should be like this:
id table1 did execution_date change_type attr context_data
1 Persons 1 2021-01-01 Create Name [["+","name","Leon Bill"]]
1 Persons 2 2021-01-01 Update Firt_name [["+","cur_nr","12345"],["+","art_cd","1"],["+","name","Leon"],["+","versand_art",null],["+","email",null],["+","firstname","Bill"],["+","code_cd",null]]
1 Users 3 2021-01-01 Create Street [["+","cur_nr","12345"],["+","art_cd","1"],["+","name","Leon"],["+","versand_art",null],["+","email",null],["+","firstname","Bill"],["+","code_cd",null]]
Disassemble the JSON array into a SETOF using the json_array_elements function, then assemble it back into the structure you want.
select m.*
     , case
         when m.table1 = 'Persons' and m.change_type = 'Create'
         then (
           select '[["+","name",' || to_json(string_agg(a.value->>2, ' ' order by a.value->>1 desc))::text || ']]'
           from json_array_elements(m.context_data::json) a
           where a.value->>1 in ('name','firstname')
         )
         else m.context_data
       end as context_data
from mutations m
modified fiddle
(Notes:
relying on the alphabetical ordering of the required field names is a little bit dirty; an explicit ORDER BY CASE could improve readability, as in the sketch below
the resulting JSON is assembled from string literals as much as possible, since you didn't specify whether "+" should be taken from any of the original array elements
the to_json()::text is just for safety against injection
)
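A possible variant of the inner aggregation with the explicit ORDER BY CASE mentioned in the first note; this is a sketch only, reusing the mutations table and column names from the answer above:
select m.*
     , case
         when m.table1 = 'Persons' and m.change_type = 'Create'
         then (
           select '[["+","name",'
                  || to_json(
                       string_agg(a.value->>2, ' '
                         order by case a.value->>1
                                    when 'name'      then 1
                                    when 'firstname' then 2
                                  end)
                     )::text
                  || ']]'
           from json_array_elements(m.context_data::json) a
           where a.value->>1 in ('name','firstname')
         )
         else m.context_data
       end as context_data
from mutations m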

Csv file to a Lua table and access the lines as new table or function()

Currently my code has simple tables containing the data needed for each object, like this:
infantry = {class = "army", type = "human", power = 2}
cavalry = {class = "panzer", type = "motorized", power = 12}
battleship = {class = "navy", type = "motorized", power = 256}
I use the table names as identifiers in various functions to have their values processed one by one, as a function that is simply called to access the values.
Now I want to have this data stored in a spreadsheet (csv file) instead that looks something like this:
Name class type power
Infantry army human 2
Cavalry panzer motorized 12
Battleship navy motorized 256
The spreadsheet will not have more than 50 lines and I want to be able to increase columns in the future.
I tried a couple of approaches from similar situations I found here, but due to my lacking skills I failed to access any values from the nested table. I think this is because I don't fully understand how the tables are structured after reading each line from the csv file into the table, and therefore I fail to print any values at all.
If there is a way to get the name, class, type, power from the table and use that line just as my old simple tables, I would appreciate having an educational example presented. Another approach could be to declare new tables from the csv that behave exactly like my old simple tables, line by line from the csv file. I don't know if this is doable.
Using Lua 5.1
You can read the csv file in as a string. I will use a multi-line string here to represent the csv.
gmatch with pattern [^\n]+ will return each row of the csv.
gmatch with pattern [^,]+ will return the value of each column from our given row.
If more rows or columns are added, or if the columns are moved around, we will still reliably convert the information as long as the first row has the header information.
The only column that cannot move is the first one, the Name column; if that is moved it will change the key used to store the row in the table.
Using gmatch and 2 patterns, [^,]+ and [^\n]+, you can separate the string into each row and column of the csv. Comments in the following code:
local csv = [[
Name,class,type,power
Infantry,army,human,2
Cavalry,panzer,motorized,12
Battleship,navy,motorized,256
]]
local items = {}   -- Store our values here
local headers = {} -- Store the column names from the header row
local first = true
for line in csv:gmatch("[^\n]+") do
  if first then -- this is to handle the first line and capture our headers.
    local count = 1
    for header in line:gmatch("[^,]+") do
      headers[count] = header
      count = count + 1
    end
    first = false -- set first to false to switch off the header block
  else
    local name
    local i = 2 -- We start at 2 because we won't increment for the Name header
    for field in line:gmatch("[^,]+") do
      name = name or field -- check if we know the name of our row
      if items[name] then -- if the name is already in the items table then this is a field
        items[name][headers[i]] = field -- assign our value at the header in the table with the given name.
        i = i + 1
      else -- if the name is not in the table we create a new index for it
        items[name] = {}
      end
    end
  end
end
Here is how you can load a csv using the I/O library:
-- Example of how to load the csv.
path = "some\\path\\to\\file.csv"
local f = assert(io.open(path))
local csv = f:read("*all")
f:close()
Alternatively, you can use io.lines(path), which would take the place of csv:gmatch("[^\n]+") in the for loop section as well.
Here is an example of using the resulting table:
-- print table out
print("items = {")
for name, item in pairs(items) do
print(" " .. name .. " = { ")
for field, value in pairs(item) do
print(" " .. field .. " = ".. value .. ",")
end
print(" },")
end
print("}")
The output:
items = {
Infantry = {
type = human,
class = army,
power = 2,
},
Battleship = {
type = motorized,
class = navy,
power = 256,
},
Cavalry = {
type = motorized,
class = panzer,
power = 12,
},
}
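To use one of those rows like the original hand-written tables, index items by the unit name. Note that every value parsed from the csv is a string, so numeric fields need tonumber; a small sketch using the items table built above:
-- Access a parsed row just like one of the original simple tables.
local cavalry = items["Cavalry"]
print(cavalry.class)               --> panzer
print(cavalry.type)                --> motorized
print(tonumber(cavalry.power) + 1) --> 13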

Entity Framework updates with wrong values after insert

I discovered this issue because I have an object with a field calculated off the ID, which contains the ID as part of it, along with a prefix and a checksum digit. It is a requirement that these calculated values are unique, but they also cannot be random, so this seemed the best way to do it.
The code in question looks like this:
entity = new Entity() { /* values */ };
context.SaveChanges(); //generate the ID field
entity.CALCULATED_FIELD = CalculateField(prefix, entity.ID);
This works just fine in 99% of cases, but occasionally we get a value in the database which looks like:
ID: 1234
CALCULATED_FIELD : prefix000{1233}8
EXPECTED: prefix000{1234}3
With the parts in the braces being calculated from the ID column.
The fact that the calculated field is incorrect is bad enough, but the implication is that upon doing a SaveChanges, there is no guarantee that the row returned to Entity Framework is the one which was originally worked on! I am looking into using a stored procedure on insert in order to fix the generated-field problem, but in the long run we're going to have lots of bad data if we keep working on the wrong rows.
When I told Entity Framework to map the table to stored procedures, it generated the following boilerplate code:
INSERT [dbo].[tableName] (fields...)
VALUES (values...)

DECLARE @ID int
SELECT @ID = [ID]
FROM [dbo].[tableName]
WHERE @@ROWCOUNT > 0 AND [ID] = scope_identity()

SELECT t0.[ID]
FROM [dbo].[tableName] AS t0
WHERE @@ROWCOUNT > 0 AND t0.[ID] = @ID
The best idea I can come up with is that an extra insert could occur before scope_identity() is called. We are migrating this system from stored procedures where we used @@IDENTITY in place instead; could there be a difference there?
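For reference, @@IDENTITY returns the last identity value generated anywhere in the current session, including by triggers, whereas SCOPE_IDENTITY() is limited to the current scope; a minimal T-SQL sketch of that distinction, with hypothetical table names:
-- Hypothetical setup: an AFTER INSERT trigger on dbo.Orders writes a row to
-- dbo.AuditLog, and AuditLog has its own identity column.
INSERT INTO dbo.Orders (CustomerName) VALUES ('test');
SELECT SCOPE_IDENTITY();  -- identity generated for dbo.Orders, in this scope
SELECT @@IDENTITY;        -- last identity in the session, here the trigger's AuditLog row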
EDIT: CalculateField:
public static string CalculateField(string prefix, int ID)
{
    var calculated = prefix.PadRight(17 - ID.ToString().Length)
                           .Replace(" ", "0") + ID.ToString();
    var multiplier = 3;
    var sum = 0;
    foreach (char c in calculated.ToCharArray().Reverse())
    {
        sum += multiplier * int.Parse(c.ToString());
        multiplier = 4 - multiplier;
    }
    if (sum % 10 == 0) { return calculated + "0"; }
    return calculated + (10 - (sum % 10)).ToString();
}
UPDATE: Changing the called method from static to an instance method, and only running it later after additional changes were made instead of straight after creation, appears to have solved the problem, for reasons I can't comprehend. I'm leaving the question open for now since I don't yet have a large enough sample to be completely sure the problem is resolved, and also because I have no explanation for what really changed.
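For what it's worth, one way to keep the generated ID and the calculated field in a single unit of work is to wrap both SaveChanges calls in an explicit transaction. This is a sketch only, assuming EF6's Database.BeginTransaction and a DbSet named Entities; the entity initializer, prefix and CalculateField are reused from the question:
// Sketch (not the original code): both saves commit together or not at all.
using (var tx = context.Database.BeginTransaction())
{
    var entity = new Entity() { /* values */ };
    context.Entities.Add(entity);                                 // "Entities" is an assumed DbSet name
    context.SaveChanges();                                        // the ID is generated here
    entity.CALCULATED_FIELD = CalculateField(prefix, entity.ID);  // uses the generated ID
    context.SaveChanges();                                        // persist the calculated field
    tx.Commit();
}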

Calculate Sum and Insert as Row

Using SSIS I am bringing in raw text files that contain this in the output:
I use this data later to report on. The Key columns get pivoted. However, I don't want to show all those columns individually; I only want to show the total.
To accomplish this, my idea was to calculate the sum on insert using a trigger, and then insert the sum as a new row into the data.
The output would look something like:
Is what I'm trying to do possible? Is there a better way to do this dynamically on pivot? To be clear, I'm not just pivoting these rows for a report; there are other ones that don't need the sum calculated.
Using derived column and Script Component
You can achieve this by following these steps:
Add a derived column (name: intValue) with the following expression:
(DT_I4)(RIGHT([Value],2) == "GB" ? SUBSTRING([Value],1,FINDSTRING( [Value], " ", 1)) : "0")
So if the value ends with GB, the number part is taken; otherwise the result is 0.
After that, add a Script Component; in the Input and Output Properties, click on the Output and set the SynchronousInput property to None
Add 2 output columns: outKey, outValue
In the Script Editor write the following script (VB.NET)
Private SumValues As Integer = 0

Public Overrides Sub PostExecute()
    MyBase.PostExecute()
    Output0Buffer.AddRow()
    Output0Buffer.outKey = ""
    Output0Buffer.outValue = SumValues.ToString & " GB"
End Sub

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Output0Buffer.AddRow()
    Output0Buffer.outKey = Row.Key
    Output0Buffer.outValue = Row.Value
    SumValues += Row.intValue
End Sub
I am going to show you a way, but I don't recommend adding the total to the end of the detail data. If you are going to report on it, show it as a total.
After the source, add a data transformation:
C#
Add two columns to your data flow: Size (int) and Type (string)
Select Value as read-only
Here is the code:
string[] splits = Row.value.ToString().Split(' '); // Make sure single quote for char
int goodValue;
if (Int32.TryParse(splits[0], out goodValue))
{
    Row.Size = goodValue;
    Row.Type = "GB";
}
else
{
    Row.Size = 0;
    Row.Type = "None";
}
Now you have the data with the proper data types to do arithmetic in your table.
If you really want the data in your format, add a Multicast and an Aggregate with SUM(Size), and then merge back into your original flow.
I was able to solve my problem in another way using a trigger.
I used this code:
INSERT INTO [Table] (
[Filename]
, [Type]
, [DeviceSN]
, [Property]
, [Value]
)
SELECT ms.[Filename],
ms.[Type],
ms.[DeviceSN],
'Memory Device.Total' AS [Key],
CAST(SUM(CAST(left(ms.[Value], 2) as INT)) AS VARCHAR) + ' GB' as 'Value'
FROM [Table] ms
JOIN inserted i ON i.Row# = ms.Row#
WHERE ms.[Value] like '%GB'
GROUP BY ms.[filename],
ms.[type],
ms.[devicesn]
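The statement above is the body of the trigger; a minimal sketch of the surrounding definition, where the trigger name is a placeholder and [Table] is the same table as in the question:
-- Sketch only: the INSERT ... SELECT above goes inside an AFTER INSERT trigger.
CREATE TRIGGER trg_Table_TotalGB
ON [Table]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- INSERT INTO [Table] (...) SELECT ... FROM [Table] ms JOIN inserted i ON i.Row# = ms.Row# ... (as above)
END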

Anorm SQL Folding a List into a class result

Please pardon the level of detail; I'm not completely sure how to phrase this question.
I am new to scala and still learning the intricacies of the language. I have a project where all the data I need is contained in a table with a layout like this:
CREATE TABLE demo_data ( table_key varchar(10), description varchar(40), data_key varchar(10), data_value varchar(10) );
Where the table_key column contains the main key I'm searching on, and the description repeats for every row with that table_key. In addition there are descriptive keys and values contained in the data_key and data_value pairs.
I need to consolidate a set of these data_keys into my resulting class so that the class will end up like this:
case class Tab ( tableKey: String, description: String, valA: String, valB: String, valC: String )
// Imports assumed for Anorm's parser combinators and Play's DB helper:
import anorm._
import anorm.SqlParser.get
import play.api.db.DB
import play.api.Play.current

object Tab {
  val simple = {
    get[String]("table_key") ~
    get[String]("description") ~
    get[String]("val_a") ~
    get[String]("val_b") ~
    get[String]("val_c") map {
      case tableKey ~ description ~ valA ~ valB ~ valC =>
        Tab(tableKey, description, valA, valB, valC)
    }
  }

  def list(tabKey: String): List[Tab] = {
    DB.withConnection { implicit connection =>
      SQL(
        """
          SELECT DISTINCT p.table_key, p.description,
                 a.data_value val_a,
                 b.data_value val_b,
                 c.data_value val_c
          FROM demo_data p
          JOIN demo_data a on p.table_key = a.table_key and a.data_key = 'A'
          JOIN demo_data b on p.table_key = b.table_key and b.data_key = 'B'
          JOIN demo_data c on p.table_key = c.table_key and c.data_key = 'C'
          WHERE p.table_key = {tabKey}
        """
      ).on('tabKey -> tabKey).as(Tab.simple *)
    }
  }
}
which will return what I want; however, I have more than 30 data keys that I wish to retrieve in this manner, and the self-joins rapidly become unmanageable. As it stands, the query ran for 1.5 hours and used up 20GB of temporary tablespace before running out of disk space.
So instead I am doing a separate class that retrieves a list of data keys and data values for a given table key using the "where data_key in ('A','B','C',...)", and now I'd like to "flatten" the returned list into a resulting object that will have the valA, valB, valC, ... in it. I still want to return a list of the flattened objects to the calling routine.
Let me try to idealize what I'd like to accomplish:
Take a header result set and a detail result set, extract the keys from the detail result set to populate additional elements/properties in the header result set, and produce a list of classes containing all the elements of the header result set plus the selected properties from the detail result set. So I get a list of TabHeader(tabKey, Desc), and for each I retrieve a list of interesting TabDetail(DataKey, DataValue); I then extract the element where DataKey == 'A' and put its DataValue in Tab(valA), and do the same for DataKey == 'B', 'C', ... After I'm done, I wish to produce a Tab(tabKey, Desc, valA, valB, valC, ...) in place of the corresponding TabHeader. I could quite possibly muddle through this in Java, but I'm treating this as a learning opportunity and would like to know a good way to do this in Scala.
I'm feeling that something with Scala's collection operations (map/fold) should do what I need, but I haven't been able to track down exactly what.
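A minimal sketch of the kind of fold described above, assuming the header and detail rows have already been read into the hypothetical TabHeader and TabDetail classes, and reusing the Tab case class from the question:
// Hypothetical shapes for the two result sets described in the question.
case class TabHeader(tabKey: String, description: String)
case class TabDetail(dataKey: String, dataValue: String)

// Collapse the detail rows into a Map and pick out the keys of interest.
def flatten(header: TabHeader, details: List[TabDetail]): Tab = {
  val byKey = details.map(d => d.dataKey -> d.dataValue).toMap
  Tab(
    header.tabKey,
    header.description,
    byKey.getOrElse("A", ""),
    byKey.getOrElse("B", ""),
    byKey.getOrElse("C", "")
  )
}

// One Tab per header, given some way to fetch the details for a key
// (detailsFor is a hypothetical lookup):
// val tabs = headers.map(h => flatten(h, detailsFor(h.tabKey)))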
