I'm very new to working with databases, and this is my first time writing triggers and functions. I have two tables.
The main table, "orders":
id | order_status_id | tracking_code
----+-----------------+---------------
12 | 2 | 123456
9 | 2 |
6 | 2 |
7 | 2 |
10 | 2 |
8 | 2 |
11 | 2 |
and another one, "order_statuses":
id | status
----+-------------
1 | QUEUE
2 | IN_PROGRESS
3 | IN_DELIVERY
4 | DELIVERED
5 | CANCELED
6 | RETURNED
I want to change the value in the order_status_id column automatically (from 2 to 3) when I change the value in the tracking_code column (its default value is null).
I wrote a trigger and a function:
CREATE TRIGGER change_status
BEFORE UPDATE
OF tracking_code
ON orders
EXECUTE PROCEDURE update_status();
CREATE FUNCTION update_status() RETURNS TRIGGER
AS
$$
DECLARE
    passed bigint;
BEGIN
    SELECT id INTO passed FROM order_statuses WHERE status = IN_DELIVERY;
    NEW.order_status_id = passed;
    return NEW;
END;
$$;
but when I try to run this code, I get an error:
[2020-09-07 15:22:53] [42804] ERROR: function "update_status" in FROM has unsupported return type trigger
[2020-09-07 15:22:53] Where: PL/pgSQL function inline_code_block line 5 at OPEN
I have tried to apply a lot of answers to similar questions from Stack Overflow, but nothing helps.
Can someone tell me what I am doing wrong, and whether my code is any good in general?
Your trigger definition is OK, except for one thing; the documentation says:
FOR EACH ROW
FOR EACH STATEMENT
This specifies whether the trigger function should be fired once for every row affected by the trigger event, or just once per SQL statement. If neither is specified, FOR EACH STATEMENT is the default.
So you probably want to specify FOR EACH ROW.
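For example, keeping your trigger exactly as written and just adding the row-level clause, it might look like this:

CREATE TRIGGER change_status
    BEFORE UPDATE OF tracking_code ON orders
    FOR EACH ROW
    EXECUTE PROCEDURE update_status();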
The error message, however, has a different cause:
There is a DO statement somewhere in your code (“inline_code_block”) that calls update_status directly. That doesn't work. You can use trigger functions only in trigger definitions.
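For completeness, here is a hedged sketch of how the trigger function itself might look once it compiles and runs; beyond the trigger clause above, note that the function as posted is also missing a LANGUAGE clause and the quotes around the IN_DELIVERY literal (without them PostgreSQL reads it as a column name):

CREATE FUNCTION update_status() RETURNS TRIGGER AS
$$
DECLARE
    passed bigint;
BEGIN
    -- 'IN_DELIVERY' must be a quoted string literal here
    SELECT id INTO passed FROM order_statuses WHERE status = 'IN_DELIVERY';
    NEW.order_status_id := passed;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;  -- the LANGUAGE clause was missing from the posted function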
Related
Locate moves the cursor to the first row matching a specified set of search criteria.
Let's say that q is a TQuery component connected to a database table with two columns, TAG and TAGTEXT. With the following code I get the letter a, and I would like to use the Locate() function to get the letter d.
If q.Locate('TAG','1',[loPartialKey]) Then
begin
tag60 := q.FieldByName('TAGTEXT');
end
For example, if I have a table like this:
TAG | TAGTEXT
+---+--------+
| 1 | a |
+---+--------+
| 2 | b |
+---+--------+
| 3 | c |
+---+--------+
| 1 | d |
+---+--------+
| 4 | e |
+---+--------+
| 1 | f |
+---+--------+
Is it possible to locate the second time the number 1 occurs in the table?
EDIT
My job is to find a particular occurrence of TAG with the value 1 (which occurrence I need depends on a parameter I am given). I need to iterate through the table and collect the values from all the TAGTEXT fields until I find that the value in the TAG field is 1 again. The number 1 in this case marks the start of a new segment, and everything between two 1s belongs to one segment. The segments do not have to have the same number of rows. Also, I am not allowed to make any changes to the table.
What I thought I could do is create a counter variable that is increased by one every time a TAG with the value 1 comes up. When the counter equals the parameter that represents the occurrence, I know that I am in the right segment, and I can iterate through that segment and get the values I need.
But this might be a slow solution, and I wanted to know if there is a faster one.
You need to be a bit wary of using Locate for a purpose like this, because some TDataSet descendants' implementations of Locate (or the underlying db-access layer) construct a temporary index on the dataset, which can be discarded immediately afterwards, so repeatedly calling Locate to iterate the rows of a given segment may be a lot more inefficient than one might expect.
Also, TClientDataSet constructs, uses and then discards an expression parser for each invocation of Locate (in its internal call to LocateRecord), which is a lot of overhead for repeated calls, especially when they are entirely avoidable.
In any case, the best way to do this is to ensure that your table records which segment a given row belongs to, adding a column like the SegmentID below if your table does not already have one:
TAG | TAGTEXT | SegmentID
+---+---------+----------+
| 1 | a       | 1        |
| 2 | b       | 1        |
| 3 | c       | 1        |
| 1 | d       | 2        | // btw, what happened to the 2 missing rows after this one?
| 4 | e       | 2        |
| 1 | f       | 3        |
+---+---------+----------+
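Such a SegmentID can be derived from a running count of the TAG = 1 rows, either on the fly or to backfill a real column. Here is a hedged sketch in plain SQL, assuming a hypothetical table name mytable, a hypothetical row_no column that reflects the physical row order, and a backend that supports window functions:

-- Hypothetical sketch: mytable and row_no are assumed names, not from the question.
-- Each row's SegmentID is the running count of TAG = 1 rows seen so far, in row order.
SELECT tag,
       tagtext,
       SUM(CASE WHEN tag = 1 THEN 1 ELSE 0 END)
           OVER (ORDER BY row_no) AS SegmentID
FROM mytable
ORDER BY row_no;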
Then, you could use code like this to iterate the rows of a segment:
procedure IterateSegment(Query : TSomeTypeOfQueryComponent; SegmentID : Integer);
var
  Sql : String;
begin
  Sql := Format('select * from mytable where SegmentID = %d order by Tag', [SegmentID]);
  if Query.Active then
    Query.Close;
  Query.Sql.Text := Sql;
  Query.Open;
  Query.DisableControls;
  try
    while not Query.Eof do begin
      // process row here
      Query.Next;
    end;
  finally
    Query.EnableControls;
  end;
end;
Once you have the SegmentID column in the table, if you don't want to open a new query to iterate a block, you can set up a local index (by SegmentID then Tag), assuming your dataset type supports it, set a filter on the dataset to restrict it to a given SegmentID, and then iterate over it.
You have many options to do this.
If your component doesn't provide a LocateNext, you can write your own LocateNext function, comparing the value and calling Next until you find it.
You can also write the SQL with an ORDER BY, then use Locate for the first value and test whether the next value matches the comparison.
If you use a TClientDataSet, you can filter with the component's Filter property, or set IndexFieldNames to order the values instead of using the SQL ORDER BY from the previous suggestion.
You can also filter it in the SQL WHERE clause, as in the sketch below.
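For example, a hedged sketch in plain SQL of jumping straight to the start of the second segment, assuming a hypothetical row_no column that defines the row order (on backends without OFFSET/FETCH, LIMIT 1 OFFSET 1 plays the same role):

-- Hypothetical sketch: mytable and row_no are assumed names.
-- Skips the first TAG = 1 row and returns the second one, i.e. the start of the second segment.
SELECT row_no, tag, tagtext
FROM mytable
WHERE tag = 1
ORDER BY row_no
OFFSET 1 ROWS FETCH NEXT 1 ROWS ONLY;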
I'm trying, via SAS Guide, to use a loop (via PROC LOOP) to create a new column with an ID that increments whenever the value of a specific column changes.
Just for example I'm looking for something like this:
Date | Name | Status | ID
------------------------------------------
20150101 | Tiago | Single | 1
20150102 | Tiago | Single | 1
20150103 | Tiago | Married | 2
20150104 | Tiago | Divorced | 3
20150105 | Tiago | Divorced | 3
20150106 | Tiago | Married | 4
In this case, the new column will be ID, which increments whenever the status changes across the records. With this I can then group by name to see every change that occurred over time (even repeated ones).
This question seems a little bit confused. If the original data is already sorted as in the sample provided, a data step like this will do:
data new;
set test;
by status notsorted;
if first.status then id + 1;
run;
The notsorted option is used to keep the original order of the data. first.status will be true for the first appearance of each status value. id + 1 is a sum statement; the variable in a sum statement is retained across iterations and initialized to 0 rather than to missing.
And by the way, what is PROC LOOP?
To expand DaJun's answer to work with grouping by name:
/* Sort so that name forms groups and id will go in date order */
proc sort data = test;
by name date;
run;
data want;
set test;
/* Tell SAS we want to know when the value of name or status changes */
by name status notsorted;
/* Reset the ID for each group */
if first.name then id = 0;
/* iterate the ID as per DaJun */
if first.status then id + 1;
run;
The report will display two columns, Foo and Bar.
Some rows of Foo are empty and some have a numerical value:
Foo:
+----+
| |
+----+
|10.0|
+----+
Then there is the Bar column; it should take the values from Foo and add 10 to them, so the report should yield results like this:
Foo: Bar:
+----+----+
| | |
+----+----+
|10.0|20.0|
+----+----+
This is the expression I use inside Bar to determine whether Foo is numeric:
=IsNumeric(ReportItems!FooBox.Value)
And this is the result that expression yields:
Foo: Bar:
+----+-----+
| |False|
+----+-----+
|10.0|True |
+----+-----+
Okay, that's exactly the way I want it so far, so I write my expression this way:
=IIf(IsNumeric(ReportItems!FooBox.Value),
ReportItems!FooBox.Value +10,
"")
This yields the following results:
Foo: Bar:
+----+------+
| |#Error|
+----+------+
|10.0|20.0 |
+----+------+
And most bizarre: when I remove the little addition in the true part of the IIf, it executes:
=IIf(IsNumeric(ReportItems!FooBox.Value),
ReportItems!FooBox.Value,
"")
Foo: Bar:
+----+------+
| | |
+----+------+
|10.0|10.0 |
+----+------+
It's almost as if the "wrong" part of the ternary operator gets executed anyway, thus generating this cast error or whatever it might be.
How can I convert the values properly in order to show the results as explained before?
Indeed - using IIf will cause both branches (true and false) to be evaluated, and that leads to the error you are getting.
Being in your situation, I would create my own function to handle this. You can place it in the Code field of your report (click outside of the page area to access the Report object).
Your function might look like this:
Public Function MyAdd(val As Object) As Nullable(Of Integer)
    ' Only add when the runtime value really is an Integer;
    ' depending on your data it may arrive as Double or Decimal instead.
    If TypeOf val Is Integer Then
        Return val + 10
    Else
        ' Nothing renders as an empty cell in the report
        Return Nothing
    End If
End Function
You might need to play with the types used (Object and Nullable(Of Integer) may not be exactly what works for you).
Then, of course, use MyAdd in your expression and pass ReportItems!FooBox.Value as the parameter.
Consider a table that contains:
ReturnValueID | ReturnValue | TriggerValue
------------------------------------------
1 | returnValue1 | testvalue
2 | returnValue2 | testing...
3 | returnValue3 | value3
And given a string: HERE IS THE TEXT testing... AND MORE TEXT testvalue MORE TEXT
I have written a CTE using SQL Server 2008 that uses a FindInString() function I wrote to indicate where the matched text is found. 0 = not found:
1 | returnValue1 | 43
2 | returnValue2 | 18
3 | returnValue3 | 0
What I need to do now, is iterate through this result set in a loop where I will perform some additional logic based on each row.
I have seen a few examples of looping, but I would rather not use a cursor.
What is the best way to approach this?
Thanks.
-- UPDATE --
Once a match is made, the ID of the matched row is added to a table if it isn't already recorded there; then the return value is appended to a VARCHAR value if it isn't already present in the dynamic string:
IF NOT EXISTS -- check if this value is already recorded
(
    SELECT *
    FROM RecordedReturnValue
    WHERE ReturnValueID = @ReturnValueID
)
BEGIN
    -- add the visitor/external tag ID to historical table
    INSERT INTO RecordedReturnValue (...)
    VALUES (...)

    -- function checks if string is already present
    SET @DynamicString = dbo.AppendDynamicOutput(@ReturnValue, @DynamicString)
END
This must be performed for each matched TriggerValue from the CTE.
Ended up using a CTE, added the values to a temp table, then iterated through the results and performed some logic.
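A hedged sketch of that pattern in T-SQL; the source table name (TriggerValues), the argument order of dbo.FindInString, and the @InputText variable are assumptions, and the per-row work is left as a comment:

DECLARE @InputText VARCHAR(MAX) = 'HERE IS THE TEXT testing... AND MORE TEXT testvalue MORE TEXT';

-- Materialize the CTE's matches into a temp table (assumed names throughout)
WITH Matches AS (
    SELECT ReturnValueID, ReturnValue,
           dbo.FindInString(TriggerValue, @InputText) AS FoundAt
    FROM dbo.TriggerValues
)
SELECT ReturnValueID, ReturnValue, FoundAt
INTO #Matches
FROM Matches
WHERE FoundAt > 0;   -- keep only the rows that actually matched

DECLARE @ID INT;

-- Cursor-free loop: peel off one row at a time until the temp table is empty
WHILE EXISTS (SELECT 1 FROM #Matches)
BEGIN
    SELECT TOP (1) @ID = ReturnValueID FROM #Matches ORDER BY FoundAt;

    -- per-row logic from the update above goes here
    -- (INSERT INTO RecordedReturnValue, dbo.AppendDynamicOutput, ...)

    DELETE FROM #Matches WHERE ReturnValueID = @ID;
END;

If the per-row work can be expressed as set-based INSERT/UPDATE statements, the WHILE loop can usually be dropped entirely.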
So, say for the sake of simplicity, I have a master table containing two fields: the first is an attribute and the second is the attribute's value. If the second field is set to reference a value in another table, this is denoted in parentheses.
Example:
MASTER_TABLE:
Attr_ID | Attr_Val
--------+-----------
1 | 23(table1) --> 23rd value from `table1`
2 | ...
1 | 42 --> the number 42
1 | 72(table2) --> 72nd value from `table2`
3 | ...
1 | txt --> string "txt"
2 | ...
4 | ...
TABLE 1:
Val_Id | Value
--------+-----------
1 | some_content
2 | ...
. | ...
. | ...
. | ...
23 | some_content
. | ...
Is it possible to perform a single query in SQL (without parsing the results inside the application and re-querying the DB) that would iterate through master_table and, for a given <attr_id>, get only the attributes that reference other tables (e.g. 23(table1), 72(table2), ...), then parse the table names out of the parentheses (e.g. table1, table2, ...) and perform a query to get the (23rd, 72nd, ...) value (e.g. some_content) from each referenced table?
Here is something I've done, and it parses the Attr_Val for the table name, but I don't know how to assign it to a string and then do a query with that string.
PREPARE pstmt FROM
"SELECT * FROM information_schema.tables
WHERE TABLE_SCHEMA = '<my_db_name>' AND TABLE_NAME = ?";

SET @str_tablename =
    (SELECT t.tablename FROM
        (SELECT @string := (SELECT <string_column> FROM <table> WHERE ID = <attr_id>) AS String,
                @loc1 := length(@string) - locate("(", reverse(@string)) + 2 AS loc_from,
                @loc2 := length(@string) - locate(")", reverse(@string)) + 1 - @loc1 AS loc_len,
                substr(@string, @loc1, @loc2) AS tablename
        ) t
    ); <-- this returns 1 row, which is OK

EXECUTE pstmt USING @str_tablename; <-- this then returns 0 rows
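For what it's worth, a ? parameter can only stand in for a literal value in MySQL, not for a table name, so the usual way to run the second query is to CONCAT the whole statement text and prepare that. A hedged sketch, assuming @attr_val already holds an Attr_Val such as '23(table1)' and using the Val_Id/Value columns from TABLE 1:

-- Hypothetical sketch: @attr_val is an assumed variable, e.g. '23(table1)'.
SET @tbl := SUBSTRING_INDEX(SUBSTRING_INDEX(@attr_val, '(', -1), ')', 1);  -- 'table1'
SET @row := SUBSTRING_INDEX(@attr_val, '(', 1) + 0;                        -- 23
SET @sql := CONCAT('SELECT Value FROM `', @tbl, '` WHERE Val_Id = ', @row);
PREPARE stmt2 FROM @sql;
EXECUTE stmt2;
DEALLOCATE PREPARE stmt2;

Note this handles one Attr_Val at a time; doing it for an arbitrary set of referenced tables in a single statement is exactly the hard part of the question.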
Any thoughts?
I love the purity of this approach, if pulled off. But I'm thinking you're creating a maintenance bomb. With a cure like this, who needs to be sick?
No one has ever said of a web site "Man, their data sure is pure!" They compliment what is being done with the data. I don't recommend you keep your hands tied behind your back on this one. I guarantee your competitors aren't.