In Delphi can a TADOStoredProc be executed in a BeginTrans/CommitTrans?

We currently have a service that runs a loop: it sets up a TADOStoredProc (building its parameters via StoredProc.Parameters.CreateParameter) and then calls ExecProc to insert each record into our table.
Is there a way I can wrap that whole loop inside a TADOConnection.BeginTrans and finish it with ADOConnection.CommitTrans? Basically I want to do a batch insert.

Yes, stored procedures can be called inside of a transaction, and any records inserted/deleted/modified will not be applied to the database until the transaction is committed, eg:
ADOConnection1.BeginTrans;
try
  // execute the TADOStoredProc as many times as you need...
  ADOConnection1.CommitTrans;
except
  ADOConnection1.RollbackTrans;
  raise;
end;
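As a fuller sketch of the batch insert described above (the procedure name, parameter name, and the Values array are hypothetical placeholders, and the loop variable is assumed to be declared as an Integer):

```delphi
// Sketch: one transaction around the whole loop.
// Assumes: var I: Integer; Values: array of Variant;
// 'sp_InsertRecord' and '@Value' are example names, not from the question.
ADOConnection1.BeginTrans;
try
  ADOStoredProc1.Connection := ADOConnection1;
  ADOStoredProc1.ProcedureName := 'sp_InsertRecord';
  ADOStoredProc1.Parameters.Refresh;
  for I := Low(Values) to High(Values) do
  begin
    ADOStoredProc1.Parameters.ParamByName('@Value').Value := Values[I];
    ADOStoredProc1.ExecProc;
  end;
  ADOConnection1.CommitTrans;
except
  ADOConnection1.RollbackTrans;
  raise;
end;
```

If any ExecProc call raises an exception, the except block rolls back every insert made so far and re-raises, so the batch is applied all-or-nothing.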

Related

Snowflake orchestration of tasks

I have a batch load process that loads data into a staging database. I have a number of tasks that execute stored procedures which move the data to a different database. The tasks are executed when the SYSTEM$STREAM_HAS_DATA condition is satisfied on a given table.
I have a separate stored procedure that I want to execute only after the tasks have completed moving the data.
However, I have no way to know which tables will receive data and therefore do not know which tasks will be executed.
How can I know when all the tasks that satisfied the SYSTEM$STREAM_HAS_DATA condition are finished and I can now kick off the other stored procedure? Is there a way to orchestrate this step by step process similar to how you would in a SQL job?
There is no automated way but you can do it with some coding.
You may create a stored procedure to check the STATE column of the task_history view to see if the tasks are completed or skipped:
https://docs.snowflake.com/en/sql-reference/functions/task_history.html
You can call this stored procedure periodically using a task (e.g., every 5 minutes).
Based on the checks inside the stored procedure (all tasks succeeded, the target SP hasn't been executed today yet, etc.), you can execute your target stored procedure, which needs to run only after all tasks have completed.
You can also check the status of a stream directly via SELECT SYSTEM$STREAM_HAS_DATA('<stream_name>'), which does not consume the stream, or via SELECT COUNT(*) FROM <stream_name>.
Look into using IDENTIFIER for dynamic queries.
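The check against the task_history view can be sketched like this (the one-hour window is an arbitrary choice; in practice you would filter on the task names relevant to your load):

```sql
-- Sketch: inspect the last hour of task runs; any row still SCHEDULED or
-- EXECUTING means the load tasks have not finished yet.
SELECT name, state, scheduled_time
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(
       SCHEDULED_TIME_RANGE_START => DATEADD('hour', -1, CURRENT_TIMESTAMP())))
WHERE state IN ('SCHEDULED', 'EXECUTING');
```

If this query returns no rows, the checking procedure can conclude the tasks are done (SUCCEEDED or SKIPPED) and kick off the target stored procedure.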

Does the delete operation block any insert into the same table?

I have table A and a stored procedure that deletes all data from that table periodically. All queries in the stored procedure are packed into 1 transaction. But sometimes the stored procedure execution takes up to 5 minutes. Could it be that executing stored procedure will block inserts on the same table A?
The stored procedure will never be called again until the previous call has been completed.
Will it be different for READ COMMITTED and READ COMMITTED SNAPSHOT ISOLATION?
Yes, a statement like DELETE FROM YourTable; would take out a table lock blocking all other changes to the table until it was done. I don't think that changing the isolation level will help much, unless you put snapshot on the whole database (i.e., Snapshot Isolation).
Usually you want to try a different approach for cases like this. Either:
Try breaking the DELETE up into smaller "chunks" so that each chunk takes less time and does not lock the entire table. Or if this is not appropriate, then ...
Create an empty duplicate of YourTable, rename YourTable to something like YourTable_deleting, and rename the new table to YourTable. Then DELETE from (or just DROP) YourTable_deleting.
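The "chunked" delete from the first option can be sketched as follows (the batch size of 5000 is an arbitrary choice to tune for your workload):

```sql
-- Sketch: delete in small batches so each statement holds locks only briefly,
-- letting concurrent inserts interleave between batches
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM YourTable;
    IF @@ROWCOUNT = 0 BREAK;
END;
```

Each DELETE is its own short transaction here, so other sessions can acquire locks on the table between iterations instead of waiting out one five-minute statement.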

Column Does Not Exist Even Though The Procedure Is Not Executed

I have two separate procedures. One procedure alters the existing table with new columns. The other procedure adds data to the tables. They aren't being executed, only created. When I hit run, the procedure that adds data to the columns throws an error saying the column does not exist. I understand that it's not created because I didn't exec the procedure that contains the altered code. Not sure why the code inside the procedure executes since I thought that it only creates the procedure.
Some of the code is repetitive and I understand. This is simply to get a working solution before modifying it dynamically.
To answer this more fully than my comment: stored procedures are compiled, so if you try to do something invalid, the compilation will fail. Validity is not only checked at runtime.
Try this and it will fail every time:
create table junk(a int)
go
create procedure p as
    update junk set b = 1
go
If you want this to work, run the procedure that creates the columns before you attempt to create the procedure that inserts the data, or change the insert procedure so that it uses dynamic SQL.
Note that if you really need a DB whose table lacks the columns but that has a procedure referencing them for insert, you can create the columns, create the insert procedure, and then drop the columns again. The procedure won't run, because dropping the columns invalidated it, but it will still exist.
Not quite sure why you'd want that, though: a DB schema is very much a design-time thing, so the design should be evolutionary. If you're doing this as part of wider work in a front-end language, take a look at a database migration tool. It's a device that runs scripts, typically on app startup, that ensures the DB has all the columns and data the app needs for that version to run. It's typically bidirectional too, so if you downgrade, the migration tool will/can remove the columns and data it added.
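The dynamic-SQL workaround can be sketched against the junk example above: because the reference to the missing column lives inside a string, it is not checked when the procedure is created, only when it runs.

```sql
-- Sketch: name resolution for column b is deferred to execution time,
-- so this CREATE PROCEDURE succeeds even though b does not exist yet
CREATE PROCEDURE p AS
    EXEC sp_executesql N'UPDATE junk SET b = 1';
```

Executing p before the ALTER procedure has added column b will still fail, but the create/deploy step itself no longer depends on column order.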

Do stored procedures run in database transaction in Postgres?

If a stored procedure fails in middle, are changes at that point from the beginning of SP rolled back implicitly or do we have to write any explicit code to make sure that SP runs in a database transaction only?
Strictly speaking, Postgres did not have stored procedures as defined in the ISO/IEC standard before version 11. The term is often used incorrectly to refer to functions, which provide much of the same functionality (and more) that other RDBMS provide with "stored procedures"; the main difference is transaction handling.
What are the differences between “Stored Procedures” and “Stored Functions”?
True stored procedures were finally introduced with Postgres 11:
When to use stored procedure / user-defined function?
Functions are atomic in Postgres and automatically run inside their own transaction unless called within an outer transaction. They always run inside a single transaction and succeed or fail completely. Consequently, one cannot begin or commit transactions within the function. And commands like VACUUM, CREATE DATABASE, or CREATE INDEX CONCURRENTLY which do not run in a transaction context are not allowed.
The manual on PL/pgSQL:
Functions and trigger procedures are always executed within a
transaction established by an outer query — they cannot start or
commit that transaction, since there would be no context for them to
execute in. However, a block containing an EXCEPTION clause
effectively forms a subtransaction that can be rolled back without
affecting the outer transaction.
Error handling:
By default, any error occurring in a PL/pgSQL function aborts
execution of the function, and indeed of the surrounding transaction
as well. You can trap errors and recover from them by using a BEGIN
block with an EXCEPTION clause.
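A minimal illustration of the EXCEPTION-block subtransaction described above (the function name and signature are arbitrary):

```sql
-- Sketch: the failed division is rolled back inside the block's implicit
-- subtransaction; the surrounding transaction continues normally
CREATE FUNCTION safe_div(a numeric, b numeric) RETURNS numeric
LANGUAGE plpgsql AS $$
BEGIN
    RETURN a / b;
EXCEPTION WHEN division_by_zero THEN
    RETURN NULL;
END;
$$;
```

Calling safe_div(1, 0) returns NULL instead of aborting the caller's transaction, because the EXCEPTION clause traps the error within the block.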
There are exceptions, including but not limited to:
data written to log files
changes made to a sequence
Important: Some PostgreSQL data types and functions have special rules
regarding transactional behavior. In particular, changes made to a
sequence (and therefore the counter of a column declared using serial)
are immediately visible to all other transactions and are not rolled
back if the transaction that made the changes aborts.
prepared statements
dblink calls (or similar)
Does Postgres support nested or autonomous transactions?
If you are using Postgres 14, you can write a procedure like the one below:
CREATE OR REPLACE PROCEDURE test_error(schema_name text)
LANGUAGE plpgsql
AS $$
DECLARE
    <declare any vars that you need>
BEGIN
    <do your thing>
END;
$$;
For all practical purposes, code written in between the BEGIN and END block is executed in a single transaction. Hence, if any of the statements in the block fail, all the previous statements will be rolled back automatically. You do not need to explicitly write any roll back code.
However, there are special cases where one can have fine grained control over when to start/commit/rollback transactions. Refer to : https://www.postgresql.org/docs/current/plpgsql-transactions.html for details.
From the official document of Postgresql:
In procedures invoked by the CALL command as well as in anonymous code
blocks (DO command), it is possible to end transactions using the
commands COMMIT and ROLLBACK. A new transaction is started
automatically after a transaction is ended using these commands, so
there is no separate START TRANSACTION command. (Note that BEGIN and
END have different meanings in PL/pgSQL.)
https://www.postgresql.org/docs/11/plpgsql-transactions.html
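A minimal sketch of that fine-grained control, assuming a hypothetical table t(a int) exists and the procedure is invoked via CALL outside any explicit transaction:

```sql
-- Sketch: COMMIT inside a procedure ends the current transaction and
-- starts a new one, so row 1 stays even if a later statement fails
CREATE PROCEDURE batch_demo()
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO t VALUES (1);
    COMMIT;                    -- row 1 is now durable
    INSERT INTO t VALUES (2);  -- an error here cannot undo row 1
END;
$$;
```

Note this only works when the procedure is called via CALL in its own right; invoking it from inside an outer transaction makes the COMMIT raise an error.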

How do I create a stored procedure whose effects cannot be rolled back?

I want to have a stored procedure that inserts a record into tableA and updates record(s) in tableB.
The stored procedure will be called from within a trigger.
I want the inserted records in tableA to exist even if the outermost transaction of the trigger is rolled back.
The records in tableA are linearly linked and I must be able to rebuild the linear connection.
Write access to tableA is only ever through the triggers.
How do I go about this?
What you're looking for are autonomous transactions, and these do not exist in SQL Server today. Please vote / comment on the following items:
http://connect.microsoft.com/SQLServer/feedback/details/296870/add-support-for-autonomous-transactions
http://connect.microsoft.com/SQLServer/feedback/details/324569/add-support-for-true-nested-transactions
What you can consider doing is using xp_cmdshell or CLR to go outside the SQL engine to come back in (these actions can't be rolled back by SQL Server)... but these methods aren't without their own issues.
Another idea is to use INSTEAD OF triggers - you can log/update other tables and then just decide not to proceed with the actual action.
EDIT
And along the lines of @VoodooChild's suggestion, you can use a table variable (@table) to temporarily hold data that you can reference after the rollback; this data survives a rollback, unlike an insert into a #temp table.
See this post Logging messages during a transaction for a (somewhat convoluted) effective way of achieving what you want: the insert into the logging table is persisted even if the transaction had rolled back. The method Simon proposes has several advantages: requires no changes to the caller, is fast and is scalable, and it can be used safely from within a trigger. Simon's example is for logging, but the insert can be for anything.
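The table-variable behavior can be demonstrated with a short sketch (the variable and message are arbitrary):

```sql
-- Sketch: table variables do not participate in transactions,
-- so the inserted row survives the ROLLBACK
DECLARE @log TABLE (msg varchar(100));
BEGIN TRAN;
INSERT INTO @log VALUES ('captured before rollback');
ROLLBACK;
SELECT msg FROM @log;  -- still returns the row
```

A trigger can copy the values it wants to preserve into such a variable, and after the rollback the caller (or the tail of the batch) can persist them to a real table.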
One way is to create a linked server that points to the local server. Stored procedures executed over a linked server won't be rolled back:
EXEC LinkedServer.DbName.dbo.sp_LogInfo 'this won''t be rolled back'
You can call a remote stored procedure from a trigger.
