I am currently working on a revision comparer to check whether every significant change was made in every branch of my project. The problem is that I am uploading a huge dataset and get a timeout after a while, probably because the database kills my transaction to be on the safe side. I am assuming that the entire transaction or request will take longer than two hours.
procedure TRCForm.FormCreate(Sender: TObject);
begin
Database := TDatabase.Create(Self);
// Disable transaction timeout here
end;
Q.: Is it possible to extend the maximum transaction duration or to disable it completely?
Q.: Another consideration was that the transaction might be storing too much data, which is why it is cut off at a certain point. Is that possible, and if so, can it be checked and worked around?
Note: I am using an ADOConnection.
I really appreciate any suggestions or ideas, cheers!
Solution
Simply set the CommandTimeout to zero in order to disable any kind of timeout.
procedure TRCForm.FormCreate(Sender: TObject);
begin
Database := TDatabase.Create(Self);
Database.DBConnection.CommandTimeout := 0;
end;
See the docs here for more information.
Use CommandTimeout to specify the amount of time that expires before an attempt to execute a command is considered unsuccessful.
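If you talk to the database through a TADOConnection directly (as the note mentions), the same property is available on the connection component itself. A minimal sketch, assuming the form owns a TADOConnection named ADOConnection:
procedure TRCForm.FormCreate(Sender: TObject);
begin
  // 0 tells ADO to wait indefinitely instead of failing after the default 30 seconds
  ADOConnection.CommandTimeout := 0;
end;
Note that dataset components such as TADOQuery and TADOCommand expose their own CommandTimeout property, so a long-running command may need it set there as well.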
Related
I want to automate some DB scaling in my Azure SQL database.
This can be easily initiated using this:
ALTER DATABASE [myDatabase]
MODIFY (EDITION ='Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
But that command returns instantly, whilst the resize takes a few tens of seconds to complete.
We can check the actual current service objective using the following, which doesn't update until the change is complete:
SELECT DATABASEPROPERTYEX('myDatabase', 'ServiceObjective')
So naturally I wanted to combine this with a WHILE loop and a WAITFOR DELAY, in order to create a stored procedure that will change the DB size, and not return until the change has completed.
But when I wrote that stored procedure (script below) and ran it, I got the following error every time (at about the same time that the size change completed):
A severe error occurred on the current command. The results, if any, should be discarded.
The resize succeeds, but I get errors instead of a cleanly finishing stored procedure call.
Various things I've already tested:
If I separate the "Initiate" and the "WaitLoop" sections, and start the WaitLoop in a separate connection, after initiation but before completion, then that also gives the same error.
Adding a TRY...CATCH block doesn't help either.
Removing the stored procedure aspect and just running the code directly doesn't fix it either.
My interpretation is that the Resize isn't quite as transparent as one might hope, and that connections created before the resize completes get corrupted in some sense.
Whatever the exact cause, it seems to me that this stored procedure just isn't achievable at all; I'll have to do the polling from my external process - opening new connections each time. It's not an awful solution, but it is less pleasant than being able to encapsulate the whole thing in a single stored procedure. Ah well, such is life.
Question:
Before I give up on this entirely ... does anyone have an alternative explanation or solution for this error, which would thus allow a single stored procedure call to change the size and then not return until that size change has actually completed?
Initial stored procedure code (simplified to remove parameterisation complexity):
CREATE PROCEDURE [trusted].[sp_ResizeAzureDbToS3AndWaitForCompletion]
AS
ALTER DATABASE [myDatabase]
MODIFY (EDITION ='Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
WHILE ((SELECT DATABASEPROPERTYEX('myDatabase', 'ServiceObjective')) != 'S3')
BEGIN
WAITFOR DELAY '00:00:05'
END
RETURN 0
Whatever the exact cause, it seems to me that this stored procedure just isn't achievable at all; I'll have to do the polling from my external process - opening new connections each time.
Yes, this is correct. As described here, when you change the service objective of a database:
A new compute instance is created with the requested service tier and compute size... the database remains online during this step, and connections continue to be directed to the database in the original compute instance ... [then] existing connections to the database in the original compute instance are dropped. Any new connections are established to the database in the new compute instance.
The highlighted part - existing connections to the original compute instance being dropped - is what kills your stored procedure execution. You need to do this check externally.
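For illustration, a minimal polling sketch in C#, assuming the external process is .NET-based and the resize has already been initiated; the connection string, class name, and database name are placeholders. It opens a fresh connection for every check, so the dropped connections during the switchover cannot abort the loop:
using System;
using System.Data.SqlClient;
using System.Threading;

class ResizeWaiter
{
    static void WaitForServiceObjective(string connStr, string target)
    {
        while (true)
        {
            // a new connection per poll, deliberately
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "SELECT CONVERT(nvarchar(64), DATABASEPROPERTYEX('myDatabase', 'ServiceObjective'))",
                conn))
            {
                conn.Open();
                if ((string)cmd.ExecuteScalar() == target)
                    return;   // e.g. target = "S3"
            }
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}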
I have a question but I can never get a clear answer. Every stored procedure that used a transaction that I had looked at, up until my recent job, always had a COMMIT TRANSACTION plus a ROLLBACK in case of error. However, I have seen a lot of code at my new job that just has a BEGIN TRANSACTION and then a COMMIT at the end, with no ROLLBACK. I understand why you would use a transaction with a rollback, but why would you want to begin a transaction with no rollback? Is it so that when you run that code you lock the table up so no values can be changed while your code is updating? If so, why would you not want the added security of a rollback in case something goes wrong? Is this proper use of the transaction statement? Any thoughts or ideas would be great!
For Example:
BEGIN TRANSACTION [Tran1]
INSERT INTO [Test].[dbo].[T1]
([Title], [AVG])
VALUES ('Tidd130', 130), ('Tidd230', 230)
UPDATE [Test].[dbo].[T1]
SET [Title] = N'az2' ,[AVG] = 1
WHERE [dbo].[T1].[Title] = N'az'
COMMIT TRANSACTION [Tran1]
GO
Shouldn't this code be using ROLLBACK syntax for proper use of the BEGIN TRANSACTION statement?
The idea is that if that set of statements needs to be "all or nothing", wrapping the lot in a transaction is the way to ensure that is what will happen. You're not seeing an explicit rollback because that's not what they're guarding against. Imagine the following scenario with your contrived example:
The insert happens
The server crashes (or the log fills up or some other external reason why things can't continue) before the update can happen
If they're both wrapped in the same transaction, the insert won't be reflected in the table data. Which is the desired behavior.
When transactions are not explicitly declared, SQL Server will automatically BEGIN and COMMIT a TRANSACTION for each command. This frees up each command's lock as soon as the command executes.
When executing multiple commands inside a single transaction (as in the example you posted), locks from all commands are held until the transaction is committed.
Depending on the desired behavior, the script you posted may be correct. However, I would be cautious to ensure that the developer did not mistakenly believe that the transaction would be automatically rolled back on error. If that behavior is desired, you do indeed need an explicit ROLLBACK or SET XACT_ABORT ON.
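For illustration, here is one way the posted script could be written so that an error does roll everything back (a sketch reusing the question's table; THROW requires SQL Server 2012 or later). Alternatively, SET XACT_ABORT ON at the top of the batch achieves a similar effect with less code:
BEGIN TRY
    BEGIN TRANSACTION [Tran1];

    INSERT INTO [Test].[dbo].[T1] ([Title], [AVG])
    VALUES ('Tidd130', 130), ('Tidd230', 230);

    UPDATE [Test].[dbo].[T1]
    SET [Title] = N'az2', [AVG] = 1
    WHERE [Title] = N'az';

    COMMIT TRANSACTION [Tran1];
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;   -- undo both statements
    THROW;                      -- re-raise the original error to the caller
END CATCH;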
You use a transaction when you need the outcome to be atomic; you see this a lot in finance-related procedures where you are gravely worried about ACID consistency. Otherwise it is not necessary and introduces a great deal of locking overhead. There are good questions here and here that go into great depth.
Edit
The takeaway point is that if the procedure is all-or-nothing and must either succeed or fail as a unit, the correct decision is to use a transaction. If the procedure is not all-or-nothing, such as a simple insert or update, using a transaction is a) unnecessary and b) can introduce undue performance overhead due to the additional locking.
I am editing a record using DBEdit components. I have a cancel button, but I'm not sure how to make it so that all of the changes made using the DBEdit components are reverted.
I was thinking about copying the record either to a temp table, or duplicating the record within the same table, which would let me remove the old record if the changes are saved, or delete the copied record (leaving the original) if the input is cancelled.
I just want to know the best way to handle this without creating useless tables or too many procedures.
If I'm not mistaken, changes to a Paradox table only get written to the database after a Post command.
If you want to cancel the change, just do:
procedure TForm1.CancelButtonPress(Sender: TObject);
begin
  ParadoxTable.Cancel;
end;

procedure TForm1.OKButtonPress(Sender: TObject);
begin
  ParadoxTable.Post;
end;
BTW, it's been a long, long time since I've worked with Paradox tables, so my recollection may be incorrect; please feel free to vote down this answer if I'm mistaken.
I'm typing this on the mac, so I cannot check it now.
Will see if I can supply you with a more informed answer later.
To complement Johan's answer (use TDataSet.Cancel): if you use a TCustomClientDataSet, you can also use the RevertRecord method to remove the modifications to the current record, provided they are still in the change log.
You can also take a snapshot with SavePoint and revert to that state, cancelling all modifications made in the meantime.
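A rough sketch of both options, assuming a TClientDataSet named ClientDataSet1 with its change log enabled (LogChanges = True):
var
  Snapshot: Integer;

// before the user starts editing:
Snapshot := ClientDataSet1.SavePoint;   // remember the current change-log position

// ... the user edits and posts one or more records ...

// on cancel, either undo just the current record:
ClientDataSet1.RevertRecord;
// or roll everything back to the snapshot:
ClientDataSet1.SavePoint := Snapshot;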
Johan's answer is good for a single record. If you are working with a SQL database (Oracle, MS SQL Server, MySQL, Firebird, etc.), there is an additional approach that can be used for multiple records: transactions. Using ADO as an example:
TForm1 = class(TForm)
ADOConnection: TADOConnection;
…
// start the transaction
ADOConnection.BeginTrans;
…
// create records and post them
…
// rollback removes the records posted
// since the transaction was started
ADOConnection.RollbackTrans;
… or …
// commit completes saving the records posted
// since the transaction was started
ADOConnection.CommitTrans;
If you do not explicitly start a transaction, one is automatically started and committed as records are posted to the database.
François's answer is similar to transactions, but only works with ClientDatasets.
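Putting the ADO pieces together, a slightly fuller sketch; the component names (ADOConnection, ADOQuery1) and the button handler are assumptions, not part of the original answer:
procedure TForm1.SaveButtonClick(Sender: TObject);
begin
  ADOConnection.BeginTrans;
  try
    // post any pending edit on the dataset attached to the connection
    if ADOQuery1.State in [dsEdit, dsInsert] then
      ADOQuery1.Post;
    ADOConnection.CommitTrans;
  except
    ADOConnection.RollbackTrans;  // discards everything posted since BeginTrans
    raise;
  end;
end;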
What is the best approach to interrupting long-running stored procedures (up to 20 minutes)?
The body of the stored procedure is wrapped in a transaction. If I close the connection, will this transaction be rolled back?
Another approach is to start a transaction in C# before I start the stored procedure; when I want to cancel the stored procedure, I just need to roll back the C# transaction.
If you close the connection, SQL Server will rollback the transaction if it notices the disconnect before the transaction commits. There'll be a (very) small time window where the transaction might complete just when you disconnect.
A custom transaction adds complexity and has few benefits for a single stored procedure call. So I'd go for the disconnect.
You can set a timeout in three places: on the connection, on the command, and, if you are ultimately programming a web page, in the page timeout.
The relevant timeout in this case is the command timeout.
Update: cancelling in response to a user event:
To cancel your command in response to a user event, call Cancel() on the command. I haven't written code to test this, but I suspect that once you call ExecuteReader() it will block, so you'd need an async call, BeginExecuteNonQuery(), which really is a pain to set up: it requires extra settings in the connection string, and I think it requires SQL Server 2005 or later.
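A rough, untested sketch of that approach; the procedure name is a placeholder, conn is an already-open SqlConnection, and the older Begin/End async pattern needs "Asynchronous Processing=true" in the connection string:
using System;
using System.Data;
using System.Data.SqlClient;

SqlCommand cmd = new SqlCommand("dbo.MyLongRunningProc", conn)
{
    CommandType = CommandType.StoredProcedure,
    CommandTimeout = 0                 // no command timeout; we cancel explicitly
};
IAsyncResult ar = cmd.BeginExecuteNonQuery();

// ... later, from the user's cancel handler (e.g. a button click):
cmd.Cancel();                          // asks SQL Server to abort the running batch

try { cmd.EndExecuteNonQuery(ar); }
catch (SqlException) { /* typically "Operation cancelled by user" */ }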
UPDATE: Re: Transactions
C# (or ADO.NET) transaction code adds about two lines of code and guarantees that invocations of stored procedures (which may have more than one statement in them, if not today then maybe a year from now) succeed or fail as a unit. Transactions are normally not a source of poor performance, and can even be a source of poor performance when not used; e.g. a long series of inserts runs faster inside a transaction.
If you do not call CommitTrans(), the transaction will roll back; you do not have to explicitly call Rollback().
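For illustration, those "two lines" in ADO.NET terms (a sketch; the procedure name and connStr are placeholders, and Commit() is the SqlTransaction equivalent of CommitTrans()). Disposing the transaction without committing is what triggers the implicit rollback mentioned above:
using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connStr))
{
    conn.Open();
    using (var tran = conn.BeginTransaction())
    using (var cmd = new SqlCommand("dbo.MyLongRunningProc", conn, tran))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandTimeout = 0;   // let the procedure run as long as it needs
        cmd.ExecuteNonQuery();
        tran.Commit();            // if this line is never reached, disposing rolls back
    }
}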
Set Timeout at the time of beginning the transaction.
I am receiving the following error message within my Delphi/Oracle application: "ORA-01000: maximum open cursors exceeded". The code is as follows:
begin
  for i := 0 to 150 do
  begin
    with myADOQuery do
    begin
      SQL.Text := 'DELETE FROM SOMETABLE';
      ExecSQL; // from looking at V$OPEN_CURSOR a new cursor is added on each iteration for the session
      Close;   // thought this would close the cursor but doesn't
    end;
  end;
end;
I'm aware I can resolve the problem by simply increasing the OPEN_CURSORS parameter; however, I would rather find a solution whereby the cursor is closed after the query is executed. Any ideas?
Delphi 2006 BDS
Oracle 10g
Read Oracle Support documents ID 76684.1 and ID 2055810.6. I do not use ADO, but you may have to find a way to tell it how to configure Oracle not to cache statements.
The default OPEN_CURSORS value is usually too low; it is usually better to increase it. It will make Oracle use a little more memory, but on an actual machine that is rarely an issue.
To delete a whole table, TRUNCATE may be better than DELETE, unless you have to rely on DELETE behaviour (e.g. firing triggers).
Check this link. I'm not an Oracle user, but it seems there is some cursor cache, and as they say, "The best advice for tuning OPEN_CURSORS is not to tune it. Set it high enough that you won't have to worry about it." There are also some tips on how to check your current situation.
Try using the TADOCommand component instead.
TADOCommand is most often used for executing data definition language (DDL) SQL commands or to execute a stored procedure that does not return a result set.
Or use the TADOConnection.Execute function directly.
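For example, the original loop rewritten without a dataset component at all (a sketch; the TADOConnection name is an assumption and SOMETABLE comes from the question):
var
  i: Integer;
begin
  for i := 0 to 150 do
    ADOConnection1.Execute('DELETE FROM SOMETABLE');
end;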
What happens if you omit the Close?