ORA-01000 - maximum open cursors exceeded error - database

I am receiving the following error message within my Delphi/Oracle application: "ORA-01000 - maximum open cursors exceeded". The code is as follows:
begin
  for i := 0 to 150 do
  begin
    with myADOQuery do
    begin
      SQL.Text := 'DELETE FROM SOMETABLE';
      ExecSQL; // from looking at V$OPEN_CURSOR a new cursor is added on each iteration for the session
      Close;   // thought this would close the cursor but doesn't
    end;
  end;
end;
I'm aware I can resolve the problem by simply increasing the OPEN_CURSORS parameter; however, I would rather find a solution whereby the cursor is closed after the query is executed. Any ideas?
Delphi 2006 BDS
Oracle 10g

Read Oracle Support documents ID 76684.1 and ID 2055810.6. I do not use ADO, but you may have to find a way to configure it so that it does not make Oracle cache statements.
The default OPEN_CURSORS value is usually too low; it is usually better to increase it. Oracle will use a little more memory, but on modern machines that is rarely an issue.
To delete a whole table, TRUNCATE may be better than DELETE, unless you have to rely on DELETE behaviour (e.g. firing triggers).
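For example, with the table from the question:
-- DELETE is DML: every row is logged, DELETE triggers fire,
-- and the operation can be rolled back
DELETE FROM SOMETABLE;

-- TRUNCATE is DDL: storage is deallocated in one operation, no
-- row-level triggers fire, and in Oracle it implicitly commits
TRUNCATE TABLE SOMETABLE;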

Check this link. I'm not an Oracle user, but it seems there is a cursor cache, and as they say, "The best advice for tuning OPEN_CURSORS is not to tune it. Set it high enough that you won't have to worry about it." So I would say that even if the Close command closes the cursor, it still remains in the cache. There are also some tips on how to check your current situation.
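For instance, you could check the situation with queries like these (a sketch; it assumes your user can read the V$ views):
-- open cursors currently held by each session, highest first
SELECT sid, COUNT(*) AS open_cursors
FROM v$open_cursor
GROUP BY sid
ORDER BY open_cursors DESC;

-- the configured limit
SELECT value FROM v$parameter WHERE name = 'open_cursors';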

Try using the TADOCommand component instead.
TADOCommand is most often used for executing data definition language (DDL) SQL commands, or to execute a stored procedure that does not return a result set.
Or use the TADOConnection.Execute method directly.

What happens if you omit the Close?

Related

How do I copy a big database table to another in ABAP?

I want to copy one big database table to another. This is my current approach:
OPEN CURSOR WITH HOLD lv_db_cursor FOR
  SELECT * FROM zcustomers.

DO.
  REFRESH gt_custom.
  FETCH NEXT CURSOR lv_db_cursor
    INTO TABLE gt_custom
    PACKAGE SIZE lv_package_size.
  IF sy-subrc NE 0.
    CLOSE CURSOR lv_db_cursor.
    EXIT.
  ENDIF.
  INSERT zcustomers1 FROM TABLE gt_custom.
  " write code here to modify your custom table from gt_custom
ENDDO.
But the problem is that I get the error "ASE has run out of LOCKS" (SQL error 1204).
I tried to use a COMMIT statement after inserting a batch of records, but it closes the cursor.
I don't want to increase the maximum number of locks in the database settings or make a copy at the database level.
I want to understand how I can copy the table with the best performance and the lowest memory usage in ABAP...
Thank you.
You can also "copy on database level" from within ABAP SQL using a combined INSERT and SELECT:
INSERT zcustomers1 FROM ( SELECT * FROM zcustomers ).
Unlike the other solution, this runs in one single transaction (no inconsistency in the database) and avoids moving the data between the database and the ABAP server, so it should be faster by orders of magnitude. However, like the code in the question, it might still run into database limits due to opening many locks during the insert (though it might avoid other problems). That should be solved on the database side and is not a limitation of ABAP.
By using COMMIT CONNECTION instead of COMMIT WORK, it is possible to commit only the transaction writing to zcustomers1 while keeping the transaction reading from zcustomers open.
Note that having multiple transactions (one reading, multiple writing) can create inconsistencies in the database if zcustomers or zcustomers1 is written to while this code runs. It also means that, while the copy is running, reading from zcustomers1 shows only a part of the entries from zcustomers.
DATA:
  gt_custom       TYPE TABLE OF zcustomers,
  lv_package_size TYPE i,
  lv_db_cursor    TYPE cursor.

lv_package_size = 10000.

OPEN CURSOR WITH HOLD lv_db_cursor FOR
  SELECT * FROM zcustomers.

DO.
  REFRESH gt_custom.
  FETCH NEXT CURSOR lv_db_cursor
    INTO TABLE gt_custom
    PACKAGE SIZE lv_package_size.
  IF sy-subrc NE 0.
    CLOSE CURSOR lv_db_cursor.
    EXIT.
  ENDIF.
  MODIFY zcustomers1 FROM TABLE gt_custom.
  " regularly committing here releases a partial state to the database;
  " through that, locks are released and running into ASE error SQL1204 is avoided
  COMMIT CONNECTION default.
ENDDO.

SQL Error during Lazy Loop awaiting Azure DB resize

I want to automate some DB scaling in my Azure SQL database.
This can be easily initiated using this:
ALTER DATABASE [myDatabase]
MODIFY (EDITION ='Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
But that command returns instantly, while the resize takes a few tens of seconds to complete.
We can check the current service objective using the following, which doesn't update until the change is complete:
SELECT DATABASEPROPERTYEX('myDatabase', 'ServiceObjective')
So naturally I wanted to combine this with a WHILE loop and a WAITFOR DELAY, in order to create a stored procedure that will change the DB size, and not return until the change has completed.
But when I wrote that stored procedure (script below) and ran it, I get the following error every time (at about the same time that the size change completes):
A severe error occurred on the current command. The results, if any, should be discarded.
The resize succeeds, but I get errors instead of a cleanly finishing stored procedure call.
Various things I've already tested:
If I separate the "Initiate" and the "WaitLoop" sections, and start the WaitLoop in a separate connection, after initiation but before completion, then that also gives the same error.
Adding a TRY...CATCH block doesn't help either.
Removing the stored procedure aspect and just running the code directly doesn't fix it either.
My interpretation is that the Resize isn't quite as transparent as one might hope, and that connections created before the resize completes get corrupted in some sense.
Whatever the exact cause, it seems to me that this stored procedure just isn't achievable at all; I'll have to do the polling from my external process - opening new connections each time. It's not an awful solution, but it is less pleasant than being able to encapsulate the whole thing in a single stored procedure. Ah well, such is life.
Question:
Before I give up on this entirely... does anyone have an alternative explanation or solution for this error, which would allow a single stored procedure call to change the size and not return until that size change has actually completed?
Initial stored procedure code (simplified to remove parameterisation complexity):
CREATE PROCEDURE [trusted].[sp_ResizeAzureDbToS3AndWaitForCompletion]
AS
    ALTER DATABASE [myDatabase]
    MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);

    WHILE ((SELECT DATABASEPROPERTYEX('myDatabase', 'ServiceObjective')) != 'S3')
    BEGIN
        WAITFOR DELAY '00:00:05'
    END

    RETURN 0
Whatever the exact cause, it seems to me that this stored procedure just isn't achievable at all; I'll have to do the polling from my external process - opening new connections each time.
Yes, this is correct. As described here, when you change the service objective of a database:
A new compute instance is created with the requested service tier and compute size... the database remains online during this step, and connections continue to be directed to the database in the original compute instance ... [then] existing connections to the database in the original compute instance are dropped. Any new connections are established to the database in the new compute instance.
The key part is that existing connections are dropped: that is what kills your stored procedure execution. You need to do this check externally.

SQL - update, delete, insert - Whatif scenario

I was reading an article the other day that showed how to run SQL UPDATE, INSERT, or DELETE statements in a what-if type scenario. I don't remember the parameter that they talked about, and now I can't find the article. Not sure if I was dreaming.
Anyway, does anyone know if there is a parameter in SQL 2008 that lets you try an insert, update, or delete without actually committing it? It would log or show you what it would have updated; you would then remove the parameter and run the statement for real once it behaves as you expect.
I don't know of a SQL 2008-specific feature, but with any SQL server that supports transactions you can do this:
1. Start a transaction ("BEGIN TRANSACTION" in TSQL)
2. Run the rest of your INSERT/UPDATE/DELETE/whatever code
3. (optional) Run some extra SELECT statements and such if needed to output the result of the above actions, if the default output from step 2 (things like "X rows affected") is not enough
4. Roll back the transaction ("ROLLBACK TRANSACTION" in TSQL)
5. (optional) Repeat the testing code to show how things are without the code in step 2 having run
For example:
BEGIN TRANSACTION
-- make changes
DELETE people WHERE name LIKE 'X%'
DELETE people WHERE name LIKE 'D%'
EXEC some_proc_that_does_more_work
-- check the DB state after the changes
SELECT COUNT(*) FROM people
-- undo
ROLLBACK TRANSACTION
-- confirm the DB state without the changes
SELECT COUNT(*) FROM people
(you might prefer to do the optional "confirm" step before starting the transaction rather than after rolling it back, but I've always done it this way around as it keeps the two likely-to-be-identical sections of code together for easier editing)
If you use something like this rather than something SQL 2008-specific, the technique should be transferable to other RDBMSs too (just update the syntax if needed).
OK, finally figured it out. I confused this with another project I was working on in PowerShell. PowerShell has a "whatif" parameter that can be used to show you what files would be removed before they are removed.
My apologies to those who have spent time trying to find an answer to this post, and my thanks to those of you who have responded.
I believe you're talking about BEGIN TRANSACTION
BEGIN TRANSACTION starts a local transaction for the connection issuing the statement. Depending on the current transaction isolation level settings, many resources acquired to support the Transact-SQL statements issued by the connection are locked by the transaction until it is completed with either a COMMIT TRANSACTION or ROLLBACK TRANSACTION statement. Transactions left outstanding for long periods of time can prevent other users from accessing these locked resources, and also can prevent log truncation.
Do you perhaps mean SET NOEXEC ON?
When SET NOEXEC is ON, SQL Server compiles each batch of Transact-SQL statements but does not execute them. When SET NOEXEC is OFF, all batches are executed after compilation.
Note that this won't warn/indicate things like key violations.
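A quick sketch of how it is typically used (reusing the people table from the earlier transaction example):
SET NOEXEC ON
GO
-- this batch is compiled (syntax and object names are checked)
-- but not executed, so no rows are actually removed
DELETE people WHERE name LIKE 'X%'
GO
SET NOEXEC OFF
GO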
Toad for SQL Server has a "Validate SQL" feature that checks queries for wrong table/column names, etc. Maybe you are talking about some new feature in SSMS 2008 similar to that...
I'm more than seven years late to this particular party, but I suspect the feature in question may also have been the OUTPUT clause. Certainly, it can be used to implement what-if functionality similar to PowerShell's in a T-SQL stored procedure.
https://learn.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql
Use this in each INSERT/UPDATE/DELETE/MERGE query to let the stored procedure output a meaningful result set of the changes it makes, e.g. outputting the table name and action performed as the first two columns, followed by all the altered columns.
Then simply roll back the changes if a @whatif parameter is set to 1, or commit them if @whatif is set to 0.
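A minimal sketch of that pattern (the procedure name, table, and columns here are illustrative):
CREATE PROCEDURE DeletePeopleByPrefix
    @prefix nvarchar(10),
    @whatif bit = 1
AS
BEGIN
    BEGIN TRANSACTION

    -- OUTPUT returns one row per affected record: table name and
    -- action first, then every column of the row being removed
    DELETE people
    OUTPUT 'people' AS table_name, 'DELETE' AS action, deleted.*
    WHERE name LIKE @prefix + '%'

    IF @whatif = 1
        ROLLBACK TRANSACTION   -- preview only: undo the changes
    ELSE
        COMMIT TRANSACTION     -- apply them for real
END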

Enforce query restrictions

I'm building my own clone of http://statoverflow.com/sandbox (using the free controls provided to 10K users from Telerik). I have a proof of concept available I can use locally, but before I open it up to others I need to lock it down some more. Currently I run everything through a stored procedure that looks something like this:
CREATE PROCEDURE WebQuery
    @QueryText nvarchar(1000)
AS
BEGIN
    -- no writes, so no need to lock on select
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    -- throttles
    SET ROWCOUNT 500
    SET QUERY_GOVERNOR_COST_LIMIT 500

    exec (@QueryText)
END
There are still two things I need to do:
Replace QUERY_GOVERNOR_COST_LIMIT with an actual rather than estimated timeout, so that no query runs longer than, say, 2 minutes.
Right now nothing stops users from just putting their own 'SET ROWCOUNT 50000;' in front of their query text to override my restriction, so I need to somehow limit the queries to a single statement or (preferably) disallow SET commands inside the exec call.
Any ideas?
You really plan to allow users to run arbitrary ad-hoc SQL? Only then can a user slip in a SET to override your restrictions. If that's the case, your best bet is to do some basic parsing using lex/yacc or flex/bison (or your favorite CLR language tree parser) and detect invalid SET statements. Are you going to allow SET @variable=value though, which syntactically is a SET...
If you impersonate low-privileged users via EXECUTE AS, make sure you create an irreversible impersonation context, so the user does not simply execute REVERT and regain all the privileges :) You must also really understand the implications of database impersonation; make sure you read Extending Database Impersonation by Using EXECUTE AS.
Another thing to consider is deferring execution of requests to a queue. Since queue readers can be calibrated via MAX_QUEUE_READERS, you get very cheap throttling. See Asynchronous procedure execution for a related article on how to use queues to execute batches. This mechanism is different from resource governance, but I've seen it used to more effect than the governor itself.
Throwing this out there:
The EXEC statement appears to support impersonation. See http://msdn.microsoft.com/en-us/library/ms188332.aspx. Perhaps you can impersonate a limited user. I am looking into the availability of limitations that may prevent SET statements and the like.
On a very basic level, how about blocking any statement that doesn't start with SELECT? Or will other query openings be supported, like CTEs or DECLARE statements? 1000 chars isn't too much room to play with, but I'm not too clear what this is for in the first place.
UPDATED
Ok, how about prefixing whatever they submit with SELECT TOP 500 * FROM ( and appending ) AS q. If they try to run multiple statements, it'll throw an error you can catch. And to prevent denial of service, replace their starting SELECT with another SELECT TOP 500.
Doesn't help if they've appended an ORDER BY to something returning a million rows, though.
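A sketch of that wrapping approach (the user query here is illustrative, and the submitted text is assumed to be a single SELECT):
-- the user-supplied text
DECLARE @UserQuery nvarchar(1000) = N'SELECT name FROM people'

-- wrapping it in a derived table means a second statement or an
-- injected SET no longer parses, and TOP caps the result at 500 rows
DECLARE @Wrapped nvarchar(1100) =
    N'SELECT TOP 500 * FROM (' + @UserQuery + N') AS q'

EXEC (@Wrapped)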

sql server set implicit_transactions off and other options

I am still learning SQL Server somewhat and recently came across a SELECT query in a stored procedure which was causing a very slow fill of a DataSet in C#. At first I thought this was to do with .NET, but then I found a suggestion to put this in the stored procedure:
set implicit_transactions off
this seems to cure it but I would like to know why also I have seen other options such as:
set nocount off
set arithabort on
set concat_null_yields_null on
set ansi_nulls on
set cursor_close_on_commit off
set ansi_null_dflt_on on
set ansi_padding on
set ansi_warnings on
set quoted_identifier on
Does anyone know where to find good info on what each of these does and what is safe to use when I have stored procedures set up just to query data for viewing?
I should note, just to stop the usual use/don't-use stored procedures debate: these queries are complex SELECT statements used by multiple programs in multiple languages, so this is the best place for them.
Edit: Got my answer. I didn't end up fully reviewing all the options, but I did find that
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
sped up the complex queries dramatically. I am not worried about the dirty reads in this instance.
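For a read-only reporting procedure, that looks something like this (a hypothetical sketch; the procedure name and query are illustrative):
CREATE PROCEDURE GetReportData
AS
BEGIN
    -- allow dirty reads: no shared locks are taken, so this query is
    -- neither blocked by nor blocking concurrent writers
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    SELECT TOP 10 * FROM sysobjects
END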
This is the page out of SQL Server Books Online (BOL) that you want. It explains all the SET statements that can be used in a session.
http://msdn.microsoft.com/en-us/library/ms190356.aspx
Ouch, someone, somewhere is playing with fire big-time.
I have never had a production scenario where I had to enable implicit transactions. I always open transactions when I need them and commit them when I am done. The problem with implicit transactions is that it's really easy to "leak" an open transaction, which can lead to horrible issues. What this setting means is "please open a transaction for me the first time I run a statement if there is no transaction open; don't worry about committing it".
For example, have a look at the following:
set implicit_transactions on
go
select top 10 * from sysobjects
And
set implicit_transactions off
go
begin tran
select top 10 * from sysobjects
They both do the exact same thing; however, in the second one it's pretty clear someone forgot to commit the transaction. This can get very complicated to track down if the setting is enabled in some obscure place.
The best place to get documentation for all the SET statements is the old trusty SQL Server Books Online. It, together with a bit of experimentation in Query Analyzer, is usually all that is required to get a grasp of most settings.
I would strongly recommend you find out who is setting up implicit transactions, find out why they are doing it, and remove the setting if it's not really required. Also, you must confirm that whoever uses this setting commits their implicitly opened transactions.
What was probably going on is that you had an open transaction that was blocking a bit of your stored proc, and somewhere a timeout was occurring, raising an error that was handled in code; when that timeout happens your stored proc continues running. My guess is that the delay is exactly 30 seconds each time.
I think you need to look deeper into your stored procedure. I don't think that SET IMPLICIT_TRANSACTIONS is really what sped up your procedure; I think it's probably a coincidence.
One thing that may be worth looking at is what is passed from the client to the server, using the profiler.
We had an odd situation where the default SET options for the ADO connection were causing an SP to take ages to run from the client. We resolved it by looking at exactly what the server was receiving from the client, complete with the default SET options, compared to what was sent when executing from SSMS. We then made the client pass the same SET statements as those sent by SSMS.
This may be way off track, but it is a useful method to use when the SP executes in a timely fashion on the server but not from the client.
