I want to copy one big database table to another. This is my current approach:
OPEN CURSOR WITH HOLD lv_db_cursor FOR
  SELECT * FROM zcustomers.

DO.
  REFRESH gt_custom.
  FETCH NEXT CURSOR lv_db_cursor
    INTO TABLE gt_custom
    PACKAGE SIZE lv_package_size.
  IF sy-subrc NE 0.
    CLOSE CURSOR lv_db_cursor.
    EXIT.
  ENDIF.
  INSERT zcustomers1 FROM TABLE gt_custom.
* Write code here to modify your custom table from gt_custom.
ENDDO.
But the problem is that I get the error "ASE has run out of LOCKS" (SQL error 1204).
I tried to use a COMMIT statement after inserting each batch of records, but it closes the cursor.
I don't want to increase the maximum number of locks in the database settings or make the copy at the database level.
I want to understand how I can do the copy in ABAP with the best performance and low memory usage...
Thank you.
You can also "copy on database level" from within ABAP SQL using a combined INSERT and SELECT:
INSERT zcustomers1 FROM ( SELECT * FROM zcustomers ).
Unlike the other solution, this runs in one single transaction (no inconsistency on the database) and avoids moving the data between the database and the ABAP server, so it should be faster by orders of magnitude. However, like the code in the question, this might still run into database limits due to the many locks opened during the insert (though it might avoid other problems). That should be solved on the database side and is not a limitation of ABAP.
By using COMMIT CONNECTION instead of COMMIT WORK it is possible to commit only the transaction writing to zcustomers1, while keeping the transaction reading from zcustomers open.
Note that having multiple transactions (one reading, multiple writing) can create inconsistencies in the database if zcustomers or zcustomers1 are written to while this code runs. Also, reading from zcustomers1 while the copy is in progress shows only a part of the entries from zcustomers.
DATA:
  gt_custom       TYPE TABLE OF zcustomers,
  lv_package_size TYPE i,
  lv_db_cursor    TYPE cursor.

lv_package_size = 10000.

OPEN CURSOR WITH HOLD lv_db_cursor FOR
  SELECT * FROM zcustomers.

DO.
  REFRESH gt_custom.
  FETCH NEXT CURSOR lv_db_cursor
    INTO TABLE gt_custom
    PACKAGE SIZE lv_package_size.
  IF sy-subrc NE 0.
    CLOSE CURSOR lv_db_cursor.
    EXIT.
  ENDIF.
  MODIFY zcustomers1 FROM TABLE gt_custom.
  " Regularly committing here releases a partial state to the database;
  " through that, locks are released and ASE error SQL1204 is avoided.
  COMMIT CONNECTION default.
ENDDO.
I have a query that returns a huge number of rows, and I am using SELECT INTO (instead of INSERT INTO) to avoid having problems with the transaction log.
The problem is: while this query is running, I can query objects, but Object Explorer does not show them. When I try to expand the Tables node, for example, I receive an error message.
Is there a way to avoid this problem?
As M.Ali explained, SELECT INTO has a table lock on your new table, which is also locking the schema objects that SSMS is trying to query in order to build the tree browser.
I would suggest tuning the query so that the statement can run faster. Since this is inserting into a Heap with no indexes and has the tablock, it will be minimally logged as you stated. So it is likely the SELECT part of the statement that is causing things to be slow. See if that query can be optimized or broken into smaller pieces so that the statement does not run so long.
Alternatively, perform the insert in smaller batches using INSERT INTO (and not specifying the TABLOCK hint), as sketched below.
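A minimal sketch of that batching approach, assuming hypothetical tables dbo.SourceBig and dbo.TargetHeap with matching columns (ID, Col1, Col2) and an ascending key ID to paginate on:

DECLARE @BatchSize INT;
SET @BatchSize = 50000;
DECLARE @LastID INT;
SET @LastID = 0;

WHILE 1 = 1
BEGIN
    -- No TABLOCK hint, so each batch takes ordinary row/page locks
    INSERT INTO dbo.TargetHeap (ID, Col1, Col2)
    SELECT TOP (@BatchSize) ID, Col1, Col2
    FROM dbo.SourceBig
    WHERE ID > @LastID
    ORDER BY ID;

    IF @@ROWCOUNT = 0 BREAK;

    -- The target only ever receives rows from this copy,
    -- so its MAX(ID) marks how far we have got
    SELECT @LastID = MAX(ID) FROM dbo.TargetHeap;
END

Each INSERT commits on its own (there is no surrounding transaction), so locks are released and the log space can be reused between batches.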
Now here is a test for you which will give the answer to your question...
Open a query window in SSMS. Write any query which will return any number of rows, could be only one row or maybe 10, and do as follows.
Query Window 1
BEGIN TRANSACTION;
SELECT *
INTO NEW_Test_TABLE
FROM TABLE_NAME
Query Window 2
Now open another window and write a SELECT statement against this NEW_Test_TABLE.
SELECT * FROM NEW_Test_TABLE
Your query will never finish executing and no results will be returned (at this time NEW_Test_TABLE only exists in the buffer cache) unless you go back to your first query window and commit the transaction. And if you go to query window 1 and ROLLBACK TRANSACTION, NEW_Test_TABLE will only ever have existed in the buffer cache and will no longer exist anywhere.
Similarly, while your SELECT INTO statement is being executed nothing is committed to disk, therefore SSMS cannot see it, nor can it show you any information about it via Object Explorer.
So the answer is: while the query is being executed, be patient and let SQL Server commit the SELECT INTO transaction to disk, and then you will be able to access the table by querying it or via Object Explorer.
I've got a table, from which I constantly want all of the data (it's a buffer being populated by another program). I also want this data to be deleted from the table once I've read it. So I've got something like
BEGIN TRAN T1;
DELETE FROM table_name OUTPUT DELETED.*;
COMMIT TRAN T1;
In the reader program I'm using a DataReader to loop through each of the rows. What I'm worried about is: what if something goes wrong while reading in this data? Say the thread crashes and I've only read in part of the data. If this is a transaction, at what point is it committed? When I start reading it out? I would prefer to avoid locking this table for a while since there's another program always trying to write to it.
Is there perhaps a better way to do this? I have a program that's constantly writing one record after another to this table, and I want the reader program to go in and get it all. I'm grabbing it all at once in handfuls so that I can use DELETE ... OUTPUT DELETED, instead of a more costly series of "delete where" clauses. I would prefer a way that lets both the writer and the reader keep going uninterrupted.
There is no way to be sure with this method.
I would suggest something like this (sketched in SQL after the steps).
1. Move the records to be deleted to a staging table.
2. Output the staging table contents to the application.
3. Get confirmation from the application that the contents have been processed.
4. Delete the contents from the staging table.
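A minimal sketch of that flow, assuming a hypothetical buffer table dbo.Buffer and a staging table dbo.Buffer_Staging with the same columns:

-- 1. Move the rows to the staging table in one atomic statement
DELETE FROM dbo.Buffer
OUTPUT DELETED.* INTO dbo.Buffer_Staging;

-- 2. The application reads the staged rows
SELECT * FROM dbo.Buffer_Staging;

-- 3./4. Only after the application confirms it processed them, clear the staging table
DELETE FROM dbo.Buffer_Staging;

If the reader crashes partway through, the rows are still sitting in dbo.Buffer_Staging and can be re-read, which the plain DELETE ... OUTPUT approach cannot guarantee.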
Crash the DataReader and see.
Could you use MSMQ rather than SQL?
Option A - Hogan - good answer
Option B
Does it have a PK?
Is a delete where really too costly?
It gets rid of the need for a transaction.
Option C
Read a block (TOP) of X rows (like 1000) into a .NET collection rather than another table.
delete where ID in (....), as sketched below.
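A rough shape for Option C, assuming the buffer table is called dbo.Buffer with a primary key column ID (names are illustrative):

-- Reader grabs a block of rows into the application
SELECT TOP (1000) ID, Payload
FROM dbo.Buffer
ORDER BY ID;

-- After processing the block, delete exactly those rows.
-- The ID list is built by the application from the rows it just read;
-- the literal values here are only placeholders.
DELETE FROM dbo.Buffer
WHERE ID IN (101, 102, 103);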
I'm working with cursors at the moment and it's getting messy for me; I hope you can shed some light on a few questions.
I have checked the Oracle documentation about cursors but I cannot find out the following:
When a cursor is opened, is a local copy of the result created in memory?
If yes: does that really make sense if I have a table with a lot of data? I think it would not be very efficient, would it?
If no: is the whole data set locked against other processes?
    If yes: what if I'm doing a truly heavy process for each row? The data would be unavailable for so long...
    If no: what would happen if another process modifies the data I'm currently using with the cursor, or adds new rows? Would the cursor see the updates?
Thanks so much.
You may want to read the section on Data Concurrency and Consistency in the Concepts Guide.
The answers to your specific questions:
When a cursor is opened, is a local copy of the result created in memory?
No, however via Oracle's "Multiversion Read Consistency" (see link above) the rows fetched by the cursor will all be consistent with the point in time at which the cursor was opened - i.e. each row when fetched will be a row that existed when the cursor was opened and still has the same values (even though another session may have updated or even deleted it in the mean time).
If no: is the whole data set locked against other processes?
No
If no: what would happen if another process modifies the data I'm currently using with the cursor, or adds new rows? Would the cursor see the updates?
Your cursor would not see those changes, it would continue to work with the rows as they existed when the cursor was opened.
The Concepts Guide explains this in detail, but the essence of the way it works is as follows:
Oracle maintains something called a System Change Number (SCN) that is continually incremented.
When your cursor opens it notes the current value of the SCN.
As the cursor fetches rows it looks at the SCN stamped on them. If this SCN is the same as or lower than the cursor's starting SCN, then the data is up to date and is used. However, if the row's SCN is higher than the cursor's, this means that another session has changed the row (and committed the change). In this case Oracle looks in the rollback segments for the old version of the row and uses that instead. If the query runs for a long time it is possible that the old version has been overwritten in the rollback segments. In this case the query fails with an ORA-01555 error.
You can modify this default behaviour if needed. For example, if it is vital that no other session modifies the rows you are querying during the running of your cursor, then you can use the FOR UPDATE clause to lock the rows:
CURSOR c IS SELECT sal FROM emp FOR UPDATE OF sal;
Now any session that tries to modify a row used in your query while it is running is blocked until your query has finished and you commit or roll back.
I am moving a system from a VB/Access app to SQL Server. One common thing in the Access database is the use of tables to hold data that is being calculated, with that data then used for a report.
eg.
delete from treporttable
insert into treporttable (.... this thing and that thing)
Update treporttable set x = x * price where (...etc)
and then the report runs from treporttable
I have heard that SQL Server does not like it when all records are deleted from a table, as it creates huge logs, etc. I tried temporary SQL tables but they don't persist long enough for the report, which is in a different process, to run and report off of.
There are a number of places where this is done to different report tables in the application. The reports can be run many times a day and have a large number of records created in the report tables.
Can anyone tell me if there is a best practice for this, or if my information about the logs is incorrect and this code will be fine in SQL Server?
If you do not need to log the deletion activity you can use the TRUNCATE TABLE command.
From Books Online:
TRUNCATE TABLE is functionally identical to DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE.
http://msdn.microsoft.com/en-us/library/aa260621(SQL.80).aspx
delete from sometable
Is going to allow you to roll back the change. So if your table is very large, then this can cause a lot of memory usage and time.
However, if you have no fear of failure then:
truncate table sometable
Will perform nearly instantly, and with minimal memory requirements. There is no rollback though.
To Nathan Feger:
You can rollback from TRUNCATE. See for yourself:
CREATE TABLE dbo.Test(i INT);
GO
INSERT dbo.Test(i) SELECT 1;
GO
BEGIN TRAN
TRUNCATE TABLE dbo.Test;
SELECT i FROM dbo.Test;
ROLLBACK
GO
SELECT i FROM dbo.Test;
GO
i
(0 row(s) affected)
i
1
(1 row(s) affected)
You could also DROP the table, and recreate it...if there are no relationships.
The [DROP table] statement is transactionally safe whereas [TRUNCATE] is not.
So it depends on your schema which direction you want to go!!
Also, use SQL Profiler to analyze your execution times. Test it out and see which is best!!
The answer depends on the recovery model of your database. If you are in full recovery mode, then you have transaction logs that could become very large when you delete a lot of data. However, if you're backing up transaction logs on a regular basis to free the space, this might not be a concern for you.
Generally speaking, if the transaction logging doesn't matter to you at all, you should TRUNCATE the table instead. Be mindful, though, of any key seeds, because TRUNCATE will reseed the table.
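A quick way to see that reseeding behaviour for yourself, using a throwaway table (names are arbitrary):

CREATE TABLE dbo.SeedDemo (id INT IDENTITY(1,1), val CHAR(1));
INSERT dbo.SeedDemo (val) VALUES ('a');
INSERT dbo.SeedDemo (val) VALUES ('b');   -- ids are now 1 and 2

TRUNCATE TABLE dbo.SeedDemo;

INSERT dbo.SeedDemo (val) VALUES ('c');
SELECT id FROM dbo.SeedDemo;              -- returns 1: the identity was reset to its seed
-- A DELETE instead of the TRUNCATE would have left the counter running, giving 3 here.

DROP TABLE dbo.SeedDemo;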
EDIT: Note that even if the recovery model is set to Simple, your transaction logs will grow during a mass delete. The transaction logs will just be cleared afterward (without releasing the space). The point is that DELETE still logs every row inside a transaction, even if only temporarily.
Consider using temporary tables. Their names start with # and they are deleted when nobody refers to them. Example:
create table #myreport (
    id int identity(1,1),
    col1 int,
    ...
)
Temporary tables are made to be thrown away, and that happens very efficiently.
Another option is using TRUNCATE TABLE instead of DELETE. The truncate is minimally logged (only the page deallocations are recorded), so it will barely grow the log file.
I think your example has a possible concurrency issue. What if multiple processes are using the table at the same time? Adding a JOB_ID column or something like that would allow you to clear the relevant entries in this table without clobbering the data being used by another process.
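A sketch of that idea, reusing the x and price columns from the question; the job_id column, the illustrative value 42, and the source table tsourcetable are hypothetical additions:

DECLARE @JobId INT;
SET @JobId = 42;   -- each report run gets its own value, e.g. from a job/sequence table

INSERT INTO treporttable (job_id, x, price)
SELECT @JobId, x, price
FROM tsourcetable;

UPDATE treporttable
SET x = x * price
WHERE job_id = @JobId;   -- plus whatever other conditions the report needs

-- The report reads only its own rows ...
SELECT * FROM treporttable WHERE job_id = @JobId;

-- ... and cleans up only its own rows, leaving other runs untouched
DELETE FROM treporttable WHERE job_id = @JobId;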
Actually, tables such as treporttable do not need to be recovered to a point in time. As such, they can live in a separate database with simple recovery mode. That eases the burden of logging.
There are a number of ways to handle this. First, you can move the creation of the data into the running of the report itself. This I feel is the best way to handle it: you can then use temp tables to temporarily stage your data, and no one will have concurrency issues if multiple people try to run the report at the same time. Depending on how many reports we are talking about, it could take some time to do this, so you may need another short-term solution as well.
Second, you could move all your reporting tables to a different db that is set to simple mode and truncate them before running your queries to populate them. This is closest to your current process, but could be an issue if multiple users are trying to run the same report.
Third, you could set up a job to populate the tables (still in a separate db set to simple recovery) once a day (truncating at that time). Then anyone running a report that day will see the same data and there will be no concurrency issues. However, the data will not be up to the minute. You could also set up a reporting data warehouse, but that is probably overkill in your case.
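A sketch of the separate-database idea from the second and third options; the database name ReportStaging, the production database name, and the column list are placeholders:

-- One-time setup: the reporting staging database does not need point-in-time recovery
ALTER DATABASE ReportStaging SET RECOVERY SIMPLE;

-- Each populate run (ad hoc, or from the scheduled job): clear and refill
TRUNCATE TABLE ReportStaging.dbo.treporttable;

INSERT INTO ReportStaging.dbo.treporttable (col1, col2)
SELECT col1, col2
FROM ProductionDb.dbo.tsourcetable;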
In an ASP.NET application I've got this process:
Start a connection
Start a transaction
Insert a lot of values into a table "LoadData" with the SqlBulkCopy class, with a column that contains a specific LoadId.
Call a stored procedure that:
  reads the table "LoadData" for the specific LoadId;
  for each line does a lot of calculations, which implies reading dozens of tables, and writes the results into a temporary (#temp) table (a process that lasts several minutes);
  deletes the lines in "LoadData" for the specific LoadId.
Once everything is done, write the results into the result table.
Commit transaction or rollback if something fails.
My problem is that if I have 2 users that start the process, the second one will have to wait until the previous one has finished (because the insert seems to put an exclusive lock on the table), and my application sometimes runs into a timeout (and the users are not happy to wait :) ).
I'm looking for a way to have the users do everything in parallel, as there is no interaction between them except for the last step: writing the result. I think what is blocking me is the inserts / deletes in the "LoadData" table.
I checked the other transaction isolation levels but it seems that nothing could help me.
What would be perfect would be to be able to remove the exclusive lock on the "LoadData" table (is it possible to force SQL Server to only lock rows and not the table?) when the insert is finished, but without ending the transaction.
Any suggestion?
Look up READ COMMITTED SNAPSHOT (the READ_COMMITTED_SNAPSHOT database option) in Books Online.
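A hedged sketch of what enabling it looks like; the database name is a placeholder, and note that READ_COMMITTED_SNAPSHOT is a per-database setting rather than a session-level isolation level:

-- Database-level switch (the statement waits until it can get exclusive access to the database)
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

-- Sessions keep the default READ COMMITTED level; readers of "LoadData" then see
-- row versions instead of blocking behind the bulk insert's exclusive locks
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT * FROM dbo.LoadData WHERE LoadId = 42;   -- the LoadId value is illustrative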
Transactions should cover small and fast-executing pieces of SQL / code. They have a tendency to be implemented differently on different platforms. They will lock tables and then expand the lock as the modifications grow, thus locking out other users from querying or updating the same row / page / table.
Why not forget the transaction, and handle processing errors in another way? Is your data integrity truly being secured by the transaction, or can you do without it?
If you're sure that there is no issue with concurrent operations except for the last part, why not start the transaction just before those last statements (whichever ones actually DO require isolation), and commit immediately after they succeed? Then all the upfront read operations will not block each other...
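A rough sketch of that narrower-transaction shape; the column names, the #calc temp table, and the aggregation standing in for the real calculations are all assumptions:

DECLARE @LoadId INT;
SET @LoadId = 42;   -- illustrative

-- Heavy per-load work happens outside any explicit transaction
SELECT LoadId, SUM(Amount) AS ResultValue
INTO #calc
FROM dbo.LoadData
WHERE LoadId = @LoadId
GROUP BY LoadId;    -- stands in for the long-running calculations

-- Only the short, final statements run inside the transaction
BEGIN TRANSACTION;

INSERT INTO dbo.ResultTable (LoadId, ResultValue)
SELECT LoadId, ResultValue FROM #calc;

DELETE FROM dbo.LoadData WHERE LoadId = @LoadId;

COMMIT TRANSACTION;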