I need to insert 1.3 million records from one table into another, and it takes a really long time (over 13 min). After some research I found that it is better to do this operation in batches, so I put together something like this (the actual query is more complicated; it is simplified here for brevity):
DECLARE @key INT; SET @key = 0;
CREATE TABLE #CURRENT_KEYS([KEY] INT)
WHILE 1=1
BEGIN
    -- Get the next subset of keys
    INSERT INTO #CURRENT_KEYS([KEY])
    SELECT TOP 100000 [KEY] FROM #ALL_KEYS WHERE [KEY] > @key
    ORDER BY [KEY]  -- needed so MAX([KEY]) below advances correctly
    IF @@ROWCOUNT = 0 BREAK
    -- Main insert
    INSERT INTO #RESULT([KEY], VALUE)
    SELECT MAIN_TABLE.[KEY], MAIN_TABLE.VALUE
    FROM MAIN_TABLE INNER JOIN #CURRENT_KEYS
        ON MAIN_TABLE.[KEY] = #CURRENT_KEYS.[KEY]
    SELECT @key = MAX([KEY]) FROM #CURRENT_KEYS
    TRUNCATE TABLE #CURRENT_KEYS
END
I already have an indexed list of 1.3 million keys in the #ALL_KEYS table, so the idea here is to create a smaller subset of keys for the JOIN and INSERT in a loop. The above loop executes 13 times (1,300,000 records / 100,000 records per batch). If I put a break after just one iteration, execution time is 9 seconds. I assumed total execution time would be 9*13 seconds, but it's the same 13 minutes!
Any idea why?
NOTE: Instead of the temp table #CURRENT_KEYS, I tried to use a CTE, but with the same result.
UPDATE: Some wait stats.
For this process I am seeing PAGEIOLATCH_SH and sometimes PREEMPTIVE_OS_WRITEFILEGATHER in the wait stats, occasionally over 500 ms but often < 100 ms. Also, sp_who shows the user as suspended for the duration of the query.
I'm pretty sure you're experiencing disk pressure. PREEMPTIVE_OS_WRITEFILEGATHER is an autogrowth event (the database file is getting larger), and PAGEIOLATCH_SH means the process is waiting on a shared latch for a buffer with an outstanding I/O request (probably your file-growth event).
http://blog.sqlauthority.com/2011/02/19/sql-server-preemptive-and-non-preemptive-wait-type-day-19-of-28/
http://blog.sqlauthority.com/2011/02/09/sql-server-pageiolatch_dt-pageiolatch_ex-pageiolatch_kp-pageiolatch_sh-pageiolatch_up-wait-type-day-9-of-28/
What I would recommend is pre-growing both tempdb (for your temp table) and the database that's going to hold the batch insert.
http://support.microsoft.com/kb/2091024
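For example, a minimal sketch of pre-growing both files; MyDb and the logical file names are assumptions (check sys.master_files for the real names), and the sizes are illustrative:

-- tempdev is tempdb's default primary data file logical name.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB);

-- MyDb / MyDb_Data are hypothetical; substitute your database and file names.
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, SIZE = 20480MB);

Grown once up front, the batch loop no longer stalls on PREEMPTIVE_OS_WRITEFILEGATHER growth events mid-insert.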
I'm doing a select on a table with about 6 million records, selecting GETDATE():
select getdate() as date, [...] from MyTable
I verified that the performance issue is with GETDATE(): even after removing all the other fields, the query is still slow.
I thought that putting the value of GETDATE() into a separate variable would speed the query up:
declare @now datetime
set @now = GETDATE()
select @now as date, [...] from MyTable
It is slow as well. Why?
I'd never really noticed this before. But I am seeing the same thing.
Ran the following on a 10 million row table...
-- query #1
DECLARE @now AS DATETIME ;
SET @now = GETDATE() ;
SELECT @now AS [date], * FROM [MyTable] ;
-- cpu time = 2,563 ms
-- duration = 27,511 ms
-- query #2
SELECT GETDATE() AS [date], * FROM [MyTable] ;
-- cpu time = 2,421 ms
-- duration = 26,862 ms
-- query #3
SELECT * FROM [MyTable] ;
-- cpu time = 1,969 ms
-- duration = 23,149 ms
And the CPU times and durations do show a difference.
All three query plans are more or less the same, with negligible difference between estimated costs for the queries.
The only differences I could see between the plans were the wait stats...
Query #1
WaitType = ASYNC_NETWORK_IO
WaitCount = 77,716
WaitTimeMs = 24,234
Query #2
WaitType = ASYNC_NETWORK_IO
WaitCount = 75,261
WaitTimeMs = 23,662
Query #3
WaitType = ASYNC_NETWORK_IO
WaitCount = 55,434
WaitTimeMs = 20,280
That's an extra 3-4 seconds, between including and not including the GETDATE() column in the result set, just waiting for whatever's running the query to acknowledge it has consumed the data and is ready for more.
In my case, I was using SSMS to execute the queries. So, I can only put it down to SSMS dragging its heels to render that extra column, which amounted to about 75 MB (10M x 8 bytes).
Having said that, the bulk of the time is obviously taken up with scanning all 10 million rows.
Unfortunately, I think the extra execution time to include your GETDATE() column is unavoidable.
Two points.
ASYNC_NETWORK_IO is SQL Server saying that it is waiting for network bandwidth to be available in order to send more data down the pipe.
SSMS stores the output of the Results window in a temp file on your C:\ drive, so it will be affected by disk I/O, AV scanning, other processes, etc. running on your machine. The same concept applies if you're using a client tool on a Linux OS.
I'd experiment with limiting the size of the data being returned (10M records can hardly be analysed by a human), and using a different tool to pull the records (if you really need 10M records) for starters.
Also, review the Execution Plan to find out where exactly the delay is. If it still points to the ASYNC_NETWORK_IO wait, then your problem could be one or more of the network components between yourself and the server. Try using a wired connection instead of WiFi. Do you have a VPN? Is there anything limiting data transfer rates? Or the reason might simply be that too much data is being pulled.
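One quick way to separate server time from client time (my own diagnostic sketch, not part of the original advice) is to materialize the result server-side so no rows travel to the client; if this runs fast, the bottleneck is the consumer, not the query:

-- Nothing is sent to the client, so ASYNC_NETWORK_IO should all but vanish.
SELECT GETDATE() AS [date], *
INTO #discard
FROM [MyTable];

DROP TABLE #discard;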
I've already seen a dozen such questions, but most of them get answers that don't apply to my case.
First off, the database I am trying to get the data from is on a very slow network and is connected to via VPN.
I am accessing it through a database link.
I have full read/write access on my schema tables, but I don't have DBA rights, so I can't create dumps and I don't have grants for creating new tables, etc.
I've been trying to get the database locally and all is well except for one table.
It has 6.5 million records and 16 columns.
There was no problem getting 14 of them but the remaining two are Clobs with huge XML in them.
The data transfer is so slow it is painful.
I tried
insert based on select
insert all 14 then update the other 2
create table as
insert based on select with conditions so I only get so many records, then manually commit
The issue is mainly that the connection is lost before the transaction finishes (power loss, VPN drop, random error, etc.) and all the GBs that have been downloaded are discarded.
As I said, I tried adding conditions so I only get a few records at a time, but even this is a bit random and requires my constant attention.
Something like :
Insert into TableA
Select * from TableA@DB_RemoteDB1
WHERE CREATION_DATE BETWEEN to_date('01-Jan-2016', 'DD-Mon-YYYY') AND to_date('31-DEC-2016', 'DD-Mon-YYYY')
Sometimes it works, sometimes it doesn't. After just a few GBs, Toad gets stuck running, but when I look at its throughput it is 0 KB/s or a few bytes/s.
What I am looking for is a loop or a cursor that can be used to get maybe 100,000 or 1,000,000 records at a time, commit them, then go for the rest until it is done.
This is a one time operation that I am doing as we need the data locally for testing - so I don't care if it is inefficient as long as the data is brought in in chunks and a commit saves me from retrieving it again.
I can already count about 15 GB of failed downloads over the last 3 days, and my local table still has 0 records, as all my attempts have failed.
Server: Oracle 11g
Local: Oracle 11g
Attempted Clients: Toad/Sql Dev/dbForge Studio
Thanks.
You could do something like:
begin
  loop
    insert into tablea
    select * from tablea@DB_RemoteDB1 a_remote
    where not exists (select null from tablea where id = a_remote.id)
    and rownum <= 100000; -- or whatever number makes sense for you
    exit when sql%rowcount = 0;
    commit;
  end loop;
end;
/
This assumes that there is a primary/unique key you can use to check whether a row in the remote table already exists in the local one - in this example I've used a generic ID column, but replace that with your actual key column(s).
For each iteration of the loop it will identify rows in the remote table which do not exist in the local table - which may be slow, but you've said performance isn't a priority here - and then, via rownum, limit the number of rows being inserted to a manageable subset.
The loop then terminates when no rows are inserted, which means there are no rows left in the remote table that don't exist locally.
This should be restartable, due to the commit and the where not exists check. This isn't usually a good approach - as it kind of breaks normal transaction handling - but as a one-off and with your network issues/constraints it may be necessary.
Toad is right: using bulk collect would generally be (probably significantly) faster, as the query isn't repeated each time around the loop:
declare
  cursor l_cur is
    select * from tablea@DB_RemoteDB1 a_remote
    where not exists (select null from tablea where id = a_remote.id);
  type t_tab is table of l_cur%rowtype;
  l_tab t_tab;
begin
  open l_cur;
  loop
    fetch l_cur bulk collect into l_tab limit 100000;
    forall i in 1..l_tab.count
      insert into tablea values l_tab(i);
    commit;
    exit when l_cur%notfound;
  end loop;
  close l_cur;
end;
/
This time you would change the limit 100000 to whatever number you think sensible. There is a trade-off here though, as the PL/SQL table will consume memory, so you may need to experiment a bit to pick that value - you could get errors or affect other users if it's too high. Lower is less of a problem here, except the bulk inserts become slightly less efficient.
But because you have a CLOB column (holding your XML), this won't work for you, as @BobC pointed out; the insert ... select is supported over a DB link, but the collection version will get an error from the fetch:
ORA-22992: cannot use LOB locators selected from remote tables
ORA-06512: at line 10
22992. 00000 - "cannot use LOB locators selected from remote tables"
*Cause: A remote LOB column cannot be referenced.
*Action: Remove references to LOBs in remote tables.
Consider testTable, a table with six fields: one UNIQUEIDENTIFIER, one TIMESTAMP, and four VARCHARs. The Filename field is one of the VARCHARs.
This first query takes 1 minute 38 seconds:
Select top 1 * from testTable WHERE Filename = 'any.string.1512.b'
Either of these queries takes 1-3 seconds
Select top 1 * from testTable WHERE Filename = 'any.string.1512'
Select top 1 * from testTable WHERE Filename like 'cusip.realloc.1412.b%'
I have looked at the execution plan for all three, and the only difference is that the last query (the LIKE statement) used a 46% Index Seek / 54% Key Lookup split vs a 50/50 Index Seek / Key Lookup split for the first two. As far as I can tell, as soon as I drop the .b part of the search criterion, the queries go back to normal speed.
FileName has been indexed; the table has been dropped and recreated just in case. We have added indexes, removed indexes, checked the table, checked the database, restarted services, restarted the server, and recreated the table. This field used to be VARCHAR(MAX) and I changed it to VARCHAR(100) so I could index it, but the problem was occurring before making this change.
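For what it's worth, a covering index would eliminate the Key Lookup half of those plans entirely. A sketch only; the non-Filename column names below are placeholders, since the question doesn't give them:

-- Hypothetical column names: INCLUDE every column the SELECT * returns,
-- so the seek on Filename never needs a Key Lookup back to the base table.
CREATE NONCLUSTERED INDEX IX_testTable_Filename_Covering
ON dbo.testTable (Filename)
INCLUDE (Id, RowVer, Col1, Col2, Col3);

Whether that addresses the pathological '.b' case is a separate question, but it removes one variable from the plans.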
Something else that I believe may be happening is that there might be something wrong with the end of the table. It will never complete a full:
Select * from testTable
I hoped it was a corrupted table, but that wasn't the case. However, when we attempt to generate a script in SSMS, it fails to generate (no error given). I was able to recreate the table by generating the structure from SSMS and copying the data with the other SQL client.
We are pretty stumped.
I am running into an issue where SQL Server is causing a significant number of locks (95 to 150) on our main table. They are typically short duration locks, lasting under 3 seconds, but I would like to eliminate those if I possibly can. We have also noticed that typically there are no blocks, but occasionally we have a situation where the blocks seem to "cascade" and then the entire system slows down considerably.
Background
We have up to 600 virtual machines processing data and we loaded a table in SQL so we could monitor any records that got stalled and records that were marked complete. We typically have between 200,000 and 1,000,000 records in this table during our processing.
What we are trying to accomplish
We are attempting to get the next available record (Status = 0). However, since there can be multiple hits on the stored proc simultaneously, we are trying to make sure each VM gets a unique record. This is important because processing takes between 1.5 and 2.5 minutes per record and we want to make this as clean as possible.
Our thought process to this point
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
    VMID = @VMID,
    ReadCount = ReadCount + 1,
    ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO @retValue
WHERE Status = 0
This update was causing us a few issues with locks, so we reworked the process a bit and changed the WHERE to a sub-query returning the top 1 RowID (the primary key) from the table. This seemed to help things run a little smoother, but then we occasionally get overloaded in the database again.
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
    VMID = @VMID,
    ReadCount = ReadCount + 1,
    ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO @retValue
-- WHERE Status = 0
WHERE RowID IN (SELECT TOP 1 RowID FROM dbo.Test WHERE Status = 0 ORDER BY RowID)
We discovered that having a significant number of Status 1 and 2 records in the table causes slowdowns. We figured it was from a table scan on the Status column. We added the following index, but it did not help solve the locks.
CREATE NONCLUSTERED INDEX IX_Test_Status_RowID
ON [dbo].[Test] ([Status])
INCLUDE ([RowID])
As the final step after the UPDATE, we use the returned RowID to select out the details:
SELECT 'Test' as FileName, *, @Nick as [Nickname]
FROM Test WITH (NOLOCK)
WHERE RowID IN (SELECT id from @retValue)
Types of locks
The majority of the blocks are LCK_M_U and LCK_M_S, which I would expect with that UPDATE and SELECT query. We did have 1 or 2 LCK_M_X locks as well occasionally. That made me think we may still be getting collisions on our "unique" record code.
Questions
Are these locks and the number of locks just normal SQL Server behavior for this type of load?
Is the sub-query causing more issues than a TOP(1) in the UPDATE we started with? I am trying to get confirmation I can remove the ORDER BY statement and remove that extra step of processing.
Would a different index help? I wondered if the index updating was a possible cause of the locks initially, but now I am not sure.
Is there a better or more efficient way to get a unique RowID? (One commonly suggested pattern is sketched just after this list.)
Is the WITH (ROWLOCK) causing more locks than leaving it off would cause? The idea is ROWLOCK would only lock the 1 specific record and allow another proc to update another record and select without locking the table or page.
Does anyone have any tools they recommend to stress test and run 100 queries simultaneously in order to test any potential solutions?
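On the question of a better way to get a unique RowID: one commonly suggested dequeue pattern (sketched here as an assumption, not something from the original post) combines READPAST with UPDLOCK, so concurrent callers skip each other's locked rows instead of colliding. @VMID is assumed declared as in the procedures above:

DECLARE @retValue TABLE (RowID INT);

-- READPAST makes the scan skip rows other sessions have locked, so each
-- caller claims a different row instead of blocking; UPDLOCK takes the
-- update lock immediately, avoiding shared-to-exclusive lock conversion.
UPDATE TOP (1) dbo.Test WITH (ROWLOCK, READPAST, UPDLOCK)
SET Status = 1,
    VMID = @VMID,
    ReadCount = ReadCount + 1,
    ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO @retValue
WHERE Status = 0;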
Sorry for all the questions, just trying to make sure I am as clear as possible on our process and the questions we have.
Thanks in advance for any insight as this is a really frustrating issue for us.
Hardware
We are running SQL Server 2008 R2 on a Dual Xeon CPU with 24 GB of RAM. So we should have plenty of horsepower for this process.
It looks like the best solution to the issue was to create a separate table with an identity and use the @@IDENTITY from the insert to determine the next row to process. That has solved all my lock issues so far in my stress testing. Thanks to all who pointed me in the right direction!
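A minimal sketch of that approach; the table and column names are my assumptions, and SCOPE_IDENTITY() stands in for @@IDENTITY since it isn't affected by triggers:

-- Hypothetical ticket table: each insert hands out the next number atomically.
CREATE TABLE dbo.ProcessTicket
(
    TicketID INT IDENTITY(1,1) PRIMARY KEY,
    VMID     INT NOT NULL
);

-- Each VM claims a ticket; identity values are never issued twice,
-- so the returned number maps to a unique row to process.
INSERT INTO dbo.ProcessTicket (VMID) VALUES (@VMID);
SELECT SCOPE_IDENTITY() AS NextRowID;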
Primary Question:
I want to truncate and refresh a table in SQL Server, but I want to wait for any queries currently accessing the table to finish. Is this a simple setting in SQL Server, or do I need to create some logic to accomplish it?
Detailed Description:
I have a VB application that sits on about 300 terminals. The application calls a SQL Server (2008 R2) stored procedure ([spGetScreenData]) every 2 minutes to get the latest sales data.
[spGetScreenData] creates a series of temp tables and returns a select query of about 200 rows and 100 columns. It takes about 8 seconds to execute.
My goal is to create a new stored procedure ([spRefreshScreenData]) that executes every two minutes which will refresh the data in a table ([SCREEN_DATA]). I will then change [spGetScreenData] to simply query [SCREEN_DATA].
The job that refreshes [SCREEN_DATA] first sets a flag in a status table to 'RUNNING' while it executes. Once complete, it sets that status to 'COMPLETED'.
[spGetScreenData] checks the status of the flag before querying and waits (for a period of time) until it's ready. Something like...
DECLARE @Condition AS BIT = 0
      , @Count AS INT = 0
      , @CycleCount AS INT = 10 -- 10 cycles (20 seconds)

WHILE @Condition = 0 AND @Count < @CycleCount
BEGIN
    SET @Count = @Count + 1
    IF EXISTS( SELECT Status
               FROM tbl_Process_Status
               WHERE Process = 'POS_Table_Refresh'
                 AND Status = 'Running')
        WAITFOR DELAY '000:00:02' -- Wait 2 seconds
    ELSE
        SET @Condition = 1
END

SELECT *
FROM SCREEN_DATA
WHERE (Store = @Store OR @Store IS NULL)
My concern has to do with [spRefreshScreenData]. When [spRefreshScreenData] begins its truncation, there could be dozens of requests for the data currently running.
Will SQL Server simply wait until the requests are done before truncating? Is there a setting I have to set so these queries don't get messed up?
Or do I have to build some mechanism to wait until all requests are completed before starting the truncation?
The job that refreshes [SCREEN_DATA] first sets a flag in a status table to 'RUNNING' while it executes. Once complete, it sets that status to 'COMPLETED'.
[spGetScreenData] checks the status of the flag before querying and waits (for a period of time) until it's ready.
Don't. Use app locks instead. The readers (spGetScreenData) acquire the app lock in Shared mode; the writer (the refresh job) requests it in Exclusive mode. See sp_getapplock.
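A minimal sketch of the app-lock pattern; the resource name 'ScreenDataRefresh' is an arbitrary agreed-upon string, not something from the original procedures:

-- Reader side (spGetScreenData): take the lock in Shared mode so many
-- readers can run together, but none overlap the refresh.
DECLARE @rc INT;
EXEC @rc = sp_getapplock @Resource    = 'ScreenDataRefresh',
                         @LockMode    = 'Shared',
                         @LockOwner   = 'Session',
                         @LockTimeout = 20000;  -- give up after 20 seconds
IF @rc >= 0  -- 0 = granted, 1 = granted after waiting
BEGIN
    SELECT * FROM SCREEN_DATA WHERE (Store = @Store OR @Store IS NULL);
    EXEC sp_releaseapplock @Resource = 'ScreenDataRefresh', @LockOwner = 'Session';
END

The writer (spRefreshScreenData) makes the same calls with @LockMode = 'Exclusive' around its TRUNCATE and reload.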
But even this isn't necessary. You can build the new data online, while the queries continue, without affecting them, by using a staging table, i.e. a different table from the one queried by the apps. When the rebuild is complete, simply swap the original table with the staging one, using either a fast SWITCH operation (see Transferring Data Efficiently by Using Partition Switching) or the good ole sp_rename trick.
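A minimal sketch of the sp_rename variant, assuming a staging table SCREEN_DATA_STAGE with an identical definition has already been loaded with the fresh rows:

-- Swap the freshly built table in; readers only ever see a complete table.
BEGIN TRAN;
    EXEC sp_rename 'dbo.SCREEN_DATA', 'SCREEN_DATA_OLD';
    EXEC sp_rename 'dbo.SCREEN_DATA_STAGE', 'SCREEN_DATA';
    EXEC sp_rename 'dbo.SCREEN_DATA_OLD', 'SCREEN_DATA_STAGE';
COMMIT;

sp_rename takes a schema modification lock, so the swap itself briefly blocks in-flight queries rather than breaking them.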