I have been working on a stored procedure performance problem for over a week now; it is related to my other post on Stack Overflow here. Let me give you some background information.
We have a nightly process which runs and is started by a stored procedure which calls many, many other stored procedures. Lots of the called stored procedures call others, and so on. I have looked at some of the called procs and there is all sorts of frighteningly complicated stuff in there, such as XML string processing, unnecessary over-use of cursors, over-used NOLOCK hints, rare use of set-based processing, etc. - the list goes on; it's quite horrendous.
This nightly process in our production environment takes on average 1 hour 15 minutes to run. It sometimes takes 2 hours to run, which is unacceptable. I have created a test environment on identical hardware to production and run the proc. It took 45 minutes the first time I ran it. If I restore the database to the exact same point and run it again, it takes longer: indeed, if I repeat this action several times (restoring and re-running), the proc takes progressively longer until it plateaus at around 2 hours. This really puzzles me because I restore the database to the exact same point every time. There are no other user databases on the server.
I thought of two lines of investigation to pursue:
Query plans and parameter spoofing
Tempdb
As a test, I restarted SQL Server to clear out both the cache and tempdb and re-ran the proc with the same database restore. The proc took 45 minutes. I repeated this several times to ensure that it was repeatable - again it took 45 minutes each time. I then embarked on several tests to try and isolate the puzzling increase in run times when SQL Server does not get restarted:
Run the initial stored procedure WITH RECOMPILE
Before running the procedure, execute DBCC FREEPROCCACHE to clear out the procedure cache
Before running the procedure, execute CHECKPOINT followed by DBCC DROPCLEANBUFFERS to ensure that the cache was empty and clean
Executed the following script to ensure all stored procedures were marked for recompilation:
DECLARE @proc_schema SYSNAME
DECLARE @proc_name SYSNAME
DECLARE @stmt NVARCHAR(MAX)

DECLARE prcCsr CURSOR LOCAL
FOR SELECT specific_schema,
           specific_name
    FROM   INFORMATION_SCHEMA.routines
    WHERE  routine_type = 'PROCEDURE'

OPEN prcCsr
FETCH NEXT FROM prcCsr INTO @proc_schema, @proc_name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @stmt = N'EXEC sp_recompile ''[' + @proc_schema + '].['
                + @proc_name + ']'''
    -- PRINT @stmt -- DEBUG
    EXEC ( @stmt )

    FETCH NEXT FROM prcCsr INTO @proc_schema, @proc_name
END

CLOSE prcCsr
DEALLOCATE prcCsr
In all the above tests, the procedure takes longer and longer to run with the same database restore. I am really at a loss now as to what to try. Looking into the code at this point is an option, but realistically it's going to take 3-6 months to get that optimised as there is lots of room for improvement there. What I am really interested in getting to the bottom of is why the proc execution time gets longer on each run after a database restore, even when the procedure and buffer caches have been cleared.
I did also investigate tempdb and tried to clear out old tables in there as described in my other Stack Overflow post, but I am unable to manually drop the temp tables that were created from table variables, and they don't seem to want to disappear on their own (even after leaving them for 24 hours).
Any insight or suggestions for further testing would be greatly appreciated. I am running SQL Server 2005 SP3 64-bit Enterprise edition on a Windows 2003 R2 Ent. edition cluster.
Regards,
Mark.
One thing that could cause this is if the process is leaking XML documents. That would cause SQL Server to use more memory, and parts of that might be written to a page file on disk, causing the process to slow down.
Code that creates an XML document looks like:
EXEC sp_xml_preparedocument @idoc OUTPUT, @strXML
It leaks if there is no corresponding:
EXEC sp_xml_removedocument @idoc
XML documents are COM objects stored outside the configured SQL Server memory. Even if you set SQL Server to use max 5 GB, leaking XML documents grows memory usage beyond that.
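A minimal sketch of the prepare/remove pairing to look for when reviewing those procs (the XML string and its shape here are made up purely for illustration):

DECLARE @strXML NVARCHAR(MAX)
DECLARE @idoc INT
SET @strXML = N'<rows><row id="1"/><row id="2"/></rows>'

-- Builds an in-memory DOM that lives outside the buffer pool
EXEC sp_xml_preparedocument @idoc OUTPUT, @strXML

-- Shred the document into a rowset
SELECT id
FROM OPENXML(@idoc, '/rows/row', 1) WITH (id INT)

-- Must follow every prepare, on every code path, or the DOM leaks
EXEC sp_xml_removedocument @idoc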
Reviewing all posts to-date and your related question, it certainly sounds like your strongest lead is the mystery behind those tempdb objects. Some leading questions:
After a fresh start, after the process is run how many objects are in tempdb? Is it the same number after every fresh start?
Do the numbers grow after “successive” runs? Do they grow at the same rate?
Can you determine if they occupy space?
For that matter, do your tempdb files grow with each successive run of your process?
I followed the links, but didn't find any reference discussing the actual problem. You might want to raise the issue on the Microsoft SQL TechNet forums here -- they can be pretty good with the abstract stuff. (If all else fails, you can open a case with MS technical support. It might take days, but odds are very good that they will figure things out. And if it is an MS bug, they refund your money!)
You've said that rewriting the code is not an option. However, if temp table abuse is a factor, identifying and refactoring those parts of the code first might help a lot. To find which those may be, run SQL Profiler while your process executes. This kind of work is, alas, subjective and highly iterative (meaning you hardly ever get just the right set of counters on the first pass). Some thoughts:
Start with tracking SP:Starting, to see which stored procedures are being called.
SQL Profiler can be used to group data; it's awkward and I'm not sure how to describe it in mere text, but configured properly you'll get a Profiler display showing the number of times each procedure was called. Ideally, this would show the most frequently called procs, and you can analyze them for temp table abuse and refactor as necessary.
If nothing jumps out there, you can trace SP:StmtStarting and do the same thing for individual statements. The problem here is that in a 2+/- hour spaghetti-code run, you might run out of disk space, and analyzing 100s of MB of trace data can be a nightmare. (Hint: load it in a table, build indexes, then carefully delete out the cruft.) Again, the goal would be to identify overly used/abused temp table code to be refactored.
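If you capture the trace server-side to a file, something like the following loads it into a table and ranks statements by total duration and reads (the file path and table name here are placeholders):

SELECT *
INTO   dbo.ProcessTrace
FROM   sys.fn_trace_gettable(N'D:\traces\nightly_process.trc', DEFAULT)

CREATE INDEX IX_ProcessTrace_EventClass ON dbo.ProcessTrace (EventClass)

-- Statement shapes ranked by total duration (Duration is in microseconds in SQL 2005)
SELECT TOP (50)
       CONVERT(NVARCHAR(400), TextData) AS StatementText,
       COUNT(*)                         AS Executions,
       SUM(Duration) / 1000             AS TotalMs,
       SUM(Reads)                       AS TotalReads
FROM   dbo.ProcessTrace
WHERE  EventClass IN (43, 45)           -- SP:Completed, SP:StmtCompleted
GROUP BY CONVERT(NVARCHAR(400), TextData)
ORDER BY TotalMs DESC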
Mark-
So it might take 3-6 months to totally re-write this procedure, but that doesn't mean you can't do some relatively quick performance optimization.
Some of the routines I have to support run 30hrs+, I would be ecstatic to get them to run in 2hrs!! The kind of optimization that you do on these routines is a little different than your normal OLTP database:
Capture a trace of the entire process, making sure to capture SP:StmtCompleted and SQL:StmtCompleted events. Make sure to put a filter on Duration (>10ms or something) to eliminate all the quick, unimportant statements.
Pull this trace into a table, and do some filtering/sorting/grouping, focusing on Duration and Reads. You will likely end up with one of two situations:
(A) A handful of individual queries/statements are responsible for the bulk of the time of the procedure (good news)
(B) A whole lot of similar statements each take a short amount of time, but together they add up to a long time.
In scenario (A), just focus your attention on these queries. Optimize them using indexes, or using other standard techniques. I highly recommend Dan Tow's book "SQL Tuning" for a powerful technique to optimize queries, especially messy ones with complicated joins.
In scenario (B), step back a bit and look at the set of statements as a whole. Are they all similar in some way? Can you add an index on a key, common table that will improve them all? Can you eliminate a loop that executes 10,000 dynamic queries, and instead do a single set-based query?
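As a purely hypothetical illustration of that last point (table and column names invented), a row-by-row loop like this:

DECLARE @OrderId INT
DECLARE curOrders CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderId FROM dbo.StagingOrders
OPEN curOrders
FETCH NEXT FROM curOrders INTO @OrderId
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Processed = 1 WHERE OrderId = @OrderId
    FETCH NEXT FROM curOrders INTO @OrderId
END
CLOSE curOrders
DEALLOCATE curOrders

can usually be collapsed into a single set-based statement:

UPDATE o
SET    o.Processed = 1
FROM   dbo.Orders o
JOIN   dbo.StagingOrders s ON s.OrderId = o.OrderId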
Still two other possibilities, I suppose:
(C) 15,000 totally different dynamic SQL statements, each requiring its own painstaking optimization. In this case, try to focus on server-level optimizations, such as I/O based improvements that will benefit them all.
(D) Something else weird going on with TempDB or something mis-configured on the server. Not much else I can say here, other than find the problem, and fix it!
Hope this helps.
Can you try the following scenario on the test server:
Make two copies of the database on the server: [A] and [B]. [A] is the database in question, [B] is the copy.
Restart server
Run your process
Drop the database [A]
Rename [B] to [A]
Run your process
This would be like a hot database swap. If the second run takes longer, something on the server level is happening (tempdb, memory, I/O, etc). If the second run takes about the same time, then the problem is on the database level (locks, index fragmentation, etc).
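If it helps, a minimal sketch of the drop-and-rename step (both database names are placeholders):

USE master
ALTER DATABASE NightlyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE NightlyDB

ALTER DATABASE NightlyDB_Copy SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE NightlyDB_Copy MODIFY NAME = NightlyDB
ALTER DATABASE NightlyDB SET MULTI_USER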
Good luck!
Run the following script at start of test and then after each iteration:
select sum(single_pages_kb) as sum_bp_kb
, sum(multi_pages_kb) as sum_va_kb
, type
from sys.dm_os_memory_clerks
group by type
having sum(single_pages_kb+multi_pages_kb) > 16
order by sum(single_pages_kb+multi_pages_kb) desc
select sum(total_pages), type_desc
from tempdb.sys.allocation_units
group by type_desc;
select * from sys.dm_os_performance_counters
where counter_name in (
'Log Truncations'
,'Log Growths'
,'Log Shrinks'
,'Data File(s) Size (KB)'
,'Log File(s) Size (KB)'
,'Active Temp Tables');
If the results are not self-evident, you can post them somewhere and place a link here, I can look into them and see if something strikes as odd.
What does the overall process do, what is the purpose of the operation being performed?
I would assume that executing the process results in data modification within the database. Is this the case?
If this is the case, then each time you run the process the data being considered is different, so a different execution plan (and therefore a different execution time) is a possibility.
Assuming that modification to the database data is occurring, you should also investigate (a sketch follows below):
Updating relevant database statistics between each process run.
Reviewing the level of index fragmentation between each process run and determining whether defragmentation could prove beneficial.
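A minimal sketch of both checks between runs (the thresholds are arbitrary examples):

-- Refresh statistics (or UPDATE STATISTICS ... WITH FULLSCAN per table for more accuracy)
EXEC sp_updatestats

-- Fragmentation per index in the current database
SELECT OBJECT_NAME(ips.object_id)        AS TableName,
       i.name                            AS IndexName,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN   sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE  ips.avg_fragmentation_in_percent > 30
       AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC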
Apparently you want to try anything except what you really have to do, which is fix the process. Start by getting rid of the cursors. If it takes two hours right now, I'll bet that without the cursors you can get it down to less than ten minutes.
I would log information into a log table, including the time it took to run each step. That will help you narrow down the issue and also help you progressively improve the process by tackling it one piece at a time (starting with the procs that take the longest).
The simplest way is to insert a log row at the beginning and the end of each proc.
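A hypothetical sketch of that pattern (the table and column names are placeholders):

CREATE TABLE dbo.ProcessRunLog (
    LogId    INT IDENTITY(1,1) PRIMARY KEY,
    ProcName SYSNAME      NOT NULL,
    StepName VARCHAR(100) NULL,
    LoggedAt DATETIME     NOT NULL DEFAULT GETDATE()
)
GO

-- At the top and bottom of each proc:
INSERT INTO dbo.ProcessRunLog (ProcName, StepName) VALUES (OBJECT_NAME(@@PROCID), 'start')
-- ... body of the proc ...
INSERT INTO dbo.ProcessRunLog (ProcName, StepName) VALUES (OBJECT_NAME(@@PROCID), 'end')

-- Elapsed time per proc, worst first
SELECT ProcName,
       DATEDIFF(ms, MIN(LoggedAt), MAX(LoggedAt)) AS ElapsedMs
FROM   dbo.ProcessRunLog
GROUP BY ProcName
ORDER BY ElapsedMs DESC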
Cursors are not performance boosters; others have addressed that (not your decision).
Look into the temp tables use/management. Are they global temp tables or session/local temp tables? The fact that they are hanging around looks interesting. The tempdb is locked when temp tables are created which might be part of the issue.
Local temp tables (#mytable syntax) should go away when the session goes out of scope, but you SHOULD have dropped these (release early) to free up resources.
Use of local temp tables in a transaction that is then cancelled without a COMMIT/ROLLBACK can increase locking in tempdb, causing performance issues.
Speaking of transactions - these will cause locks on syscolumns, sysindexes, etc. if temp tables are created within transactions - thus other executions are blocked from using the same query.
Use of temp tables created by calling procedures in the called procedures points to logic need - rethink and try to use relational structures instead.
If you need temp tables (to eliminate cursors :) then avoid SELECT INTO, to avoid locks on system objects.
Use of global temp tables (##myglobaltable syntax) should be avoided, as multiple sessions accessing them can be an issue (the table hangs around until all sessions clear), and, for me at least, they add no real logical value (look into using a permanent table instead). If they are global, ask whether there are blocking procedures.
Are there a lot of sparse temp tables (tables that grow large, but hold much smaller data sets)?
From Microsoft SQL Server Books Online:
“Consider using table variables instead of temporary tables. Temporary tables are useful in cases when indexes need to be created explicitly on them, or when the table values need to be visible across multiple stored procedures or functions. In general, table variables contribute to more efficient query processing.”
Of course, if the temp table needs indexes, table variables are not an option.
I don't have the answer, but here are some ideas of what I would do to isolate issues like this.
First, I would take snapshots of sys.dm_os_wait_stats before and after each execution. You subtract the two snapshots (get the deltas) and see if any particular wait is prominent or gets worse with each run. An easy way to calculate deltas is to copy the sys.dm_os_wait_stats values into Excel worksheets and use VLOOKUP() to subtract corresponding values (a T-SQL alternative is sketched below). I've used this investigation technique hundreds of times. You don't know what aspect SQL Server is hung up on?! Let SQL Server "tell" you via sys.dm_os_wait_stats!
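A minimal T-SQL version of the before/after delta, if you'd rather skip Excel:

-- Snapshot before the run
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO   #waits_before
FROM   sys.dm_os_wait_stats

-- ... run the nightly process ...

-- Snapshot after, then compute the delta per wait type
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO   #waits_after
FROM   sys.dm_os_wait_stats

SELECT a.wait_type,
       a.wait_time_ms        - b.wait_time_ms        AS delta_wait_ms,
       a.signal_wait_time_ms - b.signal_wait_time_ms AS delta_signal_ms,
       a.waiting_tasks_count - b.waiting_tasks_count AS delta_tasks
FROM   #waits_after a
JOIN   #waits_before b ON b.wait_type = a.wait_type
WHERE  a.wait_time_ms - b.wait_time_ms > 0
ORDER BY delta_wait_ms DESC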
The other thing I might try is to adjust the behavior of the loop to understand whether the subsequent slower executions exhibit constant throughput for all records from beginning to end, or whether they only slow down for particular sproc(s) in INFORMATION_SCHEMA.routines. Two techniques for exploring this are:
1) Add a "TOP N" clause to the SQL SELECT, such as "TOP 100" or "TOP 1000" (an artificial limit), to see whether you get the subsequent slowdowns for all record-count scenarios, or only when the cursor result set is large enough to include the offending sproc.
2) Instead of adding "top N", you can add more print statements (instrumentation) to calculate the throughput as it is processing.
Of course, you can do combination of both.
Maybe these diagnostics will get you closer to the root cause.
Edited to add: Btw, SQL2008 has a new performance monitor that makes it easy to "eyeball" the numbers of sys.dm_os_wait_stats. However for SQL2005, you'll have to manually calculate the deltas via Excel or a script.
These are long shots:
Quickly look through all of the stored procedures for things that are unusual and that SQL Server should not really be doing, for example sending email or writing files. SQL Server trying to send email to a non-existent email server could cause delays.
The other thing to keep in mind is that as you restore the database before each test, your disk is possibly getting more fragmented (not really sure about this though). That may explain why run times get longer each time until they plateau.
Firstly, thanks to everyone for some really great help. I much appreciate your time and expertise in helping me to solve this very strange issue. I have an update.
I started a server-side trace to try and isolate the stored procs that were running slower between iterations. What I found surprised me. 96 stored procedures are involved in the process. Most of these stored procedures ran slower the second time around - about 50 of them. The rest were very quick to run and didn't influence the overall time at all, and in fact some of these ran a little quicker (as would be expected).
I failed over the database instance to another node in my cluster and ran the tests there with the exact same results - so I can rule out any OS differences between cluster nodes - when building the clusters I was very conscious to build them identically.
1100 temp tables get created during the process and persist after it has finished - these are all table variables and I found a way to remove them. Running sp_recompile on every proc and function in the database caused all the temp tables to get cleared up. However this did not improve the run times at all. The only thing that helps the run times is a restart of the SQL Server service. Unfortunately I am out of time now to investigate this further - I have other work to do, but would like to persist with it. Perhaps I will come back to it later if I get a spare few hours. In the meantime however, I have to admit defeat with no solution and no bounty to give.
Thanks again everyone.
I have 1.2 million rows in an Azure SQL database table. The following command:
DELETE FROM _PPL_DETAIL WHERE RunId <> 229
is painfully slow.
There is an index on RunId. I am deleting most of the data; RunId 229 accounts for only a small number of records. It has been running for an hour now. Should it take this long? I am pretty sure it will finish. Is there anything I can do to make operations like this faster?
The table does have a PK, although it is a dummy PK (not used). I had already identified that as a possible optimization for this problem, since SQL Server treats a table without a PK differently -- much less efficiently -- but it still takes way too long; it is still taking 1+ hour.
How about trying something like the following:
BEGIN TRAN
SELECT * INTO #T FROM _PPL_DETAIL WHERE RunId = 229
TRUNCATE TABLE _PPL_DETAIL
INSERT INTO _PPL_DETAIL
SELECT * FROM #T
COMMIT TRAN
Without knowing what tier the database where that statement runs is on, it is not easy to help you. However, let us explain how the system works so that you can make this determination with a bit more investigation yourself.
Currently the log commit rate is limited by the tier the database is on. Deletes are fundamentally limited by the ability to write out log records (and replicate them to multiple machines in case your main machine dies). When you select records, you don't have to go over the network to N machines, and you may not even need to go to the local disk if the records are already in memory, so selects are generally expected to be faster than inserts/updates/deletes, which need to harden the log for you. The specific limits for different reservation sizes are documented here: DTU Limits and vCore Limits.
One common problem is to do individual operations in a loop (like a cursor or driven from the client). This implies that each statement has a single row updated and thus has to harden each log record serially because the app has to wait for the statement to return before submitting the next statement. You are not hitting that since you are running a big delete as a single statement. That could be slow for other reasons such as:
Locking - if you have other users doing operations on the table, it could block the progress of the delete statement. You can potentially see this by looking at sys.dm_exec_requests to see if your statement is blocking on other locks.
Query Plan choice. If you have to scan a lot of rows to delete a small fraction, you could be blocked on the IO to find them. Looking at the query plan shape will help here, as will set statistics time on (We suggest you change the query to do TOP 100 or similar to get a sense of whether you are doing lots of logical read IOs vs. actual logical writes). This could imply that your on-disk layout is suboptimal for this problem. The general solutions would be to either pick a better indexing strategy or to use partitioning to help you quickly drop groups of rows instead of having to delete all the rows explicitly.
An additional strategy for better delete performance is batching; a sketch follows below.
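A minimal batched-delete sketch against the table from the question (the batch size is an arbitrary starting point to tune for your tier):

-- Delete in chunks so each transaction, and its log flush, stays small
DECLARE @rows INT = 1

WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM _PPL_DETAIL
    WHERE RunId <> 229

    SET @rows = @@ROWCOUNT
END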
As far as I know, there was a change and the default degree of parallelism (DOP) is 1 on these servers, so running the query with OPTION (MAXDOP 0) could help.
Try this:
DELETE FROM _PPL_DETAIL
WHERE RunId <> 229
OPTION (MAXDOP 0);
I have reviewed the similar question See and clear Postgres caches/buffers?, but all of the answers focus on the data buffers, and PostgreSQL has changed a lot since 2010.
Unlike the OP of that question, I am not looking for consistent behavior when computing performance, I am looking for adaptive behavior as the database changes over time.
In my application, at the beginning of a job execution, the working tables are empty. Queries run very quickly, but as time goes on performance degrades because the prepared statements are not using ideal access paths (they were prepared when the tables were empty - doh!). Since a typical execution of the job will ultimately cover a few hundred million rows, I need to minimize all of the overheads and periodically run statistics to get the best access paths.
In SQL Server, one can periodically run UPDATE STATISTICS and DBCC FREEPROCCACHE, and the prepared statements will automatically be re-prepared to use the new access paths.
Edit: FreeProcCache: in SQL Server, prepared statements are implemented as stored procedures. FreeProcCache wipes the compiled stored procedures so that they will be recompiled on the next invocation, and the new access paths come into effect immediately.
Edit: Details of PostgreSQL's management of prepared statements: PostgreSQL defers the prepare until the first call to EXECUTE, and caches the result of the prepare after the 5th execution. Once cached, the plan is fixed until the session ends or the prepared statement is freed with DEALLOCATE. Closing JDBC objects does not invoke DEALLOCATE, as an optimization to support the open/read/close programming style many web apps use.
Is there a way to force a (Edit)JDBC prepared statement to recompile, (Edit) after running ANALYZE, so it will use the latest statistics?
EDIT: I am using JDBC PreparedStatement to prepare and execute queries against the database and the Postgres JDBC driver.
The way PostgreSQL updates statistics is via ANALYZE. This is also auto-executed after a VACUUM run (since VACUUM frees references and truncates empty pages, I would imagine much like your FreeProcCache).
If autovacuum is enabled (the default), ANALYZE will be autorun according to the autovacuum cadence.
You do not need to "recompile" the prepared statement to pick up the new statistics in most cases, because it will re-plan during each EXECUTE, and a parameterized prepared statement will re-plan based on the parameter values and the updated statistics. EDIT: The edge case described is where the query planner has decided to force a "generic plan" because the estimated cost of the specific plan exceeds the cost of the generic plan after 5 planned executions.
Edit:
If you do reach this edge case, you can "drop" the prepared statement via DEALLOCATE (and then a re-PREPARE).
You may want to try ANALYZE before EXECUTE, but this will not guarantee better performance...
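An illustration of both suggestions at the SQL level (the statement name and table are placeholders; the JDBC driver issues the PREPARE on your behalf):

PREPARE my_stmt (bigint) AS
    SELECT * FROM working_table WHERE job_id = $1;
EXECUTE my_stmt (42);            -- after 5 executions the plan may go "generic"

ANALYZE working_table;           -- refresh statistics as the table fills up

DEALLOCATE my_stmt;              -- throw away the cached plan ...
PREPARE my_stmt (bigint) AS      -- ... and re-prepare against the new statistics
    SELECT * FROM working_table WHERE job_id = $1;
-- DEALLOCATE ALL;               -- pgjdbc >= 9.4.1210 also recovers from this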
Please ensure you really want to re-prepare statements. It might be the case that you just want to close the DB connection from time to time so statements get prepared "from scratch".
In case you really understand what you are doing (and there might be valid reasons, like the one you describe), you can issue the DEALLOCATE ALL statement (a PostgreSQL-specific statement that deallocates all prepared statements). Recent pgjdbc versions (since 9.4.1210, 2016-09-07) handle that just fine and re-prepare the statements on subsequent use.
I'm using SQL Server 2008 on Windows Server 2008 R2, all sp'd up.
I'm getting occasional issues with SQL Server hanging with the CPU usage on 100% on our live server. It seems all the wait time on SQL Server when this happens is given to SOS_SCHEDULER_YIELD.
Here is the Stored Proc that causes the hang. I've added the "WITH (NOLOCK)" in an attempt to fix what seems to be a locking issue.
ALTER PROCEDURE [dbo].[MostPopularRead]
AS
BEGIN
SET NOCOUNT ON;
SELECT
c.ForeignId , ct.ContentSource as ContentSource
, sum(ch.HitCount * hw.Weight) as Popularity
, (sum(ch.HitCount * hw.Weight) * 100) / @Total as [Percent]
, @Total as TotalHits
from
ContentHit ch WITH (NOLOCK)
join [Content] c WITH (NOLOCK) on ch.ContentId = c.ContentId
join HitWeight hw WITH (NOLOCK) on ch.HitWeightId = hw.HitWeightId
join ContentType ct WITH (NOLOCK) on c.ContentTypeId = ct.ContentTypeId
where
ch.CreatedDate between @Then and @Now
group by
c.ForeignId , ct.ContentSource
order by
sum(ch.HitCount * hw.HitWeightMultiplier) desc
END
The stored proc reads from the table "ContentHit", which is a table that tracks when content on the site is clicked (it gets hit quite frequently - anything from 4 to 20 hits a minute). So its pretty clear that this table is the source of the problem. There is a stored proc that is called to add hit tracks to the ContentHit table, its pretty trivial, it just builds up a string from the params passed in, which involves a few selects from some lookup tables, followed by the main insert:
BEGIN TRAN
insert into [ContentHit]
(ContentId, HitCount, HitWeightId, ContentHitComment)
values
(@ContentId, isnull(@HitCount,1), isnull(@HitWeightId,1), @ContentHitComment)
COMMIT TRAN
The ContentHit table has a clustered index on its ID column, and I've added another index on CreatedDate since that is used in the select.
When I profile the issue, I see the Stored proc executes for exactly 30 seconds, then the SQL timeout exception occurs. If it makes a difference the web application using it is ASP.NET, and I'm using Subsonic (3) to execute these stored procs.
Can someone please advise how best I can solve this problem? I don't care about reading dirty data...
EDIT:
The MostPopularRead stored proc is called very infrequently - its called on the home page of the site, but the results are cached for a day. The pattern of events that I am seeing is when I clear the cache, multiple requests come in for the home site, and they all hit the stored proc because it hasn't yet been cached. SQL Server then maxes out, and can only be resolved by restarting the sql server process. When I do this, usually the proc will execute OK (in about 200 ms) and put the data back in the cache.
EDIT 2:
I've checked the execution plan, and the query looks quite sound. As I said earlier when it does run it only takes around 200ms to execute. I've added MAXDOP 1 to the select statement to force it to use only one CPU core, but I still see the issue. When I look at the wait times I see that XE_DISPATCHER_WAIT, ONDEMAND_TASK_QUEUE, BROKER_TRANSMITTER, KSOURCE_WAKEUP and BROKER_EVENTHANDLER are taking up a massive amount of wait time.
EDIT 3:
I previously thought that this was related to Subsonic, our ORM, but having switched to ADO.NET, the error still occurs.
The issue is likely concurrency, not locking. SOS_SCHEDULER_YIELD occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed.
How often is [MostPopularRead] SP called and how long does it take to execute?
The aggregation in your query might be rather CPU-intensive, especially if there are lots of data and/or ineffective indexes. So, you might end up with high CPU pressure - basically, a demand for CPU time is too high.
I'd consider the following:
Check what other queries are executing while CPU is 100% busy? Look at sys.dm_os_waiting_tasks, sys.dm_os_tasks, sys.dm_exec_requests.
Look at the query plan of [MostPopularRead], try to optimize the query. Quite often an ineffective query is the root cause of a performance problem, and query optimization is much more straightforward than other performance improvement techniques.
If the query plan is parallel and the query is often called by multiple clients simultaneously, forcing a single-thread plan with MAXDOP=1 hint might help (abundant use of parallel plans is usually indicated by SOS_SCHEDULER_YIELD and CXPACKET waits).
Also, have a look at this paper: Performance tuning with wait statistics. It gives a pretty good summary of different wait types and their impact on performance.
P.S. It is easier to use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED before a query instead of adding (nolock) to each table.
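If you do go the isolation-level route, a trimmed sketch of the procedure would look like this (only a few of the original columns are shown; this is not the full query from the question):

ALTER PROCEDURE [dbo].[MostPopularRead]
AS
BEGIN
    SET NOCOUNT ON;
    -- One statement instead of a WITH (NOLOCK) hint on every table
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT  c.ForeignId, ct.ContentSource, SUM(ch.HitCount * hw.Weight) AS Popularity
    FROM    ContentHit ch
            JOIN [Content] c    ON ch.ContentId = c.ContentId
            JOIN HitWeight hw   ON ch.HitWeightId = hw.HitWeightId
            JOIN ContentType ct ON c.ContentTypeId = ct.ContentTypeId
    GROUP BY c.ForeignId, ct.ContentSource;
END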
Remove the NOLOCK hint.
Open a query window in SSMS, run SET STATISTICS IO ON and run the query in the procedure. Let it finish and post the IO stats messages here. Then post the table definitions and all indexes defined on them. Then somebody will be able to reply with the proper indexes you need.
As with all SQL performance problem, the text of the query is largely irrelevant without complete schema definition.
A guesstimate covering index would be:
create index ContentHitCreatedDate
on ContentHit (CreatedDate)
include (HitCount, ContentId, HitWeightId);
Update
XE_DISPATCHER_WAIT, ONDEMAND_TASK_QUEUE, BROKER_TRANSMITTER, KSOURCE_WAKEUP and BROKER_EVENTHANDLER: you can safely ignore all these waits. They show up because they represent threads parked and waiting to dispatch XEvents, Service Broker or internal SQL thread pool work items. As they spend most of their time parked and waiting, they accumulate unrealistically large wait times. Ignore them.
If you believe ContentHit to be the source of your problem, you could add a Covering Index
CREATE INDEX IX_CONTENTHIT_CONTENTID_HITWEIGHTID_HITCOUNT
ON dbo.ContentHit (ContentID, HitWeightID, HitCount)
Take a look at the Query Plan if you want to be certain about the bottleneck in your query.
By default, SQL Server uses all cores/CPUs for all queries (the "max degree of parallelism" setting under the advanced server properties; DoP = Degree of Parallelism), which can lead to 100% CPU even if only one core is actually waiting for some I/O.
If you search the net or this site you will find resources explaining it better than I can (such as monitoring your I/O even though you appear to have a CPU-bound problem).
On one server we couldn't change the application with a bad query that locked up all resources (CPU), but by setting DoP to half the number of cores we managed to prevent the server from getting "stopped". The effect of the queries being less parallel was negligible in our case.
--
Dom
Thanks to all who posted, I got some great SQL Server perf tuning tips.
In the end we ran out of time to resolve this mystery - we found a more efficient way to collect this information and cache it in the database, so that solved the problem for us.
I'm trying to optimize a stored procedure I'm maintaining, and am wondering if anyone can clue me in to the performance benefits/penalties of the options below. For my solution, I basically need to run a conversion program on an image stored in an IMAGE column in a table. The conversion process lives in an external .EXE file. Here are my options:
Pull the results of the target table into a temporary table, and then use a cursor to go over each row in the table and run a stored procedure on the IMAGE column. The stored proc calls out to the .EXE.
Create a UDF that calls the .EXE file, and run a SQL query similar to "select UDFNAME(Image_Col) from TargetTable".
I guess what I'm looking for is an idea of how much overhead would be added by the creation of the cursor, instead of doing it as a set?
Some additional info:
The size of the set in this case is max. 1000
As an answer mentions below, if done as a set with a UDF, will that mean that the external program is opened 1000 times all at once? Or are there optimizations in place for that? Obviously, on a multi-processor system, it may not be a bad thing to have multiple instances of the process running, but 1000 might be a bit much.
Define "set-based" in this context?
If you have 100 rows, will this open up the app 100 times in one shot? I would say test it. And just because you can call an extended proc from a UDF, I would still use a cursor for this, because set-based doesn't matter in this case since you are not manipulating data in the tables directly.
I did a little testing and experimenting, and when done in a UDF, it does indeed process one row at a time - SQL Server doesn't spin up 100 processes at once for the 100 rows (I didn't think it would).
However, I still believe that doing this as a UDF instead of as a cursor would be better, because my research tends to show that the extra overhead of having to pull the data out in the cursor would slow things down. It may not make a huge difference, but it might save time versus pulling all of the data out into a temporary table first.
So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause.
I've been all up and down the mgmt studio options, but can't find a confirm option. I know other tools for other databases have it.
I'd suggest that you always write the SELECT statement with the WHERE clause first and execute it to actually see which rows your DELETE command will delete. Then just execute the DELETE with the same WHERE clause. The same applies to UPDATEs.
Under Tools>Options>Query Execution>SQL Server>ANSI, you can enable the Implicit Transactions option which means that you don't need to explicitly include the Begin Transaction command.
The obvious downside of this is that you might forget to add a Commit (or Rollback) at the end, or worse still, your colleagues will add Commit at the end of every script by default.
You can lead the horse to water...
You might suggest that they always take an ad-hoc backup before they do anything (depending on the size of your DB) just in case.
Try using a BEGIN TRANSACTION before you run your DELETE statement.
Then you can choose to COMMIT or ROLLBACK same.
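A minimal sketch of that habit (the table and key are placeholders):

BEGIN TRANSACTION

DELETE FROM dbo.SomeTable      -- placeholder table
WHERE  Id = 12345

-- @@ROWCOUNT still reflects the DELETE here; sanity-check it before deciding
SELECT @@ROWCOUNT AS RowsDeleted

-- If the count looks right:
-- COMMIT TRANSACTION
-- If it deleted far more than expected:
-- ROLLBACK TRANSACTION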
In SSMS 2005, you can enable this option under Tools|Options|Query Execution|SQL Server|ANSI ... check SET IMPLICIT_TRANSACTIONS. That will require a commit to affect update/delete queries for future connections.
For the current query, go to Query|Query Options|Execution|ANSI and check the same box.
This page also has instructions for SSMS 2000, if that is what you're using.
As others have pointed out, this won't address the root cause: it's almost as easy to paste a COMMIT at the end of every new query you create as it is to fire off a query in the first place.
First, this is what audit tables are for. If you know who deleted all the records, you can either restrict their database privileges or deal with them from a personnel perspective. The last person who did this at my office is currently on probation. If she does it again, she will be let go. You have responsibilities if you have access to production data, and ensuring that you cause no harm is one of them. This is a personnel problem as much as a technical problem. You will never find a way to prevent people from making dumb mistakes (the database has no way to know if you meant "delete table a" or "delete table a where id = 100", and a confirm prompt will get clicked through automatically by most people). You can only try to reduce them by making sure the people who run this code are responsible and by putting policies in place to help them remember what to do. Employees who have a pattern of behaving irresponsibly with your business data (particularly after they have been given a warning) should be fired.
Others have suggested the kinds of things we do to prevent this from happening. I always embed a select in a delete that I'm running from a query window to make sure it will delete only the records I intend. All our code on production that changes, inserts or deletes data must be enclosed in a transaction. If it is being run manually, you don't run the rollback or commit until you see the number of records affected.
Example of delete with embedded select
delete a
--select a.* from
from table1 a
join table2 b on a.id = b.id
where b.somefield = 'test'
But even these techniques can't prevent all human error. A developer who doesn't understand the data may run the select and still not understand that it is deleting too many records. Running in a transaction may mean you have other problems when people forget to commit or rollback and lock up the system. Or people may put it in a transaction and still hit commit without thinking just as they would hit confirm on a message box if there was one. The best prevention is to have a way to quickly recover from errors like these. Recovery from an audit log table tends to be faster than from backups. Plus you have the advantage of being able to tell who made the error and exactly which records were affected (maybe you didn't delete the whole table but your where clause was wrong and you deleted a few wrong records.)
For the most part, production data should not be changed on the fly. You should script the change and check it on dev first. Then on prod, all you have to do is run the script with no changes, rather than highlighting and running little pieces one at a time. Now in the real world this isn't always possible, as sometimes you are fixing something broken only on prod that needs to be fixed now (for instance when none of your customers can log in because critical data got deleted). In a case like this, you may not have the luxury of reproducing the problem first on dev and then writing the fix. When you have these types of problems, you may need to fix directly on prod, and you should have only DBAs, database analysts, configuration managers, or others who are normally responsible for data on prod do the fix, not a developer. Developers in general should not have access to prod.
That is why I believe you should always:
1 Use stored procedures that are tested on a dev database before deploying to production
2 Select the data before deletion
3 Screen developers using an interview and performance evaluation process :)
4 Base performance evaluation on how many database tables they do/do not delete
5 Treat production data as if it were poisonous and be very afraid
So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause
Probably the only solution will be to replace someone with someone else ;). Otherwise they will always find a workaround.
Eventually, restrict database access for that person and provide them with a stored procedure that takes the parameter used in the WHERE clause, and grant them access to execute that stored procedure.
Put on your best Trogdor and Burninate until they learn to put in the WHERE clause.
The best advice is to get the muckety-mucks that are mucking around in the database to use transactions when testing. It goes a long way towards preventing "whoops" moments. The caveat is that now you have to tell them to COMMIT or ROLLBACK because for sure they're going to lock up your DB at least once.
Lock it down:
REVOKE delete rights on all your tables.
Put in an audit trigger and audit table.
Create parameterized delete SPs and only give rights to execute them on an as-needed basis (a sketch follows below).
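A hypothetical example of that last point (the table, procedure, and role names are all made up):

CREATE PROCEDURE dbo.Order_Delete
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    -- The WHERE clause is baked in; only one keyed row can go at a time
    DELETE FROM dbo.Orders WHERE OrderId = @OrderId;
END
GO

REVOKE DELETE ON dbo.Orders FROM SupportRole
GRANT EXECUTE ON dbo.Order_Delete TO SupportRole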
Isn't there a way to give users the results they need without providing raw access to SQL? If you at least had a separate entry box for "WHERE", you could default it to "WHERE 1 = 0" or something.
I think there must be a way to back these out of the transaction journaling, too. But probably not without rolling everything back, and then selectively reapplying whatever came after the fatal mistake.
Another ugly option is to create a trigger to write all DELETEs (maybe over some minimum number of records) to a log table.
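A hypothetical per-table version of that trigger (table and log names invented), which also records who ran the delete:

CREATE TABLE dbo.Orders_DeleteLog (
    LogId     INT IDENTITY(1,1) PRIMARY KEY,
    OrderId   INT      NOT NULL,
    DeletedBy SYSNAME  NOT NULL DEFAULT SUSER_SNAME(),
    DeletedAt DATETIME NOT NULL DEFAULT GETDATE()
)
GO

CREATE TRIGGER trg_Orders_Delete
ON dbo.Orders
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Optionally wrap this in IF (SELECT COUNT(*) FROM deleted) > 100 to log only mass deletes
    INSERT INTO dbo.Orders_DeleteLog (OrderId)
    SELECT OrderId FROM deleted;
END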