I have a bunch of users making ad-hoc queries into a database running SQL Server. Occasionally someone will run a query that locks up the system for a long period of time as they retrieve 10MM rows.
Is it possible to set some options when a specific login connects? e.g.:
transaction isolation level
max rowcount
query timeout
If this isn't possible in SQL Server 2000, is it possible in another version? As far as I can tell, the resource governor does not give you control like this (it just lets you manage memory and CPU).
I realize the users could do much of this themselves, but it'd be awesome if I could control it from the server per user.
Obviously I'd like to move the users away from direct table/view access but that's not an option at the moment.
You can certainly limit the query results. I'm doing it for StackQL using a stored procedure that looks something like this:
CREATE PROCEDURE [dbo].[WebQuery]
    @QueryText nvarchar(1000)
AS
BEGIN
    INSERT INTO QueryLogs(QueryText)
    VALUES(@QueryText)

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    SET QUERY_GOVERNOR_COST_LIMIT 15000
    SET ROWCOUNT 500

    BEGIN TRY
        EXEC (@QueryText)
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER()    AS ErrorNumber,
               ERROR_MESSAGE()   AS ErrorMessage,
               ERROR_PROCEDURE() AS ErrorProcedure,
               ERROR_SEVERITY()  AS ErrorSeverity,
               ERROR_LINE()      AS ErrorLine,
               ERROR_STATE()     AS ErrorState
    END CATCH
END
The important part here is the series of three SET statements after the log. They limit both the number of rows in the results and the estimated cost of the query. The rowcount and query governor values can be set from variables, so it shouldn't be hard to change the restriction based on the current user as well.
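As a rough sketch of that per-user idea (the dbo.UserLimits table and its columns are hypothetical, not part of my actual setup), you could look the row limit up by login before applying it:
DECLARE @MaxRows int

-- Fall back to 500 rows if the current login has no entry in the limits table
SELECT @MaxRows = COALESCE(MAX(MaxRows), 500)
FROM dbo.UserLimits
WHERE LoginName = SUSER_SNAME()

SET ROWCOUNT @MaxRows   -- SET ROWCOUNT accepts a variable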
However, you should also note that it's pretty easy for users who are "in the know" to bust out of that if they want. In my case I consider the ability to get past the limits from time to time a feature. But it's also why I do the logging: the code to get past the limits sticks out in the logs, and so I can easily catch and ban anyone doing it too often without my permission.
Finally, any user that calls this should be only in the denydatawriters role, the datareaders role, and then given explicit permissions to execute just this stored procedure. Then they can't really do anything but select on existing tables.
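A minimal sketch of that permission setup on SQL Server 2005 or later might look like this (the login and user names are made up; adjust to your environment):
-- Map a restricted login to a database user
CREATE USER AdHocUser FOR LOGIN AdHocLogin

-- Read-only on tables/views, writes explicitly denied
EXEC sp_addrolemember 'db_datareader', 'AdHocUser'
EXEC sp_addrolemember 'db_denydatawriter', 'AdHocUser'

-- The only thing they can execute is the wrapper procedure
GRANT EXECUTE ON dbo.WebQuery TO AdHocUser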
Now, I anticipate your next question will be whether you can make this automatic from somewhere like Report Builder or Management Studio. Unfortunately, I don't think that's possible. You'll need to give them some kind of interface that makes it easy to call your stored procedure. But I could be wrong here.
In SQL Server 2005 and 2008 you can add a LOGON trigger to the server that could change the connection's settings based on who has connected. However, you need to be very careful with these, as a mistake or error in the trigger could result in everyone being locked out.
There's nothing like that in SQL Server 2000.
I just wanted to add that if Joel's approach works for you, then I would strongly encourage that method over mine, because a logon trigger is one mighty big and dangerous grenade to be throwing at this problem.
If you still really want to do this, here is a nice article that demonstrates how to use them. Even more important, however, is the one with the crucial instructions for what to do if your logon trigger locks everyone out.
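For illustration only, a bare-bones logon trigger might look like the sketch below (the login name is made up, and this refuses the connection rather than changing its settings); the commented-out snippet at the end is the sort of escape hatch the lockout article is about:
-- Hypothetical example: refuse a particular ad-hoc login during business hours
CREATE TRIGGER LimitAdHocLogin
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = N'AdHocReporting'
       AND DATEPART(hour, GETDATE()) BETWEEN 8 AND 18
        ROLLBACK   -- rolling back inside a logon trigger denies the connection
END

-- If a broken logon trigger locks everyone out, connect on the Dedicated
-- Admin Connection (e.g. sqlcmd -S admin:YourServer -E) and disable it:
-- DISABLE TRIGGER LimitAdHocLogin ON ALL SERVER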
I have an INSERT trigger on one of my tables that issues a THROW when it finds a duplicate. The problem is that my transaction seems to be implicitly rolled back at that point, and I want to control when transactions are rolled back.
The issue can be re-created with this script:
CREATE TABLE xTable (
id int identity not null
)
go
create trigger xTrigger on xTable after insert as
print 'inserting...';
throw 1600000, 'blah', 1
go
begin tran
insert into xTable default values
rollback tran
go
drop table xTable
If you run the rollback tran - it will tell you there is no begin tran.
If I swap the THROW for a 'normal' error (like SELECT 1/0) the transaction is not rolled back.
I have checked the XACT_ABORT flag, and it is off.
Using SQL Server 2012 and testing through SSMS
Any help appreciated, thanks.
EDIT
After reading the articles posted by @Dan Guzman, I came to the following conclusions/summary...
SQL Server automatically sets XACT_ABORT ON in triggers.
My example (above) does not illustrate my situation - In reality I'm creating an extended constraint using a trigger.
My use case was contrived, I was trying to test multiple situations in the SAME unit test (not a real world situation, and NOT good unit test practice).
My handling of the extended constraint check and throwing an error in the trigger is correct, however there is no real situation in which I would not want to rollback the transaction.
It can be useful to SET XACT_ABORT OFF inside a trigger for a particular case; but your transaction will still be undermined by general batch-aborting errors (like deadlocks).
Historical reasons aside, I don't agree with SQL Server's handling of this; just because there is no current situation in which you'd like to continue the transaction does not mean such a situation could never arise.
I'd like to be able to set up SQL Server to maintain the integrity of transactions when your chosen architecture has transactions strictly managed at their origin, i.e. "only the code that starts the transaction may finish it" (aside from the usual fail-safes, e.g. if your code is never reached due to a system failure).
THROW will terminate the batch when outside the scope of TRY/CATCH (https://msdn.microsoft.com/en-us/library/ee677615.aspx). The implication here is that no further processing of the batch takes place, including the statements following the insert. You'll need to either surround your INSERT with a TRY/CATCH or use RAISERROR instead of THROW.
T-SQL error handling is a rather large and complex topic. I suggest you peruse the series of error-handling articles by Erland Sommarskog: http://www.sommarskog.se/error_handling/Part1.html. Most relevant here is the topic Can I Prevent the Trigger from Rolling Back the Transaction? (http://www.sommarskog.se/error_handling/Part3.html#Triggers). The takeaway, from a best-practices point of view, is that a trigger is not the right place to enforce business rules if you don't want the violation to roll back the transaction.
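To make that concrete with the table from the question, a sketch of the TRY/CATCH approach could look like this (the trigger's error still leaves the transaction uncommittable; the point is that you issue the rollback yourself and the batch keeps running):
BEGIN TRAN
BEGIN TRY
    INSERT INTO xTable DEFAULT VALUES   -- the trigger THROWs here
END TRY
BEGIN CATCH
    -- The batch continues instead of being terminated
    PRINT 'caught: ' + ERROR_MESSAGE()
END CATCH

-- You now control the rollback explicitly
IF @@TRANCOUNT > 0
    ROLLBACK TRAN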
About 5 times a year one of our most critical tables has a specific column where all the values are replaced with NULL. We have run log explorers against this and we cannot see any login/hostname populated with the update, we can just see that the records were changed. We have searched all of our sprocs, functions, etc. for any update statement that touches this table on all databases on our server. The table does have a foreign key constraint on this column. It is an integer value that is established during an update, but the update is identity key specific. There is also an index on this field. Any suggestions on what could be causing this outside of a t-sql update statement?
I would start by denying any client-side dynamic SQL if at all possible. It is much easier to audit stored procedures to make sure they execute the correct SQL, including a proper WHERE clause. Unless your SQL Server is terribly broken, the only way data gets updated is because of the SQL you are running against it.
All stored procs, scripts, etc. should be audited before being allowed to run.
If you don't have the mojo to enforce no dynamic client SQL, add application logging that captures each client SQL statement before it is executed. Personally, I would have the logging routine throw an exception (after logging it) when a WHERE clause is missing, but at a minimum, you should be able to figure out where data gets blown out next time by reviewing the log. Make sure your log captures enough information that you can trace it back to the exact source. Assign a unique "name" to each possible dynamic SQL statement executed, e.g., assign a 3-char code to each program and then number each possible call 1..nn within it, so you can tell that the call at "abc123" blew up your data, as well as the exact SQL that was defective.
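As a very rough sketch of what I mean (the procedure, the SqlAuditLog table, and the call-site codes are all made up for illustration):
CREATE PROCEDURE dbo.ExecLoggedSql
    @CallSite varchar(10),       -- e.g. 'abc123' identifying program + call number
    @Sql      nvarchar(max)
AS
BEGIN
    -- Log every statement before it runs so damage can be traced to its source
    INSERT INTO dbo.SqlAuditLog (CallSite, SqlText, LoggedAt)
    VALUES (@CallSite, @Sql, GETDATE())

    -- Crude guard: refuse an UPDATE that has no WHERE clause at all
    IF @Sql LIKE '%UPDATE%' AND @Sql NOT LIKE '%WHERE%'
        RAISERROR('UPDATE without WHERE refused at call site %s', 16, 1, @CallSite)
    ELSE
        EXEC (@Sql)
END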
ADDED COMMENT
Thought of this later. You might be able to add or modify the UPDATE trigger on the table to look at the number of rows updated and prevent the update if that number exceeds a threshold that makes sense for you. I did a little searching and found someone has already written an article on this, as in this snippet:
CREATE TRIGGER [Purchasing].[uPreventWholeUpdate]
ON [Purchasing].[VendorContact]
FOR UPDATE AS
BEGIN
    DECLARE @Count int
    SET @Count = @@ROWCOUNT;

    IF @Count >= (SELECT SUM(row_count)
                  FROM sys.dm_db_partition_stats
                  WHERE OBJECT_ID = OBJECT_ID('Purchasing.VendorContact')
                    AND index_id = 1)
    BEGIN
        RAISERROR('Cannot update all rows', 16, 1)
        ROLLBACK TRANSACTION
        RETURN;
    END
END
Though this is not really the right fix, if you log this appropriately, I bet you can figure out what tried to screw up your data and fix it.
Best of luck
A transaction log explorer should be able to show you who executed the command, when, and exactly what the command looked like.
Which log explorer do you use? If you are using ApexSQL Log you need to enable connection monitor feature in order to capture additional login details.
This might be like using a sledgehammer to drive in a thumb tack, but have you considered using SQL Server Auditing (provided you are using SQL Server Enterprise 2008 or greater)?
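If you want to go that route, a minimal sketch might look like the following (the audit names, file path, and dbo.CriticalTable are placeholders for your own objects; the server audit is created in master and the specification in the affected database):
-- Server-level audit writing to a file
CREATE SERVER AUDIT NullColumnAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\')
ALTER SERVER AUDIT NullColumnAudit WITH (STATE = ON)

-- Database-level specification: record every UPDATE against the table
CREATE DATABASE AUDIT SPECIFICATION NullColumnAuditSpec
    FOR SERVER AUDIT NullColumnAudit
    ADD (UPDATE ON OBJECT::dbo.CriticalTable BY public)
    WITH (STATE = ON)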
I was reading an article the other day that showed how to run SQL updates, inserts, or deletes as a "what if" type scenario. I don't remember the parameter they talked about, and now I can't find the article. Not sure if I was dreaming.
Anyway, does anyone know if there is a parameter in SQL2008 that lets you try an insert, update, or delete without actually committing it? It would log or show you what it would have updated, and you'd remove the parameter and run it for real once it behaves as you expect.
I don't know of a SQL2008-specific feature, but with any SQL engine that supports transactions you can do this:
Start a transaction ("BEGIN TRANSACTION" in TSQL)
The rest of your INSERT/UPDATE/DELETE/what-ever code
(optional) Some extra SELECT statements and such if needed to output the result of the above actions, if the default output from step 2 (things like "X rows affected") is not enough
Rollback the transaction ("ROLLBACK TRANSACTION" in TSQL)
(optional) Repeat the testing code to show how things are without the code in step 2 having run
For example:
BEGIN TRANSACTION
-- make changes
DELETE people WHERE name LIKE 'X%'
DELETE people WHERE name LIKE 'D%'
EXEC some_proc_that_does_more_work
-- check the DB state after the changes
SELECT COUNT(*) FROM people
-- undo
ROLLBACK TRANSACTION
-- confirm the DB state without the changes
SELECT COUNT(*) FROM people
(you might prefer to do the optional "confirm" step before starting the transaction rather than after rolling it back, but I've always done it this way around as it keeps the two likely-to-be-identical sections of code together for easier editing)
If you use something like this rather than something SQL2008-specific, the technique should be transferable to other RDBMSs too (just update the syntax if needed).
OK, finally figured it out. I had confused this with another project I was working on in PowerShell. PowerShell has a "whatif" parameter that can be used to show you what files would be removed before they are removed.
My apologies to those who have spent time trying to find an answer to this post, and my thanks to those of you who have responded.
I believe you're talking about BEGIN TRANSACTION
BEGIN TRANSACTION starts a local transaction for the connection issuing the statement. Depending on the current transaction isolation level settings, many resources acquired to support the Transact-SQL statements issued by the connection are locked by the transaction until it is completed with either a COMMIT TRANSACTION or ROLLBACK TRANSACTION statement. Transactions left outstanding for long periods of time can prevent other users from accessing these locked resources, and also can prevent log truncation.
Do you perhaps mean SET NOEXEC ON ?
When SET NOEXEC is ON, SQL Server compiles each batch of Transact-SQL statements but does not execute them. When SET NOEXEC is OFF, all batches are executed after compilation.
Note that this won't warn/indicate things like key violations.
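For example, something along these lines (the UPDATE is purely illustrative) is compiled but never touches the data:
SET NOEXEC ON
GO
-- Checked for syntax and compiled, but never executed
UPDATE people SET name = NULL WHERE id = 1
GO
SET NOEXEC OFF
GO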
Toad for SQL Server has a "Validate SQL" feature that checks queries for wrong table/column names, etc. Maybe you are talking about some new feature in SSMS 2008 similar to that...
I'm more than seven years late to this particular party, but I suspect the feature in question may also have been the OUTPUT clause. Certainly, it can be used to implement whatif functionality similar to PowerShell's in a T-SQL stored procedure.
https://learn.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql
Use it in each INSERT/UPDATE/DELETE/MERGE query to let the SP output a meaningful result set of the changes it makes, e.g. outputting the table name and action performed as the first two columns, then all the altered columns.
Then simply roll the changes back if a @whatif parameter is set to 1, or commit them if @whatif is set to 0.
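A sketch of what that might look like (the procedure, table, and column names are invented for illustration):
CREATE PROCEDURE dbo.PurgeInactivePeople
    @whatif bit = 1
AS
BEGIN
    BEGIN TRAN

    -- OUTPUT reports exactly which rows the statement touched
    DELETE dbo.people
    OUTPUT 'people' AS TableName, 'DELETE' AS Action, deleted.id, deleted.name
    WHERE last_login < DATEADD(year, -5, GETDATE())

    IF @whatif = 1
        ROLLBACK TRAN    -- report only; nothing is actually changed
    ELSE
        COMMIT TRAN
END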
I'm building my own clone of http://statoverflow.com/sandbox (using the free controls provided to 10K users from Telerik). I have a proof of concept available I can use locally, but before I open it up to others I need to lock it down some more. Currently I run everything through a stored procedure that looks something like this:
CREATE PROCEDURE WebQuery
    @QueryText nvarchar(1000)
AS
BEGIN
    -- no writes, so no need to lock on select
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    -- throttles
    SET ROWCOUNT 500
    SET QUERY_GOVERNOR_COST_LIMIT 500

    EXEC (@QueryText)
END
I need to do two things yet:
Replace QUERY_GOVERNOR_COST_LIMIT with an actual rather than estimated timeout, so no query runs longer than say 2 minutes.
Right now nothing stops users from just putting their own 'SET ROWCOUNT 50000;' in front of their query text to override my restriction, so I need to somehow limit the queries to a single statement or (preferably) disallow SET commands inside the EXEC call.
Any ideas?
Do you really plan to allow users to run arbitrary ad-hoc SQL? Only then can a user slip in a SET to override your restrictions. If that's the case, your best bet is to do some basic parsing using lex/yacc or flex/bison (or your favorite CLR language parser) and detect invalid SET statements. Are you going to allow SET @variable = value, though, which syntactically is also a SET...
If you impersonate low-privileged users via EXECUTE AS, make sure you create an irreversible impersonation context, so the user cannot simply execute REVERT and regain all the privileges :) You also must really understand the implications of database impersonation; make sure you read Extending Database Impersonation by Using EXECUTE AS.
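A minimal sketch of the irreversible-context part (the user name and the sample query are placeholders):
DECLARE @cookie varbinary(8000)

-- Without the cookie value, a plain REVERT inside the submitted SQL fails,
-- so the caller cannot simply switch back to the higher-privileged context.
EXECUTE AS USER = 'LowPrivUser' WITH COOKIE INTO @cookie

EXEC (N'SELECT TOP 10 * FROM dbo.Posts')   -- runs with LowPrivUser's permissions

REVERT WITH COOKIE = @cookie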
Another thing to consider is deferring execution of requests to a queue. Since queue readers can be calibrated via MAX_QUEUE_READERS, you get very cheap throttling. See Asynchronous procedure execution for a related article on how to use queues to execute batches. This mechanism is different from resource governance, but I've seen it used to more effect than the governor itself.
Throwing this out there:
The EXEC statement appears to support impersonation. See http://msdn.microsoft.com/en-us/library/ms188332.aspx. Perhaps you can impersonate a limited user. I am looking into the availability of limitations that may prevent SET statements and the like.
On a very basic level, how about blocking any statement that doesn't start with SELECT? Or will other query openers be supported, like CTEs or DECLARE statements? 1000 chars isn't much room to play with, but I'm not too clear what this is for in the first place.
UPDATED
OK, how about prefixing whatever they submit with SELECT TOP 500 * FROM ( and appending ) q to the end? If they try to run multiple statements it'll throw an error you can catch. And to prevent denial of service, replace their starting SELECT with another SELECT TOP 500.
Doesn't help if they've appended an ORDER BY to something returning a million rows, though.
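Something like this rough sketch of the wrapping idea (the variable names and the sample query are made up):
DECLARE @UserSql nvarchar(1000)
DECLARE @Wrapped nvarchar(1200)

SET @UserSql = N'SELECT Id, Title FROM Posts'    -- whatever the user submitted
SET @Wrapped = N'SELECT TOP 500 * FROM (' + @UserSql + N') AS q'

-- A second statement smuggled into @UserSql breaks the derived-table syntax
-- and raises an error you can catch.
EXEC (@Wrapped)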
I am still learning SQL Server and recently came across a SELECT query in a stored procedure that was causing a very slow fill of a DataSet in C#. At first I thought this was to do with .NET, but then I found a suggestion to put this in the stored procedure:
set implicit_transactions off
This seems to cure it, but I would like to know why. I have also seen other options such as:
set nocount off
set arithabort on
set concat_null_yields_null on
set ansi_nulls on
set cursor_close_on_commit off
set ansi_null_dflt_on on
set ansi_padding on
set ansi_warnings on
set quoted_identifier on
Does anyone know where to find good info on what each of these does and which are safe to use when I have stored procedures set up just to query data for viewing?
I should note, just to stop the usual use/don't-use stored procedures debate: these queries are complex SELECT statements used by multiple programs in multiple languages, so a stored procedure is the best place for them.
Edit: Got my answer. I didn't end up fully reviewing all the options, but I did find that
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
sped up the complex queries dramatically. I am not worried about dirty reads in this instance.
This is the page out of SQL Server Books Online (BOL) that you want. It explains all the SET statements that can be used in a session.
http://msdn.microsoft.com/en-us/library/ms190356.aspx
Ouch, someone, somewhere is playing with fire big-time.
I have never had a production scenario where I had to enable implicit transactions. I always open transactions when I need them and commit them when I am done. The problem with implicit transactions is that it's really easy to "leak" an open transaction, which can lead to horrible issues. What this setting means is "please open a transaction for me the first time I run a statement if there is no transaction open; don't worry about committing it".
For example have a look at the following examples:
set implicit_transactions on
go
select top 10 * from sysobjects
And
set implicit_transactions off
go
begin tran
select top 10 * from sysobjects
They both do the exact same thing; however, in the second example it's pretty clear someone forgot to commit the transaction. This can get very complicated to track down if the setting is enabled in some obscure place.
The best place to get documentation for all the SET statements is the old trusty SQL Server Books Online. It, together with a bit of experimentation in Query Analyzer, is usually all that is required to get a grasp of most settings.
I would strongly recommend you find out who is setting up implicit transactions, find out why they are doing it, and remove the setting if it's not really required. Also, you must confirm that whoever uses this setting commits their implicitly opened transactions.
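A couple of quick checks (nothing fancy, just the standard commands) help confirm whether an implicitly opened transaction has been left hanging:
-- Anything greater than zero means a transaction is still open on this connection
SELECT @@TRANCOUNT AS OpenTransactions

-- Shows the oldest active transaction in the current database, with its start time
DBCC OPENTRAN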
What was probably going on is that you had an open transaction that was blocking part of your stored proc, and somewhere a timeout was occurring, raising an error that was handled in code; when that timeout happens your stored proc continues running. My guess is that the delay is usually exactly 30 seconds.
I think you need to look deeper into your stored procedure. I don't think that SET IMPLICIT_TRANSACTIONS is really going to be what's sped up your procedure, I think it's probably a coincidence.
One thing that may be worth looking at is what is passed from the client to the server, using the Profiler.
We had an odd situation where the default SET options for the ADO connection were causing an SP to take ages to run from the client. We resolved it by looking at exactly what the server was receiving from the client, complete with default SET options, compared to what was sent when executing from SSMS. We then made the client pass the same SET statements as those sent by SSMS.
This may be way off track but it is a useful method to use when the SP executes in a timely fashion on the server but not from the client.
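If Profiler isn't handy, one alternative sketch (SQL Server 2005 or later) is to query the session DMV from both the client connection and SSMS and compare the values:
-- Run this from the slow client connection and again from SSMS, then compare
SELECT arithabort,
       ansi_nulls,
       ansi_warnings,
       quoted_identifier,
       concat_null_yields_null,
       transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID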