
Does anyone see a performance issue with my logon Trigger?
I'm trying to reduce the overhead and prevent any performance issues before I push this trigger to my production SQL Server.
I currently have the logon trigger working on my development SQL Server. I let it run over the past weekend and it put 50,000+ rows into my audit log table. I noticed that 95% of the records were for the login 'NT AUTHORITY\SYSTEM', so I decided to filter out anything matching 'NT AUTHORITY%' and simply not insert those records. My thinking is that the resources saved by skipping those inserts will more than make up for the cost of the IF check. I have also been watching Perfmon and don't see anything unusual while the trigger is enabled, but then again my development server doesn't see the same amount of activity as production.
USE [MASTER]
GO
CREATE TRIGGER AuditServerAuthentication
ON ALL SERVER
WITH EXECUTE AS SELF
FOR LOGON
AS BEGIN
    DECLARE @Event XML, @Logon_Name VARCHAR(100)
    SET @Event = EVENTDATA()
    SET @Logon_Name = CAST(@Event.query('/EVENT_INSTANCE/LoginName/text()') AS VARCHAR(100))
    IF @Logon_Name NOT LIKE 'NT AUTHORITY%'
    BEGIN
        INSERT INTO Auditing.Audit.Authentication_Log
            (Post_Time, Event_Type, Login_Name, Client_Host, Application_Name, Event_Data)
        VALUES
        (
            CAST(CAST(@Event.query('/EVENT_INSTANCE/PostTime/text()') AS VARCHAR(64)) AS DATETIME),
            CAST(@Event.query('/EVENT_INSTANCE/EventType/text()') AS VARCHAR(100)),
            CAST(@Event.query('/EVENT_INSTANCE/LoginName/text()') AS VARCHAR(100)),
            CAST(@Event.query('/EVENT_INSTANCE/ClientHost/text()') AS VARCHAR(100)),
            APP_NAME(),
            @Event
        )
    END
END
GO

I use a very similar trigger on my servers, and I haven't experienced any performance issues. The production DB gets about 10 logins per second. This creates a huge amount of data over time, which translates to bigger backups, etc.
For some servers I created a table of logins that shouldn't be logged; with this it is also possible to refuse logins outside working hours.
The difference with my trigger is that I've created a dedicated database for auditing purposes, in which I created some stored procedures that I call from the trigger. The trigger looks like this:
ALTER TRIGGER [tr_AU_LogonLog] ON ALL SERVER
WITH EXECUTE AS 'AUDITUSER'
FOR LOGON
AS
BEGIN
    DECLARE
        @data XML
        , @rc INT
    SET @data = EVENTDATA()
    EXEC @rc = AuditDB.dbo.LogonLog @data
END ;
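The stored procedure itself isn't shown here, but a minimal sketch of what AuditDB.dbo.LogonLog could look like, assuming a hypothetical exclusion table dbo.Excluded_Logins and log table dbo.Logon_Log:
-- Hypothetical sketch only: log the event unless the login is excluded
CREATE PROCEDURE dbo.LogonLog
    @data XML
AS
BEGIN
    DECLARE @Login_Name VARCHAR(100)
    SET @Login_Name = @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'VARCHAR(100)')

    -- dbo.Excluded_Logins is an assumed table of logins that shouldn't be logged
    IF NOT EXISTS (SELECT 1 FROM dbo.Excluded_Logins WHERE Login_Name = @Login_Name)
    BEGIN
        INSERT INTO dbo.Logon_Log (Post_Time, Login_Name, Event_Data)
        VALUES (GETDATE(), @Login_Name, @data)
    END
END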
EDIT: Oh, I forgot: it's recommended to create a specific user for the trigger; EXECUTE AS SELF can be dangerous in some scenarios.
EDIT2: There's some useful information about the EXECUTE AS statement here. Oh, and be careful while implementing logon triggers - you can accidentally lock yourself out, so I recommend keeping a connection open just in case :)
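If you do lock yourself out, the usual escape hatch is the Dedicated Administrator Connection (DAC): connect with the ADMIN: server-name prefix (or sqlcmd -A) and disable the trigger. Using the trigger name from the question:
-- From a DAC connection, disable the offending logon trigger
DISABLE TRIGGER AuditServerAuthentication ON ALL SERVER;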

It doesn't look like a costly IF statement at all to me (it's not like you are selecting anything from the database) and, as you say, it would be far less costly than performing an INSERT that isn't necessary 95% of the time. However, I should add that I am just a database programmer and not a DBA, so I am open to being corrected here.
I am, though, slightly curious as to why you are doing this? Doesn't SQL Server already have a built-in mechanism for Login Auditing that you can use?
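For reference, on SQL Server 2008 and later the built-in route would be a server audit capturing successful logins; a minimal sketch, where the audit name and file path are placeholders:
USE [master];
GO
-- Placeholder names/path; requires SQL Server 2008+
CREATE SERVER AUDIT Logon_Audit
    TO FILE (FILEPATH = 'C:\AuditLogs\');
GO
CREATE SERVER AUDIT SPECIFICATION Logon_Audit_Spec
    FOR SERVER AUDIT Logon_Audit
    ADD (SUCCESSFUL_LOGIN_GROUP);
GO
ALTER SERVER AUDIT Logon_Audit WITH (STATE = ON);
GO
ALTER SERVER AUDIT SPECIFICATION Logon_Audit_Spec WITH (STATE = ON);
GO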

There's nothing immediately obvious. It's effectively an IF that protects an INSERT. The only thing I would validate is how expensive the XML parsing is - I've not used it in SQL Server yet, so it's an unknown to me.
Admittedly, it would seem odd for Microsoft to supply an easy way to get event metadata (EVENTDATA()) yet make it expensive to parse, but stranger things have happened...

Related

Prevent User Usage of "dbo" in User Databases SQL Server

I am attempting to prevent usage of the default schema of "dbo" in my SQL Server databases. This is being applied to an existing long term project with ongoing maintenance where the developers also manage the SQL Server (are all sysadmin).
The main reason is to allow better dependency tracking between code and the SQL Server objects, so that we can slowly migrate to a better naming convention. E.g. "dbo.Users", "dbo.Projects", "dbo.Categories" in a DB are nearly impossible to find in code once created, because the "dbo." prefix is often left out of SQL syntax.
However a proper defined schema requires the usage in code. Eg. "Tracker.Users", "Tracker.Projects", etc ...
Even though we have standards that say not to use "dbo" for objects, it still happens accidentally due to management/business pressure for development speed.
Note: I'm creating this question simply to provide a solution someone else can find useful
EDIT: As pointed out, for non-sysadmin users the security option stated is a viable solution; however, the DDL trigger solution will also work on sysadmin users. This is the case for many small teams who have to manage their own boxes.
I feel like it would be 10,000 times simpler to just DENY ALTER on the dbo schema:
DENY ALTER ON SCHEMA::dbo TO [<role(s)/user(s)/group(s)>];
That's not too handy if everyone connects as sa but, well, fix that first.
The following database DDL trigger raises an error, both in the SSMS GUI and for manual T-SQL attempts, when someone tries to create an object of the specified types.
It includes a means to have a special user and provides clear feedback to the user attempting the object creation. It also works to raise the error with users who are sysadmin.
It does not affect existing objects unless the GUI/SQL tries to DROP and CREATE an existing "dbo" based object.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [CREATE_Prevent_dbo_Usage] ON DATABASE
FOR CREATE_TABLE, CREATE_VIEW, CREATE_PROCEDURE, CREATE_FUNCTION
AS
DECLARE @E XML = EVENTDATA();
DECLARE @CurrentDB nvarchar(200)=@E.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'NVARCHAR(2000)');
DECLARE @TriggerFeedbackName nvarchar(max)=@CurrentDB+N'.CREATE_Prevent_dbo_Usage'; -- used to feed back the trigger name on a failure so the user can disable it (or know where the issue is raised from)
DECLARE @temp nvarchar(2000)='';
DECLARE @SchemaName nvarchar(2000)=@E.value('(/EVENT_INSTANCE/SchemaName)[1]', 'NVARCHAR(2000)');
DECLARE @ObjectName nvarchar(2000)=@E.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(2000)');
DECLARE @LoginName nvarchar(2000)=@E.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(2000)');
DECLARE @CurrentObject nvarchar(200)=''; -- Schema.Table
IF @LoginName NOT IN ('specialUser') BEGIN -- users to exclude from evaluation.
    IF @SchemaName IN ('dbo') BEGIN -- is a dbo creation attempt.
        SET @CurrentObject = @SchemaName+'.'+@ObjectName; -- grouped here for easy cut and paste/modify.
        SET @temp='Cannot create "'+@CurrentObject+'".
This Database "'+@CurrentDB+'" has had creation of "dbo" objects restricted to improve code maintainability.
Use an existing schema or create a new one to group the purpose of the objects/code.
Disable this Trigger TEMPORARILY if you need to do some advanced manipulation of it.
(This message was produced by "'+@TriggerFeedbackName+'")';
        THROW 51000, @temp, 1;
    END
END
GO
ENABLE TRIGGER [CREATE_Prevent_dbo_Usage] ON DATABASE
GO
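With the trigger enabled, a quick way to see it work (assuming a schema such as Tracker already exists):
CREATE TABLE dbo.Users (Id int);      -- fails: the trigger raises error 51000
GO
CREATE TABLE Tracker.Users (Id int);  -- succeeds: a named schema is used
GO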

Why Is My Azure SQL Database Table Permanently Locked?

I have an isolated Azure SQL test database that has no active connections except my development machine through SSMS and a development web application instance. I am the only one using this database.
I am running some tests on a table of ~1M records where we need to do a large UPDATE to data in nearly all of the ~1M records.
DECLARE @BatchSize INT = 1000

WHILE @BatchSize > 0
BEGIN
    UPDATE TOP (@BatchSize)
        [MyTable]
    SET
        [Data] = [Data] + ' a change'
    WHERE
        [Data] IS NOT NULL

    SET @BatchSize = @@ROWCOUNT

    RAISERROR('Updated %d records', 0, 1, @BatchSize) WITH NOWAIT
END
This query works fine, and I can see my data being updated 1000 records at a time every few seconds.
Additional INSERT/UPDATE/DELETE commands on MyTable seem to be somewhat affected while this batch query is running, but they do execute within a few seconds when run. I assume this is because locks are being taken on MyTable and my other commands execute in between the batch query's locking/looping iterations.
This behavior is all expected.
However, every so often while the batch query is running I notice that additional INSERT/UPDATE/DELETE commands on MyTable will no longer execute; they always time out and never finish. I assume some type of lock has occurred on MyTable, but it seems that the lock is never released. Further, even if I cancel the long-running batch update query, I still can no longer run any INSERT/UPDATE/DELETE commands on MyTable. Even after 10-15 minutes of the database sitting idle with nothing happening on it, I cannot execute write commands on MyTable. The only way I have found to "free up" the database from whatever is occurring is to scale it up and down to a new pricing tier. I assume that this pricing tier change is recycling/rebooting the instance or something.
I have reproduced this behavior multiple times during my testing today.
What is going on here?
Scaling the tier up/down rolls back all open transactions and disconnects server logins.
What you are seeing appears to be lock escalation. Try serializing access to the table using sp_getapplock. You can also try lock hints.
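A minimal sketch of the sp_getapplock approach - each writer takes the same application lock inside its transaction, so access to the table is serialized (the resource name 'MyTable_Writes' is arbitrary):
BEGIN TRAN
-- Take an exclusive, transaction-scoped application lock; waits up to 10s
DECLARE @rc INT
EXEC @rc = sp_getapplock
      @Resource    = 'MyTable_Writes'  -- agreed-upon resource name
    , @LockMode    = 'Exclusive'
    , @LockOwner   = 'Transaction'
    , @LockTimeout = 10000;

IF @rc >= 0
BEGIN
    -- ... the batched UPDATE (or any competing write) goes here ...
    COMMIT  -- the app lock is released automatically with the transaction
END
ELSE
    ROLLBACK  -- could not acquire the lock in time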

Giving a database level trigger full access to a different database table [T-SQL]

I'm trying to have a trigger set up in an arbitrary database that will store information in a specific database on execution. However, I found that if the trigger is triggered by someone without explicit access to that database, the trigger execution will fail.
I was hoping to find a way around this using:
CREATE TRIGGER [myTrigger]
on database
with execute as 'UserWithPermissions'
but that doesn't seem to work either... Does anyone know if this is possible? Any pointers would be greatly appreciated. Thanks.
In SQL Server Management Studio, go to DB-name => Security => Users and make sure the user has access. Also, Windows network authentication helps a lot when running procs from other machines.
-- edit --
I think what you want is called 'EXECUTE AS'.
CREATE PROCEDURE dbo.usp_Demo
WITH EXECUTE AS 'SqlUser1'
AS
SELECT user_name(); -- Shows execution context is set to SqlUser1.
EXECUTE AS CALLER;
SELECT user_name(); -- Shows execution context is set to SqlUser2, the caller of the module.
REVERT;
SELECT user_name(); -- Shows execution context is set to SqlUser1.
GO
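Note that for a database-level trigger writing into a different database, EXECUTE AS alone is often not enough: the impersonated context is only trusted inside the current database unless the source database is marked TRUSTWORTHY (or the module is signed with a certificate). A hedged sketch, where the database and object names are assumptions for illustration:
-- Caution: TRUSTWORTHY has security implications; certificate signing is the
-- stricter alternative. SourceDB, AuditDB, and DDL_Log are assumed names.
ALTER DATABASE [SourceDB] SET TRUSTWORTHY ON;
GO
USE [SourceDB];
GO
CREATE TRIGGER [myTrigger]
ON DATABASE
WITH EXECUTE AS 'UserWithPermissions'
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    -- Write the DDL event into the central auditing database
    INSERT INTO AuditDB.dbo.DDL_Log (Post_Time, Event_Data)
    VALUES (GETDATE(), EVENTDATA());
END
GO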

Normal table or global temp table?

Another developer and I are discussing which type of table would be more appropriate for our task. It's basically going to be a cache that we're going to truncate at the end of the day. Personally, I don't see any reason to use anything other than a normal table for this, but he wants to use a global temp table.
Are there any advantages to one or the other?
Use a normal table in tempdb if this is just transient data that you can afford to lose on service restart, or in a user database if the data is not that transient.
tempdb is slightly more efficient in terms of logging requirements.
Global temp tables get dropped once the connection that created the table is closed and no other connections are still referencing it.
Edit: Following @cyberkiwi's edit. BOL does indeed explicitly say:
Global temporary tables are visible to any user and any connection after they are created, and are deleted when all users that are referencing the table disconnect from the instance of SQL Server.
In my test, though, I wasn't able to get this behaviour either.
Connection 1
CREATE TABLE ##T (i int)
INSERT INTO ##T values (1)
SET CONTEXT_INFO 0x01
Connection 2
INSERT INTO ##T VALUES(4)
WAITFOR DELAY '00:01'
INSERT INTO ##T VALUES(5)
Connection 3
SELECT OBJECT_ID('tempdb..##T')
declare @killspid varchar(10) = (select 'kill ' + cast(spid as varchar(5)) from sysprocesses where context_info=0x01)
exec (@killspid)
SELECT OBJECT_ID('tempdb..##T') /* NULL - but connection 2 is still running, let alone disconnected! */
Global temp table
-ve: As soon as the connection that created the table goes out of scope, it takes the table with it. This is damaging if you use connection pooling, which can swap connections constantly and possibly reset it
-ve: You need to keep checking to see if the table already exists (after restart) and create it if not (see the sketch after this list)
+ve: Simple logging in tempdb reduces I/O and CPU activity
Normal table
+ve: Normal logging keeps your cache with your main db. If your "cache" is maintained but is still mission critical, this keeps it consistent together with the db
-ve: follows from the above - more logging
+ve: The table is always around, and for all connections
If the cache is something like a quick lookup summary for business/critical data, even if it is reset/truncated at the end of the day, I would prefer to keep it as a normal table in the db proper.
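As a sketch of working around that restart "-ve" for the tempdb option: since tempdb is rebuilt on every service restart, the table can be recreated by a startup procedure in master, marked via sp_procoption (all object names here are illustrative):
-- Illustrative sketch: recreate the cache table in tempdb at every startup
USE master;
GO
CREATE PROCEDURE dbo.CreateCacheTable
AS
BEGIN
    CREATE TABLE tempdb.dbo.Cache
    (
        CacheKey   varchar(100) NOT NULL PRIMARY KEY,
        CacheValue varchar(max) NULL
    );
END
GO
-- Run the procedure automatically when SQL Server starts
EXEC sp_procoption @ProcName = 'dbo.CreateCacheTable'
                 , @OptionName = 'startup'
                 , @OptionValue = 'on';
GO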

Help with sp_msforeachdb -like queries

Where I'm at, we have a software package running on a mainframe system. The mainframe makes a nightly dump into SQL Server, such that each of our clients has its own database in the server. There are a few other databases in the server instance as well, plus some older client DBs with no data.
We often need to run reports or check data across all clients. I would like to be able to run queries using sp_msforeachdb or something similar, but I'm not sure how I can go about filtering unwanted dbs from the list. Any thoughts on how this could work?
We're still on SQL Server 2000, but should be moving to 2005 in a few months.
Update:
I think I did a poor job asking this question, so I'm gonna clarify my goals and then post the solution I ended up using.
What I want to accomplish here is to make it easy for programmers working on queries for use in their programs to write a query against one client database, and then almost instantly run (test) that code on all 50 or so client DBs with little to no modification.
With that in mind, here's my code as it currently sits in Management Studio (partially obfuscated):
use [master]
declare @sql varchar(3900)
set @sql = 'complicated sql command added here'
-----------------------------------
declare @cmd1 varchar(100)
declare @cmd2 varchar(4000)
declare @cmd3 varchar(100)
set @cmd1 = 'if ''?'' like ''commonprefix_%'' raiserror (''Starting ?'', 0, 1) with nowait'
set @cmd3 = 'if ''?'' like ''commonprefix_%'' print ''Finished ?'''
set @cmd2 =
    replace('if ''?'' like ''commonprefix_%''
begin
    use [?]
    {0}
end', '{0}', @sql)
exec sp_msforeachdb @command1 = @cmd1, @command2 = @cmd2, @command3 = @cmd3
The nice thing about this is that all you have to do is set the @sql variable to your query text. It's very easy to turn into a stored procedure. It's dynamic SQL, but again: it's only used for development (famous last words ;) ). The downside is that you still need to escape single quotes used in the query, and much of the time you'll end up putting an extra ''?'' AS ClientDB column in the select list, but otherwise it works well enough.
Unless I get another really good idea today, I want to turn this into a stored procedure, and also put together a version as a table-valued function using a temp table to gather all the results into one resultset (for SELECT queries only).
Just wrap the statement you want to execute in an IF NOT IN:
EXEC sp_msforeachdb "
IF '?' NOT IN ('DBs','to','exclude') BEGIN
EXEC sp_whatever_you_want_to
END
"
Each of our database servers contains a "DBA" database that contains tables full of meta-data like this.
A "databases" table would keep a list of all databases on the server, and you could put flag columns to indicate database status (live, archive, system, etc).
Then the first thing your SCRIPT does is to go to your DBA database to get the list of all databases it should be running against.
We even have a nightly maintenance script that makes sure all databases physically on the server are also entered into our "DBA.databases" table, and alerts us if they are not. (Because adding a row to this table should be a manual process)
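A sketch of the driver script that approach implies, assuming a hypothetical DBA.dbo.databases table with name and is_live columns:
-- Hypothetical sketch: iterate only the databases flagged as live client DBs
DECLARE @db sysname, @cmd nvarchar(max)

DECLARE db_cursor CURSOR FOR
    SELECT name FROM DBA.dbo.databases WHERE is_live = 1  -- assumed schema

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @db
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = N'USE ' + QUOTENAME(@db) + N'; ' + N'/* your query here */'
    EXEC (@cmd)
    FETCH NEXT FROM db_cursor INTO @db
END
CLOSE db_cursor
DEALLOCATE db_cursor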
How about taking the definition of sp_msforeachdb, and tweaking it to fit your purpose? To get the definition you can run this (hit ctrl-T first to put the results pane into Text mode):
sp_helptext sp_msforeachdb
Obviously you would want to create your own version of this sproc rather than overwriting the original ;o)
Doing this type of thing is pretty simple in 2005 SSIS packages. Perhaps you could get an instance set up on a server somewhere.
We have multiple servers set up, so we have a table that denotes what servers will be surveyed. We then pull back, among other things, a list of all databases. This is used for backup scripts.
You could maintain this list of databases and add a few fields for your own purposes. You could have another package or step, depending on how you decide which databases to report on and if it could be done programmatically.
You can get code here for free: http://www.sqlmag.com/Articles/ArticleID/97840/97840.html?Ad=1
We based our system on this code.
