I have been told that SQL Profiler makes changes to the MSDB when it is run. Is this true and if so what changes does it make?
MORE INFO
The reason I ask is that we have a DBA who wants us to raise a change request whenever we run Profiler on a live server. Her argument is that it makes changes to the databases, which should therefore be change controlled.
Starting a trace adds a row to msdb.sys.traces, and stopping the trace removes the row. However, msdb.sys.traces is a view over an internal table-valued function and is not backed by any physical storage. To prove this, set msdb to read_only, start a trace, observe the new row in msdb.sys.traces, stop the trace, and remember to set msdb back to read_write. Since a trace can be started from Profiler even while msdb is read-only, it is clear that no write into msdb normally occurs.
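A minimal script for that test, assuming you run it as sysadmin and nothing else is holding connections into msdb at the time:

-- Make msdb read-only (may fail if other sessions are using it).
ALTER DATABASE msdb SET READ_ONLY;

-- Start a trace in Profiler, then observe the new row:
SELECT id, status, path, start_time FROM msdb.sys.traces;

-- Stop the trace, then put msdb back the way it was:
ALTER DATABASE msdb SET READ_WRITE;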
Now, before you go and gloat to your DBA, she is actually right. Profiler traces can put significant stress on a live system because the traced events must block until they can generate the trace record. Live, busy systems may experience blocking on resources of type SQLTRACE_BUFFER_FLUSH, SQLTRACE_LOCK, TRACEWRITE and others. Live traces (Profiler) are usually worse; file traces (sp_trace_create) are better, but can still cause issues. So starting new traces should definitely be something the DBA is informed about, and it should be considered very carefully.
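If you want a rough check of whether trace-related waits are actually accumulating on an instance (SQL 2005 and later), something like this against the wait-stats DMV will show it; note that SQLTRACE_BUFFER_FLUSH also accumulates from the default trace's background flush, so that one on its own is not necessarily a problem:

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('SQLTRACE_BUFFER_FLUSH', 'SQLTRACE_LOCK', 'TRACEWRITE')
ORDER BY wait_time_ms DESC;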
The only changes I know of happen when you schedule a trace to gather periodic information - a job is added.
That's not the case as far as I'm aware (other than the trivial change noted by others).
What changes are you referring to?
Nothing I have ever read, heard, or seen says that SQL Profiler or anything it does or uses has any impact on the MSDB database. (SQL Profiler is, essentially, a GUI wrapped around the trace routines.) It is of course possible to configure a specific setup/implementation to do, well, anything, and perhaps that's what someone is thinking of.
This sounds like a kind of "urban legend". I recommend that you challenge it -- get the people who claim it to be true to provide proof.
It is very easy to make mistakes with UPDATE and DELETE statements in SQL Server Management Studio. You can easily delete far more than you intended if you make a mistake in the WHERE condition or, even worse, wipe out the whole table if you mistakenly write an expression that always evaluates to TRUE.
Is there any way to disallow queries that affect a large number of rows from within SQL Server Management Studio? I know there is a feature like that in MySQL Workbench, but I couldn't find one in SQL Server Management Studio.
No.
It is your responsibility to ensure that:
Your data is properly backed up, so you can restore it after making inadvertent changes.
You are not writing a new query from scratch and executing it directly on a production database without testing it first.
You execute your query in a transaction, and review the changes before committing the transaction.
You know how to properly filter your query to avoid issuing a DELETE/UPDATE statement against your entire table. If in doubt, always issue a SELECT * or a SELECT COUNT(*) statement first to see which records will be affected (see the sketch after this list).
You don't rely on some silly feature in the front-end that might save you at times, but that will completely screw you over at other times.
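As a rough illustration of the "count first" habit mentioned above (the table name and predicate are made up):

-- Verify the filter before running the destructive statement.
SELECT COUNT(*) FROM dbo.Customers WHERE LastLogin < '2010-01-01';

-- Only once the count matches expectations:
DELETE FROM dbo.Customers WHERE LastLogin < '2010-01-01';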
A lot of good points have already been made. Just one tiny addition: I have created a solution to prevent accidental execution of a DELETE or UPDATE without any WHERE condition at all. It is implemented as the "Fatal actions guard" in my add-in, SSMSBoost.
(My comments were getting rather unwieldy)
One simple option, if you are uncertain, is to BEGIN TRAN, do the update, and if the rows-affected count is significantly different from what you expected, ROLLBACK; otherwise, do a few checks, e.g. SELECTs to ensure just the intended data was updated, and then COMMIT. The caveat is that this will lock rows until you commit or roll back, and a large update may lead to lock escalation to a table lock, so you will need to have the checking scripts planned in advance.
That said, in any half-serious system, no one, not even senior DBA's, should really be executing direct ad-hoc DML statements on a prod DB (and arguably the formal UAT DB too) - this is what tested applications are meant for (or tested, verified patch scripts executed only after change control processes are considered).
In less formal dev environments, does it really matter if things get broken? In fact, if you are an advocate of Chaos Monkey, having juniors break your data might be a good thing in the long run: it will ensure that your processes for scripting, migration, static data deployment and integrity checking are all in good order.
My suggestion is to disable autocommit, so that you can review your changes before committing them, and make sure to commit before ending the session.
For more details, see the MSDN link:
http://msdn.microsoft.com/en-us/library/ms187807.aspx
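Assuming the link above refers to SET IMPLICIT_TRANSACTIONS, here is a rough sketch of how the session behaves with it turned on; the table and filter are placeholders:

SET IMPLICIT_TRANSACTIONS ON;

-- This DML now opens a transaction that stays open until you end it explicitly.
DELETE FROM dbo.Orders WHERE OrderDate < '2000-01-01';

-- Review the effect, then either:
-- COMMIT;    -- keep the change
-- ROLLBACK;  -- undo it

If I remember right, SSMS also exposes this as a connection option under Tools > Options > Query Execution, so you don't have to set it per session.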
I've created a software that is supposed to synchronize data between two databases in SQL Server. The program is tested as much as I was able to do so while having a limited amount of data and limited time. Now I need to make it run and I want to play that safe.
What would be the best approach to be able to recover if something goes wrong and database gets corrupted? (meaning not usable by the original program)
I know I can back up both databases each time I perform the sync. I also know that I could do point-in-time recovery.
Are there any other options? Is it possible to rollback only the changes made by the sync service? (both databases are going to be used by other software)
You probably have already, but I suggest investigating the backup and recovery options available in SQL Server. Since you have no spec, you don't know how the system is going to behave with these changes, which leaves you with a higher likelihood of problems. For this reason (and many other obvious ones) I would want a solid SQL backup/recovery process in place. Unfortunately, Express isn't very good at automating this, but you can run the backups manually before the sync.
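A manual pre-sync backup is just one statement per database; something like the following (names and paths are placeholders). This works on Express too, since Express only lacks the Agent to schedule it:

BACKUP DATABASE SourceDb TO DISK = N'D:\Backups\SourceDb_presync.bak' WITH INIT;
BACKUP DATABASE TargetDb TO DISK = N'D:\Backups\TargetDb_presync.bak' WITH INIT;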
At the very least, make everything transactional; a failure in your program should not leave the databases in a partially sync'd state.
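As a sketch of what "transactional" means here, assuming the sync applies its changes in T-SQL batches and both databases sit on the same instance (all object names are invented); if they are on different servers, the same idea applies via a transaction in the program itself:

BEGIN TRY
    BEGIN TRANSACTION;

    -- apply one sync batch: update changed rows, insert missing ones
    UPDATE t SET t.Value = s.Value
    FROM TargetDb.dbo.Items AS t
    JOIN SourceDb.dbo.Items AS s ON s.Id = t.Id
    WHERE s.Value <> t.Value;

    INSERT INTO TargetDb.dbo.Items (Id, Value)
    SELECT s.Id, s.Value
    FROM SourceDb.dbo.Items AS s
    WHERE NOT EXISTS (SELECT 1 FROM TargetDb.dbo.Items AS t WHERE t.Id = s.Id);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

    DECLARE @msg nvarchar(2048);
    SET @msg = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);   -- re-raise so the calling program sees the failure
END CATCH;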
Too bad you don't have a full version of SQL Server... then you might be able to use something like replication services and eliminate this program altogether? ;)
When running a stored procedure, we're getting the error 297
"The user does not have permission to perform this action"
This occurs during times of heavy load (regularly, when a trim job is running concurrently). The error clears up when the service accessing SQL Server is restarted (and very likely the trim job has finished as well), so it's obviously not a real permissions problem. The error is reported on a line of a stored procedure which calls a function, which in turn accesses dynamic management views.
What kind of situations could cause an error like this, when it's not really a permissions problem?
Might turning on trace flag 4616 fix this, as per this article? I'd like to be able to just try it, but I need more information. Also, I'm baffled by the fact that this is an intermittent problem, only happening under periods of high activity.
I was trying to reproduce this same error in other situations (that were also not real permissions problems), and I found that when running this on SQL Server 2005 I do get the permissions problem:
select * from sys.dm_db_index_physical_stats(66,null,null, null, null)
(66 is an invalid DBID.)
However, we're not using dm_db_index_physical_stats with an incorrect DBID. We ARE using dm_tran_session_transactions and dm_tran_active_transactions, but they don't accept parameters, so I can't get the error to happen with them. Still, I was thinking that perhaps the issues are linked.
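For reference, a typical way those two DMVs get combined; this is a generic example, not our actual function, but both views require VIEW SERVER STATE, which is where error 297 would surface:

SELECT st.session_id,
       at.transaction_id,
       at.name,
       at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS at
    ON at.transaction_id = st.transaction_id
WHERE st.session_id = @@SPID;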
Thanks for any insights.
Would it be related to concurrency issues?
For example, the same data being processed, or a global temp table being accessed? If so, you may consider sp_getapplock (rough sketch at the end of this answer).
And does each connection use different credentials with a different set of permissions? Do all users have GRANT VIEW SERVER STATE TO xxx?
Finally, and related to both ideas above, do you use EXECUTE AS anywhere that may not be reverted etc?
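On the sp_getapplock idea, here is a rough sketch; the resource name and timeout are made up:

BEGIN TRANSACTION;

DECLARE @rc int;
EXEC @rc = sp_getapplock @Resource = 'TrimJobVsProc',
                         @LockMode = 'Exclusive',
                         @LockOwner = 'Transaction',
                         @LockTimeout = 10000;      -- milliseconds
IF @rc < 0
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('Could not acquire the application lock.', 16, 1);
END
ELSE
BEGIN
    -- ... the work that must not run concurrently with the trim job ...
    COMMIT TRANSACTION;    -- committing releases the transaction-owned lock
END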
Completely random idea: I've seen this before, but only when I've omitted a GO between the end of the stored proc definition and the following GRANT statement, so the SP tried to set its own permissions. Is it possible that a timeout or concurrency issue causes some code to run that wouldn't normally?
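For what it's worth, the pattern I mean looks like this (object names invented); without the GO, the GRANT compiles into the procedure body and runs on every execution:

CREATE PROCEDURE dbo.usp_Example
AS
    SELECT 1;
GO   -- batch separator: without it, the GRANT below becomes part of the proc

GRANT EXECUTE ON dbo.usp_Example TO SomeRole;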
If this occurs only during periods of heavy activity maybe you can run Profiler and watch for what locks are being held.
Also, is this always run the same way? For example, is it run as a SQL Agent job, or do you sometimes run it manually and sometimes as a job? My thinking is that it may be running as different users at different times.
Maybe also take a look at this Blog Post
Thanks everyone for your input. What I did (which looks like it has fixed the problem for now) was to alter the daily trim job. It now waits substantially longer between deletes, and also deletes a much smaller chunk of records at a time.
I'll update this later on with more info as I get it.
Thanks again.
I'm currently experiencing some problems on my DotNetNuke SQL Server 2005 Express site on a Win2k8 server. It runs smoothly most of the time. However, occasionally (on the order of once or twice an hour) it runs very slowly indeed; from a user perspective it's almost as if there's a deadlock of some description when this occurs.
To try to work out what the problem is I've run SQL Profiler against the SQL Express database.
Looking at the results, some specific questions I have are:
The SQL trace shows an Audit Logon and Audit Logoff for every RPC:Completed - does this mean Connection Pooling isn't working?
When I look in Performance Monitor at ".NET CLR Data", then none of the "SQL client" counters have any instances - is this just a SQL Express lack-of-functionality problem or does it suggest I have something misconfigured?
The queries running when the slowness occurs don't seem unusual so far; they run fast at other times. What other Perfmon counters, traces, or log files can you suggest as useful tools for further investigation?
Jumping straight to Profiler is probably the wrong first step. First, try checking the Perfmon stats on the server. I've got a tutorial online here:
http://www.brentozar.com/perfmon
Start capturing those metrics, and then after it's experienced one of those slowdowns, stop the collection. Look at the performance metrics around that time, and the bottleneck will show up. If you want to send me the csv output from Perfmon at brento#brentozar.com I can give you some insight as to what's going on.
You might still need to run Profiler afterwards, but I'd rule out the OS and hardware first. Also, just a thought - have you checked the server's System and Application event logs to make sure nothing's happening during those times? I've seen instances where, say, the antivirus client downloads new patches too often, and does a light scan after each update.
My spidey sense tells me that you may have SQL Server blocking issues. Read this article to help you monitor blocking on your server and check whether it's the cause.
If you think the issue may be performance related and want to see where your hardware bottleneck is, gather some CPU, disk and memory stats using Perfmon and then correlate them with your Profiler trace to see if the slow response is related.
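A quick way to spot live blocking from T-SQL (SQL 2005 and later) while the slowdown is happening, as a rough check:

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;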
No.
Nothing wrong with that... it shows that you're not using the .NET functionality embedded in SQL Server.
You can check http://www.xsqlsoftware.com/Product/xSQL_Profiler.aspx for more detailed analysis of the profiler trace. It has reports that show the top queries by time or CPU (not a single execution, but the sum of all executions of a query).
Some other things to check:
Make sure your data files and log files are not auto-extending.
Make sure your anti-virus is set to ignore your SQL data and log files.
When looking at the Profiler output, be sure to check the queries that finished just prior to your targets; they could have been blocking.
Make sure you've turned off auto-close on the database; re-opening after closing takes some time.
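The auto-grow and auto-close items can be checked from T-SQL; a quick sketch (replace YourDb with the DNN database name):

-- Databases with auto-close enabled (consider turning it off):
SELECT name FROM sys.databases WHERE is_auto_close_on = 1;
-- ALTER DATABASE YourDb SET AUTO_CLOSE OFF;

-- File growth settings: growth = 0 means no auto-grow,
-- is_percent_growth tells you whether growth is a percentage or a fixed amount.
SELECT DB_NAME(database_id) AS database_name, name, size, growth, is_percent_growth
FROM sys.master_files;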
We have a SQL 2000 server that has widely varied jobs that run at different times of day, or even different days of the month. Normally, we only use the SQL profiler to run traces for very short periods of time for performance troubleshooting, but in this case, that really wouldn't give me a good overall picture of the kinds of queries that are run against the database over the course of a day or week or month.
How can I minimize the performance overhead of a long-running SQL trace? I already know to:
Execute the trace server-side (sp_trace_create), instead of using the SQL Profiler UI.
Trace to a file, and not to a database table (which would add extra overhead to the DB server).
My question is really about filters. If I add a filter to only log queries that exceed a certain duration or number of reads, the trace still has to examine all activity on the server to decide whether to log it, right? So even with that filter, is the trace going to create an unacceptable level of overhead for a server that is already on the edge of unacceptable performance?
Adding filters does minimize the overhead of event collection and also prevents the server from logging trace records you don't need.
As for whether the trace is going to create an unacceptable level of overhead, you'll just have to test it and stop it if there are additional complaints. Feeding that production trace file to the Database Engine Tuning Advisor could improve performance for everyone tomorrow, though.
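For reference, a minimal server-side trace with a duration filter might look like the following; the file path, size, and 5000 ms threshold are placeholders, and note that on SQL 2005 and later Duration is reported in microseconds and the filter value must be passed as a bigint:

DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 100;   -- MB per rollover file
SET @on = 1;

-- 2 = TRACE_FILE_ROLLOVER
EXEC sp_trace_create @TraceID OUTPUT, 2, N'E:\Traces\LongQueries', @maxfilesize, NULL;

-- Capture SQL:BatchCompleted (event 12): TextData (1), SPID (12), Duration (13)
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

-- Only log batches with Duration >= 5000 ms
-- (column 13 = Duration, logical operator 0 = AND, comparison operator 4 = >=)
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, 5000;

-- Start the trace; use status 0 to stop it and 2 to close and delete the definition.
EXEC sp_trace_setstatus @TraceID, 1;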
You actually should not have the server process the trace, as that can cause problems: "When the server processes the trace, no events are dropped - even if it means sacrificing server performance to capture all the events. Whereas if Profiler is processing the trace, it will skip events if the server gets too busy." (From the SQL 70-431 exam book's best practices.)
I found an article that actually measures the performance impact of a SQL profiler session vs a server-side trace:
http://sqlblog.com/blogs/linchi_shea/archive/2007/08/01/trace-profiler-test.aspx
This really was my underlying question: how to make sure I don't bog down my production server during a trace. It appears that if you do it correctly, there is minimal overhead.
It's actually possible to collect more detailed measurements than you can get from Profiler, and to do it 24x7 across an entire instance, without incurring any overhead. This avoids the need to figure out ahead of time what you need to filter, which can be tricky.
Full disclosure: I work for one of the vendors who provide such tools, but whether you use ours or someone else's, this may get you around the core issue here.
More info on our tool here http://bit.ly/aZKerz