SQL Server 2014 problem - locks on the SQL Server database - sql-server

We upgraded from SQL Server 2008 to SQL Server 2014. The upgrade itself was successful.
However, we have had optimization problems since. Some queries have started to create locks and block each other; often the blocking clears on its own for a while, but the database still grinds to a halt.
Our workaround is to change MAXDOP. I do not know what the change actually frees up, but afterwards everything runs the way it did before the jam. I have no idea what else to do about it.
Our SQL Server configuration
We have already changed the cost threshold and MAXDOP parameters, which doesn't help much. I have also optimized the queries that cause the blocking.
The problem keeps coming back. Oddly enough, changing MAXDOP clears the blockage every time: the system frees up completely and the queued SQL statements drain and execute.
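For reference, both settings mentioned here live at the instance level and are changed with sp_configure; the values below (4 and 50) are placeholders for illustration, not recommendations.

```sql
-- Values are illustrative only; pick MAXDOP based on your core/NUMA layout
-- and the cost threshold based on your workload.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;

-- Verify the values actually in use
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism', 'cost threshold for parallelism');
```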

The performance issue can arise due to a lot of reasons; an improper MAXDOP setting is just one of them.
Run a health check with sp_Blitz
Run sp_Blitz (https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit#sp_blitz-overall-health-check) to see what is actually causing your performance bottleneck.
Check the findings with priorities from 1 to 50; those are the most crucial.
Start fixing them one by one.
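A minimal way to run it, assuming the First Responder Kit is already installed in the database you call it from (the @IgnorePrioritiesAbove parameter is available in recent versions of the kit; check the documentation for the build you install):

```sql
-- Plain run; the output includes a Priority column, so work through
-- Priority 1-50 findings first.
EXEC dbo.sp_Blitz;

-- Hide lower-priority noise while triaging (parameter availability depends
-- on the kit version you have installed).
EXEC dbo.sp_Blitz @IgnorePrioritiesAbove = 50;
```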

Related

Queries slow when run by specific Windows account

Running SQL Server 2014 Express on our domain. We use Windows Authentication to log on. All queries are performed in stored procedures.
Now, the system runs fine for all our users - except one. When he logs on (using our software), all queries take around 10 times longer (e.g. 30 ms instead of 2 ms). The queries are identical, the database is the same, the network speed is the same, the operating system is the same, the SQL Server drivers are the same, connection pooling is the same, DNS is the same. Changing computers does not help. The problem seems to be linked to the account being used.
What on Earth may be the cause for this huge performance hit?
Please advise!
I would try rebuilding the SP (by running an ALTER statement that duplicates its existing structure) to force SQL Server to recompile. I don't know every way SQL Server caches things but it can definitely create distinct execution plans for different types of connections so I wouldn't be surprised if your slow user is running a version with an inefficient execution plan.
http://www.sommarskog.se/query-plan-mysteries.html
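A lighter-weight way to get the same effect as re-running the ALTER is sp_recompile, which marks the procedure so a fresh plan is built on its next execution (dbo.MyProc is a hypothetical name):

```sql
-- Mark the procedure for recompilation on next execution.
EXEC sp_recompile N'dbo.MyProc';

-- Or, while investigating, force a new plan on every call:
-- ALTER PROCEDURE dbo.MyProc ... WITH RECOMPILE AS ...
```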

SQL Server Stored Procedure not executing only on one machine

We are running SQL Server 2012, and all the developers can execute a specific stored procedure (which is overly complex); it takes a varying amount of time depending on the machine (anywhere up to 20 seconds).
Right now we are hosting the SQL Server instances locally and passing around one backup of the database to work from (we don't want a single shared instance for dev work).
On a particular machine, it will not finish executing at all. They are all identical machines, and the settings appear to be the same.
Has anyone experienced this before? What are some things that we can try on this specific SQL Server instance to get it working?
We tried restarting the machine and the services, running DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, and inspecting table locks, with no luck.
Thanks!
The solution we found that finally fixed the problem was to rebuild all the indexes. They had become fragmented and so slow that the Stored Procedures were timing out.
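For anyone hitting the same thing, a sketch of that fix and of how to spot the fragmentation first (dbo.YourTable is a placeholder):

```sql
-- Rebuild every index on one table; this also refreshes its statistics
-- with a full scan.
ALTER INDEX ALL ON dbo.YourTable REBUILD;

-- Find the most fragmented indexes in the current database first.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
```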

SQL Server Performance and Update Statistics

We have a site in development that when we deployed it to the client's production server, we started getting query timeouts after a couple of hours.
This was with a single user testing it, and on our own server (which is identical in terms of SQL Server version - 2005 SP3) we have never had the same problem.
One of our senior developers had come across similar behaviour in a previous job, and he ran a query to manually update the statistics; the problem magically went away - the query returned in a few milliseconds.
A couple of hours later, the same problem occurred. So we again manually updated the statistics and, again, the problem went away. We've checked the database properties and, sure enough, auto update statistics is TRUE.
As a temporary measure, we've set a task to update stats periodically, but clearly, this isn't a good solution.
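For reference, the periodic refresh described above usually boils down to something like this (dbo.YourTable is a placeholder):

```sql
-- Resample statistics across the database (only touches stats it considers stale).
EXEC sp_updatestats;

-- Or refresh a single suspect table with a full scan while investigating.
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;
```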
The developer who experienced this problem before is certain it's an environment problem - when it occurred for him previously, it went away of its own accord after a few days.
We have examined the SQL Server installation on their DB server and it's not what I would regard as normal. Although they have SQL 2005 installed (and not 2008), there's an empty "100" folder in the installation directory. There are also MSSQL.1, MSSQL.2, MSSQL.3 and MSSQL.4 folders (the last of which is where the executables and data are actually stored).
If anybody has any ideas we'd be very grateful - I'm of the opinion that rather than the statistics failing to update, they are somehow becoming corrupt.
Many thanks
Tony
Disagreeing with Remus...
Parameter sniffing allows SQL Server to guess the optimal plan for a wide range of input values. Sometimes it's wrong, and the plan is bad because of an atypical value or a poorly chosen default.
I used to be able to demonstrate this on demand by changing a default between 0 and NULL: plan and performance changed dramatically.
A statistics update will invalidate the plan, so the query will be recompiled and cached when next used.
The workarounds are one of the following (see the sketch after this list):
parameter masking
use the OPTIMIZE FOR UNKNOWN hint
duplicate "default"
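A minimal sketch of the first two workarounds; the procedure, table and column names are hypothetical:

```sql
CREATE PROCEDURE dbo.SearchOrders
    @CustomerId INT = NULL
AS
BEGIN
    -- Parameter masking: copy the parameter into a local variable so the
    -- optimizer cannot sniff the caller's specific value.
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE (@LocalCustomerId IS NULL OR CustomerId = @LocalCustomerId)
    -- Alternatively, keep the original parameter and ask the optimizer for
    -- an "average" plan instead of one sniffed from the first value it sees:
    OPTION (OPTIMIZE FOR UNKNOWN);
END;
```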
See these SO questions
Why does the SqlServer optimizer get so confused with parameters?
At some point in your career with SQL Server does parameter sniffing just jump out and attack?
SQL poor stored procedure execution plan performance - parameter sniffing
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
...and Google search on SO
Now, Remus works for the SQL Server development team. However, this phenomenon is well documented by Microsoft on their own website, so blaming developers is unfair:
How Data Access Code Affects Database Performance (MSDN mag)
Suboptimal index usage within stored procedure (MS Connect)
Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005 (an excellent white paper)
It's not that the statistics are outdated. What happens is that when you update statistics, all plans get invalidated and the bad cached plan gets evicted. Things run smoothly until a bad plan gets cached again and causes slow execution.
The real question is why you get bad plans to start with. We can get into lengthy technical and philosophical arguments about whether a query processor should create a bad plan in the first place, but the thing is that, when applications are written in a certain way, bad plans can happen. The typical example is a WHERE clause like (@somevariable IS NULL) OR (somefield = @somevariable). Ultimately, 99% of bad plans can be traced to developers writing queries with C-style procedural expectations instead of sound, set-based, relational processing.
What you need to do now is identify the bad queries. It's really easy: just check sys.dm_exec_query_stats; the bad queries will stand out in terms of total_elapsed_time and total_logical_reads. Once you have identified the bad plan, you can take corrective measures, which vary from query to query.
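The check described here is a single query against the plan-cache DMVs; something along these lines surfaces the worst offenders:

```sql
-- Top cached statements by elapsed time, with text and plan for inspection.
SELECT TOP (20)
       qs.total_elapsed_time,
       qs.total_logical_reads,
       qs.execution_count,
       st.text       AS query_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_elapsed_time DESC;
```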

SQL Server 2008 Maintenance Plans Failing because Database is in use

I've created a couple of maintenance plans on our SQL Server 2008 databases that perform backups (full and differential) overnight, but they keep failing with a message saying the database is currently in use.
We typically have little to no traffic during the times the maintenance plans are scheduled to run so I'm not sure why I'm getting this error. Is there a command I can add to the maintenance plan or a configuration change I can make to the plan(s) to allow the plans to execute?
Thanks.
I can speak for 2000/2005 and say that backups should not be affected by a database being in use. Are there other steps in the maintenance plan that could be causing this? Have you set up the reporting/logging options of the maintenance plan to write to a log file? That might give a little more info.
Seems that one of my databases was getting "stuck" in a restore state. Removing that database from the plan seemed to take care of it. Now on to figuring out what was causing that database to get "stuck"...
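If anyone needs to check for the same thing, a sketch (YourDb is a placeholder; only run the RESTORE if no further log restores are expected):

```sql
-- Find databases that are not ONLINE (e.g. stuck in RESTORING).
SELECT name, state_desc
FROM sys.databases
WHERE state_desc <> 'ONLINE';

-- Bring a database left in RESTORING (e.g. after WITH NORECOVERY) back online.
-- RESTORE DATABASE YourDb WITH RECOVERY;
```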
If there is little to no traffic during the window when the maintenance plans run, you could put the database into read-only mode or take it offline; that pauses all transactions in the meantime so you can take your backup. Either approach works the same way.
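If you do go the read-only route suggested here, it looks roughly like this (YourDb and the backup path are placeholders):

```sql
ALTER DATABASE YourDb SET READ_ONLY WITH ROLLBACK IMMEDIATE;

BACKUP DATABASE YourDb TO DISK = N'D:\Backups\YourDb.bak';

ALTER DATABASE YourDb SET READ_WRITE WITH ROLLBACK IMMEDIATE;
```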

Understanding SQL Profiler trace

I'm currently experiencing some problems on my DotNetNuke SQL Server 2005 Express site on a Win2k8 Server. It runs smoothly most of the time. However, occasionally (on the order of once or twice an hour) it runs very slowly indeed - from a user perspective it's almost like there's a deadlock of some description when this occurs.
To try to work out what the problem is I've run SQL Profiler against the SQL Express database.
Looking at the results, some specific questions I have are:
The SQL trace shows an Audit Logon and Audit Logoff for every RPC:Completed - does this mean Connection Pooling isn't working?
When I look in Performance Monitor at ".NET CLR Data", then none of the "SQL client" counters have any instances - is this just a SQL Express lack-of-functionality problem or does it suggest I have something misconfigured?
The queries running when the slowness occurs don't seem unusual - they run fast at other times. What other perfmon counters or other trace/log files can you suggest as useful tools for my further investigation?
Jumping straight to Profiler is probably the wrong first step. First, try checking the Perfmon stats on the server. I've got a tutorial online here:
http://www.brentozar.com/perfmon
Start capturing those metrics, and then after it's experienced one of those slowdowns, stop the collection. Look at the performance metrics around that time, and the bottleneck will show up. If you want to send me the csv output from Perfmon at brento@brentozar.com I can give you some insight as to what's going on.
You might still need to run Profiler afterwards, but I'd rule out the OS and hardware first. Also, just a thought - have you checked the server's System and Application event logs to make sure nothing's happening during those times? I've seen instances where, say, the antivirus client downloads new patches too often, and does a light scan after each update.
My spidey sense tells me that you may have SQL Server blocking issues. Read this article to help you monitor blocking on your server and check whether it's the cause.
If you think the issues may be performance related and want to see what your hardware bottleneck is, gather some CPU, disk and memory stats using perfmon and then correlate them with your profiler trace to see whether the slow response is related.
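A quick way to see blocking while the slowdown is happening is the request DMVs; roughly:

```sql
-- Sessions currently blocked, who is blocking them, and what they are running.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```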
No.
Nothing wrong with that... it shows that you're not using the .NET functionality embedded in SQL Server.
You can check http://www.xsqlsoftware.com/Product/xSQL_Profiler.aspx for more detailed analysis of profiler traces. It has reports that show the top queries by time or CPU (not a single execution, but the sum of all executions of a query).
Some other things to check:
Make sure your data files or log files are not auto-extending.
Make sure your anti-virus is set to ignore your SQL data and log files.
When looking at the profiler output, be sure to check the queries that finished just prior to your targets; they could have been blocking.
Make sure you've turned off Auto-close on the database; re-opening after closing takes some time.
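The auto-close check in the last point can be done like this (YourDb is a placeholder):

```sql
-- Databases with auto-close enabled (a common default on Express installs).
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;

ALTER DATABASE YourDb SET AUTO_CLOSE OFF;
```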
