We are running SQL Server 2012, and all the developers can execute a specific stored procedure (which is overly complex). It takes a varying amount of time depending on the machine, anywhere up to 20 seconds.
We are currently hosting the SQL Server instances locally and passing around one backup of the database to work from (we don't want a single shared instance for dev work).
On one particular machine, it never finishes executing at all. The machines are all identical, and the settings appear to be the same.
Has anyone experienced this before? What are some things that we can try on this specific SQL Server instance to get it working?
We tried restarting the machine, restarting the services, running DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, and inspecting table locks, all with no luck.
Thanks!
The solution that finally fixed the problem was to rebuild all the indexes. They had become so fragmented and slow that the stored procedures were timing out.
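For anyone who lands here with the same symptoms, a minimal sketch of the kind of thing we ran (dbo.YourTable is a placeholder, and the 30% threshold is just a common rule of thumb):

-- Find fragmented indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;

-- Rebuild every index on one table (repeat per table, or script it out)
ALTER INDEX ALL ON dbo.YourTable REBUILD;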
I'm trying to make our SQL Server go faster and have noticed that no stored procedures are staying in the plan cache for any length of time. Most of the plans have been created in the last hour or so.
Running the script below I see that the USERSTORE_OBJPERM is around 3GB and is the 2nd biggest memory cache on the server after the SQL BUFFERPOOL.
SELECT TOP (100) *
FROM sys.dm_os_memory_clerks
WHERE type = 'USERSTORE_OBJPERM';
I've run the same script on a few of our other production servers, and none of their USERSTORE_OBJPERM caches are anywhere near as large; they're all around 200 MB.
My question is: has anyone seen a USERSTORE_OBJPERM at around 3 GB, and what might have caused it?
I ran the following to try to clear the cache; it went down by 100 MB or so and instantly started rising again.
DBCC FREESYSTEMCACHE ('ObjPerm - DatabaseName')
SQL Server version is 2017 Enterprise with CU22 applied.
Many thanks in advance for any tips or advice.
Cheers, Mat
Fixed.
It seems the issue was caused by an application using service broker.
The application was running a script to check permissions every 30 seconds.
Fortunately there was an option to switch the permission check off.
The USERSTORE_OBJPERM cache size is now 200 MB instead of 3 GB, and stored procedure plans are staying in the cache.
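If you want to verify a fix like this, a small sketch that sums the clerk's pages instead of eyeballing the raw rows (same DMV as the script above; pages_kb is available on SQL Server 2012 and later):

-- Total size of the object-permission cache, in MB
SELECT type, SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
WHERE type = 'USERSTORE_OBJPERM'
GROUP BY type;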
Running SQL Server 2014 Express on our domain. We use Windows Authentication to log on. All queries are performed in stored procedures.
Now, the system runs fine for all our users except one. When he logs on (using our software), all queries take around 10 times longer (e.g. 30 ms instead of 2 ms). The queries are identical, the database is the same, the network speed is the same, the operating system is the same, the SQL Server drivers are the same, connection pooling is the same, DNS is the same. Changing computer does not help. The problem seems to be linked to the account being used.
What on Earth may be the cause for this huge performance hit?
Please advise!
I would try rebuilding the SP (by running an ALTER statement that duplicates its existing structure) to force SQL Server to recompile it. I don't know every way SQL Server caches things, but it can definitely create distinct execution plans for different types of connections, so I wouldn't be surprised if your slow user is running a version with an inefficient execution plan.
http://www.sommarskog.se/query-plan-mysteries.html
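If you'd rather not run a full ALTER, a lighter-weight sketch (the procedure name is a placeholder) is to flag the procedure so it gets a fresh plan on its next execution:

-- Mark the procedure for recompilation; a new plan is built next time it runs
EXEC sp_recompile N'dbo.YourSlowProcedure';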
Sometimes queries that normally take almost no time to run suddenly start to take as much as 2 seconds. (The query is select count(*) from calendars, which returns the number 10.) This only happens when running queries through our application, not when running the query directly against the database server. When we restart our application server software (Tomcat), performance suddenly returns to normal. Normally I would blame the network, but it doesn't make sense to me that restarting the application server would suddenly make everything faster.
My suspicion falls on the connection pool, but I've tried all sorts of different settings and multiple different connection pools and I still have the same result. I'm currently using HikariCP.
Does anyone know what could be causing something like this, or how I might go about diagnosing the problem?
Do you use stored procedures or ad-hoc queries? One reason to get different execution times when running a query in, say, Management Studio versus through a stored procedure in your application can be an inefficient cached execution plan, which could have been generated that way due to parameter sniffing. You could read more about it here, and there are a number of solutions you could try (like substituting parameters with local variables, as sketched below). If you restart the whole computer (and SQL Server is also running on it), then this could explain why you get fast queries right after a restart: the execution plans are cleared by the reboot.
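The local-variable workaround looks roughly like this (a sketch; the procedure, table, and column names are all made up):

CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    -- Copying the parameter into a local variable hides the caller's
    -- value from the optimizer, so it compiles a generic plan instead
    -- of one sniffed from the first set of parameter values.
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT *
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END;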
It turned out we had a rogue process that was grabbing 64 connections to the database at once and using all of them for intense and inefficient work. We were able to diagnose this using jstack. We ran jstack when we noticed the system had slowed down a ton, and it showed us what the application was working on. We saw 64 stack traces all inside the same rogue process, and we had our answer!
I am running a very simple query against SQL Server 2000:
SELECT getDate()
Most runs are sub-second, but about one run in 10, at random, takes about five seconds.
I am running these queries from SQL Server 2008 Management Studio, but it occurs in other clients and on other machines as well, so it is not client-specific.
The query is running to a server which is on the same network and there is no significant load on the server.
Can anyone tell me why this might be happening?
Sounds like network issues. We had the same thing happen when I worked for a large bank. Due to politics, it was out of our control.
You can do a few things to confirm this, like try running the queries from the server, etc.
The two things I would suspect without more information right off the bat are network latency and server load. Do you get this behavior when running the query from the database server machine itself? Do you get this behavior when running in single-user mode?
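One way to separate the server from the network (a sketch you can paste into a query window on the server itself; table variables work on SQL Server 2000): time the statement in a loop server-side. If these numbers stay flat while the client occasionally sees five seconds, the delay is in the network or the client.

-- Time 100 server-side executions and summarize the durations
DECLARE @i INT, @t0 DATETIME, @now DATETIME
DECLARE @results TABLE (run INT, ms INT)
SET @i = 1
WHILE @i <= 100
BEGIN
    SET @t0 = GETDATE()
    SELECT @now = GETDATE()   -- the query under test
    INSERT INTO @results VALUES (@i, DATEDIFF(ms, @t0, GETDATE()))
    SET @i = @i + 1
END
SELECT MIN(ms) AS best_ms, AVG(ms) AS avg_ms, MAX(ms) AS worst_ms FROM @results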
We have a dev server running C# and talking to SQL server on the same machine.
We have another server running the same code and talking to SQL server on another machine.
A job does 60,000 reads (that is, it calls a stored procedure 60,000 times; each read returns one row).
The job runs in 1/40th of the time on the first server compared to the second.
We're already looking at the 'internal' differences between the two SQL Servers (fragmentation, tempdb, memory etc) but what's a good way to determine how much slower the second config is simply because it has to go over the network ?
[rather confusingly I found a 'SQL Server Ping' tool but it doesn't actually attempt any timing measurement which, as far as I can see, is what we need]
Open SQL Server Management Studio on the remote machine. Start a new query. Click Query, Include Client Statistics. Run your stored procedure. In the Client Statistics tab of the results, you'll see some basic information about how many packets were sent back and forth over the network. My guess is that for one read, you're not going to see much overhead.
To get a better idea, I'd try doing a plain select of 60,000 records (since you said it's returning 60,000 records one by one) over the network from your remote machine. Again, that doesn't give you an idea of the stored procedure overhead, but it'll give you a quick seat-of-the-pants idea of the network speed between machines.
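A sketch of that plain-select test (the table name is a placeholder): run it from the remote machine with timing statistics on, and compare the elapsed time against the 60,000 single-row calls.

-- Seat-of-the-pants network test, run from the remote machine
SET STATISTICS TIME ON
SELECT TOP 60000 * FROM dbo.YourTable
SET STATISTICS TIME OFF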
SQL Server ships with the Profiler utility. This will tell you what the execution time of your query is on each of your SQL Server instances. Note any discrepancies. Whatever time (in the Duration column) cannot be accounted for there is transmission time... or client display time. Perhaps your client machine takes longer to render or compute the results.
What results are you expecting? Running everything on one machine versus over a network will certainly give you different timings. Your biggest timing difference will be the network throughput, since you have to communicate with the networked server in both directions.
If you can SET NOCOUNT ON, this will help by reducing network traffic.
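For example (a sketch; the procedure and table names are placeholders), putting it at the top of the stored procedure stops SQL Server from sending a "rows affected" message back for every one of the 60,000 calls:

ALTER PROCEDURE dbo.GetSingleRow
    @Id INT
AS
BEGIN
    SET NOCOUNT ON  -- suppress the "1 row affected" message per call

    SELECT *
    FROM dbo.YourTable
    WHERE Id = @Id
END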