Large USERSTORE_OBJPERM in SQL Server - sql-server

I'm trying to make our SQL Server go faster and have noticed that no stored procedures are staying in the plan cache for any length of time. Most of the plans have been created in the last hour or so.
Running the script below I see that the USERSTORE_OBJPERM is around 3GB and is the 2nd biggest memory cache on the server after the SQL BUFFERPOOL.
SELECT TOP 100 *
FROM sys.dm_os_memory_clerks
WHERE type = 'USERSTORE_OBJPERM';
I've run the same script on a few of our other production servers, and none of their USERSTORE_OBJPERM caches are anywhere near as large; they are around 200MB.
My question is: has anyone seen a USERSTORE_OBJPERM cache at around 3GB, and what might have caused it?
I ran the following to try to clear the cache; it went down by 100MB or so and instantly started rising again.
DBCC FREESYSTEMCACHE ('ObjPerm - DatabaseName')
SQL Server version is 2017 Enterprise with CU22 applied.
Many thanks in advance for any tips or advice.
Cheers, Mat

Fixed.
It seems the issue was caused by an application using service broker.
The application was running a script to check permissions every 30 seconds.
Fortunately there was an option to switch the permission check off.
The USERSTORE_OBJPERM cache size is now 200MB instead of 3GB, and stored procedure plans are staying in the cache.
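For anyone diagnosing a similar problem, a minimal sketch for tracking the cache's growth over time; snapshotting it periodically makes it easier to correlate growth with application activity such as the permission checks described above (pages_kb is the size column on SQL Server 2012 and later):

```sql
-- Snapshot the current size of the object-permission cache.
SELECT SYSDATETIME() AS sampled_at,
       type,
       name,
       SUM(pages_kb) / 1024.0 AS size_mb
FROM sys.dm_os_memory_clerks
WHERE type = 'USERSTORE_OBJPERM'
GROUP BY type, name
ORDER BY size_mb DESC;
```

Logging the results to a table every few minutes would show whether the cache grows steadily (suggesting a recurring process, as it was here) or in bursts.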

Related

Problem of SQL Server 2014 - locks on the SQL Server database

We upgraded from SQL Server 2008 to SQL Server 2014. The upgrade itself was successful.
However, we have had optimization problems since. Some queries have started causing locks; often the blocking clears on its own, but sometimes the database grinds to a halt and will not move.
Our workaround for this is changing MAXDOP. After the change, I do not know what gets freed, but everything starts running as it did before the jam. I have no idea what else to do about it.
Our SQL Server configuration
We have already changed the cost threshold for parallelism and MAXDOP. It doesn't help much. I've optimized the queries that cause the blocking.
The problem persists all the time. Oddly enough, changing MAXDOP relieves the blockage: the system frees up, and the queued SQL queries go through and execute.
Performance issues can arise for many reasons; an improper MAXDOP setting is just one of them.
Run a health check with sp_Blitz
Run [sp_Blitz](https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit#sp_blitz-overall-health-check) to see what is actually causing your performance bottleneck.
Check the findings with priorities 1 to 50 first; those are the most crucial.
Start fixing them one by one.
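The steps above can be sketched as follows, assuming the First Responder Kit is already installed in the database (the @IgnorePrioritiesAbove value of 50 matches the priority range mentioned above):

```sql
-- Run the overall health check, keeping only the most crucial
-- findings (priorities 1-50).
EXEC dbo.sp_Blitz
    @IgnorePrioritiesAbove = 50,
    @CheckUserDatabaseObjects = 0;  -- skip per-object checks for a faster first pass
```

Each row in the output includes a priority, a finding, and a URL with remediation advice, which makes the "fix them one by one" step straightforward.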

SQL Server Stored Procedure not executing only on one machine

We are running SQL Server 2012, and all the developers can execute a specific stored procedure (which is overly complex). It takes a varying amount of time depending on the machine (anywhere up to 20 seconds).
We are currently hosting the SQL Server instances locally and passing around one backup of the database to work from (we don't want a single shared instance for dev work).
On a particular machine, it will not finish executing at all. They are all identical machines, and the settings appear to be the same.
Has anyone experienced this before? What are some things that we can try on this specific SQL Server instance to get it working?
We tried restarting the machine and the services, running DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, and inspecting table locks, with no luck.
Thanks!
The solution we found that finally fixed the problem was to rebuild all the indexes. They had become fragmented and so slow that the Stored Procedures were timing out.
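A minimal sketch of how fragmentation like this can be checked and fixed; the 30% threshold is a common rule of thumb, not a hard rule, and the index name in the rebuild statement is illustrative:

```sql
-- Find indexes in the current database with heavy fragmentation.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL  -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rebuild a specific index once identified (name is hypothetical):
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;
```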

SQL Server randomly 200x slower than normal for simple query

Sometimes queries that normally take almost no time to run at all suddenly start to take as much as 2 seconds to run. (The query is select count(*) from calendars, which returns the number 10). This only happens when running queries through our application, and not when running the query directly against the database server. When we restart our application server software (Tomcat), suddenly performance is back to normal. Normally I would blame the network, but it doesn't make any sense to me that restarting the application server would make it suddenly behave much faster.
My suspicion falls on the connection pool, but I've tried all sorts of different settings and multiple different connection pools and I still have the same result. I'm currently using HikariCP.
Does anyone know what could be causing something like this, or how I might go about diagnosing the problem?
Do you use stored procedures or ad-hoc queries? One reason to get different execution times when running a query in, say, Management Studio versus through a stored procedure in your application can be an inefficient cached execution plan, which could have been generated that way due to parameter sniffing. There are a number of solutions you could try (like substituting parameters with local variables). If you restart the whole computer (and SQL Server is also running on it), then this could explain why queries are fast right after a restart: the execution plans are cleared by the reboot.
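A minimal sketch of the local-variable workaround mentioned above; the procedure, table, and column names are illustrative, not from the original post:

```sql
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    -- Copy the parameter into a local variable so the optimizer uses
    -- average density statistics instead of sniffing the first
    -- caller's value when it compiles the plan.
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END;
```

The trade-off is that every caller gets the same "generic" plan; OPTION (RECOMPILE) on the statement is an alternative when a per-call plan is worth the compile cost.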
It turned out we had a rogue process that was grabbing 64 connections to the database at once and using all of them for intense and inefficient work. We were able to diagnose this using jstack. We ran jstack when we noticed the system had slowed down a ton, and it showed us what the application was working on. We saw 64 stack traces all inside the same rogue process, and we had our answer!

How to see what's making SQL Server grow after starting and not doing anything else

When I start SQL Server, its memory usage won't stop growing and gets to over a gigabyte within a couple of minutes. I have restarted the service several times, and each time it does the same thing. After each restart I do not start any applications and have none open that would use SQL Server.
I turned the SQL Agent off just to make sure it wasn't running anything.
I've tried seeing if the profiler would show anything being run but didn't see anything.
I tried running a query to see what the longest running queries were but nothing seems out of order there either.
I'm wondering what other options I have to figure out what's causing SQL Server's memory to grow incessantly?
You can configure max memory usage: http://technet.microsoft.com/en-us/library/ms178067.aspx
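A minimal sketch of setting that cap; the 4096MB value is an illustrative figure and should be sized for the server's actual RAM and workload:

```sql
-- Cap SQL Server's buffer pool memory (value is in MB).
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```

Note that SQL Server caching memory up to this cap is normal, by-design behavior, not a leak; it releases memory when the OS signals pressure.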
If you really want to see what SQL Server is caching in RAM then have a look at the sys.dm_os_buffer_descriptors DMV:
sys.dm_os_buffer_descriptors
This DMV will return a row for every single page in the buffer pool, it will tell you what type of page it is and it will tell you which database it belongs to.
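For example, a sketch that rolls those per-page rows up into buffer pool usage per database (each page is 8 KB):

```sql
SELECT CASE database_id
         WHEN 32767 THEN 'ResourceDb'
         ELSE DB_NAME(database_id)
       END AS database_name,
       COUNT(*) * 8 / 1024 AS buffer_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;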
The other thing in SQL Server that can eat RAM is the plan cache, which you can see by referencing the associated DMV (sys.dm_exec_cached_plans) and the associated functions around this.
There are loads of resources available on how to use these DMVs to analyse the contents of the plan cache and buffer pool, so you will be able to tell exactly what SQL Server is using all this memory for.
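As a starting point, a sketch that summarizes the plan cache by object type, which quickly shows whether ad-hoc plans or procedure plans dominate:

```sql
SELECT objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC;
```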
The previous answer is correct though: you really do need to set the max server memory property to prevent SQL Server from running away with all the RAM, especially in this case where it appears to be a concern.

"Priming" a whole database in SQL Server for first-hit speed

For a particular app I have a set of queries that I run each time the database has been restarted for any reason (usually a server reboot). These "prime" SQL Server's page cache with the common core working set of the data so that the app is not unusually slow the first time a user logs in afterwards.
One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4GB in the machine; the DB is under 1.5GB currently and unlikely to grow much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM?
It could be done the hard way by having a script scan sysobjects and sysindexes and run SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so be in RAM. But is there a cleaner or more efficient way? All planned stops of the database server are outside normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so it is not an issue if such a process slows users down until it completes more than an unprimed working set would.
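The "hard way" described above can be sketched with the modern catalog views (sys.tables and sys.indexes rather than sysobjects/sysindexes); COUNT_BIG(*) with the index hint forces a scan of each index, pulling its pages into the buffer pool:

```sql
-- Prime the buffer pool by scanning every named index of every user table.
DECLARE @sql NVARCHAR(MAX);
DECLARE prime_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT N'SELECT COUNT_BIG(*) FROM '
         + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
         + N' WITH (INDEX(' + QUOTENAME(i.name) + N'))'
    FROM sys.tables AS t
    JOIN sys.indexes AS i ON i.object_id = t.object_id
    WHERE i.name IS NOT NULL;  -- skip heaps

OPEN prime_cursor;
FETCH NEXT FROM prime_cursor INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM prime_cursor INTO @sql;
END;
CLOSE prime_cursor;
DEALLOCATE prime_cursor;
```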
I'd use a startup stored procedure that invokes sp_updatestats.
It will benefit queries anyway.
It already loops through everything anyway (you have indexes, right?).
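A minimal sketch of wiring that up; the procedure name is illustrative, and note that a startup procedure must live in master while sp_updatestats operates on the current database, so the procedure switches context to the target database first (name also illustrative):

```sql
USE master;
GO
CREATE PROCEDURE dbo.usp_PrimeStats
AS
    -- sp_updatestats works on the current database, so run it
    -- in the context of the database to be primed.
    EXEC YourAppDb.sys.sp_updatestats;
GO
-- Mark the procedure to run automatically at instance startup.
EXEC sp_procoption @ProcName = N'dbo.usp_PrimeStats',
                   @OptionName = 'startup',
                   @OptionValue = 'on';
```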
