User activity vs. System activity on the Index Usage Statistics report - sql-server

I recently decided to crawl over the indexes on one of our most heavily used databases to see which were suboptimal. I generated the built-in Index Usage Statistics report from SSMS, and it's showing me a great deal of information that I'm unsure how to interpret.
I found an article at Carpe Datum about the report, but it doesn't tell me much more than I could assume from the column titles.
In particular, the report differentiates between user activity and system activity, and I'm unsure what qualifies as each type of activity.
I assume that any query that uses a given index increases the '# of user X' columns. But what increases the system columns? Building statistics?
Is there anything that depends on the user or role(s) of a user that's running the query?

But what increases the system columns? Building statistics?
SQL Server maintains statistics on each index (controlled by the "Auto Update Statistics" database option, which is enabled by default). An index can also grow or be reorganized on disk. Operations like these count as system activity.
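If you want to see the raw counters behind the report, a minimal sketch against sys.dm_db_index_usage_stats (the DMV the report is built on) looks like this:
-- One row per index the engine has touched since the last service restart;
-- compare the user_* columns (your queries) against the system_* columns
-- (statistics maintenance and other internal operations).
SELECT OBJECT_NAME(us.object_id) AS TableName,
       i.name AS IndexName,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates,
       us.system_seeks, us.system_scans, us.system_lookups, us.system_updates
FROM sys.dm_db_index_usage_stats AS us
INNER JOIN sys.indexes AS i
    ON i.object_id = us.object_id
    AND i.index_id = us.index_id
WHERE us.database_id = DB_ID();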
Is there anything that depends on the user or role(s) of a user that's running the query?
You could look into using SQL Server Profiler to gather data about which users use which indexes. It lets you save traces to a table; if you can include index usage in the trace, you can correlate it with users. The "Showplan" events would include it, but that's rather coarse.
This article describes a way to collect a trace, run it through the index tuning wizard, and analyze the result.
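If you do save a trace to a file, a rough sketch of pulling it into a table for that kind of correlation (the file path and target table name are hypothetical):
-- Load a saved Profiler trace so it can be queried and joined against
-- login names, application names, query text, etc.
SELECT *
INTO dbo.IndexUsageTrace
FROM fn_trace_gettable(N'C:\traces\index_usage.trc', DEFAULT);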

Related

Can per-partition stats prevent the parameter sniffing problem when data varies widely across partitions?

Currently we have a data warehouse that holds data from multiple tenants, running on SQL Server 2019. All the tenant databases share the same schema, and the data from all the tenants is consolidated in the data warehouse, partitioned by tenant. We have a parameter sniffing problem with the new dashboard because data volume varies a lot between tenants: some tenants have fewer than 10,000 rows, while a couple have up to 5 million. Because of this, dashboard performance is bad for large tenants if the execution plan was built for a smaller tenant.
Suggestions on the internet recommend the RECOMPILE hint, the OPTIMIZE FOR hint, etc. But I have a doubt about the basics of this parameter sniffing. Since statistics are maintained by SQL Server at the partition level, is this statistics information not used to check whether the cached plan is right for a new runtime value? Before executing, are the stats for the compile-time and runtime values ever compared to see whether the plan built at compile time is still valid?
Kindly advise.
Embed the Partition number or the TenantID key in the query text
Parameters are for when you want shared, reused query plans. Hard-coding the criteria that cause query plans to vary is the basic right answer here.
And even though "As much as possible, we are refraining from using Dynamic SQL in the code", you should make an exception here.
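A minimal sketch of that exception (the table, column, and parameter names are hypothetical):
-- Embed the tenant id in the query text itself so each tenant gets its own
-- plan cache entry. TenantId is an int here, so the CAST also keeps the
-- string concatenation injection-safe.
DECLARE @tenantId int = 12345;
DECLARE @sql nvarchar(max) =
    N'select sum(Sales) as TotalSales
      from T
      where TenantId = ' + CAST(@tenantId AS nvarchar(20)) + N';';
EXEC sys.sp_executesql @sql;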
Use OPTION (RECOMPILE)
If you don't end up spending too much time in query optimization, this is almost as good. Or:
Add a comment to the query that varies by tenant or tenant size to get a partitioned plan cache. This is also useful for correlating queries to the code paths that generate them, e.g.
/* Dashboard: Sales Overview
   Visual: Total Sales
   TenantID: 12345 */
select sum(Sales) as TotalSales
from T
where TenantId = @tenantId;
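For the OPTION (RECOMPILE) route mentioned above, a minimal variant on the same hypothetical table:
select sum(Sales) as TotalSales
from T
where TenantId = @tenantId
option (recompile); -- compile a fresh, tenant-specific plan on every execution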

Shrinking pg_toast on RDS instance

I have a Postgres 9.6 RDS instance and it is growing by 1 GB a day. We have made some optimizations to the relation associated with the pg_toast table, but the pg_toast size is not changing.
Autovacuum is on, but since autovacuum/VACUUM FREEZE do not reclaim space and VACUUM FULL takes an exclusive lock, I am no longer sure what the best approach is.
The data in the table is core to our user experience, and although following this approach makes sense, it would take away the data our users expect to see for the duration of the VACUUM FULL.
What are the other options here to shrink the pg_toast?
Here is some data about table sizes (the screenshots are not reproduced here): they show that the relation scoring_responsescore is the one associated with the pg_toast table. I have also included our autovacuum settings and the output of the currently running autovacuum process for that specific pg_toast, in case it helps.
VACUUM (FULL) is the only method PostgreSQL provides to reduce the size of a table.
Is the bloated TOAST table such a problem for you? TOAST tables are always accessed via the TOAST index, so the bloat shouldn't be a performance problem.
I know of two projects that provide table reorganization with only a short ACCESS EXCLUSIVE lock, namely pg_squeeze and pg_repack, but you probably won't be able to use those in an Amazon RDS database.
To keep the problem from getting worse, you should first try raising autovacuum_vacuum_cost_limit to 2000 for the affected table, and if that doesn't do the trick, lower autovacuum_vacuum_cost_delay to 0. You can use ALTER TABLE to change these settings for a single table.
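A minimal sketch of those per-table overrides, using the parent table named in the question (the toast. prefix applies the setting to the table's associated TOAST relation):
-- Let autovacuum do more work per cycle on this table's TOAST data.
ALTER TABLE scoring_responsescore
    SET (toast.autovacuum_vacuum_cost_limit = 2000);
-- If that is not enough, remove the cost-based throttling delay entirely.
ALTER TABLE scoring_responsescore
    SET (toast.autovacuum_vacuum_cost_delay = 0);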
pg_repack still does not allow reducing the size of TOAST segments in RDS.
And in RDS we cannot run pg_repack with superuser privileges; we have to use the --no-superuser-check option, and with that it cannot access the pg_toast.* tables.

How to set Azure SQL to rebuild indexes automatically?

In on-premises SQL Server databases, it is normal to have a maintenance plan that rebuilds the indexes once in a while, when the database is not being used that much.
How can I set this up in Azure SQL DB?
P.S.: I tried before, but since I couldn't find any options for that, I thought maybe they were doing it automatically, until I read this post and tried:
SELECT
    DB_NAME() AS DBName
    ,OBJECT_NAME(ps.object_id) AS TableName
    ,i.name AS IndexName
    ,ips.index_type_desc
    ,ips.avg_fragmentation_in_percent
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
    ON ps.object_id = i.object_id
    AND ps.index_id = i.index_id
CROSS APPLY sys.dm_db_index_physical_stats(DB_ID(), ps.object_id, ps.index_id, NULL, 'LIMITED') ips
ORDER BY ps.object_id, ps.index_id;
And found out that I have indexes that need maintenance.
Update: The engineering team has published updated guidance that codifies some of the suggestions in this answer in a more "official" place from Microsoft, as some customers asked for that: SQL Server/DB Index Guidance. Thanks, Conor
Original answer:
I'll point out that most people don't need to consider rebuilding indexes in SQL Azure at all. Yes, B+ Tree indexes can become fragmented, and yes this can cause some space overhead and some CPU overhead compared to having perfectly tuned indexes. So, there are some scenarios where we do work with customers to rebuild indexes. (The primary scenario is when the customer may run out of space, currently, as disk space is somewhat limited in SQL Azure due to the current architecture). So, I will encourage you to step back and consider that using the SQL Server model for managing databases is not "wrong" but it may or may not be worth your effort.
(If you do end up needing to rebuild an index, you are welcome to use the models posted here by the other posters - they are generally fine models to script tasks. Note that SQL Azure Managed Instance also supports SQL Agent which you can also use to create jobs to script maintenance operations if you so choose).
Here are some details that may help you decide if you may be a candidate for index rebuilds:
The link you referenced is from a post in 2013. The architecture for SQL Azure was completely redone after that post. Specifically, the hardware architecture moved from a model that was based on local spinning disks to one based on local SSDs (in most cases). So, the guidance in the original post is out of date.
You can have cases in the current architecture where you run out of space with a fragmented index. You have options to rebuild the index or to move to a larger reservation size for a while (which will cost more money) that supports a larger disk space allocation. [Since the local SSD space on the machines is limited, reservation sizes are roughly linked to proportions of the machine. As we get newer hardware with larger/more drives, you have more scale-up options.]
SSD fragmentation impact is relatively low compared to rotating disks, since the cost of a random IO is not really any higher than that of a sequential one. The CPU overhead of walking a few more B+ Tree intermediate pages is modest. I've usually seen an overhead of perhaps 5-20% max in the average case (which may or may not justify regular rebuilds, which have a much bigger workload impact while they run).
If you are using query store (which is on by default in SQL Azure), you can evaluate whether a specific index rebuild helps your performance visibly or not. You can do this as a test to see if your workload improves before bothering to take the time to build and manage index rebuild operations yourself.
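As a rough sketch of that check against the Query Store catalog views (simplified: it averages duration per query across all intervals, so you would compare snapshots taken before and after the rebuild):
-- Average duration per query from Query Store runtime statistics.
SELECT q.query_id,
       qt.query_sql_text,
       AVG(rs.avg_duration) AS avg_duration_us
FROM sys.query_store_query AS q
INNER JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
INNER JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
INNER JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY avg_duration_us DESC;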
Please note that there is currently no intra-database resource governance within SQL Azure for user workloads. So, if you start an index rebuild, you may end up consuming lots of resources and impacting your main workload. You can try to time things to be done off-hours, of course, but for applications with lots of customers around the world this may not be possible.
Additionally, I will note that many customers have index rebuild jobs "because they want stats to be updated". It is not necessary to rebuild an index just to rebuild the stats. In recent SQL Server and SQL Azure, the algorithm for stats update was made more aggressive on larger tables, and the model for how we estimate cardinality in cases where customers are querying recently inserted data (since the last stats update) has been changed in later compatibility levels. So, it is often the case that the customer doesn't need to do any manual stats update at all.
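If fresh statistics are the actual goal, a minimal sketch of updating them without touching the index (the table name is hypothetical):
-- Refresh statistics only; far cheaper than ALTER INDEX ... REBUILD.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;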
Finally, I will note that the impact of stats being out of date was historically that you'd get plan choice regressions. For repeated queries, a lot of the impact of this was mitigated by the introduction of the automatic tuning feature over query store (which forces prior plans if it notices a large regression in query performance compared to the prior plan).
The official recommendation I give customers is to not bother with index rebuilds unless they have a tier-1 app where they've demonstrated real need (benefits outweigh the costs), or they are a SaaS ISV trying to tune a workload over many databases/customers in elastic pools or a multi-tenant database design, so they can reduce their COGS or avoid running out of disk space (as mentioned earlier) on a very big database. For the largest customers we have on the platform, we sometimes see value in doing index operations manually with the customer, but we usually do not need a regular job that does this kind of operation "just in case". The intent from the SQL team is that you don't need to bother with this at all and can just focus on your app instead. There are always things we can add or improve in our automatic mechanisms, of course, so I completely allow for the possibility that an individual customer database may need such actions. I've not seen any myself beyond the cases I mentioned, and even those are rarely an issue.
I hope this gives you some context to understand why this isn't being done in the platform yet - it just hasn't been an issue for the vast majority of customer databases we have today in our service compared to other pressing needs. We revisit the list of things we need to build each planning cycle, of course, and we do look at opportunities like this regularly.
Good luck - whatever your outcome here, I hope this helps you make the right choice.
Sincerely,
Conor Cunningham
Architect, SQL
You can use Azure Automation to schedule index maintenance tasks, as explained here: Rebuilding SQL Database indexes using Azure Automation
Below are the steps:
1) Provision an Automation Account if you don't have one, by going to https://portal.azure.com and selecting New > Management > Automation Account
2) After creating the Automation Account, open its details and click on Runbooks > Browse Gallery
3) Type the word "indexes" in the search box, and the runbook "Indexes tables in an Azure database if they have a high fragmentation" appears:
4) Note that the author of the runbook is the SC Automation Product Team at Microsoft. Click on Import:
5) After importing the runbook, now let’s add the database credentials to the assets. Click on Assets > Credentials and then on “Add a credential…” button.
6) Set a credential name (which will be used later in the runbook), the database user name, and the password:
7) Now click again on Runbooks and then select the “Update-SQLIndexRunbook” from the list, and click on the “Edit…” button. You will be able to see the PowerShell script that will be executed:
8) If you want to test the script, just click on the “Test Pane” button, and the test window opens. Introduce the required parameters and click on Start to execute the index rebuild. If any error occurs, the error is logged on the results window. Note that depending on the database and the other parameters, this can take a long time to complete:
9) Now go back to the editor, and click on the "Publish" button to enable the runbook. If we click on "Start", a window appears asking for the parameters. But as we want to schedule this task, we will click on the "Schedule" button instead:
10) Click on the Schedule link to create a new Schedule for the runbook. I have specified once a week, but that will depend on your workload and how your indexes increase their fragmentation over time. You will need to tweak the schedule based on your needs and by executing the initial queries between executions:
11) Now introduce the parameters and run settings:
NOTE: you can play with having different schedules with different settings, e.g. a specific schedule for a specific table.
With that, you have finished. Remember to change the Logging settings as desired:
Azure Automation is good, and its pricing is negligible.
Some other options you have are:
1. Create an Execute SQL task and schedule it through SQL Agent. The Execute SQL task should contain the index rebuild code along with the stats rebuild.
2. You can also create a linked server to the Azure SQL database and create a SQL Agent job. To create a linked server to Azure, see this SO link: I need to add a linked server to a MS Azure SQL Server
As @TheGamiswar suggested, add a linked server, then create a stored procedure like this:
-- Create this procedure in the remote Azure SQL database itself;
-- CREATE PROCEDURE does not accept a four-part linked-server name.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[sp_RebuildReorganizeIndexes]
AS
BEGIN
    ALTER INDEX PK_MyTable ON dbo.MyTable REBUILD WITH (STATISTICS_NORECOMPUTE = ON, ONLINE = ON); -- clustered index
    ALTER INDEX IX_MyTable ON dbo.MyTable REBUILD WITH (STATISTICS_NORECOMPUTE = ON, ONLINE = ON); -- nonclustered index
    -- For lightly fragmented indexes, REORGANIZE is a cheaper alternative to REBUILD:
    -- ALTER INDEX PK_MyTable ON dbo.MyTable REORGANIZE;
    -- ALTER INDEX IX_MyTable ON dbo.MyTable REORGANIZE;
END
Then, on the server that hosts the linked server, use "SQL Server Agent" to create a new job and a schedule; the job step calls the procedure through the linked server's four-part name, e.g. EXEC [LinkedServerName].[RemoteDB].[dbo].[sp_RebuildReorganizeIndexes].
For details please see https://learn.microsoft.com/en-us/sql/ssms/agent/create-a-job?view=sql-server-2017

Automatic database indexing

I have a database which is used by a multi-tenant application. In this database, workloads are dynamic and change continuously, so I have to allocate a DBA to manage the database continuously. But I thought of using an automated service for this task, such as Azure SQL Database Advisor's automatic index management (the platform is not important; I am OK with using MS SQL Server, Oracle, or another RDBMS).
I want to know how these automated indexes actually work. Can I replace a database administrator with these automatic indexers? I read that whenever a query execution plan is generated, it finds all the indexes that would be useful for executing that query. It then uses the indexes that really exist and caches some data about the indexes that don't. If an index's data is cached again and again, the advisor shows it as a recommended index. But can we rely on this? What about UPDATE and INSERT queries? If I have a table where records are frequently updated, will these automated indexing systems take that into account?
Note that Index Advisor is only available in SQL Database (Azure).
In the background, Index Advisor is a machine learning algorithm, a relatively simple and quite effective one. It analyzes your workload and determines whether you would benefit from indexes. If it thinks you would, it shows them as recommendations; if you turn automatic index creation/dropping on, it will actually create the index. To understand better how it works, take a look at Channel 9. Note that before you apply a recommendation, you can see its estimated impact.
Now, the algorithm can make mistakes, right? That is why, once a recommendation is applied, it can automatically be reverted based on its measured performance.
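For reference, a minimal sketch of turning these options on with T-SQL in Azure SQL Database (the same settings are also exposed in the portal):
-- Automatically create and drop indexes, and force the last known good
-- plan; applied recommendations that regress performance are reverted.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON, FORCE_LAST_GOOD_PLAN = ON);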
Also note that next to Index Advisor you can check Query Performance Insight, which shows the performance of your queries. This can help your DBA diagnose other, non-index-related problems.
But note that Index Advisor will not drop and create new indexes for you every hour; it takes a day or two. So if your database's workload changes very fast, I am not sure any automatic management tool or DBA will react quickly enough.

Redshift data summarization

I have 3 tables in Amazon Redshift which hold information about app usage by users (basically screen clicks, OS version, app version, etc.).
I wish to create a summary table which would store a profile of each user, with details like last logged-in time, most recently used app version, last visited screen, etc.
I am not very familiar with columnar databases and have previously worked only on row-oriented RDBMSs. I was thinking of writing a cron job that would run join queries over the three tables for the past day of data and merge the results into the profile table. I don't know if this is possible in Redshift.
Amazon Redshift is a fully-featured SQL database. The fact that it is columnar shouldn't impact how you use it -- it simply means it can be faster and more efficient at certain types of operations (e.g. scanning millions or even billions of rows).
Your idea of running a regular set of database queries would work fine. However, to make it more efficient, the queries should only update information for users who have had activity since the last update; do not try to update information about all users, since most users' information will not change every day.
The query would basically say "select the latest value of click, OS, and version for any user who accessed the system since the last time we did an update", rather than "select the latest click, OS, and version for all users".
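A minimal sketch of that incremental merge (all table and column names are hypothetical; delete-then-insert inside one transaction is Redshift's classic upsert pattern):
BEGIN;
-- Latest event per user, restricted to users active since the last run.
-- Assumes user_profile already holds at least one row; seed it on first load.
CREATE TEMP TABLE latest_activity AS
SELECT user_id, last_seen, app_version, last_screen
FROM (
    SELECT user_id,
           event_time AS last_seen,
           app_version,
           screen AS last_screen,
           ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY event_time DESC) AS rn
    FROM app_events
    WHERE event_time > (SELECT MAX(last_seen) FROM user_profile)
) t
WHERE rn = 1;
-- Replace the stale profile rows for just those users.
DELETE FROM user_profile
USING latest_activity
WHERE user_profile.user_id = latest_activity.user_id;
INSERT INTO user_profile
SELECT user_id, last_seen, app_version, last_screen
FROM latest_activity;
COMMIT;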
Also, consider whether you actually need such a table to exist. Perhaps you could retrieve this information on-the-fly when you are seeking information about particular users rather than pre-computing the values each day. This would, of course, depend upon how often you wish to retrieve such information.
