My database query had been running very fast until it recently became very slow. No changes have occurred in the database apart from normal data growth.
I have noticed that the database statistics have "never" been updated.
Is there an easy way that I can update these statistics across my entire database so I can see if that is the problem?
I am using SQL Server 2000 Sp4.
You can use this
CREATE PROC usp_UPDATE_STATISTICS
(@dbName sysname, @sample int)
AS
SET NOCOUNT ON
DECLARE @SQL nvarchar(4000)
DECLARE @ID int
DECLARE @TableName sysname
DECLARE @RowCnt int
CREATE TABLE ##Tables
(
TableID INT IDENTITY(1, 1) NOT NULL,
TableName SYSNAME NOT NULL
)
SET @SQL = ''
SET @SQL = @SQL + 'INSERT INTO ##Tables (TableName) '
SET @SQL = @SQL + 'SELECT [name] '
SET @SQL = @SQL + 'FROM ' + @dbName + '.dbo.sysobjects '
SET @SQL = @SQL + 'WHERE xtype = ''U'' AND [name] <> ''dtproperties'''
EXEC sp_executesql @statement = @SQL
SELECT TOP 1 @ID = TableID, @TableName = TableName
FROM ##Tables
ORDER BY TableID
SET @RowCnt = @@ROWCOUNT
WHILE @RowCnt <> 0
BEGIN
SET @SQL = 'UPDATE STATISTICS ' + @dbName + '.dbo.[' + @TableName + '] WITH SAMPLE ' + CONVERT(varchar(3), @sample) + ' PERCENT'
EXEC sp_executesql @statement = @SQL
SELECT TOP 1 @ID = TableID, @TableName = TableName
FROM ##Tables
WHERE TableID > @ID
ORDER BY TableID
SET @RowCnt = @@ROWCOUNT
END
DROP TABLE ##Tables
GO
This will update stats on all the tables in the DB. You should also look at your indexes and rebuild / defrag them as necessary.
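If you only need a one-off refresh rather than a reusable procedure, a simpler sketch (assuming a hypothetical database and table name) is to use sp_updatestats, which runs UPDATE STATISTICS against every user table in the current database and was already available in SQL Server 2000:

```sql
USE YourDatabase  -- hypothetical database name
GO
-- refresh statistics for all user tables in the current database
EXEC sp_updatestats
GO
-- optionally rebuild a heavily fragmented table's indexes (SQL 2000-era syntax)
DBCC DBREINDEX ('dbo.YourTable')  -- hypothetical table name
```

sp_updatestats only touches statistics, while DBCC DBREINDEX also addresses the index fragmentation mentioned above.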
Raj
Try here
This should speed up your indexes and key distribution. Re-analyzing table statistics optimises SQL Server's choice of index for queries, especially for large datasets.
Definitely make yourself a weekly task that runs automatically to update the database's statistics.
Normal data growth is reason enough to justify a slowdown of pretty much any unoptimised query.
Scalability issues related to database size often won't manifest until the data volume grows.
Post your query + rough data volume and we'll help you to see what's what.
We've had a very similar problem with MSSQL 2005 and suddenly slow running queries.
Here's how we solved it: we added WITH (NOLOCK) to every SELECT statement in the query. For example:
select count(*) from SalesHistory with(nolock)
Note that nolock should also be added to nested select statements, as well as joins. Here's an article that gives more details about how performance is increased when using nolock. http://www.mollerus.net/tom/blog/2008/03/using_mssqls_nolock_for_faster_queries.html
Don't forget to keep a backup of your original query obviously. Please give it a try and let me know.
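A session-wide equivalent (assuming you accept the dirty-read trade-off that NOLOCK implies) is to set the isolation level once instead of decorating every table reference:

```sql
-- Allows reading uncommitted (dirty) data for the session,
-- equivalent to putting WITH (NOLOCK) on each table reference
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT COUNT(*) FROM SalesHistory  -- table name from the example above
```

Note that both forms can return rows from in-flight transactions that are later rolled back, so they are only appropriate where approximate results are acceptable.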
I have a system that takes in Revit models and loads all the data in the model to a 2016 SQL Server. Unfortunately, the way the system works, it creates a new database for each model that is loaded. All the databases start with an identical schema because there is a template database that the system uses to build any new ones.
I need to build a view that can query data from all databases on the server but can automatically add new databases as they are created. The table names and associated columns will be identical across all databases, including data types.
Is there a way to pull a list of current database names using:
SELECT [name] FROM sys.databases
and then use the results to UNION the results from a basic SELECT query like this:
SELECT
[col1]
,[col2]
,[col3]
FROM [database].[dbo].[table]
Somehow replace the [database] part with the results of the sys.databases query?
The goal would be for the results to look as if I did this:
SELECT
[col1]
,[col2]
,[col3]
FROM [database1].[dbo].[table]
UNION
SELECT
[col1]
,[col2]
,[col3]
FROM [database2].[dbo].[table]
but dynamically for all databases on the server and without future management from me.
Thanks in advance for the assistance!
***Added Info: A couple of suggestions using STRING_AGG have been made, but that function is not available in 2016.
Try this. It will automatically detect and include new databases with the specified table name. If a database is dropped it will automatically exclude it.
I updated the T-SQL. Without concatenating across databases it only returns the last database; STRING_AGG would do this concatenation more safely than +=, but since STRING_AGG isn't available in 2016, I changed the code so it generates and executes the query using +=. In SQL 2019 the generated query comes out all on one line; I don't have SQL 2016, which may format it better. You can uncomment --SELECT @SQL3 to see what the generated query looks like. Please mark as answer if this is what you need.
DECLARE @TblName TABLE
(
TblName VARCHAR(100)
)
Declare @SQL VARCHAR(MAX),
@SQL3 VARCHAR(MAX),
@DBName VARCHAR(50),
@Count Int,
@LoopCount Int
Declare @SQL2 VARCHAR(MAX) = ''
Select Identity(int,1,1) ID, name AS DBName into #Temp from sys.databases
Select @Count = @@ROWCOUNT
Set @LoopCount = 1
While @LoopCount <= @Count
Begin
SET @DBName = (SELECT DBName FROM #Temp Where ID = @LoopCount)
SET @SQL =
' USE ' + @DBName +
' SELECT TABLE_CATALOG FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = ''table'''
INSERT INTO @TblName (TblName)
EXEC (@SQL)
Set @LoopCount = @LoopCount + 1
End
SELECT @SQL2 +=
' SELECT ' + char(10) +
' [col1] ' + char(10) +
' ,[col2] ' + char(10) +
' ,[col3] ' + char(10) +
' FROM [' + TblName + '].[dbo].[table] ' + char(10) +
' UNION '
FROM @TblName
DROP TABLE #Temp
SET @SQL3 = (SELECT SUBSTRING(@SQL2, 1, LEN(@SQL2) - 5))
--SELECT @SQL3
EXEC (@SQL3)
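As an alternative sketch, the undocumented (and unsupported) sp_MSforeachdb procedure can build the same UNION list without an explicit counter loop; `?` is substituted with each database name, and the [table]/[col1..3] names below are the placeholders from the question:

```sql
-- collect one UNION branch per database that actually contains the target table
CREATE TABLE #Branches (Branch NVARCHAR(MAX))
EXEC sp_MSforeachdb
 'IF EXISTS (SELECT 1 FROM [?].INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = ''table'')
    INSERT INTO #Branches
    SELECT ''SELECT [col1], [col2], [col3] FROM [?].[dbo].[table]'''

-- stitch the branches together with UNION and execute
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL + ' UNION ', '') + Branch FROM #Branches
EXEC (@SQL)
DROP TABLE #Branches
```

Because sp_MSforeachdb is undocumented, its behavior can change between versions; the temp-table approach is used here because each database's command runs in a child scope where local variables from the outer batch are not visible.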
We are using SQL Server 2014 Enterprise with many databases. I have to execute a query and get reports / data from every database with the EXACT SAME schema whose name starts with Cab.
When a new company is added in our ERP project, a new database is created with the exact schema, named starting with Cab plus an incremented number, like:
Cab1
Cab2
Cab3
Cab5
Cab10
I can get the database names as:
SELECT name
FROM master.sys.databases
where [name] like 'Cab%' order by [name]
I have to create a Stored Procedure to get data from tables of every database.
How to do that using a Stored Procedure as the databases are created dynamically starting with Cab?
You can use EXEC(@Statement), or EXEC sp_executesql if you have to pass parameters.
CREATE OR ALTER PROCEDURE dbo.GetDataFromAllDatabases
AS
BEGIN
DECLARE @T TABLE (id INT NOT NULL IDENTITY(1, 1), dbName VARCHAR(256) NOT NULL)
INSERT INTO @T
SELECT NAME FROM MASTER.SYS.DATABASES WHERE [NAME] LIKE 'Cab%' ORDER BY [NAME]
CREATE TABLE #AllData (......)
DECLARE @Id INT, @DbName VARCHAR(128)
SELECT @Id = MIN(Id) FROM @T
WHILE @Id IS NOT NULL
BEGIN
SELECT @DbName = dbName FROM @T WHERE Id = @Id
DECLARE @Statement NVARCHAR(MAX)
SET @Statement = CONCAT(N'INSERT INTO #AllData (...) SELECT .... FROM ', @DbName, '.dbo.[TableName]')
EXEC(@Statement);
--YOU CAN USE BELOW LINE TOO IF YOU NEED TO PASS A VARIABLE
--EXEC sp_executesql @Statement, N'@Value INT', @Value = 128
SET @Id = (SELECT MIN(Id) FROM @T WHERE Id > @Id)
END
END
A quick and easy dynamic SQL solution would be something like this:
DECLARE @Sql nvarchar(max);
SET @Sql = STUFF((
SELECT ' UNION ALL SELECT [ColumnsList], '''+ [name] + ''' As SourceDb FROM '+ QUOTENAME([name]) + '.[SchemaName].[TableName]' + char(10)
FROM master.sys.databases
WHERE [name] LIKE 'Cab%'
FOR XML PATH('')
), 1, 10, '');
--When dealing with dynamic SQL, print is your best friend...
PRINT @Sql
-- Once the @Sql is printed and you can see it looks OK, you can run it.
--EXEC(@Sql)
Notes:
Use QUOTENAME to protect against "funny" characters in identifier names.
Replace [ColumnsList] with the actual list of columns you need.
There's no need for loops of any kind; a simple STUFF + FOR XML mimics STRING_AGG (which was only introduced in SQL Server 2017).
I've thrown in the source database name as a "bonus"; if you don't want it, that's fine.
The ORDER BY clause in the query that generates the dynamic SQL is meaningless for the final query, so I've removed it.
Update: I Solved this. Obviously I reinvented the wheel, but I did not immediately find the answer where I searched.
Given that there is another question exactly like mine whose answers do not indicate how to accomplish my task, I will try to be very clear. I don't really care how I accomplish it, but there has to be a way.
I want the ability to count occurrences by level of any two discrete columns in
an arbitrary table. I want to store the results for later reference because the query takes a long time to run.
The table name and two column names should be definable.
Based on a lot of research, it appears that a function, not a procedure should be used, but I am more interested in what happens, not how it happens.
DROP FUNCTION IF EXISTS O_E_1
GO
DROP TABLE IF EXISTS TestTable
GO
CREATE FUNCTION O_E_1
(@feature NVARCHAR(128), @table NVARCHAR(128))
RETURNS TABLE
AS
RETURN
(SELECT
COUNT(DISTINCT [PersonID]) AS count_person,
@feature AS feature, [HasT2DM] AS target
FROM
dbo.@table
GROUP BY
[@feature], [HasT2DM]);
GO
SELECT *
INTO TestTable
FROM O_E_1('Diagnosis', 'PatientDiagnoses')
go
I hope that with a little bit of work, I can accomplish this.
I have a version that does this in a procedure using dynamic SQL but
unfortunately, I don't see how to save that result to a table. If someone wants to tell me how to save the results of a dynamic SELECT to a table in my schema, that would accomplish what I need.
Here is the procedure version with dynamic SQL. Also included is how I am trying to store the results into a table.
BEGIN
SET NOCOUNT ON;
DECLARE @cmd NVARCHAR(max)
set @cmd = '
(SELECT
COUNT(DISTINCT [PersonID]) AS count_person,
[' + @feature + '] AS feature, [HasT2DM] AS target
FROM
dbo.[' + @table + ']
GROUP BY
[' + @feature + '], [HasT2DM])
'
EXEC sp_executesql @cmd
END
GO
O_E_1 @feature = 'Diagnosis', @table = 'PatientDiagnoses'
SELECT *
INTO TestTable
FROM (O_E_1 @feature = 'Diagnosis', @table = 'PatientDiagnoses')
GO
I was able to code the answer I need. Here it is.
DROP PROCEDURE IF EXISTS O_E_1
GO
DROP TABLE IF EXISTS TestTable
GO
CREATE PROCEDURE O_E_1
@feature NVARCHAR(128),
@table NVARCHAR(128)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @cmd NVARCHAR(max)
set @cmd = '
(SELECT
COUNT(DISTINCT [PersonID]) AS count_person,
[' + @feature + '] AS feature, [HasT2DM] AS target
FROM
dbo.[' + @table + ']
GROUP BY
[' + @feature + '], [HasT2DM])
'
EXEC sp_executesql @cmd
END
GO
DROP TABLE IF EXISTS Result
CREATE TABLE Result
(count_person numeric,
feature varchar(128),
target varchar(128)
)
INSERT Result EXEC O_E_1 @feature = 'Diagnosis', @table = 'PatientDiagnoses'
SELECT TOP 100 * FROM Result
I have created an audit table that is populated by an audit trail (triggers after every update, delete, and insert) on different tables in my database. I am now asked to create a stored procedure (script) to roll back a data change using the audit id. How do I go about doing so? I wrote a script which seems good, and the command is accepted by SQL Server (command completed successfully). Unfortunately, when I test it by passing the audit id, the command completes but the data is not rolled back. This is the procedure I developed; any help will be greatly appreciated.
create PROCEDURE [dbo].[spAudit_Rollback_2]
@AUDIT_ID NVARCHAR(MAX)
AS
SET NOCOUNT ON
BEGIN
DECLARE
@TABLE_NAME VARCHAR(100),
@COLUMN VARCHAR(100),
@OLD_VALUE VARCHAR(200),
@ID varchar(50)
SELECT @TABLE_NAME = TABLE_NAME FROM AUDIT;
SELECT @COLUMN = [COLUMN] FROM AUDIT;
SELECT @AUDIT_ID = AUDIT_ID FROM AUDIT;
SELECT @OLD_VALUE = OLD_VALUE FROM AUDIT
SELECT @ID = ROW_DESCRIPTION FROM AUDIT;
update [Production].[UnitMeasure]
set @COLUMN = @OLD_VALUE
WHERE [Production].[UnitMeasure].[UnitMeasureCode] = @ID
END
EXEC [dbo].[spAudit_Rollback_2] '130F0598-EB89-44E5-A64A-ABDFF56809B5'
This is the same script but using the AdventureWorks2017 database and data.
If possible I would even prefer to use a variable to retrieve that table name from Audit and use that in the procedure. That too is giving me another error.
Any help with this procedure will be awesome.
This needs to be dynamic SQL because you're updating a column that's defined in a variable. Do the following in place of your current UPDATE statement.
DECLARE @sql VARCHAR(1000) = ''
SET @sql = 'UPDATE [Production].[UnitMeasure] ' +
'SET ' + @COLUMN + ' = ''' + @OLD_VALUE + ''' ' +
'WHERE [Production].[UnitMeasure].[UnitMeasureCode] = ''' + @ID + ''''
EXEC(@sql)
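A sketch of a slightly safer variant (using the same @COLUMN, @OLD_VALUE, and @ID variables as above): the column name still has to be concatenated in, since identifiers can't be parameters, but QUOTENAME can escape it, and the values can be passed as real parameters via sp_executesql rather than spliced into the string:

```sql
DECLARE @sql NVARCHAR(1000)
-- QUOTENAME escapes the column identifier; the values travel as parameters
SET @sql = 'UPDATE [Production].[UnitMeasure] ' +
           'SET ' + QUOTENAME(@COLUMN) + ' = @OldValue ' +
           'WHERE [UnitMeasureCode] = @Id'
EXEC sp_executesql @sql,
     N'@OldValue VARCHAR(200), @Id VARCHAR(50)',
     @OldValue = @OLD_VALUE, @Id = @ID
```

This sidesteps quoting problems when the old value itself contains an apostrophe, and reduces the injection surface to the column name alone.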
In SQL Server, I have a database abc. In this database I have hundreds of tables. Each of these tables is called xyz.table
I want to change all the tables to be called abc.table.
Do we have a way by which I can change all the names from xyz.table to abc.table in database abc?
I am able to manually change the name by changing the schema for each table to abc
You could have a cursor run over all your tables in the xyz schema and move all of those into the abc schema:
DECLARE TableCursor CURSOR FAST_FORWARD
FOR
-- get the table names for all tables in the 'xyz' schema
SELECT t.Name
FROM sys.tables t
WHERE schema_id = SCHEMA_ID('xyz')
DECLARE @TableName sysname
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
-- iterate over all tables found
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @Stmt NVARCHAR(999)
-- construct T-SQL statement to move table to 'abc' schema
SET @Stmt = 'ALTER SCHEMA abc TRANSFER xyz.' + @TableName
EXEC (@Stmt)
FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
You can use ALTER SCHEMA with the undocumented stored procedure sp_MSforeachtable, which basically iterates through all the tables.
exec sp_MSforeachtable "ALTER SCHEMA new_schema TRANSFER ? PRINT '? modified' "
Change the new_schema placeholder to your new schema.
For details please go through the link
sp_MSforeachtable
Alter Schema for all the tables
As others have pointed out, that SP is undocumented and deprecated, so there is another way to do this by getting the table names from sys.tables:
declare @sql varchar(max), @table varchar(50), @old varchar(50), @new varchar(50)
set @old = 'dbo'
set @new = 'abc'
while exists(select * from sys.tables where schema_name(schema_id) = @old)
begin
-- always pick the first remaining table; the set shrinks as tables are transferred
select top 1 @table = name from sys.tables
where schema_name(schema_id) = @old
order by object_id
set @sql = 'alter schema ' + @new + ' transfer ' + @old + '.' + @table
exec(@sql)
end
I'm assuming you've already created the schema abc in the database.
If not you can refer here
http://www.youtube.com/watch?v=_DDgv8uek6M
http://www.quackit.com/sql_server/sql_server_2008/tutorial/sql_server_database_schemas.cfm
To change the schema of all the tables in a database, you can use the following system stored procedure, sp_MSforeachtable, to change the schema of each table with ALTER SCHEMA.
exec sp_MSforeachtable "ALTER SCHEMA abc TRANSFER ? PRINT '? modified' "
Without using the undocumented/unsupported sp_MSforeachtable procedure, here's a somewhat concise way to select and/or run all of the necessary ALTER statements for every table on the given schema:
declare @oldSchema nvarchar(50) = 'abc' -- usually 'dbo'
declare @newSchema nvarchar(50) = 'xyz' -- use your new schema name
declare @sql nvarchar(max) =
(select
(select N'alter schema [' + @newSchema + '] transfer [' + @oldSchema + '].[' + name + ']
' as 'data()'
from sys.tables
where schema_name(schema_id) = @oldSchema for xml path(''), type)
.value('text()[1]','nvarchar(max)'))
-- You can select out the results for scrutiny
select @sql
-- Or you can execute the results directly
exec (@sql)
This avoids using a cursor, and uses brackets to escape table names that may conflict with SQL keywords.
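One caveat on the manual brackets: if an identifier can itself contain a closing bracket, concatenating `[` and `]` around it isn't enough, whereas QUOTENAME doubles embedded brackets correctly. A minimal illustration:

```sql
-- QUOTENAME escapes the embedded ] that manual bracketing would miss
SELECT QUOTENAME('weird]name')   -- returns [weird]]name]
```

Swapping `'[' + name + ']'` for `QUOTENAME(name)` in the statement above handles such names safely.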