Tempdb Full When Querying Distinct Count Of All Tables

ORIGINAL PROBLEM
I have created a custom script to retrieve data from a remote SQL server into our local copy in our office. I had some issues with the script where selected tables had some data inserted twice, thus creating duplicates. I know that for all the tables in all databases there should be no duplicates.
This issue has made me paranoid that other tables may have had this problem historically, and therefore I'd like to verify this.
SOLUTION
I have created a SQL script that records the row count and distinct row count of every table into a results table, for all the databases on our server (excluding the 4 system databases):
DECLARE @TableFullName AS NVARCHAR(MAX)
DECLARE @SQLQuery AS NVARCHAR(MAX)
DECLARE @TableHasDuplicates AS BIT
DECLARE @TempTableRowCount AS INT
DECLARE @ResultsTable TABLE ([CompleteTableName] NVARCHAR(200), [CountAll] INT, [CountDistinct] INT)
DECLARE @CountAll INT
DECLARE @CountDistinct INT
SET NOCOUNT ON
DECLARE @AllTables TABLE ([CompleteTableName] NVARCHAR(200))
INSERT INTO @AllTables ([CompleteTableName])
EXEC sp_msforeachdb 'SELECT ''['' + [TABLE_CATALOG] + ''].['' + [TABLE_SCHEMA] + ''].['' + [TABLE_NAME] + '']'' FROM [?].INFORMATION_SCHEMA.TABLES'
SET NOCOUNT OFF;
DECLARE [table_cursor] CURSOR FOR
(SELECT *
FROM @AllTables
WHERE [CompleteTableName] NOT LIKE '%master%' AND [CompleteTableName] NOT LIKE '%msdb%' AND [CompleteTableName] NOT LIKE '%tempdb%' AND [CompleteTableName] NOT LIKE '%model%');
OPEN [table_cursor]
FETCH NEXT FROM [table_cursor]
INTO @TableFullName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @SQLQuery = 'SELECT @CntAll = COUNT(*) FROM ' + @TableFullName + ' SELECT @CntDistinct = COUNT(*) FROM (SELECT DISTINCT * FROM ' + @TableFullName + ') AS [sq] IF @CntAll > @CntDistinct SELECT @BitResult=1 ELSE SELECT @BitResult=0';
EXEC sp_executesql @SQLQuery, N'@BitResult BIT OUTPUT, @CntAll INT OUTPUT, @CntDistinct INT OUTPUT', @BitResult = @TableHasDuplicates OUTPUT, @CntAll = @CountAll OUTPUT, @CntDistinct = @CountDistinct OUTPUT;
IF @TableHasDuplicates = 1
BEGIN
INSERT INTO @ResultsTable ([CompleteTableName], [CountAll], [CountDistinct])
SELECT @TableFullName, @CountAll, @CountDistinct
END;
FETCH NEXT FROM [table_cursor]
INTO @TableFullName
END
CLOSE [table_cursor];
DEALLOCATE [table_cursor];
SELECT @TempTableRowCount = COUNT(*) FROM @ResultsTable
PRINT N'There were ' + CAST(@TempTableRowCount AS NVARCHAR(10)) + ' tables with potential duplicate data'
SELECT *
FROM @ResultsTable
An overview of how it works: the table variable @AllTables uses sp_msforeachdb with INFORMATION_SCHEMA.TABLES to list all the tables in all databases (there are 16537 tables). A table cursor iterates over all non-system entries, and then dynamic SQL performs a count and a distinct count for each table, with any tables showing a mismatch stored in another table variable, @ResultsTable.
THE PROBLEM WITH THIS SOLUTION
When I run this query, it runs for circa 3 minutes and then throws an error saying that the tempdb PRIMARY filegroup is full.
I am my own DBA, and I used Brent Ozar's guide to set up my SQL Server instance; my tempdb is set up with 8 x 3GB mdf/ndf files (the server has 8 cores). These files show as having 23997MB available under 'General' properties.
MY QUESTIONS
If I have circa 24GB of tempdb free space, why is this relatively simple query running out of tempdb space?
Is there a better/more efficient way of getting a count and distinct count of all tables in all databases?

You should always consider contention before adding TempDb files. Adding 7 additional TempDb files won't really help.
If I have circa 24GB of tempdb free space, why is this relatively
simple query running out of tempdb space?
No, it should not. But are you sure that you aren't dealing with a large amount of data, or that no other process is running on the SQL Server? Cursors, temp tables, and even table variables use TempDb extensively. Check which object type is consuming the most TempDb space:
SELECT
SUM (user_object_reserved_page_count)*8 as usr_obj_kb,
SUM (internal_object_reserved_page_count)*8 as internal_obj_kb,
SUM (version_store_reserved_page_count)*8 as version_store_kb,
SUM (unallocated_extent_page_count)*8 as freespace_kb,
SUM (mixed_extent_page_count)*8 as mixedextent_kb
FROM sys.dm_db_file_space_usage
So, if your user and internal objects account for most of the space, it clearly means that you have low TempDb space because of cursors and SQL Server internal usage (e.g. intermediate tables, hash joins, hash aggregation, etc.).
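If those totals point at user or internal objects, the session-level DMV (which applies only to tempdb) can narrow the usage down to the session doing the allocating; a minimal sketch:
-- Per-session tempdb allocations, worst internal-object consumers first.
SELECT su.session_id,
su.user_objects_alloc_page_count * 8 AS user_obj_kb,
su.internal_objects_alloc_page_count * 8 AS internal_obj_kb
FROM sys.dm_db_session_space_usage AS su
ORDER BY su.internal_objects_alloc_page_count DESC;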
Is there a better/more efficiency way of getting a count and distinct
count of all tables in all databases?
You can use the code below to get the row count of all tables in all databases:
DECLARE @Stats TABLE (DBNAME VARCHAR(40), NAME varchar(200), Rows INT)
INSERT INTO @Stats
EXECUTE sp_MSForEachDB
'USE ?; SELECT DB_NAME()AS DBName,
sysobjects.Name
, sysindexes.Rows
FROM
sysobjects
INNER JOIN sysindexes
ON sysobjects.id = sysindexes.id
WHERE
type = ''U''
AND sysindexes.IndId < 2'
SELECT * FROM @Stats
I have written an article on TempDb recommendations; I would suggest reading it to understand which objects can affect TempDb and how to solve its common problems. Ideally, your total TempDb size should be calculated based on observation, which in your case is > 24 GB.
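As a side note, sysobjects and sysindexes are deprecated compatibility views; on SQL Server 2005+ the same row counts can be read from the catalog views. A hedged equivalent sketch (standalone snippet, variable renamed to avoid clashing with the one above):
DECLARE @Stats2 TABLE (DBName sysname, TableName sysname, RowsCount bigint)
INSERT INTO @Stats2
EXECUTE sp_MSForEachDB
'USE ?;
SELECT DB_NAME(), t.name, SUM(p.rows)
FROM sys.tables AS t
INNER JOIN sys.partitions AS p
ON p.[object_id] = t.[object_id]
WHERE p.index_id IN (0, 1) -- heap or clustered index, i.e. the base rows
GROUP BY t.name'
SELECT * FROM @Stats2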
**Edit 1**
If you are unsure whether statistics are up to date, then use the query below to get the count of all tables.
Note: replace the list of databases for which you don't want stats.
DECLARE @ServerStats TABLE (DatabaseName varchar(200), TableName varchar(200), RowsCount INT)
INSERT INTO @ServerStats
exec sp_msforeachdb @command1='
use #;
if ''#'' NOT IN (''master'', ''model'', ''msdb'', ''tempdb'',''ReportServer'')
begin
print ''#''
exec sp_MSforeachtable @command1=''
SELECT ''''#'''' AS DATABASENAME, ''''?'''' AS TABLENAME, COUNT(*) FROM ? ;
''
end
', @replacechar = '#'
SELECT * FROM @ServerStats
Similarly, you can get the distinct count for all tables in all databases with the query below:
DECLARE @ServerStatsDistinct TABLE (DatabaseName varchar(200), TableName varchar(200), RowsCount INT)
INSERT INTO @ServerStatsDistinct
exec sp_msforeachdb @command1='
use #;
if ''#'' NOT IN (''master'', ''model'', ''msdb'', ''tempdb'',''ReportServer'')
begin
print ''#''
exec sp_MSforeachtable @command1=''
SELECT ''''#'''' AS DATABASENAME, ''''?'''' AS TABLENAME, COUNT(*) FROM (
SELECT DISTINCT *
FROM ?
) a ;
''
end
', @replacechar = '#'
SELECT * FROM @ServerStatsDistinct

Related

TRUNCATE multiple tables SQL Server 2014

I want to truncate multiple tables. I know that it isn't possible in the same way that DELETE can delete the rows from multiple tables.
In this question, truncate multi tables, IndoKnight provides the OP-designated best answer. I want to try that. However, I get a syntax error at:
TRUNCATE TABLE @tableName
To troubleshoot, I tried printing the variables, because when I first tried using TRUNCATE TABLE I needed to include the database name and schema (e.g. NuggetDemoDB.dbo.tablename) to get it to work. I CAN print the variable @tableList. But I CANNOT print @tableName.
DECLARE @delimiter CHAR(1),
@tableList VARCHAR(MAX),
@tableName VARCHAR(20),
@currLen INT
SET @delimiter = ','
SET @tableList = 'Employees,Products,Sales'
--PRINT @tableList
WHILE LEN(@tableList) > 0
BEGIN
SELECT @currLen =
(
CASE charindex( @delimiter, @tableList )
WHEN 0 THEN len( @tableList )
ELSE ( charindex( @delimiter, @tableList ) -1 )
END
)
SET @tableName = SUBSTRING (@tableList,1,@currLen )
--PRINT @tableName
TRUNCATE TABLE @tableName
SELECT @tableList =
(
CASE ( len( @tableList ) - @currLen )
WHEN 0 THEN ''
ELSE right( @tableList, len( @tableList ) - @currLen - 1 )
END
)
END
Edit: Fixed the table list to remove the extra "Sales" from the list of tables and added "Employees".
Even though Sales is listed twice... no harm:
Declare @TableList varchar(max)
SET @tableList = 'Sales,Products,Sales'
Set @tableList = 'Truncate Table '+replace(@tablelist,',',';Truncate Table ')+';'
Print @TableList
--Exec(@tablelist) --<< If you are TRULY comfortable with the results
Returns
Truncate Table Sales;Truncate Table Products;Truncate Table Sales
First and foremost, you may want to consider spending a little energy to come up with a SQL implementation for splitting a string into rows, e.g. Split, List, etc. (see the sketch below). This will prove to be helpful not only for this exercise, but for many others. This post is not about how to turn a comma-separated list into rows, though, so we can concentrate on the dynamic SQL needed in order to do what is needed.
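If you don't already have such a function, here is a minimal sketch of one (named List to match the example below); it uses an XML-based split, so it assumes the table names contain no characters that need XML escaping. On SQL Server 2016+ the built-in STRING_SPLIT could be used instead.
CREATE FUNCTION dbo.List (@list varchar(max))
RETURNS TABLE
AS
RETURN
(
-- Turn 'a,b,c' into <i>a</i><i>b</i><i>c</i> and shred it into rows.
SELECT name = LTRIM(RTRIM(n.x.value('.', 'varchar(128)')))
FROM (SELECT CAST('<i>' + REPLACE(@list, ',', '</i><i>') + '</i>' AS xml) AS doc) AS d
CROSS APPLY d.doc.nodes('/i') AS n(x)
);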
Example
The below example assumes that you have a function named List to take care of transposing the comma separated list into rows.
declare
@TableList varchar(max) = 'Sales, Products, Sales';
declare
@Sql varchar(max) = (
select distinct 'truncate table ' + name + ';'
from List(@TableList)
for xml path(''));
exec (@Sql);
One last thing about TRUNCATE vs. DELETE:
Truncate will not work if you are truncating data where there is a foreign key relationship to another table.
You will get something like the below error.
Msg 4712, Level 16, State 1, Line 19
Cannot truncate table 'Something' because it is being referenced by a FOREIGN KEY constraint.
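As a quick pre-check, you can list any foreign keys that reference a table before attempting to truncate it; a sketch (dbo.Sales is a placeholder name):
-- If this returns rows, TRUNCATE TABLE dbo.Sales will fail with Msg 4712.
SELECT fk.name AS constraint_name,
OBJECT_NAME(fk.parent_object_id) AS referencing_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.Sales');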
Below is an example that uses a table variable instead of a delimited list. If the source of your table list is already in a table, you could tweak this script to use that as the source instead. Note that the extra Sales table is redundant (gleaned from the script in your question) and can be removed. The table names can be database and/or schema qualified if desired.
DECLARE @tableList TABLE(TableName nvarchar(393));
DECLARE @TruncateTableBatch nvarchar(MAX);
INSERT INTO @tableList VALUES
(N'Sales')
, (N'Products')
, (N'Sales');
SET @TruncateTableBatch = (SELECT N'TRUNCATE TABLE ' + TableName + N'
'
FROM @tableList
FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)');
--PRINT @TruncateTableBatch;
EXECUTE(@TruncateTableBatch);
What about something like:
exec sp_msforeachtable
@command1 = 'truncate table ?'
,@whereand = ' and object_id in (select object_id from sys.objects where name in (''sales'', ''products''))'
Have not tested it yet. But it might give a useful hint.

Create table on the fly using select into

I am trying to use dynamic SQL to fill a temp table with data from one of several servers, depending on a declared variable. The source data may have more columns added in the future, so I'd like to be able to create the destination temp table based on what columns currently exist, without having to explicitly define it.
I tried creating an empty table with the appropriate columns using:
Select top 1 * into #tempTable from MyTable
Delete from #tempTable
Or:
Select * into #tempTable from MyTable where 1 = 0
Both worked to create an empty table, but when I then try to insert into it:
declare @sql varchar(max) = 'Select * from '
+ case when @server = '1' then 'Server1.' else 'Server2.' end
+ 'database.dbo.MyTable'
Insert into #tempTable
exec(@sql)
I get this error:
Msg 213, Level 16, State 7, Line 1
Column name or number of supplied values does not match table definition.
exec(#sql) works fine on its own. I get this error even when I use the same table, on the same server, for both steps. Is this possible to fix, or do I have to go back to explicitly defining the table with create table?
How about using a global temp table? There are some disadvantages to global temp tables, because they can be accessed by multiple users and databases. Ref: http://sqlmag.com/t-sql/temporary-tables-local-vs-global
DECLARE @sql nvarchar(max) = 'SELECT * INTO ##tempTable FROM '
+ case when @server = '1' THEN 'Server1.' ELSE 'Server2.' END
+ 'database.dbo.MyTable'
EXECUTE sp_executesql @sql
SELECT * FROM ##tempTable
(Thanks to helpful commenter @XQbert)
Replacing the ID column (Int, Identity) in the temp table with a column that was just an int causes
Insert into #tempTable
exec(@sql)
to function as intended.
Both that syntax and
declare @sql varchar(max) = 'Insert into #tempTable Select * from '
+ case when @server = '1' then 'Server1.' else 'Server2.' end
+ 'database.dbo.MyTable'
exec(@sql)
worked, but making insert part of the dynamic sql produced much more helpful error messages for troubleshooting.
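For completeness, a sketch of that fix: SELECT INTO copies the IDENTITY property only when the column is a bare column reference, so wrapping the column in an expression creates a plain int column instead (ID and the other column names here are hypothetical):
-- ISNULL(...) is an expression, so #tempTable.ID is created as a plain,
-- non-identity int and the subsequent INSERT...EXEC works as intended.
SELECT ISNULL(t.ID, 0) AS ID,
t.SomeColumn, t.OtherColumn -- hypothetical remaining columns
INTO #tempTable
FROM database.dbo.MyTable AS t
WHERE 1 = 0; -- copy the structure only, no rows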

UPSERT into sql server from an Excel File

I have a SP that runs every night to insert and update the content of a table based on an Excel file (Excel 2010 on Windows Server 2008 R2). Below is my SP, and the image represents my table's structure and the Excel file format. I just need to double-check my SP with you guys to make sure I am doing this correctly and whether I am on the right track. The Excel file includes 3 columns; the combination of Cust_Num and Cust_Seq is the primary key, since the same combination of Cust_Num and Cust_Seq will never repeat for a customer name. For example, for Cust_Num = 1 and Cust_Seq = 0, there will never be another row with that same combination. However, the name will usually repeat in the spreadsheet. So, would you guys please let me know if the SP is correct or not? (In the SP, first the INSERT statement runs and then the UPDATE statement.)
First, the INSERT runs in the SP:
INSERT INTO Database.dbo.Routing_CustAddress
SELECT a.[Cust Num],a.[Cust Seq],a.[Name]
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
'Excel 8.0;HDR=YES;Database=C:\Data\custaddr.xls;',
'SELECT *
FROM [List_Frame_1$]') a Left join Routing_CustAddress b
on a.[Cust Num] = b.Cust_Num and a.[Cust Seq] = b.Cust_Seq where b.Cust_Num is null
Then, the UPDATE runs in the SP:
UPDATE SPCustAddress
SET SPCustAddress.Name = CustAddress.Name
FROM ArPd_App.dbo.Routing_CustAddress SPCustAddress
INNER JOIN OPENROWSET('Microsoft.ACE.OLEDB.12.0',
'Excel 8.0;HDR=YES;Database=C:\Data\custaddr.xls;',
'SELECT *
FROM [List_Frame_1$]')CustAddress
ON SPCustAddress.Cust_Num = CustAddress.[Cust Num]
AND SPCustAddress.Cust_Seq = CustAddress.[Cust Seq]
Right, here is some code. I haven't tested it, so I'll leave it for you, but it should work.
Create the staging table first.
CREATE TABLE dbo.Routing_CustAddress_Stagging
(
Cust_Num NVARCHAR(80),
Cust_Seq NVARCHAR(80),
Name NVARCHAR(MAX)
)
GO
Then create the following stored procedure. It takes the file path and sheet name as parameters and does the whole lot for you:
1) truncates the staging table,
2) uploads data into the staging table from the provided Excel file and sheet,
3) and finally performs the UPSERT operation in two separate statements.
CREATE PROCEDURE usp_Data_Upload_Via_File
@FilePath NVARCHAR(MAX),
@SheetName NVARCHAR(MAX)
AS
BEGIN
SET NOCOUNT ON;
IF (@FilePath IS NULL OR @SheetName IS NULL)
BEGIN
RAISERROR('Please provide a valid file path and sheet name',16,1)
RETURN;
END
-- Truncate the staging table first
TRUNCATE TABLE dbo.Routing_CustAddress_Stagging;
-- Load data from the Excel sheet
DECLARE @Sql NVARCHAR(MAX);
SET @Sql = N' INSERT INTO dbo.Routing_CustAddress_Stagging (Cust_Num, Cust_Seq, Name) ' +
N' SELECT [Cust Num],[Cust Seq],[Name] ' +
N' FROM OPENROWSET(''Microsoft.ACE.OLEDB.12.0'', ' +
N' ''Excel 8.0;HDR=YES;Database='+ @FilePath + ';'' ,' +
N' ''SELECT * FROM ['+ @SheetName +']'')'
EXECUTE sp_executesql @Sql
-- Now the UPDATE half of the UPSERT
UPDATE T
SET T.Name = ST.Name
FROM dbo.Routing_CustAddress T INNER JOIN dbo.Routing_CustAddress_Stagging ST
ON T.Cust_Num = ST.Cust_Num AND T.Cust_Seq = ST.Cust_Seq
-- Now the INSERT statement
INSERT INTO dbo.Routing_CustAddress
SELECT ST.Cust_Num, ST.Cust_Seq, ST.Name
FROM dbo.Routing_CustAddress_Stagging ST LEFT JOIN dbo.Routing_CustAddress T
ON T.Cust_Num = ST.Cust_Num AND T.Cust_Seq = ST.Cust_Seq
WHERE T.Cust_Num IS NULL
END
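As a side note, since MERGE is available from SQL Server 2008 onward, the UPDATE + INSERT pair could be collapsed into a single statement; an untested sketch against the same staging table:
MERGE dbo.Routing_CustAddress AS T
USING dbo.Routing_CustAddress_Stagging AS ST
ON T.Cust_Num = ST.Cust_Num AND T.Cust_Seq = ST.Cust_Seq
WHEN MATCHED THEN
UPDATE SET T.Name = ST.Name
WHEN NOT MATCHED THEN
INSERT (Cust_Num, Cust_Seq, Name)
VALUES (ST.Cust_Num, ST.Cust_Seq, ST.Name);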

Retrieve column definition for stored procedure result set

I'm working with stored procedures in SQL Server 2008 and I've come to learn that I have to INSERT INTO a temp table that has been predefined in order to work with the data. That's fine, except how do I figure out how to define my temp table, if I'm not the one that wrote the stored procedure other than listing its definition and reading through the code?
For example, what would my temporary table look like for 'EXEC sp_stored_procedure'? That is a simple stored procedure, and I could probably guess at the data types, but it seems there must be a way to just read the type and length of the columns returned from executing the procedure.
So let's say you have a stored procedure in tempdb:
USE tempdb;
GO
CREATE PROCEDURE dbo.my_procedure
AS
BEGIN
SET NOCOUNT ON;
SELECT foo = 1, bar = 'tooth';
END
GO
There is a quite convoluted way you can go about determining the metadata that the stored procedure will output. There are several caveats, including the procedure can only output a single result set, and that a best guess will be made about the data type if it can't be determined precisely. It requires the use of OPENQUERY and a loopback linked server with the 'DATA ACCESS' property set to true. You can check sys.servers to see if you already have a valid server, but let's just create one manually called loopback:
EXEC master..sp_addlinkedserver
@server = 'loopback',
@srvproduct = '',
@provider = 'SQLNCLI',
@datasrc = @@SERVERNAME;
EXEC master..sp_serveroption
@server = 'loopback',
@optname = 'DATA ACCESS',
@optvalue = 'TRUE';
Now that you can query this as a linked server, you can use the result of any query (including a stored procedure call) as a regular SELECT. So you can do this (note that the database prefix is important, otherwise you will get error 11529 and 2812):
SELECT * FROM OPENQUERY(loopback, 'EXEC tempdb.dbo.my_procedure;');
If we can perform a SELECT *, we can also perform a SELECT * INTO:
SELECT * INTO #tmp FROM OPENQUERY(loopback, 'EXEC tempdb.dbo.my_procedure;');
And once that #tmp table exists, we can determine the metadata by saying (assuming SQL Server 2005 or greater):
SELECT c.name, [type] = t.name, c.max_length, c.[precision], c.scale
FROM sys.columns AS c
INNER JOIN sys.types AS t
ON c.system_type_id = t.system_type_id
AND c.user_type_id = t.user_type_id
WHERE c.[object_id] = OBJECT_ID('tempdb..#tmp');
(If you're using SQL Server 2000, you can do something similar with syscolumns, but I don't have a 2000 instance handy to validate an equivalent query.)
Results:
name type max_length precision scale
--------- ------- ---------- --------- -----
foo int 4 10 0
bar varchar 5 0 0
In Denali, this will be much, much, much easier. Again there is still a limitation of the first result set but you don't have to set up a linked server and jump through all those hoops. You can just say:
DECLARE @sql NVARCHAR(MAX) = N'EXEC tempdb.dbo.my_procedure;';
SELECT name, system_type_name
FROM sys.dm_exec_describe_first_result_set(@sql, NULL, 1);
Results:
name system_type_name
--------- ----------------
foo int
bar varchar(5)
Until Denali, I suggest it would be easier to just roll up your sleeves and figure out the data types on your own. Not just because it's tedious to go through the above steps, but also because you are far more likely to make a correct (or at least more accurate) guess than the engine will, since the data type guesses the engine makes will be based on runtime output, without any external knowledge of the domain of possible values. This factor will remain true in Denali as well, so don't get the impression that the new metadata discovery features are a be-all end-all, they just make the above a bit less tedious.
Oh and for some other potential gotchas with OPENQUERY, see Erland Sommarskog's article here:
http://www.sommarskog.se/share_data.html#OPENQUERY
It looks like in SQL 2012 there is a new SP to help with this.
exec sp_describe_first_result_set N'PROC_NAME'
https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-describe-first-result-set-transact-sql
A less sophisticated way (that could be sufficient in some cases): edit your original SP; after the select list of the final SELECT and before its FROM clause, add INTO tmpTable to save the SP result in tmpTable.
Run the modified SP, preferably with meaningful parameters in order to get actual data. Restore the original code of the procedure.
Now you can get the script of tmpTable from SQL server management studio or query sys.columns to get fields descriptions.
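Using the sample procedure from earlier in this thread, the temporary modification would look something like this sketch (tmpTable is an arbitrary name):
ALTER PROCEDURE dbo.my_procedure
AS
BEGIN
SET NOCOUNT ON;
SELECT foo = 1, bar = 'tooth'
INTO dbo.tmpTable; -- added line; remove it to restore the original
END
GO
EXEC dbo.my_procedure; -- creates and fills dbo.tmpTable
SELECT name, system_type_id, max_length
FROM sys.columns
WHERE [object_id] = OBJECT_ID('dbo.tmpTable');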
Here is some code that I wrote. The idea (as someone else stated) is to get the SP code, modify it and execute it. However, my code does not change the original SP.
First step: get the definition of the SP, strip the 'CREATE' part out, and get rid of the 'AS' after the declaration of parameters, if it exists.
Declare @SPName varchar(250)
Set nocount on
Set @SPName = 'ADMIN_Sync_CompareDataForSync' -- the procedure to inspect
Declare @SQL Varchar(max), @SQLReverse Varchar(MAX), @StartPos int, @LastParameterName varchar(250) = '', @TableName varchar(36) = 'A' + REPLACE(CONVERT(varchar(36), NewID()), '-', '')
Select * INTO #Temp from INFORMATION_SCHEMA.PARAMETERS where SPECIFIC_NAME = @SPName
if @@ROWCOUNT > 0
BEGIN
Select @SQL = REPLACE(ROUTINE_DEFINITION, 'CREATE PROCEDURE [' + ROUTINE_SCHEMA + '].[' + ROUTINE_NAME + ']', 'Declare')
from INFORMATION_SCHEMA.ROUTINES
where ROUTINE_NAME = @SPName
Select @LastParameterName = PARAMETER_NAME + ' ' + DATA_TYPE +
CASE WHEN CHARACTER_MAXIMUM_LENGTH is not null THEN '(' +
CASE WHEN CHARACTER_MAXIMUM_LENGTH = -1 THEN 'MAX' ELSE CONVERT(varchar,CHARACTER_MAXIMUM_LENGTH) END + ')' ELSE '' END
from #Temp
WHERE ORDINAL_POSITION =
(Select MAX(ORDINAL_POSITION)
From #Temp)
Select @StartPos = CHARINDEX(@LastParameterName, REPLACE(@SQL, '  ', ' '), 1) + LEN(@LastParameterName)
END
else
Select @SQL = REPLACE(ROUTINE_DEFINITION, 'CREATE PROCEDURE [' + ROUTINE_SCHEMA + '].[' + ROUTINE_NAME + ']', '') from INFORMATION_SCHEMA.ROUTINES where ROUTINE_NAME = @SPName
DROP TABLE #Temp
Select @StartPos = CHARINDEX('AS', UPPER(@SQL), @StartPos)
Select @SQL = STUFF(@SQL, @StartPos, 2, '')
(Note the creation of a new table name based on a unique identifier)
Now find the last 'From' word in the code assuming this is the code that does the select that returns the result set.
Select @SQLReverse = REVERSE(@SQL)
Select @StartPos = CHARINDEX('MORF', UPPER(@SQLReverse), 1)
Change the code to select the resultset into a table (the table based on the uniqueidentifier)
Select @StartPos = LEN(@SQL) - @StartPos - 2
Select @SQL = STUFF(@SQL, @StartPos, 5, ' INTO ' + @TableName + ' FROM ')
EXEC (@SQL)
The result set is now in a table, it does not matter if the table is empty!
Lets get the structure of the table
Select * from INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @TableName
You can now do your magic with this
Don't forget to drop that unique table
Select @SQL = 'drop table ' + @TableName
Exec (@SQL)
Hope this helps!
In order to get a queryable result set, sys.dm_exec_describe_first_result_set (SQL Server 2012) can be used:
SELECT column_ordinal, name, system_type_name
FROM sys.dm_exec_describe_first_result_set(N'EXEC stored_procedure_name', NULL, 0);
db<>fiddle demo
This solution has a few limitations, though; for instance, the SP cannot use temporary tables.
If you are working in an environment with restricted rights, where things like a loopback linked server seem black magic and are definitely "no way!", but you have a few rights on the schema and only a couple of stored procedures to process, there is a very simple solution.
You can use the very helpful SELECT INTO syntax, which will create a new table with result set of a query.
Let's say your procedure contains the following Select query:
SELECT x, y, z
FROM MyTable t INNER JOIN Table2 t2 ON t.id = t2.id...
Instead replace it by :
SELECT x, y, z
INTO MyOutputTable
FROM MyTable t INNER JOIN Table2 t2 ON t.id = t2.id...
When you execute it, it will create a new table MyOutputTable with the results returned by the query.
You just have to do a right click on its name to get the table definition.
That's all !
SELECT INTO only requires the ability to create new tables, and it also works with temporary tables (SELECT... INTO #MyTempTable), but it could be harder to retrieve the definition.
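For the temp-table variant, the definition can still be read from tempdb's catalog while the table exists, mirroring the sys.columns query shown earlier; a sketch:
SELECT c.name, TYPE_NAME(c.user_type_id) AS [type], c.max_length
FROM tempdb.sys.columns AS c
WHERE c.[object_id] = OBJECT_ID('tempdb..#MyTempTable');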
However, of course, if you need to retrieve the output definition of thousands of SPs, it's not the fastest way :)

Why would a Stored Procedure run slower than naked T-SQL?

I have a stored procedure in a MS-SQL 2005 database that:
Creates two temp tables
Executes a query with 7 joins but is not otherwise terribly complex
Inserts the results into one of the temp tables
Executes two more queries (no joins to "real" tables) that put records from one of the temp tables into the other.
Returns a result set from the second temp table
Drops both temp tables
The SP takes two parameters, which are then used in the first query.
When I run the SP for a given set of parameters, it takes 3 minutes to execute.
When I execute the contents of the SP as a regular T-SQL batch (declaring and setting the parameters beforehand), it takes 10 seconds. These numbers are consistent across multiple sequential runs.
This is a huge difference and there's no obvious functional changes. What could be causing this?
UPDATE
Reindexing my tables (DBCC DBREINDEX) sped up the SP version dramatically. The SP version now takes 1 second, while the raw SQL takes 6.
That's great as a solution to the immediate problem, but I'd still like to know the "why".
It might have been exactly due to the fact that in the SP the execution plan was cached and it was not optimal for the data set. When the data set depends greatly on the parameters, or changes considerably between invocations, it's better to specify 'with recompile' in 'create proc'. You lose a fraction of a second on recompilation, but may win minutes on execution.
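A minimal sketch of that option (the procedure, parameter, and table names are placeholders):
CREATE PROCEDURE dbo.usp_GetOrders
@FromDate datetime
WITH RECOMPILE -- compile a fresh plan on every execution
AS
BEGIN
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE OrderDate >= @FromDate;
END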
I've experienced exactly the same problem a couple of times recently (with MS-SQL 2008). Specific stored procedures would be extremely slow to run (minutes) but the same SQL pasted into SSMS took only seconds.
The problem is basically that the stored procedure is using a bad execution plan while the pasted SQL is using a different (and much better) execution plan.
Compare Execution Plans
To test this hypothesis, open a new query window in SSMS and turn on "Include Actual Execution Plan" (Ctrl-M is the keyboard shortcut for this).
Then paste the contents of the stored procedure into the window and follow that with a call to the actual stored procedure.
For example:
SELECT FirstName, LastName FROM Users where ID = 10
EXEC dbo.spGetUserById 10
Run both queries together and then compare the execution plans for both. I have to say that in my case the "Query cost" estimate for each query did not help at all and pointed me in the wrong direction. Instead, look closely at the indexes being used, whether scans are being performed instead of seeks and the number of rows being processed.
There should be a difference in the plans and that should help identify the tables & indexes that need to be investigated further.
To help fix the issue, in one instance I was able to rewrite the stored procedure to avoid using an index scan and instead rely on index seeks.
In another instance, I found that rebuilding the indexes for a specific table used in the query made all the difference.
Find & Update Indexes
I've used this SQL to find and rebuild the appropriate indexes:
/* Originally created by Microsoft */
/* Error corrected by Pinal Dave (http://www.SQLAuthority.com) */
/* http://blog.sqlauthority.com/2008/03/04/sql-server-2005-a-simple-way-to-defragment-all-indexes-in-a-database-that-is-fragmented-above-a-declared-threshold/ */
/* Catch22: Added parameters to filter by table & view proposed changes */
-- Specify your Database Name
USE AdventureWorks
/* Parameters */
Declare @MatchingTableName nvarchar(100) = 'MyTablePrefix' -- Specify Table name (can be prefix of table name) or blank for all tables
DECLARE @ViewOnly bit = 1 -- Set to 1 to view proposed actions, set to 0 to Execute proposed actions:
-- Declare variables
SET NOCOUNT ON
DECLARE @tablename VARCHAR(128)
DECLARE @execstr VARCHAR(255)
DECLARE @objectid INT
DECLARE @indexid INT
DECLARE @frag decimal
DECLARE @maxreorg decimal
DECLARE @maxrebuild decimal
DECLARE @IdxName varchar(128)
DECLARE @ReorgOptions varchar(255)
DECLARE @RebuildOptions varchar(255)
-- Decide on the maximum fragmentation to allow for a reorganize.
-- AVAILABLE OPTIONS: http://technet.microsoft.com/en-us/library/ms188388(SQL.90).aspx
SET @maxreorg = 20.0
SET @ReorgOptions = 'LOB_COMPACTION=ON'
-- Decide on the maximum fragmentation to allow for a rebuild.
SET @maxrebuild = 30.0
-- NOTE: only specify FILLFACTOR=x if x is something other than zero:
SET @RebuildOptions = 'PAD_INDEX=OFF, SORT_IN_TEMPDB=ON, STATISTICS_NORECOMPUTE=OFF, ALLOW_ROW_LOCKS=ON, ALLOW_PAGE_LOCKS=ON'
-- Declare a cursor.
DECLARE tables CURSOR FOR
SELECT CAST(TABLE_SCHEMA AS VARCHAR(100))
+'.'+CAST(TABLE_NAME AS VARCHAR(100))
AS Table_Name
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
AND TABLE_NAME like @MatchingTableName + '%'
-- Create the temporary table.
if exists (select name from tempdb.dbo.sysobjects where name like '#fraglist%')
drop table #fraglist
CREATE TABLE #fraglist (
ObjectName CHAR(255),
ObjectId INT,
IndexName CHAR(255),
IndexId INT,
Lvl INT,
CountPages INT,
CountRows INT,
MinRecSize INT,
MaxRecSize INT,
AvgRecSize INT,
ForRecCount INT,
Extents INT,
ExtentSwitches INT,
AvgFreeBytes INT,
AvgPageDensity INT,
ScanDensity decimal,
BestCount INT,
ActualCount INT,
LogicalFrag decimal,
ExtentFrag decimal)
-- Open the cursor.
OPEN tables
-- Loop through all the tables in the database.
FETCH NEXT
FROM tables
INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
-- Do the showcontig of all indexes of the table
INSERT INTO #fraglist
EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
FETCH NEXT
FROM tables
INTO @tablename
END
-- Close and deallocate the cursor.
CLOSE tables
DEALLOCATE tables
-- Declare the cursor for the list of indexes to be defragged.
DECLARE indexes CURSOR FOR
SELECT ObjectName, ObjectId, IndexId, LogicalFrag, IndexName
FROM #fraglist
WHERE ((LogicalFrag >= @maxreorg) OR (LogicalFrag >= @maxrebuild))
AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0
-- Open the cursor.
OPEN indexes
-- Loop through the indexes.
FETCH NEXT
FROM indexes
INTO @tablename, @objectid, @indexid, @frag, @IdxName
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@frag >= @maxrebuild)
BEGIN
IF (@ViewOnly=1)
BEGIN
PRINT 'WOULD be executing ALTER INDEX ' + RTRIM(@IdxName) + ' ON ' + RTRIM(@tablename) + ' REBUILD WITH ( ' + @RebuildOptions + ' ) -- Fragmentation currently ' + RTRIM(CONVERT(VARCHAR(15),@frag)) + '%'
END
ELSE
BEGIN
PRINT 'Now executing ALTER INDEX ' + RTRIM(@IdxName) + ' ON ' + RTRIM(@tablename) + ' REBUILD WITH ( ' + @RebuildOptions + ' ) -- Fragmentation currently ' + RTRIM(CONVERT(VARCHAR(15),@frag)) + '%'
SELECT @execstr = 'ALTER INDEX ' + RTRIM(@IdxName) + ' ON ' + RTRIM(@tablename) + ' REBUILD WITH ( ' + @RebuildOptions + ' )'
EXEC (@execstr)
END
END
ELSE IF (@frag >= @maxreorg)
BEGIN
IF (@ViewOnly=1)
BEGIN
PRINT 'WOULD be executing ALTER INDEX ' + RTRIM(@IdxName) + ' ON ' + RTRIM(@tablename) + ' REORGANIZE WITH ( ' + @ReorgOptions + ' ) -- Fragmentation currently ' + RTRIM(CONVERT(VARCHAR(15),@frag)) + '%'
END
ELSE
BEGIN
PRINT 'Now executing ALTER INDEX ' + RTRIM(@IdxName) + ' ON ' + RTRIM(@tablename) + ' REORGANIZE WITH ( ' + @ReorgOptions + ' ) -- Fragmentation currently ' + RTRIM(CONVERT(VARCHAR(15),@frag)) + '%'
SELECT @execstr = 'ALTER INDEX ' + RTRIM(@IdxName) + ' ON ' + RTRIM(@tablename) + ' REORGANIZE WITH ( ' + @ReorgOptions + ' )'
EXEC (@execstr)
END
END
FETCH NEXT
FROM indexes
INTO @tablename, @objectid, @indexid, @frag, @IdxName
END
-- Close and deallocate the cursor.
CLOSE indexes
DEALLOCATE indexes
-- Delete the temporary table.
DROP TABLE #fraglist
Does your SP use dynamic T-SQL at all? If so, you'd lose the benefits of cached execution plans...
Failing that, are the connections used to run the SP vs. the T-SQL configured in the same way? Is the speed differential consistent, or is the SP only slow the first time it's run after modification?
This issue can be resolved with different approaches, as shown by Greg Larsen.
Visit https://www.simple-talk.com/sql/t-sql-programming/parameter-sniffing/
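One of the approaches covered there is the classic local-variable workaround, which stops the optimizer from building the plan around the sniffed parameter value; a hedged sketch (all names are placeholders):
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
@CustomerId int
AS
BEGIN
-- Copying the parameter into a local variable makes the optimizer use
-- average density statistics instead of the caller's specific value.
DECLARE @LocalCustomerId int = @CustomerId;
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @LocalCustomerId;
END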
