SQL Server: what do tables named #ABCDEF01 contain? - sql-server

I ran the query
select *
from tempdb.sys.objects
where type_desc = 'USER_TABLE'
and I see tables with names like #AB12CD34, #ABCDEF01, etc.
I don't use such a naming convention for temp tables. Is it possible to determine the real names of these tables?

Any table whose name starts with '#' is a temporary table that exists until the session or connection ends, and it is visible only within that session. Any table whose name starts with '##' is a similar kind of table, except that it is global: other sessions / connections can see it.
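For illustration, a minimal sketch of the two flavors (the table names are arbitrary examples):
-- Local temp table: visible only to the creating session,
-- dropped automatically when that session ends
CREATE TABLE #MyLocalTemp (id int);
-- Global temp table: visible to all sessions, dropped when the creating
-- session ends and no other session is still referencing it
CREATE TABLE ##MyGlobalTemp (id int);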

This is not the system naming convention for standard temporary tables.
Temporary tables generally show up with a 128-character name in the format
#YourTempTableName______________ ... _________00000000000D
where the hex digits at the end prevent collisions between different sessions.
Tables named like #AB12CD34 are either table variables/table-valued parameters, or they are cached temporary tables from stored procedures.
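You can observe the first case directly; a quick sketch (the exact name you see will differ, and on a busy server the newest object may not be yours):
-- Declare a table variable, then look at the newest user table in tempdb;
-- it shows up under a hex-style name like #ABCDEF01
DECLARE @T TABLE (id int);
SELECT TOP (1) name, create_date
FROM tempdb.sys.objects
WHERE type_desc = 'USER_TABLE'
ORDER BY create_date DESC;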
When a stored procedure finishes executing, its temp table can be cached so that it does not have to be re-created on the next use. The FCheckAndCleanupCachedTempTable transaction renames the temp table to this format as part of that process.
More about temporary table caching can be found in this blog post.
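As a rough sketch of what makes a temp table cacheable (among other conditions, no named constraints and no DDL against the table after creation), a hypothetical procedure like this qualifies:
-- Hypothetical procedure whose temp table is eligible for caching:
-- the PRIMARY KEY constraint is unnamed and no DDL follows the CREATE
CREATE PROCEDURE dbo.DemoCachedTempTable
AS
BEGIN
    CREATE TABLE #Work (id int PRIMARY KEY);
    INSERT INTO #Work (id) VALUES (1);
    SELECT id FROM #Work;
END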
The cached temp tables belong to the execution context of a cached execution plan. You can see stored procedures with cached execution contexts with
SELECT DB_NAME(t.dbid) AS DatabaseName,
       OBJECT_NAME(t.objectid, t.dbid) AS ObjectName
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) t
JOIN sys.dm_os_memory_objects m1
    ON m1.memory_object_address = cp.memory_object_address
JOIN sys.dm_os_memory_objects m2
    ON m1.page_allocator_address = m2.page_allocator_address
WHERE m2.type = 'MEMOBJ_EXECUTE'
  AND cp.objtype = 'Proc'
You can also see cached temp tables with
select *
from sys.dm_os_memory_cache_entries
where name='tempdb' AND entry_data LIKE '<entry database_id=''2'' entity_type=''object'' entity_id=''-%'
But I don't see any way of linking these together to tell which plan caches which temp object.
You could look at the column names and see if you recognize the table structure from one of your procs.
WITH T
     AS (SELECT *
         FROM tempdb.sys.objects
         WHERE type_desc = 'USER_TABLE'
           AND name = '#' + CONVERT(VARCHAR, CAST(object_id AS BINARY(4)), 2))
SELECT T.name,
       c.name,
       type_name(c.user_type_id) AS Type
FROM T
JOIN tempdb.sys.columns c
    ON c.object_id = T.object_id;
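Note that the CTE's name filter keeps only objects whose name is exactly '#' followed by the eight hex digits of their own object_id, which is precisely the renamed/cached form in question.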

Related

How to find all tables a stored procedure depends on

I want to find all tables used in stored procedures, but my query only gives me the tables that are in the database I'm currently using. Is there a way to also find tables used in stored procedures that live in other databases (on the same server)?
I tried this:
SELECT DISTINCT p.name AS proc_name, t.name AS table_name
FROM sys.sql_dependencies d
INNER JOIN sys.procedures p ON p.object_id = d.object_id
INNER JOIN sys.tables t ON t.object_id = d.referenced_major_id
WHERE p.name like '%sp_example%'
ORDER BY proc_name, table_name
The procedures I need to analyze reference tables from different databases, but the code above only gives me results from one database.
First of all, sys.sql_dependencies is deprecated; you should use sys.sql_expression_dependencies instead. There you will find the four-part names of referenced objects (this includes objects referenced via a linked server, for example).
Second, an object_id in SQL Server only makes sense within its own database. If you are looking for anything outside your current DB, don't join tables on these identifiers; use object names instead.
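A minimal sketch along those lines, reusing the '%sp_example%' filter from the question (the COALESCE calls just assemble whichever parts of the multi-part name were recorded):
SELECT OBJECT_NAME(d.referencing_id) AS proc_name,
       COALESCE(d.referenced_server_name + '.', '')
       + COALESCE(d.referenced_database_name + '.', '')
       + COALESCE(d.referenced_schema_name + '.', '')
       + d.referenced_entity_name AS referenced_object
FROM sys.sql_expression_dependencies AS d
INNER JOIN sys.procedures AS p ON p.object_id = d.referencing_id
WHERE p.name LIKE '%sp_example%'
ORDER BY proc_name, referenced_object;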

How to make consistent queries to SQL Server metadata

My application needs to cache SQL Server metadata (tables, columns, indexes, etc).
It makes several subsequent queries to system tables and views like sysobjects.
Sometimes a data synchronization procedure that creates tables and indexes runs at the same time.
In that case the queried metadata becomes inconsistent:
Application reads the lists of tables and columns.
Data synchronization procedure creates a new table and an index.
Application reads the list of indexes, and the new index belongs to a "non-existing" table.
A simple example to reproduce this.
In session 1:
-- 0. Drop example table if exists
if object_id('test') is not null drop table test
-- 1. Query tables (nothing returned)
select * from sysobjects where name = 'test'
-- 3. Query indexes (index returned for the new table)
select IndexName = x.name, TableName = o.name
from sysobjects o join sysindexes x on x.id = o.id
where o.name = 'test'
In session 2:
-- 2. Create table with index
create table test (id int primary key)
Is there a way to make metadata queries consistent, something like a schema modification lock on the entire database or schema?
Running the metadata queries in a transaction at the serializable isolation level does not help.
You can "simulate" consistency with temp table for sysobjects (tables) and then using this temp table to query for indexes that belong to that tables.
Like this:
if object_id('tempdb..#tempTables') is not null
    drop table #tempTables;

select *
into #tempTables
from sys.objects as o
where o.type = 'U';

select *
from #tempTables as t;

select i.*
from #tempTables as t
inner join sys.indexes as i on t.object_id = i.object_id;

Why aren't system tables updated after compressing tables

SQL Server 2012
I wanted to compress tables and indexes. I ran a search to find the tables that weren't compressed, and manually checked the accuracy of the script by looking at table properties/storage before compressing. I generated scripts for the tables as follows:
ALTER TABLE [R_CompPen].[CP2507BodySystem]
REBUILD WITH (DATA_COMPRESSION=PAGE);
After the script ran I verified the compression through SSMS; however, the script I ran to find the uncompressed tables and generate the statements still showed them as uncompressed.
So the question is: why didn't the ALTER TABLE script update the system tables? And if it actually did, but the results are showing indexes, how can the script be written to show only tables, and conversely, a separate script to show only indexes?
SELECT DISTINCT 'ALTER TABLE ['
       + sc.[name] + '].[' + st.[name]
       + '] REBUILD WITH (DATA_COMPRESSION=PAGE);'
FROM sys.partitions sp
INNER JOIN sys.tables st ON st.object_id = sp.object_id
INNER JOIN sys.schemas sc ON sc.schema_id = st.schema_id
WHERE sp.data_compression = 0
The DISTINCT is the culprit here. Once you have multiple indexes, you also have multiple entries in sys.partitions, but the DISTINCT hides the extra ones.
Here I have a table called Album with 2 indexes, which I compressed using
ALTER TABLE Album REBUILD WITH (DATA_COMPRESSION = PAGE);
After running this statement, the nonclustered index remains uncompressed and keeps appearing in the list.
EDIT:
It turns out that when you only want to know about table-level compression, you simply filter on index_id 0 or 1 (a heap or the clustered index); higher numbers refer to nonclustered indexes. Shameless copy from Barguast's solution to his own question:
SELECT [t].[name] AS [Table],
       [p].[partition_number] AS [Partition],
       [p].[data_compression_desc] AS [Compression]
FROM [sys].[partitions] AS [p]
INNER JOIN [sys].[tables] AS [t] ON [t].[object_id] = [p].[object_id]
WHERE [p].[index_id] IN (0, 1)
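Conversely, a sketch of a script that targets only the nonclustered indexes (index_id > 1), generating ALTER INDEX statements instead (PAGE compression assumed, as in the question):
SELECT DISTINCT 'ALTER INDEX [' + i.[name] + '] ON ['
       + sc.[name] + '].[' + st.[name]
       + '] REBUILD WITH (DATA_COMPRESSION=PAGE);'
FROM sys.partitions sp
INNER JOIN sys.indexes i
    ON i.object_id = sp.object_id AND i.index_id = sp.index_id
INNER JOIN sys.tables st ON st.object_id = sp.object_id
INNER JOIN sys.schemas sc ON sc.schema_id = st.schema_id
WHERE sp.data_compression = 0
  AND sp.index_id > 1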

SQL Server ambiguous query validation

I have just come across a curious SQL Server behaviour.
In my scenario I have a somewhat dynamic database, so I need to check for the existence of tables and columns before running queries involving them.
I can't explain why the query
IF 0 = 1 -- Check if NotExistingTable exists in my database
BEGIN
SELECT NotExistingColumn FROM NotExistingTable
END
GO
executes correctly, but the query
IF 0 = 1 -- Check if NotExistingColumn exists in my ExistingTable
BEGIN
SELECT NotExistingColumn FROM ExistingTable
END
GO
returns Invalid column name 'NotExistingColumn'.
In both cases the IF block is not executed and contains an invalid query (the first references a missing table, the second a missing column).
Is there any reason why the SQL engine checks for errors in only one of the two cases?
Thanks in advance
Deferred name resolution:
Deferred name resolution can only be used when you reference nonexistent table objects. All other objects must exist at the time the stored procedure is created. For example, when you reference an existing table in a stored procedure you cannot list nonexistent columns for that table.
You can look through the system tables for the existence of a specific table / column name:
SELECT t.name AS table_name,
SCHEMA_NAME(schema_id) AS schema_name,
c.name AS column_name
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.name LIKE '%colname%'
AND t.name LIKE '%tablename%'
ORDER BY schema_name, table_name;
The query above pulls back all tables / columns that partially match the given column name and table name; for an exact match, just remove the LIKE wildcards (%).
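Because deferred name resolution never applies to a missing column on an existing table, one way around the compile-time error is to push the statement into dynamic SQL, so it is only compiled if the branch actually runs. A sketch using the names from the question:
IF EXISTS (SELECT 1
           FROM sys.tables AS t
           INNER JOIN sys.columns AS c ON c.object_id = t.object_id
           WHERE t.name = 'ExistingTable'
             AND c.name = 'NotExistingColumn')
BEGIN
    -- the inner batch is compiled only when executed,
    -- so no compile-time column check occurs
    EXEC (N'SELECT NotExistingColumn FROM ExistingTable;');
END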

How can I tell if a database table is being accessed anymore? Want something like a "SELECT trigger"

I have a very large database with hundreds of tables, and after many, many product upgrades, I'm sure half of them aren't being used anymore. How can I tell if a table is actively being selected from? I can't just use Profiler: not only do I want to watch for more than a few days, but there are thousands of stored procedures as well, and Profiler won't translate the SP calls into table access calls.
The only thing I can think of is to create a clustered index on the tables of interest and then monitor sys.dm_db_index_usage_stats to see if there are any seeks or scans on the clustered index, which would mean that data from the table was loaded. However, adding a clustered index to every table is a bad idea (for any number of reasons) and isn't really feasible.
Are there any other options? I've always wanted something like a "SELECT trigger", but there are probably good reasons why SQL Server doesn't have that feature either.
SOLUTION:
Thanks, Remus, for pointing me in the right direction. Using those columns, I've created the following SELECT, which does exactly what I want.
WITH LastActivity (ObjectID, LastAction) AS
(
    SELECT object_id AS TableName,
           last_user_seek AS LastAction
    FROM sys.dm_db_index_usage_stats u
    WHERE database_id = db_id(db_name())
    UNION
    SELECT object_id AS TableName,
           last_user_scan AS LastAction
    FROM sys.dm_db_index_usage_stats u
    WHERE database_id = db_id(db_name())
    UNION
    SELECT object_id AS TableName,
           last_user_lookup AS LastAction
    FROM sys.dm_db_index_usage_stats u
    WHERE database_id = db_id(db_name())
)
SELECT OBJECT_NAME(so.object_id) AS TableName,
       MAX(la.LastAction) AS LastSelect
FROM sys.objects so
LEFT JOIN LastActivity la
    ON so.object_id = la.ObjectID
WHERE so.type = 'U'
  AND so.object_id > 100
GROUP BY OBJECT_NAME(so.object_id)
ORDER BY OBJECT_NAME(so.object_id)
Look in sys.dm_db_index_usage_stats. The last_user_xxx columns contain the last time the table was accessed by user requests. This view resets its tracking after a server restart, so you must leave the server running for a while before relying on its data.
Re: Profiler, if you monitor for SP:StmtCompleted, that will capture all statements executing within a stored procedure, so that will catch table accesses within a sproc. If not everything goes through stored procedures, you may also need the SQL:StmtCompleted event.
There will be a large number of events, so it's probably still not practical to trace over a long period due to the size of the trace. However, you could apply a filter, e.g. where TextData contains the name of the table you want to check for. You could give it a list of table names to filter on at any one time and work through them gradually; you should then get no trace events if none of those tables have been accessed.
Even if you feel it's not a suitable/viable approach for you, I thought it was worth expanding on.
Another solution would be to do a global search of your source code to find references to the tables. You can query the stored procedure definitions to check for matches for a given table, or just generate a complete database script and do a Find on that for table names.
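Querying the definitions directly is straightforward; a sketch (the table name is a placeholder):
-- Find modules (procedures, views, functions, triggers) whose
-- definition mentions the table name
SELECT OBJECT_NAME(object_id) AS object_name
FROM sys.sql_modules
WHERE definition LIKE '%YourTableName%';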
For SQL Server 2008 you should take a look at SQL Audit. It allows you to audit many things, including SELECTs on a table, and it reports to a file or the event log.
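A minimal sketch of such an audit (the names and file path are placeholders):
USE master;
GO
-- Server-level audit that writes to a file
CREATE SERVER AUDIT TableAccessAudit
    TO FILE (FILEPATH = 'C:\Audits\');
ALTER SERVER AUDIT TableAccessAudit WITH (STATE = ON);
GO
USE YourDatabase;
GO
-- Database-level specification: record SELECTs against one table
CREATE DATABASE AUDIT SPECIFICATION TableSelectSpec
FOR SERVER AUDIT TableAccessAudit
    ADD (SELECT ON OBJECT::dbo.SomeTable BY public)
    WITH (STATE = ON);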
The following query uses the query plan cache to see whether there's a reference to a table in any of the plans currently in cache. This is not guaranteed to be 100% accurate (query plans are flushed out under memory pressure), but it can be used to get some insight into table use.
SELECT schema_name(t.schema_id) AS schemaName,
       t.name AS tableName,
       databases.name,
       dm_exec_sql_text.text AS TSQL_Text,
       dm_exec_query_stats.creation_time,
       dm_exec_query_stats.execution_count,
       dm_exec_query_stats.total_worker_time AS total_cpu_time,
       dm_exec_query_stats.total_elapsed_time,
       dm_exec_query_stats.total_logical_reads,
       dm_exec_query_stats.total_physical_reads,
       dm_exec_query_plan.query_plan
FROM sys.dm_exec_query_stats
CROSS APPLY sys.dm_exec_sql_text(dm_exec_query_stats.plan_handle)
CROSS APPLY sys.dm_exec_query_plan(dm_exec_query_stats.plan_handle)
INNER JOIN sys.databases ON dm_exec_sql_text.dbid = databases.database_id
RIGHT JOIN sys.tables t WITH (NOLOCK)
    ON cast(dm_exec_query_plan.query_plan AS varchar(max)) LIKE '%' + t.name + '%'
I had in mind to play with user permissions on different tables, but then I remembered that you can turn tracing on with an ON LOGON trigger, so you might benefit from this (note that the example below is Oracle syntax; SQL Server's counterpart is a server-scoped LOGON trigger):
CREATE OR REPLACE TRIGGER SYS.ON_LOGON_ALL
AFTER LOGON ON DATABASE
WHEN (USER = 'MAX')
BEGIN
    EXECUTE IMMEDIATE 'ALTER SESSION SET SQL_TRACE TRUE';
    --EXECUTE IMMEDIATE 'alter session set events ''10046 trace name context forever level 12''';
EXCEPTION
    WHEN OTHERS THEN
        NULL;
END;
/
Then you can check your trace files.
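For reference, a rough SQL Server counterpart of the trigger shell (the Oracle code above will not run on SQL Server, and SQL Server has no per-session SQL_TRACE switch, so the actual capture would have to be set up separately, e.g. with an Extended Events session filtered on the login; the login name is a placeholder):
CREATE TRIGGER trg_trace_logon
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = N'MAX'
    BEGIN
        -- hook point: e.g. log the login to a table, or rely on an
        -- Extended Events session with a server_principal_name predicate
        RETURN;
    END
END;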
This solution works better for me than the one above, but it is still limited by the same assumption that the server has not been restarted; even so, it gives you a good idea of which tables are not used.
SELECT [name],
       [object_id],
       [principal_id],
       [schema_id],
       [parent_object_id],
       [type],
       [type_desc],
       [create_date],
       [modify_date],
       [is_ms_shipped],
       [is_published],
       [is_schema_published]
FROM [COMTrans].[sys].[all_objects]
WHERE object_id NOT IN (
    SELECT object_id FROM sys.dm_db_index_usage_stats
)
AND [type] = 'U'
ORDER BY [name]
