SQL Server 2012
I wanted to compress tables and indexes. I ran a query to find the tables that weren't compressed and manually checked the script's accuracy by looking at table properties/storage before compressing. I generated scripts for the tables as follows:
ALTER TABLE [R_CompPen].[CP2507BodySystem]
REBUILD WITH (DATA_COMPRESSION=PAGE);
After the script ran I verified the compression through SSMS; however, the script I ran to find the uncompressed tables and generate the statements still showed them as uncompressed.
So the question is: why didn't the ALTER TABLE statement update the system tables? And if it actually did, but the query is showing indexes as well, how can the script be written to show only tables, and conversely, how can a separate script show only indexes?
SELECT DISTINCT 'ALTER TABLE ['
    + sc.[name] + '].[' + st.[name]
    + '] REBUILD WITH (DATA_COMPRESSION=PAGE);'
FROM sys.partitions AS sp
INNER JOIN sys.tables AS st ON st.object_id = sp.object_id
INNER JOIN sys.schemas AS sc ON sc.schema_id = st.schema_id
WHERE sp.data_compression = 0
The DISTINCT is the culprit here. Once a table has multiple indexes, it also has multiple entries in sys.partitions, but the DISTINCT hides the extra entries.
Here I have a table called Album with 2 indexes, which I compressed using
ALTER TABLE Album REBUILD WITH (DATA_COMPRESSION = PAGE);
After running this statement, the nonclustered index remains uncompressed and keeps appearing in the list.
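You can see what is going on by listing the partitions per index for that table. After the table-level rebuild, the heap/clustered index (index_id 0 or 1) is compressed but the nonclustered index is not (a quick check, assuming the table lives in the dbo schema):
SELECT i.name AS IndexName,
       p.index_id,
       p.data_compression_desc
FROM sys.partitions AS p
JOIN sys.indexes AS i
    ON i.object_id = p.object_id
   AND i.index_id = p.index_id
WHERE p.object_id = OBJECT_ID('dbo.Album');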
EDIT:
It turns out that when you only want to know about table-level compression, you simply filter for index_id 0 or 1 (0 is a heap, 1 is the clustered index). Higher numbers refer to nonclustered indexes. Shameless copy from Barguast's solution to his own question:
SELECT [t].[name] AS [Table], [p].[partition_number] AS [Partition],
[p].[data_compression_desc] AS [Compression]
FROM [sys].[partitions] AS [p]
INNER JOIN sys.tables AS [t] ON [t].[object_id] = [p].[object_id]
WHERE [p].[index_id] in (0,1)
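And for the other half of the question, a separate script that targets only the indexes can filter on index_id > 1 and emit ALTER INDEX statements instead; a sketch along the same lines (not part of the original answer):
SELECT 'ALTER INDEX [' + i.[name] + '] ON ['
    + sc.[name] + '].[' + st.[name]
    + '] REBUILD WITH (DATA_COMPRESSION=PAGE);'
FROM sys.partitions AS p
INNER JOIN sys.indexes AS i ON i.object_id = p.object_id AND i.index_id = p.index_id
INNER JOIN sys.tables AS st ON st.object_id = p.object_id
INNER JOIN sys.schemas AS sc ON sc.schema_id = st.schema_id
WHERE p.data_compression = 0
  AND p.index_id > 1;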
My application needs to cache SQL Server metadata (tables, columns, indexes, etc).
It makes several successive queries against system tables and views such as sysobjects.
Sometimes a data synchronization procedure that creates tables and indexes runs at the same time.
In this case the queried metadata becomes inconsistent:
Application reads the list of tables and columns.
Data synchronization procedure creates a new table and index.
Application reads the list of indexes, and the new index belongs to a "non-existent" table.
A simple example to reproduce this.
In session 1:
-- 0. Drop example table if exists
if object_id('test') is not null drop table test
-- 1. Query tables (nothing returned)
select * from sysobjects where name = 'test'
-- 3. Query indexes (index returned for the new table)
select IndexName = x.name, TableName = o.name
from sysobjects o join sysindexes x on x.id = o.id
where o.name = 'test'
In session 2:
-- 2. Create table with index
create table test (id int primary key)
Is there a way to make the metadata queries consistent, something like a schema modification lock on the entire database or a database schema?
Running the metadata queries in a transaction under the SERIALIZABLE isolation level does not help.
You can "simulate" consistency with temp table for sysobjects (tables) and then using this temp table to query for indexes that belong to that tables.
Like this:
if object_id('tempdb..#tempTables') is not null
    drop table #tempTables;

-- take a snapshot of the user tables first
select *
into #tempTables
from sys.objects as o
where o.type = 'U';

-- work from the snapshot: the table list...
select *
from #tempTables t;

-- ...and only the indexes that belong to tables in that snapshot
select i.*
from #tempTables t
inner join sys.indexes as i on t.object_id = i.object_id;
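An alternative, not from the original answer and only a sketch: have both sides cooperate on an application lock, so the metadata reads and the synchronization DDL never interleave. The resource name 'MetadataSync' here is arbitrary.
-- In the metadata-reading application:
BEGIN TRAN;
EXEC sp_getapplock @Resource = 'MetadataSync', @LockMode = 'Shared',
                   @LockOwner = 'Transaction', @LockTimeout = 30000;
-- read all the metadata you need inside this one transaction
SELECT * FROM sys.objects WHERE type = 'U';
SELECT * FROM sys.indexes;
COMMIT; -- releases the application lock

-- In the data synchronization procedure:
BEGIN TRAN;
EXEC sp_getapplock @Resource = 'MetadataSync', @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction', @LockTimeout = 30000;
CREATE TABLE test (id INT PRIMARY KEY);
COMMIT;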
I run the query
select *
from tempdb.sys.objects
where type_desc = 'USER_TABLE'
and see tables named like #AB12CD34, #ABCDEF01, etc.
I don't use such a naming convention for temp tables. Is it possible to determine the real names of these tables?
Any table whose name starts with '#' is a temporary table that exists until the session or connection ends, and it is visible only within the current session. Any table whose name starts with '##' is a similar type of table, except that it is global and other sessions/connections can see it.
This is not the system naming convention for standard temporary tables.
Temporary tables will generally show up with a 128-character name in the format
#YourTempTableName______________ ... _________00000000000D
Where the hex at the end acts to prevent collisions between different sessions.
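You can check this directly; a quick sketch that lists the local temp tables currently in tempdb and their name lengths (which should come out at 128):
SELECT name, LEN(name) AS NameLength
FROM tempdb.sys.objects
WHERE name LIKE '#%'
  AND name NOT LIKE '##%';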
Tables named like #AB12CD34 are either table variables/table-valued parameters, or they are cached temporary tables from stored procedures.
When a stored procedure finishes executing, its temp table can be cached so it does not have to be re-created on the next use. The FCheckAndCleanupCachedTempTable transaction renames the temp table to this format as part of that process.
More about temporary table caching in this blog post.
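A minimal repro of the caching behaviour (a sketch; dbo.TempCacheDemo is a throwaway name, and the cached entry may be evicted under plan-cache pressure):
CREATE PROCEDURE dbo.TempCacheDemo
AS
BEGIN
    -- unnamed constraint, no DDL after creation: this temp table qualifies for caching
    CREATE TABLE #work (id INT PRIMARY KEY);
    INSERT INTO #work (id) VALUES (1);
END;
GO
EXEC dbo.TempCacheDemo;
GO
-- after the call returns, the cached copy shows up under a hex-style name such as #AB12CD34
SELECT name
FROM tempdb.sys.objects
WHERE type = 'U'
  AND name LIKE '#________'; -- exactly eight characters after the '#'
GO
DROP PROCEDURE dbo.TempCacheDemo;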
The cached temp tables belong to the execution context of a cached execution plan. You can see stored procedures with cached execution contexts with
SELECT DB_NAME(dbid) AS DatabaseName,
OBJECT_NAME(objectid, dbid) AS ObjectName
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) t
JOIN sys.dm_os_memory_objects m1
ON m1.memory_object_address = cp.memory_object_address
JOIN sys.dm_os_memory_objects m2
ON m1.page_allocator_address = m2.page_allocator_address
WHERE m2.type = 'MEMOBJ_EXECUTE'
AND cp.objtype = 'Proc'
You can also see cached temp tables with
select *
from sys.dm_os_memory_cache_entries
where name='tempdb' AND entry_data LIKE '<entry database_id=''2'' entity_type=''object'' entity_id=''-%'
But I don't see any way of linking these together to see which plan caches what temp object.
You could look at the column names and see if you recognize the table structure from one of your procs.
WITH T
AS (SELECT *
FROM tempdb.sys.objects
WHERE type_desc = 'USER_TABLE'
AND name = '#' + CONVERT(VARCHAR, CAST(object_id AS BINARY(4)), 2))
SELECT T.name,
c.name,
type_name(c.user_type_id) AS Type
FROM T
JOIN tempdb.sys.columns c
ON c.object_id = T.object_id;
Using SQL Server 2005, upgrading to 2012
If I have an ETL that does the following (simplified):
TRUNCATE TABLE destination
INSERT INTO destination
SELECT *
FROM source
Does this clear the indexes and rebuild them as the rows are inserted? Will I end up with fragmentation?
Suppose it did not truncate the indexes: the database would then be physically inconsistent, so it cannot work that way.
Truncate logically removes all rows and physically creates fresh b-trees for all partitions. As the trees are fresh no fragmentation exists.
Actually, I'm not sure whether the fresh trees have 0 or 1 pages allocated to them, but it doesn't matter. I believe there is a special case for temp tables related to temp table caching; that doesn't matter here either.
The insert from your question works the same way as any other insert. It is not influenced by the previous truncate in any cross-statement way. Whether it causes fragmentation depends on your specific case and is, IMHO, best placed in a new question.
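If you want to check for yourself after the load, something along these lines works (a sketch; dbo.destination stands in for your actual table name):
SELECT i.name AS IndexName,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.destination'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id;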
Challenging @sjaan's response
MSDN "TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes, and so on remain."
SQL team is saying that indexes will exist but with no data pages... You could easily check that with reference
If you check the size of indexes on that table it will be zero
SELECT *
FROM
(
SELECT OBJECT_NAME(i.OBJECT_ID) AS TableName,
i.name AS IndexName,
i.index_id AS IndexID,
8 * SUM(a.used_pages) AS 'Indexsize(KB)'
FROM sys.indexes AS i
JOIN sys.partitions AS p ON p.OBJECT_ID = i.OBJECT_ID
AND p.index_id = i.index_id
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
GROUP BY i.OBJECT_ID,
i.index_id,
i.name
) a
WHERE A.TableName LIKE '%table%'
ORDER BY Tablename,
indexid;
"TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes, and so on remain."
See the article: https://msdn.microsoft.com/en-us/library/ms177570.aspx
There is an old SSIS package that pulls a lot of data from Oracle to our SQL Server database every day. The data is inserted into a non-normalized database, and I'm working on a stored procedure to select that data and insert it into a normalized database. The Oracle databases were overly normalized, so the query I wrote ended up having 12 inner joins to get all the columns I need. Another problem is that I'm dealing with large amounts of data; one table I'm selecting from has over 12 million records. Here is my query:
DECLARE @MewLive TABLE
(
UPC_NUMBER VARCHAR(50),
ITEM_NUMBER VARCHAR(50),
STYLE_CODE VARCHAR(20),
COLOR VARCHAR(8),
SIZE VARCHAR(8),
UPC_TYPE INT,
LONG_DESC VARCHAR(120),
LOCATION_CODE VARCHAR(20),
TOTAL_ON_HAND_RETAIL NUMERIC(14,0),
VENDOR_CODE VARCHAR(20),
CURRENT_RETAIL NUMERIC(14,2)
)
INSERT INTO @MewLive(UPC_NUMBER,ITEM_NUMBER,STYLE_CODE,COLOR,[SIZE],UPC_TYPE,LONG_DESC,LOCATION_CODE,TOTAL_ON_HAND_RETAIL,VENDOR_CODE,CURRENT_RETAIL)
SELECT U.UPC_NUMBER, REPLACE(ST.STYLE_CODE, '.', '')
+ '-' + SC.SHORT_DESC + '-' + REPLACE(SM.PRIM_SIZE_LABEL, '.', '') AS ItemNumber,
REPLACE(ST.STYLE_CODE, '.', '') AS Style_Code, SC.SHORT_DESC AS Color,
REPLACE(SM.PRIM_SIZE_LABEL, '.', '') AS Size, U.UPC_TYPE, ST.LONG_DESC, L.LOCATION_CODE,
IB.TOTAL_ON_HAND_RETAIL, V.VENDOR_CODE, SD.CURRENT_RETAIL
FROM MewLive.dbo.STYLE AS ST INNER JOIN
MewLive.dbo.SKU AS SK ON ST.STYLE_ID = SK.STYLE_ID INNER JOIN
MewLive.dbo.UPC AS U ON SK.SKU_ID = U.SKU_ID INNER JOIN
MewLive.dbo.IB_INVENTORY_TOTAL AS IB ON SK.SKU_ID = IB.SKU_ID INNER JOIN
MewLive.dbo.LOCATION AS L ON IB.LOCATION_ID = L.LOCATION_ID INNER JOIN
MewLive.dbo.STYLE_COLOR AS SC ON ST.STYLE_ID = SC.STYLE_ID INNER JOIN
MewLive.dbo.COLOR AS C ON SC.COLOR_ID = C.COLOR_ID INNER JOIN
MewLive.dbo.STYLE_SIZE AS SS ON ST.STYLE_ID = SS.STYLE_ID INNER JOIN
MewLive.dbo.SIZE_MASTER AS SM ON SS.SIZE_MASTER_ID = SM.SIZE_MASTER_ID INNER JOIN
MewLive.dbo.STYLE_VENDOR AS SV ON ST.STYLE_ID = SV.STYLE_ID INNER JOIN
MewLive.dbo.VENDOR AS V ON SV.VENDOR_ID = V.VENDOR_ID INNER JOIN
MewLive.dbo.STYLE_DETAIL AS SD ON ST.STYLE_ID = SD.STYLE_ID
WHERE (U.UPC_TYPE = 1) AND (ST.ACTIVE_FLAG = 1)
That query pretty much crashes our server. I tried to fix the problem by breaking the query up into smaller queries, but the table variable I use causes the tempdb database to fill the hard drive. I figure this is because the server runs out of memory and crashes. Is there any way to solve this problem?
Have you tried using a real table instead of a temporary one? You can use SELECT ... INTO to create a real table to store the results instead of a temporary one.
The syntax would be:
SELECT
U.UPC_NUMBER,
REPLACE(ST.STYLE_CODE, '.', ''),
....
INTO
MEWLIVE
FROM
MewLive.dbo.STYLE AS ST INNER JOIN
...
The command will create the table, and may help with the memory issues you are seeing.
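One caveat (my note, not part of the original answer): SELECT ... INTO fails if the target table already exists, so a repeatable ETL step would drop it first:
IF OBJECT_ID('MEWLIVE') IS NOT NULL
    DROP TABLE MEWLIVE;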
Additionally, try looking at the execution plan in Query Analyzer, or try the Index Tuning Wizard (the Database Engine Tuning Advisor in later versions) to suggest some indexes that may help speed up the query.
Try running the query from the Oracle server rather than from the SQL server. As it stands, there's most likely going to be a lot of communication over the wire as the query tries to process.
By pre-processing the joins (maybe with a view), you'll only be sending over the results.
Regarding the over-normalization: have you tested whether or not it's an issue in terms of speed? I find it hard to believe that it could be too normalized.
Proper indexing will definitely help, provided the number of rows in this query is not in the "zillions".
Try the following:
The join on dbo.COLOR is redundant if there is a foreign key dbo.STYLE_COLOR(COLOR_ID) => dbo.COLOR(COLOR_ID).
Proper indexes (deliberately generous; review before creating):
USE MewLive
CREATE INDEX ix1 ON dbo.STYLE (STYLE_ID)
INCLUDE (STYLE_CODE, LONG_DESC)
WHERE ACTIVE_FLAG = 1
GO
CREATE INDEX ix2 ON dbo.UPC (SKU_ID)
INCLUDE(UPC_NUMBER)
WHERE UPC_TYPE = 1
GO
CREATE INDEX ix3 ON dbo.SKU(STYLE_ID)
INCLUDE(SKU_ID)
GO
CREATE INDEX ix3_alternative ON dbo.SKU(SKU_ID)
INCLUDE(STYLE_ID)
GO
CREATE INDEX ix4 ON dbo.IB_INVENTORY_TOTAL(SKU_ID, LOCATION_ID)
INCLUDE(TOTAL_ON_HAND_RETAIL)
GO
CREATE INDEX ix5 ON dbo.LOCATION(LOCATION_ID)
INCLUDE(LOCATION_CODE)
GO
CREATE INDEX ix6 ON dbo.STYLE_COLOR(STYLE_ID)
INCLUDE(SHORT_DESC,COLOR_ID)
GO
CREATE INDEX ix7 ON dbo.COLOR(COLOR_ID)
GO
CREATE INDEX ixB ON dbo.STYLE_SIZE(STYLE_ID)
INCLUDE(SIZE_MASTER_ID)
GO
CREATE INDEX ix8 ON dbo.SIZE_MASTER(SIZE_MASTER_ID)
INCLUDE(PRIM_SIZE_LABEL)
GO
CREATE INDEX ix9 ON dbo.STYLE_VENDOR(STYLE_ID)
INCLUDE(VENDOR_ID)
GO
CREATE INDEX ixA ON dbo.VENDOR(VENDOR_ID)
INCLUDE(VENDOR_CODE)
GO
CREATE INDEX ixC ON dbo.STYLE_DETAIL(STYLE_ID)
INCLUDE(CURRENT_RETAIL)
In the SELECT list, replace U.UPC_TYPE with 1 AS UPC_TYPE (the WHERE clause already guarantees the value is 1).
Can you segregate the imports, batching them by SKU/location/vendor/whatever, and run multiple queries to get the data over? Is there a particular reason it all needs to go across in one hit (apart from the ease of writing the query)?
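A hedged sketch of what such batching could look like: wrap the big INSERT ... SELECT in a procedure that takes a location (dbo.LoadMewLiveForLocation is a hypothetical wrapper around the query above with an extra AND IB.LOCATION_ID = @LocationId predicate; an INT LOCATION_ID is assumed), then loop over the locations:
DECLARE @LocationId INT;

DECLARE loc CURSOR LOCAL FAST_FORWARD FOR
    SELECT LOCATION_ID FROM MewLive.dbo.LOCATION;

OPEN loc;
FETCH NEXT FROM loc INTO @LocationId;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- hypothetical wrapper: the same SELECT as above, filtered to one location
    EXEC dbo.LoadMewLiveForLocation @LocationId = @LocationId;
    FETCH NEXT FROM loc INTO @LocationId;
END

CLOSE loc;
DEALLOCATE loc;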
I've been using the Entity Framework with ASP.NET MVC and I'm looking for an easy and fast way to drop all of the information in the database. It takes quite a while to delete all of the information from the entities object and then save the changes to the database (probably because there are a lot of many-to-many relationships), and I think it should be really fast to just remove all of the information with a stored procedure, but I'm not sure how to go about this. How do I create and use a stored procedure for SQL Server that will delete the data in all tables in a database with VS 2010? Also, if I do this, will the command be compatible with other versions of SQL Server? (I'm using 2008 on my testing computer, but I'm not sure whether my hosting company uses 2008 or 2005.)
Thanks!!
This solution will work well in terms of deleting all your data in your database's tables.
You can create this stored proc right within Visual Studio on your SQL Server 2008 development server. It'll work well in any version of SQL Server (2000+).
CREATE PROC NukeMyDatabase
AS
--order is important here. delete data in FK'd tables first.
DELETE Foo
DELETE Bar
TRUNCATE TABLE Baz
I prefer TRUNCATE TABLE, as it's faster. It'll depend on your data model, as you can't issue a TRUNCATE TABLE on a table referenced by a foreign key constraint (i.e. parent tables).
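If you want to see which tables fall into that category (foreign-key parents that can only be DELETEd, not TRUNCATEd), a quick check along these lines works:
SELECT DISTINCT OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable
FROM sys.foreign_keys AS fk
ORDER BY ReferencedTable;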
You could then call this stored proc using Entity Framework after adding it to your .edmx:
myContext.NukeMyDatabase();
I recently faced a similar problem in that I had to clear over 200 tables that were interlinked through many foreign key constraints.
The critical issue, as p.campbell pointed out, is determining the correct order of DELETE statements.
The foreign key constraints between tables essentially represent a hierarchy. If table 3 is dependent on table 2, and table 2 is dependent on table 1, then table 1 is the root and table 3 is the leaf.
In other words, if you're going to delete from these three tables, you have to start with the leaf (the table that no other table depends on) and work your way back to the root. That is the intent of this code:
DECLARE @sql VARCHAR(MAX)
SET @sql = ''
;WITH c AS
(
SELECT
parent_object_id AS org_child,
parent_object_id,
referenced_object_id,
1 AS Depth
FROM sys.foreign_keys
UNION ALL
SELECT
c.org_child,
k.parent_object_id,
k.referenced_object_id,
Depth + 1
FROM c
INNER JOIN sys.foreign_keys k
ON c.referenced_object_id = k.parent_object_id
WHERE c.parent_object_id != k.referenced_object_id
),
c2 AS (
SELECT
OBJECT_NAME(org_child) AS ObjectName,
MAX(Depth) AS Depth
FROM c
GROUP BY org_child
UNION ALL
SELECT
OBJECT_NAME(object_id),
0 AS Depth
FROM sys.objects o
LEFT OUTER JOIN c
ON o.object_id = c.org_child
WHERE c.org_child IS NULL
AND o.type = 'U'
)
SELECT @sql = @sql + 'DELETE FROM ' + CAST(ObjectName AS VARCHAR(100))
+ ';' + CHAR(13) + CHAR(10) /** for readability in PRINT statement */
FROM c2
ORDER BY Depth DESC
PRINT @sql
/** EXEC (@sql) **/
exec sp_MSForEachTable 'truncate table ?';
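Note that this truncate-everything one-liner will fail for any table that is referenced by a foreign key. A commonly used variant (sp_MSForEachTable is undocumented and unsupported, so treat this as a sketch) disables all constraints, deletes, and then re-enables and re-validates them:
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
EXEC sp_MSForEachTable 'DELETE FROM ?';
EXEC sp_MSForEachTable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';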
But I would recommend a different approach: take a backup of the empty database and simply restore that backup before each run. Even better, have no database at all and have your application be capable of deploying the database itself, using a set of schema version upgrade scripts.