I'm new to SQL Server, replication, and the concept of CDC. I've done a few primer tutorials on CDC. My problem is this: because CDC is generating so much data and taking up so much space, we need to make it more efficient. The decision is to move the CDC tables to a new filegroup. The following are the options for it (and all carry a danger of potentially breaking CDC):
i) Re-create a primary key on each table
ii) Alter the table create statement
iii) Move the whole CDC schema to a new filegroup
Please suggest/guide how to go about this?
Regards,
CD
OK, so no one answered my question. I waited for a day, and not even a comment. Anyway, I worked out the answer myself, so here it is; hopefully it will at least get some votes! :P
Two logical options:
1) Disable CDC and then re-enable it, changing the filegroup in the process. This seems logical, but you lose all the previous CDC data and might lose CDC metadata. Still, this might be useful for some, so the script follows:
DECLARE @RowNo INT, @RowCount INT, @Capture_Instance VARCHAR(200), @strSQL NVARCHAR(1000)

SET @RowCount = 0
SET @RowNo = 1
SET @Capture_Instance = ''
SET @strSQL = ''

DECLARE @myTable TABLE (Capture_Instance VARCHAR(200), RowNo INT)

INSERT INTO @myTable
SELECT capture_instance, ROW_NUMBER() OVER (ORDER BY source_object_id) AS RN
FROM cdc.change_tables

SET @RowCount = @@ROWCOUNT

WHILE @RowNo <= @RowCount
BEGIN
    SELECT @Capture_Instance = Capture_Instance FROM @myTable WHERE RowNo = @RowNo

    SET @strSQL = 'sys.sp_cdc_disable_table @source_schema = N''' + LEFT(@Capture_Instance, CHARINDEX('_', @Capture_Instance) - 1) + ''',
        @source_name = N''' + SUBSTRING(@Capture_Instance, CHARINDEX('_', @Capture_Instance) + 1, LEN(@Capture_Instance)) + ''',
        @capture_instance = N''All'''
    EXEC sp_executesql @strSQL /* Disabling CDC on the table */

    SET @strSQL = 'sys.sp_cdc_enable_table @source_schema = N''' + LEFT(@Capture_Instance, CHARINDEX('_', @Capture_Instance) - 1) + '''
        ,@source_name = N''' + SUBSTRING(@Capture_Instance, CHARINDEX('_', @Capture_Instance) + 1, LEN(@Capture_Instance)) + '''
        ,@role_name = N''cdc_Admin''
        ,@filegroup_name = N''CDCFileGroup'';'
    EXEC sp_executesql @strSQL /* Re-enabling CDC with the new CDCFileGroup (this filegroup must have been created before running this script) */

    SET @RowNo += 1
END
2) This is the correct solution! Re-create the unique clustered index, changing the filegroup in the process. This preserves the previous CDC data and everything else. You just need to make sure that the filegroup has already been created and that it contains files whose size you have set (for more info, ask in a comment; a sketch of the filegroup setup follows the script). Script for this:
/*CREATING CLUSTERED INDEX, AND DROPPING CLUSTERED INDEX, TOGETHER*/
CREATE UNIQUE CLUSTERED INDEX dbo_YourTableName_CT_clustered_idx
ON cdc.dbo_YourTableName_CT ( [__$start_lsn] ASC,
[__$seqval] ASC,
[__$operation] ASC)
WITH (DROP_EXISTING = ON)
ON CDCFileGroup /*Your File Group Name*/
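Since the index rebuild above assumes the target filegroup already exists and has files in it, here is a minimal sketch of that setup (the database name, path, and sizes are placeholders; adjust them to your CDC volume):

ALTER DATABASE YourDatabase ADD FILEGROUP CDCFileGroup

ALTER DATABASE YourDatabase
ADD FILE (
    NAME = N'CDC_Data1',
    FILENAME = N'D:\SQLData\CDC_Data1.ndf',  -- hypothetical path
    SIZE = 512MB,
    FILEGROWTH = 128MB
) TO FILEGROUP CDCFileGroup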
When assigning a variable, is there a functional difference between SET and SELECT?
I often write scripts that have to iterate through processes, and on each iteration I update a variable with a value.
For example, my company's product has multiple servers, and on each server we have a certain number of databases that our clients' data resides in. Each database has between 5 and 50 clients. A table in a primary database indicates which of the individual databases each client is on. Today I found a problem with the primary key on one table, and we need to modify the primary key to add a column to it. The table on each database may have several hundred thousand records, so we expect the updates to take some time, and we need to do them overnight to avoid performance issues. So I've written the following script to iterate through the process on each database (I'll execute it separately on each server):
DECLARE @DBTABLE TABLE
(
    TableID INT IDENTITY PRIMARY KEY NOT NULL,
    DbName VARCHAR(50) NOT NULL,
    ServerName VARCHAR(50) NOT NULL,
    ProcFlag INT NOT NULL DEFAULT 0
)

INSERT INTO @DBTABLE (DbName, ServerName)
SELECT DISTINCT DbName, ServerName
FROM PrimaryDatabase.dbo.Cients WITH (NOLOCK)
WHERE ClientInactive = 0
  AND ServerName = @@SERVERNAME

DECLARE @TABLETEST INT
DECLARE @TABLEID INT
DECLARE @DBNAME VARCHAR(50)
DECLARE @SERVERNAME VARCHAR(50)
DECLARE @VAR_SQL VARCHAR(MAX)

SET @TABLETEST = (SELECT COUNT(*) FROM @DBTABLE WHERE ProcFlag = 0)

WHILE @TABLETEST > 0
BEGIN
    SET @TABLEID = (SELECT MIN(TableID) FROM @DBTABLE WHERE ProcFlag = 0)
    SET @DBNAME = (SELECT DbName FROM @DBTABLE WHERE TableID = @TABLEID)
    SET @SERVERNAME = (SELECT ServerName FROM @DBTABLE WHERE TableID = @TABLEID)

    SET @VAR_SQL = '
        ALTER TABLE ' + @DBNAME + '.dbo.ClientDealTable DROP CONSTRAINT [PK_ClientDealTable]
        ALTER TABLE ' + @DBNAME + '.dbo.ClientDealTable ADD CONSTRAINT [PK_ClientDealTable] PRIMARY KEY CLUSTERED ([ClientID] ASC, [DealNumber] ASC, [DealDate] ASC)
        '

    EXEC (@VAR_SQL)

    UPDATE @DBTABLE SET ProcFlag = 1 WHERE TableID = @TABLEID
    SET @TABLETEST = (SELECT COUNT(*) FROM @DBTABLE WHERE ProcFlag = 0)
END
Is SET or SELECT the preferred option here, or does it really matter? Is there a performance difference?
SET can only assign a single variable at a time; using SELECT you can assign any number of variables in one statement.
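A quick illustration of the difference (table and column names hypothetical):

DECLARE @a INT, @b INT

SET @a = 1             -- one variable per SET statement
SELECT @a = 1, @b = 2  -- several variables in a single SELECT

-- One behavioral difference worth knowing: when a query returns no rows,
-- SELECT @a = SomeCol FROM SomeTable WHERE 1 = 0 leaves @a unchanged,
-- while SET @a = (SELECT SomeCol FROM SomeTable WHERE 1 = 0) sets @a to NULL.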
But in your code I wouldn't use either; I would do this without looping. Not to mention there is a mountain less code to write. The following should do the same thing and is a lot simpler.
DECLARE @VAR_SQL NVARCHAR(MAX)  -- NVARCHAR because sp_executesql requires it
SET @VAR_SQL = N''

-- Concatenate one DROP/ADD pair per distinct database into a single batch
SELECT @VAR_SQL = @VAR_SQL
    + 'ALTER TABLE ' + QUOTENAME(DbName) + '.dbo.ClientDealTable DROP CONSTRAINT [PK_ClientDealTable];'
    + 'ALTER TABLE ' + QUOTENAME(DbName) + '.dbo.ClientDealTable ADD CONSTRAINT [PK_ClientDealTable] PRIMARY KEY CLUSTERED ([ClientID] ASC, [DealNumber] ASC, [DealDate] ASC);'
FROM (SELECT DISTINCT DbName
      FROM PrimaryDatabase.dbo.Cients --WITH(NOLOCK)
      WHERE ClientInactive = 0
        AND ServerName = @@SERVERNAME) AS d

EXEC sp_executesql @VAR_SQL
I was looking at the MERGE command, which seems cool, but it still requires the columns to be specified. I'm looking for something like:
MERGE INTO target AS t
USING source AS s
WHEN MATCHED THEN
UPDATE SET
[all t.fields = s.fields]
WHEN NOT MATCHED THEN
INSERT ([all fields])
VALUES ([all s.fields])
Is it possible?
I'm lazy... this is a cheap proc I wrote that will spit out a general MERGE command for a table. It queries information_schema.columns for column names. I ripped out my source database name, so you have to update the proc to work with your database (look for @SourceDB... I said it was cheap). Anyway, I know others could write it much better; it served my purpose well. (It makes a couple of assumptions that you could add logic to handle, namely that it toggles IDENTITY_INSERT even when a table doesn't have identity columns.) It updates the table in your current context. It was written against SQL Server 2008 to sync up some tables. Use at your own risk, of course.
CREATE PROCEDURE [dbo].[GenerateMergeSQL]
    @TableName VARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @sql VARCHAR(5000), @SourceInsertColumns VARCHAR(5000), @DestInsertColumns VARCHAR(5000), @UpdateClause VARCHAR(5000)
    DECLARE @ColumnName VARCHAR(100), @identityColName VARCHAR(100)
    DECLARE @IsIdentity INT, @IsComputed INT, @Data_Type VARCHAR(50)
    DECLARE @SourceDB AS VARCHAR(200)

    -- source/dest i.e. 'instance.catalog.owner.' - table names will be appended to this
    -- the destination is your current db context
    SET @SourceDB = '[mylinkedserver].catalog.myDBOwner.'

    SET @sql = ''
    SET @SourceInsertColumns = ''
    SET @DestInsertColumns = ''
    SET @UpdateClause = ''
    SET @ColumnName = ''
    SET @IsIdentity = 0
    SET @IsComputed = 0
    SET @identityColName = ''
    SET @Data_Type = ''

    DECLARE @ColNames CURSOR
    SET @ColNames = CURSOR FOR
        SELECT column_name,
               COLUMNPROPERTY(OBJECT_ID(TABLE_NAME), COLUMN_NAME, 'IsIdentity') AS IsIdentity,
               COLUMNPROPERTY(OBJECT_ID(TABLE_NAME), COLUMN_NAME, 'IsComputed') AS IsComputed,
               DATA_TYPE
        FROM information_schema.columns
        WHERE table_name = @TableName
        ORDER BY ordinal_position

    OPEN @ColNames
    FETCH NEXT FROM @ColNames INTO @ColumnName, @IsIdentity, @IsComputed, @Data_Type

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- skip computed and timestamp columns: they can't be inserted or updated
        IF @IsComputed = 0 AND @Data_Type <> 'timestamp'
        BEGIN
            SET @SourceInsertColumns = @SourceInsertColumns +
                CASE WHEN @SourceInsertColumns = '' THEN '' ELSE ',' END +
                'S.' + @ColumnName
            SET @DestInsertColumns = @DestInsertColumns +
                CASE WHEN @DestInsertColumns = '' THEN '' ELSE ',' END +
                @ColumnName
            IF @IsIdentity = 0
            BEGIN
                SET @UpdateClause = @UpdateClause +
                    CASE WHEN @UpdateClause = '' THEN '' ELSE ',' END
                    + @ColumnName + ' = S.' + @ColumnName + CHAR(10)
            END
            IF @IsIdentity = 1 SET @identityColName = @ColumnName
        END
        FETCH NEXT FROM @ColNames INTO @ColumnName, @IsIdentity, @IsComputed, @Data_Type
    END

    CLOSE @ColNames
    DEALLOCATE @ColNames

    SET @sql = 'SET IDENTITY_INSERT ' + @TableName + ' ON;
    MERGE ' + @TableName + ' AS D
    USING ' + @SourceDB + @TableName + ' AS S
    ON (D.' + @identityColName + ' = S.' + @identityColName + ')
    WHEN NOT MATCHED BY TARGET
        THEN INSERT(' + @DestInsertColumns + ')
        VALUES(' + @SourceInsertColumns + ')
    WHEN MATCHED
        THEN UPDATE SET
    ' + @UpdateClause + '
    WHEN NOT MATCHED BY SOURCE
        THEN DELETE
    OUTPUT $action, Inserted.*, Deleted.*;
    SET IDENTITY_INSERT ' + @TableName + ' OFF'

    PRINT @sql
END
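Usage is a one-liner; the proc prints the generated MERGE batch rather than executing it, so you can review it first ('Products' is just a hypothetical table name):

EXEC [dbo].[GenerateMergeSQL] @TableName = 'Products'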
Not everything you wanted, but partially:
WHEN NOT MATCHED THEN
    INSERT
    VALUES (field1, field2, ...)
(The values list has to be complete, and match the order of the fields in your table's definition.)
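To make that concrete, here is a minimal sketch against the ProductsUS/ProductsChina tables defined further down; note that the UPDATE branch still has to name its columns, only the INSERT column list can be omitted:

MERGE INTO ProductsUS AS t
USING ProductsChina AS s
    ON t.ProductID = s.ProductID
WHEN MATCHED THEN
    UPDATE SET ProductName = s.ProductName, Rate = s.Rate
WHEN NOT MATCHED THEN
    INSERT                                        -- no column list
    VALUES (s.ProductID, s.ProductName, s.Rate);  -- must cover every column, in table order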
A simple alternative to MERGE that doesn't name any fields and doesn't need updating whenever the table design changes. This is uni-directional from source to target, but it can be made bi-directional. It only acts on changed records, so it is very fast even across linked servers on a slower connection.
--Two statements, run as one transaction batch
DELETE C
FROM productschina C
JOIN (SELECT * FROM productschina EXCEPT SELECT * FROM productsus) z
    ON c.productid = z.productid

INSERT INTO productschina
SELECT * FROM productsus EXCEPT SELECT * FROM productschina
Here is code to set up the tables to test the above:
--Create a target table
--drop table ProductsUS
CREATE TABLE ProductsUS
(
ProductID INT PRIMARY KEY,
ProductName VARCHAR(100),
Rate MONEY
)
GO
--Insert records into target table
INSERT INTO ProductsUS
VALUES
(1, 'Tea', 10.00),
(2, 'Coffee', 20.00),
(3, 'Muffin', 30.00),
(4, 'Biscuit', 40.00)
GO
--Create source table
--drop table productschina
CREATE TABLE ProductsChina
(
ProductID INT PRIMARY KEY,
ProductName VARCHAR(100),
Rate MONEY
)
GO
--Insert records into source table
INSERT INTO ProductsChina
VALUES
(1, 'Tea', 10.00),
(2, 'Coffee', 25.00),
(3, 'Muffin', 35.00),
(5, 'Pizza', 60.00)
GO
SELECT * FROM ProductsUS
SELECT * FROM ProductsChina
GO
I think this answer deserves a little more love. It's simple, elegant, and it works. However, depending on the tables in question, it may be a little slow, because the EXCEPT clause evaluates every column.
I suspect you can save a little bit of time by just joining on the primary key and the last modified date (if one exists).
DELETE C
FROM productschina C
JOIN (SELECT primary_key, last_mod_date FROM productschina
      EXCEPT
      SELECT primary_key, last_mod_date FROM productsus) z
    ON c.primary_key = z.primary_key  -- primary_key / last_mod_date are placeholders for your own columns

INSERT INTO productschina
SELECT * FROM productsus EXCEPT SELECT * FROM productschina
Can anyone provide a script for rebuilding and reorganizing fragmented indexes when 'avg_fragmentation_in_percent' exceeds certain limits? (Preferably without using a cursor.)
To rebuild use:
ALTER INDEX __NAME_OF_INDEX__ ON __NAME_OF_TABLE__ REBUILD
or to reorganize use:
ALTER INDEX __NAME_OF_INDEX__ ON __NAME_OF_TABLE__ REORGANIZE
Reorganizing should be used at lower fragmentation levels (below about 30%), but only rebuilding, which is heavier on the database, cuts the fragmentation back down to 0%.
For further information see https://msdn.microsoft.com/en-us/library/ms189858.aspx
Two solutions: One simple and one more advanced.
Introduction
There are two solutions available to you, depending on the severity of your issue.
Replace with your own values, as follows:
Replace XXXMYINDEXXXX with the name of an index.
Replace XXXMYTABLEXXX with the name of a table.
Replace XXXMYDATABASEXXX with the name of a database.
Solution 1. Indexing
Rebuild all indexes for a table in offline mode
ALTER INDEX ALL ON XXXMYTABLEXXX REBUILD
Rebuild one specified index for a table in offline mode
ALTER INDEX XXXMYINDEXXXX ON XXXMYTABLEXXX REBUILD
Solution 2. Fragmentation
Fragmentation is an issue in tables that regularly have entries both added and removed.
Check fragmentation percentage
SELECT
ips.[index_id] ,
idx.[name] ,
ips.[avg_fragmentation_in_percent]
FROM
sys.dm_db_index_physical_stats(DB_ID(N'XXXMYDATABASEXXX'), OBJECT_ID(N'XXXMYTABLEXXX'), NULL, NULL, NULL) AS [ips]
INNER JOIN sys.indexes AS [idx] ON [ips].[object_id] = [idx].[object_id] AND [ips].[index_id] = [idx].[index_id]
Fragmentation 5..30%
If the fragmentation value is greater than 5%, but less than 30% then it is worth reorganising indexes.
Reorganise all indexes for a table
ALTER INDEX ALL ON XXXMYTABLEXXX REORGANIZE
Reorganise one specified index for a table
ALTER INDEX XXXMYINDEXXXX ON XXXMYTABLEXXX REORGANIZE
Fragmentation 30%+
If the fragmentation value is 30% or greater, then it is worth rebuilding the indexes in online mode (note that online rebuilds require Enterprise Edition).
Rebuild all indexes in online mode for a table
ALTER INDEX ALL ON XXXMYTABLEXXX REBUILD WITH (ONLINE = ON)
Rebuild one specified index in online mode for a table
ALTER INDEX XXXMYINDEXXXX ON XXXMYTABLEXXX REBUILD WITH (ONLINE = ON)
Query for REBUILD/REORGANIZE Indexes
fragmentation >= 30%: REBUILD
5% <= fragmentation < 30%: REORGANIZE
fragmentation < 5%: do nothing
Query:
SELECT OBJECT_NAME(ind.object_id) AS TableName,
       ind.name AS IndexName,
       indexstats.index_type_desc AS IndexType,
       indexstats.avg_fragmentation_in_percent,
       'ALTER INDEX ' + QUOTENAME(ind.name) + ' ON ' + QUOTENAME(OBJECT_NAME(ind.object_id)) +
       CASE WHEN indexstats.avg_fragmentation_in_percent > 30 THEN ' REBUILD'
            WHEN indexstats.avg_fragmentation_in_percent >= 5 THEN ' REORGANIZE'
            ELSE NULL END AS [SQLQuery] -- below 5% nothing is required, so no query is generated
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
INNER JOIN sys.indexes ind ON ind.object_id = indexstats.object_id
                          AND ind.index_id = indexstats.index_id
WHERE
    --indexstats.avg_fragmentation_in_percent > 10 -- optionally filter by any percentage
    ind.name IS NOT NULL
ORDER BY indexstats.avg_fragmentation_in_percent DESC
Output
TableName  IndexName            IndexType           avg_fragmentation_in_percent  SQLQuery
---------  -------------------  ------------------  ----------------------------  -----------------------------------------------------
Table1     PK_Table1            CLUSTERED INDEX     75                            ALTER INDEX [PK_Table1] ON [Table1] REBUILD
Table1     IX_Table1_col1_col2  NONCLUSTERED INDEX  66.6666666666667              ALTER INDEX [IX_Table1_col1_col2] ON [Table1] REBUILD
Table2     IX_Table2_           NONCLUSTERED INDEX  10                            ALTER INDEX [IX_Table2_] ON [Table2] REORGANIZE
Table2     IX_Table2_           NONCLUSTERED INDEX  3                             NULL
Here is a modified version of the script from http://www.foliotek.com/devblog/sql-server-optimization-with-index-rebuilding, which I found useful enough to post here.
Although it uses a cursor, and I know the main problem with cursors, it can easily be converted to a cursor-less version; see the sketch after the script.
It is well documented, so you can easily read through it and modify it to your needs.
IF OBJECT_ID('tempdb..#work_to_do') IS NOT NULL
    DROP TABLE #work_to_do

BEGIN TRY
    --BEGIN TRAN

    USE yourdbname -- Ensure a USE statement has been executed first.

    SET NOCOUNT ON;

    DECLARE @objectid INT;
    DECLARE @indexid INT;
    DECLARE @partitioncount BIGINT;
    DECLARE @schemaname NVARCHAR(130);
    DECLARE @objectname NVARCHAR(130);
    DECLARE @indexname NVARCHAR(130);
    DECLARE @partitionnum BIGINT;
    DECLARE @partitions BIGINT;
    DECLARE @frag FLOAT;
    DECLARE @pagecount INT;
    DECLARE @command NVARCHAR(4000);

    DECLARE @page_count_minimum SMALLINT
    SET @page_count_minimum = 50

    DECLARE @fragmentation_minimum FLOAT
    SET @fragmentation_minimum = 30.0

    -- Conditionally select tables and indexes from the sys.dm_db_index_physical_stats function
    -- and convert object and index IDs to names.
    SELECT object_id AS objectid,
           index_id AS indexid,
           partition_number AS partitionnum,
           avg_fragmentation_in_percent AS frag,
           page_count AS page_count
    INTO #work_to_do
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
    WHERE avg_fragmentation_in_percent > @fragmentation_minimum
      AND index_id > 0
      AND page_count > @page_count_minimum;

    IF CURSOR_STATUS('global', 'partitions') >= -1
    BEGIN
        PRINT 'partitions CURSOR DELETED';
        CLOSE partitions
        DEALLOCATE partitions
    END

    -- Declare the cursor for the list of partitions to be processed.
    DECLARE partitions CURSOR LOCAL
    FOR
    SELECT *
    FROM #work_to_do;

    -- Open the cursor.
    OPEN partitions;

    -- Loop through the partitions.
    WHILE (1 = 1)
    BEGIN;
        FETCH NEXT
        FROM partitions
        INTO @objectid, @indexid, @partitionnum, @frag, @pagecount;

        IF @@FETCH_STATUS < 0
            BREAK;

        SELECT @objectname = QUOTENAME(o.name),
               @schemaname = QUOTENAME(s.name)
        FROM sys.objects AS o
        JOIN sys.schemas AS s ON s.schema_id = o.schema_id
        WHERE o.object_id = @objectid;

        SELECT @indexname = QUOTENAME(name)
        FROM sys.indexes
        WHERE object_id = @objectid
          AND index_id = @indexid;

        SELECT @partitioncount = COUNT(*)
        FROM sys.partitions
        WHERE object_id = @objectid
          AND index_id = @indexid;

        SET @command = N'ALTER INDEX ' + @indexname + N' ON '
            + @schemaname + N'.' + @objectname + N' REBUILD';

        IF @partitioncount > 1
            SET @command = @command + N' PARTITION='
                + CAST(@partitionnum AS NVARCHAR(10));

        EXEC (@command);
        --PRINT (@command); -- uncomment for testing

        PRINT N'Rebuilding index ' + @indexname + ' on table ' + @objectname;
        PRINT N'  Fragmentation: ' + CAST(@frag AS VARCHAR(15));
        PRINT N'  Page Count: ' + CAST(@pagecount AS VARCHAR(15));
        PRINT N' ';
    END;

    -- Close and deallocate the cursor.
    CLOSE partitions;
    DEALLOCATE partitions;

    -- Drop the temporary table.
    DROP TABLE #work_to_do;

    --COMMIT TRAN
END TRY
BEGIN CATCH
    --ROLLBACK TRAN
    PRINT 'ERROR ENCOUNTERED: ' + ERROR_MESSAGE()
END CATCH
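As mentioned above, the cursor can be avoided. A minimal cursor-less sketch using the same thresholds (it skips the per-partition handling of the full script, so treat it as a starting point only):

DECLARE @cmds NVARCHAR(MAX)
SET @cmds = N''

-- Build one ALTER INDEX ... REBUILD per fragmented index, then run the whole batch
SELECT @cmds = @cmds + N'ALTER INDEX ' + QUOTENAME(i.name) + N' ON '
    + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) + N' REBUILD;' + NCHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
INNER JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
INNER JOIN sys.objects o ON o.object_id = ips.object_id
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE ips.avg_fragmentation_in_percent > 30.0
  AND ips.index_id > 0
  AND ips.page_count > 50

PRINT @cmds              -- review first
EXEC sp_executesql @cmds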
The real answer, in 2016 and 2017, is: Use Ola Hallengren's scripts:
https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
That is all any of us need to know or bother with, at this point in our mutual evolution.
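For reference, a typical IndexOptimize call looks something like this (parameter names quoted from memory; check the page above for the current, authoritative list):

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,                                          -- below 5%: do nothing
    @FragmentationMedium = 'INDEX_REORGANIZE',                         -- 5-30%: reorganize
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE', -- 30%+: rebuild
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30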
I have found that the following script is very good at maintaining indexes; you can schedule it to run nightly or on whatever other timeframe you wish.
http://sqlfool.com/2011/06/index-defrag-script-v4-1/
[This is a bit of an unusual problem, I know...]
What I need is a script that will change every unique id value in our database to a new one. The problem is that we have configuration tables that can be exported between instances of our software, and the import is id-sensitive (it clobbers existing ids). Years ago, we set up a "wide-enough" id gap between our development "standard configuration" and our clients' instances, which is now not wide enough :( - i.e., we're getting id conflicts when clients import our standard configuration.
A SQL script to do the following is definitely the simplest, shortest-timeframe thing that we can do; fixing the code is far too complicated and error-prone to consider. Note that we are not "eliminating" the problem here, just changing the gap from 1000s to 1000000s or more (the existing gap took 5 years to fill).
I believe the simplest solution would be to:
change all our tables to UPDATE_CASCADE (none of them are - this will greatly simplify the script)
create an identity table with the new lowest id that we want
For each table, modify the id to the next one in the identity table (using identity insert modifier flags where necessary). Perhaps after each table is processed, we could reset the identity table.
turn off UPDATE_CASCADE, and delete the identity table.
I am seeking any (partial or full) scripts for this.
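To illustrate steps 2 and 3, a minimal sketch of the identity-table idea (the seed of 1000000 is a hypothetical value for the new low id):

-- Hypothetical sketch: an identity table used as an id dispenser
CREATE TABLE #IdSeed (id INT IDENTITY(1, 1), s CHAR(1))
DBCC CHECKIDENT('#IdSeed', RESEED, 1000000)  -- first insert into the fresh table hands out 1000000

INSERT INTO #IdSeed VALUES ('')  -- each insert dispenses the next id
SELECT @@IDENTITY AS NewId       -- use this as the replacement id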
Unfortunately UPDATE_CASCADE doesn't exist in the world of SQL Server. I suggest that for each table you want to re-key, you do the following (pseudocode):
BACKUP DATABASE
CHECK BACKUP WORKS!

FOR EACH TABLE TO BE RE-KEYED
    DROP ALL FOREIGN KEY CONSTRAINTS, INDEXES ETC FROM TABLE
    SELECT ID + Number, ALL_OTHER_FIELDS INTO TEMP_TABLE FROM TABLE
    RENAME TABLE OLD_TABLE
    RENAME TEMP_TABLE TABLE
    FOR ALL TABLES REFERENCING THIS TABLE
        UPDATE FOREIGN_KEY_TABLE SET FK_ID = FK_ID + new number
    END FOR
    RE-APPLY FOREIGN KEY CONSTRAINTS, INDEXES ETC TO TABLE
END FOR

Check it all still works ...
This process could be automated through DMO/SMO objects, but depending on the number of tables involved I'd say using Management Studio to generate scripts that can then be edited is probably quicker. After all, you only need to do this once every 5 years.
Here we go with the code for SQL 2005. It's huge, it's hacky, but it will work (except in the case where you have a primary key that is a composite of two other primary keys).
If someone can re-write this with MrTelly's faster id addition (which wouldn't require building sql from a cursor for each updated row), then I'll mark that as the accepted answer. (If I don't notice the new answer, upvote this - then I'll notice :))
BEGIN TRAN

SET NOCOUNT ON;

DECLARE @newLowId INT
SET @newLowId = 1000000

DECLARE @sql VARCHAR(4000)

--**** SELECT ALL TABLES WITH IDENTITY COLUMNS ****
DECLARE tables SCROLL CURSOR
FOR
SELECT '[' + SCHEMA_NAME(t.schema_id) + '].[' + t.name + ']', c.name
FROM sys.identity_columns c
INNER JOIN sys.objects t
    ON c.object_id = t.object_id
WHERE t.type_desc = 'USER_TABLE'

OPEN tables

DECLARE @Table VARCHAR(100)
DECLARE @IdColumn VARCHAR(100)

CREATE TABLE #IdTable (
    id INT IDENTITY(1,1),
    s CHAR(1)
)

FETCH FIRST FROM tables
INTO @Table, @IdColumn

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT('
    ****************** ' + @Table + ' ******************
    ')

    --Reset the id table to the 'low' id mark - remove this line if you want all records to have distinct ids across the database
    DELETE FROM #IdTable
    DBCC CHECKIDENT('#IdTable', RESEED, @newLowId)

    --**** GENERATE COLUMN SQL (for inserts and deletes - updating identities is not allowed) ****
    DECLARE tableColumns CURSOR FOR
    SELECT column_name FROM information_schema.columns
    WHERE '[' + table_schema + '].[' + table_name + ']' = @Table
      AND column_name <> @IdColumn

    OPEN tableColumns

    DECLARE @columnName VARCHAR(100)
    DECLARE @columns VARCHAR(4000)
    SET @columns = ''

    FETCH NEXT FROM tableColumns INTO @columnName
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @columns = @columns + @columnName
        FETCH NEXT FROM tableColumns INTO @columnName
        IF @@FETCH_STATUS = 0 SET @columns = @columns + ', '
    END

    CLOSE tableColumns
    DEALLOCATE tableColumns

    --**** GENERATE FOREIGN ROW UPDATE SQL ****
    DECLARE foreignkeys SCROLL CURSOR
    FOR
    SELECT con.name,
           '[' + SCHEMA_NAME(f.schema_id) + '].[' + f.name + ']' fTable, fc.column_name,
           '[' + SCHEMA_NAME(p.schema_id) + '].[' + p.name + ']' pTable, pc.column_name
    FROM sys.foreign_keys con
    INNER JOIN sysforeignkeys syscon
        ON con.object_id = syscon.constid
    INNER JOIN sys.objects f
        ON con.parent_object_id = f.object_id
    INNER JOIN information_schema.columns fc
        ON fc.table_schema = SCHEMA_NAME(f.schema_id)
       AND fc.table_name = f.name
       AND fc.ordinal_position = syscon.fkey
    INNER JOIN sys.objects p
        ON con.referenced_object_id = p.object_id
    INNER JOIN information_schema.columns pc
        ON pc.table_schema = SCHEMA_NAME(p.schema_id)
       AND pc.table_name = p.name
       AND pc.ordinal_position = syscon.rkey
    WHERE '[' + SCHEMA_NAME(p.schema_id) + '].[' + p.name + ']' = @Table

    OPEN foreignkeys

    DECLARE @FKeyName VARCHAR(100)
    DECLARE @FTable VARCHAR(100)
    DECLARE @FColumn VARCHAR(100)
    DECLARE @PTable VARCHAR(100)
    DECLARE @PColumn VARCHAR(100)

    --**** RE-WRITE ALL IDS IN THE TABLE ****
    SET @sql = 'DECLARE tablerows CURSOR FOR
    SELECT CAST(' + @IdColumn + ' AS VARCHAR) FROM ' + @Table + ' ORDER BY ' + @IdColumn
    PRINT(@sql)
    EXEC(@sql)

    OPEN tablerows

    DECLARE @rowid VARCHAR(100)
    DECLARE @id VARCHAR(100)

    FETCH NEXT FROM tablerows INTO @rowid
    WHILE @@FETCH_STATUS = 0
    BEGIN
        --generate new id
        INSERT INTO #IdTable VALUES ('')
        SELECT @id = CAST(@@IDENTITY AS VARCHAR)

        IF @rowid <> @id
        BEGIN
            PRINT('Modifying ' + @Table + ': changing ' + @rowid + ' to ' + @id)

            SET @sql = 'SET IDENTITY_INSERT ' + @Table + ' ON
            INSERT INTO ' + @Table + ' (' + @IdColumn + ',' + @columns + ') SELECT ' + @id + ',' + @columns + ' FROM ' + @Table + ' WHERE ' + @IdColumn + '=' + @rowid

            --Updating all foreign rows...
            FETCH FIRST FROM foreignkeys
            INTO @FKeyName, @FTable, @FColumn, @PTable, @PColumn
            WHILE @@FETCH_STATUS = 0
            BEGIN
                SET @sql = @sql + '
                UPDATE ' + @FTable + ' SET ' + @FColumn + '=' + @id + ' WHERE ' + @FColumn + ' =' + @rowid
                FETCH NEXT FROM foreignkeys
                INTO @FKeyName, @FTable, @FColumn, @PTable, @PColumn
            END

            SET @sql = @sql + '
            DELETE FROM ' + @Table + ' WHERE ' + @IdColumn + '=' + @rowid
            PRINT(@sql)
            EXEC(@sql)
        END

        FETCH NEXT FROM tablerows INTO @rowid
    END

    CLOSE tablerows
    DEALLOCATE tablerows

    CLOSE foreignkeys
    DEALLOCATE foreignkeys

    --Revert to normal identity operation - update the identity to the latest id...
    DBCC CHECKIDENT(@Table, RESEED, @@IDENTITY)
    SET @sql = 'SET IDENTITY_INSERT ' + @Table + ' OFF'
    PRINT(@sql)
    EXEC(@sql)

    FETCH NEXT FROM tables
    INTO @Table, @IdColumn
END

CLOSE tables
DEALLOCATE tables

DROP TABLE #IdTable

--COMMIT
--ROLLBACK
Why don't you use negative numbers for your standard configuration values and continue to use positive numbers for other things?
I'm trying to look for a value in my Microsoft SQL Server 2008 database but I don't know what column or table to look in. I'm trying to craft a query which will just look in all tables and all columns for my value.
You could probably do it with dynamic SQL; using sys.columns and sys.tables you should be able to create the query.
This will, in all likelihood, be an extremely long-running query.
I rethought my answer: if you run the query below, it will generate a number of SQL statements, and if you run those statements you will find out which column has the value you want. Just replace [your value here] with the appropriate value. This assumes your value is a varchar.
SELECT 'SELECT ''' + TABLE_NAME + '.' + column_name +
''' FROM ' + TABLE_NAME + ' WHERE ' +
column_name + ' = ''[your value here]'''
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE = 'varchar';
You can't do it in a single query. You are going to have to cycle through the sys.tables and sys.columns info views and construct multiple queries (a single one for each table) which will look in all the fields for your value in a very long OR construct (one for each field).
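For what it's worth, a sketch of that generator, limited to string columns (this is just one way to build the OR list; replace [your value here] in the generated statements before running them):

SELECT 'SELECT * FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' WHERE '
     + STUFF((SELECT ' OR ' + QUOTENAME(c.name) + ' = ''[your value here]'''
              FROM sys.columns c
              JOIN sys.types ty ON ty.user_type_id = c.user_type_id
              WHERE c.object_id = t.object_id
                AND ty.name IN ('char', 'varchar', 'nchar', 'nvarchar')
              FOR XML PATH('')), 1, 4, '') AS SearchQuery
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
-- tables with no string columns would produce a NULL SearchQuery; filter them out
WHERE EXISTS (SELECT 1 FROM sys.columns c
              JOIN sys.types ty ON ty.user_type_id = c.user_type_id
              WHERE c.object_id = t.object_id
                AND ty.name IN ('char', 'varchar', 'nchar', 'nvarchar'))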
I wrote this a while back; I'm not exactly sure what it was for anymore. I remember it was before I knew about sp_msForEachTable, though! You might need to adjust the variable sizes (you may as well make them all MAX if you are on 2005+).
CREATE PROC SearchForValues (@search VARCHAR(100))
AS
BEGIN
    DECLARE @i INT
    DECLARE @tbl VARCHAR(50)
    DECLARE @col VARCHAR(50)
    DECLARE @sql VARCHAR(500)

    CREATE TABLE #TEMP (id INT IDENTITY(1,1), colname VARCHAR(50), tblname VARCHAR(50))

    INSERT INTO #TEMP
    SELECT a.name, b.name
    FROM dbo.syscolumns a
    INNER JOIN
    (
        SELECT * FROM dbo.sysobjects WHERE xtype = 'U'
    ) b
        ON a.id = b.id

    CREATE TABLE #SEARCHRESULT (TblName VARCHAR(50), ColName VARCHAR(50))

    -- quote the search value unless it is numeric
    IF ISNUMERIC(@search) = 0 AND @search IS NOT NULL
    BEGIN
        SET @search = '''' + @search + ''''
    END

    SET @i = 1
    WHILE @i <= (SELECT MAX(id) FROM #TEMP)
    BEGIN
        SELECT @tbl = tblname FROM #TEMP WHERE id = @i
        SELECT @col = colname FROM #TEMP WHERE id = @i

        SET @sql = 'IF EXISTS (SELECT *
            FROM [' + @tbl + ']
            WHERE CONVERT(VARCHAR(500), [' + @col + ']) = ' + @search + '
            )
            INSERT INTO #SEARCHRESULT (TblName, ColName) VALUES (''' + @tbl + ''',''' + @col + ''')'

        EXECUTE (@sql)
        SET @i = @i + 1
    END

    DROP TABLE #TEMP

    SELECT * FROM #SEARCHRESULT
    DROP TABLE #SEARCHRESULT
END
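Usage, with a hypothetical search string (numeric values work too, thanks to the ISNUMERIC branch):

EXEC SearchForValues 'foo'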
You can't with plain SQL, unless you use a tool that does it for you (such as PL/SQL Developer for Oracle).
I'm risking being downvoted on this nice 1st of April, but I think it would be easier to grep the data file in this case.
The stored procedure sp_msForEachTable executes a query for each table. This is the simple part. Looking into all columns of every table is the much harder part: for one thing, the columns probably have different data types, so you will probably only be able to perform a string comparison.
But I am quite sure that this is possible using information from the system tables and some system stored procedures. I would try finding a solution that accesses a single column in a single table where the table name and column name are given only as string parameters. At this point dynamic SQL comes to mind. If you solve that, it should become relatively simple to get all table names with all column names from the system tables, join everything together, and put it into a stored procedure. I would like to see the result if you find a solution.
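For what it's worth, a minimal sketch of the sp_msForEachTable part (the ? placeholder expands to each user table's quoted name; the per-column comparison would still have to be built with dynamic SQL inside the command):

-- Runs the command once per user table; ? is replaced with the table name
EXEC sp_MSforeachtable 'PRINT ''Searching table: ?'''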