How to disable CDC for a table manually?

I dropped a table before disabling CDC for it. Now that I've recreated the table, trying to enable CDC fails with an error saying the capture instance already exists. I could use a different capture instance name, but I need to know if there is any way to drop the associated capture instance manually.
When I delete a table through the SSMS GUI, it drops the CDC tables too. But this time I dropped the table using code, and that didn't disable or remove CDC; hence the trouble. The MS documentation mentions a hotfix for when change tables are removed by mistake, but I removed the base table. Any clues on how to remove the capture instance for the dropped table?

Here are the steps I took to remove an orphaned capture instance in CDC:
DROP FUNCTION [cdc].[fn_cdc_get_net_changes_dbo_(tablename)]
DROP FUNCTION [cdc].[fn_cdc_get_all_changes_dbo_(tablename)]
Then run the following:
declare @objid int
set @objid = (select object_id from cdc.change_tables where capture_instance = 'your orphaned capture instance')
delete from cdc.index_columns where object_id = @objid
delete from cdc.captured_columns where object_id = @objid
delete from cdc.change_tables where object_id = @objid
At that point you should be able to re-create your capture instance via sp_cdc_enable_table as normal.

Well, I figured out a way: I removed all the records related to that table from all the CDC system tables and then recreated the capture instance with the same name. It worked!

I had to execute one more step in addition to the answer by pdanke:
DROP TABLE cdc.<capture_instance>_CT
My CDC orphan may have come about when I restored a database where change data capture had been enabled. In my case,
EXECUTE sys.sp_cdc_help_change_data_capture
resulted in one entry where source_schema and source_table were both NULL.

It's simple.
Just use the following script. Note that the third argument is the capture instance name:
EXEC sys.sp_cdc_disable_table 'schema_name', 'source_table_name', 'capture_instance_name'
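The same call is perhaps clearer with named parameters (the schema, table, and capture instance names below are placeholders):
EXEC sys.sp_cdc_disable_table
    @source_schema = N'dbo',             -- schema of the source table
    @source_name = N'MyTable',           -- the source table
    @capture_instance = N'dbo_MyTable';  -- or N'all' to drop every capture instance for the table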


MS SQL Server - safe concurrent use of global temp table?

In MS SQL Server, I'm using a global temp table to store session-related information passed by the client, and then I use that information inside triggers.
Since the same global temp table can be used by different sessions, and it may or may not exist when I want to write into it (depending on whether all the sessions that used it before have closed), I check for the table's existence and create it if necessary before I write into it.
IF OBJECT_ID('tempdb..##VTT_CONTEXT_INFO_USER_TASK') IS NULL
CREATE TABLE ##VTT_CONTEXT_INFO_USER_TASK (
    session_id smallint,
    login_time datetime,
    HstryUserName VDT_USERNAME,
    HstryTaskName VDT_TASKNAME
)
MERGE ##VTT_CONTEXT_INFO_USER_TASK AS target
USING (SELECT @@SPID, @HstryUserName, @HstryTaskName) AS source (session_id, HstryUserName, HstryTaskName)
ON (target.session_id = source.session_id)
WHEN MATCHED THEN
    UPDATE SET HstryUserName = source.HstryUserName, HstryTaskName = source.HstryTaskName
WHEN NOT MATCHED THEN
    INSERT VALUES (@@SPID, @LoginTime, source.HstryUserName, source.HstryTaskName);
The problem is that between my check for the table's existence and the MERGE statement, SQL Server may drop the temp table if all the sessions which were using it before happen to close at that exact instant (this actually happened in my tests).
Is there a best practice on how to avoid this kind of concurrency issues, that a table is not dropped between the check for its existence and its subsequent use?
The notion of "global temporary table" and "trigger" just do not click. Tables are permanent data stores, as are their attributes -- including triggers. Temporary tables are dropped when the server is re-started. Why would anyone design a system where a permanent block of code (trigger) depends on a temporary shared storage mechanism? It seems like a recipe for failure.
Instead of a global temporary table, use a real table. If you like, put a helpful prefix such as temp_ in front of the name. If the table is being shared by databases, then put it in a database where all code has access.
Create the table once and leave it there (deleting the rows is fine) so the trigger code can access it.
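A minimal sketch of what that permanent table could look like, reusing the column definitions from the question (the table name is illustrative; VDT_USERNAME and VDT_TASKNAME are the user-defined types from the question):
CREATE TABLE dbo.temp_context_info_user_task (
    session_id    smallint      NOT NULL PRIMARY KEY,  -- one row per session, same key the MERGE uses
    login_time    datetime      NOT NULL,
    HstryUserName VDT_USERNAME  NULL,
    HstryTaskName VDT_TASKNAME  NULL
);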
I'll start by saying that, in the long term, I will follow Gordon's advice, i.e. I will take the necessary steps to introduce a normal table into the database to store client application information that needs to be accessible in the triggers.
But since this was not really possible now because of time constraints (it takes weeks to get the necessary formal approvals for a new normal table), I came up with a solution to prevent SQL Server from dropping the global temp table between the check for its existence and the MERGE statement.
There is some information out there about when a global temp table is dropped by SQL Server; my personal tests showed that SQL Server drops a global temp table the moment the session which created it is closed and any transactions started in other sessions which changed data in that table have finished.
My solution was to fake a data change on the global temp table even before I check for its existence. If the table exists at that moment, SQL Server then knows it needs to keep the table until the current transaction finishes, so it can no longer be dropped after the existence check. The code now looks like this (properly commented, since it is kind of a hack):
-- Faking a delete on the table ensures that SQL Server will keep the table until the end of the transaction
-- Since ##VTT_CONTEXT_INFO_USER_TASK may actually not exist, we need to fake the delete inside TRY .. CATCH
-- FUTURE 2016, Feb 03: A cleaner solution would use a real table instead of a global temp table.
BEGIN TRY
    -- Because schema errors are detected at compile time, they cannot be caught using TRY;
    -- wrapping the query in sp_executesql defers compilation until execution time
    DECLARE @QueryText NVARCHAR(100) = 'DELETE ##VTT_CONTEXT_INFO_USER_TASK WHERE 0 = 1'
    EXEC sp_executesql @QueryText
END TRY
BEGIN CATCH
    -- nothing to do here (see comment above)
END CATCH
IF OBJECT_ID('tempdb..##VTT_CONTEXT_INFO_USER_TASK') IS NULL
CREATE TABLE ##VTT_CONTEXT_INFO_USER_TASK (
    session_id smallint,
    login_time datetime,
    HstryUserName VDT_USERNAME,
    HstryTaskName VDT_TASKNAME
)
MERGE ##VTT_CONTEXT_INFO_USER_TASK AS target
USING (SELECT @@SPID, @HstryUserName, @HstryTaskName) AS source (session_id, HstryUserName, HstryTaskName)
ON (target.session_id = source.session_id)
WHEN MATCHED THEN
    UPDATE SET HstryUserName = source.HstryUserName, HstryTaskName = source.HstryTaskName
WHEN NOT MATCHED THEN
    INSERT VALUES (@@SPID, @LoginTime, source.HstryUserName, source.HstryTaskName);
Although I would call it a "use it at your own risk" solution, it does prevent the use of the global temp table in other sessions from affecting its use in the current one, which was the concern that made me start this thread.
Thanks all for your time!

How do I add "last modified" and "created" columns in a SQL Server table?

I'm designing a new DB schema for a SQL Server 2012 database.
Each table should get two extra columns called modified and created, which should be changed automatically as soon as a row is inserted or updated.
I don't know the best way to get there; I assume triggers are the way to handle it.
I was trying to find examples with triggers, but the tutorials I found insert data into another table, etc.
I assumed it's quite a common scenario, but I couldn't find the answer yet.
The created column is simple - just a DATETIME2(3) column with a default constraint that gets set when a new row is inserted:
Created DATETIME2(3)
CONSTRAINT DF_YourTable_Created DEFAULT (SYSDATETIME())
So when you insert a row into YourTable and don't specify a value for Created, it will be set to the current date & time.
The modified column is a bit more work, since you'll need to write a trigger for the AFTER UPDATE case and update it there - you cannot declaratively tell SQL Server to do this for you:
Modified DATETIME2(3)
and then
CREATE TRIGGER updateModified
ON dbo.YourTable
AFTER UPDATE
AS
UPDATE dbo.YourTable
SET modified = SYSDATETIME()
FROM Inserted i
WHERE dbo.YourTable.PrimaryKey = i.PrimaryKey
You need to join the inserted pseudo table, which contains all the rows that were updated, with your base table on that table's primary key.
And you'll have to create this AFTER UPDATE trigger for each table that you want to have a modified column in.
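A quick smoke test of the pattern, assuming YourTable has an int primary key ID and a Name column in addition to Created and Modified (all names are illustrative):
-- Created is stamped by the default constraint
INSERT INTO dbo.YourTable (ID, Name) VALUES (1, N'first');
-- Modified is stamped by the AFTER UPDATE trigger
UPDATE dbo.YourTable SET Name = N'second' WHERE ID = 1;
-- Created keeps the insert time, Modified shows the update time
SELECT ID, Created, Modified FROM dbo.YourTable WHERE ID = 1;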
Generally, you can have the following columns:
LastModifiedBy
LastModifiedOn
CreatedBy
CreatedOn
where LastModifiedBy and CreatedBy are references to a users table (UserID) and the LastModifiedOn and CreatedOn columns are date and time columns.
You have the following options:
Solution without triggers - I have read somewhere that "the best way to write triggers is not to write any", and you should know that triggers generally hurt performance. So if you can avoid them, it is better to do so, even though using triggers may look like the easiest thing to do in some cases.
So, just edit all your INSERT and UPDATE statements to include the current user ID and the current date and time. If no user ID can be determined (an anonymous user), you can use 0 instead, and the default value of the columns (in case no user ID is specified) will be NULL. When you see NULL values being inserted, you should find the "guilty" statement and edit it.
Solution with triggers - you can create an AFTER INSERT, UPDATE trigger and populate the user columns there. It's easy to get the current date and time in the context of the trigger (use GETUTCDATE(), for example). The issue here is that triggers do not allow passing/accepting parameters, so as you are not inserting the user ID value, you are not able to pass it to the trigger. How do you find the current user?
You can use SET CONTEXT_INFO and CONTEXT_INFO. Before all your INSERT and UPDATE statements you must use SET CONTEXT_INFO to add the current user ID to the current context, and in the trigger you use the CONTEXT_INFO function to extract it.
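A rough sketch of that round trip; CONTEXT_INFO is a varbinary(128) slot, so the user ID (42 here, purely illustrative) has to be packed and unpacked:
-- In the session, before the INSERT/UPDATE statements:
DECLARE @Ctx varbinary(128) = CAST(42 AS varbinary(4));  -- 42 = current user ID
SET CONTEXT_INFO @Ctx;

-- Inside the trigger, read the first 4 bytes back into an int:
DECLARE @UserID int = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int);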
So, when using triggers you again need to edit all your INSERT and UPDATE statements - that's why I prefer not to use them.
Anyway, if you only need date and time columns and not created/modified-by columns, using triggers is more durable and easier, as you are not going to have to edit any other statements now or in the future.
With SQL Server 2016 we can now use the SESSION_CONTEXT function to read session details. The details are set using sp_set_session_context (as read-only or read-write). Things are a little more user-friendly:
EXEC sp_set_session_context 'user_id', 4;
SELECT SESSION_CONTEXT(N'user_id');
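For instance, a trigger could read back the value the application put into the session context (a sketch; the 'user_id' key and the table/column names are assumptions):
-- In the application's session, before the UPDATE:
EXEC sp_set_session_context 'user_id', 4;

-- Inside an AFTER UPDATE trigger:
UPDATE t
SET LastModifiedOn = SYSUTCDATETIME(),
    LastModifiedBy = CAST(SESSION_CONTEXT(N'user_id') AS int)
FROM dbo.YourTable t
JOIN inserted i ON t.ID = i.ID;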
Attention: the above works fine, but not in all cases.
I lost a lot of time and found this helpful:
create trigger yourtable_update_insert
on yourtable
after update
as
begin
    set nocount on;
    update yourtable set modified = getdate(), modifiedby = suser_sname()
    from yourtable t
    inner join inserted i on t.uniqueid = i.uniqueid
end
go
set nocount on; is needed, otherwise you get this error:
Microsoft SQL Server Management Studio
No row was updated.
The data in row 5 was not committed.
Error Source: Microsoft.SqlServer.Management.DataTools.
Error Message: The row value(s) updated or deleted either do not make the row unique or they alter multiple rows(2 rows).
Correct the errors and retry or press ESC to cancel the change(s).
CREATE TRIGGER [dbo].[updateModified]
ON [dbo].[Transaction_details]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.Transaction_details
    SET ModifiedDate = GETDATE()  -- or SYSDATETIME()
    FROM dbo.Transaction_details t
    JOIN inserted i ON t.TransactionID = i.TransactionID
END
One important thing to consider is that you should always have the inserted / updated time for all of your tables and rows be from the same time source. There is a danger - if you do not use triggers - that different applications making direct updates to your tables will be on machines that have different times on their clocks, or that there will not be consistent use of local vs. UTC in the application layer.
Consider a case where the system making the insert or update query that directly sets the updated / modified time value has a clock that is 5 minutes behind (unlikely, but worth considering) or is using local time versus UTC. If another system is polling using an interval of 1 minute, it might miss the update.
For a number of reasons, I never expose my tables directly to applications. To handle this situation, I create a view on the table explicitly listing the fields to be accessed (including the updated / modified time field). I then use an INSTEAD OF UPDATE, INSERT trigger on the view and explicitly set the updatedAt time using the database server's clock. This way I can guarantee that the timebase for all records in the database is identical.
This has a few benefits:
It only makes one insert to the base table, and you don't have to worry about cascading triggers being called
It allows me to control at the field level what information I expose to the business layer or to other consumers of my data
It allows me to secure the view independently from the base table
It works great on SQL Azure.
Take a look at this example of the trigger on the view:
ALTER TRIGGER [MR3W].[tgUpdateBuilding] ON [MR3W].[vwMrWebBuilding]
INSTEAD OF UPDATE, INSERT AS
BEGIN
SET NOCOUNT ON
IF EXISTS(SELECT * FROM DELETED)
BEGIN
UPDATE [dbo].[Building]
SET [BuildingName] = i.BuildingName
    ,[isActive] = i.isActive
    ,[updatedAt] = getdate()
FROM dbo.Building b
inner join inserted i on i.BuildingId = b.BuildingId
END
ELSE
BEGIN
INSERT INTO [dbo].[Building]
(
[BuildingName]
,[isActive]
,[updatedAt]
)
SELECT
[BuildingName]
,[isActive]
,getdate()
FROM INSERTED
END
END
I hope this helps, and I would welcome comments if there are reasons this is not the best solution.
This solution might not work for all use cases, but wherever possible it's a very clean way.
Create a stored procedure for inserting/updating rows in the table, and only use this procedure to modify the table. In the stored procedure you can always set the created and updated columns as required, e.g. setting updatedTime = GETUTCDATE().
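A sketch of such a procedure (the table, key, and column names are placeholders):
CREATE PROCEDURE dbo.SaveYourTableRow
    @ID   int,
    @Name nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM dbo.YourTable WHERE ID = @ID)
        UPDATE dbo.YourTable
        SET Name = @Name,
            updatedTime = GETUTCDATE()  -- the procedure, not the caller, stamps the time
        WHERE ID = @ID;
    ELSE
        INSERT INTO dbo.YourTable (ID, Name, createdTime, updatedTime)
        VALUES (@ID, @Name, GETUTCDATE(), GETUTCDATE());
END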

Why does this update throw an error even though the Alter Table command should be finished?

This has been a nagging issue for me for some time and I would love to know the reason why these SQL Batch commands aren't working.
I have a table that I use to hold configuration settings for a system. When a new setting is added, we add a new field to the table. During an update, I need to change a slew of databases on the server with the same script. Generally, they are all in the same state and I can just do the following:
Alter Table Configuration Add ShowClassesInCheckin bit;
GO
Update Configuration Set ShowClassesInCheckin=ShowFacilitiesInCheckin;
GO
This works fine. However, sometimes one or two databases get updated so I want to write conditional logic to make these changes only if the field doesn't already exist:
if Not Exists(select * from sys.columns where Name = N'ShowClassesInCheckin' AND Object_ID = Object_ID(N'Configuration'))
BEGIN
Alter Table Configuration Add ShowClassesInCheckin bit;
Update Configuration Set ShowClassesInCheckin=ShowFacilitiesInCheckin;
END;
GO
In this case, I get an error: "Invalid column name 'ShowClassesInCheckin'". Now, this makes sense in that the Alter Table isn't committed in the batch before the Update is called (it doesn't work without the "GO" between the Alter and the Update). But that doesn't help... I still don't know how to achieve what I am after...
The entire batch is parsed and compiled before it's executed. During the parsing phase the column does not exist, so the parser generates an error. The error is raised before the first line of the batch is executed.
The solution is dynamic SQL:
exec (N'Update Configuration Set ShowClassesInCheckin=ShowFacilitiesInCheckin;')
This won't get parsed before the exec is reached, and by then, the column will exist.
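Putting it together with the conditional block from the question, a sketch:
if Not Exists(select * from sys.columns where Name = N'ShowClassesInCheckin' AND Object_ID = Object_ID(N'Configuration'))
BEGIN
    Alter Table Configuration Add ShowClassesInCheckin bit;
    -- parsed only when exec runs, by which time the new column exists
    exec (N'Update Configuration Set ShowClassesInCheckin=ShowFacilitiesInCheckin;');
END;
GO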
An alternative that should work is to re-introduce the GO. This means that you need to use something else as the condition for the update, possibly based on the database name.
if Not Exists(select * from sys.columns where Name = N'ShowClassesInCheckin' AND Object_ID = Object_ID(N'Configuration'))
BEGIN
Alter Table Configuration Add ShowClassesInCheckin bit;
END;
GO
if *new condition here*
BEGIN
Update Configuration Set ShowClassesInCheckin=ShowFacilitiesInCheckin;
END;
GO

Unable to find where triggers are stored in sql server 2008

I want to delete and modify previously created triggers, but I can't find them anywhere in the database. Where do they live, and how can I edit or delete them?
You can find triggers under each table's node:
Under the Tables node in SSMS (SQL Server Management Studio), each table has a Triggers node.
You can manage your triggers from there.
Here is a better way:
select a.[name] as trgname, b.[name] as [tbname]
from sys.triggers a join sys.tables b on a.parent_id = b.object_id
Just be sure to run it against the database where you think the trigger is located.
You can also find the triggers by querying the system catalog views:
SELECT
    OBJECT_NAME(parent_id) AS 'Table name', *
FROM
    sys.triggers
That gives you a list of all triggers and what table they're defined on for your current database. You can then go on to either disable or drop them.
To expand a little on the previous answers: in all recent versions of SQL Server you can right-click on a trigger and choose Script Trigger as… ALTER To… "New Query Editor Window".
This will open a SQL script with the details of the trigger; if you read the code you will notice that it includes the ALTER syntax: ALTER TRIGGER [dbo].triggername ...
This means you can edit the SQL and press Execute to alter the trigger - this will overwrite the previous definition.
If the triggers have been built using automated tools, you may find duplicate code in the trigger definition which you will want to remove.
It is worth trying to Execute the script first before trying to edit anything, that will tell you if the trigger definition is valid. If a table or column has been renamed, things can get out of sync.
Similarly, to delete/drop a trigger completely, select Script Trigger as… DROP To… "New Query Editor Window" and then execute it.
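The generated DROP script boils down to a plain DROP TRIGGER statement, so once you know the name you can also run it directly (the trigger name here is a placeholder):
DROP TRIGGER [dbo].[triggername];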

Why would IF EXISTS not work?

I have a lot of code I am trying to run where I'm querying the sysobjects table to check if an object exists before I drop it and create it again.
Issue being, sometimes if I go:
if not exists (select name from sysobjects o where o.name = 'my_table' and o.type = 'U')
CREATE TABLE my_table (..)
go
it works, no worries. However, when I come back to run it again, I get this lovely error:
SQL Server Error on (myserver) Error:2714 at Line:10 Message:There is already an object named 'my_table' in the database.
Thanks for that, SQL Programmer. I actually asked for you not to create this table if it already exists. -_-
Any ideas?
The logic of what you are doing doesn't seem quite right. Based on your statement:
"I am trying to run where I'm querying the sysobjects table to check if an object exists before I drop it and create it again"
you should simply do a drop followed by a create. This way is usually better because it ensures that the table will be up to date; if the table existed and you had changes, you are probably not getting what you want.
The immediate issue you ran into is an assumed db ownership that was not consistent between runs.
Based on your clarification below, here is what you can do:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[XXXX]') AND type in (N'U'))
DROP TABLE [dbo].[XXXX]
GO
CREATE TABLE [dbo].[XXXX] (...
GO
you can run this over and over again...
The Sybase parser's object-validation pass is global and not based on conditional evaluation. Even though your code cannot execute the CREATE TABLE, the statement is still checked for syntax and applicability, which fails when the system sees that the table already exists.
The only way around this that I know of is to put your CREATE statements inside an EXEC(), which is evaluated only if that section is executed.
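For example, applied to the table from the question (the column list is a placeholder):
if not exists (select name from sysobjects o where o.name = 'my_table' and o.type = 'U')
    exec ('CREATE TABLE my_table (id int)')
go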
Yes, the entire batch of SQL is normalized and compiled so as to create an execution plan for the whole batch. During normalization, the "possible" CREATE TABLE statement is a problem if the table already exists at compile time.
My solution: rename -
if exists (select 1 from ....)
begin
    drop table xyz
    create table xyz_zzzz ( ... )
    exec sp_rename 'xyz_zzzz', 'xyz'
end
