Remove SQL Server Replication Subscriptions - sql-server

I have a problem with the replication on my machine where I am stuck with a number of subscriptions showing in SSMS that do not exist. I have tried deleting them using the UI in SSMS and also using some T-SQL...but no luck. The publications don't even exist any more (they were on the same machine).
Any ideas on how I could remove them?
More info:
This is the situation I appear to be in: there are 3 subscriptions listed that SSMS will not let me look at because 'they do not exist'.

You might have some orphaned metadata in the distribution database. If you're no longer publishing any data you might want to try Disabling Publishing and Distribution to remove the orphaned publications and subscriptions.
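If the wizard route fails too, the scripted equivalent of disabling distribution is roughly this (a sketch only; @no_checks forces the teardown even when the metadata is orphaned):
-- run on the distributor; drops the distribution database and publishing metadata,
-- ignoring orphaned or missing objects
exec sp_dropdistributor @no_checks = 1, @ignore_distributor = 1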

I find it odd that sp_removedbreplication didn't work... maybe try sp_dropsubscription, and include the ignore_distributor parameter:
declare @yourServer sysname
set @yourServer = @@servername
exec sp_dropsubscription
    @publication = 'UD-ForCMS',
    @article = 'all',
    @subscriber = @yourServer,
    @destination_db = 'CMS',
    @ignore_distributor = 1

I've found that sometimes the system view objects created by replication can be left hanging around, which stops the replication from being rebuilt manually. My solution was to manually drop the syncobj system view. I recommend always dropping the subscriptions and the publication before dropping the publication properties as above, since I've found that items get left behind and lead to long-term problems.
Msg 2714, Level 16, State 3: There is already an object named 'syncobj_0x3437324238353830' in the database.
Msg 21745, Level 16, State 1, Procedure sp_MSrepl_articleview, Line 272: Cannot generate a filter view or procedure. Verify that the value specified for the @filter_clause parameter of sp_addarticle can be added to the WHERE clause of a SELECT statement to produce a valid query.
Msg 3931, Level 16, State 1, Procedure sp_MSrepl_articleview, Line 401: The current transaction cannot be committed and cannot be rolled back to a savepoint. Roll back the entire transaction.
Warning: The distribution agent job has been implicitly created and will run under the SQL Server Agent Service Account.
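For reference, dropping the orphaned view is simple once you have its name from the error message; a sketch using the name reported above (yours will differ):
-- remove the leftover replication helper view so the article can be re-created
if object_id('dbo.syncobj_0x3437324238353830', 'V') is not null
    drop view dbo.syncobj_0x3437324238353830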

Related

SQL 2008 All records in column in table updated to NULL

About 5 times a year one of our most critical tables has a specific column where all the values are replaced with NULL. We have run log explorers against this and we cannot see any login/hostname populated with the update, we can just see that the records were changed. We have searched all of our sprocs, functions, etc. for any update statement that touches this table on all databases on our server. The table does have a foreign key constraint on this column. It is an integer value that is established during an update, but the update is identity key specific. There is also an index on this field. Any suggestions on what could be causing this outside of a t-sql update statement?
I would start by denying any client-side dynamic SQL if at all possible. It is much easier to audit stored procedures to make sure they execute the correct sql, including a proper where clause. Unless your sql server is terribly broken, the only way data is updated is because of the sql you are running against it.
All stored procs, scripts, etc. should be audited before being allowed to run.
If you don't have the mojo to enforce no dynamic client sql, add application logging that captures each client sql before it is executed. Personally, I would have the logging routine throw an exception (after logging it) when a where clause is missing, but at a minimum, you should be able to figure out where data gets blown out next time by reviewing the log. Make sure your log captures enough information that you can trace it back to the exact source. Assign a unique "name" to each possible dynamic sql statement executed, e.g., assign a 3-character code to each program and then number each possible call 1..nn in your program, so you can tell that the call at "abc123" blew up your data, as well as see the exact sql that was defective.
ADDED COMMENT
Thought of this later. You might be able to add / modify the update trigger on the sql table to look at the number of rows updated and prevent the update if that number exceeds a threshold that makes sense for your table. A little searching turned up an article where someone has already written this, as in this snippet:
CREATE TRIGGER [Purchasing].[uPreventWholeUpdate]
ON [Purchasing].[VendorContact]
FOR UPDATE AS
BEGIN
    DECLARE @Count int
    SET @Count = @@ROWCOUNT;
    IF @Count >= (SELECT SUM(row_count)
                  FROM sys.dm_db_partition_stats
                  WHERE object_id = OBJECT_ID('Purchasing.VendorContact')
                  AND index_id = 1)
    BEGIN
        RAISERROR('Cannot update all rows', 16, 1)
        ROLLBACK TRANSACTION
        RETURN;
    END
END
Though this is not really the right fix, if you log this appropriately, I bet you can figure out what tried to screw up your data and fix it.
Best of luck
A transaction log explorer should be able to show you who executed the command, when it ran, and exactly what the command looked like.
Which log explorer do you use? If you are using ApexSQL Log, you need to enable the connection monitoring feature in order to capture additional login details.
This might be like using a sledgehammer to drive in a thumb tack, but have you considered using SQL Server Auditing (provided you are using SQL Server Enterprise 2008 or greater)?
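For what that might look like in practice, here is a rough sketch (the audit, table, and path names are placeholders, not from the question): a server audit plus a database audit specification that records every UPDATE against the critical table, which you can then read back with sys.fn_get_audit_file.
-- create and enable a server audit writing to a file target (path is a placeholder)
CREATE SERVER AUDIT CriticalTableAudit
TO FILE (FILEPATH = 'C:\AuditLogs\')
GO
ALTER SERVER AUDIT CriticalTableAudit WITH (STATE = ON)
GO
-- in the affected user database, capture all UPDATEs against the table (placeholder name)
CREATE DATABASE AUDIT SPECIFICATION CriticalTableUpdates
FOR SERVER AUDIT CriticalTableAudit
ADD (UPDATE ON dbo.YourCriticalTable BY public)
WITH (STATE = ON)
GO
-- later, read the captured events to see who issued the update and from where
SELECT event_time, server_principal_name, session_server_principal_name, statement
FROM sys.fn_get_audit_file('C:\AuditLogs\*', DEFAULT, DEFAULT)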

Auto Created Statistics not getting deleted by Sql Server 2008

I ran into an error in Sql Server and after resolving it, I am looking for the reason why this was happening.
The situation is that I tried to alter a column in a table like this
Alter Table tblEmployee
Alter Column empDate Date
But while running this script, I get the error -
The statistics 'empDate' is dependent on column 'empDate'.
Msg 4922, Level 16, State 9, Line 1
ALTER TABLE ALTER COLUMN empDate failed because one or more objects access this column.
It turns out that this error was because of a statistic being referenced on this column. I have no script that explicitly creates a statistic, and the error occurred in the production environment, so it must have been auto-created. If it is auto-created, then why isn't Sql Server deleting it by itself? My error was resolved when I dropped the statistic.
I looked at other places and was not able to find anything relevant.
I haven't looked hard at SQL statistics for a few versions, but back when, auto-generated statistics had fairly distinctive names (like "_WA_Sys_00000005_00000037"). If your statistic was literally named "empDate", then it was almost certainly not an auto-created statistic, but something someone created deliberately.
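If you want to confirm that, a quick check along these lines (a sketch using the table and column names from the question) shows whether the statistic was flagged as auto-created before you drop it:
-- list statistics that reference the empDate column and whether they were auto-created
SELECT s.name, s.auto_created, s.user_created
FROM sys.stats s
JOIN sys.stats_columns sc ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id
JOIN sys.columns c ON sc.object_id = c.object_id AND sc.column_id = c.column_id
WHERE s.object_id = OBJECT_ID('dbo.tblEmployee') AND c.name = 'empDate'

-- drop the statistic that blocks the ALTER COLUMN (what the question author ended up doing)
DROP STATISTICS dbo.tblEmployee.empDate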

SQL - update, delete, insert - Whatif scenerio

I was reading an article the other day that showed how to run SQL Update, Insert, or Deletes as a whatif-type scenario. I don't remember the parameter that they talked about and now I can't find the article. Not sure if I was dreaming.
Anyway, does anyone know if there is a parameter in SQL2008 that lets you try an insert, update, or delete without actually committing it? It will actually log or show you what it would have updated. You remove the parameter and run it if it behaves as you would expect.
I don't know of a SQL2008-specific feature, but with any SQL server that supports transactions you can do this:
Start a transaction ("BEGIN TRANSACTION" in TSQL)
The rest of your INSERT/UPDATE/DELETE/what-ever code
(optional) Some extra SELECT statements and such if needed to output the result of the above actions, if the default output from step 2 (things like "X rows affected") is not enough
Rollback the transaction ("ROLLBACK TRANSACTION" in TSQL)
(optional) Repeat the testing code to show how things are without the code in step 2 having run
For example:
BEGIN TRANSACTION
-- make changes
DELETE people WHERE name LIKE 'X%'
DELETE people WHERE name LIKE 'D%'
EXEC some_proc_that_does_more_work
-- check the DB state after the changes
SELECT COUNT(*) FROM people
-- undo
ROLLBACK TRANSACTION
-- confirm the DB state without the changes
SELECT COUNT(*) FROM people
(you might prefer to do the optional "confirm" step before starting the transaction rather than after rolling it back, but I've always done it this way around as it keeps the two likely-to-be-identical sections of code together for easier editing)
If you use something like this rather than something SQL2008-specific, the technique should be transferable to other RDBMSs too (just update the syntax if needed).
OK, finally figured it out. I've confused this with another project I was working on with PowerShell. PowerShell has a "whatif" parameter that can be used to show you what files would be removed before they are removed.
My apologies to those who have spent time trying to find an answer to this post and my thanks to those of you who have responded.
I believe you're talking about BEGIN TRANSACTION
BEGIN TRANSACTION starts a local transaction for the connection issuing the statement. Depending on the current transaction isolation level settings, many resources acquired to support the Transact-SQL statements issued by the connection are locked by the transaction until it is completed with either a COMMIT TRANSACTION or ROLLBACK TRANSACTION statement. Transactions left outstanding for long periods of time can prevent other users from accessing these locked resources, and also can prevent log truncation.
Do you perhaps mean SET NOEXEC ON ?
When SET NOEXEC is ON, SQL Server compiles each batch of Transact-SQL statements but does not execute them. When SET NOEXEC is OFF, all batches are executed after compilation.
Note that this won't warn/indicate things like key violations.
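As a quick illustration of using it (re-using the people table from the earlier transaction example):
SET NOEXEC ON
GO
-- compiled (syntax and referenced objects are checked) but never executed
UPDATE people SET name = NULL WHERE name LIKE 'X%'
GO
SET NOEXEC OFF
GO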
Toad for SQL Server has a "Validate SQL" feature that checks queries for wrong table/column names, etc. Maybe you are talking about some new feature in SSMS 2008 similar to that...
I'm more than seven years late to this particular party but I suspect the feature in question may also have been the OUTPUT clause. Certainly, it can be used to implement whatif functionality similar to Powershell's in a t-sql stored procedure.
https://learn.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql
Use this in each insert/update/delete/merge query to let the SP output a meaningful resultset of the changes it makes, e.g. outputting the table name and action performed as the first two columns, then all the altered columns.
Then simply roll back the changes if a @whatif parameter is set to 1, or commit them if @whatif is set to 0.
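A minimal sketch of that pattern (the procedure, table, and column names here are made up for illustration):
CREATE PROCEDURE dbo.usp_PurgeInactiveCustomers
    @whatif bit = 1
AS
BEGIN
    BEGIN TRANSACTION

    -- report every row the statement touches, tagged with the table name and action
    DELETE dbo.Customers
    OUTPUT 'Customers' AS table_name, 'DELETE' AS action,
           deleted.CustomerID, deleted.LastOrderDate
    WHERE LastOrderDate < '20000101'

    IF @whatif = 1
        ROLLBACK TRANSACTION   -- whatif mode: show the changes but keep the data
    ELSE
        COMMIT TRANSACTION
END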

Temp tables in SQL Server 2005 not automatically dropped

I'm troubleshooting a nasty stored procedure and noticed that after running it and closing my session, lots of temp tables are still left in tempdb. They have names like the following:
#000E262B
#002334C4
#004E1D4D
#00583EEE
#00783A7F
#00832777
#00CD403A
#00E24ED3
#00F75D6C
If I run this code:
if object_id('tempdb..#000E262B') is null
print 'Does NOT exist!'
I get:
Does NOT exist!
If I do:
use tempdb
go
drop TABLE #000E262B
I get an error:
Msg 3701, Level 11, State 5, Line 1
Cannot drop the table '#000E262B', because it does not exist or you do not have permission.
I am connected to SQL Server as sysadmin. Using SP3 64-bit. I currently have over 1100 of these tables in tempdb, and I can't get rid of them. There are no other users on the database server.
Stopping and starting SQL Server is not an option in my case.
Thanks!
http://www.sqlservercentral.com/Forums/Topic456599-149-1.aspx
If temp tables or table variables are frequently used then, instead of dropping them, SQL just 'truncates' them, leaving the definition. It saves the effort of recreating the table next time it's needed.
Tables created with the # prefix are only available to the current connection. Therefore any new connection you create will not be able to see them and therefore not be able to drop them.
How can you tell that they still exist? What command are you running to find this out?
Is the connection that created them closed properly? If not then this may be why they still exist.
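For what it's worth, a quick way to see what is actually sitting in tempdb (a sketch; presumably something like this produced the list above) is:
USE tempdb
GO
-- temp tables cached or left behind by other sessions show up here,
-- even though only the owning session can drop them
SELECT name, create_date
FROM sys.tables
WHERE name LIKE '#%'
ORDER BY create_date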

SQL Replication "Row Not Found" Error

I have transactional replication running between two databases. I fear they have fallen slightly out of sync, but I don't know which records are affected. If I knew, I could fix it manually on the subscriber side.
SQL Server is giving me this message:
The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)
I've looked around to try to find out what table, or even better what record is causing the issue, but I can't find that information anywhere.
The most detailed data I've found so far is:
Transaction sequence number: 0x0003BB0E000001DF000600000000, Command ID: 1
But how do I find the table and row from that? Any ideas?
This gives you the table the error is against
use distribution
go
select * from dbo.MSarticles
where article_id in (
select article_id from MSrepl_commands
where xact_seqno = 0x0003BB0E000001DF000600000000)
And this will give you the command (and the primary key (ie the row) the command was executing against)
exec sp_browsereplcmds
    @xact_seqno_start = '0x0003BB0E000001DF000600000000',
    @xact_seqno_end = '0x0003BB0E000001DF000600000000'
I'll answer my own question with a workaround I ended up using.
Unfortunately, I could not figure out which table was causing the issue through the SQL Server replication interface (or the Event Log for that matter). It just didn't say.
So the next thing I thought of was, "What if I could get replication to continue even though there is an error?" And lo and behold, there is a way. In fact, it's easy. There is a special Distribution Agent profile called "Continue on data consistency errors." If you enable that, then these types of errors will just be logged and passed on by. Once it is through applying the transactions and potentially logging the errors (I only encountered two), then you can go back and use RedGate SQL Data Compare (or some other tool) to compare your two databases, make any corrections to the subscriber and then start replication running again.
Keep in mind, for this to work, your publication database will need to be "quiet" during the part of the process where you diff and fix the subscriber database. Luckily, I had that luxury in this case.
If your database is not prohibitively large, I would stop replication, re-snapshot and then re-start replication. This technet article describes the steps.
If it got out of sync due to a user accidently changing data on the replica, I would set the necessary permissions to prevent this.
This replication article is worth reading.
Use this query to find out the article that is out of sync:
USE [distribution]
SELECT * FROM dbo.MSarticles
WHERE article_id IN (SELECT article_id FROM MSrepl_commands
                     WHERE xact_seqno = 0x0003BB0E000001DF000600000000)
Of course, if you check the error when the replication fails, it also tells you which record is at fault, so you could extract that data from the core system and just insert it on the subscriber.
This is better than skipping errors, as SQL Data Compare will lock the table for the comparison, and if you have millions of rows this can take a long time to run.
Tris
Changing the profile to "Continue on data consistency errors" won't always work. It does reduce or suppress the errors, but you won't end up with complete, accurate data: the rows that raise an error are simply skipped.
The following checks resolved my problem:
Check that all the replication SQL Server Agent jobs are working fine, and start any that are not.
In my case replication had stopped because a DBA had killed a session a few hours earlier due to a blocking issue.
After a very short time all data in the subscription was updated and there were no other errors in Replication Monitor.
In my case all of the above queries returned nothing.
This error usually comes up when a particular record does not exist on the subscriber, and an update or delete command for that same record was executed on the primary server and replicated to the subscriber.
As the record does not exist on the subscriber, replication throws a "Row Not Found" error.
To resolve this error and get replication back to its normal running state:
We can check with the following query whether the request at the publisher was an update or a delete statement:
USE [distribution]
SELECT *
FROM msrepl_commands
WHERE publisher_database_id = 1
AND command_id = 1
AND xact_seqno = 0x00099979000038D6000100000000
We can get the article id from the above query, which can be passed to the proc below:
EXEC sp_browsereplcmds
    @article_id = 813,
    @command_id = 1,
    @xact_seqno_start = '0x00099979000038D60001',
    @xact_seqno_end = '0x00099979000038D60001',
    @publisher_database_id = 1
The above query will tell you whether it was an update statement or a delete statement.
In case of a delete statement:
That record can be deleted directly from msrepl_commands so that replication won't keep retrying the command:
DELETE FROM msrepl_commands
WHERE publisher_database_id = 1
AND command_id =1
AND xact_seqno = 0x00099979000038D6000100000000
In case of an update statement:
You need to insert that record manually from the publisher DB into the subscriber DB:
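The answer stops there; as a rough sketch of that last step, assuming a linked server to the publisher and made-up database, table, and key names (use the key values that sp_browsereplcmds reported):
-- copy the missing row from the publisher to the subscriber so the pending UPDATE can apply
INSERT INTO SubscriberDB.dbo.YourTable (ID, Col1, Col2)
SELECT ID, Col1, Col2
FROM PublisherServer.PublisherDB.dbo.YourTable
WHERE ID = 12345   -- the primary key value shown by sp_browsereplcmds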
