Deleting Rows from a SQL Table marked for Replication - sql-server

I erroneously deleted all the rows from an MS SQL 2000 table that is used in merge replication (the table is on the publisher). I then compounded the issue by using a DTS operation to retrieve the rows from a backup database and repopulate the table.
This has created the following issue:
The delete operation marked the rows for deletion on the clients, but the DTS operation bypassed the replication triggers, so the imported rows were not marked for insertion on the subscribers. In effect the subscribers lose the data even though it is on the publisher.
So I thought "no worries": I will just delete the rows again and then add them correctly via an insert statement, and they will then be marked for insertion on the subscribers.
This is my problem:
I cannot delete the DTSed rows because I get a "Cannot insert duplicate key row in object 'MSmerge_tombstone' with unique index 'uc1MSmerge_tombstone'." error. What I would like to do is somehow delete the rows from the table bypassing the merge replication trigger. Is this possible? I don't want to remove and redo the replication because the subscribers are 50+ windows mobile devices.
Edit: I have tried the Truncate Table command. This gives the following error "Cannot truncate table xxxx because it is published for replication"

Have you tried truncating the table?

You may have to truncate the table and reset the ID field back to 0 if you need the inserted rows to have the same ID. If not, just truncate and it should be fine.

You also could look into temporarily dropping the unique index and adding it back when you're done.

Look into sp_mergedummyupdate
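For reference, sp_mergedummyupdate performs a dummy update on a single row so the merge agent treats it as changed on the next synchronization. A minimal sketch, run at the publisher; the table name and rowguid below are placeholders:

EXEC sp_mergedummyupdate
    @source_object = N'dbo.MyTable',                        -- placeholder table
    @rowguid = '00000000-0000-0000-0000-000000000000';      -- placeholder rowguid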

Would creating a second table be an option? You could create a second table, populate it with the needed data, add the constraints/indexes, then drop the first table and rename the second. This should give you the data with the right keys, and it should all consist of SQL statements that are allowed to trickle down through replication. It probably isn't the best for performance, though, and it definitely carries some risk.
I haven't tried this first-hand in a replicated environment, but it may be at least worth trying out.

Thanks for the tips...I eventually found a solution:
I deleted the merge delete trigger from the table
Deleted the DTSed rows
Recreated the merge delete trigger
Added my rows correctly using an insert statement.
I was a little worried about fiddling with the merge triggers but everything appears to be working correctly.
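For anyone following along, a rough sketch of that sequence with assumed object names (merge replication generates the delete trigger name, typically MSmerge_del_<GUID>, so script it out in SSMS first so it can be recreated exactly):

-- 1. Drop the merge delete trigger (generated name here is an assumption)
DROP TRIGGER dbo.MSmerge_del_0A1B2C3D;

-- 2. Delete the DTSed rows; no tombstone entries are written now
DELETE FROM dbo.MyTable;

-- 3. Recreate the trigger by running the CREATE TRIGGER script saved in step 1

-- 4. Reinsert the rows; the merge insert trigger marks them for the subscribers
INSERT INTO dbo.MyTable (Id, Payload)
SELECT Id, Payload FROM BackupDB.dbo.MyTable;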

Related

How to find out the rows affected in SQL Profiler or trace?

I'm using tracing to log all delete or update queries run through the system. The problem is, if I run a query like DELETE FROM [dbo].[Artist] WHERE ArtistId>280, I know how many rows were deleted but I'm unable to find out which rows were deleted (the data they had).
I'm thinking of doing this as a logging system so it would be useful to see which rows were affected and what data they had if at all possible. I don't really want to use triggers for this job but I will if I have to (and if it's feasible).
If you need the original data and are planning on storing all the deleted data in a separate table, why not just logically delete the original data rather than physically delete it? i.e.
UPDATE dbo.Artist SET Artist_deleted = 1 WHERE ArtistId>280
Then you only need to add one column to your current table rather than creating new tables and scripts to support them. You could then partition the current table based on the deleted flag if you are worried about disk space, performance, etc.
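A minimal sketch of the soft-delete approach; the flag column and the view are assumptions, not part of the original schema:

-- Add a soft-delete flag (column name assumed)
ALTER TABLE dbo.Artist ADD Artist_deleted bit NOT NULL DEFAULT 0;

-- "Deletes" become updates, so the row data is preserved for logging
UPDATE dbo.Artist SET Artist_deleted = 1 WHERE ArtistId > 280;

-- Existing readers can be pointed at a view that hides deleted rows
CREATE VIEW dbo.ActiveArtist AS
SELECT * FROM dbo.Artist WHERE Artist_deleted = 0;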

Wherescape / ETL / SQL DB Fact tables - add columns

We have a star schema designed in Wherescape. The task is to add new columns to the Fact table.
The fact table has around 30 GB in it. Is it possible to add columns without deleting the fact table? Or what technique should be used to retain the current data in the fact table and at the same time have the new columns available? I keep getting a timeout error if I just try to add columns in Management Studio.
I think the guy before me actually just modified it in Wherescape (not too sure). In any case, if I have to do it manually in Management Studio, that works for me too.
thanks
Gemmo
Can't really do this without deleting the table. It's too big and no matter what you do, it will time out. Back up the table, delete it and create the table with the new structure. You'll just have to put the data in again. No shortcuts. For smaller tables, you can easily add a column no problem.
The best way to do this is to add the column to the metadata and then right-click on your table/object and click "Validate against the database". This allows you to alter the table instead of taking the long route of moving the data into a temp table, recreating the table and moving the data back.
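If you do go the manual route, note that adding a nullable column (with no default) is a metadata-only change regardless of table size; the timeout usually comes from the SSMS table designer rebuilding the table rather than from the ALTER itself. A minimal sketch with placeholder names:

-- Metadata-only change, near-instant even on a 30 GB fact table
ALTER TABLE dbo.FactSales ADD NewMeasure decimal(18, 2) NULL;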

SQL Server wiped my table after (incorrectly) creating a new column .. what the heck happened?

I added a new column to an existing table in the SQL Server Management Studio table designer. Type INT, not null. Didn't set a default value.
I generated a change script and ran it; it errored out with a warning that the new column does not allow nulls and no default value was being set. It said "0 rows affected".
The data was still there, and for some reason my new column was visible in the "Columns" folder in the database tree on the left of SSMS, even though it said "0 rows affected" and the database change had failed.
Because the new column was visible in the list, I thought I would go ahead and update all rows and add a value in.
UPDATE MyTable SET NewColumn = 0
Boom.. table wiped clean. Every row deleted.
This is a big problem because it was on a production database that, unbeknownst to me, wasn't being backed up. But it's recoverable with some manual entry, so not the end of the world.
Anyone know what could have happened here.. and maybe what was going on internally that could have caused my update statement to wipe out every row in the table?
An UPDATE statement can't delete rows unless there is a trigger that performs the delete afterward, and you say the table has no triggers.
So it had to be the scenario I laid out for you in my comment: The rows did not get loaded properly to the new table, and the old table was dropped.
Note that it is even possible for it to have looked right for you, where the rows did get loaded at one point--if the transaction was not committed, and then (for example) later when your session was terminated the transaction was automatically rolled back. The transaction could have been rolled back for other reasons, too.
Also, I may have gotten the order incorrect: it may create the new table under a new name, load the rows, drop the old table, and rename the new one. In this case, you may have been querying the wrong table to find out if the data had been loaded. I can't remember off the top of my head right now which way the table designer structures its scripts--there's more than one way to skin this cat.
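For context, the designer's change script for this kind of edit looks roughly like the sketch below (heavily simplified; table and column names are placeholders, and the real script wraps each step in error checks). If the copy step fails or the transaction is rolled back after a partial run, the data never makes it into the renamed table:

BEGIN TRANSACTION;

-- New table built with the added NOT NULL column
CREATE TABLE dbo.Tmp_MyTable (
    Id        int NOT NULL,
    NewColumn int NOT NULL   -- no default, so the copy below fails on existing rows
);

-- Copy step: errors out because NewColumn is never given a value
INSERT INTO dbo.Tmp_MyTable (Id)
SELECT Id FROM dbo.MyTable WITH (HOLDLOCK, TABLOCKX);

DROP TABLE dbo.MyTable;

EXECUTE sp_rename N'dbo.Tmp_MyTable', N'MyTable', 'OBJECT';

COMMIT;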

SQL Server generic trigger creation

I am trying to create a generic trigger in SQL Server which can copy all column data from Table A and insert it into the corresponding fields in Table B.
There are a few problems I am facing:
I need this copy to occur on three operations: INSERT, DELETE and UPDATE.
The trigger needs to fire after CUD operations, but using AFTER throws a SQL error saying ntext etc. are not supported in the inserted table. How do I resolve this error?
An INSTEAD OF trigger, if used, can work for INSERT but not for DELETE. Is there a way to do this for delete operations?
Is there a way I can write generic code inside the trigger that works for all sorts of tables? (We can assume that all the columns in Table A exist in Table B.)
I am not well versed with triggers or, for that matter, DDL in SQL Server.
I'd appreciate it if someone could provide some solutions.
Thanks
Ben
CREATE TRIGGER (Transact-SQL)
Use nvarchar(max) instead of ntext.
You can have an INSTEAD OF trigger for DELETE.
You can have one trigger that handles INSERT/UPDATE/DELETE for one table, but you cannot attach a trigger to more than one table.
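A minimal sketch of such a trigger for a single table; dbo.TableA, dbo.TableB and their columns are assumptions (a generic solution would generate one of these per table, e.g. from the catalog views):

CREATE TRIGGER dbo.trg_TableA_CopyToB
ON dbo.TableA
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows that were inserted, plus the new version of updated rows
    INSERT INTO dbo.TableB (Id, Payload)
    SELECT i.Id, i.Payload
    FROM inserted AS i;

    -- Rows that were deleted (updated rows appear in both pseudo-tables,
    -- so exclude anything already captured above)
    INSERT INTO dbo.TableB (Id, Payload)
    SELECT d.Id, d.Payload
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted AS i WHERE i.Id = d.Id);
END;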

Truncate or Drop and Create Table

I have a table in a SQL Server 2008 R2 instance with a scheduled process that runs nightly against it. The table can have upwards of 500K records in it at any one time. After processing this table I need to remove all rows from it, so I am wondering which of the following methods would produce the least overhead (i.e. excessive transaction log entries):
Truncate Table
Drop and recreate the table
Deleting the contents of the table is out due to the time it takes and the extra transaction log entries it creates.
The consensus seems to be truncation. Thanks everyone!
TRUNCATE TABLE is your best bet. From MSDN:
Removes all rows from a table without logging the individual row deletes.
So that means it won't bloat your transaction log. Dropping and creating the table not only requires more complex SQL, but also additional permissions. Any settings attached to the table (triggers, GRANT or DENY, etc.) will also have to be re-built.
Truncating the table does not leave row-by-row entries in the transaction log - so neither solution will clutter up your logs too much. If it were me, I'd truncate over having to drop and create each time.
I would go for TRUNCATE TABLE. You can potentially have overhead when indexes, triggers, etc. get dropped. Plus you will lose permissions, which will also have to be re-created along with any other objects required for that table.
Also, DROP TABLE on MSDN (below) mentions a little gotcha if you execute DROP and CREATE TABLE in the same batch:
DROP TABLE and CREATE TABLE should not be executed on the same table in the same batch. Otherwise an unexpected error may occur.
Dropping the table will destroy any associated objects (indexes, triggers) and may make procedures or views invalid. I would go with truncate, since it won't blow up your log and causes none of the possible issues a drop and create does.
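A minimal example of the nightly cleanup; the table name is a placeholder. Note that TRUNCATE also reseeds any identity column back to its original seed:

-- Deallocates data pages (minimally logged) instead of logging each row;
-- permissions, indexes and triggers on the table all stay in place
TRUNCATE TABLE dbo.NightlyStaging;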
