Making one table equal to another without a delete * - sql-server

I know this is a bit of a strange one, but any help would be greatly appreciated.
The scenario is that we have a production database at a remote site and a developer database in our local office. Developers make changes directly to the developer DB, and as part of the deployment process a C# application runs and produces a series of .sql scripts that we can execute on the remote site (essentially delete everything, then insert). We are looking for something a bit more elaborate, as the downtime from the delete-everything step is unacceptable. This is all reference data that controls menu items, functionality, etc. of a major website.
I have a sproc that essentially returns a diff of two tables. My thinking is that I can insert all the expected data into a temp table, execute the diff, drop anything from the destination table that is not in the source, and then upsert everything else.
The question is: is there an easy way to do this without using a cursor? To illustrate, the sproc returns a recordset structured like this:
TableName   Col1   Col2   Col3
Dest        ...    ...    ...
Src         ...    ...    ...
Anything in the recordset with TableName = Dest should be deleted (as it does not exist in Src), and anything in Src should be upserted into Dest. I cannot think of a way to do this purely set-based, but my DB-fu is weak.
Any help would be appreciated. Apologies if the explanation is sketchy; let me know if you need any more details.

Yeah, that sproc would work. Use a FULL JOIN with that table and add a column to indicate insert, update or delete. Then create separate SQL statements for them based on the column indicator. Set-based.
Sorry, not a FULL JOIN; you'll need to break them down into separate LEFT and RIGHT JOINs. Did this in Notepad, so apologies if it doesn't work:
-- Flag staging rows as inserts or updates.
UPDATE td
SET td.IUDType = CASE WHEN ed.id IS NULL THEN 'I' ELSE 'U' END
FROM tmpDeployData td
LEFT JOIN existingData ed ON td.id = ed.id;

-- Add a row for anything that exists only in the destination, flagged for deletion.
-- (Do this after the UPDATE above, or the UPDATE would re-flag these rows as 'U'.)
INSERT INTO tmpDeployData (ID, IUDType)
SELECT ed.id, 'D'
FROM tmpDeployData td
RIGHT JOIN existingData ed ON td.id = ed.id
WHERE td.id IS NULL;

-- Apply the changes.
INSERT INTO existingData (ID, a, b, c)
SELECT td.ID, td.a, td.b, td.c
FROM tmpDeployData td
WHERE td.IUDType = 'I';

DELETE ed
FROM existingData ed
INNER JOIN tmpDeployData td ON ed.ID = td.ID
WHERE td.IUDType = 'D';

UPDATE ed
SET ed.a = td.a,
    ed.b = td.b,
    ed.c = td.c
FROM existingData ed
INNER JOIN tmpDeployData td ON ed.ID = td.ID
WHERE td.IUDType = 'U';
Just realized you're pulling info into the temp table as a staging table, not as the source of the data. In that case you can use the FULL JOIN:
INSERT INTO tmpDeployData (ID, a, b, c, IUDType)
SELECT ISNULL(sd.ID, ed.ID),  -- 'D' rows exist only in existingData, so fall back to ed.ID
       sd.a,
       sd.b,
       sd.c,
       CASE WHEN ed.id IS NULL THEN 'I'
            WHEN sd.id IS NULL THEN 'D'
            ELSE 'U'
       END AS IUDType
FROM sourceData sd
FULL JOIN existingData ed ON sd.id = ed.id;
Then same DML statements as before.
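For completeness, the staging table assumed by these statements would look something like this (a sketch; the column types are placeholders):

CREATE TABLE tmpDeployData
(
    ID      int         NOT NULL,
    a       varchar(50) NULL,
    b       varchar(50) NULL,
    c       varchar(50) NULL,
    IUDType char(1)     NULL
);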

Take a look at tablediff.
Tables do not need to participate in replication to run the utility, and there's a wonderful -f switch to generate T-SQL to put the tables in sync:
Generates a Transact-SQL script to bring the table at the destination server into convergence with the table at the source server. You can optionally specify a name and path for the generated Transact-SQL script file. If file_name is not specified, the Transact-SQL script file is generated in the directory where the utility runs.
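A sketch of an invocation (the server, database, table, and file names are placeholders):

tablediff -sourceserver "DEVSQL" -sourcedatabase "DevDB" -sourcetable "MenuItems" ^
          -destinationserver "PRODSQL" -destinationdatabase "ProdDB" -destinationtable "MenuItems" ^
          -f "C:\deploy\sync_MenuItems.sql"

The generated script can then be reviewed and run at the remote site instead of the delete-everything/insert scripts.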

There's a much, much easier way to do this assuming you're using SQL Server 2008: The MERGE statement.
Migrating all changes from one table to another is as simple as:
MERGE DestinationTable d
USING SourceTable s
    ON d.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET d.Col1 = s.Col1, d.Col2 = s.Col2, ...
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1, Col2, ...)
    VALUES (s.Id, s.Col1, s.Col2, ...)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
That's it. DestinationTable will be identical to SourceTable after that.

Why don't you just take a backup of the production database and restore it over your development database? You should have change scripts for all DDL differences from the production database; you could run them on the database after the restore, and that would test the deployment to production.
edit:
Sorry, just re-read your question, it looks like you are storing your configuration info in your development db and generating your change scripts from that so this wouldn't work.
I would recommend creating change scripts by hand and storing them in source control. Then use sqlcmd or osql and a batch file to run your change scripts on the database.
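For example, a batch file along these lines (the server, database, and script names are placeholders):

rem Run each change script in order against the target database.
sqlcmd -S PRODSQL -d ProdDB -E -i 001_add_menu_items.sql
sqlcmd -S PRODSQL -d ProdDB -E -i 002_update_feature_flags.sql

Keeping the scripts numbered in source control means the same sequence can be replayed on any environment.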

Related

How to find a SQL object containing specific text in SQL Server?

There was a performance issue in my application in production. I did some investigation and found out that one process was blocking my SP's execution. I saw the log in SolarWinds DPA and found that the process with ID 12345 was blocking my SP; DPA also shows the blocking query as SQL text.
The query that is blocking:
SELECT ColX, ColY.........
FROM [dbo].[Table1] AS T1
INNER JOIN [dbo].[Table2] AS T2
    ON T1.[PaymentFK] = T2.[PaymentPK]
WHERE (([Col1] = @p0)
    OR ([ExtBatchFileFK] IS NULL))
    AND ([Col2] = @p1)
    AND ([Col3] = @p2)
    AND (NOT ([Col4] = 1))
But it does not give object names like SP/View/Trigger/Job. I searched for this text in all the SPs/Views/Triggers but could not find the blocking query.
So is there any way to find out in which object exactly this query is being used?
This might help if the script is stored in the database.
SELECT DISTINCT OBJECT_SCHEMA_NAME(object_id), OBJECT_NAME(object_id)
FROM sys.sql_modules WITH (NOLOCK)
WHERE definition LIKE '%search_phrase%'
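For example, searching for a distinctive fragment of the blocking query shown above (the fragment chosen here is just one plausible option):

-- Search all module definitions (procs, views, triggers, functions)
-- for a distinctive identifier from the blocking query.
SELECT DISTINCT OBJECT_SCHEMA_NAME(object_id), OBJECT_NAME(object_id)
FROM sys.sql_modules WITH (NOLOCK)
WHERE definition LIKE '%PaymentPK%'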

Change Tracking Delete vs Insert and Update

I am working on a system which logs all data manipulation actions centrally.
Currently SQL Server Change Tracking is applied, but there is a problem with the data it tracks.
E.g. if I insert a row in TableX, update the same row and then delete it, it seems like only the delete action is being recorded. Does anyone know how to view the previous actions?
DECLARE
    @synchronization_version bigint,
    @last_synchronization_version bigint;

-- Obtain the current synchronization version. This will be used next time that changes are obtained.
SET @synchronization_version = CHANGE_TRACKING_CURRENT_VERSION();

-- Obtain initial data set.
SELECT T.*
FROM TableX AS T;

SELECT DTC.commit_time, CT.*, T.*
FROM TableX AS T
RIGHT OUTER JOIN CHANGETABLE(CHANGES TableX, @last_synchronization_version) AS CT
    ON T.fldId = CT.fldId
JOIN sys.dm_tran_commit_table DTC
    ON CT.sys_change_version = DTC.commit_ts;
As pointed out by @AlwaysLearning, the solution was to use Change Data Capture (CDC) (Microsoft Docs) instead of Change Tracking (CT).
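For reference, a minimal sketch of enabling CDC (the database name is a placeholder; TableX is from the question):

USE MyDatabase;
-- Enable CDC at the database level first.
EXEC sys.sp_cdc_enable_db;
-- Then enable it for the table whose full change history should be kept.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'TableX',
    @role_name     = NULL;

Unlike Change Tracking, CDC records every intermediate change, so the insert and update would be visible as well as the final delete.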

How can I prevent a SQL Server MERGE using OUTPUT from updating the target table

I know how to update a target table from a source table using a SQL server MERGE statement.
I know I can capture the changes using an OUTPUT statement.
Problem: when using an OUTPUT clause to capture the changes, they are still applied to the target table.
Question: How can I prevent the MERGE statement from updating the target and only capture the output?
Extra info: the goal in this case is not auditing, which is the common use of OUTPUT in a MERGE. The goal is to determine the changes and then do further processing on them that cannot be added to the MERGE. You could also call it doing a dry run.
In order for MERGE to emit anything as part of the OUTPUT clause, it would have to perform an UPDATE or INSERT.
Off the top of my head, I can see two means to accomplish what you want:
One:
Write some SQL to capture the data set that feeds into the MERGE statement and use it to figure out what the MERGE would do. I know I didn't explain that very well so here's an example with pseudo-code.
SELECT
    s.*
    ,d.PrimaryKey
INTO #SourceForMerge
FROM [Source] AS s
LEFT OUTER JOIN [Destination] AS d
    ON  s.MatchingColumnA = d.MatchingColumnA
    AND s.MatchingColumnB = d.MatchingColumnB

;MERGE [Destination] AS d
USING
(
    SELECT * FROM #SourceForMerge
) AS s
ON d.PrimaryKey = s.PrimaryKey
WHEN MATCHED THEN
    -- UPDATE columns
WHEN NOT MATCHED THEN
    -- INSERT columns
OUTPUT
    -- columns
INTO #OutputTempTable
This would allow you to interrogate the #SourceForMerge temp table to see what matched and what didn't.
Alternatively, you could be an outlaw and wrap the MERGE in a transaction, capture the OUTPUT, SELECT it somewhere and then deliberately ROLLBACK to "undo" the changes.
This strikes me as the most accurate method but also a little scary.
When I'm testing some SQL, I'll often do this by wrapping something in a TRANSACTION and then having something like:
SELECT 1 / 0
to trigger a ROLLBACK.
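A minimal sketch of that dry-run pattern (table and column names are placeholders; note that a table variable survives the rollback, so the captured changes remain readable):

BEGIN TRANSACTION;

DECLARE @changes TABLE (MergeAction nvarchar(10), Id int);

MERGE DestinationTable AS d
USING SourceTable AS s
    ON d.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET d.Col1 = s.Col1
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1) VALUES (s.Id, s.Col1)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT $action, COALESCE(inserted.Id, deleted.Id)
    INTO @changes (MergeAction, Id);

-- Undo the data changes; @changes keeps its rows.
ROLLBACK TRANSACTION;

SELECT * FROM @changes;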

SSIS "Insert bulk failed due to a schema change of the target table

I am developing an SSIS package in BIDS and getting an inconsistent error, "Insert bulk failed due to a schema change of the target table", for one of my data flows. It succeeds sometimes but also fails sometimes, giving the above-mentioned error.
I am not sure what is happening.
Following is the stored procedure which is called from the OLE DB source:
CREATE PROCEDURE [dbo].[getPartiesIpoData_SSIS]
AS
BEGIN
    SELECT
        c.companyId 'companyId',
        tpf.transactionPrimaryFeatureName 'transactionPrimaryFeatureName',
        os.statusdatetime 'statusdatetime',
        st.statusName 'statusName'
    FROM ciqCompany c
    INNER JOIN ciqTransOffering t
        ON t.companyId = c.companyId
    JOIN ciqTransOfferToPrimaryFeat ttp
        ON ttp.transactionId = t.transactionId
    JOIN ciqTransPrimaryFeatureType tpf
        ON tpf.transactionPrimaryFeatureId = ttp.transactionPrimaryFeatureId
    JOIN ciqtransofferingstatustodate os
        ON os.transactionId = t.transactionId
    JOIN ciqtransactionstatusType st
        ON st.statusId = os.statusId AND st.statusId = 2
    WHERE tpf.transactionPrimaryFeatureId = 5

    CREATE NONCLUSTERED INDEX IX_PartiesIpoData_companyId
        ON CoreReferenceStaging.dbo.PartiesIpoData (companyId) WITH (DROP_EXISTING = ON)
END
The destination schema is as follows: [screenshot omitted]
The OLE DB destination set up in SSIS is as follows: [screenshot omitted]
Below is an action plan you can try; this helped in my case:
1) Drop the constraints before the run and recreate them after the run (see the sketch after this list).
2) Disable auto-update statistics (to isolate the issue).
3) Check if any parallel index rebuilds are happening.
4) Check without using the "fast load" option.
If the issue still persists after implementing the above changes, collect a Profiler trace to capture the activity when it is failing, for further investigation.
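A hedged sketch of item 1, applied to the nonclustered index from the stored procedure above (treating that index rebuild as the schema change in question is an assumption):

-- Before the data flow runs: drop the index so the bulk insert
-- cannot collide with a concurrent schema change.
DROP INDEX IX_PartiesIpoData_companyId
    ON CoreReferenceStaging.dbo.PartiesIpoData;

-- After the data flow completes: recreate the index.
CREATE NONCLUSTERED INDEX IX_PartiesIpoData_companyId
    ON CoreReferenceStaging.dbo.PartiesIpoData (companyId);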
Also check the settings for the SQL Server Destination Adapter. If the issue still persists after that, collect a Profiler trace to capture the activity when bcp is failing, for further investigation.

Verifying Syntax of a Massive SQL Update Command

I'm new to SQL Server and am doing some cleanup of our transaction database. However, to accomplish the last step, I need to update a column in one table of one database with the value from another column in another table from another database.
I found a SQL update code snippet and rewrote it for our own needs, but would love someone to give it a once-over before I hit the execute button, since the update will literally affect hundreds of thousands of entries.
So here are the two databases:
Database 1: Movement
Table 1: ItemMovement
Column 1: LongDescription (datatype: text / up to 40 char)
Database 2: Item
Table 2: ItemRecord
Column 2: Description (datatype: text / up to 20 char)
Goal: set Column 1 from DB 1 to the value of Column 2 from DB 2.
Here is the code snippet:
update table1
set table1.longdescription = table2.description
from movement..itemmovement as table1
inner join item..itemrecord as table2 on table1.itemcode = table2.itemcode
where table1.longdescription <> table2.description
I added the last "where" line to prevent SQL from updating the column where it already matches the source table.
This should execute faster and just update the rows that have garbage. But as it stands, does this look like it will run? And lastly, is it a straightforward process, using SQL Server 2005 Express, to just back up the entire Movement DB before I execute, and if it messes up, just restore it?
Alternatively, is it even necessary to alias the tables as table1 and table2? Is it valid to execute a SQL query like this:
update movement..itemmovement
set itemmovement.longdescription = itemrecord.description
from movement..itemmovement
inner join item..itemrecord on itemmovement.itemcode = itemrecord.itemcode
where itemmovement.longdescription <> itemrecord.description
Many thanks in advance!
You don't necessarily need to alias your tables, but I recommend you do: it's faster to type and reduces the chance of a typo.
update m
set m.longdescription = i.description
from movement..itemmovement as m
inner join item..itemrecord as i on m.itemcode = i.itemcode
where m.longdescription <> i.description
In the above query I have shortened the alias using m for itemmovement and i for itemrecord.
When a large number of records are to be updated and there's a question of whether the update would succeed or not, always make a copy in a test database (residing on a test server) and try it out there. In this case, one of the safest bets would be to create a new field first and call it longdescription_test. You can make it with SQL Server Management Studio Express (SSMS) or using the command below:
use movement;
alter table itemmovement add longdescription_test varchar(100);
The syntax here says alter table itemmovement and add a new column called longdescription_test with datatype of varchar(100). If you create a new column using SSMS, in the background, SSMS will run the same alter table statement to create a new column.
You can then execute
update m
set m.longdescription_test = i.description
from movement..itemmovement as m
inner join item..itemrecord as i on m.itemcode = i.itemcode
where m.longdescription <> i.description
Check data in longdescription_test randomly. You can actually do a spot check faster by running:
select * from movement..itemmovement
where longdescription <> longdescription_test
and longdescription_test is not null
If information in longdescription_test looks good, you can change your update statement to set m.longdescription = i.description and run the query again.
It is easier to just create a copy of your itemmovement table before you do the update. To make a copy, you can just do:
use movement;
select * into itemmovement_backup from itemmovement;
If update does not succeed as desired, you can truncate itemmovement and copy data back from itemmovement_backup.
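That recovery step would look something like this (a sketch; it assumes no foreign keys or identity columns complicate the copy-back):

use movement;
truncate table itemmovement;
insert into itemmovement select * from itemmovement_backup;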
Zedfoxus provided a GREAT explanation and I appreciate it. It is an excellent reference for next time around. After reading over some syntax examples, I was confident enough to run the second SQL update query from my OP. Luckily, the data here is not necessarily "live", so there was low risk of damaging anything, even during operating hours. Given the nature of the data, the update executed perfectly, updating all 345,000 entries!
