I have a SQL Server 2008 database, and some code (now corrected) accidentally overwrote one column with wrong data in about 50,000 rows.
The rows might have changed since the backup, but the primary key is intact, so I now have two databases: one with correct data in that column and one with incorrect data.
Can anyone help with a script to recover this column's data?
You could use an update statement to copy data from the restored database:
update wrong
set WrongColumn = [right].WrongColumn
from ProductionDb.dbo.Table1 as wrong
join RestoredDb.dbo.Table1 as [right]
on [right].PrimaryKeyCol = wrong.PrimaryKeyCol
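If you want to check the scope of the damage first, a quick query along these lines (reusing the same table and column names as above) lists the rows where the two databases currently disagree:
-- Preview which rows the UPDATE above would touch
SELECT wrong.PrimaryKeyCol,
       wrong.WrongColumn   AS CurrentValue,
       [right].WrongColumn AS RestoredValue
FROM ProductionDb.dbo.Table1 AS wrong
JOIN RestoredDb.dbo.Table1 AS [right]
    ON [right].PrimaryKeyCol = wrong.PrimaryKeyCol
WHERE wrong.WrongColumn <> [right].WrongColumn
   OR (wrong.WrongColumn IS NULL AND [right].WrongColumn IS NOT NULL)
   OR (wrong.WrongColumn IS NOT NULL AND [right].WrongColumn IS NULL);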
Use tablediff, which will generate a script for you:
http://msdn.microsoft.com/en-us/library/ms162843.aspx
I have source and destination tables. The source table, on one server, has almost 30 million records. I want to copy this data to a table on another server, and not just once: whenever the data changes in the source table, I need to insert/update/delete in the destination by comparing on a key.
Solutions that I tried
Step 1. There is already a linked connection between the two servers. For inserting data into the destination table, I used the OPENROWSET function like this to copy the data from one server to the other:
INSERT INTO DestinationTable
SELECT *
FROM OPENROWSET('SQLOLEDB', 'provider string', 'query from source table')
Step 2. After this, to apply the recent changes (delta mode) to the destination table, I used a MERGE statement.
I have created a procedure for steps 1 & 2.
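The MERGE part of the procedure looks roughly like this (a simplified sketch; LinkedServer, DestinationTable, SourceTable, KeyCol and Col1 are placeholder names, not my real ones):
-- Sketch only: all object and column names below are placeholders
MERGE dbo.DestinationTable AS dest
USING LinkedServer.SourceDb.dbo.SourceTable AS src
    ON dest.KeyCol = src.KeyCol
WHEN MATCHED AND dest.Col1 <> src.Col1 THEN
    UPDATE SET Col1 = src.Col1
WHEN NOT MATCHED BY TARGET THEN
    INSERT (KeyCol, Col1) VALUES (src.KeyCol, src.Col1)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;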
Problem
But the problem is that, since the data is huge, it takes a lot of time (more than 2 hours) for the insert and the MERGE statement.
Is anybody aware of how I can achieve this in less time? Please suggest.
Thanks
I am working in SQL Server 2008 R2 and have a production table that I need to replicate exactly in another location to work on. I will first run a job to move everything over (once-off) and then run a daily job to apply the inserts/updates.
The daily job will look at the production table and find any new values that need to be inserted (based on a created date) and also find any existing values that need to be updated (based on a modified date). Any new values are inserts and any modified values are updates.
The job pulls these rows from the production table and applies them to the copy table located elsewhere. I am running into trouble with timestamp columns. The production table has a timestamp column and I don't know how I should handle this when updating the copy table (also created as a timestamp column).
I get an error if I set the production.timestamp_col = copytable.timestamp_col (Cannot update a timestamp column).
Should I leave it out (in which case I don't have an exact copy of the table), convert the column in the copy table & the value in the select from the production table to something else (not sure what), put my own value in (again, won't have an exact copy of the table) or drop/truncate and recreate each time (inefficient due to data volumes)?
What would the best approach be in a situation like this?
Thanks
You can convert the destination timestamp column to varbinary(8) and then insert the values. This will give you an exact copy, but it breaks the timestamp functionality. Do this only if you need a copy; the actual purpose of a timestamp column is to track changes to a row through versioning.
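For example, something along these lines (the database, table and column names are made up for illustration) copies the production timestamp values into a varbinary(8) column on the copy table:
-- Sketch only: CopyDb, CopyTable, ProductionTable, Id, SomeCol and ts_copy are made-up names.
-- In the copy table, ts_copy is declared as varbinary(8) instead of timestamp.
INSERT INTO CopyDb.dbo.CopyTable (Id, SomeCol, ts_copy)
SELECT p.Id, p.SomeCol, CAST(p.timestamp_col AS varbinary(8))
FROM ProductionDb.dbo.ProductionTable AS p
WHERE NOT EXISTS (SELECT 1 FROM CopyDb.dbo.CopyTable AS c WHERE c.Id = p.Id);

UPDATE c
SET SomeCol = p.SomeCol,
    ts_copy = CAST(p.timestamp_col AS varbinary(8))
FROM CopyDb.dbo.CopyTable AS c
JOIN ProductionDb.dbo.ProductionTable AS p
    ON p.Id = c.Id;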
In SQL Server, a timestamp column is system-generated; you cannot update it or set it on insert. SQL Server does this all for you.
https://technet.microsoft.com/en-us/library/ms182776(v=sql.110).aspx
You may be able to pull something off with replication/mirroring to get a 100% exact copy, but it may not be worth it depending on your needs.
I need to remove duplicated rows in SQL Server when importing a file into the database, using the distinct method.
HallGroup is my table in the database. I'm using this SQL procedure:
SELECT DISTINCT * INTO tempdb.dbo.tmpTable
FROM HallGroup
DELETE FROM HallGroup
INSERT INTO HallGroup SELECT * FROM tempdb.dbo.tmpTable
DROP TABLE tempdb.dbo.tmpTable
This procedure works fine and the duplicated rows are deleted, but the problem is that when I import data into SQL Server again, the rows are duplicated once more. What am I missing? Any hint?
How do I properly remove duplicated rows in SQL Server when importing a file into the database with the distinct method?
I am just getting back into SQL after being out for a bit, but I would not have solved your problem the way you are trying (not that I completely understand why you are doing it that way). Even if it were working correctly, I believe your process will take longer each time you run it as the size of the table increases.
It would be much more efficient to insert the new data based on the absence of a key (you indicate you are already using a stored proc). If you don't have a key to use (which very recently happened to me), make one. I just solved a similar problem where I am importing data into a table from an external source and wanted to eliminate the possibility of duplicates. In my case, I associate the name of the external source data file (distinct per dataset to import) with the data to be imported and use that to ensure I am not re-importing already imported data. I load the external data into a staging table using a dtsx package and then run a stored proc to merge that data with the existing table. This gives me the added advantage of having an audit trail of where each record came from.
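As a rough sketch of that approach (the staging table, key column and @SourceFile variable below are invented names, not yours), the stored proc only inserts rows whose key is not already present and tags each row with the file it came from:
-- Sketch only: HallGroup_Staging, KeyCol, Col1, SourceFileName and @SourceFile are invented names
DECLARE @SourceFile nvarchar(260) = 'daily_import_01.csv';  -- example value only
INSERT INTO dbo.HallGroup (KeyCol, Col1, SourceFileName)
SELECT s.KeyCol, s.Col1, @SourceFile
FROM dbo.HallGroup_Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.HallGroup AS t WHERE t.KeyCol = s.KeyCol);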
Hope this helps.
Basically I have two databases in SQL Developer. I want to take the table data FOR A PARTICULAR RECORD from one database and copy it to another database's table. What should the query be? I don't want to use a restore, to avoid data loss... Any ideas?
I got this query from Google:
INSERT INTO dbo.ELLIPSE_PFPI.T_ANTENNE
(COLUMNS)
SELECT COLUMNS_IN_SAME_ORDER FROM dbo.ELLIPSE_PFPI.T_ANTENNE
What should be written in the query instead of dbo?
Try this.
I haven't tested it, but I think it works:
select * into [databaseTo].dbo.tablename from [databaseFrom].dbo.tablename
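Note that SELECT ... INTO creates a new table in the destination database and fails if that table already exists. Since you only want a particular record, an INSERT ... SELECT with a key filter may be closer to what you need (a sketch; the Id column and value are made up, and COLUMNS is a placeholder as in your query):
-- Sketch only: Id and 12345 are made-up; list the real columns in the same order on both sides
INSERT INTO [databaseTo].dbo.tablename (COLUMNS)
SELECT COLUMNS_IN_SAME_ORDER
FROM [databaseFrom].dbo.tablename
WHERE Id = 12345;  -- the key of the particular record you want to copy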
Currently I'm working on database migration, for this I'm using Pentaho Kettle and Perl scripts.
The migration is from Tumor-registry SQL Server database to CIDER IBM DB2 database.
In this task I want to achieve two objectives.
Initial migration: in this I'm migrating all the rows (e.g. 100000) from Tumor-registry (SQL Server) to CIDER (IBM DB2).
Subsequent migration: the Tumor-registry SQL Server database is constantly being updated.
New rows are constantly added or existing rows edited.
I have figured out the first step, but I am facing two problems in the second step.
a) If the Tumor-registry SQL Server database is updated with, for example, 10 new rows, how can I get those 10 new rows?
b) If 10 already existing rows are updated, how can I get those 10 rows and also know which columns were updated?
My Tumor-registry database contains approximately 50 tables.
Any help is greatly appreciated.
Create a new column with the TIMESTAMP datatype. This will keep track of the latest edited records in the table.
OR
You can use the CHECKSUM function in SQL Server.
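For example (a sketch; the table, column and variable names below are made up), you can add a rowversion column (rowversion is the current name for the timestamp datatype) and pull only the rows that changed since the last run, or compare CHECKSUM values per key between runs:
-- Sketch only: dbo.Patient, PatientId, RowVer and @LastRowVer are made-up names
ALTER TABLE dbo.Patient ADD RowVer rowversion;

-- Rows inserted or updated since the previous migration run
DECLARE @LastRowVer binary(8) = 0x0000000000000000;  -- persist the max value between runs
SELECT *
FROM dbo.Patient
WHERE RowVer > @LastRowVer;

-- Alternative: store CHECKSUM(*) per key at each run and compare it on the next run
SELECT PatientId, CHECKSUM(*) AS RowChecksum
FROM dbo.Patient;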
I think you can solve this using a trigger. If you want to save the last inserted data in another table, create that other table and insert into it from the trigger:
create trigger triggername on tablename
after insert
as
begin
    declare @variablename_1 datatype;
    select @variablename_1 = column_name from inserted;
    insert into othertablename values (@variablename_1);
end
You could use IBM Change Data Capture; it will capture all the DDL and DML in the source database and replicate them appropriately in the target database.
http://www-01.ibm.com/software/data/infosphere/change-data-capture/
It seems that there are other solutions from other vendors as well; take a look at: http://en.wikipedia.org/wiki/Change_data_capture