TSQL copy one database to another - sql-server

I have two identical databases, let's call them A and B. What I want to do is to have two copy options:
1- Copy the whole database and overwrite everything from A to B with T-SQL.
2- With T-SQL, loop over each table row in A, check the last-modified-date field, and if it is greater than the last modified date of the corresponding row in B, copy and overwrite the whole row.
Please let me know if something is not clear. Any help is very appreciated, and thanks in advance.

Using Visual Studio 2010 Premium or Ultimate you can do a schema compare and create the new database on the server you need to. Getting the data over there is another matter entirely and is something you can perform with SSIS or linked server queries, especially considering you're simply pumping data from one side to another.
See this post for schema compare tools:
How can I find the differences between two databases?
Something we've recently completed is using CHECKSUM and table switching to bring in new data, keep the old, and compare both sets.
You would basically set up three tables for each table doing the switch: one for production, one _staging table, and one _old table that is an exact schema match. Three tables are needed because when you perform the switch (which is instantaneous), you need to switch into an empty table. You would pump the data into the staging table and execute a switch to change the pointer for the table definition from production to _old and from _staging to production. And if you have a checksum calculated on each row, you can compare and see what's new or different.
It's worked out well with our use of it, but it's a fair amount of prep work to make it happen, which could be prohibitive depending on your environment.
http://msdn.microsoft.com/en-us/library/ms190273.aspx
http://technet.microsoft.com/en-us/library/ms191160.aspx
http://msdn.microsoft.com/en-us/library/ms189788.aspx
Here's a sample query that we ran on GoDaddy.com's sql web editor:
CREATE TABLE TestTable (Val1 INT NOT NULL);
CREATE TABLE TestTable2 (Val1 INT NOT NULL);
INSERT INTO TestTable (Val1) VALUES (4);
SELECT * FROM TestTable;
ALTER TABLE TestTable SWITCH TO TestTable2;
SELECT * FROM TestTable2;
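The comparison step mentioned above can be sketched like this (a minimal example; the table and column names are hypothetical, and `BINARY_CHECKSUM` can produce collisions, so `HASHBYTES` is safer when correctness matters more than speed):

```sql
-- Hypothetical tables: Prod holds current data, Prod_Staging the newly loaded set.
-- BINARY_CHECKSUM over the non-key columns gives a cheap per-row fingerprint.
SELECT s.Id,
       CASE WHEN p.Id IS NULL THEN 'new' ELSE 'changed' END AS RowStatus
FROM Prod_Staging AS s
LEFT JOIN Prod AS p
       ON p.Id = s.Id
WHERE p.Id IS NULL
   OR BINARY_CHECKSUM(p.Col1, p.Col2) <> BINARY_CHECKSUM(s.Col1, s.Col2);
```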

How to use BULK INSERT to move data between two tables?

How to use BULK INSERT to move data between two tables? I can't find a tutorial with examples. I have 10 million records to move, on SQL Server 2012 SP3. It is a one-time action. I can't use xp_cmdshell and I can't use SSIS. I have to move the data in batches; it is going to be a night job. I say "move", but I don't have to delete the records in the source. The target table already exists, I don't have to check any constraints, and there are NO foreign keys.
It is very easy to shift data from one database to another, as long as both are located on the same server (as in your case).
As you want to use a script and you want to copy records from one table to another you might use this:
INSERT INTO TargetDB.dbo.TableName(col1,col2,col3...)
SELECT col1,col2, col3 ... FROM SourceDB.dbo.TableName
This would copy all rows from here to there.
In your question you do not provide enough information, but you want to use a script. The above is a script...
If you have existing data you should read about MERGE
If Target and Source do not have the same structure, you can easily adapt the SELECT to return exactly the set you need to insert
If you do not need to copy all rows, just add a WHERE clause
If the user you connect with does not have the necessary rights, ask the admin. But - from your question and comments - I take it that this will be applied by an admin anyway...
If the databases live on two different servers, you might read about linked servers
And finally you might read about import and export, which is widely supported by SSMS...
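The MERGE suggestion above can be sketched as follows (a minimal example with hypothetical table and column names; it upserts rows from source to target keyed on `Id`):

```sql
-- Update rows that already exist in the target, insert the ones that don't.
MERGE TargetDB.dbo.TableName AS t
USING SourceDB.dbo.TableName AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Col1 = s.Col1, t.Col2 = s.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1, Col2) VALUES (s.Id, s.Col1, s.Col2);
```

Note that a single MERGE over 10 million rows is one large transaction; for a night job it is usually combined with a batching loop like the one discussed for the copy case.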

SQL temp table to point to real table - using SQL Server 2014

I am working on archiving data in my database being that it's getting really heavy and slowing down the server.
I have a script running automatically every day to move that day's data to a file.
All my existing selects must change now. Instead of selecting from the regular table it could be either the archived file or the regular one. I don't want to have a variable name defining which table to select from because then all my existing stored procedures must turn into dynamic SQL.
So I was going to have a temp table that gets filled either with my archived data or with my current data from the current table. And then I would select from the temp table.
The problem is that if it's the current I don't want to have to select from that table to put it into a temp table. It's a heavy table and I could only select from it with where clauses. Since my stored procedure uses the table multiple times with different where clauses I would have to dump the whole table into my temp table. This is affecting the wait time for the customer.
I thought maybe to have the temp table just pointing to the real table instead of selecting from it.
Is this possible in SQL Server 2014?
If not any ideas?
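For reference, the approach described above can be sketched like this (all table, column, and parameter names are hypothetical). Note that it still materializes the rows into the temp table, which is exactly the cost the question is trying to avoid for the live data:

```sql
-- Fill one temp table from either the archive or the live table, then let the
-- rest of the procedure query #Work with its various WHERE clauses.
CREATE TABLE #Work (Id INT, OrderDate DATETIME, Amount MONEY);

IF @UseArchive = 1
    INSERT INTO #Work (Id, OrderDate, Amount)
    SELECT Id, OrderDate, Amount FROM dbo.Orders_Archive;
ELSE
    INSERT INTO #Work (Id, OrderDate, Amount)
    SELECT Id, OrderDate, Amount FROM dbo.Orders;

SELECT COUNT(*) FROM #Work;  -- stand-in for the existing procedure logic
```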

sql server to dump big table into other table

I'm currently changing the Id field of a table to be an IDENTITY field. This is simple: create a temp table, copy all the data to the temp table, adjust all the references to and from the table to point to the new temp table, drop the old table, and rename the temp table to the original name.
Now I've got the problem that the copy step is taking too long. Actually the table doesn't have too many entries (~7.5 million rows), but it still takes multiple hours to do this.
I'm currently moving the data with a query like this:
SET IDENTITY_INSERT MyTable_Temp ON
INSERT INTO MyTable_Temp ([Fields]) SELECT [Fields] FROM MyTable
SET IDENTITY_INSERT MyTable_Temp OFF
I've had a look at bcp in combination with cmdshell and a following BULK INSERT, but I don't like the solution of first writing the data to a temp-file and afterwards dumping it back into the new table.
Is there a more efficient way to copy or move the data from the old to the new table? And can this be done in "pure" T-SQL?
Keep in mind, the data is correct (no external sources involved) and no changes are being made to the data during transfer.
Your approach seems fair, but the transaction generated by the INSERT statement is too large, and that is why it takes so long.
My approach when dealing with this in the past, was to use a cursor and a batching mechanism.
Perform the operation for only 100000 rows at a time, and you will see major improvements.
After the copy is made you can rebuild your references and eventually remove the old table... and so on. Be careful to reseed your new table accordingly after the data is copied.
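The batching idea can be sketched against the tables from the question like this (a sketch, assuming `MyTable` has an integer `Id` column, indexed in both tables, to key the batches on; `[Fields]` is the placeholder from the question):

```sql
SET IDENTITY_INSERT MyTable_Temp ON;

-- Copy 100,000 rows per iteration; @LastId is the high-water mark, so the
-- job can be stopped and resumed without redoing work.
DECLARE @LastId INT = 0, @Rows INT = 1;
WHILE @Rows > 0
BEGIN
    INSERT INTO MyTable_Temp (Id, [Fields])
    SELECT TOP (100000) Id, [Fields]
    FROM MyTable
    WHERE Id > @LastId
    ORDER BY Id;

    SET @Rows = @@ROWCOUNT;
    IF @Rows > 0
        SELECT @LastId = MAX(Id) FROM MyTable_Temp;  -- cheap if Id is the PK

    CHECKPOINT;  -- under SIMPLE recovery, lets the log truncate between batches
END

SET IDENTITY_INSERT MyTable_Temp OFF;

-- Reseed so new identity values continue after the copied range.
DBCC CHECKIDENT ('MyTable_Temp', RESEED);
```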

Changing the column order/adding new column for existing table in SQL Server 2008

I have a situation where I need to change the order of the columns / add new columns to an existing table in SQL Server 2008. It is not letting me do this without dropping and recreating the table. But this is a production system, and the table contains data. I can take a backup of the data, drop the existing table, change the order / add the new columns, recreate it, and insert the backed-up data into the new table.
Is there any better way to do this without dropping and recreating? I think SQL Server 2005 allowed changing an existing table's structure without dropping and recreating.
Thanks
You can't really change the column order in a SQL Server 2008 table - and it's also largely irrelevant (at least it should be, in the relational model).
With the visual designer in SQL Server Management Studio, as soon as you make too big a change, the only reliable way for SSMS to do this is to re-create the table in the new format, copy the data over, and then drop the old table. There's really nothing you can do to change that.
What you can do at all times is add new columns to a table or drop existing columns from a table using SQL DDL statements:
ALTER TABLE dbo.YourTable
ADD NewColumn INT NOT NULL ........
ALTER TABLE dbo.YourTable
DROP COLUMN OldColumn
That'll work, but you won't be able to influence the column order. But again: for your normal operations, column order in a table is totally irrelevant - it's at best a cosmetic issue on your printouts or diagrams..... so why are you so fixated on a specific column order??
There is a way to do it by updating a SQL Server system table:
1) Connect to SQL server in DAC mode
2) Run queries that will update columns order:
update syscolumns
set colorder = 3
where name='column2'
But this way is not recommended, because you can destroy something in the DB.
One possibility would be to not bother reordering the columns in the table and simply modify it by adding the columns. Then, create a view which has the columns in the order you want -- assuming that the order is truly important. The view can be easily changed to reflect any ordering that you want. Since I can't imagine that the order would be important for programmatic applications, the view should suffice for those manual queries where it might be important.
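The view idea can be sketched like this (hypothetical table and column names):

```sql
-- Add the new column at the end of the physical table...
ALTER TABLE dbo.Customers ADD MiddleName NVARCHAR(50) NULL;
GO
-- ...and present the columns in the desired order through a view.
-- Existing queries can select from the view instead of the base table.
CREATE VIEW dbo.Customers_Ordered
AS
SELECT CustomerId, FirstName, MiddleName, LastName, CreatedAt
FROM dbo.Customers;
```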
As the other posters have said, there is no way without re-writing the table (but SSMS will generate scripts which do that for you).
If you are still in design/development, I certainly advise making the column order logical - nothing worse than having a newly added column become part of a multi-column primary key and having it nowhere near the other columns! But you'll have to re-create the table.
One time I used a 3rd party system which always sorted their columns in alphabetical order. This was great for finding columns in their system, but whenever they revved their software, our procedures and views became invalid. This was in an older version of SQL Server, though. I think since 2000, I haven't seen much problem with incorrect column order. When Access used to link to SQL tables, I believe it locked in the column definitions at time of table linking, which obviously has problems with almost any table definition changes.
I think the simplest way would be re-create the table the way you want it with a different name and then copy the data over from the existing table, drop it, and re-name the new table.
Would it perhaps be possible to script the table with all its data, then edit the script file in something like Notepad++, thus recreating the table with the new columns but the same data?
Just a suggestion, but it might take a while to accomplish this.
Unless you write yourself a small C# application that can work with the file and apply rules to it.
If only Notepad++ supported a find-and-move operation...

How long does it take to create an identity column?

I have a table that has 40 million records.
What's best (faster): create the column directly in that table, or create another table with an identity column and insert the data from the first?
If I create an identity column in the table that has 40 million records, is it possible to estimate how long it will take?
This kind of depends. Creating an identity column won't take that long (well ok this is relative to the size of the table), assuming you appended it to the end of the table. If you didn't, the server has to create a new table with the identity column at the desired position, export all the rows to the new table, and then change the table name. I am guessing that is what is taking so long.
I'm guessing it's blocked - did you use the GUI or a query window (do you know the SPID it's running under?)
Try these - let us know if they give results and you're not sure what to do:
USE master
SELECT * FROM sysprocesses WHERE blocked <> 0
SELECT * FROM sysprocesses WHERE status = 'runnable' AND spid <> @@SPID
If you used ALTER TABLE [...] ADD ... in a query window, it is pretty fast; in fact it would have finished long ago. If you used the Management Studio table designer, it is copying the table into a new table, dropping the old one, then renaming the new table as the old one. It will take a while, especially if you did not pre-grow the database and the log to accommodate the extra space needed. Because it is all one single transaction, it would take about another 16 hours to roll back if you stop it now.
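The query-window approach mentioned above can be sketched as follows (hypothetical table name; note that adding an identity column to a populated table still writes a value into every row, so on a 40-million-row table it should be run in a maintenance window):

```sql
-- Adds an auto-numbering column at the end of the table in a single statement,
-- avoiding the SSMS designer's copy-and-rename table rebuild.
ALTER TABLE dbo.BigTable
ADD RowId INT IDENTITY(1, 1) NOT NULL;
```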
Isn't it something you'll only have to do once, and therefore isn't really a problem how long it takes? (Assuming it doesn't take days...)
Can you not create a test copy of the database and create the column on that to see how long it takes?
I think a lot depends upon the hardware and which DBMS you are in. In my environment, creating a new table and copying the old data into it would take about 3 or 4 hours. I would expect the addition of an identity column to take around the same amount of time, just based on other experiences. I'm on Oracle with multiple servers on a SAN, so things can run faster than in a single server environment. You may just have to sit back and wait.
