Push data to another SQL Server

A C# program runs a SQL command against localhost and inserts 4 rows into localhost\default\DB\tblSN.
There are about 10 machines, each running one instance of the C# program, and so 10 local SQL instances. Each SQL instance therefore has a table called tblSN to which the C# command, once executed, adds 4 rows of data.
My question is: how can I push those 4 rows of tblSN from each of the 10 local SQL instances onto one centralized SQL instance? My goal is to view all the data scattered across the 10 local SQL instances in a single database.

For those interested in the answer: I used the approach described in "Insert data into remote database table from local database table" and managed to solve the problem.
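For reference, a minimal sketch of that approach, run on each local instance, assuming a linked server named CentralServer pointing at the central instance, a matching table CentralDB.dbo.tblSN there, and hypothetical columns Col1..Col4 (all names are placeholders):

-- Push the freshly inserted local rows to the central instance over a linked server.
-- @@SERVERNAME is stored alongside the data so you can tell which of the
-- 10 machines each row came from (this assumes an extra SourceInstance column).
INSERT INTO [CentralServer].[CentralDB].[dbo].[tblSN] (SourceInstance, Col1, Col2, Col3, Col4)
SELECT @@SERVERNAME, Col1, Col2, Col3, Col4
FROM [DB].[dbo].[tblSN];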

Related

How to post data from a database on one SQL instance to a database on a different SQL instance using SQL Server 2012 Express

I have a production SQL instance and database using SQL Server 2012 Express. I need to archive (on demand) from this database to a different database on a different SQL Server 2012 instance (on a different machine). The archiving process requires that the original data is deleted after successful archiving.
I need this to execute in real time and to be meaningfully quick for the user. Not much to ask really! The archiving is performed by a user from a browser screen calling a .Net service (in C#).
I am limited to a JavaScript client (Sencha ExtJS), C# on the back-end server, and SQL Server 2012 Express on both database servers.
I have been using a stored procedure on the source database which is called from a .Net service (C#).
The source data to be archived resides in a parent table and 1 or more child tables. The destination database contains tables of the same format. I am currently using Linked Servers to point from the source server to the destination server. I am using a stored procedure on the source database to execute a distributed transaction between the two databases, but the transaction can take 30 to 90 seconds to transfer 5,000 parent rows and 10,000 to 15,000 child rows.
Is there a better and faster way of doing this?
I have considered using table-valued parameters on both servers, but I have read that I cannot write to a table-valued parameter within a stored procedure and that I cannot pass them between SQL instances.
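For context, the distributed-transaction approach described above looks roughly like this (a minimal sketch with hypothetical names; the real procedure would handle the child tables the same way, and MSDTC must be running on both machines):

-- Copy the parent rows to the archive server over a linked server,
-- then delete the originals, all inside one distributed transaction.
BEGIN DISTRIBUTED TRANSACTION;

INSERT INTO [ArchiveServer].[ArchiveDB].[dbo].[Parent] (ParentId, Payload)
SELECT ParentId, Payload
FROM dbo.Parent
WHERE ArchiveFlag = 1;          -- hypothetical marker for rows to archive

DELETE FROM dbo.Parent
WHERE ArchiveFlag = 1;

COMMIT TRANSACTION;

Much of the 30 to 90 seconds in a setup like this tends to go into the distributed transaction and the remote-insert round trips, which is why batching the copy, or pushing the rows across in bulk (e.g. via BCP or SSIS) before deleting locally, is often faster.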

How to copy data from one table to another in SQL Server

I am trying to copy data from views on a trusted SQL Server 2012 to tables on a local instance of SQL Server on a scheduled transfer. What would be the best practice for this situation?
Here are the options I have come up with so far:
Write an executable program in C# or VB that deletes the existing local table, queries the data from the remote database, and then writes the results to tables in the local database. The executable would run as a scheduled task.
Use BCP to copy data to a file and then upload it into the local table.
Use SSIS
Note: The connection between local and remote SQL Server is very slow.
Since the transfers are scheduled, I suppose you want this data to be kept up to date.
My recommendation would be to use SSIS and schedule it using SQL Server Agent. If you wrote a C# program, the best outcome you could hope for is a program imitating SSIS. Moreover, SSIS makes it very easy to amend the workflow at any time.
Either way, to keep such a program/package up to date, you will have to answer an important question: is the source table updatable, or is it append-only like a log?
This question is important because it determines how you fetch new updates from the source table. For example, if the table holds logs, you will most probably use the primary key to detect new records; if not, you might look for a column representing the update date/time. If you have the authority to alter the source table, you might add a rowversion column, which represents the row version (the rowversion/timestamp type differs from datetime).
An SSIS package for this would mainly contain the following components (a T-SQL sketch of the corresponding queries follows the list):
Execute SQL Task to get the maximum value from the source table.
Execute SQL Task to get the value at which the destination table should start. You can get this value either by selecting the maximum value from the destination table or, if the table is pretty large, by storing that value in another table (a configuration table, for example).
Data Flow Task which moves the data from the source table, starting after the value fetched in step 2, up to the value fetched in step 1.
Execute SQL Task to update the new maximum value back to the configuration table, if you chose that technique.
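As a rough T-SQL sketch of the queries behind those components, assuming an inserts-only source table with an integer primary key Id and a configuration table dbo.SyncConfig (all names are hypothetical; ? is the parameter marker used by the SSIS OLE DB components):

-- Step 1: current maximum value in the source table.
SELECT MAX(Id) AS MaxSourceId FROM dbo.SourceTable;

-- Step 2: last value already transferred, read from the configuration table.
SELECT LastSyncedId FROM dbo.SyncConfig WHERE TableName = 'SourceTable';

-- Step 3: source query for the Data Flow, bounded by the two values above.
SELECT Id, Col1, Col2
FROM dbo.SourceTable
WHERE Id > ? AND Id <= ?;

-- Step 4: record the new high-water mark back in the configuration table.
UPDATE dbo.SyncConfig SET LastSyncedId = ? WHERE TableName = 'SourceTable';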
BCP can be used to export the data, which can then be compressed, transferred over the network, and imported into the local SQL instance.
BCP exports can also be split into smaller batches for easier management of the data.
https://msdn.microsoft.com/en-us/library/ms191232.aspx
https://technet.microsoft.com/en-us/library/ms190923(v=sql.105).aspx
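For illustration, the bcp commands might look like this (server, database, and table names are placeholders; -T uses a trusted connection, -n keeps SQL Server's native format, and -b sets the import batch size):

bcp SourceDB.dbo.tblData out tblData.bcp -S SourceServer -T -n
bcp LocalDB.dbo.tblData in tblData.bcp -S LocalServer -T -n -b 10000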

Creating a snapshot database to another SQL Server

I'm trying to save the values of several columns of one table to another table on a different server. I am using SQL Server. I would like to do this without running external programs that query from this database and insert the results into the new database. Is there any way to do this from within SQL Server Management Studio?
This is a recurring event that occurs every hour. I have tried scheduling maintenance tasks that execute custom T-SQL scripts but I'm having trouble getting the connection to the remote server.
Any help would be appreciated.
If you can set up the remote server as a linked server, you should be able to configure SQL Server Agent to execute jobs containing queries that access tables on both the local and the linked server. Remember that you might have to configure the access rights of the account used to run SQL Server Agent so that it has permission to read/write tables on both servers.
This practice might not be without issues, though, as this article discusses.
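As a sketch, such a job can also be created in T-SQL through the msdb stored procedures; the job name, schedule, and command below are placeholders (freq_subday_type = 8 with freq_subday_interval = 1 means "every hour"):

-- Create an hourly SQL Server Agent job that runs the cross-server copy.
EXEC msdb.dbo.sp_add_job         @job_name = N'Hourly snapshot copy';
EXEC msdb.dbo.sp_add_jobstep     @job_name = N'Hourly snapshot copy',
                                 @step_name = N'Copy rows',
                                 @subsystem = N'TSQL',
                                 @database_name = N'SourceDB',
                                 @command = N'INSERT INTO [RemoteServer].[RemoteDB].[dbo].[tblSnapshot]
                                              SELECT Col1, Col2 FROM dbo.tblSource;';
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'Hourly snapshot copy',
                                 @name = N'Every hour',
                                 @freq_type = 4, @freq_interval = 1,
                                 @freq_subday_type = 8, @freq_subday_interval = 1;
EXEC msdb.dbo.sp_add_jobserver   @job_name = N'Hourly snapshot copy';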
You can use a four-part name like this:
INSERT INTO [InstanceName].[DatabaseName].[SchemaName].[TableName]
SELECT * FROM [SourceInstanceName].[SourceDatabaseName].[SourceSchemaName].[SourceTableName]
But first you will have to set up the remote server as a linked server, as described here:
https://msdn.microsoft.com/en-us/library/aa560998.aspx
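For reference, the linked server itself can also be created in T-SQL (RemoteServer is a placeholder for the remote instance's network name):

-- Register the remote instance as a linked server and map logins to it.
EXEC sp_addlinkedserver   @server = N'RemoteServer', @srvproduct = N'SQL Server';
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'RemoteServer', @useself = N'TRUE';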

SQL Server data transfer

In SQL Server, I have a data source server which has 22 databases, and in each database there are 5 tables. Every database has the same tables, containing different data separated by year.
I want to collect all of this data into one single database. The destination database will have only 5 tables, while the source has 22 x 5 = 110 tables. I'm using the Import/Export Wizard to transfer the data, but it takes too long and is really annoying: for 110 tables I would have to run the Import/Export Wizard over and over.
Is there a simple way, tool to do this? There is no linked server between servers.
Here is a simple figure that explains my situation (figure omitted).
Posting my comment as an answer:
Back up each database, restore it to server 2, and then insert the records across using a simple INSERT ... SELECT statement; then drop the restored database and restore the next. You should be able to script this to work unattended. Even the creation of all the backups could be scripted, so only a single run is needed for all databases.
Your other option (if space permits) is to create a new database on server 1 (potentially a restore of the database from server 2, if it already has data in it), then import all records into this new database, then back up this database and restore it on server 2.
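A minimal sketch of scripting the backup half of that in a single run; the database-name pattern and backup path are assumptions:

-- Generate and run a BACKUP statement for every yearly database in one pass.
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'BACKUP DATABASE ' + QUOTENAME(name) +
               N' TO DISK = N''C:\Backups\' + name + N'.bak'';' + CHAR(10)
FROM sys.databases
WHERE name LIKE N'SalesData%';   -- hypothetical naming pattern for the 22 DBs
EXEC sp_executesql @sql;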
It depends on several things, like how often you want the data to be moved and whether it will be changed in the destination DBs.
There are 4 methods of high availability in SQL Server. One of them will surely fit your scenario (probably merge replication).
http://msdn.microsoft.com/en-us/library/ms190202.aspx

Table searches fast on local SQL Server but slow on host SQL Server

I have a SQL Server 2008 database with a table that contains about 100,000 rows. I have set up all the correct indexes, primary keys, and Full-Text Search, and made sure it's fine-tuned.
I created a stored procedure that searches an nvarchar column in the table; the SP is long and has multiple SELECT/INSERT statements.
Now, when I run the SP on my local SQL Server I get the result in one second. Then I backed up the database, restored it on another machine, tried the same search, and got the result in one second as well.
My problem is that when I create the database on my host's shared SQL Server, import all the data from my SQL Server, and run the search, I get the result in 7 seconds, although I made sure I have the same indexes, primary keys, and FTS working.
The only difference in my database on the host's SQL Server is that I created the schema using a script and then imported all the data using the Import/Export Wizard. So, to check whether this is what's causing the problem, I created a new database on my machine and imported the data using the wizard, and I got the search result in 3 seconds!
I checked the execution plan of the SP in the database on the host's SQL Server and found one index scan taking 49% of the cost, while on my machine it was 0%.
Can somebody please point out why this is happening?
Thanks.
EDIT:
Now when I run the execution plan for both servers I get exactly the same plan and costs, but still the speed difference is huge (1 second and 7 seconds).
Although I never got to the bottom of the strange behavior in SQL Server, I ended up replacing the search in this stored procedure with a Full-Text Search query, which gave a faster result.
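For comparison, this is the kind of rewrite involved (table and column names are placeholders): a LIKE predicate with a leading wildcard cannot seek an ordinary index, whereas CONTAINS is answered by the full-text index:

-- Slow: the leading wildcard forces a scan of the nvarchar column.
SELECT Id, Title FROM dbo.Items WHERE Title LIKE N'%widget%';

-- Faster: a prefix term answered by the full-text index.
SELECT Id, Title FROM dbo.Items WHERE CONTAINS(Title, N'"widget*"');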
