Insert Data over a linked server or different database - sql-server

I would like to know what would be a safe way to insert data over a linked/local server into an empty copy of the database (this copy will have the same table structure) using INSERT INTO ... SELECT.
I will be getting data from the local server, and this data will be inserted over a linked server or local server into an archive DB.
This process needs to happen through dynamic SQL, as I am writing a generic script.
If you have a sample script that you can share, I would appreciate it.
EXAMPLE
EXEC [server_name].[OI_OAF_Archive_20210218_20210222].[dbo].sp_executesql N'SET IDENTITY_INSERT [OI_OAF_Archive_20210218_20210222].dbo.[ENT_Entry] ON'
EXEC [server_name].[OI_OAF_Archive_20210218_20210222].[dbo].sp_executesql N'ALTER TABLE [OI_OAF_Archive_20210218_20210222].dbo.[ENT_Entry] NOCHECK CONSTRAINT ALL'

IF EXISTS (SELECT * FROM [OI_OAF_Archive_20210218_20210222].INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'ENT_Entry')
BEGIN
    PRINT 'Inserting into the table ENT_Entry'
    INSERT INTO [server_name].[OI_OAF_Archive_20210218_20210222].dbo.[ENT_Entry]
        (entryID, details, createdByID, createdDate, lastModifiedByID, lastModifiedDate, localityID, refNumber, entryDate, gps, reloadEscStep, createdAt, reloadOccurrenceChecklistRule)
    SELECT entryID, details, createdByID, createdDate, lastModifiedByID, lastModifiedDate, localityID, refNumber, entryDate, gps, reloadEscStep, createdAt, reloadOccurrenceChecklistRule
    FROM OPENQUERY([server_name], 'SELECT entryID, details, createdByID, createdDate, lastModifiedByID, lastModifiedDate, localityID, refNumber, entryDate, gps, reloadEscStep, createdAt, reloadOccurrenceChecklistRule FROM TMP_Archive_ENT_Entry_70CDC91A');
END

EXEC [server_name].[OI_OAF_Archive_20210218_20210222].[dbo].sp_executesql N'SET IDENTITY_INSERT [OI_OAF_Archive_20210218_20210222].dbo.[ENT_Entry] OFF'
EXEC [server_name].[OI_OAF_Archive_20210218_20210222].[dbo].sp_executesql N'ALTER TABLE [OI_OAF_Archive_20210218_20210222].dbo.[ENT_Entry] CHECK CONSTRAINT ALL'
This example is where I insert the data from the archive server, but I want to insert it from the local server into the archive server. Would that be possible?
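A minimal sketch of the push direction, assuming a linked server named [ArchiveServer] and reusing the table and column names from the example above (the variable values would be filled in by your generic script; the column list here is abbreviated):

DECLARE @archiveDB sysname = N'OI_OAF_Archive_20210218_20210222';
DECLARE @table sysname = N'ENT_Entry';
DECLARE @cols nvarchar(max) = N'entryID, details, createdByID'; -- shortened column list for the sketch
DECLARE @sql nvarchar(max);

-- A four-part-name INSERT ... SELECT reads locally and writes to the linked server.
SET @sql = N'INSERT INTO [ArchiveServer].' + QUOTENAME(@archiveDB) + N'.dbo.' + QUOTENAME(@table)
         + N' (' + @cols + N')'
         + N' SELECT ' + @cols
         + N' FROM dbo.' + QUOTENAME(@table) + N';';

EXEC sp_executesql @sql;

The IDENTITY_INSERT and constraint toggling would still be executed remotely, as in the example above; note that SET IDENTITY_INSERT is session-scoped, so verify on your setup that the remote setting actually applies to the distributed insert.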

Related

Delete all data in tables except system versioned (history) tables

I am trying to delete all data in all tables except system-versioned history tables (because we can't delete from those). This is part of an integration test that clears data before running each test.
The following statement returns 1 for history tables (the ones we cannot run DELETE FROM against):
SELECT OBJECTPROPERTY(OBJECT_ID('table_name'), 'TableTemporalType')
So my attempt was as follows:
-- Remove check constraints
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
-- Delete data
EXEC sp_MSforeachtable 'SET QUOTED_IDENTIFIER ON; IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableTemporalType'') != 1 DELETE FROM ?'
-- Restore check constraints
EXEC sp_MSForEachTable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'
However, I am still getting the error:
Cannot delete rows from a temporal history table 'dbo.table_name'.
I am not sure what I am doing wrong!
Any hints are appreciated!
I would do this by generating some dynamic SQL using sys.tables. Something like this should be pretty close to what you are trying to do.
declare @sql nvarchar(max) = ''

select @sql = @sql + 'delete ' + quotename(schema_name(schema_id)) + '.' + quotename(name) + ';'
from sys.tables
where temporal_type <> 1 -- 1 = history table; non-temporal (0) and system-versioned (2) tables allow deletes

select @sql
--uncomment the line below when you are ready to blow away all your data
--exec sp_executesql @sql
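If you also want the constraint handling from your original attempt wrapped around the generated deletes, a combined sketch (assuming re-enabling with validation succeeds once the rows are gone):

exec sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'

declare @sql nvarchar(max) = ''
select @sql = @sql + 'delete ' + quotename(schema_name(schema_id)) + '.' + quotename(name) + ';'
from sys.tables
where temporal_type <> 1 -- skip history tables only

exec sp_executesql @sql

exec sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'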

Is there a way I can get an alert whenever a table gets records in SQL Server?

I have a table in SQL Server which occasionally gets data from a linked server, and then I have to do activities on it.
The problem is that there is no way to check whether the data has been inserted into the table (the table is always truncated after performing the activity, so the next time data is pushed the table is already empty). I manually check daily whether data has been inserted or not.
What I want is an automatic alert to my email (I already have db_mail configured and working) whenever data is pushed into the table.
I have sa admin and complete privileges on the database and also on Windows Server 2012 R2.
You can do this with a trigger, but you will have to do some preparation with privileges so that the executor (the login inserting the records into your tracking table) can send email correctly:
CREATE TRIGGER dbo.TrackingTableNameAfterInsert ON TrackingTable
AFTER INSERT
AS
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'YourConfiguredProfile',
        @recipients = 'youremail@mail.com',
        @subject = 'Records were inserted on TrackingTable',
        @body = ''
END
You might want to encapsulate the email sending in a stored procedure and configure its permissions there.
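A minimal sketch of such a wrapper, with a hypothetical procedure name (depending on your setup, the cross-database call into msdb may additionally require the TRUSTWORTHY database option or module signing):

CREATE PROCEDURE dbo.NotifyTrackingTableInsert
WITH EXECUTE AS OWNER -- callers only need EXECUTE on this proc rather than rights in msdb
AS
BEGIN
    SET NOCOUNT ON;
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'YourConfiguredProfile',
        @recipients = 'youremail@mail.com',
        @subject = 'Records were inserted on TrackingTable',
        @body = '';
END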
In regards to the following:
...table is always truncated after performing the activity so next time
when data is pushed table is already empty...
You can create a historical table and use a trigger to also copy inserted records into it, so a TRUNCATE or DROP of the original one won't affect the copied records.
CREATE TABLE TrackingTableMirror (
    /*Same columns and types*/
    InsertedDate DATETIME DEFAULT GETDATE())
GO
CREATE TRIGGER dbo.TrackingTableInsertMirror ON TrackingTable
AFTER INSERT
AS
BEGIN
    INSERT INTO TrackingTableMirror (
        /*Column list*/)
    SELECT
        /*Column list*/
    FROM inserted AS I
END
This way you can check all records on this mirrored table and not the volatile one (and avoid all the email sending).
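For example, a quick check against the mirror (using the InsertedDate column from the sketch above):

SELECT *
FROM TrackingTableMirror
WHERE InsertedDate >= DATEADD(DAY, -1, GETDATE()) -- rows that arrived in the last day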
1) Create Profile and Account
You need to create a profile and account using the Configure Database Mail Wizard, which can be accessed from the Configure Database Mail context menu of the Database Mail node under the Management node. This wizard is used to manage accounts, profiles, and Database Mail global settings.
2) Run Query
sp_CONFIGURE 'show advanced', 1
GO
RECONFIGURE
GO
sp_CONFIGURE 'Database Mail XPs', 1
GO
RECONFIGURE
GO
3) Send a test email
USE msdb
GO
EXEC sp_send_dbmail @profile_name='yourprofilename',
@recipients='test@example.com',
@subject='Test message',
@body='This is the body of the test message.
Congrats! Database Mail was received by you successfully.'
To send through a table of email addresses:
DECLARE @email_id NVARCHAR(450), @id BIGINT, @max_id BIGINT, @query NVARCHAR(1000)

SELECT @id = MIN(id), @max_id = MAX(id) FROM [email_adresses]

WHILE @id <= @max_id
BEGIN
    SELECT @email_id = email_id
    FROM [email_adresses]
    WHERE id = @id -- without this filter, every iteration would pick an arbitrary row

    SET @query = 'sp_send_dbmail @profile_name=''yourprofilename'',
    @recipients=''' + @email_id + ''',
    @subject=''Test message'',
    @body=''This is the body of the test message.
    Congrats! Database Mail was received by you successfully.'''

    EXEC (@query) -- EXEC @query without parentheses would treat the whole string as a procedure name

    SELECT @id = MIN(id) FROM [email_adresses] WHERE id > @id
END
4) Trigger Code
CREATE TRIGGER [dbo].[Customer_INSERT_Notification]
ON [dbo].[Customers]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Note: a multi-row insert fires this trigger once, so only one
    -- of the inserted IDs ends up in @CustomerId.
    DECLARE @CustomerId INT
    SELECT @CustomerId = INSERTED.CustomerId
    FROM INSERTED

    DECLARE @body VARCHAR(500) = 'Customer with ID: ' + CAST(@CustomerId AS VARCHAR(5)) + ' inserted.'

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'Email_Profile'
        ,@recipients = 'recipient@gmail.com'
        ,@subject = 'New Customer Record'
        ,@body = @body
        ,@importance = 'HIGH'
END
I referred to this link.

Table already exists in a new SQL Server database

I'm writing a script to create a bunch of tables in SQL Server. As I write the script, I want to create and delete the database. The problem is that I get an error saying that the object already exists.
Here is my example script
DECLARE @db_name varchar(20);
DECLARE @command varchar(100);

SET @db_name = 'testdb';
SET @command = 'DROP DATABASE ' + @db_name

IF EXISTS (SELECT * FROM sys.databases WHERE name = @db_name)
    exec(@command)

SET @command = 'CREATE DATABASE ' + @db_name
EXEC(@command)

--listing databases
SELECT name from master.dbo.sysdatabases

-- using our database
SET @command = 'USE ' + @db_name
EXEC(@command)

PRINT 'listing tables'
SET @command = 'SELECT table_name FROM ' + @db_name + '.INFORMATION_SCHEMA.TABLES WHERE table_type = ''BASE TABLE'''
EXEC(@command)

CREATE TABLE stuff(
    name VARCHAR(30) PRIMARY KEY NOT NULL,
    weight INT,
    quantity INT)
and the output I get is
name
------------
master
tempdb
model
msdb
testdb
SW
(6 rows affected)
listing tables
table_name
Error:
Msg 2714, Level 16, State 6, Server Orange, Line 22
There is already an object named 'stuff' in the database.
I run this on a Linux Mint machine with a freshly installed SQL Server, using sqlcmd. I guess I could put a drop/delete command before creating the table, but this shouldn't happen to begin with. What is going on here?
When you execute a USE statement from dynamic SQL, the database context reverts back to the original database context (master?) when the executed batch completes. You'll need to add a USE to the CREATE TABLE script and execute it using dynamic SQL too:
SET @command = N'USE ' + QUOTENAME(@db_name) + N';
CREATE TABLE stuff(
    name VARCHAR(30) PRIMARY KEY NOT NULL,
    weight INT,
    quantity INT);
';
EXEC(@command);
Alternatively, wrap the CREATE TABLE statement inside an object existence check, like this:
IF OBJECT_ID('dbo.stuff', 'U') IS NULL
BEGIN
    CREATE TABLE stuff(
        name VARCHAR(30) PRIMARY KEY, -- NOT NULL is not needed, as a primary key column does not allow NULL
        weight INT,
        quantity INT)
END

Update value in multiple tables in SQL Server

I have around 50 tables in my database. In all tables where there is a userid column (not all the tables contain this column), I need to change its value from "User1" to "User2". This query will be reused many times with changing values of "User1" and "User2".
You could create a stored procedure to do this, like:
create procedure sp_update_table(@tbl_name varchar(30))
as
begin
    DECLARE @sql AS NVARCHAR(MAX)
    SET @sql = N'UPDATE ' + QUOTENAME(@tbl_name) +
               N' SET userid=''User2'' WHERE userid=''User1'''
    EXEC sp_executesql @sql
end
then just call your procedure as many times as you want, passing the table name like
exec sp_update_table 'mytable'
EDIT:
You can easily find all tables which contain a userid column from INFORMATION_SCHEMA.COLUMNS, as below:
Use [DatabaseName]
Select table_name From INFORMATION_SCHEMA.COLUMNS Where column_name = 'userid'
Write 50 update statements:
UPDATE <TABLE NAME>
SET userid='User2'
WHERE userid='User1'
It should be easy enough to generate these in a simple text editor and then paste into SQL Server Management Studio.
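If you'd rather not hand-write them, a sketch that generates the statements from the catalog (assuming the literals 'User1'/'User2' as above and that every userid column is a string type):

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql + N'UPDATE ' + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME)
            + N' SET userid = ''User2'' WHERE userid = ''User1'';' + CHAR(10)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'userid';

PRINT @sql; -- inspect the generated statements first
-- EXEC sp_executesql @sql; -- then run them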

Linked Server Insert-Select Performance

Assume that I have a table on my local which is Local_Table and I have another server and another db and table, which is Remote_Table (table structures are the same).
Local_Table has data, Remote_Table doesn't. I want to transfer data from Local_Table to Remote_Table with this query:
Insert into RemoteServer.RemoteDb..Remote_Table
select * from Local_Table (nolock)
But the performance is quite slow.
However, when I use SQL Server import-export wizard, transfer is really fast.
What am I doing wrong? Why is it fast with Import-Export wizard and slow with insert-select statement? Any ideas?
The fastest way is to pull the data rather than push it. When the tables are pushed, every row requires a connection, an insert, and a disconnect.
If you can't pull the data because you have a one-way trust relationship between the servers, the workaround is to construct the entire table as a giant T-SQL statement and run it all at once.
DECLARE @xml XML
SET @xml = (
    SELECT 'insert Remote_Table values (' + '''' + isnull(first_col, 'NULL') + ''',' +
    -- repeat for each col
    '''' + isnull(last_col, 'NULL') + '''' + ');'
    FROM Local_Table
    FOR XML path('')
) -- This concatenates all the rows into a single XML object; the empty path keeps it from wrapping <colname></colname> around each value

DECLARE @sql AS VARCHAR(max)
SET @sql = 'set nocount on;' + cast(@xml AS VARCHAR(max)) + 'set nocount off;' -- Converts the XML back to a long string

EXEC ('use RemoteDb;' + @sql) AT RemoteServer
It seems like it's much faster to pull data from a linked server than to push data to a linked server: Which one is more efficient: select from linked server or insert into linked server?
Update: My own, recent experience confirms this. Pull if possible -- it will be much, much faster.
Try this on the other server:
INSERT INTO Local_Table
SELECT * FROM RemoteServer.RemoteDb..Remote_Table
The Import/Export wizard will essentially be doing this as a bulk insert, whereas your code is not.
Assuming that you have a clustered index on the remote table, make sure that you have the same clustered index on the local table, set trace flag 610 globally on your remote server, and make sure the remote database is in simple or bulk-logged recovery mode.
If your remote table is a heap (which will speed things up anyway), make sure your remote database is in simple or bulk-logged mode, and change your code to read as follows:
INSERT INTO RemoteServer.RemoteDb..Remote_Table WITH(TABLOCK)
SELECT * FROM Local_Table WITH (nolock)
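For reference, the remote-side settings mentioned above can be applied roughly like this (hypothetical database name; trace flag 610 has no effect from SQL Server 2016 onward, where its behavior became the default):

-- run on the remote server
DBCC TRACEON (610, -1); -- -1 enables the flag globally
ALTER DATABASE RemoteDb SET RECOVERY BULK_LOGGED; -- or SIMPLE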
The reason it's so slow to insert into the remote table from the local table is that it inserts a row, checks that it inserted, then inserts the next row, checks that it inserted, and so on.
Don't know if you figured this out or not, but here's how I solved this problem using linked servers.
First, I have a LocalDB.dbo.Table with several columns:
IDColumn (int, PK, Auto Increment)
TextColumn (varchar(30))
IntColumn (int)
And I have a RemoteDB.dbo.Table that is almost the same:
IDColumn (int)
TextColumn (varchar(30))
IntColumn (int)
The main difference is that the remote IDColumn isn't set up as an identity column, so that I can do inserts into it.
Then I set up a trigger on the remote table that fires on delete:
Create Trigger Table_Del
On Table
After Delete
AS
Begin
    Set NOCOUNT ON;
    Insert Into Table (IDColumn, TextColumn, IntColumn)
    Select IDColumn, TextColumn, IntColumn from MainServer.LocalDB.dbo.table L
    Where not exists (Select * from Table R Where L.IDColumn = R.IDColumn)
END
Then when I want to do an insert, I do it like this from the local server:
Insert Into LocalDB.dbo.Table (TextColumn, IntColumn) Values ('textvalue', 123);
Delete From RemoteServer.RemoteDB.dbo.Table Where IDColumn = 0;
--And if I want to clean the table out and make sure it has all the most up to date data:
Delete From RemoteServer.RemoteDB.dbo.Table
By triggering the remote server to pull the data from the local server and then do the insert, I was able to turn a job that took 30 minutes to insert 1258 lines into a job that took 8 seconds to do the same insert.
This does require a linked server connection on both sides, but after that's set up it works pretty good.
Update:
So in the last few years I've made some changes, and have moved away from the delete trigger as a way to sync the remote table.
Instead I have a stored procedure on the remote server that has all the steps to pull the data from the local server:
CREATE PROCEDURE [dbo].[UpdateTable]
-- Add the parameters for the stored procedure here
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
--Fill Temp table
Insert Into WebFileNamesTemp Select * From MAINSERVER.LocalDB.dbo.WebFileNames
--Fill normal table from temp table
Delete From WebFileNames
Insert Into WebFileNames Select * From WebFileNamesTemp
--empty temp table
Delete From WebFileNamesTemp
END
And on the local server I have a scheduled job that does some processing on the local tables, and then triggers the update through the stored procedure:
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='true'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='true'
EXEC REMOTESERVER.RemoteDB.dbo.UpdateTable
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='false'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='false'
If you must push data from the source to the target (e.g., for firewall or other permissions reasons), you can do the following:
In the source database, convert the recordset to a single XML string (i.e., multiple rows and columns combined into a single XML string).
Then push that XML over as a single row (as a varchar(max), since XML isn't allowed over linked databases in SQL Server).
DECLARE @xml XML
SET @xml = (select * from SourceTable FOR XML path('row'))
Insert into TempTargetTable values (cast(@xml AS VARCHAR(max)))
In the target database, cast the varchar(max) as XML and then use XML parsing to turn that single row and column back into a normal recordset.
DECLARE @X XML = (select '<toplevel>' + ImportString + '</toplevel>' from TempTargetTable)
DECLARE @iX INT
EXEC sp_xml_preparedocument @iX output, @X

insert into TargetTable
SELECT [col1],
       [col2]
FROM OPENXML(@iX, '//row', 2)
WITH ([col1] [int],
      [col2] [varchar](128))

EXEC sp_xml_removedocument @iX
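As an aside, a sketch of the same parse using the XML type's nodes() method instead of the legacy OPENXML pair (same hypothetical tables and columns; a fragment of <row> elements casts to XML directly, so the <toplevel> wrapper isn't needed):

DECLARE @X XML = (SELECT CAST(ImportString AS XML) FROM TempTargetTable)

INSERT INTO TargetTable
SELECT r.value('(col1)[1]', 'int'),
       r.value('(col2)[1]', 'varchar(128)')
FROM @X.nodes('/row') AS t(r)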
I've found a workaround. Since I'm not a big fan of GUI tools like SSIS, I've reused a bcp script to unload the table into CSV and load it back. Yeah, it's an odd state of affairs that bulk operations are supported for files but not for tables. Feel free to edit the following script to fit your needs:
exec xp_cmdshell 'bcp "select * from YourLocalTable" queryout C:\CSVFolder\Load.csv -w -T -S .'
exec xp_cmdshell 'bcp YourAzureDBName.dbo.YourAzureTable in C:\CSVFolder\Load.csv -S yourdb.database.windows.net -U youruser@yourdb.database.windows.net -P yourpass -q -w'
Pros:
No need to define table structures every time.
I've tested it, and it worked way faster than inserting directly through the linked server.
It's easier to manage than XML (which is limited to varchar(max) length anyway).
No need for an extra layer of abstraction (tools like SSIS).
Cons:
It uses the external tool bcp through the xp_cmdshell interface.
Table properties will be lost after exporting/importing CSV (i.e. data type, nullability, length, separator within a value, etc.).
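One possible mitigation for that last point, a sketch reusing the same bcp/xp_cmdshell setup: have bcp emit a format file once, and pass it on import so column types and lengths survive the round trip:

exec xp_cmdshell 'bcp YourAzureDBName.dbo.YourAzureTable format nul -w -f C:\CSVFolder\Load.fmt -S yourdb.database.windows.net -U youruser@yourdb.database.windows.net -P yourpass'
exec xp_cmdshell 'bcp YourAzureDBName.dbo.YourAzureTable in C:\CSVFolder\Load.csv -f C:\CSVFolder\Load.fmt -S yourdb.database.windows.net -U youruser@yourdb.database.windows.net -P yourpass -q'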
