I have two servers: the first with SQL Server 2005 and the second with SQL Server 2000. I want to insert data from the 2005 server into the 2000 server, and I want to do it asynchronously (without distributed transactions, because SAVE TRANSACTION is used).
Once the information is inserted into the tables on the 2000 server, some INSTEAD OF triggers fire to process it.
In that scenario I decided to use Service Broker. I have a stored procedure that inserts information from one server to the other, and it works perfectly.
But when I call this procedure from the procedure that processes messages on the target queue, it fails, and I don't know why!
I know the structure itself works, because the same setup (queues and stored procedures) copies data correctly from one database to another on the same SQL 2005 server.
So it fails only between machines. Does anyone know why, or how to get more information about the cause of the failure? Or another way to insert data asynchronously? (I can't use SQL Agent because I want to insert the information more frequently than once a minute.)
The usual issue is that the Service Broker activated procedure uses WAITFOR, and WAITFOR is incompatible with distributed transactions. The only solution is to get rid of the WAITFOR(RECEIVE) and use an ordinary RECEIVE instead.
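A minimal sketch of what that change looks like, assuming a queue named TargetQueue; the names here are illustrative placeholders, not from the original post:

```sql
-- Plain RECEIVE: returns immediately (possibly zero rows)
-- instead of blocking inside the transaction like WAITFOR(RECEIVE).
DECLARE @handle   UNIQUEIDENTIFIER,
        @msg_type SYSNAME,
        @body     VARBINARY(MAX);

BEGIN TRANSACTION;

RECEIVE TOP (1)
    @handle   = conversation_handle,
    @msg_type = message_type_name,
    @body     = message_body
FROM dbo.TargetQueue;

IF @handle IS NOT NULL
BEGIN
    -- Process the message here, e.g. call the cross-server insert procedure.
    IF @msg_type = 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
        END CONVERSATION @handle;
END

COMMIT TRANSACTION;
```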
Consider using a linked server instead of service broker. With a linked server, you can:
insert into LinkedServer.The2000Db.dbo.The2000Table
(col1, col2, col3)
select col1, col2, col3
from The2005Table
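If the linked server doesn't exist yet, it can be registered with sp_addlinkedserver; all names below (LinkedServer, the host name, the remote credentials) are placeholders for this sketch:

```sql
-- Register the 2000 server as a linked server on the 2005 server.
EXEC sp_addlinkedserver
    @server     = N'LinkedServer',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'The2000ServerHostName';

-- Map the local login to a remote login (adjust to your security setup).
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'LinkedServer',
    @useself     = N'FALSE',
    @rmtuser     = N'remoteUser',
    @rmtpassword = N'remotePassword';
```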
Related
I have read-only access to 2 SQL Server databases used by our company's primary application. For reporting and testing purposes, I would like to sync those databases to 2 databases I have on-prem at our corporate offices. Because it's a hosted application on Azure, replication is not an option. I want to run queries against both databases; I can query one at a time but can't query both.
I can link to one server via SSMS with linked servers, but not both, because they have the same server name. ODBC might be a solution for that, but we will see.
The issue here is that I want to run a task that copies the data from the hosted SQL Server to the local server, without replication as an option.
I have access to SQL Server 2012 and SQL Server 2019, if that matters.
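On the same-name problem: a linked server's local name doesn't have to match the remote host's server name. You can register each remote under a distinct local alias and point @datasrc at its actual address. Everything below (aliases, host names, databases) is a placeholder sketch:

```sql
-- Two linked servers with distinct local names, pointing at
-- hosts that both report the same server name.
EXEC sp_addlinkedserver
    @server     = N'HostedDb1',          -- local alias, your choice
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'host1.example.com';

EXEC sp_addlinkedserver
    @server     = N'HostedDb2',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'host2.example.com';

-- Now both remotes can appear in one query.
SELECT a.*, b.*
FROM HostedDb1.SomeDb.dbo.SomeTable AS a
JOIN HostedDb2.OtherDb.dbo.OtherTable AS b
    ON a.id = b.id;
```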
I have done manual synchronization tasks before. For example, our phone system has a large call log table in MySQL, and I periodically have to replicate it over to a SQL Server used for reporting. I basically create a synchronization log table to record when the last synchronization occurred. Each time the stored procedure runs, it grabs all records between the last synchronization and now. You can expand this example to synch multiple tables (i.e. add server_name and table_name columns to db_synch_log).
The problem I see is that you want to replicate an entire DB. If you're updating more than just a few tables, this method is not scalable IMO. For my purposes, I was replicating 3 tables and nothing else.
--You need a log table to record when the last synchronization was performed.
CREATE TABLE db_synch_log (
    last_synch_date datetime
);

--Create variables to hold the last-synch and current datetimes.
DECLARE @run_date datetime = GETDATE();
DECLARE @last_run_date datetime;

--Get the last run date from the log table.
SET @last_run_date = (SELECT MAX(last_synch_date) FROM db_synch_log);

--Get all records from the remote database between the last run time and this run.
INSERT INTO myTable (column1Name, column2Name, ...)
SELECT column1Name, column2Name, ...
FROM remoteserver.dbo.myTableRemote AS rt
WHERE rt.date_created > @last_run_date
  AND rt.date_created <= @run_date
;

--Update the synch log.
INSERT INTO db_synch_log (last_synch_date)
SELECT @run_date
;
I have a table on one server and I need to update the same table on another server. Using a trigger is not a good idea: I was having issues with it, and I have read it is not a good approach.
I was thinking of using either Extended Events or Service Broker; I'm using SQL Server 2014. I don't want to create a SQL Agent job that runs every minute or so to check whether the table has had an insert/update/delete and copy the data to the other server. I'm not sure which option is best for my purpose. For example, when TableA on Server 1 is updated, I need to reflect that update in TableA on Server 2, but only when an insert/update/delete actually happens.
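One trigger-free option available in SQL Server 2014 is Change Tracking: it records which rows changed so a copy job only moves the delta. It still needs something to run the pull (a job or a Service Broker activated procedure), but no triggers touch the table. All names below (MyDb, TableA, id) are placeholders:

```sql
-- Enable change tracking at the database and table level.
ALTER DATABASE MyDb
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.TableA
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = OFF);

-- A pull step then fetches only rows changed since the last synced version.
DECLARE @last_version BIGINT = 0;  -- persist this value between runs
SELECT ct.SYS_CHANGE_OPERATION, t.*
FROM CHANGETABLE(CHANGES dbo.TableA, @last_version) AS ct
LEFT JOIN dbo.TableA AS t ON t.id = ct.id;
```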
I have code with queries that look like this:
INSERT INTO LinkedServer.DB.dbo.Table1 (column)
SELECT something
FROM LocalDB.dbo.Table2
WHERE something NOT IN (SELECT column FROM LinkedServer.DB.dbo.Table1)
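As an aside on that pattern: NOT IN against a remote table can pull the whole remote column across the link and behaves surprisingly when the subquery returns NULLs. A NOT EXISTS form of the same query (same tables, just restructured) is often safer:

```sql
-- Equivalent insert using NOT EXISTS instead of NOT IN;
-- NULLs in the remote column no longer suppress all inserts.
INSERT INTO LinkedServer.DB.dbo.Table1 ([column])
SELECT t2.something
FROM LocalDB.dbo.Table2 AS t2
WHERE NOT EXISTS (
    SELECT 1
    FROM LinkedServer.DB.dbo.Table1 AS t1
    WHERE t1.[column] = t2.something
);
```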
There's a lot of other code in the ecosystem that could be touching those tables. Today, this SQL code hung for far longer than I expected. If there were a deadlock between Table1 and Table2, can SQL Server detect it across a linked server?
Also, what if LocalDB were only calling out to LinkedServer and not locking any of its own objects? Can it detect the deadlock if the affected objects are all on the remote server?
In this case, LocalDB is 2012 R2 and LinkedServer is 2008.
As I understand it, a deadlock between two different tables can happen only when there is a shortage of resources at the server level. Are you asking about a deadlock between your INSERT and SELECT queries? Then yes: say you are inserting records into the table on the linked server while, at the same time, selecting from that table through the linked server into LocalDB. Since the insert has not yet completed, it can block the subquery that is trying to fetch data from the linked server and pass it to the LocalDB query. In that case the blocking is caused only by Table1, which is on the linked server; from our point of view it looks as though Table2 is blocked by Table1, but that is not actually the scenario. And yes, of course, deadlock information will be available in the case I explained, but it will be in the DMVs of the linked server.
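To actually see deadlock details, one option is to query the built-in system_health Extended Events session on the server where the deadlock occurred (that session runs by default on SQL Server 2008 and later):

```sql
-- Pull deadlock reports captured by the default system_health session.
SELECT
    xed.value('@timestamp', 'datetime2') AS deadlock_time,
    xed.query('.')                       AS deadlock_graph
FROM (
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS src
CROSS APPLY target_data.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);
```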
We are running exec xp_fixeddrives to get the free space for each physical drive associated with the SQL Server.
I am running this code as part of an SSIS package which fetches all the SQL servers free space.
Currently I am creating a table in tempdb and inserting the results of exec xp_fixeddrives into it. But the issue I am facing is that whenever the server is restarted, I get access errors because the table is in tempdb.
I don't really like the idea of creating a table in the master or model database. A challenge I am facing is that we run this against SQL Server instances ranging from version 2000 to 2014, so obviously there are compatibility issues I have to keep in mind.
Any suggestions on this are much appreciated.
That's expected: SQL Server resets tempdb whenever the SQL Server service restarts, so you will face access issues because that table no longer exists. I would probably create my own table in a user database to store the details, especially if I also wanted to keep historical check information.
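A minimal sketch of that persistent table, assuming a user database dedicated to DBA tooling (all names here are placeholders):

```sql
-- Persistent home for drive-space checks, in a user database
-- so it survives service restarts (unlike tempdb).
CREATE TABLE dbo.drive_space_history (
    server_name SYSNAME  NOT NULL DEFAULT @@SERVERNAME,
    check_time  DATETIME NOT NULL DEFAULT GETDATE(),
    drive       CHAR(1)  NOT NULL,
    free_mb     INT      NOT NULL
);

-- Capture the current readings via a table variable.
DECLARE @t TABLE (drive CHAR(1), free_mb INT);
INSERT INTO @t EXEC master..xp_fixeddrives;

INSERT INTO dbo.drive_space_history (drive, free_mb)
SELECT drive, free_mb FROM @t;
```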
If you are running your code from SSIS and you want to send mail just after validating it, then you don't even have to create a table. Just fill an object variable in SSIS from an Execute SQL Task running the query below:
-- Capture the drive letters and free MB into a table variable.
DECLARE @t TABLE (Drive VARCHAR(1), Size INT);
INSERT INTO @t
EXEC master..xp_fixeddrives;
SELECT * FROM @t;
Read this object variable in a Script Task to send the mail.
We have 2 databases and we need data to be transferred from db1 to db2. How can I do that in SQL Server? (In Sybase there are proxy tables.)
As @Nathan says, just BULK INSERT the data. Assuming both databases are on the same server, you reference a table as databasename.schema.tablename, thus db1.dbo.table1 or db2.dbo.table1.
As such, you can also just create a view in the destination database to use as a 'proxy' and pull the data without actually copying it. The view would be in db2 and look something like:
CREATE VIEW table1 AS SELECT * FROM db1.dbo.table1
I think INSERT INTO would be a good way to go.
http://msdn.microsoft.com/en-us/library/aa933206(v=sql.80).aspx
First of all, you can create a linked server on your destination server pointing to the other server. Then you can do the INSERT INTO.
If you don't want to do that (or are unable), then dump the data to a file and do the very fast BULK INSERT to get the data into your new table.
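A sketch of that dump-and-load route, using bcp to export and BULK INSERT to load; the paths, server name, and table names below are placeholders:

```sql
-- 1) Export from the source (run from a command prompt):
--    bcp db1.dbo.table1 out C:\temp\table1.dat -n -S SourceServer -T

-- 2) Load the native-format file into the destination table:
BULK INSERT db2.dbo.table1
FROM 'C:\temp\table1.dat'
WITH (DATAFILETYPE = 'native', TABLOCK);
```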