The merge process could not update the list of subscriptions - sql-server

I have replication set up between a SQL Server 2005 instance and multiple SQL Server 2000 instances. Replication runs successfully for a while before I get the following error message:
Violation of UNIQUE KEY constraint 'unique_pubsrvdb'. Cannot insert duplicate key in object 'dbo.sysmergesubscriptions'. (Source: MSSQLSERVER, Error number: 2627)
When I checked sysmergesubscriptions there were extra entries that appear to be coming from the 2000 instances.
My question is: has anyone encountered this issue, and how did you deal with it (without rebuilding the entire thing)?

In my case I was handling multiple subscriptions and just had to adapt the script to delete the subscriptions that had problems:
delete
from sysmergesubscriptions
where pubid not in (select pubid from sysmergepublications)
and subscriber_server = 'SUBSCRIPTIONSERVER'

The problem was that one of the subscribers had old publications and subscriptions in its system tables that were replicated throughout the entire system, which caused the violation of the UNIQUE KEY constraint.
Once we removed these old entries we were able to restart replication.
We were able to identify the valid records in sysmergepublications because we knew the state of this table before the invalid entries were replicated. This forum post shows you how to locate invalid publications if you need to.
We used the following SQL to check for additional subscription entries:
select *
from sysmergepublications
select *
from sysmergesubscriptions
where pubid in ( select pubid from sysmergepublications)
select *
from sysmergesubscriptions
where pubid not in ( select pubid from sysmergepublications)
Here is the sql that we used to delete the invalid subscriptions:
delete from sysmergesubscriptions
where pubid not in ( select pubid from sysmergepublications)
Note: the code sample above assumes that sysmergepublications contains only valid publications.
Alternatively: you can use EXEC sp_removedbreplication @dbname = '<dbname>' to remove replication from the database completely. This command appears to remove all replication triggers from the database.


How to remove dirty data in Yugabyte (PostgreSQL)

I tried to add a column to a table with the TablePlus GUI, but there was no response for a long time.
So I turned to the database server, but got these errors:
Maybe some inconsistent data was generated during the operation through TablePlus.
I am new to PostgreSQL and don't know what to do next.
-----updated------
I did some operations as @Dri372 suggested, and made some progress.
The reason the operation failed for the tables sys_role and s2 is that the tables are not empty; they have some records.
If I run SQL like this: create table s3 AS SELECT * FROM sys_role; alter table s3 add column project_code varchar(50); it succeeds.
Now how can I still keep working on the table sys_role?
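A hedged sketch of one way to continue the copy-based workaround described above (names taken from the question; any indexes, constraints, and permissions on sys_role would still need to be recreated on the copy):
-- copy the data into a new table and add the column there
create table s3 AS SELECT * FROM sys_role;
alter table s3 add column project_code varchar(50);
-- swap the tables by renaming, keeping the original as a backup
alter table sys_role rename to sys_role_old;
alter table s3 rename to sys_role;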

How does SQL Server handle a failed query to a linked server?

I have a stored procedure that relies on a query to a linked server.
This stored procedure is roughly structured as follows:
-- Create local table var to stop query from needing round trips to linked server
DECLARE @duplicates TABLE (eid NVARCHAR(6))
INSERT INTO @duplicates(eid)
SELECT eid FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'
-- Update on my server using data from linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET -- Many things, including
[status] = CASE
WHEN
eid IN (
SELECT eid FROM @duplicates
)
THEN 'String'
ELSE es
END
FROM [my_server].[another_database].[dbo].[view]
-- This view obscures sensitive information and shows only the data that I have permission to see
-- Many other things
The query itself is much more complex, but the key idea is building this temporary table from a linked server (because it takes the query 5 minutes to run if I don't, versus 3 seconds if I do).
I've recently had an issue where I ended up with updates to my table that failed to get checked against the linked server for duplicate information.
The logical chain of events is this:
Get all of the data from the original view.
The original view contains maybe 3000 records, of which maybe 30 are duplicates of the entity in question, but with 1 field having a different value.
I then have to grab data from a different server to know which of the duplicates is the correct one.
When the stored procedure runs, it updates each record.
ERROR STEP - when the stored procedure hits a duplicate record, it updates my_table again - so es gets changed multiple times in a row.
The temp table was added after the fact when we realized incorrect es values were being introduced to my_table.
my_database does not contain the data needed to determine which is the correct tuple, hence the requirement for the linked server.
As far as I can tell, we had a temporary network interruption or a connection timeout that stopped my_server from getting the response back from linked_server, and it just passed an empty table to the rest of the procedure.
So, my question is - how can I guard against this happening?
I can't just check if the table is empty, because it could legitimately be empty. I need to definitively know if that initial SELECT from linked_server failed, if it timed out, or if it intentionally returned nothing.
Without knowing the definition of the table you're querying, you could get into an issue where your data is too long and you get a truncation error on your table.
Better make sure and substring it...
DECLARE @duplicates TABLE (eid NVARCHAR(6))
INSERT INTO @duplicates(eid)
SELECT SUBSTRING(eid,1,6) FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'
-- Update on my server using data from linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET -- Many things, including
[status] = CASE
WHEN
eid IN (
SELECT eid FROM @duplicates
)
THEN 'String'
ELSE es
END
FROM [my_server].[another_database].[dbo].[view]
I had a similar problem where I needed to move data between servers and could not use a network connection, so I ended up doing BCP out and BCP in. This is fast, clean, and takes away the complexity of user authentication, drivers, and trust domains. It's also repeatable and can be used for incremental loading.
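A rough sketch of that BCP out / BCP in approach (server, database, table, and file names are placeholders; -T uses a trusted connection and -n native format):
bcp SourceDb.dbo.my_table out C:\temp\my_table.dat -S source_server -T -n
bcp TargetDb.dbo.my_table in C:\temp\my_table.dat -S target_server -T -n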

Azure SQL Server V12 bulk load error on MERGE statement

I have a query with a simple merge statement to update or insert data in a table:
MERGE INTO table_name AS TARGET
USING (
VALUES (
:a0
,:b0
,:c0
)...
) AS SOURCE(A, B, C)
ON SOURCE.B = TARGET.B
AND SOURCE.C = TARGET.C
WHEN NOT MATCHED
THEN
INSERT (
A
,B
,C
)
VALUES (
SOURCE.A
,SOURCE.B
,SOURCE.C
);
This table has a nonclustered index on two columns for a uniqueness constraint.
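For reference, a minimal sketch of the kind of uniqueness index described (the index name is a placeholder; columns B and C are taken from the query above):
CREATE UNIQUE NONCLUSTERED INDEX IX_table_name_B_C
ON table_name (B, C);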
This query works fine on a "Business" edition database in Azure. After the migration to SQL V12 on an "S2" database, this error happens when I try to merge a large number of entries:
Uncaught exception 'PDOException' with message 'SQLSTATE[42000]: [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]Cannot bulk load. The bulk data stream was incorrectly specified as sorted or the data violates a uniqueness constraint imposed by the target table. Sort order incorrect for the following two rows: primary key of first row: (A, B, C), primary key of second row: (A, D, E).
It appears that Microsoft knows about this issue: https://support.microsoft.com/en-us/kb/3055799.
But in Azure, I can't update the SQL Server. How can I get it to work?
After some research and tests, I found a workaround: adding "OPTION (LOOP JOIN, FORCE ORDER);" at the end of my query bypasses the sorting error by modifying the default execution plan.
It also works with "OPTION (MERGE JOIN, FORCE ORDER);", depending on the row counts of the source and target tables.
More info about these options: https://technet.microsoft.com/en-us/library/ms181714.aspx
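For illustration, a hedged sketch of the workaround applied to the MERGE above (same placeholder names as in the question):
MERGE INTO table_name AS TARGET
USING (VALUES (:a0, :b0, :c0)) AS SOURCE(A, B, C)
ON SOURCE.B = TARGET.B AND SOURCE.C = TARGET.C
WHEN NOT MATCHED THEN
INSERT (A, B, C) VALUES (SOURCE.A, SOURCE.B, SOURCE.C)
OPTION (LOOP JOIN, FORCE ORDER);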
We found an issue with the execution plan generated by the query optimizer.
This will be fixed very soon.
Meanwhile, another option to work around this issue is to disable page locks on the index.
ALTER INDEX [<index name>] ON [<schema>].[<table name>]
REBUILD WITH (ALLOW_PAGE_LOCKS = OFF)
With this workaround you do not need to modify your queries.
Hello @Yochanan Rachamim,
Thanks for your answer.
I tried to apply your fix but it had no effect.
After more research, I found a known bug with concurrency : https://www.mssqltips.com/sqlservertip/3074/use-caution-with-sql-servers-merge-statement/
I finally solved my problem by adding "WITH (HOLDLOCK)" to the MERGE statement.
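For illustration, a hedged sketch of the same MERGE with that table hint added (placeholder names as in the question):
MERGE INTO table_name WITH (HOLDLOCK) AS TARGET
USING (VALUES (:a0, :b0, :c0)) AS SOURCE(A, B, C)
ON SOURCE.B = TARGET.B AND SOURCE.C = TARGET.C
WHEN NOT MATCHED THEN
INSERT (A, B, C) VALUES (SOURCE.A, SOURCE.B, SOURCE.C);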

Unable to carry out operations (create trigger, drop table) for a table I created

I am using a SQL Server database with SQL Server Management Studio, where I have existing tables. I added a few tables to it and they worked just fine. However, for subsequent operations such as
Drop table XXX --OR
Create Trigger YYY on XXX
I run into an error statement that reads:
i) Cannot drop table XXX as it does not exist or you do not have permissions
ii) The object 'XXX' does not exist or is invalid for this operation
I tried to carry out an Insert operation but that showed me a similar error (The object 'XXX' does not exist). I can see this may be a permissions issue since I am using an existing database. However, in that case, I should have been unable to create a table as well?
Can anyone pinpoint how I can work myself around this and what the problem is?
What is your default schema?
SELECT name, default_schema_name
FROM sys.database_principals
WHERE type = 'S';
Try qualifying your references to the table as SchemaName.XXX and see if that helps.
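For example, a hedged sketch assuming the table actually ended up under a schema other than dbo (the schema name here is a placeholder):
SELECT * FROM other_schema.XXX;
DROP TABLE other_schema.XXX;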
Most of the times when I had similar situations, the tables had been created in system databases (master, tempdb..). Of course it was my mistake.
So maybe try to search for the tables in other databases?

Error when inserting into a linked server

I want to insert some data from the local server into a remote server, and used the following SQL:
select * into linkservername.mydbname.dbo.test from localdbname.dbo.test
But it throws the following error:
The object name 'linkservername.mydbname.dbo.test' contains more than the maximum number of prefixes. The maximum is 2.
How can I do that?
I don't think the new table created with the INTO clause supports 4 part names.
You would need to create the table first, then use INSERT..SELECT to populate it.
(See note in Arguments section on MSDN: reference)
The SELECT...INTO [new_table_name] statement supports a maximum of 2 prefixes: [database].[schema].[table]
NOTE: it is more performant to pull the data across the link using SELECT INTO vs. pushing it across using INSERT INTO:
SELECT INTO is minimally logged.
SELECT INTO does not implicitly start a distributed transaction, typically.
I say typically, in point #2, because in most scenarios a distributed transaction is not created implicitly when using SELECT INTO. If a profiler trace tells you SQL Server is still implicitly creating a distributed transaction, you can SELECT INTO a temp table first, to prevent the implicit distributed transaction, then move the data into your target table from the temp table.
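A minimal sketch of that temp-table approach (placeholder names, in line with the pull example below):
SELECT * INTO #staging
FROM [server_a].[database].[schema].[table];
INSERT INTO [database].[schema].[table]
SELECT * FROM #staging;
DROP TABLE #staging;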
Push vs. Pull Example
In this example we are copying data from [server_a] to [server_b] across a link. This example assumes query execution is possible from both servers:
Push
Instead of connecting to [server_a] and pushing the data to [server_b]:
INSERT INTO [server_b].[database].[schema].[table]
SELECT * FROM [database].[schema].[table]
Pull
Connect to [server_b] and pull the data from [server_a]:
SELECT * INTO [database].[schema].[table]
FROM [server_a].[database].[schema].[table]
I've been struggling with this for the last hour.
I now realise that using the syntax
SELECT orderid, orderdate, empid, custid
INTO [linkedserver].[database].[dbo].[table]
FROM Sales.Orders;
does not work with linked servers. You have to go onto your linked server and manually create the table first, then use the following syntax:
INSERT INTO [linkedserver].[database].[dbo].[table]
SELECT orderid, orderdate, empid, custid
FROM Sales.Orders
WHERE shipcountry = 'UK';
I've experienced the same issue and I've performed the following workaround:
If you are able to log on to the remote server where you want to insert data (with MSSQL or sqlcmd), rebuild your query the other way around:
so from:
SELECT * INTO linkservername.mydbname.dbo.test
FROM localdbname.dbo.test
to the following:
SELECT * INTO localdbname.dbo.test
FROM linkservername.mydbname.dbo.test
In my situation it works well.
@2Toad: For sure INSERT INTO is better / more efficient. However, for small queries and quick operations SELECT * INTO is more flexible because it creates the table on the fly and inserts your data immediately, whereas INSERT INTO requires creating a table (auto-ident options and so on) before you carry out your insert operation.
I may be late to the party, but this was the first post I saw when I searched for the 4 part table name insert issue to a linked server. After reading this and a few more posts, I was able to accomplish this by using EXEC with the "AT" argument (for SQL2008+) so that the query is run from the linked server. For example, I had to insert 4M records to a pseudo-temp table on another server, and doing an INSERT-SELECT FROM statement took 10+ minutes. But changing it to the following SELECT-INTO statement, which allows the 4 part table name in the FROM clause, does it in mere seconds (less than 10 seconds in my case).
EXEC ('USE MyDatabase;
BEGIN TRY DROP TABLE TempID3 END TRY BEGIN CATCH END CATCH;
SELECT Field1, Field2, Field3
INTO TempID3
FROM SourceServer.SourceDatabase.dbo.SourceTable;') AT [DestinationServer]
GO
The query is run on DestinationServer, changes to the right database, ensures the table does not already exist, and selects from the SourceServer. Minimally logged, and no fuss. This information may already be out there somewhere, but I hope it helps anyone searching for similar issues.
