INSERT using OPENQUERY does not commit to ORACLE database - sql-server

I have this OPENQUERY insert:
INSERT INTO OPENQUERY(ORACLE_DB, 'SELECT column1, column2, column3 FROM tablename')
VALUES (value1, value2, value3)
When I run this query, I get the message:
1 row affected
But nothing actually gets committed!
When I try this:
SELECT * FROM OPENQUERY(ORACLE_DB, 'SELECT * FROM tablename')
I cannot see the record I inserted above.
Fixes I tried:
In Linked Servers > Properties > Server Options > RPC out is set to True
Tried BEGIN TRANSACTION and COMMIT TRANSACTION, but then I get another error: "New transaction cannot enlist in the specified transaction coordinator." So I tried to enable distributed transactions for the linked server via the Local DTC properties, but I have access restrictions and cannot reach the Firewall option that holds this setting.
I ran EXEC xp_servicecontrol N'querystate', N'msdtc' to check whether the MSDTC service is running - it was.
But nothing worked. I am not sure what I am missing here. Any help would be appreciated.
Thanks.
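One thing worth trying (a sketch, not a confirmed fix; ORACLE_DB, tablename, and the column/value names are taken from the question): run the INSERT on the Oracle side with EXEC ... AT, which relies on the RPC Out option already enabled and executes the whole statement inside an Oracle session rather than through a distributed transaction.

```sql
-- Sketch: push the INSERT to Oracle via EXEC ... AT instead of OPENQUERY.
-- Requires RPC Out = True on the linked server (already set per the question).
-- The ? placeholders are bound to the parameters that follow the statement.
EXEC ('INSERT INTO tablename (column1, column2, column3) VALUES (?, ?, ?)',
      'value1', 'value2', 'value3') AT ORACLE_DB;
```

If this persists the row where OPENQUERY did not, the difference is likely that the statement commits in the remote Oracle session instead of waiting on MSDTC enlistment.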


Why is this query generating a PK violation error?

So I'm trying, in a single query, to only insert a row if it doesn't exist already.
My query is the following:
INSERT INTO [dbo].[users_roles] ([user_id], [role_id])
SELECT 29851, 1 WHERE NOT EXISTS (
SELECT 1 FROM [dbo].[users_roles] WHERE user_id = 29851 AND role_id = 1)
Sometimes (very rarely, but still), it generates the following error:
Violation of PRIMARY KEY constraint 'PK_USERS_ROLES'. Cannot
insert duplicate key in object 'dbo.users_roles'. The duplicate
key value is (29851, 1).
PK_USERS_ROLES is [user_id], [role_id]. Here is the full SQL of the table's schema:
create table users_roles
(
user_id int not null
constraint FK_USERS_ROLES_USER
references user,
role_id int not null
constraint FK_USERS_ROLES_USER_ROLE
references user_role,
constraint PK_USERS_ROLES
primary key (user_id, role_id)
)
Context:
This is executed by a PHP script hosted on an Apache server, and "randomly" happens once out of hundreds of occurrences (most likely concurrency-related).
More info:
SELECT @@VERSION gives:
Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64) Jun 28 2012
08:36:30 Copyright (c) Microsoft Corporation Enterprise Edition
(64-bit) on Windows NT 6.1 (Build 7601: Service Pack)
SQL Server version: SQL Server 2008 R2
Transaction Isolation level: ReadCommitted
This is executed within an explicit transaction (through PHP statements, but I figure the end result is the same)
Questions:
Could someone explain why/how this is happening?
What would be an efficient way to safely insert in one go (in other words, in a single query)?
I've seen other answers such as this one but the solutions are meant for stored procedures.
Thanks.
It might help to be explicit about this. The snippet below runs the check and insert in one explicit transaction and takes an exclusive lock on the key up front:
DECLARE @user_id INT = 29851;
DECLARE @role_id INT = 1;

BEGIN TRY
    BEGIN TRANSACTION;

    DECLARE @exists INT;
    SELECT @exists = 1
    FROM [dbo].[users_roles] WITH (ROWLOCK, HOLDLOCK, XLOCK)
    WHERE user_id = @user_id AND role_id = @role_id;

    IF @exists IS NULL
    BEGIN
        INSERT INTO [dbo].[users_roles] ([user_id], [role_id])
        VALUES (@user_id, @role_id);
    END

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;
END CATCH
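The same guard can also be collapsed into the single-statement form the question asked for. This is a sketch using the question's values; the UPDLOCK and HOLDLOCK hints on the existence check make concurrent inserters serialize on that key range instead of both passing the NOT EXISTS test:

```sql
-- Sketch: one-statement insert-if-not-exists; the lock hints on the inner
-- SELECT close the check-then-insert race under READ COMMITTED.
INSERT INTO [dbo].[users_roles] ([user_id], [role_id])
SELECT 29851, 1
WHERE NOT EXISTS (
    SELECT 1
    FROM [dbo].[users_roles] WITH (UPDLOCK, HOLDLOCK)
    WHERE user_id = 29851 AND role_id = 1
);
```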
Is this table truncated, or are its rows deleted at some point? And how often? It makes sense that the row is sometimes not found: you are running an "insert if not exists", so two or more sessions may hit the database to insert the same data at the same time. Only one will succeed; the other does nothing if the row was inserted before its "not exists" lookup, or fails if the row was inserted after the lookup.
I have only an Oracle database right now to do some tests and I can reproduce this problem. My commit mode is explicit:
Create the empty table, the unique constraint and grant select, insert to another user:
CREATE TABLE just_a_test (val NUMBER(3,0));
ALTER TABLE just_a_test ADD CONSTRAINT foobar UNIQUE (val);
GRANT SELECT, INSERT ON just_a_test TO user2;
DB session on user1:
INSERT INTO just_a_test
SELECT 10
FROM DUAL
WHERE NOT EXISTS
(
SELECT 1
FROM just_a_test
WHERE val = 10
)
;
-- no commit yet...
DB session on user2:
INSERT INTO user1.just_a_test
SELECT 10
FROM DUAL
WHERE NOT EXISTS
(
SELECT 1
FROM user1.just_a_test
WHERE val = 10
)
;
-- no commit yet; this session just hangs until the other session commits...
So I commit the first transaction, inserting the row, and then I get the following error on user2 session:
"unique constraint violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
Now I roll back the second transaction, run the same insert again as user2, and get the following output:
0 rows inserted.
Probably your scenario is just like this one. Hope it helps.
EDIT
I'm sorry. You asked two questions and I answered only "Could someone explain why/how this is happening?", missing "What would be an efficient way to safely insert in one go (in other words, in a single query)?".
What exactly does "safely" mean to you? Say you're running an INSERT/SELECT of many rows and just one of them duplicates a stored row. For your safety level, should all rows being inserted be rejected, or only the duplicate, with the others stored?
Again, I don't have a SQL Server at hand to try it, but it looks like you can tell SQL Server either to reject the whole insert on any duplicate or to reject only the duplicates and keep the rest. The same holds for an insert of a single row: if it's a dup, either throw an error or simply ignore it.
The syntax should look like this to ignore dup rows only and throw no errors:
ALTER TABLE [TableName] REBUILD WITH (IGNORE_DUP_KEY = ON);
By default this option is OFF, which means SQL Server throws an error and the whole statement fails, discarding the non-duplicate rows as well.
This way you would keep your INSERT/SELECT syntax, which looks good imo.
Hope it helps.
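Another single-statement option (a sketch, not part of the original answer) is MERGE with a HOLDLOCK hint, which keeps the existence check and the insert atomic in one statement:

```sql
-- Sketch: MERGE with HOLDLOCK; MERGE is available on SQL Server 2008+,
-- so it fits the 2008 R2 instance in the question. Values from the question.
MERGE [dbo].[users_roles] WITH (HOLDLOCK) AS t
USING (SELECT 29851 AS [user_id], 1 AS [role_id]) AS s
    ON t.[user_id] = s.[user_id] AND t.[role_id] = s.[role_id]
WHEN NOT MATCHED THEN
    INSERT ([user_id], [role_id]) VALUES (s.[user_id], s.[role_id]);
```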
Sources:
https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-table-index-option-transact-sql?view=sql-server-2008
https://stackoverflow.com/a/11207687/1977836

Insert into linkedserver from local table

I am trying to insert some data from my local table into a linked server via SQL Server. This is what I am doing, but it keeps throwing a syntax error.
TRY 1
EXEC(
    'INSERT INTO test.testschema.testoperation
     (viasatsubscriptionID, subscriptionrowdate, phonenumberday, viasatcustomerid)
     SELECT * FROM rdata.dbo.testoperation'
) AT REDSHIFT64
TRY 2
EXEC(
    'INSERT INTO test.testschema.testoperation
     (viasatsubscriptionID, subscriptionrowdate, phonenumberday, viasatcustomerid)'
) AT REDSHIFT64
SELECT * FROM rdata.dbo.testoperation
Both fail.
Any thoughts on where I am going wrong?
rdata.dbo.testoperation is your local table, and since your query runs on the remote server, that table does not exist there.
Why don't you try inserting directly to the remote table:
INSERT into REDSHIFT64.test.testschema.testoperation
(viasatsubscriptionID, subscriptionrowdate, phonenumberday, viasatcustomerid)
SELECT * FROM rdata.dbo.testoperation

Select From SQL Server Stored Procedure Results

I am migrating several hundred stored procedures from one server to another, so I wanted to write a stored procedure to execute an SP on each server and compare the output for differences.
In order to do this, I would normally use this syntax to get the results into tables:
select * into #tmp1 from OpenQuery(LocalServer,'exec usp_MyStoredProcedure')
select * into #tmp2 from OpenQuery(RemoteServer,'exec usp_MyStoredProcedure')
I then would union them and do a count, to get how many rows differ in the results:
select * into #tmp3
from ((select * from #tmp1) union (select * from #tmp2))
select count(*) from #tmp1
select count(*) from #tmp3
However, in this case, my stored procedure contains an OpenQuery, so when I try to put the exec into an OpenQuery, the query fails with the error:
The operation could not be performed because OLE DB provider "SQLNCLI"
for linked server "RemoteServer" was unable to begin a distributed transaction.
Are there any good workarounds to this? Or does anybody have any clever ideas for things I could do to make this process go more quickly? Because right now, it seems that I would have to run the SP on each server, script the results into tmp tables, then do the compare. That seems like a poor solution!
Thank you for taking the time to read this, and any help would be appreciated greatly!
I think your method would work - you just need to start MSDTC. This behavior occurs if the Distributed Transaction Coordinator (MSDTC) service is disabled or if network DTC access is disabled. By default, network DTC access is disabled in Windows. When MSDTC is running and configured properly, the OLE DB provider is able to start the distributed transaction.
Check out these instructions - they apply to Windows Server 2003 and 2008.
Similar to your question.
Insert results of a stored procedure into a temporary table
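Once both result sets are captured into the temp tables, the union-and-count trick in the question can be replaced by an explicit symmetric difference. A sketch using the question's #tmp1/#tmp2 (note that EXCEPT, like UNION, ignores duplicate-row multiplicity):

```sql
-- Sketch: rows present on one server but not the other; zero rows returned
-- means the two result sets match (up to duplicate-row counts).
SELECT 'LocalServer only' AS src, *
FROM (SELECT * FROM #tmp1 EXCEPT SELECT * FROM #tmp2) AS d1
UNION ALL
SELECT 'RemoteServer only' AS src, *
FROM (SELECT * FROM #tmp2 EXCEPT SELECT * FROM #tmp1) AS d2;
```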

How to run remote sproc via linked server and store results in temp table on a clustered server

I need to be able to run a remote sproc and store its results in a temp table so that further processing can be done against the data. I can run the exec statement below on its own just fine and get the data back; however, when trying to insert into the temp table, I get the following error message:
OLE DB provider "SQLNCLI" for linked server "LinkedServerName" returned message "No transaction is active.".
Msg 7391, Level 16, State 2, Line 8
The operation could not be performed because OLE DB provider "SQLNCLI" for linked server "LinkedServerName" was unable to begin a distributed transaction.
I don't want to use a join because it is being extremely slow, so I thought I'd try selecting the data I need by calling a remote sproc into a temp table, then work with it that way.
I've tried following instructions here with no luck:
http://sql-articles.com/blogs/linked-server-problem-windows-2003-sp1-setting-msdtc-security-configuration/
I believe the main problem is that the source server (where I'm running the below SQL) is a clustered server, and that I'm missing some setting for DTC.
Any ideas?
--drop table #tmp
CREATE TABLE #tmp
(
col1 int,
col2 int
);
insert into #tmp (col1, col2)
exec [LinkedServerName].[RemoteDBName].dbo.remote_sproc '04/01/2011', '04/06/2011'
select * from #tmp
While I didn't find a way to use distributed transactions on a clustered server setup, I did find an alternative way to grab the data remotely using OPENROWSET. Performance wise, it seemed very similar to using a linked server and is working well in our production environment.
/*
-- run the following once to configure SQL server to use OPENROWSET...
sp_configure 'Show Advanced Options', 1
GO
RECONFIGURE
GO
sp_configure 'Ad Hoc Distributed Queries', 1
GO
RECONFIGURE
GO
*/
-- still need a table to store the result set in to work
-- with the data after we grab it...
declare @table table
(
    col1 int,
    col2 int
);

-- use openrowset instead of a linked server
insert into @table
select *
FROM OPENROWSET('SQLNCLI', 'Server=HOSTNAME;Uid=USERNAME;Pwd=PASSWORD',
    'EXEC DBName.dbo.sprocName ''Param1'', ''Param2''')

select * from @table

SQL Server Full-Text Search: Hung processes with MSSEARCH wait type

We have a SQL Server 2005 SP2 machine running a large number of databases, all of which contain full-text catalogs. Whenever we try to drop one of these databases or rebuild a full-text index, the drop or rebuild process hangs indefinitely with a MSSEARCH wait type. The process can’t be killed, and a server reboot is required to get things running again. Based on a Microsoft forums post1, it appears that the problem might be an improperly removed full-text catalog. Can anyone recommend a way to determine which catalog is causing the problem, without having to remove all of them?
1 [http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2681739&SiteID=1]
“Yes we did have full text catalogues in the database, but since I had disabled full text search for the database, and disabled msftesql, I didn't suspect them. I got however an article from Microsoft support, showing me how I could test for catalogues not properly removed. So I discovered that there still existed an old catalogue, which I ,after and only after re-enabling full text search, were able to delete, since then my backup has worked”
Here's a suggestion. I don't have any corrupted databases, but you can try this:
declare @t table (name nvarchar(128))
declare @name nvarchar(128)
declare @SQL nvarchar(4000)

insert into @t select name from sys.databases --where is_fulltext_enabled

while exists (select * from @t)
begin
    select @name = name from @t
    set @SQL = 'IF EXISTS(SELECT * FROM ' + @name + '.sys.fulltext_catalogs) AND NOT EXISTS(SELECT * FROM sys.databases WHERE is_fulltext_enabled = 1 AND name = ''' + @name + ''') PRINT ''' + @name + ' could be the culprit'''
    print @SQL
    exec sp_sqlexec @SQL
    delete from @t where name = @name
end
If it doesn't work, remove the filter checking sys.databases.
Have you tried running Process Monitor when it hangs to see what the underlying error is? Using Process Monitor you should be able to tell which file/resource it is waiting for or erroring on.
I had a similar problem with invalid full text catalog locations.
The server wouldn't bring all databases online at start-up. It would process databases in dbid order, get halfway through, and stop. Only the older DBs were brought online; the remainder were inaccessible.
Looking at sysprocesses revealed a dozen or more processes with waittype = 0x00CC and lastwaittype = MSSEARCH. MSSEARCH could not be stopped.
The problem was caused when we relocated the full text catalogs but entered the wrong path for one of them when running the alter database ... modifyfile command.
The solution was to disable MSSEARCH, reboot the server allowing all DBs to come online, find the offending database, take it offline, correct the file path using ALTER DATABASE, and bring the DB back online. Then start MSSEARCH and set it to automatic start-up.
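The corrective ALTER DATABASE step can be sketched as follows; the database, catalog, and path names here are hypothetical placeholders, not from the original post:

```sql
-- Sketch with hypothetical names: repoint the full-text catalog entry
-- at the correct location while the database is offline.
ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;

ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyFtCatalog, FILENAME = 'D:\FTData\MyFtCatalog');

ALTER DATABASE MyDb SET ONLINE;
```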