How to overcome "Failure getting record lock on a record from table"? - openquery

I am running a query using OpenQuery and getting a peculiar error.
This is my query:
select * from OpenQuery("CAPITAOC",'SELECT per.*
FROM pub."re-tenancy" AS t
INNER JOIN pub."re-tncy-person" AS per
ON t."tncy-sys-ref" = per."tncy-sys-ref"
INNER JOIN pub."re-tncy-place" AS place
ON t."tncy-sys-ref" = place."tncy-sys-ref"
WHERE t."tncy-status" = ''CUR'' and place."place-ref"=''GALL01000009''')
This is the error message:
OLE DB provider "MSDASQL" for linked server "CAPITAOC" returned message "[DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Failure getting record lock on a record from table PUB.RE-TNCY-PERSON.".
OLE DB provider "MSDASQL" for linked server "CAPITAOC" returned message "[DataDirect][ODBC Progress OpenEdge Wire Protocol driver]Error in row.".
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "MSDASQL" for linked server "CAPITAOC".
How do I read this data?

The record lock error:
In a multi-user environment it is useful to lock records that are being updated to prevent another user's session from accessing that record. This prevents a "dirty read" of your data.
To overcome this issue, I suggest looking at this article:
http://knowledgebase.progress.com/articles/Article/20255
The Transaction Isolation Level must be set prior to any other
transactions within the session.
And this is how you find out WHO has locked your record:
http://knowledgebase.progress.com/articles/Article/19833
Also, I would like to point out that if you are using something like SQL Explorer, which does not auto-commit your updates unless you ask it to, the database table might stay locked until you commit your changes.

I ran across this issue as well and the other answer's links were not as helpful as I had hoped. I used the following link: https://knowledgebase.progress.com/articles/Article/P12158
Option #1 - applies to OpenEdge 10.1A02 and later.
Use the WITH (NOLOCK) hint in the SELECT query. This ensures that no record locks are acquired. For example,
SELECT * FROM pub.customer WITH (NOLOCK);
The WITH (NOLOCK) hint is similar to using the Read Uncommitted isolation level in that it will result in a dirty read.
Option #2 - applies to all OpenEdge (10.x/11.x) versions using the Read Committed isolation level.
Use the WITH (READPAST) hint in the SELECT query. This option causes the transaction to skip rows locked by other transactions that would ordinarily appear in the result set, rather than blocking while waiting for those locks to be released. For example,
SELECT * FROM pub.customer WITH (READPAST NOWAIT);
SELECT * FROM pub.customer WITH (READPAST WAIT 5);
Please be aware that this can lead to fewer records being returned than expected since locked records are skipped/omitted from the result set.
Option #3 - applies to all Progress/OpenEdge versions.
Change the Isolation Level to Read Uncommitted to ensure that, when a record is read, no record locks are acquired. Using the Read Uncommitted isolation level will result in a dirty read.
This can be done at ODBC DSN level or via the SET TRANSACTION ISOLATION LEVEL <isolation_level_name> statement. For example,
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Option 1 worked for me.
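For reference, here is how the hint would look applied to the original query. This is only a sketch, and it assumes your OpenEdge version (10.1A02 or later) accepts the hint on each table reference in a join:
select * from OpenQuery("CAPITAOC", 'SELECT per.*
FROM pub."re-tenancy" AS t WITH (NOLOCK)
INNER JOIN pub."re-tncy-person" AS per WITH (NOLOCK)
ON t."tncy-sys-ref" = per."tncy-sys-ref"
INNER JOIN pub."re-tncy-place" AS place WITH (NOLOCK)
ON t."tncy-sys-ref" = place."tncy-sys-ref"
WHERE t."tncy-status" = ''CUR'' and place."place-ref" = ''GALL01000009''')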

Related

SQL Server linked server error "The partner transaction manager has disabled its support for remote/network transactions."

I have a linked server (SQL Server 14.0.1000.169). The local server (SQL Server 10.0.1600) receives data at short intervals, around one new row per minute, into Table46. I need to pass some of the information from each new row to the linked server, so I created a trigger on the local server for this:
CREATE TRIGGER New_Event
ON dbo.Table46 FOR INSERT AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [LinkedServer].[Database].[dbo].[TableEvents]
SELECT i.[046_ID] AS [id]
, NP.NoPart + ' ' + CONVERT(VARCHAR(3), T41.[041_No]) AS [name]
, DATEADD(MINUTE, -1 * i.[046_ExeTime], i.[046_DateTime]) AS [eventstart]
, i.[046_DateTime] AS [eventend]
, i.[046_IDRes] AS [resource_id]
, i.[046_ExeTime] AS [execution]
, ISNULL(MIN(T48.[048_MachTime]), 0) AS [same]
, ISNULL(MIN(T48_1.[048_MachTime]), 0) AS [all]
, i.[046_Pallet] AS [pallet]
FROM inserted AS i
INNER JOIN Table41 AS T41
ON i.[046_IDOp] = T41.[041_IDOp]
INNER JOIN NoParts AS NP
ON T41.[041_IDNoPart] = NP.Autonumber
INNER JOIN Table48 AS T48
ON i.[046_IDRes] = T48.[048_IDRes]
AND i.[046_IDOp] = T48.[048_IDOp]
INNER JOIN Table48 AS T48_1
ON i.[046_IDOp] = T48_1.[048_IDOp]
GROUP BY i.[046_ID], NP.NoPart, T41.[041_No], i.[046_MachTime],
i.[046_DateTime], i.[046_IDRes], i.[046_ExeTime], i.[046_Pallet];
END;
The original query after the INSERT INTO works on its own; I just swapped Table46 for the inserted virtual table because of the trigger.
Edit 1:
If I add a new row manually to Table46 I get the following error (the MSDTC service is already started):
OLE DB provider "SQLNCLI10" for linked server "[LinkedServer]" returned message "The partner transaction manager has disabled its support for remote/network transactions.".
Msg 7391, Level 16, State 2, Procedure New_Event, Line 5 [Batch Start Line 15]
The operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "[LinkedServer]" was unable to begin a distributed transaction.
Edit 2:
I have followed these instructions and also enabled the MSDTC inbound rules in the firewall on both servers, but now when I try to add a row the query takes a very long time to execute; it still hasn't finished. The same happens with a SELECT query against Table46.
What are other ways to insert in the remote server whenever Table46 receives a new row, if triggers don't work?
As mentioned in my comment, you need to configure MSDTC to enable distributed transactions between the two linked SQL Servers.
If you don't want to do that, then you can use a trigger on the source table to save the required data in a 'queue' table. Then you can have a separate application poll the queue table, fetch data and insert them on the linked server on separate connections (and thus separate transactions). This method may seem suboptimal but does have one advantage: if the linked server is unavailable or slow the source server continues to work at full speed and no data is ever lost.
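A minimal sketch of that queue-table idea (the EventQueue table and its columns are illustrative, not from the original post):
-- Local queue table; add whichever TableEvents columns you need to forward.
CREATE TABLE dbo.EventQueue (
    QueueID   INT IDENTITY(1,1) PRIMARY KEY,
    EventID   INT NOT NULL,
    Processed BIT NOT NULL DEFAULT 0
);
GO
-- The trigger now performs a purely local insert, so no distributed
-- transaction is ever started.
CREATE TRIGGER New_Event ON dbo.Table46 FOR INSERT AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.EventQueue (EventID)
    SELECT i.[046_ID] FROM inserted AS i;
END;
A separate job or application then polls EventQueue on its own connection, inserts the rows into [LinkedServer].[Database].[dbo].[TableEvents], and marks them Processed = 1 (or deletes them).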
One way to implement the second approach is to use SQL Server Service Broker. In the trigger, send the necessary data to a message queue; on the receiving (linked) server, process the messages and insert the data into TableEvents. Service Broker ensures transactional integrity all the way, without the use of MSDTC between the two servers, while decoupling them. Note that the servers no longer need to be linked at all (unless you need them linked for some other reason).
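For completeness, a heavily simplified Service Broker sketch (object names are illustrative, and the routing and security setup between the two servers is not shown):
-- One-time setup on the sending side.
CREATE MESSAGE TYPE EventMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT EventContract (EventMessage SENT BY INITIATOR);
CREATE QUEUE EventBrokerQueue;
CREATE SERVICE EventService ON QUEUE EventBrokerQueue (EventContract);
GO
-- Inside the trigger: send the row data as a message instead of
-- inserting into the linked server directly.
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE EventService
    TO SERVICE 'TargetEventService'
    ON CONTRACT EventContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE EventMessage (N'<event id="1"/>');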

SSIS package with CHANGE TRACKING keeps missing records

I have an SSIS package using CHANGE TRACKING that runs every 5 minutes to perform one way synchronization on a table.
These are the DB's involved:
DestDB
SourceDB
DestDB contains a table called TableSyncVersions that is used to keep track of the most recent Sync version used to extract information from the table in SourceDB. This Sync Version is used for the next execution of the package to get the next batch of data.
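As an aside, one guard worth having in this kind of setup is to check the saved version against the table's minimum valid version before using it, since changes older than the Change Tracking retention window are cleaned up and silently lost. A sketch, with the TableSyncVersions column names assumed:
DECLARE @last_sync_version BIGINT =
    (SELECT SyncVersion FROM TableSyncVersions WHERE TableName = 'TABLE1');
IF @last_sync_version < CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('TABLE1'))
    RAISERROR('Change tracking history expired; a full re-sync is required.', 16, 1);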
SourceDB has Snapshot Isolation enabled and the CT Query is being executed by an "OLE DB Source" in SSIS. The Query is as follows:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
--Using OLE DB parameters to capture the current version within the transaction
SELECT ? = CAST(CHANGE_TRACKING_CURRENT_VERSION() AS NVARCHAR)
SELECT ct.KeyColumn1
, ct.KeyColumn2
, ct.KeyColumn3
, st.Column1
, st.Column2
, st.Column3
, st.Column4
, ct.SYS_CHANGE_OPERATION
FROM TABLE1 AS st
--Using OLE DB Parameters to reference the version # saved in TableSyncVersions
RIGHT OUTER JOIN CHANGETABLE(CHANGES TABLE1, ?) AS ct
ON st.KeyColumn1 = ct.KeyColumn1
AND st.KeyColumn2 = ct.KeyColumn2
AND st.KeyColumn3 = ct.KeyColumn3
COMMIT TRAN;
Here is a screenshot of the Control Flow for this package: [screenshot not included]
At least once a day the package misses 5-20 records even though it runs without error, and the records are missed at different times every day. Has anyone experienced anything like this with Change Tracking before?
Any help is greatly appreciated.
Thank you,
Tory Hill

What is the behavior of Coldfusion cftransaction tag when multiple databases are accessed?

The ColdFusion documentation (I'm using CF8) states:
Changes to data that is requested
by the queries are not committed to the datasource until all actions within
the transaction block have executed successfully.
But it also states:
In a transaction block, you can write queries to more than one database, but you must commit or roll back a transaction to one database before writing a query to another
I have multiple transactions in my code base which access two databases for both selects and updates/inserts. The code assumes that all queries will either succeed or all be rolled back, but I don't know if that is true given the line in the docs that says: "but you must commit or roll back a transaction to one database before writing a query to another".
What is the behavior if a write to the first database succeeds, then the subsequent write to another database fails? Will the first be rolled back?
What the documentation means is that you must put a <cftransaction action="commit"> after the queries to one database before you can move on to using another datasource. It will throw an error if it detects that you have <cfquery> tags with different datasources inside a transaction without using the commit. See your database documentation for exact transaction support, as CFML (via the database driver) only sends transaction commands on your behalf; it is not responsible for their execution or behavior. Enable JDBC logging in your database to see this in action.
Won't work:
<cftransaction action="begin">
<cfquery datasource="foo">
select * from foo_test
</cfquery>
<cfquery datasource="bar">
select * from bar_test
</cfquery>
</cftransaction>
Will work:
<cftransaction action="begin">
<cfquery datasource="foo">
select * from foo_test
</cfquery>
<cftransaction action="commit"><!-- Commit before switching DSNs --->
<cfquery datasource="bar">
select * from bar_test
</cfquery>
</cftransaction>
If you are using three-part names for multiple-database access through a single datasource, the transaction control will work.
<cftransaction action="begin">
<cfquery datasource="foo">
INSERT INTO foo_test ( id )
VALUES ( 70 )
</cfquery>
<!--- Insert into the bar database via the foo datasource --->
<cfquery datasource="foo">
INSERT INTO bar.dbo.bar_test (id )
VALUES ( 'frank' ) <!--- Fails because 'frank' is not an int, so the prior insert into foo is rolled back --->
</cfquery>
</cftransaction>
The default behaviour for CFTransaction is that all writes will be rolled back if there is an exception anywhere within the transaction block. So if one query fails, all queries are rolled back.
This is only the case if the database supports commits and rollbacks via Transaction Control Language, a subset of SQL.
However, you can granularly control how cftransaction works beyond the default behaviour, including features such as savepoints and nested transactions.

max(column_name) is returning the same value when the load is heavy on the database

The query below is hit continuously and records are being inserted into the table TRANSACTION_MAIN, but @Confrm, which is one more than max(TRN_CNFRM_NBR), is the same for a couple of transactions. This behaviour is seen only when the load on the database server is very high. Any insights into why this behaviour is observed and what might be happening behind the scenes?
BEGIN TRANSACTION
DECLARE @Confrm AS INT;
SET @Confrm = (SELECT ISNULL(MAX(CONVERT(INT, TRN_CNFRM_NBR)), 0)
    FROM TRANSACTION_MAIN
    WHERE TRN_UC_LOC = #1) + 1;
DECLARE @TMID AS INT;
INSERT INTO TRANSACTION_MAIN(
TRN_CNFRM_NBR
,TRN_UC_LOC
,TRN_STAT_ANID
,TRN_SRC_ANID
,PRSN_ANID
,TRN_DT
,TRN_ACTL_AMT
,TRN_MTHD
,TRN_IP_ADDR
,DSCT_CD
,TRN_PAID_AMT
,CSHR_ANID
,INVDEPTEQUIP_ANID
,TRN_CMNT
,CPN_DSCT_TOTAL
)
VALUES(@Confrm,#1,#2,#3,#4,#5,#6,#7,#8,#9,#10,#11,#12,#13,#14);
SET @TMID = @@IDENTITY;
COMMIT TRANSACTION
Well, this isn't safe, so it shouldn't be a surprise that it doesn't work.
There are a few contributors to the final result. First, simultaneous selects for the max value will of course return the same value, because the "new" rows haven't been inserted yet. Second, depending on the transaction isolation level, the select may not see a row that has been inserted but not yet committed.
As a quick fix, it should help to simply set the transaction isolation level higher. This will of course reduce your throughput and increase the risk of deadlocks, but at least it will be correct. Or, if you're on a recent enough version of SQL Server (2012 or later), use sequences. Or thread-safe CLR code.
And if you're stuck on an old SQL Server version and can't handle the higher transaction isolation, you can try implementing this using your own sequences, where incrementing the sequence is an atomic operation. There's a nice example in the Microsoft SQL Server Migration Assistant for Oracle.
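For example, here is a sketch of the lock-hint variant, with @Loc standing in for the application's #1 parameter (its actual type is assumed). UPDLOCK plus HOLDLOCK makes the read take and hold a lock on the qualifying rows until commit, so two concurrent transactions cannot both read the same maximum:
BEGIN TRANSACTION;
DECLARE @Confrm INT;
DECLARE @Loc INT; -- stand-in for the #1 parameter
-- UPDLOCK + HOLDLOCK: hold the lock until commit, serializing
-- concurrent readers of the maximum.
SELECT @Confrm = ISNULL(MAX(CONVERT(INT, TRN_CNFRM_NBR)), 0) + 1
FROM TRANSACTION_MAIN WITH (UPDLOCK, HOLDLOCK)
WHERE TRN_UC_LOC = @Loc;
INSERT INTO TRANSACTION_MAIN (TRN_CNFRM_NBR, TRN_UC_LOC /* , ... */)
VALUES (@Confrm, @Loc /* , ... */);
COMMIT TRANSACTION;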
You have misplaced the addition of 1.
The query should look like this:
SET @Confrm = (
    SELECT ISNULL(MAX(CONVERT(INT, TRN_CNFRM_NBR)), 0) + 1 -- add 1 here
    FROM TRANSACTION_MAIN
    WHERE TRN_UC_LOC = #1
);

Database is in Transition state

Today I was trying to restore a database over an already existing database. I simply right-clicked the database in SSMS --> Tasks --> Take Offline so I could restore it.
A small pop-up window appeared and showed Query Executing... for some time, then threw an error saying the database is in use and cannot be taken offline. From this I gathered that there were some active connections to the database, so I tried to execute the following query:
USE master
GO
ALTER DATABASE My_DatabaseName
SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
Again at this point SSMS showed Query Executing... for some time and then threw the following error:
Msg 5061, Level 16, State 1, Line 1
ALTER DATABASE failed because a lock could not be placed on database 'My_DatabaseName'. Try again later.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
After this I could not connect to the database through SSMS, and when I tried to take it offline using SSMS it threw an error saying:
Database is in Transition. Try later .....
At this point I simply couldn't touch the database; anything I tried returned the same error message, Database is in Transition.
I got on Google and read some questions where people had faced a similar issue; they recommended closing SSMS and opening it again, so I did. Since it was only a dev server, I just deleted the database using SSMS and restored it to a new database.
My question is: what could have possibly caused this, how can I avoid it in the future, and if I ever end up in the same situation again, is there any way of fixing it other than deleting the whole database?
Thank you
Check this out. This will help you release locks. Works great! https://dba.stackexchange.com/questions/57432/database-is-in-transition-error
Use this:
select
l.resource_type,
l.request_mode,
l.request_status,
l.request_session_id,
r.command,
r.status,
r.blocking_session_id,
r.wait_type,
r.wait_time,
r.wait_resource,
request_sql_text = st.text,
s.program_name,
most_recent_sql_text = stc.text
from sys.dm_tran_locks l
left join sys.dm_exec_requests r
on l.request_session_id = r.session_id
left join sys.dm_exec_sessions s
on l.request_session_id = s.session_id
left join sys.dm_exec_connections c
on s.session_id = c.session_id
outer apply sys.dm_exec_sql_text(r.sql_handle) st
outer apply sys.dm_exec_sql_text(c.most_recent_sql_handle) stc
where l.resource_database_id = db_id('<YourDatabase>')
order by request_session_id;
and then, for each process number returned:
KILL <processnumber>
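If there are a lot of sessions, you can generate the KILL statements instead of typing them out (review the list before running it):
SELECT DISTINCT 'KILL ' + CAST(l.request_session_id AS VARCHAR(10)) + ';'
FROM sys.dm_tran_locks AS l
WHERE l.resource_database_id = DB_ID('<YourDatabase>');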
Check out this article.
http://oostdam.info/index.php/sectie-blog/289-sql-error-952-8ways-to-solve-it
I use T-SQL most of the time, so I have not run into this issue yet.
What version is the SQL Server database, and at what patch level?
Next time, run usp_who2 to see what threads are running:
http://craftydba.com/wp-content/uploads/2011/09/usp-who2.txt
Since the output is in a table, you can search by database.
Kill all threads using the database before trying the ALTER statement.
One night about six months ago, I had a terrible time getting a SQL Server 2000 database offline because an application was constantly hitting it. I eventually disabled the user account so I would not get any more logins.
