Why does READPAST work in SSMS but not via OLEDB? - sql-server

We're trying to use READPAST in a SQL select statement to extract data from a SQL Server 2008 database using QlikView, which is set up to use OLEDB connection to the database.
The reason is that we want to avoid being blocked by other processes, but we also don't want to read any uncommitted data - otherwise we'd just use NOLOCK.
We tested the approach in SSMS first: in one session we started a transaction and inserted a row, then in a separate session we queried the table with READPAST. This didn't return the uncommitted row, which is what we want.
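A minimal sketch of that two-session test (the table and column names here are hypothetical):

-- Session 1: insert a row but leave the transaction open
BEGIN TRANSACTION;
INSERT INTO dbo.TestTable (Id, Payload) VALUES (1, 'uncommitted');
-- no COMMIT yet

-- Session 2: READPAST skips the row locked by session 1 instead of blocking on it
SELECT *
FROM dbo.TestTable WITH (READPAST);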
We then added this to our OLEDB SQL query (same query, same database) in QlikView and ran the code. This time it waited for the transaction to be closed (committed or rolled back) before it finished the query.
We also tried ODBC and SQL Native Client, both of which QlikView supports, but got the same result.
We also tried with NOLOCK as the hint instead and this performs as expected - it returns the uncommitted row in both SSMS and QlikView.
Any idea why this would work in SSMS and not via OLEDB/ODBC/SQLNC?
Is there a configuration option on the database or the connection that needs changing?

Related

How does SQL deal with a long running query via Linked Server if there’s a PC reboot?

I have a SQL Server database and have a linked server connection to an Oracle DB.
I had the following query running on my SQL Server:
INSERT INTO dbo.my_table_on_sql_server
SELECT *
FROM OPENQUERY (linkedservername, 'SELECT * FROM target_table')
The target_table has 50 million rows and I'm aware the query takes time to execute but has successfully completed before.
This time though, my PC restarted automatically in the middle of the query. SSMS 2017 reopened as soon as the PC came back up, but I could no longer see the query running, and my_table_on_sql_server has no data.
I'd like to understand what happens in SQL Server in the event of such a situation. Am I correct in assuming that the query was killed / rolled back? Is there any query running in the background? I've seen some related answers on this forum but wanted to specifically understand this for linked servers, as I use them a lot to retrieve data from other DBs for my job.
I'm more concerned about the Oracle DB as I don't want my query to impact any performance upstream. I only have a read-only access permission to the Oracle DB.
Thank you!
On shutdown the query will be aborted, and the INSERT rolled back. The rollback may happen during shutdown, or after restart, and may take some time to complete.
There's no automatic retry, and nothing will access the linked Oracle server after the shutdown.
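If you want to confirm that the rollback has finished, one option (a general-purpose sketch, not specific to linked servers) is to look for rollback or recovery activity after the restart:

-- Sessions still rolling back, and databases still running startup recovery
SELECT session_id, command, status, percent_complete
FROM sys.dm_exec_requests
WHERE command LIKE '%ROLLBACK%' OR command LIKE 'DB STARTUP%';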

Entity Framework / SQL Server database locks

I am having trouble understanding what is locking my SQL Server database. I am accessing data from a SQL Server database via Entity Framework, and while that web application is running, I am also checking data with SQL Server Management Studio.
Apparently, when I try to read different tables with right click > "Select TOP n rows", I get the message
Failed to retrieve data - Lock timeout period exceeded (Error 1222).
This doesn't happen when I read the data manually via a Select * from ... statement - or at least I haven't noticed it yet. Is there any difference between those two approaches?
And more importantly, how can I figure out what is locking my database? I did some research but still don't quite understand what to do. I tried using
DBCC OPENTRAN and then
exec sp_who2 SPID
exec sp_lock SPID
which tells me there is an active transaction from Entity Framework, but not which one exactly. I am using a few transactions in my application, but as far as I can tell those are on different tables than the ones I am trying to access via Management Studio. Are those transactions locking up the whole database?
Appreciate any help.
Try Adam Machanic's excellent stored procedure sp_WhoIsActive:
EXEC sp_WhoIsActive;
In addition, this stored procedure shows the SQL text each session is executing. It can be downloaded from Adam Machanic's site (whoisactive.com).
For example, its output might show that the query with session_id = 75 is waiting on the query with session_id = 90; that blocking chain is the cause of your lock timeouts.
sp_who2 is a useful procedure too - its BlkBy column shows which session is blocking which. Once you have identified the blocking session, decide whether it should be killed:
KILL YourNumberOfSessionID
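If you prefer to query the DMVs directly instead, a minimal sketch that lists blocked requests, their blockers, and the SQL being executed:

-- Currently blocked requests, who is blocking them, and the SQL being run
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS executing_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;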

Using SAVE TRANSACTION with a linked server

Inside a transaction that has a savepoint, I have to join with a table that is on a linked server. When I try to do it, I get the error message:
"Cannot use SAVE TRANSACTION within a distributed transaction"
The remote table's data rarely changes; it is almost fixed. Is it possible to tell SQL Server to exclude this table from the transaction? I've tried a (NOLOCK) hint, but it isn't possible to use that hint on a table in a linked server.
Does anyone know a workaround? I'm using the old SQL Server 2000.
One thing that you could do is make a local copy of the remote table before you start the transaction. That may sound like a lot of overhead, but remote joins are frequently a performance problem anyway, and the standard fix for that is also to make a local copy.
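A minimal sketch of that approach (the linked server, database, table, and column names are hypothetical):

-- Copy the rarely-changing remote table locally, filtered as tightly as possible
SELECT *
INTO #remote_copy
FROM LinkedServer.RemoteDb.dbo.RemoteTable
WHERE SomeKeyColumn > 0;

-- The savepoint transaction now joins against the local copy only,
-- so it never has to escalate to a distributed transaction
BEGIN TRANSACTION;
SAVE TRANSACTION BeforeJoin;
SELECT l.*, r.*
FROM dbo.LocalTable AS l
JOIN #remote_copy AS r ON r.SomeKeyColumn = l.SomeKeyColumn;
COMMIT TRANSACTION;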
According to this link, the ability to use SAVEPOINTs in a distributed transaction was dropped in SQL Server 7:
To allow application migration from Microsoft SQL Server 6.5 when savepoints inside distributed transactions are in use, Microsoft SQL Server 2000 Service Pack 1 introduces a trace flag that allows a savepoint within a distributed transaction. The trace flag is 8599 and can be turned on during the SQL Server startup or within an individual session (that is, prior to enabling a distributed transaction with a BEGIN DISTRIBUTED TRANSACTION statement) by using the DBCC TRACEON command. When trace flag 8599 is set to ON, SQL Server allows you to use a savepoint within a distributed transaction.
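Based on that description, enabling the flag per session would look something like this (a sketch; SQL Server 2000 SP1 and later only, with a hypothetical savepoint name):

-- Allow savepoints inside distributed transactions for this session
DBCC TRACEON (8599);
BEGIN DISTRIBUTED TRANSACTION;
SAVE TRANSACTION MySavepoint; -- permitted while trace flag 8599 is on
-- ... work against the linked server ...
COMMIT TRANSACTION;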
So unfortunately, you may either have to drop the bounding ACID transaction, or change the SPROC on the remote server so that it doesn't use SAVEPOINTs.
On a side note: although the question is tagged SQL Server 2000, it's worth mentioning that SQL Server 2008 has the remote proc trans option for this.
In this case, if the distributed table is not too large, I would copy it to a temp table - if possible, include filtering to keep the number of rows to a minimum - and then proceed normally. Another option, since the data changes rarely, is to copy the data to a permanent table and check whether anything has changed, to avoid sending too much data over the network every time you run the transaction; you could pull over only the recent changes.
If you wish to handle the transaction at the UI level and you have Visual Studio 2008/.NET Framework 3.5 or later, you can wrap your logic with the TransactionScope class. If you don't have any front ends and are working only on SQL Server, kindly ignore my answer...

TimeoutException in SQL Server 2005 migrated from SQL Server 2000

I've recently upgraded a SQL Server from 2000 to 2005, to make use of UDFs and tweak some results in a system. The thing is that we don't have the source code.
So I replaced the SQL Server version, and everything worked fine... except when we have to run a large query. I'm getting this error:
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
I've searched around, and what I found is that this is usually a CommandTimeout issue that has to be solved programmatically, since the timeout is supposed to be on the client side. But this is weird, because it always worked before, even with big queries.
My guess is that it's not something client-side, because it worked fine under SQL Server 2000.
Is there any way to remove any kind of timeout? The system is completely internal and only a few people use it, so there's no risk of outages... I'd rather have a query that runs forever than these annoying messages.
Thanks in advance!
Have you updated all statistics after the upgrade?
How to: Upgrade to SQL Server 2005 (Setup)
After upgrading the Database Engine to SQL Server 2005, complete the following tasks:
...
Update statistics - To help optimize query performance, we recommend that you update statistics on all databases following upgrade. Use the sp_updatestats stored procedure to update statistics in user-defined tables in SQL Server 2005 databases.
Update usage counters - In earlier versions of SQL Server, the values for the table and index row counts and page counts can become incorrect. To correct any invalid row or page counts, we recommend that you run DBCC UPDATEUSAGE on all databases following upgrade.
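In practice that amounts to something like the following, run in each migrated database (the database name is hypothetical):

USE YourDatabase;
EXEC sp_updatestats;  -- refresh statistics on all user tables
DBCC UPDATEUSAGE (0); -- correct row/page counts in the current database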
You can also increase the timeout, either in your config file or in code when you set up the command:
SqlCommand1.CommandTimeout = 400000

SQL Server 2008 R2 Distributed Partition View Update/Delete issue

I have a big table for storing articles (more than 500 million records), so I use the distributed partitioned view feature of SQL Server 2008 across 3 servers.
Select and Insert operations work fine, but Delete and Update actions take a long time and never complete.
On the Processes tab of Activity Monitor, I see that the Wait Type field is "PREEMPTIVE_OLEDBOPS" for the Update command.
Any idea what the problem is?
Note: I think the problem is with MSDTC, because the Update command is not shown in SQL Profiler on the second server, yet when I check the MSDTC status on that same server, the status column is Update(active).
What is most likely happening is that all the data from the other server is pulled over to the machine where the query is running before the filter of your update statement is applied. This can happen when you use 4-part naming. Possible solutions are:
Make sure each member table has a correct check constraint defining the minimum and maximum values of the partitioning column it holds. Without this, partition elimination is not going to work properly (see the sketch after this list).
Call a stored procedure with 4-part naming on the other server to do the update.
Use OPENQUERY() to connect to the other server.
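A minimal sketch of the first point, with hypothetical names and ranges - each member table of a distributed partitioned view needs a CHECK constraint on the partitioning column so the optimizer can eliminate remote members:

-- Member table on server 1: holds ArticleID 1 through 10,000,000 only
CREATE TABLE dbo.Articles_1 (
    ArticleID INT NOT NULL
        CONSTRAINT CK_Articles_1_Range CHECK (ArticleID BETWEEN 1 AND 10000000),
    Title NVARCHAR(200) NOT NULL,
    CONSTRAINT PK_Articles_1 PRIMARY KEY (ArticleID)
);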
To serve 500 million records, a single server seems adequate; a setup using table partitioning with a sliding window is probably a more cost-effective way of handling the volume.
