Selecting the right transaction isolation level with ADO and MS SQL Server - sql-server

I have a program and:
I create a table named table1. At this point [database1].table1 is empty.
I start my program. My program calls the ADO Connection object's BeginTrans method.
My program performs a time-consuming series of inserts on [database1].table1 via an ADOQuery.
My program calls the ADO Connection object's CommitTrans method.
If I start the program and then issue a SELECT * FROM table1 query in Management Studio, the query does not return until my program has finished.
I want to see an empty result set immediately, without waiting, when I issue the SELECT query in Management Studio.
Which transaction isolation level should I use, and how can I configure it with ADO programmatically?
Edits:
As far as I have googled, I may select one of the Repeatable Read, Snapshot or Serializable isolation levels and/or optimistic locking. I will run some tests and answer my own question based on the results.
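For reference, a minimal T-SQL sketch of the snapshot-based approach I intend to test (assuming [database1] is the real database name; these are standard SQL Server row-versioning options):

    -- Run once as an administrator. With READ_COMMITTED_SNAPSHOT on,
    -- readers at the default isolation level see the last committed
    -- row versions instead of blocking on the writer's open transaction.
    ALTER DATABASE [database1] SET READ_COMMITTED_SNAPSHOT ON
        WITH ROLLBACK IMMEDIATE;

    -- Alternatively, enable SNAPSHOT isolation and opt in per session:
    ALTER DATABASE [database1] SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Then, in the reading session (e.g. Management Studio):
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    SELECT * FROM table1;  -- returns the committed (empty) state immediately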
Edit after Doug_Ivison's comment:
In my situation, another closed-source application uses the tables instead of Management Studio, and we are attempting a kind of replication. (Sorry for the missing detail; I was trying to keep the question short.)
Thank you for reading my post.
Regards
Ömür Ölmez.

Related

SQL Server SPIDS go into a sleeping state and never recover

I have a long-running stored procedure that is executed from IIS. On average this stored procedure takes between two and five minutes to complete because it is searching through a large dataset (although it has taken around 20 minutes in some cases).
Most of the time the stored procedure works fine, but every now and then the SPIDs go into a sleeping state and never recover. The only solution I have found is to restart SQL Server and re-run the stored procedure.
There are no table inserts in the proc (only table-variable inserts); the other statements are selects on a large table.
I'm stuck for where to start debugging this issue. Any hints on what it might be, or suggestions on tools that would help me find the issue, would be most helpful.
EDIT: More info added:
The actual issue is that the proc doesn't return the result set. My first thought was to look at the SPIDs; they were sleeping, but the CPU time was still increasing.
It's a .NET app: .NET Core 3.1 with ASP.NET Core and a Blazor UI. The library used for the DB connection is System.Data.SqlClient, which I believe uses its own custom driver. Calling code below:
The stored procedure doesn't return multiple result sets; however, different instances of the proc obviously run at the same time.
There are no limits on connection pooling in IIS.
@RichardWatts when you say "re-run the stored procedure", do you mean that the same stored proc with the same parameters and data works once you restart SQL Server?
If so, look at the locks (sp_lock) on your tables; probably another process has locked some data and isn't releasing it properly, especially if you have transactions accessing the same tables.
What is the isolation level on your connection? If you can, try changing it to READ UNCOMMITTED to see if that solves your problem.
As an alternative, you can also add a WITH (NOLOCK) or WITH (READUNCOMMITTED) hint to your SQL command.
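For example (table and column names here are hypothetical):

    -- The hint lets the read skip waiting on other sessions' locks,
    -- at the price of possibly seeing uncommitted data.
    SELECT Id, Status
    FROM dbo.Orders WITH (NOLOCK)
    WHERE Status = 'PENDING';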
Be aware that a query running with READ UNCOMMITTED or NOLOCK will still be held up if, for example, the structure of your table is being modified or an index is being rebuilt; those operations will in turn block its execution.
Nevertheless, be cautious: whether this solution fits depends on your environment. Especially if your tables get lots of updates, deletes and inserts, this kind of isolation can lead to dirty reads, and it doesn't address the root cause of your problem, which I would bet is an uncommitted transaction (there are good articles that explain this).
Also run a DBCC CHECKTABLE just to be sure nothing is wrong on that side.
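As a starting point, something like this sketch shows who is blocking whom (these are standard DMVs; you need VIEW SERVER STATE permission):

    -- Sessions that are currently blocked, and what they are running.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           t.text AS running_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;

    -- Uncommitted transactions are a common culprit:
    DBCC OPENTRAN;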

We are using Sybase/ODBC: how do we deal with disconnects while running long batch SQL queries?

We are developing an application in C# that uses ODBC and the "Adaptive Server Enterprise" driver to extract data from a Sybase DB.
We have a long SQL batch query that creates a lot of intermediate temporary tables and returns several DataTable objects to the application. We are seeing exceptions saying TABLENAME not found, where TABLENAME is one of our intermediate temporary tables. When I check the status of the OdbcConnection object in the debugger, it is Closed.
My question is very general: is this the price you pay for having long-running, complicated queries, or is there a reliable way to get rid of such spurious disconnects?
Many thanks in advance!
There are a couple of ODBC timeout parameters - see the SDK docs at:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc20116.1550/html/aseodbc/CHDCGBEH.htm
Specifically CommandTimeOut and ConnectionTimeOut, which you can set accordingly.
But it is much more likely that you're being blocked or similar while the process is running - maybe ask your DBA to check the query plans for the various steps in your batch and look for specific problem areas, such as table scans, which could be masking your timeout issue.
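As a rough sketch, those settings typically go in the ODBC connection string. The parameter names are the ones from the linked docs, but the placement and values below are illustrative assumptions, so verify them against the SDK documentation for your driver version:

    Driver={Adaptive Server Enterprise};server=myhost;port=5000;db=mydb;
    uid=myuser;pwd=mypassword;
    CommandTimeOut=600;ConnectionTimeOut=60;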

Detect Table Changes In A Database Without Modifications

I have a database ("DatabaseA") that I cannot modify in any way, but I need to detect the addition of rows to a table in it and then add a log record to a table in a separate database ("DatabaseB") along with some info about the user who added the row to DatabaseA. (So it needs to be event-driven, not merely a periodic scan of the DatabaseA table.)
I know that normally, I could add a trigger to DatabaseA and run, say, a stored procedure to add log records to the DatabaseB table. But how can I do this without modifying DatabaseA?
I have free rein to do whatever I like in DatabaseB.
EDIT in response to questions/comments ...
Databases A and B are MS SQL 2008/R2 databases (as tagged); users interact with the DB via a proprietary Windows desktop application (not my own), and each user has a SQL login associated with their application session.
Any ideas?
OK, so I have not put together a proof of concept, but this might work.
You can configure an Extended Events session on databaseB that watches for all the procedures on databaseA that can insert into the table, or for any SQL statements that run against the table on databaseA (using a LIKE '%your table name here%').
This is a custom solution that writes the XE session to a table:
https://github.com/spaghettidba/XESmartTarget
You could probably mimic its functionality by writing the XE events table to a custom user table every minute or so using a SQL Server Agent job.
Your session would monitor databaseA and write the XE output to databaseB; you would then write a trigger so that, upon each XE output write, it compares the two tables and, if there are differences, writes them to your log table; a minimal session sketch follows below. This would be a nonstop running process, but it is still a kind of periodic scan in a way: the XE target only writes when the event happens, but the comparison still runs every couple of seconds.
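A minimal sketch of such a session might look like this (the event, actions and predicate are standard Extended Events objects; the session name, file name and filter string are placeholders, and on SQL Server 2008 the file target is named package0.asynchronous_file_target rather than package0.event_file):

    -- Capture completed statements that mention the watched table.
    CREATE EVENT SESSION [WatchTableA] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    (
        ACTION (sqlserver.session_id, sqlserver.username, sqlserver.sql_text)
        WHERE sqlserver.like_i_sql_unicode_string(sqlserver.sql_text,
                                                  N'%your table name here%')
    )
    ADD TARGET package0.event_file (SET filename = N'WatchTableA.xel');

    ALTER EVENT SESSION [WatchTableA] ON SERVER STATE = START;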
I recommend you look at a data integration tool that can mine the transaction log for Change Data Capture events. We have recently been using StreamSets Data Collector for Oracle CDC, but it also supports SQL Server CDC. There are many other competing technologies, including Oracle GoldenGate and Informatica PowerExchange (not PowerCenter). We like StreamSets because it is open source and is designed to build real-time data pipelines between databases at the schema level. Until now we have used batch ETL tools like Informatica PowerCenter and Pentaho Data Integration. I can copy all the tables in a schema in near real time in one StreamSets pipeline, provided I have already deployed the DDL in the target; I use this approach between Oracle and Vertica. You can add additional columns to the target and populate them as part of the pipeline.
The only catch might be identifying which user made the change. I don't know whether that information is in the SQL Server transaction log; it seems probable, but I am not a SQL Server DBA.
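For completeness, SQL Server's own native CDC looks roughly like the sketch below, though note that it modifies the source database, which the question rules out for DatabaseA:

    -- Requires Enterprise edition on SQL Server 2008 and
    -- db_owner/sysadmin rights; shown only for comparison.
    USE DatabaseA;
    EXEC sys.sp_cdc_enable_db;
    EXEC sys.sp_cdc_enable_table
         @source_schema = N'dbo',
         @source_name   = N'table1',  -- placeholder table name
         @role_name     = NULL;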
I looked at both solutions provided at the time of writing this answer (see the answers from Dan Flippo and dfundaka) but found that the first, using Change Data Capture, required modification to the database, and the second, using Extended Events, wasn't really a complete answer, though it got me thinking of other options.
The option that seems cleanest, and doesn't require any database modification, is to use the SQL Server Dynamic Management Views. These reside in the system database and include various views and functions for inspecting server process history - in this case INSERTs and UPDATEs - such as sys.dm_exec_sql_text and sys.dm_exec_query_stats, which contain records of database statements (and are, in fact, what Extended Events seems to be based on).
Though it's quite an involved process initially to extract the required information, the queries can be tuned and generalised to a degree.
There are restrictions on transaction history retention, etc., but for the purposes of this particular exercise that wasn't an issue.
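As a minimal sketch of the kind of query involved (standard DMVs; the LIKE filters are placeholders, and only plans still in cache are visible, which is the retention restriction just mentioned):

    SELECT qs.last_execution_time,
           qs.execution_count,
           st.text AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.text LIKE '%your table name here%'
      AND (st.text LIKE '%INSERT%' OR st.text LIKE '%UPDATE%')
    ORDER BY qs.last_execution_time DESC;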
I'm not going to select this answer as the correct one yet, partly because it's a matter of preference how you approach the problem, and also because I have yet to provide a complete solution. Hopefully I'll post back with that later. But if anyone cares to comment on this approach - good or bad - I'd be interested in your views.

How does SQL Server insert data in parallel between applications?

I have two applications.
One inserts data into the database continuously, as if in an infinite loop.
When the second application inserts data into the same database and table, what will happen?
Does it wait until the other application has finished inserting, and what handles this?
Or will it say it is busy?
Or will the code throw an exception?
SQL Server has something called a connection pool, which means that more than one connection to the database can be open at any particular time, and that's where the easy bit ends.
If you were, for example, to connect to the database from two applications at the same time and insert data into different tables from each application, then the two inserts could happily happen at the same time without issue.
If, however, those applications wanted to do something like edit the same row, then there's an issue with "locking"...
Essentially, any operation on a SQL database requires acquiring a lock on a table, page or row; depending on the operation and the configuration of the server, it's hard to say exactly what might happen in your case.
So the simple answer is:
Yes, SQL Server can make stuff happen (like inserts) at the same time, but with some caveats.
And the long answer...
requires in-depth knowledge of locking and of your database and server configuration.
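To illustrate with a hedged sketch (hypothetical table, default READ COMMITTED isolation): two sessions inserting different rows each take row-level exclusive locks and proceed concurrently; neither gets a "busy" error, and blocking only appears when they contend for the same resource.

    CREATE TABLE dbo.Events (
        Id      INT IDENTITY PRIMARY KEY,
        Payload VARCHAR(100) NOT NULL
    );

    -- Session 1:
    BEGIN TRANSACTION;
    INSERT INTO dbo.Events (Payload) VALUES ('from app 1');
    -- holds an exclusive (X) lock on its new row until COMMIT

    -- Session 2, running at the same time:
    INSERT INTO dbo.Events (Payload) VALUES ('from app 2');
    -- succeeds immediately: it locks a different row

    -- Session 1:
    COMMIT;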

Why does READPAST work in SSMS but not via OLEDB?

We're trying to use READPAST in a SQL select statement to extract data from a SQL Server 2008 database using QlikView, which is set up to use OLEDB connection to the database.
The reason for this is that we want to avoid being blocked by other processes, but we also don't want to read any uncommitted data - otherwise we'd be using NOLOCK.
We tested the approach in SSMS initially - starting a transaction, adding a row, then separately querying the table with READPAST. This didn't return the uncommitted row, as we'd want.
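For reference, the SSMS repro looks roughly like this (the table and values are placeholders):

    -- Session 1: leave a transaction open with an uncommitted row.
    CREATE TABLE dbo.Source (Id INT PRIMARY KEY, Name VARCHAR(50));
    BEGIN TRANSACTION;
    INSERT INTO dbo.Source (Id, Name) VALUES (42, 'uncommitted');

    -- Session 2: READPAST skips the locked row instead of waiting on it.
    SELECT Id, Name
    FROM dbo.Source WITH (READPAST);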
We then added this to our OLEDB SQL query (same query, same database) in QlikView and ran the code. This time it waited for the transaction to be closed (committed or rolled back) before the query finished.
We also tried ODBC and SQL Native Client, which are both supported by QlikView, but got the same results.
We also tried with NOLOCK as the hint instead, and this performs as expected - it returns the uncommitted row in both SSMS and QlikView.
Any idea why this would work in SSMS and not via OLEDB/ODBC/SQLNC?
Is there a configuration option on the database or the connection that needs changing?
