I have a problem on one specific SQL Server 2008 customer installation. The code below simulates the problem, which occurs in a more complex system. Two connections (each with its own transaction) are opened, and each connection modifies a table. The modified tables are not related to each other. On our development platform and on other existing customer installations the code works fine; only at this one customer does the second update, inside the nested transaction, hang. I could work around it by moving the first update to after the commit of the nested transaction.
I assumed that on this specific installation the database is configured to lock the whole database as soon as a transaction is started, but DBCC USEROPTIONS produces very similar output on the systems where the code works and on this one.
How can I identify what's wrong here?
Here's DBCC useroptions output from the problematic DB (SQL Server 2008) and my simplified test code:
textsize 2147483647
language Deutsch
dateformat dmy
datefirst 1
lock_timeout -1
quoted_identifier SET
arithabort SET
ansi_null_dflt_on SET
ansi_warnings SET
ansi_padding SET
ansi_nulls SET
concat_null_yields_null SET
isolation level read committed
DbCommand command1 = null, command2 = null;
try
{
const string cs = "Provider=SQLOLEDB.1;...";
// open command and a transaction with default isolation level
command1 = DbAccessFactory.CreateInitialzedCommand("System.Data.OleDb", cs, true);
// select something
command1.CommandText = "select * from plannerOrderHeaders where ...";
DataSet ds = BusinessCasesHelper.Fill(command1, null, "plannerOrderHeaders");
// make some changes in the table
...
// update the table in DB
BusinessCasesHelper.Update(command1, ds, true);
// open command and a transaction with default isolation level on the same CS as command1
command2 = DbAccessFactory.CreateInitialzedCommand("System.Data.OleDb", cs, true);
// select something
command2.CommandText = "select * from mdOmOrders where ...";
ds = BusinessCasesHelper.Fill(command2, null, "mdOmOrders");
// make some changes
...
// update the db
BusinessCasesHelper.Update(command2, ds, true);
command2.Transaction.Commit();
cmd2Commited = true;
command1.Transaction.Commit();
}
catch (Exception e) {...}
And why do you use "Provider=SQLOLEDB.1" to access MS SQL Server?
And why do you commit instead of closing and disposing?
I can only guess how the mentioned BusinessCasesHelper, DbAccessFactory, etc. are implemented.
But your question implies that you consider your snippet to be opening a transaction inside another transaction in the same context (i.e. on one connection), while as far as I can see it probably opens two connections which are never disposed.
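To see what is actually going on, it may help to look at the blocking chain while the second update hangs. The following is only a sketch (it assumes you can open a third, separate SqlClient connection with VIEW SERVER STATE permission; the connection string is a placeholder) that queries the standard SQL Server 2008 blocking DMVs:
using System;
using System.Data.SqlClient;

class BlockingCheck
{
    static void Main()
    {
        // Placeholder diagnostic connection string; adjust server and credentials.
        const string cs = "Data Source=.;Initial Catalog=master;Integrated Security=SSPI";

        // Every request that is currently blocked, the session blocking it, and the waiting statement.
        const string sql = @"
            SELECT r.session_id,
                   r.blocking_session_id,
                   r.wait_type,
                   r.wait_time,
                   t.text AS waiting_statement
            FROM sys.dm_exec_requests r
            CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
            WHERE r.blocking_session_id <> 0;";

        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("session {0} blocked by {1}, wait {2} ({3} ms): {4}",
                        reader["session_id"], reader["blocking_session_id"],
                        reader["wait_type"], reader["wait_time"], reader["waiting_statement"]);
                }
            }
        }
    }
}
If blocking_session_id turns out to be the session of your own first connection, the two transactions really are competing for the same locks (for example through a trigger, a foreign key check or an indexed view touching both tables), and the waiting_statement column shows which update is stuck.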
Related
How do you start a transaction in ODBC? Specifically, I happen to be dealing with SQL Server, but the question could apply to any data source.
In native T-SQL, you issue the command:
BEGIN TRANSACTION
--...
COMMIT TRANSACTION
--or ROLLBACK TRANSACTION
In ADO.net, you call:
DbConnection conn = new SqlConnection(connectionString);
conn.Open();
DbTransaction tx = conn.BeginTransaction();
//...
tx.Commit();
//or tx.Rollback();
In OLE DB you call:
IDBInitialize init = new MSDASQL();
IDBCreateSession session = (init as IDBCreateSession).CreateSession();
(session as ITransactionLocal).StartTransaction(ISOLATIONLEVEL_READCOMMITTED, 0, null, null);
//...
(session as ITransactionLocal).Commit();
//or (session as ITransactionLocal).Rollback();
In ADO you call:
Connection conn = new Connection();
conn.BeginTrans();
//...
conn.CommitTrans();
//or conn.RollbackTrans();
What about ODBC?
For ODBC, Microsoft gives a hint on their page Transactions in ODBC:
An application calls SQLSetConnectAttr to switch between the two ODBC modes of managing transactions:
Manual-commit mode
All executed statements are included in the same transaction until it is specifically stopped by calling SQLEndTran.
Which means I just need to know what parameters to pass to SQLSetConnectAttr:
HENV environment;
SQLAllocEnv(&environment);
HDBC conn;
SQLAllocConnect(environment, &conn);
SQLSetConnectAttr(conn, {attribute}, {value}, {stringLength});
//...
SQLEndTran(SQL_HANDLE_ENV, environment, SQL_COMMIT);
//or SQLEndTran(SQL_HANDLE_ENV, environment, SQL_ROLLBACK);
But the page doesn't really give any hint about which parameter will start a transaction. It might be:
SQL_COPT_SS_ENLIST_IN_XA
To begin an XA transaction with an XA-compliant Transaction Processor (TP), the client calls the Open Group tx_begin function. The application then calls SQLSetConnectAttr with a SQL_COPT_SS_ENLIST_IN_XA parameter of TRUE to associate the XA transaction with the ODBC connection. All related database activity will be performed under the protection of the XA transaction. To end an XA association with an ODBC connection, the client must call SQLSetConnectAttr with a SQL_COPT_SS_ENLIST_IN_XA parameter of FALSE. For more information, see the Microsoft Distributed Transaction Coordinator documentation.
But since I've never heard of XA, nor do I need MSDTC to be running, I don't think that's it.
maruo answered it. But to clarify:
HENV environment;
SQLAllocEnv(&environment);
HDBC conn;
SQLAllocConnect(environment, &conn);
SQLSetConnectAttr(conn, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF, SQL_IS_UINTEGER);
//...
SQLEndTran(SQL_HANDLE_ENV, environment, SQL_COMMIT);
//or SQLEndTran(SQL_HANDLE_ENV, environment, SQL_ROLLBACK);
SQLSetConnectAttr(conn, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_ON, SQL_IS_UINTEGER);
ODBC can operate in two modes: AUTOCOMMIT_ON and AUTOCOMMIT_OFF. The default is AUTOCOMMIT_ON. While autocommit is ON, each statement you execute on a statement handle associated with that connection is committed automatically.
Let's see how "manual commit" (i.e. AUTOCOMMIT_OFF) works.
First, you switch autocommit off using something like this:
if (!SQL_SUCCEEDED(Or=SQLSetConnectAttr(Oc, SQL_ATTR_AUTOCOMMIT,
(SQLPOINTER)SQL_AUTOCOMMIT_OFF,
SQL_IS_UINTEGER))) {
// error handling here
}
Where "Oc" is the connection handle.
Second, you run all statements as usual: prepare/execute, bind parameters, and so on. There is NO specific command to "start" a transaction; every statement issued after autocommit has been switched off is part of the current transaction.
Third, you commit:
if (!SQL_SUCCEEDED(Or=SQLEndTran(SQL_HANDLE_DBC, Oc, SQL_COMMIT))) {
// Error handling
}
And, again, all new statements from this point on are automatically part of a new transaction, which you will have to commit with another SQLEndTran call as shown above.
Finally, to switch AUTOCOMMIT_ON again:
if (!SQL_SUCCEEDED(Or=SQLSetConnectAttr(Oc, SQL_ATTR_AUTOCOMMIT,
(SQLPOINTER)SQL_AUTOCOMMIT_ON, SQL_IS_UINTEGER))) {
// Error Handling
}
I'm trying to use LINQ to SQL for integration testing of stored procedures. I call an updating stored procedure and afterwards retrieve the updated row from the database to verify the change. All of this should happen in one transaction so that I can roll the transaction back after the verification.
The code fails at the assert because the row I retrieved does not seem to be updated. I know my SP works when called from ordinary code. Is it even possible to see the updated row in the same transaction?
I'm using SQL Server 2008 and used sqlmetal.exe to create the LINQ to SQL mapping.
I've tried many different things, and right now my code looks like the following:
DbTransaction transaction = null;
try
{
// 'MyDataContext' is a placeholder for the sqlmetal-generated DataContext class
var context =
    new MyDataContext(
        ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString);
context.Connection.Open();
transaction = context.Connection.BeginTransaction();
context.Transaction = transaction;
const string newUserName= "TestUserName";
context.SpUpdateUserName(136049 , newUserName);
context.SubmitChanges();
// select to verify
var user=
(from d in context.Users where d.NUserId == 136049 select d).First();
Assert.IsTrue(user.UserName == newUserName);
}
finally
{
if (transaction != null) transaction.Rollback();
}
I believe you are running into a stale DataContext issue.
Your update is done through a stored procedure, so your context does not "see" the changes and has no way to update the Users entities it has already loaded.
If you use a new DataContext to do the assert, it usually works well. However, since you are using a transaction, you will probably have to enlist the second DataContext in the same transaction, as sketched below.
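As a sketch only (MyDataContext again stands in for the sqlmetal-generated context; the context, transaction, newUserName and 136049 values come from your snippet), the verification inside the try block could look like this:
// Second context on the same open connection, enlisted in the same transaction,
// so it re-reads the row from the database instead of returning a cached entity.
var verifyContext = new MyDataContext(context.Connection);
verifyContext.Transaction = transaction;

var user = (from d in verifyContext.Users
            where d.NUserId == 136049
            select d).First();

Assert.IsTrue(user.UserName == newUserName);
The rollback in the finally block still runs on the original transaction, so the update is undone for both contexts.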
The output of SQL commands that users see when they run them interactively in SQL Server Management Studio is different from the output you get back from executing an ADO command or ADO query object.
USE [DBNAME]
BACKUP DATABASE [DBNAME] TO
DISK = 'C:\SqlBackup\Backup.mdf'
The successful completion output is like this:
Processed 465200 pages for database 'DBNAME', file 'filename' on file 2.
Processed 2 pages for database 'DBNAME', file 'filename_log' on file 2.
BACKUP DATABASE successfully processed 465202 pages in 90.595 seconds (40.116 MB/sec).
When I execute either a TADOCommand or TADOQuery with the CommandText or SQL set as above, I do not get any such output. How do I read this "secondary output" from the execution of an SQL command? I'm hoping that via some raw ADO operations I might be able to execute a command and get back the information above on success, as well as any errors that occur while performing an SQL backup.
Update: The answer below works better for me than my naive attempt (which did not work) using plain Delphi TADOCommand and TADOConnection classes:
create TADOCommand and TADOConnection.
execute command.
get info-messages back.
The problem with my own attempt was that my first command is "use dbname", and the only recordset I traversed in my code was the result of that "use dbname" command, not of the second command I was executing. The accepted answer below traverses all recordsets that come back from executing the ADO command, and thus works much better. Since I'm doing all this in a background thread, I think it's actually better to create the raw COM objects anyway and avoid any VCL entanglement in my thread. The code below could make a nice component; if anybody is interested, let me know and I might release an open source "SQL Backup for Delphi" component.
Here is an example. I've tested it with D7 and MSSQL 2000; it adds all messages from the server to Memo1:
29 percent backed up.
58 percent backed up.
82 percent backed up.
98 percent backed up.
Processed 408 pages for database 'NorthWind', file 'Northwind' on file 1.
100 percent backed up.
Processed 1 pages for database 'NorthWind', file 'Northwind_log' on file 1.
BACKUP DATABASE successfully processed 409 pages in 0.124 seconds (26.962 MB/sec).
Also, if the backup takes a long time, consider running the WHILE loop outside the main thread.
uses AdoInt,ComObj;
.....
procedure TForm1.Button1Click(Sender: TObject);
var cmd : _Command;
Conn : _Connection;
RA : OleVariant;
rs :_RecordSet;
n : Integer;
begin
Memo1.Clear;
Conn := CreateComObject(CLASS_Connection) as _Connection;
Conn.ConnectionString := 'Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=NorthWind;Data Source=SQL_Server';
Conn.Open(Conn.ConnectionString,'','',Integer(adConnectUnspecified));
cmd := CreateComObject(CLASS_Command) as _Command;
cmd.CommandType := adCmdText;
cmd.Set_ActiveConnection(Conn);
cmd.CommandText := 'BACKUP DATABASE [NorthWind] TO DISK = N''c:\sql_backup\NorthWind'' WITH INIT , NOUNLOAD , NAME = N''NortWind backup'', NOSKIP , STATS = 10, NOFORMAT;';
rs:=cmd.Execute(RA,0,Integer(adCmdText));
while (rs<>nil) do
begin
for n:=0 to(Conn.Errors.Count-1)do begin
Memo1.Lines.Add(Conn.Errors.Item[n].Description);
end;
rs:=rs.NextRecordset(RA);
end;
cmd.Set_ActiveConnection(nil);
Conn.Close;
cmd := nil;
Conn := nil;
end;
I found this thread (in Russian) about doing the same for a stored procedure and adapted it for the BACKUP command.
I have a project where I need to monitor changes in a 3rd party database.
SqlDependency seems like a good solution, but it causes the following error in the 3rd-party application:
INSERT failed because the following SET options have incorrect
settings: 'ANSI_NULLS, QUOTED_IDENTIFIER, ANSI_PADDING'. Verify that
SET options are correct for use with indexed views and/or indexes on
computed columns and/or filtered indexes and/or query notifications
and/or XML data type methods and/or spatial index operations.
(The application works fine when my test program below is not running)
What SET options does this refer to?
The only set operation I have done is ALTER DATABASE TestDb SET ENABLE_BROKER to enable notifications.
I also did:
CREATE QUEUE ContactChangeMessages;
CREATE SERVICE ContactChangeNotifications
ON QUEUE ContactChangeMessages
([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]);
Here is my LINQPad test code, which works fine if I insert/update/delete records in Management Studio:
void Main() {
const string cs = "Data Source=.;Initial Catalog=TestDb;Trusted_Connection=True";
var are = new AutoResetEvent(false);
using (var connection = new SqlConnection(cs)) {
connection.Open();
SqlDependency.Start(cs);
using (var cmd = new SqlCommand()) {
cmd.Connection = connection;
cmd.CommandType = CommandType.Text;
cmd.CommandText = "SELECT orderNo FROM dbo.Orders WHERE ProductNo = '111'";
var dep = new SqlDependency(cmd, null, 60);
dep.OnChange += (s,e) => {
Console.WriteLine(e.Info);
are.Set();
};
using (var reader = cmd.ExecuteReader()) {
while (reader.Read()) {
}
}
are.WaitOne();
SqlDependency.Stop(cs);
}
}
}
I do not know, and cannot change, how the 3rd-party app connects to the database. I can run SQL Profiler if more information is required.
It refers exactly to the SET options mentioned in the error message:
SET options have incorrect settings: 'ANSI_NULLS, QUOTED_IDENTIFIER,
ANSI_PADDING'.
The correct settings, along with other restrictions, are described in Creating a Query for Notification:
When a SELECT statement is executed under a notification request, the
connection that submits the request must have the options for the
connection set as follows:
ANSI_NULLS ON
ANSI_PADDING ON
ANSI_WARNINGS ON
CONCAT_NULL_YIELDS_NULL ON
QUOTED_IDENTIFIER ON
NUMERIC_ROUNDABORT OFF
ARITHABORT ON
Note:
Setting ANSI_WARNINGS to ON implicitly sets ARITHABORT to ON when the
database compatibility level is set to 90. If the database
compatibility level is set to 80 or earlier, the ARITHABORT option
must explicitly be set to ON.
These settings are affected by:
the current database settings, which can be viewed in sys.databases
the session settings, which can be viewed in sys.dm_exec_sessions
the procedure/trigger creation-time settings, which can be viewed using OBJECTPROPERTY().
You need to find which of the settings mentioned in the error message is non-conforming and why (it is probably a database setting). Most likely it is compatibility level 80 set on the database.
Update: Never mind that; you say that you can successfully create a query notification, but then the application itself fails. The application must be explicitly setting one of these options OFF on its connection (you can verify this by inspecting sys.dm_exec_sessions). You will have to contact the application vendor; it seems they are very explicitly (albeit probably unintentionally) making their application incompatible with query notifications.
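For example, a quick way to find the offending connection (just a sketch in the same LINQPad style as your test code; it reuses your connection string and assumes you have permission to view other sessions) is to dump the ANSI-related options per session and look for the 3rd-party application's program_name:
const string cs = "Data Source=.;Initial Catalog=TestDb;Trusted_Connection=True";

// Per-session SET options; a 0 in any of these columns for the application's
// session explains the INSERT failure under a query notification subscription.
const string sql = @"
    SELECT session_id, program_name,
           ansi_nulls, ansi_padding, ansi_warnings,
           quoted_identifier, arithabort, concat_null_yields_null
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1;";

using (var connection = new SqlConnection(cs))
using (var cmd = new SqlCommand(sql, connection))
{
    connection.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine("{0} {1}: ANSI_NULLS={2} ANSI_PADDING={3} QUOTED_IDENTIFIER={4} ARITHABORT={5}",
                reader["session_id"], reader["program_name"],
                reader["ansi_nulls"], reader["ansi_padding"],
                reader["quoted_identifier"], reader["arithabort"]);
        }
    }
}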
I'm using Seam 2.2, JBoss 6.1, Hibernate 3.5.6 and MS SQL Server 2008, and I have a function like this:
public void deliverFile() {
EntityManager jobbEntityManager = (EntityManager)Component.getInstance("jobbEntityManager");
JobbStatusInterface jobbStatus = new JobbStatus();
jobbStatus.setStatus(PluginStatus.INITIATED);
jobbEntityManager.persist(jobbStatus);
/**
Code here to save a file, which takes about a minute
**/
jobbStatus.setStatus(PluginStatus.DONE);
jobbEntityManager.flush();
}
public void checkJobb(){
EntityManager jobbEntityManager = (EntityManager)Component.getInstance("jobbEntityManager");
jobbEntityManager.createQuery("from JobbStatus", JobbStatus.class).getResultList();
}
I poll checkJobb() every 10 seconds. While deliverFile() is executing, the checkJobb() calls queue up and block on the query, so when deliverFile() finishes, all six queued checkJobb() calls complete at once.
Even if I select from the database directly, the query is blocked and only finishes after deliverFile() is done.
Is there any way to solve this so that checkJobb() can run while deliverFile() is executing?
I'm not sure what you want to achieve with the above code. If you want to check whether a job has been initiated by loading the job status in between, that may not be possible: the data is not committed yet, and since you are using a different session in the other function, you may not be able to see it.
Only the last value of the status will be committed to the database.
Use READ_COMMITTED_SNAPSHOT instead of the normal READ_COMMITTED isolation:
ALTER DATABASE <dbname> SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE <dbname> SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE <dbname> SET MULTI_USER;