Recently I ran into a strange problem; see the code snippet below:
var
  sqlCommand: string;
  connection: TADOConnection;
  qry: TADOQuery;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    qry := TADOQuery.Create(nil);
    try
      qry.Connection := connection;
      qry.SQL.Text := 'Select * from aaa';
      qry.Open;
      qry.Append;
      qry.FieldByName('TestField1').AsString := 'test';
      qry.Post;
      beep;
    finally
      qry.Free;
    end;
  finally
    connection.Free;
  end;
end;
First, create a new Access database named test.mdb and put it in the directory of this test project. In it, create a table named aaa with a single text field named TestField1.
Set a breakpoint on the "beep" line, then launch the test application under the IDE debugger. When the IDE stops at the breakpoint (qry.Post has already executed), open test.mdb and table aaa in Microsoft Access: you will find no changes in table aaa. If you press F9 and let the application continue, you will find a new record inserted into table aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, you will find no record was inserted. Under normal circumstances, a new record should be inserted into table aaa as soon as qry.Post has executed.
Can anyone explain this problem? It has troubled me for a long time. Thanks!
BTW, the IDE is Delphi 2010, and the Access MDB file was created with Microsoft Access 2007 under Windows 7.
Access won't show you records from transactions that haven't been committed yet. At the point where you pause your program, the implicit transaction created by the connection hasn't been committed yet. I haven't experimented, but my guess is that the implicit transaction is committed after you free the query. So if you pause just after that, you should see your record in MS Access.
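One way to avoid depending on when the implicit transaction commits would be to use an explicit one around the edit. A minimal sketch along the lines of the question's code (untested; that Jet commits immediately on CommitTrans here is my assumption):

connection.BeginTrans;
try
  qry.Open;
  qry.Append;
  qry.FieldByName('TestField1').AsString := 'test';
  qry.Post;
  connection.CommitTrans; // the record should now be visible to other sessions
except
  connection.RollbackTrans;
  raise;
end;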
After more information from Ryan (see his answer to himself), I did a little more investigating.
Having a primary key (autonumber or otherwise) doesn't seem to affect the behaviour.
Table with autonumber column as primary key
connection.Execute('insert into aaa (TestField1) values (''Test'')');
connection.Execute('select * from aaa');
connection.Execute('delete * from aaa');
beep;
finally
  connection.Free;
end;
Stopping on the "select" does not show the new record.
Stopping on the "delete" shows the new record.
Stopping on the "beep" still shows all records in the table even after repeated refresh's.
Stopping on the "connection.Free" shows no more records in the table. Huh?
Stopping on a "select" inserted between the "delete" and the "beep" shows no more records in the table.
Same table
connection.Execute('insert into aaa (TestField1) values (''Test'')');
beep;
connection.Execute('delete * from aaa');
beep;
beep;
Stopping on each statement shows that Access doesn't receive the "command" until at least one other statement has been executed. In other words: the beep after the "Execute" statement must have been processed before the statement is processed by Access (it may take a couple of refreshes to show up; the first refresh isn't always enough). If you stop on the first beep after the "Execute" statement, nothing has happened in Access, and nothing will happen if you reset the program without executing any other statements.
Stepping into the connection.Execute (with "Use debug DCUs" on): the effect of the executed SQL statement is now visible in Access on return to the beep. Actually, it is visible much earlier. For example, stepping into the "delete" statement, the record becomes marked #deleted somewhere still inside the ADODB code.
In fact, when stepping through the ADODB code, the record becomes visible in Access when stopped in the OnExecuteComplete handler. Not when stopped on the "begin", but when stopped on the "if Assigned" immediately thereafter. The same applies to the delete statement: its effect becomes visible in Access when stopped on the if statement in the OnExecuteComplete handler in AdoDb.
ADO does have an ExecuteOption to execute statements asynchronously. It wasn't in effect during all this (it's not included by default). And while we are dealing with an out-of-process COM server and with callbacks such as the OnExecuteComplete handler, that handler was executed before returning to the statement right after the ConnectionObject.Execute statement in the TAdoConnection.Execute method in AdoDb.
All in all, I think it isn't so much a matter of synchronous or asynchronous execution, but more a matter of when references are released (we are dealing with COM and interface reference counting), or of thread and process timing issues (in the app, in Access, and between them), or of a combination thereof.
And the debugger may just be muddling things more than clarifying them. It would be interesting to see what happens in D2010 with its single-thread debugging capabilities, but I haven't got it available where I am (now and for the next two weeks).
First, Marjan, thank you for your answer. I am quite sure I clicked the refresh button at the time, but still nothing had changed.
After many experiments, I found that if I added an auto-increment ID to the table as primary key, this strange behaviour did not happen. But even having done this, there is another strange behaviour, shown in the code snippet below:
procedure TForm9.btn1Click(Sender: TObject);
var
  sqlCommand: string;
  connection: TADOConnection;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    connection.Execute('insert into aaa (TestField1) values (''Test'')');
    connection.Execute('select * from aaa');
    connection.Execute('delete * from aaa'); // breakpoint 1
    beep; // breakpoint 2
  finally
    connection.Free;
  end;
end;
Put two breakpoints on the "delete" and "beep" lines. When the code stops at breakpoint 1, you can refresh the database and you will find that the record has been inserted. Continue running; when the code stops at breakpoint 2, you will find the record is still there. If at this point you press Ctrl+F2, the record never gets deleted. If connection.Execute were a truly synchronous procedure, this should not happen. Sorry for checking your answer so late; I was away for our Dragon Boat Festival.
Marjan, thanks for your response again, but I can't accept this behaviour of the connection engine. Today I found something useful on the MSDN website:
http://msdn.microsoft.com/en-us/library/ms719649(v=VS.85).aspx
Fortunately, I have resolved the problem thanks to that article. The default value of the property "Jet OLEDB:Implicit Commit Sync" is actually false, and according to the explanation of this property, false means that implicit transactions are committed in asynchronous mode. So what we can do is set this property to true, using the code below:
connection.Properties.Item['Jet OLEDB:Implicit Commit Sync'].Value := true;
BTW, according to that article, this property can only be set through the Properties collection of the connection object; if you try to set it in the connection string, an error occurs.
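Putting it together with the question's code, the property has to be set on the already-open connection, since it cannot go into the connection string; the exact placement right after Open below is my assumption:

connection := TADOConnection.Create(nil);
try
  connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
  connection.Open();
  // Make Jet commit implicit transactions synchronously, so each
  // statement's changes are flushed to the MDB as soon as it completes.
  connection.Properties.Item['Jet OLEDB:Implicit Commit Sync'].Value := True;
  // ... execute statements as before ...
finally
  connection.Free;
end;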
Related
I run a fairly basic XEvents session on Azure SQL DB, using a ring_buffer target, that looks for errors with severity over 10, based on a post Brent Ozar made. However, the session stops on its own, and I'm not sure why. I suspect it's filling up, even though all the documentation says the target is FIFO and will drop the oldest events. Am I missing something? Do I need to set something differently? Thanks!
Update: weirdly, I made a much smaller test, with max_memory = 200 and a much bigger amount of data in the failing code, to try to force it to stop/die, but it keeps looping as I expected. So I'm still confused about why the original session stops, but it doesn't seem to be because the target is filling up.
CREATE EVENT SESSION severity_10plus_errors_XE
ON DATABASE
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id,
            sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.username)
    WHERE ([severity] > 10)
)
ADD TARGET package0.ring_buffer
(SET max_memory = 10000) -- Units of KB.
WITH (MAX_DISPATCH_LATENCY = 60 SECONDS);
GO

ALTER EVENT SESSION severity_10plus_errors_XE
ON DATABASE
STATE = START;
GO
Using SQL Server, Delphi 10.3.1, and FireDAC.
I am using cached updates, with autocommit on.
I keep managing to get my data into a state where the record has been deleted from the database, and I have also deleted that record in the dataset.
Then, when it attempts to commit the change to the database (where the data no longer exists), I get an error:
[my application] raised exception class EMSSQLNativeException with message [FireDAC][Phys][ODBC][sqlncli11.dll] SQL_NO_DATA
And then I can't clear the cached-updates flag on the dataset, because there is stuff 'sitting' there.
My question: how can I get it to NOT return that error? It's really not an error; it's trying to delete a record that no longer exists. I can't find ANY documentation on a query's update options, so is there a flag there I need to set?
You can handle update errors in OnUpdateError and perform any additional checks before deciding how to proceed. Blindly pretending all deletes worked would be something like:
procedure TForm1.FDQuery1UpdateError(ASender: TDataSet; AException: EFDException;
  ARow: TFDDatSRow; ARequest: TFDUpdateRequest; var AAction: TFDErrorAction);
begin
  if ARequest = arDelete then
    AAction := eaApplied;
end;
Read the online help for OnUpdateError for more information.
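If blindly applying every failed delete is too coarse, you could inspect the exception first and only ignore the "no data" case. A sketch, assuming your FireDAC version exposes the engine error as EFDDBEngineException with a Kind of ekNoDataFound (both declared in FireDAC.Stan.Error; verify against your version):

procedure TForm1.FDQuery1UpdateError(ASender: TDataSet; AException: EFDException;
  ARow: TFDDatSRow; ARequest: TFDUpdateRequest; var AAction: TFDErrorAction);
begin
  // Only swallow a failed delete when the engine reports that no matching
  // row was found (the record is already gone); let other errors surface.
  if (ARequest = arDelete) and (AException is EFDDBEngineException) and
     (EFDDBEngineException(AException).Kind = ekNoDataFound) then
    AAction := eaApplied;
end;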
static void clean() throws Exception {
    final UserTransaction tx = InitialContext.doLookup("UserTransaction");
    tx.begin();
    try {
        final DataSource ds = InitialContext.doLookup(Databases.ADMIN);
        Connection connection1 = ds.getConnection();
        Connection connection2 = ds.getConnection();
        PreparedStatement st1 = connection1.prepareStatement("XXX delete records XXX"); // delete data
        PreparedStatement st2 = connection2.prepareStatement("XXX insert records XXX"); // insert new data with the same primary key as the deleted data above
        st1.executeUpdate();
        st1.close();
        connection1.close();
        st2.executeUpdate();
        st2.close();
        connection2.close();
        tx.commit();
    } finally {
        if (tx.getStatus() == Status.STATUS_ACTIVE) {
            tx.rollback();
        }
    }
}
I have a web app where the DAO takes a DataSource as the object from which to create individual connections to perform database operations.
So I have a UserTransaction, inside which two DAO objects perform separate actions: the first does a deletion and the second an insertion. The deletion is there to delete some records so that the insertion can take place, because the insertion inserts data with the same primary keys.
I took the DAO layer out and translated the logic into the code above. There is one thing I couldn't understand: based on the code above, the insertion operation should fail, because the code (inside the UserTransaction) takes two different connections that don't know about each other, and the deletion obviously hasn't been committed yet. So the second statement (the insertion) should fail with a unique-constraint violation, since the two database operations are not on the same connection and the second connection cannot see uncommitted changes. But amazingly it doesn't fail, and both statements work perfectly.
Can anyone help explain this? Is there any configuration that achieves this result, or is my understanding wrong?
Since your application is running in WebLogic Server, the Java EE container is managing the transaction and the connection for you. If you call DataSource#getConnection multiple times within a Java EE transaction, you get multiple Connection instances joining the same transaction. Usually those connections connect to the database with the identical session. Using Oracle you can check that with the following snippet in a @Stateless EJB:
@Resource(lookup = "jdbc/myDS")
private DataSource ds;

@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@Schedule(hour = "*", minute = "*", second = "42")
public void testDatasource() throws SQLException {
    try (Connection con1 = ds.getConnection();
         Connection con2 = ds.getConnection()) {
        String sessId1 = null, sessId2 = null;
        try (ResultSet rs1 = con1.createStatement().executeQuery("select userenv('SESSIONID') from dual")) {
            if (rs1.next()) sessId1 = rs1.getString(1);
        }
        try (ResultSet rs2 = con2.createStatement().executeQuery("select userenv('SESSIONID') from dual")) {
            if (rs2.next()) sessId2 = rs2.getString(1);
        }
        LOG.log(Level.INFO, "con1={0}, con2={1}, sessId1={2}, sessId2={3}",
                new Object[]{con1, con2, sessId1, sessId2});
    }
}
This results in the following log message:
con1=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@19f32aa,
con2=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@1cb42e0,
sessId1=9347407,
sessId2=9347407
Note that you get different Connection instances with the same session ID.
For more details see e.g. this question.
The only way to do this properly is to use a transaction manager and two-phase-commit XA drivers for all databases involved in the transaction.
My guess is that you have autocommit enabled on the connections. This is the default when creating a new connection, as is documented here:
https://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html
System.out.println(connection1.getAutoCommit());
will most likely print true.
You could try
connection1.setAutoCommit(false);
and see if that changes the behavior.
In addition to that, it's not really defined what happens if you call close() on a connection without having issued a commit or rollback beforehand. It is therefore strongly recommended to issue one of the two before closing the connection; see https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#close()
EDIT 1:
If autocommit is false, then it's probably due to the undefined behavior of close(). What happens if you swap the statements?
st2.executeUpdate();
st2.close();
connection2.close();
st1.executeUpdate();
st1.close();
connection1.close();
EDIT 2:
You could also try the "correct" way of doing it:
st1.executeUpdate();
st1.close();
st2.executeUpdate();
st2.close();
tx.commit();
connection1.close();
connection2.close();
If that doesn't fail, then something is wrong with your setup for UserTransactions.
Depending on your database this is quite a normal case.
An object implementing the UserTransaction interface represents a "logical transaction". It doesn't always map to a real, "physical" transaction that a database engine respects.
For example, there are situations that cause implicit commits (as well as implicit starts) of transactions. In the case of Oracle (I can't vouch for other DBs), closing a connection is one of them.
From Oracle's docs:
"If the auto-commit mode is disabled and you close the connection
without explicitly committing or rolling back your last changes, then
an implicit COMMIT operation is run".
But there can be other possible reasons for implicit commits: SELECT FOR UPDATE, various locking statements, DDL statements, and so on. They are database-specific.
So, back to our code.
The first transaction is committed by closing a connection.
Then another transaction is implicitly started by the DML on the second connection. It inserts non-conflicting changes, and the second connection.close() commits them without a PK violation. tx.commit() won't even get a chance to commit anything (and how could it? the connection is already closed).
The bottom line: "logical" transaction managers don't always give you the full picture.
Sometimes transactions are started and committed without an explicit reason, and sometimes they are even ignored by the DB.
PS: I assumed you are using Oracle, but the above holds true for other databases as well; see, for example, MySQL's list of implicit commit reasons.
"If auto-commit mode is disabled and you close the connection without explicitly committing or rolling back your last changes, then an implicit COMMIT operation is executed."
Please check the link below for details:
http://in.relation.to/2005/10/20/pop-quiz-does-connectionclose-result-in-commit-or-rollback/
When developing client-server applications in Delphi + SQL Server, I continually face the problem of getting an exact report of what an action did to the DB.
A simple example is:
BEFORE: start "capturing" the DB
user changes a value in an editbox
the user presses a "save" button
AFTER: stop "capturing" the DB
I would like to have a tool that compares the before and after status (I manually capture the BEFORE and AFTER status).
I googled for this kind of tool, but all I found were tools for comparing data or schemas between two datasources.
Thanks.
The following is an extract from an application we have. This code is in a BeforePost event handler that is linked to all of the dataset components in the application; the linking is done in code, as there are a lot of datasets (a sketch of that wiring is shown below). It doesn't actually log the changes (it just lists the changed fields), but it should be simple enough to adapt to your objective. I don't know if this is exactly right for what you are trying to achieve, since you asked for a tool, but it would be an effective way of creating a report of all changes.
CurrentReport := Format('Table %s modified', [DataSet.Name]);
for i := 0 to DataSet.FieldCount - 1 do
begin
  XField := DataSet.Fields[i];
  if (XField.FieldKind = fkData) and (XField.Value <> XField.OldValue) then
  begin
    CurrentReport := CurrentReport + Format(', %s changed', [XField.FieldName]);
  end;
end;
Note that our code collects the report here but only logs it after the post has completed successfully.
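For the linking itself, a minimal sketch of wiring the shared handler to every dataset on a form might look like this (the handler name DataSetBeforePost is illustrative; it would contain the code above):

procedure TForm1.FormCreate(Sender: TObject);
var
  i: Integer;
begin
  // Attach the shared BeforePost handler to every dataset the form owns.
  for i := 0 to ComponentCount - 1 do
    if Components[i] is TDataSet then
      TDataSet(Components[i]).BeforePost := DataSetBeforePost;
end;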
I have an old Delphi 7 application that loads data from one database table, performs many operations and calculations, and finally writes records to a destination table.
This old application calls ApplyUpdates every 500 records, for performance reasons.
The problem is that sometimes this batch of records contains one that violates a database constraint, and Delphi raises an exception in ApplyUpdates.
My problem is that I don't know which record is responsible for the exception. There are 500 candidates!
Is it possible to ask the TClientDataSet which record is the offender?
I do not want to call ApplyUpdates for each appended record, for speed reasons.
I think you may try implementing the OnReconcileError event, which is fired once for each record that could not be applied to the dataset. So I would try the following code; raSkip here means skip the current record:
procedure TForm1.ClientDataSet1ReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  Action := raSkip;
  ShowMessage('The record with ID = ' + DataSet.FieldByName('ID').AsString +
    ' couldn''t be updated!' + sLineBreak + E.Context);
end;
But please note, I've never tried this before and I'm not sure whether it isn't too late to ignore errors raised by the ApplyUpdates function. I forgot to mention: try to use the DataSet parameter that is passed in, which should contain the record that couldn't be applied; it may be the way to determine which record caused the problem.
And the workflow for applying updates is described here.
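On the calling side, the MaxErrors argument of ApplyUpdates determines how many errors abort the call; passing -1 lets it keep going, so OnReconcileError fires for every offending record in the batch. A sketch (my suggestion, untested against your data):

var
  ErrorCount: Integer;
begin
  // -1 = don't stop at the first error; OnReconcileError decides per record.
  ErrorCount := ClientDataSet1.ApplyUpdates(-1);
  if ErrorCount > 0 then
    ShowMessage(Format('%d record(s) could not be applied', [ErrorCount]));
end;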
Implementing OnReconcileError will give you access to the record and data that are responsible for the exception. An easy way to accomplish this is to add a "Reconcile Error Dialog". It is located in the "New Items" dialog, which is displayed by File | New | Other. Once you have added it to your project, use it in the form with the ClientDataSet. The following code shows how it is invoked.
procedure TForm1.ClientDataSetReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;
It will be displayed instead of the exception dialog, and it will allow you to view the offending data and select how you want to proceed. It has been over 5 years since I last used it; hopefully I have not forgotten any details.