Multiple emails sent by sp_send_dbmail when run as SSIS package - sql-server

I have a procedure that generates a tab-delimited text file and also sends an email, with a list of students as an attachment, using msdb.dbo.sp_send_dbmail.
When I execute the procedure through SQL Server Management Studio, it sends only one email.
But I created an SSIS package and scheduled a job to run it nightly. This job sends 4 copies of the email to each recipient.
EXEC msdb.dbo.sp_send_dbmail @profile_name = 'A'
    ,@recipients = @email_address
    ,@subject = 'Error Records'
    ,@query = 'SELECT * FROM ##xxxx'
    ,@attach_query_result_as_file = 1
    ,@query_attachment_filename = 'results.txt'
    ,@query_result_header = 1
    ,@query_result_width = 8000
    ,@body = 'These students were not imported'
I've set the following parameters to 0 (within the Database Mail Configuration Wizard) to see if it would make any difference, but it didn't resolve the problem.
AccountRetryAttempts 0
AccountRetryDelay 0
DatabaseMailExeMinimumLifeTime 0
Any suggestions?

I assume you have this email wired up to an event, like OnError/OnTaskFailed, probably at the root level.
Every item you add to a Control Flow adds another layer of potential events. Imagine a Control Flow with a Sequence Container, which contains a ForEach Enumerator, which contains a Data Flow Task. That's a fairly common design. Each of those objects has the ability to raise/handle events based on the objects it contains. The distance between the Control Flow's OnTaskFailed event handler and the Data Flow's OnTaskFailed event handler is 5 objects deep.
The Data Flow fails and raises the OnTaskFailed message. That message bubbles all the way up to the Control Flow, resulting in email 1 being fired. The Data Flow then terminates. The ForEach loop receives the signal that the Data Flow has completed with a failure status, so now the OnTaskFailed event fires for the ForEach loop. Repeat this pattern ad nauseam until every task/container has raised its own event.
Resolution depends, but usually folks get around this either by putting the notification only on the innermost objects (the Data Flow in my example) or by disabling the percolation of event handlers (typically by setting the System::Propagate variable to False within the event handler's scope).

Check the solution here (it worked for me; I was getting 2 emails at a time): Stored procedure using SP_SEND_DBMAIL sending duplicate emails to all recipients
Change the number of retries from X to 0. Now I only get 1 email. The retry setting is the likely culprit if your users are getting the 4 emails exactly 1 minute apart.


Records not committed in Camel Route

We have an application that uses Apache Camel and Spring-Data-JPA. We have a scenario where items inserted into the database... disappear. The only good news is that we have an integration test that replicates the behavior.
The Camel route uses the direct component and has a transaction policy of PROPAGATION_REQUIRED. The idea is that we send in an object with a status property, and when we change the status we send the object into a Camel route to record who changed the status and when. It is this StatusChange object that isn't being saved correctly.
Our test creates the object, saves it (which sends it to the route), changes the status, and saves it again. After those two saves we should have two StatusChange objects saved, but we only have one, even though a second one is created. All three of these objects (the original and the 2 StatusChange objects) are Spring-Data-JPA entities managed by JpaRepository objects.
We have a log statement in the service that creates and saves the StatusChanges:
log.debug('Saved StatusChange has ID {}', newStatusChange.id)
So after the first one I see:
Saved StatusChange has ID 1
And then on the re-save:
Saved StatusChange has ID 2
Good! We have the second one! And then I see we change the original:
changing [StatusChange#ab2e250f { id: 1, ... }] status change to STATUS_CHANGED
But after the test is done, we only have 1 StatusChange object -- the original with ID 1. I know this because I have this in the cleanup step of my test:
sql.eachRow("select * from StatusChange",{ row->
println "ID -> ${row['ID']}, Status -> ${row['STATUS']}";
})
And the result is:
ID -> 1, Status -> PENDING
I would expect this:
ID -> 1, Status -> STATUS_CHANGED
ID -> 2, Status -> PENDING
This happens in 2 steps within the same test, so no rollbacks should occur between the two saves. So what could cause the object to be persisted the first time and not the second time?
The problem was that the service that ran after the Camel route completed threw an exception. It was assumed that the transaction had been committed, but it had not; the transaction was then marked for rollback when the exception hit, and that is how things disappeared.
The funniest thing: the exception happened in the service because the transaction hadn't been committed yet. It's a vicious circle.

How to compare the same DB at two moments in time

While developing client-server applications in Delphi + SQL Server, I continually face the problem of getting an exact report of what an action did to the DB.
A simple example is:
BEFORE: start "capturing" the DB
user changes a value in an editbox
the user presses a "save" button
AFTER: stop "capturing" the DB
I would like to have a tool that compares the before and after states (I capture the BEFORE and AFTER states manually).
I googled for this kind of tool, but all I found were tools for doing data or schema comparison between different data sources, not the same database at two points in time.
Thanks.
The following is an extract from an application we have. This code is in a BeforePost event handler that is linked to all of the dataset components in the application; the linking is done in code, because there are a lot of datasets (a sketch of that wiring follows the snippet). This doesn't actually log the changes (it just lists the changed fields), but it should be simple enough to adapt to your objective. I don't know if this is exactly right for what you are trying to achieve, since you asked for a tool, but it is an effective way of creating a report of all changes:
// Form class and handler name are illustrative; CurrentReport is a
// form-level string field that the logging code reads later.
procedure TForm1.DataSetBeforePost(DataSet: TDataSet);
var
  i: Integer;
  XField: TField;
begin
  CurrentReport := Format('Table %s modified', [DataSet.Name]);
  for i := 0 to DataSet.FieldCount - 1 do
  begin
    XField := DataSet.Fields[i];
    // Only data fields are compared; OldValue holds the pre-edit value.
    if (XField.FieldKind = fkData) and (XField.Value <> XField.OldValue) then
      CurrentReport := CurrentReport + Format(', %s changed', [XField.FieldName]);
  end;
end;
Note that our code collects the report here but only logs it after the Post has completed successfully.
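As for the wiring, here is a minimal sketch of how that linking might be done in code, assuming the datasets are owned by the form; the FormCreate placement and the names TForm1/DataSetBeforePost are my placeholders, not the original application's code:
procedure TForm1.FormCreate(Sender: TObject);
var
  i: Integer;
begin
  // Attach the shared BeforePost handler to every dataset the form owns.
  for i := 0 to ComponentCount - 1 do
    if Components[i] is TDataSet then
      TDataSet(Components[i]).BeforePost := DataSetBeforePost;
end;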

How can I refresh a TClientDataSet without applying pending updates?

Here is what I'm trying to accomplish:
Retrieve 1 record from the database through TSQLDataset's CommandText: SELECT * FROM myTable WHERE ID = 1
Use TClientDataset to modify the record. (1 pending update)
Retrieve next record. SELECT * FROM myTable WHERE ID = 2
Modify the record. (now 2 pending updates)
Finally, send the 2 pending updates back to the database through ApplyUpdates function.
When I do step 3, I get "Must apply updates before refreshing data."
How can I refresh a TClientDataSet without applying pending updates?
You can append data packets manually to your dataset by calling the AppendData method.
In an application where the provider is in the same application as the ClientDataSet, you can code something like this:
begin
  ConfigureProviderToGetRecordWithID(1);
  // Make the ClientDataSet fetch this single record and not hit the EOF.
  ClientDataSet1.PacketRecords := 1;
  ClientDataSet1.Open;
  ClientDataSet1.Edit;
  ModifyFirstRecord;
  ClientDataSet1.Post;
  ConfigureProviderToGetRecordWithID(2);
  ClientDataSet1.AppendData(DataSetProvider1.Data, False);
  // Now you have two records in your DataSet without losing the delta.
end;
This is kind of pseudo-code, but shows the general technique you could use.
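For illustration only, the ConfigureProviderToGetRecordWithID placeholder could be as simple as re-pointing the provider's source dataset at a single row. This sketch assumes a TSQLDataSet named SQLDataSet1 feeding DataSetProvider1; both names are hypothetical:
procedure TForm1.ConfigureProviderToGetRecordWithID(ID: Integer);
begin
  // Hypothetical helper: re-point the provider's source query at one record.
  SQLDataSet1.Close;
  SQLDataSet1.CommandText := Format('SELECT * FROM myTable WHERE ID = %d', [ID]);
end;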

TClientDataset ApplyUpdates error because of database table constraint

I have an old Delphi 7 application that loads data from one database table, performs many operations and calculations, and finally writes records to a destination table.
This old application calls ApplyUpdates every 500 records, for performance reasons.
The problem is that sometimes this batch of records contains one that violates a database constraint; Delphi fires an exception on ApplyUpdates.
My problem is that I don't know which record is responsible for this exception. There are 500 candidates!
Is it possible to ask the TClientDataset which record is the offending one?
I do not want to call ApplyUpdates for each appended record, for speed reasons.
I think you may try implementing the OnReconcileError event, which is fired once for each record that could not be applied to the dataset. So I would try the following code; raSkip here means skip the current record:
procedure TForm1.ClientDataSet1ReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  Action := raSkip;
  ShowMessage('The record with ID = ' + DataSet.FieldByName('ID').AsString +
    ' couldn''t be updated!' + sLineBreak + E.Context);
end;
But please note, I've never tried this before, and I'm not sure whether it's too late at that point to ignore the errors raised by ApplyUpdates. I forgot to mention: use the passed DataSet parameter, which should contain the record that couldn't be updated; that is the way to determine which record caused the problem.
The workflow for applying updates is described here.
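One detail worth pairing with the handler is the MaxErrors argument of ApplyUpdates, which decides whether the batch keeps going after a failure. A minimal sketch, assuming the ClientDataSet1 from the question:
// Pass -1 so ApplyUpdates attempts every record and fires OnReconcileError
// for each failing one, instead of aborting at the first error.
if ClientDataSet1.ApplyUpdates(-1) > 0 then
  ShowMessage('Some records could not be applied.');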
Implementing OnReconcileError will give you access to the record and data that are responsible for the exception. An easy way to accomplish this is to add a "Reconcile Error Dialog". It is located in the "New Items" dialog, which is displayed by File | New | Other. Once you have added it to your project, use its unit in the form with the ClientDataSet. The following code shows how it is invoked:
procedure TForm1.ClientDataSetReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind;
  var Action: TReconcileAction);
begin
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;
It will be displayed instead of the exception dialog, and it will allow you to view the offending data and select how you want to proceed. It has been over 5 years since I last used it; hopefully I have not forgotten any details.

Why is qry.Post executed in asynchronous mode?

Recently I met a strange problem; see the code snippet below:
var
  sqlCommand: string;
  connection: TADOConnection;
  qry: TADOQuery;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    qry := TADOQuery.Create(nil);
    try
      qry.Connection := connection;
      qry.SQL.Text := 'Select * from aaa';
      qry.Open;
      qry.Append;
      qry.FieldByName('TestField1').AsString := 'test';
      qry.Post;
      beep;
    finally
      qry.Free;
    end;
  finally
    connection.Free;
  end;
end;
First, create a new Access database named test.mdb and put it in the directory of this test project. In it, create a new table named aaa with a single text field named TestField1.
Set a breakpoint at the "beep" line, then launch the test application under the IDE in debug mode. When the IDE stops at the breakpoint (qry.Post has been executed), use Microsoft Access to open test.mdb and open table aaa: you will find no changes at all in table aaa. If you press F9 and let the IDE continue running, you will find a new record inserted into table aaa. But if you press Ctrl+F2 to terminate the application at the breakpoint, no record is inserted at all. In normal circumstances, a new record should be inserted into table aaa as soon as qry.Post has executed.
Can anyone explain this problem? It has troubled me for a long time. Thanks!
BTW, the IDE is Delphi 2010, and the Access .mdb file was created with Microsoft Access 2007 under Windows 7.
Access won't show you records from transactions that haven't been committed yet. At the point where you pause your program, the implicit transaction created by the connection hasn't been committed. I haven't experimented, but my guess would be that the implicit transaction is committed after you free the query, so if you pause just after that, you should see your record in MS Access.
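If you want the commit point to be deterministic rather than left to the driver, one option is an explicit transaction on the TADOConnection. This is a sketch under that assumption, reusing the qry from the question, not something I have tested against Jet:
// With an explicit transaction the insert becomes visible to Access at
// CommitTrans, not at some driver-chosen point after the query is freed.
connection.BeginTrans;
try
  qry.Append;
  qry.FieldByName('TestField1').AsString := 'test';
  qry.Post;
  connection.CommitTrans;
except
  connection.RollbackTrans;
  raise;
end;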
After more information from Ryan (see his own answer below), I did a little more investigating.
Having a primary key (autonumber or otherwise) doesn't seem to affect the behaviour.
Table with autonumber column as primary key
  connection.Execute('insert into aaa (TestField1) values (''Test'')');
  connection.Execute('select * from aaa');
  connection.Execute('delete * from aaa');
  beep;
finally
  connection.Free;
end;
Stopping on the "select" does not show the new record.
Stopping on the "delete" shows the new record.
Stopping on the "beep" still shows all records in the table even after repeated refresh's.
Stopping on the "connection.Free" shows no more records in the table. Huh?
Stopping on a "select" inserted between the "delete" and the "beep" shows no more records in the table.
Same table
connection.Execute('insert into aaa (TestField1) values (''Test'')');
beep;
connection.Execute('delete * from aaa');
beep;
beep;
Stopping on each statement shows that Access doesn't receive the "command" until at least one other statement has been executed. In other words: the beep after the "Execute" statement must have been processed before the statement is processed by Access (it may take a couple of refreshes to show up; the first refresh isn't always enough). If you stop on the first beep after the "Execute" statement, nothing has happened in Access, and nothing will if you reset the program without executing any other statements.
Stepping into connection.Execute (with "Use debug DCUs" on): the effect of the executed SQL statement is now visible in Access on return to the beep. Actually, it is visible much earlier: for example, stepping into the "delete" statement, the record becomes marked #deleted somewhere still in the ADODB code.
In fact, when stepping through the AdoDb code, the record becomes visible in Access when stopped in the OnExecuteComplete handler. Not when stopped on the "begin", but when stopped on the "if Assigned" immediately after it. The same applies to the delete statement: the effect becomes visible in Access when stopped on the if statement in the OnExecuteComplete handler in AdoDb.
ADO does have an ExecuteOption to execute statements asynchronously. It wasn't in effect during all this (it's not included by default). And while we are dealing with an out-of-process COM server and with callbacks such as the OnExecuteComplete handler, that handler was executed before returning to the statement right after the ConnectionObject.Execute call in the TAdoConnection.Execute method in AdoDb.
All in all, I think it isn't so much a matter of synchronous or asynchronous execution, but more a matter of when references are released (we are dealing with COM and interface reference counting), or of thread and process timing issues (in the app, in Access, and between them), or a combination thereof.
And the debugger may just be muddling things more than clarifying them. It would be interesting to see what happens in D2010 with its single-thread debugging capabilities, but I haven't got it available where I am (now and for the next two weeks).
First, Marjan, thank you for your answer. I am quite sure I clicked the refresh button at the time, but still nothing changed.
After many experiments, I found that if I added an auto-increment ID to the table as a primary key, this strange behaviour would not happen. But even after doing this, there is another strange behaviour, which I will show in the code snippet below:
procedure TForm9.btn1Click(Sender: TObject);
var
  sqlCommand: string;
  connection: TADOConnection;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    connection.Execute('insert into aaa (TestField1) values (''Test'')');
    connection.Execute('select * from aaa');
    connection.Execute('delete * from aaa'); // breakpoint 1
    beep; // breakpoint 2
  finally
    connection.Free;
  end;
end;
Put two breakpoints at the "delete" and "beep" lines. When the code stops at breakpoint 1, you can refresh the database and you will find that the record was inserted. Continue running; when the code stops at breakpoint 2, you will find the record is still there. If at this point you press Ctrl+F2, the record is never deleted. If connection.Execute were a truly synchronous procedure, this should not happen. Sorry for checking your answer so late; I was away for our Dragon Boat Festival.
Marjan, thanks for your response again, but I can't accept this behaviour of the connection engine. Today I found something useful on the MSDN website; see:
http://msdn.microsoft.com/en-us/library/ms719649(v=VS.85).aspx
Fortunately, I have resolved the problem by following that article. The default value of the property "Jet OLEDB:Implicit Commit Sync" is actually false, and according to the explanation of this property, false means that implicit transactions are committed in asynchronous mode. So what we can do is set this property to true, using a code snippet like the one below:
connection.Properties.Item['Jet OLEDB:Implicit Commit Sync'].Value := true;
BTW, according to that article, this property can only be set through the Properties collection of the connection object; if it is set in the connection string, an error will occur.
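In the btn1Click example above, the natural place for that line is right after opening the connection and before any Execute calls. A sketch of that placement (the property name comes from the MSDN article; the exact placement is my assumption):
connection.Open();
// Must be set via the Properties collection, not the connection string.
connection.Properties.Item['Jet OLEDB:Implicit Commit Sync'].Value := True;
connection.Execute('insert into aaa (TestField1) values (''Test'')');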
