How can I refresh a TClientDataSet without applying pending updates?

Here is what I'm trying to accomplish:
1. Retrieve one record from the database through the TSQLDataSet's CommandText: SELECT * FROM myTable WHERE ID = 1
2. Use the TClientDataSet to modify the record. (1 pending update)
3. Retrieve the next record: SELECT * FROM myTable WHERE ID = 2
4. Modify that record. (now 2 pending updates)
5. Finally, send the 2 pending updates back to the database through the ApplyUpdates function.
When I do step 3 I get "Must apply updates before refreshing data."
How can I refresh a TClientDataSet without applying pending updates?

You can append data packets manually to your DataSet by calling the AppendData method.
In an application where the provider lives in the same application as the ClientDataSet, you can write something like this:
begin
  ConfigureProviderToGetRecordWithID(1);
  // Make the ClientDataSet fetch this single record and not hit the EOF.
  ClientDataSet1.PacketRecords := 1;
  ClientDataSet1.Open;
  ClientDataSet1.Edit;
  ModifyFirstRecord;
  ClientDataSet1.Post;  // the edit is now a pending update in the delta
  // Re-target the provider and append the next record's data packet
  // instead of refreshing, so the delta is preserved.
  ConfigureProviderToGetRecordWithID(2);
  ClientDataSet1.AppendData(DataSetProvider1.Data, False);
  // Now the DataSet holds two records without losing the delta.
end;
This is only pseudo-code, but it shows the general technique you can use.
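For example, ConfigureProviderToGetRecordWithID could simply rewrite the CommandText of the TSQLDataSet behind the provider (a sketch; the component names are assumptions, and a parameterized query would be safer in real code):
procedure TForm1.ConfigureProviderToGetRecordWithID(ID: Integer);
begin
  // Close and re-target the source query; the provider delivers the
  // new record's data packet on the next fetch / AppendData call.
  SQLDataSet1.Close;
  SQLDataSet1.CommandText :=
    'SELECT * FROM myTable WHERE ID = ' + IntToStr(ID);
end;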

Related

Trigger to restrict duplicate record for a particular type

I have a custom object, consent and preferences, which is a child of Account.
The requirement is to restrict duplicate records based on the channel field.
For example, if I have created a consent with channel "email", it should throw an error when I try to create a second record with the same channel.
Below is the code I have written, but it lets me create only one record: for the second record, irrespective of the channel, it throws the error.
Trigger code:
Set<String> newChannelSet = new Set<String>();
Set<String> dbChannelSet = new Set<String>();
for (PE_ConsentPreferences__c newCon : Trigger.new) {
    newChannelSet.add(newCon.PE_Channel__c);
}
for (PE_ConsentPreferences__c dbcon : [SELECT Id, PE_Channel__c
                                       FROM PE_ConsentPreferences__c
                                       WHERE PE_Channel__c IN :newChannelSet]) {
    dbChannelSet.add(dbcon.PE_Channel__c);
}
for (PE_ConsentPreferences__c newConsent : Trigger.new) {
    if (dbChannelSet.contains(newConsent.PE_Channel__c))
        newConsent.addError('You are inserting Duplicate record');
}
Your trigger blocks you because you didn't filter by Account in the query, so it will let you add one record of each channel type across the whole org and that's all.
I recommend not doing this with code. It is going to get crazier than you think, really fast.
You need to stop inserts. To do that you need to compare against values already in the database (fine), but you should also protect against mass loading, with Data Loader for example, which means comparing against the other records in Trigger.new as well. You could simplify it a bit by moving the logic from before insert to after insert, since then you can query everything from the DB... but that's weak: it's a validation that should prevent the save, so it logically belongs in before. It'll waste Ids, maybe some autonumbers... not elegant.
On update you should handle changes to Channel but also to the Account lookup (reparenting to another record!). Otherwise I could create a consent under acc1 and then move it to acc2.
What about the undelete scenario? I create one consent, delete it, create an identical one, and then restore the first one from the Recycle Bin. If you didn't cover after undelete: boom, headshot.
Instead, go the pure-config route (or use a simple trigger) and let the database handle it for you:
Make a helper text field and mark it unique.
Write a workflow / Process Builder / simple trigger (before insert, before update) that writes the combination of Account__c + ' ' + PE_Channel__c into that field. The condition could be ISNEW() || ISCHANGED(Account__c) || ISCHANGED(PE_Channel__c). A trigger version is sketched below.
Optionally prepare a data fix to populate the field on existing records.
Job done; you can't break it now. And if you ever need to allow more combinations (a third field), it's easy for an admin to extend, as long as you keep the total under 255 characters.
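A minimal sketch of that simple-trigger variant, assuming a unique helper text field named PE_UniqueKey__c (the field name is an assumption; any Text(255) field marked Unique works):
trigger PE_ConsentUniqueKey on PE_ConsentPreferences__c (before insert, before update) {
    for (PE_ConsentPreferences__c con : Trigger.new) {
        // PE_UniqueKey__c is a hypothetical Text(255) field marked Unique.
        // Writing account + channel into it lets the database itself reject
        // duplicates on insert, update, reparenting, bulk loads and undelete.
        con.PE_UniqueKey__c = con.Account__c + ' ' + con.PE_Channel__c;
    }
}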
Or, even better: there are duplicate matching rules ;) Give them a go before you build anything custom. Maybe check out https://trailhead.salesforce.com/en/content/learn/modules/sales_admin_duplicate_management.

Flink - multi-event dependency SQL query on DataStream

I am not getting the expected behavior. My Flink application receives live events, and my trigger condition depends on two events, ABC and XYZ: when both events have arrived, the notification should be triggered.
The application uses StreamTableEnvironment.
Here is the SQL query that I am using:
SELECT *
FROM EventTable
WHERE eventName IN ('ABC', 'XYZ')
  AND 1 IN (SELECT 1 FROM EventTable WHERE name = 'XYZ')
  AND 1 IN (SELECT 1 FROM EventTable WHERE name = 'ABC')
Use case 1:
The ABC event comes --> nothing happens (as expected; waiting for the XYZ event).
The XYZ event comes --> the condition matches, the query returns two event records (ABC & XYZ), and the notification is triggered (as expected).
Now, if I send another ABC event, the query returns that ABC event and the notification is triggered again.
I was expecting the query to return no result, since only a lone ABC event had arrived, and to wait for an XYZ event. Could you please help me with this behaviour? Am I missing something?
When the second ABC is added to the dynamic table, the first XYZ is already there, so the conditions are met. The addition of this third row to the input table causes one new row to be appended to the output table.
See Dynamic Tables in the documentation for more information about the model underlying stream SQL.
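To make the dynamic-table model concrete, here is how the table and the appended query output evolve over the three events above (illustrative only):
After the 1st ABC:  EventTable = [ABC]            -> output: nothing appended
After XYZ:          EventTable = [ABC, XYZ]       -> output: ABC and XYZ appended (notification)
After the 2nd ABC:  EventTable = [ABC, XYZ, ABC]  -> output: the new ABC row appended
                                                    (notification again, because XYZ
                                                    is still in the table)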

Records not committed in Camel Route

We have an application that uses Apache Camel and Spring-Data-JPA. We have a scenario where items inserted into the database... disappear. The only good news is that we have an integration test that replicates the behavior.
The Camel route uses direct: and has a transaction policy of PROPAGATION_REQUIRED. The idea is that we send in an object with a status property, and when we change the status we send the object into a Camel route to record who changed the status and when. It is this StatusChange object that isn't being saved correctly.
Our test creates the object, saves it (which sends it to the route), changes the status, and saves it again. After those two saves we should have two StatusChange objects, but we only have one, even though a second one is created. All three of these objects (the original and the two StatusChange objects) are Spring-Data-JPA entities managed by JpaRepository objects.
We have a log statement in the service that creates and saves the StatusChanges:
log.debug('Saved StatusChange has ID {}', newStatusChange.id)
So after the first one I see:
Saved StatusChange has ID 1
And then on the re-save:
Saved StatusChange has ID 2
Good, we have the second! And then I see the original being changed:
changing [StatusChange#ab2e250f { id: 1, ... }] status change to STATUS_CHANGED
But after the test is done, we only have 1 StatusChange object -- the original with ID:1. I know this because I have this in the cleanup step in my test:
sql.eachRow("select * from StatusChange",{ row->
println "ID -> ${row['ID']}, Status -> ${row['STATUS']}";
})
And the result is :
ID -> 1, Status -> PENDING
I would expect this:
ID -> 1, Status -> STATUS_CHANGED
ID -> 2, Status -> PENDING
This happens within a single test, in two steps, so no rollback should happen between the two saves. What could cause the object to be persisted the first time but not the second time?
The problem was that the service that ran after the Camel route finished threw an exception. It was assumed that the transaction had been committed by then, but it had not. The transaction was marked rollback-only when the exception hit, and that is how things disappeared.
The funniest thing: the exception happened in the service precisely because the transaction hadn't been committed yet. A vicious circle.
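In Spring terms, the failure mode looks roughly like this (an illustrative sketch, not the application's actual code; the repository and service names are invented):
import org.springframework.transaction.support.TransactionTemplate;

// Sketch: re-saving inside one PROPAGATION_REQUIRED boundary.
void resave(TransactionTemplate txTemplate,
            StatusChangeRepository repo,
            PostRouteService afterRouteService,
            StatusChange change) {
    txTemplate.execute(status -> {
        repo.save(change);           // logs "Saved StatusChange has ID 2"
        afterRouteService.run();     // throws, because the data it expects has
                                     // not been committed yet; the exception
                                     // marks the transaction rollback-only
        return null;                 // never reached: the save above is
                                     // rolled back and "disappears"
    });
}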

How to update datawindow itemstatus for all the rows in the datawindow without using loops

I have a datastore, and under a certain condition I need all of its rows to be inserted into the database. I made a loop that sets each item status to NewModified! and then fired Update. It works, but it is time-consuming. Is there another way to handle this without looping? Please suggest.
You can use the RowsCopy method to copy all the rows into a separate datastore. This gives them all a NewModified! status, which will generate inserts. Something like this:
long li

li = ds_1.RowsCopy(1, ds_1.RowCount(), Primary!, ds_2, 1, Primary!)
IF li > 0 THEN
    ds_2.Update()
END IF
You then have to call Update() on the target datastore so that the inserts are actually sent to the database.
Source: http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.help.ase.15.7/title.htm
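For completeness, here is a fuller sketch of the same technique, assuming ds_2 is created at runtime from the same DataObject (SQLCA and the commit handling are assumptions about the surrounding code):
datastore ds_2
long li

ds_2 = CREATE datastore
ds_2.DataObject = ds_1.DataObject   // same result-set definition as the source
ds_2.SetTransObject(SQLCA)          // required before Update() can run

li = ds_1.RowsCopy(1, ds_1.RowCount(), Primary!, ds_2, 1, Primary!)
IF li > 0 THEN
    IF ds_2.Update() = 1 THEN
        COMMIT;
    ELSE
        ROLLBACK;
    END IF
END IF

DESTROY ds_2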

TClientDataset ApplyUpdates error because of database table constraint

I have an old Delphi 7 application that loads data from one database table, performs many operations and calculations, and finally writes records to a destination table.
This old application calls ApplyUpdates every 500 records, for performance reasons.
The problem is that sometimes this batch of records contains one that violates a database constraint, and Delphi fires an exception on ApplyUpdates.
My problem is I don't know which record is responsible for this exception. There are 500 candidates!
Is it possible to ask the TClientDataSet which record is the offending one?
I do not want to call ApplyUpdates for each appended record, for speed reasons.
You may try to implement the OnReconcileError event, which is fired once for each record that could not be applied to the dataset. So I would try the following code; raSkip here means to skip the current record:
procedure TForm1.ClientDataSet1ReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  Action := raSkip;
  ShowMessage('The record with ID = ' + DataSet.FieldByName('ID').AsString +
    ' couldn''t be updated!' + sLineBreak + E.Context);
end;
But please note, I've never tried this before, and I'm not sure whether it is too late at this point to ignore the errors raised by the ApplyUpdates function. I forgot to mention: use the passed DataSet parameter, which should contain the record that couldn't be updated; it may be the way to determine which record caused the problem.
The workflow for applying updates is described here.
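If you want all 500 records attempted even when some of them fail, combine the handler above with the MaxErrors argument of ApplyUpdates; passing -1 means "don't abort on errors" (a small sketch):
// ApplyUpdates returns the number of records that could not be applied.
// With MaxErrors = -1 it continues past failures, firing OnReconcileError
// (which skips the offending record via raSkip) for each of them.
if ClientDataSet1.ApplyUpdates(-1) > 0 then
  ShowMessage('Some records were skipped because of constraint violations.');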
Implementing OnReconcileError will give you access to the record and data that are responsible for the exception. An easy way to accomplish this is to add a "Reconcile Error Dialog". It is located on the "New Items" dialog, which is displayed by File | New | Other. Once you have added it to your project and wired it into the form with the ClientDataSet, the following code shows how it is invoked.
procedure TForm1.ClientDataSetReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind;
  var Action: TReconcileAction);
begin
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;
It will be displayed instead of the exception dialog, letting you view the offending data and choose how to proceed. It has been over 5 years since I last used it, so hopefully I have not forgotten any details.
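Note: the repository item generates the dialog as its own unit (RecError.pas, if I recall correctly; the name may vary by Delphi version), and that unit must be added to the uses clause of the form owning the ClientDataSet so that HandleReconcileError resolves.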
