Execute logic in before delete event trigger - Salesforce

Before deleting a record (e.g. an Account record), I want to update a field on it, send it to our content management system, hold it for a few seconds, and then delete it.
For this scenario I used a before delete trigger, updated the fields on the record, and called the content management system with the updated record data. The record is updated with the new values (I verified this after restoring it from the Recycle Bin), but the content management system is never called before the record is deleted. Is there any option to wait a few seconds until the record is updated in the content management system and then delete it? Please share your suggestions. Thank you.

You can't make a callout straight from a trigger (a SF database table/row can't be locked and held hostage for up to 2 minutes while a 3rd-party system finishes); it has to be asynchronous. So you'd probably call out from @future, but by then the main trigger has finished and the record is deleted - if you passed an Id, the query inside the @future method will probably return 0 rows.
Forget the bit about "holding it for a few seconds". You need to make some architecture decisions. Is it important that the delete succeeds no matter what, or do you want to delete only after the external system has acknowledged the message?
You could query your record in the trigger (or take the whole Trigger.old) and pass it to the future method. @future is supposed to take only primitives, not objects/collections, but you can always JSON.serialize the records before passing them as a string.
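A rough sketch of that idea (the class name, endpoint and payload format here are made up - adapt them to whatever your content management system expects):

trigger AccountBeforeDelete on Account (before delete) {
    // Snapshot the records that are about to disappear so the async job
    // still has the data even after the delete has committed.
    AccountDeleteNotifier.notifyAsync(JSON.serialize(Trigger.old));
}

public class AccountDeleteNotifier {
    @future(callout=true)
    public static void notifyAsync(String serializedAccounts) {
        // Rebuild the records from the JSON snapshot taken in the trigger
        List<Account> accounts = (List<Account>) JSON.deserialize(serializedAccounts, List<Account>.class);

        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/content-management/accounts'); // hypothetical endpoint
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(accounts));
        HttpResponse res = new Http().send(req);
        // Remember: by the time this runs, the records are already deleted in Salesforce.
    }
}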
You could hide the standard delete button and introduce a custom one. Behind it you'd have a controller which can make the callout, wait until a success response comes back and only then delete.
You could rethink the request-response flow. What if you make the callout (or raise a platform event?) and it's the content management system that then reaches back into Salesforce and performs the delete (via the REST API, for example)?
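For example, something like this could kick off the hand-over (the platform event Record_Deletion_Requested__e and its Record_Id__c field are hypothetical - you'd define them in Setup and have the content management side subscribe to them):

public with sharing class DeletionRequestPublisher {
    // Call this instead of deleting directly; the subscriber deletes later via the API
    public static void requestDeletion(Id recordId) {
        Record_Deletion_Requested__e evt = new Record_Deletion_Requested__e(
            Record_Id__c = recordId // hypothetical custom field on the event
        );
        Database.SaveResult sr = EventBus.publish(evt);
        if (!sr.isSuccess()) {
            // handle / log a failed publish
        }
    }
}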
What if you just delete right away, hope the records stay in the Recycle Bin, and then the external system queries the bin / makes a special getDeleted() call and pulls the data?
See Salesforce - Pull all deleted cases in Salesforce for some more bin-related API calls.

Related

Salesforce - Fire Apex trigger only after complete data load

So here is the issue:
We are loading data into CustomObject__c using Data Loader.
Usually the number of records passed is 3.
Also, if there is any issue with the data, they run Data Loader again and pass the corrected data. The older data then has to be deleted.
So I am handling the delete in the before insert code and calling a batch in the after insert code.
Here is the code for my trigger:
trigger TriggerCustom on CustomObject__c (before insert, after insert) {
    // Records already loaded today - the ones from the previous run
    List<CustomObject__c> customobjectlist = [SELECT Id FROM CustomObject__c WHERE CreatedDate = TODAY];
    if (Trigger.isBefore) {
        delete customobjectlist;
    }
    if (Trigger.isAfter) {
        BatchApex b = new BatchApex();
        Database.executeBatch(b);
    }
}
This was designed keeping in mind that they pass only 3 records at a time.
However, now they want to pass more than 200 records using Data Loader.
How can I modify my trigger so that it fires only after one single data load is completed (e.g. if they pass 1,000 records at once, the trigger has to fire only after all 1,000 records are completely inserted)?
A trigger will not know when you are done - after 3, 203 or 10,000 records (you can use the Bulk API to load large volumes; they'll be chunked into 10K packets, but triggers will still run 200 records at a time).
If you have a scripted data load - maybe you can update something else as a next step. Another object (something dummy that has just 1 record) and have a trigger on that?
If you have a scripted data load - maybe you can query the Ids and then pass them to a delete operation which runs before the upload task. This becomes a bit too much for poor little Data Loader, but proper ETL tools like Talend, Informatica, Azure Data Factory or Jitterbit could do it. (Although deleting beforehand is a bit brave... what if the load fails? You're screwed... Maybe the delete should happen after a successful upload.)
Maybe you can guarantee that the last record in your daily load has some flag set, and look for that flag in the trigger?
Maybe you can schedule the batch to run every hour. You can't do that easily from the UI, but you can write the cron expression and schedule it as a one-liner in the Developer Console. In the Schedulable's execute(), check whether anything was loaded today, and if there was even a single record - start the batch.
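A minimal sketch of that last option (class name is a placeholder; the object and batch names match the question):

public class HourlyLoadCheck implements Schedulable {
    public void execute(SchedulableContext sc) {
        // Only start the batch if something was actually loaded today
        Integer loadedToday = [SELECT COUNT() FROM CustomObject__c WHERE CreatedDate = TODAY];
        if (loadedToday > 0) {
            Database.executeBatch(new BatchApex());
        }
    }
}

// The one-liner to run as anonymous Apex in the Developer Console
// (cron expression = top of every hour):
System.schedule('Hourly load check', '0 0 * * * ?', new HourlyLoadCheck());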

Salesforce DML set-based operations and atomic transactions

I have just begun to read about Salesforce Apex and its DML. It seems you can do bulk updates by constructing a list, adding the items to be updated to it, and then issuing an update myList call.
Does such an invocation of update create an atomic transaction, so that if for any reason an update to one of the items in the list should fail, the entire update operation is rolled back? If not, is there a way to wrap an update in an atomic transaction?
Your whole context is an atomic transaction. By the time your Apex code runs, SF has already started one, whether the entry point is a Visualforce button click, a trigger or anything else. If you hit a validation error, null pointer exception, divide by zero, an uncaught exception etc. - the whole thing will be rolled back.
update myList; works in "all or nothing" mode. If one of the records fails (validation rule, required field missing etc.) - you'll get an exception. You can wrap it in a try-catch block, but still - the whole list just failed to save.
If you need "save what you can" behavio(u)r - read up on the Database.update() version of this call. It takes an optional parameter that lets you do exactly that.
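Roughly like this (a sketch, assuming myList is a List<Account> or similar):

Database.SaveResult[] results = Database.update(myList, false); // allOrNone = false
for (Database.SaveResult sr : results) {
    if (!sr.isSuccess()) {
        // this record failed; the others were still saved
        for (Database.Error err : sr.getErrors()) {
            System.debug(err.getStatusCode() + ': ' + err.getMessage());
        }
    }
}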
Last but not least, if you're inserting complex scenarios (insert account, insert contacts, one of the contacts fails but you had that in a try-catch so the account saved OK - so what now, do you have to manually delete it? Weak...) you have the Database.setSavepoint() and Database.rollback() calls.
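A minimal sketch of the savepoint pattern (sample records are made up):

Savepoint sp = Database.setSavepoint();
try {
    Account acct = new Account(Name = 'ACME');
    insert acct;
    Contact con = new Contact(LastName = 'Smith', AccountId = acct.Id);
    insert con; // if this fails we don't want the orphaned account either
} catch (DmlException e) {
    // undo everything done since the savepoint, the account included
    Database.rollback(sp);
}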
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_dml_database.htm
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_transaction_control.htm
https://salesforce.stackexchange.com/questions/9410/rolling-back-dml-operation-in-apex-method

How to avoid committing changes to the database but still select the modified data

I have a database with multiple tables, and the user can change the data in those tables.
My problem is that nothing should change in the database until the user clicks the "save" button, and even then - only the table he chose to save should be committed.
In the meantime, though, the user needs to see all the changes he made: every "select" must return the modified data, not the original data.
How can I, on the one hand, avoid committing the data to the database, and on the other hand show the user the modified data?
I thought about opening a transaction and not committing (and using read uncommitted), but for that I would have to keep the connection open (if I close it without committing - all the changes are cancelled), and I don't want to leave several connections open.
I also thought about building a list of all the changes and, whenever the user makes a select, searching that list first. But that is very complicated, and I would prefer a simpler solution.
Thank you
This is going to be very tricky to handle as you've insisted that you cannot use transactions.
Best I can suggest is to add columns to each table to represent the state - but even then it's going to be tricky to ensure user A sees the pre-change data while user B sees the post-change, not-yet-committed data.
Perhaps you could look at using two tables and have a view selecting the pertinent data from both depending on the requirements.
Either way it's a nasty way to go about it and not very performant.
The moment you insisted you couldn't use a transaction is the moment you took away any chance of a simple answer.
A temporary table won't help here (as suggested above) as it's tied to the connection, which you state will be closed. The only alternative temp-table solution is a global temporary table, but that also leads to issues (who creates it, what if you're the last connection to use it, checking whether it exists, etc.).
You can use temporary tables to store the temporary data and then move it over when needed.

Conditional associated record deletion in afterDelete()

I have the following setup:
Models:
Team
Task
Change
TasksTeam
TasksTeam is a hasManyThrough join model that associates teams with tasks. Change is used to record changes in the details of tasks, including when teams are attached/detached (i.e. when TasksTeam records are created or deleted).
Deletes of Task also cascade to TasksTeam: if a task is deleted, all related team associations should be deleted as well.
When a TasksTeam is deleted, it means a team has left a task, and I'd like to record a Change for that. I'm using the TasksTeam afterDelete() to record teams leaving. In the TasksTeam beforeDelete() I save the data to $this->predelete so it'll be available in afterDelete().
Here is the non-working code in TasksTeam:
public function afterDelete() {
    $team_id = $this->predelete['TasksTeam']['team_id'];
    $task_role_id = $this->predelete['TasksTeam']['task_role_id'];
    $task_id = $this->predelete['TasksTeam']['task_id'];
    // Wanted: only record a change if the task isn't deleted
    if ($this->Task->exists($task_id)) {
        $this->Task->Change->removeTeamFromTask($task_id, $team_id, $task_role_id);
    }
    return true;
}
Problem:
When a task is deleted, the delete cascades to TasksTeam correctly. However, a Change is recorded even when the Task itself is being deleted. From another answer to something similar on SO, I think the reason is that the callbacks are called before Model::del(), meaning the task hasn't yet been deleted when it hits the TasksTeam afterDelete().
Question
How can I successfully save a Change only if the task isn't deleted?
Thanks in advance.
If the callbacks are getting called before the actual delete, I'd maintain an assoc. array of flags with task IDs as keys (or a set of task IDs), added when afterDelete is called on Task. Then you could create a method in Task, such as isDeleting or similar, which checks that array to tell you whether the task is in the process of being deleted.
Using the suggestion from @James Dunne I ended up adding a tinyint field to the Task model called is_deleted, and I simply set this boolean to true in the Task beforeDelete(). I then check for this flag and only save a Change if the flag is false. It seems wasteful to add a field for something that only matters just before the record is deleted, but for my purposes it works fine. I think a "real" solution would involve the Cake Events System, avoiding the need for chained callbacks.

How to capture table level data changes in SQL Server 2008 R2?

I have a high volume of data normalized into more than 100 tables. There are multiple applications that change the underlying data in those tables, and I want to raise events on those changes. The options I know of are:
Change Data Capture
Change Tracking
Using Triggers on each table (bad option but possible)
Can someone who has already done this share the best way of doing it?
What I really want in the end is this: if one transaction affected 12 tables out of 100, I should be able to bubble up one event instead of 12. Assume there are concurrent users changing these tables.
Two options I can think of:
Triggers ARE the right way to capture change events in the DB layer
Code-wise, I make sure in my app that each table is changed through only one place in the code, regardless of what the change is (I call it a hub for that table, as it channels many different pathways into one place). It becomes very easy to catch change events that way in the code layer.
One possibility is SQL Server Query Notifications: Using Query Notifications
As long as you want to 'batch' multiple changes, I think you should follow the route of Change Data Capture or Change Tracking (depending on whether you just want to know that something changed or also what the changes were).
They should be consumed by a 'polling' procedure, where you poll for changes every few minutes (seconds, milliseconds???) and raise events. The nice thing about this is that as long as you store the last rowversion of the previous poll - for each table - you can check for changes since that poll whenever you like. You don't rely on a real-time trigger approach which, if halted, would lose all events forever. The polling could easily be implemented as a procedure that checks each table, and you would need only one more table to store the last rowversion per table.
Also, the overhead of this approach is controlled by you, via how frequently the polling happens.
