I have an Attachment model that stores file metadata in MySQL and the actual file on the file system. I've implemented deletion with callback methods:
// requires: App::uses('File', 'Utility');
public function beforeDelete($cascade = true) {
    // remember the record before it's gone, so afterDelete() still knows the path
    $this->data = $this->findById($this->id);
    return parent::beforeDelete($cascade);
}

public function afterDelete() {
    // remove the file from the file system once the row has been deleted
    $file = new File($this->data['Attachment']['path']);
    $file->delete();
}
Is there a way to determine whether there's an open transaction, and only perform the file system deletion if that transaction gets committed? (The transaction is of course handled in the controller, which may not even be AttachmentsController but some other one.)
That's already a little tricky even in CakePHP 3.x, where there are actual events that fire after things have been committed, and where options objects are passed throughout the whole saving/deleting process and can store information about the transaction. Even there you'd have to invoke such a process yourself if you manually wrapped a save/delete operation in a transaction.
You could, for example, try to implement executing things transactionally in a behavior: your models could store references to the files to delete in that behavior on beforeDelete, and the behavior could dispatch an event on the involved models after things have been committed, something like afterCommit, which your models could listen to in order to finally delete the files.
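In CakePHP 3.x a rough version of that idea could look like the sketch below, relying on the afterDelete/afterDeleteCommit behavior callbacks. This is a minimal sketch, assuming the path column is called path and that the delete is atomic and not wrapped in an outer, manually managed transaction (in which case afterDeleteCommit would not fire, as noted above):

use Cake\Datasource\EntityInterface;
use Cake\Event\Event;
use Cake\ORM\Behavior;

class PurgeFileBehavior extends Behavior
{
    protected $_pendingPaths = [];

    // runs inside the delete transaction: only remember the path
    public function afterDelete(Event $event, EntityInterface $entity, \ArrayObject $options)
    {
        $this->_pendingPaths[] = $entity->get('path');
    }

    // fires only after the surrounding transaction has been committed
    public function afterDeleteCommit(Event $event, EntityInterface $entity, \ArrayObject $options)
    {
        foreach ($this->_pendingPaths as $path) {
            if (is_file($path)) {
                unlink($path);
            }
        }
        $this->_pendingPaths = [];
    }
}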
Related
I'm using the CQRS pattern with multiple databases (one for queries and another for search). Should I put the insert inside the repository, like this:
CommunityRepository {
    Add(Community community) {
        Database1.Insert(community);
        Database2.Insert(community);
    }
}
and then:
CommunityCommands {
    Handler(AddCommunityCommand community) {
        communityRepository.Add(community);
    }
}
or should I put this in the commands, like this:
CommunityCommands {
    Handler(AddCommunityCommand community) {
        db1.Insert(community);
        db2.Insert(community);
    }
}
or maybe something like this, using the main repository plus database 2:
CommunityCommands {
    Handler(AddCommunityCommand community) {
        communityRepository.Add(community);
        db2.Insert(community);
    }
}
I would do none of those options, as you'd basically be coupling the Command and Query implementations.
Instead, publish events from the Command side, like OrderPlacedEvent, and subscribe to them from the Query side. This not only allows you to separate the implementations of the Command and Query sides, it also allows you to implement other side effects of the events without coupling the code of multiple features (e.g. "when an order is placed, send a confirmation email").
You can implement the pub/sub synchronously (in process) or asynchronously (with a messaging system). If you use messaging, note that you'll have to deal with eventual consistency (the read data is slightly behind the write data, but eventually it catches up).
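As a minimal in-process sketch of that pub/sub idea (in PHP, to match the rest of this page; EventBus and CommunityAdded are illustrative names, not from any specific framework):

class EventBus {
    private array $handlers = [];

    public function subscribe(string $eventClass, callable $handler): void {
        $this->handlers[$eventClass][] = $handler;
    }

    public function publish(object $event): void {
        foreach ($this->handlers[get_class($event)] ?? [] as $handler) {
            $handler($event);
        }
    }
}

class CommunityAdded {
    public function __construct(public string $id, public string $name) {}
}

$bus = new EventBus();

// Query side: subscribe and keep the read/search database in sync.
$bus->subscribe(CommunityAdded::class, function (CommunityAdded $e) {
    // e.g. $db2->insert('communities', ['id' => $e->id, 'name' => $e->name]);
});

// Command side: write to the primary database, then publish the event.
// $db1->insert($community);
$bus->publish(new CommunityAdded('42', 'Example community'));

Swapping the in-process bus for a messaging system later only changes publish/subscribe, not the command or query logic.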
Refreshing the Query Models should be handled as an offline operation. You should do something like this:
process your Command logic (whatever it is)
right before your Command handler returns, send an Event to a message bus
then, in a background service, listen to those Events and update the Query side.
Bonus tip: you can use the Outbox pattern to get more reliability. The idea is to store the Event messages in a table on your Write DB in the same transaction as your previous write operation, instead of sending them directly. A background service would check for "pending" messages and dispatch them. The rest is unchanged.
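A hedged sketch of that outbox idea in PHP with PDO (the table and column names are assumptions):

// $pdo is assumed to be an open PDO connection to the write database
$pdo->beginTransaction();
try {
    // the business write...
    $pdo->prepare('INSERT INTO communities (id, name) VALUES (?, ?)')
        ->execute([$id, $name]);
    // ...and the event message, stored in the same transaction
    $pdo->prepare("INSERT INTO outbox (event_type, payload, status) VALUES (?, ?, 'pending')")
        ->execute(['CommunityAdded', json_encode(['id' => $id, 'name' => $name])]);
    $pdo->commit();
} catch (Throwable $e) {
    $pdo->rollBack();
    throw $e;
}
// A background service polls the outbox table for 'pending' rows,
// dispatches them to the message bus and marks them as sent.

Because the event row commits or rolls back together with the business write, you never publish an event for a write that didn't happen, and never lose an event for a write that did.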
I am implementing a ReviewableBehavior to implement a four-eyes principle. The behavior implements beforeDelete(), beforeSave() and afterSave(), and uses a reviews table to store CUD requests.
1. For added records, a $review record is created and saved in afterSave() (because only then do we have the id of the newly added record, which we need to store in the $review).
2. For edited records, in beforeSave() the values that have been changed are saved in a $review record, and in the edited record these field values are set back to their original values (so effectively no changes are saved).
3. For deleted records, in beforeDelete() a $review is saved to store the delete request, and false is returned in order to cancel the delete.
Number 3 is the challenge: although $review always had a correctly set primary key value as if the save had been successful, and save($review) returned true as if everything had gone well, it was not actually saved in the database.
The reason, as far as I understand it: deletions are by default done within a transaction. The transaction is started in delete() of the table, then beforeDelete() of the behavior is triggered. Within the event handler, I call ReviewTable->save($review). As a transaction has been started, this save() happens within the transaction. Then I return false, because I want the deletion to be stopped. This rolls back the transaction, and with it the ReviewTable->save($review).
Solution attempts:
If I do not return false, the $review is saved in the database, but the "main" record is also deleted. Disadvantage: not a feasible approach, as a record is deleted that we do not want deleted.
If I call delete($entity, ['atomic' => false]);, then no transaction is started, and hence ReviewTable->save($review) is executed. Disadvantage: for any model that uses this behavior, we would need to amend every call to delete() to switch atomic off. This also switches the use of transactions off entirely, which does not appear to be a good approach to me.
"Overwrite" the delete method in ReviewableBehavior, so that for any table using this behavior, when delete() is called, the delete() of the ReviewableBehavior is actually performed. Disadvantage: technically not possible to overwrite table methods with a behavior.
Create a new table class, override the delete() method in it, and derive any table using the ReviewableBehavior from that table class. Disadvantage: ugly approach, having to use both the behavior and a new table class.
Create a new method deleteRequest() in the ReviewableBehavior, and instead of calling Table->delete() call Table->deleteRequest(). In it, we can save the delete request in a $review record; the deletion is not done anyway, as we never actually call delete() (see the sketch below). Disadvantage: for any model that uses this behavior, we would need to change every call to delete() into deleteRequest().
Currently I go with the last approach, but I would really like to hear some opinions about this, and also whether there is a better method to somehow keep the transaction while saving something "in between".
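For reference, a stripped-down sketch of that deleteRequest() method (CakePHP 3.6+ style; the Reviews table and its columns are simplified assumptions):

// inside ReviewableBehavior
public function deleteRequest(\Cake\Datasource\EntityInterface $entity)
{
    $reviews = \Cake\ORM\TableRegistry::getTableLocator()->get('Reviews');
    $review = $reviews->newEntity([
        'model' => $this->_table->getAlias(),
        'foreign_key' => $entity->get('id'),
        'action' => 'delete',
    ]);

    // delete() is never called, so no transaction can roll this save back
    return (bool)$reviews->save($review);
}

Since behavior methods are proxied through the table, callers can use it like a regular table method, e.g. $this->Articles->deleteRequest($entity).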
Using a separate method is a sensible approach. Another option might be to circumvent the deletion process by stopping the Model.beforeDelete event; when doing so, you can return true to indicate a successful delete operation, i.e. no rollback will happen.
It should be noted that stopping the event will cause possible other listeners in the queue to not be notified! Halting the regular deletion process will also prevent cascading deletes (i.e. deleting associated records) and the Model.afterDelete event, so if you need cascading deletes, you'd need to trigger them manually, and the afterDelete event should usually be triggered for successful deletes in any case.
Here's a quick and dirty example, see also \Cake\ORM\Table::_processDelete() for insight on how the regular deletion process works:
public function beforeDelete(
    \Cake\Event\Event $event,
    \Cake\Datasource\EntityInterface $entity,
    \ArrayObject $options
) {
    // this will prevent the regular deletion process
    $event->stopPropagation();

    $table = $event->getSubject();

    // this deletes associated records
    $table->associations()->cascadeDelete(
        $entity,
        ['_primary' => false] + $options->getArrayCopy()
    );

    $result = /* create review record */;
    if (!$result) {
        // this will cause a rollback
        return false;
    }

    $table->dispatchEvent('Model.afterDelete', [
        'entity' => $entity,
        'options' => $options,
    ]);

    return true;
}
See also
Cookbook > Database Access & ORM > Behaviors > Defining Event Listeners
Cookbook > Events > Stopping Events
GGTS 3.4, Grails 2.3.3 - When generating controllers, this version includes a number of @Transactional lines I haven't seen before, and I don't fully understand what they are doing.
At the top of the controller there is the line:
@Transactional(readOnly = true)
Then, just before certain database-changing actions ('save', 'update' and 'delete'), there is the line:
@Transactional
I presume that this switches readOnly to false for each database-changing action. Does it also open a new transaction that can be committed or rolled back? Is there a simple way to force a rollback?
The 'create' action does not have a @Transactional line before it, despite it carrying out a 'new' command to create a new instance of the specific domain class. What happens to this newly created but unsaved instance if the save transaction is not completed, or if it is rolled back? By "not completed" I am thinking of introducing a 'cancel' button in the 'create' view to enable users to pull out of the creation if they choose to; a user could also simply navigate out of the create view without invoking the save.
The standard @Transactional without any properties set uses the platform defaults. These depend upon your transaction manager and your data source. It does, however, create a transaction that can be committed or rolled back.
Controller methods without any annotation do not participate in any transactions (provided the entire class isn't annotated as well).
In the case of create there is no need for a transaction, because you aren't interacting with the database/transaction manager. Simply creating a new instance of a domain class, e.g. new MyDomainClass(), doesn't interact with the database at all, which is what you are seeing in the create method.
So in short, you don't need to worry about that instance if your users navigate away from the page or click cancel.
You can use the withTransaction method of domain classes to manage your transaction manually, as follows:
Account.withTransaction { status ->
    try {
        // your code or business logic here
    } catch (Exception e) {
        status.setRollbackOnly()
    }
}
If an exception is thrown, this transaction will be rolled back.
I need help with a design issue and what happens in Batch Apex.
This is the scenario we have:
We have a territory object; when you update a single field on it, a field on up to hundreds of thousands of contacts needs to be updated. To do this, I am using Batch Apex and invoking it from a trigger on the territory record before it's updated.
Questions:
Say the user updates the territory from A to B and clicks save. This causes a big batch of contacts to get updated, which takes a while. Then he changes B to C. Are we guaranteed that the final update on all impacted records will be C? How come?
Or, is there a way to schedule your batch jobs? I'm looking into AsyncApexJob and using that as a framework...
Is there a better design?
Batch Apex doesn't work the same way a Trigger works. The only way the situation described in your Question 1 would occur is if you were to call/execute a batch from a Trigger, and I would highly recommend avoiding that, if it's even possible.
2. (and 3.) Batches are typically scheduled to run overnight, or during off hours, using the Apex Scheduler. This is the recommended solution.
First, you will want to put the logic in the AFTER UPDATE trigger on the Territory object, not the BEFORE UPDATE section. As a general rule, if you need to update a field or value on the record/object that the trigger action is for (i.e. the Territory object in your case) then you use the BEFORE UPDATE or BEFORE INSERT section(s), and if you want to create/update/delete other records/objects (i.e. Contacts in your case) you use the AFTER UPDATE or AFTER INSERT section(s).
Second, I do not think there is anything wrong with initiating the batch apex process from your trigger.
For example, let us say you have a batch class called "BatchUpdateContactsBasedOnTerritory." And this class has three (3) key features:
it implements "Database.Stateful" in addition to "Database.Batchable"
it has a constructor method that takes a list of territories as an argument/parameter
it has a member variable to hold the list of territories that are passed in
Part of your batch class:
global class BatchUpdateContactsBasedOnTerritory implements Database.Batchable<sObject>, Database.Stateful {
    global List<Territory> TerritoryList;

    global BatchUpdateContactsBasedOnTerritory(List<Territory> updatedTerritories){
        TerritoryList = updatedTerritories;
    }
    // start(), execute() and finish() methods omitted
}
Your trigger:
trigger TerritoryTrigger on Territory (after delete, after insert, after undelete, after update, before delete, before insert, before update) {
    if (Trigger.isInsert) {
        if (Trigger.isBefore) {
            // before insert event not implemented
        } else if (Trigger.isAfter) {
            // after insert event not implemented
        }
    } else if (Trigger.isUpdate) {
        if (Trigger.isBefore) {
            // before update event not implemented
        } else if (Trigger.isAfter) {
            // after update event - call batch class to process 1000 records at a time
            Database.executeBatch(new BatchUpdateContactsBasedOnTerritory(Trigger.new), 1000);
        }
    } else if (Trigger.isDelete) {
        if (Trigger.isBefore) {
            // before delete event not implemented
        } else if (Trigger.isAfter) {
            // after delete event not implemented
        }
    } else if (Trigger.isUndelete) {
        // undelete event not implemented
    }
}
I have a situation where, in a model's afterSave callback, I'm trying to access data from a distant association (it's a legacy data model with very wonky association linkage). What I'm finding is that within the callback I can execute a find call on the model, but if I exit right then, the record has not yet been inserted into the database. The lack of a record means that I can't execute a find on the related model using data that was just inserted into the current one.
I haven't found any mention of when data is actually committed with respect to when the afterSave callback is engaged. I'm working with legacy code, but I see no indication that we're specifically engaging transactions, so I'm trying to figure out what my options might be.
UPDATE
The gist of the scenario is this: We're taking event registrations, but folks can be wait listed. A user can register (or be registered) for a given Date. After a registration is complete, I need to check the wait list for the existence of a record for the registering user (WaitList.user_id) on the date being registered for (WaitList.date_id). If such a record exists, it can be deleted because it's become an active registration.
The legacy schema puts me in a place where the registration isn't directly tied to a date, so I can't get the Date.id easily. Instead, the chain is Registration->Registrant->Ticket->Date. Unintuitive, I know, but it is what it is for now. Even better (sarcasm included), we have a view named attendees that rolls all of this info up, and from which I would be able to use the newly created Registration->id to return Attendee.date_id. But since the record doesn't exist yet, it's not available in the view.
Hopefully that provides a little more context.
What's the purpose of the find query inside of your afterSave?
Update
Is it at all possible to properly associate the records? Or are we talking about way too much refactoring for it to be worth it? You could move the check to the controller if it's not possible to modify the associations between the records.
Something like this (roughly sketched in CakePHP-style controller code; $data, $userId and $dateId are assumed to be at hand):
if ($this->Registration->save($data)) {
    // the save succeeded, so check the wait list for this user/date
    $entry = $this->WaitList->find('first', array(
        'conditions' => array('WaitList.user_id' => $userId, 'WaitList.date_id' => $dateId),
    ));
    if ($entry) {
        $this->WaitList->delete($entry['WaitList']['id']);
    }
}
It's not best practice, but it will get you around your issue.