Store sync: Many deletions, some failed - extjs

I have a store in which the user could delete multiple records with a single destroy operation.
Now, a few of these records are locked in the database (because someone else is working on them), and thus cannot be deleted. How can the server tell the frontend that the deletion of records with Id a, b, c was successful, but that records with Id x, y, z could not be deleted and should be moved back into the store and displayed in the grid?
The ExtJS store should know after the sync() which records were really deleted server-side, and which weren't.

I think there's no straightforward solution to this problem. I have opted for the following workaround:
The records now have an "IsDeleted" flag that is set to false by default:
fields: [{
    ...
}, {
    name: 'IsDeleted',
    type: 'bool',
    defaultValue: false
}]
The store has a filter that hides entries where the flag is set to true:
filters: [{
    property: 'IsDeleted',
    value: false
}]
When the user opts to delete, I don't remove entries from the store; instead I set the IsDeleted flag to true on those entries. The filter makes the user think the entries have been deleted.
When the store syncs, it performs an update operation, not a destroy operation. The update endpoint of the API then has to delete all entries where IsDeleted is transmitted as true. If it can't delete an entry from the database, the corresponding JSON returned to the client has IsDeleted set back to false, so the frontend knows that the deletion of that entry failed.
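For illustration, here is a rough sketch of the delete handler and the sync step (everything apart from the IsDeleted field is my own naming, and the re-filtering behavior is what I rely on in my app):

// Instead of store.remove(records), flag the records; the filter hides
// them, and sync() sends them to the update endpoint instead of destroy.
function softDelete(store, records) {
    Ext.Array.forEach(records, function (rec) {
        rec.set('IsDeleted', true); // marks the record dirty for sync()
    });
    store.sync({
        callback: function () {
            // Rows the server echoed back with IsDeleted: false no longer
            // match the deleted state, so the filter shows them again and
            // the user sees that those deletions failed.
        }
    });
}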

Trigger to restrict duplicate record for a particular type

I have a custom object, Consent and Preferences, which is a child of Account.
The requirement is to restrict duplicate records based on the channel field.
For example, if I have created a consent with channel Email, it should throw an error when I try to create a second record with Email as the channel.
Below is the code I have written, but it is letting me create only one record; for the second record, irrespective of the channel, it throws the error:
Trigger code:
Set<String> newChannelSet = new Set<String>();
Set<String> dbChannelSet = new Set<String>();
for (PE_ConsentPreferences__c newCon : Trigger.new) {
    newChannelSet.add(newCon.PE_Channel__c);
}
for (PE_ConsentPreferences__c dbcon : [SELECT Id, PE_Channel__c
                                       FROM PE_ConsentPreferences__c
                                       WHERE PE_Channel__c IN :newChannelSet]) {
    dbChannelSet.add(dbcon.PE_Channel__c);
}
for (PE_ConsentPreferences__c newConsent : Trigger.new) {
    if (dbChannelSet.contains(newConsent.PE_Channel__c)) {
        newConsent.addError('You are inserting Duplicate record');
    }
}
Your trigger blocks you because you didn't filter by Account in the query. So it'll let you add 1 record of each channel type and that's all.
I recommend not doing it with code. It is going to get crazier than you think really fast.
You need to stop inserts. To do that you need to compare against values already in the database (fine), but you should also protect against mass loading, with Data Loader for example, so you need to compare against other records in trigger.new too. You can simplify it somewhat by moving the logic from before insert to after insert, since you can then query everything from the DB... but that's weak: it's a validation that should prevent the save, so it logically belongs in before. It'll waste record IDs, maybe some autonumbers... Not elegant.
On update you should handle changes to Channel but also to Account Id (reparenting to another record!). Otherwise I could create a consent under acc1 and move it to acc2.
What about the undelete scenario? I create one consent, delete it, create an identical one, and restore the first from the Recycle Bin. If you didn't cover after undelete - boom, headshot.
Instead, go with a pure config route (or a simple trigger) and let the database handle it for you.
Make a helper text field, mark it unique.
Write a workflow / process builder / simple trigger (before insert, before update) that writes the combination of Account__c + ' ' + PE_Channel__c to this field. The condition could be ISNEW() || ISCHANGED(Account__c) || ISCHANGED(PE_Channel__c).
Optionally prepare data fix to update existing records.
Job done, you can't break it now. And if you ever need to allow more combinations (3rd field) it's easy for admin to extend it. As long as you keep under 255 chars total.
Or (even better) there are duplicate matching rules ;) Give them a go before you do anything custom - maybe check out https://trailhead.salesforce.com/en/content/learn/modules/sales_admin_duplicate_management first.

How to get fullDocument from MongoDB changeStream when a document is deleted?

My Code
I have a MongoDB with two collections, Items and Calculations.
Items
value: Number
date: Date
Calculations
calculation: Number
start_date: Date
end_date: Date
A Calculation is a stored calculation based on the values of all Items in the DB whose dates fall between the Calculation's start date and end date.
Mongo Change Streams
I figure a good way to create/update Calculations is to open a Mongo change stream on the Items collection and recalculate the relevant Calculations whenever it reports a change.
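Roughly, I picture opening the stream like this (a sketch with the Node.js driver; the connection string and DB name are placeholders):

const { MongoClient } = require('mongodb');

async function watchItems() {
    // Change streams require a replica set (or sharded cluster).
    const client = await MongoClient.connect('mongodb://localhost:27017');
    const items = client.db('mydb').collection('Items');

    items.watch().on('change', event => {
        // event.operationType is 'insert', 'update', 'replace', 'delete', ...
        // Recalculate every Calculation whose date range covers the Item's date.
    });
}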
The issue is that, according to the Mongo change events docs, when a document is deleted the fullDocument field is omitted, which prevents me from accessing the deleted Item's date and therefore from knowing which Calculations should be updated.
Question
Is there any way to access the fullDocument of a Mongo Change Event that was fired due to a document deletion?
No, I don't believe there is a way. From https://docs.mongodb.com/manual/changeStreams/#event-notification:
Change streams only notify on data changes that have persisted to a majority of data-bearing members in the replica set.
When the document was deleted and the deletion was persisted across the majority of the nodes in a replica set, the document has ceased to exist in the replica set. Thus the change stream cannot return something that doesn't exist anymore.
The solution to your question would be transactions in MongoDB 4.0. That is, you can adjust the Calculations and delete the corresponding Items in a single atomic operation.
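A minimal sketch of that idea with the Node.js driver (assuming MongoDB 4.0+ on a replica set; recalculateRange is a hypothetical helper standing in for your Calculations update):

const session = client.startSession();
session.startTransaction();
try {
    // Read the Item first, while it still exists, to get its date.
    const item = await items.findOne({ _id: itemId }, { session });
    await recalculateRange(calculations, item.date, session); // hypothetical helper
    await items.deleteOne({ _id: itemId }, { session });
    await session.commitTransaction();
} catch (err) {
    await session.abortTransaction();
    throw err;
} finally {
    session.endSession();
}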
fullDocument is not returned when a document is deleted.
But there is a workaround.
Right before you delete the document, set a hint field. (Obviously use a name that does not collide with your current properties.)
// updateOne needs an update-operator document, hence the $set
await myCollection.updateOne({_id: theId}, {$set: {_deleting: true}})
await myCollection.deleteOne({_id: theId})
This will trigger one final change event in the stream, with a hint that the document is getting deleted. Then in your stream watcher, you simply check for this value.
stream.on('change', event => {
    if (!event.fullDocument) {
        // The document was deleted
    } else if (event.fullDocument._deleting) {
        // The document is going to be deleted
    } else {
        // Document created or updated
    }
})
My oplog was not getting updated fast enough, and the update was looking up a document that was already removed, so I needed to add a small delay to get this working.
myCollection.updateOne({_id: theId}, {$set: {_deleting: true}})
setTimeout(() => {
    myCollection.deleteOne({_id: theId})
}, 100)
If the document did not exist in the first place, it won't be updated or deleted, so nothing gets triggered.
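Putting the pieces together, a small helper along those lines (my own naming; whether you also need the delay from above seems to depend on the deployment):

// Flag first, then delete: the flag update emits one last change event
// that still carries the full document, before the delete event arrives.
async function deleteWithHint(collection, id) {
    const res = await collection.updateOne({_id: id}, {$set: {_deleting: true}});
    if (res.matchedCount === 1) {
        // Matched means the document existed; safe to delete it now.
        await collection.deleteOne({_id: id});
    }
}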
Using TTL Indexes
Another way to make this work is to add a TTL index and then set the indexed field to the current time. This triggers an update first, and then MongoDB deletes the document automatically.
myCollection.createIndex({_deleting: 1}, {expireAfterSeconds: 0})
myCollection.updateOne({_id: theId}, {$set: {_deleting: new Date()}})
The problem with this approach is that MongoDB prunes TTL documents only at certain intervals (every 60s or more, as stated in the docs), so I prefer the first approach.

Records not committed in Camel Route

We have an application that uses Apache Camel and Spring-Data-JPA. We have a scenario where items inserted into the database... disappear. The only good news is that we have an integration test that replicates the behavior.
The Camel route uses a direct endpoint and has the transaction policy PROPAGATION_REQUIRED. The idea is that we send in an object with a status property, and when we change the status we send the object through a Camel route that records who changed the status and when. It is this StatusChange object that isn't being saved correctly.
Our test creates the object, saves it (which sends it through the route), changes the status, and saves it again. After those two saves we should have two StatusChange objects persisted, but we only have one, even though a second one is created. All three of these objects (the original and the two StatusChange objects) are Spring-Data-JPA entities managed by JpaRepository instances.
We have a log statement in the service that creates and saves the StatusChanges:
log.debug('Saved StatusChange has ID {}', newStatusChange.id)
So after the first one I see:
Saved StatusChange has ID 1
And then on the re-save:
Saved StatusChange has ID 2
Good! We have the second one! And then I see we change the original:
changing [StatusChange#ab2e250f { id: 1, ... }] status change to STATUS_CHANGED
But after the test is done, we only have 1 StatusChange object -- the original with ID:1. I know this because I have this in the cleanup step in my test:
sql.eachRow("select * from StatusChange",{ row->
println "ID -> ${row['ID']}, Status -> ${row['STATUS']}";
})
And the result is:
ID -> 1, Status -> PENDING
I would expect this:
ID -> 1, Status -> STATUS_CHANGED
ID -> 2, Status -> PENDING
This happens in two steps within the same test, so no rollbacks should occur between the two saves. So what could cause the object to be persisted the first time and not the second?
The problem was that the service that ran after the Camel route finished threw an exception. It was assumed that the transaction had been committed by then, but it had not. The transaction was then marked rollback-only when the exception hit, and that is how things disappeared.
The funniest thing: the exception happened in the service precisely because the transaction hadn't been committed yet. A vicious circle.

Insert or update data in aerospike

I want to insert a record in Aerospike; if the record already exists, then I only want to update it.
Currently I am using this call (to insert):
client.put(wPolicy, key, bin1, bin2)
Can someone please tell me how to update or insert depending on whether the record already exists?
Use the default write policy, which does the following:
(1) If the specified bins do not yet exist, they will be inserted; and
(2) If the specified bins exist and have values, those values will be replaced.
To use the default write policy, if you're using the Java client, just pass in null to the writePolicy parameter. I suspect other clients will be similar.
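For example, if you happen to use the Node.js client, a minimal sketch of the default create-or-update behavior might look like this (connection details are assumptions):

const Aerospike = require('aerospike');

async function upsertUser() {
    const client = await Aerospike.connect({ hosts: '127.0.0.1:3000' });
    const key = new Aerospike.Key('test', 'users', 1);
    // With the default write policy this creates the record if it is new,
    // or updates the listed bins if the record already exists.
    await client.put(key, { username: 'ninjastar', age: 47 });
    client.close();
}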
If there are more sub-parts to your question, you can add details to your question and I'll revisit later.
As Aaron mentioned, the default write policy for existence is AS_POLICY_EXISTS_IGNORE, which means "Write the record, regardless of existence. (i.e. create or update.)". Therefore, you don't have to set the existence policy explicitly, as it already does what you expect.
You can choose more SQL-like behavior with AS_POLICY_EXISTS_CREATE (the write fails if the record already exists), AS_POLICY_EXISTS_UPDATE (the write fails if the record doesn't already exist), AS_POLICY_EXISTS_REPLACE (the write fails if the record doesn't exist, AND what you write always replaces the previous version completely), and AS_POLICY_EXISTS_CREATE_OR_REPLACE (which either creates a new record if none exists, or completely overwrites the record if it does).
In the Python client you would set one of these alternative existence write policies on the aerospike.Client.put():
from __future__ import print_function
import sys

import aerospike
from aerospike.exception import RecordError

config = {
    'hosts': [('127.0.0.1', 3000)],
    'timeout': 1500
}
client = aerospike.client(config).connect()

try:
    key = ('test', 'users', 1)
    bins = {
        'username': 'ninjastar',
        'age': 47,
        'hp': 1234
    }
    # POLICY_EXISTS_CREATE makes the put fail if the record already exists
    client.put(key, bins,
               policy={'exists': aerospike.POLICY_EXISTS_CREATE},
               meta={'ttl': 3600})
except RecordError as e:
    print("The user record already exists: {0} [{1}]".format(e.msg, e.code))
    sys.exit(1)
finally:
    client.close()
The possible values for exists are aerospike.POLICY_EXISTS_*.

How to set a field value from the UI?

I use three fields in my SQL Server database tables to prevent users from permanently deleting records:
IsDelete (bit)
DeletedDate (DateTime)
DeletedUserID (bigint)
I wish to set the third field (DeletedUserID) from the UI with something like this:
this.ExamdbDataSet.AcceptChanges();
DataRowView row = (DataRowView)this.BindingSource.Current;
row.BeginEdit();
row["DeletedUserID"] = User.User.Current.ID;
row.EndEdit();
this.ExamdbDataSet.AcceptChanges();
row.Delete();
The other two fields, 'IsDeleted' and 'DeletedDate', are set automatically in the table's after-delete trigger.
I then commit the changes to the database, successfully, with this code:
this.TableAdapterManager.UpdateAll(this.ExamdbDataSet);
But the problem is that 'DeletedUserID' ends up null in the database.
The question is: what is the right way to set the 'DeletedUserID' field from the UI?
I don't think this is a good way to do it. You have sliced a simple piece of logic into separate parts, each handled in a different layer of the application (UI, trigger, ...). You set the value of a field and then DELETE the whole record! Don't expect anything other than the current situation: the second AcceptChanges() marks the row as Unchanged, so your modified DeletedUserID is never sent to the database, and after row.Delete() the UpdateAll() call issues only a DELETE statement.
You would do better to set all the fields in the UI (i.e. no trigger in this case) and change the query that loads the data. For example:
Select * from table1 where IsDeleted = 0
You didn't tell us whether you use ASP.NET or WinForms. Give us more info.
