CloudKit Sync with Core Data: Issue with CKModifyRecordsOperation

I am trying to sync CloudKit and Core Data. I have two tables:
Parent
Child (Child has a CKReference to Parent, i.e. a backward reference from child to parent.)
Now, according to Apple, these are the steps we must follow (the upload and fetch steps are sketched below):
Fetch local changes - done by maintaining an update flag on every record, say 1 = create, 2 = update and 3 = delete.
Upload the local changes to the cloud - here I use CKModifyRecordsOperation, providing the records flagged 1 or 2 as the records to save and those flagged 3 as the record IDs to delete. (Atomic, to avoid inconsistency.)
Resolve conflicts, if any (the record with the later modification date wins).
Fetch server changes (any changes made on the server since the last change token are fetched with CKFetchRecordZoneChangesOperation).
Apply the server changes locally.
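
A minimal Swift sketch of the upload and fetch steps, under stated assumptions: a custom zone named "SyncZone", and the caller having already split the flagged records into recordsToSave (flags 1 and 2) and recordIDsToDelete (flag 3). All names here are illustrative, not from the original post.

    import CloudKit

    let database = CKContainer.default().privateCloudDatabase
    let zoneID = CKRecordZone.ID(zoneName: "SyncZone", ownerName: CKCurrentUserDefaultName)

    func pushLocalChanges(recordsToSave: [CKRecord], recordIDsToDelete: [CKRecord.ID]) {
        let op = CKModifyRecordsOperation(recordsToSave: recordsToSave,
                                          recordIDsToDelete: recordIDsToDelete)
        op.isAtomic = true                        // all-or-nothing, as in the upload step
        op.savePolicy = .ifServerRecordUnchanged  // surfaces conflicts instead of overwriting
        op.modifyRecordsCompletionBlock = { _, _, error in
            if let ckError = error as? CKError, ckError.code == .partialFailure {
                // Per-record errors (e.g. .serverRecordChanged, which feeds the
                // "later modification date wins" resolution) live in here.
                print(ckError.partialErrorsByItemID ?? [:])
            }
        }
        database.add(op)
    }

    func pullServerChanges(since token: CKServerChangeToken?) {
        let config = CKFetchRecordZoneChangesOperation.ZoneConfiguration()
        config.previousServerChangeToken = token
        let op = CKFetchRecordZoneChangesOperation(recordZoneIDs: [zoneID],
                                                   configurationsByRecordZoneID: [zoneID: config])
        op.recordChangedBlock = { record in /* upsert into Core Data */ }
        op.recordWithIDWasDeletedBlock = { recordID, _ in /* delete locally */ }
        op.recordZoneChangeTokensUpdatedBlock = { _, newToken, _ in /* persist newToken */ }
        database.add(op)
    }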
Now say I have 2 devices that have already been synced with the following data:
Parent-1
P1-Child1 (references Parent-1)
Now, on device 1, I delete Parent-1 and P1-Child1 and let it sync to the cloud. On the CloudKit dashboard I verify that both the parent and the child have been deleted successfully.
On device 2, I now add P1-Child2, another child of the previous parent. Walking through the steps above:
Local changes: P1-Child2
Upload to cloud: P1-Child2
Conflicts: none
Fetch changes from cloud: (inserted: P1-Child2; deleted: Parent-1, P1-Child1)
Apply these to local.
P1-Child2 is saved successfully to the cloud without a parent, so I am now left with a child record that has no parent.
Can you help me figure out the right way to solve this?
I thought that if Apple returned an error from the CKModifyRecordsOperation, as mentioned in its documentation, then I would know that the parent record does not exist and could re-save/upload a parent record along with the child.
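
One workaround sketch (an assumption, not an official CloudKit mechanism): check whether the parent still exists before uploading the child, and re-upload the parent atomically with the child when it is gone. `makeParent` is a hypothetical helper that rebuilds the parent CKRecord from the local Core Data copy.

    import CloudKit

    func saveChildEnsuringParent(child: CKRecord,
                                 parentID: CKRecord.ID,
                                 makeParent: @escaping () -> CKRecord) {
        let database = CKContainer.default().privateCloudDatabase
        database.fetch(withRecordID: parentID) { parent, error in
            if let ckError = error as? CKError, ckError.code == .unknownItem {
                // Parent was deleted on the server: re-upload it together with
                // the child, atomically, so we never create an orphaned child.
                let op = CKModifyRecordsOperation(recordsToSave: [makeParent(), child],
                                                  recordIDsToDelete: nil)
                op.isAtomic = true
                database.add(op)
            } else if parent != nil {
                database.add(CKModifyRecordsOperation(recordsToSave: [child],
                                                      recordIDsToDelete: nil))
            }
        }
    }

Note that this check still races with a concurrent delete of the parent between the fetch and the save, so cleaning up orphans when applying server changes remains necessary as a backstop.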

Related

Execute logic in a before delete event trigger

Before deleting a record (e.g. an Account record), I want to update a field on the account record, send it to content management, hold it for a few seconds, and then delete it.
For this scenario, I used a before delete trigger, updated the fields on the record, and called content management with the updated record data. The record is updated with the new values (I verified after restoring it from the recycle bin), but content management is not called before the record is deleted. Is there any option to wait a few seconds until the record is updated in content management and then delete the record? Please share your suggestions. Thank you.
You can't make a callout straight from a trigger (a Salesforce database table/row can't be locked and held hostage, for up to 2 minutes, until a 3rd-party system finishes); it has to be asynchronous. So you probably call out from @future, but by then the main trigger has finished and the record is deleted; if you passed an Id, the query inside @future probably returns 0 rows.
Forget the bit about "holding it for a few seconds". You need to make some architecture decisions. Is it important that the delete succeeds no matter what, or do you want to delete only after the external system has acknowledged the message?
You could query your record in the trigger (or take the whole Trigger.old) and pass it to the @future method. @future methods are supposed to take only primitives, not objects/collections, but you can always JSON.serialize the data before passing it as a string (see the sketch after this list).
You could hide the standard delete button and introduce a custom one, backed by a controller that makes the callout, waits for the success response to come back, and then deletes.
You could rethink the request-response approach. What if you make the callout (or raise a platform event?) and it's the content management system that then reaches back into Salesforce and deletes the record (via the REST API, for example)?
What if you just delete right away, hope the records stay in the recycle bin, and have the external system query the bin / make a special getDeleted call to pull the data?
See Salesforce - Pull all deleted cases in Salesforce for some more bin-related API calls.
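
This thread is about Apex, but the "capture the state first, hand it off asynchronously" pattern behind the serialize-then-@future idea is language-agnostic. A minimal sketch in Swift, with a hypothetical https://cms.example.com/notify endpoint standing in for the content-management callout:

    import Foundation

    // Illustrative snapshot of the fields the external system needs.
    struct AccountSnapshot: Codable {
        let id: String
        let name: String
    }

    func beforeDelete(record: AccountSnapshot) throws {
        // Serialize the full record now, while it still exists, so the async
        // worker never has to query a row that has already been deleted.
        let payload = try JSONEncoder().encode(record)
        Task.detached {
            var request = URLRequest(url: URL(string: "https://cms.example.com/notify")!)
            request.httpMethod = "POST"
            request.httpBody = payload
            _ = try? await URLSession.shared.data(for: request)
        }
    }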

Azure Mobile Service offline syncing behavior with relational database using .NET backend

We have parent/child tables:
Delivery
DeliveryItems
The Delivery table contains status updates; based on a Delivery status update, we have a trigger that inserts items into another system.
From my mobile application I insert DeliveryItems first (offline), and then update the Delivery status (offline).
Now, when I sync with the Azure mobile service, the Delivery record gets updated before the insertion of all the items completes.
I want the inserts/updates/deletes to be done sequentially; how do I achieve this?
That's based on your sync order. If you want the delivery items to sync first, place the sync call for delivery items above the one for deliveries.
It's not currently possible to guarantee order, for two main reasons:
An error inserting delivery item 2 will not inherently stop the attempt to insert delivery item 3. (You can address this via a handler.)
Multiple actions taken on the same item are combined (so an offline insert plus an update go up as a single insert to the server when you come online).
If it's the first case that is tripping you up, you can have the sync handler abort the sync (so items 3, 4, ... and the Delivery don't go up).
Handling the second case is more complex; the simplest (but maybe unreasonable) approach is to not edit the Delivery until after you have inserted/edited all the items (sketched below).
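
A minimal sketch of that ordering, in plain Swift rather than the actual Azure Mobile Services SDK (`pushItem` and `pushDelivery` are stand-ins for whatever performs the uploads):

    struct DeliveryItem { let id: String }
    struct Delivery { let id: String; var status: String }

    enum SyncError: Error { case itemPushFailed(String) }

    func sync(delivery: Delivery,
              items: [DeliveryItem],
              pushItem: (DeliveryItem) throws -> Void,
              pushDelivery: (Delivery) throws -> Void) throws {
        for item in items {
            do {
                try pushItem(item)
            } catch {
                // Abort the whole sync: the Delivery update never goes up if
                // any item failed, so the server-side trigger can't fire early.
                throw SyncError.itemPushFailed(item.id)
            }
        }
        try pushDelivery(delivery) // reached only after every item is on the server
    }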

Salesforce-to-Salesforce round-trip field update issue

It's been a couple of releases since I've had to do an S2S integration, but I ran into an unexpected issue that hopefully someone can solve more effectively.
I have two orgs, sharing contacts over S2S.
Contacts in each org have an identical schema: standard fields plus custom fields. I've reproduced a base case with just two custom fields: checkbox field A and Number(18,0) field B.
Org 1 publishes field A, and subscribes to field B.
Org 2 subscribes to field A, and publishes field B.
Org 1 initiates all S2S workflow by sharing contacts to Org 2 over S2S. Org 2 has auto-accept on.
Org 2 has a Contact before insert trigger that simply uses field A to calculate the value of field B, e.g. if field A is checked, populate field B with 2; if unchecked, with 0. (This is of course a drastic over-simplification of what I really need to do, but it's the base reproducible case.)
That all works fine in Org 2 - contacts come across fine with field A, and I see the field results get calculated into field B.
The problem is that the result - field B - does not get auto-shared back to Org 1 until the next contact update. It can be as simple as editing a non-shared field on that same contact in Org 2, like "Description"; then I instantly see the previously calculated value of field B get pushed back to Org 1.
I'm assuming this is because, since the calculation of field B occurs within a before insert, the S2S connection assumes the current update transaction was performed only by itself (I can see how this logic would make sense to prevent infinite S2S update loops).
I first tried creating a workflow field update that forcibly updated a (new, dummy) shared field whenever field B changed, but that still did not cause the update to flow back, presumably because it runs in the same execution context, which Salesforce deems exempt from re-sharing. I also tried a workflow rule that forwarded the Lead back to the connection queue when the field changed, and that didn't work either.
I then tried a re-update statement in an after update trigger: if the shared field was updated, reload and re-update the shared object. That also didn't work.
I did find a solution: a future method, called from the after update trigger, that reloads and touches any record whose shared field was changed by the before update trigger. This does cause the field results to show up in near-real-time in the originating organization.
This solution works for me for now, but I feel like I MUST be missing something. It causes far more future calls and DML to be executed than should be necessary.
Does anyone have a more elegant solution?
Had the same problem and an amazing Salesforce support rep unearthed this documentation, which covers Salesforce's specific guidance here: https://web.archive.org/web/20210603004222/https://help.salesforce.com/articleView?id=sf.business_network_workflows.htm&type=5
Sometimes it makes sense to use an Apex trigger instead of a workflow. Suppose that you have a workflow rule that updates a secondary field, field B, when field A is updated. Even if your Salesforce to Salesforce partner subscribed to fields A and B, updates to field B that are triggered by your workflow rule aren’t sent to your partner’s organization. This prevents a loop of updates.
If you want such secondary field updates to be sent to your Salesforce to Salesforce partners, replace the workflow with an Apex trigger that uses post-commit logic to update the secondary field.
In bi-directional connections, Salesforce to Salesforce updates are triggered back only on “after” triggers (for example, “after insert” or “after update”), not on “before” triggers.
This is what the OP ended up doing, but this documentation from Salesforce at least clears up the assumptions and guesses that were made here as part of the discussion. It also helpfully points out, for future reference, that it's not best practice to use "before" triggers in these circumstances.
I think there is no better workaround than what you are doing. The limit on future callouts has been raised to a fairly high level, so that should not be a concern.
Maybe the other thing you can do is this (not sure if it will work, as we are still in the same context):
Org 1 -
Field A is updated, which publishes the Contact.
Org 2 -
On before update of the Contact in Org 2, if A has been updated, save the ID of the Contact in a new custom object.
On after update of the new custom object, update field B for the given Contact ID. Updates to B will then be published.

Return value for correct session?

I'm working on a project in classic ASP (I know :( ).
Anyway, it works with a kdb+ database, which is major overkill, but that's not my call. To do inserts etc. we're having to write special functions so they can be handled.
We've hit a theoretical problem, and I'm a bit unsure how it should be dealt with in this case.
Basically, you register a company; when you submit, validation occurs and the page is processed, inserting new values into the appropriate tables. At this stage I want to pull IDs from the tables and use them in the session for the further registration screens. The user will never enter a specific ID, of course, so it needs to be pulled from the database.
But how can this be done? I'm particularly concerned about 2 users registering simultaneously: how can I ensure the correct ID is passed back to the correct session?
Thank you for any help you can provide.
Instead of having the ID set at the point of insert, is it possible for you to "grab" an ID value beforehand and then use that value throughout the process (sketched below)?
So:
Start the registration.
The system connects to the database, creates an ID (perhaps from an ID table) and stores it in the ASP session.
The company registers.
You validate and insert the data into the DB (including the ID from the session).
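
A minimal sketch of the "grab an ID up front" idea, illustrated in Swift rather than classic ASP (the class and method names are hypothetical):

    import Foundation

    final class RegistrationSession {
        // Reserved once, before any row exists. Each session holds its own ID,
        // so two simultaneous registrations can never read back each other's row.
        let companyID = UUID().uuidString

        // Hypothetical insert hook; the point is that the ID comes from the
        // session, not from querying the table after the insert.
        func insertCompany(name: String, execute: (String) -> Void) {
            execute("INSERT INTO company (id, name) VALUES ('\(companyID)', '\(name)')")
        }
    }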
The things you put in the Session(...) collection are visible only to that session (i.e. the session is used only by the browser windows on one computer). The session is identified by a GUID value that is stored in a cookie on the client machine. It is "safe" to store your IDs there (other users won't be able to read them easily).
Alternatively, your ID can include the date and time, so it would be, for example, id31032012200312 - but if you still think 2 people could register at the same time, then I would use recordset locks like the ones here: http://www.w3schools.com/ado/prop_rs_locktype.asp
To create IDs like the one above in ASP you do replace(date(),"/","") and then the same with the time and ":".
Thanks

How to update part of an object

Some data needs to be inserted into the DB each time a web service method is called: at the beginning of request processing and at the end.
My intention is to insert a record containing all of the incoming information at the beginning of request processing, and then to update the same record once the request has been processed and the data is ready to be sent back (or an error occurred and I need to store the error message).
The problem is that the incoming data can be pretty long, and before an update LINQ to SQL needs to fetch the object's data from the DB and then "store" it again. In this case the incoming data travels 3 times:
the 1st time, when inserting, it goes into the DB;
the 2nd time, before the object update, it is fetched from the DB;
the 3rd time, on update, it goes to the DB again.
Is there any way to optimize this process if I already have the object fetched from the DB?
Does the same apply to Entity Framework? Does it allow updating only part of an object?
An ORM is geared towards converting complete rows to complete objects, and back again - so updates are always to the full object.
However, both LINQ to SQL and Entity Framework are definitely smart enough to figure out which properties have changed on an entity, so if you only update some fields, the generated UPDATE command will set only those changed fields.
So basically: just try it! Fire up SQL Profiler and see what SQL goes to the database. In Entity Framework, I'm positive that if you only change some fields, only those changed fields will be updated in the UPDATE statement and nothing else.
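
A toy sketch of what such a change tracker does conceptually (this is not EF or LINQ to SQL themselves, and the table/column names are made up): snapshot the original column values, diff them at save time, and emit an UPDATE that sets only what changed.

    func buildUpdate(table: String,
                     key: (column: String, value: String),
                     original: [String: String],
                     current: [String: String]) -> String? {
        // Keep only the columns whose value differs from the snapshot.
        let changed = current.filter { original[$0.key] != $0.value }
        guard !changed.isEmpty else { return nil } // nothing to persist
        let sets = changed.map { "\($0.key) = '\($0.value)'" }.sorted().joined(separator: ", ")
        return "UPDATE \(table) SET \(sets) WHERE \(key.column) = '\(key.value)'"
    }

    // The big payload column is unchanged, so it never travels a 3rd time:
    let sql = buildUpdate(table: "RequestLog",
                          key: (column: "id", value: "42"),
                          original: ["payload": "<huge body>", "status": "pending"],
                          current:  ["payload": "<huge body>", "status": "done"])
    // -> UPDATE RequestLog SET status = 'done' WHERE id = '42'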
