It's been a couple of releases since I've had to do an S2S integration, but I ran into an unexpected issue that I hope someone can solve more elegantly.
I have two orgs, sharing contacts over S2S.
Contacts in each org have an identical schema: standard fields plus custom fields. I've reproduced a base case with just two custom fields: checkbox field A and Number(18,0) field B.
Org 1 publishes field A, and subscribes to field B.
Org 2 subscribes to field A, and publishes field B.
Org 1 initiates all S2S activity by sharing contacts with Org 2 over S2S. Org 2 has auto-accept enabled.
Org 2 has a Contact before insert trigger that simply uses field A to calculate the value of field B: e.g., if field A is checked, populate field B with 2; if unchecked, with 0. (This is of course a drastic over-simplification of what I really need to do, but it's the base reproducible case.)
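For reference, here's a minimal sketch of that base case, assuming hypothetical field API names Field_A__c and Field_B__c (the real names weren't given in the post):

trigger ContactBeforeInsert on Contact (before insert) {
    for (Contact c : Trigger.new) {
        // Derive the locally published field from the subscribed one.
        c.Field_B__c = (c.Field_A__c == true) ? 2 : 0;
    }
}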
That all works in Org 2: contacts come across with field A intact, and I see the calculated results land in field B.
The problem is that the result - field B - does not get auto-shared back to Org 1 until the next contact update. It can be as simple as me editing a non-shared field on that same contact, like "Description", in Org 2, and then I instantly see the previously calculated value of field B get pushed back to Org 1.
I'm assuming that this is because, since the calculation of field B is occurring within a Before Insert, the S2S connection assumes the current update transaction was only performed by itself (I can see how this logic would make sense to prevent infinite S2S update loops).
I first tried creating a workflow field update that forcibly updated a (new, dummy) shared field when field B changed, but that still did not cause the update to flow back, presumably because it runs in the same execution context, which Salesforce deems exempt from re-sharing. I also tried a workflow rule that forwarded the contact back to the connection queue when the field changed, and that didn't work either.
I then tried a re-update in an after update trigger: if the shared field was updated, reload and re-update the shared record. That also didn't work.
I did find a solution: a future method, called from the after update trigger, that reloads and touches any record whose shared field was changed by the before trigger. This does cause the field results to show up in near-real-time in the originating organization.
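The pattern, roughly (same hypothetical field names as above; this is a sketch, not my exact code):

trigger ContactAfterSave on Contact (after insert, after update) {
    Set<Id> toTouch = new Set<Id>();
    for (Contact c : Trigger.new) {
        Boolean bChanged = Trigger.isInsert
            || c.Field_B__c != Trigger.oldMap.get(c.Id).Field_B__c;
        if (bChanged) {
            toTouch.add(c.Id);
        }
    }
    // Don't chain future calls from a future context.
    if (!toTouch.isEmpty() && !System.isFuture()) {
        ContactRepublisher.touch(toTouch);
    }
}

// In a separate class file:
public class ContactRepublisher {
    @future
    public static void touch(Set<Id> contactIds) {
        // Re-saving in a fresh transaction makes S2S treat this as a
        // local change, so the published fields get sent back out.
        update [SELECT Id FROM Contact WHERE Id IN :contactIds];
    }
}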
This solution works for me for now, but I feel like I MUST be missing something. It causes way more Future calls and DML to be executed than should be necessary.
Does anyone have a more elegant solution for this?
Had the same problem, and an amazing Salesforce support rep unearthed this documentation, which covers Salesforce's specific guidance: https://web.archive.org/web/20210603004222/https://help.salesforce.com/articleView?id=sf.business_network_workflows.htm&type=5
Sometimes it makes sense to use an Apex trigger instead of a workflow. Suppose that you have a workflow rule that updates a secondary field, field B, when field A is updated. Even if your Salesforce to Salesforce partner subscribed to fields A and B, updates to field B that are triggered by your workflow rule aren’t sent to your partner’s organization. This prevents a loop of updates.
If you want such secondary field updates to be sent to your Salesforce to Salesforce partners, replace the workflow with an Apex trigger that uses post-commit logic to update the secondary field.
In bi-directional connections, Salesforce to Salesforce updates are triggered back only on “after” triggers (for example, “after insert” or “after update”), not on “before” triggers.
This is what OP ended up doing, but this documentation from Salesforce at least clears up the assumptions and guesses that were made here as part of the discussion. It also helpfully points out that it's not best practice to use "before" triggers in these circumstances, for future reference.
I don't think there is a better workaround than what you are doing. The limit on future calls has been raised to a fairly high level, so that shouldn't be a concern.
One other thing you could try (not sure if this will work, since we're still in the same execution context):
Org 1:
Field A is updated; the contact is published.
Org 2:
In a before update trigger on Contact in Org 2, if A has changed, save the ID of the contact in a new custom object.
In the after insert trigger of the new custom object, update field B for the given contact ID. The update to B will then be published.
Related
I have to create an automation process that checks that no new opportunities have been created for an account in the past 12 months and updates an account field based on that.
Tried process builder, but it doesn't seem to work.
Tricky
A flow/workflow/process builder needs some triggering condition to fire. If an account was created 5 years ago, hasn't been updated since, and hasn't had any opportunities, it will not trigger any flows until somebody touches it.
And even if you somehow manage to make a time-based workflow, for example (to enqueue making a Task 1 year from now if there are no Opps by then), it'll "queue" actions only from the moment it was created; it will not retroactively tag old unused accounts.
The time-based actions suck a bit. Say you made it work, and it enqueued some future tasks/field updates/whatevers. Then you realise you need to exclude accounts of a certain record type from it. You need to deactivate the workflow/flow to do it, and deactivation wipes the enqueued actions out. So you'd need to save your changes and somehow "touch" all accounts again so they're checked again.
Does it have to be a field on Account? Can it be just a report (which you could make a reporting snapshot of if needed)? You could embed a report on the account layout, right? A query? Worst case, some nightly Apex job that runs and tags the accounts? It would dutifully run through them all and set/clear your helper field, and it's easy to change (well, for a developer). For example:
SELECT Id, Name
FROM Account
WHERE Id NOT IN (SELECT AccountId FROM Opportunity WHERE CreatedDate = LAST_N_DAYS:365)
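For the nightly Apex job route, a sketch; No_Recent_Opps__c is a hypothetical helper checkbox, and a big org (over 50k accounts) would want Database.Batchable instead of one query:

global class TagStaleAccountsJob implements Schedulable {
    global void execute(SchedulableContext sc) {
        // Anti-join: accounts with no opportunities created in the last year.
        List<Account> stale = [
            SELECT Id FROM Account
            WHERE No_Recent_Opps__c = false
            AND Id NOT IN (SELECT AccountId FROM Opportunity
                           WHERE CreatedDate = LAST_N_DAYS:365)
        ];
        for (Account a : stale) {
            a.No_Recent_Opps__c = true;
        }
        update stale;
    }
}
// Schedule for 2am nightly:
// System.schedule('Tag stale accounts', '0 0 2 * * ?', new TagStaleAccountsJob());

You'd also want a mirror-image pass that clears the flag on accounts that have since gained opportunities.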
The reporting way would be a "cross filter": https://salesforce.vidyard.com/watch/aQ6RWvyPmFNP44brnAp8tf, https://help.salesforce.com/s/articleView?id=sf.reports_cross_filters.htm&type=5
Salesforce question:
We have to update accounts from a schedulable job. Inserting new records is no problem, but existing records should only be updated if a particular checkbox field on them is set to true (otherwise they should not be updated). Also keep in mind that the Apex code runs in system context.
I am looking for a way that does NOT involve querying the record by Id in code and checking that field value before upserting.
Thank you for helping.
Code
List<Account> accountList = new List<Account>(accountsToUpdate);
upsert accountList MY_COMPOSITE_KEY__c;
Make a validation rule that simply has NOT(Your_Checkbox__c) as the error condition (combined with NOT(ISNEW()) if new inserts should always pass). Or make a before update trigger on Account (if you don't have one already) that inspects all records in Trigger.new and calls addError() on the ones that shouldn't change. The validation rule is slightly preferred because it's just config, no code.
The problem with either is that it will cause your whole transaction to die. If your batch updates 200 accounts and one of them trips the rule, that failure will block updating them all. This is done to ensure the system's state is stable (read up on "atomic operations" or "ACID"); you wouldn't want data that's halfway updated...
So you'll probably have to mitigate that by calling Database.upsert(accountList, Account.MY_COMPOSITE_KEY__c, false); so it saves what it can and doesn't throw exceptions...
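Then inspect the save results for the rows that were blocked; a minimal sketch, reusing the composite-key field from the question:

Database.UpsertResult[] results =
    Database.upsert(accountList, Account.MY_COMPOSITE_KEY__c, false);
for (Integer i = 0; i < results.size(); i++) {
    if (!results[i].isSuccess()) {
        // Rows rejected by the validation rule (or addError) land here.
        System.debug('Skipped ' + accountList[i].MY_COMPOSITE_KEY__c
            + ': ' + results[i].getErrors()[0].getMessage());
    }
}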
I'm new to Flink and I'm trying to use it to have a bunch of live views of my application. At least one of the dynamic views I'd like to build would be to show entries that have not met an SLA -- or essentially expired -- and the condition for this would be a simple timestamp comparison. So I would basically want an entry to show up in my dynamic table if it has NOT been touched by an event recently. In playing around with Flink 1.6 (constrained to this due to AWS Kinesis) in a dev environment, I'm not seeing that Flink is re-evaluating a condition unless an event touches that entry.
I've got my dev environment plugged into a Kinesis stream that's sending in live access log events from a web server. This isn't my real use case but it was an easy one to begin testing with. I've written a simple table query that pulls in a request path, its last access time, and computes a boolean flag to indicate whether it hasn't been accessed in the last minute. I'm debugging this via a retract stream connected to PrintSinkFunction so all updates/deletes are printed to my console.
tEnv.registerDataStream("AccessLogs", accessLogs,
    "username, status, request, responseSize, referrer, userAgent, requestTime, ActionTime.rowtime");

Table paths = tEnv.sqlQuery(
    "SELECT request AS path, MAX(requestTime) AS lastTime, " +
    "  CASE WHEN MAX(requestTime) < CURRENT_TIMESTAMP - INTERVAL '1' MINUTE " +
    "       THEN 1 ELSE 0 END AS expired " +
    "FROM AccessLogs GROUP BY request");

DataStream<Tuple2<Boolean, Row>> retractStream = tEnv.toRetractStream(paths, Row.class);
retractStream.addSink(new PrintSinkFunction<>());
I expect that when I access a page, an Add event is sent to this stream. Then if I wait 1 minute (do nothing), the CASE statement in my table will evaluate to 1, so I should see a Delete and then Add event with that flag set.
What I actually see is that nothing happens until I load that page again. The Delete event actually has the flag set, while the Add event that immediately follows it has the flag cleared again (as it should, since it's no longer "expired").
// add/delete, path, lastAccess, expired
(true,/mypage,2019-05-20 20:02:48.0,0) // first page load, add event
(false,/mypage,2019-05-20 20:02:48.0,1) // second load > 2 mins later, remove event for the entry with expired flag set
(true,/mypage,2019-05-20 20:05:01.0,0) // second load, add event
Edit: The most useful tip I've come across in my searching is to create a ProcessFunction. I think this is something I could make work with my dynamic tables (in some cases I'd end up with intermediate streams to look at computed dates), but hopefully it doesn't have to come to that.
I've gotten the ProcessFunction approach to work, but it required a lot more tinkering than I initially thought it would (a rough sketch follows the list):
I had to add a field to my POJO that changes in the onTimer() method (it could be a date, or a version number that you simply bump each time)
I had to register this field as part of the dynamic table
I had to use this field in my query so that the query gets re-evaluated and the boolean flag changes (even though I don't actually use the new field); I just added it to my SELECT clause.
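Here's roughly what mine boils down to, assuming a hypothetical AccessLog POJO with a public version field (names are illustrative; Flink 1.6 API):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class ExpiryBumper extends KeyedProcessFunction<String, AccessLog, AccessLog> {

    private transient ValueState<AccessLog> latest;

    @Override
    public void open(Configuration parameters) {
        latest = getRuntimeContext().getState(
            new ValueStateDescriptor<>("latest", AccessLog.class));
    }

    @Override
    public void processElement(AccessLog log, Context ctx, Collector<AccessLog> out) throws Exception {
        latest.update(log);
        out.collect(log);
        // Fire one minute later so the row gets re-emitted and re-evaluated.
        ctx.timerService().registerProcessingTimeTimer(
            ctx.timerService().currentProcessingTime() + 60_000);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<AccessLog> out) throws Exception {
        AccessLog log = latest.value();
        if (log != null) {
            log.version++; // the dummy field: changing it forces the SQL to re-evaluate
            out.collect(log);
        }
    }
}

The stream gets keyed by request and run through .process(new ExpiryBumper()) before being registered as the dynamic table.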
Your approach looks promising but a comparison with a moving "now" timestamp is not supported by Flink's Table API / SQL (yet).
I would solve this in two steps.
Register the dynamic table in upsert mode, i.e., a table that is upserted per key (request in your case) based on a version timestamp (requestTime in your case). The resulting dynamic table would hold the latest row for every request.
Have a query with a simple filter predicate like yours that compares the version timestamp of the rows of the dynamic (upsert) table and filters out all rows whose timestamps are too close to now.
Unfortunately, neither of these features (upsert conversion and comparison against the moving "now" timestamp) is available in Flink yet. There is some ongoing work on upsert table conversion, though.
Okay, just to clarify: I have a SQL table (columns: ID, School, Student ID, Name, Fee $, Fee Type, and Paid) that needs to be posted on a grid that will be uploaded to a website. The grid shows everything correctly, including which fees need to be paid. The Paid column has a bit data type, 1 or 0 (basically a checkbox). I am being asked to add two more columns, User and DateChanged, to log which staff member changed the Paid column. They should capture in the SQL table the username of the staff member who changed it, and the time of the change.
For example: user Bob checks the Paid column for student X on 5/2/17 at 10pm.
In the same row as student X's info, "Bob" would appear under the User column, and under DateChanged it would show 2017-05-02 10pm.
What steps would I take to make this possible?
I'm currently an IT intern and all this SQL stuff is new to me. Let me know if you need more clarification. FYI, the two new columns, User and DateChanged, will not be on the grid.
The way to do this as you've described is to use a trigger. I have some example code below, but be warned: triggers can have unexpected side effects, depending on how the database and app interface are set up.
If it is possible for you to change the application code that sends SQL queries to the database instead, that would be much safer than using a trigger. You can still add the new fields, you would just be relying on the app to keep them updated instead of doing it all in SQL.
Things to keep in mind about this code:
If any background processes or procedures make updates to the table, it will overwrite the timestamp and username automatically, because this is triggered on any update to the row(s) in question.
If the users don't have any direct access to SQL Server (in other words, the app is the only thing connecting to the database), then it is possible that the app will only be using one database login username for everyone, and in that case you will not be able to figure out which user made the update unless you can change the application code.
If anyone changes something by accident and then changes it back, it will overwrite your timestamp and make it look like the wrong person made the update.
Triggers can potentially bog down the database system if there are a very large number of rows and/or a high number of updates being made to the table constantly, because the trigger code will be executed every time an update is made to a row in the table.
But if you don't have access to change the application code, and you want to give triggers a try, here's some example code that should do what you are needing:
create trigger TG_Payments_Update on Payments
after update
as
begin
    set nocount on;
    -- Stamp every updated row with the change time and the database user who made it.
    update p
    set DateChanged = getdate(),
        UserChanged = user_name()
    from Payments p
    inner join inserted i on p.ID = i.ID;
end
The web app already knows the current user working on the system, so your update would just include that user's ID and the current system time for when the action took place.
I would not rely on SQL Server triggers since that hides what's going on within the system. Plus, as others have said, they have side effects to deal with too.
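In that design the stamped update issued by the app might look like this (parameter names are illustrative):

-- The app already knows who is logged in, so it passes that along.
UPDATE Payments
SET Paid = @Paid,
    UserChanged = @AppUserName,  -- the app user, not the shared SQL login
    DateChanged = GETDATE()
WHERE ID = @PaymentID;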
I'm working on a project in dead ASP (I know :( )
Anyway, it works with a kdb+ database, which is major overkill, but that's not my call. Therefore, to do inserts etc. we're having to write special functions so they can be handled.
Anyway we've hit a theoretical problem and I'm a bit unsure how it should be dealt with in this case.
So basically you register a company; when you submit, validation occurs and the page is processed, inserting new values into the appropriate tables. At this stage I want to pull IDs from the tables and use them in the session for further registration screens. The user will never enter a specific ID, of course, so it needs to be pulled from the database.
But how can this be done? I'm particularly concerned about 2 users registering simultaneously; how can I ensure the correct ID is passed back to the correct session?
Thank you for any help you can provide.
Instead of having the ID set at the point of insert, is it possible for you to "grab" an ID value beforehand, and then use that value throughout the process?
So:
Start the registration.
The system connects to the database, creates an ID (perhaps from an ID table), and stores it in the ASP Session.
The company registers.
You validate and insert the data into the DB (including the ID from the session).
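A very rough sketch of that flow in classic ASP; GetNextCompanyId and InsertCompany are hypothetical helpers wrapping whatever functions you've written for kdb+ (GetNextCompanyId should atomically increment an ID table):

<%
' Reserve an ID up front and keep it in this visitor's session.
Dim newCompanyId
newCompanyId = GetNextCompanyId()
Session("CompanyID") = newCompanyId

' Later, on the final registration screen, insert using the reserved ID.
InsertCompany Session("CompanyID"), Request.Form("CompanyName")
%>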
The things you put in the Session(...) collection are only visible to that session (i.e., the session is used only by the browser windows on one computer). The session is identified by a GUID value that is stored in a cookie on the client machine. It is "safe" to store your IDs there (other users won't be able to read them easily).
Either your ID can include the date and time, so it would be e.g. id31032012200312; but if you still think 2 people could register at the same time, then I would use recordset locks like the ones here: http://www.w3schools.com/ado/prop_rs_locktype.asp
To create IDs like the above in ASP you do replace(date(),"/","") and then the same with time(), replacing ":".
Thanks