Salesforce Admin Sharing Rule - salesforce

[Note: There is a Teacher object with fields such as Teacher Name, DateofJoining, and a formula field called Experience]
My task was to create a Public Group consisting of another user,
and this user should only see teachers who have experience greater than 2 years.
But when I create a sharing rule based on criteria, the field called Experience doesn't show up, as it is a formula field.
So I got the idea of creating a new field (maybe a text or number data type) which would hold the value of Experience. (But I have no idea how to implement this.)
Is there a way to implement this?
Any other solution is also well appreciated!

Hard to say.
The normal trick would be to create a helper field (text, number, whatever) and have a piece of functionality that populates it: ideally an "early flow" or a "before insert, before update" trigger; worst case a normal flow, Process Builder, or an "after insert, after update" trigger. Something like "if Experience__c != 'your formula here' then Experience__c = 'your formula here'". Consult the normal SF help and Trailhead if you've never used early flows.
You'd make a one-off data fix to populate existing records and job done; a normal field should be selectable as sharing rule criteria.
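A minimal sketch of such a trigger, assuming the object's API name is Teacher__c, the date field is DateofJoining__c and the new helper field is a Number called Experience__c (adjust the names to your org):
trigger TeacherExperience on Teacher__c (before insert, before update) {
    for (Teacher__c t : Trigger.new) {
        // mirror the formula's logic into a plain Number field that sharing rules can see
        if (t.DateofJoining__c != null) {
            t.Experience__c = t.DateofJoining__c.daysBetween(Date.today()) / 365.0;
        }
    }
}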
=====
But I smell trouble with your formula. What exactly do you have there, something like Experience__c = (TODAY() - DateofJoining__c) / 365? That's a bit evil. Formulas with TODAY(), NOW() or anything with $ (roughly speaking, anything that depends on who's looking at the data: the user's name, profile, role... not what's actually on the record itself) are "nondeterministic". Unpredictable.
A "today()" changes just like that, without updating the record. Sure, when you watch the record a fresh value will be calculated but other than that LastModifiedDate doesn't change, there's no magical trigger running at midnight that rechecks sharing. (especially that there's no single midnight, you could have users in multiple timezones). SF just doesn't allow nondeterministic fields in many places, see https://salesforce.stackexchange.com/q/32122/799
So if you do rely on TODAY() in your formula, you might have to make a "scheduled flow" or read about Schedulable, Batchable Apex. Create a nightly job that runs and recalculates your helper field with the right experience. You'd probably even need both solutions: a "before save" flow for new data created today, and a nightly job to advance the clock on existing old data...
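If you go the Apex route, a minimal Schedulable sketch (same assumed Teacher__c / DateofJoining__c / Experience__c names; for large data volumes you'd put the same logic in a Database.Batchable instead):
global class RecalculateExperience implements Schedulable {
    global void execute(SchedulableContext ctx) {
        List<Teacher__c> teachers = [SELECT DateofJoining__c, Experience__c
                                     FROM Teacher__c
                                     WHERE DateofJoining__c != null];
        for (Teacher__c t : teachers) {
            t.Experience__c = t.DateofJoining__c.daysBetween(Date.today()) / 365.0;
        }
        update teachers;
    }
}
// run it nightly at 1 AM:
// System.schedule('Recalculate Experience', '0 0 1 * * ?', new RecalculateExperience());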

Given any random accountId, how to find its ultimate parent account - looking for the best optimized solution (for a multi-level hierarchy)

except the 10-levels-of-formula-fields solution
It depends. Optimized for what: reads (an instant, simple answer when querying) or writes (an easy save but more work when reading)?
If you want easy reads, you need to put some effort in when saving the data. And remember you can't get away with a simple custom lookup called "Ultimate Parent", because for a standalone account SF will not let you form a cycle, i.e. create a record that looks up to itself. You might need 2 text fields (Id and Name), or some convention that yes, you'll make a lookup to Account, but if it's blank the reading process needs to check the ParentId field too to determine what exactly is going on. (You could make a formula field to simplify reading, but still, don't think you're getting away with a simple lookup.)
How much data do you have, how deep are the hierarchies? The basic answer is to keep track of the ultimate parent on every insert, update, delete and undelete. Write a trigger; a SOQL query can go "up" 5 "dots" max:
SELECT ParentId,
Parent.ParentId,
Parent.Parent.ParentId,
Parent.Parent.Parent.ParentId,
Parent.Parent.Parent.Parent.ParentId,
Parent.Parent.Parent.Parent.Parent.ParentId
FROM Account
WHERE Id IN :Trigger.new
It gets messier if you need multiple queries (but still, this form would be the most efficient). And you might hit performance issues when something reparents close to the top of the tree and you're suddenly looking at having to cascade-update hundreds of accounts. Remember you have a limit of 10K rows inserted/updated/deleted in a single operation. You might have to propagate the changes down as a batch/future/queueable async process.
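For ad-hoc, non-bulk use, a naive sketch that climbs one level per query until it hits the top (the 5-dot query above is how you'd cut the query count down inside a trigger):
public class AccountHierarchyUtil {
    // hypothetical helper: one SOQL query per level, so mind the 100-query governor limit
    public static Id findUltimateParent(Id accountId) {
        Id current = accountId;
        Id parentId = [SELECT ParentId FROM Account WHERE Id = :current].ParentId;
        while (parentId != null) {
            current = parentId;
            parentId = [SELECT ParentId FROM Account WHERE Id = :current].ParentId;
        }
        return current;
    }
}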
Another option would be to have a flat helper object aside from the account table, with a unique id set to the account id. Have a background process periodically refreshing that table, even every hour, using a batch job or a reporting snapshot. Still not great if you have millions of accounts, waste of storage... but maybe you could use Big Objects.
Have you ever used Platform Cache? If the ultimate parent has to be fetched via Apex (instead of being a real field on Account), you could try to make some kind of "linked list" implementation where you store Id -> ParentId in the cache and can traverse it without wasting any queries. The cache's max TTL is 48h (so you might still need a nightly job to rebuild it) and you'd still have to update it on every insert/update/delete/undelete...
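A rough sketch of that cache idea (assumes you've created an org cache partition named AccountTree; childId, parentId and someAccountId stand in for your real variables; record Ids are alphanumeric, so they're legal cache keys):
Cache.OrgPartition part = Cache.Org.getPartition('local.AccountTree');
// store one Id -> ParentId link (e.g. from a nightly batch over all accounts)
part.put(String.valueOf(childId), String.valueOf(parentId));
// later: walk the "linked list" upwards without a single SOQL query
String current = String.valueOf(someAccountId);
String parentKey = (String) part.get(current);
while (parentKey != null) {
    current = parentKey;
    parentKey = (String) part.get(current);
}
// current now holds the ultimate parent's Id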
So yeah, "it depends". Write more about your requirements.

Want to capture fields which get updated in Salesforce

I wish to create a generic component which can save the object name and field names, with old and new values, in a BigObject.
The brute-force algorithm says: on every update of each object, get the field API names using describe and check the old and new values of those fields. If a field was modified, insert a record into the new BigObject.
But this will consume a lot of CPU time, and I am looking for an optimal solution to handle this.
Any suggestions are appreciated.
Well, do you have any code written already? Maybe benchmark it and then see what you can optimise, instead of overdesigning it from the start... Keep it simple: write a test harness and then try to optimise (without breaking the unit tests).
A couple of random ideas:
You'd be doing that in a trigger? So your "describe" could happen only once. You don't need to describe every single field; you need only one operation outside of the trigger's main loop.
Set<String> fieldNames = Account.sObjectType.getDescribe().fields.getMap().keySet();
System.debug(fieldNames);
This will get you "only" the field names but that's enough. You don't care whether they're picklists or dates or whatever. Use that with the generic sObject.get('fieldNameHere') and it's a good start.
Or maybe without describe at all: sObject's getPopulatedFieldsAsMap() will give you a handy Map which you can easily iterate over and compare.
Or JSON.serialize the old & new versions of the object, and if they aren't identical, you know what to do. No idea whether they'll always serialise with the same field order though, so checking whether the maps are identical might be better.
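A sketch of the getPopulatedFieldsAsMap() route in an after update trigger (Account is just an example object, and Field_Change__b is a made-up name for wherever you stash the diff):
trigger TrackAccountChanges on Account (after update) {
    for (Account newAcc : Trigger.new) {
        Account oldAcc = Trigger.oldMap.get(newAcc.Id);
        // only the fields actually present on the new version, no describe needed
        Map<String, Object> populated = newAcc.getPopulatedFieldsAsMap();
        for (String field : populated.keySet()) {
            Object oldValue = oldAcc.get(field);
            Object newValue = populated.get(field);
            if (oldValue != newValue) {
                // build your Field_Change__b record here (object, field, old value, new value)
            }
        }
    }
}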
Do you really need to hand-craft field history tracking like that? You have 1M records of free storage but it could explode really easily in a busier SF org, especially if you have workflows, processes, or other triggers that translate to multiple updates (= multiple trigger runs) in the same transaction. Perhaps normal field history tracking + Chatter feed tracking + even Salesforce Shield (it comes with 60 more fields tracked, I think) would be more sensible for your business needs.

Ways to design a digest email implementation?

I'm looking for some thoughts about how to design a digest email feature. I'm not concerned about the actual business code; instead I'd like to focus on the gist of it.
Let's tackle this with a known example: articles. Here's a general overview of some important features:
The user is able to choose the digest frequency (e.g. daily or weekly);
The digest only contains new articles;
"New articles" are to be considered relative to the previous digest that was sent to a specific user;
I've been thinking about the following:
Introduce per-user tracking of articles previously included in a digest and filter those out?
Requires a new database table;
Could become expensive when the table contains millions of rows;
What to do in case of including multiple types of models in the digest? Multiple tracking tables? Polymorphic table? ...?
Use article creation dates to include articles between current date and the chosen digest frequency?
Uses current date and information already present in the database, so no new tables required;
What happens when a user changes from daily to weekly emails? He could receive the same article again in the weekly digest. Should this edge case be considered? If so, how to mitigate?
For some reason the creation date of an article is being updated to today, positively triggering the date comparison again. Should this edge case be considered? If so, how to mitigate?
Or can you think of other ways to implement this feature?
I'm eager to learn your insights.
You can make an additional table that contains information about each user's digest subscription. This keeps the database design cleaner and more universal, because mailing is a separate logical module. Aside from that, the additional table makes it easy to extend the stored subscription data in the future. For example:
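A minimal sketch of such a table (the column names are assumptions matching the queries below):
CREATE TABLE digest_subscription (
    user_id                INT NOT NULL PRIMARY KEY,
    interval_type          VARCHAR(10) NOT NULL,  -- 'daily' or 'weekly'
    last_date_distribution DATETIME NOT NULL      -- when the last digest was sent
);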
Using this table, you can manage the data easily. For example, you can select all recipients of the daily digest:
SELECT *
FROM digest_subscription
WHERE interval_type = 'daily'
AND last_date_distribution <= NOW()
or select all recipients of the weekly digest:
SELECT *
FROM digest_subscription
WHERE interval_type = 'weekly'
AND last_date_distribution <= NOW() - INTERVAL 7 DAY
Filtering by interval type and comparing the last distribution date with "equal or less" avoids problems with untimely sending of emails (for example, technical failures on a server, etc.).
Also, you can build the correct article list using the last distribution date. Relying on the last distribution date avoids the problems caused by interval changes. For example:
SELECT *
FROM articles
WHERE created_at >= <the last date distribution of the user>
Of course, this doesn't avoid the problem of updated creation dates. But you should minimize the reasons for that happening: for example, your code can update the modification date, but it shouldn't touch the creation date.
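The design above also implies one more statement: after a digest is actually sent, advance the subscriber's last distribution date so the next run picks up from there.
UPDATE digest_subscription
SET last_date_distribution = NOW()
WHERE user_id = <the recipient's user id>;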

Recurring Events Database Model

I've been searching for a solution for recurring events; so far I've found two approaches:
First approach:
Create an instance for each event, so if the user has a daily event for one year, 365 rows would be necessary in the table.
It sounds plausible for a fixed time frame, but how to deal with events that have no end date?
Second approach:
Create a recurring-pattern table that creates future events at runtime using some kind of temporal expression (Martin Fowler).
Is there any reason to not choose the first approach instead of the second one?
The first approach is going to overpopulate the database and maybe affect performance, right?!
There's a quote about the approach number 1 that says:
"Storing recurring events as individual rows is a recipe for disaster." (https://github.com/bmoeskau/Extensible/blob/master/recurrence-overview.md)
What do you guys think about it? I would like some insights on why that would be a disaster.
I appreciate your help
The proper answer is really both, and not either/or.
Setting aside for a moment the issue of no end date for recurrence: what you want is a header that contains the recurrence rules for the whole pattern. That way, if you need to change the pattern, you've captured it in a single record that can be edited without risking update anomalies.
Now, joining against some kind of recurrence pattern in SQL is going to be a great big pain in the neck. Furthermore, what if your rules allow you to tweak (edit, or even delete) specific instances of this recurrence pattern?
How do you handle this? You have to create an instance table with one row per recurring instance, with a link (foreign key) back to the single rule that was used to create it. This lets you modify an individual child without losing sight of where it came from, in case you need to edit (or delete) the entire pattern.
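A minimal sketch of that pair of tables (names and columns are illustrative, not prescriptive):
CREATE TABLE recurrence_pattern (
    id           INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    starts_at    DATETIME NOT NULL,
    frequency    VARCHAR(10) NOT NULL,  -- e.g. 'daily', 'weekly'
    repeat_until DATETIME NULL          -- a forced end date, see below
);
CREATE TABLE event_instance (
    id          INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    pattern_id  INT NOT NULL REFERENCES recurrence_pattern (id),
    occurs_at   DATETIME NOT NULL,
    is_modified BOOLEAN NOT NULL DEFAULT FALSE  -- edited independently of the pattern?
);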
Consider a calendaring tool like Outlook or Google Calendar. These applications use this approach. You can move or edit an instance. You can also change the whole series. The apps ask you which you mean to do whenever you go into an editing mode.
There are some limitations to this. For example, if you edit an instance and then edit the pattern, you need to have a rule that says either (a) new parent wins or (b) modified children always win. I think Outlook and Google Calendar use approach (a).
As for why having each instance recorded explicitly would be a problem: the only disastrous thing I can think of is that if you didn't have the link back to the original recurrence pattern, you would have a heck of a time cancelling the whole series in one action.
Back to no end date - This might be a case of discretion being the better part of valour and using some kind of rule of thumb that imposes a practical limit on how far into the future you extend such a series - or alternatively you could just not allow that kind of rule in a pattern. Force an end to the pattern and let the rule's creator worry about extending it at whatever future point it becomes necessary.
Store the calendar's event as a rule rather than just as a materialized event.
Storing a recurring event materialized as individual rows is a recipe for disaster for the obvious reason that the materialization would ideally be of infinite length. Since an endless table is not possible, the developer will try to mimic that behavior with some clever, incomprehensible trick, resulting in erratic behavior of the application.
My suggestion: store the rules, and materialize them into rows only when queried, leading to a hybrid approach.
So you will have two tables storing your information: the first for storing rules, the second for storing rows materialized from any rule in the rules table.
The general guidelines can be:
For a one-time event, add a row to the second table.
For a recurring event, add a row to the first table and materialize some of its occurrences into the second table.
For a query about a future date, materialize the rules and save the results in the second table (see the sketch after this list).
For a modification of a specific instance of a recurring event, materialize the events up to the instance you want to modify, then modify that last instance and store it.
Further, if the event is too far in the future, do not materialize it; keep it as a rule and execute the rule later, when the time arrives.
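To make the materialization step concrete, a MySQL 8-flavoured sketch (table and column names follow the earlier sketch; rule 42 and the horizon date are placeholders) that expands a daily rule into rows with a recursive CTE:
INSERT INTO event_instance (pattern_id, occurs_at)
WITH RECURSIVE gen (occurs_at) AS (
    SELECT starts_at FROM recurrence_pattern WHERE id = 42
    UNION ALL
    SELECT occurs_at + INTERVAL 1 DAY FROM gen
    WHERE occurs_at + INTERVAL 1 DAY <= '2025-12-31'  -- materialization horizon
)
SELECT 42, occurs_at FROM gen;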
Plain tables will not be enough to store what you are trying to save. This kind of information is best maintained when access and modification go through stored procedures.
From the answers in the blog post and the answers here, storing each occurrence as an individual row will:
1- eat DB storage and memory with these recurrences (needlessly), with the extreme case of "no end date"
2- impact performance (for query / join / update / ...)
3- in case of an update (or generally any case where you need to handle the recurrence set as a set, not as individual occurrences), force you to update all rows

Best practices for managing updating a database with a complex set of changes

I am writing an application where I have some publicly available information in a database which I want the users to be able to edit. The information is not textual like a wiki, but it is similar in concept because the edits bring the public information increasingly closer to the truth. The changes will affect multiple tables, and an update needs to be automatically checked before affecting the public tables.
I'm working on the design and I'm wondering if there are any best practices that might help with some particular issues.
I want to provide undo capability.
I want to show the user the combined result of all their changes.
When the user says they're done, I need to check the underlying public data to make sure it hasn't been changed by somebody else.
My current plan is to have the user work in a set of tables set up as a private working area. Once they're ready, they can kick off a process to check everything and update the public tables. Undo can be recorded using the Command pattern, saving to a table.
Are there any techniques I might have missed or useful papers or patterns?
Thanks in advance!
I would do it like this:
Use insert-only tables: you never update data, only add new rows
Each row has a valid-from date, a valid-to date, and who made the change
Read the data through a query where the valid-to date is null and the row is approved; this gives the most recent approved row
When a user adds data, he can see his changes by selecting the last rows that he added
When the user is happy with the changes he has made, he can mark them as approved
Once they are approved, they can be seen by other users
Undo is not a problem since you have all the previous versions; you can mark a row as no longer approved and revert to a previous version.
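A sketch of what that could look like (generic SQL; table and column names are illustrative):
CREATE TABLE public_info (
    record_id  INT NOT NULL,       -- logical record, shared by all of its versions
    version    INT NOT NULL,
    payload    TEXT,               -- the actual editable data
    changed_by INT NOT NULL,
    valid_from DATETIME NOT NULL,
    valid_to   DATETIME NULL,      -- NULL = most recent version
    approved   BOOLEAN NOT NULL DEFAULT FALSE,
    PRIMARY KEY (record_id, version)
);
-- what other users see: the most recent approved state
SELECT *
FROM public_info
WHERE valid_to IS NULL
  AND approved = TRUE;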
