How to know whether a Salesforce table field is auto-calculated?

Salesforce provides the CaseMilestone table. Each time I call the API to get the same object, I noticed that the TimeRemainingInMins field has a different value, so I guessed this field is auto-calculated each time I call the API.
Is there a way to know which fields in a table are auto-calculated?
Note: I am using the Python simple-salesforce library.

CaseMilestone is special because it's used as a countdown to service level agreement (SLA) violation and drives some escalation rules. Depending on how the admin configured the clock, you may notice it stops for weekends and bank holidays, or maybe counts only Mon-Fri 9-17...
Out of the box, another place that may have similar functionality is the OpportunityHistory table. I don't remember exactly, but SF uses it for duration reporting: how long an opportunity spent in each stage.
That's standard functionality. As for custom fields that change every time you read them despite nothing actually changing on the record (LastModifiedDate staying the same): your admin could have created formula fields based on NOW() or TODAY(), and these recalculate every time you read them. You'd need some "describe" calls to get the field types and the formula itself, as in the sketch below.
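Since you are already using simple-salesforce, this is only a few lines. A minimal sketch, with placeholder credentials; the calculated and calculatedFormula attributes are part of the standard describe metadata, though some system-maintained fields are flagged calculated without exposing any formula text:

from simple_salesforce import Salesforce

# Placeholder credentials - use your own org's login details.
sf = Salesforce(username="user@example.com", password="...",
                security_token="...")

# Describe the object and inspect each field's metadata.
for field in sf.CaseMilestone.describe()["fields"]:
    # 'calculated' flags formula/system-derived fields; for custom formula
    # fields, 'calculatedFormula' holds the formula text (e.g. with TODAY()).
    if field["calculated"]:
        print(field["name"], field["type"], field.get("calculatedFormula"))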

Related

Salesforce Admin Sharing Rule

[Note: There is a Teacher object with fields such as Teacher Name, DateofJoining, and also a formula field called Experience]
My task was to create a Public Group consisting of another user,
and this user should only see teachers whose experience is greater than 2 years.
But when I create a sharing rule based on criteria, the field called Experience doesn't show up, as it is a formula field.
So I got the idea of creating a new field (maybe a text or number data type) which would hold the value of Experience. (But I have no idea how to implement this.)
Is there a way to implement this?
Any other solution is also well appreciated!
Hard to say.
The normal trick would be to create a helper field (text, number, whatever) and a piece of functionality that populates it. An "early flow" or "before insert, before update" trigger, ideally; worst case a normal flow, Process Builder, or "after insert, after update" trigger. Something like "if Experience__c != 'your formula here' then Experience__c = 'your formula here'". Consult the normal SF help and Trailhead if you have never used early flows.
You'd make a one-off data fix to populate existing records and job done; a normal field is selectable as sharing rule criteria.
=====
But I smell trouble with your formula. What exactly do you have there, something like Experience__c = (TODAY() - DateofJoining__c) / 365? That's a bit evil. Formulas with TODAY(), NOW() or anything with $ variables (roughly speaking, things about who's looking at the data: the user's name, profile, role... not what's actually on the record itself) are "nondeterministic". Unpredictable.
A TODAY() changes just like that, without updating the record. Sure, when you view the record a fresh value is calculated, but other than that LastModifiedDate doesn't change, and there's no magical trigger running at midnight that rechecks sharing (especially since there's no single midnight; you could have users in multiple timezones). SF just doesn't allow nondeterministic fields in many places; see https://salesforce.stackexchange.com/q/32122/799
So if you do rely on TODAY() in your formula, you might have to make a "scheduled flow" or read about Schedulable, Batchable Apex: create a nightly job that runs and recalculates your helper field with the right experience. You'd probably even need both solutions: a "before save" flow for new data created today, and a nightly job to advance the clock on existing old data...
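The native answers are a scheduled flow or Schedulable/Batchable Apex. Purely to illustrate what the nightly job has to do, here is a sketch from outside the org using the simple-salesforce library mentioned in the first question; Teacher__c, DateofJoining__c and the helper field Experience_Num__c are assumed API names:

from datetime import date
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...",
                security_token="...")

# Hypothetical object/field names - adjust to your org's schema.
records = sf.query_all(
    "SELECT Id, DateofJoining__c FROM Teacher__c "
    "WHERE DateofJoining__c != null"
)["records"]

today = date.today()
for rec in records:
    joined = date.fromisoformat(rec["DateofJoining__c"])
    # Deterministic snapshot of the experience, in years.
    years = round((today - joined).days / 365.0, 2)
    sf.Teacher__c.update(rec["Id"], {"Experience_Num__c": years})

For real volumes you would push the updates through the Bulk API rather than one call per record, but the shape of the job is the same.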

Ways to design a digest email implementation?

I'm looking for some thoughts about how to design a digest email feature. I'm not concerned about the actual business code; instead I'd like to focus on the gist of it.
Let's tackle this with a known example: articles. Here's a general overview of some important features:
The user is able to choose the digest frequency (e.g. daily or weekly);
The digest only contains new articles;
"New articles" are to be considered relative to the previous digest that was sent to a specific user;
I've been thinking about the following:
Introduce per-user tracking of articles previously included in a digest and filter those out?
Requires a new database table;
Could become expensive when the table contains millions of rows;
What to do in case of including multiple types of models in the digest? Multiple tracking tables? Polymorphic table? ...?
Use article creation dates to include articles between current date and the chosen digest frequency?
Uses current date and information already present in the database, so no new tables required;
What happens when a user changes from daily to weekly emails? He could receive the same article again in the weekly digest. Should this edge case be considered? If so, how to mitigate?
For some reason the creation date of an article gets updated to today, triggering the date comparison again even though the article isn't new. Should this edge case be considered? If so, how to mitigate?
Or can you think of other ways to implement this feature?
I'm eager to learn your insights.
You can add a table that holds each user's digest subscription. This keeps the database design cleaner and more universal, because mailing is a separate logical module. It also makes it easy to extend the stored subscription data in the future. For example:
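The column set can be inferred from the queries below; here is a minimal sketch of the table using Python's sqlite3 (the names are assumptions, and the MySQL-style NOW()/INTERVAL in the queries below become Python datetimes in the later sketch):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE digest_subscription (
        user_id                INTEGER PRIMARY KEY,
        interval_type          TEXT NOT NULL,       -- 'daily' or 'weekly'
        last_date_distribution TIMESTAMP NOT NULL   -- when the last digest went out
    )
""")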
With this table, managing the data is easy. For example, you can select all recipients of the daily digest:
SELECT *
FROM digest_subscription
WHERE interval_type = 'daily'
AND last_date_distribution <= NOW()
or all recipients of the weekly digest:
SELECT *
FROM digest_subscription
WHERE interval_type = 'weekly'
AND last_date_distribution <= NOW() - INTERVAL 7 DAY
Filtering by interval type and comparing the last distribution date with "less than or equal" avoids problems with untimely sending of emails (for example, after technical failures on a server).
You can also build the correct article list from the last distribution date. Using the last distribution date avoids the problems caused by an interval change. For example:
SELECT *
FROM articles
WHERE created_at >= <the last date distribution of the user>
Of course, this doesn't avoid the problem of updated creation dates, but you should minimize the reasons for that happening: your code can update the modification date, but it shouldn't modify the creation date.
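Putting the pieces together, a weekly run might look like this. A sketch continuing the sqlite3 example above, assuming an articles(id, title, created_at) table as in the query above; send_digest is a hypothetical stand-in for your mailer:

from datetime import datetime, timedelta

def run_weekly_digest(conn, send_digest, now=None):
    now = now or datetime.now()
    # "Equal or less" comparison, so a run missed after a server
    # failure is caught up on the next one.
    due = conn.execute(
        "SELECT user_id, last_date_distribution FROM digest_subscription "
        "WHERE interval_type = 'weekly' AND last_date_distribution <= ?",
        ((now - timedelta(days=7)).isoformat(),),
    ).fetchall()

    for user_id, last_sent in due:
        # Everything created since this user's own last digest - no fixed
        # window, which is what makes a daily-to-weekly switch safe.
        articles = conn.execute(
            "SELECT id, title FROM articles WHERE created_at >= ?",
            (last_sent,),
        ).fetchall()
        if articles:
            send_digest(user_id, articles)  # hypothetical mailer callback
        conn.execute(
            "UPDATE digest_subscription SET last_date_distribution = ? "
            "WHERE user_id = ?",
            (now.isoformat(), user_id),
        )
    conn.commit()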

memgraphdb: Support for time travel queries in graph databases

Let's say I want to model a graph with sales people. They belong to an organisation, have a manager, etc. They are assigned to specific territories and/or client accounts. Your company may work with external partners, which must be managed, and so on. A nice, non-trivial graph.
Elements in this graph keep changing all the time: sales people come and go, or move within the organisation and thus change responsibilities; customers sign contracts or cancel them, ...
In my specific use cases, the point in time is very important. What did the graph look like at the end of last month? At the end of the last fiscal year? Last Monday when we ran job ABC? E.g. what was the manager hierarchy at the end of last month? Which clients did the salesperson manage at the end of last month? And so on.
In our use cases, DELETE doesn't delete anything; instead some sort of end_date gets updated. UPDATE doesn't update anything; instead a new version of the record is created.
I'm sure I can add CREATED and START-/END_DATE properties to nodes as well as relations, and of course I can also write the queries. But these queries are a pain to write and almost unreadable, with tons of repeating where clauses everywhere.
I wish graph databases (and their graphical query builder) would allow me to travel in time more easily, e.g. by setting a session variable to a point in time, and all the where clauses are automatically added for all nodes and references that have the start/end date properties. The algorithm should not fail for objects that don't have these properties, but consider the condition met.
What are your thoughts about this use case, and what help does Memgraph provide for it?
thanks a lot
Juergen
As far as I am aware, there is not any graph database that directly supports the type of functionality you are asking about, although as #buda points out you can model and query against time-series data. I agree with #buda that the way you would like this to work seems a bit undefined and very application-specific, so I would not expect it to be a feature of any database.
The closest thing I can think of to out-of-the-box support for something like this would be to use a TinkerPop-enabled database with a PartitionStrategy or SubgraphStrategy to create a subgraph of only the time range you want, and then query against that. Another option would be creating a domain-specific language to minimize the amount of repeated code in your queries.
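For example, with gremlin-python against a TinkerPop-enabled store, a SubgraphStrategy can hide everything that was not valid at the chosen timestamp. This is a sketch, not Memgraph-specific: start_date/end_date are the property names from the question, and the hasNot branches keep elements that lack the properties entirely, treating the condition as met, as you described:

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import SubgraphStrategy
from gremlin_python.process.traversal import P
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

as_of = 1672444800000  # the point in time to travel to, epoch millis

# Visible if no validity properties at all, or start_date <= as_of < end_date.
valid = __.or_(
    __.hasNot("start_date"),
    __.and_(
        __.has("start_date", P.lte(as_of)),
        __.or_(__.hasNot("end_date"), __.has("end_date", P.gt(as_of))),
    ),
)

g = traversal().withRemote(
    DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
).withStrategies(SubgraphStrategy(vertices=valid, edges=valid))

# Every traversal on g now sees the graph as of as_of, e.g. last
# month's manager hierarchy:
managers = g.V().has("name", "Jane").out("reports_to").values("name").toList()

Set the strategy once per session and the repeated where clauses disappear from the individual queries, which is essentially the session-variable behaviour you are wishing for.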

Update field based on null or not null value of field in another table

NOTE: Microsoft Access being used
I'm currently working on a database system in Access for loan management of resources.
I currently have a resource table (which holds all the resource info) with a quantity field. I need it so that when someone takes out a loan of a certain resource (identified using ResourceID as the PK), Quantity is decremented by 1, and when someone returns the book it is incremented by 1 (DateOut and DateReturned fields possibly used?).
I just need to find a way to implement this in Microsoft Access, but I can't come up with anything.
If you use VBA then your form will need to fire an event, such as AfterUpdate.
Another option you have is to use Data Macros. You can set up a Data Macro to "trigger" after the table is updated. In this trigger you would check if the conditions have been met, and then if they have, you could update the value. If you need to increment / decrement in another record or table, then you'll need to use the AfterUpdate event. If, however, the increment/decrement field is in the same record you can use the BeforeChange event, and it becomes much easier to implement.
Of course, you should also consider your design. Perhaps the way the table is currently designed can be improved upon, given the new requirement. If you are simply maintaining a running "on hand" / "on loan" count for each book (each book having its own record), then what will you do when the counts don't balance? Maybe you want another table to keep track of who signed it out / returned it and when.
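This isn't Data Macro syntax, but to make the increment/decrement concrete, here is the same logic as plain SQL driven from Python via pyodbc; the table and field names are the ones from the question, and the driver string assumes the Access ODBC driver is installed:

import pyodbc

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\path\to\loans.accdb"
)

def check_out(resource_id):
    # One fewer copy available; a loan row would record DateOut.
    conn.execute(
        "UPDATE Resource SET Quantity = Quantity - 1 WHERE ResourceID = ?",
        resource_id,
    )
    conn.commit()

def check_in(resource_id):
    # Copy returned; this is the update an AfterUpdate Data Macro would
    # perform when DateReturned is filled in.
    conn.execute(
        "UPDATE Resource SET Quantity = Quantity + 1 WHERE ResourceID = ?",
        resource_id,
    )
    conn.commit()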

Track time spent in MSCRM

My requirement is that I need to be able to track the time each salesperson spends on activities, plus a report that administrators can run to see the amount of time each user spent working on calls/sales/opportunities etc.
What should I use to track how long a user spends on particular activities?
I think I can do it using auditing.
Do you have any better ideas?
User auditing isn't going to be much help here: for starters it only reports every couple of hours, and secondly you can't write reports against the audit table.
I would suggest adding a duration field to the entities you want to track time against - activities already have a field for this. Then users just have to populate it manually.
Or, if you want to automate it, you could use form JavaScript, for example:
New Field: Number, Duration
On Form Load: Capture a start time
On Form Save: Capture an end time
On Form Save: Work out the difference between the two, then add to the duration field
You would have to do this for every entity you want to track, though. It's also not guaranteed to be accurate: for example, if a user opens a couple of records at once, goes to lunch, or just doesn't save the record immediately, a much longer duration could be recorded than actually occurred.
