Bounce email handling for "mailbox full" using Salesforce triggers

I'm trying to handle emails that bounce with a "mailbox full" response by creating a trigger that resends the message whenever the bounce message contains "mail box full".
The issue I'm facing is that I need to limit the number of resends to 3.
What I have now keeps resending the email as soon as a bounced email is received.
My trigger is:
trigger trgBouncedEmails on EmailMessage (after insert) {
    for (EmailMessage myEmail : Trigger.new) {
        // mailbox full bounced email
        if (myEmail.HtmlBody.contains('full')) {
            Case[] parentCase = [SELECT Id FROM Case WHERE Id = :myEmail.ParentId];
            if (myEmail.Subject.contains('Financial Review'))
                parentCase[0].Resend_Email_Send__c = true; // this will trigger a workflow to send the email again
            update parentCase;
        }
    }
}
How can I limit the resending? Is there a way I can set a wait time before I do the "update parentCase"?
Is there a better way to solve this issue, given that I have different types of emails, each with a different template and a different purpose?
EDIT
The system should automatically resend the email up to 3 times within a 24-hour period and then stop. My trigger keeps resending indefinitely, and I'm trying to find a way to add a wait so that it only sends 3 times in a 24-hour period, e.g. once every 8 hours.

#grigriforce beat me to the punch - I would also suggest using a field to count the number of retries, rather than a simple boolean value. Here's a "bulkified" trigger with essentially the same logic as the one you posted:
trigger trgBouncedEmails on EmailMessage (after insert) {
    List<Id> parentCaseIds = new List<Id>();
    for (EmailMessage myEmail : Trigger.new) {
        // mailbox full bounced email for Financial Review emails
        if (myEmail.HtmlBody != null && myEmail.HtmlBody.contains('full')
                && myEmail.Subject != null && myEmail.Subject.contains('Financial Review')) {
            parentCaseIds.add(myEmail.ParentId);
        }
    }
    // the count field must be queried before it can be incremented
    Case[] parentCases = [SELECT Id, Resend_Email_Count__c FROM Case WHERE Id IN :parentCaseIds];
    for (Case c : parentCases) {
        // null-safe increment; this will trigger workflow to send the email again
        c.Resend_Email_Count__c = (c.Resend_Email_Count__c == null) ? 1 : c.Resend_Email_Count__c + 1;
        c.Resend_Email_Time__c = System.now(); // keep track of when it was last retried
    }
    update parentCases;
}
Update to space the emails out evenly over a 24-hour period:
Rework your workflow rule so it checks that at least 8 hours have passed since Resend_Email_Time__c was last set, and then schedule an Apex job to run every hour to pick up eligible Cases that need their emails resent, calling update on them so the workflow doesn't go too long without firing:
global class ResendCaseEmails implements Schedulable {
    global void execute(SchedulableContext sc) {
        // query Cases, not Contacts - the resend fields live on the Case object
        Case[] cs = [SELECT Id, Resend_Email_Count__c, Resend_Email_Time__c
                     FROM Case WHERE Resend_Email_Count__c < 4];
        List<Case> ups = new List<Case>();
        for (Case c : cs) {
            if (c.Resend_Email_Time__c != null && c.Resend_Email_Time__c.addHours(8) < System.now())
                ups.add(c);
        }
        update ups; // the update re-evaluates the workflow rule
    }
}
Note that it's not a best practice to have this logic in the class that implements Schedulable - ideally it would live in a separate class that the ResendCaseEmails class calls.
You can schedule this job to run once an hour by calling this code from the developer console:
ResendCaseEmails sched = new ResendCaseEmails();
String cron = '0 0 * * * ?'; // seconds minutes hours day-of-month month day-of-week: top of every hour
System.schedule('Resend Case Email Job', cron, sched);

You could simply change the resend boolean on the Case to an integer count of the send attempts, and have your workflow rule resend only while that count is less than 3.
Case[] parentCase = [SELECT Id, Resend_Email_Count__c FROM Case WHERE Id = :myEmail.ParentId];
if (myEmail.Subject.contains('Financial Review'))
    parentCase[0].Resend_Email_Count__c += 1; // this will trigger a workflow to send the email again
update parentCase;
Also, I assume you simplified the trigger to show the problem, but if not, you really need to bulkify it.

So, here's what you'd like to happen (feel free to correct me if I am wrong). You send an email, and if it bounces you'd like to re-send an email every 8 hours. The number of resends should be 3 max.
I would not use only triggers for this scenario. I'd instead design a solution using a Trigger, Scheduler, and maybe a custom table to keep track of the bounced email(s).
Let's call this table/object "Bounce Email Tracker". It'll have the following four fields:
Email Name (some unique description of the email)
Email Status (Sent, Bounced, ReSent, Failed)
Resend Count
Sent Email Timestamp
If and when an email is sent, you'd create an entry in this table using a trigger, with the status set to "Sent" and the "Sent Email Timestamp" set to the time the email went out. If the email bounces, another trigger would update the entry in the table to change the status of the record to "Bounced".
A scheduler will run regularly, retrieving all records from this new table where the status equals "Bounced" and checking the last time the email was sent using the value in "Sent Email Timestamp". It will take the following actions depending on the time sent and the resend count (a rough sketch follows the list):
If the resend count is less than 3 and the last email was sent 8 or more hours ago, send another email from the scheduler and change the status of the record to "Sent".
If the resend count has reached 3 and the status is still "Bounced", change the status to "Failed".
If the resend count is less than 3 but the last email was sent less than 8 hours ago, don't do anything.
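A minimal sketch of that scheduler logic in Apex, assuming a custom object named Bounce_Email_Tracker__c whose field names mirror the list above (all of these names are hypothetical):
global class BounceEmailResendJob implements Schedulable {
    global void execute(SchedulableContext sc) {
        List<Bounce_Email_Tracker__c> updates = new List<Bounce_Email_Tracker__c>();
        for (Bounce_Email_Tracker__c t : [SELECT Id, Email_Status__c, Resend_Count__c, Sent_Email_Timestamp__c
                                          FROM Bounce_Email_Tracker__c
                                          WHERE Email_Status__c = 'Bounced']) {
            Integer resends = (t.Resend_Count__c == null) ? 0 : t.Resend_Count__c.intValue();
            if (resends >= 3) {
                t.Email_Status__c = 'Failed'; // give up after 3 resends
                updates.add(t);
            } else if (t.Sent_Email_Timestamp__c <= System.now().addHours(-8)) {
                // resend the email here (e.g. via Messaging.sendEmail or a workflow field)
                t.Email_Status__c = 'Sent';
                t.Resend_Count__c = resends + 1;
                t.Sent_Email_Timestamp__c = System.now();
                updates.add(t);
            }
            // otherwise: less than 8 hours since the last send - leave the record alone
        }
        update updates;
    }
}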
I know this is a lot of effort, and I am sure it probably needs more thought, but this will provide a robust framework to track and resend bounced emails.
Hope this helps!
Anup

Related

Flink CEP SQL restrict output

I have a use case where I have 2 input topics in kafka.
Topic schema:
eventName, ingestion_time(will be used as watermark), orderType, orderCountry
Data for first topic:
{"eventName": "orderCreated", "userId":123, "ingestionTime": "1665042169543", "orderType":"ecommerce","orderCountry": "UK"}
Data for second topic:
{"eventName": "orderSucess", "userId":123, "ingestionTime": "1665042189543", "orderType":"ecommerce","orderCountry": "USA"}
I want to get all the userIds per orderType and orderCountry where the user performs the first event but not the second one within a window of 5 minutes, for a maximum of 2 such matches per user per orderType and orderCountry (i.e. up to 10 minutes only).
I have unioned both topics' data, created a view on top of it, and am trying to use Flink CEP SQL to get my output, but I'm not able to figure it out.
SELECT *
FROM union_event_table
MATCH_RECOGNIZE (
    PARTITION BY orderType, orderCountry
    ORDER BY ingestion_time
    MEASURES
        A.userId AS userId,
        A.orderType AS orderType,
        A.orderCountry AS orderCountry
    ONE ROW PER MATCH
    PATTERN (A not followed B) WITHIN INTERVAL '5' MINUTES -- "A not followed B" is the part I can't express
    DEFINE
        A AS A.eventName = 'orderCreated',
        B AS B.eventName = 'orderSucess'
)
The first thing I can't figure out is what to use in place of "A not followed B" in the SQL. The other thing is how to restrict the output for a userId to a maximum of 2 events per orderType and orderCountry, i.e. if a user doesn't perform the 2nd event after the 1st event in 2 consecutive 5-minute windows, the state of that user should be removed, so that I don't get output for that user for the same orderType and orderCountry again.
I don't believe this is possible using MATCH_RECOGNIZE. This could, however, be implemented with the DataStream CEP library by using its capability to send timed out patterns to a side output.
This could also be solved at a lower level by using a KeyedProcessFunction. The long ride alerts exercise from the Apache Flink Training repo is an example of that -- you can jump straight away to the solution if you want.
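For illustration, here is a minimal sketch of that DataStream CEP approach in Java. Event is a hypothetical POJO with eventName, userId, orderType, and orderCountry fields; the pattern names and output tag are likewise made up. Partial matches that time out (an orderCreated with no orderSucess within 5 minutes) are routed to a side output:
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.functions.PatternProcessFunction;
import org.apache.flink.cep.functions.TimedOutPartialMatchHandler;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;
import java.util.List;
import java.util.Map;

public class OrderTimeoutSketch {

    // side output for orderCreated events that never saw an orderSucess in time
    static final OutputTag<Event> TIMED_OUT = new OutputTag<Event>("timed-out") {};

    public static DataStream<Event> timedOutOrders(DataStream<Event> events) {
        Pattern<Event, ?> pattern = Pattern.<Event>begin("created")
                .where(new SimpleCondition<Event>() {
                    @Override public boolean filter(Event e) { return "orderCreated".equals(e.eventName); }
                })
                .followedBy("success")
                .where(new SimpleCondition<Event>() {
                    @Override public boolean filter(Event e) { return "orderSucess".equals(e.eventName); }
                })
                .within(Time.minutes(5));

        PatternStream<Event> ps = CEP.pattern(
                events.keyBy(e -> e.userId + "|" + e.orderType + "|" + e.orderCountry),
                pattern);

        return ps.process(new TimeoutHandler()).getSideOutput(TIMED_OUT);
    }

    // full matches are ignored; timed-out partial matches become the alerts we want
    static class TimeoutHandler extends PatternProcessFunction<Event, Event>
            implements TimedOutPartialMatchHandler<Event> {
        @Override
        public void processMatch(Map<String, List<Event>> match, Context ctx, Collector<Event> out) {
            // orderSucess arrived within 5 minutes: nothing to report
        }
        @Override
        public void processTimedOutMatch(Map<String, List<Event>> match, Context ctx) {
            ctx.output(TIMED_OUT, match.get("created").get(0));
        }
    }
}
The "at most 2 alerts per user per orderType/orderCountry" requirement isn't expressible in the pattern itself; a small stateful filter downstream (e.g. a KeyedProcessFunction keeping a per-key counter) would drop alerts beyond the second.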

What are the right data-structures to model this relationship, taking advantage of TTL in Redis

We are newbies to Redis and are trying to model the relationship below, where a userid can have multiple jids. Each jid has its own expiry time and needs to be expired and removed automatically at that time, and we need to ensure that there can only be 5 jids associated with a userid at any given point in time:
userid1 : { jid1 : 1551140357883,
jid2 : 1551140357882,
jid3 : 1551140357782,
jid4 : 1551140357782,
jid5 : 1551140357682 }
From the last requirement we figured that a sorted set (zset) might be a good fit, with the key of the set being the userid, the member being the jid, and the score of the member being the expiration time, like below:
zadd userid1 1551140357883 jid1
This helps us enforce the 5 jids per userid by checking the zset, but we are stuck on how to delete the jids on expiry, as we can't set a TTL on individual elements of the set. Any help pointing us in the right direction to the right data structure for this use case would be great.
Note: we may not want to introduce a batch job at this time to delete the expired tokens, and all our queries to Redis will be by userid.
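For what it's worth, a common pattern with this layout is to purge expired members lazily on every access, since all queries are by userid anyway. A sketch in plain Redis commands (scores are expiry timestamps in milliseconds, as in the example above; the concrete values are made up):
# on each access, first drop every member whose expiry score has already passed
ZREMRANGEBYSCORE userid1 -inf 1551140360000   # upper bound = current time in ms

# the members that remain are the live jids; enforce the limit of 5
ZCARD userid1                                 # reject a new jid if this returns 5
ZADD userid1 1551140390000 jid6
To keep idle userids from lingering forever, you can additionally EXPIRE the whole key for as long as its furthest-out score.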

hMailServer sending limit per day

I want to set a maximum number of emails sent per day for each mailbox on hMailServer, to avoid spamming.
I am looking for a way to do this through hMailServer administration and the COM API.
I believe there's no such property in hMailServer. You must define a script on the OnAcceptMessage event to implement this behaviour (see the hMailServer documentation for OnAcceptMessage).
For the number of emails sent per day you must create some kind of counter (a database table with username, date, and count fields) and check the current number of messages in the body of the OnAcceptMessage function. If the limit for the current user and current day has been reached, reject the email with return code 1 or 2 and a meaningful message. If the count is less, return 0 and the email will be sent.
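A rough sketch of what that could look like in EventHandlers.vbs - the table, column names, DSN, and daily limit are all hypothetical; the Result codes follow the answer above:
Sub OnAcceptMessage(oClient, oMessage)
    Dim conn, rs, sUser, nCount
    sUser = oMessage.FromAddress

    Set conn = CreateObject("ADODB.Connection")
    conn.Open "DSN=mailstats"   ' hypothetical DSN pointing at the counter database

    Set rs = conn.Execute("SELECT sent_count FROM daily_sent WHERE username = '" & sUser & _
                          "' AND sent_date = CURRENT_DATE")
    If Not rs.EOF Then nCount = rs("sent_count") Else nCount = 0

    If nCount >= 100 Then            ' hypothetical limit of 100 emails per day
        Result.Value = 2             ' reject the message
        Result.Message = "Daily sending limit reached"
    Else
        ' increment or insert today's counter row for sUser here
        Result.Value = 0             ' accept the message
    End If
    conn.Close
End Sub
(In a real script you'd also want to escape sUser or use a parameterized command rather than string concatenation.)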

Keeping count of user metrics based around time

I want to keep count of some kind of achievements for users in a community based website.
The idea is to give achievements for logging in 5 days in a row, or once every weekend for an entire month.
I'm also going to give achievements for reaching 100 posts, but that is easy to determine. The time-based examples I just gave are a little harder, I think.
How can I make some kind of generic system to keep count of these metrics per user? Or will I end up with a big table with fields such as "every_weekend_for_month" and "5_days_in_a_row", where once those integers reach 4 and 5 the achievement has been earned? But then, for both fields, I'd also have to keep track of the last weekend/day.
You will need to track all data that is (even partially) required to get the achievement.
For the achievements around logging in, you need to track each login once per day, having a table like:
user_id | login
1 | 2013-07-20
1 | 2013-07-19
1 | 2013-07-16
2 | 2013-07-20
...
Whenever the tracking event is triggered, you also check for the achievements.
event onLogin {
    // get the last 4 logins before the current login
    statement = (
        SELECT login FROM tracking_user_login
        WHERE user_id = 1
        ORDER BY login DESC
        LIMIT 1,4
    );
    statement.execute();

    // did the user even log in at least 4 times already?
    if (statement.rowCount == 4) {
        date lastLogin = todaysLogin;
        int consecutiveLogins = 1;

        // iterate descending through the last days
        foreach (row in statement) {
            if (row.login == (lastLogin - 1day)) {
                consecutiveLogins++;             // increment consecution
                lastLogin = (lastLogin - 1day);  // prepare next comparison
            } else {
                break; // consecution interrupted, ignore the rest
            }
        }

        // enough to achieve something?
        if (consecutiveLogins >= 5) {
            user.addAchievement('5 CONSECUTIVE LOGINS');
        }
    }
}
You can basically add all achievements around login in this event.
You could track all logins and use that data to extrapolate the achievements, but tracking and searching individual logins can be slow. If you have a very large user base and lots of logins, it may not be trivial to do these counts on every login.
If you want to be faster, you could track the last login date for a particular achievement and then increment a counter, which sounds like what you were thinking. For example:
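Here is a sketch of that counter approach in the same pseudocode style as the answer above (the user_achievement_progress table and its columns are made up for illustration):
event onLogin {
    // one row per user per achievement: (user_id, achievement, last_login, streak)
    row = (SELECT last_login, streak FROM user_achievement_progress
           WHERE user_id = 1 AND achievement = '5_IN_A_ROW');

    if (row.last_login == (today - 1day)) {
        row.streak++;       // consecutive day: extend the streak
    } else if (row.last_login != today) {
        row.streak = 1;     // streak broken (or first login): start over
    }                       // a second login on the same day leaves the streak alone
    row.last_login = today;
    row.save();

    if (row.streak >= 5) {
        user.addAchievement('5 CONSECUTIVE LOGINS');
    }
}
This trades the full login history for a single row per user and achievement, so the check is O(1) on each login.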

How do I easily keep records in a database linked together?

I've got a requirement that I believe must occur very frequently around the world. I have two records that are linked together and whenever a change is made to them a new pair of records is to be created and retain the same link.
The requirement I'm working on has to do with the insurance industry which requires me to deactivate the current insurance policies and re-activate them in a new row in order to show the history of changes made to the insurance policies. When they are re-created they still need to be linked together.
An example of how this process is intended to work from the view of rows in a database:
Insurance Id, Insurance Type, Master Insurance Id, Status
1, Auto Insurance, null, Active
2, Wind Screen Insurance, 1, Active
Note in the above how the link between these policies is denoted by the Master Insurance Id of the second row pointing to the Insurance Id of the first row.
In the code I am writing I am processing each of the policies one at a time so after the first step I have the following:
1, Auto Insurance, null, Inactive
2, Wind Screen Insurance, 1, Active
3, Auto Insurance, null, Active
When I process the second policy I get the following:
1, Auto Insurance, null, Inactive
2, Wind Screen Insurance, 1, Inactive
3, Auto Insurance, null, Active
4, Wind Screen Insurance, 1, Active //needs to be 3 not 1
You'll notice that when I create the new Wind Screen Insurance, since we copy the old row we end up with the Master Insurance Id pointing to the inactive row.
In order to get around this, I have to keep track of the master insurance id of the previous policy that was processed which has led to the following code:
int masterInsuranceId = -1;
foreach (Policy policy in policyList)
{
    // copy the old policy so the new policy has the same details as the old one
    Policy newPolicy = policyManager.Copy(policy);

    // if the new policy is not the master, point it at the newly created master insurance
    if (newPolicy.MasterInsuranceId.HasValue)
    {
        newPolicy.MasterInsuranceId = masterInsuranceId;
    }

    // save the details of the new policy
    policyManager.SavePolicy(newPolicy);

    // record the master id so we can update the master id reference on the next policy
    if (newPolicy.MasterInsuranceId == null)
    {
        masterInsuranceId = newPolicy.Id;
    }
    else
    {
        masterInsuranceId = -1;
    }

    // deactivate the current policy
    policy.Status = Inactive;
    policyManager.UpdatePolicy(policy);
}
Does anyone know how this can be simplified? What is the best way to ensure two records will remain linked to each other even as a history of the changes is recorded for each change made to the record?
What sort of database schema are you using? Usually this is where the relationship should be stored, and I think this should be handled at the database level rather than in code.
Here's a very simplified recommendation
insurance (< insurance_id >, name, description)
insurance_item(< item_id >, < insurance_id >, name, description)
insurance_item_details(< item_id >, < policy_id >, when_changed)
insurance has a 1-to-many relationship with insurance_item. An insurance_item has a 1-to-many relationship with insurance_item_details. Each row in insurance_item_details represents a change in policy.
This way, SQL can quickly retrieve the latest change:
SELECT * FROM insurance_item_details, insurance_item, insurance
WHERE insurance_item_details.item_id = insurance_item.item_id
AND insurance_item.insurance_id = insurance.insurance_id
ORDER BY when_changed DESC
LIMIT 1
Or you can even retrieve the full history.
(The SQL has not been tested.)
So the idea is that you don't duplicate insurance_item -- you have another table to store the elements that would change, and you attach a timestamp to represent each change as a relationship.
I'm not a SQL guru (unfortunately), but all you need to do is insert into the insurance_item_details table instead of making copies. From the way it looks, making copies as in your original example seems to violate 2NF, I think.
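For example, recording a policy change then becomes a single insert rather than a pair of cloned rows (the ids here are made up, following the hypothetical schema above):
-- record a new change for item 2 under policy 42 instead of cloning the row
INSERT INTO insurance_item_details (item_id, policy_id, when_changed)
VALUES (2, 42, CURRENT_TIMESTAMP);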
If you had a bad code design and you needed to make a change, you would refactor. Then why would you not consider refactoring a bad database design? This is something that is more easily handled in the database through good design.
If you are working in the insurance industry, which is data intensive, and you are not strong in database design and query skills, I would suggest you make it a priority to become so.
Thanks everyone who has provided answers. Unfortunately, due to conditions at work I'm unable to implement database changes and am stuck trying to make do with a coding solution.
After spending some time over the weekend on this problem I have come up with a solution that I believe simplifies the code a bit even though it is still nowhere near perfect.
I extracted the functionality out into a new method and pass in the master policy that I want the new policy linked to.
Policy Convert(Policy policy, Policy masterPolicy)
{
    Policy newPolicy = policyManager.Copy(policy);

    // link the policy to its master policy
    if (masterPolicy != null)
    {
        newPolicy.MasterPolicyId = masterPolicy.Id;
    }
    policyManager.SavePolicy(newPolicy);

    // deactivate the current policy
    policy.Status = Inactive;
    policyManager.UpdatePolicy(policy);

    return newPolicy;
}
This allows me to loop through all the policies and pass in the policy that needs to be linked, as long as the policies are sorted in the correct order - which in my case is by start date and then by master policy id.
Policy newPolicy = null;
foreach (Policy policy in policyList)
{
    Policy masterPolicy = policy.MasterPolicyId.HasValue ? newPolicy : null;
    newPolicy = Convert(policy, masterPolicy);
}
When all is said and done it isn't all that much less code, but I believe it is much more understandable, and it allows individual policies to be converted.
