I am running the following batch, and inside a for loop I am making this query:
lgm = [SELECT Group.Name, Group.Type, Group.Id, Group.OwnerId FROM GroupMember WHERE UserOrGroupId = :u.Id];
The batch loops over every user in the org, gets that user's permission sets, the public groups and queues to which the user is assigned, and the user's name and ID, then populates that information into a custom object called ConsolidatedUser.
I have not yet run the batch against a large number of records to see whether governor limits are reached, and would like your opinion; for now the batch works correctly.
Can I ask how many SOQL queries are allowed per transaction in a batch, so I can make sure I don't run into the limit?
Here is my code, and thank you for your help.
global class TDTRMIS_GetUserDetails implements Database.Batchable<sObject>, Database.Stateful {

    global String UserPermissionSets = '';
    global String UserGroups = '';
    global String UserQueues = '';
    global Integer i;

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, Name, (SELECT PermissionSet.Name, AssigneeId FROM PermissionSetAssignments) FROM User'
        );
    }

    global void execute(Database.BatchableContext bc, List<User> scope) {
        // process each batch of records
        List<ConsolidatedUser__c> lcu = new List<ConsolidatedUser__c>();
        List<GroupMember> lgm = new List<GroupMember>();
        for (User u : scope) {
            ConsolidatedUser__c cu = new ConsolidatedUser__c();
            lgm = [SELECT Group.Name, Group.Type, Group.Id, Group.OwnerId
                   FROM GroupMember WHERE UserOrGroupId = :u.Id];
            for (PermissionSetAssignment ps : u.PermissionSetAssignments) {
                UserPermissionSets = UserPermissionSets + ps.PermissionSet.Name + '|';
            }
            for (GroupMember gm : lgm) {
                if (gm.Group.Type == 'Regular') {
                    UserGroups = UserGroups + gm.Group.Name + '|';
                } else if (gm.Group.Type == 'Queue') {
                    UserQueues = UserQueues + gm.Group.Name + '|';
                }
            }
            cu.PermSet__c = UserPermissionSets;
            cu.PublicGroupList__c = UserGroups;
            cu.QueueGroupList__c = UserQueues;
            cu.User_Lookup__c = u.Id;
            cu.Name = u.Name;
            lcu.add(cu);
        }
        try {
            upsert lcu;
        } catch (Exception e) {
            System.debug(e);
        }
    }

    global void finish(Database.BatchableContext bc) {
        // to be added later
    }
}
Batch Apex is considered asynchronous Apex, so the "Total number of SOQL queries issued" limit in your case will be 200.
This limit applies to each invocation of the execute method: every execute call is its own transaction with a fresh set of governor limits.
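If you want to verify this at runtime, the Limits class reports both your current consumption and the ceiling for the running context; a debug line you could drop into your execute method (illustrative only):

// Prints e.g. "3 of 200" in an asynchronous (Batch Apex) context
System.debug(Limits.getQueries() + ' of ' + Limits.getLimitQueries() + ' SOQL queries used');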
I suggest you remove the SOQL query from your user for loop. Collect the user IDs in a collection inside the loop (or simply use the scope list), then make a single SOQL query to fetch the GroupMember records for all of those users at once.
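Something along these lines (a minimal, untested sketch reusing your existing field list):

// Build a map of user Id to that user's GroupMember records with ONE query per execute call
Map<Id, List<GroupMember>> membersByUserId = new Map<Id, List<GroupMember>>();
for (User u : scope) {
    membersByUserId.put(u.Id, new List<GroupMember>());
}
for (GroupMember gm : [SELECT Group.Name, Group.Type, Group.Id, Group.OwnerId, UserOrGroupId
                       FROM GroupMember
                       WHERE UserOrGroupId IN :membersByUserId.keySet()]) {
    membersByUserId.get(gm.UserOrGroupId).add(gm);
}
// Then, inside the user loop, replace the query with:
// List<GroupMember> lgm = membersByUserId.get(u.Id);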
Here is a related link: https://help.salesforce.com/articleView?id=000176644&language=en_US&type=1
This explains all you need to know about SOQL governor limits:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_gov_limits.htm
Basically, the SOQL query limit is 100 per synchronous transaction, and 200 per asynchronous transaction (which is what applies to Batch Apex).
It's never a good idea to put SOQL queries within loops. See the following for alternatives:
https://developer.salesforce.com/page/Apex_Code_Best_Practices
Firstly, sorry, as I am new to Apex and not sure where to start or what I need to do. My background is in Java, and although Apex looks familiar, I am unsure what to do or how to begin.
I am trying to write a Batch Apex job that:
If the uplift start date (a new field) is 10 months from the start date (an existing field), then:
Create an amendment quote for the contract and set the amendment date to the uplift date (a new field).
Copy the products that were previously added, set the existing quote line items to a quantity of 0 and their end date to the uplift start date, and add new line items with the uplift start date as the start date and the original end date.
Complete the quote, bypassing validation.
I do apologize; I know people usually show a code sample of what they have done and tried, but as I am unfamiliar with Apex, I am not sure what I need to do or even where to start coding.
Your question is very specific to your org and full of jargon that won't make much sense in other Salesforce orgs. We don't even know what the link between Quote and Contract is in your org. Out of the box there's no direct link as far as I know; I guess you could go "up" to Opportunity and then "down" to Quotes... And what does "bypassing validation" even mean: do you set some special field on the Quote, or what?
You'll need to throw a proper business analyst at it, or ask the user to demonstrate what they need manually (maybe even record the user's screen), and then you can have a stab at automating it.
What's an "amendment quote", a record type?
I mean, "if uplift start date (new field) is 10 months from start date (existing field)" - where are these fields: on Opportunity? Quote? Contract? Let's say 10 contracts meet this condition, you have this batch written, and it runs every night. What will happen tomorrow? Will it take the same 10 records and process them again, killing and creating new quotes? Where's the "today" element in this spec?
As for the technicalities... you'll need a few moving parts.
First, a skeleton of a batch job that can be scheduled to run daily:
public class Stack74373895 implements Database.Batchable<SObject>, Schedulable {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id FROM ...]);
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        ...
    }

    public void finish(Database.BatchableContext bc) {
    }

    public void execute(SchedulableContext sc) {
        Database.executeBatch(this);
    }
}
Next, some way to identify eligible... contracts? Experiment with queries in the Developer Console and put one in the start() method once you're happy. SOQL doesn't let you compare a field to another field (only a field to a literal), so if you really need the two date fields, you could cheat by making a formula field of type Checkbox (Boolean) with something like ADDMONTHS(StartDate__c, 10) = UpliftStartDate__c. Then in your query you can SELECT Id FROM Contract WHERE MyField__c = true (although, as I said above, this sounds a bit naive: if your batch runs every night, will it keep creating quotes every time?)
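For example, assuming such a checkbox formula field exists (MyField__c is a made-up name here), the start() method could be as simple as:

public Database.QueryLocator start(Database.BatchableContext bc) {
    // MyField__c: hypothetical formula checkbox, ADDMONTHS(StartDate__c, 10) = UpliftStartDate__c
    return Database.getQueryLocator([SELECT Id FROM Contract WHERE MyField__c = true]);
}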
Finally, actual code to "deep clone" one quote with its line items. Something like this should be close enough; no promises it compiles!
Id quoteId = '0Q0...';

Quote original = [SELECT Id, AccountId, ContractId, OpportunityId, Pricebook2Id,
                      (SELECT Id, Description, OpportunityLineItemId, PricebookEntryId, Product2Id, Quantity, UnitPrice
                       FROM QuoteLineItems
                       ORDER BY LineNumber)
                  FROM Quote
                  WHERE Id = :quoteId];

Quote clone = original.clone(false, true);
insert clone;

List<QuoteLineItem> lineItems = original.QuoteLineItems.deepClone(false);
for (QuoteLineItem li : lineItems) {
    li.QuoteId = clone.Id;
}
insert lineItems;

// set quantities on old line items to 0?
for (QuoteLineItem li : original.QuoteLineItems) {
    li.Quantity = 0;
}
update original.QuoteLineItems;
I'm working with Objectify.
In my app, I have Employee entities and Task entities. Every task is created for a given employee, and the Datastore generates the IDs for Task entities.
But I have a restriction: I can't save a task for an employee if that task overlaps with an already existing task.
I don't know how to implement that, or even how to start. Should I use a transaction?
@Entity
class Task {
    @Id
    private Long id;
    @Index
    private long startDate; // since epoch
    @Index
    private long endDate;
    @Index
    private Ref<Employee> createdFor;

    public Task(long startDate, long endDate, Employee assignedTo) {
        this.id = null; // the Datastore generates the id
        this.startDate = startDate;
        this.endDate = endDate;
        this.createdFor = Ref.create(assignedTo);
    }
}
@Entity
class Employee {
    @Id
    private String id;
    private String name;

    public Employee(String id, String name) {
        this.id = id;
        this.name = name;
    }
}
You can't do it with the entities you've set up, because between the time you query for tasks and the time you insert the new task, you can't guarantee that someone hasn't already inserted a conflicting task. Even if you use a transaction, any concurrently added, conflicting tasks won't be part of your transaction's entity group, so there's still the potential for violating your constraint.
Can you modify your architecture so that, instead of each task holding a ref to the employee it's created for, every Employee contains a collection of the tasks created for that Employee? That way, when you're querying the Employee's tasks for conflicts, the Employee is timestamped in your transaction's entity group, and if someone else puts a new task into it before you finish putting your new task, a ConcurrentModificationException is thrown and you can retry. But yes, have both your query and your put in the same transaction.
Read here about Transactions, Entity Groups and Optimistic Concurrency: https://code.google.com/p/objectify-appengine/wiki/Concepts#Transactions
As far as ensuring your tasks don't overlap: you need to check whether either your new task's start or end date falls within the date range of any previous task for the same employee. You also need to check that you're not creating a new task that starts before and ends after a previous task's entire date range. I suggest using a CompositeFilterOperator.and for each of these tests, then combining the three composite filters with a CompositeFilterOperator.or, which is the one you finally apply. There may be a more succinct way, but this is how I figure it:
Note these filters would not apply in the new architecture I'm suggesting. Maybe I'll delete them.
//////// Limit to tasks assigned to the same employee
Filter sameEmployeeTask =
    new FilterPredicate("createdFor", FilterOperator.EQUAL, thisCreatedFor);

//////// Check if the new startDate is in range of a prior task
Filter newTaskStartBeforePriorTaskEnd =
    new FilterPredicate("endDate", FilterOperator.GREATER_THAN, thisStartDate);
Filter newTaskStartAfterPriorTaskStart =
    new FilterPredicate("startDate", FilterOperator.LESS_THAN, thisStartDate);
Filter newTaskStartInPriorTaskRange =
    CompositeFilterOperator.and(sameEmployeeTask, newTaskStartBeforePriorTaskEnd, newTaskStartAfterPriorTaskStart);

//////// Check if the new endDate is in range of a prior task
Filter newTaskEndBeforePriorTaskEnd =
    new FilterPredicate("endDate", FilterOperator.GREATER_THAN, thisEndDate);
Filter newTaskEndAfterPriorTaskStart =
    new FilterPredicate("startDate", FilterOperator.LESS_THAN, thisEndDate);
Filter newTaskEndInPriorTaskRange =
    CompositeFilterOperator.and(sameEmployeeTask, newTaskEndBeforePriorTaskEnd, newTaskEndAfterPriorTaskStart);

//////// Check if the new task overlaps a prior one on both sides
Filter newTaskStartBeforePriorTaskStart =
    new FilterPredicate("startDate", FilterOperator.GREATER_THAN_OR_EQUAL, thisStartDate);
Filter newTaskEndAfterPriorTaskEnd =
    new FilterPredicate("endDate", FilterOperator.LESS_THAN_OR_EQUAL, thisEndDate);
Filter priorTaskRangeWithinNewTaskStartEnd =
    CompositeFilterOperator.and(sameEmployeeTask, newTaskStartBeforePriorTaskStart, newTaskEndAfterPriorTaskEnd);

//////// Combine them to test whether any of the three returns one or more tasks
Filter newTaskOverlapPriorTask =
    CompositeFilterOperator.or(newTaskStartInPriorTaskRange, newTaskEndInPriorTaskRange, priorTaskRangeWithinNewTaskStartEnd);

//////// Proceed
Query q = new Query("Task").setFilter(newTaskOverlapPriorTask);
PreparedQuery pq = datastore.prepare(q);
If you don't return any results, then you don't have any overlaps, so go ahead and save the new task.
OK, I hope I can be a bit more helpful. I've attempted to edit your question and change your entities to the right architecture: I've added an embedded collection of tasks and an attemptAdd method to your Employee, and a detectOverlap method to both your Task and your Employee. With those in place, you could use something like the transaction below. You will still need to deal with the case where your task doesn't get added because there's a conflicting task, and the case where the add fails with a ConcurrentModificationException, but you could make another question out of those; this should give you the start you need in the meantime.
Adapted from: https://code.google.com/p/objectify-appengine/wiki/Transactions
Task myTask = new Task(startDate, endDate, assignedTo);

public boolean assignTaskToEmployee(final Key<Employee> employeeKey, final Task myTask) {
    // Work<R> (rather than VoidWork) lets the transaction return a result
    return ofy().transact(new Work<Boolean>() {
        public Boolean run() {
            Employee assignee = ofy().load().key(employeeKey).now();
            boolean taskAdded = assignee.attemptAdd(myTask); // attemptAdd as described above
            ofy().save().entity(assignee).now();
            return taskAdded;
        }
    });
}
We have written de-duplication logic for Contact records where we call a batch job from a trigger. (Yes, it sounds weird, but it's the only thing that seems to work, as we have variable criteria for each account.) To overcome the limit of 5 concurrent batch jobs, we are using Data Loader with the Bulk API enabled and the batch size set to 1000, so that we can upload 5000 records successfully without hitting the limit. When I test with 3000 contact records, say named Test0001 to Test3000, I observe strange behavior.
For the 3000 records, 3 batch jobs start to run (as the batch size is 1000). I pass the newly inserted records as a parameter to the stateful batch class. What I expect is that 1000 records will be passed to each of the 3 batch jobs and compared against the existing records for duplicates (which I query in the start method of the batch), but I only get Test0001 to Test0200: of each batch of 1000 records inserted via the Data Loader API, only the FIRST 200 records are passed as a parameter to the batch class, and the remaining 800 are not. It looks as if only the first 200 records are processed when I insert with a batch size of 1000 through Data Loader with the Bulk API enabled.
Has anyone encountered this issue, or do you have any ideas on how to deal with it? I can share code as well, but I think the question is more conceptual. Any help is much appreciated.
Thanks
EDIT: Here is my code.
This is the call from the after insert trigger -->
ContactTriggerHandler trgHandler = new ContactTriggerHandler();
trgHandler.deDupAndCreateOfficebyBatch(accountIdContactMap);
// accountIdContactMap is the map which contains the list of new contacts per account
This is the call from the handler class -->
public void deDupAndCreateOfficebyBatch(Map<String, List<Contact>> accountIdContactMap) {
    ContactDeDuplicationBatch batchObj = new ContactDeDuplicationBatch(accountIdContactMap);
    Id jobId = Database.executeBatch(batchObj, 100);
}
This is the batch -->
global class ContactDeDuplicationBatch implements Database.Batchable<sObject>, Database.Stateful {

    // Set of duplicate contacts to delete
    global Set<Contact> duplicateContactSet;
    // Map of lists of new contacts keyed by account Id
    global Map<String, List<Contact>> newAccIdContactMap;

    /* Constructor */
    public ContactDeDuplicationBatch(Map<String, List<Contact>> accountIdContactMap) {
        System.debug('## accountIdContactMap size = ' + accountIdContactMap.keySet().size());
        newAccIdContactMap = accountIdContactMap;
        duplicateContactSet = new Set<Contact>();
    }

    /* Start method */
    global Database.QueryLocator start(Database.BatchableContext BC) {
        System.debug('## newAccIdContactMap size = ' + newAccIdContactMap.keySet().size());
        if (newAccIdContactMap.keySet().size() > 0 && newAccIdContactMap.values().size() > 0) {
            // Fields to be fetched by the query
            String fieldsToBeFetched = 'Id, AccountId';
            // Add account Ids for the contacts which are to be matched
            String accountIds = '(';
            for (String id : newAccIdContactMap.keySet()) {
                if (accountIds == '(') {
                    accountIds += '\'' + id + '\'';
                } else {
                    accountIds += ', \'' + id + '\'';
                }
            }
            accountIds += ')';
            String query = 'SELECT ' + fieldsToBeFetched + ' FROM Contact WHERE Target_Type__c <> \'Office\' AND AccountId IN ' + accountIds;
            return Database.getQueryLocator(query);
        } else {
            return null;
        }
    }

    /* Execute method */
    global void execute(Database.BatchableContext BC, List<sObject> scope) {
        System.debug('## scope.size ' + scope.size());
        System.debug('## newAccIdContactMap.size ' + newAccIdContactMap.size());
        // In my execute method I only get 200 records in newAccIdContactMap per batch
    }

    /* Finish method */
    global void finish(Database.BatchableContext BC) {
        // Some logic using the two global variables
    }
}
In my execute method I only get 200 records in newAccIdContactMap per batch.
Thanks
The Batch Apex limit of 5 concurrent jobs applies to queued and running processes alike.
You should take great care when executing Batch Apex from a trigger; you will almost always hit limits. It would be better to load your data first and then run the batch separately (not from the trigger) to process everything at once.
Triggers are processed in chunks of at most 200 records, so for a bulk load of 1000 records your trigger gets called 5 times with 5 different sets of 200 records; that is exactly why each of your batch jobs only ever sees 200 contacts in its map.
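If you want to see the chunking for yourself, a throwaway trigger like this (illustrative only) makes it visible in the debug log:

trigger ContactChunkDemo on Contact (after insert) {
    // For a single Bulk API load of 1000 contacts this fires 5 times,
    // and Trigger.new.size() never exceeds 200.
    System.debug('Trigger chunk size: ' + Trigger.new.size());
}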
Setup:
A --< B >-- C. On A there is a roll-up summary field (RFS) over B, and there is an after update trigger that, when run, populates fields on B. One of B's fields is then rolled up into a field on C.
Question:
The trigger works, but I need to run it on the existing records in the database to bring everything up to date. How do I do that? I already tried running a "force mass recalculation" on the RFS on A and C.
You could write a rather simple Batch Apex job class (see the docs) to touch all the records you want to recalculate:
global class TouchRecords implements Database.Batchable<sObject> {

    private String query;

    global TouchRecords(String query) {
        this.query = query;
    }

    global Database.QueryLocator start(Database.BatchableContext BC) {
        return Database.getQueryLocator(query);
    }

    global void execute(Database.BatchableContext BC, List<sObject> scope) {
        update scope;
    }

    global void finish(Database.BatchableContext BC) {
    }
}
The job can then be executed by running the following (for example via Execute Anonymous):
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from A__c'));
or, to touch all contacts:
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from Contact'));
This should run the trigger on all records being touched (it supports a maximum of 50 million records). It is the same idea as the proposed Data Loader solution, but kept inside SFDC for easier reuse.
Found a work-around:
Use Data Loader: do an export on the Ids and then an update. This changes the last modified date and causes the after update trigger to fire.
A better approach is to create a dummy field on the object and then do a bulk update on that field. There can still be issues with this, so you may want to reduce the batch size so that you do not hit the SOQL governor limit if the triggers do any additional queries or DML.
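For instance, reusing the TouchRecords batch from the answer above, the optional second argument to Database.executeBatch sets the scope size (50 here is just an illustrative value):

// A smaller scope means each execute() transaction updates fewer records,
// leaving more limit headroom for any triggers that fire on the update.
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from A__c'), 50);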
I have created a simple class and Visualforce page that displays a "group by". The output is perfect: it displays the number of opportunities a given account has.
lstAR = [SELECT Account.Name AccountName, AccountId, COUNT(CampaignId) CountResult FROM Opportunity WHERE CampaignId != null GROUP BY Account.Name, AccountId HAVING COUNT(CampaignId) > 0 LIMIT 500];
I would like to be able to say: if an account has more than 10 opportunities, then assign the opportunity to another account that has fewer than 10.
I used the following code to get the results in my Visualforce page:

public List<OppClass> getResults() {
    List<OppClass> lstResult = new List<OppClass>();
    for (AggregateResult ar : lstAR) {
        OppClass objOppClass = new OppClass(ar);
        lstResult.add(objOppClass);
    }
    return lstResult;
}

class OppClass {
    public Integer countResult { get; set; }
    public String accountName { get; set; }
    public String accountId { get; set; }

    public OppClass(AggregateResult ar) {
        // Note that ar returns objects as results, so type conversion is needed here
        countResult = (Integer) ar.get('CountResult');
        accountName = (String) ar.get('AccountName');
        accountId = (String) ar.get('AccountId');
    }
}
What would be the best approach to check for a count greater than a given number and then assign the opportunities to an account with fewer than that number?
As I said, code-wise I have a nice little controller and VF page that display the account and count in a grid. I'm just not sure of a good approach to the reassigning of opportunities.
Thanks
Frank
I'm not sure why you'd be moving your opportunity to another account, because typically the account is the organization/person buying the stuff?
But that said, ignoring the why and focusing on the how...
Write a trigger on Opportunity, before insert:
Loop over Trigger.new and count how many opportunities you have per account (or owner) in that batch, putting the results into a map of accountId to count (because you could be inserting 10 opportunities for the same account!). If your running count ever exceeds 10, change the assignment using whatever assignment helper class you have.
Also populate a set of accountIds.
Then run your aggregate query for the accounts whose Id is in that set; you'll have to group by AccountId.
Loop over the results and add them to the map of accountId to count.
Then loop over Trigger.new again and, for each opportunity, look up the count in the map by accountId. If the count is greater than 10, do your assignment using your helper class.
And done (a sketch follows below).
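To make that concrete, here is a rough, untested sketch of the flow described above; AssignmentHelper and its pickAccountUnderLimit method are hypothetical stand-ins for whatever reassignment rules you end up with:

trigger OpportunityReassign on Opportunity (before insert) {
    // 1. Count incoming opportunities per account in this trigger chunk
    Map<Id, Integer> countByAccount = new Map<Id, Integer>();
    for (Opportunity opp : Trigger.new) {
        if (opp.AccountId != null) {
            Integer c = countByAccount.containsKey(opp.AccountId) ? countByAccount.get(opp.AccountId) : 0;
            countByAccount.put(opp.AccountId, c + 1);
        }
    }

    // 2. Add the counts already in the database for those accounts
    for (AggregateResult ar : [SELECT AccountId, COUNT(Id) cnt
                               FROM Opportunity
                               WHERE AccountId IN :countByAccount.keySet()
                               GROUP BY AccountId]) {
        Id accId = (Id) ar.get('AccountId');
        countByAccount.put(accId, countByAccount.get(accId) + (Integer) ar.get('cnt'));
    }

    // 3. Reassign any opportunity whose account is over the threshold
    for (Opportunity opp : Trigger.new) {
        if (opp.AccountId != null && countByAccount.get(opp.AccountId) > 10) {
            // AssignmentHelper is hypothetical: plug in your own reassignment rules here
            opp.AccountId = AssignmentHelper.pickAccountUnderLimit(opp);
        }
    }
}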
Of course, your assignment helper class is another issue to tackle: how do you know which account/user to assign the opportunity to? Are you going to use queues, custom objects, custom settings to govern the rules, etc.?
But the concept above should work...