Setup:
A --< B >-- C. On A there is a roll-up summary field (RFS) over B, and there is an after update trigger that, when it runs, populates fields on B. One of B's fields is then rolled up into a field on C.
Question:
The trigger works, but I need to run it on the existing records in the database to bring everything up to date. How do I do that? I already tried running a 'Force mass recalculation' on the RFS on A and C.
You could write a rather simple Batch Apex job class (see the docs) to touch all the records you want to recalculate:
global class TouchRecords implements Database.Batchable<sObject> {

    private String query;

    global TouchRecords(String query) {
        this.query = query;
    }

    global Database.QueryLocator start(Database.BatchableContext BC) {
        return Database.getQueryLocator(query);
    }

    global void execute(Database.BatchableContext BC, List<sObject> scope) {
        update scope;
    }

    global void finish(Database.BatchableContext BC) {
    }
}
The job can then be executed by running the following (for example via execute anonymous):
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from A__c'));
or
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from Contact'));
to touch all contacts
This should run the trigger on all records being touched (Batch Apex supports a maximum of 50 million records). It is the same idea as the proposed Data Loader solution, but kept inside Salesforce for easier reuse.
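If the trigger does a lot of work per record, you can also pass an optional scope size so each chunk stays well clear of the governor limits (a small sketch reusing the TouchRecords class above):
// Process 50 records per execute() call instead of the default 200
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from A__c'), 50);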
Found a workaround.
Use Data Loader: do an export of the Ids and then an update. This will change the last modified date and cause the after update trigger to fire.
A better approach is to create a dummy field on the object and then do a bulk update on that field. There could still be issues with this, so you would want to reduce the batch size so that you do not hit the SOQL governor limits if the trigger does any SOQL or DML.
Firstly, sorry, as I am new to Apex and not sure where to start or what I need to do. My background is in Java, and although Apex looks familiar, I am unsure what to do or how to start.
I am trying to do a batch apex job that:
If uplift start date (a new field) is 10 months from start date (an existing field), then:
Create an amendment quote for the contract and set the amendment date to the uplift date (a new field).
Copy the products that were previously added, set the existing quote line items to a quantity of 0, set their end date field to the uplift start date, and give the new line items the uplift start date as their start date while keeping the original end date.
Complete the quote, bypassing validation.
I apologise; from what I have seen, people usually show a code sample of what they have done and tried, but as I am unfamiliar with Apex, I am not sure what I need to do or even where to start writing the code.
Your question is very specific to your org and full of jargon that won't make much sense in other Salesforce orgs. We don't even know what the link between quote and contract is. Out of the box there's no direct link as far as I know; I guess you could go "up" to Opportunity and then "down" to Quotes... And what does "bypassing validation" even mean, do you set some special field on the Quote or what?
You'll need to throw a proper business analyst at it, or ask the user to demonstrate what they need manually (maybe even record the user's screen), and then you can have a stab at automating it.
What's "amendment quote", a "record type"?
I mean "If uplift start date(new filed) is 10 months from start date(existing field)" - where are these fields, on Opportunity? Quote? Contract? lets say 10 contracts meet this condition. Let's say you have this batch wrtitten and it runs every night. What will happen tomorrow? Will it take same 10 records and process them again, kill & create new quotes? Where's some "today" element in this spec?
As for technicalities... you'll need a few moving parts.
A skeleton of a batch job that can be scheduled to run daily
public class Stack74373895 implements Database.Batchable<SObject>, Schedulable {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id FROM ...]);
    }

    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        ...
    }

    public void finish(Database.BatchableContext bc) {
    }

    public void execute(SchedulableContext sc) {
        Database.executeBatch(this);
    }
}
Some way to identify eligible... contracts? Experiment with queries in the Developer Console and put one in the start() method once you're happy. SOQL doesn't let you compare a field to another field (only a field to a literal), so if you really need the two fields you could cheat by making a formula field of type Checkbox (Boolean) with something like ADDMONTHS(StartDate__c, 10) = UpliftStartDate__c. Then your query can be SELECT Id FROM Contract WHERE MyField__c = true (although, as I said above, this sounds a bit naive; if your batch runs every night, will it keep creating quotes every time?).
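For illustration only - ReadyForUplift__c is a made-up name for that checkbox formula field, and the field names inside the formula come from your description - you could sanity-check such a query in Execute Anonymous before wiring it into start():
// Assumes a checkbox formula field ReadyForUplift__c defined as:
//   ADDMONTHS(StartDate__c, 10) = UpliftStartDate__c
List<Contract> eligible = [SELECT Id FROM Contract WHERE ReadyForUplift__c = true];
System.debug('Contracts currently meeting the uplift condition: ' + eligible.size());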
Actual code to "deep clone" 1 quote with line items? Something like this should be close enough, no promises it compiles!
Id quoteId = '0Q0...';

Quote original = [SELECT Id, AccountId, ContractId, OpportunityId, Pricebook2Id,
                      (SELECT Id, Description, OpportunityLineItemId, PricebookEntryId, Product2Id, Quantity, UnitPrice
                       FROM QuoteLineItems
                       ORDER BY LineNumber)
                  FROM Quote
                  WHERE Id = :quoteId];

Quote clone = original.clone(false, true);
insert clone;

List<QuoteLineItem> lineItems = original.QuoteLineItems.deepClone(false);
for (QuoteLineItem li : lineItems) {
    li.QuoteId = clone.Id;
}
insert lineItems;

// set quantities on old line items to 0?
for (QuoteLineItem li : original.QuoteLineItems) {
    li.Quantity = 0;
}
update original.QuoteLineItems;
When you use NHibernate to "fetch" a mapped object, it outputs a SELECT query to the database. It outputs this using parameters; so if I query a list of cars based on tenant ID and name, I get:
select Name, Location from Car where tenantID=@p0 and Name=@p1
This has the nice benefit that our database creates (and caches) a query plan for this query, so when it is run again the query is much faster because it can load the plan from the cache.
The problem with this is that we have a multi-tenant database, and almost all of our indexes are partition aligned. Our tenants have vastly different data sets; one tenant could have 5 cars, while another could have 50,000. Because NHibernate parameterizes the query, the net effect is that our database creates and caches a plan for the FIRST tenant that runs it. This plan is likely not efficient for subsequent tenants who run the query.
What I WANT to do is force NHibernate NOT to parameterize certain parameters; namely, the tenant ID. So I'd want the query to read:
select Name, Location from Car where tenantID=55 and Name=@p0
I can't figure out how to do this in the HBM.XML mapping. How can I dictate to NHibernate how to use parameters? Or can I just turn parameters off altogether?
OK everyone, I figured it out.
The way I did it was overriding the SqlClientDriver with my own custom driver that looks like this:
public class CustomSqlClientDriver : SqlClientDriver
{
    private static readonly Regex _tenantIdReplacer =
        new Regex(@"\.TenantID=(@p0)", RegexOptions.Compiled);

    public override void AdjustCommand(IDbCommand command)
    {
        var m = _tenantIdReplacer.Match(command.CommandText);
        if (!m.Success)
            return;

        // find the parameter that holds the tenant ID
        var parameterName = m.Groups[1].Value;
        var tenantId = (IDbDataParameter)command.Parameters[parameterName];
        var valueOfTenantId = tenantId.Value;

        // now replace the parameter reference with the literal tenant ID value
        command.CommandText = _tenantIdReplacer.Replace(command.CommandText, ".TenantID=" + valueOfTenantId);
    }
}
I override the AdjustCommand method and use a Regex to replace the tenantID. This works; not sure if there's a better way, but I really didn't want to have to open up NHibernate and start messing with core code.
You'll have to register this custom driver in the connection.driver_class property of the SessionFactory upon initialization.
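For example, a minimal sketch of that registration using code-based configuration (the same can be done with the connection.driver_class key in your XML configuration; adjust the namespace to wherever CustomSqlClientDriver lives):
var cfg = new NHibernate.Cfg.Configuration().Configure();
// Point NHibernate at the custom driver before building the session factory
cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionDriver,
                typeof(CustomSqlClientDriver).AssemblyQualifiedName);
var sessionFactory = cfg.BuildSessionFactory();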
Hope this helps somebody!
I am running the following batch, and I am making a query inside a for loop; here is the query:
lgm=[select Group.Name, group.type,group.id, group.ownerID from GroupMember where UserOrGroupId =: u.id];
The batch loops over every user in the org, gets the user's permission sets, the public groups and queues to which the user is assigned, and the user's name and Id, and then populates that information into a custom object called ConsolidatedUser.
I have not yet run the batch with a large number of records to see if the governor limits are reached, and I would like your opinion; for now the batch works correctly.
Can I ask how many SOQL queries are allowed per transaction in a batch, to make sure that I don't run into any limit?
Here is my code, and thank you for your help.
global class TDTRMIS_GetUserDetails implements Database.Batchable<sObject>, Database.Stateful {

    global string UserPermissionSets = '';
    global string UserGroups = '';
    global string UserQueues = '';
    global integer i;

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, name, (select PermissionSet.Name, AssigneeId FROM PermissionSetAssignments) from user'
        );
    }

    global void execute(Database.BatchableContext bc, List<User> scope) {
        // process each batch of records
        List<ConsolidatedUser__c> lcu = new List<ConsolidatedUser__c>();
        list<GroupMember> lgm = new list<GroupMember>();
        for (User u : scope) {
            ConsolidatedUser__c cu = new ConsolidatedUser__c();
            lgm = [select Group.Name, group.type, group.id, group.ownerID from GroupMember where UserOrGroupId = :u.id];
            for (PermissionSetAssignment ps : u.PermissionSetAssignments) {
                UserPermissionSets = UserPermissionSets + ps.PermissionSet.name + '|';
            }
            for (GroupMember gm : lgm) {
                if (gm.group.type == 'Regular') {
                    UserGroups = UserGroups + gm.group.Name + '|';
                } else if (gm.group.type == 'Queue') {
                    UserQueues = UserQueues + gm.group.Name + '|';
                }
            }
            cu.PermSet__c = UserPermissionSets;
            cu.PublicGroupList__c = UserGroups;
            cu.QueueGroupList__c = UserQueues;
            cu.User_Lookup__c = u.id;
            cu.name = u.name;
            lcu.add(cu);
        }
        try {
            upsert lcu;
        } catch (exception e) {
            system.debug(e);
        }
    }

    global void finish(Database.BatchableContext bc) {
        //to be added later
    }
}
Batch Apex is considered asynchronous Apex, so the "Total number of SOQL queries issued" limit in your case will be 200.
This limit applies to each execution of the batch's execute() method, since each chunk of records runs in its own transaction.
I suggest you remove the SOQL from your user for loop. Collect the user Ids in a set inside the loop (or before it), make a single SOQL query to get the GroupMember records for all of those users, and then look up each user's group memberships from a map, as sketched below.
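A rough sketch of that bulkified pattern (untested, reusing the names from the code above):
Set<Id> userIds = new Set<Id>();
for (User u : scope) {
    userIds.add(u.Id);
}
// One query for the whole chunk instead of one query per user
Map<Id, List<GroupMember>> membersByUser = new Map<Id, List<GroupMember>>();
for (GroupMember gm : [SELECT Group.Name, Group.Type, GroupId, Group.OwnerId, UserOrGroupId
                       FROM GroupMember
                       WHERE UserOrGroupId IN :userIds]) {
    if (!membersByUser.containsKey(gm.UserOrGroupId)) {
        membersByUser.put(gm.UserOrGroupId, new List<GroupMember>());
    }
    membersByUser.get(gm.UserOrGroupId).add(gm);
}
// Inside the user loop, read from the map instead of querying:
// List<GroupMember> lgm = membersByUser.get(u.Id);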
Here is a related link: https://help.salesforce.com/articleView?id=000176644&language=en_US&type=1
This explains all you need to know about SOQL governor limits:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_gov_limits.htm
Basically, the SOQL query limit is 100 per transaction for synchronous Apex and 200 for asynchronous Apex, which includes Batch Apex.
It's never a good idea to put SOQL queries within loops. See the following for alternatives:
https://developer.salesforce.com/page/Apex_Code_Best_Practices
I'm working with objectify.
In my app, I have Employee entities and Task entities. Every task is created for a given employee, and the Datastore generates the id for Task entities.
But I have a restriction: I can't save a task for an employee if that task overlaps with an already existing task.
I don't know how to implement that, or even how to start. Should I use a transaction?
@Entity
class Task {
    @Id
    private Long id;
    @Index
    private long startDate; // since epoch
    @Index
    private long endDate;
    @Index
    private Ref<User> createdFor;

    public Task(String id, long startDate, long endDate, User assignedTo) {
        this.id = null;
        this.startDate = startDate;
        this.endDate = endDate;
        this.createdFor = Ref.create(assignedTo);
    }
}
@Entity
class Employee {
    @Id
    private String id;
    private String name;

    public Employee(String id, String name) {
        this.id = id;
        this.name = name;
    }
}
You can't do it with the entities you've set up, because between the time you queried for tasks, and inserted the new task, you can't guarantee that someone wouldn't have already inserted a new task that conflicts with the one you're inserting. Even if you use a transaction, any concurrently added, conflicting tasks won't be part of your transaction's entity group, so there's the potential for violating your constraint.
Can you modify your architecture so that, instead of each task having a ref to the employee it's created for, every Employee contains a collection of the tasks created for that Employee? That way, when you're querying the Employee's tasks for conflicts, the Employee would be timestamped in your transaction's entity group, and if someone else put a new task into it before you finished putting your new task, a concurrent modification exception would be thrown and you would then retry. But yes, have both your query and your put in the same transaction.
Read here about Transactions, Entity Groups and Optimistic Concurrency: https://code.google.com/p/objectify-appengine/wiki/Concepts#Transactions
As far as ensuring your tasks don't overlap, you just need to check whether either your new task's start or end date is within the date range of any previous task for the same employee. You also need to check that you're not setting a new task that starts before and ends after a previous task's date range. I suggest using a composite .and filter for each of the tests, and then combining those three composite filters in a composite .or filter, which is the one you finally apply. There may be a more succinct way, but this is how I figure it:
Note these filters would not apply in the new architecture I'm suggesting. Maybe I'll delete them.
//////// Limit to tasks assigned to the same employee
Filter sameEmployeeTask =
    new FilterPredicate("createdFor", FilterOperator.EQUAL, thisCreatedFor);

///////// Check if new startDate is in range of the prior task
Filter newTaskStartBeforePriorTaskEnd =
    new FilterPredicate("endDate", FilterOperator.GREATER_THAN, thisStartDate);
Filter newTaskStartAfterPriorTaskStart =
    new FilterPredicate("startDate", FilterOperator.LESS_THAN, thisStartDate);
Filter newTaskStartInPriorTaskRange =
    CompositeFilterOperator.and(sameEmployeeTask, newTaskStartBeforePriorTaskEnd, newTaskStartAfterPriorTaskStart);

///////// Check if new endDate is in range of the prior task
Filter newTaskEndBeforePriorTaskEnd =
    new FilterPredicate("endDate", FilterOperator.GREATER_THAN, thisEndDate);
Filter newTaskEndAfterPriorTaskStart =
    new FilterPredicate("startDate", FilterOperator.LESS_THAN, thisEndDate);
Filter newTaskEndInPriorTaskRange =
    CompositeFilterOperator.and(sameEmployeeTask, newTaskEndBeforePriorTaskEnd, newTaskEndAfterPriorTaskStart);

///////// Check if this Task overlaps the prior one on both sides
Filter newTaskStartBeforePriorTaskStart =
    new FilterPredicate("startDate", FilterOperator.GREATER_THAN_OR_EQUAL, thisStartDate);
Filter newTaskEndAfterPriorTaskEnd =
    new FilterPredicate("endDate", FilterOperator.LESS_THAN_OR_EQUAL, thisEndDate);
Filter priorTaskRangeWithinNewTaskStartEnd =
    CompositeFilterOperator.and(sameEmployeeTask, newTaskStartBeforePriorTaskStart, newTaskEndAfterPriorTaskEnd);

///////// Combine them to test if any of the three returns one or more tasks
Filter newTaskOverlapPriorTask =
    CompositeFilterOperator.or(newTaskStartInPriorTaskRange, newTaskEndInPriorTaskRange, priorTaskRangeWithinNewTaskStartEnd);

///////// Proceed
Query q = new Query("Task").setFilter(newTaskOverlapPriorTask);
PreparedQuery pq = datastore.prepare(q);
If you don't return any results, then you don't have any overlaps, so go ahead and save the new task.
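For instance, a small sketch of that check using the low-level Datastore API from the snippet above (newTaskEntity stands in for however you build the Task entity):
// If the overlap query finds nothing, the new task is safe to save
if (pq.countEntities(FetchOptions.Builder.withLimit(1)) == 0) {
    datastore.put(newTaskEntity);
}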
OK, I hope I can be a bit more helpful. I've attempted to edit your question and change your entities to the right architecture. I've added an embedded collection of tasks and an attemptAdd method to your Employee, and a detectOverlap method to both your Task and your Employee. With these in place, you could use something like the transaction below. You will need to deal with the case where your task doesn't get added because there's a conflicting task, and also the case where the add fails due to a ConcurrentModificationException. But you could make another question out of those, and you should have the start you need in the meantime.
Adapted from: https://code.google.com/p/objectify-appengine/wiki/Transactions
Task myTask = new Task(startDate, endDate, description);

public boolean assignTaskToEmployee(final Key<Employee> employeeKey, final Task myTask) {
    return ofy().transact(new Work<Boolean>() {
        public Boolean run() {
            Employee assignee = ofy().load().key(employeeKey).now();
            boolean taskAdded = assignee.attemptAdd(myTask);
            ofy().save().entity(assignee).now();
            return taskAdded;
        }
    });
}
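For reference, a rough sketch of what the embedded-tasks Employee and the overlap check could look like (untested; the Objectify embedding details and method bodies are assumptions based on the description above):
// Added to Task:
public boolean overlaps(Task other) {
    // Two date ranges overlap iff each one starts before the other ends
    return this.startDate < other.endDate && other.startDate < this.endDate;
}

// Employee now embeds its tasks (java.util and Objectify imports omitted):
@Entity
class Employee {
    @Id
    private String id;
    private String name;
    private List<Task> tasks = new ArrayList<Task>();

    // Adds the task only if it doesn't overlap an existing one; returns whether it was added
    public boolean attemptAdd(Task candidate) {
        for (Task existing : tasks) {
            if (candidate.overlaps(existing)) {
                return false;
            }
        }
        tasks.add(candidate);
        return true;
    }
}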
Like I said in the topic, my team developed a social website based on Zend Framework (1.11).
The problem is that our client wants a debug view (an on-screen list of DB queries) with the execution time, the number of rows affected, and the statement itself.
Zend_Db_Profiler gets the time and statement for us with no hassle, but we also need the number of rows each query affected (fetched, updated, inserted or deleted).
Please help: how can we approach this task?
This is the current implementation, which prevents you from getting what you want:
$qp->start($this->_queryId);
$retval = $this->_execute($params);
$prof->queryEnd($this->_queryId);
A possible solution (sketched after this list) would be:
Create your own class for Statement, let's say extended from Zend_Db_Statement_Mysqli or what have you.
Redefine its execute method so that $retval is carried on into $prof->queryEnd($this->_queryId);
Redefine $_defaultStmtClass in Zend_Db_Adapter_Mysqli with your new statement class name
Create your own Zend_Db_Profiler with public function queryEnd($queryId) redefined, so that it accepts $retval and handles it
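A rough, untested sketch of those pieces. The class names are made up, and instead of changing queryEnd()'s signature it stores the row count through a separate helper on the extended profiler; the exact Zend_Db internals may differ slightly between 1.x releases:
// Extended statement: after executing, hand the row count to our profiler
class My_Db_Statement_Mysqli extends Zend_Db_Statement_Mysqli
{
    public function execute(array $params = null)
    {
        $retval = parent::execute($params);

        $profiler = $this->_adapter->getProfiler();
        if ($profiler instanceof My_Db_Profiler && $this->_queryId !== null) {
            // rowCount() covers affected rows; fetched rows for SELECTs may need another source
            $profiler->setRowCount($this->_queryId, $this->rowCount());
        }

        return $retval;
    }
}

// Extended profiler: remember the row count per query id so the debug view can show it
class My_Db_Profiler extends Zend_Db_Profiler
{
    protected $_rowCounts = array();

    public function setRowCount($queryId, $rowCount)
    {
        $this->_rowCounts[$queryId] = $rowCount;
    }

    public function getRowCount($queryId)
    {
        return isset($this->_rowCounts[$queryId]) ? $this->_rowCounts[$queryId] : null;
    }
}

// Extended adapter: make it create the custom statement class
class My_Db_Adapter_Mysqli extends Zend_Db_Adapter_Mysqli
{
    protected $_defaultStmtClass = 'My_Db_Statement_Mysqli';
}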