First, apologies, as I am new to Apex and not sure where to start or what I need to do. My background is in Java, and although Apex looks familiar, I am unsure how to begin.
I am trying to do a batch apex job that:
If uplift start date (a new field) is 10 months from start date (an existing field), then:
create an amendment quote for the contract and set the amendment date to the uplift date (a new field);
copy the products that were previously added; set the existing quote line items to a quantity of 0 and set their end date to the uplift start date; then add new line items with the uplift start date, keeping the original end date;
complete the quote, bypassing validation.
I do apologize: from what I have seen, I know people show a code sample of what they have done and tried, but as I am unfamiliar with Apex, I am not sure what I need to do or even where to find the place to write the code.
Your question is very specific to your org and full of jargon that won't make much sense in other Salesforce orgs. We don't even know what the link between Quote and Contract is; out of the box there's no direct link as far as I know. I guess you could go "up" to the Opportunity and then "down" to the Quotes... And what does "bypassing validation" even mean? Do you set some special field on the Quote?
You'll need to throw a proper business analyst at it, or ask the user to demonstrate what they need manually (maybe even record the user's screen), and then you can have a stab at automating it.
What's an "amendment quote", a record type?
I mean, "if uplift start date (new field) is 10 months from start date (existing field)": where are these fields, on Opportunity? Quote? Contract? Let's say 10 contracts meet this condition, you have this batch written, and it runs every night. What will happen tomorrow? Will it take the same 10 records and process them again, killing and creating new quotes? Where's the "today" element in this spec?
As for technicalities... you'll need a few moving parts.
A skeleton of a batch job that can be scheduled to run daily:
public class Stack74373895 implements Database.Batchable<SObject>, Schedulable {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id FROM ...]);
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        ...
    }

    public void finish(Database.BatchableContext bc) {
    }

    public void execute(SchedulableContext sc) {
        Database.executeBatch(this);
    }
}
Some way to identify eligible... contracts? Experiment with queries in the Developer Console and put one in the start() method once you're happy. SOQL doesn't let you compare a field to another field (only a field to a literal), so if you really need two fields you could cheat by making a formula field of type Checkbox (Boolean), something like ADDMONTHS(StartDate__c, 10) = UpliftStartDate__c. Then your query becomes SELECT Id FROM Contract WHERE MyField__c = true (although, as I said above, this sounds a bit naive: if your batch runs every night, will it keep creating quotes every time?)
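Since you're coming from Java, here's the shape of the eligibility check in plain Java (field and method names here are made up, not your org's actual fields). The point is that anchoring on "today" makes the nightly batch pick each record up exactly once:

```java
import java.time.LocalDate;

public class UpliftEligibility {
    // Hypothetical stand-ins for Start_Date__c and Uplift_Start_Date__c,
    // wherever those fields actually live in your org.
    static boolean isEligible(LocalDate startDate, LocalDate upliftStart, LocalDate today) {
        // "uplift start date is 10 months from start date", anchored on today
        // so the nightly run doesn't reprocess the same records forever
        return startDate.plusMonths(10).equals(upliftStart) && upliftStart.equals(today);
    }

    public static void main(String[] args) {
        LocalDate start = LocalDate.of(2022, 1, 15);
        System.out.println(isEligible(start, start.plusMonths(10), LocalDate.of(2022, 11, 15))); // true
        System.out.println(isEligible(start, start.plusMonths(10), LocalDate.of(2022, 12, 15))); // false: not due today
    }
}
```

In SOQL you'd express the `today` part as `WHERE MyField__c = true AND UpliftStartDate__c = TODAY`, or add a "processed" checkbox you flip after creating the amendment quote.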
Actual code to "deep clone" one quote with line items? Something like this should be close enough; no promises it compiles!
Id quoteId = '0Q0...';
Quote original = [SELECT Id, AccountId, ContractId, OpportunityId, Pricebook2Id,
        (SELECT Id, Description, OpportunityLineItemId, PricebookEntryId, Product2Id, Quantity, UnitPrice
         FROM QuoteLineItems
         ORDER BY LineNumber)
    FROM Quote
    WHERE Id = :quoteId];

// clone(preserveId=false, deepClone=true) gives a fresh copy ready to insert
Quote clone = original.clone(false, true);
insert clone;

List<QuoteLineItem> lineItems = original.QuoteLineItems.deepClone(false);
for (QuoteLineItem li : lineItems) {
    li.QuoteId = clone.Id;
}
insert lineItems;

// set quantities on the old line items to 0
for (QuoteLineItem li : original.QuoteLineItems) {
    li.Quantity = 0;
}
update original.QuoteLineItems;
I'm working with Objectify.
In my app I have Employee entities and Task entities. Every task is created for a given employee. The datastore generates the ids for Task entities.
But I have a restriction: I can't save a task for an employee if that task overlaps with an already existing task.
I don't know how to implement that; I don't even know how to start. Should I use a transaction?
@Entity
class Task {
    @Id
    private Long id; // left null so the datastore generates it

    @Index
    private long startDate; // since epoch
    @Index
    private long endDate;
    @Index
    private Ref<Employee> createdFor;

    public Task(long startDate, long endDate, Employee assignedTo) {
        this.startDate = startDate;
        this.endDate = endDate;
        this.createdFor = Ref.create(assignedTo);
    }
}
@Entity
class Employee {
    @Id
    private String id;
    private String name;

    public Employee(String id, String name) {
        this.id = id;
        this.name = name;
    }
}
You can't do it with the entities you've set up, because between the time you query for tasks and insert the new task, you can't guarantee that someone hasn't already inserted a new task that conflicts with the one you're inserting. Even if you use a transaction, any concurrently added, conflicting tasks won't be part of your transaction's entity group, so there's the potential for violating your constraint.
Can you modify your architecture so that instead of each task having a ref to the employee it's created for, every Employee contains a collection of tasks created for that Employee? That way, when you're querying the Employee's tasks for conflicts, the Employee would be timestamped in your transaction's entity group, and if someone else put a new task into it before you finished putting your new task, a ConcurrentModificationException would be thrown and you would then retry. But yes, have both your query and your put in the same transaction.
Read here about Transactions, Entity Groups and Optimistic Concurrency: https://code.google.com/p/objectify-appengine/wiki/Concepts#Transactions
As far as ensuring your tasks don't overlap: you just need to check whether either your new task's start or end date falls within the date range of any previous task for the same employee. You also need to check that you're not creating a new task that starts before and ends after a previous task's date range. I suggest using a composite.and filter for each of the tests, and then combining those three composite filters in a composite.or filter, which is the one you finally apply. There may be a more succinct way, but this is how I figure it:
Note these filters would not apply in the new architecture I'm suggesting; maybe I'll delete them.
//////// Limit to tasks assigned to the same employee
Filter sameEmployeeTask =
    new FilterPredicate("createdFor", FilterOperator.EQUAL, thisCreatedFor);

//////// Check if the new startDate is in range of a prior task
Filter newTaskStartBeforePriorTaskEnd =
    new FilterPredicate("endDate", FilterOperator.GREATER_THAN, thisStartDate);
Filter newTaskStartAfterPriorTaskStart =
    new FilterPredicate("startDate", FilterOperator.LESS_THAN, thisStartDate);
Filter newTaskStartInPriorTaskRange =
    CompositeFilterOperator.and(sameEmployeeTask, newTaskStartBeforePriorTaskEnd, newTaskStartAfterPriorTaskStart);

//////// Check if the new endDate is in range of a prior task
Filter newTaskEndBeforePriorTaskEnd =
    new FilterPredicate("endDate", FilterOperator.GREATER_THAN, thisEndDate);
Filter newTaskEndAfterPriorTaskStart =
    new FilterPredicate("startDate", FilterOperator.LESS_THAN, thisEndDate);
Filter newTaskEndInPriorTaskRange =
    CompositeFilterOperator.and(sameEmployeeTask, newTaskEndBeforePriorTaskEnd, newTaskEndAfterPriorTaskStart);

//////// Check if the new task overlaps the prior one on both sides
Filter newTaskStartBeforePriorTaskStart =
    new FilterPredicate("startDate", FilterOperator.GREATER_THAN_OR_EQUAL, thisStartDate);
Filter newTaskEndAfterPriorTaskEnd =
    new FilterPredicate("endDate", FilterOperator.LESS_THAN_OR_EQUAL, thisEndDate);
Filter priorTaskRangeWithinNewTaskStartEnd =
    CompositeFilterOperator.and(sameEmployeeTask, newTaskStartBeforePriorTaskStart, newTaskEndAfterPriorTaskEnd);

//////// Combine them to test if any of the three returns one or more tasks
Filter newTaskOverlapPriorTask =
    CompositeFilterOperator.or(newTaskStartInPriorTaskRange, newTaskEndInPriorTaskRange, priorTaskRangeWithinNewTaskStartEnd);

//////// Proceed
Query q = new Query("Task").setFilter(newTaskOverlapPriorTask);
PreparedQuery pq = datastore.prepare(q);
If the query doesn't return any results, then you don't have any overlaps, so go ahead and save the new task.
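Incidentally, the three-case check above boils down to the standard interval-overlap test. A plain-Java sketch, using strict inequalities to match the GREATER_THAN / LESS_THAN filters (so tasks that merely touch, with one ending exactly when the other starts, do not count as overlapping):

```java
public class OverlapCheck {
    // Two intervals overlap iff each one starts before the other ends.
    // This single predicate covers all three cases from the filters above:
    // new-start inside prior, new-end inside prior, and prior fully inside new.
    static boolean overlaps(long start1, long end1, long start2, long end2) {
        return start1 < end2 && start2 < end1;
    }

    public static void main(String[] args) {
        System.out.println(overlaps(0, 10, 5, 15));  // true: partial overlap
        System.out.println(overlaps(0, 10, 10, 20)); // false: touching, not overlapping
        System.out.println(overlaps(0, 30, 10, 20)); // true: full containment
    }
}
```

Once the tasks live inside the Employee (the architecture suggested above), you can run this check in memory over the employee's task list instead of building datastore filters at all.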
OK, I hope I can be a bit more helpful. I've attempted to edit your question and change your entities to the right architecture: I've added an embedded collection of tasks and an attemptAdd method to your Employee, and a detectOverlap method to both your Task and your Employee. With these in place, you could use something like the transaction below. You will still need to handle the case where your task doesn't get added because there's a conflicting task, and the case where the add fails with a ConcurrentModificationException, but you could make another question out of those; you should have the start you need in the meantime.
Adapted from: https://code.google.com/p/objectify-appengine/wiki/Transactions
Task myTask = new Task(startDate, endDate, description);

public boolean assignTaskToEmployee(Key<Employee> employeeId, final Task myTask) {
    // vrun() can't return a value, so use Work<Boolean> instead of VoidWork
    return ofy().transact(new Work<Boolean>() {
        public Boolean run() {
            Employee assignee = ofy().load().key(employeeId).now();
            boolean taskAdded = assignee.attemptAdd(myTask);
            ofy().save().entity(assignee);
            return taskAdded;
        }
    });
}
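Objectify's transact() does the retry-on-conflict loop for you, but it helps to see the shape of the optimistic-concurrency pattern it implements. A standalone Java sketch (the helper and names here are made up for illustration, not Objectify API):

```java
import java.util.ConcurrentModificationException;
import java.util.function.Supplier;

public class OptimisticRetry {
    // Run work, retrying whenever a concurrent writer touched the same
    // entity group. Gives up after maxAttempts and rethrows the last failure.
    static <T> T withRetries(Supplier<T> work, int maxAttempts) {
        ConcurrentModificationException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return work.get();
            } catch (ConcurrentModificationException e) {
                last = e; // someone else committed first; reload state and retry
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] failures = {2}; // simulate two conflicting writers, then success
        String result = withRetries(() -> {
            if (failures[0]-- > 0) throw new ConcurrentModificationException();
            return "task added";
        }, 5);
        System.out.println(result); // task added
    }
}
```

Inside the retry, the work must reload the Employee fresh each attempt (as the transact() example above does with ofy().load()), otherwise you'd re-check overlap against stale data.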
We have an app that is made up almost entirely of custom objects. Creating a smooth workflow for our users is vital for this app. I'm new to Apex but familiar with basic coding.
Here are the relationships between the objects: (--< = 1 to many, M/D = Master/Detail)
Object_a --< Object_b --M/D--< Object_c;
Object_a --M/D--< Object_d
When a user populates a date field on Object_c and saves, we'd like a date field on all related records on Object_d (i.e. all Object_d's records for that specific Object_a record) to be updated with the same value.
Any help is appreciated.
You'll need to write a trigger on insert/update of Object_C, as workflow rules can't fire on updates of multiple objects and rollup summary fields won't help (master-detail relationships aren't set everywhere).
Before you dive deep into coding, please consider what should happen if you have more than one Object_C__c modified at the same time (with the Data Loader, for example, through an integration with some external system, or from another piece of code). If you have two records that are both related to the same Object_A, and one has "today" while the other has "today + 7", which value should "win"?
One way of solving it would be to start with a map of Object_A ids and the date values to set. The way I'll be building the map means I'll keep overwriting the value in case of a duplicate id, which may not be what you need (select the later of the dates, maybe?).
Map<Id, Date> datesToSet = new Map<Id, Date>(); // you could make it Map<Id, Object_C__c> too; the principle would be similar
for (Object_C__c c : [SELECT Id, Date_Field__c, Object_B__r.Object_A__c
                      FROM Object_C__c
                      WHERE Id IN :Trigger.new AND Object_B__r.Object_A__c != null]) {
    datesToSet.put(c.Object_B__r.Object_A__c, c.Date_Field__c);
}
System.debug(datesToSet);
Now we have a map of unique A ids and the values we should apply to all their child D records.
List<Object_D__c> childsToUpdate = [SELECT Id, Some_Other_Date_Field__c, Object_A__c
                                    FROM Object_D__c
                                    WHERE Object_A__c IN :datesToSet.keySet()];
System.debug('BEFORE: ' + childsToUpdate);
for (Object_D__c d : childsToUpdate) {
    d.Some_Other_Date_Field__c = datesToSet.get(d.Object_A__c);
}
System.debug('AFTER: ' + childsToUpdate);
update childsToUpdate;
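To make the "which value wins" warning concrete: a plain-Java sketch of the same map behavior (Apex's Map.put() overwrites exactly like Java's), plus merge() as one way to keep the later date instead of the last row processed. The ids and dates are made up:

```java
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

public class DateMapDemo {
    // merge() variant: keep whichever date is later, not whichever row came last
    static void putKeepingLater(Map<String, LocalDate> map, String key, LocalDate value) {
        map.merge(key, value, (a, b) -> a.isAfter(b) ? a : b);
    }

    public static void main(String[] args) {
        // Plain put(): the last record processed wins, exactly like the loop above
        Map<String, LocalDate> lastWins = new HashMap<>();
        lastWins.put("A1", LocalDate.of(2023, 1, 8));
        lastWins.put("A1", LocalDate.of(2023, 1, 1)); // overwrites, even though it's earlier
        System.out.println(lastWins.get("A1")); // 2023-01-01

        Map<String, LocalDate> laterWins = new HashMap<>();
        putKeepingLater(laterWins, "A1", LocalDate.of(2023, 1, 8));
        putKeepingLater(laterWins, "A1", LocalDate.of(2023, 1, 1)); // ignored: earlier
        System.out.println(laterWins.get("A1")); // 2023-01-08
    }
}
```

In the Apex loop you'd get the same "later wins" behavior with an explicit check before the put: only overwrite when the new Date_Field__c is after the one already in datesToSet.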
I am trying to upload a CSV file / insert a bulk of records using the import wizard. In short, I would like to keep the latest record when duplicates are found. A duplicate record is a combination of first name, last name and title.
For example if my CSV file looks like the following:
James,Wistler,34,New York,Married
James,Wistler,34,London,Married
....
....
James,Wistler,34,New York,Divorced
This should only keep in my org: James,Wistler,34,New York,Divorced
I have been trying to write a before insert / before update trigger, but so far with no success. Here is my trigger code (it is not yet finished; it only filters on first name), and I am having a problem deleting the duplicates found in my CSV. Any hints? Thanks for reading!
trigger CheckDuplicateInsert on Customer__c (before insert, before update) {
    Map<String, Customer__c> customerFirstName = new Map<String, Customer__c>();
    List<Customer__c> customerList = Trigger.new;
    for (Customer__c newCustomer : customerList) {
        if ((newCustomer.First_Name__c != null) && Trigger.isInsert) {
            if (customerFirstName.containsKey(newCustomer.First_Name__c)) {
                // remove the duplicate from the map
                customerFirstName.remove(newCustomer.First_Name__c);
            }
            // at this stage we don't have any duplicate, so let's add a new customer
            customerFirstName.put(newCustomer.First_Name__c, newCustomer);
        } else if ((Trigger.oldMap.get(newCustomer.Id) != null)
                && newCustomer.First_Name__c != Trigger.oldMap.get(newCustomer.Id).First_Name__c) {
            // field is being updated, let's mark it with UPDATED for tracking
            newCustomer.First_Name__c = newCustomer.First_Name__c + 'UPDATED';
            customerFirstName.put(newCustomer.First_Name__c, newCustomer);
        }
    }

    for (Customer__c customer : [SELECT First_Name__c FROM Customer__c
                                 WHERE First_Name__c IN :customerFirstName.keySet()]) {
        if (customer.First_Name__c != null) {
            Customer__c newCustomer = customerFirstName.get(customer.First_Name__c);
            newCustomer.First_Name__c = customer.First_Name__c + 'EXIST_DB';
        }
    }
}
A purely non-Salesforce solution would be to sort them and deduplicate them in Excel, for example ;)
Good news: you don't need a trigger. Bad news: you might have to ditch the import wizard and start using the Data Loader. The solution is pretty long and looks scary, but once you get the hang of it, it should start to make more sense and be easier to maintain than code.
You can download the Data Loader in the setup area of your Production org; here's some basic info about the tool.
Anyway.
I'd make a new text field on your Contact, call it "unique key" or something, and mark it as an External Id. If you have never used external ids, Jeff Douglas has a good post about them.
You might have to populate the field on your existing data before proceeding. Easiest would be to export all Contacts where it's blank (from a report, for example), fill it in with some Excel formulas and import it back.
If you want, you can even write a workflow rule to handle generating the unique key. This will help you when Mrs. Jane Doe gets married and becomes Jane Bloggs, and it also makes the previous point easier (you'd import the Contacts without changes, just "touching" them, and the workflow would fire). Something like:
condition: ISBLANK(Unique_key__c) || ISCHANGED(FirstName) || ISCHANGED(LastName) || ISCHANGED(Title)
new value: Title + FirstName + ' ' + LastName
Almost there. Fire up the Data Loader and prepare an upsert job (because we want to insert some records and, when a duplicate is found, update it instead).
My only concern is what happens when what's effectively the same row appears more than once in one "batch" of records sent to Salesforce, as in your example. Upsert will not know which value is valid (it's like setting x = 7; and x = 5; in the same save to the database) and will fail those rows. So you might have to tweak the number of records per batch in the Data Loader's settings.
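If you'd rather pre-process the CSV before the upsert, the "last row wins" deduplication is just a map keyed on the composite key, where later puts overwrite earlier ones. A plain-Java sketch (the column positions are assumed from your example rows, so adjust them to your real CSV layout):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DedupeKeepLast {
    // Keeps the LAST row seen for each composite key, preserving first-seen order.
    // Key is built from the first three columns (the first/last/title combination).
    static Map<String, String[]> dedupe(List<String[]> rows) {
        Map<String, String[]> byKey = new LinkedHashMap<>();
        for (String[] row : rows) {
            String key = row[0] + "|" + row[1] + "|" + row[2];
            byKey.put(key, row); // later rows overwrite earlier ones
        }
        return byKey;
    }

    public static void main(String[] args) {
        List<String[]> rows = List.of(
            new String[]{"James", "Wistler", "34", "New York", "Married"},
            new String[]{"James", "Wistler", "34", "London", "Married"},
            new String[]{"James", "Wistler", "34", "New York", "Divorced"});
        String[] kept = dedupe(rows).get("James|Wistler|34");
        System.out.println(kept[3] + "," + kept[4]); // New York,Divorced
    }
}
```

This mirrors exactly what the external-id upsert does inside Salesforce, minus the failed-row problem: with only one row left per key, the upsert batch can't contain conflicting values.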
Setup:
A --< B >-- C. On A there is a rollup summary field over B, and there is an after update trigger that, when run, populates fields on B. One of B's fields is then rolled up into a field on C.
Question:
The trigger works, but I need to run it on the existing records in the database to bring everything up to date. How do I do that? I already tried running a "force mass recalculation" on the rollup summary fields on A and C.
You could write a rather simple Batch Apex job class (see the docs) to touch all the records you want to recalculate:
global class TouchRecords implements Database.Batchable<sObject> {
    private String query;

    global TouchRecords(String query) {
        this.query = query;
    }

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(query);
    }

    global void execute(Database.BatchableContext bc, List<sObject> scope) {
        update scope;
    }

    global void finish(Database.BatchableContext bc) {
    }
}
The job can then be executed by running the following (for example via Execute Anonymous):
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from A__c'));
or
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from Contact'));
to touch all contacts
This should run the trigger on all records being touched (it supports a max of 50 million records). Same idea as the proposed Data Loader solution, but kept inside SFDC for easier reuse.
Found a workaround.
Use the Data Loader: do an export of the Ids, then an update with that file. This will change the last modified date and cause the after update trigger to fire.
A better approach is to create a dummy field on the object and then do a bulk update on that field. There could be further issues with this, so you would want to reduce the batch size so that you do not hit governor limits if doing any DML.
I have created a simple class and visual force page that displays a "group by". The output is perfect, it will display the number of opportunities a given account has:
lstAR = [SELECT Account.Name AccountName, AccountId, COUNT(CampaignId) CountResult
         FROM Opportunity
         WHERE CampaignId != null
         GROUP BY Account.Name, AccountId
         HAVING COUNT(CampaignId) > 0
         LIMIT 500];
I would like to be able to say: if an account has more than 10 opportunities, then assign the opportunity to another account that has fewer than 10.
I used the following code to get the results in my visual force page:
public List<OppClass> getResults() {
    List<OppClass> lstResult = new List<OppClass>();
    for (AggregateResult ar : lstAR) {
        OppClass objOppClass = new OppClass(ar);
        lstResult.add(objOppClass);
    }
    return lstResult;
}

class OppClass {
    public Integer CountResult { get; set; }
    public String AccountName { get; set; }
    public String AccountId { get; set; }

    public OppClass(AggregateResult ar) {
        // Note that ar returns Objects as results, so you need type conversion here
        CountResult = (Integer) ar.get('CountResult');
        AccountName = (String) ar.get('AccountName');
        AccountId = (String) ar.get('AccountId');
    }
}
What would be the best approach to check whether the count is greater than a given number and then assign the opportunities to an account with fewer than that number?
As I said, code-wise I have a nice little controller and VF page that displays the account and count in a grid. I'm just not sure of a good approach for reassigning the opportunities.
Thanks
Frank
I'm not sure why you'd be moving your opportunity to another account, because typically the account is the organization/person buying the stuff?
But that said, ignoring the why and focusing on the how...
Trigger on the Opportunity, before insert:
Loop over trigger.new and count how many opportunities you have per account (or owner) in that batch; put that into a map of accountId to count (because you could be inserting 10 opportunities for the same account!). If your count ever goes over 10, change the assignment using whatever assignment helper class you have.
Also populate a set of accountIds.
Then run your aggregate query for the accounts whose Id is in that set; you'll have to group by AccountId.
Loop over the results and update the map of accountId to count.
Then loop over trigger.new, and for each opportunity look up the count in the map by accountId. If the count > 10, do your assignment using your helper class.
And done.
Of course, your assignment helper class is another issue to tackle: how do you know which account/user to assign the opportunity to? Are you going to use queues, custom objects, custom settings to govern the rules, etc.?
But the concept above should work...
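The counting steps above boil down to a map from account id to count, seeded with the aggregate query results and incremented for each incoming record. A plain-Java sketch of that core logic (ids and the limit of 10 are made up to match the example):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OpptyCountDemo {
    // Counts incoming opportunities per account id on top of the counts the
    // aggregate (GROUP BY) query already found in the database.
    static Map<String, Integer> countPerAccount(List<String> incomingAccountIds,
                                                Map<String, Integer> existingCounts) {
        Map<String, Integer> counts = new HashMap<>(existingCounts);
        for (String id : incomingAccountIds) {
            counts.merge(id, 1, Integer::sum); // +1 per record in this batch
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> existing = Map.of("acc1", 9); // from the GROUP BY query
        Map<String, Integer> counts =
            countPerAccount(List.of("acc1", "acc1", "acc2"), existing);
        System.out.println(counts.get("acc1")); // 11: over the limit of 10, so reassign
        System.out.println(counts.get("acc2")); // 1: fine as-is
    }
}
```

In the trigger, the final loop over trigger.new would consult this map and hand any opportunity whose account is over the threshold to the assignment helper class.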