datastore - using queries and transactions - google-app-engine

I'm working with Objectify.
In my app, I have Employee entities and Task entities. Every task is created for a given employee. The datastore generates the id for Task entities.
But I have a restriction: I can't save a task for an employee if that task overlaps with an already existing task.
I don't know how to implement that. I don't even know how to start. Should I use a transaction?
@Entity
class Task {
    @Id
    private Long id;
    @Index
    private long startDate; // since epoch
    @Index
    private long endDate;
    @Index
    private Ref<Employee> createdFor;

    public Task(long startDate, long endDate, Employee assignedTo) {
        this.id = null; // the datastore generates the id
        this.startDate = startDate;
        this.endDate = endDate;
        this.createdFor = Ref.create(assignedTo);
    }
}
@Entity
class Employee {
    @Id
    private String id;
    private String name;

    public Employee(String id, String name) {
        this.id = id;
        this.name = name;
    }
}

You can't do it with the entities you've set up: between the time you query for existing tasks and the time you insert the new task, you can't guarantee that someone else hasn't inserted a conflicting one. Even if you use a transaction, any concurrently added, conflicting tasks won't be part of your transaction's entity group, so there's still the potential for violating your constraint.
Can you modify your architecture so that instead of each task holding a ref to the employee it's created for, every Employee contains a collection of the tasks created for that Employee? That way, when you're querying the Employee's tasks for conflicts, the Employee would be timestamped in your transaction's entity group, and if someone else put a new task into it before you finished putting yours, a ConcurrentModificationException would be thrown and you would then retry. But yes, have both your query and your put in the same transaction.
Read here about Transactions, Entity Groups and Optimistic Concurrency: https://code.google.com/p/objectify-appengine/wiki/Concepts#Transactions
As far as ensuring your tasks don't overlap, you just need to check whether either the new task's start or end date falls within the date range of any previous task for the same employee. You also need to check that you're not creating a new task that starts before and ends after a previous task's date range. I suggest using a CompositeFilterOperator.and for each of these tests, then combining the three composite filters with a CompositeFilterOperator.or, which is the filter you finally apply. There may be a more succinct way, but this is how I figure it:
Note these filters would not apply in the new architecture I'm suggesting. Maybe I'll delete them.
// Limit to tasks assigned to the same employee
Filter sameEmployeeTask =
        new FilterPredicate("createdFor", FilterOperator.EQUAL, thisCreatedFor);

// Check if the new startDate is in range of a prior task
Filter newTaskStartBeforePriorTaskEnd =
        new FilterPredicate("endDate", FilterOperator.GREATER_THAN, thisStartDate);
Filter newTaskStartAfterPriorTaskStart =
        new FilterPredicate("startDate", FilterOperator.LESS_THAN, thisStartDate);
Filter newTaskStartInPriorTaskRange =
        CompositeFilterOperator.and(sameEmployeeTask, newTaskStartBeforePriorTaskEnd, newTaskStartAfterPriorTaskStart);

// Check if the new endDate is in range of a prior task
Filter newTaskEndBeforePriorTaskEnd =
        new FilterPredicate("endDate", FilterOperator.GREATER_THAN, thisEndDate);
Filter newTaskEndAfterPriorTaskStart =
        new FilterPredicate("startDate", FilterOperator.LESS_THAN, thisEndDate);
Filter newTaskEndInPriorTaskRange =
        CompositeFilterOperator.and(sameEmployeeTask, newTaskEndBeforePriorTaskEnd, newTaskEndAfterPriorTaskStart);

// Check if the new task overlaps a prior one on both sides
Filter newTaskStartBeforePriorTaskStart =
        new FilterPredicate("startDate", FilterOperator.GREATER_THAN_OR_EQUAL, thisStartDate);
Filter newTaskEndAfterPriorTaskEnd =
        new FilterPredicate("endDate", FilterOperator.LESS_THAN_OR_EQUAL, thisEndDate);
Filter priorTaskRangeWithinNewTaskStartEnd =
        CompositeFilterOperator.and(sameEmployeeTask, newTaskStartBeforePriorTaskStart, newTaskEndAfterPriorTaskEnd);

// Combine them to test whether any of the three returns one or more tasks
Filter newTaskOverlapPriorTask =
        CompositeFilterOperator.or(newTaskStartInPriorTaskRange, newTaskEndInPriorTaskRange, priorTaskRangeWithinNewTaskStartEnd);

// Proceed
Query q = new Query("Task").setFilter(newTaskOverlapPriorTask);
PreparedQuery pq = datastore.prepare(q);
If the query doesn't return any results, then you don't have any overlaps, so go ahead and save the new task.
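For completeness, here is a minimal sketch of that final check with the low-level datastore API; newTaskEntity is a hypothetical name for the entity being saved, and counting with a limit of 1 keeps the check cheap:
// Only save when the overlap query comes back empty.
if (pq.countEntities(FetchOptions.Builder.withLimit(1)) == 0) {
    datastore.put(newTaskEntity); // no conflicts found
} else {
    // At least one existing task overlaps; reject or report the conflict.
}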

Ok, I hope I can be a bit more helpful. I've attempted to edit your question and change your entities to the right architecture. I've added an embedded collection of tasks and an attemptAdd method to your Employee. I've added a detectOverlap method to both your Task and your Employee. With these in place, you could use something like the transaction below. You will need to deal with the case where your task doesn't get added because there's a conflicting task, and also the case where the add fails due to a ConcurrentModificationException. But you could make another question out of those, and you should have the start you need in the meantime.
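Those attemptAdd and detectOverlap methods aren't shown in the answer, so here is a minimal sketch of what they might look like, assuming the tasks are embedded in the Employee; beyond the two method names taken from the answer, everything here is an assumption:
@Entity
class Employee {
    @Id private String id;
    private String name;
    private List<Task> tasks = new ArrayList<>(); // embedded collection of tasks

    // Sketch: add the task only if it overlaps no existing task.
    public boolean attemptAdd(Task candidate) {
        for (Task existing : tasks) {
            if (existing.detectOverlap(candidate)) {
                return false; // conflict: leave the task list unchanged
            }
        }
        tasks.add(candidate);
        return true;
    }
}

class Task {
    private long startDate; // since epoch
    private long endDate;

    // Two ranges overlap iff each one starts before the other ends.
    public boolean detectOverlap(Task other) {
        return this.startDate < other.endDate && other.startDate < this.endDate;
    }
}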
Adapted from: https://code.google.com/p/objectify-appengine/wiki/Transactions
Task myTask = new Task(startDate, endDate, assignedTo);

public boolean assignTaskToEmployee(final Key<Employee> employeeKey, final Task myTask) {
    return ofy().transact(new Work<Boolean>() {
        public Boolean run() {
            Employee assignee = ofy().load().key(employeeKey).now();
            boolean taskAdded = assignee.attemptAdd(myTask);
            ofy().save().entity(assignee).now();
            return taskAdded;
        }
    });
}

Related

How to perform a batch update using apex to do uplifts?

Firstly, sorry, as I am new to Apex and not sure where to start or what I need to do. My background is in Java, and although Apex looks familiar, I am unsure how to begin.
I am trying to do a batch Apex job that:
If the uplift start date (a new field) is 10 months from the start date (an existing field), then:
create an amendment quote for the contract and set the amendment date to the uplift date (a new field)
copy the products that were previously added, set the existing quote line item to a quantity of 0, set its end date field to the uplift start date, and add a new start date (the uplift date) while keeping the original end date
complete the quote by bypassing validation.
I do apologize; I have seen that people usually show a code sample of what they have done and tried, but as I am unfamiliar with Apex, I am not sure what I need to do or even where to write the code.
Your question is very specific to your org and full of jargon that won't make much sense in other Salesforce orgs. We don't even know what the link between quote and contract is. Out of the box there's no direct link as far as I know; I guess you could go "up" to the Opportunity and then "down" to Quotes... And what does "bypassing validation" even mean - do you set some special field on the Quote, or what?
You'll need to throw a proper business analyst at it, or ask the user to demonstrate what they need manually (maybe even record the user's screen), and then you can have a stab at automating it.
What's an "amendment quote" - a record type?
I mean, "if uplift start date (new field) is 10 months from start date (existing field)" - where are these fields, on Opportunity? Quote? Contract? Let's say 10 contracts meet this condition. Let's say you have this batch written and it runs every night. What will happen tomorrow? Will it take the same 10 records and process them again, kill & create new quotes? Where's the "today" element in this spec?
As for technicalities... you'll need a few moving parts.
A skeleton of a batch job that can be scheduled to run daily:
public class Stack74373895 implements Database.Batchable<SObject>, Schedulable {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id FROM ...]);
    }

    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        ...
    }

    public void finish(Database.BatchableContext bc) {
    }

    public void execute(SchedulableContext sc) {
        Database.executeBatch(this);
    }
}
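To have it actually run nightly, you could schedule it from Execute Anonymous; a sketch where the job name and the cron expression (1 AM every day) are arbitrary examples:
// Schedules the batch to run every night at 1 AM.
System.schedule('Nightly uplift batch', '0 0 1 * * ?', new Stack74373895());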
Some way to identify eligible... contracts? Experiment with queries in the Developer Console and put one into the start() method once you're happy. SOQL doesn't let you compare a field to another field (only a field to a literal), so if you really need both fields you could cheat by making a formula field of type Checkbox (Boolean) with something like ADDMONTHS(StartDate__c, 10) = UpliftStartDate__c. Then in your query you can SELECT Id FROM Contract WHERE MyField__c = true (although, as I said above, this sounds a bit naive - if your batch runs every night, will it keep creating quotes every time?).
Actual code to "deep clone" one quote with its line items? Something like this should be close enough - no promises it compiles!
Id quoteId = '0Q0...';
Quote original = [SELECT Id, AccountId, ContractId, OpportunityId, Pricebook2Id,
                      (SELECT Id, Description, OpportunityLineItemId, PricebookEntryId, Product2Id, Quantity, UnitPrice
                       FROM QuoteLineItems
                       ORDER BY LineNumber)
                  FROM Quote
                  WHERE Id = :quoteId];

Quote clone = original.clone(false, true);
insert clone;

List<QuoteLineItem> lineItems = original.QuoteLineItems.deepClone(false);
for (QuoteLineItem li : lineItems) {
    li.QuoteId = clone.Id;
}
insert lineItems;

// Set quantities on the old line items to 0
for (QuoteLineItem li : original.QuoteLineItems) {
    li.Quantity = 0;
}
update original.QuoteLineItems;

Delete multiple objects with a single query (or in transaction)

I'm using Dapper with Dapper-Extensions. I'm currently deleting all the objects one-by-one:
dbConnection.Delete<MyObj>(data);
This is bad not only for performance, but also because if a delete fails I would like to roll back the entire operation. Is there a way to perform a "massive" delete, for example by passing a list of objects instead of data?
You may pass an IPredicate to delete multiple records based on a condition (WHERE clause) in one go.
If you simply pass an empty IPredicate, all records in the table will be deleted.
The following function handles both cases:
protected void DeleteBy(IPredicate where)
{
    // If 'where' is null, delete all rows by sending an empty predicate group.
    if (where == null)
        where = new PredicateGroup { Operator = GroupOperator.And, Predicates = new List<IPredicate>() };

    var result = connection.Delete<TPoco>(where);
}
In the above code, TPoco is your POCO type, mapped to the database table you are talking about.
You can build the predicate something like this:
var predicateGroup = new PredicateGroup { Operator = GroupOperator.And, Predicates = new List<IPredicate>() };
if (!string.IsNullOrEmpty(filterValue))
    predicateGroup.Predicates.Add(Predicates.Field<MyPoco>(x => x.MyProperty, Operator.Eq, filterValue));
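Tying the two snippets together, a hypothetical call would then be:
// Deletes every MyPoco row matching the predicate in one statement.
DeleteBy(predicateGroup);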
Transactions are a different thing. You can put all your current code in a transaction, and you can put my code in a transaction as well. With my code a transaction does not make much difference, although it is generally recommended to use transactions.
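As a sketch of that, assuming an open IDbConnection and a hypothetical objectsToDelete list (note that Delete accepts an IDbTransaction parameter, as the signatures below show):
// Run several deletes atomically; roll everything back if any one fails.
using (var transaction = connection.BeginTransaction())
{
    try
    {
        foreach (var obj in objectsToDelete)
        {
            connection.Delete<MyObj>(obj, transaction);
        }
        transaction.Commit();
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
}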
About passing a list of objects, I do not see any way. The following are the two Dapper Extensions extension methods for deleting records:
public static bool Delete<T>(this IDbConnection connection, object predicate, IDbTransaction transaction = null, int? commandTimeout = default(int?)) where T : class;
public static bool Delete<T>(this IDbConnection connection, T entity, IDbTransaction transaction = null, int? commandTimeout = default(int?)) where T : class;
Neither accepts a list of objects: one accepts a predicate and the other accepts a single object.

GAE datastore inequality filter two properties advice

I've got a scenario where I need to query the datastore for some random users who have been active in the last X minutes.
Each of my User entities has a property called 'random'. When I want to find some random users, I generate a random min and max value and use them to query the datastore against the users' random property.
This is what I've got so far:
public static List<Entity> getRandomUsers(Key filterKey, String gender, String language, int maxResults) {
    ArrayList<Entity> nonDuplicateEntities = new ArrayList<>();
    HashSet<Entity> hashSet = new HashSet<>();
    int attempts = 0;
    while (nonDuplicateEntities.size() < maxResults) {
        attempts++;
        if (attempts >= 10) {
            return nonDuplicateEntities;
        }
        double ran1 = Math.random();
        double ran2 = Math.random();
        Filter randomMinFilter = new Query.FilterPredicate(Constants.KEY_RANDOM, Query.FilterOperator.GREATER_THAN_OR_EQUAL, Math.min(ran1, ran2));
        Filter randomMaxFilter = new Query.FilterPredicate(Constants.KEY_RANDOM, Query.FilterOperator.LESS_THAN_OR_EQUAL, Math.max(ran1, ran2));
        Filter languageFilter = new Query.FilterPredicate(Constants.KEY_LANGUAGE, Query.FilterOperator.EQUAL, language);
        Filter randomRangeFilter;
        if (gender == null || gender.equals(Constants.GENDER_ANY)) {
            randomRangeFilter = Query.CompositeFilterOperator.and(randomMinFilter, randomMaxFilter, languageFilter);
        } else {
            Filter genderFilter = new Query.FilterPredicate(Constants.KEY_GENDER, Query.FilterOperator.EQUAL, gender);
            randomRangeFilter = Query.CompositeFilterOperator.and(randomMinFilter, randomMaxFilter, genderFilter, languageFilter);
        }
        Query q = new Query(Constants.KEY_USER_CLASS).setFilter(randomRangeFilter);
        PreparedQuery pq = DatastoreServiceFactory.getDatastoreService().prepare(q);
        List<Entity> entities = pq.asList(FetchOptions.Builder.withLimit(maxResults - nonDuplicateEntities.size()));
        for (Entity entity : entities) {
            if (filterKey.equals(entity.getKey())) {
                continue;
            }
            if (hashSet.add(entity)) {
                nonDuplicateEntities.add(entity);
            }
            if (nonDuplicateEntities.size() == maxResults) {
                return nonDuplicateEntities;
            }
        }
    }
    return nonDuplicateEntities;
}
I now need just the users who have been active recently.
Each User entity also has a 'last active' property, which I want to include in the query, e.g. last active > 30 minutes ago.
This would mean having inequality filters on two different properties, which I can't do.
What is the most efficient way to do this?
I could get all the user entities active in the last X minutes and then pick some random ones, or I could leave my code as-is and check last active before adding entities to the non-duplicate list, but that might involve lots of calls to the datastore.
Is there some other way I can do this just using the query?
Given the comments above, here is one approach, as requested.
Assuming you have a 'last active' property that stores a datetime stamp, you can perform a keys-only query where last active > the datetime stamp of interest.
On retrieving the keys, make a random choice from the result set, then explicitly fetch the chosen key with a get operation. This limits the cost to small ops plus a single get.
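A minimal sketch of that idea with the low-level datastore API; the kind name, property name, and the 1000-key limit are all assumptions:
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
long cutoff = System.currentTimeMillis() - 30 * 60 * 1000L; // 30 minutes ago
Query q = new Query("User")
        .setFilter(new Query.FilterPredicate("lastActive",
                Query.FilterOperator.GREATER_THAN, cutoff))
        .setKeysOnly();
List<Entity> keysOnly = datastore.prepare(q)
        .asList(FetchOptions.Builder.withLimit(1000));
if (!keysOnly.isEmpty()) {
    Key randomKey = keysOnly.get(new Random().nextInt(keysOnly.size())).getKey();
    try {
        Entity randomUser = datastore.get(randomKey); // one get for the full entity
    } catch (EntityNotFoundException e) {
        // The user was deleted between the query and the get; retry or skip.
    }
}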
I would then consider caching this set of keys in memcache with a defined expiry period, so that if you need another random choice in the next nominated period you can re-use the set of keys rather than re-querying two seconds later. Accuracy doesn't appear to be too important given the random choice.
If you do adopt the caching strategy, you have to deal with cache expiry and refreshing the cache.
A potential issue here is the dogpile effect, where multiple requests all fail to find the cache at the same time and each handler starts rebuilding it. In a lightly loaded system this may not be an issue; in a heavily loaded system with a lot of activity, you may want to keep the cache warm with a task. Just something to think about.
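A sketch of the caching piece with the App Engine memcache API; the cache key and the two-minute expiry are assumptions:
MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();
String cacheKey = "recently-active-user-keys"; // hypothetical cache key

@SuppressWarnings("unchecked")
List<Key> userKeys = (List<Key>) memcache.get(cacheKey);
if (userKeys == null) {
    // Cache miss: run the keys-only query above, then cache just the keys.
    userKeys = new ArrayList<>();
    for (Entity e : keysOnly) {
        userKeys.add(e.getKey());
    }
    memcache.put(cacheKey, userKeys, Expiration.byDeltaSeconds(120));
}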

fire a trigger on existing data?

Setup:
A --< B >-- C. On A there is an RFS (rollup summary field) on B, and there is an after-update trigger that, when run, populates fields on B. One of B's fields is then rolled up into a field on C.
Question:
The trigger works, but I need to run it on the existing records in the DB to bring everything up to date. How do I do that? I already tried running a 'force mass recalculation' with the RFS on A and C.
You could write a rather simple Batch Apex job class (see the docs) to touch all the records you want to recalculate:
global class TouchRecords implements Database.Batchable<sObject> {

    private String query;

    global TouchRecords(String query) {
        this.query = query;
    }

    global Database.QueryLocator start(Database.BatchableContext BC) {
        return Database.getQueryLocator(query);
    }

    global void execute(Database.BatchableContext BC, List<sObject> scope) {
        update scope;
    }

    global void finish(Database.BatchableContext BC) {
    }
}
The job can then be executed by running the following (for example via Execute Anonymous):
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from A__c'));
or
Id batchInstanceId = Database.executeBatch(new TouchRecords('select id from Contact'));
to touch all contacts
This should run the trigger on all the records being touched (batch Apex supports a maximum of 50 million records). It's the same idea as the proposed Data Loader solution, but kept inside SFDC for easier reuse.
Found a work-around:
use the Data Loader to export the Ids and then run an update with them. This will change the last modified date and cause the after-update trigger to fire.
A better approach is to create a dummy field on the object and then do a bulk update on that field. There could be further issues with this, so you may want to reduce the batch size so that you do not hit a SOQL governor limit if doing any DML.

Query Google DataStore

I have the following Objectify entity for storing data in the Google Datastore.
@Entity
public class Record implements Serializable {
    private static final long serialVersionUID = 201203171843L;

    @Id
    private Long id;
    private String name;           // John Smith
    private Integer age;           // 43
    private String gender;         // Male/Female
    private String eventName;      // Name of the marathon/event
    private String eventCityName;  // City of the event
    private String eventStateName; // State of the event
    private Date eventDate;        // event date

    // Getters & setters
}
Now, my question is: how can I query my database to get a count of Records for a given eventName, or for an event City+State? Or get a list of all City+Name combinations?
On App Engine counting is very expensive: you basically need to query with a certain condition (eventName = something) and then count all the results. The cost is a keys-only query (1 read + 1 small operation) and it increases with the number of entities counted. For example, counting 1 million entities would cost $0.8.
What is normally done instead is to keep the count as a property on a dedicated counter entity: increase the value when the count goes up (an entity is added) and decrease it when the count goes down (an entity is deleted).
If you plan to do this on a larger scale, understand that there is a write/update limitation of about 5 writes/s per entity (per entity group, actually). See sharded counters for how to work around this.
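A minimal, unsharded sketch of such a counter with Objectify; the entity, field, and method names are all assumptions, and the transaction keeps concurrent increments from being lost:
@Entity
public class EventCounter {
    @Id private String eventName; // one counter per event name, an assumption
    private long count;

    private EventCounter() {} // no-arg constructor required by Objectify
    public EventCounter(String eventName) { this.eventName = eventName; }
    public void increment(long delta) { count += delta; }
    public long getCount() { return count; }
}

// Sketch: bump the counter transactionally whenever a Record is saved.
public static void saveRecordAndCount(final Record record) {
    ofy().save().entity(record).now();
    ofy().transact(new VoidWork() {
        public void vrun() {
            Key<EventCounter> key = Key.create(EventCounter.class, record.getEventName());
            EventCounter counter = ofy().load().key(key).now();
            if (counter == null) {
                counter = new EventCounter(record.getEventName());
            }
            counter.increment(1);
            ofy().save().entity(counter).now();
        }
    });
}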
