I wonder if my case is a consistency issue...
I have an entity class Player with a field lastAttackDate. I set lastAttackDate = sysdate in a transaction, commit that transaction, and then query for players with lastAttackDate < sysdate - 10min (please see the simplified code).
private static final Logger log = Logger.getLogger(MyEndpoint.class.getName());

@ApiMethod(name = "attack")
public MyResult attack(@Named("id") Long id, User user) throws Exception, OAuthRequestException, IOException {
    PersistenceManager mgr = null;
    Player defender = null;
    Transaction tx = null;
    try {
        mgr = getPersistenceManager();
        tx = mgr.currentTransaction();
        tx.begin();
        long now = new Date().getTime();
        defender = getPlayer(mgr, id); // I get the player by the id parameter
        defender.setLastAttackDate(now);
        tx.commit(); // mgr.close(); mgr = getPersistenceManager(); --> I tried that, it did not help
        long param = now - 10 * 60 * 1000; // sysdate - 10 minutes
        Query q = mgr.newQuery("select from " + Player.class.getName()
                + " where lastAttackDate < lastAttackDateParam parameters long lastAttackDateParam");
        Set<Player> result = new HashSet<Player>();
        result.addAll((List<Player>) q.execute(param));
        Iterator<Player> it = result.iterator();
        while (it.hasNext()) {
            Player p = it.next();
            if (p.getLastAttackDate() >= param) {
                log.log(Level.SEVERE, "It really just gave me a result that doesn't meet the criteria"); // defender (the recently updated player) falls in this category
                it.remove();
            }
        }
    }
    finally {
        if (tx != null && tx.isActive())
            tx.rollback();
        if (mgr != null)
            mgr.close();
        return null;
    }
}
What bothers me is that this query still gives me defender as a result. What bothers me even more is that if I iterate through the result and check whether each entry meets the criteria, I can see that defender does not. If it were a consistency issue, I'd expect getLastAttackDate() to return the old value, not the updated one, but it returns the right one, the most recent one. What am I doing wrong? What can I do to make it work?
At the moment I iterate through the result set and remove the entries that do not meet my search criteria, but that is expensive (reads, CPU time, and possibly an additional query to try again).
I have an Apex class whose purpose is to retrieve and delete overdue tasks on the contact role (related to an account) that the user just called. I need to modify it so that it queries for all overdue tasks assigned to the user on ALL contact roles on the account, but I am struggling to get the right query.
Here is a portion of the code in question, the part I think is most relevant:
/***************************************************
Brief Description: Deletes overdue Tasks or Events.
****************************************************/
public class DSDenali_DeleteOverDueActivities {
private static List<Task> followUpTasksToDelete = new List<Task>();
private static List<Event> followUpEventsToDelete = new List<Event>();
private static Map<Id, Set<String>> ownerOfActivities = new Map<Id, Set<String>>();
@InvocableMethod (label = 'DialSource Denali Delete Overdue Activities')
public static void gatherData(List<Data> requests)
{
Map<Id, String> results = new Map<Id, String>();
for (Data request : requests)
results.put(request.contactRoleID, request.assignedTo);
for (Id key : results.keySet())
{
Set<String> assignedToValues = parseAssignedTo(results.get(key));
System.debug('assignedToValues: ' + assignedToValues);
ownerOfActivities.put(key, assignedToValues);
System.debug(ownerOfActivities);
}
queryAndFilterData();
deleteOverdueActivities();
}
//Query for the Tasks and Events and filter the ones to delete
private static void queryAndFilterData()
{
List<Contact_Role__c> contactRoles = [SELECT Id,
(SELECT Id, Owner.UserRole.Name, OwnerId FROM Tasks WHERE status != 'Completed' AND ActivityDate <= :system.TODAY() AND Type__c = 'Outbound Call'),
(SELECT Id, Owner.UserRole.Name, OwnerId, Description FROM Events WHERE EndDateTime <= :system.NOW())
FROM Contact_Role__c
WHERE Id IN :ownerOfActivities.keySet()];
for (Contact_Role__c contactRole : contactRoles)
{
for (Task currentTask : contactRole.Tasks)
{
if (ownerOfActivities.get(contactRole.Id).contains(currentTask.OwnerId))
{
if (currentTask.OwnerId != '0050B000006ET37' && currentTask.Owner.UserRole != NULL && Pattern.matches('.*Altair.*', currentTask.Owner.UserRole.Name))
followUpTasksToDelete.add(currentTask);
else if (currentTask.OwnerId == '0050B000006ET37')
followUpTasksToDelete.add(currentTask);
else
continue;
}
else if (ownerOfActivities.get(contactRole.Id).contains('ALL'))
{
if (currentTask.Owner.UserRole != NULL && Pattern.matches('.*Altair.*', currentTask.Owner.UserRole.Name))
followUpTasksToDelete.add(currentTask);
else
continue;
}
}
for (Event currentEvent : contactRole.Events)
{
if (ownerOfActivities.get(contactRole.Id).contains(currentEvent.OwnerId) && currentEvent.Description == NULL)
{
if (currentEvent.OwnerId != '0050B000006ET37' && currentEvent.Owner.UserRole != NULL && Pattern.matches('.*Altair.*', currentEvent.Owner.UserRole.Name))
followUpEventsToDelete.add(currentEvent);
else if (currentEvent.OwnerId == '0050B000006ET37')
followUpEventsToDelete.add(currentEvent);
else
continue;
}
else if (ownerOfActivities.get(contactRole.Id).contains('ALL') && currentEvent.Description == NULL)
{
if (currentEvent.Owner.UserRole != NULL && Pattern.matches('.*Altair.*', currentEvent.Owner.UserRole.Name))
followUpEventsToDelete.add(currentEvent);
else
continue;
}
}
}
}
//Delete overdue Events/Tasks
private static void deleteOverdueActivities()
{
try{
delete followUpTasksToDelete;
}
catch (DmlException e){
System.debug('The following error occured (DSDenali_DeleteOverDueActivities): ' + e);
}
try{
delete followUpEventsToDelete;
}
catch (DmlException e){
System.debug('The following error occured (DSDenali_DeleteOverDueActivities): ' + e);
}
}
//Parse the CSVs of possible owners
private static Set<String> parseAssignedTo(String assignedTo)
{
Set<String> assignedToValues = new Set<String>();
assignedToValues.addAll(assignedTo.deleteWhitespace().split(','));
return assignedToValues;
}
public class Data
{
@InvocableVariable (required=true)
public String assignedTo;
@InvocableVariable (required=false)
public Id contactRoleID;
}
}
(updated after OP posted more code and asked for code review)
It's not bad code, though it could use some comments. Consider posting it on https://codereview.stackexchange.com/ (although not many SF-related posts end up there) or on https://salesforce.stackexchange.com
gatherData()
Your input variable (after some parsing) is Map<Id, Set<String>>, where the key is the contact role's id. That set of strings for users (owners) is a bit misleading. At a glance you immediately ask yourself why it can't be Set<Id>. Only deep down in the code do you see that apparently "ALL" is one of the allowed values. This is... not great. I'd be tempted to make two methods here, one taking a legit Map<Id, Set<Id>> and another taking simply Set<Id> if you know that you're effectively skipping the second parameter.
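A rough sketch of that split (the method names here are mine, not from your code):
// Hypothetical refactor: one entry point per shape of input.
private static void gatherData(Map<Id, Set<Id>> ownersByContactRole) {
    // delete only activities whose OwnerId is listed for that contact role
}
private static void gatherDataForAllOwners(Set<Id> contactRoleIds) {
    // the old "ALL" behaviour: any owner on these contact roles qualifies
}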
queryAndFilterData()
You have only one query and it's not in a loop, which is very good. My trick (from before the edit) won't work for you: you don't have the Account__c field (or however it is called) anywhere in the input, you only have record ids. If you want to check / delete all tasks for roles under this account, the cleanest approach might be to use two queries:
// 1st the helper to collect all accounts...
Set<Id> allAccounts = new Map<Id, Account>([SELECT Id
    FROM Account
    WHERE Id IN (SELECT Account__c FROM Contact_Role__c WHERE Id IN :ownerOfActivities.keySet())]).keySet();
// then the outline of the main query would be a bit like this
SELECT Id,
(SELECT ... FROM Tasks WHERE ...),
(SELECT ... FROM Events WHERE ...)
FROM Contact_Role__c
WHERE Account__c IN :allAccounts
AND ...
I'd check how much of this filtering logic could be pushed into the query itself rather than manually inspecting each returned row. I mean, look at it:
Imagine we're going the simple route (ignoring the concept of "ALL" users) and let's say you have another Set<Id> allUsers; variable, composed of all the ids mentioned in all your data pieces.
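Building that helper set could look like this (a sketch, using your ownerOfActivities map and skipping the special "ALL" marker):
// Flatten every owner id mentioned in the input into one set.
Set<Id> allUsers = new Set<Id>();
for (Set<String> owners : ownerOfActivities.values()) {
    for (String owner : owners) {
        if (owner != 'ALL') {
            allUsers.add(Id.valueOf(owner));
        }
    }
}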
The query for Tasks could become something as simple as
(SELECT Id
FROM Tasks
WHERE Status != 'Completed'
AND ActivityDate <= TODAY
AND Type__c = 'Outbound Call'
AND OwnerId IN :allUsers
AND (OwnerId = '0050B000006ET37' OR Owner.UserRole.Name LIKE '%Altair%')
)
You still have to loop through them to verify whether each can really be deleted (it's not enough to match out of all users; it also has to check whether it's OK for the user on this particular Contact_Role__c, right?), but something like that should return fewer rows and do no more regular expression matching... it should be a bit faster.
I wouldn't have a magic variable for that one special owner's id like that. Ideally there would be something else that describes this special user (role? profile? custom field on the User record? permission to "Author Apex" in the profile?). At the very least, move it to a helper Id variable at the top of the file so it's not copy-pasted all over the place. And ask your business user what should happen if that guy (default task owner? some integration account?) ever leaves the company, because poo poo will hit the propeller hard.
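For example (the constant name is made up):
// One place to change it, and one place to document who this user actually is.
private static final Id SPECIAL_OWNER_ID = Id.valueOf('0050B000006ET37');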
If you get comfortable with this version of the query, the "ALL" version becomes even simpler: no "all users", no "magic id", job done:
(SELECT Id
FROM Tasks
WHERE Status != 'Completed'
AND ActivityDate <= TODAY
AND Type__c = 'Outbound Call'
AND Owner.UserRole.Name LIKE '%Altair%'
)
Don't trust a random guy on the internet who doesn't know your business process, but yeah, there's some room for improvement. Just test thoroughly :)
deleteOverdueActivities()
This is not a great try-catch pattern. You just write the error to the debug log but silently swallow it. Make it fail hard (let the error bubble up to the user) or do some proper error handling, like inserting something into a helper Log__c object, or sending an email / Chatter post to an admin...
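A minimal sketch of the logging variant (Log__c and its fields are invented here, not a real object in your org):
try {
    delete followUpTasksToDelete;
} catch (DmlException e) {
    insert new Log__c(
        Source__c  = 'DSDenali_DeleteOverDueActivities',
        Message__c = e.getMessage()
    );
    // Note: rethrowing after the insert would roll the Log__c record back
    // with the rest of the transaction; an email or platform event avoids that.
}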
parseAssignedTo()
Not bad. I expect it explodes horribly when you pass null to it. You're kind of protected from that by marking the variable required in the last lines, but I think that annotation applies only to Flows. Calling it from other Apex code isn't protected enough.
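A simple guard is enough (sketch):
private static Set<String> parseAssignedTo(String assignedTo) {
    Set<String> assignedToValues = new Set<String>();
    if (String.isBlank(assignedTo)) {
        return assignedToValues; // nothing to parse; callers get an empty set
    }
    assignedToValues.addAll(assignedTo.deleteWhitespace().split(','));
    return assignedToValues;
}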
I want three values: aggValueInLastHour, aggValueInLastDay, and aggValueInLastThreeDay.
I've tried the approach below.
But I don't want to wait; that is, I'd prefer not to use a sliding window to do the aggregation. (A 3-day window must wait for three days of data, which is unbearable for our system.)
How can I get the last-3-day aggregation value as soon as the first event comes?
Thanks for any advice in advance!
If you want to get more frequent updates you can use Queryable State, polling the state at a rate that suits your use case.
You can make use of the ContinuousEventTimeTrigger, which will cause your window to fire on a shorter time period than the full window, allowing you to see the intermediate state. You can optionally wrap that in a PurgingTrigger if the downstream consumers of your sink expect each output to be a partial aggregation (rather than the full current state) and sum the outputs up.
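A minimal sketch of that idea, reusing the keying from your snippet (the 3-day window size, the 10-minute firing interval, and the MyAggregate function are assumptions, not from your code):
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger;
import org.apache.flink.streaming.api.windowing.triggers.PurgingTrigger;

// A 3-day event-time window that fires every 10 minutes, so you see an
// updated aggregate long before the full three days have elapsed.
dataStream
    .keyBy(e -> e.getEventType() + e.getDeviceFp())
    .window(TumblingEventTimeWindows.of(Time.days(3)))
    .trigger(ContinuousEventTimeTrigger.of(Time.minutes(10)))
    // Wrap in PurgingTrigger if downstream sums partial results instead of
    // taking the latest value:
    // .trigger(PurgingTrigger.of(ContinuousEventTimeTrigger.of(Time.minutes(10))))
    .aggregate(new MyAggregate()); // MyAggregate: an assumed AggregateFunction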
I've tried CEP.
code:
AfterMatchSkipStrategy strategy = AfterMatchSkipStrategy.skipShortOnes();
Pattern<RiskEvent, ?> loginPattern = Pattern.<RiskEvent>begin("start", strategy)
.where(eventTypeCondition)
.timesOrMore(1)
.greedy()
.within(Time.hours(1));
KeyedStream<RiskEvent, String> keyedStream = dataStream.keyBy(new KeySelector<RiskEvent, String>() {
@Override
public String getKey(RiskEvent riskEvent) throws Exception {
// key by event type + device fingerprint for aggregation
return riskEvent.getEventType() + riskEvent.getDeviceFp();
}
});
PatternStream<RiskEvent> eventPatternStream = CEP.pattern(keyedStream, loginPattern);
eventPatternStream.select(new PatternSelectFunction<RiskEvent, RiskResult>() {
@Override
public RiskResult select(Map<String, List<RiskEvent>> map) throws Exception {
List<RiskEvent> list = map.get("start");
ArrayList<Long> times = new ArrayList<>();
for (RiskEvent riskEvent : list) {
times.add(riskEvent.getEventTime());
}
Long min = Collections.min(times);
Long max = Collections.max(times);
Set<String> accountList = list.stream().map(RiskEvent::getUserName).collect(Collectors.toSet());
logger.info("Time range: " + new Date(min) + " --- " + new Date(max) + ", event: " + list.get(0).getEventType() + ", device fingerprint: " + list.get(0).getDeviceFp() + ", associated accounts: " + accountList.toString());
return null;
}
});
Maybe you noticed that the skip strategy skipShortOnes is a customized strategy.
Here are my modifications to the CEP lib.
Add the strategy to the enum:
public enum SkipStrategy{
NO_SKIP,
SKIP_PAST_LAST_EVENT,
SKIP_TO_FIRST,
SKIP_TO_LAST,
SKIP_SHORT_ONES
}
Add an access method in AfterMatchSkipStrategy.java:
public static AfterMatchSkipStrategy skipShortOnes() {
return new AfterMatchSkipStrategy(SkipStrategy.SKIP_SHORT_ONES);
}
Add the strategy's actions in the discardComputationStatesAccordingToStrategy method in NFA.java:
case SKIP_SHORT_ONES:
    int i = 0;
    List<Map<String, List<T>>> tempResult = new ArrayList<>(matchedResult);
    for (Map<String, List<T>> resultMap : tempResult) {
        if (i++ == 0) {
            continue;
        }
        matchedResult.remove(resultMap);
    }
    break;
I am at 71% coverage; 4 lines of code are not run by the test for some reason.
When I test it myself in Salesforce, it works (those lines of code do run).
How can I get these lines of code to run in the test?
Lines not running, in the second for loop:
nextId=Integer.Valueof(c.next_id__c);
Lines not running, in the third for loop:
btnRecord.next_id__c = newid + 1;
btnRecord.last_id__c = newId;
btnRecord.last_assigned_starting_id__c = nextId;
btnRecord.last_assigned_ending_id__c = newId;
Below is my code:
trigger getNextId on tracking__c (before insert, before update) {
Integer newId;
Integer lastId;
Integer nextId;
newId=0;
lastId=0;
nextId =0;
//add the total accounts to the last_id
for (tracking__c bt: Trigger.new) {
//get the next id
List<tracking_next_id__c> btnxtid = [SELECT next_id__c FROM tracking_next_id__c];
for (tracking_next_id__c c : btnxtid )
{
nextId=Integer.Valueof(c.next_id__c);
}
newId = Integer.Valueof(bt.total_account__c) + nextId;
bt.starting_id__c = nextId;
bt.ending_id__c = newId;
tracking_next_id__c[] nextIdToUpdate = [SELECT last_id__c, next_id__c, last_assigned_starting_id__c, last_assigned_ending_id__c FROM tracking_next_id__c];
for(tracking_next_id__c btnRecord : nextIdToUpdate ){
btnRecord.next_id__c = newid + 1;
btnRecord.last_id__c = newId;
btnRecord.last_assigned_starting_id__c = nextId;
btnRecord.last_assigned_ending_id__c = newId;
}
update nextIdToUpdate ;
}
}
Even though test coverage is increased by using seeAllData=true, it is not best practice to use seeAllData unless it is really required. Please see the blog here for details.
Another way to increase the coverage is by creating test data for the tracking_next_id__c object.
@isTest
private class getNextIdTest {
static testMethod void validateOnInsert(){
tracking_next_id__c c = new tracking_next_id__c(next_id__c='Your next_id',
last_id__c='Your last_id', last_assigned_starting_id__c='Your last_assigned_starting_id',
last_assigned_ending_id__c='last_assigned_ending_id');
insert c;
tracking__c b = new tracking__c(total_account__c=Integer.Valueof(99));
System.debug('before insert : ' + b.total_account__c);
insert b;
System.debug('after insert : ' + b.total_account__c);
List<tracking__c> customObjectList =
[SELECT total_account__c FROM tracking__c ];
for(tracking__c ont : customObjectList){
ont.total_account__c = 5;
}
update customObjectList;
}
}
I have added the lines below so that, when the two queries get executed before the FOR loops (which were not covered previously), they will fetch data, since we now insert it in the test class:
tracking_next_id__c c = new tracking_next_id__c(next_id__c='Your next_id',
last_id__c='Your last_id', last_assigned_starting_id__c='Your last_assigned_starting_id',
last_assigned_ending_id__c='last_assigned_ending_id');
insert c;
Just an observation: it is best to avoid a SOQL query inside a FOR loop, to avoid a runtime exception (Too many SOQL queries: 101).
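For example, the tracking_next_id__c lookups in the trigger could move out of the record loop (a rough sketch, not tested against your org; note it also has to advance the counter between records in the same batch):
trigger getNextId on tracking__c (before insert, before update) {
    // Query once, outside the record loop, instead of once per record.
    List<tracking_next_id__c> nextIdRecords =
        [SELECT last_id__c, next_id__c, last_assigned_starting_id__c,
                last_assigned_ending_id__c
         FROM tracking_next_id__c];

    Integer nextId = 0;
    for (tracking_next_id__c c : nextIdRecords) {
        nextId = Integer.valueOf(c.next_id__c);
    }

    for (tracking__c bt : Trigger.new) {
        Integer newId = Integer.valueOf(bt.total_account__c) + nextId;
        bt.starting_id__c = nextId;
        bt.ending_id__c = newId;
        for (tracking_next_id__c btnRecord : nextIdRecords) {
            btnRecord.next_id__c = newId + 1;
            btnRecord.last_id__c = newId;
            btnRecord.last_assigned_starting_id__c = nextId;
            btnRecord.last_assigned_ending_id__c = newId;
        }
        // Advance the counter so the next record in this batch gets fresh ids.
        nextId = newId + 1;
    }
    // Single DML statement, outside the loop.
    update nextIdRecords;
}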
@isTest
private class getNextIdTest {
static testMethod void validateOnInsert(){
tracking__c b = new tracking__c(total_account__c=Integer.Valueof(99));
System.debug('before insert : ' + b.total_account__c);
insert b;
System.debug('after insert : ' + b.total_account__c);
List<tracking__c> customObjectList =
[SELECT total_account__c FROM tracking__c ];
for(tracking__c ont : customObjectList){
ont.total_account__c = 5;
}
update customObjectList;
}
}
Added @isTest(SeeAllData=true) and it moved to 100%
https://developer.salesforce.com/forums/ForumsMain?id=9060G000000I5f8
I am using dtSearch in combination with a SQL database and would like to maintain a table that includes all DocIds and their related FileNames. From there, I will add a column with my foreign key to allow me to combine text and database searches.
I have code to simply return all the records in the index and add them one by one to the DB. This, however, takes FOREVER, and doesn't address the issue of how to simply append new records as they are added to the index. But just in case it helps:
MyDatabaseContext db = new StateScapeEntities();
IndexJob ij = new dtSearch.Engine.IndexJob();
ij.IndexPath = @"d:\myindex";
IndexInfo indexInfo = dtSearch.Engine.IndexJob.GetIndexInfo(@"d:\myindex");
bool jobDone = ij.Execute();
SearchResults sr = new SearchResults();
uint n = indexInfo.DocCount;
for (int i = 1; i <= n; i++)
{
sr.AddDoc(ij.IndexPath, i, null);
}
for (int i = 1; i <= n; i++)
{
sr.GetNthDoc(i - 1);
//IndexDocument is defined elsewhere
IndexDocument id = new IndexDocument();
id.DocId = sr.CurrentItem.DocId;
id.FilePath = sr.CurrentItem.Filename;
if (id.FilePath != null)
{
db.IndexDocuments.Add(id);
db.SaveChanges();
}
}
To keep the DocIds in the index you must use the flag dtsIndexKeepExistingDocIds in the IndexJob.
You can also check the dtSearch Text Retrieval Engine Programmer's Reference for when the DocId is changed:
When a document is added to an index, it is assigned a DocId, and DocIds are always numbered sequentially.
When a document is reindexed, the old DocId is cancelled and a new DocId is assigned.
When an index is compressed, all DocIds in the index are renumbered to remove the cancelled DocIds unless the dtsIndexKeepExistingDocIds flag is set in IndexJob.
When an index is merged into another index, DocIds in the target index are never changed. The documents merged into the target index will all be assigned new, sequentially-numbered DocIds, unless (a) the dtsIndexKeepExistingDocIds flag is set in IndexJob and (b) the indexes have non-overlapping ranges of doc ids.
To improve your speed you can search for the word "xfirstword", which matches every document in an index.
You can also look at the FAQ How to retrieve all documents in an index.
So, I used part of user2172986's response, but combined it with some additional code to get the solution to my question. I did indeed have to set the dtsIndexKeepExistingDocIds flag in my index update routine.
From there, I only wanted to add the newly created DocIds to my SQL database. For that, I used the following code:
string indexPath = @"d:\myindex";
using (IndexJob ij = new dtSearch.Engine.IndexJob())
{
//make sure the updated index doesn't change DocIds
ij.IndexingFlags = IndexingFlags.dtsIndexKeepExistingDocIds;
ij.IndexPath = indexPath;
ij.ActionAdd = true;
ij.FoldersToIndex.Add( indexPath + "<+>");
ij.IncludeFilters.Add( "*");
bool jobDone = ij.Execute();
}
//create a DataTable to hold results
DataTable newIndexDoc = MakeTempIndexDocTable(); //this is a custom method not included in this example; just creates a DataTable with the appropriate columns
//connect to the DB;
MyDataBase db = new MyDataBase(); //again, custom code not included - link to EntityFramework entity
//get the last DocId in the DB
int lastDbDocId = db.IndexDocuments.OrderByDescending(i => i.DocId).FirstOrDefault().DocId;
//get the last DocId in the Index
IndexInfo indexInfo = dtSearch.Engine.IndexJob.GetIndexInfo(indexPath);
uint latestIndexDocId = indexInfo.LastDocId;
//create a searchFilter
dtSearch.Engine.SearchFilter sf = new SearchFilter();
int indexId = sf.AddIndex(indexPath);
//only select new records (from one greater than the last DocId in the DB to the last DocId in the index itself
sf.SelectItems(indexId, lastDbDocId + 1, int.Parse(latestIndexDocId.ToString()), true);
using (SearchJob sj = new dtSearch.Engine.SearchJob())
{
sj.SetFilter(sf);
//return every document in the specified range (using xfirstword)
sj.Request = "xfirstword";
// Specify the path to the index to search here
sj.IndexesToSearch.Add(indexPath);
//additional flags and limits redacted for clarity
sj.Execute();
// Store the error message in the status
//redacted for clarity
SearchResults results = sj.Results;
int startIdx = 0;
int endIdx = results.Count;
if (startIdx==endIdx)
return;
for (int i = startIdx; i < endIdx; i++)
{
results.GetNthDoc(i);
IndexDocument id = new IndexDocument();
id.DocId = results.CurrentItem.DocId;
id.FileName = results.CurrentItem.Filename;
if (id.FileName != null)
{
DataRow row = newIndexDoc.NewRow();
row["DocId"] = id.DocId;
row["FileName"] = id.FileName;
newIndexDoc.Rows.Add(row);
}
}
newIndexDoc.AcceptChanges();
//SqlBulkCopy
using (SqlConnection connection =
new SqlConnection(db.Database.Connection.ConnectionString))
{
connection.Open();
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
{
bulkCopy.DestinationTableName =
"dbo.IndexDocument";
try
{
// Write from the source to the destination.
bulkCopy.WriteToServer(newIndexDoc);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
}
}
newIndexDoc.Clear();
db.UpdateIndexDocument();
}
Here is my new solution with the AddDoc method from the SearchResults interface:
First get the StartingDocID and the LastDocID from the IndexInfo and walk the loop like this:
function GetFilename(paDocID: Integer): String;
var
lCOMSearchResults: ISearchResults;
lSearchResults_Count: Integer;
begin
if Assigned(prCOMServer) then
begin
lCOMSearchResults := prCOMServer.NewSearchResults as ISearchResults;
lCOMSearchResults.AddDoc(GetIndexPath(prIndexContent), paDocID, 0);
lSearchResults_Count := lCOMSearchResults.Count;
if lSearchResults_Count = 1 then
begin
lCOMSearchResults.GetNthDoc(0);
Result := lCOMSearchResults.DocDetailItem['_Filename'];
end;
end;
end;
I have a small Android app, and currently I am firing a SQL statement in Android to get the count of rows in the database for a specific where clause.
Following is my sample code:
public boolean exists(Balloon balloon) {
if(balloon != null) {
Cursor c = null;
String count_query = "Select count(*) from balloons where _id = ?";
try {
c = getReadableDatabase().rawQuery(count_query, new String[] {balloon.getId()});
if (c.getCount() > 0)
return true;
} catch(SQLException e) {
Log.e("Running Count Query", e.getMessage());
} finally {
if(c!=null) {
try {
c.close();
} catch (SQLException e) {}
}
}
}
return false;
}
This method returns a count of 1 even when the database table is actually empty. I am not able to figure out why that would happen. Running the same query directly in the database gets me a count of 0.
I was wondering if it's possible to log, or see in a log, all the SQL queries that eventually get fired on the database after parameter substitution, so that I can understand what's going wrong.
Cheers
That query will always return one record (as will any SELECT COUNT). The reason is that the record it returns contains a field indicating how many records are present in the "balloons" table with that ID.
In other words, the one record (c.getCount() == 1) is NOT saying that the number of records found in balloons is one; rather, it is the record generated by SQLite, which contains a field with the result.
To find out the number of balloons, you should call c.moveToFirst() and then numberOfBalloons = c.getInt(0).
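Applied to the exists() method above, that looks roughly like this (a sketch):
public boolean exists(Balloon balloon) {
    if (balloon == null) return false;
    Cursor c = null;
    try {
        c = getReadableDatabase().rawQuery(
                "SELECT COUNT(*) FROM balloons WHERE _id = ?",
                new String[] { balloon.getId() });
        // The query always returns exactly one row; the count is in column 0.
        if (c.moveToFirst()) {
            return c.getInt(0) > 0;
        }
    } catch (SQLException e) {
        Log.e("Running Count Query", e.getMessage());
    } finally {
        if (c != null) c.close();
    }
    return false;
}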