I have a code segment in my Database.Batchable class:
if (pageToken != null) {
    deleteEvents(accessToken, pageToken, c.CalendarId__c);
}
and the deleteEvents function code is:
public static void deleteEvents(String accessToken, String pageToken, String calendarId) {
    while (pageToken != null) {
        String re = GCalendarUtil.doApiCall(null, 'GET',
            'https://www.googleapis.com/calendar/v3/calendars/' + calendarId + '/events?pageToken=' + pageToken,
            accessToken);
        System.debug('next page response is ' + re);
        // "end" is an Apex reserved word and "dateTime" a type name, so rename them before parsing
        re = re.replaceAll('"end":', '"end1":');
        re = re.replaceAll('"dateTime":', '"dateTime1":');
        JSON2Apex1 aa = JSON2Apex1.parse(re);
        List<JSON2Apex1.Items> ii = aa.items;
        System.debug('size of ii ' + ii.size());
        pageToken = aa.nextPageToken;
        List<String> eventIds = new List<String>();
        if (ii != null) {
            for (JSON2Apex1.Items i : ii) {
                eventIds.add(i.id);
            }
        }
        for (String ml : eventIds) {
            System.debug('Hello Worlds ' + ml);
        }
        for (String s : eventIds) {
            // one DELETE callout per event -- this is what exhausts the callout limit
            GCalendarUtil.doApiCall(null, 'DELETE',
                'https://www.googleapis.com/calendar/v3/calendars/' + calendarId + '/events/' + s,
                accessToken);
        }
    }
}
Because there are nearly 180 events, that's why I am getting this error. To delete them I have one option: create a custom object with a text field (the id of the event), then create one more Database.Batchable class for deleting them, passing batches of 9. Or is there any other method so that I am able to do more than 100 callouts within a Database.Batchable interface?
10 callouts per transaction is the max. And it doesn't seem like Google's Calendar API supports mass delete.
You could try chunking your batch job (the size of the list of records passed to each execute() call). For example something like this:
global Database.QueryLocator start(Database.BatchableContext bc){
return Database.getQueryLocator([SELECT Id FROM Event]); // anything that loops through Events and not say Accounts
}
global void execute(Database.BatchableContext BC, List<Event> scope){
// your deletes here
}
Database.executeBatch(new MyBatchClass(), 10); // this will make sure to work on 10 events (or less) at a time
Another option would be to read up on daisy-chaining of batches: in execute() you'd be deleting up to 10, and in the finish() method you check whether there's still more work to do and fire the batch again.
P.S. Don't use a hardcoded "10". Use Limits.getCallouts() and Limits.getLimitCallouts() so it will automatically update if Salesforce ever increases the limit.
In a trigger you could, for example, set up 10 @future methods, each with a fresh limit of 10 callouts... Still, a batch sounds like a better idea.
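Putting the chunking, daisy-chaining and Limits ideas together, a rough sketch might look like this (Event_To_Delete__c is a hypothetical staging object holding the Google event ids, along the lines proposed in the question):
global class DeleteEventsBatch implements Database.Batchable<SObject>, Database.AllowsCallouts {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id, Event_Id__c FROM Event_To_Delete__c]);
    }
    global void execute(Database.BatchableContext bc, List<Event_To_Delete__c> scope) {
        List<Event_To_Delete__c> processed = new List<Event_To_Delete__c>();
        for (Event_To_Delete__c e : scope) {
            // stop before exhausting this transaction's callout limit
            if (Limits.getCallouts() >= Limits.getLimitCallouts()) break;
            // GCalendarUtil.doApiCall(null, 'DELETE', ... + e.Event_Id__c, accessToken);
            processed.add(e);
        }
        delete processed; // clear the staging rows we actually called out for
    }
    global void finish(Database.BatchableContext bc) {
        // daisy-chain: if staging rows remain, fire the batch again
        if ([SELECT COUNT() FROM Event_To_Delete__c] > 0) {
            Database.executeBatch(new DeleteEventsBatch(), Limits.getLimitCallouts());
        }
    }
}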
I want to create an instance of a Room database in a Composable, but
val db = Room.databaseBuilder(applicationContext, UserDatabase::class.java,"users.db").build()
is not working here; I'm not getting applicationContext.
How do I create an instance of the context in a Composable?
Have you tried getting the context with val context = LocalContext.current and then adding this to get your applicationContext?
Like this: context.applicationContext, or simply using val db = Room.databaseBuilder(context, UserDatabase::class.java,"users.db").build()
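For example, a minimal sketch (UserDatabase is the class from your question; remember avoids rebuilding the database on every recomposition):
import androidx.compose.runtime.Composable
import androidx.compose.runtime.remember
import androidx.compose.ui.platform.LocalContext
import androidx.room.Room

@Composable
fun UsersScreen() {
    // LocalContext.current is only available inside a composable
    val context = LocalContext.current
    val db = remember {
        Room.databaseBuilder(
            context.applicationContext,
            UserDatabase::class.java,
            "users.db"
        ).build()
    }
    // ... use db here ...
}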
Room (and the underlying SQLiteOpenHelper) only need the context to open the database (or, more correctly, to instantiate the underlying SQLiteOpenHelper).
Room/Android SQLiteOpenHelper uses the context to ascertain the Application's standard (recommended) location (data/data/<the_package_name>/databases). e.g. in the following demo (via Device Explorer):-
The database, as it is still open, includes 3 files (the -wal and -shm files are the Write-Ahead Logging files that will at some time be committed/written to the actual database; SQLite handles that).
So, roughly speaking, Room only needs the context so that it can ascertain /data/data/a.a.so75008030kotlinroomgetinstancewithoutcontext/databases/testit.db (in the case of the demo).
So if you cannot use the applicationContext method, you can circumvent the need to provide the context, if using a singleton approach AND if accessing the instance after the singleton has been instantiated.
Perhaps consider this demo:-
First some pretty basic DB stuff (a table (@Entity annotated class), DAO functions and a @Database annotated abstract class WITH a singleton approach), BUT with some additional functions for accessing the instance without the context.
@Entity
data class TestIt(
    @PrimaryKey
    val testIt_id: Long? = null,
    val testIt_name: String
)
@Dao
interface DAOs {
    @Insert(onConflict = OnConflictStrategy.IGNORE)
    fun insert(testIt: TestIt): Long
    @Query("SELECT * FROM testit")
    fun getAllTestItRows(): List<TestIt>
}
@Database(entities = [TestIt::class], exportSchema = false, version = 1)
abstract class TestItDatabase: RoomDatabase() {
    abstract fun getDAOs(): DAOs

    companion object {
        private var instance: TestItDatabase? = null

        /* Extra/not typical for without a context (if wanted) */
        fun isInstanceWithoutContextAvailable(): Boolean {
            return instance != null
        }

        /******************************************************/
        /* Extra/not typical for without a context            */
        /******************************************************/
        fun getInstanceWithoutContext(): TestItDatabase? {
            if (instance != null) {
                return instance as TestItDatabase
            }
            return null
        }

        /* Typically the only function */
        fun getInstance(context: Context): TestItDatabase {
            if (instance == null) {
                instance = Room.databaseBuilder(context, TestItDatabase::class.java, "testit.db")
                    .allowMainThreadQueries() /* for convenience/brevity of demo */
                    .build()
            }
            return instance as TestItDatabase
        }
    }
}
And to demonstrate (within an activity for brevity):-
class MainActivity : AppCompatActivity() {
    private val TAG = "DBINFO" /* matches the log output below */
    lateinit var roomInstance: TestItDatabase
    lateinit var dao: DAOs

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        roomInstance = TestItDatabase.getInstance(this) /* MUST be used before the withoutContext functions, but could be elsewhere; shown here for brevity */
        dao = roomInstance.getDAOs()
        //dao.insert(TestIt(testIt_name = "New001")) /* Removed to test actually doing the database open without the context */
        logDataWithoutContext()
        addRowWithoutContext()
        addRowWithApplicationContext()
        logDataWithoutContext()
    }

    private fun logDataWithoutContext() {
        Log.d("${TAG}_LDWC", "Room DB Instantiated = ${TestItDatabase.isInstanceWithoutContextAvailable()}")
        for (t in TestItDatabase.getInstanceWithoutContext()!!.getDAOs().getAllTestItRows()) {
            Log.d("${TAG}_LDWC_DATA", "TestIt Name is ${t.testIt_name} ID is ${t.testIt_id}")
        }
    }

    private fun addRowWithoutContext() {
        Log.d("${TAG}_LDWC", "Room DB Instantiated = ${TestItDatabase.isInstanceWithoutContextAvailable()}")
        if (TestItDatabase.getInstanceWithoutContext()!!.getDAOs()
                .insert(TestIt(System.currentTimeMillis(), "NEW AS PER ID (the time to millis) WITHOUT CONTEXT")) > 0) {
            Log.d("${TAG}_ARWC_OK", "Row successfully inserted.")
        } else {
            Log.d("${TAG}_ARWC_OUCH", "Row was not successfully inserted (duplicate ID)")
        }
    }

    private fun addRowWithApplicationContext() {
        TestItDatabase.getInstance(applicationContext).getDAOs().insert(TestIt(System.currentTimeMillis() / 1000, "NEW AS PER ID (the time to seconds) WITH CONTEXT"))
    }
}
The resulting output in the log, showing that the database access worked either way:-
2023-01-05 12:45:39.020 D/DBINFO_LDWC: Room DB Instantiated = true
2023-01-05 12:45:39.074 D/DBINFO_LDWC: Room DB Instantiated = true
2023-01-05 12:45:39.077 D/DBINFO_ARWC_OK: Row successfully inserted.
2023-01-05 12:45:39.096 D/DBINFO_LDWC: Room DB Instantiated = true
2023-01-05 12:45:39.098 D/DBINFO_LDWC_DATA: TestIt Name is NEW AS PER ID (the time to seconds) WITH CONTEXT ID is 1672883139
2023-01-05 12:45:39.098 D/DBINFO_LDWC_DATA: TestIt Name is NEW AS PER ID (the time to millis) WITHOUT CONTEXT ID is 1672883139075
Note that the shorter id was the last added but appears first, due to it being selected first as it appears earlier in the index that the SQLite Query Optimiser would have used (aka the Primary Key).
Basically it is the same date/time second-wise, but the first insert included milliseconds whilst the insert via addRowWithApplicationContext drops the milliseconds.
I need to compare the previous session to averages from different sessions for the same user. I'm using MapState to keep the previous session, but somehow the MapState never contains any previous keys, so every session is new. Here's my code:
SessionIdentificationProcessFunction (this is a function that gathers all the events that belong to the same session):
static SingleOutputStreamOperator<SessionEvent> sessionUser(KeyedStream<Event, String> stream) {
return stream.window(EventTimeSessionWindows.withGap(Time.minutes(PropertyFileReader.getGAP_SECTION())))
.allowedLateness(Time.minutes(PropertyFileReader.getLATENCY_ALLOWED()))
.process(new SessionIdentificationProcessFunction<Event, SessionEvent, String, TimeWindow>() {
@Override
public void open(Configuration parameters) {
/*state configured to live just one day to avoid garbage accumulation*/
StateTtlConfig ttlConfig = StateTtlConfig
.newBuilder(org.apache.flink.api.common.time.Time.days(1))
.cleanupFullSnapshot()
.build();
MapStateDescriptor<String, SessionEvent> map_descriptor = new MapStateDescriptor<>("prevMapUserSession", String.class, SessionEvent.class);
map_descriptor.enableTimeToLive(ttlConfig);
previous_user_sessions_state = getRuntimeContext().getMapState(map_descriptor);
}
@Override
public SessionEvent generateSessionRecord(String s, Context context, Iterable<Event> elements) {
Comparator<Event> sortFunc = (o1, o2) -> ((o1.timestamp.before(o2.timestamp)) ? 0 : 1);
Event start = StreamSupport.stream(elements.spliterator(), false).max(sortFunc).orElse(new Event());
Event end = StreamSupport.stream(elements.spliterator(), false).max(sortFunc).orElse(new Event());
SessionEvent session_user = (end.timestamp.equals(Timestamp.from(Instant.EPOCH))) ? new SessionEvent(start) : new SessionEvent(end);
session_user.sessionEvents = StreamSupport.stream(elements.spliterator(), false).count();
session_user.sessionDuration = sd;
try {
if (previous_user_sessions_state.contains(s)) {
SessionEvent previous = previous_user_sessions_state.get(s);
/*Update values of the session with the values of the previous which never exist and delete the previous session in the map to create a new entry with the new values updated*/
previous_user_sessions_state.remove(s);
} else {
/*always get here and create a new session*/
}
previous_user_sessions_state.put(s, session_user);
} catch (Exception e) {
e.printStackTrace();
}
return session_user;
}
})
.name("User Sessions");
}
Without seeing how SessionIdentificationProcessFunction is implemented, I'm not sure exactly what's going wrong, but Flink's session windows are rather special, so it's not terribly surprising that this isn't working. Part of the problem is that any given session window has a very short lifetime before it is merged with another session window. (As each new event arrives it is initially assigned to its own session window, after which the set of all current session windows is processed and any possible merges are performed (based on the session gap).)
What I can recommend is: rather than using getRuntimeContext().getMapState(), use context.globalState().getMapState() instead (where context is the ProcessWindowFunction.Context passed to the process() method of a ProcessWindowFunction). This globalState is a KeyedStateStore meant for precisely this purpose -- keeping keyed state that is global/shared among all window instances for that key.
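A minimal sketch of that idea (Event and SessionEvent are your classes; the session-building line is a placeholder for your logic):
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class SessionCompareFunction extends ProcessWindowFunction<Event, SessionEvent, String, TimeWindow> {
    private final MapStateDescriptor<String, SessionEvent> descriptor =
            new MapStateDescriptor<>("prevMapUserSession", String.class, SessionEvent.class);

    @Override
    public void process(String user, Context context, Iterable<Event> elements,
                        Collector<SessionEvent> out) throws Exception {
        // globalState() is keyed state shared across all window instances for this key,
        // so it survives the merging that session windows go through
        MapState<String, SessionEvent> previousSessions = context.globalState().getMapState(descriptor);

        SessionEvent current = new SessionEvent(elements.iterator().next()); // your session-building logic goes here
        SessionEvent previous = previousSessions.get(user); // null for the first session of this user
        if (previous != null) {
            // compare current against previous here
        }
        previousSessions.put(user, current);
        out.collect(current);
    }
}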
I have Kafka messages with something like the following pattern:
{ user: 'someUser', value: 'SomeValue' , timestamp:000000000}
With a Flink stream calculation that does some count action on those items.
Now I want to declare a session, to collect the same user + value within a range of X seconds as a single item with the latest timestamp; then it will be forwarded to the next stream just one time.
So I wrote something like that:
data.assignTimestampsAndWatermarks(new AssignerWithPeriodicWatermarks<Data>() {
        .....
    })
    .keyBy(new KeySelector<Data, String>() {
        .......
    })
    .window(EventTimeSessionWindows.withGap(Time.minutes(10)))
    .aggregate(new AggregateFunction<Data, Data, Data>() {
        @Override
        public Data createAccumulator() {
            return null;
        }

        @Override
        public Data add(Data value, Data accumulator) {
            if (accumulator == null) {
                accumulator = value;
            }
            return accumulator;
        }

        @Override
        public Data getResult(Data accumulator) {
            return accumulator;
        }

        @Override
        public Data merge(Data a, Data b) {
            return a;
        }
    });
But the problem is that the getResult function is called for each element, not just at the end of the window.
My problem is how to avoid forwarding the aggregation result to the next stream until the end of the window. As far as I know, the process stream result is also forwarded when there are no more elements, even though the window hasn't ended yet.
Any advice?
Thanks
Flink provides two distinct approaches for evaluating windows. In this case you want to use the other one.
One approach evaluates each window's contents incrementally. This is what you get with reduce and aggregate. As elements are assigned to the window, the ReduceFunction or AggregateFunction is called and that element immediately makes its contribution to the final result.
The alternative is to use process with a ProcessWindowFunction. With this approach, the window isn't evaluated until the window is complete, at which point the ProcessWindowFunction is called once with an Iterable containing all of the elements that were assigned to the window. This has the disadvantage of needing to store all of the elements until the window is triggered, and if the ProcessWindowFunction has to do a lot of work to compute its result that can temporarily disrupt the pipeline, but some calculations need to be done this way -- like counting distinct elements.
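For your case that could look roughly like this (assuming a numeric timestamp field on Data and some gapSeconds value; names are illustrative):
stream
    .keyBy(d -> d.user + "|" + d.value)
    .window(EventTimeSessionWindows.withGap(Time.seconds(gapSeconds)))
    .process(new ProcessWindowFunction<Data, Data, String, TimeWindow>() {
        @Override
        public void process(String key, Context ctx, Iterable<Data> elements, Collector<Data> out) {
            Data latest = null;
            for (Data d : elements) { // runs once, when the window fires
                if (latest == null || d.timestamp > latest.timestamp) {
                    latest = d;
                }
            }
            out.collect(latest); // exactly one record per window
        }
    });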
See the documentation for more info.
I already have a Scheduled Apex class that works on 3 objects to do a basic query and update. I wanted to make this class a batch, but I am unable to add multiple objects in Batch Apex and loop around them.
Here is how my Scheduled Apex looks:
global class scheduleWorkday implements Schedulable {
    global void execute(SchedulableContext ctx) {
        // Get Accounts
        List<Account> getbdayaccount = [SELECT Id, Name, Address FROM Account WHERE State = 'CT'];
        if (!getbdayaccount.isEmpty()) {
            for (Account a : getbdayaccount) {
                a.Name = 'Test';
                a.State = 'NJ';
            }
            update getbdayaccount;
        }
        // Get Leads
        List<Lead> getPreApprovalFollow = [SELECT Id, Name, State, LeadSource FROM Lead WHERE State = 'CT'];
        if (!getPreApprovalFollow.isEmpty()) {
            for (Lead l : getPreApprovalFollow) {
                l.LeadSource = 'Referral';
                l.State = 'NJ';
            }
            update getPreApprovalFollow;
        }
        // Get Opportunities
        List<Opportunity> getopps = [SELECT Id, CloseDate, State FROM Opportunity WHERE State = 'CT'];
        if (!getopps.isEmpty()) {
            for (Opportunity o : getopps) {
                o.CloseDate = Date.today();
                o.State = 'CT';
            }
            update getopps;
        }
    }
}
I tried making batch apex something like this -
global class LeadProcessor implements Database.Batchable<SObject> {
    // START METHOD
    global Database.QueryLocator start(Database.BatchableContext bc) {
        String query = 'SELECT Id, LeadSource, State FROM Lead WHERE State = \'CT\'';
        return Database.getQueryLocator(query);
    }
    // EXECUTE METHOD
    global void execute(Database.BatchableContext bc, List<Lead> scope) {
        for (Lead l : scope) {
            l.LeadSource = 'Referral';
            l.State = 'NJ';
        }
        update scope;
    }
    // FINISH METHOD
    global void finish(Database.BatchableContext bc) {
    }
}
How can I change this Batch Apex to handle multiple queries, loop over them, and update the records?
That's not how Batch Apex works. You can iterate over exactly one query in a Batch Apex job.
Because you are mutating three different objects in three different ways, the options of batch chaining and use of Dynamic SOQL don't really apply here. You'll simply need to build three different batch classes.
You can run these classes in parallel, or have each one chain into the next in its finish() method. But you can't do it all in one batch.
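An outline of the chaining variant (AccountCleanupBatch and OpportunityCleanupBatch would follow the same pattern; the class names are made up):
global class LeadCleanupBatch implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, LeadSource, State FROM Lead WHERE State = \'CT\'');
    }
    global void execute(Database.BatchableContext bc, List<Lead> scope) {
        for (Lead l : scope) {
            l.LeadSource = 'Referral';
            l.State = 'NJ';
        }
        update scope;
    }
    global void finish(Database.BatchableContext bc) {
        // chain into the next object's batch once all Leads are processed
        Database.executeBatch(new AccountCleanupBatch());
    }
}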
I have an Apex class whose purpose it is to retrieve and delete overdue tasks on the contact role (related to an account) that the user just called. I need to modify it so that it queries for all overdue tasks assigned to the user on ALL contact roles on the account but am struggling to get the right query.
Here is a portion of the code in question, the part I think is most relevant:
/***************************************************
Brief Description: Deletes overdue Tasks or Events.
****************************************************/
public class DSDenali_DeleteOverDueActivities {
private static List<Task> followUpTasksToDelete = new List<Task>();
private static List<Event> followUpEventsToDelete = new List<Event>();
private static Map<Id, Set<String>> ownerOfActivities = new Map<Id, Set<String>>();
@InvocableMethod (label = 'DialSource Denali Delete Overdue Activities')
public static void gatherData(List<Data> requests)
{
Map<Id, String> results = new Map<Id, String>();
for (Data request : requests)
results.put(request.contactRoleID, request.assignedTo);
for (Id key : results.keySet())
{
Set<String> assignedToValues = parseAssignedTo(results.get(key));
System.debug('assignedToValues: ' + assignedToValues);
ownerOfActivities.put(key, assignedToValues);
System.debug(ownerOfActivities);
}
queryAndFilterData();
deleteOverdueActivities();
}
//Query for the Tasks and Events and filter the ones to delete
private static void queryAndFilterData()
{
List<Contact_Role__c> contactRoles = [SELECT Id,
(SELECT Id, Owner.UserRole.Name, OwnerId FROM Tasks WHERE status != 'Completed' AND ActivityDate <= :system.TODAY() AND Type__c = 'Outbound Call'),
(SELECT Id, Owner.UserRole.Name, OwnerId, Description FROM Events WHERE EndDateTime <= :system.NOW())
FROM Contact_Role__c
WHERE Id IN :ownerOfActivities.keySet()];
for (Contact_Role__c contactRole : contactRoles)
{
for (Task currentTask : contactRole.Tasks)
{
if (ownerOfActivities.get(contactRole.Id).contains(currentTask.OwnerId))
{
if (currentTask.OwnerId != '0050B000006ET37' && currentTask.Owner.UserRole != NULL && Pattern.matches('.*Altair.*', currentTask.Owner.UserRole.Name))
followUpTasksToDelete.add(currentTask);
else if (currentTask.OwnerId == '0050B000006ET37')
followUpTasksToDelete.add(currentTask);
else
continue;
}
else if (ownerOfActivities.get(contactRole.Id).contains('ALL'))
{
if (currentTask.Owner.UserRole != NULL && Pattern.matches('.*Altair.*', currentTask.Owner.UserRole.Name))
followUpTasksToDelete.add(currentTask);
else
continue;
}
}
for (Event currentEvent : contactRole.Events)
{
if (ownerOfActivities.get(contactRole.Id).contains(currentEvent.OwnerId) && currentEvent.Description == NULL)
{
if (currentEvent.OwnerId != '0050B000006ET37' && currentEvent.Owner.UserRole != NULL && Pattern.matches('.*Altair.*', currentEvent.Owner.UserRole.Name))
followUpEventsToDelete.add(currentEvent);
else if (currentEvent.OwnerId == '0050B000006ET37')
followUpEventsToDelete.add(currentEvent);
else
continue;
}
else if (ownerOfActivities.get(contactRole.Id).contains('ALL') && currentEvent.Description == NULL)
{
if (currentEvent.Owner.UserRole != NULL && Pattern.matches('.*Altair.*', currentEvent.Owner.UserRole.Name))
followUpEventsToDelete.add(currentEvent);
else
continue;
}
}
}
}
//Delete overdue Events/Tasks
private static void deleteOverdueActivities()
{
try{
delete followUpTasksToDelete;
}
catch (DmlException e){
System.debug('The following error occurred (DSDenali_DeleteOverDueActivities): ' + e);
}
try{
delete followUpEventsToDelete;
}
catch (DmlException e){
System.debug('The following error occurred (DSDenali_DeleteOverDueActivities): ' + e);
}
}
//Parse the CSVs of possible owners
private static Set<String> parseAssignedTo(String assignedTo)
{
Set<String> assignedToValues = new Set<String>();
assignedToValues.addAll(assignedTo.deleteWhitespace().split(','));
return assignedToValues;
}
public class Data
{
@InvocableVariable (required=true)
public String assignedTo;
@InvocableVariable (required=false)
public Id contactRoleID;
}
}
(updated after OP posted more code and asked for code review)
It's not bad code, but it could use some comments. Consider posting it on https://codereview.stackexchange.com/ (although not many SF-related posts end up there) or on https://salesforce.stackexchange.com
gatherData()
Your input variable (after some parsing) is Map<Id, Set<String>> where the key is the contact role's id. That set of strings for users (owners) is a bit misleading. At a glance you immediately ask yourself why it can't be Set<Id>. Only deep down in the code do you see that apparently "ALL" is one of the allowed values. This is... not great. I'd be tempted to make 2 methods here: one taking a legit Map<Id, Set<Id>>, and another taking simply a Set<Id> for when you know you're effectively skipping the second parameter.
queryAndFilterData()
You have only one query and it's not in a loop, very good. My trick (from before the edit) won't work for you (you don't really have the Account__c, or whatever the field is called, anywhere in the input; you have only record ids). If you want to check / delete all tasks for roles under the account, the cleanest way might be to use two queries:
// 1st the helper to collect all accounts...
Set<Id> allAccounts = new Map<Id, Account>([SELECT Id
FROM Account
WHERE Id IN (SELECT Account__c FROM Contact_Role__c WHERE Id IN :ownerOfActivities.keyset()]).keyset();
// then the outline of the main query would be bit like this
SELECT Id,
(SELECT ... FROM Tasks WHERE ...),
(SELECT ... FROM Events WHERE ...)
FROM Contact_Role__c
WHERE Account__c IN :allAccounts
AND ...
I'd check how much of this filtering logic could be pushed into the query itself rather than manually inspecting each returned row.
Imagine we're going the simple route (ignoring the concept of "ALL" users) and let's say you have another Set<Id> allUsers; variable (composed out of all ids mentioned in all your data pieces).
The query for Tasks could become something as simple as:
(SELECT Id
FROM Tasks
WHERE Status != 'Completed'
AND ActivityDate <= TODAY
AND Type__c = 'Outbound Call'
AND OwnerId IN :allUsers
AND (OwnerId = '0050B000006ET37' OR Owner.UserRole.Name LIKE '%Altair%')
)
You still have to loop through them to verify whether each one can really be deleted (it's not enough to match out of all users; it also has to be OK for the user on this particular Contact_Role__c, right?), but something like that should return fewer rows, and with no more regular expression matching it should be a bit faster.
I wouldn't have a magic variable for that one special owner's id like that. Ideally there would be something else that describes this special user (role? profile? custom field on the User record? permission to "Author Apex" in the profile?). At the very least move it to a helper Id variable at the top of the file so it's not copy-pasted all over the place. And ask your business user what should happen if that guy (default task owner? some integration account?) ever leaves the company, because poo poo will hit the propeller hard.
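For example (the constant's name is made up):
private static final Id SPECIAL_OWNER_ID = '0050B000006ET37';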
If you get comfortable with this version of the query, the "ALL" version becomes even simpler. No "all users", no "magic id" & job done:
(SELECT Id
FROM Tasks
WHERE Status != 'Completed'
AND ActivityDate <= TODAY
AND Type__c = 'Outbound Call'
AND Owner.UserRole.Name LIKE '%Altair%'
)
Don't trust a random guy on the internet who doesn't know your business process, but yeah, there's some room for improvement. Just test thoroughly :)
deleteOverdueActivities()
This is not a great try-catch pattern. You just write the error to the debug log but silently swallow it. Make it fail hard (let the error bubble up to the user) or do some proper error handling, like inserting something into a helper Log__c object or sending an email / Chatter post to an admin...
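A rough sketch of the "record it properly" option (Log__c and its Message__c field are made-up names):
Database.DeleteResult[] results = Database.delete(followUpTasksToDelete, false); // allOrNone = false
List<Log__c> logs = new List<Log__c>();
for (Database.DeleteResult dr : results) {
    if (!dr.isSuccess()) {
        logs.add(new Log__c(Message__c = String.valueOf(dr.getErrors())));
    }
}
if (!logs.isEmpty()) {
    insert logs;
}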
parseAssignedTo()
Not bad. I expect it explodes horribly when you pass a null to it. You're kind of protected from that by marking the variable required in the last lines, but I think this annotation applies only to Flows; calling it from other Apex code isn't protected enough.
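A minimal defensive version could be:
private static Set<String> parseAssignedTo(String assignedTo) {
    Set<String> assignedToValues = new Set<String>();
    if (String.isNotBlank(assignedTo)) { // guard against null / blank input
        assignedToValues.addAll(assignedTo.deleteWhitespace().split(','));
    }
    return assignedToValues;
}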