I have set up a time-dependent workflow rule on a certain condition, which is below:
6 days after a particular date (the 1st follow-up date, in my case), the workflow rule should update a picklist field (Current Status, in my case).
My question is: at what time on the 6th day will it execute?
Do we have control over this time?
Regards,
Ankit
We only control the day of execution, not the exact time, because:
1. Salesforce evaluates time-based workflow on the organization's time zone, not the users'. Users in different time zones may see differences in behavior.
2. Time-dependent actions aren't executed independently. They're grouped into a single batch that starts executing within one hour after the first action enters the batch.
3. Time triggers don't support minutes or seconds.
4. Salesforce limits the number of time triggers an organization can execute per hour. If an organization exceeds the limits for its Edition, Salesforce defers the execution of the additional time triggers to the next hour.
Problem: Does it make sense to have the Time Travel setting as 1 and the retention setting as 30 days at the table level (permanent table) in Snowflake?
There is no real setting called "time travel". DATA_RETENTION_TIME_IN_DAYS is the setting that actually controls the duration of your Time Travel. If you are asking whether an account setting of 1 day and a table-level setting of 30 days makes sense, then the answer is 'Yes'. You can set this parameter at multiple levels (account, database, schema, table) to customize your data retention period for those objects.
As mentioned, there is no separate setting for Time Travel, and no tasks are required to enable it. It is automatically enabled with the standard, 1-day retention period.
To increase this retention period, the DATA_RETENTION_TIME_IN_DAYS parameter is used. It can be set up to 90 days (Enterprise Edition and higher) and can be applied to databases, schemas, and tables.
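For illustration, setting the parameter at the different levels looks something like this (the object names are made up for the example):
-- account-level default of 1 day (typically requires the ACCOUNTADMIN role)
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 1;
-- table-level override of 30 days, either at creation time...
CREATE TABLE my_db.my_schema.my_table (id INT)
  DATA_RETENTION_TIME_IN_DAYS = 30;
-- ...or on an existing table
ALTER TABLE my_db.my_schema.my_table SET DATA_RETENTION_TIME_IN_DAYS = 30;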
https://docs.snowflake.com/en/user-guide/data-time-travel.html
https://www.youtube.com/watch?v=F1pevMhm7lg
On top of that, there is the Fail-safe period, which provides a (non-configurable) 7-day period during which historical data is recoverable by Snowflake. This period starts immediately after the Time Travel retention period ends.
https://docs.snowflake.com/en/user-guide/data-failsafe.html
I have to process each event against the last 1 hour, 1 week, and 1 month of data. For example, how many times has the same IP occurred in the last 1 month relative to that event?
I think windows are for fixed time periods, so I can't calculate over the last 1 hour relative to the current event.
If you have any clue, please advise on what I should use: the Table API, a ProcessFunction, or a global window. Or what other approach should I take?
There's a reason why this kind of windowing isn't supported out of the box with Flink, which has to do with the memory requirements of keeping around the necessary state. Counting events per hour in the way that is normally done (i.e., for the hour from 10:00 to 11:00) only requires keeping around a counter that starts at zero and increments with each event. A timer fires at the end of the hour, and the counter can be emitted.
Providing at every moment the count of events in the previous 60 minutes would require that the window operator keep in memory the timestamps of every event, and do a lot of counting every time a result is to be emitted.
If you really intend to do this, I suggest you determine how often you need to provide updated results. For an update once per minute, for example, you could then get away with only storing the minute-by-minute counts, rather than every event.
That helps, but the situation is still pretty bad. You might, for example, be tempted to use a sliding window that provides, every minute, the count of events for the past month. But that's also going to be painful, because you would instantiate 60 * 24 * 30 = 43,200 window objects, all counting in parallel.
The relevant building blocks in the Flink API are ProcessFunction, which is an interesting alternative to doing this with windows, and custom Triggers and Evictors if you decide to stick with windows. Note that it's also possible to query the state held in Flink, rather than emitting it on a schedule -- see queryable state.
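Not part of the answer above, but as a point of comparison: if the event stream is registered as a table, Flink SQL can express this kind of per-event trailing count directly with an OVER window (the table and column names below are assumptions); internally it has to keep state in much the same way a hand-written ProcessFunction would.
-- assumed table "events" with a time attribute "event_time" and an "ip" column
SELECT
  ip,
  event_time,
  COUNT(*) OVER (
    PARTITION BY ip
    ORDER BY event_time
    RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW
  ) AS hits_last_hour   -- for the 1-month case, use INTERVAL '30' DAY instead
FROM events;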
I have a project that creates a batch service to run jobs created in my main project. My issue is that System.Timers.Timer cannot be scheduled past 28.4 days, and the interval cannot be variable. My batch service has around 9 operations on it; some are executed every few minutes, some daily, weekly, and monthly.
I have everything working except the monthly part, and we want to execute it on the 3rd Thursday of the month. I already know how to get the third Thursday, so that is not the issue. The problem is that I can't go past the 28.4 days, and every month the interval will change. I have looked into System.Threading, but from what I can see, I cannot use it with multiple jobs of different intervals.
Anyone have a solution or something for me to look into that might lead me in the right direction?
You may want to rethink your design to kick your timer off with shorter intervals (days, hours or even minutes) to check if it has reached some pre-stored date / time.
The problem with what you are suggesting above would be that if you restart the service or reboot the box, you will lose your timer.
So... for the monthly run, set the timer up as 24 hours. Store the next time it should run the job somewhere (database, file system, registry), check this every time the timer event fires, and run whatever needs to be run.
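If the database option is used for storing that next run time, a minimal sketch might look like this (the table and column names are made up for illustration; the 3rd-Thursday calculation itself stays in application code):
-- one row per job, holding when it should next run
CREATE TABLE job_schedule (
  job_name    VARCHAR(100) NOT NULL PRIMARY KEY,
  next_run_at TIMESTAMP    NOT NULL
);
-- on every timer tick: which jobs are due?
SELECT job_name
FROM job_schedule
WHERE next_run_at <= CURRENT_TIMESTAMP;
-- after the monthly job runs, write back the next 3rd Thursday
-- (value computed in the service and passed in as a parameter)
UPDATE job_schedule
SET next_run_at = ?
WHERE job_name = 'monthly-job';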
I'm currently designing a database that:
1) Has a list of Tasks, such as:
Clean the floor.
Wipe the sink.
Take swabs.
2) Has a list of Areas, such as:
Kitchen.
Servery.
3) Tasks are scheduled against an area, either as "Hourly", "Daily", "Weekly", "Monthly" or "Annually". I'll call this AreaTask (Area, Task, Frequency) :-
Kitchen, Clean the floor, Daily
4) An AreaTask will become due either at the start of a working day (if it is Daily, Weekly, Monthly or Annually), or at the start of the hour if it is Hourly - based on the schedule. For example, if "Clean the floor" is scheduled "Weekly" on Wednesdays, then at the start of each Wednesday it will become Due, and remain Due for the day until it has been done (Worked, Signed off, etc) - or it will become OverDue if it goes beyond a certain time.
5) When work is done against an AreaTask, it is logged in the database (Area, Task, User [who did the work], DateTime [that the work was done]) :-
Kitchen, Clean the floor, Joe Bloggs, 2012-05-23 10:50:00
Here is what I'm trying to decide:
I can determine the various states of an AreaTask at any particular time by queries alone, because all of the data is there (i.e. I can determine that an AreaTask will have become Due on Wednesday, and I can determine that it became OverDue if no Work was done against that AreaTask before a set time). However, I'm wondering if instead I should have an AreaTaskDue table that is populated perhaps by a CRON job, or some other means.
This way I have a formal entry in the database to query and store data against, for example:
ScheduledTask (Area, Task, ScheduledDateTime)
Kitchen, Clean the floor, 2012-05-23 06:00:00
This would also allow a task to be scheduled manually should the need arise.
Then when work is done against a ScheduledTask, it can be logged against the ScheduledTask itself:
ScheduledTaskWork (Area, Task, ScheduledDateTime, User, DateTime)
Kitchen, Clean the floor, 2012-05-23 06:00:00, Joe Bloggs, 2012-05-23 11:30:00
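(For clarity, in DDL the two tables sketched above might look something like the following; the column types are assumptions, and User/DateTime are renamed slightly since they are reserved words in many databases:)
CREATE TABLE ScheduledTask (
  Area              VARCHAR(50)  NOT NULL,
  Task              VARCHAR(100) NOT NULL,
  ScheduledDateTime DATETIME     NOT NULL,
  PRIMARY KEY (Area, Task, ScheduledDateTime)
);
CREATE TABLE ScheduledTaskWork (
  Area              VARCHAR(50)  NOT NULL,
  Task              VARCHAR(100) NOT NULL,
  ScheduledDateTime DATETIME     NOT NULL,
  WorkedBy          VARCHAR(100) NOT NULL,  -- "User" in the sketch above
  WorkedAt          DATETIME     NOT NULL,  -- "DateTime" in the sketch above
  FOREIGN KEY (Area, Task, ScheduledDateTime)
    REFERENCES ScheduledTask (Area, Task, ScheduledDateTime)
);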
I hope that makes some sense.
PS: this is for an RDBMS-based database, not OO. I tend to use Views to see data from different perspectives.
Thanks.
PS: perhaps the CRON job would also mark a ScheduledTask as OverDue, rather than that being determined by query. I guess the question is about whether these formal states should be stored in the database or derived. The only way I can store them is to have some kind of CRON job running (which is fine, as long as I know there isn't a better way).
EDIT: One argument against deriving the state is that the Schedule may change. However, I do keep history in the database, so I could still derive it. But the more I think about it, the more I'm leaning towards using a CRON job to schedule tasks based on the schedule.
Take a look at this model (roughly: AREA and TASK tables, a SCHEDULE table linking an area and a task with a FREQUENCY, and a WORK table logging the work done).
Every time a task is started, insert a new row into the WORK table. When it's finished, set WORK.COMPLETED_AT.
You can find daily tasks (and their areas) that have not yet been done today like this:
SELECT *
FROM SCHEDULE
WHERE FREQUENCY = 'daily'
  AND NOT EXISTS (
    SELECT *
    FROM WORK
    WHERE SCHEDULE.AREA_ID = WORK.AREA_ID
      AND SCHEDULE.TASK_ID = WORK.TASK_ID
      AND DAY(COMPLETED_AT) = TODAY  -- i.e. the work was completed today
  )
Replace DAY and TODAY with whatever is specific to your database, and you'd probably want to use integers instead of strings for FREQUENCY.
Similar queries can be devised for other frequencies.
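For example, a weekly version of the same check might look like this (again, the week comparison is a placeholder to be replaced with your database's date functions):
SELECT *
FROM SCHEDULE
WHERE FREQUENCY = 'weekly'
  AND NOT EXISTS (
    SELECT *
    FROM WORK
    WHERE SCHEDULE.AREA_ID = WORK.AREA_ID
      AND SCHEDULE.TASK_ID = WORK.TASK_ID
      AND WEEK(COMPLETED_AT) = THISWEEK  -- compare the week (and year) of completion to the current week
  )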
Manually scheduled tasks could be modeled through a table similar to SCHEDULE, but with FREQUENCY replaced by explicit time(s).
I discovered a troubling anomaly this morning while reviewing our monthly Sitecore Analytics reports. Our 'time on site' this month reached an average of about 9 minutes. This is up from an average of around 1-2 minutes for the previous month.
My first reaction was "great, looks like we're doing better this month", but after further investigation, it appears as though each and every visit to the site is recording a 20-25 minute 'time on site' statistic - even for single page visits.
Has anyone experienced this before? It appears as though the addition of a SessionEnd processor causes Sitecore to keep each and every Session alive for the default duration of 20 minutes. If that's true, how does one add a custom SessionEnd pipeline processor without affecting the 'time on site' statistic for every visit?
Sitecore version: 6.4.1 Update 1
UPDATE
Unfortunately, 'time on site' is still being recorded as above 20 minutes for each visit... and this is with the custom SessionEnd processor completely removed. I am currently investigating other possible causes.
UPDATE 2
We are seeing many Analytics warning messages appear in our logs that look like the following:
Analystics: Max size of insert queue reached. Dropped 3826.
I now believe this is somehow related...
UPDATE 3
I discovered that the 'time on site' statistics would go back to normal after restarting the Sitecore application. From there, the average time on site would gradually climb at a rate of about 1 minute every 10 minutes or so until leveling out around 20 minutes. I believe that's around the same time we start seeing the 'Max size of insert queue reached' warnings in our logs.
I also discovered that the actual 'time on site' figure is calculated from the average time-span between the [Session].[Timestamp] and [Session].[LastPageTimestamp] columns in the [Sessions] table. What's interesting here is that the newest records entering the Sessions table appear to have a [LastPageTimestamp] of the actual time they are inserted into the table. It's as if the INSERT statement uses GETDATE() to stamp each record as it is inserted into the database. If that's true, then I think I've found the culprit. I believe I have a performance issue on my hands, and to make matters worse, the queued Sessions are being inserted into the database incorrectly.
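For reference, a query along these lines is presumably what that average boils down to (SQL Server syntax assumed, with the table and column names exactly as described above):
-- rough sketch: average gap between first and last page timestamps, in seconds
SELECT AVG(DATEDIFF(SECOND, [Timestamp], [LastPageTimestamp])) AS AvgTimeOnSiteSeconds
FROM [Sessions];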
I don't know the answer to this question... but the first thing I would do is take apart the existing sessionEnd pipeline code with Reflector and see if it's doing something tricky that you are essentially undoing by adding another processor.
In my web.config, the only processor seems to be Sitecore.Pipelines.SessionEndSaveRecentDocuments.