This may or may not be a bug, but I would like some help understanding the behavior of Timer.
Here is a test program that sets up Timer.periodic with a duration of 1000 microseconds (1 millisecond). The callback increments a count each time it fires. Once the count reaches 1000 intervals, the program prints the elapsed time and exits; the point is to get close to 1 second of execution time. Consider the following:
import 'dart:async';

main() {
  int count = 0;
  var stopwatch = new Stopwatch();
  stopwatch.start();
  new Timer.periodic(new Duration(microseconds: 1000), (Timer t) {
    count++;
    if (count == 1000) {
      print(stopwatch.elapsed);
      stopwatch.stop();
      t.cancel(); // cancel the timer so the program can exit
    }
  });
}
The result is:
0:00:01.002953
That is, just over a second (assuming the remainder comes from the start-up overhead of the stopwatch).
However, if you change the resolution to anything under 1 millisecond, e.g. 500 microseconds, the Timer seems to ignore the duration entirely and executes as quickly as possible.
Result being:
0:00:00.008911
I would have expected this to be closer to half a second. Is this an issue with the granularity of the Timer? The same behaviour can also be observed when applying a similar scenario to Future.delayed.
The minimum resolution of the timer is 1 ms. When you ask for a 500 µs duration, it is rounded down to 0 ms, i.e. as fast as possible.
The code is:
int milliseconds = duration.inMilliseconds;
if (milliseconds < 0) milliseconds = 0;
return _TimerFactory._factory(milliseconds, callback, true);
Maybe it should take 1 ms as a minimum, if that is its actual minimum, or it should handle microseconds internally, even if the underlying timer only fires every 10-15 milliseconds and then runs all the events that have come due so far.
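As a rough illustration of that second suggestion, here is a sketch of a coarse driver timer that fires every 10 ms and then runs every finer-grained callback that has come due. It is written in TypeScript purely to show the idea (the class and field names are made up for this example); it is not how the Dart SDK implements Timer:

type MicroTask = { dueUs: number; periodUs: number; callback: () => void };

class MicroTimer {
  private tasks: MicroTask[] = [];
  private driver?: ReturnType<typeof setInterval>;

  // Register a periodic callback with a period expressed in microseconds.
  periodic(periodUs: number, callback: () => void): void {
    const nowUs = performance.now() * 1000;
    this.tasks.push({ dueUs: nowUs + periodUs, periodUs, callback });
    // One coarse driver timer (10 ms here) services every registered task.
    this.driver ??= setInterval(() => this.runPending(), 10);
  }

  private runPending(): void {
    const nowUs = performance.now() * 1000;
    for (const task of this.tasks) {
      // Run every tick of this task that has become due since the last pass.
      while (task.dueUs <= nowUs) {
        task.callback();
        task.dueUs += task.periodUs;
      }
    }
  }
}

With a 500 µs period this still fires in bursts every 10 ms, but it averages out to the requested rate instead of running as fast as possible.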
If you are on the VM, this looks like a bug. Please file an issue.
If you are on the JS side, see the following note in the documentation of the Timer class:
Note: If Dart code using Timer is compiled to JavaScript, the finest granularity available in the browser is 4 milliseconds.
Related
private System.Timers.Timer timerSys = new System.Timers.Timer();
The ticks are set to start at the beginning of each multiple of 5 seconds.
entry exit
10:5:00.146 10:5:00.205
10:5:05.129 10:5:05.177
10:5:10.136 10:5:10.192
10:5:15.140 10:5:15.189
10:5:20.144 10:5:20.204
and then a delay of 28 seconds
note that Windows 10 compensates for missing ticks
by firing them at close intervals
10:5:48.612 10:5:48.692
10:5:48.695 10:5:48.745
10:5:48.748 10:5:48.789
10:5:48.792 10:5:49.90
10:5:43.93 10:5:49.131
and another delay of 27 seconds
again Windows 10 crams ticks together to compensate
but this time there is an event inside the second tick
that lasts about 28 seconds and makes that tick very long
10:6:16.639 10:6:16.878
this one is very long
10:6:16.883 10:6:42.980
10:6:42.984 10:6:43.236
10:6:43.241 10:6:43.321
10:6:43.326 10:6:43.479
The PC is running just two applications that I wrote.
They communicate via files and also via SQL tables.
This event happens maybe once every two months.
Questions:
What could be happening?
Is there a way to create a log file of all processes over time?
My applications keep track of the time down to milliseconds.
So if there were a way of logging processes, I could match them up.
Alternatively, is there a way for my app to know what the OS is doing?
I have an Angular 6 application that performs an API call every 10 seconds to update price quotes. The timing of the API calls is managed using RxJS interval().
For some reason, on MS Edge, the timings vary wildly, from a couple of seconds to minutes. Any ideas what might be the cause?
Here is the code:
const refreshPriceInterval = interval(10000);
let startTime = new Date(), endTime: Date;

refreshPriceInterval.pipe(
  startWith(0),
  flatMap(() => {
    endTime = new Date();
    let timeDiff = endTime.getTime() - startTime.getTime(); // in ms
    // convert the difference to seconds
    timeDiff /= 1000;
    console.log(timeDiff);
    startTime = new Date();
    return this.timeseriesService.getQuote(symbol, this.userInfo.apiLanguage);
  })
)
Here is the console output:
0.001
18.143
4.111
11.057
13.633
12.895
3.003
12.394
7.336
31.616
20.221
10.461
Is there a way to increase the accuracy?
EDIT:
Performance degrades over time.
Reducing the code in the interval() to only a console.log does not perform any better.
Might be an Angular issue.
It is up to the browser to decide how many CPU cycles are allocated per browser tab. Depending on resources (for instance, battery) or activity (background tab vs. foreground tab), your browser page will receive more or fewer CPU slices.
Some background: https://github.com/WICG/interventions/issues/5
This has shipped in Edge as well now (as of EdgeHTML 14): the clamping to 1 Hz in background tabs, not anything more intensive.
Apart from this, you are also measuring the latency of your call to timeseriesService.getQuote(), so it might be that this call simply takes some time.
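One way to separate the two effects is to log only the spacing of the interval ticks, with no API call in the pipeline at all. A minimal sketch (RxJS 6 style, matching the question's code; performance.now() is simply one convenient clock):

import { interval } from 'rxjs';
import { startWith, tap } from 'rxjs/operators';

let lastTick = performance.now();

interval(10000).pipe(
  startWith(0),
  tap(() => {
    const now = performance.now();
    // Spacing between ticks, in seconds; getQuote() latency cannot leak in here.
    console.log(`tick spacing: ${((now - lastTick) / 1000).toFixed(3)} s`);
    lastTick = now;
  })
).subscribe();

If this still shows large gaps while the tab is in the background, the cause is the browser's throttling rather than the API call.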
It was indeed an Angular issue. Timer events trigger Angular's change detection, so timed processes cause Angular to constantly re-render the app, which can be quite resource-intensive.
I used runOutsideAngular() to circumvent this issue. It turned out that I only had to run one function outside of Angular:
this.ngZone.runOutsideAngular(() => {
  this.handleMouseEvents();
});
Thanks to Mark van Straten for your input!
I want to run a periodic Erlang process every 10 ms (based on wall clock time); the 10 ms should be as accurate as possible. What is the right way to implement it?
If you want a really reliable and accurate periodic process, you should anchor it to the actual elapsed time using erlang:monotonic_time/0,1. If you use the method in Stratus3D's answer, you will eventually fall behind.
start_link(Period) when Period > 0, is_integer(Period) ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, Period, []).

...

init(Period) ->
    StartT = erlang:monotonic_time(millisecond),
    self() ! tick,
    {ok, {StartT, Period}}.

...

handle_info(tick, {StartT, Period} = S) ->
    Next = Period - (erlang:monotonic_time(millisecond) - StartT) rem Period,
    _Timer = erlang:send_after(Next, self(), tick),
    do_task(),
    {noreply, S}.
You can test in the shell:
spawn(fun() ->
    P = 1000,
    StartT = erlang:monotonic_time(millisecond),
    self() ! tick,
    (fun F() ->
        receive
            tick ->
                Next = P - (erlang:monotonic_time(millisecond) - StartT) rem P,
                erlang:send_after(Next, self(), tick),
                io:format("X~n", []),
                F()
        end
    end)()
end).
If you really want to be as precise as possible, and you are sure your task will take less time than the interval you want it performed at, you could have one long-running process instead of spawning a process every 10 ms. Erlang could spawn a new process every 10 ms, but unless there is a reason you cannot reuse the same process, it's usually not worth the overhead (even though it's very little).
I would do something like this in an OTP gen_server:
-module(periodic_task).

... module exports

start_link() ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, [], []).

... Rest of API and other OTP callbacks

init([]) ->
    Timer = erlang:send_after(0, self(), check),
    {ok, Timer}.

handle_info(check, OldTimer) ->
    erlang:cancel_timer(OldTimer),
    Timer = erlang:send_after(10, self(), check),
    do_task(), % A function that executes your task
    {noreply, Timer}.
Then start the gen_server like this:
periodic_task:start_link().
As long as the gen_server is running (if it crashes, so will the parent process, since they are linked), the function do_task/0 will be executed almost every 10 milliseconds. Note that this will not be perfectly accurate: there will be drift in the execution times. The actual interval will be 10 ms plus the time it takes to receive the timer message, cancel the old timer, and start the new one.
If you want to start a separate process every 10 ms, you could have do_task/0 spawn a process. Note that this adds additional overhead, but it won't necessarily make the interval between spawns less accurate.
My example was taken from this answer: What's the best way to do something periodically in Erlang?
I have a Silverlight application that uses an overridden AudioSink.OnSamples() to record sound, and MediaStreamSource.GetSampleAsync() to play sound.
For instance:
protected override void GetSampleAsync(MediaStreamType mediaStreamType)
{
    try
    {
        logger.LogSampleRequested();
        var memoryStream = AudioController == null ? new MemoryStream() : AudioController.GetNextAudioFrame();
        timestamp += AudioConstants.MillisecondsPerFrame * TimeSpan.TicksPerMillisecond;
        var sample = new MediaStreamSample(
            mediaStreamDescription,
            memoryStream,
            0,
            memoryStream.Length,
            timestamp, // (DateTime.Now - startTime).Ticks, // Testing shows that incrementing a long with a good-enough value is ~100x faster than calculating the ticks each time.
            emptySampleDict);
        ReportGetSampleCompleted(sample);
    }
    catch (Exception ex)
    {
        ClientLogger.LogDebugMessage(ex.ToString());
    }
}
Both of these methods should normally be called every 20 milliseconds, and on most machines, that's exactly what happens. However, on some machines, they get called not every 20 ms, but closer to 22-24 ms. That's troublesome, but with some appropriate buffering, the audio is still more-or-less usable. The bigger problem is that in certain scenarios, such as when the CPU is running close to its limit, the interval between calls rises to as much as 30-35 ms.
So:
(1) Has anyone else seen this?
(2) Does anyone have any suggested workarounds?
(3) Does anyone have any tips for troubleshooting this problem?
For what it's worth, after much investigation, the basic solution to this problem is simply not to use as much CPU. In our case, this meant keeping track of CPU utilization and switching to a codec that uses less CPU (G.711 instead of Speex) when the CPU starts consistently running at 80% or higher.
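As a sketch of that decision logic (illustrative only, written in TypeScript rather than the Silverlight C# of the question; the 80% threshold and the Speex/G.711 names come from the paragraph above, the window size is an arbitrary choice):

class CodecSelector {
  private readonly window: number[] = [];

  constructor(
    private readonly windowSize = 10,      // number of recent CPU samples to consider
    private readonly highCpuThreshold = 80 // percent
  ) {}

  // Feed one CPU utilization sample (0..100); returns the codec to use next.
  choose(cpuPercent: number): 'speex' | 'g711' {
    this.window.push(cpuPercent);
    if (this.window.length > this.windowSize) this.window.shift();

    // Only switch after the CPU has been high for the whole window,
    // so a single spike does not flip the codec back and forth.
    const consistentlyHigh =
      this.window.length === this.windowSize &&
      this.window.every(v => v >= this.highCpuThreshold);

    return consistentlyHigh ? 'g711' : 'speex'; // G.711 is the cheaper codec
  }
}

Feeding choose() one sample per second or so means the switch only happens after sustained load, not on a momentary spike.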
I want to write an MJPEG internet stream viewer. I think getting JPEG images using sockets is not a very hard problem, but I want to know how to make the streaming accurate.
while (1)
{
    get_image()
    show_image()
    sleep (SOME_TIME) // how to make it accurate?
}
Any suggestions would be great.
In order to make it accurate, there are two possibilities:
1. Use the framerate from the streaming server. In this case, the client needs to keep the same framerate (each time you get a frame, recalculate, then show it and sleep for a variable amount of time, using feedback: if the calculated framerate is higher than on the server, sleep more; if it is lower, sleep less; the framerate on the client side will then drift around the original value from the server). The framerate can be received from the server during initialization of the streaming connection (when you get the picture size and other parameters), or it can be configured.
2. The most accurate approach, actually, is to use timestamps from the server for each frame (either taken from the file by the demuxer, or generated in the image sensor driver in the case of a camera device). If the MJPEG is packeted into an RTP stream, these timestamps are already in the RTP header, so the client's task is trivial: show each picture at a time calculated from the time offset, the frame's timestamp, and the time base.
Update
For the first solution:
time_to_sleep = time_to_sleep_base = 1/framerate;
number_of_frames = 0;
time = current_time();   /* start time */

while (1)
{
    get_image();
    show_image();
    sleep (time_to_sleep);

    /* update time to sleep */
    number_of_frames++;
    cur_time = current_time();
    cur_framerate = number_of_frames/(cur_time - time);   /* average framerate since start */
    if (cur_framerate > framerate)
        time_to_sleep += alpha*time_to_sleep;
    else
        time_to_sleep -= alpha*time_to_sleep;
}
where alpha is a constant parameter controlling the reactivity of the feedback (0.1..0.5) to play with.
However, it's better to organize a queue for the input images to make the showing process smoother. The size of the queue can be parameterized and could hold somewhere around 1 second of playback, i.e. a number of frames numerically equal to the framerate.
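For the second solution, here is a sketch of timestamp-driven scheduling with such a queue (in TypeScript purely to illustrate the arithmetic; the 90 kHz time base is the usual RTP video clock and is an assumption here, as are the function names):

interface Frame {
  timestamp: number; // timestamp from the server, in time-base units
  image: unknown;    // decoded JPEG; the concrete type depends on your decoder
}

const TIME_BASE_HZ = 90000;       // assumed RTP clock rate
const queue: Frame[] = [];        // input queue, roughly 1 s worth of frames
let firstTimestamp: number | null = null;
let playbackStartMs = 0;
let consumerIdle = true;

// Placeholder for the viewer's actual display routine.
function showImage(image: unknown): void { /* draw the JPEG */ }

// Producer side: enqueue each received frame and wake the consumer if needed.
function onFrameReceived(frame: Frame): void {
  queue.push(frame);
  if (consumerIdle) {
    consumerIdle = false;
    showNextFrame();
  }
}

// Consumer side: show each frame at its presentation time.
function showNextFrame(): void {
  const frame = queue.shift();
  if (!frame) {
    consumerIdle = true; // queue under-run: wait for more input
    return;
  }

  if (firstTimestamp === null) {
    firstTimestamp = frame.timestamp; // anchor stream time to local time
    playbackStartMs = Date.now();
  }

  // Presentation time = local start + (timestamp offset converted to ms).
  const dueMs =
    playbackStartMs + ((frame.timestamp - firstTimestamp) / TIME_BASE_HZ) * 1000;

  setTimeout(() => {
    showImage(frame.image);
    showNextFrame(); // schedule the following frame
  }, Math.max(0, dueMs - Date.now()));
}

Because each frame is shown relative to the first frame's timestamp, jitter in network arrival only matters if a frame arrives after its presentation time; the queue absorbs the rest.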