I have an Angular 6 application that performs an API call every 10 seconds to update price quotes. The timing of the API calls is managed using RxJS interval().
For some reason, on MS Edge, the timings vary wildly, from a couple of seconds, to minutes. Any ideas what might be the cause?
Here is the code:
const refreshPriceInterval = interval(10000);
let startTime = new Date(), endTime: Date;
refreshPriceInterval.pipe(
  startWith(0),
  flatMap(() => {
    endTime = new Date();
    let timeDiff = endTime.getTime() - startTime.getTime(); // in ms
    timeDiff /= 1000; // strip the ms
    console.log(timeDiff);
    startTime = new Date();
    return this.timeseriesService.getQuote(symbol, this.userInfo.apiLanguage);
  })
).subscribe();
Here is the console output:
0.001
18.143
4.111
11.057
13.633
12.895
3.003
12.394
7.336
31.616
20.221
10.461
Is there a way to increase the accuracy?
EDIT:
Performance degrades over time.
Reducing the code in the interval() to only a console.log does not perform any better.
Might be an Angular issue.
It is up to the browser to decide how many CPU cycles are allocated per browser tab. Depending on resources (for instance, battery) or activity (background tab vs. foreground tab), your browser page will receive more or fewer CPU slices.
some background: https://github.com/WICG/interventions/issues/5
This has shipped in Edge as well now (as of EdgeHTML 14): timers in background tabs are clamped to 1 Hz, nothing more intensive than that.
Apart from this, you are also measuring the latency of your call to timeseriesService.getQuote(), so it might also be that this call simply takes some time.
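To separate the two effects, you can log the tick-to-tick delta before the HTTP call is made, so the request latency is not folded into the measurement. A minimal sketch, reusing the names from the question:

let last = Date.now();
refreshPriceInterval.pipe(
  startWith(0),
  tap(() => {
    const now = Date.now();
    console.log((now - last) / 1000); // seconds between ticks only, excluding request latency
    last = now;
  }),
  flatMap(() => this.timeseriesService.getQuote(symbol, this.userInfo.apiLanguage))
).subscribe();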
It was indeed an Angular issue. Timed processes cause Angular to constantly re-render the app, which can be quite resource intensive.
I used runOutsideAngular() to circumvent this issue. It turned out that I only had to run one function outside of Angular:
this.ngZone.runOutsideAngular(() => {
this.handleMouseEvents();
});
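If the polling itself also triggers change detection on every tick, the same pattern can be extended to the whole loop. A sketch under that assumption (this.quote stands in for whatever bound property you actually update), re-entering the zone only when the UI must change:

this.ngZone.runOutsideAngular(() => {
  interval(10000).pipe(
    startWith(0),
    flatMap(() => this.timeseriesService.getQuote(symbol, this.userInfo.apiLanguage))
  ).subscribe(quote => {
    // Re-enter Angular only for the actual UI update.
    this.ngZone.run(() => this.quote = quote);
  });
});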
Thanks to Mark van Straten for your input!
Related
I am developing a React app where countdown timers are the main component (there can be 10-20 timers on the page at the same time). From the server I get how long each timer should run and how much time is left, in seconds. Then, every second, I recalculate how much is left. The source data is stored in Redux, and the remaining time is calculated in the component's local state.
These timers should show the same values for every user.
The problem is that when I duplicate a tab in the browser, the API request does not occur, so the timers in the new tab are rolled back to the old state.
Updating the data in Redux every second does not seem like the best option to me, but I don't see another one yet.
You said that the server sends you the remaining time in seconds. So you can calculate on the client side when the countdown should end in client time. You can store that in local storage. When a new tab is opened you can use that value to initialize your timer.
It does not require the client time to be correct or in sync with the server time as all tabs share the same (possibly wrong) client time. You are only interested in the difference in seconds between the current client time and the client time you saved to correctly initialize your timer.
A solution to calculate it could roughly look like this:
// when receiving the remaining seconds in the first tab
const onReceivedRemaining = (remaining) => {
  const end = new Date(); // current client time
  end.setSeconds(end.getSeconds() + remaining); // timer end in client time
  localStorage.setItem('elapsing', end.toISOString());
};

// when initializing the timer in a second tab
const getInitial = () => {
  const elapsing_string = localStorage.getItem('elapsing');
  if (!elapsing_string) return null;
  const now = new Date();
  const elapsing = Date.parse(elapsing_string); // ms since epoch
  return (elapsing - now.getTime()) / 1000; // remaining time in seconds
};
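Hypothetical wiring of these helpers into a tab's startup path (startTimer and fetchRemaining are assumed placeholders, not part of the code above):

const remaining = getInitial();
if (remaining !== null && remaining > 0) {
  startTimer(remaining); // reuse the end time saved by another tab
} else {
  fetchRemaining().then(onReceivedRemaining); // no saved value yet: ask the server
}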
What is the best practice for parallel execution of multiple scenarios? For example, 30% of users execute scenario1 and 70% of users execute scenario2.
Is the code below the right way, or is it better to have one scenario with conditional executions of REST calls?
class MySimulation extends Simulation {
  val userIdsData = csv(userIdsCSV).queue

  ...

  val scenario1 = scenario("Scenario 1")
    .feed(userIdsData)
    .get(...)

  val scenario2 = scenario("Scenario 2")
    .feed(userIdsData)
    .get(...)
    .post(...)

  setUp(
    scenario1.inject(rampUsers(30) over (ramp seconds))
      .protocols(HttpConfig.value(baseURL)),
    scenario2.inject(rampUsers(70) over (ramp seconds))
      .protocols(HttpConfig.value(baseURL))
  )
}
Whatever you are doing is absolutely fine.
With the way you are running the setup, you will see that the requests run in parallel.
Gatling runs each item within setUp in parallel, whereas each item defined in a scenario is run sequentially. As you can see from the link:
The definition of the injection profile of users is done with the
inject method. This method takes as argument a sequence of injection
steps that will be processed sequentially.
So your code above will, in parallel, ramp scenario1 to 30 users over x seconds and scenario2 to 70 users over y seconds.
You can also try the code below:
scenario1.inject(rampConcurrentUsers(0) to (6) during(10),constantConcurrentUsers(6) during(60 seconds)),
scenario2.inject(rampConcurrentUsers(0) to (4) during(10),constantConcurrentUsers(4) during(60 seconds))
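To answer the second half of the question: the 30/70 split can also be expressed inside a single scenario with Gatling's randomSwitch. A rough sketch (request names and paths are placeholders, not from the original simulation):

val combined = scenario("Combined")
  .feed(userIdsData)
  .randomSwitch(
    30.0 -> exec(http("flow 1").get("/quote")),
    70.0 -> exec(http("flow 2").get("/quote")).exec(http("flow 2 update").post("/quote"))
  )

setUp(combined.inject(rampUsers(100) over (ramp seconds)))
  .protocols(HttpConfig.value(baseURL))

Both approaches are valid; two injection profiles give you independent ramp-up control per flow, while randomSwitch keeps the split inside one user population.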
This may or may not be a bug, but I would like some help understanding the behavior of Timer.
Here is a test program that sets up Timer.periodic with a duration of 1000 microseconds (1 millisecond). The callback that fires increments a count. Once the count reaches 1000 intervals, the program prints the time elapsed and exits. The point is to get as close to 1 second of execution time as possible. Consider the following:
import 'dart:async';

main() {
  int count = 0;
  var stopwatch = new Stopwatch();
  stopwatch.start();
  new Timer.periodic(new Duration(microseconds: 1000), (Timer t) {
    count++;
    if (count == 1000) {
      print(stopwatch.elapsed);
      stopwatch.stop();
      t.cancel(); // stop the timer so the program can exit
    }
  });
}
The result is:
0:00:01.002953
That is, just over a second (assuming the remainder comes from the start time of the stopwatch).
However, if you change the resolution to anything under 1 millisecond, e.g. 500 microseconds, the Timer seems to ignore the duration entirely and executes as quickly as possible.
Result being:
0:00:00.008911
I would have expected this to be closer to half a second. Is this an issue with the granularity of the Timer? The same issue can also be observed when applying a similar scenario to Future.delayed.
The minimal resolution of the timer is 1 ms. When you ask for a 500 µs duration, it is rounded down to 0 ms, i.e. as fast as possible.
The code is:
int milliseconds = duration.inMilliseconds;
if (milliseconds < 0) milliseconds = 0;
return _TimerFactory._factory(milliseconds, callback, true);
Maybe it should take 1ms as a minimum, if that is its actual minimum, or it should handle microseconds internally, even if it only triggers every 10-15 milliseconds and runs the events pending so far.
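Until then, a workaround is to clamp sub-millisecond durations yourself before handing them to Timer.periodic. A minimal sketch:

import 'dart:async';

// Clamp sub-millisecond durations to the 1 ms minimum, so they are not
// silently rounded down to "as fast as possible".
Timer periodicAtLeast1ms(Duration d, void callback(Timer t)) {
  final minimum = new Duration(milliseconds: 1);
  return new Timer.periodic(d < minimum ? minimum : d, callback);
}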
If you are in VM it looks like a bug. Please file an issue.
If you are in JS side see the following note on the documentation of the Timer class:
Note: If Dart code using Timer is compiled to JavaScript, the finest granularity available in the browser is 4 milliseconds.
I want to store data in a database every minute. What should I use for this: a Service, an AsyncTask, or something else? I have gone through various links, which made me more confused.
I read the developer guide and came across this about getWritableDatabase:
Database upgrade may take a long time, you should not call this method from the application main thread,
At first I thought I would use AsyncTask, but then I read this:
AsyncTasks should ideally be used for short operations (a few seconds at the most.)
After that I thought I could use a Service, but then I read this about Service:
A Service is not a thread. It is not a means itself to do work off of the main thread (to avoid Application Not Responding errors).
I am not able to understand what I should use to store data in the database periodically. Please help me here, as I am badly stuck.
Thanks in advance
You can't do a lot of work on the UI thread, so for database operations you can choose between different approaches; a few that I prefer to use are listed below.
Create a thread pool and execute each database operation on a worker thread. This reduces the load on the UI thread, and it avoids initializing lots of threads.
You can use a Service for the database operations. Since a Service runs on the UI thread, you can't write your operations directly in it; you have to create a separate thread inside the service method. Or you can use an IntentService directly, since it does not do its work on the UI thread (see the sketch after the links below).
Here is the developer documentation on thread pools in Android,
and this is the documentation for IntentService.
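A minimal sketch of the IntentService approach (the class name, the MyDbHelper helper, and the table name are placeholders, not from the question):

import android.app.IntentService;
import android.content.ContentValues;
import android.content.Intent;
import android.database.sqlite.SQLiteDatabase;

public class SaveDataService extends IntentService {
    public SaveDataService() {
        super("SaveDataService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // onHandleIntent runs on a worker thread, so the potentially slow
        // getWritableDatabase() call never blocks the UI thread.
        SQLiteDatabase db = new MyDbHelper(this).getWritableDatabase(); // MyDbHelper: your SQLiteOpenHelper
        ContentValues values = new ContentValues();
        values.put("value", intent.getStringExtra("value"));
        db.insert("readings", null, values); // "readings" is a placeholder table
        db.close();
    }
}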
UPDATE
This will send an intent to your receiver every minute, without using any processor time in your activity in between:
Intent myIntent = new Intent(context, MyServiceReceiver.class);
PendingIntent pendingIntent = PendingIntent.getBroadcast(context, 0, myIntent, 0);
AlarmManager alarmManager = (AlarmManager)context.getSystemService(Context.ALARM_SERVICE);
Calendar calendar = Calendar.getInstance();
calendar.setTimeInMillis(System.currentTimeMillis());
calendar.add(Calendar.SECOND, 60); // first time
long frequency= 60 * 1000; // in ms
alarmManager.setRepeating(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), frequency, pendingIntent);
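The MyServiceReceiver referenced above is an ordinary BroadcastReceiver; a sketch of what it might look like, forwarding each alarm to the service that does the write (SaveDataService as sketched earlier, a hypothetical name):

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class MyServiceReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // The alarm fired: hand the work off to the background service.
        context.startService(new Intent(context, SaveDataService.class));
    }
}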
Before that, check whether you really need a new service to be started every minute, or whether one long-running service can check for data changes each minute; starting a new service every time may consume more resources than the check itself.
UPDATE 2
private void ping() {
    // periodic action here, e.g. the database write
    scheduleNext();
}

private void scheduleNext() {
    mHandler.postDelayed(new Runnable() {
        public void run() { ping(); }
    }, 60000); // one minute
}

@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    mHandler = new android.os.Handler();
    ping();
    return START_STICKY;
}
This is a simple example of how you can do it with a Handler inside a Service.
I have a Silverlight application that uses an overridden AudioSink.OnSamples() to record sound, and MediaStreamSource.GetSampleAsync() to play sound.
For instance:
protected override void GetSampleAsync(MediaStreamType mediaStreamType)
{
    try
    {
        logger.LogSampleRequested();
        var memoryStream = AudioController == null ? new MemoryStream() : AudioController.GetNextAudioFrame();
        timestamp += AudioConstants.MillisecondsPerFrame * TimeSpan.TicksPerMillisecond;
        var sample = new MediaStreamSample(
            mediaStreamDescription,
            memoryStream,
            0,
            memoryStream.Length,
            timestamp, // Testing shows that incrementing a long is ~100x faster than calculating (DateTime.Now - startTime).Ticks each time.
            emptySampleDict);
        ReportGetSampleCompleted(sample);
    }
    catch (Exception ex)
    {
        ClientLogger.LogDebugMessage(ex.ToString());
    }
}
Both of these methods should normally be called every 20 milliseconds, and on most machines, that's exactly what happens. However, on some machines, they get called not every 20 ms, but closer to 22-24 ms. That's troublesome, but with some appropriate buffering, the audio is still more-or-less usable. The bigger problem is that in certain scenarios, such as when the CPU is running close to its limit, the interval between calls rises to as much as 30-35 ms.
So:
(1) Has anyone else seen this?
(2) Does anyone have any suggested workarounds?
(3) Does anyone have any tips for troubleshooting this problem?
For what it's worth, after much investigation, the basic solution to this problem is simply not to use as much CPU. In our case, this meant keeping track of CPU utilization and switching to a codec that uses less CPU (G.711 instead of Speex) whenever the CPU consistently ran at 80% or higher.
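A rough sketch of what that detection could look like. Since Silverlight's sandbox offers no direct CPU counter, the drift between the expected 20 ms callback interval and the observed one serves as the load signal here; CurrentCodec and AudioCodec are hypothetical names, not from the original code:

private DateTime lastSample = DateTime.Now;
private int slowSamples;

private void TrackSampleTiming()
{
    var now = DateTime.Now;
    double intervalMs = (now - lastSample).TotalMilliseconds;
    lastSample = now;

    // Callbacks stretching past ~30 ms are the symptom of a saturated CPU.
    slowSamples = intervalMs > 30 ? slowSamples + 1 : 0;
    if (slowSamples >= 50 && CurrentCodec == AudioCodec.Speex)
    {
        CurrentCodec = AudioCodec.G711; // switch to the cheaper codec under sustained load
    }
}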