Web Audio API - accurately measure timings of user interactions - reactjs

I have a web app that generates and plays back rhythmic musical notation and I would like to build "playalong mode".
i.e. the user clicks/taps to play along during playback and their accuracy is assessed along the way.
In order to achieve this I am measuring the currentTime of the Web Audio API AudioContext at each user interaction and resolving this to a position within the measure of music.
No matter how I build it I can't seem to achieve 100% accuracy. The problem seems to be the latency introduced by JS event handlers such as 'onClick' and 'onPointerDown'. The interaction is always read slightly late, and inconsistently so, which means I can't reliably account for this latency.
Does anyone have any idea how I could solve this problem?
A few approaches I've tried so far:
Measuring the AudioContext time directly in the interaction handlers (onClick, onPointerDown)
Subtracting a constant latency value from the AudioContext time received
Estimating the event-dispatch latency and subtracting it, with an approach similar to: How to measure latency between interaction and mousedown event in JavaScript?
My approach seems fundamentally flawed, as there is always unpredictable and hard-to-measure latency. I am hoping to find a way to measure the AudioContext time with 100% accuracy at the moment of a user's interaction.
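For reference, one way to reduce the handler-latency problem is to stop reading the clock inside the handler at all and instead use event.timeStamp, which the browser records when the input arrives, together with AudioContext.getOutputTimestamp(), which pairs the audio clock with the performance.now() clock. A hedged sketch (the helper name is illustrative, not an established API):

```typescript
// Derive the AudioContext time of the physical input from event.timeStamp
// (set when the browser records the input, before the handler runs) and
// AudioContext.getOutputTimestamp(), which returns matching snapshots of
// the audio clock and the performance.now() clock.
function eventToContextTime(audioCtx: AudioContext, event: PointerEvent): number {
  const { contextTime, performanceTime } = audioCtx.getOutputTimestamp();
  if (contextTime === undefined || performanceTime === undefined) {
    // Fallback where getOutputTimestamp() is unavailable: less precise,
    // since currentTime is read after the handler latency has occurred.
    return audioCtx.currentTime - (performance.now() - event.timeStamp) / 1000;
  }
  // Both timestamps are on the performance.now() timeline (milliseconds).
  const deltaMs = event.timeStamp - performanceTime;
  return contextTime + deltaMs / 1000;
}

// Usage (inside a pointerdown handler):
// element.addEventListener("pointerdown", (e) => {
//   const tapTime = eventToContextTime(audioCtx, e);
//   // ...compare tapTime against the scheduled note times
// });
```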

Related

How to know when React component is ready for interaction

Just curious what can be done to measure TTI (a web vital) for a React app. As we are only interested in the TTI of a specific component, Lighthouse won't be able to help here.
Lighthouse's definition of Time to Interactive (TTI) isn't the ideal metric to use when trying to measure component interactivity for a few reasons:
TTI is a lab metric, whereas First Input Delay (FID) is a better proxy for real-world data, but only at the page level
TTI approximates when a page becomes idle by finding the first 5s "quiet" window of network (<=2 in-flight network requests) and CPU activity (no long tasks on the main thread). This approximation isn't the best approach to use when observing things at a component-level.
When trying to measure component-level performance in React, I would suggest either of the following:
Use the official React Developer Tools to get a high-level overview of your entire application's performance sliced in different ways (component profiling, interactions, scheduling)
Use the Profiler API to measure how long any component tree in your application takes to render. You can also use the User Timing API in conjunction to add marks and measures directly in the onRender callback, which will then automatically show in Chrome DevTools' Performance panel and in the User Timings section in Lighthouse. This brings you full circle to being able to measure component interactivity directly in Lighthouse :)
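A minimal sketch of that Profiler + User Timing combination, assuming a React 18 setup; the id "Checkout" and the mark/measure names are illustrative:

```typescript
import type { ProfilerOnRenderCallback } from "react";

const onRender: ProfilerOnRenderCallback = (
  id,             // "id" prop of the <Profiler> tree that just committed
  phase,          // "mount" | "update" | "nested-update"
  actualDuration, // time spent rendering this update
  baseDuration,   // estimated render time without memoization
  startTime,      // when React began rendering this update
  commitTime      // when React committed this update
) => {
  // Record the render as a User Timing measure so it appears in the
  // Chrome DevTools Performance panel and Lighthouse's User Timings.
  performance.measure(`${id} (${phase})`, {
    start: startTime,
    duration: actualDuration,
  });
  performance.mark(`${id} committed`, { startTime: commitTime });
};

// Then wrap the subtree you care about:
// <Profiler id="Checkout" onRender={onRender}><CheckoutPanel /></Profiler>
```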

How to find out if application runs slowly?

I'm currently developing some sort of I/O-pipeline system. Simply put: you can run simultaneous workers which do some stuff, either importing or exporting data. I don't want to limit how many workers the user can run simultaneously, as performance, logically, depends on what they are doing.
If someone wants to import 30 small images simultaneously, then they should be able to do so. But I want to equip my application with a monitor which notices when the application, especially the main thread, runs too slowly, so that it lags visibly. If this happens, the pipeline should reduce the number of workers and maybe pause some of them in order to stabilize the application.
Is there any way to do this? How can I effectively monitor the speed, so I can say it is definitely too slow?
EDIT:
Okay, I may have been a bit unclear. Sorry for that. The problem is more the status invokes. The user should be able to see what's going on, so every worker invokes status updates. This happens in real time. As you can imagine, this causes massive lag, as there are a few hundred invokes per second. I tackled this problem by adding a lock which filters reports so that a report only actually gets invoked every 50 ms. The problem is: it still causes lag when there are about 20-30 workers active. I thought the solution was adjusting the time lock based on the current CPU load.
But I want to equip my application with a monitor which notices when the application, especially the main thread, runs too slowly, so that it lags visibly.
I don't quite understand this. I believe there are two common cases of UI lag:
An application that runs significant amounts of CPU-bound code on the UI thread or that performs synchronous IO operations there.
Showing way too many items in your UI, like a data grid with many thousand rows. Such UI is not useful to the user, so you should figure out a better way of presenting all those items.
In both cases, the lag can be avoided and you don't need any monitor for that, you need to redesign your application.

How to make a project started without debugging run like one started in debugging mode?

I'm using managed C++ (Visual Studio 2010) to design a GUI in a form.h file. The GUI acts as a master querying data streamed from a slave card.
When a button is pressed, a function (in the ApplicationIO.cpp file) is called in which two threads are created using the Win32 API (CreateThread(...)): the former handles the data streaming, and the latter parses the data and monitors it on a real-time graph in the GUI.
The project behaves in two different ways: if it is started in debugging mode it is able to update GUI controls such as the textbox (using Invoke) and the graph during data streaming; conversely, when it is started without debugging, no data appears in the textbox and data is shown very slowly on the chart.
Has anyone ever addressed a similar problem? Any suggestions, please?
A pretty classic mistake is to use Control::Begin/Invoke() too often. You'll flood the UI thread with delegate invoke requests. UI updates tend to be expensive, you can easily get into a state where the message loop doesn't get around to doing its low-priority duties. Like painting. This happens easily, invoking more than a thousand times per second is the danger zone, depending on how much time is spent by the delegate targets.
You solve this by sending updates at a realistic rate, one that takes advantage of the ability of the human eye to distinguish them. At 25 times per second, the updates turn into a blur, updating it any faster is just a waste of cpu cycles. Which leaves lots of time for the UI thread to do what it needs to do.
This might still not be sufficiently slow when the updates are expensive. At which point you'll need to skip updates or throttle the worker thread. Note that Invoke() automatically throttles, BeginInvoke() doesn't.
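The update-coalescing idea is language-agnostic; here is a hedged sketch in TypeScript (names are illustrative) of keeping only the latest value and flushing it to the UI at roughly 25 Hz, which translates directly to a WinForms/WPF timer plus a shared field:

```typescript
// Coalesce high-frequency worker reports and flush only the latest value
// to the UI at a fixed cadence, so the UI thread spends its time painting
// instead of servicing a flood of invoke requests.
// `renderStatus` is a hypothetical function that actually updates the UI.
function createStatusThrottle<T>(renderStatus: (latest: T) => void, intervalMs = 40) {
  let latest: T | undefined;
  let dirty = false;

  // Workers call this as often as they like; it only stores the value.
  const report = (value: T) => {
    latest = value;
    dirty = true;
  };

  // A single timer drains at most one update per tick (latest value wins).
  const timer = setInterval(() => {
    if (dirty) {
      dirty = false;
      renderStatus(latest as T);
    }
  }, intervalMs);

  return { report, stop: () => clearInterval(timer) };
}
```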

PollingDuplexHttpBinding in Silverlight

I understand that polling will inevitably have delays in getting real-time updates. Currently I am using WebSync from Frozen Mountain for this and it works very well. But I would still like to know whether PollingDuplexHttpBinding is a worthwhile alternative.
Has anyone used this in critical systems?
Is its performance better than Comet? By performance I mean: how many simultaneous client connections can it handle?
Is it possible to configure the time interval of polling? I mean once every 30 seconds, 60 seconds, etc.
We're currently using this for our Alanta web conferencing platform. Our default binding is Net.TCP, because it's faster and more performant. But not everyone lets Net.TCP through on the right ports, so if we can't connect over Net.TCP, we fallback to the PollingDuplexHttpBinding. And it seems to work reasonably well -- no major complaints so far at least.
With respect to performance, the PollingDuplex binding is roughly similar to what you'll find in other long-poll based systems. You can find more details on its performance on Tomek's blog.
And yes, it's possible to configure almost every aspect of the binding. The particular properties that control the polling interval are ClientPollTimeout and ServerPollTimeout. See here for more details.

Keeping track of time with 1 second accuracy in Google App Engine

I'm currently in the process of rewriting my Java code to run on Google App Engine. Since I cannot use Timer for programming timeouts (no thread creation allowed), I need to rely on the system clock to mark the time the timeout started, so that I can compare it later to find out whether the timeout has occurred.
Now, several people (even on Google's payroll) have advised developers not to rely on system time due to the distributed nature of Google's app servers and the inability to keep their clocks in sync. Some say the deviation of system clocks can be up to 10s or even more.
A 1s deviation would be very good for my app, 2 seconds would be tolerable, anything higher than that would cause a lot of grief for me and my app's users, and a 10-second difference would make my app effectively unusable.
I don't know if anything has changed for the better since then (I hope so), but if not, what are my options other than firing off a separate request whose handler sleeps for the duration of the timeout (which cannot exceed 30 seconds due to the request timeout limitation) in order to keep the timeout duration consistent?
Thanks!
More Specifically:
I'm trying to develop a poker game server. For those who are not familiar with how online poker works: I have a set of players attached to one game instance. Every player has a certain amount of time to act before the timeout occurs and the next player can act. There is a countdown on each actor and every client has to see it. Only one player can act at a time. The timeout durations I need are 10s and 20s for now.
You should never be making your request handlers sleep or wait. App Engine will only automatically scale your app if request handlers complete in an average of 1000ms or less; deliberately waiting will ruin that. There's invariably a better option than sleeping/waiting - let us know what you're doing, and perhaps we can suggest one.
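For illustration, the non-sleeping alternative already hinted at in the question, i.e. recording a deadline when the turn starts and comparing against it on every subsequent request, might look roughly like this (sketched in TypeScript rather than Java purely for brevity; the names and the skew tolerance are hypothetical):

```typescript
// Store a deadline for the current player's turn and check it on each
// incoming request (action, poll, etc.) instead of sleeping in a handler.
interface TurnState {
  playerId: string;
  deadlineMs: number; // epoch millis when this player's turn expires
}

function startTurn(playerId: string, timeoutSeconds: number): TurnState {
  return { playerId, deadlineMs: Date.now() + timeoutSeconds * 1000 };
}

function hasTimedOut(turn: TurnState, clockSkewMs = 2000): boolean {
  // Allow a tolerance for clock skew between front-end instances.
  return Date.now() > turn.deadlineMs + clockSkewMs;
}
```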
