Just curious what can be done to measure TTI (a web vital) for a React app. We're only interested in the TTI of a specific component, so Lighthouse won't be able to help here.
Lighthouse's definition of Time to Interactive (TTI) isn't the ideal metric to use when trying to measure component interactivity for a few reasons:
TTI is a lab metric; First Input Delay (FID) is a better proxy for real-world data, but only at the page level
TTI approximates when a page becomes idle by finding the first 5s "quiet" window of network activity (<=2 in-flight requests) and CPU activity (no long tasks on the main thread). This approximation isn't a good fit when observing things at the component level.
When trying to measure component-level performance in React, I would suggest either of the following:
Use the official React Developer Tools to get a high-level overview of your entire application's performance sliced in different ways (component profiling, interactions, scheduling)
Use the Profiler API to measure how long any component tree in your application takes to render. You can also use the User Timing API in the onRender callback to add marks and measures, which will then automatically show up in Chrome DevTools' Performance panel and in the User Timings section in Lighthouse. This brings you full circle to being able to measure component interactivity directly in Lighthouse :)
Suppose I need to build a React component that displays a table of device states, showing various device attributes such as IP address, name, etc. Polling each device takes a few seconds, so a loading indicator needs to be displayed for each specific device in the table.
There are several ways to build such a component:
On the server side, I can create an API endpoint (like GET /device_info) that returns data for only one requested device, and make multiple parallel requests from the frontend to that endpoint, one per device.
I can create an API endpoint (like GET /devices_info) that returns data for the entire list of devices at once, and make a single request to it from the frontend.
Each method has its pros and cons:
Way one:
Pros:
Easy to build. We make a "table row" component that requests data only for the device whose data it displays. The "device table" component then consists of several "table row" components, each executing its own query in parallel for its own device. This is especially easy if you use libraries such as React Query or RTK Query, which support this behavior out of the box;
Cons:
Many requests to the endpoint, possibly hundreds (although the number of parallel requests can be limited);
If, for some reason, synchronization of access to some shared resource on the server side is required and the server supports several workers, then synchronization between workers can be very difficult to achieve, especially if each worker is a separate process;
Way two:
Pros:
One request to the endpoint;
There are no problems with access to a shared resource; everything is controlled within a single request on the server (because a single worker is guaranteed to handle the request on the server side);
Cons:
It's hard to build. Since polling different devices takes different amounts of time, one request essentially has several intermediate states. You need to make periodic requests from the UI to get an updated state, and you also need to design an interface that supports several states for each device, such as "not polled", "in progress", and "done";
With this in mind, my questions are:
Which is the better way to build the described component?
Does the better way have a name? Maybe it's some kind of pattern?
Maybe you know a great book/article/post that describes a solution to a similar problem?
that displays a devices state
The component asking for a device state is so... 2010?
If your device knows its state, then have your device send its state to the component
SSE - Server Sent Events, and the EventSource API
https://developer.mozilla.org/.../API/Server-sent_events
https://developer.mozilla.org/.../API/EventSource
PS. React is your choice; I would go Native JavaScript Web Components, so you have ZERO dependencies on Frameworks or Libraries for the next 30 JavaScript years
Many moons ago, I created a dashboard with PHP backend and Web Components front-end WITHOUT SSE: https://github.com/Danny-Engelman/ITpings
(no longer maintained)
Here is a general outline of how this approach might work:
On the server side, create an API endpoint that returns data for all devices in the table.
When the component is initially loaded, make a request to this endpoint to get the initial data for all devices.
Use this data to render the table, but show a loading indicator for each device that has not yet been polled.
Use a client-side timer to periodically poll each device that is still in a loading state.
When the data for a device is returned, update the table with the new information and remove the loading indicator.
This approach minimizes the number of API requests by polling devices only when necessary, while still providing a responsive user interface that updates in real-time.
As for a name or pattern, this is essentially client-side polling combined with incremental (lazy) loading: the initial data is fetched up front, and additional data is loaded on demand as it becomes available.
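The steps above can be sketched framework-free (the function names and fake backend are illustrative; in React, each row's state update would trigger a re-render of that row):

```javascript
// Fetch the device list once, show every row as loading, then update
// each row independently as its device's data arrives.
async function loadTable(deviceIds, fetchDevice) {
  // Initial render state: every row shows a loading indicator.
  const rows = new Map(
    deviceIds.map((id) => [id, { status: "loading", data: null }])
  );

  // Poll each loading device; update its row as soon as data returns.
  await Promise.all(
    deviceIds.map(async (id) => {
      const data = await fetchDevice(id); // slow: a few seconds per device
      rows.set(id, { status: "done", data }); // re-render this row only
    })
  );
  return rows;
}

// Simulated backend, standing in for GET /device_info.
const fakeFetch = async (id) => ({ ip: `10.0.0.${id}`, name: `device-${id}` });

loadTable([1, 2, 3], fakeFetch).then((rows) => {
  console.log(rows.get(2).status); // "done"
});
```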
Both ways of making the component have their pros and cons, and the choice ultimately depends on the specific requirements and constraints of the project. However, here are some additional points to consider:
Way one:
This approach is sometimes described as component-level or fetch-on-render data fetching, where each component is responsible for requesting the data it needs to render. It's a common pattern in modern web development, especially with hooks-based data-fetching libraries.
The main advantage of this approach is its simplicity and modularity. Each "table row" component is responsible for fetching and displaying data for its own device, which makes it easy to reason about and test. It also allows for fine-grained caching and error handling at the component level.
The main disadvantage is the potential for network congestion and server overload, especially if there are a large number of devices to display. This can be mitigated by implementing server-side throttling and client-side caching, but it adds additional complexity.
Way two:
This approach is essentially a batch endpoint combined with client-side polling: the client makes one aggregate request and then drives UI updates from the evolving response.
The main advantage of this approach is its efficiency and scalability. With only one request to the server, there's less network overhead and server load. It also gives the server full control over access to shared resources, since all polling happens within a single request context.
The main disadvantage is its complexity. Managing per-device states and transitions on the client can be difficult, especially when dealing with asynchronous updates and error handling, and the API has to model the intermediate states explicitly.
In summary, both approaches have their trade-offs and should be evaluated based on the specific needs of the project. There's no one-size-fits-all solution or pattern, but there are several best practices and libraries that can help simplify the implementation and improve the user experience. Some resources to explore include:
React Query and RTK Query for data fetching and caching in React.
Suspense and React's concurrent features for asynchronous rendering and data loading.
GraphQL and Apollo for server-driven data fetching and caching.
Redux and MobX for state management and data flow in React.
Progressive Web Apps (PWAs) and Service Workers for offline-first and resilient web applications.
Both approaches have advantages and disadvantages, and the best choice depends on the specific requirements and constraints of your project. However, given that polling each device takes a few seconds and you need a loading indicator for each specific device, the first approach (multiple parallel requests from the frontend to an endpoint that returns data for a single device) seems more suitable: it lets you show a per-device loading indicator and update each row independently as soon as that device's data becomes available.
This approach is commonly known as "concurrent data fetching" or "parallel data fetching", and it is supported by many modern front-end libraries and frameworks, such as React Query and RTK Query. These libraries allow you to easily make multiple parallel requests and manage the caching and synchronization of the data.
To implement this approach, you can create a "table row" component that requests data only for the device whose data it displays; the "device table" component then consists of several "table row" components, each executing its own query in parallel. You can also limit the number of parallel requests to avoid overloading the server.
To learn more about concurrent data fetching and its implementation using React Query or RTK Query, you can refer to their official documentation and tutorials. You can also find many articles and blog posts on this topic by searching for "concurrent data fetching" or "parallel data fetching" in Google or other search engines.
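Since limiting parallel requests came up above, here is a minimal dependency-free concurrency limiter (all names are illustrative); libraries like React Query expose similar controls, but the core mechanism fits in a few lines:

```javascript
// Returns a wrapper that runs at most `max` promise-returning tasks at once;
// additional tasks wait in a FIFO queue.
function pLimit(max) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    fn()
      .then(resolve, reject)
      .finally(() => {
        active--;
        next(); // start the next queued task, if any
      });
  };
  return (fn) =>
    new Promise((resolve, reject) => {
      queue.push({ fn, resolve, reject });
      next();
    });
}

// Usage: at most 4 device requests in flight at once.
const limit = pLimit(4);
const fetchDevice = (id) => Promise.resolve({ id, ip: `10.0.0.${id}` });
const results = Promise.all(
  [1, 2, 3, 4, 5, 6].map((id) => limit(() => fetchDevice(id)))
);
```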
I have a web app that generates and plays back rhythmic musical notation and I would like to build "playalong mode".
e.g. the user clicks/taps to play along during playback and their accuracy is assessed along the way.
In order to achieve this I am measuring the currentTime of the Web Audio API AudioContext at each user interaction, resolving this to a position within the measure of music.
No matter how I build it I can't seem to achieve 100% accuracy. The problem seems to be the latency caused by JS event handlers such as 'onClick', 'onPointerDown' etc. The interaction is always read slightly late and inconsistently late each time so that I can't reliably account for this latency.
Does anyone have any idea how I could solve this problem?
A few approaches I've tried so far
Measuring the AudioContext time directly from user interaction onClick, onPointerDown
Reducing the AudioContext time received by a constant latency value
Reducing the AudioContext time received by measuring event execution time with an approach similar to: How to measure latency between interaction and mousedown event in JavaScript?
My approach seems fundamentally flawed, as there is always unpredictable and hard-to-measure latency. I am hoping to find a way to measure the AudioContext time with 100% accuracy at the moment of a user's interaction.
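One avenue worth trying (a sketch; the sample numbers are made up): rather than reading audioCtx.currentTime inside the late-firing handler, use the event's own timestamp, which browsers take closer to the moment of hardware input, and translate it onto the audio clock via AudioContext.getOutputTimestamp():

```javascript
// Translate a DOM event timestamp (performance.now() clock, in ms)
// onto the AudioContext clock (seconds), given a snapshot that pairs
// the two clocks at a single instant.
function eventTimeToAudioTime(eventTimeStampMs, outputTimestamp) {
  // outputTimestamp = audioCtx.getOutputTimestamp():
  //   { contextTime: seconds on the audio clock,
  //     performanceTime: the same instant in performance.now() ms }
  const deltaSec = (eventTimeStampMs - outputTimestamp.performanceTime) / 1000;
  return outputTimestamp.contextTime + deltaSec;
}

// In a handler (sketch):
//   el.onpointerdown = (e) => {
//     const t = eventTimeToAudioTime(e.timeStamp, audioCtx.getOutputTimestamp());
//     scoreTap(t); // compare against the scheduled beat time
//   };
const t = eventTimeToAudioTime(1500, { contextTime: 2.0, performanceTime: 1400 });
console.log(t); // 2.1
```

This removes the handler-dispatch latency from the measurement, though input-hardware latency itself remains outside what JavaScript can observe.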
I am implementing/evaluating a "real-time" web app using React, Redux, and Websocket. On the server, I have changes occurring to my data set at a rate of about 32 changes per second.
Each change causes an async message to the app using Websocket. The async message initiates a RECEIVE action in my redux state. State changes lead to component rendering.
My concern is that the frequency of state changes will lead to unacceptable load on the client, but I'm not sure how to characterize load against number of messages, number of components, etc.
When will this become a problem or what tools would I use to figure out if it is a problem?
Does the "shape" of my state make a difference to the rendering performance? Should I consider placing high change objects in one entity while low change objects are in another entity?
Should I focus my efforts on batching the change events so that the app can respond to a list of changes rather than each individual change (effectively reducing the rate of change on state)?
I appreciate any suggestions.
Those are actually pretty reasonable questions to be asking, and yes, those do all sound like good approaches to be looking at.
As a thought - you said your server-side data changes are occurring 32 times a second. Can that information itself be batched at all? Do you literally need to display every single update?
You may be interested in the "Performance" section of the Redux FAQ, which includes answers on "scaling" and reducing the number of store subscription updates.
Grouping your state partially based on update frequency sounds like a good idea. Components that aren't subscribed to that chunk should be able to skip updates based on React Redux's built-in shallow equality checks.
I'll toss in several additional useful links for performance-related information and libraries. My React/Redux links repo has a section on React performance, and my Redux library links repo has relevant sections on store change subscriptions and component update monitoring.
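For the batching question specifically, a minimal sketch (the action type and flush interval are made up) that buffers incoming websocket messages and dispatches them as one action per interval, so 32 messages/sec become a handful of store updates:

```javascript
// Buffer messages and dispatch one batched action per flush window.
// `dispatch` is whatever your Redux store provides.
function createBatcher(dispatch, flushMs = 250) {
  let buffer = [];
  let timer = null;

  return function receive(message) {
    buffer.push(message);
    if (timer === null) {
      timer = setTimeout(() => {
        dispatch({ type: "RECEIVE_BATCH", payload: buffer });
        buffer = [];
        timer = null;
      }, flushMs);
    }
  };
}

// Wiring (sketch): socket.onmessage = (e) => receive(JSON.parse(e.data));
// The reducer then applies the whole list in one state update:
//   case "RECEIVE_BATCH": return applyChanges(state, action.payload);
```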
Our application is a full-featured game that should load ~15MB of assets, mainly graphics and sounds (the PNGs are already optimized with TinyPNG; the sounds are MP3/OGG at an average bitrate).
So the game's load time is far greater than 4000ms (on first visit). But the game has a nice (honest) loading progress bar.
Will it be indexed and pass through Kik QA?
If no, what are recommendations for such "heavy" applications?
Thanks,
Sergey.
From the Kik requirements:
"The webpage should always be fast to load. Use minification to accomplish this; aim to be 4000ms on first visit, 700ms on repeat visits. The webpage should not consume excessive amounts of data on load: loading the webpage with no interaction should never go above 2MB."
The load time recommendations are referring to how long it takes for the window.onload event to fire.
Based on what you said, your game will be fine, because it loads quickly and then implements its own in-game loading state to load assets.
This is something that's been bothering me.
In a typical web application, a request goes to the server, and the server does its 'stuff' and returns the template along with the data, ready to be displayed in the browser.
In Angular however, there is a response to get the template and, in most cases, another one to get the data.
Doesn't this add pressure on the bandwidth – two responses sent over the wire, compared to one from a typical web app? Doesn't it also imply a (possibly) higher page-load time?
Thanks,
Arun
The short answer is that it depends. :)
For example, in terms of bandwidth, it depends on how big the server-side view-render payload is. If the payload is very big, then separating out the calls might reduce bandwidth, because the JSON payload might be lighter (fewer angle brackets than HTML), so overall bandwidth could go down. Also, many apps that use Ajax are already making more than one call (for partial views, etc.).
As for performance, you should test it in your application. There is an advantage to calling twice if you are composing services and one service might take longer than another. For example, if you need to lay out a product page and the service which provides 'number in the warehouse' takes longer than the product details, the application could see a perceived performance gain, since in the server-side view-render model you would have to wait for all the service calls to finish before returning a rendered view.
Further, client-side view rendering will reduce the CPU burden on the server, because view rendering is distributed across each client's machine. This could make a very big difference in the number of concurrent users an application can handle.
Hope this helps.