I have tried several ways of detecting signal strength in React and showing it as signal bars like Weak, Medium, Excellent. The solution I built was in plain JavaScript: we download an image from the server and measure the download time, but that needs an interval to run again and again, which pushes load onto the server. Is there any way to do the same in a React way?
You can try adding a change event listener to the experimental NetworkInformation API and calculating your signal strength from its downlink attribute.
But it's not well supported yet.
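A minimal sketch of that idea as a React hook. navigator.connection and its change event are the real (experimental) API, but the Weak/Medium/Excellent thresholds below are arbitrary assumptions, and the hook falls back to "Unknown" in unsupported browsers (notably Safari):

import { useEffect, useState } from 'react';

function useSignalStrength() {
  const [strength, setStrength] = useState('Unknown');

  useEffect(() => {
    const connection = navigator.connection;
    if (!connection) return; // API unavailable: keep "Unknown"

    const update = () => {
      const mbps = connection.downlink; // estimated bandwidth in Mb/s
      setStrength(mbps < 1 ? 'Weak' : mbps < 5 ? 'Medium' : 'Excellent');
    };

    update(); // read once on mount
    connection.addEventListener('change', update); // then react to changes
    return () => connection.removeEventListener('change', update);
  }, []);

  return strength;
}

A component can then just render the returned value, with no polling interval and no server round trips.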
I'm working on a platform where users can create rooms, join them, and share content. One of the major features that needs to be implemented is a robust media upload system, and I have a pretty good idea of how I'm going to achieve this on the backend with chunk-based file upload. An average size for the content that users will be uploading is around 200 MB.
On the frontend I'm using NextJS, and the idea is to have a web worker handle all the media upload logic, with a queue system, so that uploads are not affected by components re-rendering and the user doesn't have to wait on a dialog box until the process completes; the upload continues in the background. Is this approach going to work, and is it good practice? Is it going to scale without having to be redesigned in the long run? If yes, do you know any example of it? If not, why, and what is your suggestion?
Link to an Image explaining what I'm trying to achieve
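For reference, a rough sketch of the worker-side queue I have in mind. The /upload/chunk endpoint and the 5 MB chunk size are placeholders; a real implementation would also send a chunk index/offset and handle retries and resume:

// upload.worker.js
const CHUNK_SIZE = 5 * 1024 * 1024; // placeholder chunk size
const queue = [];
let busy = false;

self.onmessage = (e) => {
  queue.push(e.data.file); // main thread posts File objects
  if (!busy) drain();
};

async function drain() {
  busy = true;
  while (queue.length) {
    const file = queue.shift();
    for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
      const chunk = file.slice(offset, offset + CHUNK_SIZE);
      await fetch('/upload/chunk', { method: 'POST', body: chunk }); // placeholder endpoint
      self.postMessage({ name: file.name, progress: offset / file.size });
    }
    self.postMessage({ name: file.name, progress: 1 });
  }
  busy = false;
}

The main thread would create the worker and call worker.postMessage({ file }); File objects are structured-cloneable, so they can be posted to the worker directly.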
We have a React website running with lots of high-quality images that has been experiencing this warning. How do you begin debugging this warning message on Safari? Are there specific things that cause it?
This message is caused by Safari's watchdog process, which monitors the JavaScript running on a page. It is there to notify the user when a script is using too many resources. Your page, when loaded on my computer, raises CPU utilization to 68 percent. Be wary of loops and custom render code.
Notes for improvement:
Make the rendering code as efficient as possible.
Combine your internal JavaScript files into a single file, instead of 7 files. Major improvement.
When possible (due to licensing and update considerations), include the 9 external scripts in the single file stated above. Minor improvement.
Split the main page into different sections, either as separate pages or dynamically loaded using AJAX. Major improvement.
Avoid SVG files. SVG files require a lot of computing power to rasterize and display; this is the main cause of the 7-second load times. Convert the files to PNG at the largest expected display resolution and offer an expanded SVG file if more detail is wanted (by click or delayed mouse-over; see the sketch after these notes). Major improvement.
The number of images is not the issue. It is the number of SVG images (on load) and the scripts causing the issue.
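A minimal sketch of that PNG-first, SVG-on-demand idea. The data-has-svg attribute and the matching .png/.svg file pairs are assumptions for illustration:

// Serve the pre-rasterized PNG by default; swap in the vector only on request
document.querySelectorAll('img[data-has-svg]').forEach((img) => {
  img.addEventListener('click', () => {
    img.src = img.src.replace(/\.png$/, '.svg');
  });
});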
Open the page in Chrome, open the Developer Tools and then switch over to the "Performance tab".
Then use the 2nd icon from the left, the one that looks like a "reload" button, which says "Start profiling and reload page".
You will get a full rundown of what is taking how long. You can see at the top what is eating up FPS and CPU, and then you can select the timeframes that had especially high load.
In the bottom part, select the "Call Tree" or "Bottom-Up" tabs to get a rundown of which scripts and function calls cause performance issues.
Usually "normal" websites (i.e. not games) would not have many frame redraws. You can then spot, for example, whether loading spinners are animated with JavaScript instead of transforms and transitions; sometimes they are still re-rendering even though they are out of view.
On a React-specific note: it might also make sense to inspect the app with the React Developer Tools. E.g. you might be able to spot subtrees that are re-rendering constantly for no reason.
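To make the spinner point concrete, a small sketch of the anti-pattern and the fix, assuming a hypothetical .spinner element. The Web Animations API call lets the browser composite the transform animation off the main thread where possible:

const spinner = document.querySelector('.spinner'); // hypothetical element

// Anti-pattern: a JavaScript timer touching styles on every tick
let angle = 0;
const timer = setInterval(() => {
  angle = (angle + 6) % 360;
  spinner.style.transform = `rotate(${angle}deg)`;
}, 16);

// Better: declare the animation once and let the engine run it
clearInterval(timer);
spinner.animate(
  [{ transform: 'rotate(0deg)' }, { transform: 'rotate(360deg)' }],
  { duration: 1000, iterations: Infinity }
);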
I'm attempting to write some UI tests for a RequireJS-based Backbone application, utilizing FluentAutomation.SeleniumWebDriver and NUnit. The HTML page in question contains a typical data-main attribute for loading the RequireJS module for the application. My struggle is in properly detecting when the application is fully loaded with these tools; the only thing I've gotten to work consistently so far is using an explicit wait in seconds, like so:
I.Open("http://myapp")
.Wait(5)
.Enter("foo").In("input[name=username]")
.Enter("bar").In("input[name=password]")
.Click("button")
.Wait(5)
.Expect.Text("Welcome").In("#welcome");
This is less than ideal -- my test as written above will always take at least 10 seconds to run, when in reality the app might be "ready" much faster than that. What I'd like to be able to do is something like this:
I.Open("http://myapp")
.WaitUntil(() => I.Assert.Exists("input[name=username]"))
.Enter("foo").In("input[name=username]")
.Enter("bar").In("input[name=password]")
.Click("button")
.WaitUntil(() => I.Assert.Exists("#welcome"))
.Expect.Text("Welcome").In("#welcome");
However, this doesn't work -- using WaitUntil here actually seems to prevent the app from loading, for reasons unclear to me, as I simply receive timeout exceptions after the default wait period (30 seconds), stating that it was unable to locate the element in question within that timeframe.
I see that Selenium 2 provides a WebDriverWait for this kind of scenario, and possibly that would work here, but am unsure how I would use this within FluentAutomation (and a quick search of the FluentAutomation code on GitHub doesn't seem to indicate it's in use within the library).
What can I use in FluentAutomation to properly wait for a RequireJS module (or DOM loaded by it) to be ready?
Additional details:
This might not be a RequireJS compatibility problem at all. I've looked further into the app and found that what's happening after the Click("button") is actually a window.location.replace -- not a RequireJS async module load. It's the one place in the app that this is occurring, apparently. So, is a window.location redirect a known scenario that would cause problems with WaitUntil, and is there an alternate approach (aside from a simple Wait(5)) that would properly handle this?
I can't understand why HTML/Web UI responds more slowly than WinForms/WPF/Android View/native UI.
Native UI also has styles, element nesting, and events, analogous to the CSS, DOM, and JavaScript events of Web UI.
Event response time includes focus changes, dropdowns, scrolling, animated movement, animated resizing, etc.
DOM tree insertion/replacement is also slow: inserting 10,000 characters of HTML costs 100 ms in Google Chrome on Android 4.0, while parsing its template costs only 20 ms (jQuery micro-template).
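A claim like this is easy to spot-check in the console. A rough sketch, assuming a container element with id "out" (timings will vary by device and markup):

const html = '<span>x</span>'.repeat(700); // roughly 10,000 characters of markup
const t0 = performance.now();
document.getElementById('out').innerHTML = html; // parse + build DOM nodes
const t1 = performance.now();
console.log(`innerHTML insertion took ${(t1 - t0).toFixed(1)} ms`);
// Note: this measures parsing and DOM construction only; layout and paint
// happen afterwards and need the Performance tab to observe.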
I realized the biggest factors that slow down event response may be:
1. UI locking between parallel JavaScript processes.
2. The rendering engine being too slow to process new UI-change messages from JavaScript workers, especially when it is still busy with the previous UI update (because of point 3).
3. The HTML layout model (for example: CSS cascading, inline flow layout, responsive layout, etc.) can slow down partial UI updates.
4. Parsing HTML/XML takes a long time. A hint: Android view inflation relies heavily on pre-processing of XML files done at build time (http://developer.android.com/reference/android/view/LayoutInflater.html).
A subset of the HTML and CSS standards may be the future solution for webview app development:
http://www.silexlabs.org/haxe/cocktail/
http://www.terrainformatica.com/htmlayout/
http://www.nativecss.com/
http://www.pixate.com/
https://github.com/tombenner/nui
http://steelratstory.com/steelrat-products/wrathwebkit
http://trac.webkit.org/wiki/EFLWebKit
https://github.com/WebKitNix/webkitnix
http://qt-project.org/doc/qt-4.8/richtext-html-subset.html
http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/
A pile of native UI markup languages: http://en.wikipedia.org/wiki/User_interface_markup_language
Why is there not a simplified HTML standard and a simplified WebCore layout engine to replace these native UIMLs?
Maybe we could implement an HTML subset in the kivy.org project.
PC/Android browser = application thread + UI thread
iOS browser = application thread + UI data thread + UI hardware thread (Core Animation/OpenGL ES)
In the iOS browser, the application thread can directly call the UI hardware thread.
If a Web UI is implemented entirely in JavaScript on the client side, the difference from a WinForms/native UI will be trivial.
However, in most cases the Web UI triggers a Web request to the Web server, and then it has to go through the following steps to achieve the same effect as a WinForms/native app:
1. Send an HTTP request (GET/POST/...) to the Web server.
2. The Web server, an executable (an external app or a service) listening on one or more ports, receives the request, parses it, and finds the Web application.
3. The Web server executes the backend (server-side) logic within the application. (A Web application such as ASP.NET is pre-compiled, so the time complexity of this step can be very close to that of a Windows app.)
4. The Web server renders the result into markup and sends it back to the client.
5. The client (browser) parses the result and updates the UI if necessary. (Controls, images, and other resources in a Web page normally take a little longer to render in a browser than a Windows app takes to render its display.)
Even if the Web server is local, the cost of data parsing/formatting/transfer cannot simply be ignored.
On the other hand, an application with a WinForms/native UI typically maintains a message loop, which is active and hosted in machine code. A UI request normally just triggers a lookup in the message table and then executes the backend logic (step 3 above).
When it returns the result and updates the UI, the result can simply be a binary data structure (it doesn't need to be markup), and it doesn't rely on another application (the browser) to render it to the screen.
Lastly, a WinForms/native application normally has full control to maintain multiple threads to update the UI gradually, while a Web application has no direct control over those resources.
UPDATE: When we compare a Web application and a Windows/WPF (or native) application consuming the same Web service to partially update their UIs:
The two UIs should respond and refresh with a negligible speed difference; the implementation difference between binary and scripted execution for responding and refreshing the UI is almost nothing.
Neither UI needs to reconstruct the control tree and refresh its entire appearance. Given the same conditions, they could have the same CPU priority, memory/virtual-memory caching, and the same (or a close) number of kernel-object and GDI handles at the process/thread level.
In this case, as you described, there should be almost no visual difference.
UPDATE 2:
Actually, the event-handling mechanisms in Web and Windows apps are similar. The DOM has event bubbling. Similarly, MFC has command routing; WinForms has its event flow; WPF has event bubbling and tunnelling; and so on. The idea is that a UI event might not strictly belong to one control, and a control has some way to claim that an event has been "handled". For standard controls, focus-change, text-change, dropdown, and scrolling events should have similar client-side response times in both Web and Windows apps.
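To make the bubbling and "handled" idea concrete, a minimal DOM sketch; the panel and save ids are hypothetical:

const panel = document.getElementById('panel'); // hypothetical container
const save = document.getElementById('save');   // hypothetical button inside it

panel.addEventListener('click', () => {
  console.log('panel saw the click (it bubbled up)');
});

save.addEventListener('click', (e) => {
  console.log('button handled the click');
  e.stopPropagation(); // "claim" the event so ancestors never see it
});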
Performance-wise, rendering is the biggest difference. Web apps have limited control of the "device context" because a Web page is hosted by an external application, the Web browser. Windows applications can implement animation effects using GPU resources (as WPF does) and speed up rendering by refreshing the "device context" partially. That's why the HTML5 canvas makes Web developers excited, while Windows game developers have been using OpenGL/DirectX for over 10 years.
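As a tiny illustration of partial redraws on a canvas (a sketch, assuming a <canvas id="stage"> element): only the dirty rectangle is repainted each frame instead of the whole surface:

const ctx = document.getElementById('stage').getContext('2d');
let x = 0;
function frame() {
  ctx.clearRect(x, 40, 20, 20); // erase the square at its old position only
  x = (x + 2) % 300;
  ctx.fillRect(x, 40, 20, 20);  // draw it at the new position
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);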
UPDATE 3:
Each Web browser engine (http://en.wikipedia.org/wiki/Layout_engine) has its own implementation of rendering the DOM and CSS, and its own implementation of (CSS) selectors. Moving and resizing elements within a Web page means changing the DOM/CSS (tree) setup, and selector and rendering performance depends heavily on the browser engine.
UI operations can make selectors go through unnecessary steps to update the UI, and a Web page has no way to tell the browser to do partial rendering.
This is why fancy JavaScript controls (some jQuery UI, Dojo, Ext JS) cannot be real-time fast; they are usually slower than Flash controls.
The time spent on the client is negligible compared to the time the data spends travelling over the network. The actual render time of a Windows form or a webpage in a browser is measured in (tens or maybe hundreds) of microseconds. Sending a request to a server and getting the result back is measured in milliseconds.
You can confirm this quite easily:
Create a simple Winforms application, time it.
Create a similar Web-based application. Run it on the webserver on your own PC, i.e. http://localhost/myapp.asp, and time it.
Run it on a remote webserver and time it.
You'll see that 1 is fastest, followed closely by 2 (a little slower, due to interpreting the HTML, the CSS, etc.), and 3 is vastly slower because of the network time.
To answer your question: the difference is due almost entirely to network delays, which are an order of magnitude greater than the local processing time.
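A sketch of how to check this split yourself with the (standard) Navigation Timing API; run it in the browser console after the page loads:

const [nav] = performance.getEntriesByType('navigation');
// Time spent on the wire vs. time spent processing the document locally
console.log('network (request + response):', nav.responseEnd - nav.requestStart, 'ms');
console.log('DOM processing:', nav.domComplete - nav.responseEnd, 'ms');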
EDIT: It would be kind of the downvoters to add a comment explaining why.
3 big differences
Web UI apps run within a browser, so their responsiveness depends on how well the browser is optimized.
The browser also has its own JavaScript VM: another process that has to interpret (or JIT-compile) the code before it runs.
All of this is an extra layer on top of the native OS. If you bring up your computer's activity monitor and load a web page in your browser, you will notice what a resource hog web browsers are.
Native UI elements have graphics-acceleration support. Depending on the OS, native UI templates are compiled to a native format that does not have to be parsed for rendering.
One thing to keep in mind is that the browser itself is a native application, so anything built for the browser to run is inherently written with (at least) one additional layer of abstraction, versus something written directly for native execution.
It's also worth noting such dynamics as this:
300ms tap delay, gone away
http://updates.html5rocks.com/2013/12/300ms-tap-delay-gone-away
The initial impetus for this artificial delay was to support double-tap zooming versus other touch interactions -- that is, slower responsiveness in this case was a deliberate way to disambiguate different user actions.
Granted, while this is a rather specific use-case, the general concept does serve as an example of the different considerations for browser-based vs native implementations. That is, browser-based experiences include some of the usual framework cost of solving for a wide variety of interactions and content, whereas native experiences are naturally tailored more specifically to only listen for / respond to the desired interaction models.
Throughout the implementation, many tiny parts (such as this) are slimmer and more focused in a raw native version, which can contribute to the general effect of better responsiveness.
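A small sketch for observing that delay on a touch device, logging the gap between touchend and the synthesized click. Per the article above, adding <meta name="viewport" content="width=device-width"> removes the delay in modern Chrome:

let touchEndTime = 0;
document.addEventListener('touchend', () => {
  touchEndTime = performance.now(); // touchend fires before the click
});
document.addEventListener('click', () => {
  const gap = performance.now() - touchEndTime;
  console.log(`click fired ${gap.toFixed(0)} ms after touchend`);
});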
Only in substandard browsers (this includes all Android browsers, all Mac OS browsers, all Linux browsers, and, worst of all, every version of Google Chrome). These are badly written, unoptimised browsers with no concern for touchscreen latency, UI responsiveness, or smooth scrolling. They lock up and stutter during any kind of CPU activity, disk or network I/O, or user input.
Superior browsers such as Internet Explorer 11 or iOS Safari are sometimes even more responsive than unoptimised native apps.
Basically only Windows 8.1 and iOS have responsive browsers. All other browsers are inferior as far as UI responsiveness is concerned. The difference is really huge. IE11 and iOS Safari obliterate other browsers in UI latency and smoothness.