Consequences of Space Complexity in React Apps

I've recently been taking a class on Big O notation for space and time complexity, and I'm trying to apply the concepts to their real-world consequences in my day-to-day job as a React developer. I understand the consequences of time complexity: if a function has higher complexity, then the larger the data set grows, the longer it takes to compute, which means longer load times for the user, which no one likes. What I don't understand is the consequences of space complexity in a React app. What are the consequences, to either the user or the application, if you have a really large data set and end up using a huge amount of space in the browser?

Actually, this is not specific to React or any other framework. The concepts of time and space complexity don't depend much on frameworks or languages; that is the whole point of using complexity notation.
Now, coming to your question: the answer really depends on the browser you use to run your React app. Every browser allocates a maximum amount of memory to each tab, and if a page uses more than that allocation, the browser kills it.
Here is a question about memory limits on Stack Overflow:
Javascript memory limit
In Chrome and Chromium OS, the memory limit is defined by the browser, and you can inspect it with the following command in the Developer Tools console (open it with F12):

    window.performance.memory.jsHeapSizeLimit
    // 1090519040

On my Windows 10 OS, that is about 1 GB.
On Chrom(e/ium), you can get around the heap size limit by allocating native arrays:

    var target = [];
    while (true) {
      target.push(new Uint8Array(1024 * 1024)); // 1 MB native arrays
    }

This crashes the tab at around 2 GB, which happens very rapidly. After that, Chrom(e/ium) goes haywire, and repeating the test is not possible without restarting the browser.
The same answer also mentions React and Angular:
"Here are some examples of frameworks with well-known performance degradation: ... React: Again, immutable collections of JS objects only scale so far. create-react-app internally uses Immutable.JS, and Immutable.JS can only create about 500k immutable collections before it dies."
One more side effect the user can experience is overall system lag. The lag may come from low available RAM and the OS actively swapping to give your app more memory, until the app hits its limit and the browser kills it.
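If you want to see how close a page is getting to that limit while it runs, one rough check (Chrome-only, since performance.memory is non-standard and may not exist in other browsers) is to sample the heap numbers periodically. A minimal sketch:

    // Chrome-only: performance.memory is non-standard and may be undefined elsewhere.
    function logHeapUsage() {
      if (!performance.memory) return; // bail out where the API is unavailable
      const { usedJSHeapSize, jsHeapSizeLimit } = performance.memory;
      const usedMB = (usedJSHeapSize / (1024 * 1024)).toFixed(1);
      const limitMB = (jsHeapSizeLimit / (1024 * 1024)).toFixed(1);
      console.log(`JS heap: ${usedMB} MB used of ~${limitMB} MB limit`);
    }

    // Sample every 10 seconds during development to see whether usage keeps climbing.
    setInterval(logHeapUsage, 10000);

The numbers are coarse estimates, but a value that only ever climbs while you use the app is a good hint that something is being retained.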

Related

Does a large number of helper functions increase the bundle size sent to the client?

I am currently working on a project where we do a lot of parsing of data on the frontend side. The project has a lot of helper functions used to change data structures, etc. It is written with React (create-react-app) on the frontend and .NET on the backend. My question is: does this increase the bundle size of the application sent to the client in a way the user will notice?
If you are defining the function every time instead of importing a more generic function from your utilities, more code is shipped to the client, which increases the bundle size. But I would suggest that your biggest issue is not bundle size so much as speed, because the calculations (if unoptimized or too complex) can block the UI. You can test that by opening the Performance tab in Chrome DevTools, setting CPU throttling to 4x slowdown, and then doing an interaction that triggers your data manipulation. If you see the UI lagging, start by memoizing the values that come out of those operations so they aren't recalculated on every render, as sketched below.
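For illustration, here is a minimal sketch of that memoization idea in a React function component. The parseRows helper and the rows prop are hypothetical stand-ins for whatever expensive transformation your project actually does:

    import React, { useMemo } from 'react';

    // Hypothetical expensive transformation of raw rows coming from the API.
    function parseRows(rows) {
      return rows.map(row => ({
        ...row,
        total: row.items.reduce((sum, item) => sum + item.price, 0),
      }));
    }

    function OrdersTable({ rows }) {
      // useMemo re-runs parseRows only when `rows` changes,
      // not on every render of OrdersTable.
      const parsed = useMemo(() => parseRows(rows), [rows]);

      return (
        <ul>
          {parsed.map(order => (
            <li key={order.id}>{order.total}</li>
          ))}
        </ul>
      );
    }

    export default OrdersTable;

The component itself still re-renders when its parent does; memoization only avoids repeating the expensive calculation.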

React/Redux - memory footprint not proportionate to state

So my whole Redux state is perhaps around 3-4mb, but Chrome is reporting my tab's memory usage at around 400-500mb, which climbs the longer you use it.
I understand there are other things it needs the memory for (so I shouldn't expect a 1:1 relationship), but does anyone know how I'd attempt to reduce memory consumption?
On a fresh session (or Incognito tab), my app is running very smoothly. If it's open for an afternoon or so, performance suffers greatly.
- My Redux store isn't overly large
- Same page/DOM nodes etc. between the normal and Incognito tabs
- Everything else is seemingly identical
I get that this is fairly vague, but I'm not sure what else to include. Anyone have any pointers?
Please use the Google Chrome Performance Analysis Tools to analyse the performance of your app and see where savings can be made.
That said, Google Chrome can be a memory-hungry application in general, so consider whether this is actually a problem. If the Chrome session is not consuming enough RAM to hurt the overall computer's performance, then it is not a problem. Try running the application on a computer with very little RAM; as long as memory usage stops growing before it impacts performance, it is a non-issue.
If it does not do this and it begins to consume more and more memory, you likely have a memory leak and should look to resolve this with the tools linked above.
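As one concrete, hypothetical illustration of the kind of leak those tools often surface in long-lived React/Redux apps: a timer or subscription created in an effect and never cleaned up keeps its closures (and everything they reference) alive for the life of the tab. A minimal sketch, assuming a pollPrices function that dispatches fresh data into the store:

    import { useEffect } from 'react';

    // Hypothetical component; pollPrices is whatever function fetches data
    // and dispatches it into the Redux store.
    function PriceTicker({ pollPrices }) {
      useEffect(() => {
        const id = setInterval(pollPrices, 5000);
        // Without this cleanup, every mount leaves another interval running,
        // and its closure keeps old props (and anything they reference) alive.
        return () => clearInterval(id);
      }, [pollPrices]);

      return null;
    }

    export default PriceTicker;

Comparing two heap snapshots taken a few minutes apart in Chrome DevTools is usually the quickest way to spot which objects are accumulating.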

AngularJS Performance vs Page Size

My site is ~500 KB gzipped, including JS, CSS and images. It is built on AngularJS. A lot of people in my company are complaining about the site being slow on lower bandwidths. There are a few questions I would like to get answered:
Is 500 KB gzipped too heavy for lower bandwidths? People claim it takes 20 seconds to load on their machines, which I believe is an exaggeration. Is it really due to AngularJS and its evaluation time?
How much does the size of the app matter on lower bandwidths? If my site is 500 KB and I reduce it to 150 KB by building a custom framework, would that really help on low bandwidth? If so, by how much?
It's all subjective, and the definition of "low bandwidth" is rather wide. However, using https://www.download-time.com/ you can get a rough idea of how long it would take to download 500 KB on different bandwidths.
So, on any connection above 512 Kbps (the minimum ADSL speed; most lines are now better than 5 Mbps, and 3G mobile is around the same mark), it's unlikely that the file size is the problem.
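As a rough back-of-the-envelope check (ignoring latency, TCP slow start and server time): 500 KB is about 4,000 kilobits, so the transfer alone takes on the order of 8 seconds at 512 Kbps and well under a second at 5 Mbps. A 20-second load on any reasonable connection therefore points at something other than the raw download.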
If "low bandwidth" also implies "limited hardware" (RAM, CPU), it's possible the performance problem lies in unzipping and processing your application. Angular is pretty responsive, but low-end hardware may struggle.
The above root causes would justify rewriting the application using your own custom framework.
The most likely problem, however, is any assets/resources/templates your angular app requires on initialization - images, JSON files etc. This is hard to figure out without specific details - each app is different. The good news is that you shouldn't need to rewrite your application - you should be able to use your existing application and tweak it. I'm assuming the 500Kb application can't be significantly reduced in size without a rewrite, and that the speed problem is down to loading additional assets as part of start-up.
I'd use Google Chrome's Developer tools to see what's going on. The "performance" tab has an option to simulate various types of network conditions. The "network" tab allows you to see which assets are loaded, and how long they take. I'd look at which assets take time, and seeing which of those can be lazy loaded. For instance, if the application is loading a very large image file on start-up, perhaps that could be lazy-loaded, allowing the application to appear responsive to the end user more quickly.
A common way to improve perceived performance is to use lazy loading.
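As one hedged sketch of lazy loading in plain JavaScript (framework-agnostic, so it applies to an Angular app too): an IntersectionObserver swaps in the real image source only when the placeholder scrolls into view. The data-src attribute is just a convention assumed for this example:

    // Assumes markup like: <img class="lazy" src="tiny-placeholder.jpg" data-src="big-photo.jpg">
    const images = document.querySelectorAll('img.lazy');

    const observer = new IntersectionObserver((entries, obs) => {
      entries.forEach(entry => {
        if (!entry.isIntersecting) return;
        const img = entry.target;
        img.src = img.dataset.src; // start the real download only when visible
        obs.unobserve(img);        // each image only needs to be handled once
      });
    });

    images.forEach(img => observer.observe(img));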
To decrease your load time, make sure caching is set up properly, and use a download-time calculator to estimate how long your files take to fetch; you can use https://downloadtime.org for reference. If you have any issues let me know.
As angular.js itself has a gzipped size of about 57 KB, it seems much more is being loaded with this initial page call, roughly ten times the size of angular.js.
To decrease the page load time, try to split your JavaScript into chunks containing only the functionality that is needed for, say, the index page.
For example, when you're using Webpack, the recommended default maximum chunk size is around 244 KB.
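As an illustrative sketch of that chunking idea with any bundler that understands dynamic import() (Webpack included); the ./reports module is a placeholder name, and the bundler emits it as a separate chunk that is only downloaded when the user actually opens that view:

    // index.js: only the code needed for the index page lives in the main bundle.
    document.getElementById('open-reports').addEventListener('click', async () => {
      // Dynamic import() tells the bundler to put ./reports in its own chunk,
      // fetched on demand instead of on the initial page load.
      const { renderReports } = await import('./reports');
      renderReports(document.getElementById('app'));
    });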

Tracking down memory leak in Google App Engine Golang application?

I saw this Python question: App Engine Deferred: Tracking Down Memory Leaks
... Similarly, I've run into this dreaded error:
Exceeded soft private memory limit of 128 MB with 128 MB after servicing 384 requests total
...
After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.
According to that other question, it could be that the "instance class" is too small to run this application, but before increasing it I want to be sure.
After checking through the application I can't see anything obvious as to where a leak might be (for example, unclosed buffers, etc.) ... and so whatever it is it's got to be a very small but perhaps common mistake.
Because this is running on GAE, I can't really profile it locally very easily as far as I know, since that's the runtime environment. Might anyone have a suggestion as to how to proceed and ensure that memory is being recycled properly? I'm sort of new to Go but I've enjoyed working with it so far.
For a starting point, you might be able to try pprof.WriteHeapProfile. It'll write to any Writer, including an http.ResponseWriter, so you can write a view that checks for some auth and gives you a heap profile. An annoying thing about that is that it's really tracking allocations, not what remains allocated after GC. So in a sense it's telling you what's RAM-hungry, but doesn't target leaks specifically.
The standard expvar package can expose some JSON including memstats, which tells you about GCs and the number of allocs and frees of particular sizes of allocation (example). If there's a leak you could use allocs minus frees to get a sense of whether it's large allocs or small ones that are growing over time, but that's not very fine-grained.
Finally, there's a function to dump the current state of the heap, but I'm not sure it works in GAE and it seems to be kind of rarely used.
Note that, to keep GC work down, Go processes grow to be about twice as large as their actual live data as part of normal steady-state operation. (The exact % it grows before GC depends on runtime.GOGC, which people sometimes increase to save collector work in exchange for using more memory.) A (very old) thread suggests App Engine processes regulate GC like any other, though they could have tweaked it since 2011. Anyhow, if you're allocating slowly (good for you!) you should expect slow process growth; it's just that usage should drop back down again after each collection cycle.
A possible approach to check if your app has indeed a memory leak is to upgrade temporarily the instance class and check the memory usage pattern (in the developer console on the Instances page select the Memory Usage view for the respective module version).
If the pattern eventually levels out and the instance no longer restarts then indeed your instance class was too low. Done :)
If the usage pattern keeps growing (with a rate proportional with the app's activity) then indeed you have a memory leak. During this exercise you might be able to also narrow the search area - if you manage to correlate the graph growth areas with certain activities of the app.
Even if there is a leak, using a higher instance class should increase the time between the instance restarts, maybe even making them tolerable (comparable with the automatic shutdown of dynamically managed instances, for example). Which would allow putting the memory leak investigation on the back burner and focusing on more pressing matters, if that's of interest to you. One could look at such restarts as an instance refresh/self-cleaning "feature" :)

Appengine frontend instances have been using more and more RAM, how can I reduce this?

My instances all now start at 140m and average just under 200. If left long enough they start hitting 240m. However my question is more about the memory being used right after a fresh instance is booted up. I store nothing on the instances. Every request fetches stuff from memcache and datastore and I don't use singletons.
All I have are classes and a lot of static resources that deploy with the instances. I use JSPs extensively (if that makes a difference).
Thanks for any assistance!
I'm going from memory here, since I have used Java on App Engine for a few years. This may be stale.
The JVM doesn't like to release memory. If an instance gets created and services a request, the memory watermark goes up. Garbage collection may 'free' part of that memory in the sense of making it available for reuse, but the high watermark on process memory doesn't necessarily go down. A subsequent request may need allocations that aren't available as free chunks, so the memory watermark goes up again. If the app isn't configured to serve multiple requests simultaneously, memory use follows something like a sigmoid curve. If multiple requests are being processed simultaneously, the watermark is raised further.
That said, a common cause of unexpected memory growth is queries that retrieve more rows than are necessary, with filtering happening in the app.
But without more information, your specific case is impossible to diagnose.
I believe I figured out why my project was taking up an ever increasing amount of ram. I happen to have a lot of static resources in my project and it would appear that these static resources all get loaded into the frontend instance memory (probably for speed). I managed to free up huge amounts of memory by moving my static resources off of my primary application servers.
