AngularJS Performance vs Page Size

My site is ~500 KB gzipped, including JS, CSS, and images. It is built on AngularJS. A lot of people in my company are complaining about the site being slow on lower bandwidths. There are a few questions I would like to get answered:
Is 500 KB gzipped too high for lower bandwidths? People claim it takes 20 seconds to load on their machines, which I believe is an exaggeration. Is it really due to AngularJS and its evaluation time?
How does the size of the app matter at lower bandwidths? If my site is 500 KB and I reduce it to 150 KB by building a custom framework, would that really help at lower bandwidth? If so, by how much?

It's all subjective, and the definition of "low bandwidth" is rather wide. However, using https://www.download-time.com/ you can get a rough idea of how long it would take to download 500 KB on different bandwidths.
So, on any connection above 512 Kbps (the minimum ADSL speed; most connections are now better than 5 Mbps, and 3G mobile is around the same mark), it's unlikely that the file size is the problem.
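As a rough back-of-the-envelope check, here is a simple sketch that ignores latency, TCP slow start and protocol overhead, so real-world times will be somewhat higher:
// Rough transfer-time estimate: size in kilobytes, bandwidth in kilobits per second.
function downloadSeconds(sizeKB, bandwidthKbps) {
  return (sizeKB * 8) / bandwidthKbps;
}

console.log(downloadSeconds(500, 512));   // ~7.8 s on a 512 Kbps link
console.log(downloadSeconds(500, 5000));  // ~0.8 s on a 5 Mbps link
Even at 512 Kbps, raw transfer of 500 KB is well under the 20 seconds being reported, which points at processing time or additional requests rather than the bundle size alone.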
If "low bandwidth" also implies "limited hardware" (RAM, CPU), it's possible the performance problem lies in unzipping and processing your application. Angular is pretty responsive, but low-end hardware may struggle.
Those are the kinds of root cause that would justify rewriting the application with your own custom framework.
The most likely problem, however, is any assets/resources/templates your Angular app requires on initialization - images, JSON files, etc. This is hard to figure out without specific details - each app is different. The good news is that you shouldn't need to rewrite your application - you should be able to keep the existing application and tweak it. I'm assuming the 500 KB application can't be significantly reduced in size without a rewrite, and that the speed problem comes down to loading additional assets as part of start-up.
I'd use Google Chrome's developer tools to see what's going on. The "Performance" tab has an option to simulate various types of network conditions. The "Network" tab lets you see which assets are loaded and how long they take. I'd look at which assets take time, and see which of those can be lazy loaded. For instance, if the application loads a very large image file on start-up, perhaps that could be lazy-loaded so the application appears responsive to the end user more quickly.
A common way to improve perceived performance is to use lazy loading.
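As an illustration of the idea, here is a minimal, framework-agnostic sketch; the .lazy class and data-src attribute are assumptions made for this example, not something from the original question:
document.addEventListener('DOMContentLoaded', function () {
  // Images declare their real source in data-src and only download once visible.
  var lazyImages = document.querySelectorAll('img.lazy');
  var observer = new IntersectionObserver(function (entries, obs) {
    entries.forEach(function (entry) {
      if (entry.isIntersecting) {
        var img = entry.target;
        img.src = img.dataset.src;  // start the real download now
        obs.unobserve(img);         // each image only needs to be loaded once
      }
    });
  });
  lazyImages.forEach(function (img) { observer.observe(img); });
});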

To decrease your load time, make sure your caching is set up properly, and use a download-time calculator to estimate how long your bundle takes on a given connection; you can use https://downloadtime.org for reference. If you have any issues, let me know. Also, to decrease page load time, try to split your JavaScript into chunks that contain only the functionality needed for, e.g., the index page.
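For the caching part, a minimal sketch assuming an Express static server; the directory name and max-age value are illustrative, not from the original answer:
const express = require('express');
const app = express();

// Serve the built assets with far-future caching so repeat visits skip the download.
app.use(express.static('public', {
  maxAge: '30d',  // sets Cache-Control max-age on js/css/images
  etag: true      // allows cheap revalidation once the cache expires
}));

app.listen(3000);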

As angular.js itself has a gzipped size of about 57 KB, it seems much more is loaded with this initial page call - roughly ten times the size of angular.js.
To decrease the page load time, try to split your JavaScript into chunks that contain only the functionality needed for a given page, e.g. the index page.
For example, when you're using Webpack, the recommended default maximum asset size is around 244 KiB.
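A minimal webpack.config.js sketch of that idea; the entry names and paths are assumptions for illustration:
// webpack.config.js
const path = require('path');

module.exports = {
  entry: {
    index: './src/index.js',  // only what the index page needs
    admin: './src/admin.js'   // loaded only on the admin screens
  },
  output: {
    filename: '[name].[contenthash].js',
    path: path.resolve(__dirname, 'dist')
  },
  optimization: {
    splitChunks: { chunks: 'all' }  // pull shared vendor code into its own chunk
  },
  performance: {
    maxEntrypointSize: 244 * 1024,  // warn when an entry point exceeds ~244 KiB
    maxAssetSize: 244 * 1024
  }
};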

Related

Why is my NextJS performance score so inconsistent in web.dev?

We are seeing very inconsistent performance scores in web.dev with our NextJS application. At first we had a performance score of around 30, so we started optimising. Now we are at around 90, with a margin of 5, locally in Lighthouse. However, when we test it on web.dev, our score varies from 73 to 99, which is a huge difference. What could be the cause of this? When you compare two reports with exactly the same bundle size, one of them has 670 ms total blocking time and the other has 70 ms. Also, the "Minimize main-thread work" and "Reduce JavaScript execution time" figures differ a lot: "Minimize main-thread work" is 3.5 s on the less performant run and 2.8 s on the high-performing run, while "Reduce JavaScript execution time" is 1.5 s on the less performant run and is not present at all (so 0 s, I assume) on the performant run. Again, this is with exactly the same JS and CSS bundle.
What could cause this drop in performance? Is this some kind of error in my code, or is this just an issue in Lighthouse/web.dev? I am hosting on Vercel, which serves my website through a CDN, and I am also using a CDN for serving images.
Any help will be appreciated.
Two factors jumped to my mind:
CDN related
Your CDN provider runs many datacenters around the globe. The request from any user, including web.dev, is routed to the nearest datacenter, which may or may not have the requested resource in its cache. If it doesn't, the resource (.html page, script bundle, etc.) is requested from your server - this takes extra time and performance suffers.
Once in the cache, the resource remains there for some time. No CDN provider will keep it there forever, so sooner or later it gets evicted. When that happens depends on things like the CDN provider's policy, the free or paid plan you are on, the HTTP headers set by your webserver, and demand for the resource.
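If you want more influence over how long the CDN keeps a response, one option is to set Cache-Control headers from next.config.js. This is a hedged sketch assuming Next.js 9.5+; the route pattern and values are illustrative, and note that Vercel already serves /_next/static/* with immutable caching:
// next.config.js
module.exports = {
  async headers() {
    return [
      {
        source: '/:path*',
        headers: [
          {
            // let the CDN cache for an hour and serve stale copies while revalidating
            key: 'Cache-Control',
            value: 's-maxage=3600, stale-while-revalidate=86400'
          }
        ]
      }
    ];
  }
};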
Lighthouse related
The report generated by web.dev has a "CPU/Memory Power" setting at the bottom. It reflects the capability of the hardware used by Lighthouse, and this setting affects the performance results a lot.
The cloud instance of Lighthouse at web.dev runs on a shared cloud VM, and the setting reflects the current workload, which varies from time to time.
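One way to take that variability out of the picture is to run Lighthouse yourself with fixed throttling settings. A sketch assuming the lighthouse and chrome-launcher npm packages (recent versions are ESM-only, hence the import syntax); the URL and the slowdown multiplier are placeholders:
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

const result = await lighthouse(
  'https://example.com',  // placeholder URL
  { port: chrome.port, output: 'json', logLevel: 'error' },
  {
    extends: 'lighthouse:default',
    settings: {
      // pin the simulated CPU slowdown so runs are comparable
      throttling: { cpuSlowdownMultiplier: 4 }
    }
  }
);

console.log('Performance score:', result.lhr.categories.performance.score);
await chrome.kill();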
P.S.
Server related
When the CDN requests a resource from your webserver, performance can take a further hit if the server suffers from cold starts.

Salesforce Low bandwidth tools

Does anyone know a good mechanism for measuring or reporting on page sizes?
I have a low-bandwidth (humanitarian client) use case and am trying to evaluate my pages, hi-res imagery, and other page-size issues across the org. As an example, even a standard Lightning page view seems to come in at around 700 KB, which seems high.
If there's something on the AppExchange that would be great, but otherwise any direction on reporting, API tools, or building this through other mechanisms would be really helpful.
I have searched the Salesforce AppExchange and the available metadata and other APIs, and so far haven't found anything. Event Monitoring has logs that help with general page-load performance, and I found an article about improving performance, but I haven't found a way to identify SIZE as would be needed for low-bandwidth scenarios.
Don't know where to start yet, unfortunately. This could be a programmatic solution, in which case I'd love some direction, but it could also be an existing tool I'm not aware of.
In Chrome dev tools (F12), the Network tab lets you simulate a low-bandwidth, high-latency connection in order to measure the download time of a web page or web application.
You can also see the size and download time of every resource downloaded, to identify the biggest images and the most time-consuming requests.
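If you want to automate that measurement instead of reading it off the Network tab, here is a hedged sketch using Puppeteer and the Chrome DevTools Protocol; the throttling numbers and the URL are placeholders, and response.buffer() reports decoded body size rather than the exact on-the-wire size:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Emulate a slow connection: ~400 Kbps down, 200 Kbps up, 400 ms latency.
  const client = await page.target().createCDPSession();
  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 400,
    downloadThroughput: 400 * 1024 / 8,  // bytes per second
    uploadThroughput: 200 * 1024 / 8
  });

  let totalBytes = 0;
  page.on('response', async (response) => {
    try {
      const body = await response.buffer();
      totalBytes += body.length;
      console.log(body.length + '\t' + response.url());
    } catch (e) {
      // redirects and some cached responses have no body
    }
  });

  await page.goto('https://example.my.salesforce.com', { waitUntil: 'networkidle0' });
  console.log('Total downloaded: ' + (totalBytes / 1024).toFixed(1) + ' KB');
  await browser.close();
})();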
In Salesforce, there's an administrative tool called Lightning Usage that can be activated. It generates various dashboards and performance stats per page. You can find some screenshots in this Salesforce article: https://developer.salesforce.com/blogs/2018/10/understanding-experienced-page-time.html. The EPT (Experienced Page Time) metric could meet your needs.

Loading Image from remote server

I'm loading images in a Codename One app from a remote server. How do I display new images when they are updated on the server?
// Delete the locally cached copy so the image is fetched again from the server
Storage.getInstance().deleteStorageFile("xxxx");
// Re-create the URLImage backed by storage; it shows the placeholder until the download from the remote link completes
Image icon = URLImage.createToStorage(placeholder, "xxxx", "remote_link");
I'm loading images in a Codename One app from a remote server. The images take a long time to load and, unfortunately, when the images are updated on the server, the application still displays the old images.
The old images will display as long as the local file exists and as long as you have the image object in RAM. Frankly, if you don't want caching, URLImage might not be the best approach. You can use something like downloadImageToStorage, which gives you a bit more flexibility, or even just a regular Rest request for byte[] data.
Regarding speed, I'd need more details to give an authoritative answer. If your images on the server are large, then you're effectively downloading a lot of redundant data and "paying" for it. URLImage hides this to some degree by scaling the image down and removing the overhead after the fact, but you'd still waste bandwidth. You can increase the number of network threads (usually defined in the init(Object) method), which might improve performance in some cases, but at some point you're limited by the bandwidth you have on the device.

How to reduce CPU usage of textdb?

Normally, my website is based on textdb and uses little CPU.
But I notice that when my website gets visitors or traffic, CPU usage increases very quickly, and this caused my hosting account to be temporarily suspended.
So, I would like to find a solution to optimize it.
Thanks
In most cases it is better to have actual .html or .php pages linked together rather than a blog served entirely from a database. You can use PHP includes and create individual pages to optimize your site.

Zend_Cache_Backend_Sqlite vs Zend_Cache_Backend_File

Currently I'm using Zend_Cache_Backend_File for caching my project (especially responses from external web services). I was wondering if I could find some benefit in migrating to Zend_Cache_Backend_Sqlite.
Possible advantages are:
The file system stays tidy (only one file in the cache folder).
Removing expired entries should be quicker (my assumption, since Zend wouldn't need to scan the internal metadata of each cache file for its expiry date).
Possible disadvantages:
Finding a record to read: with files, Zend checks whether the file exists based on the filename, which should be a bit quicker.
I've tried to search a bit on the internet, but it seems there isn't much discussion about the matter.
What do you think about it?
Thanks in advance.
I'd say, it depends on your application.
The switch shouldn't be hard. Just test both cases and see which is best for you. No benchmark is objective except your own.
Measuring just performance, Zend_Cache_Backend_Static is the fastest one.
One other disadvantage of Zend_Cache_Backend_File is that if you have a lot of cache files it could take your OS a long time to load a single one because it has to open and scan the entire cache directory each time. So say you have 10,000 cache files, try doing an ls shell command on the cache dir to see how long it takes to read in all the files and print the list. This same lag will translate to your app every time the cache needs to be accessed.
You can use the hashed_directory_level option to mitigate this issue a bit, but it only nests up to two directories deep, which may not be enough if you have a lot of cache files. I ran into this problem on a project, causing performance to actually degrade over time as the cache got bigger and bigger. We couldn't switch to Zend_Cache_Backend_Memcached because we needed tag functionality (not supported by Memcached). Switching to Zend_Cache_Backend_Sqlite is a good option to solve this performance degradation problem.
