ExtJS file size is too big, what should I do?

I am building an application in ExtJS, and the ext-all.js file is 700 KB, which our technical architect considers too big and unacceptable. So what should I do?
Should I remove ExtJS and build the UI in something else, or can I do something about the size?

You should calmly but firmly explain the following to your technical architect:
The browser will treat the JavaScript file as a static resource and cache it after the initial download, so each visitor will only download the file once (unless they clear their browser cache, which most people don't), even if it is included on every page of the website.
Any modern web server supports automatic gzip compression of text documents (which includes JavaScript files). Assuming this is enabled, the amount of data the client actually downloads is significantly less than 700 KB. You can see the actual transfer size by taking your 700 KB JavaScript file and compressing it with gzip (or any equivalent utility).
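If you want to check that number yourself, here is a minimal sketch using Node's built-in zlib module (the file path is just an example):

// check-gzip-size.js - prints the raw vs. gzipped size of a file
const fs = require('fs');
const zlib = require('zlib');

const file = 'ext-all.js'; // adjust to wherever your copy lives
const raw = fs.readFileSync(file);
const gzipped = zlib.gzipSync(raw);

console.log('raw size:     ' + (raw.length / 1024).toFixed(1) + ' KB');
console.log('gzipped size: ' + (gzipped.length / 1024).toFixed(1) + ' KB');

JavaScript usually compresses very well, so the gzipped figure should come out at a fraction of the raw 700 KB.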

Have you considered using the Ajax Minifier on the ext-all.js file? It should drastically reduce its size. Not to mention that browser caching should make it a one-time download (unless you update the underlying file).
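If you'd rather script the minification than run a separate tool, any JavaScript minifier will do the same job; here's a minimal sketch using the terser npm package (a stand-in I'm suggesting, not the Ajax Minifier itself):

// minify.js - minifies ext-all.js (npm install terser)
const fs = require('fs');
const { minify } = require('terser');

(async () => {
  const code = fs.readFileSync('ext-all.js', 'utf8');
  const result = await minify(code); // strips whitespace/comments, shortens locals
  fs.writeFileSync('ext-all.min.js', result.code);
})();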

Related

Initial page load performance for an angularjs app

I'm working on an AngularJS app that uses webpack for bundling the resources. Currently we create a single app.js file that contains the CSS as well. The size of app.js is around 6 MB. If we break app.js into multiple chunks, does that improve page performance? My colleagues are convinced that if we break the single JS file into 2 or 3, the page will load two or three times faster. Is that really true? I remember reading somewhere that having a single file is better than multiple, but I don't remember the reasons now. Do I really need to break up the app.js file for page performance, or what other options can I apply here?
A single file is better because it requires fewer connections (meaning less overhead), but this is really negligible when talking about fewer than 5 files. When you split the file you gain the ability to cache the parts separately, which is often a great win. Therefore I'd recommend splitting the files into logically cacheable sections (like vendor code and custom code).
Also note that if the client and server support HTTP/2, the fewer-connections argument goes away as well, since HTTP/2 supports connection re-use.
Note that there is no real difference for the initial load time, since in that case all files will need to be downloaded anyway.
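As a concrete illustration of that vendor/custom split, here is a minimal webpack sketch (assuming webpack 4+; the entry path and chunk names are illustrative):

// webpack.config.js - split rarely-changing vendor code into its own chunk
const path = require('path');

module.exports = {
  entry: './src/app.js',
  output: {
    filename: '[name].[contenthash].js', // hash changes only when content does
    path: path.resolve(__dirname, 'dist'),
  },
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/, // everything pulled from node_modules
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
};

Because the vendors chunk only changes when dependencies change, returning visitors re-download just the (usually much smaller) app chunk after a deploy.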
A single file will usually mean better performance. You should also ensure that this file is properly cached (on the browser side) and gzipped when served by your webserver.
I did a practical test in Chrome (Mac, 54.0.2840.98, 64-bit) to check whether there is really a performance gain in breaking a huge JS file into many. I created a 10 MB JS file and made three copies of it, then concatenated the three copies into a single 30 MB file. I measured the time it took to load the single file, referenced with a normal script tag at the bottom of the page, and it was around 1 minute. Then I referenced the three 10 MB script files one after the other, and it took about 20 seconds to load everything. So there really is a performance gain in breaking a huge JS file into several. But there is a limit to the number of files the browser can download in parallel.

Is there any way to improve a JavaScript built app's web page loading time?

I found that the first web page load for a CN1 JavaScript build takes too long, about 2 minutes.
I attached a screenshot of Chrome's network panel and found that classes.js is the heaviest resource. Is it possible to zip it?
Second, there are 2 theme files that are downloaded sequentially; is it possible to load them at the same time?
Kindly advise.
Normally I would answer that you should look at the performance section of the developer guide, but the relevant sections there relate to reducing the theme.res size, which seems pretty small in your case.
The largest portion of your code is the class files, so I'm guessing the best way to reduce them is to further cut dependencies so the obfuscator can remove more dead code. Keep in mind that the classes.js file is cached and can be deployed via CDNs such as Cloudflare to improve download speeds. It can be served in gzipped form as well, which is part of the CDN repertoire.
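For anyone self-hosting instead of using a CDN, serving the file gzipped and cacheable takes only a few lines in most stacks; here is a minimal Express sketch (assuming the compression middleware; the directory and max-age are illustrative):

// server.js - serve the build output with gzip and long-lived caching
// npm install express compression
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression()); // gzip-compresses text responses, including .js
app.use(express.static('build', { maxAge: '30d' })); // let browsers cache it
app.listen(8080);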

cakephp: is caching too many pages a problem?

I want to ask: if too many pages are cached, say 10,000 pages, that creates 10,000 cache files.
Is that OK? Can it make things slow?
I don't think this will slow down the application. Modern file systems support a large number of files in a directory. The problem is only if you want to manually list all those files.
A cache file is stored on the server as static HTML rather than the dynamically generated HTML code that is created with PHP.
Loading these cache files is significantly quicker than running the PHP code through the interpreter at runtime.
The only issue is perhaps disk space as the cache files are physical files on the server. Most cache filesizes should be relatively small if used correctly so this really shouldn't be an issue on a proper web server with sufficient resources.
Cache files are generally faster than running the PHP script, as they do not have to be processed; the overhead is just hitting the file and retrieving it.
The compromise you make with cache is whether or not your data changes often enough to warrant using file cache, and whether or not users need access to an always up to date file.
I wouldn't worry about it, and hey you can always turn the cache off - right?
Yes, but probably not significant
Full-page cache files are all stored in the same folder, so caching 10k pages means having 10k files in one folder. It will likely not be significant, but there will be some slowdown in application performance as the cache fills up.
Also note that there's a limit to how many files you can store in a folder, depending on the drive format, though generally speaking, by the time that limit is reached, performance is already significantly affected.
Don't use view caching if it's not necessary
Even full page caching has a cost. A normal php request is the following logic:
user -> internet -> webserver -> php -> (application logic)
Using full page view caching this doesn't change much:
user -> internet -> webserver -> php -> (read and render cache file)
If there is no dynamic content in the cache file it's a better idea to store the contents as a static file and move the response closer to the user:
user -> internet -> webserver -> static html file
Plugins like html cache permit this by storing cached views as HTML files, allowing the webserver to handle such requests without ever invoking PHP.
That also means, depending on the cache headers sent for html files, that subsequent requests come straight out of the user's browser cache - and you can't get faster than that:
user -> user's browser cache
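What those cache headers might look like, as a minimal Node sketch (the paths and max-age value are illustrative):

// static-html.js - serve cached-view HTML files straight from disk
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  const file = '.' + (req.url === '/' ? '/index.html' : req.url);
  fs.readFile(file, (err, body) => {
    if (err) { res.writeHead(404); return res.end('not found'); }
    res.writeHead(200, {
      'Content-Type': 'text/html',
      // let the browser reuse its copy for an hour without asking again
      'Cache-Control': 'public, max-age=3600',
    });
    res.end(body);
  });
}).listen(8080);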

Reduce XAP Size Setting - What's the Benefit?

Using Silverlight 3, I noticed that System.Xml.Linq.dll was added to my XAP file, increasing the size from 12 KB to 58 KB, so I checked the box 'Reduce XAP size by using application library caching'.
Publishing the app to IIS, then loading it with Web Dev Helper enabled, I see that when I open the app, the 12 KB XAP file is loaded, then System.Xml.Linq.zip is loaded at 46 KB, for a total of 58 KB. Whenever I refresh the main page of the app, the same files are loaded into the browser. If I uncheck the 'Reduce...' box and re-publish the app to IIS, one 58 KB XAP file is loaded whenever I load the application.
How is one method different from, or better than, the other? I could see the advantage if the DLL were somehow saved on the client computer, removing the need to download it each time the app is opened.
A browser caches by URL, so by splitting your application into a part which changes frequently and a part which will probably stay the same for a long time (the Linq part) and which might be shared between applications even, you save some download.
But it depends on the exact situation (frequency of change, location of 'generic' DLLs, etc.) whether it really helps.
The whole reason for keeping XAP size small is so that your application loads as quickly as possible. This is important: even on a faster connection, a bloated XAP can take extra seconds to load, which can be long enough for your users to leave your site.
While Linq only accounts for 46 KB here, there are other cases where this matters more. For instance, the SyndicationFeed class makes it really easy to handle RSS and ATOM feeds, but it weighs in at 114 KB.
Application library caching helps in two ways:
It allows common DLLs to be shared between applications, so if another application has already pulled down a system DLL, your app can just reference it.
It allows your application updates to be smaller, since the framework DLLs won't change between XAP versions.
The difference is that when the DLLs are outside the XAP file, even though the browser asks for those files, the webserver responds with a 304 Not Modified HTTP response.
So by default the browser will not download those files again. This obviously saves time, especially when the project references heavy libraries (e.g. the Telerik ones can be quite large).
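For the curious, the conditional-request exchange behind that 304 looks roughly like this minimal Node sketch (the file name is illustrative; IIS does the equivalent automatically):

// etag-304.js - sketch of the conditional-request dance behind 304 responses
const http = require('http');
const fs = require('fs');
const crypto = require('crypto');

http.createServer((req, res) => {
  const body = fs.readFileSync('System.Xml.Linq.zip'); // illustrative file
  const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304); // client's cached copy is still good: no body sent
    return res.end();
  }
  res.writeHead(200, { 'ETag': etag });
  res.end(body);
}).listen(8080);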
Hope this helps someone.

Elegant way to determine total size of website?

Is there an elegant way to determine the size of data downloaded from a website, bearing in mind that not all requests will go to the domain you originally visited, and that other browsers may be polling in the background at the same time? Ideally I'd like to look at the size of each individual page, or, for a Flash site, the total downloaded over time.
I'm looking for some kind of browser plug-in or Fiddler script. I'm not sure Fiddler would work due to the issues pointed out above.
I want to compare sites similar to mine for total filesize - and keep track of my own site also.
Firebug and HttpFox are two Firefox plugins that can be used to determine the size of data downloaded from a website for a single page. While Firebug is a great tool for any web developer, HttpFox is a more specialized plugin for analyzing HTTP requests / responses (with their relative sizes).
You can install both and try them out; just be sure to disable one while the other is enabled.
If you need a website-wide measurement:
If the website is made of plain HTML and assets (like CSS, images, Flash, ...), you can check how big the folder containing the website is on the server (this assumes you can log in to the server).
You can mirror the website locally using wget, curl or some GUI-based application like SiteSucker, and check how big the folder containing the mirror is.
If you know the website is huge but you don't know how much, you can estimate its size. E.g. www.mygallery.com has 1000 galleries; each gallery has an average of 20 images; every image is stored in 2 different sizes (thumbnail and full size) at an average of n KB per image; ...
Keep in mind that if you download or estimate a dynamic website, you are dealing with what the website produces, not with the real size of the website on the server. A small PHP script can produce tons of HTML.
Have you tried Firebug for Firefox?
The "Net" panel in Firebug will tell you the size and fetch time of each fetched file, along with the totals.
You can download the entire site and then you will know for sure!
https://www.httrack.com/
