Sencha too slow - extjs

I introduced a Sencha grid in one of my JSPs. Locally, Sencha is quite fast, but on an external server it is too slow.
I followed the deployment instructions here:
http://docs.sencha.com/ext-js/4-0/#!/guide/getting_started
using ext-debug.js and my app.js.
Then, in my JSP, I imported app-all.js (670 KB) and ext.js.
Where am I going wrong?
Thanks

app-all.js is 670 KB, which is a very big file. You should refactor, optimize, and minify the code to make it smaller. You could even separate it into multiple files per class or implementation and load them dynamically (but that would take more time). A good target is something close to the size of ext.js.
Also, if you have access to your web server (e.g. Apache or Tomcat), you could turn on gzip compression to compress files before sending them to browsers. Look out for other web-server optimizations as well.
(By the way, your question sounds more like a web-server issue than a Sencha-related issue.)
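As a sketch of the gzip suggestion, here is a minimal Apache configuration (assuming mod_deflate is enabled; adjust the MIME types to your assets):

```apache
<IfModule mod_deflate.c>
    # Compress text assets before sending them to the browser.
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```

On a 670 KB JavaScript bundle, gzip typically cuts the transfer size by well over half.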

Another way to improve the load time of your application is to make sure ext.js and app-all.js are cached by the browser. That way the first load of your application will be slow, but subsequent loads will be faster.
Look into Cache-Control, Expires, and the other HTTP cache-controlling headers (this appears to be a nice explanation). Your server should generate these headers when sending the files you want to be cached.
The real problem, judging from the timeline, is the slow connection to the server (10 seconds to load 206 of 665 KB is slow for most connections), so you should check whether other server problems are causing the slowness.
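Those cache-controlling headers can be set in an Apache configuration roughly like this (a sketch, assuming mod_expires and mod_headers are available; the one-year lifetime is just an example):

```apache
<FilesMatch "\.(js|css)$">
    # Tell browsers they may reuse these files for a year without re-fetching.
    ExpiresActive On
    ExpiresDefault "access plus 1 year"
    Header set Cache-Control "public, max-age=31536000"
</FilesMatch>
```

With lifetimes this long you must rename or version the files when they change, or browsers will keep serving the stale copy.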

Related

Cache busting a Reactjs web application

I'm developing an application in ReactJS to which I quite often push new changes.
When users load the application they do not always get the newest version, which causes breaking changes and errors against the Express backend I have.
From what I have researched, you can invalidate the cache using "cache busting" or a similar method, although the questions I have seen on Stack Overflow show no clear consensus on how to do it, and the latest update was sometime in 2017.
How would one invalidate the browser's cache in a modern ReactJS application, in an efficient and automatic way, when deploying?
If it's relevant, I'm using Docker and docker-compose to deploy my application.
There's no one-size-fits-all solution. A pretty common approach is adding a content-derived hash to the bundle file name, which causes the browser to fetch the file from the server again.
Something like app.js?v=435893452 instead of app.js. Most modern bundling tools, such as Webpack, can do all of this automatically, but it's hard to give you direction without knowing your setup.

PageSpeed Insights: poor both mobile and desktop

I am not a programmer or a developer, and although I followed the links Google showed to remove the problems it reported, I do not understand a thing. Could you therefore tell me, in comprehensible words, what I should do to apply the optimisations so that my pages are as fast as they should be?
UPDATE
Reduce server response time
Your web server is TOO SLOW!
Getting a better ISP/host is the easiest way to significantly speed up your page; it will speed things up about 10x. All those 600-900 ms response times in the waterfall should take one tenth of the time.
For example, your style.css takes only 7 ms to download, but the server waited 632 ms to send the first byte. That is almost a 100x unnecessary delay.
URL: https://moreyouthfulskin.com/wp-content/themes/wellness-pro/style.css?ver=1.0.0
Loaded By: https://moreyouthfulskin.com/en/home/:59
Host: moreyouthfulskin.com
Request Start: 5.951 s
Time to First Byte: 632 ms
Content Download: 7 ms
Bytes In (downloaded): 7.9 KB
Uncompressed Size: 36.9 KB
Bytes Out (uploaded): 0.2 KB
The data above is from this link: Web Page Performance Test for https://moreyouthfulskin.com/en/home/
Leverage browser caching
The 1-day cache is not a big deal; it is a small issue, more of a warning. It is a web-server setting and is not worth much effort while you have so many other serious issues.
Eliminate render-blocking JavaScript and CSS in above-the-fold content
Loading the CSS after the JS is a big problem. This is usually caused by WP plug-ins. Only use plug-ins that do not degrade performance. Check your page speed before and after installing any plug-in, then decide whether it is worth it.
Remove or at least speed up the redirects.
Your redirects are costing you because your web server is too slow; get a new ISP. The two redirects took over 5 seconds, which means there is an unnecessary 5-second delay before the page even begins to load the first byte of the home page.
Optimize images, Minify JavaScript, and Minify CSS
At the bottom of the Google PageSpeed results there is a link to the optimized images, minified JavaScript, and minified CSS. Download them, then upload them to the proper directory/folder on your server. This one is easy; if you cannot do this, then you are in way over your head.
END OF UPDATE
Your biggest issue is that your CSS and fonts are loaded after the JavaScript. This is costing you about 5 seconds of load time.
Also, your page load had two redirects that used 2 seconds.
Your server needs to be configured to use a longer max-age cache.
Currently your cache is 14400 seconds: cache-control: public, max-age=14400
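Raising that max-age could look like this in an Apache .htaccess (a sketch; the file extensions and the 30-day value are just examples, and mod_headers must be enabled):

```apache
<IfModule mod_headers.c>
    <FilesMatch "\.(css|js|png|jpg|woff2?)$">
        # 2592000 s = 30 days, up from the current 14400 s (4 hours).
        Header set Cache-Control "public, max-age=2592000"
    </FilesMatch>
</IfModule>
```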

Improve TinyMCE performance in Angular app

I've set up TinyMCE in my Angular application using the latest (4.x) version of TinyMCE and the latest version of angular-ui/ui-tinymce (https://github.com/angular-ui/ui-tinymce).
All of the code is minified.
In my application I have multiple instances of tinyMCE on a page (up to three) and the application uses the angular routing mechanism.
Everything is set up correctly, the editors work (and each of them has their own configuration).
The problem I'm facing now is performance. Whenever I load a new page, the TinyMCE instances recreate themselves even if they are already there (i.e. in the DOM). Creating a TinyMCE editor takes some time (up to 3 seconds); the amount of text in it doesn't seem to matter much.
I've tried using tinyMCE's gzip compressor but I couldn't get it to work.
What actions can I take to improve the performance in my application?
If at all relevant: I'm using a Java backend and AngularJS version 1.2.16.
How-to optimize initialization speed of TinyMCE
Here are some actions to take to boost initialization/loading time of TinyMCE.
Use and install the TinyMCE Compressor.
This bundles all of TinyMCE's JavaScript into one big HTTP request and reduces its size by about 75% with gzip compression.
Enable the button_tile_map option (it should be enabled by default).
This makes the icons load faster, since multiple image requests are replaced with a few tile-map requests.
Compress other scripts using the custom scripts option inside the compressor.
There might be other third party scripts on the same page. These can be added to the compressor as well.
Disable plugins that you don't need.
Remember to both remove them from the tinyMCE.init and the tinyMCE_GZ.init calls.
Unfortunately, there is currently no compressor for TinyMCE 4 with a Java backend.
And as you've already said, all of the code is minified.
So the only thing I can advise is to remove unused plugins and reduce the number of requests by concatenating multiple JS files into as few files as possible.
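The concatenation step itself is simple; here is a minimal sketch (the file names are hypothetical stand-ins for the real plugin scripts):

```shell
# Hypothetical plugin files standing in for TinyMCE plugin scripts.
printf 'var a = 1;\n' > plugin-a.js
printf 'var b = 2;\n' > plugin-b.js

# One concatenated bundle means one HTTP request instead of two.
cat plugin-a.js plugin-b.js > bundle.js
```

In a real build you would also minify the resulting bundle and reference only bundle.js from the page.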

Several host page compile full project

I have a GWT application with 2 host pages and 2 different EntryPoints (gwt.xml files) that share some code and, most importantly, the database (the datastore in Google App Engine).
The problem is that one of them makes use of several external Java libraries, including SmartGWT with its JS, while the second one uses none of them. When I deploy (and compile) to Google App Engine, I need to include a reference to SmartGWT in the second entry point, even though it is not used; if the inherits directive is missing, I get lots of compilation errors. I don't want to load the SmartGWT JS files (2 MB) when they are not necessary, growing the download from a few kilobytes to 100x that. This second host page is a Google Chrome extension, where a light payload is a strong requirement.
If I run it in the GAE SDK Eclipse local web server without any reference to SmartGWT in the second host page, it works. But if I compile the whole project (with the references to SmartGWT remaining in the first host page), I get errors related to the absence of SmartGWT.
Is it possible to compile the two host pages separately?
Your project appears to be burdened with a dependency of questionable value. If your project does not make heavy use of GWT, consider rewriting your web pages not to use GWT at all. In some cases a framework has more drawbacks than benefits; that does not mean the framework itself is bad, but it may be a big sledgehammer cracking a small nut. Your project has only two pages, so building it without GWT may well be feasible.

What is the best way to Optimise my Apache2/PHP5/MySQL Server for HTTP File Sharing?

I was wondering what optimisations I could make to my server to improve its performance at handling file uploads/downloads.
At the moment I am thinking Apache2 may not be the best HTTP server for this.
Any suggestions or optimisations I could make on my server?
My current setup is an Apache2 HTTP server with PHP handling the file uploads, which are stored in a folder outside the web root and randomly assigned a name that is stored in a MySQL database (along with more file/user information).
When a user wants to download a file, I use the header() function to force the download and readfile() to output the file contents.
You are correct that this is inefficient, but it's not Apache's fault: serving the files through PHP is your bottleneck. You should look into X-Sendfile, which allows you to tell Apache (via a header inserted by PHP) which file to send (even if it's outside the DocumentRoot).
The increase in speed will be more pronounced with larger files and heavier loads. Of course an even better way to increase speed is by using a CDN, but that's overkill for most of us.
Using X-Sendfile with Apache/PHP
http://www.jasny.net/articles/how-i-php-x-sendfile/
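The Apache side of X-Sendfile can be sketched like this (assuming the third-party mod_xsendfile module is installed; the upload path is hypothetical):

```apache
# Allow X-Sendfile headers and restrict them to the private upload folder.
XSendFile On
XSendFilePath /var/uploads
```

PHP then emits header('X-Sendfile: /var/uploads/<stored-name>') together with the usual Content-Disposition header, and Apache streams the file itself instead of readfile() pushing it through PHP.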
As for increasing upload performance, I have no particular knowledge. In general, however, I believe each file upload "blocks" one of your Apache workers for a long time, meaning Apache has to spawn more worker processes for other requests; with enough workers spawned, a server can slow noticeably. You might look into Nginx, which is an event-based rather than process-based server. This may increase your throughput, but I admit I have never experimented with uploads under Nginx.
Note: Nginx uses X-Accel-Redirect instead of X-Sendfile.
http://wiki.nginx.org/XSendfile
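An equivalent Nginx sketch using X-Accel-Redirect (the location and path are hypothetical):

```nginx
# Internal-only location: reachable via X-Accel-Redirect, never directly by clients.
location /protected/ {
    internal;
    alias /var/uploads/;
}
```

The backend then responds with an X-Accel-Redirect: /protected/<stored-name> header, and Nginx serves the file itself.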
