React performance during API communication: 1 call taking 2s vs 262 calls taking at most 0.3s each

I am developing a React application that communicates with a PHP JSON API, and I am comparing the performance of my application with the performance of the Amazon web page.
My application:
The browser, web server and PHP server (Yii2 framework) are all on the same machine.
The React application makes 1 API call, which takes 2s to download 20 KB of data (if the data is only a few bytes, then 'Waiting (TTFB)' is almost 2s and 'Content Download' is 0.0001s; if the data is about 20 KB, then 'Waiting (TTFB)' is about 0.4s and 'Content Download' is 1.4s).
The page load completes within 3s, which is a long time and a bad experience.
Amazon web page:
The server is remote, of course.
The page load makes around 262 calls, each of which takes no more than 0.3s, many of them less.
The page load completes within 1.5s; the experience is perfect.
How should I understand this difference in performance? Can I blame my PHP server for bad configuration or insufficient resources? My development machine has a 4 GHz CPU with 8 virtual cores and 16 GB RAM, and background services use very few resources. Even the simplest response from the PHP server, returning 2 bytes, takes almost 2 seconds to complete. Is this a configuration issue or a programming failure?

It could be a bad configuration, but I'd start by checking the debug toolbar that comes with Yii to see how much time is spent on each operation, so you first know whether anything is wrong with the code or the database queries. Performance is not only about machine resources; there are advanced optimization techniques, caching, load balancing, use of a CDN, and so on. But 3s is too long in any case and requires investigation to figure out where the time is spent. A minimal sketch of enabling the debug module and profiling a block of code is shown below.
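This is only a sketch of the usual setup, assuming the standard yii2-debug extension is installed; the config file, model class, and profiling token are placeholders, not your actual code:

    <?php
    // config/web.php (excerpt) - enable the debug toolbar in development so the
    // Profiling and Database panels show where the 2 seconds are actually spent.
    if (YII_ENV_DEV) {
        $config['bootstrap'][] = 'debug';
        $config['modules']['debug'] = [
            'class' => 'yii\debug\Module',
            'allowedIPs' => ['127.0.0.1', '::1'],
        ];
    }

    // Inside a controller action: wrap a suspect block with profiling markers.
    // The token and the query are placeholders for your own code.
    Yii::beginProfile('build-api-response');
    $rows = \app\models\Item::find()->limit(20)->asArray()->all();
    Yii::endProfile('build-api-response');

The timings then appear in the debug toolbar's Profiling and Database panels, which makes it easy to separate PHP time from query time.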

Related

High Response to Server

I have been searching for the reason why the Response to Server (TTFB) has some major delays on:
https://tools.pingdom.com/#!/cu9yoV/https://graphic-cv.com
without finding the real reason. It resides on a shared server which in fact is running with good hardware and the site is running PHP 7.1, APCu cache, Cloudflare, PrestaShop 1.6.1.18 platform configured with the best speed optimization setting in the backend
As seen on the metrix test the site requests are loading within seconds, but the first http/https request to the server can delay the site all from 3 seconds to 20 seconds. If I do a re-test it will go down to 2-5 seconds, but if I haven't accessed the site 30 min and up, issues will arise again with high load time.
How do I find the culprit which is delaying the TTFB? The hosting company with all their resources for testing/monitoring haven't provided me with a clear answer.
It was hardware related. After the hosting company upgraded their hardware, new CPUs, RAID-10, DDR4, Litespeed, now my site loads within 3 seconds.

Chrome update slowed Ajax/Angular Network rendering and loading

About two weeks ago, a Chrome update crippled users of my Angular app. I load a lot of data, but the entire single-page application used to load in under 4 seconds; after updating Chrome, every single user went to over 40 seconds. I did not experience the problem at first, but when I upgraded Chrome from 63.0.3239.132 to 64.0.3282.167, the problem began for me as well.
Somewhere between Chrome 63.0.3239.132 and 64.0.3282.167, there was a change that slowed my Angular app to a crawl. It affects loading and rendering across the board and has made the entire app almost unusable. I've been looking for the issue for a few days with no joy.
Does anyone have any insight or recommendation on what could cause such a performance degradation?
Here is a screenshot of my network tab. All of this used to be very fast before the Chrome update and now it just crawls.
If I set $httpProvider.useApplyAsync(true), it alleviates the problem, but my application is huge and this causes a lot of erratic behavior in a 5-year-old application.
I'm not sure if this is still an issue, but I know that Google has continued to ramp up security measures in Chrome. This is especially true with HTTPS, and I believe Google is pushing for everything to move to HTTPS. Certificates that are not clean (there are several criteria for this) present problems and may require extra processing. I believe there is an add-on (or built-in panel) for Chrome dev tools that can break out the TLS processing to show you more detail.
A high TTFB reveals one of two primary issues: either bad network conditions between the client and the server, or a slowly responding server application.
To address a high TTFB, first cut out as much network as possible. Ideally, host the application locally and see if there is still a big TTFB. If there is, then the application needs to be optimized for response speed. This could mean optimizing database queries, implementing a cache for certain portions of content, or modifying your web server configuration. There are many reasons a backend can be slow. You will need to do research into your software and figure out what is not meeting your performance budget.
If the TTFB is low locally, then the network between your client and the server is the problem. The network traversal could be hindered by any number of things. There are a lot of points between clients and servers, and each one has its own connection limitations and could cause a problem. The simplest way to test this is to put your application on another host and see if the TTFB improves.
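As a concrete way to compare the two cases, here is a rough sketch using PHP's cURL extension to break a single request into DNS, connect, TTFB, and total time; the URL is a placeholder, and you would run it once against localhost and once against the remote host:

    <?php
    // Hit the endpoint once and print where the time went. Run it against
    // http://localhost/... and against the public hostname, then compare.
    $ch = curl_init('https://example.com/api/endpoint'); // placeholder URL
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
    ]);
    curl_exec($ch);

    printf("DNS lookup:       %.3f s\n", curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME));
    printf("TCP connect:      %.3f s\n", curl_getinfo($ch, CURLINFO_CONNECT_TIME));
    printf("TLS/pre-transfer: %.3f s\n", curl_getinfo($ch, CURLINFO_PRETRANSFER_TIME));
    printf("TTFB:             %.3f s\n", curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME));
    printf("Total:            %.3f s\n", curl_getinfo($ch, CURLINFO_TOTAL_TIME));
    curl_close($ch);

If TTFB dominates even against localhost, the time is being spent in the application; if it only dominates over the network, look at the path between client and server.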

How to do a load test with nearly 10,000 real users?

I'm planning to do a load test of an e-commerce site for Valentine's Day. I want to do this test with real users (full browsers), and I prepared it with the JMeter WebDriver Sampler. I just found blazemeter.com, but when I upload my scripts to BlazeMeter I get errors; I'm waiting for a response from them. Do you know any platform, cloud server provider, or anything else I could use for this?
There are tons of tools; some of them are:
https://www.blitz.io/
http://loadstorm.com/
https://loader.io/
https://flood.io/
http://www.neotys.com/introduction/neoload-cloud-testing.html
http://www.soasta.com/products/cloudtest/
We work with BlazeMeter, which is easy to use and stable, and its staff are helpful if we get stuck at a dead end. It creates nice reports, too.
Just to add one more to the mix:
http://loadimpact.com
Also, in my 15 years of experience with performance and load testing, I have very rarely found a need to run tests with full clients.
Virtual users are much more efficient and scale well in tests, even if they may need some scripting, data management, and reverse engineering of how the service you are testing works on the client side.
Running full clients also consumes huge amounts of load-generator resources: roughly one client per core on your load generators. Testing an e-commerce site, I would estimate, requires volumes at least in the thousands.
1,000 test users would require 250 4-core load generators; it quickly becomes large and expensive. For 500 virtual users, by contrast, you can get by with as little as a single 2-core load generator.
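To make the sizing concrete, here is a tiny sketch under the assumption stated above (one full-browser client per load-generator core; the numbers are only rough rules of thumb):

    <?php
    // Rough load-generator sizing: one full-browser client per core.
    function generatorsNeeded(int $users, int $coresPerGenerator): int
    {
        return (int) ceil($users / $coresPerGenerator);
    }

    echo generatorsNeeded(1000, 4) . "\n";   // 250 generators for 1,000 full-browser users
    echo generatorsNeeded(10000, 4) . "\n";  // 2,500 generators for the 10,000 users in the question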

Fastest Open Source Content Management System for Cloud/Cluster deployment

Currently clouds are mushrooming like crazy and people are starting to deploy everything to the cloud, including CMS systems, but so far I have not seen anyone succeed in deploying a popular CMS system to a load-balanced cluster in the cloud. Some performance hurdles seem to prevent standard open-source CMS systems from being deployed to the cloud like this.
CLOUD: A cloud, or better, a load-balanced cluster, has at least one frontend server, one network-connected(!) database server and one cloud-storage server. This fits Amazon Elastic Beanstalk and Google App Engine well. (This specifically excludes a CMS on a single computer or a Linux server with MySQL on the same "CPU".)
Deploying a standard CMS in such a load-balanced cluster requires a cloud-ready CMS with the following characteristics:
The CMS must deal with the latency of queries to still be responsive and render pages in less than a second to be cached (or use a precaching strategy)
The filesystem probably must be connected to remote storage (Amazon S3, Google Cloud Storage, etc.)
Currently I know of Python/Django and WordPress having middleware modules or plugins that can connect to cloud storage instead of a filesystem, but there might be other cloud-ready CMS implementations (Java, PHP, ?) and systems.
I myself have failed to deploy django-CMS to the cloud, ultimately due to the query latency of the remote DB. So here is my question:
Did you deploy an open-source CMS that still performs well in rendering pages and backend admin? Please post your average page rendering access stats in microseconds for uncached pages.
IMPORTANT: Please describe your configuration, the problems you encountered, and which modules had to be optimized in the CMS to make it work. Don't just post "this works"; contribute your experience and knowledge.
Such a CMS probably has to make fewer than 10 queries per page (if more, the queries must be made in parallel), and it has to deal with filesystem access times of 100 ms for a stat and query delays of 40 ms.
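As a rough illustration of why those numbers force parallel queries and caching, here is a back-of-the-envelope calculation using the latencies above (the count of stat calls per page is an assumption):

    <?php
    // Back-of-the-envelope page latency budget, using the figures above:
    // 40 ms per DB roundtrip, 100 ms per filesystem stat.
    $queries     = 10;    // DB queries per page
    $dbRoundtrip = 0.040; // seconds per query roundtrip
    $stats       = 3;     // assumed number of stat calls per page
    $statLatency = 0.100; // seconds per stat

    $sequential = $queries * $dbRoundtrip + $stats * $statLatency; // everything serial
    $parallel   = max($dbRoundtrip, $statLatency);                 // ideal: all in flight at once
    printf("serial: %.2f s, fully parallel: %.2f s\n", $sequential, $parallel);
    // serial: 0.70 s, fully parallel: 0.10 s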
Related:
Slow MySQL Remote Connection
Have you tried Umbraco?
It relies on a database, but it keeps layers of cache so you aren't doing selects on every request.
http://umbraco.com/azure
It works great on Azure too!
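For illustration (this is not Umbraco's actual implementation), here is a minimal PHP sketch of that kind of cache layer, assuming the APCu extension; note that APCu is per-frontend, so in a load-balanced cluster you would use a shared cache such as Memcached or Redis for the same pattern:

    <?php
    // Cache-aside for a rendered page fragment: serve from memory when possible,
    // and only hit the remote database when the entry is missing or expired.
    function renderPageCached(string $cacheKey, callable $renderFromDb, int $ttl = 300): string
    {
        $html = apcu_fetch($cacheKey, $hit);
        if ($hit) {
            return $html;            // no database roundtrip at all
        }
        $html = $renderFromDb();     // the expensive part: queries against the remote DB
        apcu_store($cacheKey, $html, $ttl);
        return $html;
    }

    // Hypothetical usage:
    echo renderPageCached('page:/products/42', function (): string {
        // ...run the queries and render the template here...
        return '<html>...</html>';
    });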
I have found an excellent performance test of WordPress on App Engine. It appears that Google has spent some time optimizing this system for load-balanced cluster and remote-DB deployment:
http://www.syseleven.de/blog/4118/google-app-engine-php/
Scaling test from the report.
parallel hits    GAE     1&1     Sys11
1                1.5     2.6     8.5
10               9.8     8.5     69.4
100              14.9    -       146.1
The conclusion from the report: the system is slower than on traditional hosting, but it scales much better.
http://developers.google.com/appengine/articles/wordpress
We have managed to deploy the Python django-CMS (www.django-cms.org) on Google App Engine, with Cloud SQL as the DB and Cloud Storage as the filesystem. Cloud Storage was attached by forking and fixing a django.storage module by Christos Kopanos (http://github.com/locandy/django-google-cloud-storage).
After that, a second set of problems came up when we discovered access times of up to 17s for a single page access. We investigated this and found that easy-thumbnails 1.4 accessed the normal file system for mod_time requests while writing results to the store (rendering all thumbnail images on every request). We switched to the development version, where that was already fixed.
Then we worked with SmileyChris to fix the unnecessary access of mod_times (stat-ing the file) on every request for every image, by tracing the problem and posting issues to http://github.com/SmileyChris/easy-thumbnails
This reduced access times from 12-17s to 4-6s per public page on the CMS, basically eliminating all storage/"file"-system access. Once that was fixed, easy-thumbnails replaced (by design) file-system accesses with queries to the DB, checking on every request whether a thumbnail's source image has changed.
One thing for the web designer: using an image.width statement in the template forces an ugly, slow read on the "filesystem", because image widths are not cached.
Further investigation led to the conclusion that DB accesses are very costly too, taking about 40 ms per roundtrip.
So far the deployment is unsuccessful, mostly due to DB access times in the cloud, which lead to 4-5s delays when rendering a page before it is cached.

Is CakePHP on Amazon Web Services (Free Tier) a good fit?

I have a 2 GB database and a front end that will likely handle 10-15 hits during the day. Is the AWS free tier (with MySQL RDS) a good place to start?
Will CakePHP apps encounter timeouts or other resource issues due to the sizing of the micro instance?
Micro Instance (from Amazon): Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. They are well suited for lower throughput applications and web sites that require additional compute cycles periodically. You can learn more about how you can use Micro instances and appropriate applications in the Amazon EC2 documentation.
Micro Instance: 613 MiB of memory, up to 2 ECUs (for short periodic bursts), EBS storage only, 32-bit or 64-bit platform.
If you're only getting a very small number of hits, you can probably run your application and MySQL database on a micro instance.
The micro will be free, but you will have to pay for the RDS.
You should not notice any issues - we do most of our testing on micros, and our database is larger than yours.
It will work perfectly for your scenario.
I have myself deployed applications for other clients with at least twice your requirements, and they worked fine.
If your application saves and retrieves files from disk, I would suggest giving Amazon S3 a try.
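For example, here is a minimal sketch with the AWS SDK for PHP (v3); the bucket name, region, and file paths are placeholders:

    <?php
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    // Store uploads in S3 instead of the instance's local disk, so the app
    // keeps working if the micro instance is replaced or scaled out.
    $s3 = new S3Client([
        'version' => 'latest',
        'region'  => 'us-east-1',          // placeholder region
    ]);

    // Upload a file from local disk to the bucket.
    $s3->putObject([
        'Bucket'     => 'my-app-uploads',  // placeholder bucket
        'Key'        => 'avatars/user-42.png',
        'SourceFile' => '/tmp/user-42.png',
    ]);

    // Later, read it back (or serve it directly from S3/CloudFront instead).
    $result = $s3->getObject([
        'Bucket' => 'my-app-uploads',
        'Key'    => 'avatars/user-42.png',
    ]);
    file_put_contents('/tmp/user-42-copy.png', (string) $result['Body']);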
