High Response Time to Server - GTmetrix

I have been searching for the reason why the response time to the server (TTFB) has some major delays on:
https://tools.pingdom.com/#!/cu9yoV/https://graphic-cv.com
without finding the real reason. It resides on a shared server which in fact is running with good hardware and the site is running PHP 7.1, APCu cache, Cloudflare, PrestaShop 1.6.1.18 platform configured with the best speed optimization setting in the backend
As seen on the metrix test the site requests are loading within seconds, but the first http/https request to the server can delay the site all from 3 seconds to 20 seconds. If I do a re-test it will go down to 2-5 seconds, but if I haven't accessed the site 30 min and up, issues will arise again with high load time.
How do I find the culprit which is delaying the TTFB? The hosting company with all their resources for testing/monitoring haven't provided me with a clear answer.

It turned out to be hardware related. After the hosting company upgraded their hardware (new CPUs, RAID-10, DDR4, LiteSpeed), my site now loads within 3 seconds.
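For anyone chasing a similar pattern, one way to confirm that the delay sits in the TTFB and follows the idle-then-slow behaviour is to probe the site with curl and look at time_starttransfer. A minimal sketch (run it once after the site has been idle for a while; the first request is the cold hit, the second the warm retry):

# first request after the site has been idle, then an immediate retry
for run in cold warm; do
  curl -s -o /dev/null -w "$run: ttfb=%{time_starttransfer}s total=%{time_total}s\n" "https://graphic-cv.com/"
done

If the cold TTFB is high while the warm one is low, the time is being spent server-side (PHP bootstrap, cold caches) rather than on the network.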

Related

IIS response time high every 10-15 minutes for the same simple request

We have a performance issue with an AngularJS website hosted on IIS. This issue only affects our users connected via VPN (working from home).
The problem: regularly, a page that usually takes one or two seconds to load can take over 10 seconds.
This issue first appeared to be random, but we were able to reproduce it in a test environment and found out that the problem seems to arise on a very regular basis (every 10-15 minutes).
What we did: using a tool (ThousandEyes), we send the same simple GET request to the test server every minute from 12 clients. We can see in the IIS logs that this request is processed in less than 50 ms most of the time. However, every 15 minutes or so, the same request takes more than 5 seconds to process for at least one client. Example below: the calls made every minute by client #1 take more than 5 seconds at 21:12, 21:13, 21:14, then 21:28, 21:29, then 21:45:
The graph below shows the mean response times for the 12 clients (peak every 10-15 minutes):
For both the test and the production environments, this issue only affects users connected via VPN (but not all the users connected via VPN are affected at the same time).
Any idea what can cause this behavior?
All suggestions and questions are welcome.
Notes:
Session state: InProcess. I tried Not Enabled and State Server, but we still get the same results.
Maximum worker processes: 1. I tried 2; no change.
Test server usage: as far as I can tell, nothing special happens every 15 minutes on the server (no special events).
Test server configuration: 2 Xeon processors @ 2.6 GHz, 8 GB RAM, 20 GB disk space, Windows Server 2016.
Test server load: almost nothing besides these 12 requests per minute from the 12 test clients.
This issue cost us a lot of time. We finally found out that a VPN server was misconfigured.
Rebuilding this server was the solution.
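If you need to reproduce this kind of periodic spike without a commercial probe, a minimal stand-in for the per-minute ThousandEyes check could look like the sketch below, run from one of the affected VPN clients (the URL is a placeholder for whatever simple GET endpoint you are testing):

# log a timestamp plus TTFB and total time once per minute (placeholder URL)
while true; do
  printf '%s ' "$(date '+%H:%M:%S')"
  curl -s -o /dev/null -w "ttfb=%{time_starttransfer}s total=%{time_total}s\n" "http://test-server.example/ping"
  sleep 60
done

Plotting the logged values makes the 10-15 minute pattern much easier to spot and to correlate with events on the VPN side.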

Detecting long latency in .NET Core PaaS for GCloud

I am currently experiencing really long latency issues in my .NET Core 2.2 applications.
The setup consists of .NET APIs running on App Engine (2 GB memory, 1 CPU, 2 resting instances) which talk to Spanner tables with indexes. Whenever our system comes under load we tend to get a spike where our instance count jumps and the latency rises considerably.
On average our API request time is 30 ms, but this then jumps to 208 s even on instances that do not change. The Spanner requests are quite short, averaging around 0.072502. The trace just shows a blue bar spanning the whole of the request time. I have checked for row locks, but these are simply GET requests and show nothing.
Is there anything else I can look at?
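One thing that can help narrow it down is correlating the latency spikes with App Engine scaling, since new instances spinning up under load (cold starts) would produce exactly this pattern while the Spanner timings stay flat. A rough sketch, assuming a default service and a placeholder health URL:

# every 10 seconds, record the instance count and one timed request (placeholder service/URL)
while true; do
  count=$(gcloud app instances list --service=default --format="value(id)" | wc -l)
  latency=$(curl -s -o /dev/null -w "%{time_total}" "https://your-project.appspot.com/api/health")
  echo "$(date '+%H:%M:%S') instances=$count latency=${latency}s"
  sleep 10
done

If the very long requests always line up with a jump in the instance count, the fix is more likely on the scaling/warm-up side than in Spanner.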

React performance during communication with API - 1 call-2s vs 262 calls-max 0.3s

I am developing a React application that communicates with a PHP JSON API, and I am comparing the performance of my application with the performance of the Amazon web page.
My application:
The browser, web server, and PHP server (Yii2 framework) are on the same machine.
The React application makes 1 API call, which takes 2 s to download 20 KB of data (if the data is only a few bytes, then 'Waiting (TTFB)' is almost 2 s and 'Content Download' is 0.0001 s; if the data is about 20 KB, then 'Waiting (TTFB)' is about 0.4 s and 'Content Download' is 1.4 s).
Page load completes within 3 s, which is a long time, and the experience is bad.
Amazon web page:
Server is remote, of course.
Page load makes around 262 calls, each of them lasting no more than 0.3 s, many of them much less.
Page load completes within 1.5 s, and the experience is perfect.
How should I understand this difference in performance? Can I blame my PHP server for bad configuration or insufficient resources? My development machine has a 4 GHz CPU with 8 virtual cores and 16 GB RAM, and background services use very few resources. Even the simplest 2-byte response from the PHP server takes almost 2 seconds to complete. Is this a configuration issue or a programming failure?
It could be bad configuration, but I'd start by checking the debug toolbar that comes with Yii and see how much time is spent on each operation, so you know first whether anything is wrong with the code or the database queries. Performance is not only related to machine resources; there are advanced optimization techniques, caching, load balancing, use of a CDN, and so on. But 3 s is too long in any case and requires investigation to figure out where it is being spent.
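A quick way to separate framework bootstrap time from transfer time is to time a bare PHP script against the full Yii endpoint with curl. A sketch, where the docroot path and the API route are placeholders for your setup:

# a bare script that bypasses the framework entirely (placeholder path)
echo '<?php echo "ok";' > /var/www/html/bare.php

# compare TTFB and total time for the bare script vs the Yii API route (placeholder URLs)
curl -s -o /dev/null -w "bare: ttfb=%{time_starttransfer}s total=%{time_total}s\n" "http://localhost/bare.php"
curl -s -o /dev/null -w "yii:  ttfb=%{time_starttransfer}s total=%{time_total}s\n" "http://localhost/index.php?r=api/items"

If the bare script also takes close to 2 s, the problem is in the web server or PHP configuration; if it returns in a few milliseconds, the time is being spent inside the framework or the database queries.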

Chrome update slowed Ajax/Angular Network rendering and loading

About two weeks ago, a Chrome update crippled users of my Angular app. I load a lot of data, but the entire single-page application used to load in under 4 seconds; every single user went to over 40 seconds after updating Chrome two weeks ago. I did not experience the problem at first, but when I upgraded Chrome from 63.0.3239.132 to 64.0.3282.167, the problem began for me as well.
Somewhere between Chrome 63.0.3239.132 and 64.0.3282.167, there was a change that basically slowed my Angular app to a crawl. It affects loading and rendering across the board and has made the entire app almost unusable. I've been looking into the issue for a few days with no joy.
Does anyone have any insight or recommendation on what could cause such a performance degradation?
Here is a screenshot of my network tab. All of this used to be very fast before the Chrome update and now it just crawls.
If I set $httpProvider.useApplyAsync(true), it alleviates the problem, but my application is huge and this causes a lot of erratic behavior in a 5-year-old application.
I'm not sure if this is still an issue, but I know that Google has continued to ramp up security measures in Chrome. This is especially true for HTTPS, and I believe Google is pushing for everything to move to HTTPS. Certificates that are not clean (there are several criteria for this) present problems and may require extra processing. I believe there is an add-on for (or a built-in feature of) Chrome DevTools that can break out the TLS processing to show you more detail.
A high TTFB reveals one of two primary issues: either bad network conditions between client and server, or a slowly responding server application.
To address a high TTFB, first cut out as much network as possible. Ideally, host the application locally and see if there is still a big TTFB. If there is, then the application needs to be optimized for response speed. This could mean optimizing database queries, implementing a cache for certain portions of content, or modifying your web server configuration. There are many reasons a backend can be slow. You will need to do research into your software and figure out what is not meeting your performance budget.
If the TTFB is low locally then the networks between your client and the server are the problem. The network traversal could be hindered by any number of things. There are a lot of points between clients and servers and each one has its own connection limitations and could cause a problem. The simplest method to test reducing this is to put your application on another host and see if the TTFB improves.
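One concrete way to apply this is to ask curl for a per-phase breakdown of the same request against a local copy and against the remote host. If time_starttransfer is high in both cases, the backend is slow; if it is only high remotely, the network or TLS path is the problem. A sketch with placeholder URLs:

# phase breakdown: DNS, TCP connect, TLS handshake, TTFB, total (placeholder URLs)
for url in "http://localhost/app/" "https://app.example.com/"; do
  echo "$url"
  curl -s -o /dev/null -w "  dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" "$url"
done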

Database Network Latency

I am currently working on an n-tier system and battling some database performance issues.
One area we have been investigating is the latency between the database server and the application server. In our test environment the average ping time between the two boxes is in the region of 0.2 ms; however, on the client's site it is more in the region of 8.2 ms. Is that something we should be worried about?
For your average system, what do you consider a reasonable latency, and how would you go about testing/measuring it?
Yes, network latency (measured by ping) can make a huge difference.
If your database responds in 0.001 ms, then you will see a huge impact from going from a 0.2 ms to an 8 ms ping. I've heard that database protocols are chatty, which, if true, means they are affected more by slow network latency than HTTP is.
More than likely, if you are running 1 query, then adding 8 ms to get the reply from the DB is not going to matter. But if you are doing 10,000 queries, which generally happens with bad code or non-optimized use of an ORM, then you will have to wait an extra 80 seconds with an 8 ms ping, whereas with a 0.2 ms ping you would only wait 2 seconds.
As a matter of policy for myself, I never let client applications contact the database directly. I require that client applications always go through an application server (e.g. a REST web service). That way, if I accidentally have a "1+N" ORM issue, it is not nearly as impactful. I would still try to fix the underlying problem...
In short: no!
What you should monitor is the global performance of your queries (i.e., transport to the DB + execution + transport back to your server).
What you could do is use a performance counter to monitor the time your queries usually take to execute.
You'll probably see that your results are in the millisecond range.
There's no such thing as a "reasonable latency". You should instead consider the reasonable latency for your project, which varies a lot depending on what you're working on.
People don't have the same expectations for a real-time trading platform and for a read-only amateur website.
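One simple way to capture that global figure (transport + execution + transport back) is to time a trivial query from the application server rather than pinging. The sketch below assumes a PostgreSQL database reachable with psql; adapt the client, host, and credentials to your stack:

# time a trivial round trip from the app server to the DB (hypothetical host/user/database)
time psql -h db-host -U app_user -d app_db -c "SELECT 1;" > /dev/null

Since SELECT 1 costs the database essentially nothing, most of what you measure is connection setup plus network round trips.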
On a Linux-based server you can test the effect of latency yourself by using the tc command.
For example, this command will add a 10 ms delay to all packets going out via eth0:
tc qdisc add dev eth0 root netem delay 10ms
Use this command to remove the delay:
tc qdisc del dev eth0 root
More details available here:
http://devresources.linux-foundation.org/shemminger/netem/example.html
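After adding the qdisc, it is worth verifying that the injected delay is actually in effect before re-running your application tests; a quick check (db-host is a placeholder for the peer you are measuring against):

# round-trip times should now be roughly 10 ms higher than the baseline (placeholder host)
ping -c 5 db-host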
All applications will differ, but I have definitely seen situations where 10ms latency has had a significant impact on the performance of the system.
One of the head honchos at answers.com said that, according to their studies, a 400 ms wait time for a web page load is about the point where people first start canceling the page load and going elsewhere. My advice is to look at the whole process, from the original client request to fulfillment; if you're doing well there, there's no need to optimize further. 8.2 ms vs 0.2 ms is a forty-fold increase in a mathematical sense, but in a human sense, no one can really perceive an 8.0 ms difference. It's why they have photo finishes in races ;)
