Is there any way to compare Gatling report stats with a previous execution and detect any significant degradation? - gatling

I have been using Gatling and have covered a lot of simulations for our APIs, but I cannot figure out whether a given API is degrading. The only way to tell is to compare against previous runs. What options does Gatling offer for comparing a report with a previous run?
For example -
A given simulation is taking 500 seconds
Previously it was taking 400 seconds
I should know about this degradation up front
Please share if anyone has an idea about this.
Thanks in advance!
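Gatling itself only enforces absolute thresholds via its assertions API (e.g. failing the run when a response-time percentile exceeds a limit); for run-to-run comparison, a common approach is to parse the JSON stats Gatling writes alongside the HTML report and diff them in CI. A minimal Python sketch, where the `js/global_stats.json` path and field names such as `meanResponseTime` are assumptions to check against your Gatling version's report layout:

```python
import json

def load_global_stats(report_dir):
    """Load Gatling's global stats JSON (path and format assumed; check your version)."""
    with open(f"{report_dir}/js/global_stats.json") as f:
        return json.load(f)

def find_regressions(prev, curr, threshold_pct=10.0):
    """Flag metrics that got worse by more than threshold_pct percent."""
    regressions = []
    for metric in ("meanResponseTime", "percentiles95"):  # hypothetical field names
        old = prev.get(metric, {}).get("total")
        new = curr.get(metric, {}).get("total")
        if old and new:
            change = (new - old) / old * 100.0
            if change > threshold_pct:
                regressions.append((metric, old, new, round(change, 1)))
    return regressions

# The example from the question: 400 -> 500 is a 25% regression.
prev = {"meanResponseTime": {"total": 400}}
curr = {"meanResponseTime": {"total": 500}}
print(find_regressions(prev, curr))  # [('meanResponseTime', 400, 500, 25.0)]
```

A script like this can run after each simulation and fail the build when the list is non-empty, which gives you the "know up front" behaviour the question asks for.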

Related

Google App Engine Instance Hours for every 10min job

I have a script in App Engine that gets called every 10min. I am the only user.
The script pulls data from a web source, does light processing, and returns an image. It takes several minutes to run the first time. The source gets updated every 10min, so the next time my script runs (10min later), it returns in a few seconds.
I'm using over 30 instance hours a day which is over the 28 free hours.
I read somewhere that every time an instance starts, it uses a minimum of 15min. (so 144x15=36hrs)
Therefore, am I better off trying to keep the instance running 24hrs (using up 24hrs) and limiting to one instance max? Perhaps setting idle_timeout to 10min. Another potential way to save would be to somehow pause my script during late night/early morning hours.
The 15-minute minimum you read about refers to the point at which the accrual of instance hours ends; if you look at that documentation you will see that it depends on what type of scaling you are using.
Here are my thoughts on each option you mentioned for staying in the free tier:
Use only one instance. The problem with this approach is that giving a higher load to a single instance may increase other costs such as vCPU and memory, since all the processing will be done there; it is also likely the script will take longer to run.
Pause the script during low-activity hours. This would be ideal if the accumulated load when turning it back on is not too big.
Set idle_timeout to 10min. This won't work if every instance receives a request every 10 minutes: App Engine only stops charging after 15 minutes of idle time, which an instance that is hit every 10 minutes never reaches. If not every instance runs that often, it could be worth trying.
Summing it up, all three options have pros and cons; I would suggest testing each of them and seeing which suits your needs best.
Hope this helped.
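The arithmetic behind the question's "144x15=36hrs" can be sketched as a quick back-of-the-envelope check (the 15-minute minimum billing increment is taken from the question itself; verify it against current App Engine pricing for your scaling type):

```python
# Back-of-the-envelope instance-hour estimate from the question's numbers.
# Assumption (from the question): every instance start bills at least 15 minutes.
starts_per_day = 24 * 60 // 10           # a request every 10 minutes -> 144 starts
billed_hours = starts_per_day * 15 / 60  # 15-minute minimum per start
print(billed_hours)                      # 36.0 hours/day, over the 28 free hours

resident_hours = 24                      # one instance kept alive all day
print(resident_hours)                    # 24 hours/day, inside the free tier
```

This is why keeping one resident instance can come out cheaper than repeated cold starts, as long as the single instance does not incur other costs.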

too slow TTFB(latency) with go language in appengine

I'm testing Go on App Engine, but the response is too slow. I checked with Chrome and found that the problem is 'Waiting (TTFB)'.
The source code is very simple and official example(https://github.com/GoogleCloudPlatform/appengine-try-go).
What's wrong? Is this normal?
Local test performance has nothing to do with production performance. There is nothing wrong with what you see.
Usually first requests are slower than subsequent ones as the AppEngine SDK performs file system scans, compiling and first-time loading and execution of package init() functions of your application's code.
What you see is a 1-second Waiting (TTFB) time, it stands for Time To First Byte (source):
Time spent waiting for the initial response, also known as the Time To First Byte. This time captures the latency of a round trip to the server in addition to the time spent waiting for the server to deliver the response.
This 1-second TTFB most likely includes all the SDK tasks I listed above, which isn't so bad if you think about it.
Don't worry: the production environment runs "pre-compiled" native binary code, so none of these steps has to be performed, and you will most likely see a response time (TTFB) of around 20-30 ms.

How to build a weekly programmable web-based timer?

I am trying to figure out the best possible way to create a website that functions as a weekly programmable timer; after searching many hours I keep coming up empty-handed.
I tried to start development on a web page. This is where I wound up:
http://www.gstrip.tk
I have a complete lack of knowledge on how to progress from this point.
How do I store the timers?
How do I make the server search for timers?
And how do I execute a function, such as a simple HTTP request, when the stored time for that day matches the current time?
This has been a huge stumbling block for me. I assumed there would be an open-source, web-based weekly programmable timer somewhere that I could modify, but none appears to be available.
If this has already been answered, could you provide a link? I have searched fairly regularly for the last couple of weeks with no results, using different terms on both Google and Stack Overflow. Maybe my search syntax is somehow wrong, but I could use someone who is understanding and willing to level with me on this issue.
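One common pattern for the "store timers, search them, fire an action" questions above: keep each timer as a (weekday, time, URL) row, and have a server-side job check the current time against the rows once a minute. A minimal Python sketch (the schedule format, URLs, and once-a-minute polling are all assumptions, not a prescribed design):

```python
import datetime

# Hypothetical schedule: weekday (0 = Monday), "HH:MM", action URL.
SCHEDULE = [
    (0, "07:30", "http://example.com/lights/on"),
    (4, "22:00", "http://example.com/lights/off"),
]

def due_actions(schedule, now):
    """Return the URLs whose stored weekday and time match `now`."""
    stamp = now.strftime("%H:%M")
    return [url for day, hhmm, url in schedule
            if day == now.weekday() and hhmm == stamp]

# A real server would run this once a minute (cron, a scheduler library, etc.)
# and issue an HTTP request for each due URL.
monday_morning = datetime.datetime(2024, 1, 1, 7, 30)  # 2024-01-01 is a Monday
print(due_actions(SCHEDULE, monday_morning))  # ['http://example.com/lights/on']
```

In production the schedule would live in a database rather than a list, and the per-minute check would be driven by cron or the hosting platform's scheduler rather than a loop in the web process.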

How to decrease the response time when dealing with SQL Server remotely?

I have created a vb.net application that uses a SQL Server database at a remote location over the internet.
There are 10 vb.net clients that are working on the same time.
The problem is the delay that happens when inserting a new row or retrieving rows from the database: the form appears to freeze while it talks to the database. I don't want to use a background worker to overcome the freeze problem.
I want to eliminate that delay, or at least reduce it as much as possible.
Any tips, advice, or information are welcome. Thanks in advance!
Well, 2 problems:
The form appears to be freezing for a while when it deals with the database, I don't want to use a background worker
to overcome the freeze problem.
Vanity, arrogance and reality rarely mix. ANY operation that takes more than a SHORT time (0.1-0.5 seconds) SHOULD run async; that is the only way to keep the UI responsive. Regardless of what the issue is, if the operation CAN take longer or runs over the internet, decouple them.
But:
The problem is in the delay time that happens when inserting a new row or retrieving rows from the database,
So, what IS the problem? Seriously. Is it a latency problem (too many round trips; work on more efficient SQL, batch your commands, don't send 20 queries and wait for a result after each) or is the server overloaded? It is not clear from the question whether this really is a latency issue.
At the end:
I want to eliminate that delay time
Pray to whatever god you believe in to change the rules of physics (mostly the speed of light), or to your local physicist to finally make quantum teleportation work at low cost. Packets take time to travel; there is no way to change that.
Check whether you use too many round trips. NEVER (!) use SQL Server remotely with raw SQL - put in a web service and make it fit the application, possibly even down to a 1:1 match with your screens, so you can ask for data and send updates in ONE round trip, not a dozen. When we did something similar 12 years ago with our custom ORM in .NET, we used a data access layer that accepted multiple queries in one run and returned multiple result sets for them - so a form with 10 drop-downs could ask for all 10 data sets in ONE round trip. If a request takes 0.1 seconds of internet time, this saves 0.9 seconds. We had a form with about 100 (!) round trips (creating a tree) and got that down to fewer than 5 - the difference between "takes time" and "wow". Plus it WAS async, sorry.
Then realize that moving a lot of data is SLOW unless you have an instant high-bandwidth connection.
This is exactly what async was made for. If you have transfer-time or latency issues that cannot be optimized away and you still refuse to use async, you will go on delivering a crappy experience.
You can execute the SQL call asynchronously and let Microsoft deal with the background process.
http://msdn.microsoft.com/en-us/library/7szdt0kc.aspx
Please note, this does not decrease the response time from the SQL Server; for that you'll have to improve your network speed or the performance of your SQL statements.
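The idea of moving the blocking call off the UI thread is the same in any language. A minimal, illustrative Python sketch (sqlite3 stands in for the remote SQL Server, and the thread pool plays the role of the async SQL call the answer links to):

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

def slow_query():
    """Stand-in for a remote database call: opens a DB and fetches rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(3)])
    return [row[0] for row in conn.execute("SELECT id FROM t ORDER BY id")]

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_query)   # the UI thread is free while this runs
    # ... the form keeps repainting and handling input here ...
    rows = future.result()             # collect the result when it is ready
print(rows)                            # [0, 1, 2]
```

The network round trip still takes just as long; what changes is that the form stays responsive while waiting, which is the part of the complaint that actually can be fixed.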
There are a few things you could potentially do to speed things up, however it is difficult to say without seeing the code.
If you are using generic inserts - start using stored procedures
If you are closing the connection after every command then... well, don't. Establishing a connection is typically one of the more 'expensive' operations.
Increase the pipe between the two.
Add an index
Investigate your SQL Server; perhaps it is not set up in a preferred manner.
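The "keep the connection open and batch your commands" advice from the tips above can be illustrated with a toy example (sqlite3 stands in for SQL Server here, and the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # open ONE connection and keep it
conn.execute("CREATE TABLE orders (id INTEGER, qty INTEGER)")

rows = [(i, i * 2) for i in range(100)]

# One batched round trip instead of 100 individual INSERT commands:
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 100
```

Over a remote link the difference is dramatic: each separate command pays the full network latency, so collapsing 100 commands into one batch removes 99 round trips.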

Determine time spent waiting for stats to be updated - SQL05/08

Does anyone know how to specifically identify the portion of the overall compilation time that any queries spent waiting on statistics (after stats are deemed stale) to be updated in SQL 2005/2008? (I do not desire to turn on the async thread to update stats in the background just in case that point of conversation comes up). Thanks!
Quantum,
I doubt that level of detail and granularity is exposed in SQL Server. What is the real question here? Are you trying to compare how long queries take to recompile once the stats are deemed stale against the normal compilation time? Is this a one-off request, or are you planning to put something in production and measure the difference over a period of time?
If it is the former, you can get that info by measuring the time taken individually (SET STATISTICS TIME ON) and combining the results. If it is the latter, I am NOT sure there is anything currently available in SQL Server.
PS: I haven't checked Extended Events (in DENALI) in detail for this activity but there could be something there for you. You may want to check that out if you are really interested.
