Stresstest server: starting requests at given subsecond timestamps - c

I have to stress-test a server with a list of 100k POST requests per day. Each request must start at a given fixed timestamp. The smallest time difference between two requests is around 100 ms.
Example:
12:08:38.971 url1
12:08:39.429 url2
12:08:40.186 url3
12:08:40.444 url4
...
I'll use Perl or C to implement the stress test. Requests are executed with curl, and threading is used because responses typically take longer than 100 ms.
My problem is how to fill some kind of queue that starts the requests at the fixed times given in the list.
My first idea is to use a while loop (example for a single request) and permanently check the timestamps:
#include <stdio.h>
#include <sys/time.h>

struct timeval now;
time_t t_sec = 1496064929;
long t_usec = 100;
int request_started = 0;

int main(void){
    while(1){
        gettimeofday(&now, NULL);
        if(now.tv_sec == t_sec && now.tv_usec > t_usec && request_started == 0){
            printf("Request start at %ld %ld\n", (long)now.tv_sec, (long)now.tv_usec);
            request_started = 1;
        }
    }
    return 0;
}
But this code is far from ideal...
Is there some kind of scheduler library that can handle requests with sub-second precision? Or maybe there are ready-to-use benchmark tools that can produce such requests from the given list (I already checked Apache Bench, httperf and siege without luck)?
Thanks in advance!

You can code your stress client by adding an appropriate delay before sending each request.
Please note that the timings of such a stress test are subject to network delay, available system resources (CPU, memory), and the scheduling priority of the client and server programs while they are running.
You have 100k requests to send over 24 hours (assuming a day means 24 hours), so you basically need to keep a delay of about 864 ms between two consecutive requests (86,400,000 ms / 100,000 requests).
You need to measure the time taken by each request in the loop by subtracting the timestamp taken before sending the request from the timestamp taken after receiving the response.
Let's say it took x ms for a request to get processed: if x is less than 864 ms, sleep for (864 - x) ms before sending the next request; otherwise just send the next request without any delay.
When you run it with multiple processes or threads, you need to factor that into your calculation, as each thread/process is supposed to send 100k/n requests in 24 hours, where n is the number of threads or processes.
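Below is a minimal sketch in C of that pacing loop, assuming the ~864 ms spacing worked out above and a placeholder send_request() standing in for the real curl/libcurl call (error handling omitted):

/* Pacing sketch: measure how long each request took and sleep only for
 * the remainder of the 864 ms slot. send_request() is a placeholder. */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

#define DELAY_USEC 864000L          /* 86,400 s / 100,000 requests = 864 ms */

static long elapsed_usec(struct timeval start, struct timeval end)
{
    return (end.tv_sec - start.tv_sec) * 1000000L + (end.tv_usec - start.tv_usec);
}

static void send_request(int i)     /* placeholder for the real POST */
{
    printf("request %d sent\n", i);
}

int main(void)
{
    for (int i = 0; i < 100000; i++) {
        struct timeval before, after;
        gettimeofday(&before, NULL);
        send_request(i);
        gettimeofday(&after, NULL);

        long spent = elapsed_usec(before, after);
        if (spent < DELAY_USEC)
            usleep(DELAY_USEC - spent);   /* sleep only for the remaining time */
    }
    return 0;
}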

Related

Based on Gatling report, how to make sure 100 requests are processed in less than 1 second

How can I check my requirement that 100 requests are processed in less than 1 second in my Gatling 3 report? I ran this using Jenkins.
My simulation looks like this:
rampConcurrentUsers(1) to (100) during (161 second),
constantConcurrentUsers(100) during (1 minute)
Below is my response time percentile graph of two executions for an interval of one second.
[response time percentile graphs]
What do the min and max here tell us? I am assuming the percentages 25%-99% refer to the completion of the requests.
Those graph sections are not what you're after - they show the distribution of response times and the number of active users.
So min is the fastest response time
max is the longest
95% is the response time that 95% of your requests came in under
and so on...
So what you could do is look at the section of your graph corresponding to the 100 constant concurrent users injection stage. In this part you would require that the max response time always be under 1 second
(Note: there's something odd with your 2nd report - I assume it didn't come from running the stated injection profile as it has more than 100 concurrent users active)

Client-server restaurant simulation

I am making a client-server application with select() in C that simulates a fast-food restaurant.
I have clients that order a random "food" between 1 and 5. Every 30 seconds the server decides which food is the most ordered across all the clients. It serves those clients and they close their connections; to the rest it sends a message telling them to wait. Clients that are not served try 2 more times, otherwise they leave.
My question is: how can I make the server check every 30 seconds what the orders are?
I tried making an array, but I can't figure out how to make the server check continuously every 30 seconds and then reset the array to 0.
Here is the pseudocode:
**Client**
patience = 0; served = 0;
do {
    send random 1-5
    receive message: if 1 -> served = 1; if 0 -> patience++
} while patience != 3 and served != 1
if served == 1 send -1
else send -2
close connection
**Server**
while(1)
{
    serve clients in a concurrent manner
    select
    add client to socket list
    serve client
    wait for message
    if 1-5, add to vector
    // here I don't know how to make it wait for 30 sec.
    // If I put sleep(30), it's going to sleep for each client every time. I want it to permanently check every 30 sec. and send messages to the clients.
    send response (0 or 1, depending on whether the max order is the same as the client's)
    if -1, thanks for the food
    if -2, going somewhere else
    close client connection
}
You may want to try the sleep() function in a loop; it will pause your program for that duration and then execute the statements that follow.
sleep(30);
is what you want, I suppose. Have a look here for more about sleep.
Your code should look something like this:
for(i = 0; i < 100; i++) // checks 100 times
{
    sleep(30);
    checkserverforupdate();
}
Update
Code:
while(1){ // runs forever
    sleep(30); // pauses here for 30 seconds each time before running the following code
    **Client**
    patience = 0; served = 0;
    do {
        send random 1-5
        receive message: if 1 -> served = 1; if 0 -> patience++
    } while patience != 3 and served != 1
    if served == 1 send -1
    else send -2
    close connection
    **Server**
    while(1)
    {
        serve clients in a concurrent manner
        select
        add client to socket list
        serve client
        wait for message
        if 1-5, add to vector
        // here I don't know how to make it wait for 30 sec.
        // If I put sleep(30), it's going to sleep for each client every time. I want it to permanently check every 30 sec. and send messages to the clients.
        send response (0 or 1, depending on whether the max order is the same as the client's)
        if -1, thanks for the food
        if -2, going somewhere else
        close client connection
    }
}
What happens with sleep(30) is that the program pauses for 30 seconds and then executes the code written after it; if you put it in a while loop, it waits every time through the loop.
Update 2:
Code for how to get the current time in C:
/* localtime example */
#include <stdio.h>
#include <time.h>
int main ()
{
    time_t rawtime;
    struct tm * timeinfo;
    time ( &rawtime );
    timeinfo = localtime ( &rawtime );
    printf ( "Current local time and date: %s", asctime (timeinfo) );
    return 0;
}
Update 3:
So your code should be like this:
time_t rawtime;
struct tm * timeinfo;
time ( &rawtime );
timeinfo = localtime ( &rawtime );
printf ( "Current local time and date: %s", asctime (timeinfo) );
All of your code here
and then get the time again,
time_t rawtime1;
.....
and then you may calculate the difference between them and put a sleep statement wherever you wish. If the time you want it to pause for is x, then:
sleep(x);
return 0;
You can use a thread to wait for 30 seconds, and when the thread returns, do a check to find out the preferred food from all the clients. This link for threading will help.
Since you are into pseudocode, here goes:
t = time now
start loop
... do stuff
t2 = time now
delta = t2 - t
sleep (30 - delta % 30)
end of loop
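A rough C version of that loop, with the per-iteration work stubbed out in a hypothetical check_orders_and_reply() function:

#include <time.h>
#include <unistd.h>

/* Placeholder: in the real server this would collect orders and reply to clients. */
static void check_orders_and_reply(void)
{
}

int main(void)
{
    time_t start = time(NULL);          /* t = time now */
    while (1) {
        check_orders_and_reply();       /* ... do stuff */
        time_t now = time(NULL);        /* t2 = time now */
        long delta = (long)difftime(now, start);
        sleep(30 - delta % 30);         /* sleep until the next 30 s boundary */
    }
    return 0;
}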
The select() function in C has a timeout parameter. The correct logic would be something like:
nextTime = now() + 30 seconds;
while(running) {
currentTime = now();
if( nextTime <= currentTime ) {
makeFood();
nextTime += 30 seconds;
} else {
timeout = nextTime - currentTime;
if(select(..., timeout) > 0) {
// Check FD sets here
}
}
}
EDIT (added explanation below).
You have to have some sort of data structure to track that state (e.g. which foods have been requested by which clients). This data structure isn't shown above.
The // Check FD sets part modifies the current state. For example, when a new client orders some food you might add the details to a linked list of clients waiting for orders. When a client gets sick of waiting you'd remove their order from the list of clients waiting for orders.
The makeFood() function would look at the (previously stored) state and determine which type of food to make; then inform clients to either keep waiting or that their food was made (and remove clients that did receive their food).
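Here is a compact C sketch of that select() loop, assuming a single already-created listening socket and placeholder make_food() / handle_client_input() helpers (bind/listen/accept, the client fd list and error handling are omitted):

#include <sys/select.h>
#include <sys/socket.h>
#include <time.h>

#define PERIOD 30   /* seconds between servings */

/* Placeholders for the application logic. */
static void make_food(void) { /* pick the most ordered food and reply to waiting clients */ }
static void handle_client_input(int fd) { (void)fd; /* read an order and store it */ }

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);  /* bind()/listen() omitted */
    time_t next_time = time(NULL) + PERIOD;

    while (1) {
        time_t now = time(NULL);
        if (next_time <= now) {
            make_food();
            next_time += PERIOD;
        } else {
            struct timeval timeout = { next_time - now, 0 };
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(listen_fd, &readfds);              /* real code also adds every client fd */
            if (select(listen_fd + 1, &readfds, NULL, NULL, &timeout) > 0) {
                if (FD_ISSET(listen_fd, &readfds))
                    handle_client_input(listen_fd);   /* accept()/recv() in real code */
            }
        }
    }
}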
EDIT 2 - The problem with using sleep().
If you sleep until it's the right time to send replies, then receive packets, and then send replies, the time it takes to receive the packets varies (depending on how many were received), which means the time at which you send the replies also varies.
A time-line might look like this:
12:00:59 - stop sleeping
12:00:59 - check for received packets and find there's none
12:00:59 - do 1 second of processing
12:01:00 - start sending reply packets
12:01:02 - finish sending reply packets
12:01:02 - sleep for the remaining 27 seconds
12:01:29 - stop sleeping
12:01:29 - check for received packets and find there's lots of them
12:01:29 - process received packets for 3 seconds
12:01:32 - do 1 second of processing (3 seconds too late)
12:01:33 - start sending reply packets (3 seconds too late)
12:01:35 - finish sending reply packets (3 seconds too late)
12:01:35 - sleep for the remaining 24 seconds
In this case, the replies are sent between 0 and 3 seconds too late because it takes between 0 and N seconds to handle received packets. This timing variation is called "jitter".
To fix the "jitter", you handle received packets while you're waiting so that the time spent handling those received packets doesn't influence the time that replies are sent.
In this case, the time-line looks like this:
12:00:59 - stop waiting and receiving packets
12:00:59 - do 1 second of processing
12:01:00 - start sending reply packets
12:01:02 - finish sending reply packets
12:01:02 - receive packets while waiting for the remaining 27 seconds
12:01:29 - stop waiting and receiving packets
12:01:29 - do 1 second of processing
12:01:30 - start sending reply packets
12:01:32 - finish sending reply packets
12:01:32 - receive packets while waiting for the remaining 27 seconds
In this case, the replies are always sent at the correct time.
EDIT 3 - The other problem with using sleep().
The first problem with sleep exists because it's impossible to predict in advance the ideal amount of time to spend sleeping, which causes you to wake up too late (which causes "jitter"). The second problem is the effect this has on the server's ability to handle periods of "almost maximum load".
Basically, you sleep too long, and as load increases you reach a point where you've slept too long and left yourself with insufficient time to process the load you should've received while you were waiting.
This causes oscillations when there's constant "almost maximum load". For example, if there's enough load that it takes 29 seconds to do everything (receive packets, process them and send reply packets); then for one 30 second period you sleep too long, fail to handle the load in time and finish too late; then you realise that you need to sleep for "negative seconds" (but you can't turn back time - the best you can do is skip the sleep()). This pushes the excess load onto the next 30 second period of time. After several (many) of these "sleep skipped but still starting too late" 30 seconds periods you eventually catch up; and as soon as that happens you sleep too long again and start the entire thing all over again.
Of course this assumes there's constant "almost maximum load". Real life is almost never like that. In practice you tend to get fluctuations - e.g. one 30 second period with "slightly higher than maximum average load", followed by a 30 second period with "slightly less than maximum average load". What we're really talking about here is the ability of the server to recover quickly from the occasional "slightly higher than maximum average load" periods.
For my method, the server recovers as quickly as possible. For the "sleep()" method it takes much longer to recover, resulting in prolonged "lag surges".

Apache Camel - aggregator to space out requests, but not queuing requests

I have a route which, when sent a message, invokes a refresh service.
I only want the service to be invoked at most every 1 minute.
If the refresh service takes longer than 1 minute (e.g. 11 minutes) I don't want requests for it to queue up.
The first part, at most every 1 minute, is easy: I just create an aggregator with a completionTimeout of 1 min.
The part about stopping requests queueing up is not so easy and I can't figure out how to construct it,
e.g.
from( seda_in )
.aggregate( constant(A), blank aggregator )
.completionTimeout( 1000 )
.process( whatever )...
If the process takes 15 seconds then potentially 15 new invoke messages could be waiting for the process when it finishes. I want at most 1 to be waiting, for however long the process takes (it's hard to predict).
How can I avoid this or structure it better to achieve my objectives?
I believe you would be interested in seeing the Throttler pattern, which is documented here http://camel.apache.org/throttler.html
Hope this helps :)
EDIT - If you want to eliminate excess requests, you can also investigate setting a TTL (time to live) header within JMS and adding a concurrent consumer count of 1 to your route, which means any excess messages will also be discarded.

Looping through a changing event list

I have a list of machines that I ping depending on their response time.
Initially all machines are pinged every t milliseconds, 5 times each. Depending on the response time of these 5 pings for each machine, I adjust its ping interval, stretching or contracting it, until arriving at some configuration like this:
machine1: t1....t2....t3....t4....t5
machine2: t1......t2......t3......t4......t5
..
machineN: ..t1..t2..t3..t4..t5
^
|machine needs to be pinged at this tick
t1..tN represent the (millisecond) ticks of the clock.
Having a thread per machine to do the ping is the obvious approach, but it is not optimal due to the number of machines.
Rather, a single thread that iterates through the global order of the events is desirable, something like this:
while(true){
    fetch_next_machine_to_be_pinged();
    ping_it();
    if(any_machine_pinged_5_times())
        reorder_events(); // adjust the times of its next 5 pings
    //continue
}
What is the best way to achieve this? (PS: the language is C.)
I would use a priority queue.
At any time, the queue would contain one entry per machine. The "priority" of entry M would be the timestamp when machine M needs to be pinged next, and the payload would be some token identifying the machine (so that you know whom to ping).
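A minimal sketch in C of such a queue, implemented here as a binary min-heap keyed on the next ping time. NMACHINES, do_ping() and the 5-second reschedule are placeholders; a real implementation would size the heap dynamically and adapt the interval per machine:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NMACHINES 4

typedef struct {
    time_t next_ping;   /* when this machine must be pinged next */
    int    machine_id;  /* token identifying the machine */
} entry;

static entry heap[NMACHINES];
static int heap_size = 0;

static void swap(int a, int b) { entry t = heap[a]; heap[a] = heap[b]; heap[b] = t; }

/* Insert an entry and restore the heap property (earliest next_ping on top). */
static void push(entry e)
{
    int i = heap_size++;
    heap[i] = e;
    while (i > 0 && heap[(i - 1) / 2].next_ping > heap[i].next_ping) {
        swap(i, (i - 1) / 2);
        i = (i - 1) / 2;
    }
}

/* Remove and return the entry with the earliest next_ping. */
static entry pop(void)
{
    entry top = heap[0];
    heap[0] = heap[--heap_size];
    int i = 0;
    for (;;) {
        int l = 2 * i + 1, r = 2 * i + 2, m = i;
        if (l < heap_size && heap[l].next_ping < heap[m].next_ping) m = l;
        if (r < heap_size && heap[r].next_ping < heap[m].next_ping) m = r;
        if (m == i) break;
        swap(i, m);
        i = m;
    }
    return top;
}

static void do_ping(int machine_id) { printf("ping machine %d\n", machine_id); }

int main(void)
{
    for (int i = 0; i < NMACHINES; i++)
        push((entry){ time(NULL) + i + 1, i });     /* initial schedule, 1 s apart */

    for (int n = 0; n < 10; n++) {                  /* a few iterations for illustration */
        entry e = pop();
        time_t now = time(NULL);
        if (e.next_ping > now)
            sleep(e.next_ping - now);               /* wait until this machine is due */
        do_ping(e.machine_id);
        e.next_ping = time(NULL) + 5;               /* reschedule; real code adapts this */
        push(e);
    }
    return 0;
}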

How do I measure response time in seconds given the following benchmarking data?

We recently got some data back on a benchmarking test from a software vendor, and I think I'm missing something obvious.
If there were 17 transactions (I assume they mean successfully completed requests) per second, and 1500 of these requests could be served in 5 minutes, then how do I get the response time for a single user? Is this sort of thing even possible with benchmarking? I have a lot of other data from them, including apache config settings, but I'm not sure how to do all the math.
Given the server setup they sent, I want to know how I can deduce the user response time. I have looked at other similar benchmarking tests, but I'm having trouble measuring requests to response time. What other data do I need to provide here to get that?
If only 1500 of these can be served per 5 minutes then:
1500 / 5 = 300 transactions per minute can be served
300 / 60 = 5 transactions per second can be served
so how are they getting 17 completed transactions per second? Last time I checked, 5 < 17!
This doesn't seem to fit. Or am I looking at it wrongly?
I presume by user response time, you mean the time it takes to serve a single transaction:
If they can serve 5 per second then it takes 200 ms (1/5 s) per transaction
If they can serve 17 per second then it takes 59 ms (1/17 s) per transaction
That is all we can tell from the given data. Perhaps clarify how many transactions are being done per second.
