How do I add a wait of 30 seconds between 2 Loop Controllers?
Each Loop Controller has a loop count of 10.
Scenario:
1. No. of threads = 5
2. For each user, no. of Loop Controllers = 5
Loop_controller_1 (count=10, i.e. 10 HTTP requests)
wait for 10 sec
Loop_controller_2 (count=10, i.e. 10 HTTP requests)
wait for 10 sec
Loop_controller_3 (count=10, i.e. 10 HTTP requests)
wait for 10 sec
Loop_controller_4 (count=10, i.e. 10 HTTP requests)
wait for 10 sec
Loop_controller_5 (count=10, i.e. 10 HTTP requests)
wait for 10 sec
Note that there must be no wait between the 10 HTTP requests inside a controller; the wait must happen only after all 10 are completed. Which JMeter components should I use?
Add a Flow Control Action between the Loop Controllers and choose Pause with 10000 milliseconds.
It allows pauses to be included without needing to generate a sample.
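For the scenario above, the Test Plan outline could look roughly like this (element names and placement are illustrative):
Thread Group (5 threads)
  Loop Controller 1 (Loop Count: 10)
    HTTP Request
  Flow Control Action (Pause: 10000 ms)
  Loop Controller 2 (Loop Count: 10)
    HTTP Request
  Flow Control Action (Pause: 10000 ms)
  ... (repeat for Loop Controllers 3-5)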
Depending on what you're trying to achieve:
If you want to add a 30-second wait for each thread (virtual user) individually, add a Constant Timer as a child of the first request in Loop Controller 2; timers are executed before requests, so each thread will wait for the defined amount of time before starting the new Loop Controller.
Alternatively, you can configure JMeter to "gather" all threads and tell them to wait for the specified amount of time together by adding a Flow Control Action sampler and a Synchronizing Timer with "Number of Simulated Users to Group by" equal to the number of threads in your Thread Group.
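As a rough outline of where those elements would sit (values are illustrative):
Option 1 - per-thread wait:
Loop Controller 2 (Loop Count: 10)
  HTTP Request (first request)
    Constant Timer (Thread Delay: 30000 ms)
Option 2 - wait for all threads together:
Flow Control Action (Pause)
  Synchronizing Timer (Number of Simulated Users to Group by: 5)
Loop Controller 2 (Loop Count: 10)
  HTTP Request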
Is it possible to call multiple users simultaneously in dasha.ai? What is the maximum count of simultaneous calls, and how can I implement that?
Simultaneous calls are already implemented via the Conversation Queue.
Simultaneous calls limits
Instance limit
The instance limit is set by you in the SDK application.
It is initialized through the concurrency parameter of the application.start method (default value: 1):
await application.start({ concurrency: 10 });
Group limit
All users have a Default group, which does not have a limit (it is theoretically infinite).
However, if you are using a custom group, you should set max-concurrency to the number of simultaneous calls allowed in it.
You can set and update max-concurrency via the Dasha CLI.
Example of creating a group:
dasha group create group_name --max-concurrency=50
and updating a group:
dasha group update group_name --max-concurrency=50
Which group is used by your application (instance) is defined by the deploy method:
dasha.deploy("app", { groupName: "Default" });
Customer limit
This limit can be changed only on demand (you can't change it manually, at least for now).
Application
You need to specify how to handle calls via the Conversation Queue: for calls that can be started, you must handle the ready event:
application.queue.on("ready", async (key, conversation) => {
conversation.input = getInput(key); // getInput must return an object that consists of the input variables for a call
const result = await conversation.execute(); // start conversation/call
});
This event is fired for each call asynchronously.
To make simultaneous calls, push as many calls as you need into the queue (example of adding one call to the queue that must be started within one hour, or it will time out):
application.queue.push("some_unique_key", {
after: new Date(Date.now()),
before: new Date(Date.now() + 60 * 60 * 1000)
});
While you have free limit slots and calls in the queue that are ready to start, they will be processed as soon as possible.
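Putting the snippets above together, a minimal end-to-end sketch might look like this (the SDK import name, the assumption that deploy resolves to the application object, the key format, and the getInput stub are mine, not taken from the answer):
// Sketch only: the import name and getInput stub are assumptions.
const dasha = require("@dasha.ai/sdk");

// Hypothetical helper returning the input variables for a call
const getInput = (key) => ({ phone: key });

async function main() {
  // Deploy against the Default group (no group limit applies there)
  const application = await dasha.deploy("app", { groupName: "Default" });

  // Handle each call once a free concurrency slot is available
  application.queue.on("ready", async (key, conversation) => {
    conversation.input = getInput(key);
    const result = await conversation.execute(); // start the conversation/call
    console.log(key, result);
  });

  // Allow up to 10 conversations to run at the same time
  await application.start({ concurrency: 10 });

  // Push 10 calls; each must start within one hour or it times out
  for (let i = 0; i < 10; i++) {
    application.queue.push(`call-${i}`, {
      after: new Date(Date.now()),
      before: new Date(Date.now() + 60 * 60 * 1000),
    });
  }
}

main().catch(console.error);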
We are on the Google App Engine standard environment, F2 instance (generation 1, Python 2.7). We have a reporting module that follows this flow.
A worker task is initiated in a queue.
task = taskqueue.add(
    url='/backendreport',
    target='worker',
    queue_name='generate-reports',
    params={
        "task_data": task_data
    })
In the worker class, we query Google Datastore and write the data to a Google Sheet. We paginate through the records to find additional report elements. When we find an additional page, we call the same task again to spawn another write, so it can fetch the next set of report elements and write them to the Google Sheet.
In backendreport.py we have the following code:
class BackendReport():
    # Query Google Datastore to find the records (paginated)
    result = self.service.spreadsheets().values().update(
        spreadsheetId=spreadsheet_Id,
        range=range_name,
        valueInputOption=value_input_option,
        body=resource_body).execute()
    # If pagination finds additional records
    task = taskqueue.add(
        url='/backendreport',
        target='worker',
        queue_name='generate-reports',
        params={
            "task_data": task_data
        })
We run the same BackendReport (with pagination) as a front-end job (not as a task). The pagination works without any error, meaning we fetch each page of records and display it to the front end. But when we execute the tasks iteratively, it fails with the soft memory limit issue. We were under the impression that every time a task is called (for each page), it should act independently and there shouldn't be any memory constraints. What are we doing wrong here?
Why doesn't GCP spin up a different instance automatically when the soft memory limit is reached (our instance class is F2)?
The error message says "soft memory limit of 512 MB reached after servicing 3 requests total". Does this mean that the backendreport module served 3 requests, i.e. there were 3 task calls (/backendreport)?
Why doesn't GCP spin up a different instance when the soft memory limit is reached
One of the primary mechanisms App Engine uses to decide when to spin up a new instance is max_concurrent_requests. You can check out all of the automatic_scaling params you can configure here:
https://cloud.google.com/appengine/docs/standard/python/config/appref#scaling_elements
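For a generation-1 Python standard app these settings live in app.yaml; a minimal sketch (the value shown is illustrative, not a recommendation):
instance_class: F2
automatic_scaling:
  max_concurrent_requests: 8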
does this mean that the backendreport module spun up 3 requests - does it mean there were 3 task calls (/backendreport)?
I think so. To be sure, you can open up the Logs Viewer, find the log where this was printed, and filter your logs by that instance-id to see all the requests it handled that led to that point.
You're creating multiple tasks in Cloud Tasks, but there's no limit on the dispatching queue, so as the queue dispatches multiple tasks at the same time, the instance reaches its memory limit. The limitation you want to put in place is indeed max_concurrent_requests, however not for the instances in app.yaml; it should be set for queue dispatching in queue.yaml, so only one task at a time is dispatched:
- name: generate-reports
  rate: 1/s
  max_concurrent_requests: 1
My requirements are as below:
I have action URLs, say
["http://yyy.com/abrakadabra1", "http://yyy.com/abrakadabra2"]
and maxActionsToSave: 3, waitingDuration: 30 (sec).
I have to save requests that match the action URLs array, up to maxActionsToSave times, or wait up to the waitingDuration constant if there is no network (the network is very slow and also fluctuates).
If any request (maybe a time-sync action or any other request) succeeds, then push these saved actions; otherwise throw an exception if none of the requests succeeded, or retry up to some specified number of actions. Once the waiting limit or the action limit is exceeded, the HTTP request can throw exceptions.
I have this script:
.foreach("${list}", "item") {
exec(http("Req 1")
.post("/path/to/service/one")
.formParam("param1", "${item}")
.formParam("param2", "somestring")
.formParam("param3", "${param3}")
.check(xpath("/xmlroot/id").saveAs("id"))
.check(xpath("/xmlroot/version").saveAs("version")))
.exec(http("Req 2")
.post("/path/to/service/two")
.formParam("param1", "${param1}")
.formParam("param2", "${param2}")
.formParam("version", "${version}")
.formParam("id", "${id}")
.check(status.is(200)))
.exec(http("Req 3")
.post("/path/to/service/three")
.formParam("id", "${id}")
.formParam("param1", "somestring")
.formParam("param2", "${item}")
.check(xpath("/xmlroot/#id").exists))
}
It executes successfully, but the report shows only 2 Req 2 requests executed, while there are 8 each of Req 1 and Req 3. I expect there to be an equal number of requests.
Any idea what could be causing this?
Edit:
It seems to complete a different number of requests each run, and it only runs as many Req 2 requests as it can fit in the time that Req 1 and Req 3 take. Are they fired simultaneously? I assumed they are fired one after the other.
Edit2:
Here is a screenshot of one of the generated reports. It shows that Req 2 executed 75% fewer times, and that its requests are also around 3 times slower than Req 1 and Req 3.
Edit3: Setup and scenario:
val scenario = scenario("Scenario")
.exitBlockOnFail {
exec(loginAction,
actionThatEndsWithTheLoop)
}
setUp(scenario.inject(atOnceUsers(1)))
.assertions(global.successfulRequests.percent.is(100))
.protocols(httpProtocol)
I only run 1 user so all of the 8 iterations are expected for each of the requests.
I have a backend that is normally invoked by a cron job to run a few times every day. Yesterday, I noticed it was restarting without stopping. I don't see a place in my code where that invocation is happening. Rather, the task queue seems to indicate it is running due to retries caused by errors. One error is that status is saved to BigQuery, and that is failing because a quota is exceeded. But this seems to generate an infinite loop. Is this a bug in App Engine or am I doing something wrong? Is there a way to indicate not to restart a task if it fails? My other App Engine tasks that terminate without a 200 status don't do that...
Here is a trace of the queue from which the restarts keep happening:
Here is the logging showing continuous running
And here is the HTTP header inside the logging
UPDATE 1
Here is the cron:
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <url>/uploadToBigQueryStatus</url>
    <description>Check fileNameSaved Status</description>
    <schedule>every 15 minutes from 02:30 to 03:30</schedule>
    <timezone>US/Pacific</timezone>
    <target>checkuploadstatus-backend</target>
  </cron>
</cronentries>
UPDATE 2
As for the comment about catching the error: the error, I believe, is that the BigQuery job fails because a quota has been hit. The strange thing is that it happened yesterday, and the quota should have been reset, so the error should have gone away for at least a while. I don't understand why the task retries; I never selected that option, as far as I am aware.
I killed the servlet and emptied the task queue, so at least it is stopped. But I don't know the root cause. If the BQ table quota was the reason, that shouldn't cause an infinite retry!
UPDATE 3
I have not trapped the servlet call that produced the error that led to the infinite retry. But I checked this cron-activated servlet today and found another non-200 result. The return value this time was 500, and it was caused by a Datastore timeout exception.
Here is the screenshot of the response showing the 500 return code.
Here is the exception info page 1
And the following data
The offending code is the for loop iterating over the Datastore query:
if (keys[0] != null) {
    /* Define the query */
    q = new Query(bucket).setAncestor(keys[0]);
    pq = datastore.prepare(q);
    gotResult = false;
    // First system time stamp
    Date date = new Timestamp(new Date().getTime());
    Timestamp timeStampNow = new Timestamp(date.getTime());
    for (Entity result : pq.asIterable()) {
I will add a try-catch around this for loop, as it is crashing in this iteration:
if (keys[0] != null) {
    /* Define the query */
    q = new Query(bucket).setAncestor(keys[0]);
    pq = datastore.prepare(q);
    gotResult = false;
    // First system time stamp
    Date date = new Timestamp(new Date().getTime());
    Timestamp timeStampNow = new Timestamp(date.getTime());
    try {
        for (Entity result : pq.asIterable()) {
Hopefully, the Datastore read will no longer crash the servlet but will just register a failure. At least the cron will run again and pick up other non-handled results.
By the way, is this a Java error or an App Engine one? I see a lot of these Datastore timeouts, and I will add a try-catch around all the result loops. Still, it should not cause the infinite retry that I experienced. I will see if I can find the actual crash... the problem is that it overloaded my logging... More later.
UPDATE 4
I went back to the logs to see when the infinite loop began. In the logs below, I opened the run that is at the head of the continuous running. You can see that it fails with 500 every 5th time. It is not the cron that invoked it; it was me calling the servlet to check the BigQuery upload status (I write the job info to the Datastore, then read it back in the servlet and write the job status to BigQuery, and if it is done, erase the Datastore entry). I cannot explain the steady 500 errors every 5th call, but it is always the Datastore timeout exception.
UPDATE 5
Can the infinite retries be happening because of the queue configuration?
Queue settings for CheckUploadStatus (as shown in the console): 20/s, 10, 100, 10, 200, 2
I just noticed another task queue had a 500 return code and was continuously retrying. I did some searching and found that some people have tried to configure the queues for no retries. They said that didn't work.
See this link:
Google App Engine: task_retry_limit doesn't work?
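For reference, the documented way to cap task retries is a retry-parameters block in queue.xml; a rough sketch with illustrative values (as the linked question notes, people report mixed results):
<queue-entries>
  <queue>
    <name>CheckUploadStatus</name>
    <rate>20/s</rate>
    <retry-parameters>
      <task-retry-limit>1</task-retry-limit>
    </retry-parameters>
  </queue>
</queue-entries>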
But is a single retry possible? That would be far better than infinite retries.
It is contradictory that Google enforces quotas but seems to prefer infinite retries. I would much prefer retries to be blocked by default on non-200 return codes, and then have no quotas!
According to Retrying cron jobs that fail:
If a cron job's request handler returns a status code that is not in
the range 200–299 (inclusive) App Engine considers the job to have
failed. By default, failed jobs are not retried.
To set failed jobs to be retried:
Include a retry-parameters block in your cron.xml file.
Choose and set the retry parameters in the retry-parameters block.
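Such a retry-parameters block, nested inside a <cron> entry, would look roughly like this (a sketch; the values are illustrative):
<cron>
  <url>/uploadToBigQueryStatus</url>
  <schedule>every 15 minutes from 02:30 to 03:30</schedule>
  <retry-parameters>
    <job-retry-limit>2</job-retry-limit>
    <min-backoff-seconds>10</min-backoff-seconds>
    <max-doublings>3</max-doublings>
  </retry-parameters>
</cron>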
Your cron config doesn't specify the necessary retry parameters, so the jobs returning the 500 code should, indeed, not be retried, as you expect.
So this looks like a bug. Possibly a variant of the (older) known issue 10075 - the 503 code mentioned there might have changed in the meantime - but it is also a quota-related failure.
The suggestion from GAEfan's comment is likely a good workaround:
You will need to catch the error, and send a 200 response to stop the
task queue from retrying. – GAEfan
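A minimal sketch of that workaround for the servlet discussed above (the class name, entity kind, and logging are placeholders; the points taken from the advice are catching the Datastore timeout and still returning 200):
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.DatastoreTimeoutException;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.PreparedQuery;
import com.google.appengine.api.datastore.Query;

// Placeholder servlet name; the query mirrors the snippets above.
public class CheckUploadStatusServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        Query q = new Query("UploadStatus"); // placeholder kind; the real code uses setAncestor(keys[0])
        PreparedQuery pq = datastore.prepare(q);
        try {
            for (Entity result : pq.asIterable()) {
                // process each status entity as before
            }
        } catch (DatastoreTimeoutException e) {
            // Swallow the timeout: logging it and still returning 200 stops
            // the task queue / cron from treating this run as a failure.
            log("Datastore timeout while checking upload status", e);
        }
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}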