Gatling tryMax reporting error even if the retry was successful - gatling

I have the following scenario:
.tryMax(10) {
  pause(200 millis)
    .exec(
      http("monitor status")
        .get("/${orderId}")
        .headers(sentHeaders)
        .check(status is 200)
        .check(jsonPath("$.status").is("SUCCESSFUL"))
    )
}
// next request
I expect this to call my endpoint up to 10 times until the condition status=SUCCESSFUL is met.
It does this correctly and only executes the next request after the condition is met. However, the test result still reports it as a failure.
---- Errors --------------------------------------------------------------------
> jsonPath($.status).find.is(SUCCESSFUL), but actually found ACCEPTED    992 (100.0%)
Why is it reporting the request inside the tryMax as an error?

That's the expected behavior for tryMax: each failed attempt inside the block is still reported as a KO, even when a later retry succeeds.
Use an asLongAs loop instead, together with a non-failing check that saves the extracted status into the session so it can drive the loop condition.

Related

loop with requests.get and avoid timeout errors

I am trying to scrape information.
I go to the introduction page to determine the number of search results. Often the results span more than one page, so I need to refresh and run further requests in a loop. On a few occasions an error occurs in those extra requests, or the script hangs.
I am curious whether there is a way to check a request, retry it if it fails, and if it still fails, log it and move on to the next one.
Here is a sample script:
import requests

t = .3
urls = ['https://stackoverflow.com/', 'https://www.google.com/']  # list of up to 200 urls
for url in urls:
    print(url)
    response = requests.get(url, timeout=t)
    t = t + 10
    i = i - 1
Running this occasionally raises a timeout and the processing stops. My workaround is to print the URL that failed, then rerun after updating the list manually.
I would like the script to retry a failed request up to 5 times; if it still fails, it should log and store the failed URL and move on to the next one, so that I can retry the failed URLs at a later stage.
Any suggestions?
I haven't used Python in a while, but I'm pretty sure that this will work.
import requests

t = .3
urls = ['https://stackoverflow.com/', 'https://www.google.com/']  # list of up to 200 urls
for url in urls:
    print(url)
    try:
        response = requests.get(url, timeout=t)
    except requests.exceptions.RequestException:
        # if the request fails: try again
        try:
            response = requests.get(url, timeout=t)
        except requests.exceptions.RequestException:
            # if the request fails again: do nothing (continue to the next url)
            pass
    t = t + 10
    i = i - 1
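If you want the full behaviour described in the question (retry up to 5 times, then log and store the failed URL for a later pass), something along these lines should work; fetch_with_retries and failed_urls are just illustrative names, not part of the original script:

import logging
import requests

def fetch_with_retries(url, timeout=5, max_retries=5):
    """Return the response for url, or None if every attempt fails."""
    for attempt in range(1, max_retries + 1):
        try:
            return requests.get(url, timeout=timeout)
        except requests.exceptions.RequestException as exc:
            logging.warning("Attempt %d/%d for %s failed: %s",
                            attempt, max_retries, url, exc)
    return None

urls = ['https://stackoverflow.com/', 'https://www.google.com/']
failed_urls = []
for url in urls:
    response = fetch_with_retries(url)
    if response is None:
        failed_urls.append(url)  # store the failure and move on

print("Failed URLs:", failed_urls)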

How can an HTTP 403 be returned from an apache web server input filter?

I have written an apache 2.x module that attempts to scan request bodies, and conditionally return 403 Forbidden if certain patterns match.
My first attempt used ap_hook_handler to intercept the request, scan it, and then return DECLINED so the real handler could take over (or 403 if the conditions were met).
The problem with that approach is that when I read the POST body of the request (using ap_get_client_block and friends), it consumed the body, so that if the request was subsequently handled by mod_proxy, the body was gone.
I think the right way to scan the body would be to use an input filter, except an input filter can only return APR_SUCCESS or fail. Any return code other than APR_SUCCESS gets translated into HTTP 400 Bad Request.
I think maybe I can store a flag in the request notes if the input filter wants to fail the request, but I'm not sure which later hook should check it.
It turned out to be pretty easy: just drop an error bucket into the brigade:
/* build a brigade containing an error bucket (403) followed by EOS */
apr_bucket_brigade *brigade = apr_brigade_create(f->r->pool,
                                                 f->r->connection->bucket_alloc);
apr_bucket *bucket = ap_bucket_error_create(403, NULL, f->r->pool,
                                            f->r->connection->bucket_alloc);
APR_BRIGADE_INSERT_TAIL(brigade, bucket);
bucket = apr_bucket_eos_create(f->r->connection->bucket_alloc);
APR_BRIGADE_INSERT_TAIL(brigade, bucket);
/* pass it down the filter chain; the error bucket becomes the 403 response */
ap_pass_brigade(f->next, brigade);

App Engine generating infinite retries

I have a backend that is normally invoked by a cron job to run a few times every day. Yesterday I noticed it was restarting without stopping. I don't see a place in my code where that invocation is happening. Rather, the task queue seems to indicate it is running due to retries caused by errors. One error is that the status is saved to BigQuery, and that is failing because a quota is exceeded. But this seems to generate an infinite loop. Is this a bug in App Engine or am I doing something wrong? Is there a way to indicate not to restart a task if it fails? My other App Engine tasks that terminate with a non-200 status don't do that...
Here is a trace of the queue from which the restarts keep happening:
Here is the logging showing continuous running
And here is the HTTP header inside the logging
UPDATE1
Here is the cron:
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <url>/uploadToBigQueryStatus</url>
    <description>Check fileNameSaved Status</description>
    <schedule>every 15 minutes from 02:30 to 03:30</schedule>
    <timezone>US/Pacific</timezone>
    <target>checkuploadstatus-backend</target>
  </cron>
</cronentries>
UPDATE 2
As for the comment about catching the error: the error, I believe, is that the BigQuery job fails because a quota has been hit. The strange thing is that it happened yesterday, and the quota should have been reset, so the error should have gone away for at least a while. I don't understand why the task retries; I never selected that option that I am aware of.
I killed the servlet and emptied the task queue, so at least it has stopped. But I don't know the root cause. If the BQ table quota was the reason, that shouldn't cause an infinite retry!
UPDATE 3
I have not trapped the servlet call that produced the error that led to the infinite retry. But I checked this cron-activated servlet today and found another non-200 result. The return value this time was 500, and it is caused by a Datastore timeout exception.
Here is the screen shot of the return that shows the 500 return code.
Here is the exception info page 1
And the following data
The offending code line is the for loop iterating over the Datastore query results:
if (keys[0] != null) {
    /* Define the query */
    q = new Query(bucket).setAncestor(keys[0]);
    pq = datastore.prepare(q);
    gotResult = false;
    // First system time stamp
    Date date = new Timestamp(new Date().getTime());
    Timestamp timeStampNow = new Timestamp(date.getTime());
    for (Entity result : pq.asIterable()) {
I will add a try-catch on this for loop as it is crashing in this iteration.
if (keys[0] != null) {
    /* Define the query */
    q = new Query(bucket).setAncestor(keys[0]);
    pq = datastore.prepare(q);
    gotResult = false;
    // First system time stamp
    Date date = new Timestamp(new Date().getTime());
    Timestamp timeStampNow = new Timestamp(date.getTime());
    try {
        for (Entity result : pq.asIterable()) {
Hopefully, the Datastore read will no longer crash the servlet, but it will still register a failure. At least the cron will run again and pick up other unhandled results.
By the way, is this a Java error or an App Engine one? I see a lot of these Datastore timeouts, and I will add a try-catch around all the result loops. Still, it should not cause the infinite retry that I experienced. I will see if I can find the actual crash; the problem is that it overloaded my logging... More later.
UPDATE 4
I went back to the logs to see when the infinite loop began. In the logs below, I opened the run that is at the head of the continuous running. You can see that it fails with 500 every 5th time. It was not the cron that invoked it, it was me calling the servlet to check the BigQuery upload status (I write the job info to the Datastore, then read it back in the servlet, write the job status to BigQuery and, if done, erase the Datastore entry). I cannot explain the steady 500 errors every 5th call, but it is always the Datastore timeout exception.
UPDATE 5
Can the infinite retries be happening because of the queue configuration?
CheckUploadStatus: 20/s, 10, 100, 10, 200, 2
I just noticed another task queue had a 500 return code and was continuously retrying. I did some searching and found that some people have tried to configure the queues for no retry. They said that didn't work.
See this link:
Google App Engine: task_retry_limit doesn't work?
But is a single retry possible? That would be far better than infinite.
It is contradictory that Google enforces quotas yet seems to prefer infinite retries. I would much prefer that retries be blocked by default on a non-200 return code, and then have NO QUOTAS!!!
According to Retrying cron jobs that fail:
If a cron job's request handler returns a status code that is not in
the range 200–299 (inclusive) App Engine considers the job to have
failed. By default, failed jobs are not retried.
To set failed jobs to be retried:
Include a retry-parameters block in your cron.xml file.
Choose and set the retry parameters in the retry-parameters block.
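For reference, such a block sits inside the <cron> entry. A minimal sketch using the documented cron retry parameters (the values here are purely illustrative):

<cron>
  <url>/uploadToBigQueryStatus</url>
  <schedule>every 15 minutes from 02:30 to 03:30</schedule>
  <target>checkuploadstatus-backend</target>
  <retry-parameters>
    <job-retry-limit>2</job-retry-limit>
    <min-backoff-seconds>10</min-backoff-seconds>
    <max-backoff-seconds>200</max-backoff-seconds>
  </retry-parameters>
</cron>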
Your cron config doesn't specify the necessary retry parameters, so the jobs returning the 500 code should, indeed, not be retried, as you expect.
So this looks like a bug. Possibly a variant of the (older) known issue 10075 - the 503 code mentioned there might have changed in the meantime - but it is also a quota-related failure.
The suggestion from GAEfan's comment is likely a good workaround:
You will need to catch the error, and send a 200 response to stop the
task queue from retrying. – GAEfan 1 hour ago

Need to perform bulk delete on Search documents -- routinely getting "took too long to respond" errors

I perform a cron job where I need to update my search indices. As part of updating, I delete old documents with this code:
from google.appengine.api import search

cursor = search.Cursor()  # request a cursor so the results can be paged through
while True:
    results = index.search(search.Query(
        query_string="locationID=" + location_id,
        options=search.QueryOptions(
            limit=100,
            cursor=cursor,
            ids_only=True)))
    cursor = results.cursor
    doc_ids = [tmp_result.doc_id for tmp_result in results]
    index.delete(doc_ids)
    if not cursor:  # if cursor is None, meaning no more results
        break
I am relatively routinely seeing this error in my logs:
DeadlineExceededError: The API call search.DeleteDocument() took too long to respond
and was cancelled.
Is there something I'm doing wrong with my deletion code that makes this error pop up?
Edit:
Is this just a random error that will show up from time to time? If so, should I just implement a retry with exponential backoff, like so:
import logging
import time

def delete_doc_ids(doc_ids, retries):
    success = False
    time_to_sleep = 2**retries * 0.1  # 100 ms on the first attempt
    time.sleep(time_to_sleep)
    retries += 1
    try:
        index.delete(doc_ids)
        success = True
        return success, retries
    except Exception:
        logging.info("Failure to delete documents. Retrying in %s seconds" % time_to_sleep)
        return success, retries

# because this step fails a lot, keep running in a while loop until it works with exponential backoff
deletion_finished = False
retries = 0
# keep trying until deletion_finished returns true on an exponential backoff
while not deletion_finished:
    deletion_finished, retries = delete_doc_ids(doc_ids, retries)
Edit 2:
What is the default deadline alluded to here? I dug through the RPC source files and can't find it.
Try sending the job to the taskqueue, which has a much longer deadline (10 minutes), or to a custom module, which can run indefinitely.
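A minimal sketch of the task queue approach, using the standard deferred library; delete_old_documents and the 'my-index' name are illustrative, not from the original code:

from google.appengine.api import search
from google.appengine.ext import deferred

def delete_old_documents(index_name, location_id):
    """Delete matching documents in batches; runs on the task queue (10-minute deadline)."""
    index = search.Index(name=index_name)
    cursor = search.Cursor()  # ask for a cursor so we can page through results
    while cursor is not None:
        results = index.search(search.Query(
            query_string="locationID=" + location_id,
            options=search.QueryOptions(limit=100, cursor=cursor, ids_only=True)))
        doc_ids = [doc.doc_id for doc in results]
        if doc_ids:
            index.delete(doc_ids)
        cursor = results.cursor  # becomes None once all results have been returned

# In the cron handler, instead of deleting inline:
deferred.defer(delete_old_documents, 'my-index', location_id)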

What happens when an async put results in a contention exception after the request has ended, on App Engine with NDB?

Using ndb, let's say I put_async'd 40 elements with @ndb.toplevel, wrote the output to the user and ended the request, but one of those put_asyncs resulted in a contention exception. Would the response be 500 or 200? Or, if it is a task, would the task get re-executed?
One solution is to call get_result() on all those 40 futures before the request ends and catch the exceptions if they occur, but I'm not sure whether that will affect performance.
As far as I understand, using @ndb.toplevel causes the handler to wait for all async operations to finish before exiting.
From the docs:
As a convenience, you can decorate the request handler with @ndb.toplevel. This tells the handler not to exit until its asynchronous requests have finished. This in turn lets you send off the request and not worry about the result. https://developers.google.com/appengine/docs/python/ndb/async#intro
So by adding @ndb.toplevel, the response doesn't actually get returned until after the async methods have finished executing. Using @ndb.toplevel removes the need to call get_result on all the async calls that were fired off (for convenience). So based on this, the request would still return 500 if the async queries failed, because all the async queries needed to complete before returning. Update: see below.
If using a task (I assume you mean the task queue), the task queue will retry the request if the request fails.
So your handler could be something like:
def get(self):
    deferred.defer(execute_stuff_in_background, param, param1)
    template.render(...)
and execute_stuff_in_background would do all the expensive puts once the handler had returned. If there was a contention issue in the task, your original handler would still return 200.
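A rough sketch of what that background function might look like; build_entities is a hypothetical placeholder for whatever produces the entities to write:

from google.appengine.ext import ndb

def execute_stuff_in_background(param, param1):
    # Runs on the task queue, after the original handler has already returned 200.
    # If a put raises a contention error here, only the task is retried,
    # not the user-facing request.
    entities = build_entities(param, param1)  # hypothetical helper
    ndb.put_multi(entities)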
If you suspect there is going to be a contention issue, perhaps consider sharding or using a fork-join queue implementation to handle the writes (see implementation here: http://www.youtube.com/watch?v=zSDC_TU7rtc#t=41m35)
Edit: Short answer
The request will fail (return 500) if the async requests fail, because @ndb.toplevel waits
for all results to finish before exiting.
Updated: Having looked at @alexis's answer below, I re-ran my original test (where I turned off datastore writes and called put_async in a handler decorated with @ndb.toplevel); the response raises 500 intermittently (I assume this depends on execution time). Based on this and @alexis's answer below, don't expect the result to be 500 if an async task throws an exception and the calling function is decorated with @ndb.toplevel.
That's odd, I use toplevel and expect the opposite behavior. And that's what I observe. Did something change since the first answer to this question?
As the doc says:
This in turn lets you send off the request and not worry about the
result.
You can try the following unittest (using testbed):
@ndb.tasklet
def raiseSomething():
    yield ndb.Key('foo', 'bar').get_async()
    raise Exception()

@ndb.toplevel
def callRaiseSomething():
    future = raiseSomething()
    return "hello"

response = callRaiseSomething()
self.assertEqual(response, "hello")
This test passes. NDB logs a warning: "suspended generator raiseSomething(tests.py:90) raised Exception()", but it does not re-raise the exception.
ndb.toplevel only waits for the RPCs, but does nothing with the actual result.
If your decorated function is itself a tasklet, it will call get_result() on it first. At this point exceptions will be raised. Then it will wait for the remaining 'orphaned' RPCs, and will only log something if an exception is raised.
So my response is: the request will succeed (return 200).
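If you do want a failed put_async to surface as a request failure, one option (as mentioned in the question) is to collect the futures and call get_result() on them before returning. A rough sketch, with entities as a placeholder for the 40 objects being written:

from google.appengine.ext import ndb

futures = [entity.put_async() for entity in entities]  # entities: placeholder
ndb.Future.wait_all(futures)
for future in futures:
    # get_result() re-raises any exception (e.g. contention) from the RPC,
    # so the handler can catch it or let it propagate as a 500.
    future.get_result()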
