Gatling Scala Gradle, how to set requestTimeout - gatling

I'm using the Gatling Gradle plugin and I'm trying to increase the default request timeout.
https://gatling.io/docs/current/extensions/gradle_plugin/
This doesn't work.
I constantly get
> i.g.h.c.i.RequestTimeoutException: Request timeout to localhost/127.0.0.1:8080 after 60000 ms    47 (94,00%)
my gatling.conf is
gatling {
  core {
    http {
      pooledConnectionIdleTimeout = 600000 # Timeout in millis for a connection to stay idle in the pool
      requestTimeout = 1000000             # Timeout in millis for performing an HTTP request
    }
  }
}
I tried corrupting my gatling.conf and the build fails:
/build/resources/gatling/gatling.conf: 8: Key 'qd qw qd qd' may not be followed by token: 'core' (if you intended 'core' to be part of a key or string value, try enclosing the key or value in double quotes)
So Gatling really does read my file, but it doesn't override the setting.
Does anyone know how to override it?

Your configuration file is wrong.
Your file, properly formatted:
gatling {
  core {
    http {
      requestTimeout = 1000000
    }
  }
}
How it should be, as in the documentation:
gatling {
  core {
    # core options
  }
  http {
    requestTimeout = 1000000
  }
}
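Applied to the config from the question, a sketch of a corrected gatling.conf: both timeouts live under the http block (as in gatling-defaults.conf), and the values are the ones from the question:
gatling {
  http {
    pooledConnectionIdleTimeout = 600000 # Timeout in millis for a connection to stay idle in the pool
    requestTimeout = 1000000             # Timeout in millis for performing an HTTP request
  }
}
With the Gradle plugin this file is typically placed under src/gatling/resources/, which matches the build/resources/gatling/gatling.conf path shown in the parse error above.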

Related

After adding more products to a WooCommerce WordPress store, GraphQL fails

Does anyone know how to fix this GraphQL error? It appeared after I added more WooCommerce products. The URL seems to be fine, because after deleting some of the WooCommerce products everything starts working normally again.
ERROR
timeout of 30000ms exceeded
ERROR #gatsby-source-wordpress_111006
gatsby-source-wordpress It took too long for https://my-web-url/graphql to respond (longer than 30 seconds).
Either your URL is wrong, you need to increase server resources, or you need to decrease the amount of resources each
request takes.
You can configure how much resources each request takes by lowering your `options.schema.perPage` value from the default
of 100 nodes per request.
Alternatively you can increase the request timeout by setting a value in milliseconds to `options.schema.timeout`, the
current setting is 30000.
GraphQL request to https://my-web-url/graphql failed.
The output is quite self-explanatory: you've reached the timeout threshold because of the additional data to fetch.
As the output suggests, you can add a few options to gatsby-source-wordpress to customize that limit:
{
  resolve: `gatsby-source-wordpress`,
  options: {
    schema: {
      timeout: 30000,
    },
  },
}
The timeout defaults to 30000 ms.
Additionally, you can change the number of nodes fetched per page (perPage).
Mixing both customizations:
{
  resolve: `gatsby-source-wordpress`,
  options: {
    schema: {
      timeout: 30000,
      perPage: 100,
    },
  },
}
Play around increasing those default values to see if your requests succeed.
To fix this issue you need to raise the timeout in your gatsby-config.js file by adding the schema option:
options: { schema: { timeout: 1000000 } }
But solely doing this will probably not be enough: if you are getting a timeout error, your WordPress server is either already overloaded or will be shortly. You need to raise the allocated memory on your WordPress server. You can do that with an FTP client like FileZilla by adding this line to wp-config.php:
define( 'WP_MEMORY_LIMIT', '512M' );
If you don't have that much data you should choose a lower value like 256M.
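Putting both answers together, a sketch of the relevant gatsby-config.js entry with a raised timeout and a lowered perPage; the exact values are illustrative and should be tuned for your data set:
{
  resolve: `gatsby-source-wordpress`,
  options: {
    url: `https://my-web-url/graphql`, // the GraphQL endpoint from the error above
    schema: {
      timeout: 1000000, // raise the per-request timeout (ms)
      perPage: 20,      // fetch fewer nodes per request so each one stays fast
    },
  },
}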

scala - Gatling - I can't seem to use Session Variables stored from a request in a subsequent Request

The code:
package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class StarWarsBasicExample extends Simulation {

  // 1 Http Conf
  val httpConf = http.baseUrl("https://swapi.dev/api/films/")

  // 2 Scenario Definition
  val scn = scenario("Star Wars API")
    .exec(http("Get Number")
      .get("4")
      .check(jsonPath("$.episode_id")
        .saveAs("episodeId"))
    )
    .exec(session => {
      val movie = session("episodeId").as[String]
      session.set("episode", movie)
    }).pause(4)
    .exec(http("$episode")
      .get("$episode"))

  // 3 Load Scenario
  setUp(
    scn.inject(atOnceUsers(1)))
    .protocols(httpConf)
}
Trying to grab a variable from the first Get request, and inject that variable into a second request, but unable to do so despite using the documentation. There might be something I'm not understanding.
When I use breakpoints and step through, it appears the session execution happens AFTER both of the other requests have been completed (by which time it is too late). I can't seem to make that session execution happen between the two requests.
Already answered on Gatling's community mailing list.
"$episode" is not correct Gatling Expression Language syntax. "${episode}" is correct.

Cosmos DB error connect ETIMEDOUT when trying to call many times

I have an array of items (to test I used around 250). Within each item is an ID that I am trying to read from Cosmos DB. I am doing so in a simple for loop:
for (i = 0; i < arr.length; i++) {
  var func = find(context, arr[i].id)
}
Within find I simply call Cosmos DB to read the document. This works fine on individual items, or if I use small arrays (20-50); however, with large arrays I get the following error:
{ FetchError: request to mycosmossite/docs failed, reason: connect ETIMEDOUT
    message:
     'request to mycosmossite/docs failed, reason: connect ETIMEDOUT',
    type: 'system',
    errno: 'ETIMEDOUT',
    code: 'ETIMEDOUT',
    headers:
     { 'x-ms-throttle-retry-count': 0,
       'x-ms-throttle-retry-wait-time-ms': 0 } }
I am not sure why this is happening. I also get this from time to time when using request-promise, but if I try again without changing anything it often works. I am not sure if this is linked:
Exception: RequestError: Error: connect ETIMEDOUT
Can someone offer a solution so I can work on larger arrays here? Is this a throttling issue?
Thanks
I maintain the Azure Cosmos DB JS SDK. Are you using the SDK to make these calls? We don't throw ETIMEDOUT anywhere inside the SDK, so it is bubbling up from the Node.js or browser layer. Possibly you are overwhelming the networking stack or event loop by opening many downstream connections and promises. As currently written, your code will open arr.length concurrent backend requests. Did you mean to await the result of each request? Example:
// inside an async function
for (i = 0; i < arr.length; i++) {
  var func = await find(context, arr[i].id)
}
You could also batch the requests using a package like p-map and its concurrency parameter.
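A minimal sketch of the p-map approach, assuming find(context, id) returns a promise and that a concurrency of 10 is acceptable for your account's throughput (the value is just an example to tune):
const pMap = require('p-map');

// Process the array with at most 10 Cosmos DB reads in flight at a time,
// instead of opening arr.length concurrent requests.
const results = await pMap(
  arr,
  item => find(context, item.id),
  { concurrency: 10 }
);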

Random -1 response status while uploading to AWS S3 using pre-signed urls

Our front-end uploads documents to S3 using pre-signed URLs and seems to be failing randomly. This part of the functionality is very critical to us.
Our pre-signed URLs are generated by the back-end using boto3.
[...]
@classmethod
def get_presigned_url(cls, filename, user, content_type, size=None):
    client = cls.get_s3_client()
    import logging
    logging.info(cls.generate_keyname(filename, user))
    key = cls.generate_keyname(filename, user)
    params = {'Bucket': cls.s3_staging_bucket, 'Key': key,
              'ContentType': content_type}
    if size:
        params['ContentLength'] = size
    # It's private as default
    if cls.is_private:
        params['ACL'] = 'private'
    else:
        params['ACL'] = 'public-read'
    return client.generate_presigned_url(
        'put_object',
        Params=params,
        ExpiresIn=600
    ), cls.get_url(key, cls.s3_staging_bucket)
[...]
So the front-end sends the following information to request an upload link:
[...]
// Request Presigned url
Restangular.all('upload').all('get_presigned_url').post(
  {
    'resource_type': 'candidate-cv',
    'filename': vm.file.name,
    'size': vm.file.size || null,
    'content_type': vm.file.type || 'application/octet-stream'
  }
).then(
[...]
Things to note in the above example: the size and type are not available in all browsers, so I have to fall back to defaults.
Once the link is retrieved, the front-end attempts to upload directly to the S3 bucket:
[...]
$http.put(
  data['presigned_url'],
  vm.file,
  {
    headers: {
      'Content-Type': vm.file.type || 'application/octet-stream',
      'Authorization': undefined // Needed to remove default ApiKey
    }
  }
).then(
[...]
The above code sometimes gives a -1 response. "Sometimes" is a problem because it happens way too often, probably in around 3% of cases.
We have inserted a debug logger that sends debug information on every bad response, but everything really seems to be all right there.
Our facts so far:
In the beginning it seemed to me like a connectivity issue, but shouldn't the response status be 0 instead of -1?
It happens way too often for a connectivity issue (~3%).
It happens on a whole range of user agents: Windows/Mac, Chrome/Edge, mobile/desktop, old and new.
It happens with a whole range of document formats: docx/doc/pdf.
The same users tried several times in a row during a 1-hour period and all attempts failed with -1.
The same users with the same user agents seem to be able to upload successfully the day before or the day after.
We are unable to replicate it.
What do we do wrong? What direction should we take to investigate this problem? What next steps should we follow to solve the issue?
Thanks for your input.
EDIT:
As #tcrite suggested, -1 means a client-side timeout. That seemed correct and let me replicate the problem in my local env. We updated the production server, adding long client timeouts: 250 seconds.
But just recently we have got several -1 responses again. A user tried to submit a file 6 times in 2 minutes, all resulting in a -1 response code, and the timeout config was present:
Response:
{
  "data": null,
  "status": -1,
  "config": {
    "method": "PUT",
    "transformRequest": [
      null
    ],
    "transformResponse": [
      null
    ],
    "jsonpCallbackParam": "callback",
    "headers": {
      "Content-Type": "application/msword",
      "Accept": "application/json, text/plain, */*"
    },
    "timeout": 250000,
    "url": "https://stackoverflow-question.s3.amazonaws.com/uploads/files/a-b-a36b9b2f216..."
  }
}
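For reference, this is roughly how that 250 second client-side timeout is passed to the earlier AngularJS call (the timeout option of $http is in milliseconds, matching the "timeout": 250000 echoed in the response above):
$http.put(
  data['presigned_url'],
  vm.file,
  {
    timeout: 250000, // client-side timeout in ms
    headers: {
      'Content-Type': vm.file.type || 'application/octet-stream',
      'Authorization': undefined // Needed to remove default ApiKey
    }
  }
).then(/* ... */);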
It can't be an S3 timeout, as I tried in my local env to upload a file over a slow connection for ~5 minutes and it was uploaded successfully.
I think you should make a server-side web application to upload the files (rather than browser-based Angular), because browsers are sometimes restricted by company policy.
Check this Python Django application; I believe you are already using Python:
https://testdriven.io/blog/storing-django-static-and-media-files-on-amazon-s3/
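A minimal sketch of that server-side route, assuming a Django view that reuses the helpers from the question's class (here called DocumentStorage; the class name and the 'file' form field are illustrative, not from the question):
# Hypothetical Django view: the browser posts the file to the back-end,
# which uploads it to S3 with boto3 instead of using a pre-signed URL.
from django.http import JsonResponse

def upload_document(request):
    uploaded = request.FILES['file']                      # form field name is an assumption
    client = DocumentStorage.get_s3_client()              # helper from the question's class (assumed name)
    key = DocumentStorage.generate_keyname(uploaded.name, request.user)
    client.upload_fileobj(
        uploaded,
        DocumentStorage.s3_staging_bucket,
        key,
        ExtraArgs={'ContentType': uploaded.content_type or 'application/octet-stream'},
    )
    return JsonResponse({'key': key})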

Long-running script that polls an external server with a variable backoff timer?

I am writing a long-running script based on the Amphp library that will poll an external server for a list of tasks to run and then execute those tasks.
The response from the server contains the backoff timer that controls when the script makes its next request.
Since I am very new to async programming, what I am trying is not working.
I tried to create an \Amp\repeat() that had an \Amp\Pause(1000) so that each repeat would pause for 1 second.
Here's my test code:
function test() {
    // http request goes here...
    echo 'server request '.microtime(true).PHP_EOL;
    // based on the server request, change the pause time
    yield new \Amp\Pause(1000);
}

Amp\execute(function () {
    \Amp\onSignal(SIGINT, function () {
        \Amp\stop();
    });
    \Amp\repeat(100, function () {
        yield from test();
    });
});
What I expected to happen was that on each repeat, the test() function would pause for 1 second after the echo, but instead the echo ran every 100 ms (the repeat time).
In the past I would accomplish this with a while loop and usleep(), but since usleep() is blocking, that defeats the purpose.
I'm using PHP 7.0 and Amphp from github master branch.
\Amp\repeat calls the callback every 100 milliseconds, regardless of when the callback terminates.
\Amp\execute(function () {
    /* onSignal handler here for example */
    new \Amp\Coroutine(function () {
        while (1) {
            /* dispatch request */
            echo 'server request '.microtime(true).PHP_EOL;
            yield new \Amp\Pause(100);
        }
    });
});
This is using a normal loop which only continues 100 ms after the last action.
[If I misunderstood what exactly you want, please note in comments.]
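To get the variable backoff the question asks for, a sketch of the same loop where the pause duration comes from the server response; getBackoffMs() is a hypothetical helper standing in for whatever parses the backoff value out of the task list response:
new \Amp\Coroutine(function () {
    $backoffMs = 1000; // initial backoff before the first response (assumption)
    while (1) {
        /* dispatch request, then read the next backoff out of the response */
        echo 'server request '.microtime(true).PHP_EOL;
        // $backoffMs = getBackoffMs($response); // hypothetical helper
        yield new \Amp\Pause($backoffMs);
    }
});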
