Gatling: any thoughts on writing responses to a file? Will it be thread-safe, and will it add overhead to load test time or overall performance?

I need to save the responses from my two tests, which give two variations. I need to calculate the ratio of the responses received.
response 1
{
"var1": "a1"
}
response 2
{
"var1": "b1"
}
I was thinking about writing the responses into files and then writing a method to read those files and calculate the ratios.
Is there any other way to do this in Gatling?

Gatling stores its results in target/gatling/<simulation>. This folder also contains the raw log file <logfile>.log, which you can parse yourself after the run. Note that it only lets you differentiate between requests; it doesn't log the returned responses, so from your description I'm not sure whether this fits your needs.
Gatling already has a parser for the log file that you should be able to interface with easily; it's used to generate Gatling's fancy reports: https://github.com/gatling/gatling/blob/master/gatling-charts/src/main/scala/io/gatling/charts/stats/LogFileReader.scala.
I also wrote a much simpler parser myself; if you're interested, I can put it on GitHub.
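Since the goal is only the ratio of two response variants, another option is to skip files entirely and count the variants in memory with thread-safe counters, avoiding both file I/O and locking on the hot path. Below is a minimal sketch using the Gatling Java DSL (available since Gatling 3.7; the base URL, the /endpoint path, and the user count are placeholder assumptions, and the jsonPath check extracts var1 from the responses shown above):

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;
import java.util.concurrent.atomic.LongAdder;

public class VariantRatioSimulation extends Simulation {

    // Thread-safe counters: no locks, no file I/O during the run.
    private static final LongAdder A1 = new LongAdder();
    private static final LongAdder B1 = new LongAdder();

    HttpProtocolBuilder httpProtocol = http.baseUrl("http://localhost:8080");

    ScenarioBuilder scn = scenario("variant ratio")
            .exec(http("get variant")
                    .get("/endpoint")
                    .check(jsonPath("$.var1").saveAs("var1")))
            .exec(session -> {
                // Count which variant this response contained.
                if ("a1".equals(session.getString("var1"))) A1.increment();
                else B1.increment();
                return session;
            });

    {
        setUp(scn.injectOpen(atOnceUsers(100))).protocols(httpProtocol);
    }

    @Override
    public void after() {
        long a = A1.sum(), b = B1.sum();
        System.out.println("a1=" + a + ", b1=" + b
                + ", ratio=" + (b == 0 ? "n/a" : String.valueOf((double) a / b)));
    }
}

LongAdder is designed for high-contention counting, so the bookkeeping cost is negligible compared to writing each response to disk.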

Related

SageMaker Hyperparameter Tuning Job Mechanism

Does anyone know what the mechanism behind hyperparameter tuning jobs in AWS SageMaker is?
Specifically, I am trying to do the following:
Bring my own container
Minimize cross-entropy loss (this will be the objective metric of the tuner)
My question is: when we define the hyperparameters in the HyperParameterTuner class, do they get copied into /opt/ml/input/config/hyperparameters.json?
If so, should one adjust the training image so that it uses the hyperparameters from /opt/ml/input/config/hyperparameters.json?
Edit: I've looked into some sample HPO notebooks that AWS provides, and they confuse me even more. Sometimes they use argparse to pass in the HPs. How are those passed into the training code?
So I finally figured it out; I had it wrong the whole time.
The file /opt/ml/input/config/hyperparameters.json is there. It just has slightly different content compared to a regular training job: it contains the params to be tuned as well as the static params, along with the metric name.
So here is the structure; I hope it helps:
{
  "_tuning_objective_metric": "your-metric",
  "dynamic-param1": "0.3",
  "dynamic-param2": "1",
  "static-param1": "some-value",
  "static-paramN": "another-value"
}
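If your container reads this file directly, here is a minimal sketch of doing so in Java (the class name is hypothetical; it assumes the flat string-to-string structure shown above and uses a naive regex purely for illustration, so prefer a real JSON library in production code):

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Hyperparameters {

    // SageMaker writes a flat JSON object whose keys and values are all
    // strings, so a naive regex is enough for a sketch.
    private static final Pattern PAIR =
            Pattern.compile("\"([^\"]+)\"\\s*:\\s*\"([^\"]*)\"");

    public static Map<String, String> load() throws Exception {
        String json = Files.readString(
                Path.of("/opt/ml/input/config/hyperparameters.json"));
        Map<String, String> params = new HashMap<>();
        Matcher m = PAIR.matcher(json);
        while (m.find()) {
            params.put(m.group(1), m.group(2));
        }
        return params;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> hp = load();
        System.out.println("objective: " + hp.get("_tuning_objective_metric"));
        // Values arrive as strings; parse them to the types you need.
        double p1 = Double.parseDouble(hp.getOrDefault("dynamic-param1", "0.1"));
        System.out.println("dynamic-param1 = " + p1);
    }
}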
If you bring your own container, you should also consider pip-installing the SageMaker Training Toolkit. It lets your training script receive the hyperparameters as command-line arguments (to be processed with argparse), which saves you from reading and parsing /opt/ml/input/config/hyperparameters.json yourself.

Analyse gzip request

I am exploring mountebank and have a case where I need to analyse a gzipped JSON request in order to create a predicate that returns the appropriate response. Can I unzip a JSON request and analyse the JSON with mountebank?
It does sound possible using injection: you should be able to require zlib in your JavaScript function, use it to unzip the payload, parse the result as JSON, and then return a response as you see fit.
Depending on what you want to return, though, you may need a combination of predicate injection (where a simple true/false determines whether or not the stub responds) and response injection (where you can tailor the response depending on the content of the payload).
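In mountebank itself this logic would live inside the JavaScript injection function (using Node's zlib module). Purely to illustrate the decompress-then-match step the predicate has to perform, here is a self-contained sketch in Java; the {"type":"order"} payload and the match rule are made-up examples:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipPredicateSketch {

    // Decompress a gzipped request body back to its JSON text.
    static String gunzip(byte[] body) throws IOException {
        try (GZIPInputStream in =
                     new GZIPInputStream(new ByteArrayInputStream(body));
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            in.transferTo(out);
            return out.toString(StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate a gzipped JSON request body (hypothetical payload).
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("{\"type\":\"order\"}".getBytes(StandardCharsets.UTF_8));
        }
        String json = gunzip(buf.toByteArray());
        // The predicate decision: respond only if the payload matches.
        boolean matches = json.contains("\"type\":\"order\"");
        System.out.println(json + " -> matches: " + matches);
    }
}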
Sorry for the late reply, but I thought I would add an answer.
Please see https://groups.google.com/forum/#!topic/mountebank-discuss/lvJq9PdIRlo for an update. There is now an open ticket to add support for this.

Any examples of using a WandSearcher in Vespa? (after a weighted set query)

Currently I am using the REST interface to query Vespa, which seems to work great, but something tells me I should be using searchers in the application to make the client (server-side code) a bit lighter (bundling the JAR file in the application package) and smoother. I have managed to write some simple searcher/processor applications, but this is a bit overwhelming.
So, are there any readily available examples?
Basically, I want to:
Send to /search?query=someId
Do an ordinary search for the weighted set on this document ID (I guess this one can be handy: https://docs.vespa.ai/documentation/reference/inspecting-structured-data.html)
Take the items in that response, add them to a WandItem, and query a given field with the WandSearcher, similar to this YQL: select * from sources * where wand(interest, <some weighted set>); with "ranking": "combined_score", and return the matches.
Also, just out of curiosity: apart from the hassle of the string building I'm doing for the HTTP request at the moment, are there any performance gains from using a searcher, i.e. going the Java route versus REST?
Thanks for any insight or code I can start with.
There is an example of using the WandItem (YQL wand()) here: https://docs.vespa.ai/documentation/advanced-ranking.html, and see also https://docs.vespa.ai/documentation/using-wand-with-vespa.html, as there are two WAND implementations available in Vespa; from your description it sounds like wand() is what you want for this use case. For the first call you probably want a dedicated document summary to reduce the amount of data fetched for your first query, with the option of serving it out of memory only (see https://docs.vespa.ai/documentation/document-summaries.html).
Also see https://docs.vespa.ai/documentation/searcher-development.html as a general resource on writing searchers.
For your use case it makes a lot of sense to write a searcher to perform these two queries, since your second query depends on the first, and you avoid the cost of rendering/HTTP/YQL parsing, which might matter if your client is remote with high network latency.
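As a starting point, here is a hedged sketch of such a searcher (the field names documentid and interest, the rank profile combined_score, and the target-hits value are assumptions taken from the question; the extraction of the weighted set from the first hit is left as a comment because it depends on your schema):

import com.yahoo.prelude.query.WandItem;
import com.yahoo.prelude.query.WordItem;
import com.yahoo.search.Query;
import com.yahoo.search.Result;
import com.yahoo.search.Searcher;
import com.yahoo.search.result.Hit;
import com.yahoo.search.searchchain.Execution;

public class WeightedSetWandSearcher extends Searcher {

    @Override
    public Result search(Query query, Execution execution) {
        // 1) Ordinary lookup of the source document by the incoming
        //    query string (/search?query=someId).
        Query first = query.clone();
        first.getModel().getQueryTree().setRoot(
                new WordItem(query.getModel().getQueryString(), "documentid"));
        Result seed = execution.search(first);
        execution.fill(seed);  // make sure summary fields are available

        // 2) Build a WandItem over the "interest" field from the weighted
        //    set found in the first result.
        WandItem wand = new WandItem("interest", 100);  // 100 = target hits
        for (Hit hit : seed.hits()) {
            // Hypothetical: walk the hit's weighted-set field and call
            // wand.addToken(token, weight) for each entry; the exact
            // traversal depends on your schema (see the
            // inspecting-structured-data doc linked above).
        }
        Query second = query.clone();
        second.getModel().getQueryTree().setRoot(wand);
        second.getRanking().setProfile("combined_score");
        return execution.search(second);
    }
}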

How to send a variable-length list or array of parameters with an HTTP POST request in JMeter?

I'm writing JMeter load tests for an ASP.NET web application, and the tests should post some data to the server. Specifically, they should post grades for all the pupils in a class. However, the tests are supposed to be general, so that they can be run against different schools with a small change in the configuration.
This creates a problem when the grades are posted, since the number of parameters in the POST request (pupils in the class) can vary from run to run, or even from thread to thread. Currently I only know how to pass a fixed set of parameters through the HTTP request form.
In the next thread, however, there could be a saveModel.PupilOrderAndBehaviours[2], or even up to 30. I have all of this information available directly from CSV files. That is, I can tell JMeter ahead of time how many pupils will be in each class and what grades each of them should receive, so I do not need to read this out of previous responses or anything like that.
Is there a way, potentially using BeanShell, to configure JMeter to do this correctly?
It can be done with a BeanShell PreProcessor:
int count = 10;  // number of parameters to add for this request
for (int i = 1; i <= count; i++) {
    // Adds Parameter1..Parameter10 to the sampler before it fires.
    sampler.addArgument("Parameter" + i, "Value" + i);
}
This adds the 10 parameters at run time.
Please refer to this site: http://theworkaholic.blogspot.com/2010/03/dynamic-parameters-in-jmeter.html
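To drive the count and the values from your CSV files instead of hard-coding them, the same PreProcessor could look roughly like the sketch below (the variable names pupilCount and grade1..gradeN are assumptions; adjust them to whatever your CSV Data Set Config defines, and the parameter name mirrors the ASP.NET model-binding style from the question):

// BeanShell PreProcessor: build the parameter list from CSV-driven variables.
int count = Integer.parseInt(vars.get("pupilCount"));
for (int i = 0; i < count; i++) {
    // e.g. saveModel.PupilOrderAndBehaviours[0] = value of ${grade1}
    sampler.addArgument("saveModel.PupilOrderAndBehaviours[" + i + "]",
                        vars.get("grade" + (i + 1)));
}

vars and sampler are standard bindings that JMeter exposes to BeanShell, so no imports are needed.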

How to implement lossless URL shortening

First, a bit of context:
I'm trying to implement URL shortening on my own server (in C, if that matters). The aim is to avoid long URLs while being able to restore a context from a shortened URL.
Currently I have an implementation that creates a session on the server, identified by a certain ID. This works, but it consumes memory on the server (which is not desirable, since it's an embedded server with limited resources, and the main purpose of the device isn't providing web pages but doing other cool stuff).
Another option would be to use cookies or HTML5 web storage to store the session information on the client.
But what I'm searching for is the possibility of packing the shortened URL's parameters into one parameter that I attach to the URL, and being able to reconstruct the original parameters from that one.
My first thought was to use Base64 encoding to put all the parameters into one, but this produces an even larger URL.
Currently, I'm thinking of compressing the URL parameters (using some compression algorithm like zip, bz2, ...), Base64-encoding the compressed binary blob, and using that as the context. When I get the parameter back, I can Base64-decode it, decompress the result, and have the original URL parameters in hand.
The question is: is there any other possibility that I'm overlooking that I could use to losslessly compress a large list of URL parameters into a single smaller one?
Update:
After the comments from home, I realized I had overlooked that compression itself adds overhead to the compressed data, potentially making the result even larger than the original because of the container overhead that, for example, zipping adds to the content.
So (as home states in the comments), I'm starting to think that compressing the whole list of URL parameters is only really useful if the parameters exceed a certain length, because otherwise I could end up with an even larger URL than before.
You can always roll your own compression. If you simply apply some Huffman coding, the result will always be smaller (but then Base64-encoding it will grow it a bit, so the net effect may not be optimal).
I'm using a custom compression strategy on an embedded project I work on, where I first use LZJB (a Lempel-Ziv derivative; follow the link for source code, a really tight implementation from OpenSolaris), followed by Huffman coding of the compressed result.
The LZJB algorithm doesn't perform too well on very short inputs, though (~16 bytes), in which case I leave the data uncompressed.
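To illustrate the compress-then-Base64 round trip from the question, here is a sketch in Java (the parameter string is a made-up example; in C the equivalent is zlib's deflate with a raw stream). Raw DEFLATE (nowrap = true) skips the zlib header and trailer, which keeps the container overhead mentioned in the update to a minimum, and URL-safe Base64 without padding keeps the token usable as a single URL parameter:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class UrlParamCodec {

    // Compress with raw DEFLATE and encode as a URL-safe Base64 token.
    static String shorten(String params) {
        byte[] input = params.getBytes(StandardCharsets.UTF_8);
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION, true);
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length * 2 + 64];  // ample for a demo
        int n = deflater.deflate(buf);
        deflater.end();
        return Base64.getUrlEncoder().withoutPadding()
                     .encodeToString(Arrays.copyOf(buf, n));
    }

    // Reverse the process: Base64-decode, then inflate.
    static String restore(String token) throws DataFormatException {
        Inflater inflater = new Inflater(true);  // raw DEFLATE again
        inflater.setInput(Base64.getUrlDecoder().decode(token));
        byte[] buf = new byte[64 * 1024];
        int n = inflater.inflate(buf);
        inflater.end();
        return new String(buf, 0, n, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws DataFormatException {
        String params = "mode=edit&view=details&filter=active&sort=name&page=3";
        String token = shorten(params);
        System.out.println(params.length() + " -> " + token.length() + ": " + token);
        System.out.println(restore(token).equals(params));  // true
    }
}

Note how, exactly as the update predicts, short or incompressible inputs can come back longer than they went in, so it is worth comparing lengths and only using the compressed form when it actually wins.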
