Does Gatling provide a way to compare previously run tests

I've been running Gatling tests and I have a whole bunch of reports in the results folder.
For example I have a report for 200 requests per second and one for 400 requests per second.
Is there any way to compare the reports against each other?

There's only the Jenkins plugin for now.
That's something we plan on providing as a commercial offer.

Gatling itself provides an enterprise version called Gatling FrontLine, which does support trends and run history.
Another possibility is to take your simulation.log files and process them with the Nuxeo gatling-report utility, like this:
# check https://maven-eu.nuxeo.org/nexus/#nexus-search;quick~gatling-report for latest version, or build from source
wget 'https://maven-eu.nuxeo.org/nexus/service/local/repositories/public-releases/content/org/nuxeo/tools/gatling-report/4.0/gatling-report-4.0-capsule-fat.jar'
# do not create outputReportDirectory !
java -jar gatling-report-4.0-capsule-fat.jar results/complexscenario-20200618125705159/simulation.log results/complexscenario-20200617130307094/simulation.log -o outputReportDirectory
If you intend to generate trends during a Maven build, you can have a look at DennisRippinger's gatling-reporter Maven plugin, which encapsulates the previously mentioned project.

Related

Deploying AngularJS + Sinatra to AWS

I have an AngularJS site consuming an API written in Sinatra.
I'm simply trying to deploy these 2 components together on an AWS EC2 instance.
How would one go about doing that? What tools do you recommend? What structure do you think is most suitable?
Cheers
This is based on my experience with the HashiCorp line of tools.
Manual: Launch an Ubuntu image, gem install sinatra, and deploy your code. Take a snapshot for safekeeping. This one-off approach is good for a development box to iron out the configuration process. Write down the commands you run and any options you may need.
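A minimal sketch of that manual pass, assuming a stock Ubuntu AMI (the package names and app entry point are illustrative):
# provision a fresh Ubuntu instance by hand
sudo apt-get update
sudo apt-get install -y ruby ruby-dev build-essential nginx
sudo gem install sinatra
# copy the Sinatra API and AngularJS assets onto the box, then e.g.:
ruby app.rb -p 4567 &
# finally, configure nginx to serve the static AngularJS files
# and proxy API requests to port 4567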
Automated: Use the Packer EC2 Builder and Shell Provisioner to automate your commands from the previous manual approach. This will give you a configured AMI that can be launched.
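A sketch of that automation with Packer's amazon-ebs builder and shell provisioner; the AMI ID, region, and inline commands are placeholders:
# write a minimal Packer template (legacy JSON format)
cat > sinatra-ami.json <<'EOF'
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "angular-sinatra-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y ruby nginx",
      "sudo gem install sinatra"
    ]
  }]
}
EOF
packer validate sinatra-ami.json
packer build sinatra-ami.json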
You can apply different methods of getting to an AMI using different toolsets. However, in the end, you want a single immutable image that can be deployed repeatedly.

Install Jetty or run embedded for Solr install

I am about to install Solr on a production box. It will be the only Java application running, and it will be on the same box as the web server (nginx).
It seems there are two options.
1. Install Jetty separately and configure it for use with Solr
2. Set Solr's embedded Jetty server to start as a service and just use that
Is there any performance benefit in having them separate?
I am a big fan of KISS, the less setup the better.
Thanks
If you want KISS there is no question: option 2, stick to the vanilla Solr distribution with the included Jetty.
Doing the work of installing an external servlet engine would make sense if you needed Tomcat, for example, but installing a separate copy of the same thing (Jetty) that Solr already includes... no way.
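For reference, starting the bundled Jetty from a Solr distribution of that era is a one-liner (directory names follow the 3.x/4.x example layout, so treat the paths as illustrative):
# start Solr with its embedded Jetty
cd solr-4.x.x/example
java -jar start.jar
# Solr then answers at http://localhost:8983/solr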
Solr is still using Jetty 6, so there would be some benefit if you could get the Solr application to run in a recent Jetty distribution. For example, you could use Jetty 9 and features like SPDY to improve the response times of your application.
However, I have no idea or experience whether it's possible to run the Solr application in a standalone servlet engine.
Another option for running Solr while keeping things simple is Solr-Undertow, a high-performance, small-footprint server for Solr. It is easy to use on local machines for development as well as in production. It supports simple config files for running instances with different data directories, ports and more. It can also run by pointing it at a distribution .zip file, without needing to unpack it.
(note, I am the author of Solr-Undertow)
Link here: https://github.com/bremeld/solr-undertow with releases under the "Releases" tab.
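Running it is roughly like the following; the version and config file name are placeholders, so check the project README and the release archive layout for the real ones:
# download a release from the "Releases" tab, unpack it,
# and point it at a config file (paths here are illustrative)
tar xzf solr-undertow-<version>.tgz
cd solr-undertow-<version>
./bin/solr-undertow example/example.conf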

Can we schedule Selenium test cases

I have created Selenium Web Driver test cases and running it in Maven.
Can we schedule Selenium test cases to run at a user-given date/time?
I googled and found a few options, like (1) creating a batch file and then adding it to the Windows scheduler, or (2) using Jenkins.
Somewhere, the Quartz Scheduler was also suggested.
Is there any better method, or which is the best among these options?
Thanks !!!
We use Jenkins since it offers a variety of options: it connects to different source control systems, lets you build your project in a variety of ways (scripts, Maven commands, Ant, etc.), supports dependent jobs, has a wide range of plugins that add functionality, can share reports and send emails, and can schedule jobs on a variety of triggers as well, such as a point in time, a commit in your repo, or your build being deployed. I haven't used the Quartz Scheduler, so no opinion on that.
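If you go with option (1) from the question instead, a minimal sketch is a batch file registered with the Windows Task Scheduler; the paths and the 02:00 start time are illustrative:
REM contents of run-tests.bat: change to the project and run the suite
cd /d C:\projects\selenium-suite
call mvn clean test

REM then register it once with the Task Scheduler to run daily at 02:00
schtasks /create /tn "NightlySeleniumTests" /tr "C:\projects\selenium-suite\run-tests.bat" /sc daily /st 02:00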

push deployment with test automation

We are developing some testing infrastructure and I have hit a coder's block (lack of sleep?)... this seems like it would be a solved problem, but I haven't found what I'm looking for via Google.
I would like to automatically push builds from our CI server (TeamCity) to a number of machines (growing, but currently 30). These are several WinForms apps and a number of dlls. Once deployed, I would like to kick off tests (NUnit, for both unit and integration tests) and report all results (back to CI? or somewhere else? Not sure).
The target machines run a number of platforms (Win7, Vista, XP, Server 2k8, Server 2k3, Ubuntu, Fedora, SUSE, x64, x86, maybe Macs down the line).
This gets me partway there (the actual push), but I can't find existing solutions for 'push starting' the tests and reporting back. So far I am thinking of combining the link (or similar) with custom code running on each client machine that watches the deploy directory, runs the tests, and reports the results.
Does anyone know of existing solutions?
Links?
Done something similar and care to share?
Edit
If possible, we prefer .net based solutions, but it isn't strictly necessary. I would have tagged the question as such, but ran out of tags :)
You could use KwateeSDCM to both push and start on all the platforms you mention, including Mac. However, you'll have to do some coding to get reports out. I'm not familiar with TeamCity, but maybe you could push a script along with your application which could then transfer the test results via FTP to a server accessible by TeamCity.
Have a look at STAF (Software Testing Automation Framework):
The Software Testing Automation Framework (STAF) is an open source, multi-platform, multi-language framework designed around the idea of reusable components, called services (such as process invocation, resource management, logging, and monitoring).
Which includes STAX:
STAX is an execution engine which can help you thoroughly automate the distribution, execution, and results analysis of your testcases.
And there's an article here:
http://agiletesting.blogspot.com/2004/12/stafstax-tutorial.html
Assuming you have the push part done already, and you don't mind using a TeamCity license, you can create a TeamCity Command Line Runner build configuration or NUnit test configuration that kicks off the tests on a properly configured agent. The build trigger for this test config would be successful completion of the application build.
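For instance, the Command Line Runner step could simply invoke the NUnit console runner on the agent; the assembly and result file names are placeholders:
# executed by the Command Line Runner on a properly configured agent
nunit-console YourApp.Tests.dll /xml:nunit-results.xml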
So far I have ended up using a separate build step in TeamCity that executes a bat script that in turn fires off tasks to the list of machines using PsExec. In my trial runs so far it is working OK, though I now need to parallelize the copying of build output...
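The fan-out part of that bat script looks roughly like the following; the machine list, share path, and runner script are placeholders:
REM machines.txt holds one hostname per line
for /f %%m in (machines.txt) do (
  REM copy the build output to the target, then fire the tests without waiting
  xcopy /s /y build\output \\%%m\c$\deploy\
  psexec \\%%m -d c:\deploy\run-tests.bat
)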
Thanks for the input to those who have provided it.

Scripting responses for use in the Maven Release Plugin

We are an SVN/Maven/Hudson shop. We are experimenting with using the Maven Release Plugin to help automate our very laborious tagging and releasing process. We are happy with what we have seen and researched thus far with regard to this plugin.
Our question is: if we need to have different tags for some of the modules/applications being built, is there a way to script the responses?
We have waded through the interactive dry runs successfully; however, we are looking to script these to further our automation.
Has anyone tried this or know if it is possible?
Does the "Batch Mode" allow this functionality?
Thanks
Joe R
You can pass -B (batch mode), but it will use default version names (removing -SNAPSHOT at the end).
Regarding tags per module, you can have a look at the autoVersionSubmodules parameter.
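For example, a batch-mode invocation that pins the versions and tag explicitly might look like this (the version numbers and tag name are illustrative):
# prepare the release non-interactively, overriding the batch-mode defaults
mvn --batch-mode release:prepare \
    -DautoVersionSubmodules=true \
    -DreleaseVersion=1.2.0 \
    -DdevelopmentVersion=1.3.0-SNAPSHOT \
    -Dtag=myapp-1.2.0
mvn release:perform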
/Olivier
