Running a Gatling test from Gradle - gatling

I have created a build.gradle file, and in it I have this dependency:
compile 'io.gatling.highcharts:gatling-charts-highcharts:2.1.7'
I have also created a simulation:
package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class LukeSimulation extends Simulation {

  val httpConf = http
    .baseURL("http://--------:8295/") // Here is the root for all relative URLs
    .doNotTrackHeader("1")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0")

  val headers_10 = Map("Content-Type" -> "application/x-www-form-urlencoded") // Note the headers specific to a given request

  val scn = scenario("TotalUsage") // A scenario is a chain of requests and pauses
    .exec(http("Usage")
      .get("/api/v1/account/10186413349/totalusage"))

  setUp(scn.inject(
    atOnceUsers(700),
    rampUsers(100000) over (30 minutes),
    constantUsersPerSec(200) during (10 minutes),
    rampUsersPerSec(200) to (1000) during (10 minutes)
  ).protocols(httpConf))

  //setUp(scn.inject(rampUsers(500) over (30 seconds)).protocols(httpConf))
  //assertThat(global.failedRequests.count.is(0))
}
How do I execute it with Gradle?

Create a task like this:
task gatling() << {
    javaexec {
        main = 'io.gatling.app.Gatling'
        classpath = sourceSets.test.runtimeClasspath
        args('--simulation',
             'simulations.LukeSimulation',
             '--results-folder',
             file('build/reports/gatling').absolutePath,
             '--mute')
        environment(['appUrl': appUrl])
        systemProperties(['gatling.core.directory.binaries': sourceSets.test.output.classesDir])
    }
}
If you want to pass a parameter to your Gatling run, you can do it this way:
def getAppUrl() {
    def appUrl = System.getProperty('appUrl')
    appUrl ?: 'http://localhost:8295'
}
The command will be: gradle gatling -DappUrl=http://localhost:8000
If you do not pass appUrl, it will default to what is written in the getAppUrl method.
To use the property in your Gatling code, do this (the task passes it to the forked JVM as an environment variable):
val appUrl = System.getenv("appUrl")
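For example, here is a minimal sketch of how the simulation could pick it up (the fallback URL is just an assumption mirroring the getAppUrl default above):

// Read the appUrl environment variable set by the Gradle task; fall back to a
// local default when it is absent (assumed to match getAppUrl in build.gradle).
val appUrl = Option(System.getenv("appUrl")).getOrElse("http://localhost:8295")

val httpConf = http
  .baseURL(appUrl) // all relative request URLs resolve against this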

Related

Gatling Test in Blazemeter creates ClassNotFoundException

I used the Taurus Gatling guide to create a simple performance test and uploaded the YAML and Scala file to Blazemeter. When I start the test in Blazemeter there is no test result, and the bzt.log contains a ClassNotFoundException.
The validator for the YAML says it's fine, and I can't find anything, so I'm lost...
My blazemeter.yaml:
execution:
- executor: gatling
  scenario: products
  iterations: 15
  concurrency: 3
  ramp-up: 2
scenarios:
  products:
    script: productSimulation.scala
    simulation: test.productSimulation
My productSimulation.scala is mostly copied from their documentation:
package test

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class productSimulation extends Simulation {

  // parse load profile from Taurus
  val t_iterations = Integer.getInteger("iterations", 100).toInt
  val t_concurrency = Integer.getInteger("concurrency", 10).toInt
  val t_rampUp = Integer.getInteger("ramp-up", 1).toInt
  val t_holdFor = Integer.getInteger("hold-for", 60).toInt
  val t_throughput = Integer.getInteger("throughput", 100).toInt

  val httpConf = http.baseURL("https://mydomain/")

  val header = Map(
    "Content-Type" -> """application/x-www-form-urlencoded""")

  val sessionHeaders = Map(
    "Authorization" -> "Bearer ${access_token}",
    "Content-Type" -> "application/json")

  // 'forever' means each thread will execute the scenario until
  // the duration limit is reached
  val loopScenario = scenario("products").forever() {
    // auth
    exec(http("POST OAuth Req")
      .post("https://oauth-provider")
      .formParam("client_secret", "...")
      .formParam("client_id", "...")
      .formParam("grant_type", "client_credentials")
      .headers(header)
      .check(status.is(200))
      .check(jsonPath("$.access_token").exists
        .saveAs("access_token")))
    // read products
    .exec(http("products")
      .get("/products")
      .queryParam("limit", 200)
      .headers(sessionHeaders))
  }

  val execution = loopScenario
    .inject(rampUsers(concurrency) over rampUp) // during for gatling 3.x
    .protocols(httpConf)

  setUp(execution).maxDuration(rampUp + holdFor)
}
After learning that I can execute the Scala file directly as a test (by clicking the file itself rather than the YAML), I got better exceptions.
Basically, I made two mistakes:
My variables are named t_concurrency, ..., while the execution definition uses different names. Oops.
Since Gatling 3.x the keyword for the injection is during, so the correct code is: rampUsers(t_concurrency) during t_rampUp
Now everything works.
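For reference, a minimal sketch of the corrected injection block, assuming the t_-prefixed variables and loopScenario defined above (Gatling 3.x; the .seconds conversions need import scala.concurrent.duration._):

// Use the t_-prefixed variables and the Gatling 3.x 'during' keyword.
val execution = loopScenario
  .inject(rampUsers(t_concurrency) during (t_rampUp.seconds))
  .protocols(httpConf)

setUp(execution).maxDuration((t_rampUp + t_holdFor).seconds)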

Parallelization of event notifications in case of file creation

I am using "Inotify" to log events when a file or folder is created in a directory (/tmp here). The example below does the job as a serial process: all file creations are treated one after the other, sequentially.
import logging
import inotify.adapters

_DEFAULT_LOG_FORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
_LOGGER = logging.getLogger(__name__)

def _configure_logging():
    _LOGGER.setLevel(logging.DEBUG)
    ch = logging.StreamHandler()
    formatter = logging.Formatter(_DEFAULT_LOG_FORMAT)
    ch.setFormatter(formatter)
    _LOGGER.addHandler(ch)

def _main():
    i = inotify.adapters.Inotify()
    i.add_watch(b'/tmp')
    try:
        for event in i.event_gen():
            if event is not None:
                (header, type_names, watch_path, filename) = event
                _LOGGER.info("WD=(%d) MASK=(%d) COOKIE=(%d) LEN=(%d) MASK->NAMES=%s "
                             "WATCH-PATH=[%s] FILENAME=[%s]",
                             header.wd, header.mask, header.cookie, header.len, type_names,
                             watch_path.decode('utf-8'), filename.decode('utf-8'))
    finally:
        i.remove_watch(b'/tmp')

if __name__ == '__main__':
    _configure_logging()
    _main()
I would like to parallelize the event notifications in case several files are uploaded at once, by importing threading. Should I add the threading inside the loop?
My second concern is that I am not sure where it would make sense to put the thread function.
The script below handles multiple events in the case of multiple sessions, which is enough in my case. I added the multiprocessing option instead of threading; I found multiprocessing faster than threading.
import logging
import inotify.adapters
import multiprocessing

_DEFAULT_LOG_FORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
_LOGGER = logging.getLogger(__name__)

def _configure_logging():
    _LOGGER.setLevel(logging.DEBUG)
    ch = logging.StreamHandler()
    formatter = logging.Formatter(_DEFAULT_LOG_FORMAT)
    ch.setFormatter(formatter)
    _LOGGER.addHandler(ch)

def PopUpMessage(event):
    # Runs in its own process; None events (idle ticks) are simply ignored.
    if event is not None:
        (header, type_names, watch_path, filename) = event
        _LOGGER.info("WD=(%d) MASK=(%d) COOKIE=(%d) LEN=(%d) MASK->NAMES=%s "
                     "WATCH-PATH=[%s] FILENAME=[%s]",
                     header.wd, header.mask, header.cookie, header.len, type_names,
                     watch_path.decode('utf-8'), filename.decode('utf-8'))

def My_main():
    i = inotify.adapters.Inotify()
    i.add_watch(b'/PARA')
    try:
        # event_gen() blocks and yields events forever, so no extra while-loop
        # is needed; spawn one process per event.
        for event in i.event_gen():
            m = multiprocessing.Process(target=PopUpMessage, args=(event,))
            m.start()
    finally:
        i.remove_watch(b'/PARA')

if __name__ == '__main__':
    _configure_logging()
    N = multiprocessing.Process(target=My_main)
    N.start()

akka http client system.shutdown() produces "Outgoing request stream error (akka.stream.AbruptTerminationException)" when using https

Hi, the following code works as expected:
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher

val request = HttpRequest(uri = "http://www.google.com")
Http.get(system).singleRequest(request).map(_.entity.dataBytes.runWith(Sink.ignore))
  .onComplete { _ =>
    println("shutting down actor system...")
    system.terminate()
  }
However, if I change http://www.google.com to https://www.google.com like the following:
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher

val request = HttpRequest(uri = "https://www.google.com")
Http.get(system).singleRequest(request).map(_.entity.dataBytes.runWith(Sink.ignore))
  .onComplete { _ =>
    println("shutting down actor system...")
    system.terminate()
  }
I get the following error message:
shutting down actor system...
[ERROR] [02/11/2017 13:13:08.929] [default-akka.actor.default-dispatcher-4] [akka.actor.ActorSystemImpl(default)] Outgoing request stream error (akka.stream.AbruptTerminationException)
Does anyone know why HTTPS produces the above error, and how I can fix it?
It is apparently a known issue; see the following tickets:
akka-http#497
akka#18747
The error seems harmless.
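If you would rather avoid the message than ignore it, one possible workaround (a sketch, assuming you are done with all requests at that point) is to shut down Akka HTTP's connection pools before terminating the actor system, so no pool stream is torn down mid-flight:

Http.get(system).singleRequest(request)
  .flatMap(_.entity.dataBytes.runWith(Sink.ignore))
  .flatMap(_ => Http.get(system).shutdownAllConnectionPools()) // drain pools first
  .onComplete { _ =>
    println("shutting down actor system...")
    system.terminate()
  }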

Can't use reactivestream Subscriber with akka stream sources

I'm trying to attach a Reactive Streams Subscriber to an Akka Streams source.
My source seems to work fine with a simple sink (like a foreach), but if I put in a real sink made from a subscriber, I don't get anything.
My context is:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import org.reactivestreams.{Subscriber, Subscription}

implicit val system = ActorSystem.create("test")
implicit val materializer = ActorMaterializer.create(system)

class PrintSubscriber extends Subscriber[String] {
  override def onError(t: Throwable): Unit = {}
  override def onSubscribe(s: Subscription): Unit = {}
  override def onComplete(): Unit = {}
  override def onNext(t: String): Unit = {
    println(t)
  }
}
and my test case is:
val subscriber = new PrintSubscriber()
val sink = Sink.fromSubscriber(subscriber)
val source2 = Source.fromIterator(() => Iterator("aaa", "bbb", "ccc"))
val source1 = Source.fromIterator(() => Iterator("xxx", "yyy", "zzz"))
source1.to(sink).run()(materializer)
source2.runForeach(println)
I get output:
aaa
bbb
ccc
Why don't I get xxx, yyy, and zzz?
Citing the Reactive Streams specs for the Subscriber:
Will receive call to onSubscribe(Subscription) once after passing an instance of Subscriber to Publisher.subscribe(Subscriber). No further notifications will be received until Subscription.request(long) is called.
The smallest change you can make to see some items flowing through to your subscriber is:
override def onSubscribe(s: Subscription): Unit = {
  s.request(3)
}
However, keep in mind this won't make it fully compliant with the Reactive Streams specs. The fact that a correct Subscriber is not so easy to implement is the main reason behind higher-level toolkits like Akka Streams itself.
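For a slightly more realistic sketch (still glossing over spec details such as null checks and cancellation), keep the Subscription around and signal demand one element at a time:

class PrintSubscriber extends Subscriber[String] {
  private var subscription: Subscription = _

  override def onSubscribe(s: Subscription): Unit = {
    subscription = s
    s.request(1) // signal initial demand; without this nothing is ever pushed
  }

  override def onNext(t: String): Unit = {
    println(t)
    subscription.request(1) // ask for the next element once this one is handled
  }

  override def onError(t: Throwable): Unit = t.printStackTrace()

  override def onComplete(): Unit = println("stream completed")
}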

Collect errors with Gatling?

In my long but simple Gatling simulation, I have a few responses that ended with error 500. Is it possible to tell Gatling to collect these error response messages in a file during the simulation?
Not in production mode. You only have them when debug logging is enabled.
It is possible to collect whatever you want and save it into the simulation.log file. Use the extraInfoExtractor method when you define the protocol:
val httpProtocol = http
  .baseURL(url)
  .check(status.is(successStatus))
  .extraInfoExtractor { extraInfo => List(getExtraInfo(extraInfo)) }
Then define whatever criteria you want in your getExtraInfo(extraInfo: ExtraInfo) method. The example below outputs the request and response when debug is enabled via a Java system property, OR the response code is not 200, OR the status of the request is KO (it can be KO if you have set up some max time and this max time gets exceeded):
private val successStatus: Int = 200
private val isDebug = java.lang.Boolean.getBoolean("debug") // false when -Ddebug is absent, avoiding an NPE

private def getExtraInfo(extraInfo: ExtraInfo): String = {
  if (isDebug
      || extraInfo.response.statusCode.get != successStatus
      || extraInfo.status.eq(Status.valueOf("KO"))) {
    ",URL:" + extraInfo.request.getUrl +
      " Request: " + extraInfo.request.getStringData +
      " Response: " + extraInfo.response.body.string
  } else {
    ""
  }
}
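And a short sketch of the alternative, if you want the 500s in a separate file rather than in simulation.log (the file name and the shared writer are assumptions, not Gatling API; all virtual users write through one writer here, which is simple but not tuned for heavy contention):

import java.io.{FileWriter, PrintWriter}

// Append-mode writer shared by all virtual users (assumed file name).
val errorLog = new PrintWriter(new FileWriter("errors.log", true))

val httpProtocol = http
  .baseURL(url)
  .extraInfoExtractor { extraInfo =>
    if (extraInfo.response.statusCode.exists(_ == 500)) {
      errorLog.println(s"${extraInfo.request.getUrl} -> ${extraInfo.response.body.string}")
      errorLog.flush()
    }
    Nil // nothing extra goes into simulation.log
  }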
