How to get per-scenario reports from parameterized simulations in Gatling?

I'm trying to test a single API that internally does different things based on inputs:
country
customer
amount of items
The following simulation is what I came up with:
val countries = List("US", "CAN")
val customerTypes = List("TYPE1", "TYPE2")
val basketSizes = List(1, 10, 50)

val scenarioGenerator: Seq[(String, String, Int)] = for {
  country      <- countries
  customerType <- customerTypes
  basketSize   <- basketSizes
} yield (country, customerType, basketSize)
def scenarios(): Seq[PopulationBuilder] = {
  var scenarioList = new ArraySeq[PopulationBuilder](countries.size * customerTypes.size * basketSizes.size)
  var i = 0
  for ((country: String, customerType: String, basketSize: Int) <- scenarioGenerator) {
    // fetch customer data for scenario
    val customers = DataFetcher.customerRequest(country, customerType)
    // fetch product data for scenario
    val products = DataFetcher.productRequest(country)
    // generate a scenario with given data and parameters
    val scen = scenario(s"Pricing-(${country},${customerType},${basketSize})")
      // feeder that creates the request object for the gatling user
      .feed(new PricingFeeder(country, customers, products, basketSize))
      .repeat(10) {
        exec(Pricing.price)
          .pause(500 milliseconds)
      }
      .inject(
        rampUsers(10) over (10 seconds)
      )
    scenarioList(i) = scen
    i = i + 1
  }
  scenarioList
}

setUp(scenarios(): _*).protocols(httpProto)
This is run with the Maven plugin (and tracked in Jenkins with the Gatling plugin), but it results in a single tracked case: Pricing. This is useless, as even the item count alone should cause a near-linear increase in response time.
The simulation.log has the data for each scenario type, but the out-of-the-box reporting treats it as a single type of query and merges all the results into a single graph, which means it's impossible to see whether a certain combination causes a spike due to a calculation or data bug.
I'd like to get separate metrics for each of the combinations, so it would be easy to see, for example, that a code or data change in the API caused a latency spike in the Pricing-(US,TYPE1,50) scenario.
What is the idiomatic way of achieving this with Gatling? I don't want to create a simulation for each case, as this would be a nightmare to manage (getting rid of manually managed data and Jenkins jobs with JMeter is what we are trying to achieve).

First of all, it is not good practice to run so many scenarios in one simulation: Gatling runs them in parallel, not sequentially, so you should be sure that is what you want.
If so, you can use the fact that the Gatling report can show graphs per group. Wrap all your requests in a group named after the parameters; in the detailed view of the report you will then be able to select which group to show, e.g.:
val singleScenario = scenario(s"Pricing-(${country},${customerType},${basketSize})")
  .group(s"Pricing-(${country},${customerType},${basketSize})") {
    feed(new PricingFeeder(country, customers, products, basketSize))
      .repeat(10) {
        exec(Pricing.price)
          .pause(500 milliseconds)
      }
  }
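Putting it together with the loop from the question, a minimal sketch (assuming the same scenarioGenerator, DataFetcher, PricingFeeder and httpProto as above) could look like this:

val populations = scenarioGenerator.map { case (country, customerType, basketSize) =>
  val customers = DataFetcher.customerRequest(country, customerType)
  val products = DataFetcher.productRequest(country)
  scenario(s"Pricing-(${country},${customerType},${basketSize})")
    .group(s"Pricing-(${country},${customerType},${basketSize})") {
      feed(new PricingFeeder(country, customers, products, basketSize))
        .repeat(10) {
          exec(Pricing.price)
            .pause(500 milliseconds)
        }
    }
    .inject(rampUsers(10) over (10 seconds))
}

setUp(populations: _*).protocols(httpProto)

All combinations still run in parallel in one simulation, but each one shows up as its own group in the report's detailed view.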
If you do not need all scenarios to run in parallel and want separate reports for separate scenarios, the best way is to implement the simulation as a parametrized abstract class and add a separate class for each parameter set, since in Gatling one simulation equals one report, e.g.:
package com.performance.project.simulations

import io.gatling.core.Predef._
import scala.concurrent.duration._

class UsType1Simulation1 extends ParametrizedSimulation("US", "TYPE1", 1)
class UsType1Simulation10 extends ParametrizedSimulation("US", "TYPE1", 10)
class UsType1Simulation50 extends ParametrizedSimulation("US", "TYPE1", 50)
class UsType2Simulation1 extends ParametrizedSimulation("US", "TYPE2", 1)
class UsType2Simulation10 extends ParametrizedSimulation("US", "TYPE2", 10)
class UsType2Simulation50 extends ParametrizedSimulation("US", "TYPE2", 50)
class CanType1Simulation1 extends ParametrizedSimulation("CAN", "TYPE1", 1)
class CanType1Simulation10 extends ParametrizedSimulation("CAN", "TYPE1", 10)
class CanType1Simulation50 extends ParametrizedSimulation("CAN", "TYPE1", 50)
class CanType2Simulation1 extends ParametrizedSimulation("CAN", "TYPE2", 1)
class CanType2Simulation10 extends ParametrizedSimulation("CAN", "TYPE2", 10)
class CanType2Simulation50 extends ParametrizedSimulation("CAN", "TYPE2", 50)

sealed abstract class ParametrizedSimulation(country: String, customerType: String, basketSize: Int) extends Simulation {

  val customers = DataFetcher.customerRequest(country, customerType)
  val products = DataFetcher.productRequest(country)

  val singleScenario = scenario(s"Pricing-(${country},${customerType},${basketSize})")
    .feed(new PricingFeeder(country, customers, products, basketSize))
    .repeat(10) {
      exec(Pricing.price)
        .pause(500 milliseconds)
    }
    .inject(
      rampUsers(10) over (10 seconds)
    )

  setUp(singleScenario).protocols(httpProto)
}
Of course, this makes sense only with a small number of combinations; with hundreds of them it will get messy.

Related

I am getting an error in Kapt Debug Kotlin. I have updated the dependency versions in the gradle file but am still facing this issue

My app was running smoothly, but now I am getting an error in the kaptDebugKotlin task. I have updated the dependency versions in the gradle file but am still facing this issue. How can it be resolved? I saw somewhere to check your Room database, DAO, and data class, but I am still not able to figure out what the issue is.
The error points to this file:
ROOM DATABASE
@Database(entities = [Transaction::class], version = 1, exportSchema = false)
abstract class MoneyDatabase : RoomDatabase() {

    abstract fun transactionListDao(): transactionDetailDao

    companion object {
        // Singleton prevents multiple instances of the database opening at the same time.
        @Volatile
        private var INSTANCE: MoneyDatabase? = null

        fun getDatabase(context: Context): MoneyDatabase {
            // if the INSTANCE is not null, then return it,
            // if it is, then create the database
            return INSTANCE ?: synchronized(this) {
                val instance = Room.databaseBuilder(
                    context.applicationContext,
                    MoneyDatabase::class.java,
                    "transaction_database"
                ).build()
                INSTANCE = instance
                // return instance
                instance
            }
        }
    }
}
DAO
@Dao
interface transactionDetailDao {

    @Insert(onConflict = OnConflictStrategy.IGNORE)
    suspend fun insert(transaction: Transaction)

    @Delete
    suspend fun delete(transaction: Transaction)

    @Update
    suspend fun update(transaction: Transaction)

    @Query("SELECT * FROM transaction_table ORDER BY id ASC")
    fun getalltransaction(): LiveData<List<Transaction>>
}
DATA CLASS
enum class Transaction_type {
    Cash, debit, Credit
}

enum class Type {
    Income, Expense
}

@Entity(tableName = "transaction_table")
data class Transaction(
    val name: String,
    val amount: Float,
    val day: Int,
    val month: Int,
    val year: Int,
    val comment: String,
    val datePicker: String,
    val transaction_type: String,
    val category: String,
    val recurring_from: String,
    val recurring_to: String
) {
    @PrimaryKey(autoGenerate = true)
    var id: Long = 0
}
The error is resolved. I was using Kotlin version 1.6.0 and reduced it to 1.4.32. As far as I understood, the latest Kotlin version does not work well with Room and coroutines.
I believe that your issue is due to an incorrect class inadvertently being used, a likely culprit being Transaction, as that is also a Room class.
Perhaps in transactionDetailDao (although it might be elsewhere).
See if you have import androidx.room.Transaction (or any other import ending in Transaction).
If so, delete or comment out the import.
(The original answer included screenshots here: one showing the build error with the androidx.room.Transaction import present, and one showing the build succeeding with that import commented out.)
I imported the project from GitHub and had a play; the issue definitely appears to be with coroutines. I commented out the suspends in the Dao:
@Dao
interface transactionDetailDao {

    @Insert(onConflict = OnConflictStrategy.IGNORE)
    /* suspend */ fun insert(transaction: Transaction)

    @Delete
    /* suspend */ fun delete(transaction: Transaction)

    @Update
    /* suspend */ fun update(transaction: Transaction)

    @Query("SELECT * FROM transaction_table ORDER BY id ASC")
    fun getalltransaction(): LiveData<List<Transaction>>
}
It then compiled OK, and I ran it and had a play.

Restrictions on fields of case class as per db spec

I am creating a bunch of case classes in Scala which I will use to write to a db. As the columns in the db have certain restrictions (length, type, null/not null, etc.), how can I enforce the length restrictions on my case class fields without checking every field one by one?
This is how you can put the restrictions on the fields of a case class:
object Solution1 extends App {

  case class Payload(name: String, id: Int, address: String) {
    require(name.length < 10)
    require(address.length <= 50)
  }

  println(Payload("name5678910", 120, "earth")) // this will give you an error
  println(Payload("name", 121, "earth"))
}
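As a side note, require also takes a message argument, so the resulting IllegalArgumentException tells you which restriction failed; a small variant of the above (same assumed length limits):

object Solution2 extends App {

  case class Payload(name: String, id: Int, address: String) {
    // require throws IllegalArgumentException with the given message when the predicate is false
    require(name.length < 10, s"name must be shorter than 10 characters, was ${name.length}")
    require(address.length <= 50, s"address must be at most 50 characters, was ${address.length}")
  }

  // throws: IllegalArgumentException: requirement failed: name must be shorter than 10 characters, was 11
  println(Payload("name5678910", 120, "earth"))
}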

Apache Flink Custom Window Aggregation

I would like to aggregate a stream of trades into windows of the same trade volume, which is the sum of the trade size of all the trades in the interval.
I was able to write a custom Trigger that partitions the data into windows. Here is the code:
case class Trade(key: Int, millis: Long, time: LocalDateTime, price: Double, size: Int)

class VolumeTrigger(triggerVolume: Int, config: ExecutionConfig) extends Trigger[Trade, Window] {

  val LOG: Logger = LoggerFactory.getLogger(classOf[VolumeTrigger])
  val stateDesc = new ValueStateDescriptor[Double]("volume", createTypeInformation[Double].createSerializer(config))

  override def onElement(event: Trade, timestamp: Long, window: Window, ctx: TriggerContext): TriggerResult = {
    val volume = ctx.getPartitionedState(stateDesc)
    if (volume.value == null) {
      volume.update(event.size)
      return TriggerResult.CONTINUE
    }
    volume.update(volume.value + event.size)
    if (volume.value < triggerVolume) {
      TriggerResult.CONTINUE
    }
    else {
      volume.update(volume.value - triggerVolume)
      TriggerResult.FIRE_AND_PURGE
    }
  }

  override def onEventTime(time: Long, window: Window, ctx: TriggerContext): TriggerResult = {
    TriggerResult.FIRE_AND_PURGE
  }

  override def onProcessingTime(time: Long, window: Window, ctx: TriggerContext): TriggerResult = {
    throw new UnsupportedOperationException("Not a processing time trigger")
  }

  override def clear(window: Window, ctx: TriggerContext): Unit = {
    ctx.getPartitionedState(stateDesc).clear()
  }
}

def main(args: Array[String]): Unit = {
  val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
  env.setParallelism(1)
  env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

  val trades = env
    .readTextFile("/tmp/trades.csv")
    .map { line =>
      val cells = line.split(",")
      val time = LocalDateTime.parse(cells(0), DateTimeFormatter.ofPattern("yyyyMMdd HH:mm:ss.SSSSSSSSS"))
      val millis = time.toInstant(ZoneOffset.UTC).toEpochMilli
      Trade(0, millis, time, cells(1).toDouble, cells(2).toInt)
    }

  val aggregated = trades
    .assignAscendingTimestamps(_.millis)
    .keyBy("key")
    .window(GlobalWindows.create)
    .trigger(new VolumeTrigger(500, env.getConfig))
    .sum(4)

  aggregated.writeAsText("/tmp/trades_agg.csv")
  env.execute("volume agg")
}
The data looks for example as follows:
20180102 04:00:29.715706404,169.10,100
20180102 04:00:29.715715627,169.10,100
20180102 05:08:29.025299624,169.12,100
20180102 05:08:29.025906589,169.10,214
20180102 05:08:29.327113252,169.10,200
20180102 05:09:08.350939314,169.00,100
20180102 05:09:11.532817015,169.00,474
20180102 06:06:55.373584329,169.34,200
20180102 06:07:06.993081961,169.34,100
20180102 06:07:08.153291898,169.34,100
20180102 06:07:20.081524768,169.34,364
20180102 06:07:22.838656715,169.34,200
20180102 06:07:24.561360031,169.34,100
20180102 06:07:37.774385969,169.34,100
20180102 06:07:39.305219107,169.34,200
I have a time stamp, a price and a size.
The above code can partition it into windows of roughly the same size:
Trade(0,1514865629715,2018-01-02T04:00:29.715706404,169.1,514)
Trade(0,1514869709327,2018-01-02T05:08:29.327113252,169.1,774)
Trade(0,1514873215373,2018-01-02T06:06:55.373584329,169.34,300)
Trade(0,1514873228153,2018-01-02T06:07:08.153291898,169.34,464)
Trade(0,1514873242838,2018-01-02T06:07:22.838656715,169.34,600)
Trade(0,1514873294898,2018-01-02T06:08:14.898397117,169.34,500)
Trade(0,1514873299492,2018-01-02T06:08:19.492589659,169.34,400)
Trade(0,1514873332251,2018-01-02T06:08:52.251339070,169.34,500)
Trade(0,1514873337928,2018-01-02T06:08:57.928680090,169.34,1000)
Trade(0,1514873338078,2018-01-02T06:08:58.078221995,169.34,1000)
Now I would like to partition the data so that the volume exactly matches the trigger value. For this I would need to change the data slightly, by splitting a trade at the end of an interval into two parts: one that belongs to the window actually being fired, and the remaining volume above the trigger value, which has to be assigned to the next window.
Can that be handled with some custom aggregation function? It would need to know the results from the previous window(s), though, and I was not able to find out how to do that.
Any ideas from Apache Flink experts on how to handle that case?
Adding an evictor does not work, as it only purges some elements at the beginning.
I hope the change from Spark Structured Streaming to Flink was a good choice, as I have even more complicated situations to handle later.
Since your key is the same for all records, you may not require a window in this case. Please refer to this page in Flink's documentation: https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/state/state.html#using-managed-keyed-state.
It has a CountWindowAverage class where the aggregation of a value from each record in a stream is done using a state variable. You can implement this, send the output whenever the state variable reaches your trigger volume, and reset the state variable with the remaining volume.
A simple approach (though not super-efficient) would be to put a FlatMapFunction ahead of your windowing flow. If it's keyed the same way, then you can use ValueState to track total volume, and emit two records (the split) when it hits your limit.
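A minimal sketch of that splitter, assuming the Trade case class from the question and a stream keyed the same way (the VolumeSplitter name and its details are made up here, not a definitive implementation):

import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.configuration.Configuration
import org.apache.flink.util.Collector

// Splits incoming trades so that the accumulated size hits the trigger volume exactly;
// downstream, a trigger like the one above then fires on exact boundaries.
class VolumeSplitter(triggerVolume: Int) extends RichFlatMapFunction[Trade, Trade] {

  // Volume carried over from earlier trades; Integer so an unset state is null, not 0.
  private var carriedState: ValueState[java.lang.Integer] = _

  override def open(parameters: Configuration): Unit = {
    carriedState = getRuntimeContext.getState(
      new ValueStateDescriptor[java.lang.Integer]("carried-volume", classOf[java.lang.Integer]))
  }

  override def flatMap(trade: Trade, out: Collector[Trade]): Unit = {
    var carried: Int = if (carriedState.value == null) 0 else carriedState.value
    var remaining = trade.size

    // Emit the part that completes the current bucket, then any whole buckets.
    while (carried + remaining >= triggerVolume) {
      val part = triggerVolume - carried
      out.collect(trade.copy(size = part))
      remaining -= part
      carried = 0
    }

    // Leftover volume is carried into the next bucket.
    if (remaining > 0) out.collect(trade.copy(size = remaining))
    carriedState.update(carried + remaining)
  }
}

Since flatMap returns a plain DataStream, the stream has to be keyed again before the window, e.g. trades.assignAscendingTimestamps(_.millis).keyBy("key").flatMap(new VolumeSplitter(500)).keyBy("key").window(GlobalWindows.create)... with the rest of the pipeline unchanged.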

Can't use reactivestream Subscriber with akka stream sources

I'm trying to attach a Reactive Streams Subscriber to an Akka source.
My source seems to work fine with a simple sink (like a foreach), but if I put in a real sink made from a subscriber, I don't get anything.
My context is:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import org.reactivestreams.{Subscriber, Subscription}

implicit val system = ActorSystem.create("test")
implicit val materializer = ActorMaterializer.create(system)

class PrintSubscriber extends Subscriber[String] {
  override def onError(t: Throwable): Unit = {}
  override def onSubscribe(s: Subscription): Unit = {}
  override def onComplete(): Unit = {}
  override def onNext(t: String): Unit = {
    println(t)
  }
}
and my test case is:
val subscriber = new PrintSubscriber()
val sink = Sink.fromSubscriber(subscriber)
val source2 = Source.fromIterator(() => Iterator("aaa", "bbb", "ccc"))
val source1 = Source.fromIterator(() => Iterator("xxx", "yyy", "zzz"))
source1.to(sink).run()(materializer)
source2.runForeach(println)
I get output:
aaa
bbb
ccc
Why don't I get xxx, yyy, and zzz?
Citing the Reactive Streams spec for Subscriber:
Will receive call to onSubscribe(Subscription) once after passing an instance of Subscriber to Publisher.subscribe(Subscriber). No further notifications will be received until Subscription.request(long) is called.
The smallest change you can make to see some items flowing through to your subscriber is:
override def onSubscribe(s: Subscription): Unit = {
  s.request(3)
}
However, keep in mind this won't make it fully compliant with the Reactive Streams specs. It being not so easy to implement is the main reason behind higher-level toolkits like Akka Streams itself.
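For illustration only, a variant of PrintSubscriber that keeps demand open one element at a time (a sketch, still far from fully spec-compliant):

import org.reactivestreams.{Subscriber, Subscription}

class PrintSubscriber extends Subscriber[String] {
  private var subscription: Subscription = _

  override def onSubscribe(s: Subscription): Unit = {
    subscription = s
    s.request(1) // signal initial demand; without this, no elements ever arrive
  }

  override def onNext(t: String): Unit = {
    println(t)
    subscription.request(1) // ask for the next element
  }

  override def onError(t: Throwable): Unit = t.printStackTrace()
  override def onComplete(): Unit = println("completed")
}

With this in place, source1.to(sink).run() prints xxx, yyy and zzz as well.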

Randomly select an instantiation of a class and display it?

Goal: Randomly select a member of a class containing people, and display the information in an alert.
Here is what I am doing: I have a class called Entrepreneurs. Inside the class are variables for name, net worth, etc. I have instantiated the class with several entrepreneurs as follows:
// Class for entrepreneurs
class Entrepreneurs {
    var name: String?
    var netWorth = (0.0, "")
    var company: String?
    var summary: [String]
    var age: Int?

    init() {
        name = ""
        company = ""
        summary = [""]
        age = 1
    }
}
let markZuckerBerg = Entrepreneurs()
markZuckerBerg.name = "Mark Zuckerberg"
markZuckerBerg.age = 19
markZuckerBerg.company = "Facebook"
markZuckerBerg.netWorth = (35.7, "Billion")
I have several instances (more than 5), and I now want to randomly access one of them and display its properties.
I know I will need an array to hold them, but I don't think adding each instance to an array is the most efficient way to go about it, since I plan on adding hundreds of entrepreneurs eventually.
Any suggestions?
(Side question: is this the best way to structure such a problem? In other words, if my goal is to have a list of entrepreneurs with information about them, and I want one displayed randomly on the screen of a phone as an alert, is creating a class of entrepreneurs the best way?)
Seems to me like you want to randomly select objects from a group of them. To do this, you need an array and a random number between 0 and the number of elements in the array.
Code
let array: [Entrepreneurs] = [a, b, c, ...]
let random: Int = Int(arc4random_uniform(UInt32(array.count)))
let selection: Entrepreneurs = array[random]
As far as showing them through an alert goes, I need more details on the question.
