Gatling: how to store and load a value for a later request

I'd like to build a load test where the second request is fed from the first response. The data extraction is done in a method because it is more than one line of code. My problem is storing the value (id) and loading it later. How should the value be stored and loaded? I tried several different approaches and came up with this code; the documentation has not helped me.
object First {
  val first = {
    exec(http("first request")
      .post("/graphql")
      .headers(headers_0)
      .body(RawFileBody("computerdatabase/recordedsimulation/first.json"))
      .check(bodyString.saveAs("bodyResponse"))
    )
    .exec { session =>
      val response = session("bodyResponse").as[String]
      session.set("Id", getRandomValueForKey("id", response))
      session
    }
    .pause(1)
  }
}
object Second {
  val second = {
    exec(http("Second ${Id}")
      .post("/graphql")
      .headers(headers_0)
      .body(RawFileBody("computerdatabase/recordedsimulation/second.json"))
    )
    .pause(1)
  }
}
val user = scenario("User")
  .exec(
    First.first,
    Second.second
  )

setUp(user.inject(
  atOnceUsers(1)
)).protocols(httpProtocol)

Your issue is that you're not using the Session properly.
From the documentation:
Warning
Session instances are immutable!
Why is that so? Because Sessions are messages that are dealt with in a multi-threaded concurrent way, so immutability is the best way to deal with state without relying on synchronization and blocking.
A very common pitfall is to forget that set and setAll actually return new instances.
This is exactly what you're doing:
exec { session =>
  val response = session("bodyResponse").as[String]
  session.set("Id", getRandomValueForKey("id", response))
  session
}
It should be:
exec { session =>
  val response = session("bodyResponse").as[String]
  session.set("Id", getRandomValueForKey("id", response))
}
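Putting it together, a minimal sketch of the corrected flow could look like this (reusing the names from your code; the ElFileBody swap is only an assumption for the case where second.json itself needs the stored Id substituted into it, since RawFileBody sends the file untouched):

object First {
  val first = exec(
    http("first request")
      .post("/graphql")
      .headers(headers_0)
      .body(RawFileBody("computerdatabase/recordedsimulation/first.json"))
      .check(bodyString.saveAs("bodyResponse"))
  )
  .exec { session =>
    val response = session("bodyResponse").as[String]
    // return the new Session produced by set, not the old one
    session.set("Id", getRandomValueForKey("id", response))
  }
  .pause(1)
}

object Second {
  val second = exec(
    http("Second ${Id}") // "${Id}" is resolved from the session at runtime
      .post("/graphql")
      .headers(headers_0)
      // assumption: ElFileBody (ELFileBody in Gatling 2) resolves "${Id}" placeholders inside second.json
      .body(ElFileBody("computerdatabase/recordedsimulation/second.json"))
  )
  .pause(1)
}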

Related

Helping me understand session api Gatling

I am new to Gatling.
I am trying to loop over a JSON response, find the country code that I am looking for, and take the id corresponding to that country code.
Sample algorithm:
list.foreach( value => { if (value.countryCode == "PL") then store value.id })
In Gatling:
def getOffer() = {
  exec(
    http("GET /offer")
      .get("/offer")
      .check(status.is(Constant.httpOk))
      .check(bodyString.exists,
        jsonPath("$[*]").ofType[Map[String, Any]].findAll.saveAs("offerList")))
    .foreach("${offerList}", "item") {
      exec(session => {
        val itemMap = session("item").as[Map[String, Any]]
        val countryCodeId = itemMap("countryCode")
        println("****" + countryCodeId)
        // => prints all the country codes in the list
        if (countryCodeId == "PL") { // if statement condition
          println("*************" + itemMap("offerId")); // print the id, e.g. "23"
          session.set("offerId", itemMap("offerId")); // set the id on the session
        }
        println("$$$$$$$$$$$$$$" + session("offerId")) // verify that the session contains the offerId, but it does not
        session
      })
    }
}
When I try to print session("offerId"), it prints "item" and not the offerId.
I looked at the documentation but I didn't understand the behaviour. Could you please explain it to me?
It's all in the documentation.
Session instances are immutable!
Why is that so? Because Sessions are messages that are dealt with in a multi-threaded concurrent way, so immutability is the best way to deal with state without relying on synchronization and blocking.
A very common pitfall is to forget that set and setAll actually return new instances.
val session: Session = ???
// wrong usage
session.set("foo", "FOO") // wrong: the result of this set call is just discarded
session.set("bar", "BAR")
// proper usage
session.set("foo", "FOO").set("bar", "BAR")
So what you want is:
val newSession =
  if (countryCodeId == "PL") { // if statement condition
    println("*************" + itemMap("offerId")); // print the id, e.g. "23"
    session.set("offerId", itemMap("offerId")); // set the id on the session
  } else {
    session
  }

// verify that the session contains the offerId
println("$$$$$$$$$$$$$$" + newSession("offerId").as[String])
newSession
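For context, the whole corrected step inside the foreach could then look roughly like this (a sketch reusing the names from the question):

.foreach("${offerList}", "item") {
  exec { session =>
    val itemMap = session("item").as[Map[String, Any]]
    val countryCodeId = itemMap("countryCode")
    // only the Session returned from this function is kept,
    // so return the result of set, not the original session
    if (countryCodeId == "PL") session.set("offerId", itemMap("offerId"))
    else session
  }
}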

Insert multiple records into database with Vapor3

I want to be able to bulk-add records to a NoSQL database in Vapor 3.
This is my struct:
struct Country: Content {
    let countryName: String
    let timezone: String
    let defaultPickupLocation: String
}
So I'm trying to pass an array of JSON objects, but I'm not sure how to structure the route or how to access the array to decode each one.
I have tried this route:
let countryGroup = router.grouped("api/country")
countryGroup.post([Country.self], at:"bulk", use: bulkAddCountries)
with this function:
func bulkAddCountries(req: Request, countries: [Country]) throws -> Future<String> {
    for country in countries {
        return try req.content.decode(Country.self).map(to: String.self) { countries in
            // creates a JSON encoder to encode the JSON data
            let encoder = JSONEncoder()
            let countryData: Data
            do {
                countryData = try encoder.encode(country) // encode the data
            } catch {
                return "Error. Data in the wrong format."
            }
            // code to save data
        }
    }
}
So how do I structure both the Route and the function to get access to each country?
I'm not sure which NoSQL database you plan on using, but the current beta versions of MongoKitten 5 and Meow 2.0 make this pretty easy.
Please note that we haven't written documentation for these two libraries yet, as we pushed for a stable API first. The following code is roughly what you need with MongoKitten 5:
// Register MongoKitten to Vapor's Services
services.register(Future<MongoKitten.Database>.self) { container in
    return try MongoKitten.Database.connect(settings: ConnectionSettings("mongodb://localhost/my-db"), on: container.eventLoop)
}

// Globally, add this so that the above code can register MongoKitten to Vapor's Services
extension Future: Service where T == MongoKitten.Database {}

// An adaptation of your function
func bulkAddCountries(req: Request, countries: [Country]) throws -> Future<Response> {
    // Get a handle to MongoDB
    let database = req.make(Future<MongoKitten.Database>.self)

    // Make a `Document` for each Country
    let documents = try countries.map { country in
        return try BSONEncoder().encode(country)
    }

    // Insert the countries to the "countries" MongoDB collection
    return database["countries"].insert(documents: documents).map { success in
        return // Return a successful response
    }
}
I had a similar need and want to share my solution for bulk processing in Vapor 3. I’d love to have another experienced developer help refine my solution.
I’m going to try my best to explain what I did. And I’m probably wrong.
First, nothing special in the router. Here, I’m handling a POST to items/batch for a JSON array of Items.
router.post("items", "batch", use: itemsController.createBatch)
Then the controller’s handler.
func createBatch(_ req: Request) throws -> Future<HTTPStatus> {
    // Decode request to [Item]
    return try req.content.decode([Item].self)
        // flatMap unwraps the decoded Future<[Item]>; because the closure itself returns a
        // Future<HTTPStatus>, the result stays Future<HTTPStatus> instead of Future<Future<HTTPStatus>>
        .flatMap(to: HTTPStatus.self) { items in
            // Handle each item as 'itm', transforming itm to Future<HTTPStatus>
            return items.map { itm -> Future<HTTPStatus> in
                // Process itm. Here, I save, producing a Future<Item> called savedItem
                let savedItem = itm.save(on: req)
                // transform the Future<Item> to Future<HTTPStatus>
                return savedItem.transform(to: HTTPStatus.ok)
            }
            // flatten(): "Flattens an array of futures into a future with an array of results"
            // e.g. [Future<HTTPStatus>] -> Future<[HTTPStatus]>
            .flatten(on: req)
            // transform(): Maps the current future to contain the new type. Errors are carried over,
            // successful (expected) results are transformed into the given instance.
            // e.g. Future<[.ok]> -> Future<.ok>
            .transform(to: HTTPStatus.ok)
        }
}

Gatling2 Failing to use user session properly

I hope someone can point me in the right direction!
I am trying to run one scenario which has several steps that have to be executed in order, each with the same user session, for it to work properly. The code below works fine with one user but fails if I use 2 or more users...
What am I doing wrong?
val headers = Map(
  Constants.TENANT_HEADER -> tenant
)

val httpConf = http
  .baseURL(baseUrl)
  .headers(headers)

val scen = scenario("Default Order Process Perf Test")
  .exec(OAuth.getOAuthToken(clientId))
  .exec(session => OAuth.createAuthHHeader(session, clientId))
  .exec(RegisterCustomer.registerCustomer(customerMail, customerPassword, tenant))
  .exec(SSO.doLogin(clientId, customerMail, customerPassword, tenant))
  .exec(session => OAuth.upDateAuthToken(session, clientId))
  .exec(session => UpdateCustomerBillingAddr.prepareBillingAddrRequestBody(session))
  .exec(UpdateCustomerBillingAddr.updateCustomerBillingAddr(tenant))
  .exec(RegisterSepa.startRegisterProcess(tenant))
  .exec(session => RegisterSepa.prepareRegisterRequestBody(session))
  .exec(RegisterSepa.doRegisterSepa(tenant))

setUp(
  scen
    .inject(atOnceUsers(2))
    .protocols(httpConf))
object OAuth {

  private val OBJECT_MAPPER = new ObjectMapper()

  def getOAuthToken(clientId: String) = {
    val authCode = PropertyUtil.getAuthCode
    val encryptedAuthCode = new Crypto().rsaServerKeyEncrypt(authCode)
    http("oauthTokenRequest")
      .post("/oauth/token")
      .formParam("refresh_token", "")
      .formParam("code", encryptedAuthCode)
      .formParam("grant_type", "authorization_code")
      .formParam("client_id", clientId)
      .check(jsonPath("$").saveAs("oauthToken"))
      .check(status.is(200))
  }

  def createAuthHHeader(session: Session, clientId: String) = {
    val tokenString = session.get("oauthToken").as[String]
    val tokenDto = OBJECT_MAPPER.readValue(tokenString, classOf[TokenDto])
    val session2 = session.set(Constants.TOKEN_DTO_KEY, tokenDto)
    val authHeader = AuthCommons.createAuthHeader(tokenDto, clientId, new util.HashMap[String, String]())
    session2.set(Constants.AUTH_HEADER_KEY, authHeader)
  }

  def upDateAuthToken(session: Session, clientId: String) = {
    val ssoToken = session.get(Constants.SSO_TOKEN_KEY).as[String]
    val oAuthDto = session.get(Constants.TOKEN_DTO_KEY).as[TokenDto]
    val params = new util.HashMap[String, String]
    params.put("sso_token", ssoToken)
    val updatedAuthHeader = AuthCommons.createAuthHeader(oAuthDto, clientId, params)
    session.set(Constants.AUTH_HEADER_KEY, updatedAuthHeader)
  }
}
So I added the two methods that don't work as expected. In the first part I try to fetch a token and store it in the session via check(jsonPath("$").saveAs("oauthToken")), and in the second call I try to read that token with val tokenString = session.get("oauthToken").as[String], which fails with an exception saying that there is no entry for that key in the session...
I've copied it, removed/mocked any missing code references and switched to one of my apps' auth URLs, and it seems to work, at least for the first 2 steps.
One thing that seems weird is jsonPath("$").saveAs("oauthToken"), which saves the whole JSON (not a single field) as an attribute. Is that really what you want to do? And are you sure that getOAuthToken is working properly?
You said that it works for 1 user but fails for 2. Aren't there any more errors? For debugging I suggest changing the logging level to TRACE, or adding exec(session => {println(session); session}) before the second step to verify that the token is properly saved to the session. I think something is wrong with the authorization request (or with building that request) and it somehow fails or throws an exception. I would comment out all steps except the first and focus on checking whether that first request is properly executed and whether it adds the proper attribute to the session.
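A minimal sketch of both suggestions (the "access_token" path is only an assumption about the shape of the token response; adjust it to the real field name):

// Debug sketch: dump the whole Session between the first two steps
val scen = scenario("Default Order Process Perf Test")
  .exec(OAuth.getOAuthToken(clientId))
  .exec { session =>
    println(session) // shows whether "oauthToken" was actually saved
    session
  }
  .exec(session => OAuth.createAuthHHeader(session, clientId))
  // ... remaining steps unchanged

// Alternative to saving the whole body: save a single field of the token response
// (assumes the response JSON contains an "access_token" field)
// .check(jsonPath("$.access_token").saveAs("accessToken"))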
I think your brackets are not set correctly. Change them to this:
setUp(
  scen.inject(atOnceUsers(2))
).protocols(httpConf)

How to stream 2 million rows from SQL Server without crashing Node?

I am using Node to copy 2 million rows from SQL Server to another database, so of course I use the "streaming" option, like this:
const sql = require('mssql')
...
const request = new sql.Request()
request.stream = true
request.query('select * from verylargetable')
request.on('row', row => {
  promise = write_to_other_database(row);
})
My problem is that I have to do an asynchronous operation with each row (an insert into another database), which takes time.
The reading is faster than the writing, so the "on row" events just keep coming, memory eventually fills up with pending promises, and eventually Node crashes. This is frustrating -- the whole point of "streaming" is to avoid this, isn't it?
How can I solve this problem?
To stream millions of rows without crashing, intermittently pause your request.
sql.connect(config, err => {
  if (err) console.log(err);

  const request = new sql.Request();
  request.stream = true; // You can set streaming differently for each request
  request.query('select * from dbo.YourAmazingTable'); // or request.execute(procedure)

  request.on('recordset', columns => {
    // Emitted once for each recordset in a query
    // console.log(columns);
  });

  let rowsToProcess = [];
  request.on('row', row => {
    // Emitted for each row in a recordset
    rowsToProcess.push(row);
    if (rowsToProcess.length >= 3) {
      request.pause();
      processRows();
    }
    console.log(row);
  });

  request.on('error', err => {
    // May be emitted multiple times
    console.log(err);
  });

  request.on('done', result => {
    // Always emitted as the last one
    processRows();
    // console.log(result);
  });

  const processRows = () => {
    // process rows
    rowsToProcess = [];
    request.resume();
  }
});
The problem seems to be caused by reading the stream using "row" events, which don't allow you to control the flow of the stream. This should be possible with the "pipe" method, but then you end up with a data stream and implementing a writable stream, which may be tricky.
A simple solution would be to use Scramjet, so your code would be complete in a couple of lines:
const sql = require('mssql')
const {DataStream} = require("scramjet");
//...
const request = new sql.Request()
request.stream = true
request.query('select * from verylargetable')

request.pipe(new DataStream({maxParallel: 1}))
  // pipe to a new DataStream with no parallel processing
  .batch(64)
  // optionally batch the requests that someone mentioned
  .consume(async (row) => write_to_other_database(row));
  // flow control will be done automatically
Scramjet will use promises to control the flow. You can also try increasing the maxParallel option, but keep in mind that in that case the last line could start processing multiple rows simultaneously.
My own answer: instead of writing to the target database at the same time, I convert each row into an "insert" statement and push the statement to a message queue (RabbitMQ, a separate process). This is fast, and can keep up with the rate of reading. Another Node process pulls from the queue (more slowly) and writes to the target database. Thus the big backlog of rows is handled by the message queue itself, which is good at that sort of thing.

In Firebase, is there a way to get the number of children of a node without loading all the node data?

You can get the child count via
firebase_node.once('value', function(snapshot) { alert('Count: ' + snapshot.numChildren()); });
But I believe this fetches the entire sub-tree of that node from the server. For huge lists, that seems RAM and latency intensive. Is there a way of getting the count (and/or a list of child names) without fetching the whole thing?
The code snippet you gave does indeed load the entire set of data and then counts it client-side, which can be very slow for large amounts of data.
Firebase doesn't currently have a way to count children without loading data, but we do plan to add it.
For now, one solution would be to maintain a counter of the number of children and update it every time you add a new child. You could use a transaction to count items, as in this code tracking upvotes:
var upvotesRef = new Firebase('https://docs-examples.firebaseio.com/android/saving-data/fireblog/posts/-JRHTHaIs-jNPLXOQivY/upvotes');
upvotesRef.transaction(function (current_value) {
  return (current_value || 0) + 1;
});
For more info, see https://www.firebase.com/docs/transactions.html
UPDATE:
Firebase recently released Cloud Functions. With Cloud Functions, you don't need to create your own server. You can simply write JavaScript functions and upload them to Firebase. Firebase will be responsible for triggering the functions whenever an event occurs.
If you want to count upvotes, for example, you should create a structure similar to this one:
{
  "posts" : {
    "-JRHTHaIs-jNPLXOQivY" : {
      "upvotes_count": 5,
      "upvotes" : {
        "userX" : true,
        "userY" : true,
        "userZ" : true,
        ...
      }
    }
  }
}
And then write a JavaScript function to increase the upvotes_count when there is a new write to the upvotes node.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

exports.countlikes = functions.database.ref('/posts/{postid}/upvotes').onWrite(event => {
  return event.data.ref.parent.child('upvotes_count').set(event.data.numChildren());
});
You can read the Documentation to know how to Get Started with Cloud Functions.
Also, another example of counting posts is here:
https://github.com/firebase/functions-samples/blob/master/child-count/functions/index.js
Update January 2018
The Firebase docs have changed, so instead of event we now have change and context.
The given example throws an error complaining that event.data is undefined. This pattern seems to work better:
exports.countPrescriptions = functions.database.ref(`/prescriptions`).onWrite((change, context) => {
  const data = change.after.val();
  const count = Object.keys(data).length;
  return change.after.ref.child('_count').set(count);
});
This is a little late in the game as several others have already answered nicely, but I'll share how I might implement it.
This hinges on the fact that the Firebase REST API offers a shallow=true parameter.
Assume you have a post object and each one can have a number of comments:
{
  "posts": {
    "$postKey": {
      "comments": {
        ...
      }
    }
  }
}
You obviously don't want to fetch all of the comments, just the number of comments.
Assuming you have the key for a post, you can send a GET request to
https://yourapp.firebaseio.com/posts/[the post key]/comments?shallow=true.
This will return an object of key-value pairs, where each key is the key of a comment and its value is true:
{
  "comment1key": true,
  "comment2key": true,
  ...,
  "comment9999key": true
}
The size of this response is much smaller than requesting the equivalent data, and now you can calculate the number of keys in the response to find your value (e.g. commentCount = Object.keys(result).length).
This may not completely solve your problem, as you are still calculating the number of keys returned, and you can't necessarily subscribe to the value as it changes, but it does greatly reduce the size of the returned data without requiring any changes to your schema.
Save the count as you go - and use validation to enforce it. I hacked this together for keeping a count of unique votes and counts, which keeps coming up! But this time I have tested my suggestion! (notwithstanding cut/paste errors!)
The 'trick' here is to use the node priority as the vote count...
The data is:
vote/$issueBeingVotedOn/user/$uniqueIdOfVoter = thisVotesCount, priority=thisVotesCount
vote/$issueBeingVotedOn/count = 'user/'+$idOfLastVoter, priority=CountofLastVote
,"vote": {
".read" : true
,".write" : true
,"$issue" : {
"user" : {
"$user" : {
".validate" : "!data.exists() &&
newData.val()==data.parent().parent().child('count').getPriority()+1 &&
newData.val()==newData.GetPriority()"
user can only vote once && count must be one higher than current count && data value must be same as priority.
}
}
,"count" : {
".validate" : "data.parent().child(newData.val()).val()==newData.getPriority() &&
newData.getPriority()==data.getPriority()+1 "
}
count (last voter really) - vote must exist and its count equal newcount, && newcount (priority) can only go up by one.
}
}
Test script to add 10 votes by different users (for this example the IDs are faked; you should use auth.uid in production). Change the loop to count down (i--) from 10 to see the validation fail.
<script src='https://cdn.firebase.com/v0/firebase.js'></script>
<script>
window.fb = new Firebase('https:...vote/iss1/');
window.fb.child('count').once('value', function (dss) {
votes = dss.getPriority();
for (var i=1;i<10;i++) vote(dss,i+votes);
} );
function vote(dss,count)
{
var user='user/zz' + count; // replace with auth.id or whatever
window.fb.child(user).setWithPriority(count,count);
window.fb.child('count').setWithPriority(user,count);
}
</script>
The 'risk' here is that a vote is cast but the count is not updated (hacking or script failure). This is why the votes have a unique 'priority' - the script should really start by ensuring that there is no vote with a priority higher than the current count; if there is, it should complete that transaction before doing its own - get your clients to clean up for you :)
The count needs to be initialised with a priority before you start - Forge doesn't let you do this, so a stub script is needed (before the validation is active!).
Write a Cloud Function to update the node count.
// the function below keeps the given node count up to date.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

exports.userscount = functions.database.ref('/users/')
  .onWrite(event => {
    console.log('users number : ', event.data.numChildren());
    return event.data.ref.parent.child('count/users').set(event.data.numChildren());
  });
Refer to: https://firebase.google.com/docs/functions/database-events
root
 |- users        (this node contains the full list of users)
 |
 |- count
      |- userscount : (this node is added dynamically by the Cloud Function with the user count)
