Gatling - I can't seem to use Session Variables stored from a request in a subsequent Request

The code:
package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class StarWarsBasicExample extends Simulation {

  // 1 Http Conf
  val httpConf = http.baseUrl("https://swapi.dev/api/films/")

  // 2 Scenario Definition
  val scn = scenario("Star Wars API")
    .exec(http("Get Number")
      .get("4")
      .check(jsonPath("$.episode_id")
        .saveAs("episodeId"))
    )
    .exec(session => {
      val movie = session("episodeId").as[String]
      session.set("episode", movie)
    }).pause(4)
    .exec(http("$episode")
      .get("$episode"))

  // 3 Load Scenario
  setUp(
    scn.inject(atOnceUsers(1)))
    .protocols(httpConf)
}
Trying to grab a variable from the first GET request and inject that variable into a second request, but unable to do so despite following the documentation. There might be something I'm not understanding.
When I use breakpoints and step through the process, it appears the session function executes AFTER both of the other requests have completed (by which time it is too late). I can't seem to make that session function run between the two requests.

Already answered on Gatling's community mailing list.
"$episode" is not correct Gatling Expression Language syntax. "${episode}" is correct.
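For illustration, here is a tiny self-contained Scala sketch of how "${key}"-style substitution behaves. This is a hypothetical mini-resolver, not Gatling's actual implementation: a bare "$episode" has no braces, so it never matches and is left as a literal, which is why the request was sent to the literal path.

```scala
import scala.util.matching.Regex

// Hypothetical mini-resolver mimicking Gatling EL "${key}" substitution -
// not Gatling's real code, just an illustration of the syntax rule.
val el: Regex = """\$\{(\w+)\}""".r

def resolve(expr: String, session: Map[String, Any]): String =
  el.replaceAllIn(expr, m => Regex.quoteReplacement(session(m.group(1)).toString))

val session = Map("episode" -> 4)
println(resolve("${episode}", session)) // "4" - braces present, value substituted
println(resolve("$episode", session))   // "$episode" - no braces, left untouched
```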

Related

Configuring additional properties within SnowflakeBasicDataSource

I'm using the following class in Java, but this class has a limited implementation of properties.
SnowflakeBasicDataSource basicDataSource = new SnowflakeBasicDataSource();
basicDataSource.setSsl(true);
basicDataSource.setUser(dbSnowFlakeUsername);
basicDataSource.setPassword(dbSnowFlakePassword);
basicDataSource.setUrl(dbSnowFlakeUrl);
basicDataSource.setLoginTimeout(dbSnowFlakeLoginTimeoutSeconds);
For example, I want to set networkTimeout and queryTimeout, but these are not implemented.
How do I pass them to SnowflakeBasicDataSource?
I tried passing them within the URL like this:
jdbc:snowflake://ni31094.eu-central-1.snowflakecomputing.com/?db=TEST_DB&schema=PUBLIC&role=SYSADMIN&warehouse=TEST_WAREHOUSE&tracing=FINE&networkTimeout=10&queryTimeout=10
But I don't think it works. I need assistance, please.
The two parameters for network and query timeout are:
Network Timeout: networkTimeout=<number>
Query Timeout: queryTimeout=<number>
Looking at your string: yes, you are using the two parameters correctly, but I see your user and password parameters are missing.
You can also try to:
ensure your properties are set correctly
adjust your connection string and account information values
You can find some more detailed troubleshooting information here: https://docs.snowflake.com/en/user-guide/jdbc-configure.html#troubleshooting-tips
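Concretely, these connection parameters are just query-string pairs appended to the JDBC URL. A minimal sketch assembling such a URL (the account and database names here are placeholders, not a real deployment):

```scala
// Assemble a Snowflake-style JDBC URL with timeout parameters.
// Account and db values are placeholders for illustration only.
val base = "jdbc:snowflake://myaccount.eu-central-1.snowflakecomputing.com/"
val params = List(
  "db" -> "TEST_DB",
  "networkTimeout" -> "10",
  "queryTimeout" -> "10"
)
val url = base + "?" + params.map { case (k, v) => s"$k=$v" }.mkString("&")
println(url)
// jdbc:snowflake://myaccount.eu-central-1.snowflakecomputing.com/?db=TEST_DB&networkTimeout=10&queryTimeout=10
```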
Setting the properties in the URL for the SnowflakeBasicDataSource does work. I tested the queryTimeout with a long-running query, and the query correctly gets cancelled after the time specified in the parameter.
Testing networkTimeout is a little more difficult, since it's hard to tell how it is actually used. It is used by net.snowflake.client.jdbc.RestRequest, and I've verified that the correct parameter gets passed through. The reason you're getting a timeout of 60 seconds is that the HTTP request for the initial login gets set to that by default; the initial login request seems to ignore the networkTimeout. The request which contains the query to run does correctly get the networkTimeout value passed in through the query string. Since my Java skills aren't great, I was unable to test a situation where the networkTimeout causes an error, unfortunately.
Here is some Scala code which shows you that the two params get correctly set in the session:
import net.snowflake.client.jdbc.{SnowflakeBasicDataSource, SnowflakeConnectionV1}
import java.sql.Statement
import java.io.FileReader
import java.util.Properties

object BasicConnector extends App {
  val prop = new Properties
  prop.load(new FileReader("~/snowflake_conn.properties"))
  val username = prop.getProperty("username")
  val password = prop.getProperty("password")
  val url = prop.getProperty("url") + "?networkTimeout=54321&queryTimeout=1234"

  val basicDataSource = new SnowflakeBasicDataSource()
  basicDataSource.setSsl(true)
  basicDataSource.setUser(username)
  basicDataSource.setPassword(password)
  basicDataSource.setUrl(url)
  basicDataSource.setWarehouse("DEMO_WH")

  val conn: SnowflakeConnectionV1 = basicDataSource.getConnection().asInstanceOf[SnowflakeConnectionV1]
  val statement: Statement = conn.createStatement()

  val queryTimeout = conn.getSfSession.getQueryTimeout
  val networkTimeout = conn.getSfSession.getNetworkTimeoutInMilli
  println(s"query timeout: $queryTimeout")
  println(s"network timeout: $networkTimeout")

  statement.close()
  conn.close()
}
The above prints out:
query timeout: 1234
network timeout: 54321
As you can see, I had to cast the Connection object to a SnowflakeConnectionV1 with .asInstanceOf[SnowflakeConnectionV1] and use the getSfSession method to inspect the params; the JDBC Connection type doesn't have this method. You shouldn't have to do this if you don't care about inspecting the parameters, though; they'll still be used correctly.
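That cast is an instance of a general JDBC pattern: the API hands back the generic interface, and vendor-specific accessors require a downcast. A self-contained sketch with hypothetical stand-in types (in the real code these are java.sql.Connection and SnowflakeConnectionV1):

```scala
// Hypothetical stand-in types illustrating the downcast pattern.
trait GenericConnection { def close(): Unit }

class VendorConnection extends GenericConnection {
  def close(): Unit = ()
  def getQueryTimeout: Int = 1234 // vendor-only accessor, absent from the generic type
}

// The factory returns the generic type...
val conn: GenericConnection = new VendorConnection
// ...so inspecting vendor-specific settings requires a cast.
val timeout = conn.asInstanceOf[VendorConnection].getQueryTimeout
println(timeout) // 1234
```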

End Gatling simulation when scenario fails BUT generate a report

I have code which currently will not run my scenario if it fails:
// Defined outside of the scenario scope
var simulationHealthy = true

// Defined within the scenario
.exec((session: io.gatling.core.session.Session) => {
  if (session.status == KO) {
    simulationHealthy = false
  }
  session
})
However, my simulation keeps running until the duration set for the simulation is over, even though the scenario stops executing.
What I would like to do is to have a scenario fail under conditions I define (similar to assertions) and for the entire simulation to fail at that point as well, and also generate a report.
Thanks
Edit: I am running these tests within the IntelliJ IDE. Ending the simulation programmatically is required.
You might run the test itself without a report, and then produce the report with a second call that generates just the report from the simulation.log.
Run the simulation without a report (-nr flag), i.e.
gatling.sh -nr -s YourSimulationClass
Generate the report (-ro flag):
gatling.sh -ro yoursimulation
(yoursimulation is the path underneath the results folder, which can be specified with -rf, and which contains the simulation.log file)
In IntelliJ you can define another launch configuration to be executed beforehand: one run action that executes the Gatling test (with the -nr flag), and another configuration for the report generation (with the -ro flag) that executes the Gatling test run action first.
Alternatively you could use the gatling-maven-plugin and define two executions (run, report) with the same flags.
Edit
According to this group thread, you could execute your steps conditionally or mute them. The condition could be the presence of an error, but anything else as well. If the condition depends on global state, i.e. a global variable, it mutes all users (unlike exitHereIfFailed).
For example:
import java.util.concurrent.atomic.AtomicBoolean

val continue = new AtomicBoolean(true)

val scn = scenario("MyTest")
  .exec(
    doIf(session => continue.get) {
      exec(http("request_0").get("/home").check(status.is(200)))
        .exec((session: io.gatling.core.session.Session) => {
          if (session.status == KO) {
            continue.set(false)
          }
          session
        })
    })
As said, this only stops sending requests to the SUT. There seems to be no other option at the moment (apart from System.exit(0)).
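The muting behaviour can be seen in a plain Scala sketch with no Gatling dependency: one shared AtomicBoolean gates every iteration, so the first failure silences all later work. Here the "users" are just loop iterations and the failure is hard-coded for illustration.

```scala
import java.util.concurrent.atomic.AtomicBoolean

val continueFlag = new AtomicBoolean(true)
var executed = 0

// Simulate 5 virtual users; user 2 "fails" and flips the shared flag.
for (user <- 1 to 5) {
  if (continueFlag.get) {
    executed += 1
    val failed = user == 2 // hard-coded failure for illustration
    if (failed) continueFlag.set(false)
  }
}
println(executed) // 2 - users 3..5 were muted, but the loop itself still ran to the end
```

This mirrors the situation in the question: the simulation (the loop) runs to its configured end, even though no further requests are sent.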
You can use exitHereIfFailed in ScenarioBuilder returned by exec().
.exec(http("login")
  .post("/serviceapp/api/auth/login")
  ...
  .check(status.is(200))))
.exitHereIfFailed
.pause(1)
.exec(http("getProfileDetails")
  .get("/serviceapp/api/user/get_profile")
  .headers(authHeader("${token}"))
  .check(status.is(200)))
Thanks to @GeraldMücke's suggestion of using System.exit, I've come up with a workaround. It's still nowhere close to ideal, but it does the job.
The problems are:
I still have to manually generate the report from the log that is created when Gatling is run.
The user has to constantly manage how long the scenario lasts for both items, as I don't know a way to have a scenario last the length of a simulation.
This is obviously a "proof of concept"; it has nothing in the code to define failure over thresholds etc., like the asserts and checks available in Gatling itself.
Here's the code. I've nested simulations within the setUp function because it fits the criteria of the work I am doing currently, allowing me to run multiple simulations within the main simulation.
FailOverSimulation and ScenarioFailOver are the classes that need to be added to the list; obviously this only adds value when you are running something that loops within the setUp.
import java.util.concurrent.atomic.AtomicBoolean

import io.gatling.commons.stats.KO
import io.gatling.core.Predef._
import io.gatling.core.scenario.Simulation
import io.gatling.http.Predef._

import scala.concurrent.duration._

object ScenarioTest {
  val get = scenario("Test Scenario")
    .exec(http("Test Scenario")
      .get("https://.co.uk/")
    )
    .exec((session: io.gatling.core.session.Session) => {
      if (session.status == KO) {
        ScenarioFailOver.exitFlag.set(true)
      }
      session
    })
}

object TestSimulation {
  val fullScenario = List(
    ScenarioTest.get.inject(constantUsersPerSec(1).during(10.seconds))
  )
}

object ScenarioFailOver {
  var exitFlag = new AtomicBoolean(false)

  val get = scenario("Fail Over")
    .doIf(session => exitFlag.get()) {
      exec(s => {
        java.lang.System.exit(0)
        s
      })
    }
}

object FailOverSimulation {
  val fullScenario = List(
    ScenarioFailOver.get.inject(constantUsersPerSec(1).during(10.seconds))
  )
}

class SimulateTestEnding extends Simulation {
  setUp(
    FailOverSimulation.fullScenario
      ::: TestSimulation.fullScenario
  ).protocols(
  )
}

Gatling switch protocols during scenario

I'm trying to create a Gatling scenario which requires switching the protocol to a different host during the test. The user journey is
https://example.com/page1
https://example.com/page2
https://accounts.example.com/signin
https://example.com/page3
so as part of a single scenario, I need to either switch the protocol defined in the scenario setup or switch the baseUrl defined on the protocol, but I can't figure out how to do that.
A basic scenario might look like
package protocolexample

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class Example extends Simulation {
  val exampleHttp = http.baseURL("https://example.com/")
  val exampleAccountsHttp = http.baseURL("https://accounts.example.com/")

  val scn = scenario("Signin")
    .exec(
      http("Page 1").get("/page1")
    )
    .exec(
      http("Page 2").get("/page2")
    )
    .exec(
      // This needs to be done against accounts.example.com
      http("Signin").get("/signin")
    )
    .exec(
      // Back to example.com
      http("Page 3").get("/page3")
    )

  setUp(
    scn.inject(
      atOnceUsers(3)
    ).protocols(exampleHttp)
  )
}
I just need to figure out how to either switch the host or the protocol for the 3rd step. I know I can create multiple scenarios, but this needs to be a single user flow across multiple hosts.
I've tried directly using the other protocol
exec(
  // This needs to be done against accounts.example.com
  exampleAccountsHttp("Signin").get("/signin")
)
which results in
protocolexample/example.scala:19: type mismatch;
found : String("Signin")
required: io.gatling.core.session.Session
exampleAccountsHttp("Signin").get("/signin")
and also changing the base URL on the request
exec(
  // This needs to be done against accounts.example.com
  http("Signin").baseUrl("https://accounts.example.com/").get("/signin")
)
which results in
protocolexample/example.scala:19: value baseUrl is not a member of io.gatling.http.request.builder.Http
You can use an absolute URI (including the protocol) as a parameter for Http.get, Http.post and so on.
class Example extends Simulation {
  val exampleHttp = http.baseURL("https://example.com/")

  val scn = scenario("Signin")
    .exec(http("Page 1").get("/page1"))
    .exec(http("Page 2").get("/page2"))
    .exec(http("Signin").get("https://accounts.example.com/signin"))
    .exec(http("Page 3").get("/page3"))

  setUp(scn.inject(atOnceUsers(3))
    .protocols(exampleHttp))
}
see: https://gatling.io/docs/current/cheat-sheet/#http-protocol-urls-baseUrl
baseURL: Sets the base URL of all relative URLs of the scenario on which the configuration is applied.
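The resolution rule described above matches standard URI semantics, which you can check with java.net.URI: a relative path resolves against the base, while an absolute URL simply overrides it.

```scala
import java.net.URI

val base = new URI("https://example.com/")
// Relative path: resolved against the base URL.
println(base.resolve("page1"))                               // https://example.com/page1
// Absolute URL: the base is ignored entirely.
println(base.resolve("https://accounts.example.com/signin")) // https://accounts.example.com/signin
```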
I am currently working in Gatling.
SOLUTION:
What we use if we have one httpBase:
var httpConf1 = http.baseUrl("https://s1.com")
For multiple httpBases:
var httpConf1 = http.baseUrls("https://s1.com", "https://s2.com", "https://s3.com")
You can use the baseUrls function, which takes a list of httpBase URLs.
ANOTHER METHOD:
Read all the httpBases from a CSV file, store them in a ListBuffer (in Scala), and convert it to a List when passing it to http.baseUrls:
var listOfTenants: ListBuffer[String] = new ListBuffer[String] // buffer
var httpConf1 = http.baseUrls(listOfTenants.toList) // converting buffer to list
Hope it helps! Thank you. 😊

Write multiple times (Entities) using RPC proxy class in Google App Engine from Android

I am using the App Engine Connected Android Plugin support and customized the sample project shown at Google I/O, and ran it successfully. I wrote some Tasks from an Android device to the cloud successfully using the code:
CloudTasksRequestFactory factory = (CloudTasksRequestFactory) Util
        .getRequestFactory(CloudTasksActivity.this,
                CloudTasksRequestFactory.class);
TaskRequest request = factory.taskRequest();

TaskProxy task = request.create(TaskProxy.class);
task.setName(taskName);
task.setNote(taskDetails);
task.setDueDate(dueDate);

request.updateTask(task).fire();
This works well, and I have tested it.
What I am trying to do now: I have an array String[][] addArrayServer and want to write all its elements to the server. The approach I am using is:
NoteSyncDemoRequestFactory factory = Util.getRequestFactory(activity, NoteSyncDemoRequestFactory.class);
NoteSyncDemoRequest request = factory.taskRequest();
TaskProxy task;
for (int ik = 0; ik < addArrayServer.length; ik++) {
    task = request.create(TaskProxy.class);
    Log.d(TAG, "inside uploading task:" + ik + ":" + addArrayServer[ik][1]);
    task.setTitle(addArrayServer[ik][1]);
    task.setNote(addArrayServer[ik][2]);
    task.setCreatedDate(addArrayServer[ik][3]);
    // made one task
    request.updateTask(task).fire();
}
One task is uploaded for sure, the first element of the array, but it hangs when making a new instance of a task. I am pretty new to Google App Engine. What's the right way to call the RPC to upload multiple entities really fast?
Thanks.
Well, the answer to this question is that request.fire() can be called only once per request object, but I was calling it every time in the loop, so it would update only once. The simple solution is to call fire() outside the loop.
NoteSyncDemoRequestFactory factory = Util.getRequestFactory(activity, NoteSyncDemoRequestFactory.class);
NoteSyncDemoRequest request = factory.taskRequest();
TaskProxy task;
for (int ik = 0; ik < addArrayServer.length; ik++) {
    task = request.create(TaskProxy.class);
    Log.d(TAG, "inside uploading task:" + ik + ":" + addArrayServer[ik][1]);
    task.setTitle(addArrayServer[ik][1]);
    task.setNote(addArrayServer[ik][2]);
    task.setCreatedDate(addArrayServer[ik][3]);
    // made one task
    request.updateTask(task);
}
request.fire(); // call fire only once after all the actions are done...
For more info check out this question: GWT RequestFactory and multiple requests
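The fix is an instance of the general batching pattern: queue mutations on one context, then flush once. A self-contained Scala sketch with a hypothetical BatchRequest class standing in for the RequestFactory request context (not the real GWT API):

```scala
import scala.collection.mutable.ListBuffer

// Hypothetical stand-in for a RequestFactory request context:
// updates accumulate locally, and fire() sends them in one round trip.
class BatchRequest {
  private val ops = ListBuffer.empty[String]
  private var fired = false

  def update(entity: String): Unit = {
    require(!fired, "request context already fired")
    ops += entity
  }

  // Returns the number of operations sent in the single round trip.
  def fire(): Int = {
    require(!fired, "fire() may only be called once per request context")
    fired = true
    ops.size
  }
}

val request = new BatchRequest
for (name <- Seq("task1", "task2", "task3")) request.update(name) // queue, don't fire
println(request.fire()) // 3 - all entities sent together
```

Calling fire() inside the loop corresponds to the second require failing on the next iteration, which is why only the first entity ever made it to the server.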

Using Google App Engine's Cron service to extract data from a URL

I need to scrape a simple webpage which has the following text:
Value=29
Time=128769
The values change frequently.
I want to extract the Value (29 in this case) and store it in a database. I want to scrape this page every 6 hours. I am not interested in displaying the value anywhere; I am just interested in the cron. Hope I made sense.
Please advise me if I can accomplish this using Google's App Engine.
Thank you!
Please advise me if I can accomplish this using Google's App Engine.
Sure! E.g., in Python: urlfetch (with the URL as argument) to get the contents, then a simple re.search(r'Value=(\d+)', contents).group(1) (if the contents are as simple as you're showing) to get the value, and a db.put to store it. Do you want the Python details spelled out, or do you prefer Java?
Edit: urllib / urllib2 would also be feasible (GAE does support them now).
So cron.yaml should be something like:
cron:
- description: refresh "value"
  url: /refvalue
  schedule: every 6 hours
and app.yaml something like:
application: valueref
version: 1
runtime: python
api_version: 1

handlers:
- url: /refvalue
  script: refvalue.py
  login: admin
You may have other entries in either or both, of course, but this is the subset needed to "refresh the value". A possible refvalue.py might be:
import re
import wsgiref.handlers
from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.api import urlfetch

class Value(db.Model):
    thevalue = db.IntegerProperty()
    when = db.DateTimeProperty(auto_now_add=True)

class RefValueHandler(webapp.RequestHandler):
    def get(self):
        resp = urlfetch.fetch('http://whatever.example.com')
        mo = re.match(r'Value=(\d+)', resp.content)
        if mo:
            val = int(mo.group(1))
        else:
            val = None
        valobj = Value(thevalue=val)
        valobj.put()

def main():
    application = webapp.WSGIApplication(
        [('/refvalue', RefValueHandler)], debug=True)
    wsgiref.handlers.CGIHandler().run(application)

if __name__ == '__main__':
    main()
Depending on what else your web app is doing, you'll probably want to move the Value class to a separate file (e.g. models.py, with other models), which you'll then have to import from this .py file and from any others that do something interesting with your saved values. Here I've taken some possible anomalies into account (no Value= found on the target page) but not others (the target page's server doesn't respond, or returns an error); it's hard to know exactly which anomalies you need to consider and what you want to do when they occur. What I'm doing here is simply recording None as the value at the anomaly's time, but you may want to do more, or less; I'll leave that up to you!
