The code:
package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class StarWarsBasicExample extends Simulation {

  // 1 Http Conf
  val httpConf = http.baseUrl("https://swapi.dev/api/films/")

  // 2 Scenario Definition
  val scn = scenario("Star Wars API")
    .exec(http("Get Number")
      .get("4")
      .check(jsonPath("$.episode_id")
        .saveAs("episodeId"))
    )
    .exec(session => {
      val movie = session("episodeId").as[String]
      session.set("episode", movie)
    }).pause(4)
    .exec(http("$episode")
      .get("$episode"))

  // 3 Load Scenario
  setUp(
    scn.inject(atOnceUsers(1))
  ).protocols(httpConf)
}
I'm trying to grab a variable from the first GET request and inject it into a second request, but I'm unable to do so despite following the documentation. There might be something I'm not understanding.
When I use breakpoints and step through the process, it appears the session function executes AFTER both of the other requests have completed (by which time it is too late). I can't seem to make that session execution happen between the two requests.
Already answered on Gatling's community mailing list.
"$episode" is not correct Gatling Expression Language syntax. "${episode}" is correct.
I know how to create a source stream from the entity of a POST request, but I also want to be able to create a source stream from the query parameters of a GET request.
I know I can map query parameters to a case class via an as[] directive, but it seems like a miss to have to wrap that in a Source in order to stream it.
The query parameters that are part of the URL are not "streamed" from the client; rather, they are part of the request line. Therefore, when you have an HttpRequest object in memory you have already consumed enough space to hold the query parameters. This means that you lose any back-pressure benefits from using a Source. I recommend analyzing why you want to create a Source in the first place...
If you absolutely have to create a Source out of the parameters, then you can use the parameterSeq directive:
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.Source

val route =
  parameterSeq { params: Seq[(String, String)] =>
    val parameterSource: Source[(String, String), _] = Source(params)
    complete("ok") // consume parameterSource as needed before completing the request
  }
I am using the following code to parse a URL in a Google App Engine script:
import webapp2
from urlparse import urlparse, parse_qs

def parse_url(url):
    parsed_url = urlparse(url)
    params = parse_qs(parsed_url.query)
    return params

class Handler(webapp2.RequestHandler):
    def get(self):
        url = self.request.path_info
        params = parse_url(url)
        self.response.write(params)
params is always None after calling the function.
However, when using the exact same parsing code inside the handler (not as a function), the parsing works and I get a non-empty dictionary in params.
Any idea why it could happen?
The self.request.path_info value is not the full URL you need to pass down to urlparse() to properly extract the params: it has the query string stripped off, which is why you get no parameters. It doesn't work inside the handler either; you may have made some additional changes since you tried that.
To get the parameters using your parse_url(), pass the full URL:
url = self.request.url
params = parse_url(url)
But you should note that all this is rather unnecessary: webapp2 already has a parameter parser included, except it returns a MultiDict. From Request data:
params
A dictionary-like object combining the variables GET and POST.
All you have to do is convert it to a real dict identical to the one your parse_url() produces:
self.response.write(dict(self.request.params))
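Putting it together, a minimal sketch of the corrected handler (reusing your parse_url() unchanged):
class Handler(webapp2.RequestHandler):
    def get(self):
        # self.request.url keeps the query string, unlike path_info
        params = parse_url(self.request.url)
        # or skip the helper entirely and use webapp2's built-in parsing:
        # params = dict(self.request.params)
        self.response.write(params)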
Scenario: I've been stuck on this for way too long and I think the solution might be easy, but I just can't see it. This is the scenario:
cURL POST to http://localhost:8080/my_imports (raw JSON data on body)
->
MyImportsCustomHandler (extends ThreadedHttpRequestHandler) [Validations]
->
MyObjectProcessor (extends Processor) [JSON deserialize and data massage]
->
MyFirstDocumentProcessor (extends DocumentProcessor) [Set some fields and save]
The problem is that execution never reaches MyFirstDocumentProcessor, likely because the request didn't start from the document_api endpoints (intentionally).
There are no errors thrown; the processing route just never reaches the document processor chain. I think it should, because in MyObjectProcessor I'm doing:
DocumentType type =
localDocHandler.getDocumentTypeManager().getDocumentType("my_doc");
DocumentId id = new DocumentId("id:default:my_doc::2");
Document document = new Document(type, id);
DocumentPut docPut = new DocumentPut(document);
Processing proc = com.yahoo.docproc.Processing.of(docPut);
I got this idea from here: https://github.com/vespa-engine/vespa/blob/master/docproc/src/test/java/com/yahoo/docproc/util/SplitterJoinerTestCase.java
but in that test I see the line splitter.process(p);, and I'm not able to find a suitable replacement that works inside a Processor; in that context I only have the Request, Execution and DocumentProcessingHandler.
I hope somebody versed in Vespa can shine some light on this; it's just the last hop in the processing chain that I can't bridge :|
To write documents from Java code, you need to use the Document Access API:
http://docs.vespa.ai/documentation/document-api-guide.html#document-access
A working solution is in https://github.com/vespa-engine/sample-apps/pull/44
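For reference, a rough sketch of what writing the document could look like with that API (based on the Document Access guide linked above; whether you create the DocumentAccess yourself or have it injected into your component depends on your application setup):
import com.yahoo.document.DocumentPut;
import com.yahoo.documentapi.DocumentAccess;
import com.yahoo.documentapi.SyncParameters;
import com.yahoo.documentapi.SyncSession;

// 'document' is the Document you already built in MyObjectProcessor
DocumentAccess access = DocumentAccess.createDefault();
SyncSession session = access.createSyncSession(new SyncParameters.Builder().build());
session.put(new DocumentPut(document));
session.destroy();
access.shutdown();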
I wrote a function using dbListTables from the DBI package that throws a warning I cannot understand. When I run the same code outside of a function, I don't get the warning message.
For info, the database used is Microsoft SQL Server.
Reproducible example
library(odbc)
library(DBI)
# dbListTables in a function: gives a warning message
dbListTablesTest <- function(dsn, userName, password){
  con <- dbConnect(
    odbc::odbc(),
    dsn = dsn,
    UID = userName,
    PWD = password,
    Port = 1433,
    encoding = "latin1"
  )
  availableTables <- dbListTables(con)
}

availableTables <-
  dbListTablesTest(
    dsn = "myDsn"
    ,userName = myLogin
    ,password = myPassword
  )

# dbListTables not within a function works fine (no warnings)
con2 <- dbConnect(
  odbc::odbc(),
  dsn = "myDsn",
  UID = myLogin,
  PWD = myPassword,
  Port = 1433,
  encoding = "latin1"
)
availableTables <- dbListTables(con2)
(Incidentally, I realise I should use dbDisconnect to close a connection after working with it, but that seems to throw similar warnings, so for the sake of simplicity I've omitted dbDisconnect.)
The warning message
When executing the code above, I get the following warning message when using the first option (via a function), but I do not get it when using the second option (no function).
warning messages from top-level task callback '1'
Warning message:
Could not notify connection observer. trying to get slot "info" from an object of a basic class ("character") with no slots
The warning is clearly caused by dbListTables, because it disappears when I omit that line from the above function.
My questions
Why am I getting this warning message?
More specifically why am I only getting it when calling dbListTables via a function?
What am I doing wrong / should I do to avoid it?
My session info
R version 3.4.2 (2017-09-28)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
Matrix products: default
locale:
[1] LC_COLLATE=Dutch_Belgium.1252 LC_CTYPE=Dutch_Belgium.1252 LC_MONETARY=Dutch_Belgium.1252 LC_NUMERIC=C LC_TIME=Dutch_Belgium.1252
attached base packages:
[1] stats graphics grDevices utils datasets tools methods base
other attached packages:
[1] DBI_0.7 odbc_1.1.3
loaded via a namespace (and not attached):
[1] bit_1.1-12 compiler_3.4.2 hms_0.3 tibble_1.3.4 Rcpp_0.12.13 bit64_0.9-7 blob_1.1.0 rlang_0.1.2
Thanks in advance for any help!
TL;DR: calling odbc::dbConnect within another function causes this warning.
After a lot of digging in the odbc GitHub repository, I have found the source of the warning. Calling dbConnect creates a database connection. Within this function is the following code:
# perform the connection notification at the top level, to ensure that it's had
# a chance to get its external pointer connected, and so we can capture the
# expression that created it
if (!is.null(getOption("connectionObserver"))) { # nocov start
  addTaskCallback(function(expr, ...) {
    tryCatch({
      if (is.call(expr) && identical(expr[[1]], as.symbol("<-"))) {
        # notify if this is an assignment we can replay
        on_connection_opened(eval(expr[[2]]), paste(
          c("library(odbc)", deparse(expr)), collapse = "\n"))
      }
    }, error = function(e) {
      warning("Could not notify connection observer. ", e$message, call. = FALSE)
    })
    # always return false so the task callback is run at most once
    FALSE
  })
} # nocov end
This warning call should look familiar. This is what generates the warning. So why does it do that?
The snippet above is trying to do some checking on the connection object, to see if everything went well.
It does that by adding a function that performs this check to the task callback list. This is a list of functions that get executed after a top-level task is completed. I am not 100% sure on this, but from what I can tell, this means that these functions are executed after the highest function in the call stack finishes.
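As a toy illustration of the mechanism (plain base R, nothing odbc-specific):
addTaskCallback(function(expr, value, ok, visible) {
  message("top-level task finished: ", deparse(expr))
  FALSE  # returning FALSE removes the callback after one run
})
x <- 1 + 1  # completing this top-level assignment triggers the callback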
Normally, that top-level task would be a line in your script. So for example:
library(odbc)
con <- odbc::dbConnect(odbc::odbc(), ...)
After the assignment in the second line is finished, the following function is executed:
function(expr, ...) {
  tryCatch({
    if (is.call(expr) && identical(expr[[1]], as.symbol("<-"))) {
      # notify if this is an assignment we can replay
      on_connection_opened(eval(expr[[2]]), paste(
        c("library(odbc)", deparse(expr)), collapse = "\n"))
    }
  }, error = function(e) {
    warning("Could not notify connection observer. ", e$message, call. = FALSE)
  })
}
The top-level expression gets passed to the function and used to check if the connection works. Another odbc function called on_connection_opened then does some checks. If this throws an error anywhere, the warning is given, because of the tryCatch.
So why would the function on_connection_opened crash?
The function takes the following arguments:
on_connection_opened <- function(connection, code)
and one of the first things it does is:
display_name <- connection@info$dbname
Which seems to match the warning message:
trying to get slot "info" from an object of a basic class ("character") with no slots
From the name of the argument, it is clear that the function on_connection_opened expects a database connection object in its first argument. What does it get from its caller? eval(expr[[2]])
This is the lefthand side of the original call: con
In this case, this is a connection object and everything is nice.
Now we have enough information to answer your questions:
Why am I getting this warning message?
Your function creates a connection, which queues up the connection check. It then fetches the list of tables and returns that. The check function then interprets the list of tables as if it were a connection, tries to check it, and fails miserably. This throws the warning.
More specifically why am I only getting it when calling dbListTables via a function?
dbListTables is not the culprit, dbConnect is. Because you are calling it from within a function, the callback doesn't get back the connection object it is trying to check, and fails.
What am I doing wrong / should I do to avoid it?
A workaround would be to open a connection separately and pass that into your function. This way the connection gets opened in its own call, so the check works properly.
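For example, reusing the connection settings from your question:
con <- dbConnect(odbc::odbc(), dsn = "myDsn", UID = myLogin, PWD = myPassword,
                 Port = 1433, encoding = "latin1")

dbListTablesTest <- function(con) {
  dbListTables(con)
}

availableTables <- dbListTablesTest(con)
dbDisconnect(con)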
Alternatively, you can remove the TaskCallback again:
before <- getTaskCallbackNames()
con <- odbc::dbConnect(odbc::odbc(), ...)
after <- getTaskCallbackNames()
removeTaskCallback(which(!after %in% before))
Is running on_connection_opened essential? What does it do exactly?
As explained by the package's creator in this comment on GitHub, the function handles displaying the connection in the Connections tab in RStudio. This is not that interesting to look at if you close the connection in the same function again, so it is not essential for your function.