How to pass Gatling session attributes in an exec() invoking another library (generated gRPC code)?

Newbie Gatling+Scala question: I'm using George Leung's gatling-grpc library (which is modeled after the http library) and trying to pass a value from the session (generated in a feeder) into a non-DSL, non-Gatling method call, specifically the calls that populate the gRPC payload object.
Before I start, let me add that it seems I can’t use the sessionFunction (Expression[T]) form of exec, which would resolve my issue:
.exec{ session => { … grpc(…).rpc(…)… }}
…because, AFAICT, the grpc call must be the last thing in the block, or else it’s never evaluated ... yet it can’t be the last thing in the block because there’s no way to coerce it to return a Session object (again, AFAICT).
Therefore, I have to use the ActionBuilder form of exec (grpc(...) returns a Call so this is as designed):
.exec( grpc(…).rpc(…)... )
and this works… until I have a gRPC payload (i.e., non-Gatling) method call to which I need to pass a non-constant value (from a feeder).
In this context, I have no access to a Session object, and the Gatling Expression Language is not applied because the library defining the gRPC types I need to use (to generate the payload) has no knowledge of Gatling.
So, in this fragment:
.header(transactionIdHeader)("${tid}.SAVE")
.payload(Student.newBuilder()
  .setId(GlobalId.newBuilder().setValue("${authid}_${uniqId}").build())
  .build())
)
…the first call evaluates ${tid} because the param in the second parens is Expression[T], and hence is evaluated as Expression Language, but the second call fails to evaluate ${authid} or ${uniqId} because the external, generated library that defines the gRPC type GlobalId has no knowledge of Gatling.
So...
Is there a way to invoke the EL outside of Gatling's DSL?
Or a way to access a Session object via an ActionBuilder?
(I see that the Gatling code magically finds a Session object when I use the sessionFunction form, but I can't see whence it comes — even looking at the bytecode is not illuminating)
Or, turning back to the Expression[T] form of exec, is there a way to have an ActionBuilder return a Session object?
Or, still in the Expression[T] form, I could trivially pass back the existing Session object, if I had a way to ensure the grpc()... expression was evaluated (i.e., imperative programming).
Gatling 3.3.1, Scala 2.12.10
The gatling-grpc library is at phiSgr/gatling-grpc; I'm using version 0.7.0 (com.github.phisgr:gatling-grpc).
(The gRPC Java code is generated from .proto files, of course.)

You need the Gatling-JavaPB integration.
To see that in action, see here.
The .payload method takes an Expression[T], which is an alias for Session => Validation[T]. In plain English, that is a function that constructs the payload from the session with a possibility of failure.
Much of your frustration is not knowing how to get hold of a Session. I hope this clears up the confusion.
In the worst case one can write a lambda to create an expression. But for string interpolation or accessing one single object, Gatling provides an implicit conversion to turn an EL String into an Expression.
The problem is you want to construct well-typed payloads and Gatling's EL cannot help with that. The builders’ setters want a T, but you only have an Expression[T] (either from EL or the $ function). The library mentioned above is created to handle that plumbing.
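For comparison, the "worst case" lambda mentioned above might look roughly like this (a sketch of mine, not from the library's documentation, assuming the generated Student and GlobalId classes and the authid/uniqId feeder keys from the question); the integration below saves you from writing this by hand:

import io.gatling.core.session.Expression

// Hand-rolled Expression[Student]: pull the attributes out of the session,
// then build the payload, propagating failures via Validation.
val studentPayload: Expression[Student] = session =>
  for {
    authid <- session("authid").validate[String]
    uniqId <- session("uniqId").validate[String]
  } yield Student.newBuilder()
    .setId(GlobalId.newBuilder().setValue(s"${authid}_${uniqId}").build())
    .build()

// which can then be passed as .payload(studentPayload)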
After importing com.github.phisgr.gatling.javapb._, you should write the following.
...
.payload(
  Student.getDefaultInstance
    .update(_.getIdBuilder.setValue)("${authid}_${uniqId}")
)
For the sake of completeness, see the warning in Gatling's documentation for why defining actions in .exec(sessionFunction) is not going to work.

Related

How to map array type groups parameter to LTI1p0

I have an LTI Tool Consumer (LMS) that uses LTI 1.0 and will send a request to a service that currently does not use LTI. Therefore I'm writing a NodeJS wrapper which will
receive from the LTI Tool Consumer,
map it to match the service's API,
send it to the service,
then parse the response from the service into an LTI Tool Provider format,
and finally send it back to the Tool Consumer.
The service has a required field called groups which expects an array of group objects like so:
group: [{
  id: <string>,   // id of the group
  name: <string>, // name of the group
  role: <string>  // role of the user
}]
This parameter doesn't exactly exist in the LTI1p0 implementation guide. So I want to know how to best send array-type (groups in my case) information via LTI.
When looking through the docs, I've come across a few potential parameters I could use:
1. Context parameters
The guide mentions that a 'type of context would be "group"', and there are parameters for context_id, context_type and context_title. The issue is that this only allows one group per request/user.
2. Custom parameters
I could make a custom parameter and call it custom_groups which seems simple, but I'm not sure how the value should look for arrays? Just like a stringified json object?
custom_groups = '[{"id": 123, "name": "Group Name", "role": "Instructor"}, {"id": 124, "name": "Group Name 2", "role": "Creator"}]'
For the roles parameter, one can send a comma-separated list of strings (i.e. roles=Instructor,Creator,...), but that wouldn't suffice in my case.
I'm still new to LTI, so my apologies if this is blatantly obvious.
Note: Both LTI Consumer (LMS) and the service are external, i.e. I can't change them and only provide the wrapper. I can communicate with the Tool Consumer about possible custom parameters but again not sure which format to request.
Additionally, the service might implement LTI towards the end of the year, so ideally the wrapper could then be removed and the Tool Consumer wouldn't have to change much.
Any help much appreciated!
Groups are notably absent from the LTI spec. So any answer will be part opinion.
I would agree with you that using the context parameter fields, with one LTI launch per group, would be the most correct way as far as the spec goes.
However I have not seen an LMS that allows LTI launches from group context. So you may not be able to use the service without a wrapper, even if it supported LTI natively.
Alternatively:
LTI 1.0 supports custom parameters. Since you are extending the information already sent (context and roles), you could use the ext_ prefix.
Reference: https://www.imsglobal.org/specs/ltiv1p0/implementation-guide
If a profile wants to extend these fields, they should prefix all fields not described herein with "ext_".
So you could send a custom parameter with that prefix, assuming your LMS lets you send a useful custom parameter. LTI is designed around basic POST requests, not multidimensional JSON objects, but a stringified JSON object is perfectly valid with an appropriate key.
e.g.:
ext_custom_groups = '[{"id": 123, "name": "Group Name", "role": "Instructor"}, {"id": 124, "name": "Group Name 2", "role": "Creator"}]'
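On the wrapper side, handling that value is then straightforward; as a sketch (my own illustration, with launchParams and the downstream payload shape assumed from the question):

// Node-side sketch: read the stringified groups from the launch parameters
// and map them onto the service's expected "groups" array.
const groups = JSON.parse(launchParams.ext_custom_groups || '[]');

const servicePayload = {
  groups: groups.map(g => ({ id: String(g.id), name: g.name, role: g.role }))
};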

Transferring data to/from a callback from/to a worker thread

My current application is a toy web service written in C designed to replicate the behaviour of http://sprunge.us/ (takes data in via http POST, stores it to disk, returns the client a url to the data - also serves data that has been previously stored upon request).
The application is structured such that a thread pool is instantiated with worker threads (just a function pointer that takes a void* parameter) and a socket is opened to listen to incoming connections. The main loop of the program comprises a sock = accept(...) call and then a pool_add_task(worker_function_parse_http, sock) to enable requests to be handled quickly.
The parse_http worker parses the incoming request and either adds another task to the work queue for storing the data (POST) or serving previously stored data (GET).
My problem with this approach stems from the use of the http-parser library, which uses a callback design to return parsed data (all of the http parsers I looked at use this style). The problem I encounter is as follows:
My parse_http worker:
Buffers data from the accepted socket (the function's only parameter, at this stage)
Sets up a http-parser object as per its API, complete with setting callback functions for it to call when it finishes parsing the URL or BODY or whatever. (These functions are of a fixed type signature defined by the http-parser lib, with a pointer to a buffer containing the parsed data relevant to the call, so I can't pass in my own variables and solve the problem that way. These functions also return a status code to the http parser, so I can't use the return values either. The suggested way to get data out of the parser for later use is to copy it out to a global variable during the callback - fun with multiple threads.)
Executes the parser on the buffered socket data. At this stage, the parser is expected to call the callbacks set up above as it parses different sections of the buffer. Each callback is supplied with the parsed data relevant to it (e.g. the BODY segment is supplied to the body_parsed callback function).
Well, this is where the problem shows. The parser has executed, but I don't have any access to the parsed data. Here is where I would add a new task to the queue with a worker function to store the received body data or another to handle the GET request for previously stored data. These functions would need to be supplied with both the parsed information (POST data or GET url) as well as the accepted socket so that the now delegated work can respond to the request and close the connection.
Of course, the obvious solution to the problem is simply to not use this thread-pool model with asynchronous practices, but I would like to know, for now and for later, how best to tackle this problem.
How can I get the parsed data from these callbacks back to the worker thread function? I've considered simply making my on_url_parsed and on_body_parsed callbacks do the rest of the application's job (storing and retrieving data), but of course I no longer have the client's socket to respond to in these contexts.
If needed, I can post up the source code to the project when I get the chance.
Edit: It turns out that it is possible to access a user-defined void * from within the callbacks of this particular http-parser library, as the callbacks are passed a reference to the caller (the parser object), which has a user-definable data field.
A well-designed callback interface would provide for you to give the parser a void * which it would pass on to each of the callback functions when it calls them. The callback functions you provide know what type of object it points to (since you provide both the data pointer and the function pointers), so they can cast and properly dereference it. Among other advantages, this way you can provide for the callbacks to access a local variable of the function that initiates the parse, instead of having to rely on global variables.
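As a rough sketch of that pattern with the http-parser library from the question (the request_ctx struct and parse_request function are my own illustration; only the parser's data field and the callback signature come from the library):

#include <string.h>
#include "http_parser.h"

/* Per-request context handed to the callbacks instead of a global. */
struct request_ctx {
    int sock;            /* accepted client socket */
    char body[8192];     /* parsed body is copied here */
    size_t body_len;
};

static int on_body(http_parser *parser, const char *at, size_t length) {
    struct request_ctx *ctx = parser->data;   /* recover our context */
    if (ctx->body_len + length < sizeof ctx->body) {
        memcpy(ctx->body + ctx->body_len, at, length);
        ctx->body_len += length;
    }
    return 0;   /* 0 tells the parser to continue */
}

/* Called from the worker thread with the buffered socket data. */
static void parse_request(int sock, const char *buf, size_t len) {
    struct request_ctx ctx = { .sock = sock };
    http_parser parser;
    http_parser_settings settings;

    memset(&settings, 0, sizeof settings);
    settings.on_body = on_body;

    http_parser_init(&parser, HTTP_REQUEST);
    parser.data = &ctx;                       /* attach the context */
    http_parser_execute(&parser, &settings, buf, len);

    /* ctx.body and ctx.sock can now be packaged into the next queued task. */
}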
If the parser library you are using does not have such a feature (and you don't want to switch to a better-designed one), then you can probably use thread-local storage instead of global variables. How exactly you would do that depends on your thread library and compiler, or you could roll your own by using thread identifiers as keys to thread-specific slots in some global data structure (a hash table for instance).

What's the best practice for Thrift file (API) versioning?

I have an API written in thrift. Example:
service Api {
  void invoke()
}
It does something. I want to change the behavior to do something else but still keep the old behavior for clients that expect the old behavior.
What's the best practice to handle a new API version?
Soft versioning
Thrift supports soft versioning, so it is perfectly valid to do a version 2 of your service which looks like this:
service Api {
  void invoke(1: string optional_arg1, 2: i32 optional_arg2) throws (1: MyError e)
  i32 number_of_invokes()
}
Since the newly added arguments are technically optional, an arbitrary client's request may or may not contain them, or may contain only some of them (e.g. specify arg1 but not arg2). The exception is a bit different: old clients will raise some kind of generic unexpected exception or similar.
It is even possible to remove an outdated function entirely, in this case old clients will get an exception whenever they try to call the (now non-existing) removed function.
All of the above is similarly true with regard to adding member fields to structures, exceptions etc. Instead of removing declarations from the IDL file, it is recommended to comment out removed member fields and functions to prevent people from re-using old field IDs, old function names or old enum values in later versions.
struct foobar {
  // API 1.0 fields
  1: i32 foo
  //2: i32 bar - obsolete with API 2.0

  // API 2.0 fields
  3: i32 baz
}
required is forever
Where you need to be careful is the use of the keyword required. Once you publish an API with a struct containing a required member, you will need to carry that member along until the structure as a whole is removed. The same is true for adding new required fields later on. Otherwise you risk breaking changes, because mixing old and new clients and servers will sooner or later produce a situation where one end absolutely expects a certain required member field, but the opposite end can't deliver it, simply because it does not know anything about it.
This is not a problem with normal or optional fields, since Thrift is designed to skip over unknown fields (the type ID is contained in the wire data), and just ignore missing fields. In contrast, additional checks are applied for required fields to ensure they are present in the wire data.
Endpoints
Although soft versioning is a great tool, it comes at the cost of accumulating burdens due to the need to stay compatible. Furthermore, in some cases your API will undergo breaking changes, intentionally without being backwards compatible. In that case, it is recommended to set up the new service at a different endpoint.
Alternatively, the multiplex protocol introduced with Thrift 0.9.2 can be used to offer multiple services and/or service versions over the same endpoint (i.e. socket, HTTP URI, ...).
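A sketch of what that could look like with the Java bindings (the service names "Api" and "ApiV2", the handler variables and the port are illustrative assumptions, not part of the original answer):

// Server side: register both service versions on one processor (org.apache.thrift.*).
TMultiplexedProcessor processor = new TMultiplexedProcessor();
processor.registerProcessor("Api", new Api.Processor<>(apiHandler));
processor.registerProcessor("ApiV2", new ApiV2.Processor<>(apiV2Handler));
TServer server = new TSimpleServer(
    new TServer.Args(new TServerSocket(9090)).processor(processor));
server.serve();

// Client side: select the service version by name over the shared connection.
TTransport transport = new TSocket("localhost", 9090);
transport.open();
TProtocol protocol = new TBinaryProtocol(transport);
ApiV2.Client client = new ApiV2.Client(new TMultiplexedProtocol(protocol, "ApiV2"));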
For your specific scenario, you can just add a new method (named something else) that does the new thing. In the future, I would avoid naming methods invoke or similar for this exact reason. Your service will now have a method invoke that does some unknown thing and another method (hopefully named better) that does what it says it does. This may lead to confusion by your users, but everything will still work.

Is there an eval() equivalent in apex/Salesforce?

I've looked into this, and it seems there is no directly related function available since Apex is so strongly typed, but I was wondering if anyone had found a workaround. I'm designing a credit risk object and my client wants to be able to insert expressions such as "150 + 3" instead of "153" when updating fields to help speed things up on her end. Unfortunately, I'm new to salesforce, so I'm having trouble coming up with ideas. Is this even feasible?
You could allow hand-entering of SOQL statements and then use dynamic SOQL to process them. But this would require a bit more than "150 + 3."
Otherwise you could do this in JavaScript and pass the value back to Apex as an already calculated number.
It is possible to mimic a Javascript eval() in Apex by making a callout to the executeAnonymous API method on either the Tooling or Apex API.
The trick is you need to pass any required input parameters in the eval string body. If a response is required you need a mechanism to extract it.
There are two common ways you can get a response back from executeAnonymous.
Throw a deliberate exception at the end of the execute and include the response. Kevin covers this approach in "EVAL() in Apex. Secure Dynamic Code Evaluation on the Salesforce1 Platform".
I used a variation of this approach but returned the response via the debug log rather than an intentional exception. See Adding Eval() support to Apex.
Using my example the Apex would be something like:
integer sum = soapSforceCom200608Apex.evalInteger(
'integer result = 150 + 3; System.debug(LoggingLevel.Error, result);');
You might not be able to perform the callout during member initialisation or in a constructor.
Incidentally, the Salesforce Stackexchange site is a great place to ask Salesforce specific questions.
Script.apex can help with evaluating JavaScript expressions. Check it out: https://github.com/Click-to-Cloud/Script.apex. It is as simple as this:
Integer result = (Integer)ScriptEngine.getInstance().eval('1 + 2');

Dereferencing objects with angularJS' $resource

I'm new to AngularJS and I am currently building a webapp using a Django/Tastypie API. This webapp works with posts, and an API call (GET) looks like:
{
  title: "Bootstrap: wider input field - Stack Overflow",
  link: "http://stackoverflow.com/questions/978...",
  author: "/v1/users/1/",
  resource_uri: "/v1/posts/18/",
}
To request these objects, I created an Angular service which embeds resources declared like the following one:
Post: $resource(ConfigurationService.API_URL_RESOURCE + '/v1/posts/:id/')
Everything works like a charm but I would like to solve two problems:
How do I properly replace the author field by its value? In other words, how do I resolve every reference field as automatically as possible?
How do I cache this value to avoid several calls on the same endpoint?
Again, I'm new to AngularJS and I might have missed something in my understanding of the $resource object.
Thanks,
Maxime.
With regard to question one, I know of no trivial, out-of-the-box solution. I suppose you could use custom response transformers to launch subsidiary $resource requests, attaching promises from those requests as properties of the response object (in place of the URLs). Recent versions of the $resource factory allow you to specify such a transformer for $resource instances. You would delegate to the global default response transformer ($httpProvider.defaults.transformResponse) to get your actual JSON, then substitute properties and launch subsidiary requests from there. And remember, when delegating this way, to pass along the first TWO, not ONE, parameters your own response transformer receives when it is called, even though the documentation mentions only one (I learned this the hard way).
There's no way to know when every last promise has been fulfilled, but I presume you won't have any code that will depend on this knowledge (as it is common for your UI to just be bound to bits and pieces of the model object, itself a promise, returned by the original HTTP request).
As for question two, I'm not sure whether you're referring to your main object (in which case $scope should suffice as a means of retaining a reference) or these subsidiary objects that you propose to download as a means of assembling an aggregate on the client side. Presuming the latter, I guess you could do something like maintaining a hash relating URLs to objects in your $scope, say, and have the success functions on your subsidiary $resource requests update this dictionary. Then you could make the response transformer I described above check the hash first to see if it's already got the resource instance desired, getting the $resource from the back end only when such a local copy is absent.
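A rough sketch of those two ideas combined (my own illustration, not code from the answer; the Post resource and the author field come from the question, while authorCache and the action configuration are assumptions):

// Inside a factory with $resource and $http injected.
var authorCache = {};  // author URI -> promise of the author object

var Post = $resource(ConfigurationService.API_URL_RESOURCE + '/v1/posts/:id/', {}, {
  get: {
    method: 'GET',
    // Run the default transformers first (JSON parsing), then ours.
    transformResponse: $http.defaults.transformResponse.concat([
      function (data, headers) {
        if (data && data.author) {
          if (!authorCache[data.author]) {
            authorCache[data.author] = $http
              .get(ConfigurationService.API_URL_RESOURCE + data.author)
              .then(function (res) { return res.data; });
          }
          data.author = authorCache[data.author];  // promise replaces the URI
        }
        return data;
      }
    ])
  }
});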
This is all a bunch of work (and round trips) to assemble resources on the client side when it might be much easier just to assemble your aggregate in your application layer and serve it up pre-cooked. REST sets no expectations for data normalization.
