JMeter - converting File to String to Array

I need to do the following:
Read a .csv file into a variable. The CSV file has a single row containing a string like (110,111,112,113,114).
Split the content of this string variable on the comma (",").
What I have done:
1. I have added a Thread Group.
2a. Added a 'User Defined Variables' Config Element.
2b. Added a variable named 'issueIds' with the value ${__FileToString(D:\TestCasesId.csv,,issueIds)}
3a. Now I added a JSR223 Sampler with the following code:
String lineItems1 = ${issueIds};
log.info(lineItems1);
3b. Executing this gives the following error:
Response code:500
Response message:javax.script.ScriptException: In file: inline evaluation of: ``String lineItems1 = 114660,114661,114662,114663; log.info(lineItems1); ;'' Encountered "114661" at line 1, column 28.
in inline evaluation of: ``String lineItems1 = 114660,114661,114662,114663; log.info(lineItems1); ;'' at line number 1
4a. Added a BeanShell Sampler with the following script:
String lineItems2 = ${issueIds};
String[] lineItems2Arr = lineItems2.split(",");
log.info(lineItems2);
log.info(lineItems2Arr[0]);
4b. Executing this gives the following error:
Response code:500
Response message:org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval In file: inline evaluation of: ``String lineItems2 = 114660,114661,114662,114663; String[] lineItems2Arr = lineIt . . . '' Encountered "114661" at line 1, column 28.
What am I doing wrong?

You are doing two things wrong:
1. Inlining JMeter functions or variables into scripting elements is not recommended; you should use the vars shorthand for the JMeterVariables class instance instead, like:
String lineItems1 = vars.get("issueIds");
2. Since JMeter 3.1 it is recommended to use JSR223 test elements and the Groovy language for scripting, so consider choosing groovy from the language drop-down. Groovy has much better performance compared to Beanshell, supports all modern Java SDK features, and provides some syntax sugar on top of them; check out the Apache Groovy - Why and How You Should Use It article for more details.
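For example, a minimal JSR223 Sampler script in Groovy (a sketch assuming the issueIds variable is populated via __FileToString as above) could look like:
String lineItems = vars.get("issueIds")       // read the variable populated by __FileToString
String[] lineItemsArr = lineItems.split(",")  // split on the comma
log.info("Total ids: " + lineItemsArr.length)
log.info("First id: " + lineItemsArr[0])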

If the number of comma-separated fields is the same for all the CSV files you use, you can consider using the 'CSV Data Set Config' element instead of splitting manually. In that case you will have a separate variable for each column in the CSV, e.g. with a file like:
id1,id2,id3,id4,id5
110,111,112,113,114
each column becomes available as a separate variable (${id1} through ${id5}).

Related

Apache Camel - Yaml DSL - Expression returning custom datatype

I am using the Camel SFTP kamelet to process files over SFTP. When the SFTP source downloads a file from the SFTP server, we need to set the file length in a header.
I have used set-header to set the value of the file length, and it works except for the data type: we expect the value of the header to be LONG, but the simple expression returns the STRING data type. How can I return the LONG data type from a simple expression (or any other expression)?
Does the YAML DSL support a result type in the simple expression?
You can use result-type.
However, you then need to use the verbose syntax to be able to set multiple options on the simple expression, something like:
- set-header:
    name: test
    simple:
      expression: "${body}"
      result-type: "long"
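For comparison, the compact form of the simple expression takes only the expression string, so there is nowhere to attach result-type; a sketch of that compact form:
- set-header:
    name: test
    simple: "${body}"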

Gatling .sign issue

I am trying to build a GET request as follows, and I would like the CaseReference value to be populated via the feeder .feed(CaseProviderSeq), but for some reason it is not picking up the CaseReference value and prints the following for my println statement in the .sign block below:
PATH KJ: /caseworkers/554355/jurisdictions/EMPLOYMENT/case-types/Manchester_Multiples/cases/$%7BCaseReference%7D/event-triggers/updateBulkAction_v2/token
My feeder CSV currently has the following rows:
1574761472170530
1574622770056940
so I am expecting the amended URL to look like:
/caseworkers/554355/jurisdictions/EMPLOYMENT/case-types/Manchester_Multiples/cases/1574761472170530/event-triggers/updateBulkAction_v2/token
Any idea what I am doing wrong here?
.get(session => SaveEventUrl.replace(":case_reference", "${CaseReference}").replaceAll("events", "") + s"event-triggers/${EventId}/token")
.header("ServiceAuthorization", s2sToken)
.header("Authorization", userToken)
.header("Content-Type", "application/json")
.sign(new SignatureCalculator {
  override def sign(request: Request): Unit = {
    val path = request.getUri.getPath
    println("PATH KJ: " + path)
    request.getHeaders.add("uri", path)
  }
})
This is not related to .sign, but to your session attribute CaseReference not being interpreted. If you look closely you can see the braces %-encoded in $%7BCaseReference%7D.
Interpretation of Gatling Expression Language strings happens only when a String is present where an Expression[Something] is needed [1].
The bug you wrote is exactly the one shown in the warning in the documentation above.
I believe you can simply remove session => in your .get, so you are passing in a String rather than a Session => String [2]. That String will be implicitly converted to Expression[String], and that way Gatling will put the session attribute into the URL.
[1] This happens because of a Scala implicit conversion.
[2] In fact it is Session => Validation[String] because, again, of implicit conversions.
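As an illustration only, a sketch of the corrected call (assuming SaveEventUrl and EventId are defined as in the question):
// Pass a plain String: Gatling's implicit conversion turns it into an
// Expression[String] and resolves ${CaseReference} from the session at runtime.
.get(SaveEventUrl.replace(":case_reference", "${CaseReference}").replaceAll("events", "") + s"event-triggers/${EventId}/token")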

How to set a context variable with dot in name?

I am trying to add a context data variable (CDV) which has a dot in its name. According to the Adobe site, this is correct:
s.contextData['myco.rsid'] = 'value'
Unfortunately, after calling s.t() the variable is split into two or more:
Context Variables
myco.:
  rsid: value
.myco:
How can I set the variable and prevent splitting it into pieces?
You are setting it properly already. If you are referring to what you see in the request URL, that's how the Adobe library sends it. In your example, "myco" is a namespace and "rsid" is a variable in that namespace, and you can have other variables in that namespace. For example, if you have:
s.contextData['myco.rsid1'] = 'value';
s.contextData['myco.rsid2'] = 'value';
You would see in the AA request URL (just showing the relevant part):
c.&myco.&rsid1=value&rsid2=value&.myco&.c
I assume you are asking because you want to more easily parse/QA AA collection request URLs from the browser network tab, an extension, or some unit tester? There is no way to force AA not to behave like this when you use dot syntax (namespaces) in your variables.
But there isn't anything particularly special about using namespaces for your contextData variables; they are just there for your own organization if you choose to use them. So if you want all variables to be "top level" and show their full names in the request URL, do not use dot syntax.
If you still want some measure of organization/hierarchy, I suggest you use an underscore _ instead:
s.contextData['myco_rsid1'] = 'value';
s.contextData['myco_rsid2'] = 'value';
Which will give you:
c.&myco_rsid1=value&myco_rsid2=value&.c
Side Note: You cannot use full object/dot notation syntax with s.contextData, e.g.
s.contextData = {
  foo: 'bar',      // <--- this will properly parse
  myco: {          // this will not properly parse
    rsid: 'value'  //
  }                //
};
The AA library does not parse this correctly; it just loops through the top-level properties of contextData when building the request URL. So if you use full object syntax like the above, you will end up with:
c.&foo=bar&myco=%5Bobject%20Object%5D&&.c
foo would be okay, but you end up with just myco with "[object Object]" as the recorded value. Why Adobe didn't allow full object syntax and simply JSON.stringify(s.contextData)? ¯\_(ツ)_/¯
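If you do prefer writing nested objects in your own code, a small hypothetical helper (not part of the AA library) can flatten them into the dot-namespaced keys described above before assigning them to s.contextData:
// Hypothetical helper: flatten a nested object into dot-namespaced keys,
// since the AA library only reads top-level properties of contextData.
function flattenContextData(obj, prefix, out) {
  out = out || {};
  for (var key in obj) {
    var name = prefix ? prefix + '.' + key : key;
    if (obj[key] !== null && typeof obj[key] === 'object') {
      flattenContextData(obj[key], name, out);
    } else {
      out[name] = obj[key];
    }
  }
  return out;
}

s.contextData = flattenContextData({ foo: 'bar', myco: { rsid: 'value' } });
// yields { 'foo': 'bar', 'myco.rsid': 'value' }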

Locating a dynamic string in a text file

Problem:
Hello, I have been struggling recently in my programming endeavours. I have managed to receive the output below from Google Speech-to-Text, but I cannot figure out how to draw data from this block.
Excerpt 1:
[VoiceMain]: Successfully initialized
{"result":[]}
{"result":[{"alternative":[{"transcript":"hello","confidence":0.46152416},{"transcript":"how low"},{"transcript":"how lo"},{"transcript":"how long"},{"transcript":"Polo"}],"final":true}],"result_index":0}
[VoiceMain]: Successfully initialized
{"result":[]}
{"result":[{"alternative":[{"transcript":"hello"},{"transcript":"how long"},{"transcript":"how low"},{"transcript":"howlong"}],"final":true}],"result_index":0}
Objective:
My goal is to extract the string "hello" (without the quotation marks) from the first transcript of each block and assign it to a variable. The problem is that I do not know in advance what the phrase will be; instead of "hello", it may be a string of any length. Even if it is a different string, I would still like to assign it to the same variable that the phrase "hello" would have been assigned to.
Furthermore, I would like to extract the number after the word "confidence". In this case it is 0.46152416. The data type does not matter for the confidence variable. The confidence value appears to be more difficult to extract from the blocks because it may or may not be present. If it is not present, it must be ignored; if it is present, it must be detected and stored in a variable.
Also please note that this text block is stored in a file named "CurlOutput.txt".
All help or advice related to solving this problem is greatly appreciated.
You could do this with regex, but I am assuming you will want to use this as a dict later in your code, so here is a Python approach to building the result as a dictionary.
import json

with open('CurlOutput.txt') as f:
    lines = f.read().splitlines()

flag = '{"result":[]} '

for line in lines:  # Loop through each line in the file
    if flag in line:  # check if this is a line with data on it
        results = json.loads(line.replace(flag, ''))['result']  # Load data as a dict

        # If you just want to change the first index of alternative:
        # results[0]['alternative'][0]['transcript'] = 'myNewString'

        # If you want to check all alternatives for confidence and transcript:
        for result in results[0]['alternative']:  # Loop over each alternative
            transcript = result['transcript']
            confidence = None
            if 'confidence' in result:
                confidence = result['confidence']
            # now do whatever you want with confidence and transcript
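If you would rather take the regex route mentioned above, a minimal sketch (assuming the same CurlOutput.txt layout shown in the excerpt) could look like this:
import re

first_transcript = re.compile(r'"transcript":"([^"]*)"')
confidence_re = re.compile(r'"confidence":([0-9.]+)')

with open('CurlOutput.txt') as f:
    for line in f:
        if '"alternative"' not in line:
            continue  # skip the [VoiceMain] lines and empty results
        phrase = first_transcript.search(line).group(1)  # first transcript in the block, e.g. "hello"
        match = confidence_re.search(line)
        confidence = float(match.group(1)) if match else None  # confidence may be absent
        print(phrase, confidence)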

Golang CSV error bare " in non-quoted-field

I haven't had trouble parsing CSV files for my GAE Golang app until this week (I updated to appengine 1.9.23 last week). Now, regardless of file content, I am getting this error:
2015/07/09 15:25:34 http: panic serving 127.0.0.1:50352: line 1, column 22: bare " in non-quoted-field
Even when the file content doesn't contain any " characters at all, the error occurs.
Does anyone know why my files can no longer be parsed? Either something changed or I'm doing something super-stupid.
P.S. I am using urlfetch to obtain the CSV file.
This happens when the CSV file contains a stray " (double quote) value.
To avoid this error, use the LazyQuotes parameter, like this:
csvFile, _ := os.Open("file.csv")
reader := csv.NewReader(bufio.NewReader(csvFile))
reader.Comma = ';'
reader.LazyQuotes = true // tolerate a bare " inside a non-quoted field
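A more complete sketch with imports and error handling (the file name and the ';' delimiter are placeholders carried over from the snippet above):
package main

import (
    "bufio"
    "encoding/csv"
    "fmt"
    "log"
    "os"
)

func main() {
    csvFile, err := os.Open("file.csv") // placeholder file name
    if err != nil {
        log.Fatal(err)
    }
    defer csvFile.Close()

    reader := csv.NewReader(bufio.NewReader(csvFile))
    reader.Comma = ';'       // adjust to your delimiter
    reader.LazyQuotes = true // tolerate a bare " in a non-quoted field

    records, err := reader.ReadAll()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(records)
}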
After much ado, I determined that the hosting company had updated DotDefender, which introduced a rule to block .csv/.tsv arg.
If the CSV decode library follows RFC 4180:
"If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped by preceding it with another double quote."
For example:
"aaa","b""bb","ccc"
