How to read multiple values in one row from a CSV file into the POST body as variables in JMeter - arrays

How can I build a POST HTTP request body from multiple values in a single column of a CSV file?
In the CSV file, under transactional_currencies, I need to insert two or more values as required.
This is the JSON body I need to pass in the POST HTTP request body:
{
"country_name": "${country}",
"status": "${status}",
"transactional_currencies": ["${transactional_currencies[0]}", "${transactional_currencies[1]}"]
}

I'm not sure exactly what you're asking, but let's give it a try.
You should define a delimiter character in your CSV dataset config element.
For example
Single Line in our CSV: sample#sample.com,testusername,testpassword
After adding a CSV data config to your project, open the configuration panel.
Choose "," as your delimiter character, since the line above contains commas.
Choose variable names as "email", "username" and "password"
Now you have split a single CSV line into 3 different variables.
In your request's body, reference these variables as ${email}, ${username}, and ${password}.
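The splitting that the CSV Data Set Config performs can be sketched outside JMeter. A minimal Python illustration of the same logic, using the line and variable names from the example above:

```python
import csv
import io

# A single CSV line, as in the example above
line = "sample#sample.com,testusername,testpassword"

# The CSV Data Set Config splits on the delimiter and binds each
# field to the configured variable names, in order.
names = ["email", "username", "password"]
reader = csv.reader(io.StringIO(line), delimiter=",")
variables = dict(zip(names, next(reader)))

# Substitution into a request body, analogous to ${email} etc.
body = "user={username}&pass={password}&mail={email}".format(**variables)
print(variables["email"])  # sample#sample.com
print(body)
```

JMeter does this binding once per iteration per thread; the sketch only shows the split-and-substitute step.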

Not with the CSV Data Set Config
If you have a fixed number of entries in transactional_currencies (i.e. always 2), you can use the __CSVRead() function, which lets you decide when to advance to the next entry/row.
If the number of entries in transactional_currencies is dynamic, you can go for a JSR223 PreProcessor and build your request body in Groovy, as described in the Apache Groovy - Parsing and producing JSON article.
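The dynamic-body approach boils down to reading the currencies cell, splitting it on a secondary delimiter, and serializing the result as a JSON array. A minimal Python sketch of that logic (in JMeter itself you would write the equivalent in Groovy inside the JSR223 PreProcessor; the "|" sub-delimiter inside the cell is an assumption):

```python
import json

# One CSV row; assume the currencies share a cell, separated by "|"
row = {"country": "France", "status": "active",
       "transactional_currencies": "EUR|USD|GBP"}

body = {
    "country_name": row["country"],
    "status": row["status"],
    # Split the cell into however many entries it contains
    "transactional_currencies": row["transactional_currencies"].split("|"),
}
print(json.dumps(body, indent=2))
```

Because the array is built from the split result, the body adapts automatically whether the cell holds two currencies or ten.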

Related

How to fix an error when uploading an .arff file in WEKA?

Hi, I'm using WEKA for data mining and I have a project based on kids' usage of the internet. I downloaded the data from OpenML in .arff form, and while processing it in Notepad I changed the "," values to "." and "?" to ",". However, when I try to open the file in WEKA I get this error:
"nominal value not declared in header, read line 76"
Line 76 is the first data line after @data.
The ARFF format defines the comma as the separator between columns. Replacing commas with periods essentially turned each row into a single nominal value, which wasn't declared for that column in the header. Nominal attributes require all their possible values to be declared in the header section for that attribute.
What was the reason for converting commas to periods?
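The check WEKA performs can be illustrated: a nominal attribute declares its allowed values in the header, and every @data cell must match one of them. A small Python sketch (the relation and attribute names are made up for illustration):

```python
import re

arff = """\
@relation kids_internet
@attribute usage {low,medium,high}
@data
low
medium.high
"""

# Collect the declared nominal values from the header
header = re.search(r"@attribute \w+ \{([^}]*)\}", arff)
allowed = set(header.group(1).split(","))

# Validate each data line the way WEKA does
data = arff.split("@data\n")[1].strip().splitlines()
errors = [v for v in data if v not in allowed]
print(errors)  # a value containing '.' no longer matches any declared nominal
```

This is exactly why joining two values with a period triggers "nominal value not declared in header": the merged token is not in the declared set.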

SoapUI: how to dynamically generate a request password?

I need to generate the password of the request dynamically, because I need to concatenate it with a timestamp and encode the result with SHA-256 to get the actual password.
Is there a way to generate that password for every request?
Where should the script be created to generate the password, and how can it be added to the request or to a variable that is read in the request?
You have full access to the Groovy language in SoapUI. You can do any sort of coding in a Groovy script test step. Then you can store the resulting value in a property:
testRunner.testCase.setPropertyValue("passwordVar", passwdResult)
And in the request XML you parameterize the value to be read from the property:
<passwordNode>${#TestCase#passwordVar}</passwordNode>
The only catch is that you will have to execute the Groovy step before the SOAP request step, but that can be done at test case level, or in a loop in Groovy, depending on your project structure. I usually have a Groovy script that:
does calculations or SQL to get input values
sets properties
calls the SOAP steps
extracts required response values from the response XML
in a loop.
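The hashing step itself is straightforward. Sketched here in Python (the exact concatenation order and timestamp format depend on your service and are assumptions; in SoapUI you would write the equivalent in the Groovy script step and store the result with setPropertyValue):

```python
import hashlib
import time

def make_password(secret: str, timestamp: str) -> str:
    """Concatenate the base secret and a timestamp, then SHA-256 hex digest."""
    return hashlib.sha256((secret + timestamp).encode("utf-8")).hexdigest()

ts = str(int(time.time()))          # e.g. "1700000000"
pwd = make_password("myBasePassword", ts)
print(pwd)  # 64 hex characters, different for every timestamp
```

The resulting digest is what you would store in the `passwordVar` property so the request can expand it.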

Changing .csv delimiter on ADF

I am trying to load a .csv table to MS SQL Server via Azure Data Factory, but I have a problem with the delimiter (;) since it appears as a character in some of the values included in some columns.
As a result, I get an error saying in the details "found more columns than expected column count".
Is there any way to change the delimiter directly on ADF before/while loading the .csv table (ex.: making it from ";" to "|||")?
Thanks in advance!
I have a problem with the delimiter (;) since it appears as a
character in some of the values included in some columns.
As you have quoted, your delimiter is ; but it also occurs as a character in some of the columns, which means there is no specific pattern to its occurrence. Hence, handling it directly in ADF is not possible.
The recommendation is to write a program in any preferred language (like Python) that iterates over each row of the dataset with logic to replace the delimiter with |||, or to remove the unwanted ; characters, and writes the changes to a new file. You can then ingest this new file in ADF.
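If the stray semicolons do follow some recoverable pattern - for example, they only ever appear in the last column - the rewrite can be done with a bounded split. A minimal Python sketch under that assumption (the column count of 3 is made up for illustration):

```python
# Rewrite a ';'-delimited line as '|||'-delimited, assuming exactly
# three real columns and that extra ';' only occur in the LAST column.
EXPECTED_COLS = 3

def reencode_line(line: str) -> str:
    # Split at most EXPECTED_COLS - 1 times; the remainder (including
    # any embedded ';') stays together as the last column.
    parts = line.rstrip("\n").split(";", EXPECTED_COLS - 1)
    return "|||".join(parts)

sample = "1;John Smith;notes: late; see manager"
print(reencode_line(sample))  # 1|||John Smith|||notes: late; see manager
```

If the extra semicolons can occur in arbitrary columns, no purely mechanical rewrite can recover the boundaries, which is the answer's point.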

Bulk API Error : InvalidBatch : Field name not found : Id

I am trying to update one field - Ext_Id__c - on contact records via the Bulk API. We created the CSV file with two columns, and there are no whitespaces in the header names.
I am creating the job and pushing the batches to it via a simple Java client. The job and batches are created successfully; however, it's not updating the contacts. Instead, it's giving the error below:
BATCH STATUS:
[BatchInfo id='7512D000000XUV0QAO'
jobId='7502D000000KWQuQAO'
state='Failed'
stateMessage='InvalidBatch : Field name not found : LastName'
.......
..........
numberRecordsProcessed='0'
numberRecordsFailed='0'
totalProcessingTime='0'
apiActiveProcessingTime='0'
apexProcessingTime='0'
]
I have all the necessary access at field level for both fields. Can anyone please help?
So the issue was that the CSV file we were uploading had been saved using the "CSV UTF-8 (Comma Delimited) (.csv)" format, and because of this the system wasn't recognizing the first column header as a valid field - not sure why, maybe due to Bulk API v1.0.
As a solution, we saved the file in plain CSV format, i.e. "Comma Separated Values (.csv)", and this resolved the issue!
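The likely underlying cause: Excel's "CSV UTF-8" format prepends a byte order mark (BOM, bytes EF BB BF) to the file, and a naive parser attaches it to the first header name, so the API sees "\ufeffId" instead of "Id". A Python sketch of detecting and stripping it (the sample bytes are made up to mimic such a file):

```python
import csv
import io

# Bytes as Excel's "CSV UTF-8" would write them: BOM + header + one row
raw = b"\xef\xbb\xbfId,Ext_Id__c\r\n0032D00000abc,EXT-1\r\n"

# Decoded naively, the BOM sticks to the first header name
naive_header = raw.decode("utf-8").splitlines()[0].split(",")[0]
print(repr(naive_header))  # '\ufeffId' - not a valid field name

# The 'utf-8-sig' codec strips the BOM if present
rows = list(csv.reader(io.StringIO(raw.decode("utf-8-sig"))))
print(rows[0])  # ['Id', 'Ext_Id__c']
```

Saving as plain CSV avoids the BOM entirely, which is why re-saving the file fixed the error.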

Pentaho kettle transformation - skip 1st line in csv file

I am working on a CSV file whose format is very similar to:
FIRST LINE---FIRST LINE---
deptno,dname,loction
10,ACCOUNTING,NEW YORK
20,RESEARCH,DALLAS
30,SALES,CHICAGO
40,OPERATIONS,BOSTON
Now I want to skip the 1st line when this file is read by the CSV input or text input step.
The 2nd line is the header.
Is there any method or transformation to achieve this requirement?
At the moment I am using the CSV file input step, but I can't find a way.
I'm using Pentaho 5.0.1.
PS: Sorry for my English.
Thanks a lot
You could put your strange string ("FIRST LINE---FIRST LINE---") in the fields tab as if it were a regular header. Then you split all the rows with a Split Fields step.
Otherwise you could use the "Load file content in memory" step and, in the Content tab, enable the rownum field. Then you can use the "Filter rows" step to skip the first one. After this you can write everything to a new CSV file, which should then be correct.
Alternatively, you can define the number of header lines to skip on the Content tab of the Text file input step.
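Outside Pentaho, the same "skip N leading lines, then treat the next line as the header" logic looks like this in Python (the file content is taken from the example above, including its original spelling):

```python
import csv
import io

content = """FIRST LINE---FIRST LINE---
deptno,dname,loction
10,ACCOUNTING,NEW YORK
20,RESEARCH,DALLAS
"""

f = io.StringIO(content)
next(f)                       # skip the stray first line
reader = csv.DictReader(f)    # the second line becomes the header
rows = list(reader)
print(rows[0]["dname"])       # ACCOUNTING
```

This mirrors what the text input step's header/skip settings do: discard the leading lines, then read the header from whatever line comes next.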
