Divide a data file into unique data per virtual user in Gatling

I'm interested in how we can divide test data uniquely among multiple virtual users in Gatling.
Example:
Virtual users: 3
Data lines in file: 9
Assign 3 data lines to each virtual user:
user 1: dataline 1, dataline 2, dataline 3
user 2: dataline 4, dataline 5, dataline 6
user 3: dataline 7, dataline 8, dataline 9

This guide is not out yet (it will only ship along with the Gatling 3.7 release), but you can check the doc sources commit for an example of how to do this kind of thing.
Basically, you have to use readRecords to grab all the data from your CSV file, and then apply whatever grouping strategy you want.
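A minimal sketch of that approach with the Gatling Scala DSL (the data.csv name, the single column called data, the chunk size and the target URL are all illustrative assumptions, not part of the original answer):

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ChunkedCsvSimulation extends Simulation {

  // readRecords loads the whole CSV eagerly as a Seq[Map[String, Any]]
  val records = csv("data.csv").readRecords

  // 9 records / 3 users -> chunks of 3; each feed() call hands one whole chunk
  // to one virtual user, so concurrent users never share a data line
  val recordsPerUser = 3
  val chunkFeeder: Iterator[Map[String, Any]] =
    records.grouped(recordsPerUser).map(chunk => Map("chunk" -> chunk))

  val httpProtocol = http.baseUrl("https://example.org") // hypothetical target

  val scn = scenario("Unique chunk per user")
    .feed(chunkFeeder)
    .foreach("#{chunk}", "record") {
      // "data" is the assumed CSV column name
      exec(http("request").get("/endpoint").queryParam("value", "#{record.data}"))
    }

  setUp(scn.inject(atOnceUsers(3))).protocols(httpProtocol)
}

With atOnceUsers(3) and 9 records, each virtual user consumes exactly one chunk of 3 lines, matching the split described in the question.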

Related

Google Sheets: find the next numeric value from the columns

I have two columns with SKUs: one comes from one database, and the other from another one. As you can see in the visual example, SKUs with values 1, 2, 4 & 5 are present in both databases.
https://i.stack.imgur.com/KlTCr.png
I start with number 1 and I need a formula that would bring up the next valid number (in this case number 2) that is present in both columns.
I would need a formula that does the following:
If I look up 1 it should bring up 2
If I look up 2 it should bring up 4
If I look up 4 it should bring up 5
Thank you in advance.
try:
={FILTER(B2:B16, COUNTIF(E2:E16, B2:B16)),
{QUERY(FILTER(B2:B16, COUNTIF(E2:E16, B2:B16)), "offset 1", ); ""}}

sentiment140 dataset doesn't contain label 2 (i.e. neutral sentences) when loading it from HuggingFace

I want to work with the sentiment140 dataset for a sentiment analysis task, as I saw that it normally contains the following labels:
0 and 4 for negative and positive sentences
2 for neutral sentences
which I found when looking at the dataset on their website:
https://huggingface.co/datasets/sentiment140#data-fields
But after importing it in my notebook, it tells me that it contains just two labels:
0 for neg
4 for pos
So how do I get the full dataset with the three labels?
You are correct that the HuggingFace sentiment140 dataset only contains 2 labels in the training set (0 and 4); however, the test set contains all 3 labels (0, 2 and 4).
You could open a discussion here to ask the authors if this is normal.

Spreading data for one tenant to multiple shards in Solr

I am trying to set up sharding for Solr.
My index size is 100 GB and increasing.
Data is distributed across 32 tenants, where each tenant has a different count of documents:
Tenant 1 : 7 M
Tenant 2 : 3 M
Tenant 3 : 3 M
Tenant 4 : 2 M
Tenant 5-28 : 6 M (combined)
So I was trying to use the bits operation in the shard key, like this:
tenant/2!uniqueId
I have one query here: if I have 5 shards in total, how will this bit number 2 spread the data?
I want to spread the data for one tenant across 2 shards.
Could you please help me identify the minimum number of shards in this setup to achieve this?
Thanks
Virendra Agarwal

Querying REDIS with HMSET

I am using the Redis data store and I have created hashes with HMSET like:
HMSET key:1 source 5 target 2
HMSET key:2 source 3 target 1
HMSET key:3 source 3 target 5
HMSET key:4 source 6 target 2
HMSET key:5 source 2 target 3
Now, I want to query these keys based on a provided list of source and target values. Suppose the list of sources and targets is [2, 3, 6].
I want to have a query like
select from key where source in [2, 3, 6] and target in [2, 3, 6]
which will give me the result like
key:4 source 6 target 2
key:5 source 2 target 3
With a dataset like this (only a few hashes), your only option is to iterate them (either in a Lua script or by fetching them into the app) and do the filtering yourself by inspecting the hashes.
To speed things up, you could maintain secondary indexes (again, the effort is yours). Something like:
SADD source:3 key:2 key:3
SADD target:2 key:1 key:4
Then you can relatively quickly find all matching keys: union the per-value sets for each field with SUNIONSTORE, and intersect the two unions with SINTERSTORE
SUNIONSTORE sources:tmp source:2 source:3 source:6
SUNIONSTORE targets:tmp target:2 target:3 target:6
SINTERSTORE found_keys sources:tmp targets:tmp
You'll have the keys you seek under the found_keys name. (Intersecting all six sets directly would return nothing, since each key sits in exactly one source:* set and one target:* set.)
Although, if you find yourself doing this, you should ask yourself: why don't I just give up and use an SQL-capable database, because I clearly want one.
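That said, if you do go the secondary-index route, here is a minimal sketch of the bookkeeping in Scala with the Jedis client (the client choice, the addRecord helper and the temporary key names are assumptions for illustration, not part of the original answer):

import redis.clients.jedis.Jedis
import scala.jdk.CollectionConverters._

object SecondaryIndexSketch {
  def main(args: Array[String]): Unit = {
    val jedis = new Jedis("localhost", 6379)

    // Write the hash and maintain both secondary indexes in one place
    def addRecord(id: Int, source: Int, target: Int): Unit = {
      val key = s"key:$id"
      jedis.hmset(key, Map("source" -> source.toString, "target" -> target.toString).asJava)
      jedis.sadd(s"source:$source", key)
      jedis.sadd(s"target:$target", key)
    }

    addRecord(1, 5, 2); addRecord(2, 3, 1); addRecord(3, 3, 5)
    addRecord(4, 6, 2); addRecord(5, 2, 3)

    // source in [2, 3, 6] AND target in [2, 3, 6]:
    // union the per-value sets for each field, then intersect the two unions
    jedis.sunionstore("sources:tmp", "source:2", "source:3", "source:6")
    jedis.sunionstore("targets:tmp", "target:2", "target:3", "target:6")
    jedis.sinterstore("found_keys", "sources:tmp", "targets:tmp")

    println(jedis.smembers("found_keys")) // key:4 and key:5, as in the question
  }
}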

Extract a .txt file to a .mat file

I am working with a publicly available database which contains four files; they are all .txt documents. How can I convert them to .mat format? I am giving a simple example:
A.txt file
1 2 3 4 5 6
7 8 9 1 2 3
4 5 6 7 8 9
1 2 3 4 9 8
So I need to form a matrix with 4 rows and 6 columns. The data in the .txt format is separated by a 'space' delimiter. The rows are separated by 'newline'. Typically, the .txt documents that I will handle will have sizes of 130x1000, 3200x58, etc. Can anyone please help me regarding this? The public database is available at: click link. Please download the dataset under the topic "Multimodal Texture Dataset".
You can load the .txt file into MATLAB:
load audio.txt
then save them
save audio audio
(The first "audio" is the name of the .mat file; the second "audio" is the name of the variable stored in it.)
Hope this helps.
