Extract a .txt file to a .mat file - database

I am working with a publicly available database that contains four files, all of them .txt documents. How can I convert them to .mat format? Here is a simple example:
A.txt file
1 2 3 4 5 6
7 8 9 1 2 3
4 5 6 7 8 9
1 2 3 4 9 8
So I need to form a matrix with 4 rows and 6 columns. The data in the .txt file is separated by a space delimiter, and the rows are separated by newlines. Typically the .txt documents I will handle have sizes such as 130x1000, 3200x58, etc. Can anyone please help me with this? The public database is available at: click link. Please download the dataset under the topic "Multimodal Texture Dataset".

You can load the .txt file into MATLAB:
load audio.txt
then save it:
save audio audio
(The first "audio" is the name of the .mat file; the second "audio" is the name of the variable stored in it.)
Hope this helps.
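If you would rather do the conversion outside MATLAB, here is a minimal Python sketch using NumPy and SciPy (assuming the file is purely numeric and space-delimited, and reusing the A.txt name from the example above):

import numpy as np
from scipy.io import savemat

# Read the space-delimited numeric text file into a 4x6 array.
data = np.loadtxt("A.txt")

# Write A.mat containing a variable named "A"; in MATLAB, "load A.mat" restores it.
savemat("A.mat", {"A": data})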

Related

REGEXEXTRACT number just before specific text in Google Sheets

I need to extract the number that comes right before the text Positions
Example String:
Medical Specialist (Anaestheologist) (4 Positions) at Ministry
Valid Output should be 4
Example String 2 (If text Positions doesn't exist)
Medical Specialist (Anaestheologist) (4) at Ministry
Valid Output
4
I tried:
=REGEXEXTRACT(A24,"\(.*Positions.*\)") but it did not work.
try:
=ARRAYFORMULA(REGEXEXTRACT(A1:A2; "(\d+)(?: Positions)?"))
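As a sanity check, the same pattern can be tried outside Sheets; a quick Python sketch (illustrative only, using the two example strings from the question):

import re

samples = [
    "Medical Specialist (Anaestheologist) (4 Positions) at Ministry",
    "Medical Specialist (Anaestheologist) (4) at Ministry",
]
for s in samples:
    # The optional non-capturing group lets the digits match with or without " Positions".
    print(re.search(r"(\d+)(?: Positions)?", s).group(1))  # prints 4 both times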

Divide a data file into unique data per iteration among multiple virtual users in Gatling

I'm interested in how we can divide test data uniquely among multiple virtual users in Gatling.
Example
Virtual users : 3
Data in file : 9
Assign 3 data lines to each virtual user:
user 1 : dataline 1, dataline 2 ,dataline 3
user 2 : dataline 4, dataline 5,dataline 6
user 3 : dataline 7, dataline 8, dataline 9
This guide is not out yet (it will only come out along with the Gatling 3.7 release), but you can check the doc sources commit for an example of how to do this kind of thing.
Basically, you have to use readRecords to grab all the data from your csv file, and then apply whatever grouping strategy you want.
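The grouping itself is plain collection work; a rough sketch of the idea (shown here in Python rather than Gatling's Scala DSL, with made-up record values standing in for what readRecords would return):

# Split 9 records evenly across 3 virtual users, as in the example above.
records = [f"dataline {i}" for i in range(1, 10)]  # stand-in for readRecords output
users = 3
chunk = len(records) // users

for idx in range(users):
    per_user = records[idx * chunk:(idx + 1) * chunk]
    print(f"user {idx + 1}: {per_user}")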

Unable to decode all information from Oracle RAW data

I have an application where I can upload files and add metadata to them. This metadata is stored in a database, but parts of the added information are encoded somehow (sadly I have no access to the source code).
The raw representation of the metadata in the Oracle database is as follows:
00000009010000000000000000512005B69801505B000000010000000700000040000000010000000A0100000006496D616765000000003C000000010000000A010000000A696D6167652F706E670000000027000000030000000501000000010000000500000001010000000B64653A3132332E706E6700000002A8000000030000000501000000030000000700000001010000000E737461636B6F766572666C6F770000000042000000010000000A010000001844433078303166363565396420307830303033336433640000000A2600000001000000020100033D3D0000003E000000010000000A0100000021346266653539343939343631356333323861613736313431636337346134353900
Whereas the raw sequence
737461636B6F766572666C6F77
corresponds to
stackoverflow
The query
select UTL_RAW.CAST_TO_VARCHAR2(<raw_data>) from dual;
returns a string in which the values of the metadata are readable, but the names/identifiers of the properties are not. The corresponding name/identifier for stackoverflow should be test, or a foreign key to a table that contains test. The other data contains additional information about the file (like the checksum, title, or MIME type).
Is it possible to retrieve the unreadable data (identifier) from the raw string?
RAW columns do not always contain a string; from the results it looks like the content is binary data, more exactly a jpg file, which has a string header among the binary information.
Converting it to a varchar will generate invalid character codes that are represented as rectangular boxes.
What you are doing here with varchar is the equivalent of opening a binary file, e.g. a winword.doc or even a .jpeg, in Notepad.
To be able to get at the content you need to treat it as an image, not as a varchar.
You can obtain the jpg file by using PLSQL as described here:
http://www.dba-oracle.com/t_extract_jpg_image_photo_sql_file.htm
Alternatively, it is possible to get all the content without loss into a character datatype using the following:
select RAWTOHEX(<raw_data>) from dual;
This will return the whole content as a character value containing its hexadecimal equivalent, and it should not contain any invalid ANSI characters (the ones represented as rectangular boxes).
However, you will no longer be able to read "stackoverflow" or any other text, since you will only get a sequence of hex values.
Your program will then need to convert it back to binary/image data and treat it properly.
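On the client side, the hex string returned by RAWTOHEX can be turned back into bytes easily; a minimal Python sketch using the sample sequence from the question:

# Hex for "stackoverflow", taken from the dump above.
raw_hex = "737461636B6F766572666C6F77"
data = bytes.fromhex(raw_hex)
print(data.decode("ascii"))  # -> stackoverflow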
Both "A01" and "101" are used to preface a 4 byte length followed by the Text, which is null terminated
00000009 010000000000000000512005B69801505B000000010000000700000040000000010000000A01
00000006 496D61676500 Image
0000003C 000000010000000A01
0000000A 696D6167652F706E6700 image/png
00000027 00000003000000050100000001000000050000000101
0000000B 64653A3132332E706E6700 de:123.png
000002A8 00000003000000050100000003000000070000000101
0000000E 737461636B6F766572666C6F7700 stackoverflow
00000042 000000010000000A01
00000018 444330783031663635653964203078303030333364336400
D C 0 x 0 1 f 6 5 e 9 d 0 x 0 0 0 3 3 d 3 d
00000A26 00000001000000020100033D3D0000003E000000010000000A01
00000021 346266653539343939343631356333323861613736313431636337346134353900
4 b f e 5 9 4 9 9 4 6 1 5 c 3 2 8 a a 7 6 1 4 1 c c 7 4 a 4 5 9
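Based on that layout (a 4-byte big-endian length followed by a null-terminated string), a single field can be decoded as in this small Python sketch, using the stackoverflow field from the dump above:

# 4-byte length 0000000E (14), then the null-terminated text.
field_hex = "0000000E737461636B6F766572666C6F7700"
buf = bytes.fromhex(field_hex)
length = int.from_bytes(buf[:4], "big")                    # 14
text = buf[4:4 + length].rstrip(b"\x00").decode("ascii")   # drop the trailing null
print(length, text)  # 14 stackoverflow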

Create a new file from a string of text

I need to create a series of files, each named after a line of text in a text document. I am on a Mac and don't know about scripting. Is there a simple utility or a Terminal command where I can drag and drop the file with the text in it and have it create the new files in the same directory?
The text file would say:
Week 1 Session 1
Week 1 Session 2
Week 1 Session 3
Week 2 Session 1
Week 2 Session 2
Week 2 Session 3
Week 3 Session 1
Week 3 Session 2
Week 3 Session 3
Week 4 Session 1
Week 4 Session 2
Week 4 Session 3
And so on for 35 weeks, with 'Week' starting a new line. This is something I would have to make periodically.
Having searched for an answer, I can see plenty of ways to write file names to a text file, but not much the other way around.
I apologise if this appears too simplistic but I would appreciate any help with this.
Thank you for the advice, but I did find out how to do it.
On the Mac you open a Terminal window and type 'touch' followed by the file name(s). If there is a space in the file name, you put it in double inverted commas like this: "file 1.txt", then a space and another quoted name if you want a second file, such as
"file 1.txt" "file 2.txt"
I used Numbers (like Excel) to make all the names and pasted them in.
And that way it all worked very quickly. Low tech I am sure, but quick and useful.
Hopefully that answer will be useful to someone like me in the future.
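If a script is ever wanted instead, a small Python sketch does the same thing (assuming the list is saved as sessions.txt, one file name per line; the name sessions.txt is just an example):

from pathlib import Path

# Each non-empty line of sessions.txt becomes an empty file in the current directory.
for line in Path("sessions.txt").read_text().splitlines():
    name = line.strip()
    if name:
        Path(name).touch()  # append an extension here if needed, e.g. name + ".txt"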

How do I parse a custom file with F#?

I have a custom file which contains data in a format like the one below:
prop1: value1
prop2: value2
prop3: value 2
Table Instance 1
A B C D E
10 11 12 13 14
12 13 11 12 20
Table Instance 2
X Y Z
1 3 4
3 4 0
Table Instance 3
P R S
2 3 5
5 5 0
I want to be able to parse this file and map the contents to a POCO. I was really excited about working with the CSV type provider in F#, but then I quickly realized that it might not be possible to use it in my case. Do I have to write my own parser in this case? (Iterate through each line and parse the values into the appropriate properties of the POCO.)
Thanks
Kay
If that's a one-off file format, I would just write a parser by hand. Split the file into separate tables, throw away the title and header, then String.Split each row and parse the resulting array into a record type specific to the table.
If that file format is more or less standardized and you expect that you'll need to parse it more often and in different contexts (and/or you're feeling adventurous), you can always write your own type provider.
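For illustration, here is a rough sketch of that hand-rolled approach (written in Python for brevity; an F# version would follow the same shape, and the table titles and column letters are taken from the example file above):

def parse(path):
    props = {}   # "prop1" -> "value1", ...
    tables = {}  # "Table Instance 1" -> {"columns": [...], "rows": [[...], ...]}
    current = None
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line:
                continue
            if line.startswith("Table Instance"):
                current = line
                tables[current] = {"columns": None, "rows": []}
            elif current is None:
                key, _, value = line.partition(":")  # "prop1: value1"
                props[key.strip()] = value.strip()
            elif tables[current]["columns"] is None:
                tables[current]["columns"] = line.split()  # header row, e.g. A B C D E
            else:
                tables[current]["rows"].append([int(c) for c in line.split()])
    return props, tables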
