How to export many variables of a simulation result to csv? - arrays

I'm trying to export my results from Dymola to Excel through a CSV file, but I have many results. How can I write them using an array?
I tried to create a for loop, but I don't know how to write the code:
if (time) <= 0 then
Modelica.Utilities.Files.removeFile("tube_0.02"+".csv");
Modelica.Utilities.Streams.print("temps," +
"delta_fr1,"+
"delta_fr2,"+
"delta_fr3,",
"tube_0.02"+".csv");
else
Modelica.Utilities.Streams.print(String(time) +
"," + String( my_code_tube1[1].delta_fr)+
"," + String( my_code_tube1[2].delta_fr)+
"," + String( my_code_tube1[3].delta_fr),
"tube_0.02"+".csv");
end if;
Instead of having to write delta_fr1, delta_fr2... and then my_code_tube1[1].delta_fr, my_code_tube1[2].delta_fr..., I need to create a for loop, because I will have almost 1500 variables to export.

Trying to write the results to a CSV file during simulation comes with some troubles in Modelica:
For which time steps do you want to write the result? Using the solver step size is not possible,
but we can define a periodic output using the sample() function.
Calling print is quite expensive, so you don't want to do it too many times while your simulation runs.
The print function always adds a newline after every call. That requires us to write every line at once (but given the performance limits mentioned before, we should write as much text as possible at once anyhow).
Unfortunately, the default maximum string length in Dymola is quite small, limited to 500 characters.
You have to ensure that the current Dymola instance uses an appropriately large value, e.g. by setting Advanced.MaxStringLength = 50000.
With these things in mind, we could come up with a code like below:
model SO_print_for
My_code_tube my_code_tube1[1500];
protected
String line;
String f="tube_0.02" + ".csv";
model My_code_tube
Real delta_fr=1;
end My_code_tube;
initial algorithm
line := "temps";
for i in 1:size(my_code_tube1,1) loop
line := line + ", delta_fr" + String(i);
end for;
Modelica.Utilities.Files.removeFile(f);
Modelica.Utilities.Streams.print(line, f);
algorithm
when sample(0, 0.1) then
line := String(time);
for i in 1:size(my_code_tube1,1) loop
line := line + ", "+String(my_code_tube1[i].delta_fr);
end for;
Modelica.Utilities.Streams.print(line, f);
end when;
end SO_print_for;
The code works, but it will slow down your simulation, as many time events are generated (from the sample() function).
Instead of writing a csv during the simulation, you should consider one of the following ways to convert the result file after the simulation has finished:
Use Export Result in the Dymola variable browser to export all or only the plotted variables to CSV. This must be done manually after every simulation and cannot be scripted.
Use the function DataFiles.convertMATtoCSV. The following two lines of code will extract the 1500 delta_fr variables from the .mat result file:
vars = {"my_code_tube1["+String(i)+"].delta_fr" for i in 1:1500}
DataFiles.convertMATtoCSV("your-model.mat", vars, "out.csv")
Use MATLAB, Octave or FreeMat to open the .mat result file and convert it to a file format which Excel understands.
Dymola provides dymload.m to import .mat result files into MATLAB. See the Dymola User Manual Volume 1 for details.
Use Python to convert the results to a CSV file, e.g. using the package DyMat or SDF-Python, both of which can read the result files.

Related

Create 2d array from csv file in python 3

I have a .csv file which contains data like the following example
A,B,C,D,E,F,
1,-1.978,7.676,7.676,7.676,0,
2,-2.028,6.081,6.081,6.081,1,
3,-1.991,6.142,6.142,6.142,1,
4,-1.990,65.210,65.210,65.210,5,
5,-2.018,8.212,8.212,8.212,5,
6,54.733,32.545,32.545,32.545,6,
..and so on
The format is constant.
I want to load the file in a variable "log" and access them as log[row][column]
example
log[0][2] should give C
log[3][1] should give -1.991
If I use this code
file = open('log.csv')
log = list(file)
I get only a one-dimensional list: log[row].
Is there any way to store them directly as a 2D structure?
for example
read the file
split with '\n'
split again with ','
Thanks!!
Try this:
log = [line.strip().split(',') for line in open('log.csv')]
(The strip() removes the trailing newline, so the last field of each row isn't polluted with '\n'.)
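If you'd rather not hand-roll the splitting, the standard library's csv module handles quoting and line endings for you. A minimal sketch, using an in-memory sample in place of log.csv:

```python
import csv
import io

# In-memory stand-in for log.csv (same shape as the question's data);
# with a real file, use: with open('log.csv', newline='') as f:
sample = io.StringIO(
    "A,B,C,D,E,F,\n"
    "1,-1.978,7.676,7.676,7.676,0,\n"
    "2,-2.028,6.081,6.081,6.081,1,\n"
    "3,-1.991,6.142,6.142,6.142,1,\n"
)

log = list(csv.reader(sample))
print(log[0][2])  # C
print(log[3][1])  # -1.991
```

Note that every row ends with an empty string because of the trailing comma in the file, and all values come back as strings; convert them with float() where needed.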

Import XML objects in batches

I'm working on a PowerShell script that deals with a very large dataset. I have found that it runs very well until the available memory is consumed. Because of how large the dataset is and what the script does, it has two arrays that become very large. The original array is around half a gigabyte, and the final object is easily six or seven gigabytes in memory. My idea is that it should work better if I'm able to release rows as they are done and run the script in increments.
I am able to split the imported XML using a function I've found and tweaked, but I'm not able to change the data actually contained in the array.
This is the script I'm using to split the array into batches currently: https://gallery.technet.microsoft.com/scriptcenter/Split-an-array-into-parts-4357dcc1
And this is the code used to import and split the results.
# Import object which should have been prepared beforehand by the query
# script. (QueryForCombos.ps1)
$SaveObj = "\\server\share$\me\Global\Scripts\Resultant Sets\LatestQuery.xml"
$result_table_import = Import-Clixml $SaveObj
if ($result_table_import.Count -gt 100000) {
$result_tables = Split-Array -inArray $result_table_import -size 30000;
} else {
$result_tables = Split-Array -inArray $result_table_import -parts 6
}
And then of course there is the processing script which actually uses the data and converts it as desired.
For large XML files, I don't think you want to read it all into memory, as is required with an XmlDocument or Import-Clixml. You should look at XmlTextReader as one way to process the XML file a bit at a time.
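The streaming idea itself is language-neutral. As an illustration only (the element names below are made up, not taken from the question's data), Python's xml.etree.ElementTree.iterparse visits elements as they are parsed, and clearing each finished element keeps memory flat no matter how big the file is:

```python
import io
import xml.etree.ElementTree as ET

# Toy document standing in for a multi-gigabyte export; the <row> elements
# are hypothetical -- substitute whatever element your file actually repeats.
doc = io.BytesIO(b"<rows>" + b"".join(
    b"<row id='%d'/>" % i for i in range(5)) + b"</rows>")

count = 0
for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "row":
        count += 1   # process one record at a time here
        elem.clear() # release the element so memory stays bounded
print(count)  # 5
```

In PowerShell the equivalent pattern is an XmlReader loop (created via System.Xml.XmlReader's Create method) that reads node by node instead of loading the whole DOM.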

converting large text file to database

Background
I am not a programmer or technical person.
I have a project where I need to convert a large text file to an Access database.
The text file is not in a traditional flat-file format, so I need some help pre-processing it.
The files are large (millions of records), between 100 MB and 1 GB, and seem to be choking all of the editors I have tried (WordPad, Notepad, Vim, EmEditor).
The following is a sample of the source text file:
product/productId:B000H9LE4U
product/title: Copper 122-H04 Hard Drawn Round Tubing, ASTM B75, 1/2" OD, 0.436" ID, 0.032" Wall, 96" Length
product/price: 22.14
review/userId: ABWHUEYK6JTPP
review/profileName: Robert Campbell
review/helpfulness: 0/0
review/score: 1.0
review/time: 1339113600
review/summary: Either 1 or 5 Stars. Depends on how you look at it.
review/text: Either 1 or 5 Stars. Depends on how you look at it.1 Star because they sent 6 feet of 2" OD copper pipe.0 Star because they won't accept returns on it.5 stars because I figure it's actually worth $12-15/foot and since they won't take a return I figure I can sell it and make $40-50 on this deal
product/productId: B000LDNH8I
product/title: Bacharach 0012-7012 Sling Psychrometer, 25°F to 120°F, red spirit filled
product/price: 84.99
review/userId: A19Y7ZIICAKM48
review/profileName: T Foley "computer guy"
review/helpfulness: 3/3
review/score: 5.0
review/time: 1248307200
review/summary: I recommend this Sling Psychrometer
review/text: Not too much to say. This instrument is well built, accurate (compared) to a known good source. It's easy to use, has great instructions if you haven't used one before and stores compactly.I compared prices before I purchased and this is a good value.
Each line represents a specific attribute of a product, starting at "product/productId:"
What I need
I need to convert this file to a character-delimited format (I think the # symbol will work) by stripping out each of the codes (i.e. product/productId:, product/title:, etc.), replacing them with #, and removing the line feeds.
I want to eliminate the review/text: line.
The output would look like this:
B000H9LE4U#Copper 122-H04 Hard Drawn Round Tubing, ASTM B75, 1/2" OD, 0.436" ID, 0.032" Wall, 96" Length#22.14#ABWHUEYK6JTPP#Robert Campbell#0/0#1.0#1339113600#Either 1 or 5 Stars. Depends on how you look at it.
B000LDNH8I#Bacharach 0012-7012 Sling Psychrometer, 25°F to 120°F, red spirit filled#84.99#A19Y7ZIICAKM48#T Foley "computer guy"#3/3#5.0#1248307200#I recommend this Sling Psychrometer
B000LDNH8I#Bacharach 0012-7012 Sling Psychrometer, 25°F to 120°F, red spirit filled#84.99#A3683PMJPFMAAS#Spencer L. Cullen#1/1#5.0#1335398400#A very useful tool
I now would have a flat file delimited with "#" that I can easily import into access.
Sorry for the ramble. I am open to suggestions, but I don't understand programming well enough to write this with an editor's macro language. Thanks in advance.
This is a method I just put together, and it comes with no guarantee. It reads the sample data you provided and writes it out in the format you need.
Public Sub ReadFileAndSave(filePath As String, breakIdentity As String, Optional sepStr As String = "#")
'******************************************************************************
' Opens a large TXT File, reads the data until EOF on the Source,
' then reformats the data to be saved on the Destination
' Arguments:
' ``````````
' 1. The Source File Path - "C:\Users\SO\FileName.Txt" (or) D:\Data.txt
' 2. The element used to identify new row - "new row" (or) "-" (or) "sam"
' 3. (Optional) Separator - The separator you wish to use. Defaults to '#'
'*******************************************************************************
Dim newFilePath As String, strIn As String, tmpStr As String, lineCtr As Long
'The Destination file is stored in the same drive with a suffix to the source file name
newFilePath = Replace(filePath, ".txt", "-ReFormatted.txt")
'Open the SOURCE file for Read.
Open filePath For Input As #1
'Open/Create the DESTINATION file for Write.
Open newFilePath For Output As #2
'Loop the SOURCE till the last line.
Do While Not EOF(1)
'Read one line at a time.
Line Input #1, strIn
'If it is a blank/empty line SKIP.
If Len(strIn) > 1 Then
lineCtr = lineCtr + 1
'Create a String of the same ID.
tmpStr = tmpStr & Trim(Mid(strIn, InStr(strIn, ":") + 1)) & sepStr
'If a new row needs to be inserted, the BREAK IDENTITY is analyzed.
If InStr(strIn, breakIdentity) <> 0 And lineCtr > 1 Then
'Once the new row is triggered, dump the line in the Destination.
Print #2, Left(tmpStr, Len(tmpStr) - Len(Mid(strIn, InStr(strIn, ":") + 1)) - 1) & vbCrLf
'Prepare the NEXT ROW
tmpStr = Trim(Mid(strIn, InStr(strIn, ":") + 1)) & sepStr
End If
End If
Loop
'Print the last line
Print #2, Left(tmpStr, Len(tmpStr) - 1) & vbCrLf
'Close the files.
Close #1
Close #2
End Sub
Again, this code works on my system, but I have not tested it on bulk data, so it might be slower on yours. Let me know if it works alright for you.
I'm not sure I understand how you want to map your text file to database fields.
That's the first thing you need to decide.
Once you've done that I'd suggest putting your text file into columns corresponding to the database columns. Then you should be able to import it into Access.
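If a scripting language is an option, the preprocessing described above is also a few lines of Python. This is only a sketch of the approach, run here on shortened stand-in records (a real 1 GB file should be streamed line by line with open()); the field order and the dropped review/text line follow the question:

```python
# Field prefixes in the order they appear per record; review/text is
# recognized but dropped, as the question asks.
FIELDS = ["product/productId:", "product/title:", "product/price:",
          "review/userId:", "review/profileName:", "review/helpfulness:",
          "review/score:", "review/time:", "review/summary:"]

def records_to_lines(lines, sep="#"):
    out, current = [], {}
    for line in lines:
        line = line.strip()
        for prefix in FIELDS + ["review/text:"]:
            if line.startswith(prefix):
                # a new productId starts the next record: flush the old one
                if prefix == "product/productId:" and current:
                    out.append(sep.join(current.get(f, "") for f in FIELDS))
                    current = {}
                if prefix != "review/text:":  # skip the review body
                    current[prefix] = line[len(prefix):].strip()
                break
    if current:  # flush the final record
        out.append(sep.join(current.get(f, "") for f in FIELDS))
    return out

# Shortened stand-in data in the question's format
sample = """product/productId: B000H9LE4U
product/title: Copper Tubing
product/price: 22.14
review/userId: ABWHUEYK6JTPP
review/profileName: Robert Campbell
review/helpfulness: 0/0
review/score: 1.0
review/time: 1339113600
review/summary: Either 1 or 5 Stars.
review/text: long review body to drop
product/productId: B000LDNH8I
product/title: Sling Psychrometer
product/price: 84.99
review/userId: A19Y7ZIICAKM48
review/profileName: T Foley
review/helpfulness: 3/3
review/score: 5.0
review/time: 1248307200
review/summary: I recommend this
"""
for row in records_to_lines(sample.splitlines()):
    print(row)
```

The resulting #-delimited lines can then be imported into Access as a plain delimited text file.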

open text file, modify text, place into sql database with groovy

I have a text file that has a large grouping of numbers (137mb text file) and am looking to use groovy to open the text file, read it line-by-line, modify the numbers, and then place them into a database (as strings). There are going to be 2 items per line that need to be written to separate database columns, which are related.
My text file looks as such:
A.12345
A.14553
A.26343
B.23524
C.43633
C.23525
So the flow would be:
Step 1.The file is opened
Step 2.Line 1 is read
Step 3.Line 1 is split into letter/number pair [:]
Step 4.The number is divided by 10
Step 5.Letter is written to letter data base (as string)
Step 6.Number is written to number database (as string)
Step 7.Letter:number pair is also written to a separate comma separated text file.
Step 8.Proceed to next line (line 2)
Output text file should look like this:
A,1234.5
A,1455.3
A,2634.3
B,2352.4
C,4363.3
C,2352.5
Database for numbers should look like this:
1:1234.5
2:1455.3
3:2634.3
4:2352.4
5:4363.3
6:2352.5
*lead numbers are database index locations, for relational purpose
Database for letters should look like this:
1:A
2:A
3:A
4:B
5:C
6:C
*lead numbers are database index locations, for relational purpose
I have been able to do most of this; the issue I am running into is not being able to use the .eachLine { line -> } method correctly... and I have NO clue how to output the values to the databases.
There is one more thing I am quite dense about, and that is the case where the script encounters an error. The text file has TONS of entries (around 9,000,000), so I am wondering if there is a way to make it so that if the script fails, or anything else happens, I can restart it from the last modified line.
Meaning, if the script hits an error (my computer gets shut down somehow) and stops after completing the modification of line 125122 of the text file, how do I make it resume at line 125123 when I start the script the second time?
Here is my sample code so far:
//openfile
myFile = new File("C:\\file.txt")
//set fileline to target
printFileLine = { it }
//set target to argument
numArg = myFile.eachLine( printFileLine )
//set argument to array split at "."
numArray = numArg.split(".")
//set int array for numbers after the first char, which is a letter
def intArray = numArray[2] { it as int } as int
//set string array for numbers after the first char, which is a letter
def letArray = numArray[1] { it as string }
//No clue how to write to a database or file... or do the persistence thing.
Any help would be appreciated.
I would use a loop to cycle over every line within the text file, I would also use Java methods for manipulating strings.
def file = new File('C:\\file.txt')
StringBuilder sb = new StringBuilder();
file.eachLine { line ->
//set StringBuilder to new line
sb.setLength(0);
sb.append(line);
//format string: "A.12345" -> "A,1234.5"
sb.setCharAt(1, ',');
sb.insert(6, '.'); //decimal point goes before the last digit (divide by 10)
}
You could then write each line out to a new text file. You could use a simple counter (e.g. counter = 0, then counter++) to store the latest line that has been read/written, and use that if an error occurs. If you are regularly getting crashes, you could also catch possible errors within a try/catch statement.
This guide should give you a good start with working with a database (presuming SQL).
Warning, all of this code is untested and should hopefully give you more direction. There are probably many other ways to solve this differently, so keep an open mind.
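For the restart-from-last-line requirement, a small checkpoint file that records the last completed line number is usually enough. A sketch of the idea in Python (the logic transfers directly to Groovy; the file names are made up for the demo):

```python
import os
import tempfile

def transform(line):
    # "A.12345" -> ("A", "1234.5"): split at the dot, divide the number by 10
    letter, digits = line.strip().split(".")
    return letter, "%.1f" % (int(digits) / 10)

def process(in_path, out_path, checkpoint_path):
    # Resume after the last successfully processed line, if a checkpoint exists
    done = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as cp:
            done = int(cp.read() or 0)
    with open(in_path) as src, open(out_path, "a") as dst:
        for lineno, line in enumerate(src, start=1):
            if lineno <= done:
                continue  # already handled in a previous run
            letter, number = transform(line)
            dst.write("%s,%s\n" % (letter, number))
            # record progress so a crash restarts at lineno + 1
            with open(checkpoint_path, "w") as cp:
                cp.write(str(lineno))

# demo on a throwaway directory
d = tempfile.mkdtemp()
src = os.path.join(d, "in.txt")
out = os.path.join(d, "out.csv")
cp = os.path.join(d, "checkpoint.txt")
with open(src, "w") as f:
    f.write("A.12345\nA.14553\nB.23524\n")
process(src, out, cp)
print(open(out).read())
```

Rewriting the checkpoint after every line is slow for ~9,000,000 lines; in practice, update it every few thousand lines and accept redoing that small window after a crash.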

How to load text file into sort of table-like variable in Lua?

I need to load file to Lua's variables.
Let's say I got
name address email
There is space between each. I need the text file that has x-many of such lines in it to be loaded into some kind of object - or at least the one line shall be cut to array of strings divided by spaces.
Is this kind of job possible in Lua, and how should I do it? I'm pretty new to Lua, but I couldn't find anything relevant on the Internet.
You want to read about Lua patterns, which are part of the string library. Here's an example function (not tested):
function read_addresses(filename)
local database = { }
for l in io.lines(filename) do
local n, a, e = l:match '(%S+)%s+(%S+)%s+(%S+)'
table.insert(database, { name = n, address = a, email = e })
end
return database
end
This function just grabs three substrings made up of nonspace (%S) characters. A real function would have some error checking to make sure the pattern actually matches.
To expand on uroc's answer:
local file = io.open("filename.txt")
if file then
for line in file:lines() do
local name, address, email = unpack(line:split(" ")) --unpack turns a table like the one given (if you use the recommended version) into a bunch of separate variables
--do something with that data
end
else
end
--you'll need a split method, i recommend the python-like version at http://lua-users.org/wiki/SplitJoin
--not providing here because of possible license issues
This however won't cover the case that your names have spaces in them.
If you have control over the format of the input file, you will be better off storing the data in Lua format as described here.
If not, use the io library to open the file and then use the string library like:
local f = io.open("foo.txt")
while 1 do
local l = f:read()
if not l then break end
print(l) -- use the string library to split the string
end