Optimize a file writing operation in OCaml?

Basically, in my project I am trying to write a list of strings into a file like this:
val mutable rodata_list : (string * string) list = []
.....
let zip1 ll =
  List.map (fun (h, e) -> h ^ e) ll in
let oc = open_out_gen [Open_append; Open_creat] 0o666 "final_data.s" in
List.iter (fun l -> Printf.fprintf oc "%s\n" l) (zip1 rodata_list);
Here is my problem: rodata_list can reach a length of 800,000, and the above code takes about 3.5 seconds on our server (64-bit, 32-core Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz). The OCaml version I use is 4.01.0.
This is not acceptable, especially as I have 4 pieces of code like this writing to files. In total they could take over 15 seconds.
I tried this:
Printf.fprintf oc "%s\n" (String.concat "\n" (zip1 rodata_list));
But there was no obvious improvement.
So I am wondering: how can I optimize this part? I'd appreciate any solutions. Thank you!

- Don't use ^ to concatenate a bunch of strings in performance-critical code, as it leads to quadratic complexity;
- Try not to rely on the *printf functions when performance matters (although in OCaml 4.02 they are pretty fast);
- Don't apply several iterations to a list in a row, since OCaml doesn't do deforestation; try to do as many operations per iteration as possible;
- If you're using lists of a million elements, then you're actually doing something wrong; try a different data structure.
So, given the advice above, we have the following:
List.iter (fun (x, y) ->
    output_string oc x;
    output_string oc y;
    output_char oc '\n') rodata_list
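If you do want to batch the output rather than emit it piece by piece, a Buffer avoids the quadratic cost of ^. This is only a sketch: write_rodata is a made-up name and the initial size is a guess.
let write_rodata oc ll =
  (* Accumulate everything in one growable buffer, then write it in one call. *)
  let buf = Buffer.create 65536 in
  List.iter (fun (h, e) ->
      Buffer.add_string buf h;
      Buffer.add_string buf e;
      Buffer.add_char buf '\n') ll;
  Buffer.output_buffer oc buf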
Also, any optimization should start from profiling. To get a profile you need to compile with profiling info, for example like this:
ocamlbuild myprogram.p.native
Then you can run the program to collect the profile, which can be read with gprof. My guess is that you will spend most of the time not in the actual IO, or even in the concatenation, but in garbage collection, since your zip will create millions of strings.
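Concretely, the whole cycle looks like this (the profiled binary writes gmon.out when it exits; the program name follows the command above):
$ ocamlbuild myprogram.p.native
$ ./myprogram.p.native
$ gprof myprogram.p.native gmon.out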
How fast it should be
To prove that you're actually trying to optimize the wrong part of your code, I wrote this small program:
let rec init_rev acc = function
  | 0 -> acc
  | n -> init_rev (("hello", "world") :: acc) (n-1)

let () = List.iter (fun (x, y) ->
    print_string x;
    print_endline y) (init_rev [] 1000_000)
It creates a list of one million elements and outputs it:
$ ocamlbuild main.native
$ time ./main.native > data.txt
real 0m0.998s
user 0m0.211s
sys 0m0.783s
This is on a MacBook laptop. Moreover, we spend most of the time in the system, with only about 200 ms in OCaml. And a simple loop over 1,000,000 iterations without creating a list takes only 11 ms.
So, profile.

Related

How can I repeatedly read in shuffled lines of a large data file in Haskell?

I have a data file of 60k lines, where each line has ~1k comma-separated Ints (that I want to immediately turn into Doubles).
I want to iterate over a sequence of random "batches" of 32 lines, where a batch is a random subset of all of the lines, and none of the batches share lines in common. Since there are 60k lines and 32 lines per batch, there should be 1875 batches.
I'm open to changing things if necessary, but I'd like them to be in the form of a list (of batches) that's lazily evaluated. The code that needs this is a foldM, where I'm using it like:
resulting_struct <- foldM fold_fn my_struct batch_list
so that it repeatedly calls fold_fn on the result of the current accumulator my_struct and the next element of batch_list.
I'm very confused. It was easy when I didn't need to shuffle them; I simply read them in and chunked them, and they were evaluated lazily, so I had no problems. Now I'm completely stuck and feel like I must be missing something simple.
I've tried the following:
Reading the file into a list of lines and naively shuffling the input. This doesn't work: readFile is lazily evaluated, but shuffling randomly needs the whole file in memory, and it quickly eats up all my ~8 GB of RAM.
Getting the length of the file, and then creating a list of batches of shuffled indices from 0 to 60k that correspond to the line numbers that will be selected to form the batches. Then, when I want to actually get the data batches, I do:
ind_batches <- get_shuffled_ind_batches_from_file fname
batch_list <- mapM (get_data_batch_from_ind_batch fname) ind_batches
where:
get_shuffled_ind_batches_from_file :: String -> IO [[Int]]
get_shuffled_ind_batches_from_file fname = do
    contents <- get_contents_from_file fname -- uses readFile, returns [[Double]]
    let n_samps = length contents
        ind     = [0 .. (n_samps - 1)]
    shuffled_indices <- shuffle_list ind
    let shuffled_ind_chunks = take 1800 $ chunksOf 32 shuffled_indices
    return shuffled_ind_chunks
get_data_batch_from_ind_batch :: String -> [Int] -> IO [[Double]]
get_data_batch_from_ind_batch fname ind_chunk = do
    contents <- get_contents_from_file fname
    let data_batch = get_elems_at_indices contents ind_chunk
    return data_batch
shuffle_list :: [a] -> IO [a]
shuffle_list xs = do
    ar <- newArray n xs
    forM [1..n] $ \i -> do
        j <- randomRIO (i, n)
        vi <- readArray ar i
        vj <- readArray ar j
        writeArray ar j vi
        return vj
  where
    n = length xs
    newArray :: Int -> [a] -> IO (IOArray Int a)
    newArray n xs = newListArray (1, n) xs
get_elems_at_indices :: [a] -> [Int] -> [a]
get_elems_at_indices my_list ind_list = (map . (!!)) my_list ind_list
however, it seems like mapM evaluates immediately, which then tries to read the file contents repeatedly (I think; the RAM blows up, anyway).
A bit more searching told me that I could try using unsafeInterleaveIO to make it so it lazily evaluates an action, so I tried sticking it in like so:
get_data_batch_from_ind_batch :: String -> [Int] -> IO [[Double]]
get_data_batch_from_ind_batch fname ind_chunk = unsafeInterleaveIO $ do
    contents <- get_contents_from_file fname
    let data_batch = get_elems_at_indices contents ind_chunk
    return data_batch
but no luck, same problem as above.
I feel like I've been banging my head against the wall here and must be missing something very simple. Someone suggested using streams or conduits instead, but when I looked at the documentation for them, it wasn't really clear to me how I could use them to solve this problem.
How can I read in a large data file and also shuffle it, without using up all my memory?
hGetContents will return the contents of the file lazily, but if you do much of anything with the result you will realize the whole file at once. I suggest reading the file once, and scanning over it for newlines, so that you can build an index of which chunk starts at which byte offset. That index will be quite small, so you can shuffle it easily. Then you can iterate through the index, each time opening the file and reading only a defined sub-range of it, and parsing only that one chunk.
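A minimal sketch of that approach, assuming strict ByteString IO and newline-terminated lines (the names lineIndex and readLineAt are made up for illustration; the (offset, length) pairs play the role of the line numbers above, so you can shuffle and chunk them the same way):
import qualified Data.ByteString.Char8 as BS
import System.IO

-- One pass over the file to record where each line starts and how long it is.
lineIndex :: FilePath -> IO [(Integer, Int)]
lineIndex fname = do
    contents <- BS.readFile fname
    let nls    = BS.elemIndices '\n' contents
        starts = 0 : map (+1) nls          -- a line starts just after a '\n'
        lens   = zipWith (-) nls starts    -- length excludes the '\n'
    return (zip (map fromIntegral starts) lens)

-- Re-open the file and read just one line's bytes using the index.
readLineAt :: FilePath -> (Integer, Int) -> IO BS.ByteString
readLineAt fname (off, len) =
    withFile fname ReadMode $ \h -> do
        hSeek h AbsoluteSeek off
        BS.hGet h len
Since the index is just 60k small tuples, shuffling it costs almost nothing, and each batch then needs only 32 short reads.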

Code becomes slower as more boxed arrays are allocated

Edit: It turns out that things generally (not just array/ref operations) slow down the more arrays have been created, so I guess this might just be measuring increased GC times and might not be as strange as I thought. But I'd really like to know (and learn how to find out) what's happening here, and whether there's some way to mitigate this effect in code that creates lots of smallish arrays. The original question follows.
In investigating some weird benchmarking results in a library, I stumbled upon some behavior I don't understand, though it might be really obvious. It seems that the time taken for many operations (creating a new MutableArray, reading or modifying an IORef) increases in proportion to the number of arrays in memory.
Here's the first example:
module Main where

import Control.Monad
import qualified Data.Primitive as P
import Control.Concurrent
import Data.IORef
import Criterion.Main
import Control.Monad.Primitive (PrimState)

main = do
    let n = 100000
    allTheArrays <- newIORef []
    defaultMain $
        [ bench "array creation" $ do
              newArr <- P.newArray 64 () :: IO (P.MutableArray (PrimState IO) ())
              atomicModifyIORef' allTheArrays (\l -> (newArr:l, ()))
        ]
We're creating a new array and adding it to a stack. As criterion takes more samples and the stack grows, array creation takes more time, and this growth appears linear and regular.
Even more odd, IORef reads and writes are affected: we can see atomicModifyIORef' getting faster, presumably as more arrays are GC'd.
main = do
    let n = 1000000
    arrs <- replicateM n (P.newArray 64 () :: IO (P.MutableArray (PrimState IO) ()))
    -- print $ length arrs -- THIS WORKS TO MAKE THINGS FASTER
    arrsRef <- newIORef arrs
    defaultMain $
        [ bench "atomic-mods of IORef" $
            -- nfIO $ -- OR THIS ALSO WORKS
            replicateM 1000 $
                atomicModifyIORef' arrsRef (\(a:as) -> (as, ()))
        ]
Either of the two commented lines gets rid of this behavior, but I'm not sure why (maybe after we force the spine of the list, the elements can actually be collected).
Questions
What's happening here?
Is it expected behavior?
Is there a way I can avoid this slowdown?
Edit: I assume this has something to do with GC taking longer, but I'd like to understand more precisely what's happening, especially in the first benchmark.
Bonus example
Finally, here's a simple test program that can be used to pre-allocate some number of arrays and time a bunch of atomicModifyIORefs. This seems to exhibit the slow IORef behavior.
import Control.Monad
import System.Environment
import qualified Data.Primitive as P
import Control.Concurrent
import Control.Concurrent.Chan
import Control.Concurrent.MVar
import Data.IORef
import Criterion.Main
import Control.Exception (evaluate)
import Control.Monad.Primitive (PrimState)
import qualified Data.Array.IO as IO
import qualified Data.Vector.Mutable as V
import System.CPUTime
import System.Mem (performGC)

main :: IO ()
main = do
    [n] <- fmap (map read) getArgs
    arrs <- replicateM n (P.newArray 64 () :: IO (P.MutableArray (PrimState IO) ()))
    arrsRef <- newIORef arrs
    t0 <- getCPUTimeDouble
    cnt <- newIORef (0 :: Int)
    replicateM_ 1000000 $
        (atomicModifyIORef' cnt (\n -> (n+1, ())) >>= evaluate)
    t1 <- getCPUTimeDouble
    -- make sure these stick around
    readIORef cnt >>= print
    readIORef arrsRef >>= (flip P.readArray 0 . head) >>= print
    putStrLn "The time:"
    print (t1 - t0)
A heap profile with -hy shows mostly MUT_ARR_PTRS_CLEAN, which I don't completely understand.
If you want to reproduce it, here is the cabal file I've been using:
name:                small-concurrency-benchmarks
version:             0.1.0.0
build-type:          Simple
cabal-version:       >=1.10

executable small-concurrency-benchmarks
  main-is:             Main.hs
  build-depends:       base >=4.6
                     , criterion
                     , primitive
  default-language:    Haskell2010
  ghc-options:         -O2 -rtsopts
Edit: Here's another test program that can be used to compare the slowdown for heaps of the same size made of arrays vs. [Integer]. It takes some trial and error adjusting n and observing profiling output to get comparable runs.
main4 :: IO ()
main4 = do
    [n] <- fmap (map read) getArgs
    let ns = [(1 :: Integer) .. n]
    arrsRef <- newIORef ns
    print $ length ns
    t0 <- getCPUTimeDouble
    mapM (evaluate . sum) (tails [1 .. 10000])
    t1 <- getCPUTimeDouble
    readIORef arrsRef >>= (print . sum)
    print (t1 - t0)
Interestingly, when I test this I find that the same heap size's worth of arrays affects performance to a greater degree than [Integer] does. E.g.:
          Baseline   20M    200M
Lists:    0.7        1.0    4.4
Arrays:   0.7        2.6    20.4
Conclusions (WIP)
This is most likely due to GC behavior.
But mutable unboxed arrays seem to lead to more severe slowdowns (see above). Setting +RTS -A200M brings the performance of the array version in line with the list version, supporting the idea that this has to do with GC.
The slowdown is proportional to the number of arrays allocated, not to the total number of cells in the arrays. Here is a set of runs showing, for a test similar to main4, the effect of the number of arrays allocated on both the time taken to allocate and a completely unrelated "payload". This is for 16777216 total cells (divided amongst however many arrays):
Array size   Array create time   Time for "payload":
8            3.164               14.264
16           1.532               9.008
32           1.208               6.668
64           0.644               3.78
128          0.528               2.052
256          0.444               3.08
512          0.336               4.648
1024         0.356               0.652
And running this same test on 16777216*4 cells shows payload times basically identical to the above, only shifted down two places.
From what I understand about how GHC works, and looking at (3), I think this overhead might come simply from having pointers to all these arrays sticking around in the remembered set, and whatever overhead that causes for the GC.
You are paying linear overhead every minor GC per mutable array that remains live and gets promoted to the old generation. This is because GHC unconditionally places all mutable arrays on the mutable list and traverses the entire list every minor GC. See https://ghc.haskell.org/trac/ghc/ticket/7662 for more information, as well as my mailing list response to your question: http://www.haskell.org/pipermail/glasgow-haskell-users/2014-May/024976.html
I think you're definitely seeing GC effects. I had a related issue in cassava (https://github.com/tibbe/cassava/issues/49#issuecomment-34929984) where the GC time was increasing linearly with increasing heap size.
Try to measure how the GC time and mutator time increase as you hold on to more and more arrays in memory.
You can reduce GC time by playing with the +RTS options. For example, try setting -A to your L3 cache size.
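For example, with the benchmark binary from the cabal file above (the argument and the -A value are illustrative; -s prints GC and mutator times so you can compare runs):
$ ./small-concurrency-benchmarks 100000 +RTS -s -A8m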

Haskell Constant Propagation on Data Structures?

I want to know how deeply Haskell evaluates data structures at compile time.
Consider the following list:
simpleTableMultsList :: [Int]
simpleTableMultsList = [n*m | n <- [1 ..9],m <- [1 ..9]]
This list gives a representation of the multiplication table for 1 through 9. Now, suppose we want to change it so that we represent the product of two one-digit numbers as a pair of numbers (first digit, second digit). Then we may consider
simpleTableMultsList :: [(Int,Int)]
simpleTableMultsList = [(k `div` 10, k `rem` 10) | n <- [1 ..9],m <- [1 ..9],let k = n*m]
Now we can implement multiplication on one digit numbers as a table lookup. YAY!! However, we want to be more efficient than this! So we want to make this structure an unboxed array. Haskell gives a really great way to do this using
import qualified Data.Array.Unboxed as A
Then we can do:
simpleTableMults :: A.Array (Int,Int) (Int,Int)
simpleTableMults = A.listArray ((1,1),(9,9)) simpleTableMultsList
Now if I want a constant-time multiplication of two one-digit numbers n and m, I can do:
simpleTableMults ! (n,m)
This is great! Now suppose I compile this module we've been working on. Does simpleTableMults get fully evaluated, so that when I run the computation simpleTableMults ! (n,m) the program literally makes a lookup in memory ... or does it have to build the data structure in memory first? Since it is an unboxed array, my understanding is that the Array must be created at once and is completely strict in its elements, so that all the elements of the array are fully evaluated.
So really my question is: when does this evaluation occur, and can I force it to occur at compile time?
------- Edit ---------------
I tried to dig further on this! I tried compiling and examining information about the Core. It seems GHC is performing a lot of reductions on the code at compile time. I wish I knew more about Core to be able to tell. If we compile with
ghc -O2 -ddump-simpl-stats Main.hs
we can see that 98 beta reductions are performed, an unpack-list operation is carried out, many things are unfolded, and a bunch of inlines are performed (around 150). It even tells you where the beta reductions occur. Since the word IxArray comes up, I am curious whether some sort of simplification is occurring. Now the interesting thing from my point of view is that adding
simpleTableMults = D.deepseq t t
  where t = A.listArray ((1,1),(9,9)) simpleTableMultsList
increases the number of beta reductions, inlines, and simplifications quite substantially at compile time. It would be really great if I could load the compiled code into a debugger of some sort and "view" the data structure! I am, as it stands, more mystified than before.
------ Edit 2 -------------
I still don't know what beta reductions are being performed. However, I did find out some interesting things based on Sassa NF's response. For the following experiment, I used the ghc-heap-view package. I changed the way the Array was represented in the source according to the Sassa NF answer. I loaded the program into GHCi and immediately called
:printHeap simpleTableMults
and, as expected, got an index-too-large exception. But under the suggested unpacked datatype, I got a let expression with a toArray and a bunch of _thunks and some _funs. I'm not really sure yet what these mean ... The other interesting thing is that by using seq, or some other strictness forcing, in the source code, I ended up with all _thunks inside of the let. I can upload the exact emission if that helps.
Also, if I perform a single indexing, the array gets completely evaluated in all cases.
Also, there is no way to call ghci with optimizations, so I might not be getting the same results as when compiled with ghc -O2.
Let's exaggerate:
import qualified Data.Array.Unboxed as A

simpleTableMults :: A.Array (Int,Int) (Int,Int)
simpleTableMults = A.listArray ((1,1),(10000,2000))
    [(k `div` 10, k `rem` 10) | n <- [1..10000], m <- [1..2000], let k = n*m]

main = print $ simpleTableMults A.! (10000,1000)
Then
ghc -O2 -prof b.hs
b +RTS -hy
......Out of memory
hp2ps b.exe.hp
What happened?! You can see the heap consumption graph go above 1 GB, and then it died.
Well, the pair is computed eagerly, but the projections of the pair are lazy, so we end up with tons of thunks to compute k `div` 10 and k `rem` 10.
import qualified Data.Array.Unboxed as A

data P = P {-# UNPACK #-} !Int {-# UNPACK #-} !Int deriving (Show)

simpleTableMults :: A.Array (Int,Int) P
simpleTableMults = A.listArray ((1,1),(10000,2000))
    [ P (k `div` 10) (k `rem` 10)
    | n <- [1..10000], m <- [1..2000], let k = n*m ]

main = print $ simpleTableMults A.! (10000,1000)
This one is fine, because we eagerly computed the pair.

How to make my Haskell code use laziness and the garbage collector

I wrote Haskell code which has to solve the following problem: we have n files f1, f2, f3 ... fn, and I cut those files in such a way that each slice has 100 lines:
f1_1, f1_2, f1_3 .... f1_m
f2_1, f2_2, .... f2_n
...
fn_1, fn_2, .... fn_k
finally, I construct a special data type (Dags) using the slices in the following way:
f1_1, f2_1, f3_1, .... fn_1 => Dag1
f1_2, f2_2, f3_2, ..... fn_2 => Dag2
....
f1_k, f2_k, f3_k, ..... fn_k => Dagk
The code that I wrote starts by cutting all the files, then couples the i-th elements of the resulting lists and constructs the Dags from the final list.
It looks like this:
-- # take a filename and cut the file in slices of 100 lines
sliceFile :: FilePath -> [[String]]

-- # take a list of lists and group the i-th elements into lists
coupleIthElement :: [[String]] -> [[String]]

-- # take a list of lines and create a DAG
makeDags :: [String] -> Dag

-- # the final code looks like this
makeDag_ :: [FilePath] -> [Dag]
makeDag_ files = map makeDags $ coupleIthElement (concat (map sliceFile files))
The problem is that this code is inefficient because:
- it needs to store all the files in memory as lists
- the garbage collector can't work efficiently, since every function needs the whole result list of the previous function
How could I rewrite my program to take advantage of the garbage collector's work and of Haskell's laziness?
If that's not possible, or too hard, what can I do to be even a bit more efficient?
Thanks for any reply.
edit
coupleIthElement ["abc", "123", "xyz"] must return ["a1x","b2y","c3z"]
of cause the 100 lines are arbitrary selected using a particular criteria upon some element of the lines but i discard this aspect to make the problem more easier to understand,
another edition
data Dag = Dag ([(Int, String)], [((Int, Int), Int)]) deriving Show
test_dag = Dag ([(1, "a"),(2, "b"),(3, "c")],[((1,2),1),((1,3),1)])
test_dag2 = Dag ([],[])
The first list defines each vertex by its number and label; the second list is the edges: ((1,2),3) means an edge between vertices 1 and 2 with cost 3.
A few points:
1) Have you considered using fgl? It's probably more efficient than your own Dag implementation. If you really need to use Dag, you could construct your graphs with fgl then convert them to Dag when they're complete.
2) It seems like you don't actually use the slices when constructing your graphs, rather they control how many graphs you have. If so, how about something like this:
dagFromHandles :: [Handle] -> IO Dag
dagFromHandles = fmap makeDags . mapM hGetLine

allDags :: [FilePath] -> IO [Dag]
allDags listOfFiles = do
    handles <- mapM (flip openFile ReadMode) listOfFiles
    replicateM 100 (dagFromHandles handles)
This assumes that each file has at least 100 lines, and any extra lines will be ignored. Even better would be if you had a function that would consume a Dag, then you could do
useDag :: Dag -> IO ()

runDags :: [FilePath] -> IO ()
runDags listOfFiles = do
    handles <- mapM (flip openFile ReadMode) listOfFiles
    replicateM_ 100 (dagFromHandles handles >>= useDag)
This should make more efficient use of garbage collection.
Of course this assumes that I understand the problem properly, and I'm not certain that I do. Note that concat (map sliceFile) should be a no-op (sliceFile would need to be in IO as you've defined the type, but ignoring that for now), so I don't see why you're bothering with it at all.
If it's not necessary to process your file in slices, avoid it. Haskell does this automatically! In Haskell, you think of IO as a stream: data is read from input as soon as it's needed and discarded as soon as it's unused. So, for instance, this is an easy file-copying program:
main = interact id
interact has the signature interact :: (String -> String) -> IO (), and feeds the input into a function which handles it and produces some output, which is written to stdout. This program is more efficient than most C implementations, as the runtime automatically buffers the input and output.
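In the same spirit, here is a slightly bigger streaming filter, just as a sketch: it upper-cases stdin to stdout, and the runtime still only holds a buffer's worth of input at a time.
import Data.Char (toUpper)

-- Upper-case everything read from stdin; the input is consumed lazily,
-- so memory use stays constant regardless of file size.
main :: IO ()
main = interact (map toUpper)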
If you want to understand laziness, you have to forget all the wisdom you learned as an imperative programmer and think of a program as a description of how to transform data, not as a set of instructions - data is only processed when needed!
The key point, and why your data may be handled the wrong way, is the multiple traversals of the list. Your function makeDags traverses the transposed slices list one by one, so the elements of the original list may not be discarded. What you should try is to write your function in a way like this:
sliceFile :: FilePath -> IO [[String]]
sliceFile fp = do
    f <- readFile fp
    let l = lines f
        slice [] = []
        slice x  = ll : slice ls where (ll, ls) = splitAt 100 x
    return (slice l)
sliceFirstRow :: [[String]] -> ([String], [[String]])
sliceFirstRow list = unzip $ map (\(x:xs) -> (x, xs)) list

makeDags :: [[String]] -> [Dag]
makeDags [[]] = []
makeDags list = makeDag firstRow : makeDags restOfList
  where (firstRow, restOfList) = sliceFirstRow list
This function may be a solution, since the first row is no longer referenced when it's done. But in most places this relies on laziness, so you could probably try to use seq to force building the Dags, allowing the IO data to be garbage collected. (If you don't force building the Dags, the data can't be garbage collected.)
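A minimal sketch of that forcing, assuming a consumer useDag and the Dag type from the question (note that seq only forces the outermost constructor; deepseq would be needed for full evaluation):
processAll :: (Dag -> IO ()) -> [Dag] -> IO ()
processAll useDag =
    -- Force each Dag to WHNF before consuming it, so the slices
    -- it was built from can be collected as we go.
    mapM_ (\d -> d `seq` useDag d)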
But anyway, I could provide a more helpful answer if you gave some information about what these Dags are.

How to split a 110 MB file with Haskell

I have a file whose lines look like index : label; the indices are keys in the range 0 ... 100000000 and the label can be any String value. I want to split this file, which is 110 MB, into many slices of 100 lines each and do some computation on each slice. How can I do this?
123 : "acgbdv"
127 : "ytehdh"
129 : "yhdhgdt"
...
9898657 : "bdggdggd"
If you're using String IO, you can do the following:
import System.IO
import Control.Monad

-- | Process 100 lines
process100 :: [String] -> MyData
-- whatever this function does

loop :: [String] -> [MyData]
loop lns = go [] lns
  where
    go acc []  = reverse acc
    go acc lns = let (this, next) = splitAt 100 lns
                 in go (process100 this : acc) next

processFile :: FilePath -> IO [MyData]
processFile f = withFile f ReadMode (fmap (loop . lines) . hGetContents)
-- NB: consume (or force) the [MyData] before withFile closes the handle,
-- otherwise the lazy read will be cut off.
Note that this function will silently process the last chunk even if it isn't exactly 100 lines.
Packages like bytestring and text generally provide functions like lines and hGetContents so you should be able to easily adapt this function to any of them.
It's important to know what you're doing with the results of processing each slice, because you don't want to hold on to that data for longer than necessary. Ideally, after each slice is calculated the data would be entirely consumed and could be gc'd. Generally either the separate results get combined into a single data structure (a "fold"), or each one is dealt with separately (maybe outputting a line to a file or something similar). If it's a fold, you should change "loop" to look like this:
loopFold :: [String] -> MyData -- assuming there is a Monoid instance for MyData
loopFold lns = go mempty lns
  where
    go !acc []  = acc
    go !acc lns = let (this, next) = splitAt 100 lns
                  in go (process100 this `mappend` acc) next
The loopFold function uses bang patterns (enabled with the LANGUAGE BangPatterns pragma) to force evaluation of the MyData accumulator. Depending on what MyData is, you may need to use deepseq to make sure it's fully evaluated.
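A sketch of that deepseq variant, assuming MyData additionally has an NFData instance (force is the only change relative to loopFold):
{-# LANGUAGE BangPatterns #-}
import Control.DeepSeq (force)

loopFoldDeep :: [String] -> MyData
loopFoldDeep lns = go mempty lns
  where
    go !acc []  = acc
    go !acc lns = let (this, next) = splitAt 100 lns
                  -- force fully evaluates the combined value at each step
                  in go (force (process100 this `mappend` acc)) next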
If instead you're writing each line to output, leave loop as it is and change processFile:
processFileMapping :: FilePath -> IO ()
processFileMapping f = withFile f ReadMode pf
  where
    pf = mapM_ (putStrLn . show) <=< fmap (loop . lines) . hGetContents
If you're interested in enumerator/iteratee style processing, this is a pretty simple problem. I can't give a good example without knowing what sort of work process100 is doing, but it would involve enumLines and take.
Is it necessary to process exactly 100 lines at a time, or do you just want to process in chunks for efficiency? If it's the latter, don't worry about it. You'd most likely be better off processing one line at a time, using either an actual fold function or a function similar to processFileMapping.
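For instance, a line-at-a-time fold might look like this sketch (processLine is hypothetical, standing in for a per-line version of process100; evaluate keeps the result from escaping withFile unforced):
import Control.Exception (evaluate)
import Data.List (foldl')
import System.IO

processLineByLine :: FilePath -> IO MyData
processLineByLine f = withFile f ReadMode $ \h -> do
    contents <- hGetContents h
    -- foldl' consumes the lazy contents one line at a time;
    -- evaluate forces the result before the handle is closed.
    evaluate (foldl' step mempty (lines contents))
  where
    step acc ln = acc `mappend` processLine ln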
