OCaml - Read csv file into array

I'm trying to import a CSV file into an array in OCaml. I do realise it's not the best fit for the language, and I'm not actually sure an array is the best structure, but anyway...
It's working fine, but I'm really uneasy about the way I did it.
let import file_name separator =
  let reg_separator = Str.regexp separator in
  let value_array = Array.make_matrix 1600 12 0. in
  let i = ref 0 in
  try
    let ic = open_in file_name in
    (* Skip the first line, the column headers *)
    let _ = input_line ic in
    try
      while true do
        (* Create a list of values from a line *)
        let line_list = Str.split reg_separator (input_line ic) in
        for j = 0 to (List.length line_list) - 1 do
          value_array.(!i).(j) <- float_of_string (List.nth line_list j)
        done;
        i := !i + 1
      done;
      value_array
    with
    | End_of_file -> close_in ic; value_array
  with
  | e -> raise e;;
Basically, I read the file line by line, and I split each line along the separator. The problem is that this returns a list, so the complexity of the following line is really dreadful: List.nth is linear, which makes filling a row quadratic in the number of columns.
value_array.(!i).(j) <- float_of_string (List.nth line_list j)
Is there any way to do this better, short of recoding the whole split thing myself?
PS: I haven't coded in OCaml in a long time, so I'm quite unsure about the try blocks and the way I return the array.
Cheers.

On OCaml >=4.00.0, you can use the List.iteri function.
List.iteri
  (fun j elem -> value_array.(!i).(j) <- float_of_string elem)
  line_list
You can replace your for-loop with this code and it should work nicely (of course, you need to keep the ;).
On older versions of OCaml, you can use List.iter with a reference you increment manually or, more cleanly, declare your own iteri.
Note that your code is not very safe, notably with respect to your file's size: the matrix is hard-coded at 1600 rows by 12 columns. Maybe you should pass the dimensions as function arguments for a bit of flexibility.
EDIT: for future readers, you can use the very simple ocaml-csv (through OPAM: opam install csv)

Related

How can I repeatedly read in shuffled lines of a large data file in Haskell?

I have a data file of 60k lines, where each line has ~1k comma-separated Ints (that I want to immediately turn into Doubles).
I want to iterate over a sequence of random "batches" of 32 lines, where a batch is a random subset of all of the lines and none of the batches share lines in common. Since there are 60k lines and 32 lines per batch, there should be 1875 batches.
I'm open to changing things if necessary, but I'd like them to be in the form of a list (of batches) that's lazily evaluated. The code that needs this is a foldM, where I'm using it like:
resulting_struct <- foldM fold_fn my_struct batch_list
so that it repeatedly calls fold_fn on the result of the current accumulator my_struct and the next element of batch_list.
I'm very confused. It was easy when I didn't need to shuffle them; I simply read them in and chunked them, and they were evaluated lazily, so I had no problems. Now I'm completely stuck and feel like I must be missing something simple.
I've tried the following:
Reading the file into a list of lines and naively shuffling the input. This doesn't work: readFile is lazily evaluated, but shuffling randomly requires the whole file in memory, and it quickly eats up all of my ~8 GB of RAM.
Getting the length of the file, and then creating a list of batches of shuffled indices from 0 to 60k that correspond to the line numbers that will be selected to form the batches. Then, when I want to actually get the data batches, I do:
ind_batches <- get_shuffled_ind_batches_from_file fname
batch_list <- mapM (get_data_batch_from_ind_batch fname) ind_batches
where:
get_shuffled_ind_batches_from_file :: String -> IO [[Int]]
get_shuffled_ind_batches_from_file fname = do
    contents <- get_contents_from_file fname -- uses readFile, returns [[Double]]
    let n_samps = length contents
        ind = [0..(n_samps-1)]
    shuffled_indices <- shuffle_list ind
    let shuffled_ind_chunks = take 1800 $ chunksOf 32 shuffled_indices
    return shuffled_ind_chunks

get_data_batch_from_ind_batch :: String -> [Int] -> IO [[Double]]
get_data_batch_from_ind_batch fname ind_chunk = do
    contents <- get_contents_from_file fname
    let data_batch = get_elems_at_indices contents ind_chunk
    return data_batch

shuffle_list :: [a] -> IO [a]
shuffle_list xs = do
    ar <- newArray n xs
    forM [1..n] $ \i -> do
        j <- randomRIO (i, n)
        vi <- readArray ar i
        vj <- readArray ar j
        writeArray ar j vi
        return vj
  where
    n = length xs
    newArray :: Int -> [a] -> IO (IOArray Int a)
    newArray n xs = newListArray (1, n) xs

get_elems_at_indices :: [a] -> [Int] -> [a]
get_elems_at_indices my_list ind_list = (map . (!!)) my_list ind_list
however, it seems like mapM evaluates immediately, which then tries to read in the file contents repeatedly (I think, the RAM blows up anyway).
A bit more searching told me that I could try using unsafeInterleaveIO to make it so it lazily evaluates an action, so I tried sticking it in like so:
get_data_batch_from_ind_batch :: String -> [Int] -> IO [[Double]]
get_data_batch_from_ind_batch fname ind_chunk = unsafeInterleaveIO $ do
    contents <- get_contents_from_file fname
    let data_batch = get_elems_at_indices contents ind_chunk
    return data_batch
but no luck, same problem as above.
I feel like I've been banging my head against the wall here and must be missing something very simple. Someone suggested using streams or conduits instead, but when I looked at the documentation for them, it wasn't really clear to me how I could use them to solve this problem.
How can I read in a large data file and also shuffle it, without using up all my memory?
hGetContents will return the contents of the file lazily, but if you do much of anything with the result you will realize the whole file at once. I suggest reading the file once, and scanning over it for newlines, so that you can build an index of which chunk starts at which byte offset. That index will be quite small, so you can shuffle it easily. Then you can iterate through the index, each time opening the file and reading only a defined sub-range of it, and parsing only that one chunk.
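A minimal sketch of that index-and-seek approach (assumptions: the helpers lineOffsets, readLineAt, and processBatches are my names, shuffle_list is the Fisher-Yates function from the question, and the file uses Unix newlines):

import qualified Data.ByteString.Char8 as BS
import Data.List.Split (chunksOf)
import System.IO

-- One pass over the file, recording the byte offset at which each line starts.
lineOffsets :: FilePath -> IO [Integer]
lineOffsets fname = withFile fname ReadMode (go 0 [])
  where
    go off acc h = do
        eof <- hIsEOF h
        if eof
            then return (reverse acc)
            else do
                ln <- BS.hGetLine h
                -- +1 for the '\n' that hGetLine strips (assumes Unix line endings)
                go (off + fromIntegral (BS.length ln) + 1) (off : acc) h

-- Seek to a recorded offset and parse just that one line.
readLineAt :: Handle -> Integer -> IO [Double]
readLineAt h off = do
    hSeek h AbsoluteSeek off
    ln <- BS.hGetLine h
    return [read (BS.unpack field) | field <- BS.split ',' ln]

-- Shuffle the (small) index, chunk it into batches of 32, and hand one
-- fully read batch at a time to a consumer.
processBatches :: FilePath -> ([[Double]] -> IO ()) -> IO ()
processBatches fname consume = do
    offsets <- lineOffsets fname
    shuffled <- shuffle_list offsets -- the question's Fisher-Yates shuffle
    withFile fname ReadMode $ \h ->
        mapM_ (\batch -> mapM (readLineAt h) batch >>= consume)
              (chunksOf 32 shuffled)

Only the index and one 32-line batch are ever in memory at once, which is the point of the suggestion above.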

Haskell: repeat all chars in a string

I just started with Haskell and wanted to write a little function that takes an integer and a String and repeats each char in the String as often as the integer says.
e.g.: multiply 3 "hello" would output "hhheeelllooo"
My problem now is that I am not sure how to iterate over all the chars.
multiply :: Int -> String -> String
multiply 1 s = s
multiply i s = multiply (i-1) (take 1 s ++ s)
So what I would get is "hhhello". So basically I need to do something like:
mult :: Int -> String -> String
mult 0 s = []
mult 1 s = s
mult i s = "iterate over s, take each char and call a modified version of the multiply method that only takes chars above"
Thank you for helping me out
This gets easier when you use the standard library. First off, repeating an item is done with replicate:
Prelude> replicate 3 'h'
"hhh"
You can then partially apply this function and map it over the string:
Prelude> map (replicate 3) "hello"
["hhh", "eee", "lll", "lll", "ooo"]
And finally concat that list of strings into one string:
Prelude> concat (map (replicate 3) "hello")
"hhheeellllllooo"
The composition of concat and map can be abbreviated as concatMap (this is a library function, not a language feature).
Prelude> concatMap (replicate 3) "hello"
"hhheeellllllooo"
So your function becomes
mult n s = concatMap (replicate n) s
For extra brevity, write this in point-free style as
mult = concatMap . replicate
There are many ways to achieve the same effect as you would with a loop in other languages, and larsmans has shown you one way, using map. Another common way is with recursion. You already know what to do with the first character, so you can recurse through the list like so:
multiply n [] = []
multiply n (x:xs) = replicate n x ++ multiply n xs
larsmans has explained how replicate works. For your homework, maybe you're not supposed to use library functions like replicate, so you can replace the call to replicate with your own version.
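For instance, a hand-rolled version might look like this (a sketch; myReplicate is a hypothetical name, not part of the answer):

-- A homework-friendly replacement for the library replicate.
myReplicate :: Int -> a -> [a]
myReplicate n x
    | n <= 0    = []
    | otherwise = x : myReplicate (n - 1) x

You would then write multiply n (x:xs) = myReplicate n x ++ multiply n xs.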
Another way, based on the monadic nature of lists.
You'd like to apply a function to each element of a list.
To do this, just bind the list to the function, like this:
# "hello" >>= replicate 3
Or,
# let f = flip (>>=) . replicate
To remove flip,
# let g = (=<<) . replicate
You can use applicative functors for this:
import Control.Applicative
multiply n = (<* [1..n])
--- multiply 3 "hello" --> "hhheeellllllooo"

Which Haskell library will let me save a 2D array/vector to a png/jpg/gif... file?

I am playing around with Haskell, starting with simple plotting programs to get my feet wet. I need a library that will let me save a 2D array/vector to an image file. I don't want to hand-write a list of colors; I want to use containers that are meant for array/vector-like computations and can be (well, almost) automagically parallelized.
EDIT Ability to store color images is a must.
I'd start with the PGM library. This is a very simple uncompressed graymap format with almost no additional dependencies. You can convert PGM to other formats with ImageMagick or other tools.
PGM supports generic IArray interface, and should work with most of the standard Haskell arrays. You can easily parallelize array computations with Control.Parallel.Strategies.
PGM usage example:
ghci> :m + Data.Array Graphics.Pgm
ghci> let a = accumArray (+) 0 ((0::Int,0::Int),(127,127)) [ ((i,i), 1.0::Double) | i <- [0..127] ]
ghci> arrayToFile "t.pgm" (fmap round a)
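As an aside, parallelizing such an array computation with Strategies can be as short as this (my sketch, not from the original answer; brighten is a made-up example over plain lists):

import Control.Parallel.Strategies (parMap, rdeepseq)

-- Apply a per-pixel function to each row in parallel; rdeepseq forces
-- each resulting row fully, so the work really happens in parallel.
brighten :: [[Double]] -> [[Double]]
brighten = parMap rdeepseq (map (min 1.0 . (* 1.2)))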
Otherwise you may use Codec-Image-DevIL, which can save unboxed arrays to many of the image formats. You'll need the DevIL C library too. And you'll need to convert all arrays to that particular type (UArray (Int, Int, Int) Word8).
Finally, if you want the bleeding edge, you may consider repa parallel arrays and the corresponding repa-io library, which can write them to BMP images. Unfortunately, as of today repa is not yet buildable with the new GHC 7.0.2 and doesn't give performance advantages on the old GHC 6.12.
A new combination is:
repa, for n-dimensional arrays, plus
repa-devil, for image loading in dozens of formats.
Repa is the only widely used array library that is automatically parallelized.
An example, from the repa tutorial, using readImage and writeImage, to read an image, rotate it, and write it back out, in whatever format:
import System.Environment
import Data.Word
import Data.Array.Repa hiding ((++))
import Data.Array.Repa.IO.DevIL

main = do
    [f] <- getArgs
    runIL $ do
        v <- readImage f
        writeImage ("flip-" ++ f) (rot180 v)

rot180 :: Array DIM3 Word8 -> Array DIM3 Word8
rot180 g = backpermute e flop g
  where
    e@(Z :. x :. y :. _) = extent g
    flop (Z :. i :. j :. k) =
        (Z :. x - i - 1 :. y - j - 1 :. k)
The more recent JuicyPixels library lets you save images to Jpg/Png/Tiff easily; you can use it in combination with Repa via the JuicyPixels-repa library.
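For instance, writing a PNG with JuicyPixels alone takes only a few lines (a minimal sketch; the gradient is an arbitrary example):

import Codec.Picture

-- Render a 256x256 RGB gradient and save it as a PNG.
main :: IO ()
main = writePng "gradient.png" (generateImage pixel 256 256)
  where
    -- pixel colour as a function of the (x, y) coordinates
    pixel x y = PixelRGB8 (fromIntegral x) (fromIntegral y) 128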
You might also want to check out Diagrams
Example code for the dragon fractal:
{- Heighway dragon. See http://en.wikipedia.org/wiki/Dragon_curve. -}
module Main where

import Graphics.Rendering.Diagrams
import Control.Monad.State
import Data.Maybe

dragonStr :: Int -> String
dragonStr 0 = "FX"
dragonStr n = concatMap rules $ dragonStr (n-1)
  where rules 'X' = "X+YF+"
        rules 'Y' = "-FX-Y"
        rules c   = [c]

strToPath :: String -> Path
strToPath s = pathFromVectors . catMaybes $ evalState c (0,-1)
  where c = mapM exec s
        exec 'F' = Just `fmap` get
        exec '-' = modify left >> return Nothing
        exec '+' = modify right >> return Nothing
        exec _   = return Nothing
        left  (x,y) = (-y,x)
        right (x,y) = (y,-x)

dragon :: Int -> Diagram
dragon = lc red . curved 0.8 . strToPath . dragonStr

main = renderAs PNG "dragon.png" (Width 300) (dragon 12)

How to make my Haskell code use laziness and the garbage collector

I wrote some Haskell code to solve the following problem: we have n files f1, f2, f3 ... fn, and I cut those files in such a way that each slice has 100 lines:
f1_1, f1_2, f1_3 .... f1_m
f2_1, f2_2, .... f2_n
...
fn_1, fn_2, .... fn_k
Finally, I construct a special data type (Dag) from the slices in the following way:
f1_1, f2_1, f3_1, .... fn_1 => Dag1
f1_2, f2_2, f3_2, ..... fn_2 => Dag2
....
f1_k, f2_k, f3_k, ..... fn_k => Dagk
The code that I wrote starts by cutting all the files; then it couples the i-th elements of the resulting lists and constructs the Dags from the final lists.
It looks like this:
-- # take a filename and cut the file into slices of 100 lines
sliceFile :: FilePath -> [[String]]

-- # take a list of lists and group the i-th elements into lists
coupleIthElement :: [[String]] -> [[String]]

-- # take a list of lines and create a DAG
makeDags :: [String] -> Dag

-- # the final code looks like this
makeDag :: [FilePath] -> [Dag]
makeDag files = map makeDags $ coupleIthElement (concat (map sliceFile files))
The problem is that this code is inefficient because:
it needs to store all the files in memory in list form
the garbage collector cannot work efficiently, since every function needs the whole result list of the previous function
How could I rewrite my program to take advantage of the garbage collector's work and of Haskell's laziness?
If that's not possible or easy, what can I do to be even a bit more efficient?
Thanks for any reply.
edit
coupleIthElement ["abc", "123", "xyz"] must return ["a1x","b2y","c3z"]
of cause the 100 lines are arbitrary selected using a particular criteria upon some element of the lines but i discard this aspect to make the problem more easier to understand,
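Incidentally, that example is exactly what the standard Data.List.transpose does, so one plausible reconstruction of coupleIthElement (an assumption based on the example above; the real slicing criterion may differ) is simply:

import Data.List (transpose)

-- Group the i-th elements of each list together.
coupleIthElement :: [[String]] -> [[String]]
coupleIthElement = transpose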
Another edit:
data Dag = Dag ([(Int, String)], [((Int, Int), Int)]) deriving Show
test_dag = Dag ([(1, "a"),(2, "b"),(3, "c")],[((1,2),1),((1,3),1)])
test_dag2 = Dag ([],[])
The first list holds the vertices, each defined by a number and a label; the second list holds the edges, where ((1,2),3) means an edge between vertices 1 and 2 with cost 3.
A few points:
1) Have you considered using fgl? It's probably more efficient than your own Dag implementation. If you really need to use Dag, you could construct your graphs with fgl then convert them to Dag when they're complete.
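For example, the question's test_dag maps onto fgl like this (a sketch; Gr String Int is just one possible concrete graph type):

import Data.Graph.Inductive.Graph (mkGraph)
import Data.Graph.Inductive.PatriciaTree (Gr)

-- Labelled nodes plus labelled edges, with the edge label playing
-- the role of the cost from the question's Dag type.
testGr :: Gr String Int
testGr = mkGraph [(1, "a"), (2, "b"), (3, "c")]
                 [(1, 2, 1), (1, 3, 1)]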
2) It seems like you don't actually use the slices when constructing your graphs, rather they control how many graphs you have. If so, how about something like this:
dagFromHandles :: [Handle] -> IO Dag
dagFromHandles = fmap makeDags . mapM hGetLine

allDags :: [FilePath] -> IO [Dag]
allDags listOfFiles = do
    handles <- mapM (flip openFile ReadMode) listOfFiles
    replicateM 100 (dagFromHandles handles)
This assumes that each file has at least 100 lines, and any extra lines will be ignored. Even better would be if you had a function that would consume a Dag, then you could do
useDag :: Dag -> IO ()

runDags :: [FilePath] -> IO ()
runDags listOfFiles = do
    handles <- mapM (flip openFile ReadMode) listOfFiles
    replicateM_ 100 (dagFromHandles handles >>= useDag)
This should make more efficient use of garbage collection.
Of course this assumes that I understand the problem properly, and I'm not certain that I do. Note that concat (map sliceFile) should be a no-op (sliceFile would need to be in IO as you've defined the type, but ignoring that for now), so I don't see why you're bothering with it at all.
If you don't need to process your file in slices, avoid doing so. Haskell does this automatically! In Haskell, you think of IO as a stream: data is read from input as soon as it's needed and discarded as soon as it's unused. For instance, this is an easy file-copying program:
main = interact id
interact has the signature interact :: (String -> String) -> IO (); it feeds the input into a function which handles it and produces some output, which is written to stdout. This program is more efficient than most C implementations, as the runtime automatically buffers the input and output.
If you want to understand laziness, you have to forget all the wisdom you learned as an imperative programmer and think of a program as a description of data transformations, not as a set of instructions: data is only processed when needed!
The key reason your data may be handled the wrong way is the multiple traversal of the list. Your function makeDags traverses the transposed slices list one by one, so the elements of the original list may not be discarded. What you should try is to write your function in a way like this:
sliceFile :: FilePath -> IO [[String]]
sliceFile fp = do
    f <- readFile fp
    let slice [] = []
        slice x  = let (ll, ls) = splitAt 100 x in ll : slice ls
    return (slice (lines f))

sliceFirstRow :: [[String]] -> ([String], [[String]])
sliceFirstRow list = unzip $ map (\(x:xs) -> (x, xs)) list

-- makeDag :: [String] -> Dag is the question's per-slice constructor
makeDags :: [[String]] -> [Dag]
makeDags list
    | any null list = []
    | otherwise     = makeDag firstRow : makeDags restOfList
  where
    (firstRow, restOfList) = sliceFirstRow list
This function may be a solution, since the first row is no longer referenced when it's done. But in most places the retention is a result of laziness, so you could probably use seq to force building the Dags, allowing the IO data to be garbage collected. (If you don't force building the Dags, the data can't be garbage collected.)
But anyway, I could provide a more helpful answer if you gave some information about what these Dags are.

How to split a 110 MB file with Haskell

I have a file whose lines look like index : label; the indices are keys in the range 0...100000000 and the label can be any String value. I want to split this file, which is 110 MB, into many slices of 100 lines each and do some computation on each slice. How can I do this?
123 : "acgbdv"
127 : "ytehdh"
129 : "yhdhgdt"
...
9898657 : "bdggdggd"
If you're using String IO, you can do the following:
import System.IO
import Control.Monad

-- | Process 100 lines
process100 :: [String] -> MyData
-- whatever this function does

loop :: [String] -> [MyData]
loop lns = go [] lns
  where
    go acc []  = reverse acc
    go acc lns = let (this, next) = splitAt 100 lns
                 in go (process100 this : acc) next

processFile :: FilePath -> IO [MyData]
processFile f = withFile f ReadMode (fmap (loop . lines) . hGetContents)
Note that this function will silently process the last chunk even if it isn't exactly 100 lines.
Packages like bytestring and text generally provide functions like lines and hGetContents so you should be able to easily adapt this function to any of them.
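To make the sketch above compile, here is one hypothetical instantiation of MyData and process100 (my example; the question doesn't say what the per-slice computation is). It also supplies the Monoid instance that loopFold below assumes:

-- A made-up slice computation: count the fields across a 100-line slice.
newtype MyData = MyData Int deriving Show

instance Semigroup MyData where
    MyData a <> MyData b = MyData (a + b)

instance Monoid MyData where
    mempty = MyData 0

process100 :: [String] -> MyData
process100 = MyData . sum . map (length . words)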
It's important to know what you're doing with the results of processing each slice, because you don't want to hold on to that data for longer than necessary. Ideally, after each slice is calculated the data would be entirely consumed and could be gc'd. Generally either the separate results get combined into a single data structure (a "fold"), or each one is dealt with separately (maybe outputting a line to a file or something similar). If it's a fold, you should change "loop" to look like this:
loopFold :: [String] -> MyData -- assuming there is a Monoid instance for MyData
loopFold lns = go mempty lns
  where
    go !acc []  = acc
    go !acc lns = let (this, next) = splitAt 100 lns
                  in go (process100 this `mappend` acc) next
The loopFold function uses bang patterns (enabled with "LANGUAGE BangPatterns" pragma) to force evaluation of the "MyData". Depending on what MyData is, you may need to use deepseq to make sure it's fully evaluated.
If instead you're writing each line to output, leave loop as it is and change processFile:
processFileMapping :: FilePath -> IO ()
processFileMapping f = withFile f ReadMode pf
  where
    pf = mapM_ (putStrLn . show) <=< fmap (loop . lines) . hGetContents
If you're interested in enumerator/iteratee style processing, this is a pretty simple problem. I can't give a good example without knowing what sort of work process100 is doing, but it would involve enumLines and take.
Is it necessary to process exactly 100 lines at a time, or do you just want to process in chunks for efficiency? If it's the latter, don't worry about it. You'd most likely be better off processing one line at a time, using either an actual fold function or a function similar to processFileMapping.
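A line-at-a-time fold might look like this (my sketch; counting words per line stands in for the real computation):

import Data.List (foldl')
import System.IO

-- Fold over the lines one at a time; the strict foldl' and ($!) keep the
-- accumulator evaluated and force the result before the handle closes.
processFileByLine :: FilePath -> IO Int
processFileByLine f = withFile f ReadMode $ \h -> do
    contents <- hGetContents h
    return $! foldl' (\acc ln -> acc + length (words ln)) 0 (lines contents)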
