Calling an input file doesn't work

I'm using the Gviz library from Bioconductor. I input a tab-delimited file containing CNV positions that I need to plot on my chromosome ideogram.
My input file is read into dat and has 4 columns:
[1] chromosome
[2] start
[3] end
[4] width (could be '+' or '-' depending on the orientation of the Copy Number)
So I did this:
library(IRanges)
library(Gviz)
gen <- "mm9"
chr <- "chr1"
itrack <- IdeogramTrack(genome = gen, chromosome = chr)
gtrack <- GenomeAxisTrack()
dat <- read.delim("C:/R/1ips_chr1.txt", header = FALSE, sep ="\t")
s <- dat[2]
e <- dat[3]
l <- dat[4]
It shows an error message when I call AnnotationTrack with the columns of dat:
atrack1 <- AnnotationTrack( start = s, width = l , chromosome = chr, genome = gen, name = "Sample1")
Error : function (classes, fdef, mtable) : unable to find an inherited method for function ".buildRange", for signature "NULL", "data.frame", "NULL", "data.frame"
Obviously the way I pass the input file (in dat) doesn't satisfy R... Someone help me please :)

From the reference manual for the Gviz package (with which I am not familiar), the arguments start and width in the AnnotationTrack function need to be integer vectors. When you subset dat using the single square bracket [, the resulting object is a data.frame (see ?`[.data.frame` for more on this). Try instead
s <- dat[[2]]
e <- dat[[3]]
l <- dat[[4]]
to obtain integer vectors.
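For a quick illustration of the difference (assuming dat has been read in as above):
class(dat[2])   # "data.frame" -- single brackets subset the data frame
class(dat[[2]]) # "integer" (or "numeric") -- double brackets extract the column vector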

Related

R: Adding columns from one data frame to another, non-matching number of rows

I have a .txt file with millions of rows of data - DateTime (1-min intervals) and Precipitation.
I have a .csv file with thousands of rows of data - DateTime (daily intervals), MaxTemp, MinTemp, WindSpd, WindDir.
I import the .txt file as a data frame and do a few transformations. I then move this into a new data frame.
I import the .csv file as a data frame do a few transformations. I then want to add the columns from this data frame into the new data frame (total of 7 columns). However, R throws an error: "Error in data.frame(..., check.names = FALSE) : arguments imply differing number of rows: 10382384, 32868, 1"
I know the number of rows is different, however, this is the format I need for the next step in processing. This could be easily done in Excel were it not for the crazy amount of rows.
Simulated code is below, which produces the same error:
a <- as.character(c(1,2,3,4,5,6,7,8,9,10))
b <- c(paste("Date", a))
c <- c(rnorm(10, mean = 5, sd = 2.1))
Frame1 <- data.frame(b,c)
d <- as.character(c(1,2,3))
e <- c(paste("Date", d))
f <- c(rnorm(3, mean = 1, sd = 0.7))
g <- c(rnorm(3, mean = 3, sd = 2))
h <- c(rnorm(3, mean = 8, sd = 1))
Frame2 <- data.frame(e,f,g,h)
NewFrame <- cbind(Frame1)
NewFrame <- cbind(NewFrame, Frame2)
I have tried a *_join, but it throws the error: "Error: `by` must be supplied when `x` and `y` have no common variables. Use `by = character()` to perform a cross-join." This reads to me like it wants to match things up, which I don't need. I really just need to plop these two datasets side by side for the next processing step. Help?
The data frames MUST have an equal number of rows. To compensate, I just added enough rows to the smaller dataset to match the number of rows in the larger dataset (in my case, it will always be the .csv file) and filled them with NA values. The application I use for downstream processing knows how to handle NA values, so this works well for me.
I've run the solution with a representative dataset and I am able to cbind the two data frames together.
Sample code with the simulated dataset:
#create data frame 1
a <- as.character(c(1:10))
b <- c(paste("Date", a))
c <- c(rnorm(10, mean = 5, sd = 2.1))
Frame1 <- data.frame(b,c)
#create data frame 2
d <- as.character(c(1,2,3))
e <- c(paste("Date", d))
f <- c(rnorm(3, mean = 1, sd = 0.7))
g <- c(rnorm(3, mean = 3, sd = 2))
h <- c(rnorm(3, mean = 8, sd = 1))
Frame2 <- data.frame(e,f,g,h)
#find the maximum number of rows
maxlen <- max(nrow(Frame1), nrow(Frame2))
#assigning NA to a row index past the end grows the smaller data frame
#to maxlen rows, filling the intervening rows with NA values
#(this assumes Frame2 is the smaller frame, as it is here)
Frame2[maxlen, ] <- NA
#creates the new data frame from the two frames
NewFrame <- cbind(Frame1, Frame2)
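A more compact variant of the same padding idea (a sketch, not from the original answer): indexing a data frame with row numbers beyond its end returns NA-filled rows, so either frame can be padded on the fly:
maxlen <- max(nrow(Frame1), nrow(Frame2))
#out-of-range row indices yield rows of NA, padding whichever frame is shorter
NewFrame <- cbind(Frame1[seq_len(maxlen), ], Frame2[seq_len(maxlen), ])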

Read multidimensional NetCDF as data frame in R

I use a netCDF file which stores one variable and has the following dimensions: lon, lat, time.
Generally speaking, I wish to compare it against other data that I already have in R, stored as a data frame: the first two columns are coordinates in WGS84, and the following columns are values for specific times.
So I wrote the following code.
# since ncFile$dim$time$units says: [1] "days since 1900-1-1"
daysFromDate <- function(data1, data2="1900-01-01")
{
  round(as.numeric(difftime(data1, data2, units = "days")))
}
#study area:
lon <- c(40.25, 48)
lat <- c(16, 24.25)
myTime <- c(daysFromDate("2008-01-16"), daysFromDate("2011-12-31"))
varName <- "spei"
require(ncdf4)
require(RCurl)
x <- getBinaryURL("http://digital.csic.es/bitstream/10261/104742/3/SPEI_01.nc")
ncFile <- nc_open(x)
LonIdx <- which( ncFile$dim$lon$vals >= lon[1] | ncFile$dim$lon$vals <= lon[2])
LatIdx <- which( ncFile$dim$lat$vals >= lat[1] & ncFile$dim$lat$vals <= lat[2])
TimeIdx <- which( ncFile$dim$time$vals >= myTime[1] & ncFile$dim$time$vals <= myTime[2])
MyVariable <- ncvar_get( ncFile, varName)[ LonIdx, LatIdx, TimeIdx]
I thought that a data frame would be returned, so that I could easily manipulate the data (for example, check correlations or create plots).
Unfortunately, a 3-dimensional array is returned instead.
How can I reformat this into a data frame with the columns X-Y-Time1-Time2-...?
So the example data would look as follows, where 0.5, 0.4 and 0.3 are example variable values:
X  Y  2014-01-01 2014-01-02 2014-01-03
50 17 0.5        0.4        0.3
Or maybe there is a different solution?
OK, try the following code, but note that it assumes that the ranges are densely filled. Also, I changed the lon test from or (|) to and (&):
require(ncdf4)
nc <- nc_open("SPEI_01.nc")
print(nc)
lon <- ncvar_get(nc, "lon")
lat <- ncvar_get(nc, "lat")
time <- ncvar_get(nc, "time")
lonIdx <- which( lon >= 40.25 & lon <= 48.00)
latIdx <- which( lat >= 16.00 & lat <= 24.25)
myTime <- c(daysFromDate("2008-01-16"), daysFromDate("2011-12-31"))
timeIdx <- which(time >= myTime[1] & time <= myTime[2])
data <- ncvar_get(nc, "spei")[lonIdx, latIdx, timeIdx]
indices <- expand.grid(lon[lonIdx], lat[latIdx], time[timeIdx])
print(length(indices))
class(indices)
summary(indices)
str(indices)
df <- data.frame(cbind(indices, as.vector(data)))
summary(df)
str(df)
UPDATE
OK, it looks like I got the idea of what you want, but I don't have a direct solution. What I have so far is this: split the data frame using either the split() function or the data.table package. After splitting by X & Y, you'll get a list of small data frames where X & Y are constant within a given frame. It is probably possible to transpose and recombine them back, but I don't know how (see the casting sketch at the end of this answer); it might also be a good idea to simply keep working with the data as columns. Lists are nested but can be flattened, and here is a link about splitting in R: http://www.uni-kiel.de/psychologie/rexrepos/posts/dfSplitMerge.html
Code, continued from the previous example:
require(data.table)
colnames(df) <- c("X","Y","Time","spei")
df$Time <- as.Date(df$Time, origin="1900-01-01")
dt <- as.data.table(df)
summary(dt)
# Taken from https://github.com/Rdatatable/data.table/issues/1389
# x data.table
# f use `by` argument instead - unlike data.frame
# drop logical default FALSE will include `by` columns in resulting data.tables - unlike data.frame
# by character column names on which split into lists
# flatten logical default FALSE will result in recursive nested list having data.table as leafs
# ... ignored
split.data.table <- function(x, f, drop = FALSE, by, flatten = FALSE, ...){
  if(missing(by) && !missing(f)) by = f
  stopifnot(!missing(by), is.character(by), is.logical(drop), is.logical(flatten), !".ll" %in% names(x), by %in% names(x), !"nm" %in% by)
  if(!flatten){
    .by = by[1L]
    tmp = x[, list(.ll=list(.SD)), by = .by, .SDcols = if(drop) setdiff(names(x), .by) else names(x)]
    setattr(ll <- tmp$.ll, "names", tmp[[.by]])
    if(length(by) > 1L) return(lapply(ll, split.data.table, drop = drop, by = by[-1L])) else return(ll)
  } else {
    tmp = x[, list(.ll=list(.SD)), by=by, .SDcols = if(drop) setdiff(names(x), by) else names(x)]
    setattr(ll <- tmp$.ll, 'names', tmp[, .(nm = paste(.SD, collapse = ".")), by = by, .SDcols = by]$nm)
    return(ll)
  }
}
# here is data.table split
q <- split.data.table(dt, by = c("X","Y"), drop=FALSE)
str(q)
# here is data frame split
qq <- split(df, list(df$X, df$Y))
str(qq)
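To get the wide X-Y-Time1-Time2-... layout from the question, here is a minimal casting sketch (my addition, not from the original answer), continuing from the long-format df built above:
require(data.table)
## one row per X/Y pair, one column per date
wide <- dcast(as.data.table(df), X + Y ~ Time, value.var = "spei")
str(wide)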

Haskell: Read a binary file backwards

I'm looking to find the last 32-bit word in a uInt32 binary dump matching a particular pattern, using Haskell. I am able to complete the task using last; however, the code must trawl through the entire file, so it is rather inefficient.
Is there a simple way to make readFile operate through the file in reverse? I believe this would solve the issue with the smallest change to the current code.
Here is my current code, for reference. I only began with Haskell this weekend, so I am sure it is quite ugly. It looks for the last 32-bit word beginning with 0b10 at the MSB.
import System.Environment(getArgs)
import qualified Data.ByteString.Lazy as BL
import qualified Data.ByteString.Lazy.Internal as BL
import qualified Data.ByteString as BS
import Data.Binary.Get
import Data.Word
import Data.Bits
import Text.Printf(printf)
main = do
  args <- getArgs
  let file = args!!0
  putStrLn $ "Find last 0xCXXXXXXX in " ++ file
  content <- BL.readFile file
  let packets = getPackets content
  putStrLn . show . getValue . last . filterTimes $ packets
-- Data
type Packet = Word32
-- filter where first 2 bits are 10
filterTimes :: [Packet] -> [Packet]
filterTimes = filter ((== 0x2) . tag)
-- get the first 2 bits
tag :: Packet -> Packet
tag rp =
  let tagSize = 2
  in shiftR rp (finiteBitSize rp - tagSize)
-- remove the tag bits
getValue :: Packet -> Packet
getValue =
  let tagSize = 2
      mask = complement $ rotateR (2^tagSize - 1) tagSize
  in (.&.) mask
-- Input
-- Based on https://hackage.haskell.org/package/binary/docs/Data-Binary-Get.html
getPacket :: Get Packet
getPacket = do
  packet <- getWord32le
  return $! packet
getPackets :: BL.ByteString -> [Packet]
getPackets input0 = go decoder input0
  where
    decoder = runGetIncremental getPacket
    go :: Decoder Packet -> BL.ByteString -> [Packet]
    go (Done leftover _consumed packet) input =
      packet : go decoder (BL.chunk leftover input)
    go (Partial k) input =
      go (k . takeHeadChunk $ input) (dropHeadChunk input)
    go (Fail _leftover _consumed _msg) _input =
      []

takeHeadChunk :: BL.ByteString -> Maybe BS.ByteString
takeHeadChunk lbs =
  case lbs of
    (BL.Chunk bs _) -> Just bs
    _ -> Nothing

dropHeadChunk :: BL.ByteString -> BL.ByteString
dropHeadChunk lbs =
  case lbs of
    (BL.Chunk _ lbs') -> lbs'
    _ -> BL.Empty
Some comments on your code:
You are using last, which could throw an exception. You should use lastMay from the safe package, which returns a Maybe.
Since you are just treating the file as a vector of Word32s, I don't think it's worth using Data.Binary.Get and the associated overhead and complexity that it entails. Just treat the file as a (perhaps lazy) ByteString and access every 4th byte or break it up into 4-byte substrings.
You can have a look at code which uses ByteStrings here. It implements the following approaches to the problem:
Read in the entire file as a lazy ByteString and produce a (lazy) list of 4-byte substrings. Return the last substring which satisfies the criteria.
intoWords :: BL.ByteString -> [ BL.ByteString ]
intoWords bs
  | BL.null a = []
  | otherwise = a : intoWords b
  where (a,b) = BL.splitAt 4 bs
-- find by breaking the file into 4-byte words
find_C0_v1 :: FilePath -> IO (Maybe BL.ByteString)
find_C0_v1 path = do
  contents <- BL.readFile path
  return $ lastMay . filter (\bs -> BL.index bs 0 == 0xC0) . intoWords $ contents
Read in the entire file as a lazy ByteString and access every 4th byte looking for a 0xC0. Return the last occurrence.
-- find by looking at every 4th byte
find_C0_v2 :: FilePath -> IO (Maybe BL.ByteString)
find_C0_v2 path = do
  contents <- BL.readFile path
  size <- fmap fromIntegral $ withFile path ReadMode hFileSize
  let wordAt i = BL.take 4 . BL.drop i $ contents
  return $ fmap wordAt $ lastMay $ filter (\i -> BL.index contents i == 0xC0) [0,4..size-1]
Read the file backwards in chunks of 64K. Within each chunk (which is a strict ByteString), access every 4th byte looking for a 0xC0, starting from the end of the chunk. Return the first occurrence.
-- read a file backwards until a predicate returns a Just value
loopBlocks :: Int -> Handle -> Integer -> (BS.ByteString -> Integer -> Maybe a) -> IO (Maybe a)
loopBlocks blksize h top pred
  | top <= 0 = return Nothing
  | otherwise = do
      let offset = top - fromIntegral blksize
      hSeek h AbsoluteSeek offset
      blk <- BS.hGet h blksize
      case pred blk offset of
        Nothing -> loopBlocks blksize h offset pred
        x -> return x
-- find by reading backwards, looking at every 4th byte
find_C0_v3 :: FilePath -> IO (Maybe Integer)
find_C0_v3 path =
  withFile path ReadMode $ \h -> do
    size <- hFileSize h
    let top = size - (mod size 4)
        blksize = 64*1024 :: Int
    loopBlocks blksize h top $ \blk offset ->
      fmap ( (+offset) . fromIntegral ) $ headMay $ filter (\i -> BS.index blk i == 0xC0) [blksize-4,blksize-8..0]
The third method is the fastest even if it has to read in the entire file. The first method actually works pretty well. I wouldn't recommend the second at all - its performance degrades precipitously as the file size grows.
For any others who may be interested, I have adapted @ErikR's answer. This solution follows his proposed solution 3, but makes use of my existing code by stepping through blocks in reverse lazily.
This requires a few extra imports:
import System.IO
import Safe
import Data.Maybe
main becomes:
main = do
  args <- getArgs
  let file = args!!0
  putStrLn $ "Find last 0xCXXXXXXX in " ++ file
  -- forward
  withFile file ReadMode $ \h -> do
    content <- BL.hGetContents h
    let packets = getPackets content
    putStrLn . show . getValue . last . filterTimes $ packets
  -- reverse
  withFile file ReadMode $ \h -> do
    size <- hFileSize h
    let blksize = 64*1024 :: Int
    chunks <- makeReverseChunks blksize h (fromIntegral size)
    putStrLn . show . getValue . (fromMaybe 0) . headMay . catMaybes . (map $ lastMay . filterTimes . getPackets) $ chunks
With an added helper function:
-- create list of data chunks, backwards in order through the file
makeReverseChunks :: Int -> Handle -> Int -> IO [BL.ByteString]
makeReverseChunks blksize h top
  | top == 0 = return []
  | top < 0 = error "negative file index"
  | otherwise = do
      let offset = max (top - fromIntegral blksize) 0
      hSeek h AbsoluteSeek (fromIntegral offset)
      blk <- BL.hGet h (top - offset) -- read only this chunk, never past top
      rest <- makeReverseChunks blksize h offset
      return $ blk : rest
Here is a variation of the function makeReverseChunks. As written above it is quite strict: the whole list of chunks is built before anything is returned. Moreover, the use of lazy ByteStrings does not help here as long as blksize is kept low enough. To achieve lazy reading one must use unsafeInterleaveIO. Here is a solution using strict ByteStrings and lazy IO:
-- two extra imports are needed for this version
import qualified Data.ByteString as SBS
import System.IO.Unsafe (unsafeInterleaveIO)

-- create list of data chunks, backwards in order through the file
makeReverseChunks :: Int -> Handle -> Int -> IO [SBS.ByteString]
makeReverseChunks blksize h top
  | top == 0 = return []
  | top < 0 = error "negative file index"
  | otherwise = do
      let offset = max (top - fromIntegral blksize) 0
      hSeek h AbsoluteSeek (fromIntegral offset)
      blk <- SBS.hGet h (top - offset)
      rest <- unsafeInterleaveIO $ makeReverseChunks blksize h offset
      return $ blk : rest

How to create a grid from 1D array using R?

I have a file which contains a 209091 element 1D binary array representing the global land area
which can be downloaded from here:
ftp://sidads.colorado.edu/DATASETS/nsidc0451_AMSRE_Land_Parms_v01/AMSRE_flags_2002/
I want to create a full global grid from the 1D data array, using the provided ancillary row and column files globland_r and globland_c, which can be downloaded from here:
ftp://sidads.colorado.edu/DATASETS/nsidc0451_AMSRE_Land_Parms_v01/AMSRE_ancil/
There is code written in Matlab for this purpose, and I want to translate this Matlab code to R, but I do not know Matlab:
function [gridout, EASE_r, EASE_s] = mkgrid_global(x)
%MKGRID_GLOBAL(x) Creates a matrix for mapping
% gridout = mkgrid_global(x) uses the 209091 element array (x) and returns the 586x1383 grid (gridout).
%Load ancillary EASE grid row and column data, where <MyDir> is the path to
%wherever the globland_r and globland_c files are located on your machine.
fid = fopen('C:\MyDir\globland_r','r');
EASE_r = fread(fid, 209091, 'int16');
fclose(fid);
fid = fopen('C:\MyDir\globland_c','r');
EASE_s = fread(fid, 209091, 'int16');
fclose(fid);
gridout = NaN.*zeros(586,1383);
%Loop through the element array
for i=1:1:209091
    %Distribute each element to the appropriate location in the output
    %matrix (but MATLAB is 1-based, so shift the 0-based row/column indices up by one)
    gridout(EASE_r(i)+1, EASE_s(i)+1) = x(i);
end
Edit, following the solution of @mdsumner:
The files MLLATLSB and MLLONLSB (4-byte integers) contain latitude and longitude (multiply by 1e-5) for geo-locating the full global EASE grid matrix (586×1383)
MLLATLSB and MLLONLSB can be downloaded from here:
ftp://sidads.colorado.edu/DATASETS/nsidc0451_AMSRE_Land_Parms_v01/AMSRE_ancil/
## the sparse dims, literally the xcol * yrow indexes
dims <- c(1383, 586)
cfile <- "ftp://sidads.colorado.edu/DATASETS/nsidc0451_AMSRE_Land_Parms_v01/AMSRE_ancil/globland_c"
rfile <- "ftp://sidads.colorado.edu/DATASETS/nsidc0451_AMSRE_Land_Parms_v01/AMSRE_ancil/globland_r"
## be nice, don't abuse this
col <- readBin(cfile, "integer", n = prod(dims), size = 2, signed = FALSE)
row <- readBin(rfile, "integer", n = prod(dims), size = 2, signed = FALSE)
## example data file
fdat <- "ftp://sidads.colorado.edu/DATASETS/nsidc0451_AMSRE_Land_Parms_v01/AMSRE_flags_2002/flags_2002170A.bin"
dat <- readBin(fdat, "integer", n = prod(dims), size = 1, signed = FALSE)
## now get serious
m <- matrix(as.integer(NA), dims[2L], dims[1L])
m[cbind(row + 1L, col + 1L)] <- dat
image(t(m)[,dims[2]:1], col = rainbow(length(unique(m)), alpha = 0.5))
Maybe we can reconstruct this map projection too.
flon <- "MLLONLSB"
flat <- "MLLATLSB"
## the key is that these are integers, floats scaled by 1e5
lon <- readBin(flon, "integer", n = prod(dims), size = 4) * 1e-5
lat <- readBin(flat, "integer", n = prod(dims), size = 4) * 1e-5
## this is all we really need from now on
range(lon)
range(lat)
library(raster)
library(rgdal) ## need for coordinate transformation
ex <- extent(projectExtent(raster(extent(range(lon), range(lat)), crs = "+proj=longlat"), "+proj=cea"))
grd <- raster(ncols = dims[1L], nrows = dims[2L], xmn = xmin(ex), xmx = xmax(ex), ymn = ymin(ex), ymx = ymax(ex), crs = "+proj=cea")
There is probably an "out by half pixel" error in there, left as an exercise.
Test
plot(setValues(grd, m), col = rainbow(max(m, na.rm = TRUE), alpha = 0.5))
Hohum
library(maptools)
data(wrld_simpl)
plot(spTransform(wrld_simpl, CRS(projection(grd))), add = TRUE)
We can now save the valid cellnumbers to match our "grd" template, then read any particular dat-file and just populate the template with those values based on the cellnumbers (a sketch follows below). Also, it seems someone trod nearly this path earlier, but not much was gained:
How to identify lat and long for a global matrix?
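Here is a sketch of that cellnumber idea (my addition, not part of the original answer; it assumes the grd template and the row/col vectors from above, with rows oriented as in m):
## precompute the cell numbers once from the 0-based ancillary indices
cells <- cellFromRowCol(grd, row + 1L, col + 1L)
## populating the template with any day's flags is then a single assignment
vals <- rep(NA_integer_, ncell(grd))
vals[cells] <- dat
grd_day <- setValues(grd, vals)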

storing output of simulation in an array (R)

I was trying to store my simulation output in an array. I have written the following code:
nsim = 50
res = array(0, c(nsim, 20, 20))
for(i in 1:nsim) {
  cat("simul=", i, "\n")
  simulated = NULL
  stik.simulated = NULL
  simulated = rpp(....)
  stik.simulated = STIKhat(....)
  # from stik.simulated we will get $Khat and $Ktheo, and
  # the dimension of stik.simulated$Khat - stik.simulated$Ktheo is 20 x 20
  res[i,,] = stik.simulated$Khat - stik.simulated$Ktheo
}
But whenever the loop tries to store the output in the array, I get the following error:
simul= 1
Xrange is 20 40
Yrange is -20 20
Doing quartic kernel
Error in res[, , i] = stik.simulated$Khat - stik.simulated$Ktheo :
subscript out of bounds
Seeking your help. Thanks.
I think you need to organize your code to avoid such errors. I assume you are using the package stpp.
First, create a function which generates the matrix for each iteration, and test the function with many values:
stick_diff <- function(u, v){
  u <- seq(0, u, by = 1)
  v <- seq(0, v, by = 1)
  simulated <- rpp(...) ## call rpp here with the right parameters
  stik <- STIKhat(xyt = simulated$xyt,
                  dist = u, times = v, ...)
  stik$Khat - stik$Ktheo ## !! if u != v you will get recycling here!
}
Once you are sure of your function, call it in the loop with the right dimensions:
nsim <- 50
u <- 20
res <- array(0, c(nsim, u, u))
for(i in 1:nsim)
  res[i,,] <- stick_diff(u, u)
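An alternative sketch (my addition, reusing the stick_diff function above): build the slices in a list and stack them in one step, avoiding the pre-allocated array:
slices <- lapply(seq_len(nsim), function(i) stick_diff(u, u))
## simplify2array stacks the matrices along a third dimension;
## aperm moves the simulation index to the first dimension
res <- aperm(simplify2array(slices), c(3, 1, 2))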
