Error: no applicable method for 'bbox' applied to an object of class "Extent" - qmap

I am plotting how collared animals utilise water points using the recurse package, and working through the code supplied here, with my data replacing Leo's data: http://dx.doi.org/10.5441/001/1.46ft1k05
I'm currently trying to map movement based on the most frequently visited locations. However, I keep getting an error from UseMethod("bbox").
When I use show(leoGeo), it returns a Move object, and I have enabled and registered a Google API key. I have recurse, move, ggplot2, ggmap, RgoogleMaps, raster, scales, viridis, lubridate, reshape2, rworldmap, maptools, cluster, amt, sp, rgdal, curl and dplyr loaded.
leovisit50 = getRecursions(leo.df, 50)
revisitThreshold = 75
leoGeo.map.df = as(leoGeo,'data.frame')
leoGeo.map.df$revisits = leovisit50$revisits
and when I go to use this command
map.leoGeo = qmap(bbox(extent(leoGeo[leovisit50$revisits >
revisitThreshold,])), zoom = 13, maptype = "road.Dist")
it keeps returning the error below
Error in UseMethod("bbox", x) :
no applicable method for 'bbox' applied to an object of class "Extent"
(I can provide the full code if required; it was just the map.leoGeo line I was having difficulty with.)
I'm new to movement analysis and am not sure how to fix this problem, so any help would be greatly appreciated!

The method clearly exists
library(raster)
r <- raster()
e <- extent(r)
bbox(e)
#    min max
# s1 -180 180
# s2  -90  90
So you are probably loading another package that masks that method. Since qmap calls bbox internally, you cannot work around the masking by writing raster::bbox yourself. Start with a fresh R session and watch for masking warnings as you load your packages. Try to avoid loading many packages, and avoid those that mask methods from other packages.
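If you just need the map, you can also sidestep bbox() and build the left/bottom/right/top location vector that ggmap's get_map() documents, straight from the Extent object. A minimal sketch, assuming leoGeo and leovisit50 are set up as in the question:
# Build a left/bottom/right/top vector from the Extent slots,
# avoiding the masked bbox() method entirely.
e <- raster::extent(leoGeo[leovisit50$revisits > revisitThreshold, ])
loc <- c(left = e@xmin, bottom = e@ymin, right = e@xmax, top = e@ymax)
map.leoGeo <- qmap(location = loc, zoom = 13)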

Related

R automate testing?

Currently, I employ the following script to test for Granger causality. Note: my main question is about the script structure, not the method.
# These values always have to be specified manually
library(vars)  # VAR()
library(aod)   # wald.test()
dat <- data.frame(df[2], df[3])
lag <- 2
# VAR model
V <- VAR(dat, p = lag, type = "both")
V$varresult  # the per-equation lm fits
summary(V)
cof1 <- coef(V$varresult[[1]])  # coefficients of the first equation
wald.test(b = cof1, Sigma = vcov(V$varresult[[1]]),
          Terms = seq(2, by = 2, length.out = lag))
names(cof1[2])
wald.test(b = coef(V$varresult[[2]]), Sigma = vcov(V$varresult[[2]]),
          Terms = seq(1, by = 2, length.out = lag))
names(cof1[1])
The main issue is that I always have to change the testing pair in dat <- data.frame(..) manually. Further, I always enter lag = x by hand after some stationarity tests that cannot readily be automated.
Let's say I would have to test the following pairs:
df[2],df[3]
df[2],df[4]
df[2],df[5]
df[6],df[7]
df[8],df[9]
Can I somehow specify these in an array for the test? Assuming I also know the lag for each testing pair, could I add that to it as well?
It would be perfect to output my test results directly into a table, instead of changing the data manually and then entering each result into Excel/LaTeX.
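One hedged sketch of how the pairs and lags could be driven from a list and the results collected into a table (assuming the vars and aod packages; specs and its entries are hypothetical names for illustration):
library(vars)  # VAR()
library(aod)   # wald.test()
# Each entry gives the two df columns to pair and the lag to use
specs <- list(
  list(cols = c(2, 3), lag = 2),
  list(cols = c(2, 4), lag = 2),
  list(cols = c(6, 7), lag = 3)
)
results <- do.call(rbind, lapply(specs, function(s) {
  dat <- df[, s$cols]
  V <- VAR(dat, p = s$lag, type = "both")
  w <- wald.test(b = coef(V$varresult[[1]]),
                 Sigma = vcov(V$varresult[[1]]),
                 Terms = seq(2, by = 2, length.out = s$lag))
  data.frame(pair = paste(names(dat), collapse = " vs "),
             lag  = s$lag,
             chi2 = w$result$chi2["chi2"],
             p    = w$result$chi2["P"])
}))
results  # one row per tested pair; write.csv(results, "granger.csv") gives a table directly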

FSharpChart with Windows.Forms very slow for many points

I use code like the example below to do basic plotting of a list of values from F# Interactive. When plotting more points, the time taken to display increases dramatically. In the examples below, 10^4 points display in 4 seconds, whereas 4×10^4 points take a patience-testing 53 seconds. Overall, it's roughly as if the time to plot N points grows as N^2.
The upshot is that I'll probably add an interpolation layer in front of this code, but:
1) I wonder if someone who knows the workings of FSharpChart and Windows.Forms could explain what is causing this behaviour? (The data is bounded, so one thing that seems ruled out is the display needing to adjust scale.)
2) Is there a simple remedy other than interpolating the data myself?
let plotl (f: float list) =
    let chart =
        FSharpChart.Line(f, Name = "")
        |> FSharpChart.WithSeries.Style(Color = System.Drawing.Color.Red, BorderWidth = 2)
    let form = new Form(Visible = true, TopMost = true, Width = 700, Height = 500)
    let ctl = new ChartControl(chart, Dock = DockStyle.Fill)
    form.Controls.Add(ctl)

let z1 = [ for i in 1 .. 10000 do yield sin (float (i * i)) ]
let z2 = [ for i in 1 .. 20000 do yield sin (float (i * i)) ]
plotl z1
plotl z2
First of all, FSharpChart is the name used in an older version of the library. The latest version is called F# Charting, comes with new documentation, and uses just Chart.
To answer your question, Chart.Line and Chart.Point are quite slow for a large number of points. The library also has Chart.FastLine and Chart.FastPoint (which do not support as many features, but are much faster). So, try getting the latest version of F# Charting and using the "Fast" version of the method.

AI help for Corona SDK

I've created a game which loops through a table of properties to create enemies to place on screen. The created enemies are stored in a variable called "baddie", and things like their x and y values are determined by the properties I gave them in the table. Currently, the loop creates 3 enemies at varying spots on screen. It looks something like this:
for i = 1, #level[section]["enemies"] do
    local object = level[section]["enemies"][i]
    baddie = display.newSprite(baddieSheet, baddieData)
    baddie.anchorX = 0.5
    baddie.anchorY = 1
    baddie.x = object["position"][1]; baddie.y = object["position"][2]
    baddie.xScale = -1
    baddie.myName = "Baddie"
    baddie.health = 15
    baddie:setSequence("standrt"); baddie:play()
    physics.addBody(baddie, "dynamic", { radius = 22, density = 0.1, friction = 10.0, bounce = 0.0 })
    baddie.isFixedRotation = true
    enemyGroup:insert(baddie)
end
I then insert all of the created instances stored in the baddie variable into a display group called "enemyGroup".
Now here's my question. I'm working on my game's AI and storing it all in an enterFrame listener. I want to make a true/false flag called "inRange": when an enemy's x position is within 20 pixels of the player's x, inRange = true, and while it's true the enemy will attack him. But I haven't figured out a way to make the inRange flag track each individual enemy instead of all of them.
I was thinking of something like,
for i = 1, enemyGroup.numChildren do
    enemyGroup[i].widthBetween = enemyGroup[i].x - sprite.x
    if enemyGroup[i].widthBetween <= 20 and enemyGroup[i].widthBetween >= -20 then
        enemyGroup[i].inRange = true
    else
        enemyGroup[i].inRange = false
    end
end
But the issue is that enemyGroup[i].inRange seems to be local: I can't access it outside the loop or in other functions. This is obviously problematic, because in another function I want each individual enemy to punch, roll, jump, etc. when its individual inRange property is true. Is there a way I can store enemyGroup[i].inRange so that I can access it whenever I need it?
Sorry if this question is confusing; it's been a struggle to word it.
I'm not sure why this isn't working for you. enemyGroup[i].inRange is not local; it's an attribute of the object at enemyGroup[i]. It should be available anywhere you can access enemyGroup[i].
Personally, I would not have used a display.newGroup() for this; instead I would have created an array/table scoped to the whole scene:
local baddies = {}
then in your loop:
-- instead of enemyGroup:insert(baddie), do this:
baddies[#baddies + 1] = baddie
Then you have a table that you can loop over. It's really more a matter of code style than functionality, though: as long as your enemyGroup is scoped at a high enough level, any function in the scene can see it.
You should create a file with the structure below:
module(..., package.seeall)
enemyGroup = {}
and in all the files where you want to use this table, first require this file (assuming you named it enemies.lua):
local enemiesArray = require "enemies"
-- somewhere in your code:
enemiesArray.enemyGroup[i].inRange = true -- or whatever you like to do
Another, arguably better option is the _G table: when you store an object in _G you can access it wherever you want (like the famous Singleton design pattern). Just set the variable once and use it anywhere, as often as you want. For example:
-- in one file you set enemy table:
_G.enemies = enemyGroup
-- somewhere else entirely :)
print(_G.enemies[i].inRange)

Need to figure out how to use DeepZoomTools.dll to create DZI

I am not familiar with .NET coding.
However, I must create DZI sliced image assets on a shared server and am told that I can instantiate and use DeepZoomTools.dll.
Can someone show me a very simple DZI creation script that demonstrates the proper .NET coding technique? I can embellish it as needed, I'm sure, but I don't know where to start.
Assuming I have a JPG, how does a script simply slice it up and save the result?
I can imagine it's only a few lines of code. The server is running IIS 7.5.
If anyone has a simple example, I'd be most appreciative.
Thanks
I don't know myself, but you might ask in the OpenSeadragon community:
https://github.com/openseadragon/openseadragon/issues
Someone there might know.
Does it have to be DeepZoomTools.dll? There are a number of other options for creating DZI files. Here are a few:
http://openseadragon.github.io/examples/creating-zooming-images/
Example of building a Seadragon Image from multiple images.
In this, the "clsCanvas" objects and collection can pretty much be ignored; it was an object internal to my code that generated the images with GDI+ and then put them on disk. The code below just shows how to get a bunch of images from file and assemble them into a zoomable collection. Hope this helps someone :-).
CollectionCreator cc = new CollectionCreator();

// set default values that make sense for conversion options
cc.ServerFormat = ServerFormats.Default;
cc.TileFormat = ImageFormat.Jpg;
cc.TileSize = 256;
cc.ImageQuality = 0.92;
cc.TileOverlap = 0;

// the max level should always correspond to the log base 2 of the tile size, unless otherwise specified
cc.MaxLevel = (int)Math.Log(cc.TileSize, 2);

// build the list of all the images to include in the collection
List<Microsoft.DeepZoomTools.Image> aoImages = new List<Microsoft.DeepZoomTools.Image>();
double fLeftShift = 0;
foreach (clsCanvas oCanvas in aoCanvases)
{
    // viewport width as a function of this canvas, so the width of this canvas is 1
    double fThisImgWidth = oCanvas.MyImageWidth - 1; // the -1 creates a 1px overlap, hiding the seam between images
    double fTotalViewportWidth = fTotalImageWidth / fThisImgWidth;
    double fMyLeftEdgeInViewportUnits = -fLeftShift / fThisImgWidth; // please don't ask me why this is a negative number
    double fMyTopInViewportUnits = -fTotalViewportWidth * 0.3;
    fLeftShift += fThisImgWidth;

    Microsoft.DeepZoomTools.Image oImg = new Microsoft.DeepZoomTools.Image(oCanvas.MyFileName.Replace("_Out_Tile", ""));
    oImg.ViewportWidth = fTotalViewportWidth;
    oImg.ViewportOrigin = new System.Windows.Point(fMyLeftEdgeInViewportUnits, fMyTopInViewportUnits);
    aoImages.Add(oImg);
}

// create the collection from the list of images
cc.Create(aoImages, sMasterOutFile);

Plotting a word-cloud by date for a twitter search result? (using R)

I wish to search Twitter for a word (let's say #google) and generate a tag cloud of the words used in tweets, broken down by date (for example, a moving one-hour window that advances by 10 minutes each step and shows how different words become more frequent throughout the day).
I would appreciate any help on how to go about doing this regarding: resources for the information, code for the programming (R is the only language I am adept in), and ideas on visualization. Questions:
How do I get the information?
In R, I found that the twitteR package has the searchTwitter command, but I don't know how big an "n" I can get from it. Also, it doesn't return the dates the tweets originated from.
I see here that I could get up to 1500 tweets, but this requires me to do the parsing manually (which leads me to step 2). Also, for my purposes I would need tens of thousands of tweets. Is it even possible to get them retrospectively? (For example, requesting older posts each time through the API URL?) If not, there is the more general question of how to create a personal store of tweets on your home computer (a question which might be better left to another SO thread, although any insights from people here would be very interesting for me to read).
How do I parse the information (in R)? I know that R has functions that could help in the RCurl and twitteR packages, but I don't know which, or how to use them. Any suggestions would be of help.
How do I analyse the text? How do I remove all the "not interesting" words? I found that the "tm" package in R has this example:
reuters <- tm_map(reuters, removeWords, stopwords("english"))
Would this do the trick? Should I do something else or more?
Also, I imagine I would like to do that after cutting my dataset according to time, which will require some POSIX-style date functions; I am not exactly sure which would be needed here, or how to use them.
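A minimal sketch of that windowing step, assuming a data frame tw.df with a created timestamp column as returned by twitteR's twListToDF (the 10-minute bin width matches the moving-window idea above):
# Bin tweets into 10-minute windows; an hour-long moving window can then
# be assembled from six consecutive bins.
tw.df$window <- cut(as.POSIXct(tw.df$created), breaks = "10 mins")
texts.by.window <- split(tw.df$text, tw.df$window)  # one character vector per window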
And lastly, there is the question of visualization: how do I create a tag cloud of the words? I found one solution for this here; any other suggestions/recommendations?
I believe I am asking a huge question here, but I tried to break it into as many straightforward questions as possible. Any help will be welcomed!
Best,
Tal
Word/Tag cloud in R using "snippets" package
www.wordle.net
Using the openNLP package you could POS-tag the tweets (POS = part of speech) and then extract just the nouns, verbs, or adjectives for visualization in a word cloud.
Maybe you can query Twitter and use the current system time as a time-stamp, write to a local database, and query again in increments of x secs/mins, etc.
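A rough sketch of that polling idea (assuming the twitteR package; the search term, batch size, and file name are placeholders):
library(twitteR)
all.tweets <- data.frame()
repeat {  # stop manually with Esc/Ctrl-C
  batch <- twListToDF(searchTwitter("#google", n = 100))
  batch$fetched <- Sys.time()        # stamp the batch with the system time
  all.tweets <- rbind(all.tweets, batch)
  saveRDS(all.tweets, "tweets.rds")  # persist locally between polls
  Sys.sleep(10 * 60)                 # poll again in ten minutes
}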
There is historical data available at http://www.readwriteweb.com/archives/twitter_data_dump_infochimp_puts_1b_connections_up.php and http://www.wired.com/epicenter/2010/04/loc-google-twitter/
As for the plotting piece: I did a word cloud here: http://trends.techcrunch.com/2009/09/25/describe-yourself-in-3-or-4-words/ using the snippets package; my code is in there. I manually pulled out certain words. Check it out and let me know if you have more specific questions.
I note that this is an old question, and there are several solutions available via web search, but here's one answer (via http://blog.ouseful.info/2012/02/15/generating-twitter-wordclouds-in-r-prompted-by-an-open-learning-blogpost/):
require(twitteR)
searchTerm = '#dev8d'
# Grab the tweets
rdmTweets <- searchTwitter(searchTerm, n = 500)
# Use a handy helper function to put the tweets into a dataframe
tw.df = twListToDF(rdmTweets)
## Note: there are some handy, basic Twitter related functions here:
## https://github.com/matteoredaelli/twitter-r-utils
# For example:
RemoveAtPeople <- function(tweet) {
  gsub("@\\w+", "", tweet)
}
# Then, for example, remove @names
tweets <- as.vector(sapply(tw.df$text, RemoveAtPeople))
## Wordcloud - scripts available from various sources; I used:
# http://rdatamining.wordpress.com/2011/11/09/using-text-mining-to-find-out-what-rdatamining-tweets-are-about/
# Call with e.g.: tw.c = generateCorpus(tw.df$text)
generateCorpus = function(df, my.stopwords = c()) {
  # Load the text mining library
  require(tm)
  # The following is cribbed and seems to do what it says on the can
  tw.corpus = Corpus(VectorSource(df))
  # remove punctuation
  tw.corpus = tm_map(tw.corpus, removePunctuation)
  # normalise case
  tw.corpus = tm_map(tw.corpus, tolower)
  # remove stopwords
  tw.corpus = tm_map(tw.corpus, removeWords, stopwords('english'))
  tw.corpus = tm_map(tw.corpus, removeWords, my.stopwords)
  tw.corpus
}
wordcloud.generate = function(corpus, min.freq = 3) {
  require(wordcloud)
  doc.m = TermDocumentMatrix(corpus, control = list(minWordLength = 1))
  dm = as.matrix(doc.m)
  # calculate the frequency of words
  v = sort(rowSums(dm), decreasing = TRUE)
  d = data.frame(word = names(v), freq = v)
  # Generate the wordcloud
  wc = wordcloud(d$word, d$freq, min.freq = min.freq)
  wc
}
print(wordcloud.generate(generateCorpus(tweets, 'dev8d'), 7))
## Generate an image file of the wordcloud
png('test.png', width = 600, height = 600)
wordcloud.generate(generateCorpus(tweets, 'dev8d'), 7)
dev.off()
# We could make it even easier if we hide away the tweet grabbing code, e.g.:
tweets.grabber = function(searchTerm, num = 500) {
  require(twitteR)
  rdmTweets = searchTwitter(searchTerm, n = num)
  tw.df = twListToDF(rdmTweets)
  as.vector(sapply(tw.df$text, RemoveAtPeople))
}
# Then we could do something like:
tweets = tweets.grabber('ukgc12')
wordcloud.generate(generateCorpus(tweets), 3)
I would like to answer your question about making a big word cloud.
What I did:
Use s0.tweet <- searchTwitter(KEYWORD, n=1500) for 7 days or more, such as THIS.
Combine them with this command:
rdmTweets = c(s0.tweet,s1.tweet,s2.tweet,s3.tweet,s4.tweet,s5.tweet,s6.tweet,s7.tweet)
The result: a square cloud consisting of about 9000 tweets.
Source: People voice about Lynas Malaysia through Twitter Analysis with R CloudStat
Hope it helps!
