Parse RRULE to readable text?

I am looking for a PHP library, or some pseudocode, to parse iCal RRULEs into readable text, the way Google Calendar does. For example:
RRULE:FREQ=MONTHLY;INTERVAL=1;BYDAY=1FR,3FR,5FR
-> Monthly on Friday of weeks 1, 3, 5 of the month

I was wondering the same thing and struggled to find one, but I eventually came across a library that is still in development: https://github.com/simshaun/recurr/tree/v0.2-dev
I was able to get it working with this code:
// Build the rule with a start date and timezone
$timezone = 'America/New_York';
$startDate = new \DateTime('2013-06-12 20:00:00', new \DateTimeZone($timezone));
$rule = new \Recurr\Rule('FREQ=MONTHLY;COUNT=5', $startDate, $timezone);
// TextTransformer renders the rule as a human-readable sentence
$textTransformer = new \Recurr\Transformer\TextTransformer();
echo $textTransformer->transform($rule);
It's not perfect yet and is still changing while in development, but it looks very promising.
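If you only need simple rules and no library, here is a rough sketch of the pseudocode approach, hand-rolling the text for FREQ and BYDAY only (rruleToText is a hypothetical helper, not part of recurr; real RRULEs have many more parts, so treat this as a starting point):
function rruleToText($rrule) {
    // Split "KEY=VALUE;KEY=VALUE" into an associative array
    $parts = [];
    foreach (explode(';', $rrule) as $pair) {
        list($key, $value) = explode('=', $pair, 2);
        $parts[$key] = $value;
    }
    $freqWords = ['DAILY' => 'Daily', 'WEEKLY' => 'Weekly',
                  'MONTHLY' => 'Monthly', 'YEARLY' => 'Yearly'];
    $text = isset($freqWords[$parts['FREQ']]) ? $freqWords[$parts['FREQ']] : $parts['FREQ'];
    if (!empty($parts['BYDAY'])) {
        $dayNames = ['MO' => 'Monday', 'TU' => 'Tuesday', 'WE' => 'Wednesday',
                     'TH' => 'Thursday', 'FR' => 'Friday', 'SA' => 'Saturday', 'SU' => 'Sunday'];
        $weeks = [];
        $days = [];
        // Each BYDAY item is an optional week number plus a two-letter day, e.g. "1FR"
        foreach (explode(',', $parts['BYDAY']) as $byday) {
            if (preg_match('/^(-?\d+)?([A-Z]{2})$/', $byday, $m)) {
                if ($m[1] !== '') { $weeks[] = $m[1]; }
                $days[$dayNames[$m[2]]] = true;
            }
        }
        $text .= ' on ' . implode(', ', array_keys($days));
        if ($weeks) {
            $text .= ' of weeks ' . implode(', ', $weeks) . ' of the month';
        }
    }
    return $text;
}

echo rruleToText('FREQ=MONTHLY;INTERVAL=1;BYDAY=1FR,3FR,5FR');
// -> Monthly on Friday of weeks 1, 3, 5 of the month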


Biblatex doesn't compile. Probably .bib file not recognised

I've spent many hours trying to get my bibliography working - unsuccessfully. I suspect that, somehow, my .bib file doesn't get recognised.
Help would be greatly appreciated.
MWE:
\documentclass[a4paper, 12pt]{article}
\usepackage{array}
\usepackage{lscape}
\usepackage[paper=portrait,pagesize]{typearea}
\usepackage[showframe=false]{geometry}
\usepackage{changepage}
\usepackage{tabularx}
\usepackage{graphicx}
\usepackage{adjustbox}
\usepackage[utf8]{inputenc}
\usepackage{babel,csquotes,xpatch}
\usepackage[backend=biber,style=authoryear, natbib]{biblatex}
\addbibresource{test.bib}
\usepackage{xurl}
\usepackage[colorlinks,allcolors=blue]{hyperref}
\begin{document}
This is a test... test test\\
\cite{glaeser_gyourko}\\
\cite{hsieh-moretti:2019}\\
\cite{glaeser_gyourko}\\
\printbibliography
\end{document}
test.bib file:
@article{hsieh-moretti:2019,
  Author = {Hsieh, Chang-Tai and Moretti, Enrico},
  Title = {Housing Constraints and Spatial Misallocation},
  Journal = {American Economic Journal: Macroeconomics},
  Volume = {11},
  Number = {2},
  Year = {2019},
  Month = {4},
  Pages = {1-39},
  DOI = {10.1257/mac.20170388},
  URL = {https://www.aeaweb.org/articles?id=10.1257/mac.20170388}
}
@article{glaeser_gyourko,
  Author = {Glaeser, Edward and Gyourko, Joseph},
  Title = {The Economic Implications of Housing Supply},
  Journal = {Journal of Economic Perspectives},
  Volume = {32},
  Number = {1},
  Year = {2018},
  Month = {2},
  Pages = {3-30},
  DOI = {10.1257/jep.32.1.3},
  URL = {https://www.aeaweb.org/articles?id=10.1257/jep.32.1.3}
}
In the PDF it looks like this: [screenshot of the PDF output omitted]
I get the following information in the source viewer:
Process started
INFO - This is Biber 2.14
INFO - Logfile is 'test.blg'
INFO - Reading 'test.bcf'
INFO - Found 2 citekeys in bib section 0
INFO - Processing section 0
INFO - Globbing data source 'test.bib'
INFO - Globbed data source 'test.bib' to test.bib
INFO - Looking for bibtex format file 'test.bib' for section 0
INFO - LaTeX decoding ...
INFO - Found BibTeX data source 'test.bib'
Process exited with error(s)
I use Texmaker 5.0.4 on macOS. [screenshots of the Texmaker configuration omitted]
I really have very little idea of what is going on. Today I started a work session, added a new source, and it didn't work. I deleted the new source so that the bibliography would be the same as before my changes, and it didn't work either. So this leads me to assume that, somehow, the program doesn't understand where the bibliography is. The .bib file and the document are in the same folder.
What I tried:
Triple-checked the bibliography entries using tools such as https://biblatex-linter.herokuapp.com/
Cleared the cache of all documents.
Changed the natbib option in \usepackage[backend=biber,style=authoryear, natbib]{biblatex} to biber -> doesn't seem to work.
Left natbib out and got the same result: \usepackage[backend=biber,style=authoryear, natbib]{biblatex} => \usepackage[backend=biber,style=authoryear]{biblatex}
Added \usepackage{natbib} in addition to biblatex, but this produces compatibility issues.
Added \usepackage[utf8]{inputenc} and \usepackage{babel,csquotes,xpatch} because they are recommended by this biblatex cheat sheet: http://tug.ctan.org/info/biblatex-cheatsheet/biblatex-cheatsheet.pdf. Didn't change anything.
Thanks for your time!
I had a similar problem; what helped me was looking up the articles and rewriting the entries from the Google Scholar BibTeX version.
In my case the problem arose because I had edited an entry manually. That produces an error which is not reported clearly, and it threw me into researching exactly the same kind of ".bib file not recognised" error.
Your housing article should be formatted like this:
@article{hsieh2019housing,
  title={Housing constraints and spatial misallocation},
  author={Hsieh, Chang-Tai and Moretti, Enrico},
  journal={American Economic Journal: Macroeconomics},
  volume={11},
  number={2},
  pages={1--39},
  year={2019}
}
I found another source of this problem: Citavi generates invalid BibTeX syntax. Often the year field is not filled correctly, or special characters are not escaped properly. These may be data errors that originate in the sources rather than in Citavi, but nonetheless Citavi often does not export valid BibTeX.
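One further debugging step that often helps: run Biber by hand from a terminal in the document's folder and read its output, which normally names the offending entry and line when the .bib file has a syntax error. A minimal sketch of that workflow, assuming the main file is test.tex sitting next to test.bib:
pdflatex test
biber test
pdflatex test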

how to get value from firebase as date while using moment.js

I'm using Firebase as the database and Moment.js for dates in my Ionic project.
I save the dates I need in Firebase, and I want to take two of them and compute the difference between them in hours, days, etc.
But when I try to use these values, something goes wrong and I cannot compute the difference.
The fields in Firebase: [screenshot omitted]
My code:
let ms = moment(this.userActivityLoginTime,"DD/MM/YYYY HH:mm:ss")
.diff(moment(this.userActivityLogoutTime,"DD/MM/YYYY HH:mm:ss"));
let d = moment.duration(ms);
console.log(d.days(), d.hours(), d.minutes(), d.seconds());
Your code is wrong: you are subtracting the bigger value (the logout time) from the smaller one (the login time), which yields a negative duration. Try the code below.
let ms = moment(this.userActivityLogoutTime, "DD/MM/YYYY HH:mm:ss")
  .diff(moment(this.userActivityLoginTime, "DD/MM/YYYY HH:mm:ss"));
let d = moment.duration(ms);
console.log(d.months(), d.days(), d.hours(), d.minutes(), d.seconds());
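If you want the difference as a single readable figure, the duration object also exposes totals and a humanized form. A small sketch with made-up timestamps (the values are only for illustration):
// Sample values only; in the app these come from Firebase
let login  = moment("21/01/2019 09:30:00", "DD/MM/YYYY HH:mm:ss");
let logout = moment("22/01/2019 18:45:00", "DD/MM/YYYY HH:mm:ss");
let diff = moment.duration(logout.diff(login));
console.log(diff.asHours());   // 33.25 -> total hours as a decimal
console.log(diff.asDays());    // ~1.39 -> total days as a decimal
console.log(diff.humanize());  // "a day" -> rough human-readable description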

bibTeX citation for stackoverflow

(I am not sure whether this question belongs on the meta site or not, but here we go.)
I want to add Stack Overflow to the bibliography of a research paper I am writing, and I wonder if there is any BibTeX code to do so. I already did that for gnuplot.
I searched online, but in most cases the citation points to a specific thread. In this case I want to acknowledge SO as a whole and add a proper citation, probably to the website itself. Hopefully somebody has already done this in the past?
As an example, below are the entries I use for R and gnuplot:
@Manual{rproject,
  title = {R: A Language and Environment for Statistical Computing},
  author = {{R Core Team}},
  organization = {R Foundation for Statistical Computing},
  address = {Vienna, Austria},
  year = {2015},
  url = {https://www.R-project.org/},
}
@Misc{gnuplot,
  author = {Thomas Williams and Colin Kelley and {many others}},
  title = {Gnuplot 5.0: an interactive plotting program},
  month = {June},
  year = {2015},
  howpublished = {\href{http://www.gnuplot.info/}{http://www.gnuplot.info/}}
}
I know that both are software, not website resources, but maybe something along those lines would work. Any feedback is appreciated!
Thanks!
I did not realize this question never got answered. The solution I found was to acknowledge the SO website in the LaTeX code with the following:
This research has made use of the online Q\&A platform {\texttt{stackoverflow}}
(\href{http://stackoverflow.com/}{http://stackoverflow.com/}).
Hope it helps somebody in the future!
Actually, for my paper I am using the following citation:
@misc{stackoverflow,
  url = {https://stackoverflow.com/},
  title = {Stack Overflow},
  year = {2008}
}
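Since websites change over time, it is also common to record an access date. A variant of the same entry (the note field is standard BibTeX; the date shown is just a placeholder):
@misc{stackoverflow,
  url = {https://stackoverflow.com/},
  title = {Stack Overflow},
  year = {2008},
  note = {Accessed: 2020-01-01}
}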
I hope it helps!

App Engine Search API - Sort Results

I have several entities that I am searching across that include dates, and the Search API works great across all of them except for one thing - sorting.
Here's the data model for one of my entities (simplified of course):
class DepositReceipt(ndb.Expando):
    # Sets the creation date automatically
    creation_date = ndb.DateTimeProperty(auto_now_add=True)
And the code to create the search.Document, where de is an instance of the entity:
document = search.Document(
    doc_id=de.key.urlsafe(),
    fields=[search.TextField(name='deposit_key', value=de.key.urlsafe()),
            search.DateField(name='created', value=de.creation_date),
            search.TextField(name='settings', value=de.settings.urlsafe()),
            ])
This returns a valid document.
And finally the problem line. I took this snippet from the official GAE Search API tutorial and just changed the direction of the sort to DESCENDING and changed the search expression to created (the date property from the Document above).
expr_list = [search.SortExpression(
    expression="created", default_value='',
    direction=search.SortExpression.DESCENDING)]
I don't think this is important, but the rest of the search code looks like this:
sort_opts = search.SortOptions(expressions=expr_list)
query_options = search.QueryOptions(
    cursor=query_cursor,
    limit=_NUM_RESULTS,
    sort_options=sort_opts)
query_obj = search.Query(query_string=query, options=query_options)
search_results = search.Index(name=index_name).search(query=query_obj)
In production, I get this error message:
InvalidRequest: Failed to parse search request "settings:ag5zfmdoaWRvbmF0aW9uc3IQCxIIU2V0dGluZ3MYmewDDA"; failed to parse date
Changing the expression="created" to anything else works perfectly fine. This also happens across my other entity types that use dates, so I have no idea what's going on. Advice?
I think default_value needs to be a valid date, rather than '' as you have it.
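A sketch of what that change might look like, using an arbitrary early date as the sentinel (datetime.date(1970, 1, 1) is just a placeholder I chose, not something the API mandates):
import datetime

from google.appengine.api import search

expr_list = [search.SortExpression(
    expression='created',
    # a valid date sentinel instead of '' for documents missing the field
    default_value=datetime.date(1970, 1, 1),
    direction=search.SortExpression.DESCENDING)]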

Plotting a word-cloud by date for a twitter search result? (using R)

I wish to search Twitter for a word (let's say #google) and then generate a tag cloud of the words used in the tweets, but according to dates (for example, a moving window of an hour that advances by 10 minutes each time, showing me how different words became more frequently used throughout the day).
I would appreciate any help on how to go about doing this, regarding: resources for the data, code for the programming (R is the only language I am comfortable using), and ideas on visualization. Questions:
How do I get the information?
In R, I found that the twitteR package has the searchTwitter command, but I don't know how big an "n" I can get from it. Also, it doesn't return the dates on which the tweets originated.
I see here that I could get up to 1500 tweets, but this requires me to do the parsing manually (which leads me to step 2). Also, for my purposes I would need tens of thousands of tweets. Is it even possible to get them in retrospect (for example, requesting older posts each time through the API URL)? If not, there is the more general question of how to build a personal archive of tweets on your home computer (a question which might be better left to another SO thread, although any insights from people here would be very interesting for me to read).
How do I parse the information (in R)? I know that R has functions that could help, from the RCurl and twitteR packages, but I don't know which ones or how to use them. Any suggestions would be of help.
How do I analyse it? How do I remove all the "not interesting" words? I found that the tm package in R has this example:
reuters <- tm_map(reuters, removeWords, stopwords("english"))
Would this do the trick? Should I do something else/more?
Also, I imagine I would like to do that after cutting my dataset according to time, which will require some POSIX-like date functions (I am not exactly sure which ones would be needed here, or how to use them).
And lastly, there is the question of visualization. How do I create a tag cloud of the words? I found a solution for this here; any other suggestions/recommendations?
I believe I am asking a huge question here, but I tried to break it into as many straightforward sub-questions as possible. Any help will be welcomed!
Best,
Tal
Word/Tag cloud in R using "snippets" package
www.wordle.net
Using the openNLP package you could POS-tag the tweets (POS = part of speech) and then extract just the nouns, verbs, or adjectives for visualization in a word cloud.
Maybe you can query Twitter, use the current system time as a timestamp, write to a local database, and query again in increments of x secs/mins, etc.
There is historical data available at http://www.readwriteweb.com/archives/twitter_data_dump_infochimp_puts_1b_connections_up.php and http://www.wired.com/epicenter/2010/04/loc-google-twitter/
As for the plotting piece: I did a word cloud here: http://trends.techcrunch.com/2009/09/25/describe-yourself-in-3-or-4-words/ using the snippets package; my code is in there. I manually pulled out certain words. Check it out and let me know if you have more specific questions.
I note that this is an old question, and there are several solutions available via web search, but here's one answer (via http://blog.ouseful.info/2012/02/15/generating-twitter-wordclouds-in-r-prompted-by-an-open-learning-blogpost/):
require(twitteR)

searchTerm='#dev8d'
#Grab the tweets
rdmTweets <- searchTwitter(searchTerm, n=500)
#Use a handy helper function to put the tweets into a dataframe
tw.df=twListToDF(rdmTweets)

##Note: there are some handy, basic Twitter related functions here:
##https://github.com/matteoredaelli/twitter-r-utils
#For example:
RemoveAtPeople <- function(tweet) {
  gsub("@\\w+", "", tweet)
}
#Then for example, remove @'d names
tweets <- as.vector(sapply(tw.df$text, RemoveAtPeople))

##Wordcloud - scripts available from various sources; I used:
#http://rdatamining.wordpress.com/2011/11/09/using-text-mining-to-find-out-what-rdatamining-tweets-are-about/
#Call with eg: tw.c=generateCorpus(tw.df$text)
generateCorpus= function(df, my.stopwords=c()){
  #Install the textmining library
  require(tm)
  #The following is cribbed and seems to do what it says on the can
  tw.corpus= Corpus(VectorSource(df))
  # remove punctuation
  tw.corpus = tm_map(tw.corpus, removePunctuation)
  #normalise case
  tw.corpus = tm_map(tw.corpus, tolower)
  # remove stopwords
  tw.corpus = tm_map(tw.corpus, removeWords, stopwords('english'))
  tw.corpus = tm_map(tw.corpus, removeWords, my.stopwords)
  tw.corpus
}

wordcloud.generate=function(corpus, min.freq=3){
  require(wordcloud)
  doc.m = TermDocumentMatrix(corpus, control = list(minWordLength = 1))
  dm = as.matrix(doc.m)
  # calculate the frequency of words
  v = sort(rowSums(dm), decreasing=TRUE)
  d = data.frame(word=names(v), freq=v)
  #Generate the wordcloud
  wc=wordcloud(d$word, d$freq, min.freq=min.freq)
  wc
}

print(wordcloud.generate(generateCorpus(tweets, 'dev8d'), 7))

##Generate an image file of the wordcloud
png('test.png', width=600, height=600)
wordcloud.generate(generateCorpus(tweets, 'dev8d'), 7)
dev.off()

#We could make it even easier if we hide away the tweet grabbing code. eg:
tweets.grabber=function(searchTerm, num=500){
  require(twitteR)
  rdmTweets = searchTwitter(searchTerm, n=num)
  tw.df=twListToDF(rdmTweets)
  as.vector(sapply(tw.df$text, RemoveAtPeople))
}
#Then we could do something like:
tweets=tweets.grabber('ukgc12')
wordcloud.generate(generateCorpus(tweets), 3)
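The original question also asked about slicing by time. A minimal sketch of that windowing idea, assuming tw.df is the data frame returned by twListToDF above (its created column holds the tweet timestamps), with an hour-long window advancing in 10-minute steps:
#Window the tweets: 1-hour windows, advancing by 10 minutes
tw.df$created <- as.POSIXct(tw.df$created)
window.starts <- seq(min(tw.df$created), max(tw.df$created), by="10 min")
for (i in seq_along(window.starts)) {
  ws <- window.starts[i]
  #POSIXct arithmetic is in seconds, so ws + 3600 is one hour later
  in.window <- tw.df$text[tw.df$created >= ws & tw.df$created < ws + 3600]
  if (length(in.window) == 0) next
  #Build a corpus and word cloud per window, e.g.:
  #wordcloud.generate(generateCorpus(in.window), 3)
}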
I would like to answer your question about making a big word cloud.
What I did:
Use s0.tweet <- searchTwitter(KEYWORD, n=1500) once per day, for 7 days or more.
Combine the results with this command:
rdmTweets = c(s0.tweet, s1.tweet, s2.tweet, s3.tweet, s4.tweet, s5.tweet, s6.tweet, s7.tweet)
The result (image omitted): this square cloud consists of about 9000 tweets.
Source: People voice about Lynas Malaysia through Twitter Analysis with R CloudStat
Hope it helps!
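One caveat with that approach: if the daily searches overlap, the same tweet can be collected more than once. A sketch of deduplicating after combining (assuming the id column that twListToDF produces):
tw.df <- twListToDF(rdmTweets)
tw.df <- tw.df[!duplicated(tw.df$id), ]  # drop tweets captured by more than one search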
