Build an array from YAML in Rails

I'm working on a simple Rails app that sends SMS. I am leveraging Twilio for this via the twilio-ruby gem. I have 10 different phone numbers that I want to be able to send SMS from at random.
I know if I do something like this:
numbers = ["281-555-1212", "821-442-2222", "810-440-2293"]
numbers.sample
281-555-1212
It will randomly pull one of the values from the array, which is exactly what I want. The problem is I don't want to hardcode all 10 of these numbers into the app or commit them to version control.
So I'm listing them in YAML (secrets.yml) along with my Twilio SID/token. How can I build an array out of the 10 YAML fields (twilio_num_1, twilio_num_2, etc.) so that I can call numbers.sample?
Or is there a better way to do this?

You can also use a YAML block sequence:
twilio_numbers:
  - 281-555-1122
  - 817-444-2222
  - 802-333-2222
This way you don't have to write the numbers on one line.

Figured this out through trial and error.
In secrets.yml:
twilio_numbers: ["281-555-1122","817-444-2222","802-333-2222"]
In my code:
Rails.application.secrets.twilio_numbers.sample
Works like a charm.
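One caveat, assuming Rails 4.1+'s config/secrets.yml: the values are read from a per-environment key, so the file needs that nesting, e.g.:
development:
  twilio_numbers: ["281-555-1122", "817-444-2222", "802-333-2222"]
production:
  twilio_numbers: ["281-555-1122", "817-444-2222", "802-333-2222"]
Rails.application.secrets then returns the section for the current environment.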

Create a file: config/twilio_numbers.yml
---
- 281-555-1122
- 817-444-2222
- 802-333-2222
and load it in your config/application.rb like this:
config.twilio_numbers = YAML.load_file 'config/twilio_numbers.yml'
You can then access the array from inside any file like this:
Rails.application.config.twilio_numbers
=> ["281-555-1122", "817-444-2222", "802-333-2222"]


Use the content of a tuple as a session variable

I extracted a list of tuples from a previous response with the following regex:
.check(regex(""""idSc":(.{1,8}),"pasTemps":."codePasTemps":(.),"""").ofType[(String,String)].findAll.saveAs("OBJECTS1"))
So I get my object :
OBJECTS1 -> List((1657751,2), (1658105,2), (4557378,2), (1657750,1), (916,1), (917,2), (1658068,1), (1658069,2), (4557379,2), (1658082,1), (4557367,1), (4557368,1), (1660865,2), (1660866,2), (1658122,1), (921,1), (922,2), (923,2), (1660875,1), (1660876,2), (1660877,2), (1658300,1), (1658301,1), (1658302,1), (1658309,1), (1658310,1), (2996562,1), (4638455,1))
After that I did a foreach and needed to extract every couple to add them to the next requests, so I tried:
.foreach("${OBJECTS1}", "couple") {
exec(http("request_foreach47"
.get("/ctr/web/api/seriegraph/bydates/${couple(0)}/${couple(1)}/1552863600000/1554191743799")
.headers(headers_27))
}
But I get the message: named 'couple' does not support index access.
I also thought that using two regexes on the couple to extract both parts could work, but I haven't found any way to use a regex on a session variable. (Even if it's not needed for this case, I'm really interested to learn how, as it could be useful.)
I would be really thankful if you could provide some help. (I'm using Gatling 2 but can't use a more recent version, as it's for work and other scripts have been developed with Gatling 2.)
each "couple" is a scala tuple which can't be indexed into like a collection. Fortunately the gatling EL has a function that handles tuples.
so instead of
.get("/ctr/web/api/seriegraph/bydates/${couple(0)}/${couple(1)}/1552863600000/1554191743799")
you can use
.get("/ctr/web/api/seriegraph/bydates/${couple._1}/${couple._2}/1552863600000/1554191743799")

Dynamic mapping for BeanIO in Camel

I would like to achieve something like below:
from("direct:dataload")
.beanRef("headerUpdater")
.log("Log: " + simple("${in.header.contentType}").getText())
//.unmarshal().beanio(simple("${in.header.contentType}").getText(), "content")
.unmarshal(new BeanIODataFormat(
"file://C://Users//admr229//Documents//mappings.xml", "clients"))
.to("bean:headerFooterValidator")
.split(body())
.process(dataValidator).choice()
.when(header("error").isNotNull())
.to("seda:saveErrorsForReport").otherwise()
.to("seda:updateLive")
.end();
I have commented out the line that I cannot get to work. I want to pass dynamic values from the previous endpoint's output to initialize the BeanIO data format.
The only thing I can think of is using a recipient list, which will dynamically choose a predefined endpoint, because in my case that endpoint will have to unmarshal with BeanIO, unlike something like "activemq:queue:test", which is purely text.
I hope I have made my question clear. Please let me know if you need any further details.
I am using Camel 2.15.2.
You can use the Data Format component [1], where you can specify beanio as the data format, and build the URI dynamically [2].
[1] - http://camel.apache.org/dataformat-component.html
[2] - http://camel.apache.org/how-to-use-a-dynamic-uri-in-to.html
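Putting [1] and [2] together on Camel 2.15 (which predates the dynamic toD), a minimal sketch using recipientList to build the dataformat endpoint URI at runtime; it assumes, as in your commented-out line, that the contentType header carries the mapping file location:
from("direct:dataload")
    .beanRef("headerUpdater")
    // the endpoint URI is evaluated per message, so the mapping is dynamic
    .recipientList(simple(
        "dataformat:beanio:unmarshal?mapping=${header.contentType}&streamName=content"))
    .to("bean:headerFooterValidator");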
You can also do something like this, but again this is not dynamic, I guess, as it needs to have the properties set before bringing up the context:
.unmarshal().beanio(mapping, streamName)

Error while generating nodes with neo4j via neo4j-console

I'm trying to put data into my graph DB using neo4j. I'm new to the field and I don't find it easy to use the batch import tool that Michael Hunger wrote.
My goal is to generate at least 10000 nodes with just one property set. So I wrote a Python script that generates 10000 lines of Cypher queries like "CREATE (:label{ number : '3796142470'})".
I put them in the console and execute them, but I get this exception:
StackTrace:
scala.collection.immutable.List.take(List.scala:84)
org.neo4j.cypher.internal.compiler.v2_0.ast.SingleQuery.checkOrder(Query.scala:33)
Am I doing something wrong? In case the only way to generate those nodes is to use a batch/REST API, could you suggest an easier way to do it?
Change:
CREATE (:label{ number : '3796142470'})
to look like:
CREATE (n1:Label { number : '3796142470'})
This follows the convention:
CREATE (n:Person { name : 'Andres', title : 'Developer' })
Put them into a file (say import.txt) and then:
bin/neo4j-shell -file import.txt
See http://blog.neo4j.org/2014/01/importing-data-to-neo4j-spreadsheet-way.html for more details.
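Since the statements come from a Python script anyway, here is a minimal sketch of a corrected generator (the names are illustrative; note that neo4j-shell expects each statement to be terminated with a semicolon):
# write 10000 CREATE statements, one per line, semicolon-terminated
with open("import.txt", "w") as f:
    for i in range(10000):
        f.write("CREATE (n:Label { number : '%010d' });\n" % i)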

How can I fix the unsecure_base_url on my Magento installation?

I've been having problems uploading images, and while trying to fix that I happened to change the base_url in the config, which has now caused my website to appear without any styling at all (including the Admin).
I've gone into phpMyAdmin and fixed the URLs, but I'm not having any luck. This is what I've got at the moment...
web/unsecure/base_link_url http://www.northwalesdoorworld.co.uk/
web/unsecure/base_skin_url http://www.northwalesdoorworld.co.uk/skin/
web/unsecure/base_media_url http://www.northwalesdoorworld.co.uk/media/
web/unsecure/base_js_url http://www.northwalesdoorworld.co.uk/js/
Could someone please take a look at my site - northwalesdoorworld.co.uk - and recommend a way to solve my problem?
Thanks,
Greg.
If you can't access your site because of this, you can change the values directly in the core_config_data table:
SELECT * FROM core_config_data WHERE path LIKE '%web/unsecure%' OR path LIKE '%web/secure%'
You can fix them by editing the values as clockworkgeek suggested, or by removing the whole rows from the database; they will be created again by Magento, and you can then use the admin page to add new values.
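For example, a minimal sketch of those edits as UPDATE statements (assuming the default core_config_data table with no prefix; the placeholder values are the ones suggested in the next answer):
UPDATE core_config_data SET value = '{{unsecure_base_url}}' WHERE path = 'web/unsecure/base_link_url';
UPDATE core_config_data SET value = '{{unsecure_base_url}}skin/' WHERE path = 'web/unsecure/base_skin_url';
UPDATE core_config_data SET value = '{{unsecure_base_url}}media/' WHERE path = 'web/unsecure/base_media_url';
UPDATE core_config_data SET value = '{{unsecure_base_url}}js/' WHERE path = 'web/unsecure/base_js_url';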
Change the four values to:
{{unsecure_base_url}}
{{unsecure_base_url}}skin/
{{unsecure_base_url}}media/
{{unsecure_base_url}}js/
Empty var/cache/ in your magento folder.
Please refer to the most recent Magento wiki entry: http://www.magentocommerce.com/wiki/recover/restore_base_url_settings
You can also add something like this to /MAGENTO/app/etc/config.xml rather than manipulating the database:
<stores>
  <default>
    <web>
      <unsecure>
        <base_url>{{base_url}}</base_url>
        <base_link_url>{{unsecure_base_url}}</base_link_url>
        <base_web_url>{{unsecure_base_url}}</base_web_url>
        <base_skin_url>{{unsecure_base_url}}skin/</base_skin_url>
        <base_js_url>{{unsecure_base_url}}js/</base_js_url>
        <base_media_url>{{unsecure_base_url}}media/</base_media_url>
      </unsecure>
      <secure>
        <base_url>{{base_url}}</base_url>
        <base_web_url>{{secure_base_url}}</base_web_url>
        <base_link_url>{{secure_base_url}}</base_link_url>
        <base_js_url>{{secure_base_url}}js/</base_js_url>
        <base_skin_url>{{secure_base_url}}skin/</base_skin_url>
        <base_media_url>{{secure_base_url}}media/</base_media_url>
      </secure>
    </web>
  </default>
</stores>

Plotting a word-cloud by date for a twitter search result? (using R)

I wish to search Twitter for a word (let's say #google), and then be able to generate a tag cloud of the words used in the tweets, but according to dates (for example, a moving window of an hour that advances by 10 minutes each time, showing me how different words got used more often throughout the day).
I would appreciate any help on how to go about doing this regarding: resources for the information, code for the programming (R is the only language I am proficient in) and ideas on visualization. Questions:
How do I get the information?
In R, I found that the twitteR package has the searchTwitter command. But I don't know how big an "n" I can get from it. Also, it doesn't return the dates on which the tweets originated.
I see here that I could get up to 1500 tweets, but this requires me to do the parsing manually (which leads me to step 2). Also, for my purposes, I would need tens of thousands of tweets. Is it even possible to get them retrospectively (for example, by requesting older posts each time through the API URL)? If not, there is the more general question of how to create a personal archive of tweets on your home computer (a question which might be better left to another SO thread, although any insights from people here would be very interesting for me to read).
How do I parse the information (in R)? I know that R has functions that could help, in the RCurl and twitteR packages. But I don't know which, or how to use them. Any suggestions would be of help.
How do I analyse it? How do I remove all the "not interesting" words? I found that the "tm" package in R has this example:
reuters <- tm_map(reuters, removeWords, stopwords("english"))
Would this do the trick? Should I do something else/more?
Also, I imagine I would like to do that after cutting my dataset according to time, which will require some POSIX-like date functions (I am not exactly sure which would be needed here, or how to use them).
And lastly, there is the question of visualization. How do I create a tag cloud of the words? I found a solution for this here; any other suggestions/recommendations?
I believe I am asking a huge question here, but I tried to break it into as many straightforward questions as possible. Any help will be welcomed!
Best,
Tal
Word/Tag cloud in R using "snippets" package
www.wordle.net
Using the openNLP package you could POS-tag the tweets (POS = part of speech) and then extract just the nouns, verbs or adjectives for visualization in a word cloud.
Maybe you can query Twitter and use the current system time as a timestamp, write to a local database, and query again in increments of x secs/mins, etc.
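Building on that idea, a minimal sketch of the hour-long window advancing by 10 minutes (assuming twitteR's searchTwitter/twListToDF, whose data frame includes a POSIXct created column):
library(twitteR)
# grab tweets; twListToDF turns the result into a data frame
tweets.df <- twListToDF(searchTwitter("#google", n = 1500))
# window start times: every 10 minutes; each window spans an hour
starts <- seq(min(tweets.df$created), max(tweets.df$created) - 3600, by = 600)
for (i in seq_along(starts)) {
  in.window <- tweets.df$created >= starts[i] & tweets.df$created < starts[i] + 3600
  window.text <- tweets.df$text[in.window]
  # feed window.text into the corpus/wordcloud code from the other answers
}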
There is historical data available at http://www.readwriteweb.com/archives/twitter_data_dump_infochimp_puts_1b_connections_up.php and http://www.wired.com/epicenter/2010/04/loc-google-twitter/
As for the plotting piece: I did a word cloud here: http://trends.techcrunch.com/2009/09/25/describe-yourself-in-3-or-4-words/ using the snippets package, my code is in there. I manually pulled out certain words. Check it out and let me know if you have more specific questions.
I note that this is an old question, and there are several solutions available via web search, but here's one answer (via http://blog.ouseful.info/2012/02/15/generating-twitter-wordclouds-in-r-prompted-by-an-open-learning-blogpost/):
require(twitteR)
searchTerm='#dev8d'
#Grab the tweets
rdmTweets <- searchTwitter(searchTerm, n=500)
#Use a handy helper function to put the tweets into a dataframe
tw.df=twListToDF(rdmTweets)
##Note: there are some handy, basic Twitter related functions here:
##https://github.com/matteoredaelli/twitter-r-utils
#For example:
RemoveAtPeople <- function(tweet) {
  gsub("@\\w+", "", tweet)
}
#Then for example, remove @'d names
tweets <- as.vector(sapply(tw.df$text, RemoveAtPeople))
##Wordcloud - scripts available from various sources; I used:
#http://rdatamining.wordpress.com/2011/11/09/using-text-mining-to-find-out-what-rdatamining-tweets-are-about/
#Call with eg: tw.c=generateCorpus(tw.df$text)
generateCorpus= function(df,my.stopwords=c()){
#Install the textmining library
require(tm)
#The following is cribbed and seems to do what it says on the can
tw.corpus= Corpus(VectorSource(df))
# remove punctuation
tw.corpus = tm_map(tw.corpus, removePunctuation)
#normalise case
tw.corpus = tm_map(tw.corpus, tolower)
# remove stopwords
tw.corpus = tm_map(tw.corpus, removeWords, stopwords('english'))
tw.corpus = tm_map(tw.corpus, removeWords, my.stopwords)
tw.corpus
}
wordcloud.generate=function(corpus,min.freq=3){
require(wordcloud)
doc.m = TermDocumentMatrix(corpus, control = list(minWordLength = 1))
dm = as.matrix(doc.m)
# calculate the frequency of words
v = sort(rowSums(dm), decreasing=TRUE)
d = data.frame(word=names(v), freq=v)
#Generate the wordcloud
wc=wordcloud(d$word, d$freq, min.freq=min.freq)
wc
}
print(wordcloud.generate(generateCorpus(tweets,'dev8d'),7))
##Generate an image file of the wordcloud
png('test.png', width=600,height=600)
wordcloud.generate(generateCorpus(tweets,'dev8d'),7)
dev.off()
#We could make it even easier if we hide away the tweet grabbing code. eg:
tweets.grabber=function(searchTerm,num=500){
require(twitteR)
rdmTweets = searchTwitter(searchTerm, n=num)
tw.df=twListToDF(rdmTweets)
as.vector(sapply(tw.df$text, RemoveAtPeople))
}
#Then we could do something like:
tweets=tweets.grabber('ukgc12')
wordcloud.generate(generateCorpus(tweets),3)
I would like to answer your question about making a big word cloud.
What I did is:
Use s0.tweet <- searchTwitter(KEYWORD, n=1500) once a day for 7 days or more.
Combine them with this command:
rdmTweets = c(s0.tweet,s1.tweet,s2.tweet,s3.tweet,s4.tweet,s5.tweet,s6.tweet,s7.tweet)
The result was a square cloud consisting of about 9000 tweets.
Source: People voice about Lynas Malaysia through Twitter Analysis with R CloudStat
Hope it helps!
