I am trying to create a page for adding new friends in my iOS Swift app. I first tried a query where "id" is not in AllMyFriendsIds:
db.collection("users").whereField("id",notIn: MyFriends).limit(to: limit).getDocuments(completion: { [self](snap, err) in
But with more than ten friends the query crashed, because Firestore's notIn filter accepts at most ten values. Then I tried a "friends" array-does-not-contain userId filter, but that method doesn't exist.
Here is an example of the user's docs.
How can I get all users that are not friends with us, i.e. where friends does not contain userId, or where the docId / id is not in an array with more than 10 entries? Here is the code that does not work:
db.collection("users")
.whereField("friends",notIn: [UserDefaults().string(forKey: "userId")!])
.limit(to: limit)
.getDocuments(completion: {
My goal is to get the users that are not my friends with the most efficient technique possible. I know how to do it by splitting the ids into groups of 10, but that approach costs a lot of reads and writes and is not efficient.
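For context, this is the kind of fallback I want to avoid: fetching a page of users and dropping friends client-side (just a sketch; allMyFriendIds is assumed to be a local Set<String> holding my friends' ids):
// Sketch: page through users and drop friends client-side.
// Assumes allMyFriendIds: Set<String> holds the current user's friend ids.
let db = Firestore.firestore()
db.collection("users")
    .limit(to: limit)
    .getDocuments { snapshot, error in
        guard let documents = snapshot?.documents, error == nil else { return }
        // Keep only users whose document id is not already in the friends set.
        let nonFriends = documents.filter { !allMyFriendIds.contains($0.documentID) }
        // ... display nonFriends, request the next page if needed ...
    }
Reading every user just to discard the friends is exactly the cost problem described above, so I am looking for something better.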
Some of the answers on SO currently do not work with API v2. Currently, I have made this Frankenstein code to extract the followers of a list of 110 users (let's call these the list A users). I then want to organise my data into a dataframe so that I can see a column of list A users and, next to it, the list of followers for each user. Something like this:
List_A_User_1 | List_A_Follower_1
              | List_A_Follower_2
              | List_A_Follower_3
It should take around 110 minutes (since every 15 minutes I can make 15 API calls). I am currently on Tweepy with a Twitter developer account. However, it has been 20 hours, the code was still running, and then the kernel died. How can I get this to do what I want?
This is the code so far:
import time

import pandas as pd
import tweepy

# client = tweepy.Client(bearer_token=...)  # assumed to be authenticated already

# df holds list A, the 110 user ids
twitter_handles = df['id']

new_follower_ids = []
ids = []

for user in twitter_handles:
    current_user_followers = []
    while True:
        try:
            for page in tweepy.Paginator(client.get_users_followers, id=user,
                                         max_results=1000, user_fields='created_at'):
                if page.data:
                    current_user_followers.extend(page.data)
            break  # finished this user, move on to the next one
        except tweepy.TooManyRequests:
            print('Hit Twitter API rate limit.')
            for i in range(3, 0, -1):
                print("Wait for {} mins.".format(i * 5))
                time.sleep(5 * 60)
            # then retry this user from the start
    new_follower_ids.extend(current_user_followers)
    ids.extend([user for _ in current_user_followers])

new_followers_df = pd.DataFrame({
    "IDs": ids,
    "Follower_ID": new_follower_ids})
I am retrieving users' contact groups, and I have to show the groups with pagination.
So I am doing it like this:
URL feedUrl1 = new URL("https://www.google.com/m8/feeds/groups/"+userEmail+
"/full/?xoauth_requestor_id"+userEmail+"&start-index=10&max-results=5");
ContactGroupFeed resultFeed1 = contactService.getFeed(feedUrl1, ContactGroupFeed.class);
The above query shows results starting from index 10, but it returns all the remaining records; it is not limiting the results based on max-results.
Is there anything wrong with it? What other option do I have?
I did some quick testing, and the max-results parameter is working as intended.
I have 6 contact groups in total: Happy 1, Happy 2, Happy 3, Happy 4, Happy 5 and Happy 6.
This is the request I made:
https://www.google.com/m8/feeds/groups/userEmail/full?start-index=1&max-results=5
In the response, I get the parameter <openSearch:itemsPerPage>5</openSearch:itemsPerPage>, and it is only showing Happy 1 - 5 in the result.
I think that if you are using xoauth_requestor_id, the URL should look like this:
https://www.google.com/m8/feeds/groups/userEmail/full?xoauth_requestor_id=userEmail&start-index=10&max-results=5
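Alternatively, you could let the GData Query object build these parameters for you instead of concatenating the URL by hand. A rough sketch (assuming contactService is already authorised; exception handling omitted):
// Sketch: let the GData Query object build start-index / max-results.
// Assumes contactService (com.google.gdata.client.contacts.ContactsService)
// is already authorised.
URL feedUrl = new URL("https://www.google.com/m8/feeds/groups/" + userEmail + "/full");
Query groupQuery = new Query(feedUrl);
groupQuery.setStartIndex(10);  // 1-based index of the first group to return
groupQuery.setMaxResults(5);   // page size
ContactGroupFeed groupFeed = contactService.getFeed(groupQuery, ContactGroupFeed.class);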
I should state up front that I use Parse.com for my app.
Does anyone know what's the right way to create a list of friends?
The user must be able to create a relationship with another user.
I tried to take a look at AnyPic but I could not follow it; it seems very complicated to me... I know it is possible to create a relationship with PFRelation, but I did not find much on the web.
Can you help?
Thanks, Rory
If you want to have both sides be able to see the friendship you have two options:
duplicate the relationship, i.e. a PFRelation on each PFUser
use a many-to-many table, i.e. a new class with two PFUser references, and possibly other information
Given that you might want more information about the relationship (e.g. status=requested/accepted/rejected, etc), I would suggest option two.
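A rough sketch of option two (the "Friendship" class and its "fromUser", "toUser" and "status" fields are only example names, not something Parse provides for you):
// Create a friend request as a row in a many-to-many "Friendship" class.
PFObject *friendship = [PFObject objectWithClassName:@"Friendship"];
friendship[@"fromUser"] = [PFUser currentUser];
friendship[@"toUser"] = otherUser;        // a PFUser fetched elsewhere
friendship[@"status"] = @"requested";     // later updated to accepted / rejected
[friendship saveInBackground];

// Query the accepted friendships of the current user.
PFQuery *query = [PFQuery queryWithClassName:@"Friendship"];
[query whereKey:@"fromUser" equalTo:[PFUser currentUser]];
[query whereKey:@"status" equalTo:@"accepted"];
[query includeKey:@"toUser"];   // return the full user objects with each row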
Here's a similar question on managing friend requests and friend lists using Parse.
Then I had to make a query that pulls all my users' posts (a timeline) and their names... I had a problem (the same one you suggested a solution for here) when reading the pointer's fields in the query...
I solved it like this.
This is the content of the cell
NSString *user = [[object objectForKey:@"Utente"] valueForKey:@"Nome_Cognome"];
cell.FFNomeLabel.frame=CGRectMake(15, -35, 270, 100);
cell.FFNomeLabel.textAlignment = NSTextAlignmentRight;
cell.FFNomeLabel.text = user;
[cell.BiancaView addSubview:cell.FFNomeLabel];
PFFile *img = (PFFile *)[[object objectForKey:@"Utente"] valueForKey:@"foto"];
cell.FFImmagineUtente.file = img;
cell.FFImmagineUtente.frame = CGRectMake(10, 10, 70, 70);
[cell.FFImmagineUtente.layer setMasksToBounds:YES];
[cell.FFImmagineUtente.layer setCornerRadius:35.3f];
cell.FFImmagineUtente.contentMode = UIViewContentModeScaleAspectFill;
[cell.FFImmagineUtente loadInBackground];
What do you think?
Also, how can I save a pointer to a user other than the current user?
I looked at the Parse documentation, but having little experience it was not entirely clear to me :)
I wish to search Twitter for a word (let's say #google), and then be able to generate a tag cloud of the words used in the tweets, but according to dates (for example, a moving window of an hour that advances by 10 minutes each time, and shows me how different words become more frequently used throughout the day).
I would appreciate any help on how to go about doing this regarding: resources for the information, code for the programming (R is the only language I am adept at using) and ideas on visualization. Questions:
How do I get the information?
In R, I found that the twitteR package has the searchTwitter command, but I don't know how big an "n" I can get from it. Also, it doesn't return the date each tweet originated from.
I see here that I could get up to 1500 tweets, but this requires me to do the parsing manually (which leads me to step 2). Also, for my purposes, I would need tens of thousands of tweets. Is it even possible to get them retrospectively (for example, asking for older posts each time through the API URL)? If not, there is the more general question of how to create a personal archive of tweets on your home computer (a question which might be better left to another SO thread, although any insights from people here would be very interesting for me to read).
How do I parse the information (in R)? I know that R has functions that could help, from the RCurl and twitteR packages, but I don't know which ones or how to use them. Any suggestions would be of help.
How do I analyse it? How do I remove all the "not interesting" words? I found that the tm package in R has this example:
reuters <- tm_map(reuters, removeWords, stopwords("english"))
Would this do the trick? Should I do something else/more?
Also, I imagine I would like to do that after cutting my dataset according to time, which will require some POSIX-style date functions (and I am not exactly sure which would be needed here, or how to use them).
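For instance, this is roughly the kind of slicing I have in mind (just a sketch; I am assuming the created column that twListToDF produces holds POSIXct timestamps):
# Sketch: slice a tweets data frame (as from twListToDF) into a moving
# one-hour window that advances by 10 minutes.
window.tweets <- function(tw.df, width=3600, step=600) {
  starts <- seq(from=min(tw.df$created), to=max(tw.df$created) - width, by=step)
  lapply(starts, function(s) subset(tw.df, created >= s & created < s + width)$text)
}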
And lastly, there is the question of visualization. How do I create a tag cloud of the words? I found a solution for this here; any other suggestions/recommendations?
I believe I am asking a huge question here but I tried to break it to as many straightforward questions as possible. Any help will be welcomed!
Best,
Tal
Word/Tag cloud in R using "snippets" package
www.wordle.net
Using the openNLP package you could POS-tag the tweets (POS = part of speech) and then extract just the nouns, verbs or adjectives for visualization in a word cloud.
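A rough sketch of that idea (assuming the openNLPmodels.en model package is installed alongside openNLP and NLP, and that tweets is a character vector of tweet text; the filter uses Penn Treebank noun tags):
require(NLP)
require(openNLP)
# Sketch: POS-tag the collected tweet text and keep only the nouns for the cloud.
s <- as.String(paste(tweets, collapse = " "))
a <- annotate(s, list(Maxent_Sent_Token_Annotator(), Maxent_Word_Token_Annotator()))
a <- annotate(s, Maxent_POS_Tag_Annotator(), a)
words <- subset(a, type == "word")
tags <- sapply(words$features, "[[", "POS")
nouns <- s[words][tags %in% c("NN", "NNS", "NNP", "NNPS")]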
Maybe you can query Twitter and use the current system time as a timestamp, write to a local database, and query again in increments of x secs/mins, etc.
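Something along these lines, for instance (a sketch only; the table and file names are made up, and the twitteR session is assumed to be authorised already):
require(twitteR)
require(RSQLite)
# Sketch: poll Twitter every 10 minutes and append each batch, stamped with
# the retrieval time, to a local SQLite database.
con <- dbConnect(SQLite(), dbname = "tweets.sqlite")
repeat {
  batch <- twListToDF(searchTwitter("#google", n = 1500))
  batch$retrieved_at <- as.character(Sys.time())
  dbWriteTable(con, "tweets", batch, append = TRUE)
  Sys.sleep(10 * 60)   # wait ten minutes before polling again
}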
There is historical data available at http://www.readwriteweb.com/archives/twitter_data_dump_infochimp_puts_1b_connections_up.php and http://www.wired.com/epicenter/2010/04/loc-google-twitter/
As for the plotting piece: I did a word cloud here: http://trends.techcrunch.com/2009/09/25/describe-yourself-in-3-or-4-words/ using the snippets package, my code is in there. I manually pulled out certain words. Check it out and let me know if you have more specific questions.
I note that this is an old question, and there are several solutions available via web search, but here's one answer (via http://blog.ouseful.info/2012/02/15/generating-twitter-wordclouds-in-r-prompted-by-an-open-learning-blogpost/):
require(twitteR)
searchTerm='#dev8d'
#Grab the tweets
rdmTweets <- searchTwitter(searchTerm, n=500)
#Use a handy helper function to put the tweets into a dataframe
tw.df=twListToDF(rdmTweets)
##Note: there are some handy, basic Twitter related functions here:
##https://github.com/matteoredaelli/twitter-r-utils
#For example:
RemoveAtPeople <- function(tweet) {
  gsub("@\\w+", "", tweet)
}
#Then for example, remove @'d names
tweets <- as.vector(sapply(tw.df$text, RemoveAtPeople))
##Wordcloud - scripts available from various sources; I used:
#http://rdatamining.wordpress.com/2011/11/09/using-text-mining-to-find-out-what-rdatamining-tweets-are-about/
#Call with eg: tw.c=generateCorpus(tw.df$text)
generateCorpus = function(df, my.stopwords=c()){
  #Load the text mining library
  require(tm)
  #The following is cribbed and seems to do what it says on the can
  tw.corpus = Corpus(VectorSource(df))
  # remove punctuation
  tw.corpus = tm_map(tw.corpus, removePunctuation)
  #normalise case
  tw.corpus = tm_map(tw.corpus, tolower)
  # remove stopwords
  tw.corpus = tm_map(tw.corpus, removeWords, stopwords('english'))
  tw.corpus = tm_map(tw.corpus, removeWords, my.stopwords)
  tw.corpus
}
wordcloud.generate = function(corpus, min.freq=3){
  require(wordcloud)
  doc.m = TermDocumentMatrix(corpus, control = list(minWordLength = 1))
  dm = as.matrix(doc.m)
  # calculate the frequency of words
  v = sort(rowSums(dm), decreasing=TRUE)
  d = data.frame(word=names(v), freq=v)
  #Generate the wordcloud
  wc = wordcloud(d$word, d$freq, min.freq=min.freq)
  wc
}
print(wordcloud.generate(generateCorpus(tweets,'dev8d'),7))
##Generate an image file of the wordcloud
png('test.png', width=600,height=600)
wordcloud.generate(generateCorpus(tweets,'dev8d'),7)
dev.off()
#We could make it even easier if we hide away the tweet grabbing code. eg:
tweets.grabber = function(searchTerm, num=500){
  require(twitteR)
  rdmTweets = searchTwitter(searchTerm, n=num)
  tw.df = twListToDF(rdmTweets)
  as.vector(sapply(tw.df$text, RemoveAtPeople))
}
#Then we could do something like:
tweets=tweets.grabber('ukgc12')
wordcloud.generate(generateCorpus(tweets),3)
I would like to answer your question about making a big word cloud.
What I did is:
Run s0.tweet <- searchTwitter(KEYWORD, n=1500) each day for 7 days or more, such as THIS.
Combine them with this command:
rdmTweets = c(s0.tweet,s1.tweet,s2.tweet,s3.tweet,s4.tweet,s5.tweet,s6.tweet,s7.tweet)
The result:
This Square Cloud consists of about 9000 tweets.
Source: People voice about Lynas Malaysia through Twitter Analysis with R CloudStat
Hope it helps!