Creating a one-to-one Parse User relationship

I'm using Parse.com for my app.
Does anyone know the right way to create a list of friends?
A user must be able to create a relationship with another user.
I tried taking a look at AnyPic but I couldn't follow it; it seems very complicated to me... I know it's possible to create a relationship with PFRelation, but I didn't find much on the web.
Can you help?
Thanks, Rory

If you want both sides to be able to see the friendship, you have two options:
duplicate the relationship, i.e. a PFRelation on each PFUser
use a many-to-many table, i.e. a new class with two PFUser references, and possibly other information
Given that you might want more information about the relationship (e.g. status=requested/accepted/rejected, etc), I would suggest option two.
Here's a similar question on managing friend requests and friend lists using Parse.
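For example, a minimal sketch of option two on iOS (the "Friendship" class name, the column names, and the status values are my own assumptions, not anything Parse prescribes):
PFObject *friendship = [PFObject objectWithClassName:@"Friendship"]; // hypothetical join class
[friendship setObject:[PFUser currentUser] forKey:@"fromUser"];      // the user sending the request
[friendship setObject:otherUser forKey:@"toUser"];                   // the PFUser being befriended
[friendship setObject:@"requested" forKey:@"status"];                // later updated to accepted/rejected
[friendship saveInBackground];
A query constrained on either column then gives one side of the friend list, and the status field covers friend requests without duplicating data on both users.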

Then I had to make a query that pulled all of my users' posts (a timeline) together with their names. I had a problem (the same one I have in doing what you suggested here): reading the pointer's fields from within a query. I solved it like this.
This is the content of the cell:
NSString *user = [[object objectForKey:@"Utente"] valueForKey:@"Nome_Cognome"];
cell.FFNomeLabel.frame = CGRectMake(15, -35, 270, 100);
cell.FFNomeLabel.textAlignment = NSTextAlignmentRight;
cell.FFNomeLabel.text = user;
[cell.BiancaView addSubview:cell.FFNomeLabel];
PFFile *img = [[object objectForKey:@"Utente"] valueForKey:@"foto"];
cell.FFImmagineUtente.file = img;
cell.FFImmagineUtente.frame = CGRectMake(10, 10, 70, 70);
[cell.FFImmagineUtente.layer setMasksToBounds:YES];
[cell.FFImmagineUtente.layer setCornerRadius:35.3f];
cell.FFImmagineUtente.contentMode = UIViewContentModeScaleAspectFill;
[cell.FFImmagineUtente loadInBackground];
What do you think?
Also, how can I save a pointer to a user who is not the current user?
I looked at the Parse documentation, but having little experience it wasn't entirely clear to me :)

Related

CoffeeScript Array Indexing

If I have an array of photos in CoffeeScript
photos = [ly.p1, ly.p2, ly.p3, ly.p4, ly.p5, ly.p6, ly.p7, ly.p8, ly.p9, ly.p10, ly.p11, ly.p12]
for photo, i in photos
  photoMask = new Layer
How can I write my for loop so that the resulting photoMask objects are output as photoMask1, photoMask2, photoMask3 ... photoMask12?
EDIT: Further elaboration
Maybe the best way to explain this is with what I am trying to do in pseudocode:
for photo, i in photos
  photoMask[i] = new Layer
  photoMask[i].addSubLayer(photo)
So ly.p1 would have a corresponding photoMask1. That way, I can access photoMask1 separately and independently.
While I agree with the commenters that this is a bit strange, you could use something like this:
photos = [ly.p1, ly.p2, ly.p3, ly.p4, ly.p5, ly.p6, ly.p7, ly.p8, ly.p9, ly.p10, ly.p11, ly.p12]
masks = {}
for photo, i in photos
  photoMask = new Layer
  masks["photoMask#{i}"] = photoMask
This will create dynamic keynames within the masks object. If you really need them globally (in the browser) you could do the same thing with the window object.
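For example, to get back at a single mask later (building on the addSubLayer call from your pseudocode; note that the keys above are 0-based, so use #{i + 1} in the loop if you want names starting at 1):
firstMask = masks["photoMask0"]   # the mask created for ly.p1
firstMask.addSubLayer(photos[0])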
But without knowing what exactly you're trying to do, I wouldn't recommend any of the above.

Changing a DB from a certain point in time, when the change doesn't fit the already existing data

I have a model that looks like this:
class Report(models.Model):
    updater = models.CharField(max_length=15)
    pub_date = models.DateTimeField(auto_now_add=True)
    identifier = models.CharField(max_length=100)
    # ... and so on ...
There are some more fields but they are irrelevant to the question. Now the site has very simple functions - the users can see older reports and their data, and can edit them or add new ones.
However, the identifier field is actually an integer that identifies a log file being reported. Most of the time each report has one log, but sometimes it has more than one. I made it a CharField because I built the site to replace an older SharePoint 2003 site, where that field was treated as plain text. In the next version I want it to be the way it should be, i.e. like this:
class Report(models.Model):
    updater = models.CharField(max_length=15)
    pub_date = models.DateTimeField(auto_now_add=True)
    # ... and so on ...

class Log(models.Model):
    report = models.ForeignKey(Report)
    identifier = models.IntegerField()
The problem is that, since the field was a CharField on the old site, people used it however they liked. Even when they reported several logs in the same report, they just wrote <logid1>, <logid2>, and sometimes they added free text such as "<logid1> which is related to <logid2>".
So I want to change this, but I don't want to lose all the old data, and I can't fix all those edge cases (the DB contains around 22 thousand reports). I thought about adding this to report:
def disp_id(self):
    if self.pub_date < ...:  # the day I'll do the update
        return self.identifier
    else:
        return ', '.join(str(log.identifier) for log in self.log_set.all())
But then I'm not really getting rid of the old field now am I? I'm just adding a new one and keeping the original null from a certain date.
As far as I know, what I want to do is impossible. I'm only asking because I know that maybe I'm not the first one to deal with this sort of thing and maybe there is a solution that I'm not aware of.
Hope my explanation is clear enough, thanks in advance!
class Report(models.Model):
    updater = models.CharField(max_length=15)
    pub_date = models.DateTimeField(auto_now_add=True)
    identifier = models.CharField(max_length=100, null=True)
    # ... and so on ...
    logs = models.ManyToManyField('Log', null=True)

class Log(models.Model):
    identifier = models.IntegerField()
Make the above model, and then make a script as follows:
ident_list = []
for report in Report.objects.all():
    identifiers = report.identifier.split(',')
    for ident in identifiers:
        ident = int(ident.strip())  # will raise on the free-text edge cases mentioned in the question
        if ident not in ident_list:
            log = Log.objects.create(identifier=ident)
            ident_list.append(ident)
        else:
            log = Log.objects.get(identifier=ident)
        report.logs.add(log)
Check the data before removing the identifier column from the Report table.
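For example, a quick hedged sanity check of the migrated data afterwards (the primary key here is just an example):
report = Report.objects.get(pk=1)  # hypothetical primary key
print(', '.join(str(log.identifier) for log in report.logs.all()))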
Does it solve your purpose now?

Setting array equal to JSON array - Xcode

I'm trying to figure out how to populate a table from a JSON array. So far, I can populate my table cells perfectly fine by using the following code:
self.countries = [[NSArray alloc] initWithObjects:@"Argentina", @"China", @"Russia", nil];
Concerning the JSON, I can successfully retrieve one line of text at a time and display it in a label. My goal is to populate an entire table view from a JSON array. I tried using the following code, but it still won't populate my table. Obviously I'm doing something wrong, but I searched everywhere and still can't figure it out:
NSURL *url = [NSURL URLWithString:@"http://BlahBlahBlah.com/CountryList"];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON)
{
    NSLog(@"%@", [JSON objectForKey:@"COUNTRIES"]);
    self.countries = [JSON objectForKey:@"COUNTRIES"];
}
failure:nil];
[operation start];
I am positive that the data is being retrieved, because the NSLog outputs the text perfectly fine. But when I try setting my array equal to the JSON array, nothing happens. I know the code is probably wrong, but I think I'm on the right track. Your help would be much appreciated.
EDIT:
This is the text in the JSON file I'm using:
{
"COUNTRIES": ["Argentina", "China", "Russia",]
}
-Miles
It seems that you need some basic JSON parsing. If you only target iOS 5.0 and above devices, then you should use NSJSONSerialization. If you need to support earlier iOS versions, then I really recommend the open source JSONKit framework.
Having recommended the above, I myself almost always use the Sensible TableView framework to fetch all data from my web service and automatically display it on a table view. Saves me a ton of manual labor and makes app maintenance a breeze, so it's probably something to consider too. Good luck!
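For completeness, a minimal sketch of the NSJSONSerialization route mentioned above, assuming the response body has already been downloaded into an NSData called data and that the view controller has a tableView outlet:
NSError *error = nil;
NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error];
if (json) {
    self.countries = [json objectForKey:@"COUNTRIES"];
    [self.tableView reloadData]; // the table view only refreshes after a reload
}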

Idiomatic List Wrapper

In Google App Engine, I make lists of referenced properties much like this:
class Referenced(BaseModel):
    name = db.StringProperty()

class Thing(BaseModel):
    foo_keys = db.ListProperty(db.Key)

    def __getattr__(self, attrname):
        if attrname == 'foos':
            return Referenced.get(self.foo_keys)
        else:
            return BaseModel.__getattr__(self, attrname)
This way, someone can have a Thing and say thing.foos and get something legitimate out of it. The problem comes when somebody says thing.foos.append(x). This will not save the added property because the underlying list of keys remains unchanged. So I quickly wrote this solution to make it easy to append keys to a list:
class KeyBackedList(list):
    def __init__(self, key_class, key_list):
        list.__init__(self, key_class.get(key_list))
        self.key_class = key_class
        self.key_list = key_list

    def append(self, value):
        self.key_list.append(value.key())
        list.append(self, value)

class Thing(BaseModel):
    foo_keys = db.ListProperty(db.Key)

    def __getattr__(self, attrname):
        if attrname == 'foos':
            return KeyBackedList(Referenced, self.foo_keys)
        else:
            return BaseModel.__getattr__(self, attrname)
This is great as a proof of concept, in that it works exactly as expected when calling append. However, I would never give this to other people, since they might mutate the list in other ways (thing.foos[1:9] = whatevs or thing.foos.sort()). Sure, I could go define all the __setslice__ methods and whatnot, but that seems to leave me open to obnoxious bugs. Still, that is the best solution I can come up with.
Is there a better way to do what I am trying to do (something in the Python library perhaps)? Or am I going about this the wrong way and trying to make things too smooth?
If you want to modify things like this, you shouldn't be changing __getattr__ on the model; instead, you should write a custom Property class.
As you've observed, though, creating a workable 'ReferenceListProperty' is difficult and involved, and there are many subtle edge cases. I would recommend sticking with the list of keys, and fetching the referenced entities in your code when needed.
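A hedged sketch of that "fetch when needed" approach (BaseModel from the question is swapped for db.Model here just to keep the example self-contained):
from google.appengine.ext import db

class Referenced(db.Model):
    name = db.StringProperty()

class Thing(db.Model):
    foo_keys = db.ListProperty(db.Key)

    def fetch_foos(self):
        # Explicit fetch instead of a magic attribute; callers append to
        # foo_keys (and put() the Thing) to add a reference.
        return Referenced.get(self.foo_keys)
Keeping the fetch explicit avoids the slice/sort mutation edge cases a list wrapper would have to handle.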

Plotting a word-cloud by date for a twitter search result? (using R)

I wish to search Twitter for a word (let's say #google), and then be able to generate a tag cloud of the words used in tweets, but according to dates (for example, having a moving window of an hour that moves by 10 minutes each time, and shows me how different words became more frequent throughout the day).
I would appreciate any help on how to go about doing this regarding: resources for the information, code for the programming (R is the only language I am apt in using) and ideas on visualization. Questions:
How do I get the information?
In R, I found that the twitteR package has the searchTwitter command, but I don't know how big an "n" I can get from it. Also, it doesn't return the date each tweet originated from.
I see here that I could get up to 1500 tweets, but this requires me to do the parsing manually (which leads me to step 2). Also, for my purposes I would need tens of thousands of tweets. Is it even possible to get them retrospectively? (for example, requesting older posts each time through the API URL?) If not, there is the more general question of how to create a personal store of tweets on your home computer (a question which might be better left to another SO thread, although any insights from people here would be very interesting for me to read).
How do I parse the information (in R)? I know that R has functions from the RCurl and twitteR packages that could help, but I don't know which, or how to use them. Any suggestions would be of help.
How do I analyse it? How do I remove all the "not interesting" words? I found that the "tm" package in R has this example:
reuters <- tm_map(reuters, removeWords, stopwords("english"))
Would this do the trick? Should I do something else/more?
Also, I imagine I would like to do that after cutting my dataset according to time, which will require some POSIX-like date functions (I am not exactly sure which would be needed here, or how to use them).
And lastly, there is the question of visualization. How do I create a tag cloud of the words? I found a solution for this here; any other suggestions/recommendations?
I believe I am asking a huge question here but I tried to break it to as many straightforward questions as possible. Any help will be welcomed!
Best,
Tal
Word/Tag cloud in R using "snippets" package
www.wordle.net
Using the openNLP package you could POS-tag the tweets (POS = part of speech) and then extract just the nouns, verbs or adjectives for visualization in a word cloud.
Maybe you can query Twitter and use the current system time as a timestamp, write to a local database, and query again in increments of x secs/mins, etc.
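A hedged sketch of the time-windowing side of that idea, assuming the tweets have already been collected into a data frame tw.df with a POSIXct created column (e.g. from twListToDF(searchTwitter(...))):
# text of all tweets falling in a window of `width` seconds starting at `start`
window.text <- function(tw.df, start, width = 3600) {
  in.window <- tw.df$created >= start & tw.df$created < start + width
  paste(tw.df$text[in.window], collapse = " ")
}
# slide the one-hour window forward in 10-minute steps
starts <- seq(min(tw.df$created), max(tw.df$created), by = 600)
texts <- sapply(starts, function(s) window.text(tw.df, s))
Each element of texts can then be fed into the corpus/word-cloud code in the answers below.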
There is historical data available at http://www.readwriteweb.com/archives/twitter_data_dump_infochimp_puts_1b_connections_up.php and http://www.wired.com/epicenter/2010/04/loc-google-twitter/
As for the plotting piece: I did a word cloud here: http://trends.techcrunch.com/2009/09/25/describe-yourself-in-3-or-4-words/ using the snippets package, my code is in there. I manually pulled out certain words. Check it out and let me know if you have more specific questions.
I note that this is an old question, and there are several solutions available via web search, but here's one answer (via http://blog.ouseful.info/2012/02/15/generating-twitter-wordclouds-in-r-prompted-by-an-open-learning-blogpost/):
require(twitteR)
searchTerm='#dev8d'
#Grab the tweets
rdmTweets <- searchTwitter(searchTerm, n=500)
#Use a handy helper function to put the tweets into a dataframe
tw.df=twListToDF(rdmTweets)
##Note: there are some handy, basic Twitter related functions here:
##https://github.com/matteoredaelli/twitter-r-utils
#For example:
RemoveAtPeople <- function(tweet) {
  gsub("@\\w+", "", tweet)
}
#Then for example, remove @d names
tweets <- as.vector(sapply(tw.df$text, RemoveAtPeople))
##Wordcloud - scripts available from various sources; I used:
#http://rdatamining.wordpress.com/2011/11/09/using-text-mining-to-find-out-what-rdatamining-tweets-are-about/
#Call with eg: tw.c=generateCorpus(tw.df$text)
generateCorpus = function(df, my.stopwords=c()){
  #Install the textmining library
  require(tm)
  #The following is cribbed and seems to do what it says on the can
  tw.corpus = Corpus(VectorSource(df))
  # remove punctuation
  tw.corpus = tm_map(tw.corpus, removePunctuation)
  #normalise case
  tw.corpus = tm_map(tw.corpus, tolower)
  # remove stopwords
  tw.corpus = tm_map(tw.corpus, removeWords, stopwords('english'))
  tw.corpus = tm_map(tw.corpus, removeWords, my.stopwords)
  tw.corpus
}
wordcloud.generate = function(corpus, min.freq=3){
  require(wordcloud)
  doc.m = TermDocumentMatrix(corpus, control = list(minWordLength = 1))
  dm = as.matrix(doc.m)
  # calculate the frequency of words
  v = sort(rowSums(dm), decreasing=TRUE)
  d = data.frame(word=names(v), freq=v)
  #Generate the wordcloud
  wc = wordcloud(d$word, d$freq, min.freq=min.freq)
  wc
}
print(wordcloud.generate(generateCorpus(tweets,'dev8d'),7))
##Generate an image file of the wordcloud
png('test.png', width=600,height=600)
wordcloud.generate(generateCorpus(tweets,'dev8d'),7)
dev.off()
#We could make it even easier if we hide away the tweet grabbing code. eg:
tweets.grabber = function(searchTerm, num=500){
  require(twitteR)
  rdmTweets = searchTwitter(searchTerm, n=num)
  tw.df = twListToDF(rdmTweets)
  as.vector(sapply(tw.df$text, RemoveAtPeople))
}
#Then we could do something like:
tweets=tweets.grabber('ukgc12')
wordcloud.generate(generateCorpus(tweets),3)
I would like to answer your question about making a big word cloud.
What I did:
Use s0.tweet <- searchTwitter(KEYWORD,n=1500) for 7 days or more, such as THIS.
Combine them by this command :
rdmTweets = c(s0.tweet,s1.tweet,s2.tweet,s3.tweet,s4.tweet,s5.tweet,s6.tweet,s7.tweet)
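From there, a hedged sketch of turning the combined list into a cloud, reusing the helper functions from the previous answer (the min.freq value is arbitrary):
tw.df <- twListToDF(rdmTweets)                          # all the collected weeks in one data frame
tweets <- as.vector(sapply(tw.df$text, RemoveAtPeople)) # strip @mentions
wordcloud.generate(generateCorpus(tweets), 5)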
The result:
This Square Cloud consists of about 9000 tweets.
Source: People voice about Lynas Malaysia through Twitter Analysis with R CloudStat
Hope it helps!
