Returning a nicely punctuated string from an object query - django-models

In my Django app, especially on the admin side, I define a few methods on my models:
def get_flora(self):
    return self.flora.all()

def targeted_flora(self):
    return u"%s" % (self.get_flora())
where flora is a ManyToManyField, although sometimes ForeignKey fields are used as well.
I do this to provide a utility 'get' method for the model, and then the second method gives the Django admin a friendlier field name to populate the tabular/list view.
Perhaps a two part question here:
1. Is this a good workflow/method for doing such things, and
2. The resultant string output in admin looks something like:
[<Species: pittosporum>, <Species: pinus radiata>]
Naturally enough, but how to make it look like:
pittosporum & pinus radiata
or, if there were three:
pittosporum, pinus radiata & erharta ercta
Super thanks!

Sounds like you want something like this:
def targeted_flora(self):
    names = [f.name for f in self.get_flora()]  # or however you get a flora's name
    if not names:
        return u''
    if len(names) == 1:
        return names[0]
    else:
        return ', '.join(names[:-1]) + ' & ' + names[-1]

This works btw:
def punctuated_object_list(objects, field):
    if field:
        field_list = [getattr(f, field) for f in objects]
    else:
        field_list = [str(f) for f in objects]
    if len(field_list) > 0:
        if len(field_list) == 1:
            return field_list[0]
        else:
            return ', '.join(field_list[:-1]) + ' & ' + field_list[-1]
    else:
        return u''
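Either helper can then be wired into the Django admin change list by putting the display method on the model and referencing it from list_display. A minimal sketch of that wiring (the Site model name, its flora field, and the admin registration are illustrative assumptions, not taken from the original app):

from django.contrib import admin
from django.db import models

class Site(models.Model):  # hypothetical model name
    flora = models.ManyToManyField('Species')

    def get_flora(self):
        return self.flora.all()

    def targeted_flora(self):
        # assumes Species has a 'name' attribute
        return punctuated_object_list(self.get_flora(), 'name')
    targeted_flora.short_description = 'Targeted flora'  # column header in the admin

class SiteAdmin(admin.ModelAdmin):
    list_display = ('targeted_flora',)

admin.site.register(Site, SiteAdmin)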

Related

Iterate over array of objects. Then access object method if correct one is found. Otherwise create a new object in the array

I start with an empty array and a Hash of key/value pairs.
I would like to iterate over the Hash and compare it against the array. If the value of each k,v pair doesn't already exist in the array, I would like to create an object with that value and then call an object method to append the key to an array inside the object.
This is my code:
class Test
  def initialize(name)
    @name = name
    @values = []
  end

  attr_accessor :name

  def values=(value)
    @values << value
  end

  def add(value)
    @values.push(value)
  end
end
l = []
n = {'server_1': 'cluster_x', 'server_2': 'cluster_y', 'server_3': 'cluster_z', 'server_4': 'cluster_x', 'server_5': 'cluster_y'}

n.each do |key, value|
  l.any? do |a|
    if a.name == value
      a.add(key)
    else
      t = Test.new(value)
      t.add(key)
      l << t
    end
  end
end

p l
I would expect to see this:
[
  #<Test:0x007ff8d10cd3a8 @name=:cluster_x, @values=["server_1, server_4"]>,
  #<Test:0x007ff8d10cd2e0 @name=:cluster_y, @values=["server_2, server_5"]>,
  #<Test:0x007ff8d10cd1f0 @name=:cluster_z, @values=["server_3"]>
]
Instead I just get an empty array.
I think that the condition if a.name == value is not being met and then the add method isn't being called.
@Cyzanfar gave me a clue as to what to look for, and I found the answer here:
https://stackoverflow.com/a/34904864/5006720
n.each do |key, value|
  found = l.detect { |e| e.name == value }
  if found
    found.add(key)
  else
    t = Test.new(value)
    t.add(key)
    l << t
  end
end
@ARL you're almost there! The last thing you need to consider is the case when found actually returns an object, since detect will find a matching one at some point.
n.each do |key, value|
  found = l.detect { |e| e.name == value }
  if found
    found.add(key)
  else
    t = Test.new(value)
    t.add(key)
    l << t
  end
end
You actually only want to add a new instance of Test when found returns nil. This code should yield your desired output:
[
  #<Test:0x007ff8d10cd3a8 @name=:cluster_x, @values=["server_1, server_4"]>,
  #<Test:0x007ff8d10cd2e0 @name=:cluster_y, @values=["server_2, server_5"]>,
  #<Test:0x007ff8d10cd1f0 @name=:cluster_z, @values=["server_3"]>
]
I observe two things in your code:
def values=(value)
  @values << value

def add(value)
  @values.push(value)
The two methods do the same thing, pushing a value, as << is a kind of syntactic sugar for push.
You have also changed the meaning of values=, which is usually reserved for a setter method, equivalent to attr_writer :values.
Just to illustrate that there are many ways to do things in Ruby, I propose the following:
class Test
  def initialize(name, value)
    @name = name
    @values = [value]
  end

  def add(value)
    @values << value
  end
end
h_cluster = {} # intermediate hash whose key is the cluster name
n = {'server_1': 'cluster_x', 'server_2': 'cluster_y', 'server_3': 'cluster_z',
     'server_4': 'cluster_x', 'server_5': 'cluster_y'}

n.each do |server, cluster|
  puts "server=#{server}, cluster=#{cluster}"
  cluster_found = h_cluster[cluster] # does the key exist? => nil or Test instance with servers list
  puts "cluster_found=#{cluster_found.inspect}"
  if cluster_found
  then # add server to existing cluster
    cluster_found.add(server)
  else # create a new cluster
    h_cluster[cluster] = Test.new(cluster, server)
  end
end

p h_cluster.collect { |cluster, servers| servers }
Execution:
$ ruby -w t.rb
server=server_1, cluster=cluster_x
cluster_found=nil
server=server_2, cluster=cluster_y
cluster_found=nil
server=server_3, cluster=cluster_z
cluster_found=nil
server=server_4, cluster=cluster_x
cluster_found=#<Test:0x007fa7a619ae10 @name="cluster_x", @values=[:server_1]>
server=server_5, cluster=cluster_y
cluster_found=#<Test:0x007fa7a619ac58 @name="cluster_y", @values=[:server_2]>
[#<Test:0x007fa7a619ae10 @name="cluster_x", @values=[:server_1, :server_4]>,
 #<Test:0x007fa7a619ac58 @name="cluster_y", @values=[:server_2, :server_5]>,
 #<Test:0x007fa7a619aac8 @name="cluster_z", @values=[:server_3]>]
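In the same spirit of there being many ways to do this in Ruby, the whole grouping can also be written with Enumerable#group_by. This is only a sketch that reuses the two-argument Test class defined just above:

grouped = n.group_by { |_server, cluster| cluster }  # => { 'cluster_x' => [[:server_1, 'cluster_x'], ...], ... }
tests = grouped.map do |cluster, pairs|
  servers = pairs.map(&:first)           # just the server names
  t = Test.new(cluster, servers.shift)   # first server goes in via the constructor
  servers.each { |s| t.add(s) }          # remaining servers are appended
  t
end
p tests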

Scala read only certain parts of file

I'm trying to read an input file in Scala whose structure I know; however, I only need every 9th entry. So far I have managed to read the whole thing using:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val fields = lines.map(line => line.split(","))
The issue is that this leaves me with an array that is huge (we're talking 20 GB of data). Not only have I found myself forced to write some very ugly code in order to convert between RDD[Array[String]] and Array[String], but it has essentially made my code useless.
I've tried different approaches and mixes of .map(), .flatMap() and .reduceByKey(), however nothing actually puts my collected "cells" into the format I need them to be in.
Here's what is supposed to happen: Reading a folder of text files from our server, the code should read each "line" of text in the format:
*---------*
| NASDAQ: |
*---------*
exchange, stock_symbol, date, stock_price_open, stock_price_high, stock_price_low, stock_price_close, stock_volume, stock_price_adj_close
and only keep hold of the stock_symbol, as that is the identifier I'm counting. So far my attempts have been to turn the entire thing into an array and only collect every 9th index from it into a collected_cells var. The issue is, based on my calculations and real-life results, that code would take 335 days to run (no joke).
Here's my current code for reference:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
object SparkNum {
  def main(args: Array[String]) {
    // Do some Scala voodoo
    val sc = new SparkContext(new SparkConf().setAppName("Spark Numerical"))
    // Set input file as per HDFS structure + input args
    val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
    val fields = lines.map(line => line.split(","))
    var collected_cells: Array[String] = new Array[String](0)
    //println("[MESSAGE] Length of CC: " + collected_cells.length)
    val divider: Long = 9
    val array_length = fields.count / divider
    val casted_length = array_length.toInt
    val indexedFields = fields.zipWithIndex
    val indexKey = indexedFields.map { case (k, v) => (v, k) }
    println("[MESSAGE] Number of lines: " + array_length)
    println("[MESSAGE] Casted length of: " + casted_length)
    for (i <- 1 to casted_length) {
      println("[URGENT DEBUG] Processing line " + i + " of " + casted_length)
      var index = 9 * i - 8
      println("[URGENT DEBUG] Index defined to be " + index)
      collected_cells :+ indexKey.lookup(index)
    }
    println("[MESSAGE] collected_cells size: " + collected_cells.length)
    val single_cells = collected_cells.flatMap(collected_cells => collected_cells);
    val counted_cells = single_cells.map(cell => (cell, 1).reduceByKey { case (x, y) => x + y })
    // val result = counted_cells.reduceByKey((a,b) => (a+b))
    // val inmem = counted_cells.persist()
    //
    // // Collect driver into file to be put into user archive
    // inmem.saveAsTextFile("path to server location")
    // ==> Not necessary to save the result as processing time is recorded, not output
  }
}
The bottom part is currently commented out as I tried to debug it, but it acts as pseudo-code for me to know what I need done. I should point out that I am hardly familiar with Scala at all, and hence things like the _ notation confuse the life out of me.
Thanks for your time.
There are some concepts that need clarification in the question:
When we execute this code:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val fields = lines.map(line => line.split(","))
That does not result in a huge array of the size of the data. That expression represents a transformation of the base data. It can be further transformed until we reduce the data to the information set we desire.
In this case, we want the stock_symbol field of a record encoded as CSV:
exchange, stock_symbol, date, stock_price_open, stock_price_high, stock_price_low, stock_price_close, stock_volume, stock_price_adj_close
I'm also going to assume that the data file contains a banner like this:
*---------*
| NASDAQ: |
*---------*
The first thing we're going to do is remove anything that looks like this banner. In fact, I'm going to assume that the first field is the name of a stock exchange and starts with a letter. We will do this before we do any splitting, resulting in:
val lines = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val validLines = lines.filter(line => !line.isEmpty && line.head.isLetter)
val fields = validLines.map(line => line.split(","))
It helps to write the types of the variables, to have peace of mind that we have the data types that we expect. As we progress in our Scala skills that might become less important. Let's rewrite the expression above with types:
val lines: RDD[String] = sc.textFile("hdfs://moonshot-ha-nameservice/" + args(0))
val validLines: RDD[String] = lines.filter(line => !line.isEmpty && line.head.isLetter)
val fields: RDD[Array[String]] = validLines.map(line => line.split(","))
We are interested in the stock_symbol field, which positionally is the element #1 in a 0-based array:
val stockSymbols: RDD[String] = fields.map(record => record(1))
If we want to count the symbols, all that's left is to issue a count:
val totalSymbolCount = stockSymbols.count()
That's not very helpful because we have one entry for every record. Slightly more interesting questions would be:
How many different stock symbols do we have?
val uniqueStockSymbols = stockSymbols.distinct.count()
How many records for each symbol do we have?
val countBySymbol = stockSymbols.map(s => (s,1)).reduceByKey(_+_)
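To peek at the result (a small usage sketch, not part of the original answer), the (symbol, count) pairs can be sorted by count and the top entries printed:

val topSymbols = countBySymbol.sortBy(_._2, ascending = false).take(10)
topSymbols.foreach { case (symbol, count) => println(s"$symbol: $count") }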
In Spark 2.0, CSV support for DataFrames and Datasets is available out of the box.
Given that our data does not have a header row with the field names (as is usual in large datasets), we will need to provide the column names:
val stockDF = sparkSession.read.csv("/tmp/quotes_clean.csv").toDF("exchange", "symbol", "date", "open", "close", "volume", "price")
We can answer our questions very easily now:
val uniqueSymbols = stockDF.select("symbol").distinct().count
val recordsPerSymbol = stockDF.groupBy($"symbol").agg(count($"symbol"))
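For the DataFrame version to compile, a SparkSession and a couple of imports are needed. A minimal sketch of that setup (the app name is made up; the path and column names are the ones used above):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.count

val sparkSession = SparkSession.builder().appName("StockSymbols").getOrCreate()
import sparkSession.implicits._  // enables the $"column" syntax

val stockDF = sparkSession.read.csv("/tmp/quotes_clean.csv")
  .toDF("exchange", "symbol", "date", "open", "close", "volume", "price")

val uniqueSymbols = stockDF.select("symbol").distinct().count
val recordsPerSymbol = stockDF.groupBy($"symbol").agg(count($"symbol"))
recordsPerSymbol.show()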

Class-Array Interaction Ruby

I'm trying to set up a program to help me take care of grading for students in a class. I've set it up to create a Student class and then to read in from a file (something I'm not very familiar with in Ruby) via an array. My programming experience is in Java, so if there are errors that can be explained by that, I apologize. Thank you in advance for your help.
class Student
  def initialize(str_LastName, str_FirstName, arr_Score)
    @str_LastName = str_LastName
    @str_FirstName = str_FirstName
    @arr_Score = arr_Score
    str_Grade = ""
    int_OutOf = 415
  end

  def get_LastName
    str_LastName
  end

  def get_FirstName
    str_FirstName
  end

  def get_Grade
    str_Grade
  end

  def set_TotalScore()
    sum = 0
    arr_Score.each do |item|
      sum += item
    end
    arr_Score[12] = sum
  end

  def set_Grade
    if arr_Score[12]/int_OutOf >= 0.9
      str_Grade = "A"
    elsif arr_Score[12]/int_OutOf >= 0.8
      str_Grade = "B"
    elsif arr_Score[12]/int_OutOf >= 0.7
      str_Grade = "C"
    elsif arr_Score[12]/int_OutOf >= 0.6
      str_Grade = "D"
    else
      str_Grade = "F"
    end
  end
end
def main
  file_name = "Grades"
  arr_students = Array.new(31)
  arr_scores = Array.new(12)
  int_i = 0
  file_io = open(file_name).readlines.each do |line|
    array = line.split(",").map(&:strip)
    student = Student.new(array[0], array[1], array[2..-2]) # the final element in the array is for the final score
    arr_students[int_i] = student
    puts "read #{arr_students[int_i]}"
  end
  file_name = "Graded"
  file_io = open(file_name, "a+")
  arr_students.each do |student|
    set_TotalScore
    set_Grade
    file.io_write(student)
    puts "write #{student}"
  end
end

main if __FILE__ == $0
Here is my run at it. I tried to stay true in general to the original intent of your code while introducing more Rubyish ways of doing things.
class Student
  def initialize(firstname, lastname, *scores)
    @firstname, @lastname, @scores = firstname, lastname, scores
  end

  def total_score
    @scores.map(&:to_i).inject(:+)
  end

  def grade
    raise "TOO HIGH!" if total_score > MAX_SCORE
    case total_score / MAX_SCORE
    when 0.9..1.0;  "A"
    when 0.8...0.9; "B"
    when 0.7...0.8; "C"
    when 0.6...0.7; "D"
    else "F"
    end
  end

  def to_s
    "#{@lastname}, #{@firstname}: #{total_score}, #{grade}"
  end
end

MAX_SCORE = 415.0

DATA.each_line do |line|
  arr = line.split(",").map(&:strip)
  student = Student.new *arr
  puts student
end
__END__
Herb,Goldberg,22,99,44,22,88,88
Mark,Sullivan,77,88,88,44,33
You can read and write to files like this (not tested):
outfile = File.open("Graded", "a+")
File.open("Grades").each_line do |line|
  ...
  outfile.puts student
end
outfile.close
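Putting those pieces together with the Student class above (a sketch, assuming each line of "Grades" has the same firstname,lastname,score,... layout as the DATA sample):

outfile = File.open("Graded", "a+")
File.open("Grades").each_line do |line|
  arr = line.split(",").map(&:strip)
  outfile.puts Student.new(*arr)  # to_s prints "lastname, firstname: total, grade"
end
outfile.close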
We cannot easily reproduce your code because you open a file called "Grades" and we do not have, or know, its content.
You should also add some code to first check whether the file exists before continuing - right now your script exits with an Errno::ENOENT.
I would also suggest putting the logic in main into your class instead - let your class handle everything.
In the part:
if __FILE__ == $PROGRAM_NAME
end
You can then simply initialize your class with a simple call such as:
Foobar.new(ARGV)
You described the "Grades" file but I did not understand what you wrote - it would be easier if you could put up a sample, for example via a pastie or gist, and link it in; and also say which part is not working, which is currently unclear.
The style issues are secondary, I consider your code ok - the other poster here does not.
You should go through codecademy to get your ruby syntax down.
To access your initialized instance variables (@str_LastName (which should be @last_name), etc.) you need to use attr_reader :str_LastName, preferably at the top of the class. That will define your getter (the setter is attr_writer, both is attr_accessor).
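For example, a minimal sketch of what attr_reader and attr_accessor generate (names simplified here, not the original code):

class Student
  attr_reader :last_name, :first_name  # getters only
  attr_accessor :grade                 # getter and setter

  def initialize(last_name, first_name)
    @last_name = last_name
    @first_name = first_name
  end
end

s = Student.new("Gump", "Forrest")
puts s.last_name  # => Gump
s.grade = "A"     # setter generated by attr_accessor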
You can also do a sum on an array like this: [1,4,6,7].inject(:+).
Does Java not allow case statements? You should use one in set_grade. You also don't need to initialize str_Grade. In set_grade, you could do @grade_letter ||= "A", and then calling set_grade will return that value on each call.
I didn't look through your main method. It's ugly though. Ruby methods probably shouldn't be more than 5 lines long.

Custom matchers in rspec

I am writing my first custom matcher in RSpec. I would like to provide a failure message with a breakdown of why the comparison failed; effectively I would like to output the differences between the expected and the actual object, and I just need to do this for two arrays on the object. I have done some research and am trying to use =~ as described here. It mentions it has an informative failure message, but I am struggling to access that failure message. I would just like to return the combined failure message for two separate arrays to give an informative reason for the matcher returning false.
My attempt is as follows
RSpec::Matchers.define :have_same_state_as_measure_table do |expected_measure_table, max_delta = 1e-06|
  match do |actual_measure_table|
    actual_measure_table.equivalence(expected_measure_table, max_delta)
  end

  # Optional method description
  description do
    "checks if measure has same state as expected measure table, within a given level of precision"
  end

  # Optional failure messages
  failure_message do |actual_measure_table|
    mismatch_string = ""
    mismatch_string += (actual_measure_table.columns =~ expected_measure_table.columns || "")
    mismatch_string += (actual_measure_table.names =~ expected_measure_table.names || "")
    "Measure tables mismatch as follows %s" % (mismatch_string.to_s)
  end

  failure_message_when_negated do |actual_measure_table|
    "expected friend not to be in zipcode"
  end
end
My final matcher was as follows:
class CompareMeasureTables
  attr_reader :expected_measure_table, :max_delta, :actual_measure_table

  def initialize(expected_measure_table, max_delta = 1e-06)
    @expected_measure_table = expected_measure_table
    @max_delta = max_delta
  end

  def description
    "Checks if measure has same state as expected measure table, within a given level of precision"
  end

  def matches?(actual_measure_table)
    @actual_measure_table = actual_measure_table
    actual_measure_table.equivalence(expected_measure_table, max_delta, false)
  end

  def failure_message
    @mismatch_description = ""
    if actual_measure_table.columns.sort != expected_measure_table.columns.sort
      @mismatch_description += "\nColumns mismatch \nExpected =" + expected_measure_table.columns.inspect
      @mismatch_description += "\nActual =" + actual_measure_table.columns.inspect
    end
    if (@mismatch_description == "")
      @mismatch_description += "\nData mismatch \nExpected =" + (expected_measure_table.records - actual_measure_table.records).inspect
      @mismatch_description += "\nActual =" + (actual_measure_table.records - expected_measure_table.records).inspect
      @mismatch_description += "\nTolerance set at #{@max_delta}"
    end
    "Measure tables mismatch as follows %s" % (@mismatch_description)
  end
end
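For reference, a plain matcher class like this (responding to matches?, failure_message and description) can be handed to expect through a small helper method. The module and helper names below are just an illustrative sketch, and actual_measure_table / expected_measure_table stand for objects built elsewhere in the spec:

module MeasureTableMatchers
  def have_same_state_as_measure_table(expected, max_delta = 1e-06)
    CompareMeasureTables.new(expected, max_delta)
  end
end

RSpec.configure { |config| config.include MeasureTableMatchers }

RSpec.describe 'measure table comparison' do
  it 'matches the expected table within tolerance' do
    expect(actual_measure_table).to have_same_state_as_measure_table(expected_measure_table)
  end
end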

Query repeated property starts with

Say I have a movie database where you can search by title.
I have a Movie model that looks like the following (simplified)
class Movie(ndb.Model):
    name = ndb.StringProperty(required=True)
    queryName = ndb.ComputedProperty(lambda self: [w.lower() for w in self.name.split()], repeated=True)

    @staticmethod
    def parent_key():
        return ndb.Key(Movie, 'parent')
The queryName is just a lower-case list of the words in Movie.name. The parent_key() is basically just for the query.
If I was searching for the movie Forest Gump, I would want it to show up for the following search terms (and more, these are just examples)
'fo' - 'forest' starts with 'fo'
'gu' - 'gump' starts with 'gu'
'gu fo' - 'forest' starts with 'fo' and 'gump' starts with 'gu'
I can get the first two easily with a query similar to the following
movies = Movie\
    .query(ancestor=Movie.parent_key())\
    .filter(Movie.queryName >= x)\
    .filter(Movie.queryName < x + u'\ufffd')\
    .fetch(10)
where x is 'fo' or 'gu'. Again, this is simply a query that works, not my actual code. That comes later. If I expand a bit on the above query to look for two words, I thought I could do something like the following; however, it doesn't work.
movies = Movie\
    .query(ancestor=Movie.parent_key())\
    .filter(Movie.queryName >= 'fo')\
    .filter(Movie.queryName < 'fo' + u'\ufffd')\
    .filter(Movie.queryName >= 'gu')\
    .filter(Movie.queryName < 'gu' + u'\ufffd')\
    .fetch(10)
Now, this doesn't work because it is looking in queryName to see if it has any item which starts with 'fo' and starts with 'gu'. Since that could never be true for a single item in the list, the query returns nothing.
The question is how do you query for Movies which have a queryName with an item that starts with 'fo' AND an item that starts with 'gu'?
Actual Code:
class MovieSearchHandler(BaseHandler):
    def get(self):
        q = self.request.get('q')
        if q:
            q = q.replace('&amp;', '&').lower()
            filters = self.create_filter(*q.split())
            if filters:
                movies = Movie\
                    .query(ancestor=Movie.parent_key())\
                    .filter(*filters)\
                    .fetch(10)
                return self.write_json([{'id': m.movieId, 'name': m.name} for m in movies])
        return self.write_json([])

    def create_filter(self, *args):
        filters = []
        if args:
            for prefix in args:
                filters.append(Movie.queryName >= prefix)
                filters.append(Movie.queryName < prefix + u'\ufffd')
        return filters
Update:
My current solution is
class MovieSearchHandler(BaseHandler):
    def get(self):
        q = self.request.get('q')
        if q:
            q = q.replace('&amp;', '&').lower().split()
            movieFilter, reducable = self.create_filter(*q)
            if movieFilter:
                movies = Movie\
                    .query(ancestor=Movie.parent_key())\
                    .filter(movieFilter)\
                    .fetch(None if reducable else 10)
                if reducable:
                    movies = self.reduce(movies, q)
                return self.write_json([{'id': m.movieId, 'name': m.name} for m in movies])
        return self.write_json([])

    def create_filter(self, *args):
        if args:
            if len(args) == 1:
                prefix = args[0]
                return ndb.AND(Movie.queryName >= prefix, Movie.queryName < prefix + u'\ufffd'), False
            ands = [ndb.AND(Movie.queryName >= prefix, Movie.queryName < prefix + u'\ufffd')
                    for prefix in args]
            return ndb.OR(*ands), True
        return None, False

    def reduce(self, movies, terms):
        reducedMovies = []
        for m in movies:
            if len(reducedMovies) >= 10:
                return reducedMovies
            if all(any(n.startswith(t) for n in m.queryName) for t in terms):
                reducedMovies.append(m)
        return reducedMovies
Still looking for something better though
Thanks
