How do sites like this store tens of thousands of words "containing c", or like this, "words with d and c", or, going even further, "unscramble" a word like CAUDK and find that the database has duck? I'm curious, from an algorithms/efficiency perspective, how they would accomplish this:
Would a database be used, or would the words simply be stored in memory and quickly traversed? If a database were used (and each word were a record), how would you make these sorts of queries (contains, starts_with, ends_with, and unscramble) with PostgreSQL, for example?
I guess the easiest thing to do would be to store all the words in memory (sorted?) and just traverse the whole million-or-fewer word list to find the matches? But how about the unscramble one?
Basically wondering the efficient way this would be done.
"Containing C" amounts to count(C) > 0. Unscrambling CAUDC amounts to count(C) <= 2 && count(A) <= 1 && count(U) <= 1 && count(D) <= 1. So both queries could be efficiently answered by a database with 26 indices, one for the count of each letter in the alphabet.
Here is a quick and dirty python sqlite3 demo:
from collections import defaultdict, Counter
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()

alphabet = [chr(ord('A') + i) for i in range(26)]
alphabet_set = set(alphabet)

# One column per letter, holding how many times that letter occurs in the word.
columns = ['word TEXT'] + [f'{c}_count TINYINT DEFAULT 0' for c in alphabet]
create_cmd = f'CREATE TABLE abc ({", ".join(columns)})'
cur.execute(create_cmd)
for c in alphabet:
    cur.execute(f'CREATE INDEX {c}_index ON abc ({c}_count)')

def insert(word):
    counts = Counter(word)
    columns = ['word'] + [f'{c}_count' for c in counts.keys()]
    values = [f'"{word}"'] + [f'{n}' for n in counts.values()]
    var_str = f'({", ".join(columns)})'
    val_str = f'({", ".join(values)})'
    insert_cmd = f'INSERT INTO abc {var_str} VALUES {val_str}'
    cur.execute(insert_cmd)

def unscramble(text):
    # A word matches if, for every letter, it uses no more copies than the query has.
    counts = {a: 0 for a in alphabet}
    for c in text:
        counts[c] += 1
    where_clauses = [f'{c}_count <= {n}' for (c, n) in counts.items()]
    select_cmd = f'SELECT word FROM abc WHERE {" AND ".join(where_clauses)}'
    cur.execute(select_cmd)
    return list(sorted([tup[0] for tup in cur.fetchall()]))

print('Building sqlite table...')
with open('/usr/share/dict/words') as f:
    word_set = set(line.strip().upper() for line in f)
for word in word_set:
    if all(c in alphabet_set for c in word):
        insert(word)
print('Table built!')

d = defaultdict(list)
for word in unscramble('CAUDK'):
    d[len(word)].append(word)

print("unscramble('CAUDK'):")
for n in sorted(d):
    print(' '.join(d[n]))
Output:
Building sqlite table...
Table built!
unscramble('CAUDK'):
A C D K U
AC AD AK AU CA CD CU DA DC KC UK
AUK CAD CUD
DUCK
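The same table can also serve the other query shapes from the question. Here is a rough sketch against the demo table above; the starts_with / ends_with cases here simply use LIKE on the word column rather than the per-letter indices:
cur.execute('SELECT word FROM abc WHERE C_count > 0 LIMIT 5')       # contains C (uses the C_count index)
print(cur.fetchall())
cur.execute("SELECT word FROM abc WHERE word LIKE 'DU%' LIMIT 5")   # starts with DU
print(cur.fetchall())
cur.execute("SELECT word FROM abc WHERE word LIKE '%CK' LIMIT 5")   # ends with CK
print(cur.fetchall())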
I don't know for sure what they're doing, but I suggest this algorithm for contains and unscramble (which, I think, can be trivially extended to starts with or ends with):
1. The user submits a set of letters in the form of a string. Say, the user submits bdsfa.
2. The algorithm sorts the string from (1). So the query becomes abdfs.
3. Then, to find all words with those letters in them, the algorithm simply accesses the directory database/a/b/d/f/s/ and lists the words stored there. If it finds that directory to be empty, it goes one level up, to database/a/b/d/f/, and shows the results there.
So, now, the question is: how do you index the database of millions of words as used in step (3)? The database/ directory will have 26 directories inside it, for a to z, each of which will have 25 directories, one for every letter except its parent's. E.g.:
database/a/{b,c,...,z}
database/b/{a,c,...,z}
...
database/z/{a,b,...,y}
This tree structure will be at most 26 levels deep, and each branch will have no more than 26 entries, so browsing this directory structure is scalable.
Words will be stored in the leaves of this tree. So, the word apple will be stored in database/a/e/l/p/leaf_apple. In that place, you will also find other words such as leap. More specifically:
database/
    a/
        e/
            l/
                p/
                    leaf_apple
                    leaf_leap
                    leaf_peal
                    ...
This way, you can efficiently reach the subset of target words in O(log n), where n is the total number of words in your database.
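Not their actual code, of course, but here is a minimal in-memory sketch of the same indexing idea, with a dict keyed by the sorted distinct letters standing in for the directory path:
from collections import defaultdict

# Sketch only: the key made from a word's sorted distinct letters plays the
# role of the directory path database/<l1>/<l2>/.../
index = defaultdict(list)

def key(word):
    return ''.join(sorted(set(word.lower())))

for word in ['apple', 'leap', 'peal', 'duck']:
    index[key(word)].append(word)

print(index[key('apple')])   # ['apple', 'leap', 'peal']

# Unscramble-style lookup: any word whose letter set is a subset of the query's letters.
def lookup(letters):
    q = set(letters.lower())
    return [w for k, words in index.items() if set(k) <= q for w in words]

print(lookup('caudk'))       # ['duck']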
You can further optimise this by adding additional indices. For example, there are too many words containing a, and the website won't display them all (at least not on the first page). Instead, the website may say there are 500,000 words in total containing 'a', and here are 100 examples. To obtain that 500,000 count efficiently, the number of children at every level can be recorded during indexing. E.g.:
database/
    {a,b,...,z}/
        {num_children, ...}/
            {num_children, ...}/
            ...
Here, num_children is just a leaf node, just like leaf_WORD. All leaves are files.
Depending on the load this website has, it may not need to load this database into memory at all. It may simply leave it to the operating system to decide which portion of its file system to cache in memory as a read-time optimisation.
Personally, as a general criticism, I think developers tend to jump to requiring RAM too quickly, even when a simple file-system trick can do the job without any noticeable difference to the end user.
I am working on a little program in R which counts the number of occurrences of words from a list in a data frame.
I therefore import my data frame and my word list as follows.
df <- read.csv("tweets.csv")
wordlist <- read.csv("wordlist.csv")
My idea was to use a "for"-loop which runs through all the words in the wordlist, counts their occurrences in the df data frame and then adds the number to the existing wordlist.
for (id in wordlist)
{
wordlist$frequency <- sum(stri_detect_fixed(df$text, wordlist$word))
}
Clearly this doesn't work. Instead, it assigns the total frequency of ALL words in my wordlist to each of the words in the wordlist data frame, which looks something like:
id word frequency
1 the 1290
2 answer 1290
3 is 1290
4 wrong 1290
I know it has something to do with the loop variable in my for-loop. Any help is appreciated :)
I would clean the tweets df to turn things to lowercase, remove stopwords, punctuation, etc. (clean the tweets first, otherwise you're going to get "Dog" and "dog" as two different words).
x <- c("Heute oft gelesen: Hörmann: «Lieber die Fair-Play-Medaille» als Platz eins t.co/w75t1O3zWQ t.co/fQJ2eUbGLf",
"Lokalsport: Wallbaum versteigert Olympia-Kalender t.co/uH5HnJTwUE",
"Die „politischen Spiele“ sind hiermit eröffnet: t.co/EWSnRmNHlw via #de_sputnik")
wordlist <- c("Olympia", "hiermit", "Die")
I would then use sapply to lowercase each tweet and split it on spaces. Then I'd flatten it with unlist so it's a single vector instead of a list, and unname it so it's a bit easier to read.
library(stringr)  # str_split comes from stringr

wordvec <- unname(unlist(sapply(x, function(z) str_split(tolower(z), " "))))
[1] "heute" "oft" "gelesen:" "hörmann:" "«lieber"
[6] "die" "fair-play-medaille»" "als" "platz" "eins"
[11] "t.co/w75t1o3zwq" "t.co/fqj2eubglf" "lokalsport:" "wallbaum" "versteigert"
[16] "olympia-kalender" "t.co/uh5hnjtwue" "die" "\u0084politischen" "spiele\u0093"
[21] "sind" "hiermit" "eröffnet:" "t.co/ewsnrmnhlw" "via"
[26] "#de_sputnik"
I think this is still pretty messy. I would look up some text cleaning solutions like removing special characters, or using grepl or something to remove the http stuff.
To filter the list to only contain your words, try:
wordvec[wordvec %in% tolower(wordlist)]
[1] "die" "die" "hiermit"
And then you can use table
table(wordvec[wordvec %in% tolower(wordlist)])
die hiermit
2 1
You can do that last part in reverse if you'd like, but I'd focus on cleaning the texts up to remove the special characters first.
Here's how I would do it using sapply. The function checks whether the data contains each three-letter combination and tallies up the count.
library(tidyverse)
library(stringi)
1000 randomly created length-100 letter strings:
data <- replicate(100, sample(letters, size = 1000, replace = TRUE)) %>%
  data.frame() %>%
  unite("string", colnames(.), sep = "", remove = TRUE) %>%
  .$string
head(data)
[1] "uggwaevptdbhhnmvunkcgdssgmulvyxxhnavbxxotwvkvvlycjeftmjufymwzofrhepraqfjlfslynkvbyommaawrvaoscuytfws"
[2] "vftwusfmkzwougqqupeeelcyaefkcxmrqphajcnerfiitttizmpjucohkvsawwiqolkvuofnuarmkriojlnnuvkcreekirfdpsil"
[3] "kbtkrlogalroiulppghcosrpqnryldiuigtsfopsldmcrmnwcwxlhtukvzsujkhqnillzmgwytpneigogvnsxtjgzuuhbjpdvtab"
[4] "cuzqynmbidfwubiuozuhudfqpynnfmkplnyetxxfzvobafmkggiqncykhvmmdrexvxdvtkljppudeiykxsapvpxbbheikydcagey"
[5] "qktsojaevqdegrqunbganigcnvkuxbydepgevcwqqkyekezjddbzqvepodyugwloauxygzgxnwlrjzkyvuihqdfxptwrpsvsdpzf"
[6] "ssfsgxhkankqbrzborfnnvcvqjaykicocxuydzelnuyfljjrhytzgndrktzfglhsuimwjqvvvtvqjsdlnwcbhfdfbsbgdmvfyjef"
Reference set of all three-letter combinations to check the data against:
three_consec_letters = expand.grid(letters, letters, letters) %>%
  unite("consec", colnames(.), sep = "", remove = TRUE) %>%
  .$consec
head(three_consec_letters)
[1] "aaa" "baa" "caa" "daa" "eaa" "faa"
Check whether each element of three_consec_letters appears in the long strings and sum the matches:
counts = sapply(three_consec_letters, function(x) stri_detect_fixed(data, x) %>% sum())
Results
head(counts)
aaa baa caa daa eaa faa
5 6 6 4 0 3
Hope this helps.
I use Python 3.
Okay, I've got a file that looks like this:
id:1
1
34
22
52
id:2
1
23
22
31
id:3
2
12
3
31
id:4
1
21
22
11
How can I find and delete only this part of the file?
id:2
1
23
22
31
I have been trying a lot to do this but can't get it to work.
Is the id used for the decision to delete the sequence, or is the list of values used for the decision?
You can build a dictionary where the id number is the key (converted to int because of the later sorting) and the following lines form the list of strings that is the value for that key. Then you can delete the item with the key 2, traverse the items sorted by key, and output the id:key line plus the formatted list of strings.
Or you can build a list of lists, where the order is preserved. If the sequence of the ids is to be preserved (i.e. not renumbered), you can also remember the id:n in the inner list.
This can be done for a reasonably sized file. If the file is huge, you should copy the source to the destination and skip the unwanted sequence on the fly. The last approach is fairly easy even for a small file.
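For example, here is a minimal sketch of the dictionary variant (assuming the file layout from the question; data.txt and output.txt are placeholder names):
sections = {}
current = None
with open('data.txt') as f:
    for line in f:
        line = line.rstrip('\n')
        if line.startswith('id:'):
            current = int(line[3:])    # the id number becomes the key
            sections[current] = []
        else:
            sections[current].append(line)

del sections[2]                        # drop the unwanted section

with open('output.txt', 'w') as out:
    for key in sorted(sections):
        out.write('id:{}\n'.format(key))
        for value in sections[key]:
            out.write(value + '\n')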
[added after the clarification]
I recommend learning the following approach, which is useful in many such cases. It uses a so-called finite automaton that implements actions bound to transitions from one state to another (see Mealy machine).
The text line is the input element here, and the nodes that represent the context status are numbered. (My experience is that it is not worth giving them names -- keep them just plain numbers.) Only two states are used here, so the status could easily be replaced by a boolean variable. However, when the case becomes more complicated, that leads to the introduction of another boolean variable, and the code becomes more error prone.
The code may look very complicated at first, but it is fairly easy to understand when you know that you can think about each if status == number branch separately. That is the mentioned context that captures the previous processing. Do not try to optimize; leave the code that way. It can actually be decoded by a human later, and you can draw a picture similar to the Mealy machine example. If you do, it is much more understandable.
The wanted functionality is a bit generalized -- a set of ignored sections can be passed as the first argument:
import re

def filterSections(del_set, fname_in, fname_out):
    '''Filtering out the del_set sections from fname_in. Result in fname_out.'''
    # The regular expression was chosen for detecting and parsing the id-line.
    # It can be done differently, but I consider it just fine and efficient.
    rex_id = re.compile(r'^id:(\d+)\s*$')

    # Let's open the input and output file. The files will be closed
    # automatically.
    with open(fname_in) as fin, open(fname_out, 'w') as fout:
        status = 1                       # initial status -- expecting the id line
        for line in fin:
            m = rex_id.match(line)       # get the match object if it is the id-line
            if status == 1:              # skipping the non-id lines
                if m:                    # you can also write "if m is not None:"
                    num_id = int(m.group(1))   # get the numeric value of the id
                    if num_id in del_set:      # if this id should be deleted
                        status = 1             # or pass (to stay in this status)
                    else:
                        fout.write(line)       # copy this id-line
                        status = 2             # to copy the following non-id lines
                # else ignore this line (no code needed to ignore it :)
            elif status == 2:            # copy the non-id lines
                if m:                    # the id-line found
                    num_id = int(m.group(1))   # get the numeric value of the id
                    if num_id in del_set:      # if this id should be deleted
                        status = 1             # or pass (to stay in this status)
                    else:
                        fout.write(line)       # copy this id-line
                        status = 2             # to copy the following non-id lines
                else:
                    fout.write(line)           # copy this non-id line


if __name__ == '__main__':
    filterSections({1, 3}, 'data.txt', 'output.txt')
    # or you can write the older set([1, 3]) for the first argument.
Here the output id-lines were given their original numbers. If you want to renumber the sections, it can be done via a simple modification. Try the code and ask for details.
Beware, finite automata have limited power. They cannot be used to parse the usual programming languages, as they are not able to capture nested paired structures (like parentheses).
P.S. The 7000 lines is actually a tiny file from a computer perspective ;)
Read each line into an array of strings (the index number is the line number - 1). While reading, check whether the line equals "id:2"; if it does, skip lines until you reach a line equal to "id:3". Afterwards, clear the file and write the kept array back to the file, up to the end of the array. This may not be the most efficient way, but it should work.
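A rough sketch of that idea (assuming the exact marker lines "id:2" and "id:3" from the example, and data.txt as a placeholder file name):
with open('data.txt') as f:
    lines = f.read().splitlines()

kept = []
skipping = False
for line in lines:
    if line == 'id:2':        # start of the section to delete
        skipping = True
    elif line == 'id:3':      # next section: stop skipping
        skipping = False
    if not skipping:
        kept.append(line)

with open('data.txt', 'w') as f:
    f.write('\n'.join(kept) + '\n')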
If there aren't any values in between that would interfere, this would work:
import fileinput
...
def deleteIdGroup(number):
    deleted = False
    for line in fileinput.input("testid.txt", inplace=1):
        line = line.strip('\n')
        if line.count("id:" + number):   # > 0
            deleted = True
        elif line.count("id:"):          # > 0
            deleted = False
        if not deleted:
            print(line)
EDIT:
Sorry, this deletes id:2 and also id:20 ... you could modify it so that the first if checks line == "id:" + number instead.
I need to apply the Mann Kendall trend test in R to a big number (about 1 million) of different-sized time series. I've already created a script that takes the time-series (practically a list of numbers) from all the files in a certain directory and then outputs the results to a .txt file.
The problem is that I have about 1 million time series, so creating 1 million files isn't exactly nice. So I thought that putting all the time series into a single .txt file (separated by some symbol, "#" for example) could be more manageable. So I have a file like this:
1
2
4
5
4
#
2
13
34
#
...
I'm wondering, is it possible to extract such series (between two "#") in R and then apply the analysis?
EDIT
Following @acesnap's hints, I'm using this code:
library(Kendall)
a = read.table("to_r.txt")
numData = 1017135
for (i in 1:numData) {
  s1 = subset(a, a$V1 == i)
  m = MannKendall(s1$V2)
  cat(m[[1]], " ", m[[2]], " ", m[[3]], " ", m[[4]], " ", m[[5]], "\n",
      file = "monotonic_trend_checking.txt", append = TRUE)
}
This approach works but the problem is that it is taking ages for computation. Can you suggest a faster approach?
If you were to number the datasets as they went into the larger file, it would make things easier. Then you could use a for loop and subsetting.
setNum data
1 1
1 2
1 4
1 5
1 4
2 2
2 13
2 34
... ...
Then do something like:
answers1 <- c()
numOfDataSets <- 1000000
for (i in 1:numOfDataSets) {
  ss1 <- subset(bigData, bigData$setNum == i)  ## creates subset of each data set
  ans1 <- mannKendallTrendTest(ss1$data)       ## gets answer from test
  answers1 <- c(answers1, ans1)                ## inserts answer into vector
  print(paste(i, " | ", ans1, "", sep = ""))   ## prints which data set is in use
  flush.console()                              ## prints to console now instead of waiting
}
Here is perhaps a more elegant solution:
# Read in your data
x=c('1','2','3','4','5','#','4','5','5','6','#','3','6','23','#')
# Build a list of indices where you want to split by:
ind=c(0,which(x=='#'))
# Use those indices to split the vector into a list
lapply(seq(length(ind)-1),function (y) as.numeric(x[(ind[y]+1):(ind[y+1]-1)]))
Note that for this code to work, you must have a '#' character at the very end of the file.
I have, for example, 4 arrays with some elements (numbers) inserted:
1,4,8,10
1,2,3,4,11,15
2,4,20,21
2,30
I need to find the most common elements in those arrays, and every element should go all the way till the end (see the example below). In this example that would be the combination 4,4,4,2 (or the same one but with "30" on the end; it's the "same") because it contains the smallest number of different elements (only two, 4 and 2/30).
This combination (see below) isn't good, because if I have, for example, "4", it must "go" till it ends (the next array mustn't contain "4" at all). So a combination must go all the way till the end.
1,4,8,10
1,2,3,4,11,15
2,4,20,21
2,30
EDIT2: OR
1,4,8,10
1,2,3,4,11,15
2,4,20,21
2,30
OR anything else is NOT good.
Is there some algorithm to speed this thing up (if I have thousands of arrays with hundreds of elements in each one)?
To make it clear: the solution must contain the lowest number of different elements, and the groups (of the same numbers) must be ordered from the first, larger ones to the last, smallest ones. So in the example above, 4,4,4,2 is better than 4,2,2,2 because in the first case the group of 4's is larger than the group of 2's.
EDIT: To be more specific: the solution must contain the smallest number of different elements, and those elements must be grouped from first to last. So if I have three arrays like
1,2,3
1,4,5
4,5,6
The solution is 1,1,4 or 1,1,5 or 1,1,6, NOT 2,5,5, because the 1's form a larger group (two of them) than the 2 (only one).
Thanks.
EDIT3: I can't be more specific :(
EDIT4: @spintheblack: 1,1,1,2,4 is the correct solution, because a number used the first time (let's say at position 1) can't be used later (unless it's in the SAME group of 1's). I would say that grouping has the "priority". Also, I didn't mention it (sorry about that), but the numbers in the arrays are NOT sorted in any way; I typed them that way in this post because it was easier for me to follow.
Here is the approach you want to take, if arrays is an array that contains each individual array.
1. Start at i = 0.
2. current = arrays[i]
3. Loop i from i+1 to len(arrays)-1:
4. new = current & arrays[i] (set intersection, finds common elements)
5. If there are any elements in new, do step 6, otherwise skip to step 7.
6. current = new, return to step 3 (continue loop).
7. Print or yield an element from current, set current = arrays[i], return to step 3 (continue loop).
Here is a Python implementation:
def mce(arrays):
    count = 1
    current = set(arrays[0])
    for i in range(1, len(arrays)):
        new = current & set(arrays[i])
        if new:
            count += 1
            current = new
        else:
            print " ".join([str(current.pop())] * count),
            count = 1
            current = set(arrays[i])
    print " ".join([str(current.pop())] * count)
>>> mce([[1, 4, 8, 10], [1, 2, 3, 4, 11, 15], [2, 4, 20, 21], [2, 30]])
4 4 4 2
If all are number lists, and are all sorted, then:
1. Convert to an array of bitmaps.
2. Keep ANDing the bitmaps until you hit zero. The position of a 1 bit in the previous (non-zero) value indicates the first element.
3. Restart step 2 from the next element (see the sketch below).
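Not a full implementation, just a rough Python sketch of this bitmap idea, assuming the values are small non-negative integers that can be used directly as bit positions:
def to_bitmap(arr):
    mask = 0
    for v in arr:
        mask |= 1 << v
    return mask

def common_runs(arrays):
    bitmaps = [to_bitmap(a) for a in arrays]
    runs = []                       # list of (common_bitmap, run_length)
    current = bitmaps[0]
    count = 1
    for bm in bitmaps[1:]:
        if current & bm:            # still at least one common element
            current &= bm
            count += 1
        else:                       # hit zero: close the run, restart
            runs.append((current, count))
            current = bm
            count = 1
    runs.append((current, count))
    return runs

# The lowest set bit of each run's bitmap is one valid pick for that run.
runs = common_runs([[1, 4, 8, 10], [1, 2, 3, 4, 11, 15], [2, 4, 20, 21], [2, 30]])
print([((bm & -bm).bit_length() - 1, n) for bm, n in runs])   # [(4, 3), (2, 1)], i.e. 4 4 4 2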
This has now turned into a graph problem with a twist.
The problem is a directed acyclic graph of connections between stops, and the goal is to minimize the number of line switches when riding a train/tram.
ie. this list of sets:
1,4,8,10 <-- stop A
1,2,3,4,11,15 <-- stop B
2,4,20,21 <-- stop C
2,30 <-- stop D, destination
He needs to pick lines that are available at his exit stop, and his arrival stop, so for instance, he can't pick 10 from stop A, because 10 does not go to stop B.
So, this is the set of available lines and the stops they stop on:
A B C D
line 1 -----X-----X-----------------
line 2 -----------X-----X-----X-----
line 3 -----------X-----------------
line 4 -----X-----X-----X-----------
line 8 -----X-----------------------
line 10 -----X-----------------------
line 11 -----------X-----------------
line 15 -----------X-----------------
line 20 -----------------X-----------
line 21 -----------------X-----------
line 30 -----------------------X-----
If we consider that a line under consideration must go between at least 2 consecutive stops, let me highlight the possible choices of lines with equal signs:
A B C D
line 1 -----X=====X-----------------
line 2 -----------X=====X=====X-----
line 3 -----------X-----------------
line 4 -----X=====X=====X-----------
line 8 -----X-----------------------
line 10 -----X-----------------------
line 11 -----------X-----------------
line 15 -----------X-----------------
line 20 -----------------X-----------
line 21 -----------------X-----------
line 30 -----------------------X-----
He then needs to pick a way that transports him from A to D, with the minimal number of line switches.
Since he explained that he wants the longest rides first, the following sequence seems the best solution:
take line 4 from stop A to stop C, then switch to line 2 from C to D
Code example:
stops = [
    [1, 4, 8, 10],
    [1, 2, 3, 4, 11, 15],
    [2, 4, 20, 21],
    [2, 30],
]

def calculate_possible_exit_lines(stops):
    """
    only return lines that are available at both exit
    and arrival stops, discard the rest.
    """
    result = []
    for index in range(0, len(stops) - 1):
        lines = []
        for value in stops[index]:
            if value in stops[index + 1]:
                lines.append(value)
        result.append(lines)
    return result

def all_combinations(lines):
    """
    produce all combinations which travel from one end
    of the journey to the other, across available lines.
    """
    if not lines:
        yield []
    else:
        for line in lines[0]:
            for rest_combination in all_combinations(lines[1:]):
                yield [line] + rest_combination

def reduce(combination):
    """
    reduce a combination by returning the number of
    times each value appears consecutively, ie.
    [1,1,4,4,3] would return [2,2,1] since
    the 1's appear twice, the 4's appear twice, and
    the 3 only appears once.
    """
    result = []
    while combination:
        count = 1
        value = combination[0]
        combination = combination[1:]
        while combination and combination[0] == value:
            combination = combination[1:]
            count += 1
        result.append(count)
    return tuple(result)

def calculate_best_choice(lines):
    """
    find the best choice by reducing each available
    combination down to the number of stops you can
    sit on a single line before having to switch,
    and then picking the one that has the most stops
    first, and then so on.
    """
    available = []
    for combination in all_combinations(lines):
        count_stops = reduce(combination)
        available.append((count_stops, combination))
    available = [k for k in reversed(sorted(available))]
    return available[0][1]

possible_lines = calculate_possible_exit_lines(stops)
print("possible lines: %s" % (str(possible_lines), ))
best_choice = calculate_best_choice(possible_lines)
print("best choice: %s" % (str(best_choice), ))
This code prints:
possible lines: [[1, 4], [2, 4], [2]]
best choice: [4, 4, 2]
Since, as I said, I list lines between stops, the above solution can be read either as the lines you take when leaving each stop, or as the lines you arrive on at the next stop.
So the route is:
Hop onto line 4 at stop A and ride on that to stop B, then to stop C
Hop onto line 2 at stop C and ride on that to stop D
There are probably edge-cases here that the above code doesn't work for.
However, I'm not bothering more with this question. The OP has demonstrated a complete inability to communicate his question in a clear and concise manner, and I fear that any corrections to the above text and/or code to accommodate the latest comments will only provoke more comments, which will lead to yet another version of the question, and so on ad infinitum. The OP has gone to extraordinary lengths to avoid answering direct questions or explaining the problem.
I am assuming that "distinct elements" do not have to actually be distinct; they can repeat in the final solution. That is, if presented with [1], [2], [1], the obvious answer [1, 2, 1] is allowed, but we'd count it as having 3 distinct elements.
If so, then here is a Python solution:
def find_best_run(first_array, *argv):
    # initialize data structures.
    this_array_best_run = {}
    for x in first_array:
        this_array_best_run[x] = (1, (1,), (x,))

    for this_array in argv:
        # find the best runs ending at each value in this_array
        last_array_best_run = this_array_best_run
        this_array_best_run = {}
        for x in this_array:
            for (y, pattern) in last_array_best_run.iteritems():
                (distinct_count, lengths, elements) = pattern
                if x == y:
                    lengths = tuple(lengths[:-1] + (lengths[-1] + 1,))
                else:
                    distinct_count += 1
                    lengths = tuple(lengths + (1,))
                    elements = tuple(elements + (x,))
                if x not in this_array_best_run:
                    this_array_best_run[x] = (distinct_count, lengths, elements)
                else:
                    (prev_count, prev_lengths, prev_elements) = this_array_best_run[x]
                    if distinct_count < prev_count or prev_lengths < lengths:
                        this_array_best_run[x] = (distinct_count, lengths, elements)

    # find the best overall run
    best_count = len(argv) + 10  # Needs to be bigger than any possible answer.
    for (distinct_count, lengths, elements) in this_array_best_run.itervalues():
        if distinct_count < best_count:
            best_count = distinct_count
            best_lengths = lengths
            best_elements = elements
        elif distinct_count == best_count and best_lengths < lengths:
            best_count = distinct_count
            best_lengths = lengths
            best_elements = elements

    # convert it into a more normal representation.
    answer = []
    for (length, element) in zip(best_lengths, best_elements):
        answer.extend([element] * length)
    return answer

# example
print find_best_run(
    [1,4,8,10],
    [1,2,3,4,11,15],
    [2,4,20,21],
    [2,30]) # prints [4, 4, 4, 30]
Here is an explanation. The *_best_run dictionaries have keys which are elements in the current array, and values which are tuples (distinct_count, lengths, elements). We are trying to minimize distinct_count, then maximize lengths (lengths is a tuple, so this will prefer the run whose first group is the longest), and we track elements for the end. At each step I construct all possible runs which combine a run up to the previous array with this element next in the sequence, and keep whichever is best for each ending element. When I get to the end I pick the best possible overall run, then turn it into a conventional representation and return it.
If you have N arrays of length M, this should take O(N*M*M) time to run.
I'm going to take a crack at this based on the comments; please feel free to comment further to clarify.
We have N arrays and we are trying to find the 'most common' value over all arrays when one value is picked from each array. There are several constraints: 1) we want the smallest number of distinct values; 2) the most common is the maximal grouping of similar letters (changing terminology from above for clarity). Thus, 4 t's and 1 p beats 3 x's and 2 y's.
I don't think either problem can be solved greedily - here's a counterexample: [[1,4],[1,2],[1,2],[2],[3,4]]. A greedy algorithm would pick [1,1,1,2,4] (3 distinct numbers), whereas [4,2,2,2,4] uses only two distinct numbers.
This looks like a bipartite matching problem, but I'm still coming up with the formulation..
EDIT: ignore this; it is a different problem, but if anyone can figure it out, I'd be really interested.
EDIT 2: For anyone that's interested, the problem that I misinterpreted can be formulated as an instance of the Hitting Set problem, see http://en.wikipedia.org/wiki/Vertex_cover#Hitting_set_and_set_cover. Basically, the left-hand side of the bipartite graph would be the arrays and the right-hand side would be the numbers, with edges drawn from each array to the numbers it contains. Unfortunately, this is NP-complete, but the greedy solutions described above are essentially the best approximation.