How to use model.predict? - tensorflow.js

I'm importing objects from JSON into tf.tensor2d. Objects with numbers > 50 are labeled "bigger" and numbers < 50 are labeled "smaller".
var irisTesting = [
  {"track": 40, "clan": "smaller"},
  {"track": 2, "clan": "smaller"},
  {"track": 99, "clan": "bigger"},
];
const trainingData = tf.tensor2d(iris.map(item => [
  item.track,
]))
const outputData = tf.tensor2d(iris.map(item => [
  item.clan === "bigger" ? 1 : 0,
  item.clan === "smaller" ? 1 : 0,
]))
const testingData = tf.tensor2d(irisTesting.map(item => [
  item.track,
]))
The predictions are extremely off. The tensor looks like this:
Tensor
    [[0.0224216, 0.0128847],
     [0.0224305, 0.0128918],
     [0.0224216, 0.0128847]]
irisTesting has over 5000 labels. I am expecting the first two arrays to read as "smaller" and the third as "bigger". Instead "bigger" gets the higher percentage every time. Am I reading this right? This is my first time using TensorFlow.js, and I'm not sure I understand how to use the model.predict function.
const model = tf.sequential()
model.add(tf.layers.dense({
  inputShape: [1],
  activation: "sigmoid",
  units: 5,
}))
model.add(tf.layers.dense({
  inputShape: [5],
  activation: "sigmoid",
  units: 2,
}))
model.add(tf.layers.dense({
  activation: "sigmoid",
  units: 2
}))
model.compile({
  loss: "meanSquaredError",
  optimizer: tf.train.adam(.06),
})
// train/fit our network
const startTime = Date.now()
model.fit(trainingData, outputData, {epochs: 100})
  .then((history) => {
    // console.log(history)
    model.predict(testingData).print()
  })

A couple of things. Your data is labeled as 1 and 0, but I don't think your model structure is aligned with that. Additionally, I notice you're using some "classical" activation and loss functions. Might I suggest we tighten things up a little?
Activation updates
You have 3 layers, all of them using sigmoid.
While this is doable, let's move the first two layers to "relu", which will likely improve performance. ReLU is generally considered one of the best activation functions to use for these intermediate layers.
Final layer
You're getting two sigmoid values. Both are predicting how likely the clan is "bigger" or "smaller". Since you're looking for true/false, and your data is labeled 1 and 0, it looks to me like you really want a single output here, with 1 being 100% sure of "bigger" and 0 being a 0% chance of "bigger". That would fit your input labels better, yes?
To do this, you'd keep your final layer as sigmoid, but with only 1 unit! You'll also have to change your compile loss function to binaryCrossentropy.
With this, you should be able to train and get a single output: the probability of "bigger", which was labeled as 1.
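Putting those suggestions together, a minimal sketch of the adjusted model might look like this (untested; it assumes outputData has been reduced to a single 0/1 column where 1 means "bigger"):
// Sketch only: assumes e.g.
// const outputData = tf.tensor2d(iris.map(item => [item.clan === "bigger" ? 1 : 0]))
const model = tf.sequential()
model.add(tf.layers.dense({inputShape: [1], units: 5, activation: "relu"}))
model.add(tf.layers.dense({units: 5, activation: "relu"}))
model.add(tf.layers.dense({units: 1, activation: "sigmoid"})) // single probability of "bigger"
model.compile({
  loss: "binaryCrossentropy",
  optimizer: tf.train.adam(.06),
})
model.fit(trainingData, outputData, {epochs: 100})
  .then(() => model.predict(testingData).print()) // one value per row: P("bigger")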

Related

Scala - Efficient element wise sum of two arrays

I have two arrays which I would like to reduce to one array in which at each index you have the sum of the two elements in the original arrays. For example:
val arr1: Array[Int] = Array(1, 1, 3, 3, 5)
val arr2: Array[Int] = Array(2, 1, 2, 2, 1)
val arr3: Array[Int] = sum(arr1, arr2)
// This should result in:
// arr3 = Array(3, 2, 5, 5, 6)
I've seen this post: Element-wise sum of arrays in Scala, and I currently use this approach (zip/map). However, using this for a big data application I am concerned about its performance. Using this approach one has to traverse the array(s) at least twice. Is there a better approach in terms of efficiency?
The most efficient way might well be to do it lazily.
As with anything collection-oriented, Scala 2.12 and 2.13 are going to be different (this code is Scala 2.13, but 2.12 will be similar... might extend IndexedSeqLike, but I don't know for sure)
import scala.collection.IndexedSeq
import scala.math.Numeric
import scala.math.Numeric.Implicits._ // brings the + operator into scope for T: Numeric

case class SumIndexedSeq[+T: Numeric](seq1: IndexedSeq[T], seq2: IndexedSeq[T]) extends IndexedSeq[T] {
  override val length: Int = seq1.length.min(seq2.length)
  override def apply(i: Int) =
    if (i >= length) throw new IndexOutOfBoundsException
    else seq1(i) + seq2(i)
}
Arrays are implicitly convertible to a subtype of collection.IndexedSeq. This will compute the sum of the corresponding elements on every access (which may be generally desirable as it's possible to use a mutable IndexedSeq).
If you need an Array, you can get one with only a single traversal via
val arr3: Array[Int] = SumIndexedSeq(arr1, arr2).toArray
but SumIndexedSeq can be used anywhere a Seq can be used without a traversal.
As a further optimization, especially if you're sure that the underlying collections/arrays won't mutate, you can add a cache so you don't add the same elements together twice. It can also be generalized, if you so care, to any binary operations on T (in which case the Numeric constraint can be removed).
As Luis noted, for a performance question: experiment and benchmark. It's worth keeping in mind that a cache implementation may well entail boxing every element to put in the cache, so you might need to be accessing the same elements many times in order for the cache to be a win (and a sufficiently large cache may have implications for the stability of a distributed system).
Well, first of all, as with all things related to performance the only answer is to benchmark.
Second, are you sure you need plain mutable, invariant, weird Arrays? Can't you use something like Vector or ArraySeq?
Third, you can just do something like this, or use a while loop, which would amount to the same thing.
val result = ArraySeq.tabulate(math.min(arr1.length, arr2.length)) { i =>
  arr1(i) + arr2(i)
}

Slicing a reference of array

I have the following map:
my %cps_per_level = (
Bronze => [100 , 65 ],
Silver => [200 , 125 ],
Gold => [400 , 250 ],
Platinum => [800 , 500 ],
Diamond => [1200, 750 ],
Master => [2000, 1200],
Grandmaster => [3000, 1750],
);
In my code I want to get both values of the list for a particular entry into 2 variables. My first attempt was to dereference the list for a particular entry, then apply a list slice and assign that to the variables I want, as shown below. Here $level holds one key of the map.
my ($cps_before, $cps_after) = $cps_per_level{$level}[0,1];
But this doesn't work. $cps_before gets the first value of the list, but $cps_after gets nothing. I managed to make it work by writing this in 2 lines:
my $cps_before= $cps_per_level{$level}[0];
my $cps_after = $cps_per_level{$level}[1];
My question is: why didn't the slice work? What am I missing?
That's not a slice. If you had an @array, a slice would look like this:
@array[0, 1, 2]
What you wrote is the equivalent of this:
$array[0, 1, 2]
... which uses , in scalar context and is equivalent to $array[2].
In fact,
$cps_per_level{$level}[0,1]
is shorthand for
$cps_per_level{$level}->[0,1]
which is syntactic sugar for
${ $cps_per_level{$level} }[0, 1]
which just means
${ $cps_per_level{$level} }[1]
What you should do instead is:
@{ $cps_per_level{$level} }[0, 1]
The @ sigil means you're trying to get multiple elements out.
If your perl is new enough (postfix dereference slices arrived in 5.20 and are no longer experimental since 5.24), you can also use the postfix dereference form:
$cps_per_level{$level}->@[0, 1]

Efficient way to convert Scala Array to Unique Sorted List

Can anybody optimize following statement in Scala:
// maybe large
val someArray = Array(9, 1, 6, 2, 1, 9, 4, 5, 1, 6, 5, 0, 6)
// output a sorted list which contains unique element from the array without 0
val newList=(someArray filter (_>0)).toList.distinct.sort((e1, e2) => (e1 > e2))
Since the performance is critical, is there a better way?
Thank you.
This simple line is one of the fastest codes so far:
someArray.toList.filter (_ > 0).sortWith (_ > _).distinct
but the clear winner so far is, according to my measurements, Jed Wesley-Smith's. Maybe if Rex's code is fixed, it will look different.
Typical disclaimers 1 + 2:
I modified the codes to accept an Array and return a List.
Typical benchmark considerations:
This was random data, equally distributed. For 1 million elements, I created an Array of 1 million ints between 0 and 1 million. So with more or fewer zeros, and more or fewer duplicates, it might vary.
It might depend on the machine, etc. I used a single-core CPU, Intel Linux 32-bit, JDK 1.6, Scala 2.9.0.1.
Here is the underlying benchcoat code and the concrete code to produce the graph (gnuplot). Y-axis: time in seconds. X-axis: 100,000 to 1,000,000 elements in the Array.
update:
After finding the problem with Rex's code, his code is as fast as Jed's code, but the last operation is a transformation of his Array into a List (to fulfill my benchmark interface). Using a var result = List[Int]() and result = someArray(i) :: result speeds his code up, so that it is about twice as fast as Jed's code.
Another, maybe interesting, finding: if I rearrange my code in the order of filter/sort/distinct (fsd) => (dsf, dfs, fsd, ...), all 6 orderings don't differ significantly.
I haven't measured, but I'm with Duncan, sort in place then use something like:
util.Sorting.quickSort(array)
array.foldRight(List.empty[Int]) {
  case (a, b) =>
    if (!b.isEmpty && b(0) == a) b
    else a :: b
}
Without benchmarking I can't be sure, but I imagine the following is pretty efficient:
val list = collection.SortedSet(someArray.filter(_>0) :_*).toList
Also try adding .par after someArray in your version. It's not guaranteed to be quicker, but it might be. You should run a benchmark and experiment.
sort is deprecated. Use .sortWith(_ > _) instead.
Boxing primitives is going to give you a 10-30x performance penalty. Therefore if you really are performance limited, you're going to want to work off of raw primitive arrays:
def arrayDistinctInts(someArray: Array[Int]) = {
  java.util.Arrays.sort(someArray)
  var overzero = 0
  var ndiff = 0
  var last = 0
  var i = 0
  while (i < someArray.length) {
    if (someArray(i) <= 0) overzero = i+1
    else if (someArray(i) > last) {
      last = someArray(i)
      ndiff += 1
    }
    i += 1
  }
  val result = new Array[Int](ndiff)
  var j = 0
  i = overzero
  last = 0
  while (i < someArray.length) {
    if (someArray(i) > last) {
      result(j) = someArray(i)
      last = someArray(i)
      j += 1
    }
    i += 1
  }
  result
}
You can get slightly better than this if you're careful (and be warned, I typed this off the top of my head; I might have typoed something, but this is the style to use), but if you find the existing version too slow, this should be at least 5x faster and possibly a lot more.
Edit (in addition to fixing up the previous code so it actually works):
If you insist on ending with a list, then you can build the list as you go. You could do this recursively, but I don't think in this case it's any clearer than the iterative version, so:
def listDistinctInts(someArray: Array[Int]): List[Int] = {
  if (someArray.length == 0 || someArray(someArray.length-1) <= 0) List[Int]()
  else {
    java.util.Arrays.sort(someArray)
    var last = someArray(someArray.length-1)
    var list = last :: Nil
    var i = someArray.length-2
    while (i >= 0) {
      if (someArray(i) < last) {
        last = someArray(i)
        if (last <= 0) return list
        list = last :: list
      }
      i -= 1
    }
    list
  }
}
Also, if you must not destroy the original array by sorting, you are by far best off duplicating the array and destroying the copy (array copies of primitives are really fast).
And keep in mind that there are special-case solutions that are far faster yet depending on the nature of the data. For example, if you know that you have a long array, but the numbers will be in a small range (e.g. -100 to 100), then you can use a bitset to track which ones you've encountered.
For efficiency, depending on your value of large:
val a = someArray.toSet.filter(_>0).toArray
java.util.Arrays.sort(a) // quicksort, mutable data structures bad :-)
res15: Array[Int] = Array(1, 2, 4, 5, 6, 9)
Note that this does the sort using qsort on an unboxed array.
I'm not in a position to measure, but some more suggestions...
Sorting the array in place before converting to a list might well be more efficient, and you might look at removing dups from the sorted list manually, as they will be grouped together. The cost of removing 0's before or after the sort will also depend on their ratio to the other entries.
How about adding everything to a sorted set?
val a = scala.collection.immutable.SortedSet(someArray filter (0 !=): _*)
Of course, you should benchmark the code to check what is faster, and, more importantly, that this is truly a hot spot.

Mongoid: Retrieving objects in the order of the provided array

Suppose:
mentions=["2","1","3"]
unranked = User.where(:nickname.in => mentions).map
The output does not match the ordering in the provided array:
output is random => 3, 1, 2
I want it as per the original array => 2, 1, 3
I had the same problem, I solved it like this:
mentions=["foo","bar","baz"]
ranked = User.where(:nickname.in => mentions).sort do |a, b|
  mentions.index(a.nickname) <=> mentions.index(b.nickname)
end
Not really the most elegant solution, since I'm sorting in the application and not in the database engine, but hey, it works (on small lists).

Weighted random selection from array

I would like to randomly select one element from an array, but each element has a known probability of selection.
All the chances together (within the array) sum to 1.
What algorithm would you suggest as the fastest and most suitable for huge calculations?
Example:
id => chance
array[
0 => 0.8
1 => 0.2
]
For this pseudocode, the algorithm in question should, over many calls, statistically return four elements with id 0 for every one element with id 1.
Compute the discrete cumulative density function (CDF) of your list, or in simple terms the array of cumulative sums of the weights. Then generate a random number in the range between 0 and the sum of all weights (which might be 1 in your case), do a binary search to find this random number in your discrete CDF array, and get the value corresponding to this entry. That is your weighted random number.
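As a rough sketch of that approach (the names buildCdf and pickWeighted are mine, not from any library):
// Build the array of cumulative sums; weights are non-negative and need not sum to 1.
function buildCdf(weights) {
  const cdf = [];
  let sum = 0;
  for (const w of weights) {
    sum += w;
    cdf.push(sum);
  }
  return cdf;
}
// Binary-search the CDF for a random number in [0, total) and return the chosen index.
function pickWeighted(cdf) {
  const total = cdf[cdf.length - 1];
  const r = Math.random() * total;
  let lo = 0, hi = cdf.length - 1;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (cdf[mid] > r) hi = mid; else lo = mid + 1;
  }
  return lo; // index of the selected element
}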
The algorithm is straightforward:
rand_num = rand(0, 1)
for each element in array
    if (rand_num < element.probability)
        select and break
    rand_num = rand_num - element.probability
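For reference, a direct JavaScript translation of that pseudocode (a sketch; items and its value/probability fields are my own names, and the probabilities are assumed to sum to 1):
// Linear scan: subtract each probability until the random number falls inside an element's slot.
function selectByProbability(items) {
  let randNum = Math.random();
  for (const item of items) {
    if (randNum < item.probability) return item.value;
    randNum -= item.probability;
  }
  return items[items.length - 1].value; // guard against floating-point rounding
}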
I have found this article to be the most useful at understanding this problem fully.
This stackoverflow question may also be what you're looking for.
I believe the optimal solution is to use the Alias Method (wikipedia).
It requires O(n) time to initialize, O(1) time to make a selection, and O(n) memory.
Here is the algorithm for generating the result of rolling a weighted n-sided die (from here it is trivial to select an element from a length-n array), as taken from this article.
The author assumes you have functions for rolling a fair die (floor(random() * n)) and flipping a biased coin (random() < p).
Algorithm: Vose's Alias Method
Initialization:
    Create arrays Alias and Prob, each of size n.
    Create two worklists, Small and Large.
    Multiply each probability by n.
    For each scaled probability p_i:
        If p_i < 1, add i to Small.
        Otherwise (p_i ≥ 1), add i to Large.
    While Small and Large are not empty: (Large might be emptied first)
        Remove the first element from Small; call it l.
        Remove the first element from Large; call it g.
        Set Prob[l] = p_l.
        Set Alias[l] = g.
        Set p_g := (p_g + p_l) − 1. (This is a more numerically stable option.)
        If p_g < 1, add g to Small.
        Otherwise (p_g ≥ 1), add g to Large.
    While Large is not empty:
        Remove the first element from Large; call it g.
        Set Prob[g] = 1.
    While Small is not empty: (This is only possible due to numerical instability.)
        Remove the first element from Small; call it l.
        Set Prob[l] = 1.
Generation:
    Generate a fair die roll from an n-sided die; call the side i.
    Flip a biased coin that comes up heads with probability Prob[i].
    If the coin comes up "heads," return i.
    Otherwise, return Alias[i].
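For illustration, a minimal JavaScript sketch of the method as described above (variable names are mine; the input probabilities are assumed to be non-negative and to sum to 1):
// Initialization: build the Prob and Alias tables in O(n).
function buildAliasTable(probs) {
  const n = probs.length;
  const prob = new Array(n);
  const alias = new Array(n);
  const small = [];
  const large = [];
  const scaled = probs.map(p => p * n); // multiply each probability by n
  scaled.forEach((p, i) => (p < 1 ? small : large).push(i));
  while (small.length && large.length) {
    const l = small.pop();
    const g = large.pop();
    prob[l] = scaled[l];
    alias[l] = g;
    scaled[g] = (scaled[g] + scaled[l]) - 1; // numerically stable update
    (scaled[g] < 1 ? small : large).push(g);
  }
  while (large.length) prob[large.pop()] = 1;
  while (small.length) prob[small.pop()] = 1; // only reached through rounding error
  return { prob, alias };
}
// Generation: fair die roll plus biased coin flip, O(1) per sample.
function aliasSample({ prob, alias }) {
  const i = Math.floor(Math.random() * prob.length);
  return Math.random() < prob[i] ? i : alias[i];
}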
Here is a Ruby implementation of the simpler approach based on accumulated probabilities:
def weighted_rand(weights = {})
  raise 'Probabilities must sum up to 1' unless weights.values.inject(&:+) == 1.0
  raise 'Probabilities must not be negative' unless weights.values.all? { |p| p >= 0 }
  # Do more sanity checks depending on the amount of trust in the software component using this method,
  # e.g. don't allow duplicates, don't allow non-numeric values, etc.

  # Ignore elements with probability 0
  weights = weights.reject { |k, v| v == 0.0 } # e.g. => {"a"=>0.4, "b"=>0.4, "c"=>0.2}

  # Accumulate probabilities and map them to a value
  u = 0.0
  ranges = weights.map { |v, p| [u += p, v] } # e.g. => [[0.4, "a"], [0.8, "b"], [1.0, "c"]]

  # Generate a (pseudo-)random floating point number between 0.0 (included) and 1.0 (excluded)
  u = rand # e.g. => 0.4651073966724186

  # Find the first value that has an accumulated probability greater than the random number u
  ranges.find { |p, v| p > u }.last # e.g. => "b"
end
How to use:
weights = {'a' => 0.4, 'b' => 0.4, 'c' => 0.2, 'd' => 0.0}
weighted_rand weights
What to expect roughly:
sample = 1000.times.map { weighted_rand weights }
sample.count('a') # 396
sample.count('b') # 406
sample.count('c') # 198
sample.count('d') # 0
An example in Ruby
# each element is associated with its probability
a = {1 => 0.25, 2 => 0.5, 3 => 0.2, 4 => 0.05}
# at some point, convert to cumulative probability
acc = 0
a.each { |e, w| a[e] = acc += w }
# to select an element, pick a random number between 0 and 1 and find the first
# cumulative probability that's greater than the random number
r = rand
selected = a.find { |e, w| w > r }
p selected[0]
This can be done in O(1) expected time per sample as follows.
Compute the CDF F(i) for each element i to be the sum of probabilities less than or equal to i.
Define the range r(i) of an element i to be the interval [F(i - 1), F(i)].
For each interval [(i - 1)/n, i/n], create a bucket consisting of the list of the elements whose range overlaps the interval. This takes O(n) time in total for the full array as long as you are reasonably careful.
When you randomly sample the array, you simply compute which bucket the random number is in, and compare with each element of the list until you find the interval that contains it.
The cost of a sample is O(the expected length of a randomly chosen list) <= 2.
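A rough JavaScript sketch of this bucket idea (function names are mine; the weights are normalized internally):
// Build the normalized CDF and, for each interval [j/n, (j+1)/n), the list of
// elements whose range overlaps it. Total build time is O(n).
function buildBuckets(weights) {
  const n = weights.length;
  const total = weights.reduce((a, b) => a + b, 0);
  const cdf = [];
  let sum = 0;
  for (const w of weights) { sum += w / total; cdf.push(sum); }
  const buckets = Array.from({ length: n }, () => []);
  for (let i = 0; i < n; i++) {
    const lo = i === 0 ? 0 : cdf[i - 1];
    const hi = cdf[i];
    for (let j = Math.floor(lo * n); j <= Math.min(n - 1, Math.floor(hi * n)); j++) {
      buckets[j].push(i);
    }
  }
  return { cdf, buckets };
}
// Sample: jump straight to the bucket, then scan its short list.
function sampleBuckets({ cdf, buckets }) {
  const r = Math.random();
  const bucket = buckets[Math.floor(r * cdf.length)];
  for (const i of bucket) if (r < cdf[i]) return i;
  return cdf.length - 1; // guard against rounding at the upper edge
}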
This is PHP code I used in production:
/**
 * @return \App\Models\CdnServer
 */
protected function selectWeightedServer(Collection $servers)
{
    if ($servers->count() == 1) {
        return $servers->first();
    }

    $totalWeight = 0;
    foreach ($servers as $server) {
        $totalWeight += $server->getWeight();
    }

    // Select a random server using weighted choice
    $randWeight = mt_rand(1, $totalWeight);
    $accWeight = 0;
    foreach ($servers as $server) {
        $accWeight += $server->getWeight();
        if ($accWeight >= $randWeight) {
            return $server;
        }
    }
}
Ruby solution using the pickup gem:
require 'pickup'
chances = {0=>80, 1=>20}
picker = Pickup.new(chances)
Example:
5.times.collect {
picker.pick(5)
}
gave output:
[[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 1, 1],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 1]]
If the array is small, I would give the array a length of, in this case, five and assign the values as appropriate:
array[
0 => 0
1 => 0
2 => 0
3 => 0
4 => 1
]
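Picking uniformly at random from that expanded array then gives the weighted behaviour, e.g. (a sketch; the name expanded is mine):
// four 0s and one 1 give the 80/20 split from the question
const expanded = [0, 0, 0, 0, 1];
const pick = expanded[Math.floor(Math.random() * expanded.length)];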
"Wheel of Fortune" O(n), use for small arrays only:
function pickRandomWeighted(array, weights) {
  var sum = 0;
  for (var i = 0; i < weights.length; i++) sum += weights[i];
  var pick = Math.random() * sum;
  for (var i = 0; i < weights.length; i++) {
    if (pick < weights[i]) return array[i];
    pick -= weights[i];
  }
}
The trick could be to sample an auxiliary array in which each element is repeated a number of times that reflects its probability.
Given the elements associated with their probability, as a percentage:
h = {1 => 0.5, 2 => 0.3, 3 => 0.05, 4 => 0.05 }
auxiliary_array = h.inject([]){|memo,(k,v)| memo += Array.new((100*v).to_i,k) }
ruby-1.9.3-p194 > auxiliary_array
=> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4]
auxiliary_array.sample
If you want to be as generic as possible, you need to calculate the multiplier based on the maximum number of fractional digits, and use it in place of 100:
m = 10**h.values.collect{|e| e.to_s.split(".").last.size }.max
Another possibility is to associate, with each element of the array, a random number drawn from an exponential distribution with parameter given by the weight for that element. Then pick the element with the lowest such ‘ordering number’. In this case, the probability that a particular element has the lowest ordering number of the array is proportional to the array element's weight.
This is O(n), doesn't involve any reordering or extra storage, and the selection can be done in the course of a single pass through the array. The weights must be greater than zero, but don't have to sum to any particular value.
This has the further advantage that, if you store the ordering number with each array element, you have the option to sort the array by increasing ordering number, to get a random ordering of the array in which elements with higher weights have a higher probability of coming early (I've found this useful when deciding which DNS SRV record to pick, to decide which machine to query).
Repeated random sampling with replacement requires a new pass through the array each time; for random selection without replacement, the array can be sorted in order of increasing ordering number, and k elements can be read out in that order.
See the Wikipedia page about the exponential distribution (in particular the remarks about the distribution of the minima of an ensemble of such variates) for the proof that the above is true, and also for the pointer towards the technique of generating such variates: if T has a uniform random distribution in [0,1), then Z=-log(1-T)/w (where w is the parameter of the distribution; here the weight of the associated element) has an exponential distribution.
That is:
For each element i in the array, calculate z_i = -log(T)/w_i (or z_i = -log(1-T)/w_i), where T is drawn from a uniform distribution in [0,1), and w_i is the weight of the i'th element.
Select the element which has the lowest z_i.
The element i will be selected with probability w_i/(w_1+w_2+...+w_n).
See below for an illustration of this in Python, which takes a single pass through the array of weights, for each of 10000 trials.
import math, random
random.seed()
weights = [10, 20, 50, 20]
nw = len(weights)
results = [0 for i in range(nw)]
n = 10000
while n > 0:  # do n trials
    smallest_i = 0
    smallest_z = -math.log(1-random.random())/weights[0]
    for i in range(1, nw):
        z = -math.log(1-random.random())/weights[i]
        if z < smallest_z:
            smallest_i = i
            smallest_z = z
    results[smallest_i] += 1  # accumulate our choices
    n -= 1
for i in range(nw):
    print("{} -> {}".format(weights[i], results[i]))
Edit (for history): after posting this, I felt sure I couldn't be the first to have thought of it, and another search with this solution in mind shows that this is indeed the case.
In an answer to a similar question, Joe K suggested this algorithm (and also noted that someone else must have thought of it before).
Another answer to that question, meanwhile, pointed to Efraimidis and Spirakis (preprint), which describes a similar method.
I'm pretty sure, looking at it, that the Efraimidis and Spirakis method is in fact the same exponential-distribution algorithm in disguise, and this is corroborated by a passing remark in the Wikipedia page about Reservoir sampling that ‘[e]quivalently, a more numerically stable formulation of this algorithm’ is the exponential-distribution algorithm above. The reference there is to a sequence of lecture notes by Richard Arratia; the relevant property of the exponential distribution is mentioned in Sect. 1.3 (which mentions that something similar to this is a ‘familiar fact’ in some circles), but not its relationship to the Efraimidis and Spirakis algorithm.
I would imagine that numbers greater than or equal to 0.8 but less than 1.0 select the third element.
In other terms:
x is a random number between 0 and 1
if 0.0 <= x < 0.2 : Item 1
if 0.2 <= x < 0.8 : Item 2
if 0.8 <= x < 1.0 : Item 3
I am going to improve on masciugo's answer (https://stackoverflow.com/users/626341/masciugo).
Basically you make one big array where the number of times an element shows up is proportional to its weight.
It has some drawbacks.
The weight might not be an integer. Imagine element 1 has a probability of pi and element 2 has a probability of 1-pi. How do you divide that? Or imagine if there are hundreds of such elements.
The array created can be very big. Imagine if the least common multiple is 1 million; then we would need an array of 1 million elements for the array we want to pick from.
To counter that, this is what you do.
Create such an array, but insert each element only randomly. The probability that an element is inserted is proportional to its weight.
Then select a random element from it as usual.
So if there are 3 elements with various weights, you simply pick an element from an array of 1-3 elements.
Problems may arise if the constructed array is empty. That is, it may just happen that no elements show up in the array because their dice roll differently.
In that case, I propose that the probability an element is inserted is p(inserted) = w_i / w_max.
That way, one element, namely the one that has the highest probability, will always be inserted. The other elements will be inserted with the relative probability.
Say we have 2 objects.
Element 1 shows up 20% of the time.
Element 2 shows up 40% of the time and has the highest probability.
In the array, element 2 will show up all the time. Element 1 will show up half the time.
So element 2 will be called twice as often as element 1. For generality, all other elements will be called in proportion to their weight. Also, the sum of all the probabilities is 1 because the array will always have at least 1 element.
I wrote an implementation in C#:
https://github.com/cdanek/KaimiraWeightedList
O(1) gets (fast!), O(n) recalculates, O(n) memory use.
