Random cut forest score distribution - amazon-sagemaker

I am using SageMaker Random Cut Forest. It seems that setting the hyperparameters num_samples_per_tree and num_trees too high (so that num_samples_per_tree * num_trees exceeds the number of records) leads to a very strange score distribution.
Example 1 - Number of records in training set = approx. 200,000, num_samples_per_tree=2048, num_trees=256
Example 2 - Number of records in training set = approx. 200,000, num_samples_per_tree=1028, num_trees=100
Samples should be created using reservoir sampling, so I do not see any reason why this happens. Is there a relationship between the number of records and these hyperparameters?
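For reference, per-tree reservoir sampling would look roughly like this minimal Python sketch of Algorithm R (this is illustrative only, not SageMaker's actual implementation); note that 256 trees * 2048 samples = 524,288 draws, which already exceeds the ~200,000 available records:

import random

def reservoir_sample(stream, k, rng=None):
    """Algorithm R: keep a uniform random sample of k items from a stream."""
    rng = rng or random.Random(0)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            # keep the new item with probability k / (i + 1)
            j = rng.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

# One tree's sample: 2048 points drawn from ~200,000 records.
# With 256 such trees the samples would have to overlap heavily.
tree_sample = reservoir_sample(range(200_000), k=2048)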

Related

Is there a way to get random data samples from continuous data batches?

We have a data stream that continuously dumps data into our data lake. Is there a good solution, with minimal running cost, to get a 10% random sample of the data?
I'm currently using the code snippet below, but the total sample will outgrow 10% as new batches arrive. I've also tried sampling 10 batches of 100 records each with a 0.1 mean, but it resulted in ~32% sampling.
-- flag roughly 10% of rows: a seeded uniform(0, 1) draw compared against 0.10
select id,
       (uniform(0::float, 1::float, random(1)) < .10)::boolean as sampling
from temp_hh_mstr;
Before this, I considered sampling via Snowflake's TABLESAMPLE, subtracting the IDs already in the sample from the total count of the table. But that requires recalculating every time a new batch arrives, which will increase the cost.
Some additional references I've been considering:
Wilson Score Interval With Continuity Correction
Binomial Confidence Interval
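One way to keep the overall rate near 10% as batches keep arriving is to make the inclusion decision a deterministic function of the record id rather than a fresh random draw on each run. A minimal Python sketch of that idea (hash-based Bernoulli sampling, shown outside Snowflake purely for illustration; it is not tied to the table above):

import hashlib

def in_sample(record_id: str, fraction: float = 0.10) -> bool:
    """Deterministically keep ~`fraction` of ids; the decision never changes
    for a given id, so the sample stays near 10% as new batches arrive."""
    digest = hashlib.md5(record_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:4], "big") / 2**32   # map hash to [0, 1)
    return bucket < fraction

batch = ["id-101", "id-102", "id-103"]
sampled = [r for r in batch if in_sample(r)]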

Solr Random Ordering Internals

I am using random sorting. For random sorting I provide a seed at run time, which is a random value between 1001 and 999999.
The results I am getting are random but not evenly distributed. Say there are 5 results; then over 100 runs each item should come to the top approximately 20 times. But that is not happening: one item comes to the top approximately 80 times. How should I use random ordering so that the order is fairly distributed?
Also, how does random sorting actually work? What role does the seed play? I mean, Solr generates an order using this seed, so how does it actually do that? What algorithm does it use? How can I change the random seed smartly?
Does it depend on the number of documents?
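I cannot speak to Solr's RandomSortField internals, but as a toy Python illustration of seeded ordering in general: if the sort key is a hash of (seed, document id), every distinct seed yields a different deterministic permutation, and over many runs with fresh seeds each document should reach the top about equally often; reusing the same effective seed keeps the same document on top.

import hashlib
import random
from collections import Counter

def seeded_order(doc_ids, seed):
    """Toy seeded ordering: sort by a hash of (seed, doc id)."""
    return sorted(doc_ids, key=lambda d: hashlib.md5(f"{seed}:{d}".encode()).hexdigest())

docs = ["a", "b", "c", "d", "e"]
tops = Counter(seeded_order(docs, random.randint(1001, 999999))[0] for _ in range(100))
print(tops)  # with fresh seeds, each doc should land on top roughly 20 times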

Monte Carlo Tree Search for card games like Belot and Bridge, and so on

I've been trying to apply MCTS to card games. Basically, I need a formula, or a modification of the UCB formula, that works best for selecting which node to proceed with.
The problem is that these card games are not win/loss games; each node has a score distribution, like 158:102 for example. There are 2 teams, so it is basically a 2-player game. The games I'm testing are constant-sum games (the number of tricks, or some score derived from the tricks taken, and so on).
Let's say the maximum sum of team A's and team B's scores at each leaf is 260. I search for the best move from the root, and the first one I try gives me an average of 250 after 10 tries. I have 3 more possible moves that have never been tested. Because 250 is so close to the maximum score, the regret of testing another move is very high. What is mathematically proven to be the optimal formula for choosing a move when you have:
Xm - average score for move m
Nm - number of tries for move m
MAX - maximum score that can be made
MIN - minimum score that can be made
Obviously, the more you try the same move, the more you want to try the other moves, but the closer you are to the maximum score, the less you want to try the others. What is the best mathematical way to choose a move based on these factors Xm, Nm, MAX, and MIN?
Your problem is clearly an exploration problem, and the issue is that with the Upper Confidence Bound (UCB) the amount of exploration cannot be tuned directly. This can be solved by adding an exploration constant.
The Upper Confidence Bound (UCB) is calculated as follows:
UCB(s, a) = V(s, a) + sqrt( 2 * ln n(s) / n(s, a) )
with V being the value function (the expected score) which you are trying to optimize, s the state you are in (the cards in the hands), and a the action (playing a card, for example). n(s) is the number of times state s has been visited in the Monte Carlo simulations, and n(s, a) the same for the combination of s and action a.
The left part, V(s, a), exploits knowledge of the previously obtained scores, and the right part adds a bonus that encourages exploration. However, there is no way to increase or decrease this exploration bonus, which is what the Upper Confidence Bounds for Trees (UCT) variant adds:
UCT(s, a) = V(s, a) + 2 * Cp * sqrt( 2 * ln n(s) / n(s, a) )
Here Cp > 0 is the exploration constant, which can be used to tune the exploration. It was shown that
Cp = 1 / sqrt(2)
satisfies Hoeffding's inequality if the rewards (scores) are between 0 and 1 (in [0, 1]).
Silver & Veness propose Cp = Rhi - Rlo, with Rhi being the highest value returned while running with Cp = 0 and Rlo the lowest value observed during the rollouts (i.e., when actions are chosen at random because no value estimates are available yet).
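A minimal Python sketch of the UCT selection above, adapted to your Xm/Nm/MAX/MIN setup by normalizing scores into [0, 1] so the Hoeffding-based constant still applies (the function and variable names here are my own placeholders, not from any library); Cp can then be tuned, e.g. set to Rhi - Rlo as Silver & Veness suggest:

import math

def uct_select(stats, n_parent, MIN, MAX, Cp=1.0 / math.sqrt(2)):
    """Pick the move maximizing UCT on scores normalized from [MIN, MAX] to [0, 1].

    stats maps move -> (total_score, visits); untested moves are expanded first.
    """
    best_move, best_value = None, float("-inf")
    for move, (total, visits) in stats.items():
        if visits == 0:
            return move                          # always try an untested move first
        Xm = total / visits                      # average score for move m
        exploit = (Xm - MIN) / (MAX - MIN)       # normalized exploitation term
        explore = 2 * Cp * math.sqrt(2 * math.log(n_parent) / visits)
        value = exploit + explore
        if value > best_value:
            best_move, best_value = move, value
    return best_move

# The move averaging 250 out of 260 after 10 tries vs. three untested moves:
stats = {"m1": (2500, 10), "m2": (0, 0), "m3": (0, 0), "m4": (0, 0)}
print(uct_select(stats, n_parent=10, MIN=0, MAX=260))   # -> "m2" (untested first)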
References:
Browne, C., Powley, E. J., Whitehouse, D., Lucas, S. M., Cowling, P. I., Rohlfshagen, P., Tavener, S., Perez, D., Samothrakis, S., & Colton, S. (2012). A Survey of Monte Carlo Tree Search Methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1), 1–43.
Silver, D., & Veness, J. (2010). Monte-Carlo planning in large POMDPs. Advances in Neural Information Processing Systems, 1–9.

House pricing using neural network

I wrote a multilayer perceptron implementation (in Python) that can classify the Iris dataset. It was trained with the backpropagation algorithm and uses sigmoid activation functions on the hidden and output layers.
Now I want to change it so that it can approximate house prices.
(I have a dataset of ~300 properties with prices and input parameters such as number of rooms, location, etc.)
Currently the output of my perceptron is in the range [0, 1]. But as far as I understand, if I want to get the resulting house price from the output neuron, I need to change that activation function somehow, right?
Can somebody help me?
I'm new to neural networks.
Thanks in advance.
Assuming, for instance, that houses are priced between $1 and $1,000,000, you can simply map the 0..1 range to the final price range, both for training and for testing. Just note that 300 properties is a fairly small data set.
To be precise, if a house is $500k, then the target training output becomes 0.5. You basically divide by your maximum possible home value to get the training target. When you get the output value, you multiply by the maximum home value to get the predicted price.
So, view the output of the neural network as a percentage of the total cost.
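A minimal sketch of that mapping in Python (the $1,000,000 maximum is just the assumed upper bound from the example above):

import numpy as np

max_price = 1_000_000.0                        # assumed maximum possible home value

prices = np.array([500_000.0, 250_000.0, 120_000.0])
targets = prices / max_price                   # training targets in [0, 1], e.g. $500k -> 0.5

network_output = np.array([0.48, 0.26, 0.11])  # sigmoid outputs after training
predicted_prices = network_output * max_price  # back to dollars
print(predicted_prices)                        # [480000. 260000. 110000.]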

How many is a "large" data set?

Assume infinite storage, so that size/volume in physical units (gigabytes, terabytes) does not matter and only the number of elements and their labels counts. Statistically, a pattern should already emerge at 30 subsets, but would you agree that fewer than 1,000 subsets is too little to test with, and that at least 10,000 distinct subsets/elements/entries/entities counts as "a large data set"? Or should it be larger?
Thanks
I'm not sure I understand your question, but it sounds like you are asking how many elements of a data set you need to sample in order to ensure a certain degree of accuracy (30 is a magic number from the Central Limit Theorem that comes into play frequently).
If that is the case, the sample size you need depends on the confidence level and confidence interval. If you want a 95% confidence level and a 5% confidence interval (i.e. you want to be 95% confident that the proportion you determine from your sample is within 5% of the proportion in the full data set), you end up needing a sample size of no more than 385 elements. The greater the confidence level and the smaller the confidence interval that you want to generate, the larger the sample size you need.
Here is a nice discussion on the mathematics of determining sample size
and a handy sample size calculator if you just want to run the numbers.
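For example, the 385 figure comes from the standard sample-size formula n = z^2 * p(1 - p) / e^2 with the worst-case proportion p = 0.5; a quick Python check:

import math

def sample_size(z=1.96, margin=0.05, p=0.5):
    """Sample size needed to estimate a proportion within +/- margin."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())  # 385 for a 95% confidence level and a 5% margin of error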
