AS3/Arrays: impossible duplication after "splice" - arrays

I have a function that gets questions from an array called quizQuestions, displays the chosen question, splices it from the array, and pushes the question and correct answer to a new array to be used later in the results screen:
// The questions are stored in an array called "quizQuestions"
function makeQuestion() {
    var randNum:Number = Math.round(1 + (quizQuestions.length - 1) * Math.random());
    mc_quiz.question.text = quizQuestions[randNum][0];
    answer = quizQuestions[randNum][1];
    quizQuestions.splice(randNum,1);
    printQuestions.push(new Array(mc_quiz.question.text, answer));
}
It runs fine, but from time to time a question is asked twice. You can continue with the test, but the results screen doesn't show the info; in fact, it only shows the results for the questions answered before the duplication. I have checked visually and with a "duplicated elements finder" and there are no duplicated questions in the array.
Could the splice not be executed from time to time? Can you see any "bug" in the function? Could it happen due to a hardware issue?
Thanks in advance.

Your random number generation is not only mathematically wrong (i.e. it won't pick items uniformly), it will also from time to time generate a number that is beyond the array's bounds. To explain this: 1 + (array.length - 1) * Math.random() will generate a number greater than or equal to 1 (which also means the first item of the array can never be returned, because arrays are 0-based), up to a fraction less than the actual length of the array. If you Math.round() the highest possible results, they round up to the next integer, which is the full length again - and if you access array[array.length], an error is thrown, which is probably responsible for the weird behavior you are seeing.
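You can see both problems with a quick throwaway check (plain JavaScript, since the expression is the same as in AS3):
// Count how often the original formula lands on each index of a 5-element array
const len = 5;
const hits = new Array(len + 1).fill(0);
for (let i = 0; i < 1000000; i++) {
  hits[Math.round(1 + (len - 1) * Math.random())]++;
}
console.log(hits);
// Index 0 is never hit, and index 5 (== len, out of bounds) is hit about 12.5% of the time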
Here's a possible solution:
Math.round() creates random number bias anyway (see @OmerHassan's link), so you're better off using int() or Math.floor(). Also, Math.random() is defined as 0 <= n < 1, so it will never return 1. Therefore, you can simplify your random index generator to int(Math.random()*array.length) => any integer smaller than the length of the array.
splice(), then, returns an array of the items that were removed, so you can pass its first item, instead of creating a new array.
function getRandomItem( arr:Array ):* {
    var rand:int = Math.random()*arr.length;
    return arr.splice( rand, 1 )[0];
}
function makeQuestion():void {
    var q:Array = getRandomItem( quizQuestions );
    mc_quiz.question.text = q[0];
    answer = q[1];
    printQuestions[printQuestions.length] = q;
}
FYI: It won't matter much in this context, but you get much faster performance by replacing array.push(item) with array[array.length]=item;.

Related

How to change the count of a for loop during the loop

I'm trying to change the number of items in an array that a for loop is running over, during the loop itself, with the objective that this changes the number of iterations. In a very simplified version, the code would look something like this:
var loopArray: [Int] = []
loopArray.append(1)
loopArray.append(2)
loopArray.append(3)
loopArray.append(4)
loopArray.append(5)
for x in 0..<Int(loopArray.count) {
    print(x)
    if x == 4 {
        loopArray.append(6)
    }
}
When running this code, 5 numbers are printed, and while the number 6 is added to the Array, the loopArray.count does not seem to update. How can I make the .count dynamic?
This is a very simplified example, in the project I'm working on, appending numbers to the array depends on conditions that may or may not be met.
I have looked for examples online, but have not been able to find any similar cases. Any help or guidance is much appreciated.
sfung3 gives the correct way to do what you want, but I think there needs to be a bit of explanation as to why your solution doesn't work.
The line
for x in 0..<Int(loopArray.count)
only evaluates loopArray.count once, the first time it is hit. This is because of the way for works. Conceptually a for loop iterates through the elements of a sequence. The syntax is something like
for x in s
where
s is a sequence, give it type S
x is a let constant (you can also make it a var but that is not relevant to the current discussion) with type S.Element
So the bit after the in is a sequence - any sequence. There's nothing special about the use of ..< here, it's just a convenient way to construct a sequence of consecutive integers. In fact, it constructs a Range (btw, you don't need the cast to Int, Array.count is already an Int).
The range is only constructed when you first hit the loop and it's effectively a constant because Range is a value type.
If you don't want to use Joakim's answer, you could create your own reference type (class) that conforms to Sequence and whose elements are Int and update the upper bound each time through the loop, but that seems like a lot of work to avoid a while loop.
you can use a while loop instead of a for loop.
var i = 0
while i < loopArray.count {
    print(i)
    if i == 4 {
        loopArray.append(6)
    }
    i += 1
}
which prints
0 1 2 3 4 5

Kotlin array <init>ialization [duplicate]

This question already has answers here:
Order of init calls in Kotlin Array initialization
(2 answers)
Closed 3 years ago.
I will be reading data from a byte stream. Are the indices given by Kotlin to an array generation function (as described in https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-array/-init-.html) guaranteed to be called in order, ascending from zero?
The Array.kt file is under builtins so I'm at a loss as to where to find the actual code.
Take a look at the source code for Array:
public inline constructor(size: Int, init: (Int) -> T)
The init parameter is a function that takes an Int (the index of a specific item) and returns a value of type T, the element type of the array.
As the other answers have shown by examples, these are called in order, because it's the "natural" way of doing it. If you don't get what I mean, think about the implementation alternatives:
for (i in 0 until size) {
    this.addToArray(init(i));
}
Alternatively:
for (i in (size - 1) downTo 0) {
    this.addToArray(init(i));
}
Compared to:
val indices = mutableListOf<Int>()
while (indices.size != size) {
    val i = random.nextInt(size);
    if (i !in indices) {
        indices.add(i);
        this.addToArray(init(i));
    }
}
While we can't see the source code for the constructor, the examples shown in the other answers alone show they cannot be using a random approach. With the code from the first answer, mathematically speaking, the odds of a random order happening to print 0-49 in order are extremely low.
Additionally, this is backed up by an answer here. The resulting compiled Java code creates a for-loop going from 0 to size. For the JVM, assuming they don't change the implementation, you can assume it'll go from 0 to size. But whether it goes from 0 to size or from size to 0, you can always reverse it if you don't like the order.
If you need to be 100% sure it goes from 0 to size, or if the implementation changes, you can do something like:
var a = (0 until 10).step(1).toList().toTypedArray()
Which, in this case, yields an array with the numbers 0-9.
If you want objects, or otherwise alter the object, you can add a .map {} before the list creation. That being said, this is an overkill alternative as long as the init function works as you'd expect.
And you can always confirm by decompiling the code using IntelliJ, Android Studio, some other IDE, or a decompiler of your choice. But regardless of the implementation, they'll always end up in order - so you don't need to worry about that. The only thing they could possibly change is the order the init function is called in, but it'll still end up in the same order in the resulting array.
That does seem to be the case.
Code:
fun main() {
val x = Array(50) {println(it)}
}

Finding 9 elements with the same 'color' with weird classifier

Assume I have 8 types of elements, each with a different "color", and a large "bank" array containing random elements of all types, where the type is uniformly distributed.
Now, my goal is to find the fastest way to get a set of 9 elements of the same "color", but only using the following classifier:
Given an array A of n elements, return "True" if A contains a subgroup of at least 9 elements of the same "color", False otherwise.
All I can do is organize an array by adding/removing elements and send it to the classifier; the only knowledge I have of the "color" is that it is uniformly distributed.
As of now, I'm taking an array of size 65 from that bank, meaning that at least one of the colors will appear 9 times (since 8*8 = 64), and checking element by element whether removing it from the array changes the classifier answer. If the answer changes from 'True' to 'False' then the element in question is part of a unique 'nonuplet' and should be inserted back into the array. If the answer stays 'True' then the element is simply discarded.
Is there a better way? Maybe removing 2 at a time? Some sort of binary search?
Thanks.
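For concreteness, that test-remove pass might be sketched like this (JavaScript; it reuses the get/classify helpers defined in the implementation further down, and it calls the classifier exactly 65 times):
// Sketch of the baseline: test-remove each element once, keeping it only
// when its removal flips the classifier from true to false.
function baselineReduce(get, classify) {
  const arr = get(65); // 65 elements force at least one color to occur 9 times
  for (let i = arr.length - 1; i >= 0; i--) {
    const removed = arr.splice(i, 1)[0]; // tentatively remove element i
    if (!classify(arr)) arr.splice(i, 0, removed); // removal broke the property: put it back
  }
  return arr; // exactly the 9 same-colored elements remain
}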
The first step of taking 65 elements and then removing k and testing is much the same as starting with 65-k elements and then testing and adding on k if you don't have a subgroup of at least 9 of the same color. I would use e.g. Monte Carlo tests to see what the probability is of having at least 9 of the same color for each possible set of size 65-k. If this probability is p the expected number of elements in the first stage is 65-k + (1-p)k. It should be practical to make Monte Carlo tests for all possible k and see which gives you the least expected number of elements after the first test.
If you have an array of elements and test-remove each element in turn, from left to right, you should end up with 9 elements after the first pass, because each test-remove is guaranteed to work until you have only one set of 9 elements of the same color in the array, and then each test-remove works except when it removes one of the 9.
As you move along the array a test-remove is most likely to fail when there is only one set of 9 elements of the same color left, and the elements that have survived tests are of this color. Suppose you decide that you will test-remove the next element in sequence together with k other elements randomly chosen to its right. Because you know the number of elements of the set of 9 you have yet to find, you can work out the probability that this test-remove will succeed. If it fails you test just the next element. You can work out the probabilities of success and failure and find the k that removes the maximum expected number of elements in the first test, and see if this value of k is better value than just removing one element at a time.
The algorithm you currently use will call the classifier 65 times, once for every element you remove from the original array (and maybe insert back into it). So the aim is to lower that number.
Theoretical minimum number of classifier-calls in worst case
A theoretical minimum for the worst case is 35 times:
When starting with an array of 65, it might be that there is indeed only one set of 9 elements that are of the same color, all other elements being of a different color and occurring 8 times each. In that case those 9 elements can be positioned in C(9 of 65) ways (ignoring order), i.e. 65!/(56!9!) = 31 966 749 880.
Since a call of the classifier has 2 possible outcomes, it divides the number of configurations that were still considered possible into two: one group is still possible, the other no longer is. The most efficient use of the classifier will aim to make that split such that 50% of the configurations is eliminated, whichever the return value (false/true).
In order to get that number of possible configurations down to 1 (the final solution) you would need to perform 35 divisions by 2. Or otherwise put, 2^35 is the first power of 2 that is greater than that 31 billion.
So even if we would know how to use the classifier in the most optimal way, we would still need 35 calls in the worst case.
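As a quick sanity check of those figures (a throwaway snippet, nothing beyond standard JavaScript with BigInt):
// C(65, 9) via the multiplicative formula, kept exact with BigInt
let combos = 1n;
for (let i = 0n; i < 9n; i++) combos = combos * (65n - i) / (i + 1n);
console.log(combos); // 31966749880n
// Number of yes/no answers needed to single out one configuration
console.log(Math.ceil(Math.log2(Number(combos)))); // 35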
To be honest, I don't know an algorithm that would be that efficient, but with some randomness and reducing the array size with some bigger steps, you can get a better number on average than 65.
Idea for a guessing strategy
There is approximately 50% chance that a random array of 42 objects will have 9 of the same color. You can verify this by running a large number of simulations.
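That 50% figure is easy to check with a short simulation (a throwaway sketch, independent of the implementation below):
// Estimate the probability that `size` elements, colored uniformly from 8 colors,
// contain at least 9 of one color.
function hasNineOfAColor(size) {
  const counts = new Array(8).fill(0);
  for (let i = 0; i < size; i++) {
    if (++counts[Math.floor(Math.random() * 8)] === 9) return true;
  }
  return false;
}
let hits = 0;
const trials = 100000;
for (let i = 0; i < trials; i++) if (hasNineOfAColor(42)) hits++;
console.log(hits / trials); // estimate of the probability for 42 elements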
So why not start with a call to the classifier with only 42 of the 65 objects you have initially picked? With a bit of luck you know after one call that your 9 objects are hidden in an array of 42 instead of 65, which would really trim the number of calls you will make in total. And if not, you just "lost" one call.
In that latter case you could try again with a new selection of 42 objects from that array. You would now certainly include the 23 other objects, as those objects would individually have a tiny bit more likelihood to be among the 9 searched elements. Maybe the classifier returns false again. That's bad luck, but you would continue using different sets of 42 until you get success. On average you will have success after 2.7 calls.
After success on an array of 42, you obviously discard the remaining 23 other objects. Now take 38 as the number of objects you will try with, following the same principle as above.
For driving the selection of the objects in each iteration, add credit to the objects that were not part of the previous selection, and sort all objects by descending credit. The added credit should be higher when the number of non-selected objects is small. Where the credit is the same, use a random order.
An additional precaution can be taken by logging all subsets for which the classifier returned false. If ever you're about to call the classifier again, first check whether it is a subset of one of those earlier made calls. If that is true, then it makes no sense to call the classifier, as it is certain it will return false. This will on average save one call to the classifier.
Implementation
Here is a JavaScript implementation of that idea. This implementation also includes the "bank" (get method) and the classifier. It effectively hides the color from the algorithm.
This code will do the job for 1000 random test cases and then output the minimum, average and maximum number of times it needed to call the classifier before finding the answer.
I used the BigInt primitive, so this will only run on JS engines that support BigInt.
// This is an implementation of the bank & classifier (not part of the solution)
const {get, classify} = (function () {
  const colors = new WeakMap; // Private to this closure
  return {
    get(length) {
      return Array.from({length}, () => {
        const obj = {};
        colors.set(obj, Math.floor(Math.random()*8)); // assign random color (0..7)
        return obj; // Only return the (empty) object: its color is secret
      });
    },
    classify(arr) {
      const counts = Array(8).fill(9); // counter for 8 different colors
      return !arr.every(obj => --counts[colors.get(obj)]);
    }
  }
})();
function shuffle(a) {
  var j, x, i;
  for (i = a.length - 1; i > 0; i--) {
    j = Math.floor(Math.random() * (i + 1));
    x = a[i];
    a[i] = a[j];
    a[j] = x;
  }
  return a;
}
// Solution:
function randomReduce(get, classify) {
  // Get 65 objects from the bank, so we are sure there are 9 with the same type
  const arr = get(65);
  // Track some extra information per object:
  const objData = new Map(arr.map((obj, i) => [obj, { rank: 0, bit: 1n << BigInt(i) } ]));
  // Keep track of all subsets that the classifier returned false for
  const failures = [];
  let numClassifierCalls = 0;
  function myClassify(arr) {
    const pattern = arr.reduce((pattern, obj) => pattern | objData.get(obj).bit, 0n);
    if (failures.some(failed => (failed & pattern) === pattern)) {
      // This would be a redundant call, as we already called the classifier on a superset
      // of this array, and it returned false; so don't make the call. Just return false
      return false;
    }
    numClassifierCalls++;
    const result = classify(arr);
    if (!result) failures.push(pattern);
    return result;
  }
  for (let length of [42,38,35,32,29,27,25,23,21,19,18,17,16,15,14,13,12,11,10,9]) {
    while (!myClassify(arr.slice(0, length))) {
      // Give the omitted entries an increased likelihood of being selected in the future
      let addRank = 1/(arr.length - length - 1); // Could be Infinity: then that single omitted entry MUST be selected next time
      for (let i = length; i < arr.length; i++) {
        objData.get(arr[i]).rank += addRank;
      }
      // Randomise the array, but ensure that the most promising objects are put first
      shuffle(arr).sort((a,b) => objData.get(b).rank - objData.get(a).rank);
    }
    arr.length = length; // Discard the non-selected objects
  }
  // At this point arr.length is 9, and the classifier returned true. So we have the 9 elements.
  return numClassifierCalls;
}
let total = 0, min = Infinity, max = 0;
let numTests = 1000;
for (let i = 0; i < numTests; i++) {
  const numClassifierCalls = randomReduce(get, classify);
  total += numClassifierCalls;
  if (numClassifierCalls < min) min = numClassifierCalls;
  if (numClassifierCalls > max) max = numClassifierCalls;
}
console.log("Number of classifier calls:");
console.log("- least: ", min);
console.log("- most: ", max);
console.log("- average: ", total/numTests);
Performance statistics
The above algorithm needs an average of 35 classifier-calls to solve the problem. In the best case its calls to the classifier always return true, and then 20 calls will have been made. The downside is that the worst case is really bad and can go to 170 or more. But such cases are rare.
A graph of the probability distribution (not reproduced here) shows that in 99% of the cases the algorithm will find the solution in at most 50 classifier calls.

What is the best way to make a function resistant to numbers that exceed the maximum number supported?

So, I know that some programming languages have maximum numbers (integers). I am trying to find a way to combat this problem. This is more of an algorithm-based question and a storage question. So basically, I am trying to store information to a database. But I am trying to find a way to test for the worst possible case (reaching a number so large it's no longer supported) and find a solution to it. So, assume I have a function called functionInTime that looks something like this:
functionInTime(){
    currenttime = getCurrentTime();
    foo(); // Random void function
    endtime = getCurrentTime();
    return (endtime - currenttime);
}
This function should essentially just check how long it takes to run the function foo. Additionally, assume there is a SUM function that looks similar to:
SUM(arrayOfNumbers){
    var S = 0;
    for(int i = 0; i < arrayOfNumbers.length; i++){
        S = S + arrayOfNumbers[i];
    }
    return S;
}
Also assume there is a function storeToDB, which looks something like this:
storeToDB(time){ // time is an INTEGER
    dbInstance = new DBINSTANCE();
    dbInstance.connectTo("wherever");
    timesRun = dbInstance.get("times_run"); // return INTEGER
    listOfTimes = dbInstance.get("times_list"); // return ARRAY
    dbInstance.set("times_run", timesRun+1);
    dbInstance.set("times_list", listOfTimes.addToArray(time));
    dbInstance.close();
}
However, this is where problems start to stand out to me in terms of efficiency and lead me to believe that this is a terrible algorithm. Lastly assume I have a function called computeAverage, that is simply:
computeAverage(){
    dbInstance = new DBINSTANCE();
    dbInstance.connectTo("wherever");
    timesRun = dbInstance.get("times_run"); // return INTEGER
    listOfTimes = dbInstance.get("times_list"); // return ARRAY
    return SUM(listOfTimes)/timesRun;
}
Since the end goal is to report the computed average each time, I think the above method should work. However, what I am trying to test is what happens if the value returned for the variable timesRun is so large that the language cannot support such a number. Parsing an array that long would take forever, but what if the size of an array is a number the language doesn't support? Removing or resetting the variables timesRun or listOfTimes would distort the data and throw off any other measurements that use this.
::: {"times_run": N} // N = number not supported by language
functionTime();
printThisOut("The overage average is "+computeAverage()+" ms per run...");
::: ERROR (times_run is not a supported number)
How can I infinitely add data in a way that is both efficient and is resistant to a maximum number overflow?
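One common way to sidestep both the ever-growing array and the overflowing counter is to keep a running mean instead of a full list. A minimal sketch of that idea (JavaScript, with made-up names; the BigInt counter is an assumption so the count itself cannot overflow a fixed-size integer):
// Hypothetical sketch: maintain a running average instead of storing every time.
let count = 0n; // BigInt, so the number of runs cannot overflow
let mean = 0;   // the mean stays a small floating-point number
function recordTime(ms) {
  count += 1n;
  // Incremental mean: mean += (x - mean) / n, so no large sum is ever stored
  mean += (ms - mean) / Number(count);
}
function computeAverage() {
  return mean;
}
recordTime(12.5);
recordTime(8.0);
console.log(computeAverage()); // 10.25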

Saving randomly generated numbers in C

I'm trying to find the best way to save a set of randomly generated numbers so they can be recalled later in another function. Basically I have one function that generates the numbers and makes sure they have no repeats, but I need another function that will search the list of numbers to see if the user has picked one of those numbers. Whenever I call the random number function within my search function, I just get a list of different random numbers.
Basically I'm just trying to figure out the best way to either save this array of numbers so it doesn't give me new numbers the next time I call the function, or the best way to pass it on to the next function.
Here is the random number generator function, in case you wanted to see what I'm trying to pass on to the next function.
int i, j;
/* generates the set of random numbers */
for (i = 0; i < MAX; i++) {
    random = rand() % 101;
    /* checks to make sure there are no repeats */
    for (j = 0; j < i; j++) {
        if (lucky[j] == random) {
            random = rand() % 101;
        }
    }
    lucky[i] = random;
    printf("%3d", random);
}
Create a new array first:
int *lucky = malloc(amount_of_numbers_you_want * sizeof(int));
Then fill it with random numbers as usual, and then return it. For example:
int* generate_random_numbers(int amount)
{
    int *lucky = malloc(amount * sizeof(int));
    /* Fill lucky[] with 'amount' unique random numbers. */
    return lucky;
}
Then, whenever you call that function, save the pointer it returns somewhere. Do not forget to free() that pointer when you no longer need it, or else you will leak the memory it occupies.
Since this looks like homework, I'm not giving you full code, but rather a general methodology of how you deal with this kind of problem by using dynamically allocated arrays.
So, first of all, that does not ensure the random numbers are always different:
if your list has [0.1, 0.24, 0.555] and the newly generated number is 0.24, it is regenerated, but the regenerated value can be 0.1, which is also already stored in lucky[] (and is thus a repeat, which you don't want). It is not very probable, but it is possible.
The way you want it is to use a while() loop, and only add the new random number once it has been checked against the whole list.
Finally, generally the best way to save a list of random numbers is to set (and remember) the seed of the RNG. Given a seed "a", the sequence of numbers generated from that seed is always the same. In that case your function can even keep re-checking for non-repeating numbers, because the result will always be the same.
@Nikos has given the correct answer.
You can allocate memory for the array in the caller function as well and pass it to the random-number-generating function. Whatever you do, make sure lucky isn't a locally defined array. Moreover, your logic to generate numbers without repetition seems to be wrong. As @Nikos pointed out, this seems to be a school assignment, so I will just point out the obvious mistakes:
a) You don't correctly handle the case where a regenerated number (the second call to rand() when the first one matches the already existing list) must itself be checked against the older set of generated values.
b) rand() gives you a random number between 0 and RAND_MAX, and (RAND_MAX + 1) is not a multiple of 101. That means the distribution of the generated numbers isn't uniform (modulo bias).
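To see that bias concretely, here is a quick counting sketch (written in JavaScript rather than C, purely to illustrate the arithmetic; the RAND_MAX value of 32767 is an assumption, since the real value depends on the C library):
// Count how often each result 0..100 appears when reducing a uniform
// integer in [0, RAND_MAX] with % 101 (assuming RAND_MAX = 32767).
const RAND_MAX = 32767;
const counts = new Array(101).fill(0);
for (let r = 0; r <= RAND_MAX; r++) {
  counts[r % 101]++;
}
console.log(Math.min(...counts), Math.max(...counts)); // 324 325: results 0..43 are slightly more likely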
