Subset Sum TI Basic Programming - loops

I'm trying to program my TI-83 to do a subset-sum search. Given a list of length N, I want to find all sublists of a given length L that sum to a given value V.
This is a little different from the regular subset-sum problem because I am only searching for subsets of one given length, not all lengths, and recursion isn't the obvious first choice because I can't have the program call itself.
I can easily accomplish the task with nested loops, but that becomes cumbersome for values of L greater than 5. I'm trying for a dynamic solution, but am not getting anywhere.
Really, at this point, I am just trying to get the list references correct, so that's what I'm looking at. Let's go with an example:
L1={p,q,r,s,t,u}
so
N=6
let's look for all subsets of length 3 to keep it relatively short, so L = 3 (6C3 = 20 total outputs).
Ideally the list references that would be searched are:
{1,2,3}
{1,2,4}
{1,2,5}
{1,2,6}
{1,3,4}
{1,3,5}
{1,3,6}
{1,4,5}
{1,4,6}
{1,5,6}
{2,3,4}
{2,3,5}
{2,3,6}
{2,4,5}
{2,4,6}
{2,5,6}
{3,4,5}
{3,4,6}
{3,5,6}
{4,5,6}
Obviously accomplished by:
For(A,1,N-2)
For(B,A+1,N-1)
For(C,B+1,N)
Disp {A,B,C}
End
End
End
I initially sort the data in descending order, which lets me apply criteria that cut the search short, but with FOR loops that trips things up in a few places when I increment the values of A, B and C inside the loops.
I am also looking for better dynamic solutions. I've done some research on the web, but I can't seem to adapt what is out there to my particular situation.
Any help would be appreciated. I'm trying to keep this brief enough that it doesn't turn into a novel while still explaining what I'm after. I can provide more details as needed.

For optimisation, you simply want to skip those sub-trees of the search where you already know they'll exceed the value V. Recursion is the way to go but, since you've already ruled that out, you're best off setting an upper limit on the allowed depth.
I'd go for something like this (for a depth of 3):
N is the total number of array elements.
L is the desired length (3).
V is the desired sum.
Y[] is the array.
Z is the running total.
Z = 0
IF Z <= V
  FOR A,1,N-L+1
    Z = Z + Y[A]
    IF Z <= V
      FOR B,A+1,N-L+2
        Z = Z + Y[B]
        IF Z <= V
          FOR C,B+1,N-L+3
            Z = Z + Y[C]
            IF Z = V
              DISPLAY {A,B,C}
            END
            Z = Z - Y[C]
          END
        END
        Z = Z - Y[B]
      END
    END
    Z = Z - Y[A]
  END
END
Now that's pretty convoluted, but it basically checks at every stage whether you've already exceeded the desired value and refuses to descend into lower sub-trees as an efficiency measure. It also keeps a running total for the current level so that it doesn't have to redo a large number of additions when checking at lower levels; that's the adding and subtracting of the array values against Z.
It's going to get even more complicated when you modify it to handle more depth: using the variables D through K gives you 11 levels (more if you're willing to move N and L down to W and X, or if TI-BASIC allowed more than one character in a variable name).
The only other non-recursive way I can think of doing it is to use an array of index values to emulate the recursion with iteration, and that will look only slightly less hairy (although the code should be less nested).
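To make the index-array idea concrete, here is a minimal sketch in Python rather than TI-BASIC (single-letter variables aside, the logic ports to a TI list). It assumes 1 <= L <= len(values) and that values is sorted descending and non-negative, which is what makes the running-total prune valid:

def subset_sums(values, L, V):
    # idx[d] is the loop variable at depth d (like A, B, C, ...);
    # prefix[d] is the running total Z up to that depth. Because values
    # is sorted descending and non-negative, a prefix sum above V can
    # never come back down to V, so that whole sub-tree is skipped.
    N = len(values)
    results = []
    idx, prefix = [0], [values[0]]
    while idx:
        d = len(idx) - 1
        if prefix[d] <= V and d + 1 < L and idx[d] + 1 <= N - L + d + 1:
            # descend: open the next inner loop
            idx.append(idx[d] + 1)
            prefix.append(prefix[d] + values[idx[-1]])
            continue
        if d + 1 == L and prefix[d] == V:
            results.append([i + 1 for i in idx])   # 1-based, like {A,B,C}
        # advance this loop variable, closing exhausted loops as we go
        while idx and idx[-1] == N - L + len(idx) - 1:
            idx.pop()
            prefix.pop()
        if idx:
            idx[-1] += 1
            prefix[-1] = (prefix[-2] if len(prefix) > 1 else 0) + values[idx[-1]]
    return results

print(subset_sums([9, 7, 4, 2, 1], 3, 12))   # [[1, 4, 5], [2, 3, 5]]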

Related

Find possible solutions for a matrix with known row/column sums and maximum cell values

I am trying to find solutions to a matrix where I know the row and column sums and the maximum value a cell can have. I want to find possible solutions that are within the constraints. I've already tried various things, like constructing an array of all cell values and picking from each cell in sequence, but whatever I try I always run into the problem where I run out of values for a cell.
I also tried a recursive algorithm, but with that I only managed to get the first result, or it failed to find any solution at all. I think I have to do this with a backtracking algorithm? Not sure...
Any help or pointers would be appreciated.
The row sums A, B, C and column sums X, Y, Z, as well as the maximum value for each ?, are known. All values are positive integers.
   | C1 | C2 | C3 |
--------------------
R1 | ?  | ?  | ?  | A
--------------------
R2 | ?  | ?  | ?  | B
--------------------
R3 | ?  | ?  | ?  | C
--------------------
   | X  | Y  | Z  |
If you have heard of linear programming (LP) and its 'cousins' (ILP, MILP), it could be a good approach for solving your problem with great efficiency.
A linear program consists of a set of variables (your matrix unknowns), constraints (maximum values, sums of rows and columns), and an objective function (here, none) to minimize or maximize.
Let's call x[i][j] the values you are looking for.
With the following data:
NxM the dimensions of your matrix
max_val[i][j] the maximum value for the variable x[i][j]
row_val[i] the sum of the values on the row i
col_val[j] the sum of the values on the column j
Then a possible linear program that could solve your problem is:
// declare variables
int x[N][M] // or possibly float x[N][M]
// declare constraints
for all i in 1 .. N, j in 1 .. M, x[i][j] <= max_val[i][j]
for all i in 1 .. N, sum[j in 1 .. M](x[i][j]) == row_val[i]
for all j in 1 .. M, sum[i in 1 .. N](x[i][j]) == col_val[j]
// here the objective function is useless, but you still will need one
// for instance, let's minimize the sum of all variables (which is constant, but as I said, the objective function does not have to be useful)
minimize sum[i in 1 .. N](sum[j in 1 .. M](x[i][j]))
// you could also be more explicit about the uselessness of the objective function
// minimize 0
Solvers such as Gurobi or CPLEX (but there are many more of them; see here for instance) can solve this kind of problem incredibly fast, especially if your solutions do not need to be integer but can be float (that makes the problem much, much easier). This approach also has the advantage of being not only fast to execute, but fast and simple to code. The solvers have APIs in several common programming languages to ease their use.
For example, you can reasonably expect to solve this kind of problem in less than a minute, with hundreds of thousands of variables in the integer case, millions in the real variables case.
Edit:
In response to the comment, here is a piece of code in OPL (the language CPLEX and other LP solvers use) that would solve your problem. We consider a 3x3 case.
// declare your problem input
int row_val[1..3] = [7, 11, 8];
int col_val[1..3] = [14, 6, 6];
int max_val[1..3][1..3] = [[10, 10, 10], [10, 10, 10], [10, 10, 10]];
// declare your decision variables
dvar int x[1..3][1..3];
// objective function
minimize 0;
// constraints
subject to {
forall(i in 1..3, j in 1..3) x[i][j] <= max_val[i][j];
forall(i in 1..3) sum(j in 1..3) x[i][j] == row_val[i];
forall(j in 1..3) sum(i in 1..3) x[i][j] == col_val[j];
}
The concept of an LP solver is that you only describe the problem you want to solve, then the solver solves it for you. The problem must be described according to a certain set of rules. In the current case (Integer Linear Programming, or ILP), the variables must all be integers, and the constraints and objective function must be linear equalities (or inequalities) with regard to the decision variables.
The solver will then work as a black box. It will analyse the problem, and run algorithms that can solve it, with a ton of optimizations, and output the solution.
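If you would rather drive a solver from a general-purpose language, here is a sketch of the same model in Python, assuming the PuLP library is available (the names row_val, col_val and max_val mirror the OPL code above):

import pulp

N, M = 3, 3
row_val = [7, 11, 8]
col_val = [14, 6, 6]
max_val = [[10] * M for _ in range(N)]

prob = pulp.LpProblem("matrix_fill", pulp.LpMinimize)
x = [[pulp.LpVariable(f"x_{i}_{j}", lowBound=1,
                      upBound=max_val[i][j], cat="Integer")
      for j in range(M)] for i in range(N)]
# constant-valued dummy objective, as discussed above
prob += pulp.lpSum(pulp.lpSum(row) for row in x)
for i in range(N):
    prob += pulp.lpSum(x[i]) == row_val[i]                        # row sums
for j in range(M):
    prob += pulp.lpSum(x[i][j] for i in range(N)) == col_val[j]   # column sums
prob.solve()
print([[int(v.value()) for v in row] for row in x])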
Since you wrote in a comment that you want to come up with your own solution, here are some guidelines:
Use a backtracking algorithm to find a solution; a sketch follows at the end of this answer. Your value space consists of 3*3=9 independent values, each of them between 1 and max_val[i][j]. Your constraints will be the row and column sums (all of them must match).
Initialize your space with all 1s, then increment the values until they reach max_val. Evaluate a condition only once every value it covers is in place (in particular, after 3 values you can evaluate the first row, after 6 the second row, after 7 the first column, after 8 the second column, and after 9 the third row and the third column).
If you reach the 9th value with all conditions passing, you've got a solution. Otherwise try the values from 1 to max_val; if none matches, step back. If the first value has been iterated through, there's no solution.
That's all.
More advanced backtracking:
Your moving values are only the top-left 2*2=4 values. The third column is calculated; the condition is that each calculated value must be between 1 and the max_val for that particular element.
After defining the [1][1] element, you need to calculate the [2][2] element by using the column sum, and validate its value by the row sum (or vice versa). The same processing rules apply as above: iterate through all possible values, step back if none matches, and check rules only when they can be applied.
It is a much faster method, since you have 5 bound variables (the bottom row and the right column) and only 4 unbound ones. These optimizations follow from your particular rules. A bit more complex to implement, though.
PS: 1 is used because you have positive integers. If you have non-negative integers, you need to start with 0.
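Here is a minimal sketch of the basic backtracking scheme in Python; the 3x3 size and the sample sums are taken from the LP answer above, and the helper names are my own:

def solve(max_val, row_val, col_val):
    # Fill the nine cells in row-major order; check each row/column sum
    # as soon as every value it involves has been placed.
    N = 3
    x = [[0] * N for _ in range(N)]

    def ok(k):                       # cells 0..k-1 are filled (row-major)
        i, j = divmod(k - 1, N)      # the cell just placed
        if j == N - 1 and sum(x[i]) != row_val[i]:
            return False             # row i is now complete: check it
        if i == N - 1 and sum(row[j] for row in x) != col_val[j]:
            return False             # column j is now complete: check it
        return True

    def backtrack(k):
        if k == N * N:
            return True              # all 9 placed, all checks passed
        i, j = divmod(k, N)
        for v in range(1, max_val[i][j] + 1):
            x[i][j] = v
            if ok(k + 1) and backtrack(k + 1):
                return True
        x[i][j] = 0                  # no value worked: step back
        return False

    return x if backtrack(0) else None

print(solve([[10] * 3 for _ in range(3)], [7, 11, 8], [14, 6, 6]))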

Find all possible distances from two arrays

Given two sorted arrays A and B of length N, where each element is a natural number less than M, determine all possible distances over all combinations of elements of A and B. In this case, if A[i] - B[j] < 0, then the distance is M + (A[i] - B[j]); in other words, the distance is (A[i] - B[j]) mod M.
Example :
A = {0,2,3}
B = {1,2}
M = 5
Distances = {0,1,2,3,4}
Note: I know the O(N^2) solution, but I need a solution faster than O(N^2) and O(N x M).
Edit: Array A, B, and Distances contain distinct elements.
You can get an O(M log M) complexity solution in the following way:
Prepare an array Ax of length M with Ax[i] = 1 if i belongs to A (and 0 otherwise)
Prepare an array Bx of length M with Bx[M-1-i] = 1 if i belongs to B (and 0 otherwise)
Use the Fast Fourier Transform to convolve these 2 sequences together
Inspect the output array, non-zero values correspond to possible distances
Note that the FFT is normally done with floating point numbers, so in step 4 you probably want to test if the output is greater than 0.5 to avoid potential rounding noise issues.
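A short sketch of those four steps in Python with NumPy; the index arithmetic at the end recovers the distance (a - b) mod M from the convolution index a + (M - 1 - b):

import numpy as np

def all_distances_fft(A, B, M):
    # Steps 1-2: indicator arrays; B is reversed via the M-1-i indexing
    Ax = np.zeros(M)
    Ax[list(A)] = 1
    Bx = np.zeros(M)
    Bx[[M - 1 - b for b in B]] = 1
    # Step 3: convolve via FFT, padded to full (non-circular) length
    size = 2 * M - 1
    C = np.fft.irfft(np.fft.rfft(Ax, size) * np.fft.rfft(Bx, size), size)
    # Step 4: index k = a + (M-1-b) corresponds to distance (k-(M-1)) mod M;
    # compare against 0.5 to sidestep floating-point rounding noise
    return sorted({(k - (M - 1)) % M for k in range(size) if C[k] > 0.5})

print(all_distances_fft([0, 2, 3], [1, 2], 5))   # [0, 1, 2, 3, 4]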
A solution with an optimized N*N approach is also possible.
Convert A into a 0/1 array with a 1 at every position whose value is present in A (over the range [0..M)).
Then pack this array into 64-bit bitmasks, shrinking it by a factor of 64.
This allows results to be inserted in blocks of 64 at a time.
The complexity is still N*N, but the running time decreases greatly. With the limit of 50000 that the author mentioned for the sizes of A, B and M,
the expected operation count is N*N/64 ≈ 4*10^7, which will pass in 1 second.
You can use bitvectors to accomplish this. Operations on large bitvectors are linear in the size of the bitvector, but they are fast, easy to implement, and may work well given your 50k size limit.
Initialize two bitvectors of length M. Call these vectA and vectAnswer. Set the bits of vectA that correspond to the elements in A. Leave vectAnswer with all zeroes.
Define a method to rotate a bitvector by k elements (rotate down). I'll call this rotate(vect,k).
Then, for every element b of B, set vectAnswer = vectAnswer | rotate(vectA, b). The set bits of vectAnswer then mark the possible distances.
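As a sketch in Python, where an unbounded integer stands in for the bitvector:

def all_distances_bitvector(A, B, M):
    mask = (1 << M) - 1
    # vectA has bit i set iff i is in A
    vectA = 0
    for a in A:
        vectA |= 1 << a

    def rotate(vect, k):
        # rotate the M-bit vector down by k positions
        return ((vect >> k) | (vect << (M - k))) & mask

    vectAnswer = 0
    for b in B:
        vectAnswer |= rotate(vectA, b)
    return [d for d in range(M) if (vectAnswer >> d) & 1]

print(all_distances_bitvector([0, 2, 3], [1, 2], 5))   # [0, 1, 2, 3, 4]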

How do I check to see if two (or more) elements of an array/vector are the same?

For one of my homework problems, we had to write a function that creates an array containing n random numbers between 1 and 365. (Done). Then, check if any of these n birthdays are identical. Is there a shorter way to do this than doing several loops or several logical expressions?
Thank you!
CODE SO FAR, NOT DONE YET!!
function prob = bdayprob(N,n)
N = input('Please enter the number of experiments performed: N = ');
n = input('Please enter the sample size: n = ');
count = 0;
for i = 1:n
    x(i) = randi(365);          % draw a random birthday
    if any(x(1:i-1) == x(i))    % has this day appeared before?
        count = count + 1;
    end
end
% prob still to be computed from count
end
If I'm interpreting your question properly, you want to check to see if generating n integers or days results in n unique numbers. Given your current knowledge in MATLAB, it's as simple as doing:
n = 30; %// Define sample size
N = 10; %// Define number of trials
%// Define logical array where each location tells you whether
%// birthdays were repeated for a trial
check = false(1, N);
%// For each trial...
for idx = 1 : N
%// Generate sample size random numbers
days = randi(365, n, 1);
%// Check to see if the total number of unique birthdays
%// are equal to the sample size
check(idx) = numel(unique(days)) == n;
end
Woah! Let's go through the code slowly, shall we? We first define the sample size and the number of trials. We then specify a logical array where each location tells you whether or not there were repeated birthdays generated for that trial. Now, we start with a loop where for each trial, we generate n (the sample size) random numbers from 1 to 365. We then use unique to figure out all unique integers that were generated. If all of the birthdays are unique, then the total number of unique birthdays generated should equal the sample size. If it doesn't, then we have repeats. For example, if we generated a sample of [1 1 1 2 2], the output of unique would be [1 2], and the total number of unique elements is 2. Since this doesn't equal 5, the sample size, we know that the birthdays generated weren't unique. However, if we had [1 3 4 6 7], unique would return the input unchanged, and since the output length is the same as the sample size, we know that all of the days are unique.
So, we check to see if this number is equal to the sample size for each iteration. If it is, then we output true. If not, we output false. When I run this code on my end, this is what I get for check. I set the sample size to 30 and the number of trials to be 10.
check =
0 0 1 1 0 0 0 0 1 0
Take note that if you increase the sample size, there is a higher probability that you will get duplicates, because randi can be considered as sampling with replacement. Therefore, the larger the sample size, the higher the chance of getting duplicate values. I made the sample size small on purpose so that we can see that it's possible to get unique days. However, if you set it to something like 100, or 200, you will most likely get check to be all false as there will most likely be duplicates per trial.
Here are some more approaches that avoid loops. Let
n = 20; %// define sample size
x = randi(365,n,1); %// generate n values between 1 and 365
Any of the following code snippets returns true (or 1) if there are two identical values in x, and false (or 0) otherwise:
Sort and then check if any two consecutive elements are the same:
result = any(diff(sort(x))==0);
Do all pairwise comparisons manually; remove self-pairs and duplicate pairs; and check if any of the remaining comparisons is true:
result = nnz(tril(bsxfun(@eq, x, x.'), -1)) > 0;
Compute the distance between distinct values, considering each pair just once, and then check if any distance is 0:
result = any(pdist(x(:))==0);
Find the number of occurrences of the most common value (mode):
[~, occurs] = mode(x);
result = occurs>1;
I don't know if I'm supposed to solve the problem for you, but perhaps a few hints may lead you in the right direction (besides, I'm not a MATLAB expert, so it will be in general terms):
Maybe not, but you have to ask yourself what they expect of you. The solution you propose requires you to loop through the array in two nested loops, which means n*(n-1)/2 passes through the inner comparison (i.e. quadratic time complexity).
There are a number of ways you can improve the time complexity of the problem. The most straightforward would be to have a 365-element table where you keep track of whether a particular number has been seen yet, which requires only a single loop (i.e. linear time complexity); see the sketch below. But maybe that solution is a little bit ad hoc? What we're basically looking for is a fast lookup of whether a particular number has been seen before; there exist more memory-efficient structures that allow lookup in O(1) time and O(log n) time (if you know these, you have an arsenal of tools to use).
Then of course you could use the pigeonhole principle to provide the answer much faster in some special cases (remember that you were only asked to determine whether two or more numbers are equal or not).
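For illustration, the single-loop seen-table idea in Python (has_duplicate is a hypothetical helper; in MATLAB the table would just be a logical vector of length 365):

def has_duplicate(days, m=365):
    if len(days) > m:
        return True              # pigeonhole: more days than slots
    seen = [False] * (m + 1)     # one slot per possible day (1..m)
    for d in days:
        if seen[d]:
            return True
        seen[d] = True
    return False

print(has_duplicate([5, 17, 42, 17]))   # True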

Randomize matrix elements between two values while keeping row and column sums fixed (MATLAB)

I have a bit of a technical issue, but I feel like it should be possible with MATLAB's powerful toolset.
What I have is a random n by n matrix of 0's and w's, say generated with
A=w*(rand(n,n)<p);
A typical value of w would be 3000, but that should not matter too much.
Now, this matrix has two important quantities, the vectors
c = sum(A,1);
r = sum(A,2)';
These are two row vectors, the first denotes the sum of each column and the second the sum of each row.
What I want to do next is randomize each value of w, for example between 0.5 and 2. This I would do as
rand_M = (2-0.5).*rand(n,n) + 0.5;
A_rand = rand_M.*A;
However, I don't want to just pick these random numbers: I want them to be such that for every column and row, the sums are still equal to the elements of c and r. So to clean up the notation a bit, say we define
A_rand_c = sum(A_rand,1);
A_rand_r = sum(A_rand,2)';
I want that for all j = 1:n, A_rand_c(j) = c(j) and A_rand_r(j) = r(j).
What I'm looking for is a way to redraw the elements of rand_M in a sort of algorithmic fashion I suppose, so that these demands are finally satisfied.
Now of course, unless I have infinite amounts of time this might not really happen. I therefore accept these quantities to fall into a specific range: A_rand_c(j) has to be an element of [(1-e)*c(j),(1+e)*c(j)] and A_rand_r(j) of [(1-e)*r(j),(1+e)*r(j)]. This e I define beforehand, say like 0.001 or something.
Would anyone be able to help me in the process of finding a way to do this? I've tried an approach where I just randomly repick the numbers, but that really isn't getting me anywhere. It does not have to be crazy efficient either; I just need it to work in finite time for networks of size, say, n = 50.
To be clear, the final output is the matrix A_rand that satisfies these constraints.
Edit:
Alright, so after thinking a bit I suppose it might be doable with some while statement that goes through every element of the matrix. The difficult part is that there are four possibilities: if you are at a specific element A_rand(i,j), it could be that A_rand_c(j) and A_rand_r(i) are both too small, both too large, or one of each. The first two cases are good, because then you can just redraw the random number until it is smaller (or larger) than the current value, improving the situation. But the mixed cases are problematic, as you will improve one sum but not the other. I guess it would have to look at which criterion is less satisfied, so that it tries to fix the one that is worse. But this is not trivial, I would say...
You can take advantage of the fact that rows/columns with a single non-zero entry in A automatically give you results for that same entry in A_rand. If A(2,5) = w and it is the only non-zero entry in its column, then A_rand(2,5) = w as well. What else could it be?
You can alternate between finding these single-entry rows/cols, and assigning random numbers to entries where the value doesn't matter.
Here's a skeleton for the process:
A_rand=zeros(size(A)) is the matrix you are going to fill
entries_left = A>0 is a binary matrix showing which entries in A_rand you still need to fill
col_totals=sum(A,1) is the amount you still need to add in every column of A_rand
row_totals=sum(A,2) is the amount you still need to add in every row of A_rand
while sum( entries_left(:) ) > 0
    % STEP 1:
    % fill entries in A_rand wherever entries_left has a row/col with one nonzero entry
    % keep looping over this step until nothing changes
    % update() A_rand, entries_left, row_totals, col_totals every time you loop

    % STEP 2:
    % let (i,j) be the indices of the next non-zero entry in entries_left
    % assign a random number to A_rand(i,j), at most col_totals(j) and at most row_totals(i)
    % update() A_rand, entries_left, row_totals, col_totals
end

update():
    A_rand(i,j) = random_value;
    entries_left(i,j) = 0;
    col_totals(j) = col_totals(j) - random_value;
    row_totals(i) = row_totals(i) - random_value;
end
Picking the range for random_value might be a little tricky. The best I can think of is to draw it from a relatively narrow distribution centered around N*w*p where p is the probability of an entry in A being nonzero (this would be the average value of row/column totals).
This doesn't scale well to large matrices, as the work grows with n^2 complexity. I tested it on a 200 by 200 matrix and it finished in about 20 seconds.
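For reference, a minimal sketch of that skeleton in Python/NumPy, assuming all entries of A are non-negative. The random draw uses the naive bound of whatever the row and column still have left, so in practice you may need to re-run until the tolerance check on the final sums passes:

import numpy as np

def randomize_keeping_sums(A, rng=None):
    rng = rng or np.random.default_rng()
    A_rand = np.zeros(A.shape)
    left = A > 0                            # entries of A_rand still to fill
    row_tot = A.sum(axis=1).astype(float)   # amount still to add per row
    col_tot = A.sum(axis=0).astype(float)   # amount still to add per column

    def update(i, j, v):                    # the update() step of the skeleton
        A_rand[i, j] = v
        left[i, j] = False
        row_tot[i] -= v
        col_tot[j] -= v

    while left.any():
        # STEP 1: rows/columns with a single unfilled entry are forced
        changed = True
        while changed:
            changed = False
            for i, j in zip(*np.nonzero(left)):
                if not left[i, j]:
                    continue
                if left[i].sum() == 1:       # last free entry in row i
                    update(i, j, row_tot[i])
                    changed = True
                elif left[:, j].sum() == 1:  # last free entry in column j
                    update(i, j, col_tot[j])
                    changed = True
        if not left.any():
            break
        # STEP 2: pick the next free entry and draw a bounded random value
        i, j = map(int, next(zip(*np.nonzero(left))))
        update(i, j, rng.uniform(0, min(row_tot[i], col_tot[j])))
    return A_rand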

find nth-smallest value across m sorted arrays using idea from 2 sorted arrays

May I ask whether this would be possible? The general approach would be something like finding the n-th value over two sorted arrays: ignore the insignificant parts and focus on the rest by adjusting the value of n in the recursion.
The two sorted arrays problem can be solved in time O(log|A| + log|B|). Since this question is similar, I would like to ask whether there exists an algorithm for m sorted arrays with time O(log|A1| + log|A2| + ... + log|Am|),
or some similar variation near the time mentioned above (due to the variable m, we might need some other sorting algorithm for the pivots from those arrays),
or, if such an algorithm doesn't exist, why?
I just can't find this algorithm by googling.
There is a simple randomized algorithm:
Select a pivot randomly from any of the m arrays; let's call it x.
For every array, do a binary search for x to find out how many values < x are in the array. Say we have ri values < x in array i. Then we know that x has rank r = sum(i = 1 to m, ri) in the union of all arrays.
If n <= r, we restrict each array i to the indices 0...(ri - 1) and recurse. If n > r, we restrict each array to the indices ri...(|Ai| - 1) and recurse with n replaced by n - r.
repeat
The expected recursion depth is O(log N), with N being the total number of elements, by a proof similar to that of Quickselect, so the expected running time is something like O(m * log^2(N)).
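A sketch of this randomized algorithm in Python, tracking a window [lo, hi) per array instead of literally slicing; ties are handled by also counting elements <= x, so the loop terminates even with duplicates:

import bisect
import random

def nth_smallest(arrays, n):
    # n is the 1-based rank sought in the union of the sorted arrays
    lo = [0] * len(arrays)
    hi = [len(a) for a in arrays]
    while True:
        sizes = [h - l for l, h in zip(lo, hi)]
        # pick a pivot x uniformly from the elements still in play
        k = random.randrange(sum(sizes))
        for i, s in enumerate(sizes):
            if k < s:
                x = arrays[i][lo[i] + k]
                break
            k -= s
        # binary-search every array for the counts of values < x and <= x
        lt = [bisect.bisect_left(a, x, l, h) for a, l, h in zip(arrays, lo, hi)]
        le = [bisect.bisect_right(a, x, l, h) for a, l, h in zip(arrays, lo, hi)]
        r_lt = sum(c - l for c, l in zip(lt, lo))
        r_le = sum(c - l for c, l in zip(le, lo))
        if r_lt < n <= r_le:
            return x                # x itself has the sought rank
        if n <= r_lt:
            hi = lt                 # keep only elements < x
        else:
            n -= r_le               # keep only elements > x, adjust the rank
            lo = le

print(nth_smallest([[1, 4, 9], [2, 3, 10], [5, 6]], 5))   # 5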
The paper "Generalized Selection and Ranking" by Frederickson and Johnson proposes selection and ranking algorithms for different scenarios, for example an O(m + c * log(k/c)) algorithm to select the k-th element from m equally sized sorted sequences, with c = min{m, k}.
