Iterating over all possible ways of dividing 12 objects over 4 groups - arrays

I am trying to find a way to iterate over all possible combinations of dividing 12 objects into 4 equally sized groups (order within a group doesn't matter; the order of the groups does matter).
I know the total number of combinations is 369600 = 12! / (3!)^4, but I have no idea how I would go about iterating over all of them.

You have objects O[0], ..., O[11] and groups G[0], ..., G[3]. You can assign objects to groups in steps like this:
1. Select 3 objects for G[0]:
for(i = 0; i < 10; i++){
  for(j = i+1; j < 11; j++){
    for(k = j+1; k < 12; k++){
      G[0] = {O[i], O[j], O[k]}
2. Create a new object list by removing O[i], O[j], O[k] from the original list, then run the same kind of nested loop over the 9 remaining objects for G[1]:
for(l = 0; l < 7; l++){
  for(m = l+1; m < 8; m++){
    for(n = m+1; n < 9; n++){
      G[1] = {O[l], O[m], O[n]}
3. Do the same thing as step 2 for G[2].
4. Assign the 3 remaining objects to G[3].
5. Write out G[0], G[1], G[2], G[3].
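The steps above can be sketched in Python (a hypothetical helper, not from the original answer), using itertools.combinations to pick each group from the remaining objects:

```python
from itertools import combinations

def group_assignments(objects, group_size=3):
    """Yield every way to split `objects` (assumed distinct) into ordered
    groups of `group_size`. Order within a group doesn't matter; the order
    of the groups does."""
    objects = list(objects)
    if not objects:
        yield []
        return
    # Pick the first group, then recurse on the remaining objects.
    for first in combinations(objects, group_size):
        rest = [o for o in objects if o not in first]
        for tail in group_assignments(rest, group_size):
            yield [list(first)] + tail

# Small check: 6 objects in 2 groups of 3 gives 6! / (3!)^2 = 20 assignments
print(sum(1 for _ in group_assignments(range(6))))  # 20
```

With 12 objects and groups of 3 the generator yields exactly 12! / (3!)^4 = 369600 assignments.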


Generate a matrix of combinations (permutation) without repetition (array exceeds maximum array size preference)

I am trying to generate a matrix that has all unique permutations of [0 0 1 1]. I wrote this code for it:
v1 = [0 0 1 1];
M1 = unique(perms([0 0 1 1]),'rows');
• This isn't ideal, because perms() treats each vector element as unique and generates all
4! = 4 * 3 * 2 * 1 = 24 permutations.
• With unique() I then delete all the repeated rows, so I end up with the combination matrix M1 →
only 4! / (2! * (4-2)!) = 6 combinations!
Now, when I try to do something very simple like:
n = 15;
i = 1;
v1 = [zeros(1,n-i) ones(1,i)];
M = unique(perms(v1),'rows');
• Instead of getting 15! / (1! * (15-1)!) = 15 combinations, the perms() function tries to generate all
15! = 1.3077e+12 permutations and is interrupted.
• How would you go about doing this in a much better way? Thanks in advance!
You can use nchoosek to return the indices which should be 1. I think in your heart you knew this must be possible, because you were using the definition of nchoosek to determine the expected final number of permutations! So we can use:
idx = nchoosek( 1:N, k );
Where N is the number of elements in your array v1, and k is the number of elements which have the value 1. Then it's simply a case of creating the zeros array and populating the ones.
v1 = [0, 0, 1, 1];
N = numel(v1); % number of elements in array
k = nnz(v1); % number of non-zero elements in array
colidx = nchoosek( 1:N, k ); % column index for ones
rowidx = repmat( 1:size(colidx,1), k, 1 ).'; % row index for ones
M = zeros( size(colidx,1), N ); % create output
M( rowidx(:) + size(M,1) * (colidx(:)-1) ) = 1;
This works for both of your examples without the need for a huge intermediate matrix.
Aside: since you'd have the indices using this approach, you could instead create a sparse matrix, but whether that's a good idea or not depends on what you're doing after this point.
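The same indices-first idea carries over to other languages; here is a plain-Python sketch (the function name binary_rows is my own, not from the answer), using itertools.combinations in place of nchoosek:

```python
from itertools import combinations

def binary_rows(n, k):
    """All length-n 0/1 rows with exactly k ones, one row per choice of positions."""
    rows = []
    for ones in combinations(range(n), k):  # like nchoosek(1:n, k), but 0-based
        row = [0] * n
        for pos in ones:
            row[pos] = 1
        rows.append(row)
    return rows

print(len(binary_rows(4, 2)))   # 6  = 4! / (2! * 2!)
print(len(binary_rows(15, 1)))  # 15
```

As with the nchoosek approach, no n! intermediate matrix is ever built, so the n = 15 case is no problem.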

Python loop not taking any inputs

What's wrong with this code? I fail to see the error. The interpreter shows this output. (N.B.: I'm new to Python.)
Output:
Enter element no: 1
None
no_of_zeros = 0
for i in range(0, 5):
    array[i] = input(print("Enter element no:", i+1))
    if(array[i]==0):
        array[i] = 1
        is_zero += 1
sum = 0
for j in range(0, 5):
    sum = sum + array[j]
print(sum)
print(no_of_zeros)
You can't access an index that doesn't exist. Pre-allocate array or use array.append. Also note that print() returns None, so input(print(...)) shows None as the prompt, and is_zero was never defined — you meant no_of_zeros:
array = list(range(0, 5))
no_of_zeros = 0
for i in range(0, 5):
    array[i] = int(input("Enter element no{}:".format(i+1)))
    if array[i] == 0:
        array[i] = 1
        no_of_zeros += 1
sum = 0
for j in range(0, 5):
    sum = sum + array[j]
print(sum)
print(no_of_zeros)
Also, that is a very C-ish way of doing things (which is fine). Python can make this code much simpler; once you catch on to the Python way of doing things, you will see why Python is so popular. Learn list comprehensions, for instance.
Your code can be reduced to three lines ...
array = [int(input("Enter element no{}:".format(i+1))) for i in range(5)]
print(sum(x if x != 0 else 1 for x in array))  # zeros count as 1, matching the original
print(sum(1 for x in array if x == 0))

Find all ascending triplets in array

I'm trying to extract all the ascending triplets in an array of arbitrary length. For example, if I have an array like [1 2 3 4], I'd like to obtain [1 2 3], [1 2 4], [1 3 4], [2 3 4].
Here's a simple "graphical" example with 5 elements (the arrows are the indexes used to iterate; each step is a found triplet).
So far I've just implemented a simple sorting algorithm, which gives me the ordered array.
Once I have the ordered array, I iterate with 3 pointers (or just indexes), incrementing the pointer that starts at the third element until it reaches the end of the array.
Once it reaches the end, I increase the second pointer and reset the third to the position right next to pointer 2, and so on.
array = [3 2 1 5];
array = sort(array);
% Now I should iterate over the 3 indexes, but I'm totally lost about how to place them
for i = 1:length(array)-2
    for j = 2:length(array)-1
        for k = 3:length(array)
            % storing triplet
        end
    end
end
Right now I'm able to iterate over the array, and I can extract triplets until the k index reaches the end of the array.
The problem is that once k reaches the end, I have to increment the j starting point and reset k to be right next to the second index.
To make it clear: right now once k reaches the end, it will start again from 3 and j will also be 3, but I need them to be j = 3 and k = 4 after the first iteration of k is completed, and so on (the same applies to j relative to i; look at the image for a clearer explanation).
How do I fix the indexes in order to extract the triplets correctly?
It seems to me that each inner iteration should start one position after the outer one:
for j = (i+1):length(array)-1
    for k = (j+1):length(array)
Generalizing this to all three loops, in JavaScript:
const arrayTotal = [3, 2, 1, 5];
let combinationArray = [];
arrayTotal.sort((a, b) => a - b); // numeric sort; the default sort compares as strings
for (let i = 0; i < (arrayTotal.length - 2); i++) {
  for (let j = (i + 1); j < (arrayTotal.length - 1); j++) {
    for (let k = (j + 1); k < arrayTotal.length; k++) {
      combinationArray.push([arrayTotal[i], arrayTotal[j], arrayTotal[k]]);
    }
  }
}
console.log(combinationArray);
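For completeness, since every 3-element subset of the sorted array is an ascending triplet, Python's itertools.combinations produces the same list directly (a sketch, not part of the original answer):

```python
from itertools import combinations

array = sorted([3, 2, 1, 5])
# combinations() already emits subsets in index order, so on a sorted
# array each triplet is ascending
triplets = [list(t) for t in combinations(array, 3)]
print(triplets)  # [[1, 2, 3], [1, 2, 5], [1, 3, 5], [2, 3, 5]]
```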

Vectorizing a code that requires to complement some elements of a binary array

I have a matrix A of dimension m-by-n composed of zeros and ones, and a matrix J of dimension m-by-1 reporting some integers from [1,...,n].
I want to construct a matrix B of dimension m-by-n such that for i = 1,...,m
B(i,j) = A(i,j) for j=1,...,n-1
B(i,n) = abs(A(i,n)-1)
If sum(B(i,:)) is odd then B(i,J(i)) = abs(B(i,J(i))-1)
This code does what I want:
m = 4;
n = 5;
A = [1 1 1 1 1; ...
0 0 1 0 0; ...
1 0 1 0 1; ...
0 1 0 0 1];
J = [1;2;1;4];
B = zeros(m,n);
for i = 1:m
B(i,n) = abs(A(i,n)-1);
for j = 1:n-1
B(i,j) = A(i,j);
end
if mod(sum(B(i,:)),2)~=0
B(i,J(i)) = abs(B(i,J(i))-1);
end
end
Can you suggest a more efficient algorithm that does not use the nested loops?
No for loops are required for most of your question; it just needs effective use of the colon operator and logical indexing:
% First initialize B to all zeros
B = zeros(size(A));
% Copy all but the last column of A into B
B(:, 1:end-1) = A(:, 1:end-1);
% Assign the last column of B based on the last column of A
B(:, end) = abs(A(:, end) - 1);
% Flip B(i, J(i)) on every row with an odd sum.
% Note that B(oddRow, J(oddRow)) = abs(B(oddRow, J(oddRow)) - 1) does NOT
% work here, because it indexes the whole block of rows and columns
% rather than the element pairs.
oddRow = find(mod(sum(B, 2), 2) ~= 0);
for ii = 1:numel(oddRow)
    B(oddRow(ii), J(oddRow(ii))) = abs(B(oddRow(ii), J(oddRow(ii))) - 1);
end
I guess for the last part it is best to use a for loop.
Edit: see the neat trick by EBH to do the last part without a for loop.
Just to add to @ammportal's good answer: the last part can also be done without a loop by using linear indices, for which sub2ind is useful. Adopting the last part of the previous answer:
% Find all rows in B with an odd sum
oddRow = find(mod(sum(B, 2), 2) ~= 0);
% convert the locations to linear indices
ind = sub2ind(size(B),oddRow,J(oddRow));
B(ind) = abs(B(ind)- 1);
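The same fully vectorized construction translates to a few NumPy lines, shown here as a cross-check (my own translation, not from either answer; XOR with 1 plays the role of abs(x - 1), since both flip a 0/1 bit):

```python
import numpy as np

A = np.array([[1, 1, 1, 1, 1],
              [0, 0, 1, 0, 0],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 0, 1]])
J = np.array([1, 2, 1, 4]) - 1        # 0-based column indices

B = A.copy()
B[:, -1] ^= 1                          # B(:, end) = abs(A(:, end) - 1)
odd = B.sum(axis=1) % 2 == 1           # rows whose sum is odd
B[np.nonzero(odd)[0], J[odd]] ^= 1     # flip B(i, J(i)) on those rows
print(B)
```

Pairing the row indices with the column indices J on those rows is NumPy's fancy indexing, which addresses element pairs directly, i.e. exactly what sub2ind achieves in MATLAB.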

Gnuplot: Nested “plot” iteration (“plot for”) with dependent loop indices

I have recently attempted to concisely draw several graphs in a plot using gnuplot and the plot for ... syntax. In this case, I needed nested loops because I wanted to pass something like the following index combinations (simplified here) to the plot expression:
i = 0, j = 0
i = 1, j = 0
i = 1, j = 1
i = 2, j = 0
i = 2, j = 1
i = 2, j = 2
and so on.
So i loops from 0 to some upper limit N, and for each iteration of i, j loops from 0 to i (so j <= i). I tried doing this with the following:
# f(i, j, x) = ...
N = 5
plot for [i=0:N] for [j=0:i] f(i, j, x) title sprintf('j = %d', j)
but this only gives five iterations with j = 0 every time (as shown by the title). So it seems that gnuplot only evaluates the for expressions once, fixing i = 0 at the beginning and not re-evaluating to keep up with changing i values. Something like this has already been hinted at in this answer (“in the plot for ... structure the second index cannot depend on the first one.”).
Is there a simple way to do what I want in gnuplot (i.e. use the combinations of indices given above with some kind of loop)? There is the do for { ... } structure since gnuplot 4.6, but that requires individual statements in its body, so it can’t be used to assemble a single plot statement. I suppose one could use multiplot to get around this, but I’d like to avoid multiplot if possible because it makes things more complicated than seems necessary.
I took a personal interest in your problem. For your specific case you can use a mathematical trick: remap your indices (i,j) to a single index k, such that
(0,0) -> (0)
(1,0) -> (1)
(1,1) -> (2)
(2,0) -> (3)
...
It can be shown that the relation between i and j and k is
k = i*(i+1)/2 + j
which can be inverted with a bit of algebra
i(k)=floor((sqrt(1+8.*k)-1.)/2.)
j(k)=k-i(k)*(i(k)+1)/2
Now, you can use a single index k in your loop
N = 5
kmax = N*(N+1)/2 + N
plot for [k=0:kmax] f(i(k), j(k), x) title sprintf('j = %d', j(k))
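The forward and inverse formulas are easy to sanity-check outside gnuplot; here is a small Python check (my own, mirroring the gnuplot definitions of i(k) and j(k)):

```python
from math import floor, sqrt

# Inverse of k = i*(i+1)/2 + j, as in the answer
def i_of(k):
    return floor((sqrt(1 + 8 * k) - 1) / 2)

def j_of(k):
    i = i_of(k)
    return k - i * (i + 1) // 2

N = 5
kmax = N * (N + 1) // 2 + N
pairs = [(i_of(k), j_of(k)) for k in range(kmax + 1)]
print(pairs[:4])  # [(0, 0), (1, 0), (1, 1), (2, 0)]
```

Running k from 0 to kmax reproduces exactly the (i, j) combinations listed at the top of the question.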
