I am struggling to scale a constraint problem: it breaks down for large values and/or when I try to optimise instead of just looking for any solution. I've taken some steps to break the search space down based on advice from some previous questions, but it's still stalling. Are there any more techniques that could help me optimise this computation?
%%% constants %%%
#const nRounds = 4.
#const nPlayers = 20.
#const nRooms = 4.
#const nDecks = 7.
player(1..nPlayers).
room(1..nRooms).
deck(1..nDecks).
writer(1,1;2,2;3,3;4,4).
% For reference - that's what I started with:
%nRounds { seat(Player, 1..nRooms, 1..nDecks) } nRounds :- player(Player).
% Now instead I'm using a few building blocks
% Each player shall only play nRounds decks
nRounds { what(Player, 1..nDecks) } nRounds :- player(Player).
% Each player shall only play in up to nRounds rooms.
1 { where(Player, 1..nRooms) } nRounds :- player(Player).
% For each deck, 3 or 4 players can play in each room.
3 { who(1..nPlayers, Room, Deck) } 4 :- room(Room), deck(Deck).
% Putting it all together, hopefully, this leads to fewer combinations than the original monolithic choice rule.
{ seat(Player, Room, Deck) } :- what(Player, Deck), where(Player, Room), who(Player, Room, Deck).
% A player can only play a deck in a single room.
:- seat(Player, Room1, Deck), seat(Player, Room2, Deck), Room1 != Room2.
% A player must play nRounds decks overall.
:- player(Player), #count { Room, Deck: seat(Player, Room, Deck) } != nRounds.
% Any deck in any room must be played by 3-4 players.
legal_player_count(3..4).
:- room(Room), deck(Deck),
   Players = #count { Player: seat(Player, Room, Deck) },
   Players > 0,
   not legal_player_count(Players).
% Writers cannot play their own decks.
:- writer(Player, Deck), seat(Player, _, Deck).
% At least one non-playing player per room.
:- deck(Deck),
   Playing = #count { Player, Room: seat(Player, Room, Deck) },
   Rooms = #count { Room: seat(_, Room, Deck) },
   nPlayers - Playing < Rooms.
%:- room(R1), deck(D), room(R2), X = #sum { P: seat(P, R1, D) }, Y = #sum { P: seat(P, R2, D) }, R1 > R2, X > Y.
#minimize { D: decks(D) }.
#show decks/1.
#show seat/3.
% #show common_games/3.
When, or if, this becomes manageable I am hoping to add more optimisation objectives to choose the best configurations along the lines of:
% Input points(P, R, D, X) to report points.
% winner(P, R, D) :- points(P, R, D, X), X = #max { Y : points(_, R, D, Y) }.
% Compute each player's rank based on each round:
% rank(P, D, R) :- points(P, Room, D, X), winner(Winner, Room, D), D_ = D - 1,
% rank(P, D_, R_),
% R = some_combination_of(X, P=Winner, R_).
% latest_rank(P, R) :- D = #max { DD: rank(P, DD, _) }, rank(P, D, R).
% Total number of decks played throughout the night (for minimisation?)
decks(Decks) :- Decks = #count { Deck: seat(_, _, Deck) }.
% Total number of games played together by the same players (for minimisation)
% The total sum of this predicate is invariant
% Minimisation should take place using a superlinear value (e.g. the square)
common_games(Player1, Player2, Games) :-
    player(Player1), player(Player2), Player1 != Player2,
    Games = #count { Room, Deck:
        seat(Player1, Room, Deck),
        seat(Player2, Room, Deck)
    }, Games > 0.
% For example:
% common_game_penalty(X) :- X = #sum { Y*Y, P1, P2 : common_games(P1, P2, Y) }.
% Another rank-based penalty needs to be added once the rank mechanics are there
% Then the 2 types of penalties need to be combined and / or passed to the optimiser
Update - Problem description
P players gather for a quiz night. D decks and R rooms are
available to play.
Each room can only ever host either 3 or 4 players (due to the rules of the game, not space).
Each Deck is played at most once and is played in multiple rooms simultaneously - so in a sense Deck is kind of synonymous to "Round".
Each player can only play the same Deck at most once.
Each player only gets to play N times during the night (N is pretty much fixed and it's 4).
So if 9 decks are played during the night (i.e. if there are lots of players
present), each will play 4 out of these 9.
Therefore, it is not necessary for each player to play in each "deck/round". In fact, for each deck there is a writer and it is usually one of the players
present.
Naturally, the writer cannot play their own deck so they have to stay out for that round. Additionally, for each deck/round,
somebody must read the questions in each room so if 16 players are
present and there are 4 rooms, it is impossible for all 16 players to
play. It is possible to have 4 rooms with 3 players each (and the
remaining 4 players read out the questions) or to have 3 rooms with 4
players each (with 3 players reading out the questions and 1
spectating).
Hopefully this clears up the confusion; if not, I can try to give more elaborate examples. Basically, say you have 4 rooms and 30 players:
You pick 16 who'll play and 4 more who'll read out the questions
Then you have 16 people who played their 1/4 deck/rounds and 14 who are still at 0/4
So then you can either let the other 14 people play (4,4,3,3 players per room) or continue maximising the room utility so that after the second round everyone played at least once and 2/30 players have already played 2/4 games.
So then you continue picking some number of people until everyone has played exactly 4 decks/rounds.
P.S. You have 2 notions of round - one at a personal level where everyone has 4 to play and the other at the league level where there is some number of decks >4 and each deck is considered "a round" in the eyes of everyone present. From what I understood this was the most confusing bit about the setup that I didn't clarify well at the beginning.
I have rewritten the encoding with your new specification, without too many optimizations, to get the problem straight.
Remarks:
I assume that the one who "reads the questions" is the writer?
I ensured that there is 1 writer per room available, but I didn't name them.
#const nPlayers = 20.
#const nRooms = 4.
#const nDecks = 6.
player(1..nPlayers).
room(1..nRooms).
deck(1..nDecks).
% player P plays in room R in round D
{plays(P,R,D)} :- deck(D), room(R), player(P).
% a player may only play in a single room each round
:- player(P), deck(D), 1 < #sum {1,R : plays(P,R,D)}.
% not more than 4 players per room
:- deck(D), room(R), 4 < #sum {1,P : plays(P,R,D)}.
% not less than 3 players per room
:- deck(D), room(R), 3 > #sum {1,P : plays(P,R,D)}.
plays(P,D) :- plays(P,R,D).
% at least one writer per room (we need at least one player not playing for each room, we do not care who does it)
:- deck(D), nRooms > #sum {1,P : not plays(P,D), player(P)}.
% each player only plays 4 times during the night
:- player(P), not 4 = #sum {1,D : plays(P,D)}.
#show plays/3.
%%% shortcut if too many decks are used; each player can only play 4 times but at least 3 players have to play in a room (currently there is no concept of an empty room)
:- 3*nRooms*nDecks > nPlayers*4.
Note that I added the last constraint, as your initial configuration was not solvable: each player has to play exactly 4 rounds and we have twenty players, which is 80 individual games. Given that at least 3 players have to be in a room, and we have 4 rooms and 7 decks, that is 3 * 4 * 7 = 84, so we would need to play at least 84 individual games. You could probably also compute the number of decks, I think.
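For example, the same arithmetic gives the largest number of decks that can still be scheduled (a plain Python sketch of the feasibility check above; the variable names are just illustrative):
nPlayers, nRooms, nGamesPerPlayer = 20, 4, 4
# Every player plays exactly 4 individual games, so the night provides
# nPlayers * 4 player-games; each deck that is played consumes at least
# 3 * nRooms of them, which bounds how many decks can be scheduled.
total_player_games = nPlayers * nGamesPerPlayer       # 20 * 4 = 80
min_games_per_deck = 3 * nRooms                       # 3 * 4 = 12
max_decks = total_player_games // min_games_per_deck  # 80 // 12 = 6
print(max_decks)  # 6, which is why nDecks = 7 cannot be satisfied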
Related
I am trying to use clingo to generate tournament player-room allocations:
player(1..20).
room(1..4).
played(1..20, 0).
rank(1..20, 1).
played(1..20, 1..20, 0).
0 { used_room(R) } 1 :- room(R).
3 { game(P, R) } 4 :- used_room(R), player(P).
:- game(P, R1), game(P, R2), R1 != R2.
penalty(Y) :- Y = #sum {
    X: game(P1, R), game(P2, R), played(P1, P2, X);
    X: game(P1, R), game(P2, R), rank(P1, R1), rank(P2, R2), abs(R1-R2) = X;
    4 - X: played(P, X), not game(P, _)
}.
#minimize { X: penalty(X) }.
The first 5 lines are supposed to be the "input":
The number of players present is variable
So is the number of rooms available
Each player needs to play 4 rounds throughout the night so we record the number of rounds played by each player so far
Each player has a rank (in the league table), which is updated after every round - ideally players in every room should have similar levels (think ELO)
To discourage the algorithm from putting the same players together all the time, we also keep track of the number of rounds any given pair of players spent together in the same room
The idea is to update these inputs after every round (once the points are in) and feed them back into the solver to produce the next round's allocation.
Then, I tried to add some constraints:
There is a certain number of rooms available but they do not all have to be used. Each room can be either used or unused each round
For any room that is used, it has to have either 3 or 4 players assigned to it (due to the mechanics of the game - 4 is always preferred, 3 is for dealing with edge cases)
No player can be assigned to more than one room for any given round
Finally, I tried defining some "penalties" to guide the solver to pick the best allocations:
For every pair of players P1, P2 that were placed in the same room add X to the penalty where X is the number of times they already played together.
For every pair of players P1, P2 that were placed in the same room add the (absolute) difference in their rank to the penalty.
For every player that still has to play in X more rounds but hasn't been selected for this round, add X to the penalty.
What I meant was for this penalty to accumulate, so that each player who has 4 rounds to go (so every player at the beginning) adds 4 points to the penalty and not just one (which is what happens with this code). In practice, running this yields penalty(4) and no game(player, room) allocations whatsoever.
Also, I'd like to have some constraint so that I cannot end up in a situation where some players still have rounds left to play but there are not enough players left (e.g. if you have 1, 2 or 5 players left who each just need to play one more round). I am not sure what the right invariant is that could guarantee this cannot happen even several rounds ahead. This is more of a logic question than a clingo question. In practice, you have around 3-4 rooms available and around 20-30 players - importantly, there is never a guarantee that the number of players is a multiple of 4.
Something else that's missing from my current "implementation" is a constraint such that for a specific subset of players (let's call them "experts"), at least one of them has to stay out of the current round (and lead it). And in general for each room used, at least one player has to stay out (including the one expert). This should be a hard constraint.
Finally, we'd like to maximise utilisation for the rooms i.e. maximise the number of players per round and minimise the number of rounds overall. This should be a weak constraint (just like the constraints to do with ranks and games played so far together).
Many thanks in advance for any help or advice! Unfortunately, the documentation does not give so many sophisticated examples so I couldn't figure out what the right syntax for my use cases is.
Writing everything at the start and trying to debug afterwards is difficult in answer set programming. In your case it may be better to first define your search space and then write constraints one by one to remove unwanted answers.
To update inputs after every round you will have to work with "online ASP". You may want to look at https://potassco.org/clingo/, as it contains valuable learning material that could help with your learning.
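For example, the feedback loop can be driven from Python by re-grounding and re-solving with the updated facts after each round (a rough sketch assuming the clingo Python module is available; the encoding and fact strings are placeholders):
import clingo

def solve_round(encoding, facts):
    # Ground and solve once; return the shown atoms of the last (best found) model.
    ctl = clingo.Control()
    ctl.add("base", [], encoding + "\n" + facts)   # facts e.g. "played(1,2). rank(1,5)."
    ctl.ground([("base", [])])
    atoms = []
    def on_model(model):
        atoms[:] = [str(sym) for sym in model.symbols(shown=True)]
    ctl.solve(on_model=on_model)
    return atoms

# After every real round, rebuild the fact string from the results
# and call solve_round again to get the next allocation.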
The encoding below may be a good starting point for you.
%%% constants %%%
#const numberOfRounds = 4.
#const numberOfPlayers = 2.
#const numberOfRooms = 4.
%%% constants %%%
%%% define players and their initial ranks %%%
player(1..numberOfPlayers,1).
%%% define players and their initial ranks %%%
%%% define rooms %%%
room(1..numberOfRooms).
%%% define rooms %%%
%%% define rounds %%%
round(1..numberOfRounds).
%%% define rounds %%%
%%% define search space (all possible values) %%%
search(P,R,S) :- player(P,_), room(R), round(S).
%%% define search space (all possible values) %%%
%%% define played %%%
{played(P,R,S)} :- search(P,R,S).
%%% define played %%%
%%% remove answers that do not satisfy the condition "Each player needs to play 4 rounds" %%%
:- player(P,_), X = #count{S : played(P,_,S)}, X != numberOfRounds.
%%% remove answers that do not satisfy the condition "Each player needs to play 4 rounds" %%%
%%% show output %%%
#show.
#show played/3.
%%% show output %%%
Based on NTP's advice, I tried rewriting again, and now pretty much all constraints are present and seem to work, except for the ranking-based penalty, which I still have to add.
%%% constants %%%
#const nRounds = 3.
#const nPlayers = 4.
#const nRooms = 3.
#const nDecks = 4.
player(1..nPlayers).
room(1..nRooms).
deck(1..nDecks).
writer(1,1;2,2;3,3;4,4).
{ played(P, R, D) } :- player(P), room(R), deck(D).
% A player can only play a deck in a single room.
:- played(P, R1, D), played(P, R2, D), R1 != R2.
% A player must play nRounds decks overall.
:- player(P), X = #count { R, D: played(P, R, D) }, X != nRounds.
% Any deck in any room must be played by 3-4 players.
legal_player_count(3;4).
:- room(R), deck(D),
   X = #count { P: played(P, R, D) },
   X > 0,
   not legal_player_count(X).
% Writers cannot play their own decks.
:- writer(P, D), played(P, _, D).
% At least one non-playing player per room.
:- deck(D),
   Playing = #count { P, R: played(P, R, D) },
   Rooms = #count { R: played(_, R, D) },
   nPlayers - Playing < Rooms.
% Input points(P, R, D, X) to report points.
% winner(P, R, D) :- points(P, R, D, X), X = #max { Y : points(_, R, D, Y) }.
% Total number of decks played throughout the night (for minimisation?)
decks(X) :- X = #count { D: played(_, _, D) }.
% Total number of games played together by the same players (for minimisation)
% The total sum of this predicate is invariant
% Minimisation should take place using a superlinear value (e.g. the square)
common_games(P1, P2, X) :- player(P1), player(P2), P1 != P2,
    X = #count { R, D: played(P1, R, D), played(P2, R, D) }, X > 0.
% For example:
% common_game_penalty(X) :- X = #sum { Y*Y, P1, P2 : common_games(P1, P2, Y) }.
% Another rank-based penalty needs to be added once the rank mechanics are there
% Then the 2 types of penalties need to be combined and / or passed to the optimiser
#show decks/1.
#show played/3.
#show common_games/3.
I am tackling a problem and have gotten stuck, so I decided to ask here. The problem is: given n teams and their respective points in a World Cup group, determine whether that set of points is possible. Each team plays every other team in the group once, so each team plays (n-1) matches, for 1 <= n <= 5. In a match, the winning team gets 3 points, the losing team 0 points, and on a tie each team gets 1 point. My idea for a solution is to use a 2D (n x n) array which acts like a scoreboard.
     A  B  C  D  E
A    X  1  3  0  1
B    1  X  0  1  0
C    0  3  X  0  3
D    3  1  3  X  1
E    1  3  0  1  X
Every column and row represents one distinct team, multiplication-table style (the team in column 1 (A) is the same as the team in row 1 (A), and so on). Note that the letters above and beside the array (A, B, ...) are not part of it; they are just for clarity. Every intersection of a row and a column represents a match, except intersections of the same column and row. E.g. column 1, row 2 means team A tied against team B, and column 2, row 1 means team B tied against team A.
My idea is to use a recursive brute-force algorithm to check every possibility. I have developed one; it works well enough for 4 teams, but not so well for 5. The algorithm starts at column 2, row 1, checks one of the 3 possibilities, then crawls to the cell below and the cell to the right of it, and repeats through the second-to-last column and the last row.
You may have noticed that the X diagonal acts like a mirror: when we set column 1, row 3 (A against C) to a win, we must simultaneously set column 3, row 1 (C against A) to a loss. Here is part of my code:
/*
 * scoreBoard[][] array <- the array which I have described above
 * scores[] array       <- stores the given scores
 * x <- current column
 * y <- current row
 * n <- number of teams
 */
bool Solve(int x, int y, int scoreBoard[][5], int scores[], int n)
{
    bool con1, con2, con3;
    if ((x < y) && (y < n)) {
        scoreBoard[x][y] = 3;  // win-lose - possibility 1
        scoreBoard[y][x] = 0;
        // crawl to the right-side and bottom-side cells
        con1 = Solve(x + 1, y, scoreBoard, scores, n) || Solve(x, y + 1, scoreBoard, scores, n);
        scoreBoard[x][y] = 0;  // lose-win - possibility 2
        scoreBoard[y][x] = 3;
        con2 = Solve(x + 1, y, scoreBoard, scores, n) || Solve(x, y + 1, scoreBoard, scores, n);
        scoreBoard[x][y] = 1;  // tied - possibility 3
        scoreBoard[y][x] = 1;
        // crawl to the right-side and bottom-side cells
        con3 = Solve(x + 1, y, scoreBoard, scores, n) || Solve(x, y + 1, scoreBoard, scores, n);
        return con1 || con2 || con3;
    } else {
        if ((x == y) && (y == n - 1))
            return CheckArr(scoreBoard, scores, n);  // check whether the current array matches the given scores
        else
            return 0;
    }
}
I presume the problem is that this algorithm does not cover every possibility, because it only works on some 5-team settings (it gives the expected output for some and not for others), but I haven't managed to fix it.
Thanks in advance for any suggestions and helpful links; I'll also welcome any other strategy. I hope this is clear enough.
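For reference, a minimal brute-force sketch of the approach described above, in Python rather than C (function and variable names are illustrative): it enumerates all 3 outcomes for each of the n(n-1)/2 matches in the upper triangle, at most 3^10 = 59,049 cases for n = 5, and checks whether any assignment reproduces the given points.
from itertools import combinations, product

def is_possible(scores):
    # Return True if the given group scores can arise from some set of match outcomes.
    n = len(scores)
    matches = list(combinations(range(n), 2))   # every pair of teams plays once
    # outcome per match: (3, 0) first team wins, (0, 3) second team wins, (1, 1) tie
    for outcomes in product([(3, 0), (0, 3), (1, 1)], repeat=len(matches)):
        points = [0] * n
        for (i, j), (pi, pj) in zip(matches, outcomes):
            points[i] += pi
            points[j] += pj
        if points == list(scores):
            return True
    return False

print(is_possible([5, 2, 6, 8, 5]))   # the A..E scoreboard above -> True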
I came across this question recently; I thought about it a lot but could not find a solution:
Given a list of n players with strengths [s1 , s2, s3 ... sn], create two teams (A and B) of size k (k ≤ n/2), so that:
the total strength is maximized
the difference in strength is minimized
Strength(A) = sum of strength of all players in team A,
Strength(B) = sum of strength of all players in team B,
Total strength = strength(A) + strength (B),
Difference in strength = abs(strength(A) - strength(B)).
In case of same total strength, select the combination with the minimum difference in strength.
Example:
n = 5; k = 2
players: a b c d e
strength: 4 4 2 2 5
Option   Team A   Team B   Strength   Difference
1        [a,b]    [c,e]    15         1
2        [a,b]    [d,e]    15         1
3        [a,c]    [b,e]    15         3
4        [a,d]    [b,e]    15         3
5        [a,c]    [b,d]    12         0
6        [a,d]    [c,b]    12         0
7        [a,d]    [c,e]    13         1
Option 1 and option 2 are winning combinations as their total strength is 15 (maximum), while their difference in strength is closer to the minimum than options 3 and 4.
My thoughts:
If 2k = n, total strength is taken care of already (because all players will be involved) and we just need to find two halves such that the difference between their sums is minimal. But how to find that efficiently?
If 2k < n, we can probably sort the strength array and remove the n-2k smallest elements, and then we are back to the 2k = n situation.
As mentioned in the comments, this is a variant of the Partitioning Problem, which itself is a special case of the Subset Sum Problem. These indeed have dynamic programming and approximation solutions, which you may be able to adapt to this problem. But the specific requirement of two equal-sized teams means that non-dp and non-greedy solutions are possible too.
Firstly, optimizing for total strength before taking the difference in strength between the teams into account, means that when the number of players n is odd, the weakest player can be discarded, and the size of the teams k is always half of n. If k is given as part of the input, then take the 2×k strongest players and discard the others.
(You could wonder whether the question was in fact to optimize for strength difference first, and then for total strength. If you find two subsets with difference x, then finding another two subsets with a similar difference y would mean you can combine them into two larger subsets with a smaller difference of |x-y|. This is an obvious basis for a dynamic programming approach.)
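As a sketch of that dynamic-programming route (illustrative Python names; this is separate from the swap-based approach described next): track which sums are reachable using exactly c of the 2k strongest players, then pick the reachable sum for c = k that is closest to half the total.
def best_split(strengths, k):
    # Balanced-partition DP: split the 2k strongest players into two teams of k
    # with maximal total strength and minimal strength difference.
    players = sorted(strengths, reverse=True)[:2 * k]   # keep the 2k strongest
    total = sum(players)
    reachable = [set() for _ in range(k + 1)]           # reachable[c] = sums using exactly c players
    reachable[0].add(0)
    for p in players:
        for c in range(k, 0, -1):                       # descend so each player is used at most once
            reachable[c] |= {s + p for s in reachable[c - 1]}
    team_a = min(reachable[k], key=lambda s: abs(total - 2 * s))
    return total, abs(total - 2 * team_a)               # (total strength, minimal difference)

print(best_split([4, 4, 2, 2, 5], 2))                   # -> (15, 1), matching options 1 and 2 above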
Alternative to dynamic programming solution
Let's look at the example of splitting n=23 players (so 22 players after the weakest is discarded) into two teams of 11 players. If we used brute force to look at every option, we'd keep one of the players in team A (to avoid duplicate solutions) and try every combination of 10 additional players from the 21 others to complete team A. That means there are:
(n-1 choose k-1) = (21 choose 10) = 352,716 unique options
While this is a feasible number of options to check, larger numbers of players would quickly result in huge numbers of options; e.g. splitting 44 players into two teams of 22 would lead to more than 10^12 options.
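These counts are plain binomial coefficients; for reference, a quick check in Python (math.comb requires Python 3.8+):
from math import comb
print(comb(21, 10))   # 352,716 options for 22 players
print(comb(43, 21))   # 1,052,049,481,860 options for 44 players (> 10^12)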
We can drastically reduce the number of options that need to be checked, by starting with an initial split into two teams, and then checking which 1 player, 2 players, ... , 10 players we'd need to swap to reduce the strength difference the most. This can be done without having to consider swapping each possible subset of team A with each possible equal-sized subset of team B.
We could do the initial split into teams randomly, but if we sort the players by strength, and alternatingly add a player to team A or team B, this should limit the initial difference in strength D, which in turn should make it more likely that a solution with a limited number of swaps is found quickly (if there are several perfect solutions).
Then we consider swapping 1 player; we make a list of all players in team A (except the first one, which we'll always keep in team A to avoid duplicate solutions) and sort it from weakest to strongest. We also make a list of all players in team B, and sort it from weakest to strongest. Then we iterate over both lists at the same time, at each step moving to the next value in the list that brings the difference in strength between the current player from team A and team B closer to the initial value of D.
Note that we don't compare every player in the first list with every player in the second list in a nested loop. We only iterate over the lists once (this is similar to finding the two integers with the smallest difference in two arrays; see e.g. here).
If we come across a pair of players that, when swapped, decreases D, we store this pair and set the new value of D.
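The single pass is the same two-pointer idea used to find the closest pair of values across two sorted arrays; here is a small Python sketch of it for the swap-1 case (hypothetical names; team B is assumed to be the stronger team):
def best_single_swap(team_a, team_b, init_diff):
    # team_a: strengths of the weaker team, team_b: strengths of the stronger team,
    # init_diff: strength(team_b) - strength(team_a) >= 0.
    a = sorted(team_a)
    b = sorted(team_b)
    best = (init_diff, None)
    i = j = 0
    while i < len(a) and j < len(b):
        change = 2 * (b[j] - a[i])      # how much the difference shrinks if a[i] and b[j] are swapped
        new_diff = abs(init_diff - change)
        if new_diff < best[0]:
            best = (new_diff, (a[i], b[j]))
        if change > init_diff:          # overshot: try a stronger player from team A
            i += 1
        else:                           # undershot: try a stronger player from team B
            j += 1
    return best                         # (best difference, pair to swap or None)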
Now we consider swapping 2 players; we make a list of every pair of 2 players from team A (excluding player 1 again) and a list of every pair of players from team B, and sort the lists from weakest to strongest (adding up the strengths of the two players). Then we iterate over both lists again, looking for a pair of pairs that, when swapped, decreases the value of D.
We go on doing the same for sets of 3, 4, ... 10 players. For the example of 23 players, the size of these lists would be:
          team A   team B
swap 1        10       11
swap 2        45       55
swap 3       120      165
swap 4       210      330
swap 5       252      462
swap 6       210      462
swap 7       120      330
swap 8        45      165
swap 9        10       55
swap 10        1       11
          ------   ------
            1023     2046
So, we'd find the optimal swap that results in two teams with the smallest difference in strength after at most 3,069 steps instead of 352,716 steps for the brute-force algorithm.
(We could further speed up the cases where there are several perfect solutions by checking swap sizes in the order 10, 1, 9, 2, 8, 3, 7, 4, 6, 5 to find a solution without having to generate the larger lists.)
The example of splitting 44 players into two teams of 22 would take at most 6,291,453 steps instead of more than 10^12 steps. In general, the maximum number of steps is:
2^k + 2^(k-1) - 3
and the time complexity is:
O(2^k)
which doesn't look great, but is much better than the brute-force algorithm with its O(C(n-1,k-1)) complexity. Also, as soon as a solution with difference 0 or 1 is found, there is no need to look at further options, so a solution can be found after considering swaps of only 1 or a handful of players, and the average case complexity is much better than the worst-case complexity (this is discussed further below.)
Code example
Here's a Javascript code snippet as a proof of concept. Selections of players are represented by a bit array (you could also use an integer as a bit pattern). You'll see that the change in team strength after different swaps is calculated, but only one selection of players is actually swapped at the end; so it's not a greedy algorithm that gradually improves the strength difference by performing several swaps.
function compareStrength(a, b) { // for sorting players and selections
return a.strength - b.strength;
}
function teamStrength(players) {
return players.reduce(function(total, player) {return total + player.strength;}, 0);
}
function selectionStrength(players, selection) {
return players.reduce(function(total, player, index) {return total + player.strength * selection[index];}, 0);
}
function nextPermutation(selection) { // reverse-lexicographical next permutation of a bit array
var max = true, pos = selection.length, set = 1;
while (pos-- && (max || !selection[pos])) if (selection[pos]) ++set; else max = false;
if (pos < 0) return false;
selection[pos] = 0;
while (++pos < selection.length) selection[pos] = set-- > 0 ? 1 : 0;
return true;
}
function swapPlayers(wTeam, sTeam, wSelect, sSelect) {
for (var i = 0, j = 0; i < wSelect.length; i++) {
if (wSelect[i]) {
while (!sSelect[j]) ++j;
var temp = wTeam[i];
wTeam[i] = sTeam[j];
sTeam[j++] = temp;
}
}
}
function equalTeams(players) {
// SORT PLAYERS FROM WEAKEST TO STRONGEST
players.sort(compareStrength);
// INITIAL DISTRIBUTION OF PLAYERS INTO WEAKER AND STRONGER TEAM (ALTERNATING)
var wTeam = [], sTeam = [];
for (var i = players.length % 2; i < players.length; i += 2) {
wTeam.push(players[i]);
sTeam.push(players[i + 1]);
}
var teamSize = wTeam.length;
// CALCULATE INITIAL STRENGTH DIFFERENCE
var initDiff = teamStrength(sTeam) - teamStrength(wTeam);
var bestDiff = initDiff;
var wBestSel = [], sBestSel = [];
// CHECK SELECTIONS OF EVERY SIZE
for (var selSize = 1; selSize < teamSize && bestDiff > 1; selSize++) {
var wSelections = [], sSelections = [], selection = [];
// CREATE INITIAL SELECTION BIT-ARRAY FOR WEAKER TEAM (SKIP PLAYER 1)
for (var i = 0; i < teamSize; i++)
selection[i] = (i > 0 && i <= selSize) ? 1 : 0;
// STORE ALL SELECTIONS FROM WEAKER TEAM AND THEIR STRENGTH
do wSelections.push({selection: selection.slice(), strength: selectionStrength(wTeam, selection)});
while (nextPermutation(selection));
// SORT SELECTIONS FROM WEAKEST TO STRONGEST
wSelections.sort(compareStrength);
// CREATE INITIAL SELECTION BIT-ARRAY FOR STRONGER TEAM
for (var i = 0; i < teamSize; i++)
selection[i] = (i < selSize) ? 1 : 0;
// STORE ALL SELECTIONS FROM STRONGER TEAM AND THEIR STRENGTH
do sSelections.push({selection: selection.slice(), strength: selectionStrength(sTeam, selection)});
while (nextPermutation(selection));
// SORT SELECTIONS FROM WEAKEST TO STRONGEST
sSelections.sort(compareStrength);
// ITERATE OVER SELECTIONS FROM BOTH TEAMS
var wPos = 0, sPos = 0;
while (wPos < wSelections.length && sPos < sSelections.length) {
// CALCULATE STRENGTH DIFFERENCE IF THESE SELECTIONS WERE SWAPPED
var wStrength = wSelections[wPos].strength, sStrength = sSelections[sPos].strength;
var diff = Math.abs(initDiff - 2 * (sStrength - wStrength));
// SET NEW BEST STRENGTH DIFFERENCE IF SMALLER THAN CURRENT BEST
if (diff < bestDiff) {
bestDiff = diff;
wBestSel = wSelections[wPos].selection.slice();
sBestSel = sSelections[sPos].selection.slice();
// STOP SEARCHING IF PERFECT SOLUTION FOUND (DIFFERENCE 0 OR 1)
if (bestDiff < 2) break;
}
// ADVANCE TO NEXT SELECTION FROM WEAKER OR STRONGER TEAM
if (2 * (sStrength - wStrength) > initDiff) ++wPos; else ++sPos;
}
}
// PERFORM SWAP OF BEST PAIR OF SELECTIONS FROM EACH TEAM
swapPlayers(wTeam, sTeam, wBestSel, sBestSel);
return {teams: [wTeam, sTeam], strengths: [teamStrength(wTeam), teamStrength(sTeam)]};
}
var players = [{id:"Courtois", strength:65}, {id:"Mignolet", strength:21}, {id:"Casteels", strength:0},
{id:"Alderweireld", strength:83}, {id:"Vermaelen", strength:69}, {id:"Kompany", strength:82},
{id:"Vertonghen", strength:108}, {id:"Meunier", strength:30}, {id:"Boyata", strength:10},
{id:"Dendoncker", strength:6}, {id:"Witsel", strength:96}, {id:"De Bruyne", strength:68},
{id:"Fellaini", strength:87}, {id:"Carrasco", strength:30}, {id:"Tielemans", strength:13},
{id:"Januzaj", strength:9}, {id:"Dembele", strength:80}, {id:"Chadli", strength:51},
{id:"Lukaku", strength:75}, {id:"E. Hazard", strength:92}, {id:"Mertens", strength:75},
{id:"T. Hazard", strength:13}, {id:"Batshuayi", strength:19}];
var result = equalTeams(players);
for (var t in result.teams) {
for (var i in result.teams[t]) {
document.write(result.teams[t][i].id + " (" + result.teams[t][i].strength + ") ");
}
document.write("<br>→ team strength = " + result.strengths[t] + "<br><br>");
}
Probability of finding a perfect solution
When the algorithm finds a perfect solution (with a strength difference of 0 or 1), this cannot be improved further, so the algorithm can stop looking at other options and return the solution. This of course means that, for some input, a solution can be found almost instantly, and the algorithm can be used for a large number of players.
If there is no perfect solution, the algorithm has to run its full course to be sure it has found the best solution. With a large number of players, this could take a long time and use a lot of memory space (I was able to run a C++ version for up to 64 players on my computer).
Although it's straightforward to craft input that has no perfect solution (such as one player with strength 3 and the other players all with strength 1), testing with random data showed that the number of players for which almost all random input has a perfect solution is surprisingly low (similar to the Birthday Paradox).
With n=24 (two teams of 12) or more, ten million instances of random input provided not a single case where the strength difference between the teams was greater than 1, whether using 10, 100, 1000 or 10000 different integer values to express the strength of each player.
I am unable to generate a histogram in MATLAB from the following arrays.
% initialising the five arrays to hold the averages of five probabilities of interests
ar1=zeros(1,100);
ar2=zeros(1,100);
ar3=zeros(1,100);
ar4=zeros(1,100);
ar5=zeros(1,100);
%initialising the variable to count the number of experiments
k=1;
while k<=100,
%generating the required random numbers for the proble
%pi is the probablity in winning the ith game
p1=rand(1);
p2=rand(1)*p1;
p3=rand(1)*p2;
p4=rand(1)*p3;
%initialising variable to count the number of tournaments
count_tour=1;
%initialising the variables in order to get the sum of all probabilties of interests and then we can get our respective averages
t1=0; t2=0; t3=0; t4=0; t5=0;
%starting the loop for 50 tournaments
while count_tour<=50,
%Total probabilties of winning the ith game
W1=p1;
W2=p1*(1+p2-p1);
W3=(p1*p2*p3)+((p1*p1)*(2-p1-p2))+((p4)*(1-p1)*(1-p1));
%probabilty that player had won the first game given that he won the second game
W4=(p1*p2)/W2;
%probabilty of winning all three games
W5=p1*p2*p3;
%getting the sum of all probabilies in 50 tournaments
t1=t1+W1;
t2=t2+W2;
t3=t3+W3;
t4=t4+W4;
t5=t5+W5;
count_tour=count_tour+1;
end
%getting the averages of probabilties of interest in 50 tournaments
av1=t1/50;
av2=t2/50;
av3=t3/50;
av4=t4/50;
av5=t5/50;
ar1(k)=ar1(k)+av1;
ar2(k)=ar2(k)+av2;
ar3(k)=ar3(k)+av3;
ar4(k)=ar4(k)+av4;
ar5(k)=ar5(k)+av5;
k=k+1;
end
figure();
h1=histogram(ar1);
h2=histogram(ar2);
h3=histogram(ar3);
h4=histogram(ar4);
h5=histogram(ar5);
Assuming that the section calculating the arrays ar1, ar2, ar3, ar4, ar5 is correct, and also considering the update proposed in the answer from #EBH, the problem could be in the way you plot the histograms:
you first open a figure
then you call, in sequence, the function histogram 5 times
This might work for the first histogram; however, the second one will be plotted on the same figure and will replace the first one, and the same happens for the others.
Possible solutions could be:
to have each histogram on a dedicated figure
to have all the histograms in one figure
In the first case it is sufficient to call figure before each call to histogram.
In the second case you can use the function subplot to create 5 axes in one figure on which to plot the histograms.
In the following, you can find a possible implementation of the proposed approach.
Two flags are used to control the drawing:
same_xy_lim: 1 => set the same xlim, ylim for all the axes
             0 => do not modify the xlim, ylim
multi_fig:   1 => plot each histogram in a separate figure
             0 => plot all the histograms in a single figure using subplot
The plotting section of the script could be updated as follows:
% Define and set the flags to control the drawing mode:
% same_xy_lim: 1 => set the same xlim, ylim for all the axes
% 0 => do not modify the xlim, ylim
% multi_fig: 1 => plot each histogram in a separate figure
% 0 => plot all the histograms in a single figure using
% subplot
same_xy_lim=1;
multi_fig=1;
% figure();
if(multi_fig)
figure
else
subplot(3,2,1)
end
h1=histogram(ar1);
if(same_xy_lim)
xlim([0 1])
ylim([0 100])
end
if(multi_fig)
figure
else
subplot(3,2,2)
end
h2=histogram(ar2);
if(same_xy_lim)
xlim([0 1])
ylim([0 100])
end
if(multi_fig)
figure
else
subplot(3,2,3)
end
h3=histogram(ar3);
if(same_xy_lim)
xlim([0 1])
ylim([0 100])
end
if(multi_fig)
figure
else
subplot(3,2,4)
end
h4=histogram(ar4);
if(same_xy_lim)
xlim([0 1])
ylim([0 100])
end
if(multi_fig)
figure
else
subplot(3,2,5)
end
h5=histogram(ar5);
if(same_xy_lim)
xlim([0 1])
ylim([0 100])
end
This generates, depending on the setting of the above-mentioned flags:
All in one figure
One histogram per figure
Hope this helps,
Qapla'
Here is a more correct, simple, readable and working version of your code:
% initialising the five arrays to hold the averages of five probabilities
% of interests
ar = zeros(100,5);
for k = 1:100
% generating the required random numbers for the proble
% pi is the probablity in winning the ith game
p = cumprod(rand(4,1));
% initialising the variables in order to get the sum of all probabilties of interests and then we can get our respective averages
t = zeros(1,5);
% starting the loop for 50 tournaments
for count_tour = 1:50,
% Total probabilties of winning the ith game
W(1) = p(1);
W(2) = p(1)*(1+p(2)-p(1));
W(3) = p(1)*p(2)*p(3)+((p(1)*p(1))*(2-p(1)-p(2)))+((p(4))*(1-p(1))*(1-p(1)));
% probabilty that player had won the first game given that he won the second game
W(4) = (p(1)*p(2))/W(2);
% probabilty of winning all three games
W(5) = p(1)*p(2)*p(3);
% getting the sum of all probabilies in 50 tournaments
t = t+W;
end
% getting the averages of probabilties of interest in 50 tournaments
av = t./50;
ar(k,:)=ar(k,:)+av;
end
figure();
hold on
for k = 1:size(ar,2)
h(k) = histogram(ar(:,k)); % column k holds the 100 averages of the k-th probability of interest
end
hold off
Which results in (for example):
In fact, your inner loop is not needed at all (it does nothing), and the outer loop could be eliminated using element-wise arithmetic, so this code can be shortened to a more efficient and compact version:
% generating the required random numbers for the proble
% pi is the probablity in winning the ith game
p = cumprod(rand(100,4),2); % cumulative product along each row (one experiment per row)
% Total probabilties of winning the ith game
W(:,1) = p(:,1);
W(:,2) = p(:,1).*(1+p(:,2)-p(:,1));
W(:,3) = p(:,1).*p(:,2).*p(:,3)+((p(:,1).*p(:,1)).*(2-p(:,1)-p(:,2)))+...
((p(:,4)).*(1-p(:,1)).*(1-p(:,1)));
% probabilty that player had won the first game given that he won the second game
W(:,4) = (p(:,1).*p(:,2))./W(:,2);
% probabilty of winning all three games
W(:,5) = p(:,1).*p(:,2).*p(:,3);
figure();
hold on
for k = 1:size(W,2)
h(k) = histogram(W(:,k));
end
hold off
without changing any computation in your code, just eliminating unnecessary loops and variables.
I was asked this question in interview.
Given a list of 'N' coins, their values being in an array A[], return the minimum number of coins required to sum to 'S' (you can use as many coins you want). If it's not possible to sum to 'S', return -1
Note here i can use same coins multiple times.
Example:
Input #00:
Coin denominations: { 1,3,5 }
Required sum (S): 11
Output #00:
3
Explanation:
The minimum number of coins required is 3: 5 + 5 + 1 = 11.
Is there a better approach than sorting the array and working from both ends?
This is the change-making problem.
A simple greedy approach, which you seem to be thinking of, won't always produce an optimal result. If you elaborate a bit on what exactly you mean by starting from both ends, I might be able to come up with a counter-example.
There is a dynamic programming approach, taken from here:
Let C[m] be the minimum number of coins of denominations d1,d2,...,dk needed to make change for amount m. In the optimal solution to making change for amount m, there must exist some first coin di, where di ≤ m. Furthermore, the remaining coins in the solution must themselves be the optimal solution to making change for m - di.
Thus, if di is the first coin in the optimal solution to making change for amount m, then C[m] = 1 + C[m - di], i.e. one di coin plus C[m - di] coins to optimally make change for the amount m - di. We don't know which coin di is the first coin; however, we may check all n such possibilities (subject to the constraint that di ≤ m), and the value of the optimal solution must correspond to the minimum value of 1 + C[m - di], by definition.
Furthermore, when making change for 0, the value of the optimal solution is clearly 0 coins. We thus have the following recurrence.
C[p] = 0                                        if p = 0
C[p] = min over i with di ≤ p of { 1 + C[p - di] }   if p > 0
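A straightforward bottom-up implementation of this recurrence (a Python sketch; it returns -1 when the sum cannot be made):
def min_coins(denominations, target):
    # C[p] = minimum number of coins needed for amount p (inf if unreachable)
    INF = float("inf")
    C = [0] + [INF] * target
    for p in range(1, target + 1):
        for d in denominations:
            if d <= p and C[p - d] + 1 < C[p]:
                C[p] = C[p - d] + 1
    return C[target] if C[target] != INF else -1

print(min_coins([1, 3, 5], 11))   # -> 3 (5 + 5 + 1)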
Pathfinding algorithms (Dijkstra, A*, meet in the middle, etc.) could also be suitable for this, on a graph like this:
        0
    1 / | \ 5
     /  | 3 \
    /   |    \
   1    3     5
1 / | \ 5    ...
 /  | 3
2   4   6
    ....
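In that view, amounts are nodes and each coin is an edge, so a breadth-first search from 0 yields the minimum number of coins as the shortest path to S (a small illustrative Python sketch):
from collections import deque

def min_coins_bfs(denominations, target):
    # Shortest path from amount 0 to target, where following an edge adds one coin.
    dist = {0: 0}
    queue = deque([0])
    while queue:
        amount = queue.popleft()
        if amount == target:
            return dist[amount]
        for d in denominations:
            nxt = amount + d
            if nxt <= target and nxt not in dist:
                dist[nxt] = dist[amount] + 1
                queue.append(nxt)
    return -1   # target not reachable

print(min_coins_bfs([1, 3, 5], 11))   # -> 3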
Another way is recursive bisection: if we cannot reach the sum S with one coin, we recursively try to make the amounts (S/2, S/2), ..., (S-1, 1) until we find a suitable coin or reach S = 1.