SPOJ: Can you answer these queries I

I am trying to solve this problem on SPOJ. I found it in the segment tree section, so I am pretty sure there is a solution that uses a segment tree, but I am unable to come up with the metadata that should be stored in each tree node. The maximum subarray sum of a range can be computed with Kadane's algorithm, but how do I compute it using a segment tree? If I store just the output of the algorithm for a range, that is correct for a query covering exactly that range, but the parent nodes cannot combine those values. If I also store the negative-sum prefix and the negative-sum suffix, I pass some of the test cases, but the solution is still not completely correct. Please give me some pointers on how to choose the metadata for this particular problem.
Thanks for helping.

You can solve it by building a segment tree on the prefix sums

    sum[i] = sum[i - 1] + a[i]

and then keeping the following information in a node ([x, y] being the interval associated with the node):

    node.min  = the minimum sum[i], x <= i <= y
              = minimum(node.left.min, node.right.min)
    node.max  = same, but with maximum
    node.best = maximum(node.left.best,
                        node.right.best,
                        node.right.max - node.left.min)
Basically, the best field gives you the sum of the maximum-sum subarray in the associated interval. This is either one of the maximum-sum subarrays of the two child nodes, or a subarray that crosses both child intervals, which is obtained by subtracting the minimum in the left child from the maximum in the right child. We do the same thing in a possible linear solution: for each i, find the minimum sum[j] with j < i, then compare sum[i] - sum[j] with the global max.
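If it helps to see the merge concretely, here is a minimal C sketch of the node and the combination rule above (the struct and function names are mine):

    typedef struct {
        long long min;   /* minimum prefix sum sum[i] in the node's interval */
        long long max;   /* maximum prefix sum sum[i] in the node's interval */
        long long best;  /* best subarray sum obtainable inside the interval */
    } node_t;

    static long long max2(long long a, long long b) { return a > b ? a : b; }

    /* Combine two adjacent children into their parent. */
    static node_t merge(node_t left, node_t right) {
        node_t parent;
        parent.min  = left.min < right.min ? left.min : right.min;
        parent.max  = max2(left.max, right.max);
        parent.best = max2(max2(left.best, right.best),
                           right.max - left.min); /* subarray crossing the midpoint */
        return parent;
    }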
Now, to answer a query you will need to consider the nodes whose associated intervals make up your queried interval and do something similar to how we built the tree. You should try to figure it out on your own, but let me know if you get stuck somewhere.

Related

Solving Max Double Slice: a Kadane's Algorithm Variation

I was trying to sharpen my skills by solving the Codility problems. I reached this one: https://codility.com/programmers/lessons/9-maximum_slice_problem/max_double_slice_sum/
I actually theoretically understand the solution:
1. Use Kadane's algorithm on the array and store the sum at every index.
2. Reverse the array and do the same.
3. Find a point where the sum of both is maximal by looping over both result sets one at a time.
4. The max is the max double slice.
My question is not so much about how to solve the problem as about how one imagines that this is the way the problem can be solved. There are at least three different concepts that have to be used:
The understanding that an array of all positive (or all negative) elements is a different case from one containing a mix of positive and negative elements.
Kadane's Algorithm
Going over the array forward and reversed.
Despite all of this, Codility has tagged this problem as "Painless".
My question is: am I missing something? It seems unlikely that I could solve this problem without knowing some of these concepts.
Is there a technique where I can start from scratch, with very basic concepts, and work my way up to the concepts required to solve this problem? Or am I expected to know these concepts before even starting?
How can I prepare myself to solve such problems in the future when I don't know the required concepts?
I think you are overthinking the problem, that's why you find it more difficult than it is:
The understanding that an array of all positive (or all negative) elements is a different case from one containing a mix of positive and negative elements.
It doesn't have to be a different case. You might be able to come up with an algorithm that doesn't care about this distinction and works anyway.
You don't need to start by understanding this distinction, so don't think about it unless and until you have to.
Kadane's Algorithm
Don't think of an algorithm, think of what the problem requires. Usually that 10+ paragraph problem statement can be expressed in much less.
So let's see how we can simplify the problem statement.
It first defines a slice as a triplet (x, y, z), whose value is the sum of the elements starting at x+1, ending at z-1, and not counting the element at y.
Then it asks for the maximum sum slice. If we need the maximum slice, do we need x and z in the definition? We might as well let it start and end anywhere as long as it gets us the maximum sum, no?
So redefine a slice as a subset of the array that starts anywhere, goes up to some y-1, continues from y+1 and ends anywhere. Much simpler, isn't it?
Now you need the maximum such slice.
Now you might be thinking that you need, for each y, the maximum sum subarray that starts at y+1 and the maximum sum subarray that ends at y-1. If you can find these, you can update a global max for each y.
So how do you do this? This should now point you towards Kadane's algorithm, which does half of what you want: it computes the maximum sum subarray ending at some x. So if you compute it from both sides, for each y, you just have to find:
kadane(y - 1) + kadane_reverse(y + 1)
And compare with a global max.
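If you want to see this end to end, here is a minimal C sketch of the two Kadane passes and the combination step (function and array names are mine; C99 VLAs are used for brevity, and N >= 3 is assumed as in the problem statement):

    /* Maximum double slice: best fwd[y-1] + bwd[y+1] over all middle points y. */
    long long max_double_slice(const int *A, int N) {
        long long fwd[N], bwd[N];
        for (int i = 0; i < N; i++) fwd[i] = bwd[i] = 0;
        /* Kadane from the left: best non-negative sum ending at i */
        for (int i = 1; i < N - 1; i++) {
            long long t = fwd[i - 1] + A[i];
            fwd[i] = t > 0 ? t : 0;
        }
        /* Kadane from the right: best non-negative sum starting at i */
        for (int i = N - 2; i > 0; i--) {
            long long t = bwd[i + 1] + A[i];
            bwd[i] = t > 0 ? t : 0;
        }
        /* For every middle point y, combine the two halves. */
        long long best = 0;
        for (int y = 1; y < N - 1; y++) {
            long long s = fwd[y - 1] + bwd[y + 1];
            if (s > best) best = s;
        }
        return best;
    }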
No special cases for negatives and positives. No thinking "Kadane's!" as soon as you see the problem.
The idea is to simplify the requirement as much as possible without changing its meaning. Then you use your algorithmic and deductive skills to reach a solution. These skills are honed with time and experience.

How can I tell if a particular heuristic is admissible, and why mine is not?

The definition of an admissible heuristic is one that "never overestimates the cost of reaching the goal".
I am attempting to write a Pac-Man heuristic for finding the fastest method to eat dots, some of which are randomly scattered across the grid. However it is failing my admissibility test.
Here are the steps of my algorithm:

    sum = 0, list = grid.getListofDots()
    1. Find the nearest dot from the starting position (or from the previously removed dot) using Manhattan distance
    2. Add that distance to sum
    3. Remove the dot from the list of remaining dots
    4. Repeat steps 1-3 until the list is empty
    5. Return sum
Since I'm using Manhattan distance, shouldn't this be admissible? If not, are there any suggestions or other approaches to make this heuristic admissible?
As has been said, your heuristic isn't admissible. Here is another counterexample: put Pac-Man at x = 0 on an open row with dots at x = 1, x = 3 and x = -2. Your greedy sum is 1 + 2 + 5 = 8 (nearest dot at 1, then 3, then -2), but the best path (collect -2 first, then sweep right through 1 to 3) costs only 2 + 5 = 7, so the heuristic overestimates.
A very, very simple admissible heuristic is:
number_of_remaining_dots
but it isn't very tight. A small improvement is:
manhattan_distance_to_nearest_dot + dots_left_out
Other possibilities are:
distance_to_nearest_dot // Found via Breadth-first search
or
manhattan_distance_to_farthest_dot
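As a concrete illustration of that last option, here is a small C sketch (types and names are mine). Pac-Man has to travel at least as far as the farthest remaining dot, so the value never overestimates the true cost:

    typedef struct { int x, y; } point_t;

    static int abs_i(int v) { return v < 0 ? -v : v; }

    /* Manhattan distance to the farthest remaining dot: an admissible
       (never overestimating) heuristic for eating all dots. */
    int farthest_dot_heuristic(point_t pacman, const point_t *dots, int n_dots) {
        int best = 0;
        for (int i = 0; i < n_dots; i++) {
            int d = abs_i(dots[i].x - pacman.x) + abs_i(dots[i].y - pacman.y);
            if (d > best) best = d;
        }
        return best;
    }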

Fast algorithm mapping int to monotonically increasing int subset

I have encountered variations of this problem multiple times, and most recently it became a bottleneck in my arithmetic coder implementation. Given N (<= 256) segments of known non-negative size S_i laid out in order starting from the origin, and for a given x, I want to find n such that

    S_0 + S_1 + ... + S_{n-1} <= x < S_0 + S_1 + ... + S_n
The catch is that lookups and updates are done at about the same frequency, and almost every update is in the form of increasing the size of a segment by 1. Also, the bigger a segment, the higher the probability it will be looked up or updated again.
Some sort of tree seems like the obvious approach, but I have been unable to come up with any tree implementation that satisfactorily takes advantage of the known domain-specific details.
Given the relatively small size of N, I also tried linear approaches, but they turned out to be considerably slower than a naive binary tree (even after some optimization, like starting from the back of the list for numbers above half the total).
Similarly, I tested introducing an intermediate step that remaps values in such a way as to keep segments ordered by size, to make access faster for the most frequently used, but the added overhead exceeded gains.
Sorry for the unclear title -- despite it being a fairly basic problem, I am not aware of any specific names for it.
I suppose some BST would do. You could add a numeric member (int or long) to each node that keeps the sum of the values of all left descendants. Then you can find each item in approximately logarithmic time, and when an item is added, removed or modified you only have to update its ancestors on the way back up from the recursion. You could use a self-organizing tree structure, for example an AVL tree to keep the worst-case search optimal, or a splay tree to speed up searches for the most often used items. Take care to update the left-subtree sums during rebalancing or splaying.
You could use a binary tree where each node n contains two integers A_n and U_n, where initially

    A_n = S_0 + ... + S_n and U_n = 0.

Let, at any fixed subsequent time, T_n = S_0 + ... + S_n.

When looking for the place of a query x, you walk down the tree, knowing that for each node m the current value of T_m is

    T_m = A_m + U_m + sum of U_p over the ancestors p of m whose right child was taken on the way down to m.

This answers a lookup in O(log N).

To update the n-th interval (increasing its size by y), you just look for it in the tree, increasing the value of U_m by y for each node m that you visit along the way. This also makes an update O(log N).
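For what it's worth, a Fenwick (binary indexed) tree, a different structure from the ones described above, achieves the same O(log N) lookup and update with very little code. A minimal C sketch, assuming the number of segments is a power of two (256 matches the stated bound):

    #define N_SEG 256                 /* power of two, matching N <= 256 */
    static long long bit[N_SEG + 1];  /* 1-based Fenwick tree over segment sizes */

    /* Add delta to the size of segment i (0-based); O(log N).
       The common "grow by 1" update is seg_add(i, 1). */
    void seg_add(int i, long long delta) {
        for (int k = i + 1; k <= N_SEG; k += k & -k)
            bit[k] += delta;
    }

    /* Find n with S_0 + ... + S_{n-1} <= x < S_0 + ... + S_n; O(log N).
       Assumes 0 <= x < total size of all segments. */
    int seg_find(long long x) {
        int pos = 0;
        for (int step = N_SEG; step > 0; step >>= 1)
            if (pos + step <= N_SEG && bit[pos + step] <= x) {
                x -= bit[pos + step];
                pos += step;
            }
        return pos;  /* 0-based index of the segment containing x */
    }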

Graph Theory - Fill nodes with a limited number of routes

First I must say this is not homework or anything like that; it is a problem from a game named freeciv.
OK, in the game we have 'n' cities, usually 8-12. Each city can have a maximum number of trade routes 'k', usually 4, and those trade routes need to span a distance 'd' or further (8 Manhattan tiles).
The problem consists of finding the k*n trade routes with maximum (or minimum) total distance. Obviously this can be solved with a brute-force algorithm, but that is really slow when the player has more than 10 cities, because the program has to make many iterations. I tried to solve it using graph theory, but I am not an expert in it; I even asked some of my teachers and none could describe a smart algorithm to me. So I am not here looking for the exact solution, just the idea or the steps for analyzing this.
The problem has two parts:
Calculate pair-wise distances between the cities
Select which pairs should become trade-route
I don't think the first part can be calculated faster than O(n·t), where t is the number of tiles, as each run of Dijkstra's algorithm will give you the distances from one city to all other cities. However, if I understand correctly, the distance between two cities never changes and is symmetrical. So whenever a new city is built, you just need to run Dijkstra's algorithm from it and cache the distances.
For the second part I would expect a greedy algorithm to work: order all pairs of cities by suitability and in each step pick the first pair that does not violate the constraint of k routes per city (a sketch follows below). I am not sure whether it can be proven correct; the proof, if it exists, should be similar to the one for Kruskal's minimum spanning tree algorithm. But I suspect it will work fine in practice even if you find that it does not work in theory (I haven't tried to either prove or disprove it; that's up to you).
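A minimal C sketch of that greedy selection (struct and function names are mine; pairs shorter than the minimum distance d are assumed to have been filtered out beforehand):

    #include <stdlib.h>

    typedef struct { int a, b, dist; } pair_t;

    /* Sort descending by distance to prefer long routes; flip the sign for short ones. */
    static int by_dist_desc(const void *x, const void *y) {
        return ((const pair_t *)y)->dist - ((const pair_t *)x)->dist;
    }

    /* Pick routes greedily; returns how many were chosen.
       chosen must have room for up to k * n_cities / 2 entries. */
    int pick_routes(pair_t *pairs, int n_pairs, int n_cities, int k, pair_t *chosen) {
        int *routes = calloc(n_cities, sizeof(int)); /* routes per city so far */
        int n_chosen = 0;
        qsort(pairs, n_pairs, sizeof(pair_t), by_dist_desc);
        for (int i = 0; i < n_pairs; i++) {
            if (routes[pairs[i].a] < k && routes[pairs[i].b] < k) {
                chosen[n_chosen++] = pairs[i];
                routes[pairs[i].a]++;
                routes[pairs[i].b]++;
            }
        }
        free(routes);
        return n_chosen;
    }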
Continuing #Jan Hudec's approach:
Init stage:
Let's say you have N cities (c1, c2, ... cN). Build a list of connections where each entry has the format (cX, cY, distance) (with X < Y; this takes N^2/2 steps) and order it by distance, ascending if you want maximum distances or descending if you want minimum distances, so that the least desirable connections come first. You should also keep an array holding the number of connections per city (cZ = W), initialized for each city at N-1, because at the beginning every city is connected to every other.
Iterations:
Iterate over the list of connections. For each (cX, cY, D), if the connection counts of both cX and cY are greater than k, delete (cX, cY, D) from the connection list and decrease the counts of cX and cY in the array by one.
At the end, you'll have a connection list with the values you wished for.

How do I generate points that match a histogram?

I am working on a simulation system. I will soon have experimental data (histograms) for the real-world distribution of values for several simulation inputs.
When the simulation runs, I would like to be able to produce random values that match the measured distribution. I'd prefer to do this without storing the original histograms. What are some good ways of:
mapping a histogram to a set of parameters representing the distribution?
generating values based on those parameters at runtime?
EDIT: The input data are event durations for several different types of events. I expect that different types will have different distribution functions.
At least two options:
Integrate the histogram and invert numerically.
Rejection
Numeric integration
From Computation in Modern Physics by William R. Gibbs:
One can always numerically integrate [the function] and invert the [cdf]
but this is often not very satisfactory especially if the pdf is changing
rapidly.
You literally build up a table that translates the range [0, 1) into appropriate ranges in the target distribution. Then throw your usual (high-quality) PRNG and translate with the table. It is cumbersome, but clear, workable, and completely general.
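For illustration, a minimal C sketch of the table idea, assuming the histogram is given as counts over equal-width bins on [lo, hi) (all names are mine). It scans the CDF linearly for clarity; a real implementation would precompute the translation table or binary-search the cumulative sums:

    #include <stdlib.h>

    /* Draw one value distributed like the histogram: invert the
       (unnormalized) CDF and interpolate within the chosen bin. */
    double sample_from_histogram(const double *counts, int n_bins,
                                 double lo, double hi) {
        double total = 0;
        for (int i = 0; i < n_bins; i++) total += counts[i];

        /* uniform in [0, total) */
        double u = total * ((double)rand() / ((double)RAND_MAX + 1.0));

        double cum = 0;
        for (int i = 0; i < n_bins; i++) {
            cum += counts[i];
            if (u < cum) {
                double frac  = (u - (cum - counts[i])) / counts[i];
                double width = (hi - lo) / n_bins;
                return lo + (i + frac) * width;
            }
        }
        return hi; /* not reached while u < total */
    }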
Rejection:
Normalize the target histogram, then:
1. Throw the dice to choose a position x along the range randomly.
2. Throw again, and select this point if the new random number is less than the normalized histogram in this bin; otherwise go to (1).
Again, simple-minded but clear and working. It can be slow for distributions with a lot of very-low-probability regions (peaks with long tails).
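A minimal C sketch of that loop, assuming equal-width bins on [lo, hi) and a histogram pre-normalized so its tallest bin equals 1.0 (names are mine):

    #include <stdlib.h>

    static double uniform01(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

    /* hist_norm[b] is the height of bin b divided by the maximum bin height. */
    double sample_by_rejection(const double *hist_norm, int n_bins,
                               double lo, double hi) {
        for (;;) {
            double x = lo + (hi - lo) * uniform01();         /* 1: random position */
            int bin = (int)((x - lo) / (hi - lo) * n_bins);  /* bin containing x */
            if (uniform01() < hist_norm[bin]) return x;      /* 2: accept, else retry */
        }
    }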
With both of these methods, you can approximate the data with piecewise polynomial fits or splines to generate a smooth curve if a step-function histogram is not desired, but leave that for later, as it may be premature optimization.
Better methods may exist for special cases.
All of this is pretty standard and should appear in any numerical analysis textbook if more detail is needed.
More information about the problem would be useful. For example, what sort of values are the histograms over? Are they categorical (e.g., colours, letters) or continuous (e.g., heights, time)?
If the histograms are over categorical data I think it may be difficult to parameterise the distributions unless there are many correlations between categories.
If the histograms are over continuous data you might try to fit the distribution using mixtures of Gaussians. That is, try to fit the histogram using a weighted sum of normal distributions, sum_{i=1}^n w_i N(m_i, v_i), where m_i and v_i are the means and variances. Then, when you want to generate data, you first sample an i from 1..n with probability proportional to the weights w_i, and then sample an x ~ N(m_i, v_i) as you would from any Gaussian.
Either way, you may want to read more about mixture models.
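For illustration, here is a minimal C sketch of the sampling step just described, drawing the Gaussian via the Box-Muller transform (all names are mine):

    #include <math.h>
    #include <stdlib.h>

    static double unif01(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

    /* Draw one value from the mixture: w[i] are (unnormalized) weights,
       m[i] the means and v[i] the variances of the n components. */
    double sample_mixture(const double *w, const double *m, const double *v, int n) {
        /* pick a component i with probability proportional to w[i] */
        double wsum = 0;
        for (int i = 0; i < n; i++) wsum += w[i];
        double u = wsum * unif01(), cum = 0;
        int i = 0;
        for (; i < n - 1; i++) { cum += w[i]; if (u < cum) break; }

        /* Box-Muller: turn two uniforms into one standard normal deviate */
        double z = sqrt(-2.0 * log(1.0 - unif01())) *
                   cos(2.0 * 3.14159265358979323846 * unif01());
        return m[i] + sqrt(v[i]) * z;
    }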
So it seems that what I want in order to generate a given probability distribution is a quantile function, which is the inverse of the cumulative distribution function, as #dmckee says.
The question becomes: what is the best way to generate and store a quantile function describing a given continuous histogram? I have a feeling the answer will depend greatly on the shape of the input; if it follows any kind of pattern, there should be simplifications over the most general case. I'll update here as I go.
Edit:
I had a conversation this week that reminded me of this problem. If I forgo describing the histogram as an equation and just store the table, can I do selections in O(1) time? It turns out you can, without any loss of precision, at the cost of O(N lg N) construction time.
Create an array of N items. A uniform random selection into the array will find an item with probability 1/N. For each item, store the fraction of hits for which this item should actually be selected, and the index of another item that will be selected if this one is not.
Weighted Random Sampling, C implementation:

    #include <stdlib.h>

    // data structure
    typedef struct wrs_data {
        double share;  // portion of this bucket's hits it keeps for itself
        int pair;      // bucket that receives this bucket's excess hits
        int idx;       // original index of this bucket
    } wrs_t;

    // sort helper
    int wrs_sharecmp(const void* a, const void* b) {
        double delta = ((wrs_t*)a)->share - ((wrs_t*)b)->share;
        return (delta < 0) ? -1 : (delta > 0);
    }

    // Initialize the data structure
    wrs_t* wrs_create(int* weights, size_t N) {
        wrs_t* data = malloc(N * sizeof(wrs_t));  // note: space for all N buckets
        double sum = 0;
        int i;
        for (i = 0; i < N; i++) { sum += weights[i]; }
        for (i = 0; i < N; i++) {
            // what percent of the ideal distribution is in this bucket?
            data[i].share = weights[i] / (sum / N);
            data[i].pair = N;
            data[i].idx = i;
        }
        // sort ascending by share
        qsort(data, N, sizeof(wrs_t), wrs_sharecmp);
        int j = N - 1;  // the biggest bucket
        for (i = 0; i < j; i++) {
            int check = i;
            double excess = 1.0 - data[check].share;
            while (excess > 0 && i < j) {
                // If this bucket has fewer samples than a flat distribution,
                // it will be hit more frequently than it should be.
                // So send the excess hits to a bucket which has too many samples.
                data[check].pair = j;
                // Account for the fact that the paired bucket will be hit more often.
                data[j].share -= excess;
                excess = 1.0 - data[j].share;
                // If the paired bucket now has excess hits, send them to the new
                // largest bucket at j-1.
                if (excess >= 0) { check = j--; }
            }
        }
        return data;
    }

    // O(1) weighted random sampling (after preparing the collection).
    // Randomly select a bucket and a percentage.
    // If the percentage is greater than that bucket's share of hits,
    // use its paired bucket.
    // rand_in_range() and rand_percent() are assumed uniform helpers.
    int wrs_pick(wrs_t* collection, size_t N) {
        int idx = rand_in_range(0, N);  // uniform integer in [0, N)
        double pct = rand_percent();    // uniform double in [0, 1)
        if (pct > collection[idx].share) { idx = collection[idx].pair; }
        return collection[idx].idx;
    }
Edit 2:
After a little research, I found it's even possible to do the construction in O(N) time. With careful tracking, you don't need to sort the array to find the large and small bins. Updated implementation here
If you need to pull a large number of samples with a weighted distribution of discrete points, then look at an answer to a similar question.
However, if you need to approximate some continuous random function using a histogram, then your best bet is probably dmckee's numeric integration answer. Alternatively, you can use the alias method, store the point to the left, and pick a uniform number between the two points.
To choose from a histogram (original or reduced), Walker's alias method is fast and simple.
For a normal distribution, the following may help:
http://en.wikipedia.org/wiki/Normal_distribution#Generating_values_for_normal_random_variables
