Segment tree lazy propagation: max query when the update is non-uniform (arrays)

I am facing a problem with lazy propagation in a segment tree.
I have an array A of length N whose elements are smaller arrays (each of length at most 20).
I also have an array of indices B, where B[i] is the index I am currently pointing to within A[i].
There are 2 operations:
a) update the pointers in B for a given range so that each points to the next element of its array.
b) print the max of the values the pointers are currently pointing to in the given range.
For example:
int[][] array =
{
    {1, 2, 3, 4},
    {8, 4, 0, 0},
    {3, 4, 2, 5}
};
int[] B = {1, 1, 1};
On making a query for the range 1,2 the max is 8.
This is because the pointers in B are pointing to the first elements of the arrays, so we are working with the values 1 and 8.
On making a query for the range 2,3, max = 8;
this is because we are working with the values 8 and 3.
In general:
int max(int[][] arr, int[] b, int l, int r) {
    int max = 0;
    for (int i = l; i <= r; i++) {
        max = Math.max(max, arr[i][b[i]]); // using the java.lang.Math class here
    }
    return max;
}
void update(int[] b, int l, int r) {
    for (int i = l; i <= r; i++) {
        b[i]++;
    }
}
These are the two methods in a very simple form.
However, due to the large input constraints, I need O(log n) query and update time; that is why I thought of using segment trees (currently the complexity is O(n) per operation, O(n^2) overall). However, I cannot figure out how to update the intermediate nodes under lazy propagation, since the update is not uniform across the elements.
Any insight will be helpful.
Also, if you could link any similar problem online, it would be really helpful, as I could not find one (I do not know of any, since this is not from any website).
Thank you for any help.
NOTE: if b[i] > a[i].length, then replace a[i][b[i]] with 1.

Related

HackerEarth (Basic I/O question): Play With Numbers [subarray]

I have been trying to solve this problem; it works well with small numbers, but not with the big 10^9 numbers on HackerEarth.
You are given an array of n numbers and q queries. For each query you have to print the floor of the expected value (mean) of the subarray from L to R.
INPUT:
The first line contains two integers N and Q, denoting the number of array elements and the number of queries.
The next line contains N space-separated integers denoting the array elements.
The next Q lines each contain two integers L and R (indices into the array).
OUTPUT:
For each query, print a single integer denoting the answer.
Constraints:
1 <= N, Q, L, R <= 10^6
1 <= array elements <= 10^9
NOTE
Use fast I/O.
#include <iostream>
#include <vector>
using namespace std;

long int solvepb(int a, int b, long int *arr, int n){
    int result, count = 0; // note: with elements up to 10^9, this 32-bit accumulator overflows
    vector<long int> res;
    for(int i = 0; i < n; i++){
        if(i+1 >= a && i+1 <= b){
            res.push_back(arr[i]);
            count += arr[i];
        }
    }
    result = count / res.size();
    return result;
}
int main(){
    int n, q;
    cin >> n >> q;
    long int arr[n];
    for(int i = 0; i < n; i++){
        cin >> arr[i];
    }
    while(q--){
        int a, b;
        cin >> a >> b;
        cout << solvepb(a, b, arr, n) << endl;
    }
    return 0;
}
So currently, the issue with your algorithm is that for every query you recompute the sum between the two indices by scanning the array. This means that if the queries are particularly bad, for each of the Q queries you might iterate through all N elements of the array.
How can one reduce this? Notice that because sums are additive, the sum up to an index i is the same as the sum up to an index j plus the sum of the numbers between j and i. Let me rewrite that as an equation:
sum[0:i] = sum[0:j] + sum[j+1:i]
It should be obvious now that by rearranging this equation, you can quickly get the sum between two indices by storing the sum of numbers up to an index. (i.e. sum[j+1:i] = sum[0:i] - sum[0:j]). This means that rather than having O(N*Q), you can have O(N + Q) runtime complexity. The O(N) part of the new complexity is from iterating the array once to get all the sums. The O(Q) part comes from answering the Q queries.
This kind of approach is called prefix sums. There are some optimized data structures like Fenwick trees made specifically for prefix sums that you can read about online or on Wikipedia. But for your question, a simple array should work just fine.
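As a minimal sketch of this idea (assuming 1-based L and R as in the problem statement, and 64-bit sums, since many elements of up to 10^9 overflow a 32-bit accumulator):

#include <iostream>
#include <utility>
#include <vector>

int main() {
    std::ios::sync_with_stdio(false); // fast I/O, as the problem note suggests
    std::cin.tie(nullptr);

    int n, q;
    std::cin >> n >> q;

    // prefix[i] = sum of the first i elements, so sum[l:r] = prefix[r] - prefix[l-1]
    std::vector<long long> prefix(n + 1, 0);
    for (int i = 1; i <= n; i++) {
        long long v;
        std::cin >> v;
        prefix[i] = prefix[i - 1] + v;
    }

    while (q--) {
        int l, r;
        std::cin >> l >> r;
        if (l > r) std::swap(l, r);
        long long sum = prefix[r] - prefix[l - 1];
        std::cout << sum / (r - l + 1) << '\n'; // floor of the mean (elements are positive)
    }
    return 0;
}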
A few comments about your code:
In the for loop in the solvepb function, you always go from 0 to n, but you don't need to. You could loop from a to b if you knew a was smaller than b; otherwise, go from b to a.
You also do not really use the vector. The vector in the solvepb function stores array elements, but these are never used again. You only seem to use it to find the number of elements from a to b, but you can get that by simply taking the difference of the two indices (i.e. b-a+1 if a < b, otherwise a-b+1).
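Applying both comments, the loop could be reduced to something like the sketch below. Note this is still O(R-L) per query, so the prefix sums above are what actually fix the complexity; the 64-bit accumulator is an added assumption, needed for elements up to 10^9.

// hypothetical cleaned-up version of solvepb, still linear per query
long long solvepb(int a, int b, const long long *arr) {
    if (a > b) std::swap(a, b);
    long long sum = 0;
    for (int i = a - 1; i <= b - 1; i++) // 1-based query indices -> 0-based array
        sum += arr[i];
    return sum / (b - a + 1); // floor of the mean
}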

Processes for Insertion sort

I've been learning sorting algorithms for a couple of days. Presently I'm doing insertion sort. So the general algorithm is:
void insertionSort(int N, int arr[]) {
    int i, j;
    int value;
    for(i = 1; i < N; i++)
    {
        value = arr[i];
        j = i - 1;
        while(j >= 0 && value < arr[j])
        {
            arr[j+1] = arr[j];
            j = j - 1;
        }
        arr[j+1] = value;
    }
    for(j = 0; j < N; j++)
    {
        printf("%d ", arr[j]);
    }
    printf("\n");
}
Now I've done this:
void print_array(int arr_count, int* arr){
    int i;
    for (i = 0; i < arr_count; i++){
        printf("%d ", arr[i]);
    }
    printf("\n");
}
void swap(int* m, int* n){
    int t = 0;
    t = *m;
    *m = *n;
    *n = t;
}
void insertionSort(int arr_count, int* arr) {
    int i, j;
    for (i = 0; i < arr_count; i++){
        for (j = 0; j < i; j++){
            if (arr[i] < arr[j]){
                swap(arr + i, arr + j);
            }
        }
        //if (i != 0)
        //    print_array(arr_count, arr);
    }
    print_array(arr_count, arr);
}
Now, my question is: what is the difference between my custom approach and the traditional approach? Both have O(N^2) complexity.
Please help.
Thanks in advance.
At each iteration, the original code you present moves each element into place by moving elements in a cycle. For an n-element cycle, that involves n+1 assignments.
It is possible to implement Insertion Sort by moving elements with pairwise swaps instead of in larger cycles. It is sometimes taught that way, in fact. This is possible because any permutation (not just cycles) can be expressed as a series of swaps. Implementing an n-element cycle via swaps requires n-1 swaps, and each swap, being a 2-element cycle, requires 2+1 = 3 assignments. For cycles larger than two elements, then, the approach using pairwise swaps does more work, scaling as 3*(n-1) as opposed to n+1. That does not change the asymptotic complexity, however, as you can see by the fact that the exponent of n does not change.
But note another key difference between the original code and yours: the original code scans backward through the list to find the insertion position, whereas you scan forward. Whether you use pairwise swaps or a larger cycle, scanning backward has the advantage that you can perform the needed reordering as you go, so that once you find the insertion position, you are done. This is one of the things that makes Insertion Sort so good among comparison sorts, and why it is especially fast for inputs that are initially nearly sorted.
Scanning forward means that once you find the insertion position, you've only started: you then have to cycle the elements. As a result, your approach examines every element of the sorted head of the array on every iteration. Additionally, when it actually performs the reordering, it does a bunch of unneeded comparisons. It could instead use the knowledge that the head of the list started sorted, and just perform a cycle (either way) without any more comparisons. The extra comparisons disguise the fact that the code is just performing the appropriate element cycling at that point (did you realize that?), and they are probably why several people mistook your implementation for a bubble sort.
Technically, yours is still an Insertion Sort, but it is an implementation that takes no advantage of the characteristics of the abstract Insertion Sort algorithm that give well-written implementations an advantage over other sorts of the same asymptotic complexity.
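For illustration, here is a rough sketch of the swap-based, backward-scanning variant described above; it can stop as soon as the insertion position is found (C++'s std::swap stands in for a swap helper like the one in the question):

#include <utility> // std::swap

// Insertion sort using pairwise swaps while scanning backward.
// Each swap is a 2-element cycle (3 assignments), so it does more work
// than the hold-and-shift version, but it still stops at the insertion point.
void insertionSortSwaps(int n, int arr[]) {
    for (int i = 1; i < n; i++) {
        // bubble arr[i] backward until the sorted prefix absorbs it
        for (int j = i; j > 0 && arr[j] < arr[j - 1]; j--) {
            std::swap(arr[j], arr[j - 1]);
        }
    }
}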
The main difference between the insertion sort algorithm and your custom algorithm is the direction of processing. Insertion sort moves the smaller elements in the range one by one to the left side, while your algorithm moves the larger elements in the range one by one to the right side.
Another key difference is in the best-case time complexity of insertion sort versus your algorithm.
Insertion sort stops as soon as value < arr[j] is no longer satisfied, so it has a best-case complexity of O(n) (when the array is already sorted), while your algorithm always scans from index 0 up to i, so it takes O(n^2) steps even when the array is already sorted.

Given a list of n integers, find the minimum subset sum greater than X

Given an unsorted set of integers in the form of an array, find the minimum subset sum that is greater than or equal to a constant integer x.
e.g.: our set is {4, 5, 8, 10, 10} and x = 15,
so the minimum subset sum that is >= x is 15, obtained by the subset {5, 10}.
I can only think of a naive algorithm that lists all subsets of the set and checks whether each subset's sum is >= x and minimal, but it is exponential: listing all subsets alone requires O(2^N). Can I use dynamic programming to solve it in polynomial time?
If the sum of all your numbers is S, and your target number is X, you can rephrase the question like this: can you choose a subset of the numbers with maximum sum that is less than or equal to S-X? (The elements you leave out then form the minimum subset with sum >= X.)
And you've got a special case of the knapsack problem, where weight and value are equal.
Which is bad news, because it means your problem is NP-hard; but on the upside you can just use the dynamic programming solution of the knapsack problem (which still isn't polynomial). Or you can try a polynomial-time approximation of the knapsack problem, if that's good enough for you.
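A minimal sketch of such a DP over achievable subset sums (my own illustration, not from the original answer; it assumes non-negative integers and is pseudo-polynomial, since the table has S+1 entries):

#include <numeric>
#include <vector>

// Returns the minimum subset sum >= x, or -1 if even the full set falls short.
long long minSubsetSumAtLeast(const std::vector<int>& a, long long x) {
    long long S = std::accumulate(a.begin(), a.end(), 0LL);
    if (x < 0) x = 0;   // a non-positive target is met by the empty set's sum, 0
    if (S < x) return -1;
    // reachable[s] == true iff some subset sums to exactly s
    std::vector<bool> reachable(S + 1, false);
    reachable[0] = true;
    for (int v : a)
        for (long long s = S; s >= v; s--)  // backward, so each element is used once
            if (reachable[s - v]) reachable[s] = true;
    for (long long s = x; s <= S; s++)      // smallest reachable sum >= x
        if (reachable[s]) return s;
    return -1; // never reached: the full set itself sums to S >= x
}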
I was revising DP when I thought of this question. I searched and found this question, but without a proper answer.
So here is the complete code (along with comments). I hope it is useful.
// exactly the same concept as subset-sum (cf. finding the minimum subset-sum difference)
public class Main
{
    public static int minSubSetSum(int[] arr, int n, int sum, int x){
        boolean[][] t = new boolean[n+1][sum+1];
        // initialization: with n == 0 elements, no positive sum is formable, so false
        for(int i = 0; i < sum+1; i++)
            t[0][i] = false;
        // initialization: sum == 0 is always true because of the empty set
        for(int i = 0; i < n+1; i++)
            t[i][0] = true; // this also covers the case n == 0 && sum == 0
        // now fill the DP table bottom-up
        for(int i = 1; i < n+1; i++)
            for(int j = 1; j < sum+1; j++)
            {
                if(arr[i-1] <= j)
                    t[i][j] = t[i-1][j-arr[i-1]] || t[i-1][j]; // either include arr[i-1] or not
                else
                    t[i][j] = t[i-1][j]; // arr[i-1] not included, so nothing is deducted from j
            }
        // look in the last row (i = n), which considers all elements,
        // and find the smallest j >= x with t[n][j] true
        int min_sum = Integer.MAX_VALUE;
        for(int j = x; j <= sum; j++)
            if(t[n][j] == true){ // the sum j is achievable by some subset
                min_sum = j;
                break;
            }
        if(min_sum == Integer.MAX_VALUE)
            return -1; // no subset reaches x
        return min_sum;
    }
    public static void main(String[] args) {
        int[] arr = new int[]{4, 5, 8, 10, 10};
        int x = 15;
        int n = arr.length;
        int sum = 0;
        for(int i = 0; i < n; i++)
            sum = sum + arr[i];
        System.out.println("Min subset sum that can be formed >= x is");
        int min_sum = minSubSetSum(arr, n, sum, x);
        System.out.println(min_sum);
    }
}
As the problem is NP-complete, the DP brings the time complexity down to
T(n) = O(n*sum)
with space complexity O(n*sum) as well (pseudo-polynomial, since sum is not bounded by a polynomial in the input length).
As already mentioned, this is NP-complete. Another way of seeing that: if one could solve this in polynomial time, then the subset-sum problem could also be solved in polynomial time (a subset sums to exactly x iff the minimum subset sum >= x equals x).
Note that, as the other answers say, this is NP-hard in general; but your problem is a variation of the 0-1 knapsack problem (i.e. without repetitions), which is solvable in pseudo-polynomial time with dynamic programming. You just need to formulate your criteria as in @biziclop's answer.
How about a greedy approach?
First we sort the list in descending order. Then we repeatedly pop the first element of the sorted list and add its value to a running total, until the total reaches x or more.
In pseudocode:
sort_descending(array)
current = 0
solution = []
while current < x:
    if len(array) == 0:
        return -1 // no solution possible
    current += array[0]
    solution.append(array.pop(0))
return solution

Finding K combinations of three items with minimum product

While solving a problem I came to a situation where I have to find the first k products of combinations of three items from a given array of positive numbers, such that the products are minimal.
Given an array A of size n, efficiently find the first k minimum products of three different items of the array. Let's call the result MP, such that
MP[i] = A[j] * A[l] * A[m]
where i < k, j != l != m, and k < n.
What I tried at that point was to compute all possible products and then sort them to take the first k. But I know this is not efficient: O(N^3) for forming all combination products, and then O(N^3 log N) for sorting the N^3 combinations. In my case the array size was not large, but I am wondering how to solve the same problem more efficiently.
The problem with other solutions is that their greedy choice is non-optimal.
A simple priority-queue-based solution gives the optimal answer to this problem. min_product is the function that produces the required array, and a set is used to keep track of already-seen index triplets. I have used the plain STL priority_queue.
// Assume the vector a (size >= 3) holds the input numbers.
#include <algorithm>
#include <queue>
#include <set>
#include <tuple>
#include <vector>

std::vector<int> a;

struct triplet {
    int i, j, k; // indices into a, kept in increasing order i < j < k
};

long long value(const triplet& p) {
    return (long long)a[p.i] * a[p.j] * a[p.k];
}

struct CompareTriplet { // orders the priority_queue as a min-heap on the product
    bool operator()(const triplet& p1, const triplet& p2) const {
        return value(p1) > value(p2);
    }
};

using Heap = std::priority_queue<triplet, std::vector<triplet>, CompareTriplet>;

void push_heap(Heap& pq, const triplet& t, std::set<std::tuple<int,int,int>>& seen) {
    if (seen.insert({t.i, t.j, t.k}).second) // push only if not seen before
        pq.push(t);
}

std::vector<long long> min_product(int k) {
    std::sort(a.begin(), a.end()); // sort if not already sorted
    int n = a.size();
    std::set<std::tuple<int,int,int>> seen;
    std::vector<long long> MP;
    Heap pq;
    push_heap(pq, {0, 1, 2}, seen);
    while (!pq.empty() && (int)MP.size() < k) {
        triplet tp = pq.top();
        pq.pop();
        MP.push_back(value(tp));
        // successors: advance one index while preserving i < j < k
        if (tp.i + 1 < tp.j) push_heap(pq, {tp.i + 1, tp.j, tp.k}, seen);
        if (tp.j + 1 < tp.k) push_heap(pq, {tp.i, tp.j + 1, tp.k}, seen);
        if (tp.k + 1 < n)    push_heap(pq, {tp.i, tp.j, tp.k + 1}, seen);
    }
    return MP;
}
Complexity:
If the array is not sorted, then sorting it is the bottleneck: O(n log n). At any point we only ever need the current top of the heap, so nothing beyond the heap and the seen-set is maintained.
For a sorted array: since each of the k extractions pushes at most three successors, there are O(k) elements in the heap, and each element of MP costs O(log k) heap and set operations. So the running time is O(k log k).
And yes, it is independent of n.
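A quick usage sketch with hypothetical input values:

#include <iostream>

int main() {
    a = {4, 5, 8, 10, 10}; // fill the global input vector used by min_product
    for (long long v : min_product(3))
        std::cout << v << ' '; // prints: 160 200 200
    std::cout << '\n';
}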

C language - Matrix multiplication bug

I'm trying to write code that takes a matrix A and its dimensions, and a matrix B and its dimensions, and returns a matrix C such that C = AB.
It's safe to assume that the number of columns of A equals the number of rows of B, so C = AB is defined.
This is my code:
int *matrix_multiplication(int *A, int row_A, int column_A, int *B, int row_B, int column_B)
{
    int row_C, column_C, *C, i, j, k, sum = 0;
    row_C = row_A;
    column_C = column_B;
    C = (int*)malloc(row_C * column_C * sizeof(int));
    for(i = 0; i < row_C; i++)
    {
        for(j = 0; j < column_C; j++)
        {
            for(k = 0; k < column_A; k++)
                sum += (*(A + column_A*i + k)) * (*(B + column_B*k + j)); // A[i][k]*B[k][j]
            *(C + row_C*i + j) = sum;
            sum = 0;
        }
    }
    return C;
}
A little explanation: I view a matrix as a single one-dimensional array of size columns*rows*sizeof(int), with the formula A[i][j] = *(A + column_A*i + j), where A is a pointer to the first element of the array and column_A is the number of columns in "matrix" A.
My problem is that my code does not work for some inputs when row_C != column_C.
For example, if A = [28,8,12; 14,5,45] and B = [31; 27; 11], it returns C = [1216; -842150451].
Why does this happen? I can't seem to find the bug.
Try
*(C + column_C*i + j) = sum;
It might be an idea to make a function or macro for accessing matrix elements; that way similar problems can be avoided in the future. Better than that, bundle a matrix together with its dimensions (e.g. in a struct) and go through such accessors everywhere.
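As a rough sketch of that suggestion (the macro name is illustrative, not from the original code):

#include <stdlib.h>

// Hypothetical accessor macro: the row-major layout is encoded in one place,
// so the stride (number of columns) cannot silently differ between the write
// to C and later reads, as it did in the original code.
#define MAT_AT(m, cols, i, j) ((m)[(size_t)(cols) * (i) + (j)])

int *matrix_multiplication(int *A, int row_A, int column_A,
                           int *B, int row_B, int column_B) {
    int *C = (int *)malloc((size_t)row_A * column_B * sizeof(int));
    for (int i = 0; i < row_A; i++) {
        for (int j = 0; j < column_B; j++) {
            int sum = 0;
            for (int k = 0; k < column_A; k++)
                sum += MAT_AT(A, column_A, i, k) * MAT_AT(B, column_B, k, j);
            MAT_AT(C, column_B, i, j) = sum; // the stride of C is its column count
        }
    }
    return C;
}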
