How to simulate a loop in Turbo Prolog

I want to compute the summation of odd numbers in a given range. For example, if 1 to 9 is the given input, my program should show the sum of all odd numbers between 1 and 9. Though the task is theoretically simple, as a beginner in Turbo Prolog I can't work out the loop that computes the sum. Any help would be appreciated.
Thanks in advance.

I'm not going to write the full solution for you, but can give an idea how to "loop" through a summation in a general way. Looping in Prolog is often done through recursion. The recursion gets around the fact that Prolog will not let you reinstantiate a variable within the same predicate clause once it's instantiated (unless you backtrack). The following is ISO Prolog syntax.
sum_values(First, Last, Sum) :-
    sum_values(First, Last, 0, Sum).

sum_values(First, Last, Sum, Sum) :-
    First > Last.
sum_values(First, Last, Acc, Sum) :-
    First =< Last,
    NewAcc is Acc + First,
    NewFirst is First + 1,
    sum_values(NewFirst, Last, NewAcc, Sum).
The first clause sets up an accumulator starting at the value 0.
The second clause unifies the final sum with the accumulator once the first value exceeds the last.
The third clause handles the normal recursive case, where the first value does not exceed the last. The first value is added to the accumulator to create an updated accumulator, and the "first" value is incremented to create a new first value. The recursive call to sum_values computes the rest of the sum with the new accumulator.
Note that I could have implemented this without introducing the accumulator, but then I wouldn't have the tail recursion which can be optimized (if desired) by the Prolog system. The non-accumulator version looks like this:
sum_values(First, Last, 0) :- First > Last.
sum_values(First, Last, Sum) :-
    First =< Last,
    NewFirst is First + 1,
    sum_values(NewFirst, Last, PartialSum),
    Sum is PartialSum + First.
This is a little shorter, but there's no tail recursion that can be refactored.
Modifications you would need to make for your problem (these are ones I'm aware of, as I'm only a little familiar with some of TP's syntax):
Replace is/2 with =/2 (I think TP uses =/2 for expression evaluation)
You might have to replace =< with <= (I don't recall which one TP likes)
Check that First is odd. If it's not, you need to skip adding it to the accumulator.
You could also do an initial check for odd First and if it's not odd, increment it to form a new First, then proceed doing a summation incrementing by 2 through the recursion instead of by 1.
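To make the target behaviour concrete, here is the computation the modified recursion should perform, written as a plain C loop (C is used purely for illustration here, not Turbo Prolog; the variable acc plays the role of the accumulator argument):

#include <stdio.h>

/* Illustration only: the iterative loop that the odd-sum recursion simulates. */
int sum_odd(int first, int last)
{
    int acc = 0;                     /* the accumulator, starting at 0 */
    if (first % 2 == 0)              /* skip an even starting point    */
        first = first + 1;
    for (; first <= last; first += 2)
        acc += first;                /* NewAcc is Acc + First          */
    return acc;
}

int main(void)
{
    printf("%d\n", sum_odd(1, 9));   /* prints 25 = 1 + 3 + 5 + 7 + 9  */
    return 0;
}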

Related

c loop function computing time complexity

I am learning to compute the time complexity of algorithms.
I can handle simple loops and nested loops, but how do I compute the complexity when the loop variable is reassigned inside the loop body?
For example :
void f(int n) {
    int count = 0;
    for (int i = 2; i <= n; i++) {
        if (i % 2 == 0) {
            count++;
        } else {
            i = (i - 1) * i;
        }
    }
}
i = (i-1)*i affects how many times the loop will run. How can I compute the time complexity of this function?
Since i * (i - 1) is always even ((i * (i - 1)) % 2 == 0), once the else branch is taken, the following i++ makes i an odd number again. As a result, after the first odd i in the loop, the condition always goes into the else branch.
Therefore, after the first iteration i will be equal to 3, which is odd and goes into the else branch, and from then on i is replaced by i * (i - 1) + 1 (the assignment plus the i++) in each iteration, i.e. i roughly squares each time. Hence, if we denote the number of iterations for bound n by T(n), we can write asymptotically: T(n) = T(sqrt(n)) + 1. So, if n = 2^(2^k), then T(n) = k = log(log(n)).
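A quick way to sanity-check this claim is to instrument the loop and print the iteration count for a few values of n (a throwaway C snippet; the helper name iterations is mine):

#include <stdio.h>

/* Instrumented copy of f(): returns the number of loop iterations,
 * which should grow roughly like log(log(n)). */
static long long iterations(long long n)
{
    long long count = 0;
    for (long long i = 2; i <= n; i++) {
        count++;
        if (i % 2 == 0) {
            /* even case: the original just does count++ */
        } else {
            i = (i - 1) * i;         /* becomes (i-1)*i + 1 after the i++ */
        }
    }
    return count;
}

int main(void)
{
    for (long long n = 10; n <= 100000000LL; n *= 100)
        printf("n = %9lld -> %lld iterations\n", n, iterations(n));
    return 0;
}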
There is no general rule to calculate the time complexity for such algorithms. You have to use your knowledge of mathematics to get the complexity.
For this particular algorithm, I would approach it like this.
Since initially i=2 and it is even, let's ignore that first iteration.
So I am only considering from i = 3; from there on, i will always be odd.
Your expression i = (i-1)*i, together with the i++ in the for loop, effectively evaluates to i = (i-1)*i + 1.
If you consider i=3 as 1st iteration and i(j) is the value of i in the jth iteration, then i(1)=3.
Also
i(j) = [i(j-1)]^2 - i(j-1) + 1
The above equation is called a recurrence relation, and there are standard mathematical ways to solve it and get the value of i as a function of j. Sometimes a closed form can be obtained and sometimes it is very difficult or impossible. Frankly, I don't know how to solve this one.
But generally, we don't get situations where you need to go that far. In practical situations, I would note that i roughly squares on every iteration (it grows doubly exponentially in the iteration count), so the number of iterations is only about log(log(n)).

Algorithm for highest value in a semi-sorted array, where complete binary search is not possible?

We're given a semi-sorted array:
(1, 2, ..., n, 1, 2, ..., n-1)
We know the maximum value in the array will be n, and for simplicity's sake we know when we overshoot its index (say, checking such a value prints/writes a statement, or something along those lines).
2 scenarios:
If we overshoot the index of n, we are NOT allowed to overshoot again (except for the very last time so we know we're at the maximum value).
If we overshoot the index of n, we are allowed to overshoot it once more, and then we are not allowed to overshoot anymore (except for the very last time so we know we're at the maximum value).
We want this done using the fewest steps possible in the worst case (and preferably an exact count of the steps). And we want option 2 to use asymptotically fewer steps than option 1 (again, preferably with an exact count).
Initially, I thought of the following:
Start at i=1
i=2i until overshoot
linear search from 1/2i to 2i-1, until we hit the maximum value (we would know by overshooting by one).
I thought this would be an O(log n) algorithm, but it actually appears to be O(n). It's not like a binary search, where we're able to keep halving until the end, because we must stop as soon as we overshoot.
Now, I've thought about using exponents:
1. Start at i=1
2. Probe i^2; if it didn't overshoot, then i = i+1 and repeat this step until we overshoot.
3. Linear search from (i-1)^2 to (i^2)-1, until we hit the maximum value (we would know by overshooting by one).
This seems like it would be O(n^(1/2)), but when calculating the exact number of steps it seems like it would actually still be O(n), because the linear search could still be very large for high n.
For the second part, I thought about doing the same algorithm but using i^3.
Start at i=1
i^3, same as above
If overshoot then switch to i^2, same as above
....
I thought this would give O(n^(1/3)).
Multi-part question:
Can these algorithms be improved so that we perform a minimum # of checks in the worst case?
Am I correct about the algorithmic complexity being O(n^(1/2)) and O(n^(1/3))? If so, what would the exact number of steps be? It seems like the final linear-search step ruins this.
The question of the optimal strategy for a given n is hard. But finding the maximum array size that can be handled with k tests is much easier.
Let f(m, k) be the maximum size of array where you can locate the max with at most m overshoots and testing at most k numbers. Then the following statements hold:
f(m, 0) = 1 (with only one possible position, I know where the max is)
f(0, k) = k+1 (start at the beginning and test positions one at a time; if you fail in k tries then it is the last one you didn't look at)
f(m+1, k+1) = f(m, k) + 1 + f(m+1, k) (test the (f(m, k) + 1)-th number, then do the appropriate thing depending on whether you overshot)
It turns out that f(1, k) = k*(k+1)/2. From there they get messy. But for fixed m, you can show that f(m, k) = k^m/m! + O(k^(m-1)), which verifies your guess about O(n^(1/2)) and O(n^(1/3)).
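As a concrete illustration of the m = 1 case, here is a sketch (in C) of the strategy that reaches roughly k^2/2 positions with k tests: probe the triangular positions 1, 3, 6, 10, ... until the single allowed overshoot, then scan forward one index at a time until the final confirming overshoot. The probe oracle overshoots() is an assumption of this sketch; in the original setting it stands for whatever signals that an index lies past the maximum.

#include <stdio.h>

static long max_pos;                     /* position of n (unknown to the search) */
static long probes;                      /* number of tests performed             */

/* Hypothetical oracle: does index pos lie past the maximum's position? */
static int overshoots(long pos)
{
    probes++;
    return pos > max_pos;
}

/* One-overshoot strategy: ~2*sqrt(2n) probes in the worst case, i.e. O(sqrt(n)). */
static long find_max_index(void)
{
    long pos = 0, step = 1;
    while (!overshoots(pos + step)) {    /* galloping phase: steps 1, 2, 3, ... */
        pos += step;
        step++;
    }
    while (!overshoots(pos + 1))         /* linear phase inside the last gap */
        pos++;
    return pos;                          /* last index that did not overshoot */
}

int main(void)
{
    max_pos = 1000000;                   /* pretend n = 1,000,000 */
    probes = 0;
    printf("found %ld using %ld probes\n", find_max_index(), probes);
    return 0;
}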

Does the array “sum and/or sub” to x?

Goal
I would like to write an algorithm (in C) which returns TRUE or FALSE (1 or 0) depending on whether the array A given as input can “sum and/or sub” to x (see below for clarification). Note that all values of A are integers bounded between [1, x-1] that were sampled uniformly at random.
Clarification and examples
By “sum and/or sub”, I mean placing "+" and "-" in front of each element of array and summing over. Let's call this function SumSub.
int SumSub (int* A,int x)
{
...
}
SumSub({2,7,5},10)
should return TRUE as 7-2+5=10. You will note that the first element of A can also be taken as negative so that the order of elements in A does not matter.
SumSub({2,7,5,2},10)
should return FALSE as there is no way to “sum and/or sub” the elements of array to reach the value of x. Please note, this means that all elements of A must be used.
Complexity
Let n be the length of A. The complexity of the problem is of order O(2^n) if one has to explore all possible combinations of pluses and minuses. However, some combinations are more likely than others and are therefore worth exploring first (hoping the output will be TRUE). Typically, the combination which requires subtracting all elements from the largest number is impossible (as all elements of A are lower than x). Also, if n > x, it makes no sense to try adding all the elements of A.
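For reference, the exhaustive O(2^n) search described above could be sketched roughly like this (the explicit length parameter n is an addition to the SumSub signature, since C cannot recover the array length from the pointer alone):

/* Try both signs for A[idx..n-1]; target is what the remaining elements
 * still have to produce. Worst case O(2^n). */
static int solve(const int *A, int n, int idx, int target)
{
    if (idx == n)
        return target == 0;
    return solve(A, n, idx + 1, target - A[idx])    /* put + in front of A[idx] */
        || solve(A, n, idx + 1, target + A[idx]);   /* put - in front of A[idx] */
}

int SumSub(int *A, int n, int x)
{
    return solve(A, n, 0, x);
}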
Question
How should I go about writing this function?
Unfortunately your problem is equivalent to the subset-sum problem, which is NP-complete. Thus, in general, an exponential-time solution can't be avoided.
The original problem's solution is indeed exponential, as you said. BUT with the given range [1, x-1] for the numbers in A[] you can make the solution polynomial: there is a very simple dynamic programming solution.
The complexity is of the order:
Time complexity: O(n^2 * x)
Memory complexity: O(n^2 * x)
where n = the number of elements in A[].
You need to use a dynamic programming approach for this.
Every sum that can be formed lies in the range [-n*x, n*x]. Create a 2D array of size (n+1) x (2*n*x + 1). Let's call this dp[][].
dp[i][j] = whether it is possible to make the value j using the first i elements of A[] (i.e. A[0..i-1]).
So:
dp[10][3] = 1 means taking first 10 elements of A[] we CAN create the value 3
dp[10][3] = 0 means taking first 10 elements of A[] we can NOT create the value 3
Here is a sketch of this in C (j can be negative, so the value j is stored at index j + n*x; the extra length parameter n is needed because it is not part of the original signature):

#include <stdbool.h>
#include <string.h>

int SumSub(int *A, int n, int x)
{
    int offset = n * x;                 /* sum j is stored at column j + offset  */
    int width  = 2 * n * x + 1;         /* possible sums range over [-n*x, n*x]  */
    bool dp[n + 1][width];              /* for large n*x, allocate on the heap   */
    memset(dp, 0, sizeof dp);           /* set all values of this array to 0     */
    dp[0][offset] = true;               /* the empty prefix sums to 0            */
    for (int i = 1; i <= n; i++) {
        int val = A[i - 1];
        for (int j = -n * x; j <= n * x; j++) {
            bool ok = false;
            if (j + val <= n * x)       /* previous sum j + val, then subtract val */
                ok |= dp[i - 1][j + val + offset];
            if (j - val >= -n * x)      /* previous sum j - val, then add val      */
                ok |= dp[i - 1][j - val + offset];
            dp[i][j + offset] = ok;
        }
    }
    return dp[n][x + offset];
}
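For example, checking it against the two cases from the question (again assuming the extra length argument):

#include <stdio.h>

int main(void)
{
    int a[] = {2, 7, 5};
    int b[] = {2, 7, 5, 2};
    printf("%d\n", SumSub(a, 3, 10));   /* prints 1: 7 - 2 + 5 = 10        */
    printf("%d\n", SumSub(b, 4, 10));   /* prints 0: no signing reaches 10 */
    return 0;
}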
Unfortunately this is NP-complete even when x is restricted to the value 0, so don't expect a polynomial-time algorithm. To show this I'll give a simple reduction from the NP-hard Partition Problem, which asks whether a given multiset of positive integers can be partitioned into two parts having equal sums:
Suppose we have an instance of the Partition Problem consisting of n positive integers B_1, ..., B_n. Create from this an instance of your problem in which A_i = B_i for each 1 <= i <= n, and set x = 0.
Clearly if there is a partition of B into two parts C and D having equal sums, then there is also a solution to the instance of your problem: Put a + in front of every number in C, and a - in front of every number in D (or the other way round). Since C and D have equal sums, this expression must equal 0.
OTOH, if the solution to the instance of your problem that we just created is YES (TRUE), then we can easily create a partition of B into two parts having equal sums: just put all the positive terms in one part (say, C), and all the negative terms (without the preceding - of course) in the other (say, D). Since we know that the total value of the expression is 0, it must be that the sum of the (positive) numbers in C is equal to the (negated) sum of the numbers in D.
Thus a YES to either problem instance implies a YES to the other problem instance, which in turn implies that a NO to either problem instance implies a NO to the other problem instance -- that is, the two problem instances have equal solutions. Thus if it were possible to solve your problem in polynomial time, it would be possible to solve the NP-hard Partition Problem in polynomial time too, by constructing the above instance of your problem, solving it with your poly-time algorithm, and reporting the result it gives.

use five point stencil to evaluate function with vector inputs and converge to maximum output value

I am familiar with iterative methods on paper, but MATLAB coding is relatively new to me and I cannot seem to find a way to code this.
In code language...
This is essentially what I have:
A = { [1;1] [2;1] [3;1] ... [33;1]
[1;2] [2;2] [3;2] ... [33;2]
... ... ... ... ....
[1;29] [2;29] [3;29] ... [33;29] }
... a 29x33 cell array of 2x1 column vectors, which I got from:
[X,Y] = meshgrid([1:33],[1:29])
A = squeeze(num2cell(permute(cat(3,X,Y),[3,1,2]),1))
[ Thanks to members of stackOverflow who helped me do this ]
I have a function that takes one of these column vectors and returns a single value. I want to institute a 2-D 5-point stencil method that evaluates a column vector and its 4 neighbors and finds the maximum value attained through the function out of those 5 column vectors.
i.e. if I was starting from the middle, the points evaluated would be:
1. A{15,17}(1), A{15,17}(2)
2. A{14,17}(1), A{14,17}(2)
3. A{15,16}(1), A{15,16}(2)
4. A{16,17}(1), A{16,17}(2)
5. A{15,18}(1), A{15,18}(2)
Out of these 5 points, the method would choose the one with the largest returned value from the function, move to that point, and rerun the method. This would continue on until a global maximum is reached. It's basically an iterative optimization method (albeit a primitive one). Note: I don't have access to the optimization toolbox.
Thanks a lot guys.
EDIT: sorry I didn't read the iterative part of your Q properly. Maybe someone else wants to use this as a template for a real answer, I'm too busy to do so now.
One solution using for loops (there might be a more elegant one):
overallmax = 0;
for v = 2:size(A,1)-1
    for w = 2:size(A,2)-1
        % temp is the vertical part of the "plus" stencil (rows v-1..v+1, column w)
        temp = A((v-1):(v+1), w);
        tmpmax = max(cat(1, temp{:}));
        % temp2 is the horizontal part of the "plus" stencil (row v, columns w-1..w+1)
        temp2 = A(v, (w-1):(w+1));
        tmpmax2 = max(cat(1, temp2{:}));
        mxmx = max(tmpmax, tmpmax2);
        if mxmx > overallmax
            overallmax = mxmx;
        end
    end
end
But if you're just looking for max value, this is equivalent to:
maxoverall=max(cat(1,A{:}));

Big-Theta(n) linear sorting algorithm?

Design a linear algorithm to rearrange the elements of a given array of n elements so that all its negative numbers precede any zeroes, and any zeroes precede any positive numbers. It should also be space efficient so that it doesn't require more than a constant amount of additional space.
Everything I am thinking of is much bigger than O(n), and I would love some tips/hints/help/Java code!
Hint: use Quicksort's partition step with 0 as the pivot; see the Wikipedia article on quicksort and look for the in-place version.
I just realized that if you implement the exact version given in the link above, it may not help if you have duplicates of zero. My statement is still true that you need the partition part of Quicksort, but the partitioning should be done as in the Dutch National Flag problem (three-way partitioning). Here is the pseudocode:
// assume 1-based indexing
A[1..n]
p = 0
q = n + 1
i = 1
while i < q
    if A[i] < 0
        swap(A[i], A[++p])
        i++
    else if A[i] > 0
        swap(A[i], A[--q])
    else
        i++
Time complexity: O(n)
Space complexity: O(1)
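A direct translation of that pseudocode into runnable code (in C rather than Java, purely as an illustration of the same three-way partition):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Dutch National Flag partition around 0:
 * negatives first, then zeros, then positives. O(n) time, O(1) space. */
void rearrange(int A[], int n)
{
    int p = -1;        /* A[0..p]   holds negatives          */
    int q = n;         /* A[q..n-1] holds positives          */
    int i = 0;         /* A[i..q-1] is the unexamined region */
    while (i < q) {
        if (A[i] < 0) {
            p++;
            swap(&A[i], &A[p]);
            i++;
        } else if (A[i] > 0) {
            q--;
            swap(&A[i], &A[q]);
        } else {
            i++;
        }
    }
}

int main(void)
{
    int A[] = {3, -1, 0, -7, 0, 5, -2, 0, 4};
    int n = sizeof A / sizeof A[0];
    rearrange(A, n);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);    /* negatives, then zeros, then positives */
    printf("\n");
    return 0;
}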
Look into using a modified version of Radix Sort. The only sorts that can run in linear time are non-comparison sorts (the entries in the list/array are never compared against each other), so that's something else to look at (the proof, via comparison trees of minimum height, shows that any comparison-based sort needs at least n log n comparisons).
If you only require rearranging the items into 3 ranges (negative, zero, and positive):
An easy solution is to count the number of negative, zero, and positive items with a single array iteration (O(n)) (actually you don't need to count the positives if you already know the size of the array).
With a second iteration you swap each item (starting from the first one) into the region corresponding to its range, then increase that region's index; see the sketch after this paragraph.
That's it: no additional memory and Θ(n) time complexity.
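One possible way to realize that second pass is a cycle-style permutation over the three regions (this is just one interpretation of the description above; the classify helper and the region bookkeeping are my own additions):

#include <stdio.h>

/* 0 = negative, 1 = zero, 2 = positive */
static int classify(int v) { return (v < 0) ? 0 : (v == 0) ? 1 : 2; }

static void swap_items(int *a, int *b) { int t = *a; *a = *b; *b = t; }

void rearrange_by_counting(int A[], int n)
{
    /* First pass: count each class. */
    int cnt[3] = {0, 0, 0};
    for (int i = 0; i < n; i++)
        cnt[classify(A[i])]++;

    /* Region boundaries and the next free slot inside each region. */
    int end[3]  = {cnt[0], cnt[0] + cnt[1], n};
    int next[3] = {0, cnt[0], cnt[0] + cnt[1]};

    /* Second pass: walk each region and swap misplaced items into the
     * region they belong to; every swap finalizes one element, so the
     * total work stays O(n). */
    for (int c = 0; c < 3; c++) {
        for (int i = next[c]; i < end[c]; ) {
            int k = classify(A[i]);
            if (k == c) {
                i++;
                next[c] = i;
            } else {
                swap_items(&A[i], &A[next[k]]);
                next[k]++;
            }
        }
    }
}

int main(void)
{
    int A[] = {5, 0, -3, 2, 0, -1, 7, -4};
    int n = sizeof A / sizeof A[0];
    rearrange_by_counting(A, n);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);   /* negatives, then zeros, then positives */
    printf("\n");
    return 0;
}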
