Implementation of a round-up (ceil) function in C

This is a function that implements ceiling division, and it works fine. I want to ask: what is the logic behind adding denominator - 1 to the numerator?
I am new to programming, so please help.
int checkceil(int numerator, int denominator) {
    return (numerator + denominator - 1) / denominator;
}

Since dividing two integers a and b truncates the result, which (for non-negative a and positive b) is the same as flooring it, we need to find some d (delta) such that floor(a/b + d) == ceil(a/b).
How do we find d? Think about it this way:
ceil(a/b) > floor(a/b), except when (a/b) is a whole number. So, we want to bump (a/b) up to (or past) the next whole number by adding d, unless (a/b) is already a whole number. That way floor(a/b + d) will equal ceil(a/b). We need d small enough that it does not push a whole number up to the next one, yet large enough to push every non-whole value up to (or past) the next whole number.
So how large does d need to be?
Assuming (a/b) is not a whole number, the smallest fractional part it can have is (1/b). So to bump (a/b) up to the next whole number it suffices to add d = 1 - (1/b). This is less than 1, so it will not bump (a/b) to the next whole number when (a/b) is already whole, but it is still enough to bump (a/b) to the next whole number when it is not.
Summing it up, we know adding d = 1 - (1/b) to (a/b) will fulfill the equality:
floor(a/b + d) = ceil(a/b). Thus we get:
ceil(a/b) = floor(a/b + d) = floor(a/b + 1 - 1/b) = floor((a + b - 1)/b)
When we write this in code, it becomes:
int myceil = (a + b - 1)/b;

The numerator n is of the form n = a * d + b, where b is the remainder of n / d. By definition the remainder is smaller than d.
In C, the integer division n/d returns a, the quotient with the remainder discarded. When b == 0, adding d - 1 to n leaves the result unchanged: (a*d + 0)/d == (a*d + d - 1)/d. For every other remainder 0 < b < d, adding d - 1 makes the division return the next integer a + 1, i.e. the ceiling.
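As a quick sanity check of that identity, here is a small sketch (in Python rather than C, and not part of the original answers; it relies on Python's // flooring the same way C's division does for non-negative operands):

import math

# Verify (a + b - 1) // b == ceil(a / b) for non-negative a and positive b.
for a in range(0, 200):
    for b in range(1, 30):
        assert (a + b - 1) // b == math.ceil(a / b)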

Related

Rounding to the nearest multiple of a given value

If we have an arbitrary double value f, another value v, and a multiplication factor p, how can I snap f to the nearest value of the form v * p^n?
Example:
f = 3150.0
v = 100.0
p = 2
the sequence of multiples goes like this:
100 (v)
200 (multiplied by p)
400
800
1600
3200
...
f is closest to 3200.0 so the function should return 3200.0
There was actually a name for this, which I seem to have forgotten and maybe this is why I couldn't find such a function.
Let k = floor(log_p(f/v)) where log_p(x) = log(x)/log(p) is the logarithm to base p function. It follows from the properties of floor and log that p^k v <= f < p^(k+1) v, which gives the two closest values to f of the form p^n v.
Which of those two values to choose depends on the exact definition of "nearest" in your use-case. If taken in the multiplicative sense (as would be natural on a log scale), that "nearest" value can be calculated directly as p^n v where n = round(log_p(f/v)) = round(log(f/v)/log(p)).
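A minimal sketch of that calculation (Python; the helper name snap_to_power is made up for illustration and not from the question):

import math

def snap_to_power(f, v, p):
    # Snap f to the nearest value of the form v * p**n,
    # with "nearest" taken on a log scale as described above.
    n = round(math.log(f / v) / math.log(p))
    return v * p ** n

print(snap_to_power(3150.0, 100.0, 2))   # 3200.0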

Maximize number of inversion count in array

We are given an unsorted array A of integers (duplicates allowed) with size N possibly large. We can count the number of pairs with indices i < j, for which A[i] < A[j], let's call this X.
We can change at most one element of the array, with a cost equal to the difference in absolute value (for instance, if we replace the element at index k with the new number K, the cost Y is |A[k] - K|).
We can only replace this element with other elements found in the array.
We want to find the minimum possible value of X + Y.
Some examples:
[1,2,2] should return 1 (change the 1 to 2 such that the array becomes [2,2,2])
[2,2,3] should return 1 (change the 3 to 2)
[2,1,1] should return 0 (because no changes are necessary)
[1,2,3,4] should return 6 (this is already the minimum possible value)
[4,4,5,5] should return 3 (this can be accomplished by changing the first 4 into a 5 or the last 5 into a 4)
The number of pairs can be found with a naive O(n²) solution, here in Python:
def calc_x(arr):
    n = len(arr)
    cnt = 0
    for i in range(n):
        for j in range(i+1, n):
            if arr[j] > arr[i]:
                cnt += 1
    return cnt
A brute-force solution is easily written, for example:
def f(arr):
    best_val = calc_x(arr)
    used = set(arr)
    for i, v in enumerate(arr):
        for replacement in used:
            if replacement == v:
                continue
            arr2 = arr[0:i] + [replacement] + arr[i+1:]
            y = abs(replacement - v)
            x = calc_x(arr2)
            best_val = min(best_val, x + y)
    return best_val
We can count for each element the number of items right of it larger than itself in O(n*log(n)) using for instance an AVL-tree or some variation on merge sort.
However, we still have to search which element to change and what improvement it can achieve.
This was given as an interview question and I would like some hints or insights as how to solve this problem efficiently (data structures or algorithm).
Definitely go for an O(n log n) approach when counting inversions.
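As a hedged illustration of that counting step, here is a Python sketch using a Fenwick tree over the compressed values (calc_x_fast and its helpers are invented names, not from the thread):

def calc_x_fast(arr):
    # Count pairs i < j with arr[i] < arr[j] in O(n log n).
    ranks = {v: k + 1 for k, v in enumerate(sorted(set(arr)))}
    tree = [0] * (len(ranks) + 1)

    def add(i):                        # record one occurrence of rank i
        while i < len(tree):
            tree[i] += 1
            i += i & -i

    def query(i):                      # how many recorded ranks are <= i
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    count = 0
    for v in arr:                      # scan left to right
        count += query(ranks[v] - 1)   # smaller elements already seen
        add(ranks[v])
    return count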
We can see that when you change a value at index k, you can either:
1) increase it, and then possibly reduce the number of inversions with elements bigger than it (to its right), but increase the number of inversions with elements smaller than it (to its left)
2) decrease it (and then the opposite happens)
Let's try not to count x every time you change a value. What do you need to know?
In case 1):
You have to know how many elements on the left are smaller than your new value v and how many elements on the right are bigger than it. You can pretty easily check that in O(n). So what is your x now? You can count it with the following formula:
prev_val - your previous value
prev_x - x that you've counted at the beginning of your program
prev_l - number of elements on the left smaller than prev_val
prev_r - number of elements on the right bigger than prev_val
v - new value
l - number of elements on the left smaller than v
r - number of elements on the right bigger than v
new_x = prev_x + r + l - prev_l - prev_r
In the second case you pretty much do the opposite thing.
Right now you get something like O(n^3) instead of O(n^3 log n), which is probably still bad. Unfortunately that's all I've come up with for now. I'll definitely tell you if I come up with something better.
EDIT: What about the memory limit? Is there any? If not, you can, for each element in the array, make two sets holding the elements before and after the current one. Then you can find the number of smaller/bigger elements in O(log n), making your time complexity O(n^2 log n).
EDIT 2: We can also try to check which element would be best to change to a value v, for every possible value v. You can then make two sets and add/erase elements from them while checking each element, keeping the time complexity at O(n^2 log n) without using too much space. So the algorithm would be:
1) determine every value v to which an element could be changed, and calculate x
2) for each possible value v:
   make two sets, push all elements into the second one
   for each element e in the array:
      add the previous element (if there is one) to the first set and erase e from the second set, then count the number of bigger/smaller elements in sets 1 and 2 and calculate the new x
EDIT 3: Instead of making two sets, you could use prefix sums per candidate value. That's O(n^2) already, but I think we can go even better than this.
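To make that concrete, here is a hedged O(n^2) Python sketch that combines the new_x formula above with a single sweep per candidate value (min_x_plus_y is an invented name; calc_x is the helper from the question):

def min_x_plus_y(arr):
    n = len(arr)
    prev_x = calc_x(arr)                  # baseline X
    best = prev_x                         # changing nothing is always allowed
    # Contribution of each index to X: smaller elements on its left,
    # bigger elements on its right (this part could also be done in O(n log n)).
    prev_l = [sum(1 for e in arr[:k] if e < arr[k]) for k in range(n)]
    prev_r = [sum(1 for e in arr[k+1:] if e > arr[k]) for k in range(n)]
    for v in set(arr):                    # replacements must come from the array
        smaller_left = 0                  # elements before k that are < v
        bigger_right = sum(1 for e in arr if e > v)
        for k in range(n):
            if arr[k] > v:
                bigger_right -= 1         # index k leaves the suffix
            if arr[k] != v:
                new_x = prev_x + smaller_left + bigger_right - prev_l[k] - prev_r[k]
                best = min(best, new_x + abs(v - arr[k]))
            if arr[k] < v:
                smaller_left += 1         # index k joins the prefix
    return best

On the examples from the question this returns 1, 1, 0, 6 and 3, matching the expected values.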

Matlab Error A(I) = B

I am currently looking at Binomial Option Pricing. I have written the code below, which works fine when you enter the variables one at a time. However, entering each set of values is very tedious, and I need to be able to analyse a large set of data. I have created arrays for each of the variables, but I keep getting the error: A(I) = B, the number of elements in B must equal I. The function is shown below.
function C = BinC(S0,K,r,sig,T,N);
% PURPOSE:
% To return the value of a European call option using the Binomial method
%-------------------------------------------------------------------------
% INPUTS:
% S0 - The initial price of the underlying asset
% K - The strike price
% r - The risk free rate of return, expressed as a decimal
% sig - The volatility of the underlying asset, expressed as a decimal
% T - The time to maturity, expressed as a decimal
% N - The number of steps
%-------------------------------------------------------------------------
dt = T/N;
u = exp(sig*sqrt(dt));
d = 1/u;
p = (exp(r*dt) - d)/(u - d);
S = zeros(N+1,1);
% Price of underlying asset at time T
for n = 1:N+1
    S(n) = S0*(d^(N+1-n))*(u^(n-1));
end
% Price of Option at time T
for n = 1:N+1
    C(n) = max(S(n) - K, 0);
end
% Backtrack to get option price at time 0
for i = N:-1:1
    for n = 1:i
        C(n) = exp(-r*dt)*(p*C(n+1) + (1-p)*C(n));
    end
end
disp(C(1))
After importing my data, I entered this in to the command window.
for i=1:20
w(i)= BinC(S0(i),K(i),r(i),sig(i),T(i),N(i));
end
When I enter w, all I get back is w = []. I have no idea how to make the A(I) = B assignment work. I apologise if this is a very silly question, but I am new to Matlab and in need of help. Thanks
Your function computes an entire vector C, but displays only C(1). This display is deceptive: it makes you think the function is returning a scalar, but it's not: it's returning the entire vector C, which you try to store into a scalar location.
The solution is simple: Change your function definition to this (rename the output variable):
function out = BinC(S0,K,r,sig,T,N);
Then at the last line of the function, remove the disp, and replace it with
out = C(1);
To verify all of this (compare with your non-working example), try calling it by itself at the command line, and examine the output.

Optimize parameters of a pairwise distance function in Matlab

This question is related to matlab: find the index of common values at the same entry from two arrays.
Suppose that I have a 1000 by 10000 matrix that contains the values 0, 1, and 2. Each row is treated as a sample. I want to calculate the pairwise distance between those samples according to the formula d = 1 - (1/(2p)) * sum(a/c + b/d), where a, b, c, d can be treated as row vectors of length 10000 according to the definition below, and p = 10000. c and d are probabilities such that c + d = 1.
An example of how to find the values of a, b, c, d: suppose we want to find d between samples i and j; then I look at rows i and j.
If kth entry of row i and j has value 2 and 2, then a=2,b=0,c=1,d=0 (I guess I will assign 0/0=0 in this case).
If kth entry of row i and j has value 2 and 1 or vice versa, then a=1,b=0,c=3/4,d=1/4.
Similar assignments apply to the other cases: 2,0 gives (a=0, b=0, c=1/2, d=1/2); 1,1 gives (a=1, b=1, c=1/2, d=1/2); 1,0 gives (a=0, b=1, c=1/4, d=3/4); and 0,0 gives (a=0, b=2, c=0, d=1).
The Matlab code I have so far uses for loops over i and j, finds the cases above using find, and then creates two arrays for a/c and b/d. This is extremely slow; is there a way to improve the efficiency?
Edit: the distance d is the formula given in this paper on page 13.
Provided those coefficients are fixed, then I think I've successfully vectorised the distance function. Figuring out the formulae was fun. I flipped things around a bit to minimise division, and since I wasn't aware of pdist until #horchler's comment, you get it wrapped in loops with the constants factored out:
% m is the data
[n, p] = size(m);
distance = zeros(n);
for ii = 1:n
    for jj = ii+1:n
        a = min(m(ii,:), m(jj,:));
        b = 2 - max(m(ii,:), m(jj,:));
        c = 4 ./ (m(ii,:) + m(jj,:));        % reciprocal of the question's c
        c(c == Inf) = 0;
        d = 4 ./ (4 - (m(ii,:) + m(jj,:)));  % reciprocal of the question's d
        d(d == Inf) = 0;
        distance(ii,jj) = sum(a.*c + b.*d);
        % distance(jj,ii) = distance(ii,jj); % optional for the full matrix
    end
end
distance = 1 - (1 / (2 * p)) * distance;

How to calculate a sum of sequence of numbers in Prolog

The task is to calculate a sum of natural numbers from 0 to M. I wrote the following code using SWI-Prolog:
my_sum(From, To, _) :- From > To, !.
my_sum(From, To, S) :-
    From = 0,
    Next is 1,
    S is 1,
    my_sum(Next, To, S).
my_sum(From, To, S) :-
    From > 0,
    Next is From + 1,
    S is S + Next,
    my_sum(Next, To, S).
But when I try to calculate:
my_sum(0,10,S), writeln(S).
I got false instead of the correct number. What is going wrong with this example?
S is S + Next is surely false for any Next \= 0. Another, more fundamental problem is that you're doing the computation in 'reverse' order: when From > To and the recursion stops, you don't 'get back' the result. You should add an accumulator (another parameter, propagated through all the recursive calls) and unify it with the partial sum at that last step...
Anyway, should be simpler:
my_sum(From, To, S) :-
    From < To,
    Next is From + 1,
    my_sum(Next, To, T),
    S is T + From.
my_sum(N, N, N).
| ?- my_sum(2, 4, N).
N = 9
I'd write the predicate along these lines, using a worker predicate with an additional accumulator:
sum(X,Y,Z) :-
integer(X) ,
integer(Y) ,
sum(X,Y,0,Z)
.
sum(X,X,T,Z) :- Z is T+X .
sum(X,Y,T,Z) :- X < Y , X1 is X+1 , T1 is T+X , sum(X1,Y,T1,Z) .
sum(X,Y,T,Z) :- X > Y , X1 is X-1 , T1 is T+X , sum(X1,Y,T1,Z) .
This implementation is simple, bi-directional, meaning that sum(1,3,X) and sum(3,1,X) both yield 6 as a result (1+2+3), and tail recursive, meaning that it should be able to handle a range of any size without a stack overflow.
As it happens, there's a purely analytic solution as well:
sum(N, Sum) :- Sum is N * (N+1) / 2.
In use:
?- sum(100, N).
N = 5050.
You used the loop tag so this probably isn't an answer you desire, but it's good to prefer this kind of solution when one exists.
predicates
    sum(integer,integer)
clauses
    sum(0,0).
    sum(N,R):-
        N1=N-1,
        sum(N1,R1),
        R=R1+N.
