while (v != 0)
{
    temp = u % v;
    u = v;
    v = temp;
}
I couldn't understand this code. Why is there u = v and also v = temp?
How can this loop find the greatest common divisor? And what does temp mean?
The algorithm is called the "Euclidean algorithm" (see Wikipedia).
Let x be the greatest common divisor (gcd) of u and v, with u > v.
Then x is also the gcd of v and u - v.
In the algorithm, you keep subtracting the smaller number from the larger number until one of them becomes the gcd x.
temp = u % v means u modulo v (subtracting v from u as often as possible).
So after this step you have smaller numbers temp and v than you started with, and they have the same gcd.
The smaller value is now in temp, so temp < v; otherwise you could continue subtracting.
To be able to reuse the code, you have to make sure the larger value is in u and the smaller value is in v, so v becomes your new u and temp becomes your new v.
To break the loop, v (the former temp) has to become 0. To reach 0, u must be a multiple of v before the modulo operation.
The gcd of a number and its multiple is the number itself, so v, which is the gcd at that point, is what gets stored into u in this case.
Since the gcd x of the numbers never changed along the way, we finally have u == x.
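For example, take u = 48 and v = 18: the first pass computes temp = 48 % 18 = 12 and sets u = 18, v = 12; the second computes temp = 18 % 12 = 6 and sets u = 12, v = 6; the third computes temp = 12 % 6 = 0, so the loop stops with u == 6, which is indeed gcd(48, 18).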
This scheme with temp is commonly used to swap two values.
temp = a;
a = b;
b = temp;
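Putting it all together, here is a minimal, self-contained C sketch of the loop from the question (the function name and the test values are mine, for illustration):

#include <stdio.h>

/* Euclidean algorithm: repeatedly replace (u, v) with (v, u % v)
   until v becomes 0; u then holds the gcd. */
unsigned int gcd(unsigned int u, unsigned int v)
{
    unsigned int temp;
    while (v != 0)
    {
        temp = u % v;  /* remainder after subtracting v from u as often as possible */
        u = v;         /* the old v becomes the new larger number */
        v = temp;      /* the remainder becomes the new smaller number */
    }
    return u;
}

int main(void)
{
    printf("%u\n", gcd(48, 18)); /* prints 6 */
    return 0;
}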
While typing a program as a high-level programmer, n = 0; looks more efficient and clean.
But is n = 0; really more efficient than if (n != 0) n = 0;?
when n is more likely to be 0.
when n is less likely to be 0.
when n is absolutely uncertain.
Language: C (C90)
Compiler: Borland's Turbo C++
Minimal reproducible code
#include <stdio.h>

int main(void)
{
    int n;              // 2 bytes on this compiler
    n = 0;              // Expression 1
    scanf("%d", &n);    // Absolutely uncertain
    if (n != 0) n = 0;  // Expression 2
    return 0;
}
Note: I have mentioned the above code only for your reference; please don't go with its flow.
If you're not comfortable with the above language/standard/compiler, then please feel free to explain the above 3 cases in your preferred language/standard/compiler.
If n is a 2's complement integral type or an unsigned integral type, then writing n = 0 directly will certainly be no slower than the version with the condition check, and a good optimising compiler will generate the same code. Some compilers compile assignment to zero as XOR'ing a register value with itself, which is a single instruction.
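To check this on your own compiler, one can compile two tiny functions like the sketch below (the function names and the global variable are mine, purely for illustration) and compare the generated assembly at a high optimisation level:

/* A minimal sketch for comparing the two forms in compiler output. */
int n;

void assign_always(void)
{
    n = 0;              /* Expression 1 */
}

void assign_checked(void)
{
    if (n != 0)         /* Expression 2 */
        n = 0;
}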
If n is a floating-point type, a ones' complement integral type, or a sign-magnitude integral type, then the two code snippets differ in behaviour, for example when n is a negative zero. (Acknowledgement: @chqrlie.) Also, if n is a pointer on a system that has multiple null pointer representations, then if (n != 0) n = 0; will not assign to n when n holds one of the various null pointers; n = 0; imparts a different functionality.
"will always be more efficient" is not true. Should reading n have a low cost, writing n a high cost (Think of re-writing non-volatile memory that needs to re-write a page) and is likely n == 0, then n = 0; is slower, less efficient than if (n != 0) n = 0;.
n = 0;
will always be more efficient as there is no condition check.
https://godbolt.org/z/GEzfcD
Is there an option to store a hex complex number in C?
The remainder of a number divided by 3 equals the sum of its digits modulo 3.
Once you calculate the remainders of the two numbers (no need to represent each number's full value), sum them. If the result modulo 3 is zero, the sum of the numbers is a multiple of 3.
Well, I guess you are not getting the problem: reading the input is easy, but processing it is not.
No type would be big enough to accurately hold the value, since these numbers are large. Why not store it as a string?
You can store it as a char array and use fgets for that (this is only needed if you want to print the number; otherwise it is not). You can also use getchar() and compute the sum as shown in the proof here.
After that, just do one thing: check each digit character and add its value to a running sum mod 3. (The resulting mod sum tells you about the divisibility.) That is exactly what you want.
What I mean is:
(A + B) mod 3
= ( A(n)A(n-1)A(n-2)...A(1)A(0)
+ B(m)B(m-1)B(m-2)...B(1)B(0) ) mod 3
= ( [ A(n) + A(n-1) + A(n-2) + ... + A(1) + A(0) ] mod 3
+ [ B(m) + B(m-1) + B(m-2) + ... + B(1) + B(0) ] mod 3 ) mod 3
Rules:
if a≡b (mod m) and c≡d (mod m) then
a+c ≡ b+d (mod m) and
ac ≡ bd (mod m)
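As a quick check: take A = 14 and B = 25. The digit sums are 1 + 4 = 5 and 2 + 5 = 7, and (5 mod 3 + 7 mod 3) mod 3 = (2 + 1) mod 3 = 0; indeed A + B = 39 = 3 · 13.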
Example code
#include <stdio.h>
#include <ctype.h>

int main(void)
{
    int c, sum = 0;
    /* digit sum of the first number, kept reduced mod 3 */
    while (isdigit(c = getchar()))
        sum = (sum + (c - '0')) % 3;
    /* digit sum of the second number, added in the same way */
    while (isdigit(c = getchar()))
        sum = (sum + (c - '0')) % 3;
    printf("%s\n", sum ? "Non-divisible" : "Divisible");
    return 0;
}
I have this question that requires multiplying two large numbers. I thought of adding the first number A, B times (B is the second number). I made the algorithm for adding two large numbers, so I thought this would work. The question is: would this algorithm take a long time, adding a number to itself that many times?
The question is: would this algorithm take a long time, adding a number to itself that many times?
Yes. That's a very slow method of multiplying numbers, since you need to do a additions when you add b to itself a times. For better performance with a still reasonably simple algorithm, consider a shift-and-add procedure like this (multiplying a and b, putting the result in q):
1. q ← 0, i ← 0
2. if 2^i > a then return q
3. if a & (1 ≪ i) then q ← q + (b ≪ i)
4. i ← i + 1
5. goto 2
Fast algorithms for this kind of problem are Karatsuba multiplication, Toom-Cook multiplication and Schönhage-Strassen multiplication.
Check this algorithm:
#include <utility> // for std::swap

long long multiply(long long a, long long b)
{
    if (a < b)
        std::swap(a, b); // make b the smaller factor
    long long c = 0;
    for (int i = 0; (1ll << i) <= b; ++i) // iterate over the bits of b
    {
        if (((b >> i) & 1ll) == 1ll) // bit i of b is set
        {
            c += a << i; // add a * 2^i
        }
    }
    return c;
}
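For example, multiply(6, 7) first swaps the arguments so that a = 7 and b = 6 (binary 110); bits 1 and 2 of b are set, so the result is (7 << 1) + (7 << 2) = 14 + 28 = 42.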
It runs in a number of steps logarithmic in min(a, b).
If your numbers are really large, Fast Fourier Transform (https://en.wikipedia.org/wiki/Fast_Fourier_transform) and Karatsuba algorithm (https://en.wikipedia.org/wiki/Karatsuba_algorithm) might help you.
I have been looking at some code that fills arrays with samples created using an IFFT (Inverse Fast Fourier Transform).
When the author iterates over the array, he uses a for loop whose terminating condition looks like this:
int idx;
for (idx = 1; idx < (tableLen >> 1); idx++) {
freqWaveRe[idx] = 1.0 / idx; // sawtooth spectrum
freqWaveRe[tableLen - idx] = -freqWaveRe[idx]; // mirror
}
Can you explain the terminating condition:
idx < (tableLen >> 1)
Why would you do something like this and what does it mean?
The bit shift operator used in this expression:
idx < (tableLen >> 1)
terminates the for loop after it has iterated through the first half of the array. The right shift operator moves the bits of the value one position to the right, and moving them one position to the right divides the value by two.
1010 in binary = 10
If we right shift it one bit we get:
0101 in binary = 5
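A minimal C check of that equivalence (assuming the value is non-negative):

#include <stdio.h>

int main(void)
{
    int tableLen = 10;             /* 1010 in binary */
    printf("%d\n", tableLen >> 1); /* prints 5 */
    printf("%d\n", tableLen / 2);  /* also prints 5 */
    return 0;
}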
A couple more things:
Tony D mentioned some comments made that this 'will not work well if idx is negative'. Negative numbers are represented differently, often with the highest bit carrying the sign, and in C the result of right-shifting a negative value is implementation-defined, so the shift can lose that information and cause a bit of a mess.
Tony D also said "it was historically an optimisation when bit-shifting opcodes executed faster than division, and optimisers couldn't be trusted".
Given an array of N positive integers, let the minimum element be L and the sum of all elements be S.
I need to find out whether, for each integer X (where X is between L and S inclusive), a subset of the array can be chosen such that the sum of the elements in this subset equals X.
EXAMPLE:
Let N = 5 and the array be {4, 8, 2, 1, 16}. Here every value between 1 and 31 can be made, so the answer is "yes".
Now suppose N = 4 and the array is {5, 1, 2, 7}. Among the values between 1 and 15, the sums 4 and 11 cannot be made, so the answer here is "no".
I know how to find the minimum number that can't be made as a subset sum of this array, but I don't know how to solve this problem.
First, does the array have only one element? If so, the answer is yes.
Otherwise, find the minimum impossible sum. Is it greater than S? If so, the answer is yes. Otherwise, the answer is no. (If the minimum is less than L, the array doesn't contain 1, and S-1 is an impossible sum.)
To find the lowest impossible sum, we sort the input, then find the lowest impossible sum of each prefix of the array. In Python:
def lowest_impossible_sum(nums):
    nums = sorted(nums)
    partial_sum = 0  # all sums from 0 to partial_sum are achievable so far
    for num in nums:
        if num > partial_sum + 1:
            return partial_sum + 1  # gap found: partial_sum + 1 cannot be made
        partial_sum += num
    return partial_sum + 1  # every sum up to the total is achievable
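For the second array from the question, {5, 1, 2, 7}, the sorted order is [1, 2, 5, 7]: after processing 1 and 2 the partial sum is 3, and the next element 5 > 3 + 1, so the function returns 4. Since 4 <= S = 15, the answer is "no", matching the example.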
Proof of correctness by induction:
Let A be the sorted array. If A[0] > 1, then 1 is the lowest impossible sum. Otherwise, the elements of A[:1] can produce all sums up to sum(A[:1]).
Suppose for induction that subsets of A[:k] can be selected to produce all sums up to sum(A[:k]).
If A[k] > sum(A[:k]) + 1, then sum(A[:k]) + 1 is the lowest impossible sum; it can't be produced by a subset of A[:k], and adding elements that aren't in A[:k] won't help, as they're all too big.
If A[k] <= sum(A[:k]) + 1, then subsets of A[:k+1] can produce every sum up to sum(A[:k+1]). Every sum up to sum(A[:k]) can already be produced by the inductive hypothesis, and sums from sum(A[:k]) + 1 to sum(A[:k+1]) can be produced by selecting A[k] and a suitable subset of A[:k] adding up to what's left.
Let x be the first index such that A[x] > sum(A[:x]) + 1, or len(A) if there is no such index. By induction, every sum up to sum(A[:x]) is possible. However, whether because x is past the end of the array or because A[x] > sum(A[:x]) + 1, it is impossible to produce the sum sum(A[:x]) + 1. Thus, we need merely search for x and return sum(A[:x]) + 1. That is what the algorithm does.
First sort all the elements in the array. If you want to get all the values between L and S from the elements of the array, then L must be 1 and the elements should be of the form 2^i. The greatest element, however, need not be of the form 2^i, because the sum S need not be of the form 2^i - 1.