How can I reduce the complexity of this code (given below)? [closed] - c

Problem statement:
Find the sum of all the multiples of 3 or 5 below N.
Input Format:
First line contains T, the number of test cases. This is followed by T lines, each containing an integer, N.
Constraints :
1 <= T <= 10^5
1 <= N <= 10^9
Output Format:
For each test case, print an integer that denotes the sum of all the multiples of 3 or 5 below N.
This is my code:
#include <stdio.h>

int main() {
    long t, i, x;
    scanf("%ld", &t);
    long y[t];
    for (i = 0; i < t; i++) {
        scanf("%ld", &x);
        long j, sum = 0;
        if (x <= 3)
            sum = 0;
        else if (x <= 5)
            sum = 3;
        else {
            for (j = 3; j < x; j += 3)
                sum = sum + j;
            for (j = 5; j < x; j += 5)
                if (j % 3 != 0)
                    sum = sum + j;
        }
        y[i] = sum;
    }
    for (i = 0; i < t; i++) {
        printf("%ld\n", y[i]);
    }
    return 0;
}

There is a solution with a time complexity of O(T):
Use the formula for sum of integers 1+2+3+...+n = n*(n+1)/2.
Also note that 3+6+9+...+(3*n) = 3*(1+2+3+...+n) = 3*n*(n+1)/2.
Figure out how many numbers divisible by 3 there are. Calculate their sum.
Figure out how many numbers divisible by 5 there are. Calculate their sum.
Figure out how many numbers divisible by 15 (=3*5) there are. Calculate their sum.
Total sum is sum3 + sum5 - sum15. The numbers divisible by both 3 and 5 (hence by 15) are both in sum3 and in sum5, so unless we subtract sum15 they would be counted twice.
Note that the sum will overflow a 32 bit integer, so make sure you use a 64 bit integer type.
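As a sketch of what that looks like in code (my own illustration, not part of the original answer; SumBelow is a hypothetical helper name), using long long so the 64-bit requirement is met:

#include <stdio.h>

/* Sum of the positive multiples of f strictly below n:
   f + 2f + ... + kf = f * k * (k + 1) / 2, with k = (n - 1) / f. */
static long long SumBelow(long long n, long long f)
{
    long long k = (n - 1) / f;
    return f * k * (k + 1) / 2;
}

int main(void)
{
    long t;
    if (scanf("%ld", &t) != 1)
        return 1;
    while (t-- > 0) {
        long long n;
        if (scanf("%lld", &n) != 1)
            return 1;
        printf("%lld\n", SumBelow(n, 3) + SumBelow(n, 5) - SumBelow(n, 15));
    }
    return 0;
}

Each test case is O(1), so the whole run is O(T).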

You can achieve constant complexity if you employ the Ancient Power of Mathematics.
First figure out how many numbers divisible by three there are.
Compute their sum.
(You do not need any loops for this, you only need elementary arithmetic.)
Repeat for five and fifteen.
The total is a simple expression involving those three sums.
The details are left as an exercise for the reader.

Related

Given a range find the sum of number that is divisible by 3 or 5

The code below works perfectly fine for smaller numbers, but it takes too long for larger ones. Please give me a suggestion.
#include <stdio.h>

int main()
{
    int num;
    int sum = 0;
    scanf("%d", &num);
    for (int i = 1; i <= num; i++)
    {
        if (i % 3 == 0 || i % 5 == 0)
            sum += i;
    }
    printf("%d", sum);
}
I need efficient code for this. Try to reduce the time taken by the code.
The answer can be computed with simple arithmetic without any iteration. Many Project Euler questions are intended to make you think about clever ways to find solutions without just using the raw power of computers to chug through calculations. (This was Project Euler question 1, except the Project Euler problem specifies the limit using less than instead of less than or equal to.)
Given positive integers N and F, the number of positive multiples of F that are less than or equal to N is ⌊N/F⌋. (⌊x⌋ is the greatest integer not greater than x.) For example, the number of multiples of 5 less than or equal to 999 is ⌊999/5⌋ = ⌊199.8⌋ = 199.
Let n be this number of multiples, ⌊N/F⌋.
The first multiple is F and the last multiple is n•F. For example, with 1000 and 5, the first multiple is 5 and the last multiple is 200•5 = 1000.
The multiples are evenly spaced, so the average of all of them equals the average of the first and the last, so it is (F + nF)/2.
The total of the multiples equals their average multiplied by the number of them, so the total of the multiples of F less than or equal to N is n • (F + n•F)/2.
Adding the sum of multiples of 3 and the sum of multiples of 5 includes the multiples of both 3 and 5 twice. We can correct for this by subtracting the sum of those numbers. Multiples of both 3 and 5 are multiples of 15.
Thus, we can compute the requested sum using simple arithmetic without any iteration:
#include <stdio.h>

static long SumOfMultiples(long N, long F)
{
    long NumberOfMultiples = N / F;
    long FirstMultiple = F;
    long LastMultiple = NumberOfMultiples * F;
    return NumberOfMultiples * (FirstMultiple + LastMultiple) / 2;
}

int main(void)
{
    long N = 1000;
    long Sum = SumOfMultiples(N, 3) + SumOfMultiples(N, 5) - SumOfMultiples(N, 3*5);
    printf("%ld\n", Sum);
}
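Note that, as written, this sums the multiples up to and including N = 1000 and prints 234168. The original Project Euler problem asks for multiples strictly below 1000, so for that phrasing you would pass 999 (or use N − 1), which yields 233168.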
As you do other Project Euler questions, you should look for similar ideas.

Adding two large numbers together [closed]

I have two large numbers stored in arrays (p->numbers[50] and q->numbers[50]) and printed out in hexadecimal:
1319df046
111111111
When added together, I am returned with, in hexadecimal,
242af'11'257
However, apparently my answer "should" be
242af0157
There is a discrepancy when adding the f and 1 together: the sum is 17, and my code prints 11 (since 17 is 11 in hexadecimal). I'm not sure why the output should have a 0 there instead of 11.
int sum = 0;
int carry = 0;
for (i = 9; i >= 0; i--)
{
    sum = p->numbers[i] + q->numbers[i];
    sum = sum + carry;
    answer[i] = sum;
    carry = sum / 10;
    printf("%x", answer[i]);
}
I reproduced your results by defining the arrays of digits as follows:
int p[] = {0x6,0x4,0x0,0xf,0xd,0x9,0x1,0x3,0x1,0x0};
int q[] = {0x1,0x1,0x1,0x1,0x1,0x1,0x1,0x1,0x1,0x0};
This is not storing the number as decimal, but as hexadecimal digits.
With that in mind, there are three problems here:
First, the way you're calculating the carry is incorrect. Because the digits are hexadecimal and not decimal, the carry should be sum / 16 instead of sum / 10.
Second, when there is a carry involved, you're not removing the high digit of the sum. In one place, you have 0xf + 0x1 + 0x1 = 0x11 and both characters are being printed. You need to set the digit as answer[i] = sum % 16;
Third, you're adding the digits from largest to smallest. You need to add them from smallest to largest in one loop, then print the digits from largest to smallest in a separate loop.
With those fixes in place, your code should look like this:
for (i = 0; i < 10; i++)
{
    sum = p[i] + q[i];
    sum = sum + carry;
    answer[i] = sum % 16;
    carry = sum / 16;
}
for (i = 9; i >= 0; i--) {
    printf("%x", answer[i]);
}
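For reference, here is a self-contained version of the fixed program (my own assembly of the pieces above, using the reconstructed digit arrays rather than the poster's actual struct):

#include <stdio.h>

int main(void)
{
    /* One hex digit per element, least significant digit first,
       as reconstructed in the answer above. */
    int p[] = {0x6,0x4,0x0,0xf,0xd,0x9,0x1,0x3,0x1,0x0};
    int q[] = {0x1,0x1,0x1,0x1,0x1,0x1,0x1,0x1,0x1,0x0};
    int answer[10];
    int i, sum, carry = 0;

    for (i = 0; i < 10; i++) {
        sum = p[i] + q[i] + carry;
        answer[i] = sum % 16; /* keep only the low hex digit */
        carry = sum / 16;     /* carry the high digit to the next place */
    }

    for (i = 9; i >= 0; i--)
        printf("%x", answer[i]);
    printf("\n"); /* prints 0242af0157 */
    return 0;
}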

complex hex numbers in C [closed]

Is there an option to store a hex complex number in C?
The remainder of a number divided by 3 equals the sum of its digits modulo 3.
Once you calculate the remainders for the two numbers this way (there is no need to represent each number's full value), sum them. If the result modulo 3 is zero, the sum of the two numbers is a multiple of 3.
Well, I guess you are not getting the problem: reading the input is easy, but processing it is not. No built-in integer type would be big enough to hold these values accurately; they are simply too large. So why not store each one as a string?
You can store it in a char array and read it with fgets (this is only needed if you want to print the number back; otherwise it is not). You can also use getchar() and accumulate the digit sum as shown in the proof here.
As you read, take each digit character, add its value to a running sum, and keep reducing that sum mod 3. The resulting remainder tells you about divisibility, which is exactly what you want.
What I mean is this, where A(n)...A(1)A(0) are the decimal digits of A and B(m)...B(1)B(0) are the decimal digits of B:

(A + B) mod 3
= ( A(n)A(n-1)A(n-2)...A(1)A(0)
  + B(m)B(m-1)B(m-2)...B(1)B(0) ) mod 3
= ( [ A(n) + A(n-1) + A(n-2) + ... + A(1) + A(0) ] mod 3
  + [ B(m) + B(m-1) + B(m-2) + ... + B(1) + B(0) ] mod 3 ) mod 3
Rules:
if a≡b (mod m) and c≡d (mod m) then
a+c ≡ b+d (mod m) and
ac ≡ bd (mod m)
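These rules are also why digit sums work in the first place: 10 ≡ 1 (mod 3), so by the product rule every power of ten is ≡ 1 (mod 3), and hence a number is congruent to the plain sum of its digits. For example, 472 = 4·100 + 7·10 + 2 ≡ 4 + 7 + 2 = 13 ≡ 1 (mod 3), and indeed 472 = 157·3 + 1.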
Example code (note that isdigit requires ctype.h):
#include <stdio.h>
#include <ctype.h>

int main(void) {
    int c, sum = 0;
    /* Digit sum of the first number, reduced mod 3 as we go. */
    while (isdigit(c = getchar()))
        sum += (c - '0'), sum %= 3;
    /* Digit sum of the second number added on top. */
    while (isdigit(c = getchar()))
        sum += (c - '0'), sum %= 3;
    printf("%s\n", sum ? "Non-divisible" : "divisible");
    return 0;
}
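For example, with the input 6 9 (the two numbers separated by a space or newline), the program prints divisible, since 6 + 9 = 15 is a multiple of 3.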

What does the remainder operation (x % y) do when x < y? [closed]

This program correctly prints whether a number is even or odd ...
#include <stdio.h>

int main(void)
{
    int n;
    printf("Please enter a number:");
    scanf("%d", &n);
    if (n % 2 == 0)
        printf("%d is even", n);
    else
        printf("%d is odd", n);
    return 0;
}
I don't understand how n % 2 can give a meaningful result when n is less than two. % is the remainder operation, right? If n is less than two, how can you divide it by two at all?
The operator % performs the modulus (or remainder) operation. The remainder of dividing a number by 2 (when that number is less than 2) is the number itself (with the quotient being 0). For example, one divided by two has a quotient of 0 and a remainder of 1 so 1%2 = 1.
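A quick demonstration (my own snippet, not part of the original answer):

#include <stdio.h>

int main(void)
{
    /* When x < y, the quotient x / y is 0, so the remainder x % y is x itself. */
    printf("1 %% 2  = %d\n", 1 % 2);  /* prints 1, so 1 is odd  */
    printf("0 %% 2  = %d\n", 0 % 2);  /* prints 0, so 0 is even */
    printf("7 %% 10 = %d\n", 7 % 10); /* prints 7 */
    return 0;
}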

How to demonstrate a better average? [closed]

Here's a question on Page 60-61 in A Book On C(3rd Edition) by Al Kelley/Ira Pohl:
The following code fragment shows two different methods of computing a running average:
int i;
double x;
double avg = 0.0, sum = 0.0;
double navg;
for (i = 1; scanf("%lf", &x) == 1; ++i)
{
    avg += (x - avg) / i;
    sum += x;
    navg = sum / i;
}
The original question as written in the book is: If you input some "ordinary" numbers, the avg and the navg seem to be identical. Demonstrate experimentally that the avg is better, even when sum does not overflow.
My question as a beginning programmer is:
What is the criteria for a "better" algorithm? I believe precision and the amount of running time are two key factors, but are there any other things that make an algorithm "better"?
In terms of precision and running time, how can I demonstrate experimentally that when overflow is ruled out, avg is still a better method than navg is? Should I use numbers that are "out of the ordinary", e.g., of largely different magnitudes?
The two algorithms do not differ much in running time, but compared with navg, avg is better in precision.
(1) Running time:
The following two pieces of code demonstrate that, at a magnitude of 1000000 iterations, the two algorithms do not differ much.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    int i;
    double x, sum = 0, avg = 0;
    srand(time(NULL));
    for (i = 0; i < 1000000; i++)
    {
        x = rand() % 10 + 1;
        sum += x;
    }
    avg = sum / i;
    printf("%lf\n", avg);
    printf("time use:%lf\n", (double)clock() / CLOCKS_PER_SEC);
}
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    double x, avg = 0;
    int i;
    srand(time(NULL));
    for (i = 0; i < 1000000; i++)
    {
        x = rand() % 10 + 1;
        avg += (x - avg) / (i + 1);
    }
    printf("%lf\n", avg);
    printf("time use:%lf\n", (double)clock() / CLOCKS_PER_SEC);
}
(2) Precision:
The code below demonstrates that, when we add up the differences between avg and every data value, the result is 0, while for navg the result is -2.44718e-005; this means that avg is better in precision.
#include <stdlib.h>
#include <stdio.h>

int main()
{
    static double data[1000000];
    double sum, avg, check_value;
    int i;
    int n = sizeof(data) / sizeof(data[0]);

    /* Fill the array before computing either average, so that both
       methods are measured against the same data. */
    for (i = 0; i < n; ++i)
    {
        data[i] = 1.3;
    }

    avg = 0;
    for (i = 0; i < n; ++i)
    {
        avg += (data[i] - avg) / (i + 1);
    }
    check_value = 0;
    for (i = 0; i < n; ++i)
    {
        check_value = check_value + (data[i] - avg);
    }
    printf("\navg += (x[i] - avg) / i:\tavg = %g\t check_value = %g", avg, check_value);

    sum = 0;
    for (i = 0; i < n; ++i)
    {
        sum += data[i];
    }
    avg = sum / n;
    check_value = 0;
    for (i = 0; i < n; ++i)
    {
        check_value = check_value + (data[i] - avg);
    }
    printf("\n avg = sum / N: \tavg = %g\t check_value = %g", avg, check_value);

    getchar();
}
I think this is a valid question, although not too well phrased. A problem is that even the question referred to by furins was not phrased well and was closed before receiving a good answer.
The question itself, however, is interesting, especially since it comes from a book, so it could lead many people in one direction or another.
I think neither algorithm is particularly better. With the naive average it looks like we will lose precision, or that when averaging numbers differing by several orders of magnitude we even lose some numbers entirely, but the same can probably be demonstrated for the other algorithm as well, just with different sets of input data.
So, especially since it is from an existing book, I think this is a perfectly valid question deserving a decent answer.
Let me try to illustrate what I think about the two algorithms with an example. Imagine you have 4 numbers of roughly the same magnitude and you want to average them.
The naive method sums them up first, one after another. After summing the first two you obviously lost one bit of precision at the low end (since you now probably have one larger exponent). When you add the last number, you have 2 bits lost (which bits are now used for representing the high part of the sum). But then you divide by four, which in this case essentially just subtracts 2 from your exponent.
What did we lose during this process? It is easier to answer if we ask instead: what if all the numbers were truncated by 2 bits first? In that case the last two bits of the fraction of the resulting average will obviously be zero, and up to 2 additional bits' worth of error may be introduced (if all the truncated bits happened to be ones in the original numbers, compared to if they were zeros). So essentially, if the sources were single-precision floats with 23 bits of fraction, the resulting avg would now have about 19 bits' worth of precision.
The actual result from the naive method is better, though, as the first numbers summed didn't lose that much precision.
In the differential method, in each iteration the appropriately weighted difference is added to the sum. If the numbers are of the same magnitude, this difference will most likely be about one magnitude below them. It is then divided by the current count; nothing is lost in that operation, but the resulting difference for the last number (with i=4 in this example) may be about 3 magnitudes below the source numbers. We add this to the running average, which is around the same magnitude as the original numbers.
So with the differential method, in this example, adding the last number seems to have lost about 3 bits of precision; over all 4 numbers it may even look like we are down about 5 essentially lost bits of precision, maybe even worse than the naive method?
The differential method is harder to follow; maybe I made some mistake in my assumptions. But I think it's clear: it just doesn't seem valid to regard one or the other as performing better, and if one does, it may depend on the layout and magnitude differences of the data.
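To make that last point concrete, here is a small experiment of my own (not from the book). The data is deliberately skewed: one huge value followed by many small ones, chosen so the naive float sum stops changing entirely while the incremental method keeps tracking the mean. With a differently shaped data set the comparison could come out the other way.

#include <stdio.h>

int main(void)
{
    enum { N = 100000 };
    float navg_sum = 0.0f, avg = 0.0f;
    double exact_sum = 0.0; /* double-precision reference */
    int i;

    for (i = 1; i <= N; i++) {
        float x = (i == 1) ? 1.0e8f : 1.0f;
        navg_sum += x;        /* naive: sum first, divide at the end */
        avg += (x - avg) / i; /* incremental running average */
        exact_sum += x;
    }

    /* Each +1.0f is below half an ulp of 1e8, so the float sum is stuck
       at 1e8 and navg comes out low; avg stays much closer to exact. */
    printf("navg  = %.6f\n", navg_sum / N);
    printf("avg   = %.6f\n", avg);
    printf("exact = %.6f\n", exact_sum / N);
    return 0;
}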
Note that you're dividing by zero in your for() loop even if you do ++i
