Calculate possible combinations in C

I've started studying C and I'm trying to practice it by developing a small application. Could you give me any tips about what to do here?
I want to buy shoes from three different brands (brandA = 50; brandB = 100; brandC = 150). I need to spend exactly 2000 dollars and buy exactly 20 pairs of shoes.
How could I write a program to display all possible combinations?
E.g. brandA (10 shoes), brandB (0 shoes), brandC (10 shoes);
brandA (1 shoe), brandB (3 shoes), brandC (11 shoes), etc.
Please, I don't want the full code now but tips about how to do it.
I really appreciate any help. Thanks!
I've updated my post to include some code. Does this code make any sense?
#include <stdio.h>

int main(void) {
    int brandA = 50, brandB = 100, brandC = 150, ba, bb, bc;
    for (ba = 0; ba <= 20; ba++) {
        for (bb = 0; bb <= 20; bb++) {
            for (bc = 0; bc <= 20; bc++) {
                if (ba + bb + bc == 20 && (ba * brandA) + (bb * brandB) + (bc * brandC) == 2000) {
                    printf("You can buy %d brandA, %d brandB, %d brandC\n", ba, bb, bc);
                }
            }
        }
    }
    return 0;
}

First, you need an algorithm:
1. Start with zero shoes of each brand: brandA = 0; brandB = 0; brandC = 0.
2. Check the total quantity: 0 + 0 + 0 = 0.
3. If it is not 20 pairs, skip it.
4. If it equals 20 pairs (for example brandA = 5; brandB = 5; brandC = 10), check the total price.
5. If the total price equals 2000, display it; if not, skip it.
6. Increment brandA up to 20, repeating steps 2-5 each time.
7. Increment brandB up to 20, repeating steps 2-6 each time.
8. Increment brandC up to 20, repeating steps 2-7 each time.
Note: you can implement this with three nested 'for' loops :)

If you are a beginner, I suggest starting with backtracking and recursion.
Even though backtracking is a costly technique, it's great for a beginner to see how recursion can provide a simple yet powerful solution to a problem.
Here are some resources for you to start: http://web.cse.ohio-state.edu/~gurari/course/cis680/cis680Ch19.html
And if you are serious about programming you should also read some books about algorithms and data structures since you will rely heavily on these basic fundamentals:
1. The Algorithm Design Manual
2. The Pragmatic Programmer: From Journeyman to Master

Now that you have it working, I feel no guilt in suggesting a potential refinement. Rather than a pure brute-force triple loop, as others have mentioned, you can use the relationship between a, b and c to eliminate the third loop. Remember that in C there are generally many ways to approach any given problem and many ways to handle the output. As long as the logic and syntax are correct, the only difference will be in the efficiency of the algorithms. Here is an example of eliminating the third loop:
#include <stdio.h>

int main (void) {

    int ca = 50;        /* cost of brand a  */
    int cb = 100;       /* cost of brand b  */
    int cc = 150;       /* cost of brand c  */
    int budget = 2000;
    int pairs = 20;
    int a, b, c, cost;

    for (a = 0; a <= pairs; a++)
        for (b = 0; b <= pairs - a; b++)
        {
            c = pairs - (a + b);    /* the third count is implied */
            if ((cost = a * ca + b * cb + c * cc) != budget)
                continue;

            printf ("\n a (%2d) * %3d = %4d"
                    "\n b (%2d) * %3d = %4d"
                    "\n c (%2d) * %3d = %4d\n",
                    a, ca, a * ca, b, cb, b * cb, c, cc, c * cc);
            printf (" ===================\n (%d) %d\n", pairs, budget);
        }

    return 0;
}
Compiling
Since you are new to C, when you compile your code, make sure you always compile with warnings enabled. The compiler warnings are there for a reason, and there are very few circumstances in which you can rely on code that does not compile cleanly. At minimum, you will want to compile with -Wall -Wextra enabled (gcc). You can also include -pedantic if you want to check your code against virtually all possible warnings. For example, to compile the code above:
$ gcc -Wall -Wextra -pedantic -o bin/shoes shoes.c
If you want to add optimizations to the fullest extent, you can add:
-Ofast (-O3 with gcc < 4.6)
Output
$ ./bin/shoes
a ( 1) * 50 = 50
b (18) * 100 = 1800
c ( 1) * 150 = 150
===================
(20) 2000
a ( 2) * 50 = 100
b (16) * 100 = 1600
c ( 2) * 150 = 300
===================
(20) 2000
a ( 3) * 50 = 150
b (14) * 100 = 1400
c ( 3) * 150 = 450
===================
(20) 2000
<..snip..>
a ( 9) * 50 = 450
b ( 2) * 100 = 200
c ( 9) * 150 = 1350
===================
(20) 2000
a (10) * 50 = 500
b ( 0) * 100 = 0
c (10) * 150 = 1500
===================
(20) 2000
Good luck learning C. There is no other comparable language (assembler excluded) that gives you the precise low-level control that you have in C. But that precise control doesn't come for free. There is a learning curve involved, and there is a bit more to cover in C before you will lose that "fish out of water" feeling and feel comfortable with the language. The benefit of learning C, with the low-level access it provides, is that it will greatly improve your understanding of how programming works. That knowledge is applicable to all other programming languages (no matter how hard the other languages work to hide the details from you). C is time well spent.

Related

Can you explain this algorithm that calculates an average for noise?

I am working on embedded programming with code written by other people.
This algorithm is used to calculate an average for a microphone and an accelerometer:
sound_value_Avg = 0;
sound_value = 0;
memset((char *)soundRaw, 0x00, SOUND_COUNT * 2);
for (int i2 = 0; i2 < SOUND_COUNT; i2++)
{
    soundRaw[i2] = analogRead(PIN_ANALOG_IN);
    if (i2 == 0)
    {
        sound_value_Avg = soundRaw[i2];
    }
    else
    {
        sound_value_Avg = (sound_value_Avg + soundRaw[i2]) / 2;
    }
}
sound_value = sound_value_Avg;
The accelerometer code is similar to this:
n1 = p1
(n2 + p1) / 2 = p2
(n3 + p2) / 2 = p3
(n4 + p3) / 2 = p4
...
avg(n1~nx) = px
It does not seem to be correct.
Can someone explain why he used this algorithm?
Is it a technique specific to periodic signals, like noise or vibration?
It appears to be a flawed attempt at maintaining a cumulative mean. The error is in believing that:
A(n+1) = (A(n) + s) / 2
where s is the new sample, when in fact it should be:
A(n+1) = (n * A(n) + s) / (n + 1)
However it is computationally simpler to maintain a running sum S and generate the average in the usual manner:
S = S + s
A(n) = S / n
It is possible that the intent was to avoid overflow when the sum grows large, but the attempt is mathematically flawed.
To see how wrong this is, consider the samples 20, 21, 22:

 n    s    True running avg.   (A(n) + s) / 2
 --------------------------------------------
 1   20        20                  20
 2   21        20.5                20.5
 3   22        21                  21.25

(The two agree at n = 2 only by coincidence; from n = 3 onward the flawed value over-weights the most recent samples.)
In this case, however, nothing is done with the intermediate mean value, so you don't in fact need to maintain a running mean at all. You simply need to accumulate a running sum and calculate the average at the end. For example:
sum = 0;
sound_value = 0;
for (int i2 = 0; i2 < SOUND_COUNT; i2++)
{
    soundRaw[i2] = analogRead(PIN_ANALOG_IN);
    sum += soundRaw[i2];
}
sound_value = sum / SOUND_COUNT;
Here you do need to make sure that the data type for sum can accommodate the maximum analogRead() return value multiplied by SOUND_COUNT.
However, you say that this is used for some sort of signal conditioning or processing of both a microphone and an accelerometer. These devices have rather dissimilar bandwidths and dynamics, and it seems rather unlikely that the same filter would suit both. Applying robust DSP techniques such as IIR or FIR filters with suitably calculated coefficients would make a great deal more sense. You would also need a suitable fixed sample rate, which I am willing to bet is not achieved by simply reading the ADC in a loop with no specific timing.

Cubes in Cuboids

Hello everybody,
I need a C program to calculate the minimum number of cuboids of size A by B by C to house N cubes of side length S, where 1 <= N <= pow(10, 9), 1 <= S <= min(A, B, C), 1 <= A, B, C <= 1000. I did the following:
#include <stdio.h>

int main(void) {
    unsigned long long int cubenum, length, boxnum, a, b, c, cpb;
    scanf("%llu %llu %llu %llu %llu", &cubenum, &length, &a, &b, &c);
    getchar();
    // how many cubes per box?
    cpb = a/length * b/length * c/length;
    // how many boxes given the number of cubes?
    boxnum = (cubenum + (cpb - 1)) / cpb;
    printf("%llu\n", boxnum);
    return 0;
}
The following testcases are given:
testcase #1
stdin: 24 4 8 8 8
stdout: 3
testcase #2
stdin: 27 3 8 4 10
stdout: 5
I added the following testcases myself:
testcase #3
stdin: 1 1 1 1 1
stdout: 1
testcase #4
stdin: 1000000000 500 999 999 999
stdout: 1000000000
testcase #5
stdin: 1000000000 499 999 999 999
stdout: 125000000
testcase #6
stdin: 1000000000 2 999 999 999
stdout: 9
I compiled with Clang version 10.0.0-4ubuntu1. The given testcases passed correctly on my device, and the ones I added myself seem correct when doing the math manually, yet upon submission my program was declared "wrong". Unfortunately, there isn't any feedback as to where, why, or how it failed. The compiler the jury uses is unknown, but my past experience tells me it's likely running Linux (I tried using a Windows-specific library function). Therefore, I would like to know: are there any test cases where my code would fail that I haven't caught? Or are there other oversights that I have made?
Thank you for your time.
Side question:
The part I suspect I am getting wrong is here:
boxnum = (cubenum + (cpb - 1)) / cpb;
I have tried using ceil() in math.h, but it feels really hacky with the double casts and then back to unsigned long long int, but it does work on all the testcases. I had to compile with clang -lm main.c -o main instead of clang main.c -o main, but it did run. Could it be that the jury has a modified math.h lib? On a different program, I used sqrt() and pow() and they were both accepted as correct, which tells me either the problem isn't where I suspect it to be, or that the jury indeed does have a modified math.h lib. Or could it be something else?
The line
cpb = a/length * b/length * c/length;
is wrong because this expression is calculated from left to right and truncation may not work well for b and c.
For example, with this input
15 10 100 10 19
The formula will be calculated like
a/length * b/length * c/length
= 100/10 * 10/10 * 19/10
= 10 * 10 / 10 * 19 / 10
= 100 / 10 * 19 / 10
= 10 * 19 / 10
= 190 / 10
= 19
Therefore, your program will output 1, because it believes one box holds 19 cubes and that covers the required 15, while the correct output is 2, because in fact only 10 cubes can be cut from one box.
Try this:
cpb = (a/length) * (b/length) * (c/length);

Configuring and limiting output of PI controller

I have implemented simple PI controller, code is as follows:
void PI_controller() {
    // handle the input value and errors
    previous_error = current_error;
    current_error = 0 - input_value;

    // PI regulation
    P = current_error;     // P is the proportional value
    I += previous_error;   // I is the integral value
    output = Kp*P + Ki*I;  // Kp and Ki are coefficients
}
Input value is always between -π and +π.
Output value must be between -4000 and +4000.
My question is - how to configure and (most importantly) limit the PI controller properly.
Too much to comment but not a definitive answer. What is "a simple PI controller"? And "how long is a piece of string"? I don't see why you (effectively) code
P = (current_error = 0 - input_value);
which simply negates the error of -π to π. You then aggregate the error with
I += previous_error;
but haven't stated the cumulative error bounds, and then calculate
output = Kp*P + Ki*I;
which must be -4000 <= output <= 4000. So you are looking for values of Kp and Ki that keep you within bounds, or perhaps don't keep you within bounds except in average conditions.
I suggest an empirical solution. Try a series of runs, filing the results, stepping the values of Kp and Ki by 5 steps each, first from extreme neg to pos values. Limit the output as you stated, counting the number of results that break the limit.
Next, halve the range of one of Kp and Ki and make a further informed choice as to which one to limit. And so on. "Divide and conquer".
As to your requirement "how to limit the PI controller properly", are you sure that 4000 is the limit and not 4096 or even 4095?
if (output < -4000) output = -4000;
if (output > 4000) output = 4000;
To configure your Kp and Ki you really should analyze the frequency response of your system and design your PI to give the desired response. To simply limit the output decide if you need to freeze the integrator, or just limit the immediate output. I'd recommend freezing the integrator.
I_tmp = previous_error + I;
output_tmp = Kp*P + Ki*I_tmp;
if (output_tmp < -4000)
{
    output = -4000;
}
else if (output_tmp > 4000)
{
    output = 4000;
}
else
{
    I = I_tmp;
    output = output_tmp;
}
That's not a super elegant, vetted algorithm, but it gives you an idea.
If I understand your question correctly you are asking about anti windup for your integrator.
There are more clever ways to do it, but a simple
if (abs(I) < x)
{
    I += previous_error;
}
will prevent windup of the integrator.
Then you need to figure out x, Kp and Ki so that abs(x*Ki) + abs(3.14*Kp) < 4000.
[edit] Of course, as macduff states, you first need to analyse your system and choose the correct Ki and Kp; x is the only truly free variable in the above equation.

31 bit limit on bit operations in R

I am trying to get around the 31-bit limit for bit operations in R. I can do this in pure R, but my issue is about implementing this in C for use in R.
Example
For example I have the data
> x = c(2147028898, 2147515013)
where each element is at most 32 bits, unsigned, and on which I'd like to do bit operations such as (but not limited to) (x >> 20) & 0xFFF. The end goal would be using many of these kinds of operations in a single function.
The two numbers are of different bit lengths.
> log2(x)
[1] 30.99969446331090239255 31.00002107107989246515
Normal bitwise operations in R yield the following result, ie NAs are introduced for the larger of the two.
> bitwShiftR(x,20)
[1] 2047 NA
Warning message:
In bitwShiftR(x, 20) : NAs introduced by coercion
> bitwAnd(x,20)
[1] 0 NA
Warning message:
In bitwAnd(x, 20) : NAs introduced by coercion
Workaround with R package 'bitops'
The bitops package does what I want, but my end goal is something more advanced, and I want to be able to use C, see below.
> library(bitops)
> bitShiftR(x,20)
[1] 2047 2048
I have looked at the C code for this package, but I don't really understand it. Does it have to be that complicated, or is that just optimization for vectorized inputs and outputs?
Workaround in C (the issue)
My code is as follows, only a simple expression so far. I have tried different types in C, but to no avail.
#include <R.h>

void myBitOp(int *x, int *result) {
    *result = (*x >> 20) & 0xFFF;
}
which I then compile with R CMD SHLIB myBitOp.c on a 64 bit machine.
$uname -a
Linux xxxxxxxxx 3.0.74-0.6.8-xen #1 SMP Wed May 15 07:26:33 UTC 2013 (5e244d7) x86_64 x86_64 x86_64 GNU/Linux
In R I load this with
> dyn.load("myBitOp.so")
> myBitOp <- function(x) .C("myBitOp", as.integer(x), as.integer(0))[[2]]
When I run the function I get back
> myBitOp(x[1])
[1] 2047
> myBitOp(x[2])
Error in myBitOp(x[2]) : NAs in foreign function call (arg 1)
In addition: Warning message:
In myBitOp(x[2]) : NAs introduced by coercion
So the question is, why do I get these NAs with this C code, and how do I fix it? The return value will always be much less than 31 bits btw.
Thank you!
Update
After studying the bitops code a bit more, and going through this presentation among other links I came up with this code (bonus vectorization here)
#include <R.h>
#include <Rdefines.h>

SEXP myBitOp(SEXP x) {
    PROTECT(x = AS_NUMERIC(x));
    double *xx = NUMERIC_POINTER(x);
    SEXP result = PROTECT(NEW_NUMERIC(length(x)));
    double *xresult = NUMERIC_POINTER(result);
    for (int i = 0; i < length(x); i++) {
        xresult[i] = (double) ((((unsigned int) xx[i]) >> 20) & 0xFFF);
    }
    UNPROTECT(2);
    return result;
}
Compile with R CMD SHLIB myBitOp.c
And in R:
> dyn.load("myBitOp.so")
> myBitOp <- function(x) .Call("myBitOp", x)
> myBitOp(x)
[1] 2047 2048
I don't fully understand why or how yet, but it works, well seems to work for this example at least.
The second element of as.integer(x) will be NA because it's larger than .Machine$integer.max. NAOK = FALSE in your call to .C, so that NA in your input results in an error. Your call to .C will "succeed" if you set NAOK = TRUE (because, in this case, NA is technically NA_integer_, which is a special int value in C).
You'll have to be creative to get around this. You could try splitting values > 2^31-1 into two values, pass both of them to C, convert them to unsigned integers, sum them, convert the result to a signed integer, then pass back to R.

Comparing speed of Haskell and C for the computation of primes

I initially wrote this (brute force and inefficient) method of calculating primes with the intent of making sure that there was no difference in speed between using "if-then-else" versus guards in Haskell (and there is no difference!). But then I decided to write a C program to compare and I got the following (Haskell slower by just over 25%) :
(Note I got the ideas of using rem instead of mod and also the O3 option in the compiler invocation from the following post : On improving Haskell's performance compared to C in fibonacci micro-benchmark)
Haskell : Forum.hs
divisibleRec :: Int -> Int -> Bool
divisibleRec i j
  | j == 1         = False
  | i `rem` j == 0 = True
  | otherwise      = divisibleRec i (j - 1)

divisible :: Int -> Bool
divisible i = divisibleRec i (i - 1)

r = [x | x <- [2..200000], divisible x == False]

main :: IO ()
main = print (length r)
C : main.cpp
#include <stdio.h>
bool divisibleRec(int i, int j){
    if (j == 1) { return false; }
    else if (i % j == 0) { return true; }
    else { return divisibleRec(i, j - 1); }
}

bool divisible(int i){ return divisibleRec(i, i - 1); }

int main(void){
    int i, count = 0;
    for (i = 2; i < 200000; ++i){
        if (divisible(i) == false){
            count = count + 1;
        }
    }
    printf("number of primes = %d\n", count);
    return 0;
}
The results I got were as follows :
Compilation times
time (ghc -O3 -o runProg Forum.hs)
real 0m0.355s
user 0m0.252s
sys 0m0.040s
time (gcc -O3 -o runProg main.cpp)
real 0m0.070s
user 0m0.036s
sys 0m0.008s
and the following running times :
Running times on Ubuntu 32 bit
Haskell
17984
real 0m54.498s
user 0m51.363s
sys 0m0.140s
C++
number of primes = 17984
real 0m41.739s
user 0m39.642s
sys 0m0.080s
I was quite impressed with the running times of Haskell. However my question is this : can I do anything to speed up the haskell program without :
Changing the underlying algorithm (it is clear that massive speedups can be gained by changing the algorithm; but I just want to understand what I can do on the language/compiler side to improve performance)
Invoking the llvm compiler (because I dont have this installed)
[EDIT : Memory usage]
After a comment by Alan I noticed that the C program uses a constant amount of memory where as the Haskell program slowly grows in memory size. At first I thought this had something to do with recursion, but gspr explains below why this is happening and provides a solution. Will Ness provides an alternative solution which (like gspr's solution) also ensures that the memory remains static.
[EDIT : Summary of bigger runs]
max number tested : 200,000:
(54.498s/41.739s) = Haskell 30.5% slower
max number tested : 400,000:
3m31.372s/2m45.076s = 211.37s/165s = Haskell 28.1% slower
max number tested : 800,000:
14m3.266s/11m6.024s = 843.27s/666.02s = Haskell 26.6% slower
[EDIT : Code for Alan]
This was the code that I had written earlier which does not have recursion and which I had tested on 200,000 :
#include <stdio.h>
bool divisibleRec(int i, int j){
    while (j > 0){
        if (j == 1) { return false; }
        else if (i % j == 0) { return true; }
        else { j -= 1; }
    }
    return false;   /* not reached for j >= 1, but avoids falling off the end */
}

bool divisible(int i){ return divisibleRec(i, i - 1); }

int main(void){
    int i, count = 0;
    for (i = 2; i < 8000000; ++i){
        if (divisible(i) == false){
            count = count + 1;
        }
    }
    printf("number of primes = %d\n", count);
    return 0;
}
The results for the C code with and without recursion are as follows (for 800,000) :
With recursion : 11m6.024s
Without recursion : 11m5.328s
Note that the process seems to take up 60 kB (as seen in the System Monitor) irrespective of the maximum number, and therefore I suspect that the compiler is turning the tail recursion into a loop.
This isn't really answering your question, but rather what you asked in a comment regarding growing memory usage when the number 200000 grows.
When that number grows, so does the list r. Your code needs all of r at the very end, to compute its length. The C code, on the other hand, just increments a counter. You'll have to do something similar in Haskell too if you want constant memory usage. The code will still be very Haskelly, and in general it's a sensible proposition: you don't really need the list of numbers for which divisible is False, you just need to know how many there are.
You can try with
main :: IO ()
main = print $ foldl' (\s x -> if divisible x then s else s+1) 0 [2..200000]
(foldl' is a stricter foldl from Data.List that avoids thunks being built up).
Well, bang patterns give you a very small win (as does LLVM, but you seem to have expected that):
{-# LANGUAGE BangPatterns #-}
divisibleRec !i !j | j == 1 = False
And on my x86-64 I get a very big win by switching to smaller representations, such as Word32:
divisibleRec :: Word32 -> Word32 -> Bool
...
divisible :: Word32 -> Bool
My timings:
$ time ./so -- Int
2262
real 0m2.332s
$ time ./so -- Word32
2262
real 0m1.424s
This is a closer match to your C program, which is only using int. It still doesn't match performance-wise; I suspect we'd have to look at the Core to figure out why.
EDIT: and the memory use, as was already noted, is about the named list r. I just inlined r, made it output a 1 for each non-divisible value and took the sum:
main = print $ sum $ [ 1 | x <- [2..800000], not (divisible x) ]
Another way to write down your algorithm is
main = print $ length [()|x<-[2..200000], and [rem x d>0|d<-[x-1,x-2..2]]]
Unfortunately, it runs slower. Using all ((>0).rem x) [x-1,x-2..2] as a test, it runs slower still. But maybe you'd test it on your setup nevertheless.
Replacing your code with explicit loop with bang patterns made no difference whatsoever:
{-# LANGUAGE BangPatterns #-}

r4 :: Int -> Int
r4 n = go 0 2 where
  go !c i | i > n     = c
          | otherwise = go (if not (divisible i) then c + 1 else c) (i + 1)

divisibleRec :: Int -> Int -> Bool
divisibleRec i !j | j == 1         = False
                  | i `rem` j == 0 = True
                  | otherwise      = divisibleRec i (j - 1)
When I started programming in Haskell I was also impressed about its speed. You may be interested in reading point 5 "The speed of Haskell" of this article.
