Scheme: out of memory in Racket using cond instead of if [duplicate]

In Scheme, I modified the basic 'if' command as:
(define (modified-if predicate then-clause else-clause)
  (if predicate
      then-clause
      else-clause))
And then I defined a simple factorial generating program using the modified version of if:
(define (factorial n)
  (modified-if (= n 0)
               1
               (* n (factorial (- n 1)))))
Now, when I call the above function, it goes into an infinite loop. Why does that happen?

Scheme uses eager evaluation. This means that, unless you're using a special form (like if) or a macro (like cond or case) that expands into such a special form, all subexpressions of a call are evaluated before the function is applied.
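As a tiny illustration (not from the original post): even a function that completely ignores one of its arguments still forces that argument to be evaluated before the call.
(define (first-of-two a b)
  a)                                              ; b is never used

(first-of-two 1 (begin (display "b was evaluated!") 2))
; prints "b was evaluated!" and then returns 1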
That means for your expression
(modified-if (= n 0)
             1
             (* n (factorial (- n 1))))
the (* n (factorial (- n 1))) is evaluated first, before modified-if is run. (It may be evaluated before or after (= n 0), but either way the recursive call still happens.) And since this is a recursive call, your program will recurse infinitely, and you will eventually run out of stack.
Here's a simple example:
(if #t
    (display "Yay!")
    (error "Oh noes!"))
Because if is a special form, and it only evaluates the necessary branch, in this case it will only evaluate (display "Yay!") and not evaluate (error "Oh noes!"). But if you switch to using your modified-if, both expressions will be evaluated, and your program will raise an error.
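If you really want an if-like function, one workaround (a sketch, not part of the original question or answer) is to pass the branches as thunks, so neither branch is evaluated until modified-if picks one:
(define (modified-if predicate then-thunk else-thunk)
  (if predicate
      (then-thunk)
      (else-thunk)))

(define (factorial n)
  (modified-if (= n 0)
               (lambda () 1)
               (lambda () (* n (factorial (- n 1))))))

(factorial 5) ; => 120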

Use the stepper in DrRacket to see how your modified-if behaves.
Choose the "Beginner" language. Enter the program below, and then click the stepper icon:
(define (modif predicate then-clause else-clause)
  (if predicate then-clause else-clause))

(define (factorial n)
  (modif (= n 0) 1 (* n (factorial (- n 1)))))

(factorial 5)
You can now see, step by step, how the computation evolves. Notice that modif follows the rule of function application.
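For instance, stepping through (factorial 1), the function-application rule forces every argument of modif to be reduced before modif itself is applied, including the recursive call. A hand-written sketch of the reduction (not actual stepper output) looks roughly like this:
(factorial 1)
; => (modif (= 1 0) 1 (* 1 (factorial (- 1 1))))   ; all arguments are reduced first
; => (modif #false  1 (* 1 (factorial 0)))
; => (modif #false  1 (* 1 (modif (= 0 0) 1 (* 0 (factorial (- 0 1))))))
; => ...   ; (factorial -1), (factorial -2), ... so the reduction never finishes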
(The original answer included a screenshot showing what the DrRacket stepper looks like.)

Related

Understanding "well founded" proofs in Coq

I'm writing a fixpoint that requires an integer to be incremented "towards" zero at every iteration. This is too complicated for Coq to recognize as a decreasing argument automatically, and I'm trying to prove that my fixpoint will terminate.
I have been copying (what I believe is) an example of a well-foundedness proof for a step function on Z from the standard library. (Here)
Require Import ZArith.Zwf.

Section wf_proof_wf_inc.
  Variable c : Z.
  Let Z_increment (z:Z) := (z + ((Z.sgn c) * (-1)))%Z.

  Lemma Zwf_wf_inc : well_founded (Zwf c).
  Proof.
    unfold well_founded.
    intros a.
  Qed.
End wf_proof_wf_inc.
which creates the following context:
c : Z
wf_inc := fun z : Z => (z + Z.sgn c * -1)%Z : Z -> Z
a : Z
============================
Acc (Zwf c) a
My question is what does this goal actually mean?
I thought that the goal I'd have to prove for this would at least involve the step function that I want to show has the "well founded" property, "Z_increment".
The most useful explanation I have looked at is this one, but I've never worked with the list type that it uses, and it doesn't explain what is meant by terms like "accessible".
Basically, you don't need to do a well-founded proof; you just need to prove that your function decreases the (natural number) abs(z). More concretely, you can implement abs (z:Z) : nat := z_to_nat (z * Z.sgn z) (with some appropriate conversion to nat) and then use this as a measure with Function, something like Function foo z {measure abs z} := ....
The well-founded business is for showing relations are well-founded: the idea is that you can prove your function terminates by showing it "decreases" some well-founded relation R (think of it as <); that is, the definition of f x makes recursive subcalls f y only when R y x. For this to work R has to be well-founded, which intuitively means it has no infinitely descending chains. CPDT's general recursion chapter has a really good explanation of how this really works.
How does this relate to what you're doing? The standard library proves that, for all lower bounds c, x < y is a well-founded relation on Z if additionally it's only applied to y >= c. I don't think this applies to you - instead you move towards zero, so you can just decrease abs z with the usual < relation on nats. The standard library already has a proof that this relation is well founded, and that's what Function ... {measure ...} uses.
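For reference on the goal Acc (Zwf c) a itself (my own gloss, not part of the original answer): an element is "accessible" under a relation R when everything below it via R is itself accessible, and a relation is well founded when every element is accessible:
Acc R x          <->  forall y, R y x -> Acc R y
well_founded R   <->  forall x, Acc R x
Because Acc is defined inductively, proving Acc (Zwf c) a amounts to showing there is no infinite descending chain of Zwf c steps starting from a.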

Why is the for loop so slow in Racket code?

I am trying to find the smallest non-divisor of numbers (https://codegolf.stackexchange.com/questions/105412/find-the-smallest-number-that-doesnt-divide-n). The following version using a named let works properly:
(define (f1 m)
  (let loop ((n 2))
    (cond
      [(= 0 (modulo m n))
       (loop (+ 1 n))]
      [else n])))
I am testing with:
(f1 24)
(f1 1234567)
(f1 12252240)
(f1 232792560)
The version above promptly outputs 5, 2, 19, and 23.
However, the following version, which uses the built-in for loop, is very slow and actually crashes with an out-of-memory error for larger numbers:
(define (f2 m)
  (for/first ((i (range 2 m))
              #:when (not (= 0 (modulo m i)))
              #:final (not (= 0 (modulo m i))))
    i))
Is there some error in the code of the second version, or is the for loop inefficient compared with the named let in Racket?
The range function actually allocates a list, so the time for your function is dominated by the huge list allocation. Use in-range instead:
(define (f2 m)
  (for/first ([i (in-range 2 m)]
              #:when (not (= 0 (modulo m i)))
              #:final (not (= 0 (modulo m i))))
    i))
See also the section on for performance in the Racket Guide for more notes on the relative speed of different sequence forms.
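To see the difference at the REPL (a rough sketch; exact timings will vary by machine), compare what the two forms produce and time the fixed version on one of the larger inputs from the question:
(range 2 10)                        ; => '(2 3 4 5 6 7 8 9), a fully allocated list
(for/list ([i (in-range 2 10)]) i)  ; => '(2 3 4 5 6 7 8 9), generated one element at a time
(time (f2 232792560))               ; with in-range this finishes quickly and returns 23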
In my opinion, recursion is generally better suited to Racket than loops. I'd approach the problem this way...
(define start-counter 1)

(define (f number counter)
  (cond [(not (= (modulo number counter) 0)) counter]
        [else (f number (add1 counter))]))

(define (f2 number)
  (f number start-counter))

Turning a recursive procedure into an iterative procedure - SICP exercise 1.16

In the book Structure and Interpretation of Computer Programs, there is a recursive procedure for computing exponents using successive squaring.
(define (fast-expt b n)
  (cond ((= n 0)
         1)
        ((even? n)
         (square (fast-expt b (/ n 2))))
        (else
         (* b (fast-expt b (- n 1))))))
Now in exercise 1.16:
Exercise 1.16: Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps,
as does `fast-expt`. (Hint: Using the observation that
(b^(n/2))^2 = (b^2)^(n/2)
, keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product ab^n is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
I spent a week and I absolutely can't figure out how to do this iterative procedure, so I gave up and looked for solutions. All the solutions I found look like this:
(define (fast-expt a b n)
  (cond ((= n 0)
         a)
        ((even? n)
         (fast-expt a (square b) (/ n 2)))
        (else
         (fast-expt (* a b) b (- n 1)))))
Now, I can understand
(fast-expt a (square b) (/ n 2))
using the hint from the book, but my brain exploded when n is odd. In the recursive procedure, I got why
(* b (fast-expt b (- n 1)))
works. But in the iterative procedure, it becomes totally different:
(fast-expt (* a b) b (- n 1))
It's working perfectly, but I absolutely don't understand how to arrive at this solution by myself. It seems extremely clever.
Can someone explain why the iterative solution is like this? And what's the general way to think of solving these types of problems?
2021 update: Last year, I completely forgot about this exercise and the solutions I'd seen. I tried solving it again and finally solved it on my own, using the invariant provided in the exercise as a basis for transforming the state variables. I used the now accepted answer to verify my solution. Thanks @Óscar López.
Here's a slightly different implementation to make things clearer; notice that I'm using a helper procedure called loop to preserve the original procedure's arity:
(define (fast-expt b n)
  (define (loop b n acc)
    (cond ((zero? n) acc)
          ((even? n) (loop (* b b) (/ n 2) acc))
          (else (loop b (- n 1) (* b acc)))))
  (loop b n 1))
What's acc here? It's a parameter used as an accumulator for the result (in the book they name this parameter a; IMHO acc is a more descriptive name). So at the beginning we set acc to an appropriate value, and afterwards in each iteration we update the accumulator, preserving the invariant.
In general, this is the "trick" for understanding an iterative, tail-recursive implementation of an algorithm: we pass along an extra parameter with the result we've calculated so far, and return it at the end when we reach the base case of the recursion. By the way, the usual way to write an iterative procedure such as the one shown above is to use a named let; this is completely equivalent and a bit simpler to write:
(define (fast-expt b n)
  (let loop ((b b) (n n) (acc 1))
    (cond ((zero? n) acc)
          ((even? n) (loop (* b b) (/ n 2) acc))
          (else (loop b (- n 1) (* b acc))))))
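To see why the odd case works, it helps to trace the invariant acc * b^n by hand. Here is a sketch of the successive states for (fast-expt 2 10); the product acc * b^n stays at 1024 in every state, so when n reaches 0 the accumulator is the answer:
; b = 2     n = 10   acc = 1      acc * b^n = 1 * 2^10  = 1024
; b = 4     n = 5    acc = 1                = 1 * 4^5   = 1024   (even case: square b, halve n)
; b = 4     n = 4    acc = 4                = 4 * 4^4   = 1024   (odd case: fold one b into acc)
; b = 16    n = 2    acc = 4                = 4 * 16^2  = 1024
; b = 256   n = 1    acc = 4                = 4 * 256^1 = 1024
; b = 256   n = 0    acc = 1024             = 1024 * 1  = 1024   (n = 0: return acc)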

How do I make this imperative (arrays, loops, etc) Clojure code run half as fast as Java?

I've read that there are ways of coaxing the Clojure compiler into producing code that rivals the performance of similar code in Java, at least for code that already looks a lot like the Java code you want it to turn into. That sounds reasonable to me: idiomatic, high-level Clojure code might have performance in the ballpark of what I'm used to from CPython or MRI, but "ugly" Java-like code runs more or less like Java. This is a tradeoff I appreciate in Haskell, for example. Low-level Haskell code with mutable arrays, loops and whatnot runs under GHC, with appropriate compiler flags, about as fast as it does in C (and then some high-tech libraries can sometimes squeeze similar performance out of prettier, higher-level code).
I want help learning how to get my Java-like Clojure code to run as fast as in Java. Take this example:
(defn f [x y z n]
  (+ (* 2 (+ (* x y) (+ (* y z) (* x z))))
     (* 4 (+ x y z n -2) (- n 1))))
(defmacro from [[var ini cnd] & body]
  `(loop [~var ~ini]
     (when ~cnd
       ~@body
       (recur (inc ~var)))))
(defn g [n]
  (let [c (long-array (inc n))]
    (from [x 1 (<= (f x x x 1) n)]
      (from [y x (<= (f x y y 1) n)]
        (from [z y (<= (f x y z 1) n)]
          (from [k 1 (<= (f x y z k) n)]
            (let [l (f x y z k)]
              (aset c l (inc (aget c l))))))))
    c))
(defn h [x]
  (loop [n 1000]
    (let [^longs c (g n)]
      (if-let [k (some #(when (= x (aget c %)) %)
                       (range 1 (inc n)))]
        k
        (recur (* 2 n))))))
(time (print (h 1000)))
It takes about 85 seconds using Clojure 1.6 on my (admittedly) slow machine. Equivalent code in Java runs in about 0.4 seconds. I'm not greedy, I just want to get the Clojure code to run in, say, around 2 seconds.
The first thing I did was enable *warn-on-reflection* but sadly, with that lonely type hint there are no further warnings. What am I doing wrong?
This gist contains both the Java and Clojure versions of the code.
Unfortunately *warn-on-reflection* doesn't warn you about primitive boxing - which I think is the main problem here. You want to be using unboxed primitive arithmetic at all times for maximum speed.
The following hints should help you optimise this:
Do a (set! *unchecked-math* true) to get faster primitive numerical operations
Try initialising your loops with (long ~ini). You want to force the use of primitives this way
Try putting a primitive hint ^long n to the function g
Try type-hinting your long array ^longs c - this should hopefully make Clojure use the faster primitive aget.
Type hint f as a primitive function ^long [^long x ^long y ^long z ^long n] or similar. This is very important, otherwise f will return boxed numbers....
If you succeed in eliminating all the boxed numbers, then this kind of code should be nearly as fast as pure Java.

What is the difference between these two blocks of Clojure code?

While working on the Clojure Koans, I had to calculate the factorial of a number iteratively. I did find the solution, but I have a question about the difference between two solutions, one that works and one that doesn't, although I don't understand why:
The one that works:
(defn factorial [n]
  (loop [n n
         acc 1]
    (if (zero? n)
      acc
      (recur (dec n) (* n acc)))))
The one that doesn't:
(defn factorial [n]
  (loop [n n
         acc 1]
    (if (zero? n)
      1
      (recur (dec n) (* n acc)))))
Note that the only difference is the returned value of the If block if the condition is met.
The second factorial function always returns 1. The code is built to use an accumulator variable (acc), and the first code block gets it right by returning this accumulator variable.
A factorial function can be written to return 1, though, if an accumulator variable is not used. Since this method does not use loop / recur, it can easily cause a stack overflow: try (factorial 5000).
(defn factorial [x]
  (if (<= x 1)
    1
    (* x (factorial (- x 1)))))
(source)
It's hard to work out what you think should be happening for the question to make sense.
I think maybe you think loop is doing more than it does? Your code is almost equivalent to:
(defn factorial
  ([n] (factorial n 1))
  ([n acc]
   (if (zero? n)
     acc
     (recur (dec n) (* n acc)))))
which is a stack-safe version of
(defn factorial
  ([n] (factorial n 1))
  ([n acc]
   (if (zero? n)
     acc
     (factorial (dec n) (* n acc)))))
So the acc (or 1) is the final value returned from the function.
All that loop does is give a different target for recur, which is useful if you have some code between the start of the function and the point where you want to repeat. It's basically a label for a goto.
