Artificial time implementation in Clojure - timer

I know that it's usually a good idea to stay away from state when programming in Clojure. However, time seems to me something inherently stateful.
My goal is to represent time in a game I am writing in Clojure. My idea was to run a timer as a separate process and read its value whenever needed, but there doesn't seem to be any natural way of achieving this in Clojure.
How would you implement time in a simple roleplaying game in clojure?

I've adapted the code from this Java answer into my date-dif function:
(defn date-dif [d1 d2]
  (let [tu java.util.concurrent.TimeUnit/SECONDS
        t1 (.getTime d1)
        t2 (.getTime d2)]
    (.convert tu (- t2 t1) java.util.concurrent.TimeUnit/MILLISECONDS)))

(defn start-timer []
  (let [cur-time (new java.util.Date)]
    (fn [] (date-dif cur-time (new java.util.Date)))))
start-timer will create and return a function that, when called, returns the number of seconds since the function was created. To use it, do something like this:
rpg-time.core> (def my-timer (start-timer))
#'rpg-time.core/my-timer
rpg-time.core> (my-timer)
3
rpg-time.core> (my-timer)
5
rpg-time.core> (my-timer)
6
If you want a different unit of time instead of seconds, replace SECONDS with another constant from java.util.concurrent.TimeUnit.
There are certainly other options you can consider. This is just the first one I thought of, and it isn't very hard to code or use.

A timer is actually fairly stateless: all it needs to remember is the point in time at which it was started.
I will use clj-time here for simplicity.
(require '[clj-time.core :as t]
         '[clj-time.coerce :as c])

(defn timer []
  (let [now-sec #(-> (t/now) c/to-long (/ 1000) int)
        start   (now-sec)]
    #(- (now-sec) start)))
(def my-timer (timer))
;; after a second
(my-timer)
;;=> 1
;; and after 2 seconds more
(my-timer)
;;=> 3
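If you'd rather not pull in the clj-time dependency, the same closure-based timer can be sketched with just the JVM clock (the name plain-timer is illustrative, not from either answer):

```clojure
(defn plain-timer []
  (let [start (System/currentTimeMillis)]
    ;; returns elapsed whole seconds since this timer was created
    #(quot (- (System/currentTimeMillis) start) 1000)))

(def my-timer (plain-timer))
;; after a second
(my-timer)
;;=> 1
```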

Related

Z3 Forall with array

Z3 returns unknown for the simple problem:
(assert
  (forall ((y (Array Int Int)))
    (= (select y 1) 0)))
(check-sat)
I've found that it becomes sat if I negate the forall, but this seems like a particularly simple thing for it to be unable to solve.
This is causing issues because the class of problems I want to solve are more like,
(declare-fun u () Int)
(assert
  (forall ((y (Array Int Int)))
    (=> (= u 0)
        (<= (select y 1) 0))))
(check-sat)
Where negating the forall alone is not the same problem, so that cannot be done here. Is there some way to pose this style of problem to Z3 to get an un/sat result?
Quantifiers are always problematic for SMT solvers, especially if they involve arrays and alternating quantifiers like in your example. You essentially have exists u. forall y. P(u, y). Z3, or any other SMT solver, will have a hard time dealing with these sorts of problems.
When you have a quantified assertion like you do where you have forall's either at the top-level or nested with exists, the logic becomes semi-decidable. Z3 uses MBQI (model-based quantifier instantiation) to heuristically solve such problems, but it more often than not fails to do so. The issue isn't merely that z3 is not capable: There's no decision procedure for such problems, and z3 does its best.
You can try giving quantifier patterns for such problems to help z3, but I don't see an easy way to apply that in your problem. (Quantifier patterns apply when you have uninterpreted functions and quantified axioms. See https://rise4fun.com/z3/tutorialcontent/guide#h28). So, I don't think it'll work for you. Even if it did, patterns are very finicky to program with, and not robust with respect to changes in your specification that might otherwise look innocuous.
If you're dealing with such quantifiers, SMT solvers are probably just not a good fit. Look into semi-automated theorem provers such as Lean, Isabelle, Coq, etc., which are designed to deal with quantifiers in a much more disciplined way. Of course, you lose full automation, but most of these tools can use an SMT solver to discharge subgoals that are "easy" enough. That way, you still do the "heavy-lifting" manually, but most subgoals are automatically handled by z3. (Especially in the case of Lean, see here: https://leanprover.github.io/)
There's one extra closing (right) parenthesis, which needs to be removed. Also, add assert before the forall statement.
(assert (forall ((y (Array Int Int)))
          (= (select y 1) 0)))
(check-sat)
Run the above code and you should get unsat as the answer.
For the second program, alias' answer may be useful to you.

Constrain variable to be in array

The problem I'm working on involves making sure that certain variables be perfect squares.
As far as I understood, there is no native support for sqrt in z3 (yet). My idea was to simply have an array with the first say 300 squares and check if the variable is included. How would I go about that?
Frankly, I'm not extremely proficient in z3, so there may be better ways to approach the problem; I'm open to anything!
Without knowing exactly what you are trying to do it is hard to come up with good advice here. But, perhaps you don't need sqrt? If all you want are numbers that are perfect squares, then you can go the other way around:
(declare-fun sqrtx () Int)
(declare-fun x () Int)
; this will make sure x is a perfect square:
(assert (and (>= sqrtx 0) (= x (* sqrtx sqrtx))))
; make it interesting:
(assert (> x 10))
(check-sat)
(get-value (x sqrtx))
This prints:
sat
((x 16)
(sqrtx 4))
In essence, for each "perfect-square" you want, you can declare a ghost variable and assert the required relation.
Note that this gives rise to nonlinearity (since you're multiplying two symbolic values), so the solver might have a hard time handling all your constraints. But without seeing what you're actually trying to do, I think this would be the simplest approach to having perfect squares and reasoning with them.
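For instance, a sketch of that ghost-variable pattern with two perfect-square variables (the names x, y, sx, sy are illustrative):

```smt2
(declare-fun x () Int)
(declare-fun y () Int)
(declare-fun sx () Int)
(declare-fun sy () Int)
; each ghost variable witnesses that its companion is a perfect square
(assert (and (>= sx 0) (= x (* sx sx))))
(assert (and (>= sy 0) (= y (* sy sy))))
; make it interesting: two distinct squares above 10
(assert (and (> x 10) (> y x)))
(check-sat)
(get-value (x y sx sy))
```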

Matrix-like operations for 3D arrays in R?

I'm currently estimating a model in R using optim, but it's really slow, on the order of 30 minutes if I initialize it with zeroes. When I profile the whole thing, I find that apply is taking the most time, which makes sense. So that leads me to my question:
x.arr <- array(1:9, c(3, 10, 3))
b <- 1:3
f <- function(x, b) {
  exp(x %*% b)
}
u.mat <- apply(x.arr, 2, f, b = b)
Is there a more efficient way to do this? x.arr is a 3D array, so it seems like there ought to be some way to use matrix operations to accomplish the same thing.
Additionally, I run Linux, so I assume that I can also easily do something with mclapply or something, but every time that I've made the attempt, I've managed to hang my entire R session.
There's also a package, tensor, but everything I've tried from it so far was so far from what I was actually looking for that I wasn't even sure what I was getting back.
My linear algebra isn't the best, but something tells me there ought to be some sort of good option without using apply.
As these things go, I found a solution that speeds it up considerably using the tensor package. (I spent 4 hours on this yesterday, but apparently today things just clicked.)
require(tensor)
x.arr <- array(1:9, c(3, 10, 3))
b <- 1:3
u.mat <- exp(tensor(x.arr, b, alongA = 3, alongB = 1))
That takes me from ~30 minutes down to around ~10 minutes.
I'm still interested if anyone has an idea of how to make it faster, of course, but maybe if someone else finds this question, this will at least be a satisfactory answer for them.
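As a quick sanity check (not in the original answer), the tensor version reproduces the apply version on the small example from the question:

```r
require(tensor)
x.arr <- array(1:9, c(3, 10, 3))
b <- 1:3
# original apply-based version: one matrix product per slice along dim 2
u.apply  <- apply(x.arr, 2, function(x, b) exp(x %*% b), b = b)
# tensor-based version: contract dim 3 of x.arr against b, then exp
u.tensor <- exp(tensor(x.arr, b, alongA = 3, alongB = 1))
all.equal(as.vector(u.apply), as.vector(u.tensor))  # TRUE
```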

Haskell real-time update and lookup performance

I am writing a game-playing AI (aichallenge.org - Ants), which requires a lot of updating of, and referring to, data structures. I have tried both Arrays and Maps, but the basic problem seems to be that every update creates a new value, which makes it slow. The game boots you out if you take more than one second to make your move, so the application counts as "hard real-time". Is it possible to get the performance of mutable data structures in Haskell, or should I learn Python, or rewrite my code in OCaml?
I have completely rewritten the Ants "starter-pack". Changed from Arrays to Maps because my tests showed that Maps update much faster.
I ran the Maps version with profiling on, which showed that about 20% of the time is being taken by Map updates alone.
Here is a simple demonstration of how slow Array updates are.
import Data.Array (listArray, (//), (!))

slow_array =
  let arr = listArray (0,9999) (repeat 0)
      upd i ar = ar // [(i,i)]
  in foldr upd arr [0..9999]
Now evaluating slow_array!9999 takes almost 10 seconds! Although it would be faster to apply all the updates at once, the example models the real problem where the array must be updated each turn, and preferably each time you choose a move when planning your next turn.
Thanks to nponeccop and Tener for the reference to the vector modules. The following code is equivalent to my original example, but runs in 0.06 seconds instead of 10.
import qualified Data.Vector.Unboxed.Mutable as V

fast_vector :: IO (V.IOVector Int)
fast_vector = do
  vec <- V.new 10000
  V.set vec 0
  mapM_ (\i -> V.write vec i i) [0..9999]
  return vec

fv_read :: IO Int
fv_read = do
  v <- fast_vector
  V.read v 9999
Now, to incorporate this into my Ants code...
First of all, think about whether you can improve your algorithm. Also note that the default Ants.hs is not optimal and you need to roll your own.
Second, you should use a profiler to find where the performance problem is instead of relying on hand-waving. Haskell code is usually much faster than Python (10-30 times faster, you can look at Language Shootout for example comparison) even with functional data structures, so probably you do something wrong.
Haskell supports mutable data pretty well. See ST (state thread) and libraries for mutable arrays for the ST. Also take a look at vectors package. Finally, you can use data-parallel haskell, haskell-mpi or other ways of parallelization to load all available CPU cores, or even distribute work over several computers.
Are you using compiled code (e.g. cabal build or ghc --make) or use runhaskell or ghci? The latter ones are bytecode interpreters and create much slower code than the native code compiler. See Cabal reference - it is the preferred way to build applications.
Also make sure you have optimization turned on (-O2 and other flags). Note that -O vs -O2 can make a difference, and try different backends including the new LLVM backend (-fllvm).
Updating arrays one element at a time is incredibly inefficient because each update involves making a copy of the whole array. Other data structures such as Map are implemented as trees and thus allow logarithmic-time updates. However, in general, updating functional data structures one element at a time is often sub-optimal, so you should try to take a step back and think about how you can implement something as a transformation of the whole structure at once instead of a single element at a time.
For example, your slow_array example can be written much more efficiently by doing all the updates in one step, which only requires the array to be copied once.
faster_array =
  let arr = listArray (0,9999) (repeat 0)
  in arr // [(i,i) | i <- [0..9999]]
If you cannot think of an alternative to the imperative one-element-at-a-time algorithm, mutable data structures have been mentioned as another option.
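If you do go mutable, here is a minimal sketch of the ST-based approach mentioned above, using Data.Array.ST from the array package (the name fast_st_array is illustrative):

```haskell
import Data.Array.ST (runSTUArray, newArray, writeArray)
import Data.Array.Unboxed (UArray, (!))

-- same result as slow_array, but built with in-place writes inside ST;
-- runSTUArray freezes the mutable array into an immutable UArray at the end
fast_st_array :: UArray Int Int
fast_st_array = runSTUArray $ do
  arr <- newArray (0, 9999) 0
  mapM_ (\i -> writeArray arr i i) [0 .. 9999]
  return arr

-- fast_st_array ! 9999  ==>  9999
```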
You are basically asking for mutable data structures. Apart from the standard libraries, I would recommend you look up this:
vector: http://hackage.haskell.org/package/vector
That said, I'm not so sure that you need them. There are neat algorithms for persistent data structures as well. A fast replacement for Data.Map is the hash table from this package:
unordered-containers: http://hackage.haskell.org/package/unordered-containers

Cross between "dotimes" and "for" functionality?

I frequently find myself wanting to efficiently run a Clojure function multiple times with an integer index (like "dotimes") but also get the results out as a ready-made sequence/list (like "for").
i.e. I'd like to do something like this:
(fortimes [i 10] (* i i))
=> (0 1 4 9 16 25 36 49 64 81)
Clearly it would be possible to do:
(for [i (range 10)] (* i i))
But I'd like to avoid creating and throwing away the temporary range list if at all possible.
What's the best way to achieve this in Clojure?
Generating a range in a for loop, as you show in your second example, is the idiomatic solution for solving this problem in Clojure.
Since Clojure is grounded in the functional paradigm, programming in Clojure, by default, will generate temporary data structures like this. However, since both the "range" and the "for" command operate with lazy sequences, writing this code does not force the entire temporary range data structure to exist in memory at once. If used properly, there is therefore a very low memory overhead for lazy seqs as used in this example. Also, the computational overhead for your example is modest and should only grow linearly with the size of the range. This is considered an acceptable overhead for typical Clojure code.
The appropriate way to completely avoid this overhead, if the temporary range list is absolutely, positively unacceptable for your situation, is to write your code using atoms or transients: http://clojure.org/transients. If you do this, however, you will give up many of the advantages of the Clojure programming model in exchange for slightly better performance.
I've written an iteration macro that can do this and other types of iteration very efficiently. The package is called clj-iterate, both on github and clojars. For example:
user> (iter {for i from 0 to 10} {collect (* i i)})
(0 1 4 9 16 25 36 49 64 81 100)
This will not create a temporary list.
I'm not sure why you're concerned with "creating and throwing away" the lazy sequence created by the range function. The bounded iteration done by dotimes is likely more efficient, it being an inline increment and compare with each step, but you may pay an additional cost to express your own list concatenation there.
The typical Lisp solution is to prepend new elements to a list that you build as you go, then reverse that built-up list destructively to yield the return value. Other techniques to allow appending to a list in constant time are well known, but they do not always prove to be more efficient than the prepend-then-reverse approach.
In Clojure, you can use transients to get there, relying on the destructive behavior of the conj! function:
(let [r (transient [])]
  (dotimes [i 10]
    (conj! r (* i i))) ;; destructive
  (persistent! r))
That seems to work, but the documentation on transients warns that one should not use conj! to "bash values in place"—that is, to count on destructive behavior in lieu of catching the return value. Hence, that form needs to be rewritten.
In order to rebind r above to the new value yielded by each call to conj!, we'd need to use an atom to introduce one more level of indirection. At that point, though, we're just fighting against dotimes, and it would be better to write your own form using loop and recur.
It would be nice to be able to preallocate the vector to be of the same size as the iteration bound. I don't see a way to do so.
(defmacro fortimes [[i end] & code]
  `(let [finish# ~end]
     (loop [~i 0 results# '()]
       (if (< ~i finish#)
         (recur (inc ~i) (cons (do ~@code) results#))
         (reverse results#)))))
example:
(fortimes [x 10] (* x x))
gives:
(0 1 4 9 16 25 36 49 64 81)
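A transient-based variant of the same idea is also possible: build a vector and thread the return value of conj! through loop, as the transients documentation requires (a sketch; the name fortimes-t is illustrative):

```clojure
(defmacro fortimes-t [[i end] & code]
  `(let [finish# ~end]
     (loop [~i 0 acc# (transient [])]
       (if (< ~i finish#)
         ;; conj!'s return value is carried forward, never discarded
         (recur (inc ~i) (conj! acc# (do ~@code)))
         (persistent! acc#)))))

(fortimes-t [x 10] (* x x))
;;=> [0 1 4 9 16 25 36 49 64 81]
```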
Hmm, can't seem to answer your comment because I wasn't registered. However, clj-iterate uses a PersistentQueue, which is part of the runtime library, but not exposed through the reader.
It's basically a list on which you can conj to the end.
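For illustration, the queue is reachable directly through its Java class at the REPL (a quick sketch, not from the original answer):

```clojure
;; PersistentQueue has no reader literal, but the class is public
(def q (into clojure.lang.PersistentQueue/EMPTY [1 2 3]))
(seq (conj q 4))
;;=> (1 2 3 4)
```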