The result produced by the SMT solver is:
(or (and (>= c1 2) (<= c1 4) (= (+ c0 c1 c2) 5) (= (+ c0 c1) 4))
(and (>= c1 1) (<= c1 3) (= (+ c0 c1 c2) 5) (= (+ c0 c1) 4)))
But I am expecting something like:
(and (>= c1 1) (<= c1 4) (= (+ c0 c1 c2) 5) (= (+ c0 c1) 4))
Can someone guide me on how to achieve this with the Z3 solver?
Link : http://rise4fun.com/Z3/1Xz3
Z3 does not support this kind of simplification; it does not simplify formulas into a "normal form". Recall that the main interface to Z3 is checking whether a formula is satisfiable or unsatisfiable.
That said, you can issue many queries to the SMT solver to extract a conjunction that is equivalent to a formula, provided you can identify which literals should be tested for membership in the conjunction. Identifying them is not always possible in a purely syntactic way, as your example suggests.
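For instance, here is a minimal sketch of that query-based approach using the Z3 Python API. It assumes the candidate literals are simply read off your formula; choosing the candidates is the part that cannot be done syntactically in general. A literal belongs in the conjunction exactly when the formula together with the literal's negation is unsatisfiable:

from z3 import Ints, And, Or, Not, Solver, unsat

c0, c1, c2 = Ints('c0 c1 c2')

# the disjunction returned by the solver
F = Or(And(c1 >= 2, c1 <= 4, c0 + c1 + c2 == 5, c0 + c1 == 4),
       And(c1 >= 1, c1 <= 3, c0 + c1 + c2 == 5, c0 + c1 == 4))

def implied(lit):
    # F implies lit  iff  F together with Not(lit) is unsatisfiable
    s = Solver()
    s.add(F, Not(lit))
    return s.check() == unsat

# test each candidate literal for membership in the conjunction
for lit in [c1 >= 1, c1 <= 4, c0 + c1 + c2 == 5, c0 + c1 == 4]:
    print(lit, implied(lit))

Here all four candidates are implied, which yields exactly the conjunction you expected; in general, though, the conjunction of implied literals need not be equivalent to the original formula.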
I am looking for a way to translate this pseudocode into idiomatic Scheme:
for c in range(1, 1000):
    for b in range(1, c):
        for a in range(1, b):
            do something if condition is true
Your pseudocode translates into Scheme straightforwardly enough, as a sequence of nested named lets:
(let loopc ((c 1))                ; start the top loop with c = 1,
  (if (>= c 1000)                 ; if `c >= 1000` (range exhausted),
      #f                          ;   exit the top loop ; or else,
      (let loopb ((b 1))          ;   start the first-nested loop with b = 1,
        (if (>= b c)              ;   if `b >= c`,
            (loopc (+ c 1))       ;     end the first-nested loop, and
                                  ;     continue the top loop with c := c+1 ; or else,
            (let loopa ((a 1))    ;     start the second-nested loop with a = 1,
              (if (>= a b)        ;     if `a >= b`,
                  (loopb (+ b 1)) ;       end the second-nested loop, and
                                  ;       continue the first-nested loop with b := b+1
                  (begin          ;     or else,
                    (if condition           ; if condition holds,
                        (do_some a b c)     ;   do something with a, b, c,
                        #f)                 ; and then,
                    (loopa (+ a 1))         ; continue the second-nested loop with a := a+1
                    )))))))
Of course it's a bit involved and error-prone. On the other hand, it is easy to short-circuit the innermost loop (on a) and restart the top one (on c) directly, by calling loopc from within loopa (always in tail position, NB!), should the need arise.
For example, the above code is directly applicable to the problem you state in the comments, of finding a Pythagorean triplet that sums to 1000. Moreover, once you've hit upon the solution, you can directly call (loopc 1000) to immediately exit the whole threefold-nested loop construction.
As a side note,
for c in range(3, 1000):
    for b in range(2, c):
        for a in range(1, b):
            if a**2 + b**2 == c**2 and a + b + c == 1000:
                print(a * b * c)
is not the most efficient solution. That would be, at least, first,
for c in range(3, 1000):
    for b in range(2, 1000-c-1):
        for a in range(1, 1000-c-b+1):
            if a**2 + b**2 == c**2 and a + b + c == 1000:
                print(a * b * c)
and furthermore,
for c in range(3, 1000):
    c2 = c**2
    for b in range(2, 1000-c-1):
        a = 1000-c-b
        if a**2 + b**2 == c2:
            print(a * b * c)
            # exit the loops
It is pretty easy to express any kind of loop (and indeed much more general control structures) in Scheme. For instance, if you want a loop which simply counts, you can do it like this:
(let loop ([i 0])
  (if (>= i 10)
      (values)
      (begin
        (display i)
        (loop (+ i 1)))))
And you can obviously nest these.
But this is a lot less easy to read than, for instance, what you would write in Python:
for i in range(10):
    print(i)
Well, OK. But Lisp-family languages are about building languages: if the syntax you want does not exist, you make it exist. Here is an example of doing that:
(define-syntax nloop*
  ;; Nested numerical loop
  (syntax-rules ()
    [(_ () form ...)
     (begin form ...
            (values))]
    [(_ ((variable lower-inclusive upper-exclusive) more ...) form ...)
     (let loop ([variable lower-inclusive])
       (if (< variable upper-exclusive)
           (begin
             (nloop* (more ...) form ...)
             (loop (+ variable 1)))
           (values)))]
    [(_ ((variable start-inclusive end-exclusive step) more ...) form ...)
     (let ([cmp? (if (>= step 0) < >)])
       (let loop ([variable start-inclusive])
         (if (cmp? variable end-exclusive)
             (begin
               (nloop* (more ...) form ...)
               (loop (+ variable step)))
             (values))))]))
And now:
> (nloop* ((i 0 10))
    (print i))
0123456789
But also
> (nloop* ((i 0 10)
           (j 20 0 -4))
    (displayln (list i j)))
(0 20)
(0 16)
(0 12)
(0 8)
(0 4)
...
(9 20)
(9 16)
(9 12)
(9 8)
(9 4)
So this nloop* construct I have just invented will do perfectly general numerical loops, including nested ones (that is why the *: a plain nloop, which does not exist, would step its variables in parallel instead of nesting).
Of course, in industrial-strength Scheme-derived languages like Racket, there are already constructs for doing this:
(for ([i (in-range 10)])
  (for ([j (in-range 20 0 -4)])
    (displayln (list i j))))
would be the idiomatic Racket way of doing this. But for and all its variants and infrastructure are just language constructs that you could in principle write yourself in a very bare-bones Scheme.
These are languages which are designed for building languages.
There are incredibly many ways to approach your problem. Here is the simplest one that came to my mind just now:
;; Call k on each integer from 1 to n, looping by self-application.
(define iota
  (lambda (n k)
    ((lambda (s) (s s 1))
     (lambda (s x)
       (k x)
       (or (= n x)
           (s s (+ x 1)))))))
(iota 1000
  (lambda (c)
    (iota c
      (lambda (b)
        (iota b
          (lambda (a)
            (newline)
            (display a) (display " ")
            (display b) (display " ")
            (display c) (display " ")))))))
If the initial interval of c is 1..3 instead of 1..1000, the nested code will print these values of (a b c):
a b c
1 1 1
1 1 2
1 2 2
2 2 2
1 1 3
1 2 3
2 2 3
1 3 3
2 3 3
3 3 3
I have two (1d) arrays, a long one A (size m) and a shorter one B (size n). I want to update the long array by adding each element of the short array at a particular index.
Schematically the arrays are structured like this,
A = [a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12 a13 a14 ... am]
B = [ b1 b2 b3 b4 b5 b6 b7 b8 b9 ... bn ]
and I want to update A by adding the corresponding elements of B.
The most straightforward way is to have some index array indarray (same size as B) which tells us which index of A corresponds to B(i):
Option 1
do i = 1, size(B)
   A(indarray(i)) = A(indarray(i)) + B(i)
end do
However, there is an organization to this problem which I feel should allow for better performance:

1. There should be no barrier to doing this in a vectorized way: the updates for each i are independent and can be done in any order.
2. There is no need to jump back and forth in array A: the machine should just loop once through the arrays, updating A only where necessary.
3. There should be no need for any temporary arrays.
What is the best way to do this in Fortran?
Option 2
One way might be using PACK, UNPACK, and a boolean mask M (same size as A) that serves the same purpose as indarray:
A = [a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12 a13 a14 ... am]
B = [ b1 b2 b3 b4 b5 b6 b7 b8 b9 ... bn ]
M = [. T T T . T T . . T . T T T T . ]
(where T represents .true. and . is .false.).
And the code would just be
A = UNPACK(PACK(A, M) + B, M, A)
This is very concise and maybe satisfies (1), and sort of (2) (it seems to make two passes through the arrays instead of just one). But I fear the machine will be creating a few temporary arrays in the process, which seems unnecessary.
Option 3
What about using where with UNPACK?
where (M)
   A = A + UNPACK(B, M, 0.0d0)
end where
This seems about the same as option 2 (two passes, and it may create temporary arrays). It also has to fill the M=.false. elements of the UNPACK'd array with 0's, which seems like a total waste.
Option 4
In my situation the .true. elements of the mask will usually come in contiguous blocks (i.e. a few .true.'s in a row, then a bunch of .false.'s, then another block of .true.'s, etc.). Maybe this could lead to something similar to option 1. Let's say there are K of these .true. blocks. I would need an array indstart (of size K) giving the index into A of the start of each .true. block, and an array blocksize (size K) with the length of each block.
j = 1
do i = 1, size(indstart)
   i0 = indstart(i)
   i1 = i0 + blocksize(i) - 1
   A(i0:i1) = A(i0:i1) + B(j:j+blocksize(i)-1)
   j = j + blocksize(i)
end do
At least this makes only one pass through the arrays. The code is also more explicit about the fact that there is no jumping back and forth within the arrays, but I don't think the compiler will be able to figure that out (blocksize could contain negative values, for example). So this option probably won't vectorize.
--
Any thoughts on a nice way to do this? In my situation the arrays indarray, M, indstart, and blocksize would be created once, but the adding operation must be done many times for different arrays A and B (though these arrays will have constant sizes). The where statement seems like it could be relevant.
I know that the XOR operator in base 2 for two bits A and B is (A+B)%2. In other words, it's addition modulo 2.
If I want to compute the truth table for the XOR operation in a ternary system (base 3), is it the same as addition modulo 3?
Eg: In a base 3 system, does 2 XOR 2 = 1 (since (2+2)%3 = 1)?
I read this link, which indicates that 2 XOR 2 in a base 3 system is 2, and I can't understand the formula behind it.
In general, for any base x, is the XOR operation for that base simply addition modulo x?
While I don't know that XOR has technically been defined in higher bases, the properties of XOR can be maintained in higher bases such that:
a ⊕ b ⊕ b = a
a ⊕ b ⊕ a = b
As the blog post shows, using (base - (a + b) % base) % base works. The part you were missing is the first instance of base. In the example of 2 ⊕ 2 in base 3, we get (3 - (2 + 2) % 3) % 3, which does give 2. This formula only works with single-digit numbers. If you want to extend it to multiple digits, you apply the same formula to each pair of digits, just as standard XOR in binary does it bitwise.
For example, 185 ⊕ 42 in base 10 when run for each pair of digits (i.e. hundreds, tens, ones) gives us:
(10 - (1 + 0) % 10) % 10 => 9
(10 - (8 + 4) % 10) % 10 => 8
(10 - (5 + 2) % 10) % 10 => 3
Or 983 when put together. And applying the operation again undoes it: if you run 983 ⊕ 42, you'll find it comes back out to 185, as the property a ⊕ b ⊕ b = a requires.
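To make the digit-by-digit recipe concrete, here is a small Python sketch (the names digit_xor and xor_base are mine, just for illustration):

def digit_xor(a, b, base):
    # the formula described above, applied to one pair of digits
    return (base - (a + b) % base) % base

def xor_base(x, y, base):
    # apply digit_xor to each pair of base-`base` digits, least significant first
    result, place = 0, 1
    while x > 0 or y > 0:
        result += digit_xor(x % base, y % base, base) * place
        x, y, place = x // base, y // base, place * base
    return result

print(xor_base(2, 2, 3))      # 2
print(xor_base(185, 42, 10))  # 983
print(xor_base(983, 42, 10))  # 185, so applying it twice recovers the input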
Well, XOR stands for eXclusive OR, and it is a logical operation, one that is only defined in binary logic. In your link, the author defines a completely different operator which happens to work the same as XOR in the binary base. You may call it an "extension" of the XOR operation to bases greater than 2. But as you mentioned in your question, there are multiple ways to do this extension, and each way preserves some properties of the "original" XOR and loses others. For example, you can stick to the observation that
a ⊕ b = (a + b) mod 2
In this case your "extended XOR" for base 3 would produce an output of 1 for inputs 2, 2. But this XOR3 will no longer satisfy other equations that hold for the standard XOR, e.g. these ones:
a ⊕ b ⊕ b = a
a ⊕ b ⊕ a = b
If you choose to "save" those, you will get the operation from your link. You can also preserve some different property of XOR, say
a ⊕ a = 0
and get another operation that is different from the former two.
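For instance (my own illustration, not from the linked post), digit-wise subtraction modulo the base preserves a ⊕ a = 0 but gives up commutativity:

def sub_mod(a, b, base):
    # a third possible "extension": (a - b) mod base, for single digits
    return (a - b) % base

print(sub_mod(2, 2, 3))                    # 0, so a ⊕ a = 0 holds
print(sub_mod(1, 2, 3), sub_mod(2, 1, 3))  # 2 1, so it is not commutative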
So the short answer is: the phrase "XOR function for non-binary bases" doesn't make sense. The XOR operator is only defined in binary logic. If you want to extend it to non-binary bases, non-integer numbers, complex numbers, or whatever, you can do so and define some extension function with whatever behavior and whatever "truth table" you want. But this extension won't be a XOR function anymore.
While reading this: http://graphics.stanford.edu/~seander/bithacks.html#ReverseByteWith64BitsDiv
I came across the phrase:
The last step, which involves modulus division by 2^10 - 1, has the
effect of merging together each set of 10 bits (from positions 0-9,
10-19, 20-29, ...) in the 64-bit value.
(it is about reversing the bits in a number)...
so I did some calculations:
reverted = (input * 0x0202020202ULL & 0x010884422010ULL) % 1023;
b = 74           : 01001010
* 0x0202020202   : 1000000010000000100000001000000010
= 0x9494949494   : 01001010010010100100101001001010010010100
& 0x010884422010 : 10000100010000100010000100010000000010000
= 0x84000010     : 10000100000000000000000000010000
% 1023           : 1111111111
= 82             : 01010010
Now, the only part which is somewhat unclear to me is where taking the big number modulo 1023 (2^10 - 1) packs everything together and gives me the reversed bits... I did not find any good documentation about the relationship between bit operations and the modulo operation (besides x % 2^n == x & (2^n - 1)), so if someone could cast some light on this it would be very fruitful.
The modulo operation does not give you the reversed bits per se; it is just a binning operation.
First Line : word expansion
b * 0x0202020202 = 01001010 01001010 01001010 01001010 01001010 0
The multiplication operation has a convolution property, which means it replicates the 8-bit input word several times (5 copies here, one for each set byte of the multiplier 0x0202020202).
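A quick numeric check of this replication (plain integer arithmetic on the constants quoted above):

# 0x4A is 74; each 0x02 byte of the multiplier contributes one copy of
# the input shifted left by one bit, and the five copies do not overlap
print(hex(0x4A * 0x0202020202))  # 0x9494949494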
Second Line : reversing bits
That's the trickiest part of the hack. You have to remember that we are working on an 8-bit word: b = abcdefgh, where [a-h] are either 1 or 0.
b * 0x0202020202 = abcdefghabcdefghabcdefghabcdefghabcdefgh0
& 0x010884422010 = a0000f000b0000g000c0000h000d00000000e0000
Last Line : word binning
Modulo has a peculiar property: 10 ≡ 1 (mod 9), so 100 ≡ 10*10 ≡ 10*1 ≡ 1 (mod 9).
More generally, for a base b we have b ≡ 1 (mod b - 1), so any number a = sum(a_k*b^k) satisfies a ≡ sum(a_k) (mod b - 1); that is, modulo b - 1 a number is congruent to the sum of its base-b digits.
In the example, base = 1024 (10 bits), so, writing x for the masked value above,

x = a0000f000b0000g000c0000h000d00000000e0000
  = a*base^4 + 0000f000b0*base^3 + 000g000c00*base^2 + 00h000d000*base + 00000e0000
  ≡ a + 0000f000b0 + 000g000c00 + 00h000d000 + 00000e0000 (mod base - 1)
  ≡   000000000a
    + 0000f000b0
    + 000g000c00
    + 00h000d000
    + 00000e0000  (mod base - 1)
  ≡   00hgfedcba  (mod base - 1), since there is no carry (the set bits never overlap)
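As a sanity check, here is a quick Python sketch (Python integers are arbitrary-precision, so the ULL suffixes are unnecessary) verifying the hack for every byte value:

def reverse_byte(b):
    # the bit hack quoted above: expand, mask, then bin modulo 2^10 - 1
    return (b * 0x0202020202 & 0x010884422010) % 1023

assert all(reverse_byte(b) == int(f"{b:08b}"[::-1], 2) for b in range(256))
print(reverse_byte(74))  # 82, i.e. 01001010 reversed to 01010010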
I want to describe the following problem using Z3.
int []array1=new int[100];
int []array2=new int[100];
array1[0~99]={i0, i1, ..., i99}; (i0...i99 > 0)
array2[0~99]={j0, j1, ..., j99}; (j0...j99 < 0)
int i, j; (0<=i<=99, 0<=j<=99)
does array1[i]==array2[j]?
This is unsatisfiable.
I use Z3 to describe this problem as follows:
(declare-const a1 (Array Int Int))
(declare-const a2 (Array Int Int))
(declare-const a1f (Array Int Int))
(declare-const a2f (Array Int Int))
(declare-const x0 Int)
....
(declare-const x99 Int)
(assert (> x0 0))
....
(assert (> x99 0))
(declare-const y0 Int)
....
(declare-const y99 Int)
(assert (< y0 0))
....
(assert (< y99 0))
(declare-const i1 Int)
(declare-const c1 Int)
(assert (<= i1 99))
(assert (>= i1 0))
(declare-const i2 Int)
(declare-const c2 Int)
(assert (<= i2 99))
(assert (>= i2 0))
(assert (= a1f (store (store (store (store (store (store (store (store ........ 95 x95) 96 x96) 97 x97) 98 x98) 99 x99)))
(assert (= a2f (store (store (store (store (store (store (store (store ........ 95 y95) 96 y96) 97 y97) 98 y98) 99 y99)))
(assert (= c1 (select a1f i1)))
(assert (= c2 (select a2f i2)))
(assert (= c1 c2))
(check-sat)
Is this right?
Is there another, more efficient way to describe this using the array theory?
By more efficient, I mean one that requires less solving time for Z3.
Thanks.
For solving this problem, Z3 will use a brute-force approach: it will essentially try all possible combinations. It will not manage to find the "smart" proof that we (as humans) immediately see.
On my machine, it takes approximately 17 secs for solving for arrays of size 100, 2.5 secs for arrays of size 50, and 0.1 secs for arrays of size 10.
However, if we encode the problem using quantifiers, Z3 can prove it instantaneously for any array size; we don't even need to specify a fixed array size.
In this encoding, we say that for all i in [0, N), a1[i] > 0 and a2[i] < 0. Then, we say we want to find j1 and j2 in [0, N) s.t. a1[j1] = a2[j2]. Z3 will immediately return unsat. Here is the problem encoded using the Z3 Python API. It is also available online at rise4fun.
from z3 import *

a1 = Array('a1', IntSort(), IntSort())
a2 = Array('a2', IntSort(), IntSort())
N = Int('N')
i = Int('i')
j1 = Int('j1')
j2 = Int('j2')
s = Solver()
# every element of a1 in [0, N) is positive, every element of a2 is negative
s.add(ForAll(i, Implies(And(0 <= i, i < N), a1[i] > 0)))
s.add(ForAll(i, Implies(And(0 <= i, i < N), a2[i] < 0)))
s.add(0 <= j1, j1 < N)
s.add(0 <= j2, j2 < N)
s.add(a1[j1] == a2[j2])
print(s)
print(s.check())  # unsat
In the encoding above, we use the quantifiers to summarize the information that Z3 could not figure out by itself in your encoding: the fact that, for indices in [0, N), one array has only positive values and the other only negative values.