R = (Not Y)(X ⊕ Z) + X (Y ⊕ Z) + Y(X ⊕ Z)
I have been trying to simplify this into a minimal DNF, but I keep getting a term with three variables. A hint says that the terms of the solution each have only two variables.
Any help would be appreciated.
Is this what you were looking for?
((~Y^~X^Z)V(~Y^X^~Z))V((X^~Y^Z)V(X^Y^~Z))V((Y^~X^Z)V(Y^X^~Z))
I am not sure if this is minimal, though
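That expansion is the full minterm form rather than a minimal one. Grouping those minterms on a Karnaugh map suggests the two-variable-per-term form (X ∧ ¬Z) ∨ (¬X ∧ Z) ∨ (¬Y ∧ Z), i.e. (X ⊕ Z) + (¬Y)Z. Here is a quick brute-force equivalence check, using Python purely as a truth-table tool:

```python
from itertools import product

# Brute-force truth-table check that a candidate two-variable-per-term
# form matches R = (~Y)(X xor Z) + X(Y xor Z) + Y(X xor Z).
def R(x, y, z):
    return ((not y) and (x ^ z)) or (x and (y ^ z)) or (y and (x ^ z))

def candidate(x, y, z):
    # (X AND NOT Z) OR (NOT X AND Z) OR (NOT Y AND Z)
    return (x and not z) or (not x and z) or (not y and z)

assert all(bool(R(x, y, z)) == bool(candidate(x, y, z))
           for x, y, z in product([False, True], repeat=3))
```

The same check works for any other candidate, so it removes the guesswork about whether a simplification is equivalent (though not whether it is minimal).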
I fail to understand why, in the example below, only x1 becomes a 1x1000 array while y is a single number.
x = [0:1:999];
y = (7.5*(x))/(18000+(x));
x1 = exp(-((x)*8)/333);
Any clarification would be highly appreciated!
Why is x1 1x1000?
As given in the documentation,
exp(X) returns the exponential eˣ for each element in array X.
Since x is 1x1000, -(x*8)/333 is also 1x1000, and when exp() is applied to it, the exponentials of all 1000 elements are computed, so x1 is also 1x1000. As an example, exp([1 2 3]) is the same as [exp(1) exp(2) exp(3)].
Why is y a single number?
As given in the documentation,
If A is a rectangular m-by-n matrix with m~= n, and B is a matrix
with n columns, then x = B/A returns a least-squares solution of the
system of equations x*A = B.
In your case,
A is 18000+x and size(18000+x) is 1x1000 i.e. m=1 and n=1000, and m~=n
and B is 7.5*x which has n=1000 columns.
⇒ (7.5*x)/(18000+x) returns the least-squares solution c of the system c*(18000+x) = 7.5*x, which is a single number.
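To see concretely what that least-squares solve produces, here is a rough NumPy re-creation (Python used as a neutral illustration; the names mirror the documentation's A and B):

```python
import numpy as np

# Rough NumPy re-creation of MATLAB's B/A for row vectors A and B:
# find the scalar c minimizing ||c*A - B||, i.e. the least-squares
# solution of c*(18000+x) = 7.5*x.
x = np.arange(1000)
A = 18000.0 + x
B = 7.5 * x
c = np.dot(B, A) / np.dot(A, A)   # normal-equation solution: one number
```

The result is a single scalar, which is exactly why y collapses to one number instead of a 1x1000 array.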
Final Remarks:
x = [0:1:999];
The brackets are unnecessary here; it is better written like this: x = 0:1:999;
It seems that you want to do element-wise division when computing y, for which you should use the ./ operator like this:
y=(7.5*x)./(18000+x); %Also removed unnecessary brackets
Also note that addition is always element-wise; .+ is not valid MATLAB syntax (it does work in Octave, though). See the valid arithmetic array and matrix operators in MATLAB.
x1 also has some unnecessary brackets.
The question has already been answered by other people. I just want to point out a small thing. You do not need to write x = 0:1:999. It is better written as x = 0:999 as the default increment value used by MATLAB or Octave is 1.
Try explicitly specifying that you want to do element-wise operations rather than matrix operations:
y = (7.5.*(x))./(18000+(x));
In general, .* does element-wise multiplication, ./ does element-wise division, and so on. So [1 2] .* [3 4] yields [3 8]. Omitting the dots causes MATLAB to use matrix operations whenever it can find a reasonable interpretation of your inputs as matrices.
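For comparison, the same computation in NumPy, where the plain operators on arrays are element-wise by default, which is exactly the behavior wanted for y:

```python
import numpy as np

# NumPy analogue of the intended element-wise computation; unlike
# MATLAB, / and * on arrays are element-wise by default here.
x = np.arange(1000)               # analogue of x = 0:999
y = (7.5 * x) / (18000 + x)       # element-wise division
x1 = np.exp(-(x * 8) / 333)       # exp is applied element-wise
```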
I'm still fairly new to Clojure, but a pattern I find myself using frequently goes something like this: I have some collections, and I want to build a new collection (usually a hash-map) out of them with some filters or conditions. There are always a few ways to do this, for example using loop, or reduce combined with map/filter, but I would like something more like the for macro, which has great syntax for controlling what gets evaluated in the loop. I'd like to produce a macro with syntax that goes like this:
(defmacro build
"(build sym init-val [bindings...] expr) evaluates the given expression expr
over the given bindings (treated identically to the bindings in a for macro);
the first time expr is evaluated the given symbol sym is bound to the init-val
and every subsequent time to the previous expr. The return value is the result
of the final expr. In essence, the build macro is to the reduce function
as the for macro is to the map function.
Example:
(build m {} [x (range 4), y (range 4) :when (not= x y)]
(assoc m x (conj (get m x #{}) y)))
;; ==> {0 #{1 3 2}, 1 #{0 3 2}, 2 #{0 1 3}, 3 #{0 1 2}}"
[sym init-val [& bindings] expr]
`(...))
Looking at the for code in clojure.core, it's pretty clear that I don't want to re-implement its syntax myself (even ignoring the ordinary perils of duplicating code), but coming up with for-like behavior in the above macro is a lot trickier than I initially expected. I eventually came up with the following, but I feel that (a) this probably isn't terribly performant and (b) there ought to be a better, still clojure-y, way to do this:
(defmacro build
[sym init-val bindings expr]
`(loop [result# ~init-val, s# (seq (for ~bindings (fn [~sym] ~expr)))]
(if s#
(recur ((first s#) result#) (next s#))
result#)))
;; or `(reduce #(%2 %1) ~init-val (for ~bindings (fn [~sym] ~expr)))
My specific questions:
Is there a built-in clojure method or library that solves this already, perhaps more elegantly?
Can someone who is more familiar with clojure performance give me an idea of whether this implementation is problematic and whether/how much I should be worried about performance, assuming that I may use this macro very frequently for relatively large collections?
Is there any good reason that I should use the loop over the reduce version of the macro above, or vice versa?
Can anyone see a better implementation of the macro?
Your reduce version was also my first approach based on the problem statement. I think it's nice and straightforward and I'd expect it to work very well, particularly since for will produce a chunked seq that reduce will be able to iterate over very quickly.
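For readers more comfortable outside Clojure, the reduce formulation is just a fold over the sequence the comprehension produces; a hypothetical Python sketch of the same shape:

```python
from functools import reduce

# Hypothetical Python analogue of the reduce-based `build`: the
# bindings become a generator expression, the body becomes the step
# function, and the init value seeds the fold.
def build(init, pairs, step):
    return reduce(step, pairs, init)

def step(m, xy):
    x, y = xy
    m.setdefault(x, set()).add(y)   # like (assoc m x (conj (get m x #{}) y))
    return m

pairs = ((x, y) for x in range(4) for y in range(4) if x != y)
result = build({}, pairs, step)
# result == {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
```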
for generates functions to do output generation anyway and I wouldn't expect the extra layer introduced by the build expansion to be particularly problematic. It may still be worthwhile to benchmark this version based on volatile! as well:
(defmacro build [sym init-val bindings expr]
`(let [box# (volatile! ~init-val)] ; AtomicReference would also work
(doseq ~bindings
(vreset! box# (let [~sym @box#] ~expr)))
@box#))
Criterium is great for benchmarking and will eliminate any performance-related guesswork.
I don't quite want to take the example code from your doc string, since it's not idiomatic Clojure. But taking plumbing.core's for-map as a model, you can come up with a similar for-map-update:
(defn update!
"Like update but for transients."
([m k f] (assoc! m k (f (get m k))))
([m k f x1] (assoc! m k (f (get m k) x1)))
([m k f x1 x2] (assoc! m k (f (get m k) x1 x2)))
([m k f x1 x2 & xs] (assoc! m k (apply f (get m k) x1 x2 xs))))
(defmacro for-map-update
"Like 'for-map' for building maps but accepts a function as the value to build map values."
([seq-exprs key-expr val-expr]
`(for-map-update ~(gensym "m") ~seq-exprs ~key-expr ~val-expr))
([m-sym seq-exprs key-expr val-expr]
`(let [m-atom# (atom (transient {}))]
(doseq ~seq-exprs
(let [~m-sym @m-atom#]
(reset! m-atom# (update! ~m-sym ~key-expr ~val-expr))))
(persistent! @m-atom#))))
(for-map-update
[x (range 4)
y (range 4)
:when (not= x y)]
x (fnil #(conj % y) #{} ))
;; => {0 #{1 3 2}, 1 #{0 3 2}, 2 #{0 1 3}, 3 #{0 1 2}}
Suppose I have an MxNx3 array A, where the first two indexes refer to the coordinates of a point, and the last index (the number '3') refers to the three components of a vector. For example, A[4,7,:] = [1,2,3] means that the vector at point (7,4) is (1,2,3).
Now I need to implement the following operations:
Lx = D*ux - (x-xo)
Ly = D*uy + (y-yo)
Lz = D
where D, ux, uy, xo, yo are all constants that are already known. Lx, Ly and Lz are the three components of the vector at each point (x,y) (note: x is the column index and y is the row index). The biggest problem is the x-xo and y-yo terms, since x and y differ from point to point. So how can I carry out these operations on an MxNx3 array efficiently, using vectorized code or some other fast method?
thanks
You could use the meshgrid function from numpy:
import numpy as np
M=10
N=10
D=1
ux=0.5
uy=0.5
xo=1
yo=1
A=np.empty((M,N,3))
x=range(N)
y=range(M)
xv, yv = np.meshgrid(x, y)  # default 'xy' indexing: xv, yv have shape (M, N)
A[:,:,0]=D*ux - (xv-xo)
A[:,:,1]=D*uy + (yv-yo)
A[:,:,2]=D
If you want to operate on the X and Y values, you should include them in the matrix (or in another matrix) instead of relying on their indexes.
For that, you could use some of the range creation routines from NumPy, especially numpy.mgrid.
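For instance, a minimal numpy.mgrid version of the computation, assuming the same constants as the meshgrid answer:

```python
import numpy as np

# Minimal np.mgrid sketch, assuming the constants from the question.
M, N = 10, 10
D, ux, uy, xo, yo = 1, 0.5, 0.5, 1, 1

yv, xv = np.mgrid[0:M, 0:N]     # yv varies along rows, xv along columns
A = np.empty((M, N, 3))
A[:, :, 0] = D*ux - (xv - xo)   # Lx = D*ux - (x - xo)
A[:, :, 1] = D*uy + (yv - yo)   # Ly = D*uy + (y - yo)
A[:, :, 2] = D                  # Lz = D
```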
The following code is from perm.v in the Ssreflect Coq library.
I want to know what this result is.
Lemma perm_invK s : cancel (fun x => iinv (perm_onto s x)) s.
Proof. by move=> x /=; rewrite f_iinv. Qed.
Definitions in Ssreflect can involve lots of concepts, and sometimes it is hard to understand what is actually going on. Let's analyze this by parts.
iinv (defined in fintype.v) has type
iinv : forall (T : finType) (T' : eqType) (f : T -> T')
(A : pred T) (y : T'),
y \in [seq f x | x in A] -> T
What this does is invert any function f : T -> T' whose restriction to a subdomain A \subset T is surjective onto T'. Put another way, if you give me a y that is in the list of results of applying f to all elements of A, then I can find an x \in A such that f x = y. Notice that this relies crucially on the fact that T is a finite type and that T' has decidable equality. The correctness of iinv is stated in the lemma f_iinv, which is used above.
perm_onto has type codom s =i predT, where s is a permutation on a finite type T. This says, as its name implies, that s is surjective (which is clear, since it is injective by the definition of permutations in perm.v, and its domain and codomain coincide). Thus, fun x => iinv (perm_onto s x) is a function that maps an element x to an element y such that s y = x; in other words, it's the inverse of s. perm_invK simply states that this function is indeed the inverse (more precisely, that s composed with it is the identity).
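The computational content of iinv (search the finite domain for a preimage) can be mimicked outside Coq; a toy Python sketch with hypothetical names:

```python
# Toy analogue of iinv on a finite type: invert a function by
# exhaustive search over its (finite) domain.
def iinv(f, domain, y):
    for x in domain:
        if f(x) == y:
            return x
    raise ValueError("y is not in the image of f")

s = {0: 2, 1: 0, 2: 1}                        # a permutation of {0, 1, 2}
s_inv = lambda y: iinv(s.__getitem__, range(3), y)

# perm_invK says applying s after the inverse gives back the input.
assert all(s[s_inv(y)] == y for y in range(3))
```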
The definition that is actually useful, however, is perm_inv, which appears right below. It packages fun x => iinv (perm_onto s x) together with its proof of correctness perm_invK to define an element perm_inv s of type {perm T} such that perm_inv s * s = s * perm_inv s = 1. Thus, you can view it as saying that the type {perm T} is closed under inverses, which lets you use much of the ssr machinery for e.g. finite groups and monoids.
Excuse me if I get a little mathy for a second:
I have two sets, X and Y, and a many-to-many relation ℜ ⊆ X×Y.
For all x ∈ X, let xℜ = { y | (x,y) ∈ ℜ } ⊆ Y, the subset of Y associated with x by ℜ.
For all y ∈ Y, let ℜy = { x | (x,y) ∈ ℜ } ⊆ X, the subset of X associated with y by ℜ.
Define a query as a set of subsets of Y, Q ⊆ ℘(Y).
Let the image of the query be the union of the subsets in Q: image(Q) = ⋃q∈Q q.
Say an element x of X satisfies a query Q if for all q ∈ Q, q ∩ xℜ ≠ ∅, that is, if every subset in Q overlaps with the subset of Y associated with x.
Define the evidence of satisfaction of an element x for a query Q as: evidence(x,Q) = xℜ ∩ image(Q)
That is, the parts of Y that are associated with x and were used to match some part of Q. This could be used to verify whether x satisfies Q.
My question is how should I store my relation ℜ so that I can efficiently report which x∈X satisfy queries, and preferably report evidence of satisfaction?
The relation isn't overly huge; as CSV it's only about 6GB. I've got a couple of ideas, neither of which I'm particularly happy with:
I could store { (x, xℜ) | x ∈ X } in a flat file, then do O(|X||Q||Y|) work checking each x to see if it satisfies the query. This could be parallelized, but it feels wrong.
I could store ℜ in a DB table indexed on Y, retrieve { (y, ℜy) | y ∈ image(Q) }, then invert it to get { (x, evidence(x,Q)) | evidence(x,Q) ≠ ∅ }, then check just that to find the x that satisfy Q, along with the evidence. This seems a little better, but I feel like inverting it myself might be doing something I could ask my RDBMS to do.
How could I be doing this better?
I think #2 is the way to go. Also, if Q can be represented in CNF you can use several queries plus INTERSECT to get the RDBMS to do some of the heavy lifting. (Similarly with DNF and UNION.)
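To make option 2 concrete, here is a small in-memory sketch (Python, with made-up toy data) of the index-then-invert step:

```python
from collections import defaultdict

# Toy in-memory sketch of option 2: index the relation on Y, gather
# evidence for each x from image(Q), then keep the x whose evidence
# meets every q in Q. Data and names are made up for illustration.
rel = [("x1", "a"), ("x1", "b"), ("x2", "b"), ("x2", "c")]

index = defaultdict(set)          # y -> ℜy
for x, y in rel:
    index[y].add(x)

def satisfying(Q):
    """Map each satisfying x to evidence(x, Q) = xℜ ∩ image(Q)."""
    evidence = defaultdict(set)
    for q in Q:                   # walk image(Q), inverting via the index
        for y in q:
            for x in index.get(y, ()):
                evidence[x].add(y)
    # x satisfies Q iff its evidence meets every q in Q
    return {x: ev for x, ev in evidence.items()
            if all(ev & q for q in Q)}
```

In a real RDBMS the inner loops would become an indexed join against image(Q), with the final per-x check done either in SQL (GROUP BY x, HAVING COUNT(DISTINCT q) = |Q|) or in application code.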
This also looks a bit as if you want an "inverted index", which some RDBMSs have support for: X = set of documents, Y = set of words, q = set of words matching the glob "a*c".
HTH