Is there a way to query a Julia Timer to see whether it is running or not?

Julia has a Timer object which can run a callback function at a set repetition rate. According to the standard library documentation, the only functions that use a Timer are start_timer() and stop_timer().
Is there a way, given a Timer, to check whether it is currently running or not?

The best way to look for something like this is methodswith. Unfortunately, there aren't many methods defined for Julia Timer objects:
julia> methodswith(Timer, true) # true to check super types, too (but not Any)
5-element Array{Method,1}:
stop_timer(timer::Timer) at stream.jl:499
close(t::Timer) at stream.jl:460
start_timer(timer::Timer,timeout::Int64,repeat::Int64) at deprecated.jl:204
start_timer(timer::Timer,timeout::Real,repeat::Real) at stream.jl:490
close(t::Timer) at stream.jl:460
So we've got to dig a bit deeper. Looking at the implementation of Timer reveals that it simply wraps a libuv timer object. So I searched through libuv/include/uv.h for the timer API and found int uv_is_active(const uv_handle_t* handle), which looks very promising. I simply wrapped this C call in a Julia function, and it works like a charm:
julia> isactive(t::Timer) = bool(ccall(:uv_is_active, Cint, (Ptr{Void},), t.handle));
julia> t = Timer((x)->println(STDOUT,"\nboo"));
julia> isactive(t)
false
julia> start_timer(t, 10., 0); # fire in 10 seconds, don't repeat
julia> isactive(t)
true
julia>
boo
julia> isactive(t)
false

Related

How do I implement a controlled Rx in Cirq/Tensorflow Quantum?

I am trying to implement a controlled rotation gate in Cirq/Tensorflow Quantum.
The readthedocs.io at https://cirq.readthedocs.io/en/stable/gates.html states:
"Gates can be converted to a controlled version by using Gate.controlled(). In general, this returns an instance of a ControlledGate. However, for certain special cases where the controlled version of the gate is also a known gate, this returns the instance of that gate. For instance, cirq.X.controlled() returns a cirq.CNOT gate. Operations have similar functionality Operation.controlled_by(), such as cirq.X(q0).controlled_by(q1)."
I have implemented
cirq.rx(theta_0).on(q[0]).controlled_by(q[3])
I get the following error:
~/.local/lib/python3.6/site-packages/cirq/google/serializable_gate_set.py in
serialize_op(self, op, msg, arg_function_language)
193 return proto_msg
194 raise ValueError('Cannot serialize op {!r} of type {}'.format(
--> 195 gate_op, gate_type))
196
197 def deserialize_dict(self,
ValueError: Cannot serialize op cirq.ControlledOperation(controls=(cirq.GridQubit(0, 3),), sub_operation=cirq.rx(sympy.Symbol('theta_0')).on(cirq.GridQubit(0, 0)), control_values=((1,),)) of type <class 'cirq.ops.controlled_gate.ControlledGate'>
I have the qubits and symbols initialized as:
q = cirq.GridQubit.rect(1, 4)
symbol_names = x_0, x_1, x_2, x_3, theta_0, theta_1, z_2, z_3
I do re-use these symbols across various circuits.
My question: How do I properly implement a controlled Rx in Cirq/Tensorflow Quantum?
P.S. I can't find a tag for Google Cirq
Follow-up:
How does this generalize to the similar cases of controlled Ry and controlled Rz?
For Rz I found a gate decomposition at https://threeplusone.com/pubs/on_gates.pdf, involving H.on(q1), CNOT(q0, q1), H.on(q2), but this is not yet a CRz with an arbitrary angle. Would I introduce the angle before the H?
For Ry I have not found a decomposition yet, nor one for CRy.
What you have is a completely correct implementation of a controlled X rotation in Cirq. It can be used in simulation and other things like cirq.unitary without any issues.
TFQ only supports a subset of the gates in Cirq. For example, a cirq.ControlledGate can have an arbitrary number of control qubits, which in some cases makes it harder to decompose down to primitive gates that are compatible with NISQ hardware platforms (this is why cirq.decompose doesn't do anything to ControlledOperations). TFQ only supports these primitive-style gates. For a full list of the supported gates, you can do:
tfq.util.get_supported_gates().keys()
In your case it is possible to come up with a simpler implementation of this gate. First, note that cirq.rx(some_angle) is equal to cirq.X**(some_angle / pi) up to a global phase:
>>> a = cirq.rx(0.3)
>>> b = cirq.X**(0.3 / np.pi)
>>> cirq.equal_up_to_global_phase(cirq.unitary(a), cirq.unitary(b))
True
Let's move to using X now. The operation we are after is then:
>>> qs = cirq.GridQubit.rect(1,2)
>>> a = (cirq.X**0.3)(qs[0]).controlled_by(qs[1])
>>> b = cirq.CNOT(qs[0], qs[1]) ** 0.3
>>> cirq.equal_up_to_global_phase(cirq.unitary(a), cirq.unitary(b))
True
Since cirq.CNOT is in the TFQ supported gates, it should serialize without any issues. If you want to make a symbolized version of the gate, you can just replace the 0.3 with a sympy.Symbol.
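For instance, a minimal sketch of the symbolized version (keeping in mind that the exponent plays the role of angle / pi, so the symbol is in units of pi rather than radians):
>>> import sympy
>>> theta_0 = sympy.Symbol('theta_0')
>>> op = cirq.CNOT(qs[0], qs[1]) ** theta_0  # exponent in units of pi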
Answer to the follow-up: If you want to do a CRz you can do the same thing as above, swapping out the CNOT gate for the CZ gate. For CRy it's not as easy. For that I would recommend doing some combination of cirq.Y(0) and cirq.YY(0, 1).
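For the CRz case, the same check as above might look like this (a sketch, reusing the qs from the earlier snippet):
>>> a = (cirq.Z**0.3)(qs[0]).controlled_by(qs[1])
>>> b = cirq.CZ(qs[0], qs[1]) ** 0.3
>>> cirq.equal_up_to_global_phase(cirq.unitary(a), cirq.unitary(b))
True
cirq.CZ is also in the TFQ supported gate list, so the 0.3 exponent can again be swapped for a sympy.Symbol.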
Edit: tfq-nightly builds (and likely releases after 0.4.0) now include support for arbitrary controlled gates. So on these versions of tfq you could also do things like cirq.Y(...).controlled_by(...) to achieve the desired result.
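On such a build, a hypothetical sketch of converting the original circuit for TFQ (assuming a tfq-nightly install and the q and theta_0 defined in the question):
>>> import tensorflow_quantum as tfq
>>> circuit = cirq.Circuit(cirq.rx(sympy.Symbol('theta_0')).on(q[0]).controlled_by(q[3]))
>>> circuit_tensor = tfq.convert_to_tensor([circuit])  # assumes tfq-nightly; older releases won't serialize this controlled op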

Scala: array.toList vs array.to[List]

I am wondering what the difference is between .toList and .to[List] on arrays. I ran this test in the spark-shell and there is no difference in the result, but I don't know which is better to use. Any comments?
scala> val l = Array(1, 2, 3).toList
l: List[Int] = List(1, 2, 3)
scala> val l = Array(1, 2, 3).to[List]
l: List[Int] = List(1, 2, 3)
Adding to Luis' comment, to[List] is (as Luis mentioned) actually using a factory parameter to construct the list. However, the linked source is only valid from Scala 2.13+, after the collections overhaul. The syntax you use wouldn't work in Scala 2.13; instead, you would have to write .to(List), explicitly passing the factory argument. In previous Scala versions, the method looked like this. The CanBuildFrom is essentially a factory passed as an implicit parameter.
The reason this method exists is that it is generic beyond collections in the standard library (and you don't have to define every single possible transformation as a separate method). You can use another collections library (e.g. Breeze), and either construct a factory or use an included one, and use the to(factory) method. You can also use it in generic functions where you take the factory as a parameter and just pass it on to the conversion method, for example something like:
def mapAndConvert[A, B, C](list: List[A], f: A => B, factory: Factory[B, C]): C =
  list.map(f).to(factory)
In this example I don't need to know what collection C is, yet I can still return one with ease.

Retrieving a SharedArray on Julia 0.4

Edit: I did try using fetch()
I seem to have broken something in Julia this week. I had been playing with the SharedArray type on a computer with 12 threads (6 dual-threaded CPUs), and I managed to get a result, print it, and save it as a matrix text file without problems, more or less following the instructions at http://docs.julialang.org/en/release-0.4/manual/parallel-computing/#shared-arrays. I had a routine where I initialized a number of workers and an Array, passed them as arguments to a function, and expected to get back a SharedArray filled with numerical values. It went more or less like this:
addprocs(11)
BCero=rand(128,128)
ConjuntoX=Array[]
for j=1:128, k=1:128
push!(ConjuntoX, [j,k])
end
function obtenerKernelParalell(LasB::Array, lasX::Array, jmax::Int)
result=SharedArray(Float64,(jmax,jmax))
@sync @parallel for j=1:jmax
xj=lasX[j]
for k=1:j
xk=lasX[k]
for l=1:jmax
xl=lasX[l]
result[j,k]+= LasB[(xk-xl+xconstante)...]*LasB[(xj-xl+xconstante)...]
end
end
end
end
KSuaveParalel=obtenerKernelParalell(BceroSuave, ConjuntoX,128);
What I got after running this the first time was an array that behaved like a normal Array: if I typed KSuaveParalel[3,12] I obtained a value. But now I get the following from the REPL:
KSuaveParalel
11-element Array{Any,1}:
RemoteRef{Channel{Any}}(2,1,122)
RemoteRef{Channel{Any}}(3,1,123)
RemoteRef{Channel{Any}}(4,1,124)
RemoteRef{Channel{Any}}(5,1,125)
RemoteRef{Channel{Any}}(6,1,126)
RemoteRef{Channel{Any}}(7,1,127)
RemoteRef{Channel{Any}}(8,1,128)
RemoteRef{Channel{Any}}(9,1,129)
RemoteRef{Channel{Any}}(10,1,130)
RemoteRef{Channel{Any}}(11,1,131)
RemoteRef{Channel{Any}}(12,1,132)
So I got an array of RemoteRefs, and I do not know how to get their values. Also, using fetch() doesn't seem to work.
What is going on here?
Edit:
You need to make sure you return the array in the function.
i.e. return result
You will want to call fetch() on each RemoteRef object to wait for and get the value that will be returned.
[fetch(x) for x in KSuaveParalel]
The RemoteRef object is returned immediately (i.e. before the computation has actually been done). See this answer (Julia Parallel macro does not seem to work) and the docs for more info.
http://docs.julialang.org/en/release-0.4/stdlib/parallel/#Base.fetch
Okay, I sort of figured out a way to do it, though it is probably not the most efficient: if I convert the result from a SharedArray to an Array INSIDE the function, I get the (apparently) correct result:
function obtenerKernelParalell(LasB::Array, lasX::Array, jmax::Int)
result=SharedArray(Float64,(jmax,jmax))
@sync @parallel for j=1:jmax
xj=lasX[j]
for k=1:j
xk=lasX[k]
for l=1:jmax
xl=lasX[l]
result[j,k]+= LasB[(xk-xl+xconstante)...]*LasB[(xj-xl+xconstante)...]
end
end
end
result=Array(result)
return result
end
Is this because the conversion does the fetch() with the right settings automatically?

How do I make RGeo::Feature::Geometry methods available to RGeo::Geographic::SphericalMultiPolygonImpl?

I am using Rails 4.2 with PostGIS, rgeo and the activerecord-postgis-adapter gem on Ubuntu. I have also installed the following libraries: libgeos++-dev libgeos-3.4.2 libgeos-c1 libgeos-dbg libgeos-dev libgeos-doc libgeos-ruby1.8 ruby-geos. An RGeo::Error::UnsupportedOperation is being raised when I call contains? on an RGeo::Geographic::SphericalMultiPolygonImpl. How do I make the Feature::Geometry methods available to my RGeo::Geographic::SphericalMultiPolygonImpl?
You probably need to break up that multipolygon into pieces and run a contains? call on each of them. I'm guessing the #contains? method must be run on one polygon at a time. Here's what that operation might look like:
responses = {}
n = this_shape.num_geometries
(0...n).each do |i|
  responses[i] = this_shape.geometry_n(i).contains?(other_shape)
end
Alternatively, you could break up those multipolygons into individual polygons and then run the loop on the array...

How do I convert a Zip into an array in rust 0.8?

The docs seem to indicate that after zipping two iterators together, you can turn them into an array with .from_iterator(), but when I try to do this, rust reports:
`std::iter::Zip<std::vec::VecIterator<,int>,std::vec::VecIterator<,int>>` does not implement any method in scope named `from_iterator`
Could someone please give working sample code for rust 0.8 that turns a Zip into an array?
That would be FromIterator::from_iterator(iterator).
The more commonly used interface for that is Iterator.collect (the link is to the master docs, but it's the same in 0.8 and 0.9), with which you would call iterator.collect().
Rust 0.8 is dated; you should upgrade to 0.9. The following works in 0.9:
let a = ~[1,12,3,67];
let b = ~[56,74,13,2];
let c: ~[(&int,&int)] = a.iter().zip(b.iter()).collect();
println!("{:?}", c);
Result:
~[(&1, &56), (&12, &74), (&3, &13), (&67, &2)]
