Scalar multiplication of tensors - tensorflow.js

In TensorFlow Core for Python there is an operation called tf.math.scalar_mul.
I would like to scale tensors in TensorFlow.js. When I try, for instance, a * 0.1, I get an error message (at least from TypeScript): The left-hand side of an arithmetic operation must be of type 'any', 'number', 'bigint' or an enum type. ts(2362).
Can a tensor be scaled directly, without converting it to an array, scaling elementwise, and transforming back to a tensor?

Although tf.scalar can be used, one can also call tensor.mul(number) directly, like the following:
tf.tensor([1, 2, 3, 4]).mul(5).print(); // [5, 10, 15, 20]

I found the answer in the API documentation. To multiply a tensor a by 5, just use a.mul(tf.scalar(5)).

Related

How do I implement a controlled Rx in Cirq/Tensorflow Quantum?

I am trying to implement a controlled rotation gate in Cirq/Tensorflow Quantum.
The documentation at https://cirq.readthedocs.io/en/stable/gates.html states:
"Gates can be converted to a controlled version by using Gate.controlled(). In general, this returns an instance of a ControlledGate. However, for certain special cases where the controlled version of the gate is also a known gate, this returns the instance of that gate. For instance, cirq.X.controlled() returns a cirq.CNOT gate. Operations have similar functionality Operation.controlled_by(), such as cirq.X(q0).controlled_by(q1)."
I have implemented
cirq.rx(theta_0).on(q[0]).controlled_by(q[3])
I get the following error:
~/.local/lib/python3.6/site-packages/cirq/google/serializable_gate_set.py in
serialize_op(self, op, msg, arg_function_language)
193 return proto_msg
194 raise ValueError('Cannot serialize op {!r} of type {}'.format(
--> 195 gate_op, gate_type))
196
197 def deserialize_dict(self,
ValueError: Cannot serialize op cirq.ControlledOperation(controls=(cirq.GridQubit(0, 3),), sub_operation=cirq.rx(sympy.Symbol('theta_0')).on(cirq.GridQubit(0, 0)), control_values=((1,),)) of type <class 'cirq.ops.controlled_gate.ControlledGate'>
I have the qubits and symbols initialized as:
q = cirq.GridQubit.rect(1, 4)
symbol_names = x_0, x_1, x_2, x_3, theta_0, theta_1, z_2, z_3
I do re-use the circuits with various parameters.
My question: How do I properly implement a controlled Rx in Cirq/Tensorflow Quantum?
P.S. I can't find a tag for Google Cirq
Follow up:
How does this generalize to the similar situations of Controlled Ry and controlled Rz?
For Rz I found a gate decomposition at https://threeplusone.com/pubs/on_gates.pdf, involving H.on(q1), CNOT(q0, q1), H.on(q2), but this is not yet a CRz with an arbitrary angle. Would I introduce the angle before the H?
For Ry, I have not found a decomposition yet, nor one for CRy.
What you have is a completely correct implementation of a controlled X rotation in Cirq. It can be used in simulation and other things like cirq.unitary without any issues.
TFQ only supports a subset of the gates in Cirq. For example, a cirq.ControlledGate can have an arbitrary number of control qubits, which in some cases can make it harder to decompose down to primitive gates that are compatible with NISQ hardware platforms (this is why cirq.decompose doesn't do anything to ControlledOperations). TFQ only supports these primitive-style gates; for a full list of the supported gates, you can do:
tfq.util.get_supported_gates().keys()
In your case it is possible to come up with a simpler implementation of this gate. First we can note that cirq.rx(some angle) is equal to cirq.X**(some angle / pi) offset by a global phase:
>>> a = cirq.rx(0.3)
>>> b = cirq.X**(0.3 / np.pi)
>>> cirq.equal_up_to_global_phase(cirq.unitary(a), cirq.unitary(b))
True
Let's move to using X now. Then the operation we are after is:
>>> qs = cirq.GridQubit.rect(1,2)
>>> a = (cirq.X**0.3)(qs[0]).controlled_by(qs[1])
>>> b = cirq.CNOT(qs[0], qs[1]) ** 0.3
>>> cirq.equal_up_to_global_phase(cirq.unitary(a), cirq.unitary(b))
True
Since cirq.CNOT is in the TFQ supported gates, it should be serializable without any issues. If you want to make a symbolized version of the gate, you can just replace the 0.3 with a sympy.Symbol.
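For instance, a minimal sketch of that symbolized version (the symbol name theta is just illustrative; as above, the exponent corresponds to the rotation angle divided by pi):
import sympy
import cirq

qs = cirq.GridQubit.rect(1, 2)
theta = sympy.Symbol('theta')  # illustrative symbol; exponent = angle / pi

# Same controlled-X construction as above, but parameterized so the value
# can be fed in at run time.
op = cirq.CNOT(qs[0], qs[1]) ** theta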
Answer to the follow-up: if you want to do a CRz you can do the same thing you did above, swapping out the CNOT gate for the CZ gate. For CRy it's not as easy; for that I would recommend doing some combination of cirq.Y(0) and cirq.YY(0, 1).
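A minimal sketch of the CRz variant under that substitution (the symbol name phi is illustrative):
import sympy
import cirq

qs = cirq.GridQubit.rect(1, 2)
phi = sympy.Symbol('phi')

# CZ raised to a symbolic exponent is a controlled Z**phi, mirroring the
# CNOT construction above (exponent = angle / pi).
crz = cirq.CZ(qs[0], qs[1]) ** phi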
Edit: tfq-nightly builds and likely releases after 0.4.0 now include support for arbitrary controlled gates. So on these versions of tfq you could also do things like cirq.Y(...).controlled_by(...) to achieve the desired result now too.
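On those newer versions, the construction from the question should serialize as written; for example (assuming a recent tfq-nightly):
import sympy
import cirq

q = cirq.GridQubit.rect(1, 4)
theta_0 = sympy.Symbol('theta_0')

# With arbitrary-control support, the original op no longer needs rewriting.
op = cirq.rx(theta_0).on(q[0]).controlled_by(q[3])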

Function imwrite on imageio (Python) seems to be rescaling image data

The function imwrite() on imageio (Python) seems to be rescaling image data prior to saving. My image data has values in the range [30, 255], but when I save it, the data is stretched so that the final image spans [0, 255], creating "holes" in the histogram and increasing the overall contrast.
Is there any parameter to fix this and make imwrite() not modify the data?
Thanks
So far I am setting a pixel to 0 to prevent this from happening:
prediction[0, 0, 0] = 0
(prediction is a [1024, 768, 3] array containing a colour photograph)
imageio.imwrite('prediction.png', prediction)
Fixed! I was using uint32 values instead of uint8, so imwrite() seems to perform some scaling correction because it expects uint8 data. The problem is solved using:
prediction = np.round(prediction*255).astype('uint8')
Instead of converting to 32-bit integer, which I did at the beginning:
prediction = np.round(prediction*255).astype(int)
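For reference, a minimal end-to-end sketch (the random prediction array here is just a stand-in for the real data, assumed to be floats in [0, 1]):
import numpy as np
import imageio

# Stand-in prediction with float values in [0, 1].
prediction = np.random.rand(1024, 768, 3)

# Scale to [0, 255] and cast to uint8 so imwrite() stores the values as-is.
prediction = np.round(prediction * 255).astype('uint8')
imageio.imwrite('prediction.png', prediction)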

TensorFlow - Cannot get the shape of matrix with the get_shape command

I can't seem to get the shape of the tensor when I do
get_shape().as_list()
Here is the code I have written:
matrix1 = tf.placeholder(tf.int32)
matrix2 = tf.placeholder(tf.int32)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    a = sess.run(matrix1, {matrix1: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]})
    b = sess.run(matrix2, {matrix2: [[10, 11, 12], [13, 14, 15], [16, 17, 18]]})
    print(a.get_shape().as_list())  # ERROR
I get the following error:
AttributeError: 'numpy.ndarray' object has no attribute 'get_shape'
I want to know the shape of the matrix so that I can take in an arbitrary matrix and loop through its rows and columns.
Just summarizing the discussion in the comments, with a few notes:
Both matrix1 and a are multidimensional arrays, but there is a difference:
matrix1 is an instance of tf.Tensor, which supports two ways to access the shape: matrix1.shape attribute and matrix1.get_shape() method.
The result of tf.Tensor evaluation, a, is a numpy ndarray, which has just a.shape attribute.
Historically, tf.Tensor had only the get_shape() method; shape was added later to make it similar to numpy. And one more note: in TensorFlow, a tensor's shape can be dynamic (like in your example), in which case neither get_shape() nor shape will return concrete dimensions. In that case, one can use the tf.shape function to access the shape at run time (here's an example of when it might be useful).
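A minimal sketch based on the snippet above (TF 1.x style): tf.shape gives the dynamic shape as a tensor you can evaluate in the session.
import tensorflow as tf

matrix1 = tf.placeholder(tf.int32)   # rank and dimensions unknown statically
dynamic_shape = tf.shape(matrix1)    # shape as a tensor, known at run time

with tf.Session() as sess:
    shape_value = sess.run(dynamic_shape,
                           {matrix1: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]})
    print(shape_value)  # [3 3]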

PyTorch 2d Convolution with sparse filters

I am trying to perform a spatial convolution (e.g. on an image) in PyTorch on a dense input using a sparse filter matrix.
Sparse tensors are implemented in PyTorch. I tried to use a sparse tensor, but it ends in a segmentation fault.
import torch
from torch.autograd import Variable
from torch.nn import functional as F
# build sparse filter matrix
i = torch.LongTensor([[0, 1, 1],[2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
filter = Variable(torch.sparse.FloatTensor(i, v, torch.Size([3,3])))
inputs = Variable(torch.randn(1,1,6,6))
F.conv2d(inputs, filter)
Can anyone just give me a hint how to do that?
Thanks in advance!
dymat
I know this question is outdated but I also know that there are still people looking for an answer (like myself) so here goes...
On sparse filters
If you'd like sparse convolution without the freedom to specify the sparsity pattern yourself, take a look at dilated conv (also called atrous conv). This is implemented in PyTorch and you can control the degree of sparsity by adjusting the dilation param in Conv2d.
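A minimal sketch of a dilated convolution (single-channel input, purely illustrative sizes):
import torch
import torch.nn as nn

# A 3x3 kernel with dilation=2 covers a 5x5 receptive field while touching
# only every other pixel, i.e. an implicit (fixed) sparsity pattern.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, dilation=2)

x = torch.randn(1, 1, 6, 6)
y = conv(x)
print(y.shape)  # torch.Size([1, 1, 2, 2])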
If you'd like to specify the sparsity pattern yourself, to the best of my knowledge, this feature is not currently available in PyTorch. But you may want to check this out if you are ok with using Tensorflow. There is also a blog post providing more details on this repo.
On sparse input
A list of existing and TODO sparse tensor operations is available here.
This talks about the current state of sparse tensors in PyTorch.
This lets you propose your own sparse tensor use case to the PyTorch contributors.
But at the time of this writing, I did not see conv on sparse tensors being an implemented feature or on the TODO list. nn.Linear on sparse input, however, is supported.
And if you build a sparse tensor and apply a conv layer to it, PyTorch (1.1.0) throws an exception:
>>> a = torch.zeros((1, 3, 2, 2), layout=torch.sparse_coo)
>>> net = torch.nn.Conv2d(1, 1, 1)
>>> b = net(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: sparse tensors do not have is_contiguous
>>> torch.__version__
'1.1.0'
Change to a linear layer and it works:
>>> c = torch.zeros((1, 2), layout=torch.sparse_coo)
>>> another_net = torch.nn.Linear(2, 1)
>>> d = another_net(c)
>>> d
tensor([[0.1944]], grad_fn=<AddmmBackward>)
>>> d.backward()
>>> another_net.weight.grad
tensor([[0., 0.]])
>>> another_net.bias.grad
tensor([1.])
These guys did something like a sparse conv2d: https://github.com/numenta/nupic.torch/
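As a hedged workaround for the original snippet (this does not keep the filter sparse during the convolution; it only sidesteps the sparse code path by densifying the filter first and reshaping it to conv2d's expected weight layout):
import torch
import torch.nn.functional as F

# Sparse 3x3 filter from the question.
i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
sparse_filter = torch.sparse.FloatTensor(i, v, torch.Size([3, 3]))

# Densify and reshape to the (out_channels, in_channels, kH, kW) layout.
weight = sparse_filter.to_dense().view(1, 1, 3, 3)
inputs = torch.randn(1, 1, 6, 6)
out = F.conv2d(inputs, weight)
print(out.shape)  # torch.Size([1, 1, 4, 4])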

SceneKit custom geometry .. SCNGeometrySource with 16-bit integers

Using SceneKit to build some large custom geometries, and in order to keep array sizes as small as possible, I fed SCNGeometrySource vectors whose (x, y, z) components are short (Int16) integers, as follows:
let vertexSource = SCNGeometrySource(data: dataContent!,
semantic: SCNGeometrySourceSemanticVertex, vectorCount: 4960,
floatComponents: false, componentsPerVector: 3, bytesPerComponent: 2,
dataOffset: 0, dataStride: 6)
.. however, this data was not acceptable, generating the error:
SceneKit: error, C3DMeshSourceGetValueAtIndexAsVector3 - Type not supported
Question: What forms of vector components does SceneKit support?
After writing this question I'm going to try trial and error to see what happens with triples and quads of Float32, Int32, etc. I know Float64 does work, but I don't need better than ±2^15 precision, so why waste memory? Perhaps someone can provide a definitive answer; Apple's documentation does not appear to.
