PyTorch 2d Convolution with sparse filters

I am trying to perform a spatial convolution (e.g. on an image) in pytorch on dense input using a sparse filter matrix.
Sparse Tensors are implemented in PyTorch. I tried to use a sparse Tensor, but it ends up with a segmentation fault.
import torch
from torch.autograd import Variable
from torch.nn import functional as F
# build sparse filter matrix
i = torch.LongTensor([[0, 1, 1],[2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
filter = Variable(torch.sparse.FloatTensor(i, v, torch.Size([3,3])))
inputs = Variable(torch.randn(1,1,6,6))
F.conv2d(inputs, filter)
Can anyone just give me a hint how to do that?
Thanks in advance!

I know this question is outdated but I also know that there are still people looking for an answer (like myself) so here goes...
On sparse filters
If you'd like sparse convolution without the freedom to specify the sparsity pattern yourself, take a look at dilated conv (also called atrous conv). This is implemented in PyTorch and you can control the degree of sparsity by adjusting the dilation param in Conv2d.
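For example, a minimal sketch of a dilated convolution (kernel size and dilation are chosen purely for illustration):
import torch
import torch.nn as nn

# A 3x3 kernel with dilation=2 covers a 5x5 receptive field while keeping
# only 9 non-zero taps, i.e. a fixed "sparse" sampling pattern.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, dilation=2)
out = conv(torch.randn(1, 1, 6, 6))
print(out.shape)  # torch.Size([1, 1, 2, 2])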
If you'd like to specify the sparsity pattern yourself, to the best of my knowledge, this feature is not currently available in PyTorch. But you may want to check this out if you are ok with using Tensorflow. There is also a blog post providing more details on this repo.
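That said, one workaround within PyTorch itself is to keep a dense Conv2d weight and zero it out with a fixed binary mask on every forward pass (similar in spirit to the nupic.torch repo mentioned in another answer below). A hedged sketch, where the mask pattern is purely illustrative:
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Conv2d whose weights are forced to a fixed sparsity pattern."""
    def __init__(self, *args, mask=None, **kwargs):
        super().__init__(*args, **kwargs)
        # mask has the same shape as self.weight; 1 keeps a weight, 0 prunes it.
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

# Illustrative 3x3 mask with only three non-zero positions.
mask = torch.zeros(1, 1, 3, 3)
mask[0, 0, 0, 2] = mask[0, 0, 1, 0] = mask[0, 0, 1, 2] = 1.0
conv = MaskedConv2d(1, 1, kernel_size=3, mask=mask)
out = conv(torch.randn(1, 1, 6, 6))  # shape (1, 1, 4, 4)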
On sparse input
A list of existing and TODO sparse tensor operations is available here.
This talks about the current state of sparse tensors in PyTorch.
This lets you propose your own sparse tensor use case to the PyTorch contributors.
But at the time of this writing, I did not see conv on sparse tensors listed as an implemented feature or on the TODO list. nn.Linear on sparse input, however, is supported.
And if you build a sparse tensor and apply a conv layer to it, PyTorch (1.1.0) throws an exception:
>>> a = torch.zeros((1, 3, 2, 2), layout=torch.sparse_coo)
>>> net = torch.nn.Conv2d(1, 1, 1)
>>> b = net(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: sparse tensors do not have is_contiguous
>>> torch.__version__
'1.1.0'
Switching to a linear layer, however, works:
>>> c = torch.zeros((1, 2), layout=torch.sparse_coo)
>>> another_net = torch.nn.Linear(2, 1)
>>> d = another_net(c)
>>> d
tensor([[0.1944]], grad_fn=<AddmmBackward>)
>>> d.backward()
>>> another_net.weight.grad
tensor([[0., 0.]])
>>> another_net.bias.grad
tensor([1.])
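Coming back to the snippet in the question: if the goal is only to convolve dense input with filter values that happen to be stored sparsely, a simple workaround is to densify the filter before calling F.conv2d. A minimal sketch using the question's values (the (1, 1, 3, 3) reshape is an assumption about the intended filter layout):
import torch
from torch.nn import functional as F

# Same sparse 3x3 filter as in the question, built with the current COO API.
i = torch.tensor([[0, 1, 1], [2, 0, 2]])
v = torch.tensor([3.0, 4.0, 5.0])
sparse_filter = torch.sparse_coo_tensor(i, v, (3, 3))

# conv2d expects a dense weight of shape (out_channels, in_channels, kH, kW).
weight = sparse_filter.to_dense().view(1, 1, 3, 3)

inputs = torch.randn(1, 1, 6, 6)
out = F.conv2d(inputs, weight)
print(out.shape)  # torch.Size([1, 1, 4, 4])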

These guys did something like a sparse conv2d: https://github.com/numenta/nupic.torch/

Related

Type hinting numpy arrays and batches

I'm trying to create a few array types for a scientific python project. So far, I have created generic types for 1D, 2D and ND numpy arrays:
from typing import Any, Generic, Protocol, Tuple, TypeVar
import numpy as np
from numpy.typing import _DType, _GenericAlias
Vector = _GenericAlias(np.ndarray, (Tuple[int], _DType))
Matrix = _GenericAlias(np.ndarray, (Tuple[int, int], _DType))
Tensor = _GenericAlias(np.ndarray, (Tuple[int, ...], _DType))
The first issue is that mypy says that Vector, Matrix and Tensor are not valid types (e.g. when I try myvar: Vector[int] = np.array([1, 2, 3]))
The second issue is that I'd like to create a generic type Batch that I'd like to use like so: Batch[Vector[complex]] should be like Matrix[complex], Batch[Matrix[float]] should be like Tensor[float], and Batch[Tensor[int]] should be like Tensor[int]. I am not sure exactly what I mean by "should be like"; I guess I mean that mypy should not complain.
How do I go about this?
You should not be using protected members (names starting with an underscore) from the outside. They are typically marked this way to indicate implementation details that may change in the future, which is exactly what happened here between versions of numpy. For example, in 1.24 your import line from numpy.typing fails at runtime because the members you try to import are no longer there.
There is no need to use internal alias constructors because numpy.ndarray is already generic in terms of the array shape and its dtype. You can construct your own type aliases fairly easily. You just need to ensure you parameterize the dtype correctly. Here is a working example:
from typing import Tuple, TypeVar
import numpy as np
T = TypeVar("T", bound=np.generic, covariant=True)
Vector = np.ndarray[Tuple[int], np.dtype[T]]
Matrix = np.ndarray[Tuple[int, int], np.dtype[T]]
Tensor = np.ndarray[Tuple[int, ...], np.dtype[T]]
Usage:
def f(v: Vector[np.complex64]) -> None:
    print(v[0])

def g(m: Matrix[np.float_]) -> None:
    print(m[0])

def h(t: Tensor[np.int32]) -> None:
    print(t.reshape((1, 4)))

f(np.array([0j+1]))                   # prints (1+0j)
g(np.array([[3.14, 0.], [1., -1.]]))  # prints [3.14 0. ]
h(np.array([[3.14, 0.], [1., -1.]]))  # prints [[ 3.14 0. 1. -1. ]]
The issue currently is that shapes have almost no typing support, but work is underway to implement that using the new TypeVarTuple capabilities provided by PEP 646. Until then, there is little practical use in discriminating the types by shape.
The batch issue should be a separate question. Try and ask one question at a time.

Scalar multiplication of tensors

In TensorFlow Core for Python there is an operation called tf.math.scalar_mul.
I would like to scale tensors in TensorFlow.js. When trying, for instance, a * 0.1, I get an error message (at least from TypeScript): The left-hand side of an arithmetic operation must be of type 'any', 'number', 'bigint' or an enum type. ts(2362).
Is it possible to scale tensors without converting them to arrays, scaling elementwise, and then converting back to tensors?
Though tf.scalar can be used, one can also use tensor.mul(number) directly, like the following:
tf.tensor([1, 2, 3, 4]).mul(5).print(); // [5, 10, 15, 20]
I found the answer in the API documentation. For multiplying a tensor a by 5, just use a.mul(tf.scalar(5)).

How do I implement a controlled Rx in Cirq/Tensorflow Quantum?

I am trying to implement a controlled rotation gate in Cirq/Tensorflow Quantum.
The readthedocs.io at https://cirq.readthedocs.io/en/stable/gates.html states:
"Gates can be converted to a controlled version by using Gate.controlled(). In general, this returns an instance of a ControlledGate. However, for certain special cases where the controlled version of the gate is also a known gate, this returns the instance of that gate. For instance, cirq.X.controlled() returns a cirq.CNOT gate. Operations have similar functionality Operation.controlled_by(), such as cirq.X(q0).controlled_by(q1)."
I have implemented
cirq.rx(theta_0).on(q[0]).controlled_by(q[3])
I get the following error:
~/.local/lib/python3.6/site-packages/cirq/google/serializable_gate_set.py in
serialize_op(self, op, msg, arg_function_language)
193 return proto_msg
194 raise ValueError('Cannot serialize op {!r} of type {}'.format(
--> 195 gate_op, gate_type))
196
197 def deserialize_dict(self,
ValueError: Cannot serialize op cirq.ControlledOperation(controls=(cirq.GridQubit(0, 3),), sub_operation=cirq.rx(sympy.Symbol('theta_0')).on(cirq.GridQubit(0, 0)), control_values=((1,),)) of type <class 'cirq.ops.controlled_gate.ControlledGate'>
I have the qubits and symbols initialized as:
q = cirq.GridQubit.rect(1, 4)
symbol_names = x_0, x_1, x_2, x_3, theta_0, theta_1, z_2, z_3
I do re-use these symbols with various circuits.
My question: How do I properly implement a controlled Rx in Cirq/Tensorflow Quantum?
P.S. I can't find a tag for Google Cirq
Follow up:
How does this generalize to the similar situations of Controlled Ry and controlled Rz?
For Rz I found a gate decomposition at https://threeplusone.com/pubs/on_gates.pdf, involving H.on(q1), CNOT(q0, q1), H.on(q2), but this is not yet a CRz with an arbitrary angle. Would I introduce the angle before the H?
For Ry, I did not find a decomposition yet, nor for CRy.
What you have is a completely correct implementation of a controlled X rotation in Cirq. It can be used in simulation and other things like cirq.unitary without any issues.
TFQ only supports a subset of gates in Cirq. For example, a cirq.ControlledGate can have an arbitrary number of control qubits, which in some cases can make it harder to decompose down to primitive gates that are compatible with NISQ hardware platforms (this is why cirq.decompose doesn't do anything to ControlledOperations). TFQ only supports these primitive-style gates; for a full list of the supported gates, you can do:
tfq.util.get_supported_gates().keys()
In your case it is possible to come up with a simpler implementation of this gate. First we can note that cirq.rx(some angle) is equal to cirq.X**(some angle / pi) up to a global phase:
>>> a = cirq.rx(0.3)
>>> b = cirq.X**(0.3 / np.pi)
>>> cirq.equal_up_to_global_phase(cirq.unitary(a), cirq.unitary(b))
True
Let's move to using X now. Then the operation we are after is:
>>> qs = cirq.GridQubit.rect(1,2)
>>> a = (cirq.X**0.3)(qs[0]).controlled_by(qs[1])
>>> b = cirq.CNOT(qs[0], qs[1]) ** 0.3
>>> cirq.equal_up_to_global_phase(cirq.unitary(a), cirq.unitary(b))
True
Since cirq.CNOT is in the TFQ supported gates it should be serializable without any issues. If you want to make a symbolized version of the gate you can just replace the 0.3 with a sympy.Symbol.
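A hedged sketch of that symbolized version (the symbol name and qubit layout follow the question; whether TFQ serializes the theta_0 / pi expression should be checked against your TFQ version):
import numpy as np
import sympy
import cirq

theta_0 = sympy.Symbol('theta_0')
qs = cirq.GridQubit.rect(1, 2)

# CNOT raised to a symbolic power: equal, up to a global phase, to
# rx(theta_0) on qs[0] controlled by qs[1], as discussed above.
op = cirq.CNOT(qs[1], qs[0]) ** (theta_0 / np.pi)
circuit = cirq.Circuit(op)
print(circuit)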
Answer to follow up: If you want to do a CRz you can do the same thing you did above, swapping out the CNOT gate for the CZ gate. For CRy it's not as easy. For that I would recommend doing some combination of: cirq.Y(0) and cirq.YY(0, 1).
Edit: tfq-nightly builds and likely releases after 0.4.0 now include support for arbitrary controlled gates. So on these versions of tfq you could also do things like cirq.Y(...).controlled_by(...) to achieve the desired result now too.

How to make a square root fit to data in Julia 1.0

I have a very simple question. How would one fit a square root model to a dataset in Julia? I'm currently using the GLM package, which works very well with linear data. I need to plot a phase velocity as a function of string tension, and it seems like @formula(v ~ sqrt(T)) does not work in
import GLM, DataFrames # No global namespace imports
df = DataFrames.DataFrame(
v = [1, 1.5, 1.75],
T = [1, 2, 3]
)
fit = GLM.glm(GLM.@formula(v ~ T^(1/2)), df)
Is GLM at all viable here, or do I need to resort to another package such as LsqFit?
You can use sqrt in your model formula. Just do it e.g. like this:
GLM.lm(GLM.@formula(v ~ sqrt(T)), df)
If you want to fit a linear model, use the lm function; the second argument should be a data frame, which is df in your case.

numpy array has different shape when I pass it as input to a keras layer

I have a keras encoder (part of an autoencoder) built this way:
input_vec = Input(shape=(200,))
encoded = Dense(20, activation='relu')(input_vec)
encoder = Model(input_vec, encoded)
I want to generate a dummy input using numpy.
>>> np.random.rand(200).shape
(200,)
But if i try to pass it as input to the encoder I get a ValueError:
>>> encoder.predict(np.random.rand(200))
>>> Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/francesco/PycharmProjects/W2VAutoencoded/venv/lib/python3.6/site-packages/keras/engine/training.py", line 1817, in predict
check_batch_axis=False)
File "/home/francesco/PycharmProjects/W2VAutoencoded/venv/lib/python3.6/site-packages/keras/engine/training.py", line 123, in _standardize_input_data
str(data_shape))
ValueError: Error when checking : expected input_1 to have shape (200,) but got array with shape (1,)
What am I missing?
While Keras Layers (Input, Dense, etc.) take as parameters the shape(s) for a single sample, Model.predict() takes as input batched data (i.e. samples stacked over the 1st dimension).
Right now your model believes you are passing it a batch of 200 samples of shape (1,).
This would work:
batch_size = 1
encoder.predict(np.random.rand(batch_size, 200))
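Equivalently, an existing 1-D sample can be given a leading batch axis, for example with np.expand_dims (a small sketch reusing the encoder defined above):
import numpy as np

sample = np.random.rand(200)              # shape (200,), a single sample
batched = np.expand_dims(sample, axis=0)  # shape (1, 200), a batch of one
prediction = encoder.predict(batched)     # shape (1, 20)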
