How to make a square root fit to data in Julia 1.0 (GLM package)

I have a very simple question: how would one fit a square root model to a dataset in Julia? I'm currently using the GLM package, which works very well with linear data. I need to plot phase velocity as a function of string tension, and it seems like @formula(v ~ sqrt(T)) does not work in the following:
import GLM, DataFrames  # No global namespace imports

df = DataFrames.DataFrame(
    v = [1, 1.5, 1.75],
    T = [1, 2, 3]
)

fit = GLM.glm(GLM.@formula(v ~ T^(1/2)), df)
Is GLM at all viable here, or do I need to resort to another package such as LsqFit?

You can use sqrt in your model formula. Just do it e.g. like this:
GLM.lm(GLM.@formula(v ~ sqrt(T)), df)
If you want to fit a linear model, use the lm function; the second argument should be a data frame, which is df in your case.

Related

Type hinting numpy arrays and batches

I'm trying to create a few array types for a scientific python project. So far, I have created generic types for 1D, 2D and ND numpy arrays:
from typing import Any, Generic, Protocol, Tuple, TypeVar
import numpy as np
from numpy.typing import _DType, _GenericAlias
Vector = _GenericAlias(np.ndarray, (Tuple[int], _DType))
Matrix = _GenericAlias(np.ndarray, (Tuple[int, int], _DType))
Tensor = _GenericAlias(np.ndarray, (Tuple[int, ...], _DType))
The first issue is that mypy says that Vector, Matrix and Tensor are not valid types (e.g. when I try myvar: Vector[int] = np.array([1, 2, 3]))
The second issue is that I'd like to create a generic type Batch that I'd like to use like so: Batch[Vector[complex]] should be like Matrix[complex], Batch[Matrix[float]] should be like Tensor[float], and Batch[Tensor[int]] should be like Tensor[int]. I am not sure exactly what I mean by "should be like"; I guess I mean that mypy should not complain.
How do I go about this?
You should not be using protected members (names starting with an underscore) from the outside. They are typically marked this way to indicate implementation details that may change in the future, which is exactly what happened here between versions of numpy. For example, in numpy 1.24 your import line from numpy.typing fails at runtime because the members you try to import are no longer there.
There is no need to use internal alias constructors because numpy.ndarray is already generic in terms of the array shape and its dtype. You can construct your own type aliases fairly easily. You just need to ensure you parameterize the dtype correctly. Here is a working example:
from typing import Tuple, TypeVar
import numpy as np
T = TypeVar("T", bound=np.generic, covariant=True)
Vector = np.ndarray[Tuple[int], np.dtype[T]]
Matrix = np.ndarray[Tuple[int, int], np.dtype[T]]
Tensor = np.ndarray[Tuple[int, ...], np.dtype[T]]
Usage:
def f(v: Vector[np.complex64]) -> None:
    print(v[0])

def g(m: Matrix[np.float_]) -> None:
    print(m[0])

def h(t: Tensor[np.int32]) -> None:
    print(t.reshape((1, 4)))

f(np.array([0j+1]))                    # prints (1+0j)
g(np.array([[3.14, 0.], [1., -1.]]))   # prints [3.14 0. ]
h(np.array([[3.14, 0.], [1., -1.]]))   # prints [[ 3.14 0. 1. -1. ]]
The issue currently is that shapes have almost no typing support, but work is underway to implement that using the new TypeVarTuple capabilities provided by PEP 646. Until then, there is little practical use in discriminating the types by shape.
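For reference, a minimal sketch of the PEP 646 syntax being referred to, assuming Python 3.11+ for TypeVarTuple/Unpack; the ShapedArray class here is a hypothetical stand-in, since numpy's ndarray did not accept per-axis type parameters like this at the time of writing:
from typing import Generic, TypeVarTuple, Unpack

Shape = TypeVarTuple("Shape")

class ShapedArray(Generic[Unpack[Shape]]):
    """Hypothetical array type parameterized by its shape."""

def add_batch_dim(a: ShapedArray[int, int]) -> ShapedArray[int, int, int]:
    """Illustrative signature only: a 2-D array goes in, a 3-D array comes out."""
    ...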
The batch issue should be a separate question. Try and ask one question at a time.

TensorFlow - Cannot get the shape of matrix with the get_shape command

I can't seem to get the shape of the tensor when I do
get_shape().as_list()
Here is the code I have written:
matrix1 = tf.placeholder(tf.int32)
matrix2 = tf.placeholder(tf.int32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    a = sess.run(matrix1, {matrix1: [[1,2,3],[4,5,6],[7,8,9]]})
    b = sess.run(matrix2, {matrix2: [[10,11,12],[13,14,15],[16,17,18]]})
    print(a.get_shape().as_list())  # ERROR
I get the following error:
AttributeError: 'numpy.ndarray' object has no attribute 'get_shape'
I want to know the shape of the matrix so that I can take in an arbitrary matrix and loop through its rows and columns.
Just summarizing the discussion in the comments with a few notes.
Both matrix1 and a are multidimensional arrays, but there is a difference:
matrix1 is an instance of tf.Tensor, which supports two ways to access the shape: matrix1.shape attribute and matrix1.get_shape() method.
The result of tf.Tensor evaluation, a, is a numpy ndarray, which has just a.shape attribute.
Historically, tf.Tensor had only the get_shape() method; shape was added later to make it similar to numpy. One more note: in tensorflow, a tensor's shape can be dynamic (like in your example), in which case neither get_shape() nor shape will return concrete numbers. In that case, one can use the tf.shape function to access it at runtime (here's an example of when it might be useful).
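A minimal sketch of the difference, assuming TF 1.x as in the question (the placeholder shapes here are made up for illustration):
import tensorflow as tf

x = tf.placeholder(tf.int32, shape=(3, 3))  # static shape, known when the graph is built
y = tf.placeholder(tf.int32)                # fully dynamic shape

print(x.shape)                  # (3, 3) -- same information as x.get_shape()
print(x.get_shape().as_list())  # [3, 3]

dynamic_shape = tf.shape(y)     # an op that evaluates to the shape at run time

with tf.Session() as sess:
    a = sess.run(x, {x: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]})
    print(a.shape)  # (3, 3) -- a is a numpy array, so use .shape, not get_shape()
    print(sess.run(dynamic_shape, {y: [[10, 11], [12, 13]]}))  # [2 2]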

PyTorch 2d Convolution with sparse filters

I am trying to perform a spatial convolution (e.g. on an image) in pytorch on dense input using a sparse filter matrix.
Sparse Tensors are implemented in PyTorch. I tried to use a sparse Tensor, but it ends up with a segmentation fault.
import torch
from torch.autograd import Variable
from torch.nn import functional as F
# build sparse filter matrix
i = torch.LongTensor([[0, 1, 1],[2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
filter = Variable(torch.sparse.FloatTensor(i, v, torch.Size([3,3])))
inputs = Variable(torch.randn(1,1,6,6))
F.conv2d(inputs, filter)
Can anyone just give me a hint how to do that?
Thanks in advance!
dymat
I know this question is outdated but I also know that there are still people looking for an answer (like myself) so here goes...
On sparse filters
If you'd like sparse convolution without the freedom to specify the sparsity pattern yourself, take a look at dilated conv (also called atrous conv). This is implemented in PyTorch and you can control the degree of sparsity by adjusting the dilation param in Conv2d.
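For example, a minimal sketch of what adjusting dilation looks like (the layer sizes here are made up for illustration):
import torch
import torch.nn as nn

# Dilated ("atrous") convolution: dilation=2 spaces the 3x3 kernel taps apart,
# giving a fixed, regular sparsity pattern over an effective 5x5 receptive field.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, dilation=2)

x = torch.randn(1, 1, 8, 8)  # (batch, channels, height, width)
y = conv(x)
print(y.shape)               # torch.Size([1, 1, 4, 4])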
If you'd like to specify the sparsity pattern yourself, to the best of my knowledge, this feature is not currently available in PyTorch. But you may want to check this out if you are ok with using Tensorflow. There is also a blog post providing more details on this repo.
On sparse input
A list of existing and TODO sparse tensor operations is available here.
This talks about the current state of sparse tensors in PyTorch.
This lets you propose your own sparse tensor use case to the PyTorch contributors.
But at the time of this writing, I did not see conv on sparse tensors being an implemented feature or on the TODO list. nn.Linear on sparse input, however, is supported.
And if you build a sparse tensor and apply a conv layer to it, PyTorch (1.1.0) throws an exception:
>>> a = torch.zeros((1, 3, 2, 2), layout=torch.sparse_coo)
>>> net = torch.nn.Conv2d(1, 1, 1)
>>> b = net(a)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: sparse tensors do not have is_contiguous
>>> torch.__version__
'1.1.0'
Changing to a linear layer, however, works:
>>> c = torch.zeros((1, 2), layout=torch.sparse_coo)
>>> another_net = torch.nn.Linear(2, 1)
>>> d = another_net(c)
>>> d
tensor([[0.1944]], grad_fn=<AddmmBackward>)
>>> d.backward()
>>> another_net.weight.grad
tensor([[0., 0.]])
>>> another_net.bias.grad
tensor([1.])
These guys did something like a sparse conv2d: https://github.com/numenta/nupic.torch/

Pure Python - Py3 Matrix with Array or list And Image

I have studied the array module in Python 3, as well as struct, and I use them to make matrices (without using NumPy). The reason for not using NumPy or PIL is that I want to port my code to a tablet device (a mobile device that supports Python, for example via QPython). Since my PC is broken, solutions like this are what I and other people programming on a tablet use (it's a question of price/money), and I use them to construct my modules.
Well, I don't know whether array or list has better performance; at the moment I use a list-based implementation to create a simple matrix and a matrix for a PPM image.
>>> arr = lambda m, n: [[0 for j in range(n)] for i in range(m)]
>>> m = arr(2, 3)
>>> m
[[0, 0, 0], [0, 0, 0]]
image = lambda m, n: [[{'r': 0, 'g': 0, 'b': 0} for j in range(n)]
                      for i in range(m)]

Theano: Restoring broadcastable settings after dense -> sparse -> dense transformation

Background: I'm working on a project that historically has relied on sparse matrices for a lot of the math, and I'm developing a plugin to outsource some of the heavy lifting to theano. Since theano's sparse support is limited, we're building a dense version first -- but hopefully that explains why we're interested in the approach below.
The task: apply some operator to only the nonzero values of a matrix.
The following subroutine works most of the time:
import theano.sparse as TS
import theano.sparse.basic as TSB

def _applyOpToNonzerosOfDense(self, op, expr):
    # convert to CSR, apply op to the nonzero data only, then convert back to dense
    sparseExpr = TSB.clean(TSB.csr_from_dense(expr))
    newData = op(TSB.csm_data(sparseExpr)).flatten()
    newSparse = TS.CSR(newData,
                       TSB.csm_indices(sparseExpr),
                       TSB.csm_indptr(sparseExpr),
                       TSB.csm_shape(sparseExpr))
    ret = TSB.dense_from_sparse(newSparse)
    return ret
The problem comes when expr is not a canonical matrix tensor, but a row tensor (so, expr is 1xN and expr.broadcastable is (True, False)). When that happens, we need to be able to retain or restore the broadcast status in the returned tensor.
Some things I've tried that don't work:
dense_from_sparse doesn't support broadcastable settings
Theano 0.9 doesn't support assignment to ret.broadcastable
ret.dimshuffle( ('x',1) ) fails with "You cannot drop a non-broadcastable dimension."
ret has (ought to have) exactly the same shape as expr, so I wasn't expecting this to be hard. How do I get my broadcast settings back?
LOL, it's in the API: T.addbroadcast(x,*axes)
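A minimal sketch of how that applies here, assuming theano.tensor is imported as T and a 1xN row tensor like the one described in the question:
import theano.tensor as T
import theano.sparse.basic as TSB

expr = T.row('expr')             # 1xN input, broadcastable == (True, False)
ret = TSB.dense_from_sparse(TSB.csr_from_dense(expr))
print(ret.broadcastable)         # (False, False) -- the flag was lost in the round trip
ret = T.addbroadcast(ret, 0)     # re-assert that axis 0 is broadcastable
print(ret.broadcastable)         # (True, False)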
