I feel like this is a pretty basic question but I can't seem to get my head around it. I have a velocity vector V with two components in x and in y that both depend on time: v_x(t) = sin(at) and v_y(t) = exp(bt).
I have created an array for t with np.arange(0,100,1) (values 0 through 99). I want to plot the resulting vector and its evolution with respect to t with matplotlib. How do I do that?
A simple way that you might try is the following:
import matplotlib.pyplot as plt
import numpy as np
t = np.arange(0,100,1)
a = 0.1
b = 0.05
vel = np.array([np.sin(a*t), np.exp(b*t)],float)
plt.plot(vel[0,:],vel[1,:])
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.show()
This gave me a plot of the trajectory (v_y against v_x).
The line vel = np.array([np.sin(a*t), np.exp(b*t)],float) basically does all the magic. np.sin(a*t) makes a new array using each value in t to calculate each element (and np.exp() works similarly).
It would also be possible (and fun) to make an animation of the evolution of the vector.
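For instance, a minimal sketch with matplotlib's FuncAnimation (the frame count and the 50 ms interval are arbitrary choices):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
t = np.arange(0, 100, 1)
a, b = 0.1, 0.05
vel = np.array([np.sin(a*t), np.exp(b*t)], float)
fig, ax = plt.subplots()
ax.set_xlim(vel[0].min(), vel[0].max())
ax.set_ylim(vel[1].min(), vel[1].max())
line, = ax.plot([], [], '.-')
def update(i):
    # Redraw the trajectory up to time step i.
    line.set_data(vel[0, :i+1], vel[1, :i+1])
    return line,
anim = FuncAnimation(fig, update, frames=len(t), interval=50, blit=True)
plt.show()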
I have a csv-file with 140,000 points (rows). It consists of:
longitude value
latitude value
subsidence value at specific points
I assume that these points are spatially correlated.
I want to perform a spatial interpolation analysis of the area covered by the points. Meaning, I will do a geostatistical interpolation analysis using, for example, kriging, i.e. Gaussian process regression.
I'm reading the scikit-learn page about Gaussian process regression, but I'm unsure how to implement it.
What characteristics determine which kernel I can use? How do I implement this with my spatial data correctly?
First, you should convert your data to a projected coordinate system. The best one depends on where your data are located; essentially you want the conformal projection with the least amount of distortion for your location (e.g. Mercator near the equator, or Transverse Mercator if your data are all close to a single meridian). You can do this with geopandas, for example:
import pandas as pd
import geopandas as gpd
data = {'latitude': [54, 56, 58], 'longitude': [-62, -63, -64], 'subsidence': [10, 20, 30]}
df = pd.DataFrame(data)
params = {
'geometry': gpd.points_from_xy(df.longitude, df.latitude),
'crs': 'epsg:4326', # WGS84
}
gdf_ = gpd.GeoDataFrame(df, **params)
gdf = gdf_.to_crs('epsg:2961') # UTM20N
gdf
This GeoDataFrame is now in projected coordinates. Now you can do some spatial prediction:
import numpy as np
from sklearn.gaussian_process.kernels import RBF
from sklearn.gaussian_process import GaussianProcessRegressor
kernel = RBF(length_scale=100_000)
gpr = GaussianProcessRegressor(kernel=kernel)
X = np.array([gdf.geometry.x, gdf.geometry.y]).T
y = gdf.subsidence
gpr.fit(X, y)
Now you can predict at a location, e.g. gpr.predict([(500_000, 5_900_000)]) gives array([22.86764555]) for my toy data.
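Since uncertainty is half the point of kriging, note that you can also ask for the posterior standard deviation at the same query point (same toy coordinates as above):
mean, std = gpr.predict([(500_000, 5_900_000)], return_std=True)
print(mean, std)  # posterior mean and standard deviation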
To predict on a grid, you could do this:
x_min, x_max = np.min(gdf.geometry.x) - 10_000, np.max(gdf.geometry.x) + 10_000
y_min, y_max = np.min(gdf.geometry.y) - 10_000, np.max(gdf.geometry.y) + 10_000
grid_y, grid_x = np.mgrid[y_min:y_max:10_000, x_min:x_max:10_000]
X_grid = np.stack([grid_x.ravel(), grid_y.ravel()]).T
y_grid = gpr.predict(X_grid).reshape(grid_x.shape)
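To inspect the result, a quick plot of the predicted surface (a sketch, assuming matplotlib; the labels are arbitrary):
import matplotlib.pyplot as plt
# origin='lower' because np.mgrid generated y ascending from y_min.
plt.imshow(y_grid, origin='lower', extent=(x_min, x_max, y_min, y_max))
plt.colorbar(label='predicted subsidence')
plt.show()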
Things to think about:
You should read the docs for geopandas and sklearn.gaussian_process.
You should fit the kernel to your data (see the sketch after this list).
You might want to use an anisotropic kernel.
The estimator has a few hyperparameters which you should pay attention to.
Don't forget to do some validation of your estimates, check the distribution of the residuals, etc.
You might want to use a specialist geostats package like gstools, which will do a lot of the fiddly things for you.
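On fitting the kernel: a minimal sketch of an anisotropic kernel with a noise term, where fit() tunes the length scales by maximizing the log marginal likelihood (the initial values and bounds here are illustrative assumptions):
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
# One length scale per coordinate (anisotropy); both are optimized during
# fit(), along with the noise level, by maximizing the marginal likelihood.
kernel = RBF(length_scale=[50_000, 50_000], length_scale_bounds=(1_000, 1_000_000)) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
gpr.fit(X, y)
print(gpr.kernel_)  # the optimized kernel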
This is probably a trivial question, but I want to select a portion of a complex array in order to plot it in Matlab. My MWE is
n = 100;
t = linspace(-1,1,n);
x = rand(n,1)+1j*rand(n,1);
plot(t(45):t(55),real(x(45):x(55)),'.--')
plot(t(45):t(55),imag(x(45):x(55)),'.--')
I get an error
Error using plot
Vectors must be the same length.
because the real(x(45):x(55)) bit returns an empty matrix: Empty matrix: 1-by-0. What is the easiest way to fix this problem without creating new vectors for the real and imaginary x?
It was just a simple mistake. You were doing t(45):t(55), but t is generated by linspace, so t(45) is about -0.11 and t(55) about 0.09; with the default step of 1, -0.11:0.09 contains only the single value -0.11. See the problem?
Then when you did it for x, whose entries are random, the colon expression (which uses the real parts of its complex endpoints) happened to produce an empty range, and thus the error.
What you want is t(45:55), to specify the vector positions from 45 to 55.
This is what you want:
n = 100;
t = linspace(-1,1,n);
x = rand(n,1)+1j*rand(n,1);
plot(t(45:55),real(x(45:55)),'.--')
hold on  % keep the first curve when plotting the second
plot(t(45:55),imag(x(45:55)),'.--')
I have 3 graphs of an IV curve (a monotonically increasing function; think of a positive quadratic in the first quadrant. Photo attached.) at 3 different temperatures that are not evenly spaced: one is obtained at 25C, one at 125C, and one at 150C.
What I want to make is an interpolated 2D array to fill in the other temperatures. My current method to build a meshgrid-type array is as follows:
H = 5;
W = 6;
[Wmat,Hmat] = meshgrid(1:W,1:H);
X = [1:W; 1:W];
Y = [ones(1,W); H*ones(1,W)];
Z = [vecsatIE25; vecsatIE125];
img = griddata(X,Y,Z,Wmat,Hmat,'linear')
This works to build a 5x6 (H-by-W) array, from which I can then index one row and interpolate along that 1D array.
This is really not what I want to do.
For example, the rows are temperatures: 25C, 50C, 75C, 100C, 125C and 150C. So I must select a row of, say, 50C when my temperature is actually 57.5C. Then I can interpolate my I to get my V output. So again for example, my I is 113.2A, and I can actually interpolate a value and get a V for 113.2A.
When I take the attached photo and digitize the plot information, I get an array of points. So my goal is to input any temperature and any current and get a voltage by interpolation. The type of interpolation is not as important, so long as it produces reasonable values; I do not want nearest-neighbour interpolation, but linear or something similar is preferred. If it is an option, I will try different kinds of interpolation later (cubic, linear).
I am not sure how best to accomplish this. The meshgrid array does not need to exist; I simply need the one value.
Thank you.
If I understand the question properly, I think what you're looking for is interp2:
Vq = interp2(X,Y,V,Xq,Yq) where Vq is the V you want, Xq and Yq are the temperature and current, and X, Y, and V are the input arrays for temperature, current, and voltage.
As an option, you can set the method argument to 'linear', 'nearest', 'cubic', 'makima', or 'spline'.
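For instance, a minimal sketch with made-up data (the grid values and the quadratic IV shapes are assumptions, just to show the call):
temps = [25 125 150];                    % one column per temperature (degC)
current = linspace(0, 200, 50)';         % one row per current sample (A)
V = 1e-4 * (current.^2) * [1.0 1.2 1.3]; % toy monotonic IV curves, 50x3
[Tgrid, Igrid] = meshgrid(temps, current);
% Voltage at 57.5 degC and 113.2 A, linearly interpolated:
Vq = interp2(Tgrid, Igrid, V, 57.5, 113.2, 'linear')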
How do I create a numpy array with a symmetric range of indices?
I tried:
np.zeros(-100:100,-100:100)
expecting an array with index -100 to +100.
NumPy doesn't have support for this; indexing always starts at zero. You could try writing your own subclass of ndarray, but you'd have a lot of awkward design decisions to make; for example, if you have an array with indices from -100 to 100, where do the indices of array[1:] start and end? And how do you broadcast operations across arrays with compatible shapes, but different indices? What would the bounds be of the result of something like dot?
After some searching I found this. I really didn't need a symmetrical array index. What I wanted was a way to simply specify a circular aperture without an x,y loop. I really don't understand what this ogrid thingy does, but it works.
Cheers,
Gert
import numpy as np
import matplotlib.pyplot as plt
r = 800   # radius of the aperture, in pixels
s = 1000  # half-width of the grid, in pixels
# Open grids: y is a column vector of offsets, x a row vector; broadcasting
# x*x + y*y produces the full 2D array of squared distances from the centre.
y, x = np.ogrid[-s:s+1, -s:s+1]
mask = x*x + y*y <= r*r
aperture = np.ones((2*s+1, 2*s+1))
aperture[mask] = 0  # zero out the circular aperture
plt.imshow(aperture)
plt.show()
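For what it's worth, np.ogrid just returns "open" grids: a column vector of y offsets and a row vector of x offsets, which broadcasting expands to a full 2D grid only when they are combined. A tiny example:
import numpy as np
y, x = np.ogrid[-2:3, -2:3]
print(y.shape, x.shape)  # (5, 1) (1, 5)
# Broadcasting x*x + y*y gives the full (5, 5) grid of squared distances
# from the centre, with no explicit loop over x and y.
print(x*x + y*y)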
I am doing a tutorial (code here) and video here (13:00 minutes in).
My only changes are loading the MNIST training set from a different location and creating the one-hot encoding myself, but it is not working. I literally copy-pasted all the other code (except the MNIST loading) from this example. Here is the code:
import theano
from theano import tensor as T
import numpy as np
from numpy import newaxis
from sklearn.datasets import fetch_mldata
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
mnist = fetch_mldata("MNIST Original")
trX, teX, trY_digit, teY_digit = train_test_split(mnist.data, mnist.target, test_size=.4)
# Get one-hot encoding. sparse_to_floatX is a small helper (not shown here)
# that densifies the sparse matrix and casts it to theano.config.floatX.
enc = OneHotEncoder()
enc.fit([[n] for n in range(10)])
trY, teY = sparse_to_floatX(enc.transform(trY_digit[:,newaxis])), sparse_to_floatX(enc.transform(teY_digit[:,newaxis]))
def floatX(X):
    return np.asarray(X, dtype=theano.config.floatX)
def init_weights(shape):
    return theano.shared(floatX(np.random.randn(*shape) * 0.1))
def model(X, w):
    return T.nnet.softmax(T.dot(X, w))
X = T.fmatrix()
Y = T.fmatrix()
w = init_weights((784, 10))
py_x = model(X, w)
y_pred = T.argmax(py_x, axis=1)
cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
gradient = T.grad(cost=cost, wrt=w)
update = [[w, w - gradient * 0.05]]
train = theano.function(inputs=[X, Y], outputs=cost, updates=update, allow_input_downcast=True)
predict = theano.function(inputs=[X], outputs=y_pred, allow_input_downcast=True)
for i in range(10):
    print w.get_value()
    cost = train(trX, trY)
    print i, predict(teX)
The weight vector updates once, and becomes all NaN on the second update. I am very new to theano, but I am looking for tips to figure this out, especially if someone has already done this tutorial.
UPDATE.
It looks like the gradient is the issue.
When I add this:
the_grad = T.sum(gradient)
f_grad = theano.function(inputs=[X, Y], outputs=the_grad, allow_input_downcast=True)
print f_grad(trX, trY)
It prints NaN. This appears to be the correct usage of T.grad though.
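One guard worth trying, assuming the NaN comes from log(0) when a softmax output saturates to exactly 0 or 1, is clipping the predictions before the cross-entropy (T.clip is Theano's elementwise clamp):
eps = 1e-7
# Keep softmax outputs strictly inside (0, 1) so the log stays finite.
py_x_safe = T.clip(py_x, eps, 1 - eps)
cost = T.mean(T.nnet.categorical_crossentropy(py_x_safe, Y))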
UPDATE 2.
When I change the cost function to this:
cost = T.mean(T.sum(T.sqr(py_x - Y), axis=1), axis=0)
It works now, but I only get 70% accuracy, which is really bad.
UPDATE 3.
I downloaded the MNIST data used in the tutorial and it worked with 92% accuracy.
I am not sure why my first MNIST data source was dying with the cross-entropy cost, and then performing really poorly with the mean squared error cost function.
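One difference between the two sources that might explain it (an assumption on my part, not verified here): fetch_mldata returns raw pixel values in 0-255, while the tutorial's loader scales MNIST to [0, 1], and unscaled inputs can saturate the softmax. Scaling before training would look like:
# Assumed fix: normalize pixel intensities to [0, 1], matching the
# tutorial's own loader, so the initial softmax is not saturated.
trX = floatX(trX) / 255.0
teX = floatX(teX) / 255.0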