How to get the arclen between two curve points in Maya?

In Maya 2015, I can get the arclen of a curve using this command:
cmds.arclen('bezier1')
But now I want to get the arc length between two points on my curve. Is there any way to do this?

Using the Maya API, you can use MFnNurbsCurve::findLengthFromParam (Maya 2016+ only). If you need the length between two points, call this function for each parameter and subtract.
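A rough sketch with the Python API 2.0 (assuming Maya 2016+ where findLengthFromParam is available; the shape node name and the 0.25/0.75 parameters are placeholders in the curve's own parameter range):

import maya.api.OpenMaya as om

sel = om.MSelectionList()
sel.add('bezierShape1')                   # the curve's shape node (name is an assumption)
curveFn = om.MFnNurbsCurve(sel.getDagPath(0))

lenA = curveFn.findLengthFromParam(0.25)  # length from the curve start to param 0.25
lenB = curveFn.findLengthFromParam(0.75)  # length from the curve start to param 0.75
print(lenB - lenA)                        # length between the two parameters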
If you don't want to use the API, the other option is to duplicate your original curve, detach it at the needed points, and then run the arclen command on the resulting piece to get your length.
Note that when detaching a curve, Maya appears to try to keep the curvature as close to the original as possible, but this isn't exact, so the length may not match the corresponding span of the original curve. Rebuilding the curve with more points may increase the accuracy if that is an important factor for you.
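A quick sketch of that approach with cmds (the curve name and parameter values are placeholders, and it assumes detachCurve returns the pieces in parameter order):

from maya import cmds

dup = cmds.duplicate('bezier1')[0]   # work on a copy so the original curve stays intact
# detach the copy at the two parameters of interest
pieces = cmds.detachCurve(dup + '.u[0.25]', dup + '.u[0.75]', ch=False, rpo=False)
# the middle piece spans the two parameters; measure it
print(cmds.arclen(pieces[1], ch=False))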

Using Maya's API is surely the best way to do it, as @scottiedoo said, but here is a function I made before I knew the API; it gives you the same results.
from maya import cmds

def computeCrvLength(crv, startParam=None, endParam=None):
    '''
    Compute the length of a curve between the two given U parameters. If both
    U parameter arguments are left at None (default), the length of the whole
    curve is computed.

    Arguments:
    - crv = string; an existing nurbsCurve
    - startParam = 0 <= float <= 1 or None; default = None; parameter value.
      If not None, only the span between startParam and endParam is measured.
    - endParam = 0 <= float <= 1 or None; default = None; parameter value.
      If not None, only the span between startParam and endParam is measured.

    Returns:
    - The length of the curve between the given U parameters
    - The length of the curve from its start to startParam
    - The length of the curve from its start to endParam
    '''
    ###### Exceptions
    if not cmds.objExists(crv):
        cmds.error('The curve "%s" does not exist.' % crv)
    if cmds.filterExpand(crv, sm=9) is None:
        cmds.error('The object "%s" is not a nurbsCurve.' % crv)
    if startParam is not None and not (0 <= startParam <= 1):
        cmds.error('The start point parameter value must be between 0 and 1.')
    if endParam is not None and not (0 <= endParam <= 1):
        cmds.error('The end point parameter value must be between 0 and 1.')
    if (startParam is None) != (endParam is None):
        cmds.error('The start and end point parameters must both be None or '
                   'both have values.')
    if startParam is not None and endParam is not None and endParam < startParam:
        cmds.error('The end point parameter value cannot be less than the '
                   'start point parameter value.')

    ###### Function
    if startParam is None and endParam is None:
        crvLength = cmds.arclen(crv, ch=False)
        distCrvToStartParam = 0
        distCrvToEndParam = crvLength
    else:
        tmpArclenDim = cmds.arcLengthDimension(cmds.listRelatives(crv, s=True)[0] + '.u[0]')
        cmds.setAttr(cmds.listRelatives(tmpArclenDim, p=True)[0] + '.uParamValue', startParam)
        distCrvToStartParam = cmds.getAttr(tmpArclenDim + '.al')
        cmds.setAttr(cmds.listRelatives(tmpArclenDim, p=True)[0] + '.uParamValue', endParam)
        distCrvToEndParam = cmds.getAttr(tmpArclenDim + '.al')
        cmds.delete(tmpArclenDim)
        crvLength = distCrvToEndParam - distCrvToStartParam

    return crvLength, distCrvToStartParam, distCrvToEndParam
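A quick usage sketch (assuming a curve named bezier1 whose parameter range is normalized to 0-1, as the function expects):

fullLength, _, _ = computeCrvLength('bezier1')                                  # whole curve
span, toStart, toEnd = computeCrvLength('bezier1', startParam=0.25, endParam=0.75)
print(span)  # arc length between the two parameters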

Related

Snowpark with GEOMETRY type fails

I am trying to parse WKT and create a GEOMETRY type with the Snowpark Python API, but it fails:
session.sql("select to_geometry('POINT(1820.12 890.56)')").show()
TypeError: '>' not supported between instances of 'NoneType' and 'int'
I tested both version 0.9.0 and 0.8.0, with the same result.
The above SQL works fine in Snowflake's worksheets.
It seems Snowpark Python cannot handle the output of geometry objects when showing results. This is the failing code in type_utils.py:
if column_type_name == "DECIMAL" or (
    (column_type_name == "FIXED" or column_type_name == "NUMBER") and scale != 0
):
    if precision != 0 or scale != 0:
        if precision > DecimalType._MAX_PRECISION:
            return DecimalType(
                DecimalType._MAX_PRECISION,
                scale + precision - DecimalType._MAX_SCALE,
            )
        else:
            return DecimalType(precision, scale)
    else:
        return DecimalType(38, 18)
The code fails when it can't determine the precision. So adding a check mitigates the issue:
if column_type_name == "DECIMAL" or (
    (column_type_name == "FIXED" or column_type_name == "NUMBER") and scale != 0
):
    if precision is None:
        return DecimalType(precision, scale)
    if precision != 0 or scale != 0:
        if precision > DecimalType._MAX_PRECISION:
            return DecimalType(
                DecimalType._MAX_PRECISION,
                scale + precision - DecimalType._MAX_SCALE,
            )
        else:
            return DecimalType(precision, scale)
    else:
        return DecimalType(38, 18)
Please submit a support ticket to Snowflake, so the code can be fixed.
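Until a fix ships, one possible client-side workaround (a sketch I have not verified against the affected Snowpark versions) is to convert the geometry to WKT text in SQL, so that show() only has to render a string column:

# ST_ASWKT returns the geometry as WKT text, so no GEOMETRY column reaches the client
session.sql("select st_aswkt(to_geometry('POINT(1820.12 890.56)')) as wkt").show()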

CUDA.jl: ERROR: LoadError: MethodError: no method matching typeof(fillpixel!)(::CuDeviceMatrix{RGB{Float32}, 1})

I had made a very minimal ray tracer in Julia, and was in the process of implementing a faster version that uses CUDA. The full code is too extensive to share, but here is the part that I think is most relevant to the question:
world = World(RGB(1, 1, 1), 5e-6, shapes, lights, 0.2, 4)
camera = Camera((0, -5000, -5000), 1000, (0, 0, 0), 1920, 1080)
canvas = CUDA.fill(world.background, camera.height, camera.width)

function fillpixel!(arr::CuArray)
    height = size(arr)[1]
    for j in 1:length(arr)
        ind = (j % height, ceil(j / height))
        ray = [([ind[2], ind[1]] - [camera.width / 2, camera.height / 2])..., camera.depth]
        (ray[2], ray[3]) = (cos(camera.rotation[1] + atan(ray[3], ray[2])), sin(camera.rotation[1] + atan(ray[3], ray[2]))) .* sqrt(ray[2]^2 + ray[3]^2)
        (ray[1], ray[3]) = (cos(camera.rotation[2] + atan(ray[3], ray[1])), sin(camera.rotation[2] + atan(ray[3], ray[1]))) .* sqrt(ray[1]^2 + ray[3]^2)
        (ray[1], ray[2]) = (cos(camera.rotation[3] + atan(ray[2], ray[1])), sin(camera.rotation[3] + atan(ray[2], ray[1]))) .* sqrt(ray[2]^2 + ray[1]^2)
        v = (Inf, nothing, nothing)
        for object in world.objects
            t = traceray(ray, camera.position, object, mindistance=camera.depth)
            t !== nothing && t[1] < v[1] && (v = (t[1], t[2], object))
        end
        v[1] != Inf && (arr[j] = computecolor(v[3].material, ray, v[1], v[2], world, camera.position .+ v[1] * ray, v[3]))
        return nothing
    end
end

@cuda fillpixel!(canvas)
but when I run the program, it gives me the following error:
CUDA.jl: ERROR: LoadError: MethodError: no method matching typeof(fillpixel!)(::CuDeviceMatrix{RGB{Float32}, 1})
and I am unable to find out what causes this error and what exactly I'm doing wrong.
Thanks.
Two comments: fillpixel!(arr::CuArray) restricts your function to the type CuArray. CUDA.jl translates the host-side representation CuArray to the device-side representation CuDeviceArray, so if you loosen the type restriction you won't run into this issue.
Secondly, you don't want to iterate over the whole array inside the kernel you launched. Either use a function like map or map! to express the data parallelism, or use the CUDA indexing primitives.

Solving multi-armed bandit problems with continuous action space

My problem has a single state and an infinite number of actions on the interval (0, 1). After quite some time of googling I found a few papers about an algorithm called the zooming algorithm, which can solve problems with a continuous action space. However, my implementation is bad at exploiting, so I'm thinking about adding an epsilon-greedy kind of behavior.
Is it reasonable to combine different methods?
Do you know other approaches to my problem?
Code samples:
import math
import numpy as np
import portion as P

def choose_action(self, i_ph):
    # Activation rule
    not_covered = P.closed(lower=0, upper=1)
    for arm in self.active_arms:
        confidence_radius = calc_confidence_radius(i_ph, arm)
        confidence_interval = P.closed(arm.norm_value - confidence_radius, arm.norm_value + confidence_radius)
        not_covered = not_covered - confidence_interval
    if not_covered != P.empty():
        rans = []
        height = 0
        heights = []
        for i in not_covered:
            rans.append(np.random.uniform(i.lower, i.upper))
            height += i.upper - i.lower
            heights.append(i.upper - i.lower)
        ran_n = np.random.uniform(0, height)
        j = 0
        ran = 0
        for i in range(len(heights)):
            if j < ran_n < j + heights[i]:
                ran = rans[i]
            j += heights[i]
        self.active_arms.append(Arm(len(self.active_arms), ran * (self.sigma_square - lower) + lower, ran))
    # Selection rule
    max_index = float('-inf')
    max_index_arm = None
    for arm in self.active_arms:
        confidence_radius = calc_confidence_radius(i_ph, arm)
        # index function from the zooming algorithm
        index = arm.avg_learning_reward + 2 * confidence_radius
        if index > max_index:
            max_index = index
            max_index_arm = arm
    action = max_index_arm.value
    self.current_arm = max_index_arm
    return action

def learn(self, action, reward):
    arm = self.current_arm
    arm.avg_reward = (arm.pulled * arm.avg_reward + reward) / (arm.pulled + 1)
    if reward > self.max_profit:
        self.max_profit = reward
    elif reward < self.min_profit:
        self.min_profit = reward
    # normalize reward to [0, 1]
    high = 100
    low = -75
    if reward >= high:
        reward = 1
        self.high_count += 1
    elif reward <= low:
        reward = 0
        self.low_count += 1
    else:
        reward = (reward - low) / (high - low)
    arm.avg_learning_reward = (arm.pulled * arm.avg_learning_reward + reward) / (arm.pulled + 1)
    arm.pulled += 1

# zooming algorithm confidence radius
def calc_confidence_radius(i_ph, arm: Arm):
    return math.sqrt((8 * i_ph) / (1 + arm.pulled))
You may find this useful; the full algorithm description is here. They grid out the probes uniformly; informing this choice (e.g. centering a normal distribution on a reputedly high-reward arm) is also possible (but this might invalidate a few of the bounds, I am not sure).
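As for combining methods: if you want to bolt epsilon-greedy exploration onto the zooming selection rule from the question, a minimal sketch could look like the one below. It assumes choose_action and learn are methods of your agent class; the epsilon value and the random-arm exploration step are assumptions, not part of the zooming algorithm.

import numpy as np

def choose_action_eps_greedy(self, i_ph, epsilon=0.1):
    # With probability epsilon, pull a uniformly random active arm (exploration);
    # otherwise fall back to the zooming selection rule from the question.
    if self.active_arms and np.random.uniform(0, 1) < epsilon:
        arm = self.active_arms[np.random.randint(len(self.active_arms))]
        self.current_arm = arm  # keep the bookkeeping that learn() relies on
        return arm.value
    return self.choose_action(i_ph)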

How to fix 'ValueError: shapes (1,3) and (1,1) not aligned: 3 (dim 1) != 1 (dim 0)' error in numpy

I am currently learning how to code neural networks in numpy/Python. I used the code from this tutorial and tried to adapt it to make an importable module. However, when I tried using my own dataset, it threw a numpy error: ValueError: shapes (1,3) and (1,1) not aligned: 3 (dim 1) != 1 (dim 0).
I have already tried reshaping all of the matrices from (x,) to (x,1) but with no success. After a bit of reading around, transposing the arrays was also meant to fix the issue, but I tried that as well and had no success there either.
Here is the module (called hidden_net):
import numpy as np

class network:
    def __init__(self, layer_num, learning_rate=0.7, seed=None, logistic_coefficent=0.9):
        self.logistic_coefficent = logistic_coefficent
        self.learning_rate = learning_rate
        self.w0 = np.random.random((layer_num[0], layer_num[1]))
        self.w1 = np.random.random((layer_num[1], layer_num[2]))
        np.random.seed(seed)

    def sigmoid(self, x, reverse=False):
        if reverse == True:
            return x * (1 - x)
        return 1 / (1 + np.exp(-x * self.logistic_coefficent))

    def train(self, inps, outs):
        inps = np.array(inps)
        layer0 = inps
        layer1 = self.sigmoid(np.dot(layer0, self.w0))
        layer2 = self.sigmoid(np.dot(layer1, self.w1))
        layer2_error = outs - layer2
        layer2_delta = layer2_error * self.sigmoid(layer2, reverse=True)  # *self.learning_rate
        layer1_error = layer2_delta.dot(self.w1.T)
        layer1_delta = layer1_error * self.sigmoid(layer1, reverse=True)  # *self.learning_rate
        layer1 = np.reshape(layer1, (layer1.shape[0], 1))
        layer2 = np.reshape(layer2, (layer2.shape[0], 1))
        layer1_delta = np.reshape(layer1_delta, (layer1_delta.shape[0], 1))  # Other attempts to reshape to avoid this error
        layer2_delta = np.reshape(layer2_delta, (layer2_delta.shape[0], 1))
        self.w1 += layer1.T.dot(layer2_delta)
        self.w0 += layer0.T.dot(layer1_delta)
Here is the program importing that module:
import hidden_net

op = open('Mall_Customers_Mod.txt', 'r')
full = op.read()
op.close()
full_lines = full.split('\n')
training_lines = []
for i in range(174):
    training_lines.append(full_lines[0])
    del full_lines[0]
training_inputs = []
training_outputs = []
for j in training_lines:
    training_inputs.append([float(j.split(',')[0]), float(j.split(',')[1])])
    training_outputs.append(float(j.split(',')[2]))
testing_lines = full_lines
testing_inputs = []
testing_outputs = []
for l in testing_lines:
    testing_inputs.append([float(l.split(',')[0]), float(l.split(',')[1])])
    testing_outputs.append(float(l.split(',')[2]))
nn = hidden_net.network([2, 3, 1], seed=10)
for i in range(1000):
    for cur in range(len(training_inputs)):
        nn.train(training_inputs[cur], training_outputs[cur])
and here is part of my data set (Mall_Customers_Mod.txt)
-1,19,15
-1,21,15
1,20,16
1,23,16
1,31,17
1,22,17
1,35,18
1,23,18
-1,64,19
1,30,19
-1,67,19
1,35,19
1,58,20
1,24,20
-1,37,20
-1,22,20
1,35,21
-1,20,21
-1,52,23
The error is on line 30:
self.w1 += layer1.T.dot(layer2_delta)
ValueError: shapes (1,3) and (1,1) not aligned: 3 (dim 1) != 1 (dim 0)
Also, sorry, I know I am meant to avoid pasting entire files, but it seems pretty unavoidable here.
The lines below are wrong; layer0 is the input layer and does not contain any neurons.
self.w1 += layer1.T.dot(layer2_delta)
self.w0 += layer0.T.dot(layer1_delta)
They should be:
self.w1 += layer2.T.dot(layer2_delta)
self.w0 += layer1.T.dot(layer1_delta)
All the reshape operations should be removed too. The updated train function:
def train(self, inps, outs):
    inps = np.array(inps)
    layer0 = inps
    layer1 = self.sigmoid(np.dot(layer0, self.w0))
    layer2 = self.sigmoid(np.dot(layer1, self.w1))
    layer2_error = outs - layer2
    layer2_delta = layer2_error * self.sigmoid(layer2, reverse=True)  # *self.learning_rate
    layer1_error = layer2_delta.dot(self.w1.T)
    layer1_delta = layer1_error * self.sigmoid(layer1, reverse=True)  # *self.learning_rate
    self.w1 += layer2.T.dot(layer2_delta)
    self.w0 += layer1.T.dot(layer1_delta)

Repeat current poly reduce function on multiple objects that are selected?

I'm looping through multiple objects, but the loop stops before going to the next object.
I created a loop with a condition. If the condition is met, it calls a ReduceEdge() function. The problem is that it only iterates once and does not go on to the next object and repeat the procedure.
global proc ReduceEdge()
{
    polySelectEdgesEveryN "edgeRing" 2;
    polySelectEdgesEveryN "edgeLoop" 1;
    polyDelEdge -cv on;
}

string $newSel[] = `ls -sl`;
for($i = 0; $i < size($newSel); $i++)
{
    select $newSel[$i];
    int $polyEval[] = `polyEvaluate -e $newSel[$i]`;
    int $temp = $polyEval[0];
    for($k = 0; $k < $temp; $k++)
    {
        string $polyInfo[] = `polyInfo -fn ($newSel[$i] + ".f[" + $k + "]")`;
        $polyInfo = stringToStringArray($polyInfo[$i], " ");
        float $vPosX = $polyInfo[2];
        float $vPosY = $polyInfo[3];
        float $vPosZ = $polyInfo[4];
        if($vPosX == 0 && $vPosY == 0 && $vPosZ == 1.0)
        {
            select ($newSel[$i] + ".e[" + $k + "]");
            ReduceEdge();
        }
    }
}
Expected results:
If I select 4 cylinders, all their edges will reduce by half the current amount.
Actual results:
When 4 cylinders are selected, only one reduces down to half the edges. The rest stay the same.
Since my comment did help you out, I'll try and give a more thorough explanation.
Your first loop (with $i) iterates over each object in your selection. This is fine.
Your second loop (with $k) iterates over the number of edges for the current object in the loop. So far, so good. Though I'm wondering if it would be more correct to loop over the number of faces...
Now you ask for the face normal of the face at index $k on object $i, with string $polyInfo[] = `polyInfo -fn ($newSel[$i] + ".f[" + $k + "]")`;.
If you print the size and values of $polyInfo, you'll see you have an array with a single element: the face normal of the particular face you just queried. Therefore it will always be element 0, not $i, which increases with every iteration; indexing with $polyInfo[0] instead of $polyInfo[$i] fixes the original script.
I have made a Python/PyMEL version of the script, which may be nice for you to see.
import pymel.core as pm
import maya.mel as mel

def reduceEdge():
    mel.eval('polySelectEdgesEveryN "edgeRing" 2;')
    mel.eval('polySelectEdgesEveryN "edgeLoop" 1;')
    pm.polyDelEdge(cv=True)

def reducePoly():
    selection = pm.ls(sl=True)
    for obj in selection:
        for i, face in enumerate(obj.f):
            normal = face.getNormal()
            if (normal.x == 0.0 and normal.y == 0.0 and normal.z == 1.0):
                pm.select(obj + '.e[' + str(i) + ']')
                reduceEdge()

reducePoly()
