Sink and Source are different length Vecs - concatenation

The following code is intended to concatenate two vectors. It compiles, but errors are reported after elaboration.
Code:
package problems

import chisel3._
import chisel3.util._

class Compare1 extends Module {
  val io = IO(new Bundle {
    val in1 = Input(Vec(5, UInt(3.W)))
    val in2 = Input(Vec(5, UInt(3.W)))
    val out = Output(Vec(6, UInt(3.W)))
  })
  val L = 5
  io.out := io.in2
  val ml = 4
  for (l <- 0 until ml) {
    when (io.in2(l) === io.in1(L - ml + l)) {
      io.out(l) := io.in1(l)
    }
  }
  val m = (2 * L) - ml
  for (i <- ml until m) {
    io.out(i) := io.in2(i - (L - ml))
  }
}
Testbench:
I am poking 19333 and 23599 and expecting 154671
Error:
To sum it up, this is what I get
Errors: 1: in the following tutorials
Tutorial Compare1: exception Connection between sink (Vec(chisel3.core.UInt#80, chisel3.core.UInt#82, chisel3.core.UInt#84, chisel3.core.UInt#86, chisel3.core.UInt#88, chisel3.core.UInt#8a)) and source (Vec(chisel3.core.UInt#6a, chisel3.core.UInt#6c, chisel3.core.UInt#6e, chisel3.core.UInt#70, chisel3.core.UInt#72)) failed #: Sink and Source are different length Vecs.

The error is with the line io.out := io.in2: io.out is a Vec of length 6, while io.in2 is a Vec of length 5. As the error says, you cannot connect Vecs of different lengths together.
If you wish to connect indices 0 to 4 of io.in2 to io.out, try:
for (i <- 0 until io.in2.size) { io.out(i) := io.in2(i) }

Related

How to fix 'ValueError: shapes (1,3) and (1,1) not aligned: 3 (dim 1) != 1 (dim 0)' error in numpy

I am currently learning how to code neural networks in numpy/Python. I used the code from this tutorial and tried to adapt it into an importable module. However, when I tried using my own dataset, it threw a numpy error: ValueError: shapes (1,3) and (1,1) not aligned: 3 (dim 1) != 1 (dim 0).
I have already tried reshaping all of the matrices from (x,) to (x,1), but with no success. After a bit of reading around, transposing the arrays was also supposed to fix the issue, but I tried that as well and had no success there either.
Here is the module (called hidden_net):
import numpy as np

class network:
    def __init__(self, layer_num, learning_rate=0.7, seed=None, logistic_coefficent=0.9):
        self.logistic_coefficent = logistic_coefficent
        self.learning_rate = learning_rate
        self.w0 = np.random.random((layer_num[0], layer_num[1]))
        self.w1 = np.random.random((layer_num[1], layer_num[2]))
        np.random.seed(seed)

    def sigmoid(self, x, reverse=False):
        if(reverse==True):
            return x*(1-x)
        return 1/(1+np.exp(-x*self.logistic_coefficent))

    def train(self, inps, outs):
        inps = np.array(inps)
        layer0 = inps
        layer1 = self.sigmoid(np.dot(layer0, self.w0))
        layer2 = self.sigmoid(np.dot(layer1, self.w1))
        layer2_error = outs - layer2
        layer2_delta = layer2_error*self.sigmoid(layer2, reverse=True) #*self.learning_rate
        layer1_error = layer2_delta.dot(self.w1.T)
        layer1_delta = layer1_error * self.sigmoid(layer1, reverse=True) #*self.learning_rate
        layer1 = np.reshape(layer1, (layer1.shape[0], 1))
        layer2 = np.reshape(layer2, (layer2.shape[0], 1))
        layer1_delta = np.reshape(layer1_delta, (layer1_delta.shape[0], 1)) # Other attempts to reshape to avoid this error
        layer2_delta = np.reshape(layer2_delta, (layer2_delta.shape[0], 1))
        self.w1 += layer1.T.dot(layer2_delta)
        self.w0 += layer0.T.dot(layer1_delta)
Here is the program importing that module:
import hidden_net

op = open('Mall_Customers_Mod.txt', 'r')
full = op.read()
op.close()
full_lines = full.split('\n')

training_lines = []
for i in range(174):
    training_lines.append(full_lines[0])
    del full_lines[0]

training_inputs = []
training_outputs = []
for j in training_lines:
    training_inputs.append([float(j.split(',')[0]), float(j.split(',')[1])])
    training_outputs.append(float(j.split(',')[2]))

testing_lines = full_lines
testing_inputs = []
testing_outputs = []
for l in testing_lines:
    testing_inputs.append([float(l.split(',')[0]), float(l.split(',')[1])])
    testing_outputs.append(float(l.split(',')[2]))

nn = hidden_net.network([2,3,1], seed=10)
for i in range(1000):
    for cur in range(len(training_inputs)):
        nn.train(training_inputs[cur], training_outputs[cur])
and here is part of my data set (Mall_Customers_Mod.txt)
-1,19,15
-1,21,15
1,20,16
1,23,16
1,31,17
1,22,17
1,35,18
1,23,18
-1,64,19
1,30,19
-1,67,19
1,35,19
1,58,20
1,24,20
-1,37,20
-1,22,20
1,35,21
-1,20,21
-1,52,23
The error is on line 30:
self.w1 += layer1.T.dot(layer2_delta)
ValueError: shapes (1,3) and (1,1) not aligned: 3 (dim 1) != 1 (dim 0)
Also, sorry: I know I am meant to avoid pasting entire files, but it seems pretty unavoidable here.
The lines below are wrong; layer0 is the input layer and does not contain any neurons.
self.w1 += layer1.T.dot(layer2_delta)
self.w0 += layer0.T.dot(layer1_delta)
They should be:
self.w1 += layer2.T.dot(layer2_delta)
self.w0 += layer1.T.dot(layer1_delta)
All the reshape operations should be removed too. The updated train function:
def train(self, inps, outs):
    inps = np.array(inps)
    layer0 = inps
    layer1 = self.sigmoid(np.dot(layer0, self.w0))
    layer2 = self.sigmoid(np.dot(layer1, self.w1))
    layer2_error = outs - layer2
    layer2_delta = layer2_error*self.sigmoid(layer2, reverse=True) #*self.learning_rate
    layer1_error = layer2_delta.dot(self.w1.T)
    layer1_delta = layer1_error * self.sigmoid(layer1, reverse=True) #*self.learning_rate
    self.w1 += layer2.T.dot(layer2_delta)
    self.w0 += layer1.T.dot(layer1_delta)
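A quick way to see why the corrected updates run (and why the reshapes caused the error) is to trace the shapes in a tiny standalone sketch. This mirrors the [2, 3, 1] network from the question with made-up numbers, and drops the logistic coefficient for brevity:

```python
import numpy as np

# Fixed toy weights for a [2, 3, 1] network (shapes from the question;
# the numbers themselves are made up for illustration).
w0 = np.full((2, 3), 0.1)   # input -> hidden
w1 = np.full((3, 1), 0.1)   # hidden -> output

inps = np.array([1.0, 2.0])                    # one 1-D sample, shape (2,)
layer1 = 1 / (1 + np.exp(-inps.dot(w0)))       # shape (3,)
layer2 = 1 / (1 + np.exp(-layer1.dot(w1)))     # shape (1,)

layer2_delta = (-1.0 - layer2) * layer2 * (1 - layer2)          # shape (1,)
layer1_delta = layer2_delta.dot(w1.T) * layer1 * (1 - layer1)   # shape (3,)

# Reshaping to (n, 1), as in the question, produces exactly the
# reported error: a (1, 3) matrix cannot be dotted with a (1, 1) one.
try:
    layer1.reshape(3, 1).T.dot(layer2_delta.reshape(1, 1))
except ValueError as e:
    print("reshaped update fails:", e)

# With everything kept 1-D, .T is a no-op and the dot products reduce
# to scalars that broadcast over the weight matrices.
w1 += layer2.T.dot(layer2_delta)
w0 += layer1.T.dot(layer1_delta)
print(w0.shape, w1.shape)   # shapes are unchanged: (2, 3) (3, 1)
```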

rscala package: How to access the elements of Scala cached reference Array in R

I am using rscala to communicate between Scala and R. Let's say I have a Scala function that returns an Array containing several Array[Double] values and a Double:
Array(projectedTrainToMatrix,predictedTrain,kernelParam,projection,thresholdsToArray)
where kernelParam is of type Double and the others are Array[Double]. When I run the method from R as:
myfit<-s$.cristinahg.ocapis.kdlor$kdlorfit(traindata,trainlabels,kerneltype,params)
I get myfit as
ScalaCachedReference... _: Array[Array[_]]
[Ljava.lang.Object;#b9dfc5a
But I want to access each of the values in myfit. I have tried to access them via myfit$'(1)', but I get this instead of the desired Array of Double:
function (..., .AS.REFERENCE = NA, .EVALUATE = TRUE, .PARENTHESES = FALSE)
{
    args <- list(...)
    if (!is.null(names(args)))
        stop("Arguments should not have names.")
    names <- paste0(rep("$", length(args)), seq_len(length(args)))
    header <- mkHeader(args, names)
    identifier <- ""
    body <- if (inherits(reference, "ScalaInterpreterReference"))
        paste0(reference[["identifier"]], ".", method)
    else if (inherits(reference, "ScalaCachedReference")) {
        if (.EVALUATE) {
            identifier <- reference[["identifier"]]
            paste0("R.cached($0).asInstanceOf[", reference[["type"]],
                "].", method)
        }
        else {
            paste0("R.cached(\"", reference[["identifier"]],
                "\").asInstanceOf[", reference[["type"]], "].",
                method)
        }
    }
    else if (inherits(reference, "ScalaInterpreterItem")) {
        if (method == "new")
            paste0("new ", reference[["snippet"]])
        else paste0(reference[["snippet"]], ".", method)
    }
    else stop("Unrecognized reference type.")
    argsList <- paste0(names, collapse = ",")
    if ((nchar(argsList) > 0) || .PARENTHESES)
        argsList <- paste0("(", argsList, ")")
    snippet <- paste0(header, paste0(body, argsList))
    if (get("show.snippet", envir = interpreter[["env"]]))
        cat("<<<\n", snippet, "\n>>>\n", sep = "")
    cc(interpreter)
    wb(interpreter, DEF)
    wc(interpreter, snippet)
    flush(interpreter[["socketIn"]])
    status <- rb(interpreter, "integer")
    if (status != OK) {
        if (get("serializeOutput", envir = interpreter[["env"]]))
            echoResponseScala(interpreter)
        stop("Problem defining function.")
    }
    functionName <- rc(interpreter)
    if (get("serializeOutput", envir = interpreter[["env"]]))
        echoResponseScala(interpreter)
    f <- function(..., .NBACK = 1) {
        args <- list(...)
        if (length(args) != length(names))
            stop("Incorrect number of arguments.")
        if (!is.null(names(args)))
            stop("Arguments should not have names.")
        workspace <- new.env(parent = parent.frame(.NBACK))
        assign(".rsI", interpreter, envir = workspace)
        for (i in seq_len(length(args))) assign(names[i], args[[i]],
            envir = workspace)
        cc(interpreter)
        wb(interpreter, INVOKE)
        wc(interpreter, functionName)
        wc(interpreter, identifier)
        flush(interpreter[["socketIn"]])
        rServe(interpreter, TRUE, workspace)
        status <- rb(interpreter, "integer")
        if (get("serializeOutput", envir = interpreter[["env"]]))
            echoResponseScala(interpreter)
        if (status != OK)
            stop("Problem invoking function.")
        result <- scalaGet(interpreter, "?", .AS.REFERENCE)
        if (is.null(result))
            invisible(result)
        else result
    }
    if (.EVALUATE)
        f(..., .NBACK = 2)
    else f
}
<bytecode: 0x55b1ba98b3f8>
<environment: 0x55b1bd2616c0>
So how can I access each element of the Scala Array in R?
Your example shows that you are using rscala < 3.0.0. While it could be done in older versions, I recommend you use a recent version on CRAN. Below I provide a solution using rscala 3.1.0.
library(rscala)
scala()
s + '
  def kdlorfit(
      projectedTrainToMatrix: Array[Double], predictedTrain: Array[Double],
      kernelParam: Double, projection: Array[Double], thresholdsToArray: Array[Double]) =
  {
    Array(projectedTrainToMatrix,predictedTrain,kernelParam,projection,thresholdsToArray)
  }
'
x1 <- c(1,2,3)
x2 <- c(11,12,13)
x3 <- 34
x4 <- c(100,110,120)
x5 <- c(50,51)
myfit <- s$kdlorfit(x1,x2,x3,x4,x5)
scalaType(myfit)
identical(x1,myfit(0L)$"asInstanceOf[Array[Double]]"())
identical(x3,myfit(2L)$"asInstanceOf[Double]"())
Note the need to cast using asInstanceOf because the Scala type of myfit is Array[Any].
If the function returned Array[Array[Double]] instead of Array[Any], no casting would be needed, as shown below.
s + '
  def kdlorfit2(
      projectedTrainToMatrix: Array[Double],
      predictedTrain: Array[Double],
      kernelParam: Array[Double],
      projection: Array[Double],
      thresholdsToArray: Array[Double]) =
  {
    Array(projectedTrainToMatrix,predictedTrain,kernelParam,projection,thresholdsToArray)
  }
'
myfit <- s$kdlorfit2(x1,x2,I(x3),x4,x5)
scalaType(myfit)
identical(x1,myfit(0L))
identical(x3,myfit(2L))
Note that, when calling kdlorfit2, the argument x3 is passed as an Array[Double] because it is wrapped in I(). Without the wrapping, it is passed as a Double, as in the previous example.

OpenMDAO v0.13: performing an optimization when using multiple instances of a component instantiated in a loop

I am setting up an optimization in OpenMDAO v0.13 using several components that are each used many times. My assembly seems to work just fine with the default driver, but when I run with an optimizer it does not solve: the optimizer simply runs with the inputs given and returns the answer using those inputs. I am not sure what the issue is, but I would appreciate any insights. I have included a simple code mimicking my structure that reproduces the error. I think the problem is in the connections; summer.fs does not update after initialization.
from openmdao.main.api import Assembly, Component
from openmdao.lib.datatypes.api import Float, Array, List
from openmdao.lib.drivers.api import DOEdriver, SLSQPdriver, COBYLAdriver, CaseIteratorDriver
from pyopt_driver.pyopt_driver import pyOptDriver
import numpy as np

class component1(Component):
    x = Float(iotype='in')
    y = Float(iotype='in')
    term1 = Float(iotype='out')
    a = Float(iotype='in', default_value=1)

    def execute(self):
        x = self.x
        a = self.a
        term1 = a*x**2
        self.term1 = term1
        print "In comp1", self.name, self.a, self.x, self.term1

    def list_deriv_vars(self):
        return ('x',), ('term1',)

    def provideJ(self):
        x = self.x
        a = self.a
        dterm1_dx = 2.*a*x
        J = np.array([[dterm1_dx]])
        print 'In comp1, J = %s' % J
        return J

class component2(Component):
    x = Float(iotype='in')
    y = Float(iotype='in')
    term1 = Float(iotype='in')
    f = Float(iotype='out')

    def execute(self):
        y = self.y
        x = self.x
        term1 = self.term1
        f = term1 + x + y**2
        self.f = f
        print "In comp2", self.name, self.x, self.y, self.term1, self.f

class summer(Component):
    total = Float(iotype='out', desc='sum of all f values')

    def __init__(self, size):
        super(summer, self).__init__()
        self.size = size
        self.add('fs', Array(np.ones(size), iotype='in', desc='f values from all cases'))

    def execute(self):
        self.total = sum(self.fs)
        print 'In summer, fs = %s and total = %s' % (self.fs, self.total)

class assembly(Assembly):
    x = Float(iotype='in')
    y = Float(iotype='in')
    total = Float(iotype='out')

    def __init__(self, size):
        super(assembly, self).__init__()
        self.size = size
        self.add('a_vals', Array(np.zeros(size), iotype='in', dtype='float'))
        self.add('fs', Array(np.zeros(size), iotype='out', dtype='float'))
        print 'in init a_vals = %s' % self.a_vals

    def configure(self):
        # self.add('driver', SLSQPdriver())
        self.add('driver', pyOptDriver())
        self.driver.optimizer = 'SNOPT'
        # self.driver.pyopt_diff = True
        # create this first, so we can connect to it
        self.add('summer', summer(size=len(self.a_vals)))
        self.connect('summer.total', 'total')
        print 'in configure a_vals = %s' % self.a_vals
        # create instances of components
        for i in range(0, self.size):
            c1 = self.add('comp1_%d'%i, component1())
            c1.missing_deriv_policy = 'assume_zero'
            c2 = self.add('comp2_%d'%i, component2())
            self.connect('a_vals[%d]' % i, 'comp1_%d.a' % i)
            self.connect('x', ['comp1_%d.x'%i, 'comp2_%d.x'%i])
            self.connect('y', ['comp1_%d.y'%i, 'comp2_%d.y'%i])
            self.connect('comp1_%d.term1'%i, 'comp2_%d.term1'%i)
            self.connect('comp2_%d.f'%i, 'summer.fs[%d]'%i)
            self.driver.workflow.add(['comp1_%d'%i, 'comp2_%d'%i])
        self.connect('summer.fs[:]', 'fs[:]')
        self.driver.workflow.add(['summer'])
        # set up main driver (optimizer)
        self.driver.iprint = 1
        self.driver.maxiter = 100
        self.driver.accuracy = 1.0e-6
        self.driver.add_parameter('x', low=-5., high=5.)
        self.driver.add_parameter('y', low=-5., high=5.)
        self.driver.add_objective('summer.total')

if __name__ == "__main__":
    """ the result should be -1 at (x, y) = (-0.5, 0) """
    import time
    from openmdao.main.api import set_as_top
    a_vals = np.array([1., 1., 1., 1.])
    test = set_as_top(assembly(size=len(a_vals)))
    test.a_vals = a_vals
    print test.a_vals
    test.x = 2.
    test.y = 2.
    tt = time.time()
    test.run()
    print "Elapsed time: ", time.time()-tt, "seconds"
    print 'result = ', test.summer.total
    print '(x, y) = (%s, %s)' % (test.x, test.y)
    print test.fs
I played around with your model and found that the following line caused problems:
#self.connect('summer.fs[:]', 'fs[:]')
When I commented it out, the optimization started to move.
I am not sure exactly what is happening there, but the graph transformations sometimes have issues with component input nodes that are promoted as outputs on the assembly boundary. If you still want those values to be available on the assembly, you could try promoting the outputs from the comp2_n components instead.

How to declare interface array in Go

I am having difficulty with what should be a trivial task: creating an interface array. Here is my code,
var result float64
for i := 0; i < len(diff); i++ {
    result += diff[i]
}
result = 1 / (1 + math.Sqrt(result))
id1 := user1.UserId
id2 := user2.UserId
user1.Similar[id2] = [2]interface{id2, result}
user2.Similar[id1] = [2]interface{id1, result}
result is a float and user*.UserId is an int.
My error message is
syntax error: name list not allowed in interface type
A composite literal for an array of empty interfaces needs both the empty interface's braces and the literal's braces, i.e. [2]interface{}{id2, result}. For example,
package main

import (
    "fmt"
)

func main() {
    x, y := 1, "#"
    a := [2]interface{}{x, y}
    fmt.Println(a)
    b := [2]interface{}{0, "x"}
    fmt.Println(b)
}
Output:
[1 #]
[0 x]

customizable PageRank algorithm in Gremlin?

I'm looking for a Gremlin version of a customizable PageRank algorithm. There are a few old versions out there, one (from: http://www.infoq.com/articles/graph-nosql-neo4j) is pasted below. I'm having trouble fitting the flow into the current GremlinGroovyPipeline-based structure. What is the modernized equivalent of this or something like it?
$_g := tg:open()
g:load('data/graph-example-2.xml')
$m := g:map()
$_ := g:key('type', 'song')[g:rand-nat()]
repeat 2500
    $_ := ./outE[#label='followed_by'][g:rand-nat()]/inV
    if count($_) > 0
        g:op-value('+',$m,$_[1]/#name, 1.0)
    end
    if g:rand-real() > 0.85 or count($_) = 0
        $_ := g:key('type', 'song')[g:rand-nat()]
    end
end
g:sort($m,'value',true())
Another version is available on slide 55 of http://www.slideshare.net/slidarko/gremlin-a-graphbased-programming-language-3876581. The ability to use the if statements and change the traversal based on them is valuable for customization.
many thanks
I guess I'll answer it myself in case somebody else needs it. Be warned that this is not a very efficient PageRank calculation. It should only be viewed as a learning example.
g = new TinkerGraph()
g.loadGraphML('graph-example-2.xml')
m = [:]
g.V('type','song').sideEffect{m[it.name] = 0}

// pick a random song node that has a 'followed_by' edge
def randnode(g) {
    return(g.V('type','song').filter{it.outE('followed_by').hasNext()}.shuffle[0].next())
}

v = randnode(g)
for(i in 0..2500) {
    v = v.outE('followed_by').shuffle[0].inV
    v = v.hasNext() ? v.next() : null
    if (v != null) {
        m[v.name] += 1
    }
    if ((Math.random() > 0.85) || (v == null)) {
        v = randnode(g)
    }
}
msum = m.values().sum()
m.each{k,v -> m[k] = v / msum}
println "top 10 songs: (normalized PageRank)"
m.sort {-it.value }[0..10]
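For comparison, the same visit-counting random walk can be sketched outside Gremlin. The Python version below uses a tiny made-up followed_by adjacency dict (not the graph-example-2.xml data), but follows the same loop: follow a random out-edge, count the visit, and restart with probability 0.15 or at a dead end:

```python
import random

# Toy graph: node -> list of 'followed_by' successors (made-up data).
followed_by = {
    'a': ['b', 'c'],
    'b': ['c'],
    'c': ['a'],
}
random.seed(42)

counts = {n: 0 for n in followed_by}
v = random.choice(sorted(followed_by))
for _ in range(2500):
    nxt = followed_by.get(v, [])
    v = random.choice(nxt) if nxt else None   # follow a random out-edge
    if v is not None:
        counts[v] += 1
    # restart with probability 0.15, or when the walk dead-ends
    if random.random() > 0.85 or v is None:
        v = random.choice(sorted(followed_by))

# normalize visit counts into PageRank-style scores
total = sum(counts.values())
ranks = {k: c / total for k, c in counts.items()}
print(sorted(ranks.items(), key=lambda kv: -kv[1]))
```

As in the Groovy version, this is a Monte-Carlo estimate for learning purposes, not an efficient PageRank implementation.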
Here's a good reference for a simplified one-liner:
https://groups.google.com/forum/m/#!msg/gremlin-users/CRIlDpmBT7g/-tRgszCTOKwJ
(as well as the Gremlin wiki: https://github.com/tinkerpop/gremlin/wiki)
