Matrix dot product with different result from Python - arrays

I'm studying the multi-layer perceptron algorithm and I'm translating Python code to Go.
I have two matrices. Let's call this matrix M1:
[[0 0 1 1]
[0 1 0 1]]
Let's call this matrix M2:
[[ 0.00041597 0.02185088 -0.00362142]
[-0.00057384 -0.02866677 0.00488404]
[-0.00056316 -0.02705587 0.00410378]
[ 0.00048268 0.01692128 -0.00262183]]
I'm computing dotProduct(M1, M2) in Python and it gives me this result:
[[ -8.04778516e-05 -1.01345901e-02 1.48194623e-03]
[ -9.11603819e-05 -1.17454886e-02 2.26221011e-03]]
I'm doing the same in Go with the same input matrices (M1, M2),
but the Go code returns this matrix:
[[-8.047785157755936e-05 -0.010134590118173147 0.0014819462317188985]
[-9.116038191682538e-05 -0.011745488603430228 0.0022622101145935328]]
In Python I'm using numpy's dot operation:
resultMatrix = M1.dot(M2)
In Go, I'm using this package to work with matrices.
The confusing part is that I've computed other dot products in Go and they were all correct. I've run many tests with other values, using the same dotProduct method from this package in other parts of my code, and everything has been fine.
My Go code is at line 128.
The tutorial's Python code is at line 61.
The Go matrix package method that implements the dot product is at line 30.
The Python code is not mine, which is why it's written in Portuguese, but my Go code is written in English.
I know the Python version is right because the whole neural network works well, but I'm not sure about the Go version.
I've read the Go matrix package method many times and can't find the bug. Does anyone know where I'm wrong?

Well, actually the results are pretty much the same. What might be confusing you is that the formatting is different, but Python's -1.01345901e-02 = -0.0101345901 (see Scientific notation, particularly its "E-notation" section), which is pretty close to Go's -0.010134590118173147. Just to make it clear, let's align them:
Python: -1.01345901e-02
Go:     -0.010134590118173147
So if you have any problems in your code, they probably come from some other source than matrix multiplication.
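To see that these really are the same number, here's a small Python sketch (illustrative only) that prints one value three ways; numpy's array display rounds to 8 significant digits, while Go's fmt prints the full float64:

import numpy as np

x = -0.010134590118173147   # the value printed by Go

print(np.array([[x]]))        # [[-0.01013459]]  (numpy's 8-digit array display)
print(format(x, ".8e"))       # -1.01345901e-02  (the E-notation shown above)
print(repr(x))                # -0.010134590118173147  (the full float64, like Go)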

Related

"Expecting miMATRIX type" error when reading MATLAB MAT-file with SciPy

This is a MATLAB question: the problem is caused by the interaction between MATLAB files and Python/numpy. I am trying to write a 3-D array of type uint8 in MATLAB, and then read it in Python using numpy. This is the MATLAB code that creates the file:
voxels = zeros(30, 30, 30);
....
fileID1 = fopen(fullFileNameOut,'w','s');
fwrite(fileID1, voxels, 'uint8');
fclose(fileID1);
This is the Python code that tries to read the file:
filename = 'File3DArray.mat'
arr = scipy.io.loadmat(filename)['instance'].astype(np.uint8)
This is the error that I get when I run the Python code:
raise TypeError('Expecting miMATRIX type here, got %d' % mdtype)
This is the output of the Linux command 'file' on the 3D array file
that I created (I think this is the source of the problem; what is an MMDF mailbox?):
File3DArray.mat: MMDF mailbox
This is the output of the same Linux command 'file' on another 3D array file
that was created by someone else in MATLAB:
GoodFile.mat: Matlab v5 mat-file (little endian) version 0x0100
I want the files I create in MATLAB to be the same format as GoodFile.mat (so that I can read them with the Python/numpy code above). The output of the Linux 'file' command should then match the GoodFile output, I think.
What is the MATLAB code that does that?
To create a MAT-file, use the MATLAB save command:
voxels = zeros(30, 30, 30, 'uint8');
save(fullFileNameOut, 'voxels', '-v7')
You need to add '-v7' (or '-v6') as an argument to save to create a file in an older format, as SciPy doesn't recognize the '-v7.3' files created by default.
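On the Python side, SciPy can then read the file directly. A small sketch, assuming the file was saved as above; note that loadmat keys its result dict by the MATLAB variable name, so it's 'voxels' here rather than the 'instance' key used in the original snippet:

import numpy as np
import scipy.io

# loadmat returns a dict keyed by the MATLAB variable names saved above
data = scipy.io.loadmat('File3DArray.mat')
voxels = data['voxels'].astype(np.uint8)
print(voxels.shape)  # (30, 30, 30)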

Solving Stiff Ordinary Differential Equations using C and MATLAB

I have 2*m+3 stiff ordinary differential equations to solve. I have tried solving them with MATLAB's ode15s for m=1 and it works fine, but I would like to use the Sundials package CVODE instead. While trying to do so, I used the backward differentiation formulae and Newton iteration. I do not supply the Jacobian and allow it to be computed numerically, but it is not working and shows the error:
[CVODE WARNING] CVode
Internal t = 0 and h = 0 are such that t + h = t on the next step. The solver will continue anyway.
[CVODE ERROR] CVode
At t = 0 and h = 0, the correction convergence test failed repeatedly or with |h| = hmin.
SUNDIALS_ERROR: CVode() failed with flag -4
I believe that CVODE uses the same backward differentiation formulae as ode15s, so why is it not working?
Should I try the Krylov solver with preconditioning in CVODE?
Looking forward to any help. Thank you.
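No answer was captured for this question, but as a point of reference for the BDF family it mentions: SciPy's solve_ivp (available in recent SciPy) exposes a variable-order BDF method similar in spirit to ode15s and CVODE's default. A minimal sketch on the classic stiff Robertson problem (a standard test case, not the asker's system):

import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    # Classic stiff chemical kinetics test problem
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
            0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2**2,
            3.0e7 * y2**2]

# BDF is the same method family used by ode15s and CVODE's default setup
sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-6, atol=1e-10)
print(sol.t[-1], sol.y[:, -1])

The tight absolute tolerance matters for the fast component y2; loose or mismatched tolerances are a common cause of convergence-test failures in stiff solvers.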

Julia parallel file processing

I'm relatively new to the Julia language, and I've recently been trying to process some files in parallel. My code looks something like:
for ln in eachline(somefile)
    # process this line
    for ln2 in eachline(someotherfile)
        # process ln and ln2
    end
end
I've been trying to speed things up a bit with @everywhere and @parallel, but it doesn't seem to work for the eachline function.
Am I missing something?
Thanks for the help.
From the @parallel macro we already know that:
@parallel [reducer] for var = range
    body
end
The specified range is partitioned and locally executed across all workers.
To do the above job in minimum time, @parallel gets length(range) and then partitions it between nworkers().
For more details you can:
- see the macro output: macroexpand(:(@parallel for i in 1:5 i end))
- check the macro source: multi.jl
EachLine is one of Julia's iterables; it implements all the mandatory methods of the iterable interface, but length() is not one of them (check this discussion). So EachLine is not a range, and @parallel fails to do its task because of the missing length() function.
But there are at least two solutions to parallelize the processing part:
- use lis = readlines() to collect a range of lines, then @parallel for li in lis
- use pmap()
Julia's pmap() (page 483) is designed for the case where each function call does a large amount of work. In contrast, @parallel for can handle situations where each iteration is tiny, perhaps merely summing two numbers.
A sample code:
len = function(s::AbstractString)
    string(length(s)) * " " * string(myid())
end

function test()
    open("eula.1028.txt") do io
        pmap(len, eachline(io))
    end
end
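As a cross-language aside (my analogy, not part of the original answer): the pmap pattern, where each call does a substantial amount of work, corresponds to a process-pool map in Python:

from multiprocessing import Pool
import os

def line_info(line):
    # analogue of len() above: line length plus the worker's process id
    return "%d %d" % (len(line), os.getpid())

if __name__ == "__main__":
    with open("eula.1028.txt") as io:   # same sample file as above
        lines = io.readlines()
    with Pool() as pool:                # one worker per CPU by default
        results = pool.map(line_info, lines)
    print(results[:5])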

Python 3.5: numpy - how to avoid a deprecated technique

I'm just starting to teach myself to program in Python and I'm getting an "error" message I don't understand:
DeprecationWarning: converting an array with ndim > 0 to an index will
result in an error in the future
value_could_be[i0:i1, j0:j1, k] = 0
value_could_be is a 9x9x9 numpy.ndarray of integers.
i0, i1, j0, j1, k are all integers, presumably valid as subscripts, since the code works just as I want it to. By contrast,
value_could_be[i, :, k] = 0
doesn't generate the same warning.
How should I code this to be future-proof?
I'm running numpy 1.10.1, Python 3.5, Spyder 2.3.7, Anaconda 2.2.0 (originally installed with Python 2.7, with Python 3.5 added later). The whole thing is running under OS X Mountain Lion.
When I Google the message I find references to "theano", but as far as I am aware I'm not using that. Neither do I want to just suppress the message.
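No answer is captured here, but the warning is easy to reproduce: it fires when a slice bound is a numpy array (even a one-element one) rather than a plain Python integer, which typically happens when the bounds come out of calls like np.where. A minimal sketch of the problem and the usual fix (the values below are made up for illustration):

import numpy as np

value_could_be = np.zeros((9, 9, 9), dtype=int)

# Bounds produced by numpy operations are often one-element arrays,
# not plain ints, even though they print like scalars.
i0, i1 = np.array([1]), np.array([4])   # hypothetical stand-ins
j0, j1, k = 0, 3, 2

# value_could_be[i0:i1, j0:j1, k] = 0   # emits the DeprecationWarning

# Future-proof fix: convert each array bound to a plain integer first.
value_could_be[int(i0):int(i1), j0:j1, k] = 0
# i0.item() works as well and is explicit about extracting the scalar.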

How can I run this DTrace script to profile my application?

I was searching online for something to help me do assembly-level profiling and found this on http://www.webservertalk.com/message897404.html:
There are two parts to this problem: finding all instructions of a particular type (inc, add, shl, etc.) to determine groupings, and then figuring out which are getting executed and summing correctly. The first bit is tricky unless grouping by disassembler is sufficient. For figuring out which instructions are being executed, DTrace is of course your friend here (at least in userland).
The nicest way of doing this would be to instrument only the beginning of each basic block; finding these would be a manual process right now. However, instrumenting each instruction is feasible for small applications. Here's an example:
First, our quite trivial C program under test:
#include <unistd.h>

int main(void)
{
    int i;

    for (i = 0; i < 100; i++)
        getpid();
    return 0;
}
Now, our slightly tricky D script:
#pragma D option quiet

pid$target:a.out::entry
/address[probefunc] == 0/
{
    address[probefunc] = uregs[R_PC];
}

pid$target:a.out::
/address[probefunc] != 0/
{
    @a[probefunc, (uregs[R_PC] - address[probefunc]), uregs[R_PC]] = count();
}

END
{
    printa("%s+%#x:\t%d\t%#d\n", @a);
}

Running this against the test program produces per-instruction hit counts like:
main+0x1: 1
main+0x3: 1
main+0x6: 1
main+0x9: 1
main+0xe: 1
main+0x11: 1
main+0x14: 1
main+0x17: 1
main+0x1a: 1
main+0x1c: 1
main+0x23: 101
main+0x27: 101
main+0x29: 100
main+0x2e: 100
main+0x31: 100
main+0x33: 100
main+0x35: 1
main+0x36: 1
main+0x37: 1
From the example given, this is exactly what I need. However, I have no idea what it is doing, how to save the DTrace program, or how to execute it against the code that I want to profile. So I opened this question hoping some people with a good DTrace background could help me understand the code, save it, run it, and hopefully get the results shown.
If all you want to do is run this particular DTrace script, simply save it to a .d script file and use a command like the following to run it against your compiled executable:
sudo dtrace -s dtracescript.d -c [Path to executable]
where you replace dtracescript.d with your script file name.
This assumes that you have DTrace as part of your system (I'm running Mac OS X, which has had it since Leopard).
If you're curious about how this works, I wrote a two-part tutorial on using DTrace for MacResearch a while ago, which can be found here and here.
