Do you know of any applications besides pattern recognition that would be worth implementing with a Hopfield neural network model?
Recurrent neural networks (of which Hopfield nets are a special type) are used for several tasks in sequence learning:
Sequence prediction (map a history of stock values to the expected value in the next timestep)
Sequence classification (map a complete audio snippet to a speaker)
Sequence labelling (map an audio snippet to the sentence spoken)
Non-Markovian reinforcement learning (e.g. tasks that require deep memory, such as the T-Maze benchmark)
I am not sure what you mean by "pattern recognition" exactly, since it is basically a whole field, and every task to which neural networks can be applied fits into it.
You can use a Hopfield network for optimization problems as well.
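As a rough sketch of why that works (Python for illustration; the symmetric weight matrix here is a random stand-in, since a real optimization problem such as TSP would derive W from its cost function): asynchronous updates of a Hopfield net never increase the energy E(s) = -1/2 sᵀWs, so the network settles into a local minimum of whatever objective the weights encode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric weight matrix standing in for a real objective;
# a genuine optimization problem would derive W from its cost function.
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0          # symmetric weights
np.fill_diagonal(W, 0.0)     # no self-connections

def energy(s):
    """Hopfield energy E(s) = -1/2 * s^T W s."""
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)   # random initial state
start_energy = energy(s)

# Asynchronous updates: pick a node, set it to the sign of its input.
# With symmetric weights and zero diagonal, each such update can only
# lower (or keep) the energy.
for _ in range(100):
    i = rng.integers(n)
    s[i] = 1.0 if W[i] @ s >= 0.0 else -1.0
```

The guarantee that the energy is non-increasing under these updates is what makes the dynamics usable as a (local) optimizer.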
You can check out this repository --> Hopfield Network
There you have an example of testing a pattern after training the network off-line.
This is the test:
@Test
public void HopfieldTest() {
    double[] p1 = new double[]{1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0};
    double[] p2 = new double[]{1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -1.0, 1.0, -1.0};
    double[] p3 = new double[]{1.0, 1.0, -1.0, -1.0, 1.0, -1.0, -1.0, 1.0, -1.0};
    ArrayList<double[]> patterns = new ArrayList<>();
    patterns.add(p1);
    patterns.add(p2);

    Hopfield h = new Hopfield(9, new StepFunction());
    h.train(patterns);              // train and load the weight matrix
    double[] result = h.test(p3);   // test a pattern

    System.out.println("\nConnections of Network: " + h.connections() + "\n"); // show neural connections
    System.out.println("Good recuperation capacity of samples: " + Hopfield.goodRecuperation(h.getWeights().length) + "\n");
    System.out.println("Perfect recuperation capacity of samples: " + Hopfield.perfectRacuperation(h.getWeights().length) + "\n");
    System.out.println("Energy: " + h.energy(result));
    System.out.println("Weight Matrix");
    Matrix.showMatrix(h.getWeights());
    System.out.println("\nPattern result of test");
    Matrix.showVector(result);
    h.showAuxVector();
}
After running the test you can see:
Running HopfieldTest
Connections of Network: 72
Good recuperation capacity of samples: 1
Perfect recuperation capacity of samples: 1
Energy: -32.0
Weight Matrix
0.0 0.0 2.0 -2.0 2.0 -2.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 -2.0 2.0 -2.0
2.0 0.0 0.0 -2.0 2.0 -2.0 0.0 0.0 0.0
-2.0 0.0 -2.0 0.0 -2.0 2.0 0.0 0.0 0.0
2.0 0.0 2.0 -2.0 0.0 -2.0 0.0 0.0 0.0
-2.0 0.0 -2.0 2.0 -2.0 0.0 0.0 0.0 0.0
0.0 -2.0 0.0 0.0 0.0 0.0 0.0 -2.0 2.0
0.0 2.0 0.0 0.0 0.0 0.0 -2.0 0.0 -2.0
0.0 -2.0 0.0 0.0 0.0 0.0 2.0 -2.0 0.0
Pattern result of test
1.0 1.0 1.0 -1.0 1.0 -1.0 -1.0 1.0 -1.0
-------------------------
The auxiliar vector is empty
I hope this can help you.
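If you just want the idea without pulling the Java repository, the train-and-test flow above can be sketched in a few lines of NumPy. This assumes the repository uses the standard Hebbian rule (sum of outer products with a zeroed diagonal), which the printed weight matrix suggests:

```python
import numpy as np

# The two stored patterns and the probe pattern from the test above
p1 = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0])
p2 = np.array([1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -1.0, 1.0, -1.0])
p3 = np.array([1.0, 1.0, -1.0, -1.0, 1.0, -1.0, -1.0, 1.0, -1.0])

# Hebbian training: sum of outer products, zero diagonal
W = np.outer(p1, p1) + np.outer(p2, p2)
np.fill_diagonal(W, 0.0)

# One synchronous update of the probe (sign(0) treated as +1)
result = np.where(W @ p3 >= 0.0, 1.0, -1.0)

energy = -0.5 * result @ W @ result
```

One synchronous update already maps p3 (which differs from p2 in a single position) back onto p2, and the energy of the result comes out as -32.0, matching the output above.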
My code currently creates output that looks like this (using example numbers):
0.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 0.0
0.0 0.0 2.0
0.0 0.0 0.0
0.0 0.0 2.0
0.0 0.0 0.0
1.0 0.0 0.0
0.0 0.0 0.0
0.0 3.0 0.0
0.0 0.0 0.0
0.0 0.0 1.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
I was hoping for a solution to plot this data either as a 3D splot or as a GIF that cycles through each matrix (the actual code contains a few hundred matrices). I'm able to alter the output format if necessary. So far I've tried:
do for [i=1:7] {
plot "data.txt" matrix with image
}
as well as other solutions I've found on the site, but none seem to be trying to do the same thing as me.
If anyone with gnuplot experience could help me, that would be a huge help (I'm using a Mac, if that makes a difference).
Welcome to StackOverflow! I assume your matrices are all separated by two empty lines.
If this is the case, you can address the matrices via index (check help index).
You can find out with stats (check help stats) how many blocks you have. Loop through these blocks and set the output to term gif animate (check help gif). Instead of plotting the datablock $Data, simply plot your file.
Script:
### plot matrices as animation
reset session
$Data <<EOD
0.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 0.0
0.0 0.0 2.0


0.0 0.0 0.0
0.0 0.0 2.0
0.0 0.0 0.0
1.0 0.0 0.0


0.0 0.0 0.0
0.0 3.0 0.0
0.0 0.0 0.0
0.0 0.0 1.0


3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0


3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0


3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
EOD
stats $Data u 0 nooutput # get the number of blocks
N = STATS_blocks
set term gif size 600,400 animate delay 30
set output "SO72250259.gif"
set size ratio -1
set cbrange [0:3]
set xrange [-0.5:2.5]
set yrange [-0.5:3.5]
do for [i=0:N-1] {
plot $Data index i matrix w image
}
set output
### end of script
Result: an animated GIF cycling through the matrices.
When you run the function zeros(5, 5) in the Julia shell, you get something that looks like this:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
If you store the multidimensional array in a variable and print it (or directly print it) in the shell or an external script, you will get the much uglier:
[0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0]
Is there a way to access the Array's built-in stdout formatter that displays it the human-readable way in the shell?
Use display(x) instead of print(x).
Note that print(x) can be useful in situations where you need copy-paste-runnable code.
To complement @crstnbr's answer, I would also suggest show:
M = rand(2, 3)
f = open("test.txt", "w")
show(f, "text/plain", M)
close(f)
Then, if you read and print test.txt, you get:
julia> print(read("test.txt",String))
2×3 Array{Float64,2}:
0.73478 0.184505 0.0678265
0.309209 0.204602 0.831286
Note: instead of the file f you can also use stdout.
To save some data to a stream, the function show is better suited than display, as explained in the docs (?display):
In general, you cannot assume that display output goes to stdout (unlike print(x)
or show(x)). For example, display(x) may open up a separate window with an image.
display(x) means "show x in the best way you can for the current output device(s)."
If you want REPL-like text output that is guaranteed to go to stdout, use
show(stdout, "text/plain", x) instead.
Given the y Array, is there a cleaner or more idiomatic way to create a 2D Array such as Y?
y = [1.0 2.0 3.0 4.0 1.0 2.0]'
Y = ifelse(y .== 1, 1.0, 0.0)
for j in 2:length(unique(y))
Y = hcat(Y, ifelse(y .== j, 1.0, 0.0) )
end
julia> Y
6x4 Array{Float64,2}:
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
One alternative approach is to use broadcast:
julia> broadcast(.==, y, (1:4)')
6x4 Array{Float64,2}:
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
(.== broadcasts automatically, so if you just wanted a BitArray you could write y .== (1:4)'.)
This avoids the explicit for loop and also the use of hcat to build the array. However, depending on the size of the array you're looking to create, it might be most efficient to allocate an array of zeros of the appropriate shape and then use indexing to add the ones to the appropriate column on each row.
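The last suggestion (preallocate an array of zeros, then write the ones by indexing) can be sketched like this; Python/NumPy is used purely for illustration, and the labels are assumed to be 1-based integers as in the question:

```python
import numpy as np

y = np.array([1, 2, 3, 4, 1, 2])   # 1-based class labels
Y = np.zeros((y.size, y.max()))    # preallocate the zeros
Y[np.arange(y.size), y - 1] = 1.0  # one write per row puts the 1 in place
```

Only one element per row is ever touched, so this avoids building any intermediate columns at all.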
Array comprehension is an idiomatic and fast way to create matrices in Julia. For the example in the question:
y = convert(Vector{Int64},vec(y)) # make sure indices are integer
Y = [j==y[i] ? 1.0 : 0.0 for i=1:length(y),j=1:length(unique(y))]
What was probably intended was:
Y = [j==y[i] ? 1.0 : 0.0 for i=1:length(y),j=1:maximum(y)]
In both cases Y is:
6x4 Array{Float64,2}:
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
In numerical analysis, a sparse matrix is a matrix in which most of
the elements are zero.
And from Julia Doc:
sparse(I,J,V,[m,n,combine])
Create a sparse matrix S of dimensions m x n such that S[I[k], J[k]] =
V[k]. The combine function is used to combine duplicates. If m and n
are not specified, they are set to max(I) and max(J) respectively. If
the combine function is not supplied, duplicates are added by default.
y = [1, 2, 3, 4, 1, 2]
rows = length(y)
clms = 4  # must be >= maximum(y)
s = sparse(1:rows, y, ones(rows), rows, clms)
full(s)  # =>
6x4 Array{Float64,2}:
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0
0.0 0.0 0.0 1.0
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
I need help with this simple iteration problem. I am trying to divide...
number : Float := 55.0;
loop
number := number / 3.0;
Put (number);
exit when number <= 0.0;
end loop;
I want it to exit at the first 0.0, but I keep getting an infinite loop of
18.3 6.1 2.0 0.7 0.2 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
The first printed 0.0 is not actually zero; it is a number that is still fairly large in floating-point terms, merely rounded to one decimal place for display.
No matter how many times you divide by 3, if your arithmetic were exact you would never actually reach zero this way, so you would have written an infinite loop.
Now, arithmetic in Ada isn't really that accurate, but for this specific example it apparently rounds in such a way as to give the same effect. Or, as Simon says, you didn't wait long enough. It's not reliable; chances are that Long_Float or
type Big_Float is digits 18;
package Big_Float_IO is new Float_IO(Num => Big_Float);
use Big_Float_IO;
number : Big_Float := 55.0;
might give different results.
EDIT: On any system employing IEEE 754 floating-point arithmetic with a standard-compliant divide instruction, the loop will eventually exit, unless you have selected a specific optional rounding mode. BUT that still doesn't make it a good way to program!
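You can watch this underflow happen with IEEE 754 doubles in any language; here is a quick sketch in Python (whose floats are IEEE binary64):

```python
x = 55.0
steps = 0
while x > 0.0:   # terminates: repeated division underflows to exactly 0.0
    x /= 3.0
    steps += 1
```

The value shrinks through the subnormal range and finally rounds to exactly 0.0 after roughly 680 divisions, which is finite but far more iterations than the handful of lines you might wait to see printed.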
If your goal is exactly as you described, then re-state it more formally: exit at the first number that displays as 0.0 when rounded to one decimal place.
That means any number < 0.05.
So re-write the loop termination as
exit when number < 0.05;
and be happy.
Otherwise, what is it you are REALLY trying to do?
The code you've posted wouldn't compile; there's no standard operation & which takes a String on the left and a Float on the right, and returns a String.
That said, I think you may not have waited long enough: for me, it stops after 99 lines,
...
number= 8.40779E-45
number= 2.80260E-45
number= 1.40130E-45
number= 0.00000E+00
I wonder why your comparison is <=? How could number become negative?
I have just started reading about neural networks and I have a basic question about "initializing" a Hopfield network: I am unable to understand that notion of initialization. That is, do we input some random numbers? Or do we input a well-defined pattern that makes the neurons settle the first time, assuming all neurons start at state zero, with the other stable states being either 1 or -1 after the input?
Consider the neural network below, which I have taken from HeatonResearch.
I would be glad if someone could clear this up for me.
When initialising neural networks, including recurrent Hopfield networks, it is common to initialise with random weights, as that in general gives good learning times over multiple trials and, over an ensemble of runs, avoids local minima. It is usually not a good idea to start from the same weights on every run, as you will likely encounter the same local minima. With some configurations, learning can be sped up by analysing the role of each node in the functional mapping, but that is often a later step in the analysis, after getting something working.
The purpose of a Hopfield network is to recall the data it has been shown, serving as content-addressable memory. It begins as a clean slate, with all weights set to zero. Training the network on a vector adjusts the weights to respond to it.
The output of a node in a Hopfield network depends on the state of every other node and the weight of the node's connection to each of them. States correspond to the input, with input 0 mapping to -1, and input 1 mapping to 1. So, if the network in your example had input 1010, N1 would have state 1, N2 -1, N3 1, and N4 -1.
Training the network means adding the outer product of the state vector with itself to the weight matrix, then setting the diagonal to zero. So, to train on 1010, we would add [1 -1 1 -1]ᵀ · [1 -1 1 -1] to the weight matrix.
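Written out as a small sketch (Python for illustration), the Hebbian update for the 1010 example looks like this:

```python
import numpy as np

state = np.array([1.0, -1.0, 1.0, -1.0])   # input 1010 mapped to +/-1

# Hebbian update: add the outer product of the state with itself,
# then zero the diagonal so no node connects to itself.
W = np.outer(state, state)
np.fill_diagonal(W, 0.0)

# Presenting the stored pattern now reproduces it:
recalled = np.where(W @ state >= 0.0, 1.0, -1.0)
```

After this single training step, the stored pattern is a fixed point of the update, which is exactly what "recall" means for this network.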