Does the "GetDiameter" function in JGraphT cost much internal memory? - jgrapht

Here is the problem:
Recently I tried to use JGraphT to get the diameter of a graph with 5 million vertices, but it fails with "out of memory: java heap space" even when I add -Xmx500000m. How can I solve this issue? Thanks a lot!
Here is the part of my code:
public static void main(String[] args) throws URISyntaxException, ExportException, Exception {
    Graph<Integer, DefaultEdge> subGraph = createSubGraph();
    System.out.println(GetDiameter(subGraph));
}

private static Graph<Integer, DefaultEdge> createSubGraph() throws Exception {
    Graph<Integer, DefaultEdge> g = new DefaultUndirectedGraph<>(DefaultEdge.class);
    int j;
    String edgepath = "sub_edge10000.txt";

    // First pass over the file: register every vertex
    FileReader fr = new FileReader(edgepath);
    BufferedReader bufr = new BufferedReader(fr);
    String newline = null;
    while ((newline = bufr.readLine()) != null) {
        String[] parts = newline.split(":");
        g.addVertex(Integer.parseInt(parts[0]));
    }
    bufr.close();

    // Second pass: add the edges listed after each vertex
    fr = new FileReader(edgepath);
    bufr = new BufferedReader(fr);
    while ((newline = bufr.readLine()) != null) {
        String[] parts = newline.split(":");
        int origin = Integer.parseInt(parts[0]);
        parts = parts[1].split(" ");
        for (j = 0; j < parts.length; j++) {
            int target = Integer.parseInt(parts[j]);
            g.addEdge(origin, target);
        }
    }
    bufr.close();
    return g;
}

private static double GetDiameter(Graph<Integer, DefaultEdge> subGraph) {
    GraphMeasurer<Integer, DefaultEdge> g =
        new GraphMeasurer<>(subGraph, new JohnsonShortestPaths<>(subGraph));
    return g.getDiameter();
}

If n is the number of vertices of your graph, the library internally creates an n by n matrix to store all shortest paths. With n = 5 million, even at 8 bytes per vertex pair that matrix alone would need about 2*10^14 bytes, i.e. roughly 200 TB. So, yes, the memory consumption is substantial. This is due to the fact that internally the library uses an all-pairs shortest-path algorithm such as Floyd-Warshall or Johnson's algorithm.
Since you do not have enough memory, you could instead compute the diameter by running a single-source shortest path algorithm from every vertex and keeping the maximum path weight found. This is slower, but only needs memory proportional to a single row of that matrix at a time. The following code demonstrates this, assuming an undirected graph with non-negative weights and thus using Dijkstra's algorithm.
package org.myorg.diameterdemo;

import org.jgrapht.Graph;
import org.jgrapht.alg.interfaces.ShortestPathAlgorithm;
import org.jgrapht.alg.interfaces.ShortestPathAlgorithm.SingleSourcePaths;
import org.jgrapht.alg.shortestpath.DijkstraShortestPath;
import org.jgrapht.graph.DefaultWeightedEdge;
import org.jgrapht.graph.builder.GraphTypeBuilder;
import org.jgrapht.util.SupplierUtil;

public class App {
    public static void main(String[] args) {
        Graph<Integer, DefaultWeightedEdge> graph = GraphTypeBuilder
                .undirected()
                .weighted(true)
                .allowingMultipleEdges(true)
                .allowingSelfLoops(true)
                .vertexSupplier(SupplierUtil.createIntegerSupplier())
                .edgeSupplier(SupplierUtil.createDefaultWeightedEdgeSupplier())
                .buildGraph();

        Integer a = graph.addVertex();
        Integer b = graph.addVertex();
        Integer c = graph.addVertex();
        Integer d = graph.addVertex();
        Integer e = graph.addVertex();
        Integer f = graph.addVertex();

        graph.addEdge(a, c);
        graph.addEdge(d, c);
        graph.addEdge(c, b);
        graph.addEdge(c, e);
        graph.addEdge(b, e);
        graph.addEdge(b, f);
        graph.addEdge(e, f);

        double diameter = Double.NEGATIVE_INFINITY;
        for (Integer v : graph.vertexSet()) {
            ShortestPathAlgorithm<Integer, DefaultWeightedEdge> alg = new DijkstraShortestPath<>(graph);
            SingleSourcePaths<Integer, DefaultWeightedEdge> paths = alg.getPaths(v);
            for (Integer u : graph.vertexSet()) {
                diameter = Math.max(diameter, paths.getWeight(u));
            }
        }
        System.out.println("Graph diameter = " + diameter);
    }
}
If you do have negative weights, then you need to replace the shortest path algorithm with Bellman-Ford, using new BellmanFordShortestPath<>(graph) in the above code, as sketched below.
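A minimal sketch of that swap, reusing the graph from the example above (only the measurement loop changes; note that Bellman-Ford tolerates negative edge weights but not negative cycles). For an unweighted graph like the one in your question, BFSShortestPath from the same package should be lighter still:
import org.jgrapht.alg.shortestpath.BellmanFordShortestPath;

// ... same graph construction as above ...
double diameter = Double.NEGATIVE_INFINITY;
for (Integer v : graph.vertexSet()) {
    ShortestPathAlgorithm<Integer, DefaultWeightedEdge> alg =
            new BellmanFordShortestPath<>(graph);
    SingleSourcePaths<Integer, DefaultWeightedEdge> paths = alg.getPaths(v);
    for (Integer u : graph.vertexSet()) {
        diameter = Math.max(diameter, paths.getWeight(u));
    }
}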
Additionally, one could also employ Johnson's technique: first transform the edge weights to non-negative values using Bellman-Ford, then execute calls to Dijkstra. However, this would require non-trivial changes. Take a look at the source code of class JohnsonShortestPaths in the JGraphT library.

Related

How to use custom edge implementation with EdmondsKarp max flow algorithm

I'm trying to implement and simulate a network where I can try some routing methods.
My problem is that one of my routing methods requires me to calculate MaxFlow/MinCut.
I have a custom implementation for the edges, where I added some new fields like Capacity.
Here is my implementation:
import org.jgrapht.graph.DefaultWeightedEdge;
import java.io.Serializable;

public class MyDefaultWeightedEdge extends DefaultWeightedEdge implements Serializable {
    protected int freecapacity;
    protected boolean isFeasable;

    public MyDefaultWeightedEdge() {
        this.isFeasable = true;
    }

    protected int getFreeCapacity() { return this.freecapacity; }

    protected void setFreeCapacity(int i) {
        this.freecapacity = i;
    }

    protected boolean getFeasable() { return this.isFeasable; }

    protected void setFeasable(boolean b) { this.isFeasable = b; }

    @Override
    protected Object getSource() {
        return super.getSource();
    }

    @Override
    protected Object getTarget() {
        return super.getTarget();
    }

    @Override
    protected double getWeight() {
        System.out.println("getWeight");
        StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
        StackTraceElement e = stacktrace[2]; // maybe this number needs to be corrected
        String methodName = e.getMethodName();
        if (methodName.equals("")) {
            return this.freecapacity;
        } else {
            return super.getWeight();
        }
    }

    public String toString() {
        return "(" + this.getSource() + " : " + this.getTarget() + ") "
            + "Weight " + this.getWeight() + " Capacity " + this.getFreeCapacity();
    }
}
When I try to use EdmondsKarpMFImpl, my problem is that the algorithm uses the edge weight as the capacity.
Question:
How can I use my implementation of the edge?
Question:
How can I get all of the edges which are in the MinCut/MaxFlow?
Thanks!
There are a number of different solutions.
Standard approach. If you only have one type of weight (e.g. a capacity, or a cost), you can simply use a DefaultWeightedEdge and the graph's setEdgeWeight and getEdgeWeight methods to define the edge's weight. You are free to interpret this weight in whatever way fits your application.
public static void exampleNF() {
    // Standard approach
    Graph<Integer, DefaultWeightedEdge> graph = new DefaultUndirectedWeightedGraph<>(DefaultWeightedEdge.class);
    Graphs.addAllVertices(graph, Arrays.asList(1, 2, 3, 4));
    Graphs.addEdge(graph, 1, 2, 10);
    Graphs.addEdge(graph, 2, 3, 4);
    Graphs.addEdge(graph, 2, 4, 3);
    Graphs.addEdge(graph, 1, 4, 8);
    Graphs.addEdge(graph, 4, 3, 15);
    MaximumFlowAlgorithm<Integer, DefaultWeightedEdge> mf = new EdmondsKarpMFImpl<>(graph);
    System.out.println(mf.getMaximumFlow(1, 3));
}
Use an AsWeightedGraph. This is convenient if your graph doesn't have weights, or if your edges have more than one weight (e.g. both a cost and a capacity) and you want to switch between them.
public static void exampleNF2() {
    // Make an unweighted graph weighted using an AsWeightedGraph wrapper
    Graph<Integer, DefaultEdge> graph = new DefaultUndirectedGraph<>(DefaultEdge.class);
    Graphs.addAllVertices(graph, Arrays.asList(1, 2, 3, 4));
    DefaultEdge e1 = graph.addEdge(1, 2);
    DefaultEdge e2 = graph.addEdge(2, 3);
    DefaultEdge e3 = graph.addEdge(2, 4);
    DefaultEdge e4 = graph.addEdge(1, 4);
    DefaultEdge e5 = graph.addEdge(4, 3);
    Map<DefaultEdge, Double> capacities = Map.of(e1, 10.0, e2, 4.0, e3, 3.0, e4, 8.0, e5, 15.0);
    MaximumFlowAlgorithm<Integer, DefaultEdge> mf = new EdmondsKarpMFImpl<>(new AsWeightedGraph<>(graph, capacities));
    System.out.println(mf.getMaximumFlow(1, 3));
}
Again using an AsWeightedGraph, but this time using a function as a 'pass-through' to a specific weight stored on the edges themselves:
public static void exampleNF3() {
    // Using the AsWeightedGraph as a function
    Graph<Integer, MyEdge> graph = new DefaultUndirectedGraph<>(MyEdge.class);
    Graphs.addAllVertices(graph, Arrays.asList(1, 2, 3, 4));
    graph.addEdge(1, 2, new MyEdge(10));
    graph.addEdge(2, 3, new MyEdge(4));
    graph.addEdge(2, 4, new MyEdge(3));
    graph.addEdge(1, 4, new MyEdge(8));
    graph.addEdge(4, 3, new MyEdge(15));
    MaximumFlowAlgorithm<Integer, MyEdge> mf = new EdmondsKarpMFImpl<>(new AsWeightedGraph<>(graph, e -> e.capacity, false, false));
    System.out.println(mf.getMaximumFlow(1, 3));
}

private static class MyEdge {
    private final double capacity;

    public MyEdge(double capacity) {
        this.capacity = capacity;
    }
}
We could also implement our own custom graph and override the getEdgeWeight and setEdgeWeight methods. In this example, we use the MyEdge class from the previous example.
public static void exampleNF4() {
    // Using a custom graph
    MyGraph graph = new MyGraph(MyEdge.class);
    Graphs.addAllVertices(graph, Arrays.asList(1, 2, 3, 4));
    graph.addEdge(1, 2, new MyEdge(10));
    graph.addEdge(2, 3, new MyEdge(4));
    graph.addEdge(2, 4, new MyEdge(3));
    graph.addEdge(1, 4, new MyEdge(8));
    graph.addEdge(4, 3, new MyEdge(15));
    MaximumFlowAlgorithm<Integer, MyEdge> mf = new EdmondsKarpMFImpl<>(graph);
    System.out.println(mf.getMaximumFlow(1, 3));
}

private static class MyGraph extends SimpleWeightedGraph<Integer, MyEdge> {
    public MyGraph(Class<? extends MyEdge> edgeClass) {
        super(edgeClass);
    }

    @Override
    public double getEdgeWeight(MyEdge e) {
        return e.capacity;
    }
}
There are probably more options, but this covers quite a range of different approaches already. Personally I would not implement my own graph class unless I need it for something very specific. As for your second question, about obtaining the MinCut/MaxFlow edges, see the sketch below.
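EdmondsKarpMFImpl also implements JGraphT's MinimumSTCutAlgorithm interface, so you can query the cut edges and the per-edge flow directly. A sketch, assuming a recent JGraphT release (method names may differ in older versions), reusing the graph from the standard approach:
public static void exampleMinCut() {
    Graph<Integer, DefaultWeightedEdge> graph = new DefaultUndirectedWeightedGraph<>(DefaultWeightedEdge.class);
    Graphs.addAllVertices(graph, Arrays.asList(1, 2, 3, 4));
    Graphs.addEdge(graph, 1, 2, 10);
    Graphs.addEdge(graph, 2, 3, 4);
    Graphs.addEdge(graph, 2, 4, 3);
    Graphs.addEdge(graph, 1, 4, 8);
    Graphs.addEdge(graph, 4, 3, 15);
    EdmondsKarpMFImpl<Integer, DefaultWeightedEdge> alg = new EdmondsKarpMFImpl<>(graph);
    double cutWeight = alg.calculateMinCut(1, 3);                // computes a max flow internally
    Set<DefaultWeightedEdge> cutEdges = alg.getCutEdges();       // edges crossing the minimum s-t cut
    Map<DefaultWeightedEdge, Double> flowMap = alg.getFlowMap(); // flow value carried by each edge
    System.out.println("Min cut weight: " + cutWeight);
    System.out.println("Cut edges: " + cutEdges);
    System.out.println("Flow per edge: " + flowMap);
}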

Create and populate multiple arrays with a for loop

I am learning to code. I am using Unity and C#, and I am finding some difficulties trying to create and populate multiple arrays through a for loop.
In other languages you could do something like this:
for (int j = 0; j <= 3; j++)
{
    scenes[j] = new float[2] { test[j], test2[j] };
}
But apparently I cannot do something similar in C#. Is that right?
What should I do then?
I need something that creates something like this:
scenes1 = {x1, y1}
scenes2 = {x2, y2}
and so on...
Multidimensional (or jagged) arrays may give you a solution to the problem; all of your data can go into a single array. In your case, declaring scenes as an array of arrays, e.g. float[][] scenes = new float[4][];, lets your original loop work as written, since each scenes[j] can then hold its own two-element array. I am not well versed in C# (unfortunately), but the C# documentation on multidimensional and jagged arrays covers this. Hope this helps.
Based on your answers in comments I still don't understand what exactly you need. AFAIU you have two pieces of data: scenes and heights, and you want to generate permutations of compound (scene, height) elements. I assume that you either need to:
Generate a random list of all possible permutations exactly once
Generate a long (possibly infinite) stream of random different permutations
So here is some code that might help.
First let's define some boilerplate:
public class Scene
{
    public readonly string Something;

    public Scene(string something)
    {
        Something = something;
    }

    // something else
}

public struct CompoundSceneData
{
    public readonly Scene Scene;
    public readonly float Height;

    public CompoundSceneData(Scene scene, float height)
    {
        Scene = scene;
        Height = height;
    }
}
Of course your Scene class is most probably more complicated. CompoundSceneData is a struct representing a single scene + height item.
#1 Generate a random list of all possible permutations exactly once:
// Fisher–Yates shuffle of indices 0..size-1
int[] GenerateRandomIndicesPermutation(int size)
{
    int[] permutation = Enumerable.Range(0, size).ToArray();
    Random rnd = new Random();
    for (int cur = size; cur >= 2; cur--)
    {
        int swapPos = rnd.Next(cur);
        int tmp = permutation[swapPos];
        permutation[swapPos] = permutation[cur - 1];
        permutation[cur - 1] = tmp;
    }
    return permutation;
}

List<CompoundSceneData> GenerateAllRandomPermutationsOnce(Scene[] scenes, float[] heights)
{
    int scenesCount = scenes.Length;
    int heightsCount = heights.Length;
    int totalCount = scenesCount * heightsCount;
    List<CompoundSceneData> permutations = new List<CompoundSceneData>(totalCount);
    foreach (var compoundIndex in GenerateRandomIndicesPermutation(totalCount))
    {
        int sceneIndex = compoundIndex % scenesCount;
        int heightIndex = compoundIndex / scenesCount;
        permutations.Add(new CompoundSceneData(scenes[sceneIndex], heights[heightIndex]));
    }
    return permutations;
}

void TestUsageAllOnce()
{
    Scene[] scenes = new Scene[] { new Scene("Scene #1"), new Scene("Scene #2") };
    float[] heights = new float[] { 0.1f, 0.2f, 0.3f };
    foreach (CompoundSceneData sceneData in GenerateAllRandomPermutationsOnce(scenes, heights))
    {
        // will be called exactly 2*3 = 6 times
        DrawScene(sceneData);
    }
}
There are a few key ideas there:
If we have N scenes and M heights there will be N*M pairs, and a number in the range [0, N*M-1] selects a pair. For example, the index 2*N + 5 means the 5-th scene and the 2-nd height (in 0-based indices(!)), since index % N = 5 and index / N = 2.
Thus if we want to generate a sequence of different pairs of N scenes and M heights, it is enough to generate a random permutation of the numbers [0, N*M-1] and use it as a sequence of indices.
There is a well-known Fisher–Yates shuffle algorithm to create a random permutation.
#2 Generate an infinite stream of random different permutations:
IEnumerable<CompoundSceneData> GenerateInfiniteRandomStream(Scene[] scenes, float[] heights)
{
    Random rnd = new Random();
    while (true)
    {
        int sceneIndex = rnd.Next(scenes.Length);
        int heightIndex = rnd.Next(heights.Length);
        yield return new CompoundSceneData(scenes[sceneIndex], heights[heightIndex]);
    }
}

void TestUsageInfinite()
{
    Scene[] scenes = new Scene[] { new Scene("Scene #1"), new Scene("Scene #2") };
    float[] heights = new float[] { 0.1f, 0.2f, 0.3f };
    // this is effectively an endless loop
    foreach (CompoundSceneData sceneData in GenerateInfiniteRandomStream(scenes, heights))
    {
        DrawScene(sceneData);
        // this is the only thing that will stop the loop
        if (IsEndOfGame)
            break;
    }
}

void TestUsageInfinite2()
{
    Scene[] scenes = new Scene[] { new Scene("Scene #1"), new Scene("Scene #2") };
    float[] heights = new float[] { 0.1f, 0.2f, 0.3f };
    List<CompoundSceneData> fixedSizeList = GenerateInfiniteRandomStream(scenes, heights).Take(100).ToList();
    foreach (CompoundSceneData sceneData in fixedSizeList)
    {
        // this will be called 100 times (as specified in Take)
        DrawScene(sceneData);
    }
}
The only interesting thing here is the use of the C# feature yield return, which allows creating streams of data (IEnumerable) from code that looks sequential.
Note that for solution #2 there is no guarantee that each combination (scene + height) will occur exactly once per N*M items. It just generates random combinations that have good statistical properties only in the long run. It is possible to achieve this guarantee as well, but it significantly complicates the code and the user will probably not notice anyway.

apache-flink KMeans operation on UnsortedGrouping

I have a Flink DataSet (read from a file) that contains sensor readings from many different sensors. I use Flink's groupBy() method to organize the data as an UnsortedGrouping per sensor. Next, I would like to run the KMeans algorithm on every UnsortedGrouping in my DataSet in a distributed way.
My question is how to implement this functionality efficiently using Flink.
Below is my current implementation: I have written my own groupReduce() method that applies the Flink KMeans algorithm to every UnsortedGrouping. This code works, but seems very slow and uses large amounts of memory.
I think this has to do with the amount of data reorganization I have to do. Multiple data conversions have to be performed to make the code run, because I don't know how to do it more efficiently:
UnsortedGrouping to Iterable (start of groupReduce() method)
Iterable to LinkedList (need this to use the fromCollection() method)
LinkedList to DataSet (required as input to KMeans)
resulting KMeans DataSet to LinkedList (to be able to iterate for Collector)
Surely, there must be a more efficient and performant way to implement this?
Can anybody show me how to implement this in a clean and efficient flink way?
// *************************************************************************
// VARIABLES
// *************************************************************************
static int numberClusters = 10;
static int maxIterations = 10;
static int sensorCount = 117;
static ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

// *************************************************************************
// PROGRAM
// *************************************************************************
public static void main(String[] args) throws Exception {
    final long startTime = System.currentTimeMillis();
    String fileName = "C:/tmp/data.nt";
    DataSet<String> text = env.readTextFile(fileName);

    // filter relevant DataSet from text file input
    UnsortedGrouping<Tuple2<Integer, Point>> points = text
        .filter(x -> x.contains("Value") && x.contains("valueLiteral"))
        .filter(x -> !x.contains("#string"))
        .map(x -> new Tuple2<Integer, Point>(
            Integer.parseInt(x.substring(x.indexOf("_") + 1, x.indexOf(">"))) % sensorCount,
            new Point(Double.parseDouble(x.split("\"")[1]))))
        .filter(x -> x.f0 < 10)
        .groupBy(0);

    DataSet<Tuple2<Integer, Point>> output = points.reduceGroup(new DistinctReduce());
    output.print();

    // print the execution time
    final long endTime = System.currentTimeMillis();
    System.out.println("Total execution time: " + (endTime - startTime) + "ms");
}

public static class DistinctReduce implements GroupReduceFunction<Tuple2<Integer, Point>, Tuple2<Integer, Point>> {
    private static final long serialVersionUID = 1L;

    @Override
    public void reduce(Iterable<Tuple2<Integer, Point>> in, Collector<Tuple2<Integer, Point>> out) throws Exception {
        AtomicInteger counter = new AtomicInteger(0);
        List<Point> pointsList = new LinkedList<Point>();
        for (Tuple2<Integer, Point> t : in) {
            pointsList.add(new Point(t.f1.x));
        }
        DataSet<Point> points = env.fromCollection(pointsList);
        DataSet<Centroid> centroids = points
            .distinct()
            .first(numberClusters)
            .map(x -> new Centroid(counter.incrementAndGet(), x));
        //DataSet<String> test = centroids.map(x -> String.format("Centroid %s", x)); //test.print();

        IterativeDataSet<Centroid> loop = centroids.iterate(maxIterations);
        DataSet<Centroid> newCentroids = points
            // compute closest centroid for each point
            .map(new SelectNearestCenter()).withBroadcastSet(loop, "centroids")
            // count and sum point coordinates for each centroid
            .map(new CountAppender())
            .groupBy(0)
            .reduce(new CentroidAccumulator())
            // compute new centroids from point counts and coordinate sums
            .map(new CentroidAverager());

        // feed new centroids back into next iteration
        DataSet<Centroid> finalCentroids = loop.closeWith(newCentroids);

        // assign points to final clusters
        DataSet<Tuple2<Integer, Point>> clusteredPoints = points
            .map(new SelectNearestCenter()).withBroadcastSet(finalCentroids, "centroids");

        // emit result
        System.out.println("Results from the KMeans algorithm:");
        clusteredPoints.print();

        // emit all unique strings.
        List<Tuple2<Integer, Point>> clusteredPointsList = clusteredPoints.collect();
        for (Tuple2<Integer, Point> t : clusteredPointsList) {
            out.collect(t);
        }
    }
}
You have to group the data points and the centroids first. Then you iterate over the centroids and co-group them with the data points. Each point in a group is assigned to the closest centroid. Then you group on the initial group index and the centroid index to reduce all points assigned to the same centroid. That is the result of one iteration.
The code could look the following way:
DataSet<Tuple2<Integer, Point>> groupedPoints = ...
DataSet<Tuple2<Integer, Centroid>> groupCentroids = ...

IterativeDataSet<Tuple2<Integer, Centroid>> groupLoop = groupCentroids.iterate(10);

DataSet<Tuple2<Integer, Centroid>> newGroupCentroids = groupLoop
    .coGroup(groupedPoints).where(0).equalTo(0).with(new CoGroupFunction<Tuple2<Integer, Centroid>, Tuple2<Integer, Point>, Tuple4<Integer, Integer, Point, Integer>>() {
        @Override
        public void coGroup(Iterable<Tuple2<Integer, Centroid>> centroidsIterable, Iterable<Tuple2<Integer, Point>> points, Collector<Tuple4<Integer, Integer, Point, Integer>> out) throws Exception {
            // cache centroids
            List<Tuple2<Integer, Centroid>> centroids = new ArrayList<>();
            Iterator<Tuple2<Integer, Centroid>> centroidIterator = centroidsIterable.iterator();

            for (Tuple2<Integer, Point> pointTuple : points) {
                double minDistance = Double.MAX_VALUE;
                int minIndex = -1;
                Point point = pointTuple.f1;

                // drain the iterator into the cache on the first pass
                while (centroidIterator.hasNext()) {
                    centroids.add(centroidIterator.next());
                }

                for (Tuple2<Integer, Centroid> centroidTuple : centroids) {
                    Centroid centroid = centroidTuple.f1;
                    double distance = point.euclideanDistance(centroid);
                    if (distance < minDistance) {
                        minDistance = distance;
                        minIndex = centroid.id;
                    }
                }

                out.collect(Tuple4.of(minIndex, pointTuple.f0, point, 1));
            }
        }
    })
    .groupBy(0, 1).reduce(new ReduceFunction<Tuple4<Integer, Integer, Point, Integer>>() {
        @Override
        public Tuple4<Integer, Integer, Point, Integer> reduce(Tuple4<Integer, Integer, Point, Integer> value1, Tuple4<Integer, Integer, Point, Integer> value2) throws Exception {
            return Tuple4.of(value1.f0, value1.f1, value1.f2.add(value2.f2), value1.f3 + value2.f3);
        }
    })
    .map(new MapFunction<Tuple4<Integer, Integer, Point, Integer>, Tuple2<Integer, Centroid>>() {
        @Override
        public Tuple2<Integer, Centroid> map(Tuple4<Integer, Integer, Point, Integer> value) throws Exception {
            return Tuple2.of(value.f1, new Centroid(value.f0, value.f2.div(value.f3)));
        }
    });

DataSet<Tuple2<Integer, Centroid>> result = groupLoop.closeWith(newGroupCentroids);

NeuroPh Error won't diminish

I am creating a network for water level forecasting. I am using NeuroPh 2.91 for Windows. I set the network to 3 inputs, since it accepts 3 inputs, namely water level, rainfall, and inflow. I am using a multi-layer perceptron with tanh as the transfer function and backpropagation as the learning rule, with 9 hidden neurons.
I am always having this output:
Starting neural network training...
Training network try using data set adminshet
Training error: null
and the total network error according to the graph is 20,000+.
What should I do? I am really new to ANNs and Neuroph.
I've had the same issue here, with a similar setup. It works for me when I strongly limit the max iterations, e.g. to 10.
This makes me think that there is a bug in NeurophStudio.
Short tip, which worked for me:
Do it yourself! Open Eclipse, add a project, add the Neuroph JARs, and build your network. It's hard, but this works exactly as expected. You have to dump your own results into a CSV file and display them with Excel. But ANN handling doesn't work just by "clicking on a GUI".
package de.sauer.dispe;

import org.neuroph.core.Layer;
import org.neuroph.core.NeuralNetwork;
import org.neuroph.core.Neuron;
import org.neuroph.core.data.DataSet;
import org.neuroph.core.transfer.Linear;
import org.neuroph.core.transfer.Tanh;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.nnet.learning.BackPropagation;

public class DirSpeCntrl {
    private static final int MAX_ITER = 2000;
    private static final double MAX_ERROR = 0.005;
    private static final double LEARNING_RATE = 0.1;

    public static void main(String[] args) {
        System.out.println("Create ANN");
        NeuralNetwork<BackPropagation> nn = new MultiLayerPerceptron(3, 15, 15, 1);

        // Set ALL neurons to the Tanh transfer function (important if you have negative values)
        Layer[] layers = nn.getLayers();
        for (Layer curLayer : layers) {
            for (Neuron curNeuron : curLayer.getNeurons()) {
                curNeuron.setTransferFunction(new Tanh());
            }
        }

        // The output layer (index 3) gets a linear transfer function
        for (Neuron curNeuron : layers[3].getNeurons()) {
            curNeuron.setTransferFunction(new Linear());
        }

        nn.randomizeWeights();

        System.out.println("Load Sampledata...");
        DataSet ds = DataSet.createFromFile(
            "C:\\Users\\meist_000\\Documents\\Thesis\\vanilla_eng.csv",
            3, 1, ";");
        System.out.println("done: " + ds.getRows().size() + ". Learn...");

        // Set up the learning rule
        BackPropagation lr = new BackPropagation();
        lr.setLearningRate(LEARNING_RATE);
        lr.setMaxIterations(MAX_ITER);
        lr.setTrainingSet(ds);
        lr.setNeuralNetwork(nn);
        nn.setLearningRule(lr);

        // nn.learn(ds); // faster bulk operation...
        // Slower single-epoch operation with logging:
        for (int i = 0; i < MAX_ITER; i++) {
            lr.doLearningEpoch(ds);
            double curError = lr.getTotalNetworkError();
            System.out.println(curError);
            if (curError < MAX_ERROR) {
                System.out.println("Stopped on " + i);
                break;
            }
        }

        // Testing the network
        nn.setInput(new double[] {0.080484492, -0.138512128, -0.140826873});
        nn.calculate();
        double[] prediction = nn.getOutput();
        System.out.println("Pred: " + prediction[0]);
    }
}

Saving values from a (Float) ArrayList into a Bundle

I'm writing a game using SurfaceView and have a question relating to saving data into a Bundle.
Initially, I had an ArrayList which stored the Y co-ordinates (in the form of Integers) of sprites that will move only up and down. It was declared as:
static ArrayList<Integer> ycoordinates = new ArrayList<Integer>();
I saved them to a Bundle using the following:
myBundle.putIntegerArrayList("myycoordinates", ycoordinates);
And restored them using this:
ycoordinates.addAll(savedState.getIntegerArrayList("myycoordinates"));
This all worked perfectly. However, I've had to change the whole coordinate system so it's based on delta time, to allow my sprites to move at a uniform speed across different screens. This is, again, working perfectly.
However, as a result of this change, I now have to store these values as floats rather than integers.
So, I am declaring as:
static ArrayList<Float> ycoordinates = new ArrayList<Float>();
So that's the background. Now my question is: how do I store and restore values from a Float ArrayList? There doesn't seem to be a "putFloatArrayList" or "getFloatArrayList".
(I've used an Arraylist rather than an Array as the number of sprites needs to be dynamic).
Any help would be appreciated.
Many thanks
I've written a couple of simple methods to convert between List<Float> and float[]. You can then use the Bundle putFloatArray() and getFloatArray() methods on the float[].
import java.util.ArrayList;
import java.util.List;

public class Test {
    public static void main(String[] args) {
        List<Float> in = new ArrayList<Float>();
        in.add(3.0f);
        in.add(1f);
        in.add((float) Math.PI);
        List<Float> out = toList(toArray(in));
        System.out.println(out);
    }

    public static float[] toArray(List<Float> in) {
        float[] result = new float[in.size()];
        for (int i = 0; i < result.length; i++) {
            result[i] = in.get(i);
        }
        return result;
    }

    public static List<Float> toList(float[] in) {
        List<Float> result = new ArrayList<Float>(in.length);
        for (float f : in) {
            result.add(f);
        }
        return result;
    }
}
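Wired into the save/restore code from the question, the usage might look like this (a sketch; the surrounding methods depend on where your game writes and reads its Bundle):
// Saving: convert the List<Float> to a float[] first
myBundle.putFloatArray("myycoordinates", toArray(ycoordinates));

// Restoring: null-check, then convert back
float[] saved = savedState.getFloatArray("myycoordinates");
if (saved != null) {
    ycoordinates.addAll(toList(saved));
}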
