Neuroph error won't diminish

I am creating a network for water level forecasting. I am using Neuroph 2.91 for Windows. I set the network to 3 inputs, since it accepts three inputs: water level, rainfall, and inflow. I am using a multi-layer perceptron with tanh as the transfer function and backpropagation as the learning rule, with 9 hidden neurons.
I am always having this output:
Starting neural network training...
Training network try using data set adminshet
Training error: null
and the total network error according to the graph is 20,000+.
What should I do? I am really new to ANNs and Neuroph.

I've had the same issue here, with a similar setup. It works for me when I strongly limit Max-Iterations, e.g. to 10.
This makes me think there is a bug in Neuroph Studio.
Short tip, which worked for me:
Do it yourself! Open Eclipse, add a project, add the Neuroph jars and build your network. It's harder, but it works exactly as expected. You have to dump your own results into a CSV file and display them with Excel. But ANN handling doesn't work just by clicking around a GUI.
package de.sauer.dispe;

import org.neuroph.core.Layer;
import org.neuroph.core.NeuralNetwork;
import org.neuroph.core.Neuron;
import org.neuroph.core.data.DataSet;
import org.neuroph.core.transfer.Linear;
import org.neuroph.core.transfer.Tanh;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.nnet.learning.BackPropagation;

public class DirSpeCntrl {

    private static final int MAX_ITER = 2000;
    private static final double MAX_ERROR = 0.005;
    private static final double LEARNING_RATE = 0.1;

    public static void main(String[] args) {
        System.out.println("Create ANN");
        NeuralNetwork<BackPropagation> nn = new MultiLayerPerceptron(3, 15, 15, 1);

        // Set ALL neurons to the Tanh transfer function (important if you have negative values)
        Layer[] layers = nn.getLayers();
        for (Layer curLayer : layers) {
            for (Neuron curNeuron : curLayer.getNeurons()) {
                curNeuron.setTransferFunction(new Tanh());
            }
        }

        // Use a linear output layer (layers[3] is the output layer of the 3-15-15-1 network)
        for (Neuron curNeuron : layers[3].getNeurons()) {
            curNeuron.setTransferFunction(new Linear());
        }

        nn.randomizeWeights();

        System.out.println("Load sample data...");
        DataSet ds = DataSet.createFromFile(
                "C:\\Users\\meist_000\\Documents\\Thesis\\vanilla_eng.csv",
                3, 1, ";");
        System.out.println("done: " + ds.getRows().size() + ". Learn...");

        // Configure the learning rule
        BackPropagation lr = new BackPropagation();
        lr.setLearningRate(LEARNING_RATE);
        lr.setMaxIterations(MAX_ITER);
        lr.setTrainingSet(ds);
        lr.setNeuralNetwork(nn);
        nn.setLearningRule(lr);

        // nn.learn(ds); // faster bulk operation
        // Slower epoch-by-epoch loop with logging:
        for (int i = 0; i < MAX_ITER; i++) {
            lr.doLearningEpoch(ds);
            double curError = lr.getTotalNetworkError();
            System.out.println(curError);
            if (curError < MAX_ERROR) {
                System.out.println("Stopped on " + i);
                break;
            }
        }

        // Test the network
        nn.setInput(new double[] {0.080484492, -0.138512128, -0.140826873});
        nn.calculate();
        double[] prediction = nn.getOutput();
        System.out.println("Pred: " + prediction[0]);
    }
}
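One more thing worth checking: with Tanh neurons, the inputs and target values should be scaled into roughly [-1, 1] before training, otherwise the total network error stays enormous (a 20,000+ error looks like unscaled water-level values) no matter how many iterations you run. A minimal sketch of min-max scaling, where the column ranges are placeholders you would compute from your own CSV, not values taken from the question:

public final class Scaling {

    // Scales a raw value from [min, max] into [-1, 1].
    public static double scale(double value, double min, double max) {
        return 2.0 * (value - min) / (max - min) - 1.0;
    }

    // Reverses the scaling so predictions come back in real-world units.
    public static double unscale(double scaled, double min, double max) {
        return (scaled + 1.0) / 2.0 * (max - min) + min;
    }

    public static void main(String[] args) {
        double rawWaterLevel = 12.7;   // placeholder sample value
        double min = 0.0, max = 30.0;  // placeholder column range
        double x = scale(rawWaterLevel, min, max);
        System.out.println(x + " -> " + unscale(x, min, max));
    }
}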

Related

Is there a way to schedule jobs to specific processor in Apache Flink?

I am a new user of Apache Flink and I am currently aiming to test a scheduling algorithm on a heterogeneous processing system. Hence, which processor each job is deployed to becomes quite important. However, I could not find how to specify the processor ID that I deploy my jobs to, nor could I find a way to make the processors report their availability.
I would sincerely appreciate it if you could kindly give me some hints on how to do this. Hope you enjoy your day :)
I ran into a similar problem when scheduling and monitoring Flink subtasks on specific CPU cores of the machines. I used LinuxJNAAffinity (https://github.com/OpenHFT/Java-Thread-Affinity). Maybe you can base your solution on mine. Here is one of my UDFs.
import java.util.BitSet;
import java.util.List;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.configuration.Configuration;
import org.sense.flink.pojo.Point;
import org.sense.flink.pojo.ValenciaItem;
import org.sense.flink.util.CRSCoordinateTransformer;
import org.sense.flink.util.CpuGauge;
import org.sense.flink.util.SimpleGeographicalPolygons;

import net.openhft.affinity.impl.LinuxJNAAffinity;

public class ValenciaItemDistrictMap extends RichMapFunction<ValenciaItem, ValenciaItem> {

    private static final long serialVersionUID = 624354384779615610L;

    private SimpleGeographicalPolygons sgp;
    private transient CpuGauge cpuGauge;
    private BitSet affinity;
    private boolean pinningPolicy;

    public ValenciaItemDistrictMap() {
        this(false);
    }

    public ValenciaItemDistrictMap(boolean pinningPolicy) {
        this.pinningPolicy = pinningPolicy;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        this.sgp = new SimpleGeographicalPolygons();
        this.cpuGauge = new CpuGauge();
        getRuntimeContext().getMetricGroup().gauge("cpu", cpuGauge);

        if (this.pinningPolicy) {
            // list the CPU cores available
            int nbits = Runtime.getRuntime().availableProcessors();
            // pin the operator's thread to a specific CPU core
            this.affinity = new BitSet(nbits);
            affinity.set((int) Thread.currentThread().getId() % nbits);
            LinuxJNAAffinity.INSTANCE.setAffinity(affinity);
        }
    }

    @Override
    public ValenciaItem map(ValenciaItem value) throws Exception {
        // update the CPU core currently in use
        this.cpuGauge.updateValue(LinuxJNAAffinity.INSTANCE.getCpu());
        System.err.println(ValenciaItemDistrictMap.class.getSimpleName() + " thread[" + Thread.currentThread().getId()
                + "] core[" + this.cpuGauge.getValue() + "]");

        List<Point> coordinates = value.getCoordinates();
        boolean flag = true;
        int i = 0;
        // look for the first coordinate that falls inside a known district
        while (flag && i < coordinates.size()) {
            Tuple3<Long, Long, String> adminLevel = sgp.getAdminLevel(coordinates.get(i));
            if (adminLevel.f0 != null && adminLevel.f1 != null) {
                value.setId(adminLevel.f0);
                value.setAdminLevel(adminLevel.f1);
                value.setDistrict(adminLevel.f2);
                flag = false;
            } else {
                i++;
            }
        }
        if (flag) {
            // if we did not find a district for the given coordinates we assume district 16
            value.clearCoordinates();
            value.addCoordinates(
                    new Point(724328.279007, 4374887.874634, CRSCoordinateTransformer.DEFAULT_CRS_EPSG_25830));
            value.setId(16L);
            value.setAdminLevel(9L);
            value.setDistrict("Benicalap");
        }
        return value;
    }
}
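For reference, the UDF above is wired into a job like any other RichMapFunction; a rough sketch, where ValenciaItemSource is a placeholder for however the original pipeline produces ValenciaItem events:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.sense.flink.pojo.ValenciaItem;

public class DistrictMapJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // ValenciaItemSource is a placeholder source emitting ValenciaItem events.
        DataStream<ValenciaItem> items = env.addSource(new ValenciaItemSource());

        items.map(new ValenciaItemDistrictMap(true)) // true = enable the pinning policy in open()
             .print();

        env.execute("district-map-with-cpu-pinning");
    }
}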

Integration testing flink job

I've written a small Flink application. It has some input and enriches it with data from an external source. It's a RichAsyncFunction, and within the open method I construct an HTTP client to be used for the enrichment.
Now I want to write an integration test for my job. But since the HTTP client is created within the open method, I have no way to provide a mock for it in my integration test. I've tried refactoring so the client is passed in through the constructor, but then I always get serialization errors.
This is the example I'm working from:
https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/stream/operators/asyncio.html
Thanks in advance :)
This question was posted over a year ago, but I'll post the answer in case anyone stumbles upon this in the future.
The serialization exception you are seeing is likely the following:
Exception encountered when invoking run on a nested suite. *** ABORTED *** (610 milliseconds)
java.lang.NullPointerException:
at java.util.Objects.requireNonNull(Objects.java:203)
at org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.<init>(StreamElementSerializer.java:64)
at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.setup(AsyncWaitOperator.java:136)
at org.apache.flink.streaming.api.operators.SimpleOperatorFactory.createStreamOperator(SimpleOperatorFactory.java:77)
at org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:70)
at org.apache.flink.streaming.util.AbstractStreamOperatorTestHarness.setup(AbstractStreamOperatorTestHarness.java:366)
at org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness.setup(OneInputStreamOperatorTestHarness.java:165)
...
The reason is that your test operator needs to know how to deserialize the DataStream input type. The only way to provide this is by supplying the serializer directly while initializing the test harness and then passing it to the setup() method call.
So to test the example from the Flink docs you linked, you can do something like this (my implementation is in Scala, but you can adapt it to Java as well):
import org.apache.flink.api.common.ExecutionConfig
import org.apache.flink.api.java.typeutils.TypeExtractor
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.datastream.AsyncDataStream.OutputMode
import org.apache.flink.streaming.api.operators.async.AsyncWaitOperator
import org.apache.flink.streaming.runtime.tasks.{StreamTaskActionExecutor, TestProcessingTimeService}
import org.apache.flink.streaming.runtime.tasks.mailbox.{MailboxExecutorImpl, TaskMailboxImpl}
import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness
import org.scalatest.{BeforeAndAfter, FunSuite, Matchers}

/**
 * This test case is written using Flink 1.11+.
 * Older versions likely have a simpler constructor definition for [[AsyncWaitOperator]],
 * so you might have to remove the last two arguments (processingTimeService and mailboxExecutor).
 */
class AsyncDatabaseRequestSuite extends FunSuite with BeforeAndAfter with Matchers {

  var testHarness: OneInputStreamOperatorTestHarness[String, (String, String)] = _

  val TIMEOUT = 1000
  val CAPACITY = 1000
  val MAILBOX_PRIORITY = 0

  def createTestHarness: Unit = {
    val operator = new AsyncWaitOperator[String, (String, String)](
      new AsyncDatabaseRequest {
        override def open(configuration: Configuration): Unit = {
          client = new MockDatabaseClient(host, post, credentials) // put your mock DatabaseClient object here
        }
      },
      TIMEOUT,
      CAPACITY,
      OutputMode.UNORDERED,
      new TestProcessingTimeService,
      new MailboxExecutorImpl(
        new TaskMailboxImpl,
        MAILBOX_PRIORITY,
        StreamTaskActionExecutor.IMMEDIATE
      )
    )

    // supply the TypeSerializer for the "input" type of the operator
    testHarness = new OneInputStreamOperatorTestHarness[String, (String, String)](
      operator,
      TypeExtractor.getForClass(classOf[String]).createSerializer(new ExecutionConfig)
    )

    // supply the TypeSerializer for the "output" type of the operator to the setup() call
    testHarness.setup(
      TypeExtractor.getForClass(classOf[(String, String)]).createSerializer(new ExecutionConfig)
    )

    testHarness.open()
  }

  before {
    createTestHarness
  }

  after {
    testHarness.close()
  }

  test("Your test case goes here") {
    // fill in your test case here
  }
}
Here is the solution in Java
import static org.mockito.Mockito.mock;

import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.AsyncDataStream.OutputMode;
import org.apache.flink.streaming.api.operators.async.AsyncWaitOperator;
import org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor;
import org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService;
import org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl;
import org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl;
import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockito.InjectMocks;
import org.mockito.MockitoAnnotations;

// ClassUnderTest, Driver, IN and OUT are your own types.
class TestingClass {

    @InjectMocks
    ClassUnderTest cut;

    private static OneInputStreamOperatorTestHarness<IN, OUT> testHarness; // replace IN, OUT with your asyncFunction's types
    private static long TIMEOUT = 1000;
    private static int CAPACITY = 1000;
    private static int MAILBOX_PRIORITY = 0;
    private long UNUSED_TIME = 0L;
    Driver driverRef;

    public void createTestHarness() throws Exception {
        cut = new ClassUnderTest() {
            @Override
            public void open(Configuration parameters) throws Exception {
                driver = mock(Driver.class); // mock your driver (external data source) here
                driverRef = driver;          // keep an external reference to the driver for assertions in tests
            }
        };

        MailboxExecutorImpl mailboxExecutorImpl = new MailboxExecutorImpl(
                new TaskMailboxImpl(), MAILBOX_PRIORITY, StreamTaskActionExecutor.IMMEDIATE
        );

        AsyncWaitOperator operator = new AsyncWaitOperator<>(
                cut, // the async function under test
                TIMEOUT,
                CAPACITY,
                OutputMode.ORDERED,
                new TestProcessingTimeService(),
                mailboxExecutorImpl
        );

        // supply the TypeSerializer for the "input" type of the operator
        testHarness = new OneInputStreamOperatorTestHarness<IN, OUT>(
                operator,
                TypeExtractor.getForClass(IN.class).createSerializer(new ExecutionConfig())
        );

        // supply the TypeSerializer for the "output" type of the operator to the setup() call
        testHarness.setup(TypeExtractor.getForClass(OUT.class).createSerializer(new ExecutionConfig()));
        testHarness.open();
    }

    @BeforeEach
    void setUp() throws Exception {
        createTestHarness();
        MockitoAnnotations.openMocks(this);
    }

    @AfterEach
    void tearDown() throws Exception {
        testHarness.close();
    }

    @Test
    public void test_yourTestCase() throws Exception {
    }
}
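As a rough illustration, once the harness is open a test body could push an element through the operator and inspect the output queue. someInputValue() is a placeholder for an instance of your IN type, and this assumes the mocked driver lets the async function complete its ResultFuture immediately:

// Additional imports for this sketch:
//   java.util.Queue
//   org.apache.flink.streaming.runtime.streamrecord.StreamRecord
//   static org.junit.jupiter.api.Assertions.assertFalse

@Test
public void test_enrichesElement() throws Exception {
    // Push one input element through the operator; 1L is an arbitrary timestamp,
    // and someInputValue() is a placeholder for an instance of your IN type.
    testHarness.processElement(new StreamRecord<>(someInputValue(), 1L));

    // Emitted records accumulate in the harness output queue once the async
    // function completes its ResultFuture (immediately, with a mocked driver).
    Queue<Object> output = testHarness.getOutput();
    assertFalse(output.isEmpty());
}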

Does the function “GetDiameter” in JGraphT cost much internal memory?

Here is the problem:
Recently I wanted to use JGraphT to get the diameter of a graph with 5 million vertices, but it throws "out of memory: java heap space" even when I add -Xmx500000m. How can I solve this issue? Thanks a lot!
Here is the part of my code:
public static void main(String[] args) throws URISyntaxException, ExportException, Exception {
    Graph<Integer, DefaultEdge> subGraph = createSubGraph();
    System.out.println(GetDiameter(subGraph));
}

private static Graph<Integer, DefaultEdge> createSubGraph() throws Exception {
    Graph<Integer, DefaultEdge> g = new DefaultUndirectedGraph<>(DefaultEdge.class);
    int j;
    String edgepath = "sub_edge10000.txt";
    FileReader fr = new FileReader(edgepath);
    BufferedReader bufr = new BufferedReader(fr);
    String newline = null;

    // first pass: add all vertices
    while ((newline = bufr.readLine()) != null) {
        String[] parts = newline.split(":");
        g.addVertex(Integer.parseInt(parts[0]));
    }
    bufr.close();

    // second pass: add all edges
    fr = new FileReader(edgepath);
    bufr = new BufferedReader(fr);
    while ((newline = bufr.readLine()) != null) {
        String[] parts = newline.split(":");
        int origin = Integer.parseInt(parts[0]);
        parts = parts[1].split(" ");
        for (j = 0; j < parts.length; j++) {
            int target = Integer.parseInt(parts[j]);
            g.addEdge(origin, target);
        }
    }
    bufr.close();
    return g;
}

private static double GetDiameter(Graph<Integer, DefaultEdge> subGraph) {
    GraphMeasurer<Integer, DefaultEdge> g =
            new GraphMeasurer<>(subGraph, new JohnsonShortestPaths<>(subGraph));
    return g.getDiameter();
}
If n is the number of vertices of your graph, then the library internally creates an n by n matrix to store all shortest paths. So, yes, the memory consumption is substantial. This is due to the fact that internally the library uses an all-pairs shortest-path algorithm such as Floyd-Warshall or Johnson's algorithm.
Since you do not have enough memory, you could try to compute the diameter using a single-source shortest path algorithm. This will be slower, but will not require so much memory. The following code demonstrates this assuming an undirected graph and non-negative weights and thus using Dijkstra's algorithm.
package org.myorg.diameterdemo;

import org.jgrapht.Graph;
import org.jgrapht.alg.interfaces.ShortestPathAlgorithm;
import org.jgrapht.alg.interfaces.ShortestPathAlgorithm.SingleSourcePaths;
import org.jgrapht.alg.shortestpath.DijkstraShortestPath;
import org.jgrapht.graph.DefaultWeightedEdge;
import org.jgrapht.graph.builder.GraphTypeBuilder;
import org.jgrapht.util.SupplierUtil;

public class App {

    public static void main(String[] args) {
        Graph<Integer, DefaultWeightedEdge> graph = GraphTypeBuilder
                .undirected()
                .weighted(true)
                .allowingMultipleEdges(true)
                .allowingSelfLoops(true)
                .vertexSupplier(SupplierUtil.createIntegerSupplier())
                .edgeSupplier(SupplierUtil.createDefaultWeightedEdgeSupplier())
                .buildGraph();

        Integer a = graph.addVertex();
        Integer b = graph.addVertex();
        Integer c = graph.addVertex();
        Integer d = graph.addVertex();
        Integer e = graph.addVertex();
        Integer f = graph.addVertex();

        graph.addEdge(a, c);
        graph.addEdge(d, c);
        graph.addEdge(c, b);
        graph.addEdge(c, e);
        graph.addEdge(b, e);
        graph.addEdge(b, f);
        graph.addEdge(e, f);

        double diameter = Double.NEGATIVE_INFINITY;
        for (Integer v : graph.vertexSet()) {
            ShortestPathAlgorithm<Integer, DefaultWeightedEdge> alg =
                    new DijkstraShortestPath<Integer, DefaultWeightedEdge>(graph);
            SingleSourcePaths<Integer, DefaultWeightedEdge> paths = alg.getPaths(v);
            for (Integer u : graph.vertexSet()) {
                diameter = Math.max(diameter, paths.getWeight(u));
            }
        }

        System.out.println("Graph diameter = " + diameter);
    }
}
If you do have negative weights, then you need to replace the shortest path algorithm with Bellman-Ford using new BellmanFordShortestPath<>(graph) in the above code.
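For example, only the line that constructs the algorithm inside the loop changes; a minimal sketch, assuming there are no negative cycles:

// requires: import org.jgrapht.alg.shortestpath.BellmanFordShortestPath;
ShortestPathAlgorithm<Integer, DefaultWeightedEdge> alg =
        new BellmanFordShortestPath<>(graph);
SingleSourcePaths<Integer, DefaultWeightedEdge> paths = alg.getPaths(v);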
Additionally, one could also employ the technique by Johnson to transform the edge weights to non-negative first by using Bellman-Ford and then start executing calls to Dijkstra. However, this would require non-trivial changes. Take a look at the source code of class JohnsonShortestPaths in the JGraphT library.

The request to API call datastore_v3.Put() was too large without using datastore

com.google.apphosting.api.ApiProxy$RequestTooLargeException: The request to API call datastore_v3.Put() was too large.
public static List<Area> readAreas(URL url) {
    List<Area> areas = new ArrayList<Area>();
    try {
        BufferedReader br = new BufferedReader(new FileReader(new File(url.toURI())));
        String row;
        while ((row = br.readLine()) != null) {
            if (row.contains(SEARCHED_ROW)) {
                // get the part after "c"
                String[] coord = row.split("c");
                String startCoordM = (coord[0].trim()).split(" ")[1];
                String curvesCoord = coord[1];
                Area area = new Area();
                area.mPoint = Point.toStartPoint(Point.readPoints(startCoordM));
                area.curves = Curve.readCurves(curvesCoord);
                areas.add(area);
            }
        }
        br.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (URISyntaxException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return areas;
}
This method runs without any errors, but when I log out and log back in to the same page of my web application, the method runs again and again without problems and then this exception is thrown. I'm using Google App Engine 1.8.1 with JSF 2 and PrimeFaces 3.5. The method is invoked from a managed bean:
public MapMB() {
    eps = EPDAO.getEPList();
    populateAdvancedModel(eps);
    drawPolilines();
}

void drawPolilines() {
    List<Area> areas = Area.readAreas(getFacesContext().getClass().getResource("/map-inksc.svg"));
    for (Area area : areas) {
        List<Curve> curves = area.getCurves();
        Point endPoint = area.getmPoint();
        Polyline polyline = new Polyline();
        polyline.setStrokeWeight(1);
        polyline.setStrokeColor("#FF0000");
        polyline.setStrokeOpacity(1);
        for (Curve curve : curves) {
            polyline.getPaths().add(new LatLng(endPoint.getY(), endPoint.getX()));
            // the curve's start point is the end point of the previous curve (endPoint.getX(), endPoint.getY())
            double step = 0.01;
            for (double t = 0; t <= 1; t = t + step) {
                double x = getCoordFromCurve(endPoint.getX(), endPoint.getX() + curve.getP1().getX(),
                        endPoint.getX() + curve.getP2().getX(), endPoint.getX() + curve.getP3().getX(), t);
                double y = getCoordFromCurve(endPoint.getY(), endPoint.getY() + curve.getP1().getY(),
                        endPoint.getY() + curve.getP2().getY(), endPoint.getY() + curve.getP3().getY(), t);
                polyline.getPaths().add(new LatLng(y, x));
            }
            endPoint = new Point(endPoint.getX() + curve.getP3().getX(), endPoint.getY() + curve.getP3().getY());
        }
        advancedModel.addOverlay(polyline);
        polyline = new Polyline();
    }
}
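The helper getCoordFromCurve is not shown above; from the way it is called it appears to evaluate one coordinate of a cubic Bézier curve, roughly like this hypothetical reconstruction (an assumption, not the actual code from the project):

// Hypothetical: evaluates one coordinate of a cubic Bezier curve with
// control points p0..p3 at parameter t in [0, 1].
static double getCoordFromCurve(double p0, double p1, double p2, double p3, double t) {
    double u = 1.0 - t;
    return u * u * u * p0
            + 3 * u * u * t * p1
            + 3 * u * t * t * p2
            + t * t * t * p3;
}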
When I don't read any data (i.e. don't use readAreas() above), everything works fine. So how is reading from a file connected to this error? I don't understand.
If there is some information that I didn't include here, please just say. All of these methods run without errors, and then this exception is thrown.
See the edit
OK, so... somehow the problem is solved. How? I'm not sure. This is what I had:
a.xhtml < include b.xhtml
c.xhtml < include b.xhtml
a.xhtml and c.xhtml had the same method bFilterMethod()
JSF beans:
a, b, c all ViewScoped
b had a and c as Managed Properties
a.xhtml and c.xhtml have bFilterMethod(), which getsSome() data from the database and sets aProperty and cProperty (which are the same). I saw in the Google App Engine logs that the method getsSome() ran about 20 times, like an infinite loop, and after that the exception was thrown.
Now all beans are request scoped:
a.xhtml has aFilterMethod that getsSome() data
b.xhtml has bFilterMethod that getsSome() data
and a and b have c as a Managed Property
Hope this helps someone, but as I said, I'm not sure what the exact error is. It is obviously caused by a request to the database that is too big, even though the request returns only 3 rows (it is simply invoked too many times).
EDIT
After so many years I came back to my topic by accident. The real reason for all of this is that GAE saves the session in the datastore, and JSF ViewScoped beans are not removed from the session the way they are on a normal Java application server. So the solution is simply: don't use ViewScoped beans.

Saving values from a (Float) ArrayList into a Bundle

I'm writing a game using SurfaceView and have a question relating to saving data into a Bundle.
Initially, I had an ArrayList which stored the Y coordinates (in the form of Integers) of sprites that move only up and down. It was declared as:
static ArrayList<Integer> ycoordinates = new ArrayList<Integer>();
I saved them to a Bundle using the following:
myBundle.putIntegerArrayList("myycoordinates", ycoordinates);
And restored them using this:
ycoordinates.addAll(savedState.getIntegerArrayList("myycoordinates"));
This all worked perfectly. However, I've had to change the whole coordinates system so it's based on Delta time to allow my sprites to move at a uniform speed across different screens. This is, again, working perfectly.
However, as a result of this change, I now have to store these values as floats rather than integers.
So, I am declaring as:
static ArrayList<Float> ycoordinates = new ArrayList<Float>();
So that's the background; now my question is: how do I store and restore values from a Float ArrayList? There doesn't seem to be a putFloatArrayList or getFloatArrayList.
(I've used an ArrayList rather than an array because the number of sprites needs to be dynamic.)
Any help would be appreciated.
Many thanks
I've written a couple of simple methods to convert between List<Float> and float[]. You can then use the Bundle methods putFloatArray() and getFloatArray() on the float[].
import java.util.ArrayList;
import java.util.List;

public class Test {

    public static void main(String[] args) {
        List<Float> in = new ArrayList<Float>();
        in.add(3.0f);
        in.add(1f);
        in.add((float) Math.PI);

        List<Float> out = toList(toArray(in));
        System.out.println(out);
    }

    public static float[] toArray(List<Float> in) {
        float[] result = new float[in.size()];
        for (int i = 0; i < result.length; i++) {
            result[i] = in.get(i);
        }
        return result;
    }

    public static List<Float> toList(float[] in) {
        List<Float> result = new ArrayList<Float>(in.length);
        for (float f : in) {
            result.add(f);
        }
        return result;
    }
}
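In the game itself, saving and restoring would then look roughly like this (a sketch; it assumes the static ycoordinates list and the "myycoordinates" key from the question, with the helpers above living on a class named Test):

// Saving, e.g. in onSaveInstanceState:
myBundle.putFloatArray("myycoordinates", Test.toArray(ycoordinates));

// Restoring, e.g. when the saved state comes back:
ycoordinates.clear();
ycoordinates.addAll(Test.toList(savedState.getFloatArray("myycoordinates")));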
