How to use custom edge implementation with EdmondsKarp max flow algorithm - jgrapht

I'm trying to implement and simulate a network where I can try some routing methods.
My problem is that one of my routing methods requires me to calculate MaxFlow/MinCut.
I have a custom implementation for the edges, where I added some new fields like Capacity.
Here is my implementation:
import org.jgrapht.graph.DefaultWeightedEdge;
import java.io.Serializable;
public class MyDefaultWeightedEdge extends DefaultWeightedEdge implements Serializable {
protected int freecapacity;
protected boolean isFeasable;
public MyDefaultWeightedEdge(){
this.isFeasable = true;
}
protected int getFreeCapacity(){return this.freecapacity;}
protected void setFreeCapacity(int i)
{
this.freecapacity = i;
}
protected boolean getFeasable(){return this.isFeasable;}
protected void setFeasable(boolean b){this.isFeasable = b;}
@Override
protected Object getSource() {
return super.getSource();
}
@Override
protected Object getTarget() {
return super.getTarget();
}
@Override
protected double getWeight(){
System.out.println("getWeight");
StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
StackTraceElement e = stacktrace[2];//maybe this number needs to be corrected
String methodName = e.getMethodName();
if(methodName.equals(""))
{
return this.freecapacity;
}
else {
return super.getWeight();
}
}
@Override
public String toString() {
return "(" + this.getSource() + " : " + this.getTarget() + ") " + "Weight " + this.getWeight() + " Capacity " + this.getFreeCapacity();
}
}
When I try to use EdmondsKarpMFImpl, my problem is that the algorithm uses the edge weight as the capacity.
Question:
How can I use my implementation of the edge?
Question:
How can I get all of the edges which are in MinCut/MaxFlow ?
Thanks!

There are several different solutions.
Standard approach. If you only have one type of weight (e.g. a capacity, or a cost), you could simply use a DefaultWeightedEdge and use the graph's setEdgeWeight and getEdgeWeight methods to define the edge's weight. You are free to interpret this weight in whatever way fits your application.
public static void exampleNF(){
//Standard approach
Graph<Integer, DefaultWeightedEdge> graph = new DefaultUndirectedWeightedGraph<>(DefaultWeightedEdge.class);
Graphs.addAllVertices(graph, Arrays.asList(1,2,3,4));
Graphs.addEdge(graph, 1,2,10);
Graphs.addEdge(graph, 2,3,4);
Graphs.addEdge(graph, 2,4,3);
Graphs.addEdge(graph, 1,4,8);
Graphs.addEdge(graph, 4,3,15);
MaximumFlowAlgorithm<Integer, DefaultWeightedEdge> mf = new EdmondsKarpMFImpl<>(graph);
System.out.println(mf.getMaximumFlow(1,3));
}
Use an AsWeightedGraph. This is convenient if your graph doesn't have weights, or if your edges have more than one weight (e.g. both a cost and a capacity) and you want to switch between them.
public static void exampleNF2(){
//Make an unweighted graph weighted using an AsWeightedGraph wrapper
Graph<Integer, DefaultEdge> graph = new DefaultUndirectedGraph<>(DefaultEdge.class);
Graphs.addAllVertices(graph, Arrays.asList(1,2,3,4));
DefaultEdge e1 = graph.addEdge(1,2);
DefaultEdge e2 = graph.addEdge(2,3);
DefaultEdge e3 = graph.addEdge(2,4);
DefaultEdge e4 = graph.addEdge(1,4);
DefaultEdge e5 = graph.addEdge(4,3);
Map<DefaultEdge, Double> capacities = Map.of(e1, 10.0, e2, 4.0, e3, 3.0, e4, 8.0, e5, 15.0);
MaximumFlowAlgorithm<Integer, DefaultEdge> mf = new EdmondsKarpMFImpl<>(new AsWeightedGraph<>(graph, capacities));
System.out.println(mf.getMaximumFlow(1,3));
}
Again using an AsWeightedGraph, but this time using a function as a 'pass-through' to get a specific weight stored on the arcs themselves:
public static void exampleNF3(){
//Using the AsWeightedGraph as a function
Graph<Integer, MyEdge> graph = new DefaultUndirectedGraph<>(MyEdge.class);
Graphs.addAllVertices(graph, Arrays.asList(1,2,3,4));
graph.addEdge(1,2, new MyEdge(10));
graph.addEdge(2,3, new MyEdge(4));
graph.addEdge(2,4, new MyEdge(3));
graph.addEdge(1,4, new MyEdge(8));
graph.addEdge(4,3, new MyEdge(15));
MaximumFlowAlgorithm<Integer, MyEdge> mf = new EdmondsKarpMFImpl<>(new AsWeightedGraph<>(graph, e -> e.capacity, false, false));
System.out.println(mf.getMaximumFlow(1,3));
}
private static class MyEdge {
private final double capacity;
public MyEdge(double capacity){
this.capacity=capacity;
}
}
We could also implement our own custom graph and override the getEdgeWeight and setEdgeWeight methods. In this example, we use the MyEdge class from the previous example.
public static void exampleNF4(){
//Using a custom graph
MyGraph graph = new MyGraph(MyEdge.class);
Graphs.addAllVertices(graph, Arrays.asList(1,2,3,4));
graph.addEdge(1,2, new MyEdge(10));
graph.addEdge(2,3, new MyEdge(4));
graph.addEdge(2,4, new MyEdge(3));
graph.addEdge(1,4, new MyEdge(8));
graph.addEdge(4,3, new MyEdge(15));
MaximumFlowAlgorithm<Integer, MyEdge> mf = new EdmondsKarpMFImpl<>(graph);
System.out.println(mf.getMaximumFlow(1,3));
}
private static class MyGraph extends SimpleWeightedGraph<Integer, MyEdge>{
public MyGraph(Class<? extends MyEdge> edgeClass) {
super(edgeClass);
}
@Override
public double getEdgeWeight(MyEdge e){
return e.capacity;
}
}
There's probably more, but this covers quite a range of different approaches already. Personally I would not implement my own graph class unless I need it for something very specific.
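As for the second question: EdmondsKarpMFImpl implements not only MaximumFlowAlgorithm but, as far as I know, also jgrapht's MinimumSTCutAlgorithm interface, so a single instance can report both the flow on every edge and the edges crossing the minimum cut. A minimal sketch, reusing the weighted graph from the first example (the method name and variable names are mine):
public static void exampleMinCut(Graph<Integer, DefaultWeightedEdge> graph){
    //One EdmondsKarpMFImpl instance answers both questions
    EdmondsKarpMFImpl<Integer, DefaultWeightedEdge> ek = new EdmondsKarpMFImpl<>(graph);
    //Max flow: list the edges that carry positive flow from 1 to 3
    MaximumFlowAlgorithm.MaximumFlow<DefaultWeightedEdge> flow = ek.getMaximumFlow(1, 3);
    flow.getFlowMap().forEach((e, f) -> {
        if (f > 0)
            System.out.println(e + " carries " + f);
    });
    //Min cut: its capacity and the edges crossing it
    double cutCapacity = ek.calculateMinCut(1, 3);
    System.out.println("Min cut " + cutCapacity + ": " + ek.getCutEdges());
}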

Related

How to define an array in hadoop partitioner

I am new to Hadoop and MapReduce programming and don't know what I should do. I want to define an array of int in a Hadoop partitioner: I want to fill this array in the main function and use its contents in the partitioner. I have tried IntWritable and an array of IntWritable, but neither worked. I also tried IntArrayWritable, but that didn't work either. I would be glad if someone could help me. Thank you so much.
public static IntWritable[] h = new IntWritable[1];
public static void main(String[] args) throws Exception {
h[0] = new IntWritable(1);
}
public static class CaderPartitioner extends Partitioner <Text,IntWritable> {
@Override
public int getPartition(Text key, IntWritable value, int numReduceTasks) {
return h[0].get();
}
}
If you have a limited number of values, you can do it the following way.
Set the values on the Configuration object in the main method, like below:
Configuration conf = new Configuration();
conf.setInt("key1", value1);
conf.setInt("key2", value2);
Then implement the Configurable interface in your Partitioner class, get the Configuration object, and read the keys/values from it inside your Partitioner:
public class testPartitioner extends Partitioner<Text, IntWritable> implements Configurable{
Configuration config = null;
@Override
public int getPartition(Text arg0, IntWritable arg1, int arg2) {
//get your values based on the keys in the partitioner
int value = getConf().getInt("key1", 0); //getInt requires a default value
//do stuff on value
return 0;
}
@Override
public Configuration getConf() {
return this.config;
}
@Override
public void setConf(Configuration configuration) {
this.config = configuration;
}
}
Supporting link:
https://cornercases.wordpress.com/2011/05/06/an-example-configurable-partitioner/
Note: if you have a huge number of values in a file, it is better to find a way to read cache files from the job object in the Partitioner.
Here's a refactored version of the partitioner. The main changes are:
Removed the main(), which isn't needed; initialization should be done in the constructor
Removed static from the class and member variables
public class CaderPartitioner extends Partitioner<Text,IntWritable> {
private IntWritable[] h;
public CaderPartitioner() {
h = new IntWritable[1];
h[0] = new IntWritable(1);
}
@Override
public int getPartition(Text key, IntWritable value, int numReduceTasks) {
return h[0].get();
}
}
Notes:
h doesn't need to be a Writable, unless you have additional logic not included in the question.
It isn't clear what the h[] is for. Are you going to configure it? In that case the partitioner will probably need to implement Configurable so you can use a Configuration object to set the array up in some way.
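For completeness, a hypothetical sketch of that Configurable route (the configuration key "cader.h" is invented for illustration): the driver stores the array as a comma-separated string with conf.set("cader.h", "1,3,5"), and the partitioner parses it back in setConf().
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;
public class CaderPartitioner extends Partitioner<Text, IntWritable> implements Configurable {
    private Configuration conf;
    private int[] h;
    @Override
    public void setConf(Configuration conf) {
        this.conf = conf;
        //Parse the comma-separated list set by the driver; "1" is the fallback
        String[] parts = conf.getStrings("cader.h", "1");
        h = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            h[i] = Integer.parseInt(parts[i]);
        }
    }
    @Override
    public Configuration getConf() {
        return conf;
    }
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        //Example use of the configured array; stay within the reducer range
        return h[0] % numReduceTasks;
    }
}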

mapreduce fails with message "The request to API call datastore_v3.Put() was too large."

I am running a mapreduce job over 50 million User records.
For each user I read two other Datastore entities and then stream stats for each player to bigquery.
My first dry run (with streaming to bigquery disabled) failed with the following stacktrace.
/_ah/pipeline/handleTask
com.google.appengine.tools.cloudstorage.NonRetriableException: com.google.apphosting.api.ApiProxy$RequestTooLargeException: The request to API call datastore_v3.Put() was too large.
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:121)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:166)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:157)
at com.google.appengine.tools.pipeline.impl.backend.AppEngineBackEnd.tryFiveTimes(AppEngineBackEnd.java:196)
at com.google.appengine.tools.pipeline.impl.backend.AppEngineBackEnd.saveWithJobStateCheck(AppEngineBackEnd.java:236)
I have googled this error and the only thing I can find relates to the Mapper being too big to serialize, but our Mapper holds no data at all.
/**
* Adds stats for a player via streaming api.
*/
public class PlayerStatsMapper extends Mapper<Entity, Void, Void> {
private static Logger log = Logger.getLogger(PlayerStatsMapper.class.getName());
private static final long serialVersionUID = 1L;
private String dataset;
private String table;
private transient GbqUtils gbq;
public PlayerStatsMapper(String dataset, String table) {
gbq = Davinci.getComponent(GbqUtils.class);
this.dataset = dataset;
this.table = table;
}
private void readObject(java.io.ObjectInputStream in) throws IOException, ClassNotFoundException {
in.defaultReadObject();
log.info("IOC reinitating due to deserialization.");
gbq = Davinci.getComponent(GbqUtils.class);
}
@Override
public void beginShard() {
}
@Override
public void endShard() {
}
@Override
public void map(Entity value) {
if (!value.getKind().equals("User")) {
log.severe("Expected a User but got a " + value.getKind());
return;
}
User user = new User(1, value);
List<Map<String, Object>> rows = new LinkedList<Map<String, Object>>();
List<PlayerStats> playerStats = readPlayerStats(user.getUserId());
addRankings(user.getUserId(), playerStats);
for (PlayerStats ps : playerStats) {
rows.add(ps.asMap());
}
// if (rows.size() > 0)
// gbq.insert(dataset, table, rows);
}
// ... private methods only
}
The mapreduce job is started with this code:
MapReduceSettings settings = new MapReduceSettings().setWorkerQueueName("mrworker");
settings.setBucketName(gae.getAppName() + "-playerstats");
// @formatter:off <I, K, V, O, R>
MapReduceSpecification<Entity, Void, Void, Void, Void> spec =
MapReduceSpecification.of("Enque player stats",
new DatastoreInput("User", shardCountMappers),
new PlayerStatsMapper(dataset, "playerstats"),
Marshallers.getVoidMarshaller(),
Marshallers.getVoidMarshaller(),
NoReducer.<Void, Void, Void> create(),
NoOutput.<Void, Void> create(1));
// @formatter:on
String jobId = MapReduceJob.start(spec, settings);
Well, I solved this by going back to appengine-mapreduce-0.2.jar, which was the one we had used before. The one used above was appengine-mapreduce-0.5.jar, which actually turned out not to work for us.
After going back to 0.2, the console /_ah/pipeline/list started to work again as well!
Has anyone else encountered a similar problem with 0.5?

Protected? getSource and getTarget methods on JGraphT DefaultEdge class

The methods getSource() and getTarget() of org.jgrapht.graph.DefaultEdge are protected.
How should I access source and target vertices of each of the edges returned by the edgeSet() of org.jgrapht.graph.SimpleGraph ?
The code below shows what is happening.
import java.util.Set;
import org.jgrapht.UndirectedGraph;
import org.jgrapht.graph.DefaultEdge;
import org.jgrapht.graph.SimpleGraph;
public class TestEdges
{
public static void main(String [] args)
{
UndirectedGraph<String, DefaultEdge> g =
new SimpleGraph<String, DefaultEdge>(DefaultEdge.class);
String A = "A";
String B = "B";
String C = "C";
// add the vertices
g.addVertex(A);
g.addVertex(B);
g.addVertex(C);
g.addEdge(A, B);
g.addEdge(B, C);
g.addEdge(A, C);
Set<DefaultEdge> edges = g.edgeSet();
for(DefaultEdge edge : edges) {
String v1 = edge.getSource(); // Error getSource() is protected method
String v2 = edge.getTarget(); // Error getTarget() is protected method
}
}
}
The "correct" method to access edges source and target, according to JGraphT mailing list is to use the method getEdgeSource(E) and getEdgeTarget(E) from the interface Interface Graph<V,E> of org.jgrapht
the modification of the code is then
for(DefaultEdge edge : edges) {
String v1 = g.getEdgeSource(edge);
String v2 = g.getEdgeTarget(edge);
}
I was having a similar issue when trying to extract the values of the edges, and although it's not the OP's case, this might be helpful for anyone else facing the issue.
When I instantiated my graph and passed it an edge class:
DirectedGraph graph = new SimpleDirectedGraph(DefaultEdge.class);
NetBeans gave me the option of which DefaultEdge class to import, and I chose the wrong one: I used the org.jgraph library instead of org.jgrapht.
If you are using the DefaultEdge class, make sure you are using the one from jgrapht:
import org.jgrapht.graph.DefaultEdge;

Sending JPA data from server to client?

I'm using the Eclipse Juno IDE.
I have a client-server application. On the server side I have an entity (Travels)
and another class that handles the JPA queries. I'm receiving the data from the database,
but when I try to send it as a Vector to the client, I get an exception on the
client side that says "Can't cast pack.db.Travels to java.util.Vector".
Here is my code:
Entity:
package pack.db;
import java.io.Serializable;
import java.sql.Date;
import java.sql.Time;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
@Entity
public class Travels implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(strategy=GenerationType.SEQUENCE)
@Column(name="id")
private int id;
#Column(name="taxi_number")
private String taxiNumber;
#Column(name="travel_date")
private Date travelDate;
#Column(name="travel_time")
private Time travelTime;
#Column(name="cost")
private Double travelCost;
public Travels() {
super();
}
public void setNumber(String number)
{
this.taxiNumber = number;
}
public void setDate(Date date)
{
this.travelDate = date;
}
public void setTime(Time time)
{
this.travelTime = time;
}
public void setCost(Double cost)
{
this.travelCost = cost;
}
}
Query class:
public Vector retrieveAllTravelsData(Date[] travelDate, Time[] travelTime) {
List<Object[]> allTravels = (List<Object[]>)em.createQuery("SELECT t FROM Travels t WHERE t.travelDate between ?1 and ?2 and " +
"t.travelTime between ?3 and ?4")
.setParameter(1, travelDate[0])
.setParameter(2, travelDate[1])
.setParameter(3, travelTime[0])
.setParameter(4, travelTime[1]).getResultList();
return (Vector) allTravels;
}
So what I want to do is send "allTravels" as a Vector to the client side, because
I need to populate a JTable on the client side. I tried to cast the data returned
from the query to Object[] (because the JTable constructor needs Object[][] for the rows) and send it, but I still get the exception on the client side that says
"Cannot cast pack.db.Travel to java.util.Vector". I don't think I need to add
the Travels entity on the client side, so how can I send the data to the client?
To be more specific, I have this code with a JDBC implementation:
public Vector retrieveAllTravelsData(Date[] travelDate, Time[] travelTime) {
Vector rows_data = new Vector();
String sql = "SELECT * FROM taxis.travels " + " WHERE travel_date BETWEEN ? AND ? AND travel_time BETWEEN ? AND ?";
try {
statement = (PreparedStatement) connection.prepareStatement(sql);
statement.setDate(1, travelDate[0]);
statement.setDate(2, travelDate[1]);
statement.setTime(3, travelTime[0]);
statement.setTime(4, travelTime[1]);
rs = statement.executeQuery();
ResultSetMetaData meta = rs.getMetaData();
int cols_count = meta.getColumnCount();
while (rs.next()) {
Vector record = new Vector();
for (int i = 0; i < cols_count; i++) {
record.add(rs.getString(i+1));
}
rows_data.addElement(record);
}
} catch (SQLException e) {
while (e != null) {
e.printStackTrace();
e = e.getNextException();
}
}
return rows_data;
Here I can get the data from each column, save it as a record, and then put it in the Vector. So how can that be implemented with JPA? Is it possible?
Casting an object to another class doesn't magically change the type of the object. It only allows referencing it as a more concrete class. So casting a List to Vector only works if the list is indeed a Vector.
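For example, the same failure reproduced in two lines (with ArrayList standing in for whatever concrete List your JPA provider returns):
List<String> list = new ArrayList<>(); //the concrete class is ArrayList
Vector<String> v = (Vector<String>) list; //compiles, but throws ClassCastException at runtime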
getResultList() returns a List. That's what the javadoc says. The concrete class returned depends on the JPA provider, but I'm pretty sure none of them returns a Vector, since Vector is a class that has been obsolete since Java 1.2.
Moreover, this particular query doesn't return Object[], but instances of Travels (which should be named Travel, BTW).
So the method should be:
public List<Travel> retrieveAllTravelsData(Date[] travelDate, Time[] travelTime) {
List<Travel> allTravels = (List<Travel>) em.createQuery("SELECT t FROM Travel t WHERE t.travelDate between ?1 and ?2 and " +
"t.travelTime between ?3 and ?4")
.setParameter(1, travelDate[0])
.setParameter(2, travelDate[1])
.setParameter(3, travelTime[0])
.setParameter(4, travelTime[1]).getResultList();
return allTravels;
}
The server shouldn't care that the client-side needs a Vector to satisfy an old Swing class. If you really need a Vector at client-side, then create one from the returned list:
Vector<Travel> travelsAsVector = new Vector<>(travelsAsList);
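And since the Vector is only wanted to feed a JTable, you could skip Vector entirely and build a table model from the list on the client. A sketch, assuming the Travel entity is given matching getters (the entity above only declares setters, so getNumber(), getDate(), getTime() and getCost() are assumptions):
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;
...
DefaultTableModel model = new DefaultTableModel(new Object[]{"Taxi", "Date", "Time", "Cost"}, 0);
for (Travel t : travelsAsList) {
    //Getter names are assumed; add them to the entity first
    model.addRow(new Object[]{t.getNumber(), t.getDate(), t.getTime(), t.getCost()});
}
JTable table = new JTable(model);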
OK, I solved it like this:
public Vector retrieveAllTravelsData(Date[] travelDate, Time[] travelTime) {
javax.persistence.Query q = em.createNativeQuery("SELECT * FROM Travels WHERE travel_date BETWEEN ?1 AND ?2 AND travel_time BETWEEN ?3 AND ?4")
.setParameter(1, travelDate[0])
.setParameter(2, travelDate[1])
.setParameter(3, travelTime[0])
.setParameter(4, travelTime[1]);
List<Object[]> result = (List<Object[]>)q.getResultList();
Vector rows = new Vector();
for (int i = 0 ; i < result.size(); i++)
{
Vector rec = new Vector();
for (int j = 0; j < result.get(i).length; j++)
{
rec.add(result.get(i)[j].toString()); // get the specific column value and add it to the record
}
rows.addElement(rec);
}
return rows;
}

Using Sencha GXT 3, generate a line chart populated with a dynamic number of line series fields?

Using Sencha GXT 3.0 is it possible to generate a line chart and populate it with a dynamic number of line series fields, and if so, what is the recommended method?
I know multiple series fields can be added to a chart, but the line chart examples (and the other chart examples for that matter) make use of an interface which extends PropertyAccess<?> and the interface specifies a static number of expected fields (e.g. data1(), data2(), data3(), etc.). If the interface is to be used to specify the fields to add to the chart, how could you account for a chart which may require n number of fields (i.e. n number of line series on a given chart).
Example provided on Sencha's site:
http://www.sencha.com/examples/#ExamplePlace:linechart
I ran into the same issue. It would be a much nicer design if each series had a store instead of having one store per chart.
I had one long list of metric values in metricDataStore. Each metric value has a description. I wanted all the metric values with the same description displayed on one (and only one) series. I had my value providers for each series return null for both the x and y axis if the value wasn't supposed to be in the series.
This seems like a hack to me but it works for my usage:
myChart = new Chart<MetricData>();
myChart.setStore(metricDataStore);
// ...
for (MetricInfo info : metricInfoData) {
LineSeries<MetricData> series = new LineSeries<MetricData>();
series.setChart(myChart);
series.setSmooth(false);
series.setShownInLegend(true);
series.setHighlighting(true);
series.setYAxisPosition(Chart.Position.LEFT);
series.setYField(new MetricValueProvider(info.getName()));
series.setXAxisPosition(Chart.Position.BOTTOM);
series.setXField(new MetricTimeProvider(info.getName()));
myChart.addSeries(series);
}
// ...
private class MetricTimeProvider implements ValueProvider<MetricData, Long> {
private String metricName;
public MetricTimeProvider(String metricName) {
this.metricName = metricName;
}
@Override
public Long getValue(MetricData m) {
if (metricName != null && metricName.equals(m.getLongDesc()))
return m.getId();
else
return null;
}
@Override
public void setValue(MetricData m, Long value) {
}
@Override
public String getPath() {
return null;
}
}
private class MetricValueProvider implements ValueProvider<MetricData, Double> {
private String metricName;
public MetricValueProvider(String metricName) {
this.metricName = metricName;
}
@Override
public Double getValue(MetricData m) {
if (metricName != null && metricName.equals(m.getLongDesc()))
return m.getMetricValue();
else
return null;
}
@Override
public void setValue(MetricData m, Double value) {
}
@Override
public String getPath() {
return null;
}
}
