Multiple weights per Edge in a JGraphT DAG - jgrapht

Is there a way in JGraphT that I can assign multiple weights to a single edge? For example, suppose I have a graph representing travel time between cities. I want to assign edge weights for "time by plane", "time by car", "time by bus", etc., and then find the least-cost route for some specified mode of travel.
One approach I can think of is to have a distinct graph for each travel mode and then add every city vertex to every graph, but that seems like a messy and memory-intensive solution.
My next thought was that I might be able to extend the class implementing the graph (probably DirectedWeightedPseudograph) and customize the getEdgeWeight() method to take an additional argument specifying which weight value to use. That, however, would require extending all the algorithm classes as well (e.g., DijkstraShortestPath), which I am trying to avoid.
To get around that problem I considered the following:
Extend my Graph class by adding a method setWeightMode(enum mode).
Customize the getEdgeWeight() method to use the currently assigned mode to determine which weight value to return to the caller.
On the plus side it would be 100% transparent to any existing analysis classes. On the negative side, it would not be thread-safe.
At this point I'm out of ideas. Can anyone suggest an approach that is scalable for large graphs, supports multi-threading, and minimizes the need to re-implement code already provided by JGraphT?

There exists a much easier solution: you want to use the AsWeightedGraph class. This is a wrapper class that allows you to create different weighted views of an underlying graph. From the class description:
Provides a weighted view of a graph. The class stores edge weights internally. All getEdgeWeight calls are handled by this view; all other graph operations are propagated to the graph backing this view.
This class can be used to make an unweighted graph weighted, to override the weights of a weighted graph, or to provide different weighted views of the same underlying graph. For instance, the edges of a graph representing a road network might have two weights associated with them: a travel time and a travel distance. Instead of creating two weighted graphs of the same network, one would simply create two weighted views of the same underlying graph.
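In Java you would simply wrap the one underlying JGraphT graph in two AsWeightedGraph instances and hand each view to DijkstraShortestPath. As a language-agnostic illustration of the idea, here is a minimal Python sketch: one shared edge set, two weight maps acting as the "views", and a Dijkstra parameterized by which view to use. The city names and weights are made up:

```python
import heapq

# One underlying network; the edges are stored exactly once.
graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
}

# Two "weighted views": separate weight maps over the same edge set,
# analogous to wrapping one graph in two AsWeightedGraph views.
travel_time = {("A", "B"): 1.0, ("A", "C"): 5.0, ("B", "C"): 1.0}
distance    = {("A", "B"): 10.0, ("A", "C"): 3.0, ("B", "C"): 10.0}

def dijkstra(graph, weights, source, target):
    """Shortest-path cost under the given weight view."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

print(dijkstra(graph, travel_time, "A", "C"))  # 2.0, via B
print(dijkstra(graph, distance, "A", "C"))     # 3.0, direct
```

Note how the cheapest A-to-C route differs per view even though the edge set is identical, which is exactly the plane-vs-car situation from the question.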

Related

Facial Detection with LBPH - Extracting Features

I've created the framework of the system, which takes a picture, converts it to an LBPH image, and then gets the histograms from each tile of the grid (8x8). I'm following this paper on it, but am confused what to do next to identify features after step 4. Do I just compare each square of the grid with a set of known feature squares and find the closest match? This is my first facial detection program so I'm very new to it.
So basically image processing works like this. Pixel intensity values are way too variant and uninformative by themselves to be useful for algorithms to make sense of an image. Much more useful are the local relationships between pixel intensity values. So image processing for recognition and detection is basically a two-step process.
Feature extraction - Transform the low-level, high-variance, uninformative features such as pixel intensities into a high-level, lower-variance, more informative feature set (e.g. edges, visual patterns, etc.); this is referred to as feature extraction. Over the years, a number of feature extraction mechanisms have been suggested, such as edge detection with Sobel filters, histograms of oriented gradients (HOG), Haar-like features, the scale-invariant feature transform (SIFT), and LBPH as you are trying to use. (Note that in most modern applications that are not computationally limited, convolutional neural networks (CNNs) are used for the feature extraction step because they empirically work much, much better.)
Use the transformed features - once more useful information (a more informative set of features) has been extracted, you need to use these features to perform the reasoning operation you're hoping to accomplish. In this step, you fit a model (function approximator) such that, given your high-level features as input, the model outputs the information you want (in this case, a classification of whether an image contains a face, I think). Thus, you need to select and fit a model that can make use of the high-level features for classification. Some classic approaches to this include decision trees, support vector machines, and neural networks. Essentially, model fitting is a standard machine learning problem and will require a labelled set of training data to "teach" the model what the high-level feature set looks like for an image that contains a face versus an image that does not.
It sounds like your code in its current state is missing the second piece. As a good starting place, look into using scikit-learn's decision tree package.
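To make the two steps concrete, here is a minimal Python sketch using only NumPy: step 1 turns a (pretend) LBP-coded image into a concatenated grid of per-tile histograms, and step 2 uses the simplest possible model, a nearest-neighbour lookup, standing in for a decision tree. The images here are random toy data, not real faces:

```python
import numpy as np

rng = np.random.default_rng(0)

def tile_histograms(lbp_image, grid=8, bins=16):
    """Step 1: concatenate per-tile histograms of an LBP-coded
    image into one feature vector (grid * grid * bins values)."""
    h, w = lbp_image.shape
    th, tw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            tile = lbp_image[i*th:(i+1)*th, j*tw:(j+1)*tw]
            hist, _ = np.histogram(tile, bins=bins, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))  # normalise per tile
    return np.concatenate(feats)

def nearest_neighbor(train_feats, train_labels, query):
    """Step 2 (simplest possible model): return the label of the
    closest training feature vector by Euclidean distance."""
    d = np.linalg.norm(train_feats - query, axis=1)
    return train_labels[int(np.argmin(d))]

# Toy data standing in for LBP images of faces vs. non-faces:
# "faces" use low codes, "non-faces" high codes, so they separate.
faces     = [rng.integers(0, 64, (64, 64)) for _ in range(5)]
non_faces = [rng.integers(192, 256, (64, 64)) for _ in range(5)]
X = np.stack([tile_histograms(img) for img in faces + non_faces])
y = np.array([1] * 5 + [0] * 5)

query = tile_histograms(rng.integers(0, 64, (64, 64)))
print(nearest_neighbor(X, y, query))  # 1 (face-like)
```

In practice you would replace the nearest-neighbour lookup with a fitted classifier (e.g. scikit-learn's DecisionTreeClassifier) trained on labelled face/non-face feature vectors.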

Kmeans as custom layer in functional model

We are planning to use kmeans to split our data and have 10 separate fully connected models to estimate results for each group from kmeans separately.
One obvious way is to have 10 separate tfjs models and a separate kmeans at the beginning.
Since tfjs supports functional models and custom layers, an alternative is to have kmeans as the first custom layer and then several dense layers connected to it. Is it possible to use the existing layer API to receive 20 tensors, perform kmeans, and output 10 different sets of 20 tensors to the next layers? Do you see any issues with this approach? Is there another alternative?
Kmeans is not yet implemented in tfjs, and even if it were, it could not really be considered a layer itself. You can, however, create a two-stage model, assuming you have your own implementation of kmeans.
You'll simply have to pass the result of one model to the other using a conditional statement. The first model - kmeans - will output the class of the data, and the second model - one out of 10 - is chosen based on the output of the first.
Having said that, all of this can be done in one shot using either the sequential API tf.sequential or the functional one tf.model. There are kmeans implementations in JS that will return JS arrays as vectors. These arrays can be converted to tensors whose shape will determine the shape of the layers. Using an FCNN, we can have an output for each kmeans class.
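Assuming you do have your own kmeans (here just fixed, made-up centroids), the two-stage routing can be sketched in a few lines. Python with NumPy is used for brevity rather than tfjs, and the per-cluster "models" are toy linear functions standing in for the dense sub-models:

```python
import numpy as np

# Stage 1: hypothetical kmeans result - 3 fixed centroids.
centroids = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])

# Stage 2: one tiny "model" (weights, bias) per cluster, chosen
# arbitrarily; in the real setup these would be dense networks.
models = [
    (np.array([1.0, 1.0]), 0.0),
    (np.array([0.5, -0.5]), 1.0),
    (np.array([-1.0, 2.0]), 3.0),
]

def assign_cluster(x):
    """Kmeans assignment = index of the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

def predict(x):
    """Route the input to the sub-model for its cluster
    (the conditional statement from the answer)."""
    w, b = models[assign_cluster(x)]
    return float(w @ x + b)

x = np.array([4.5, 5.5])
print(assign_cluster(x))  # 1: closest to centroid (5, 5)
print(predict(x))         # 0.5*4.5 - 0.5*5.5 + 1.0 = 0.5
```

The same dispatch would work with tfjs models: run the kmeans step outside the graph, then call `model_i.predict(...)` on the chosen sub-model.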

Better way to store hierarchical data with known depth?

I have an (actually quite simple) data structure with tree-like adjacency. I am trying to find a good way to represent data for a film-industry based web-app which needs to store data about film projects. The data consists of:
project -> scene -> shot -> version - each adjacent to the previous in a "one-to-many" fashion.
Right now I am thinking about a simple adjacency list, but I am having trouble believing that it would be efficient enough to quickly retrieve the name of the project given just the version, as I'd have to cycle through the other tables to get it. The (simplified) layout would be like this:
simple adjacency layout
I was thinking about - instead of referencing only the direct parent - referencing all higher-level parents (like this), since the hierarchy has a fixed depth. That way, I could use these shortcuts to get my information with only one query. But is this bad data modeling? Are there any other ways to do it?
It's not good data modelling from the perspective of normalisation. If you realise that you put the wrong scene in for a project, you then have to move it and everything down the hierarchy.
But... does efficiency matter to you? How much data are you talking about? How fast do you need a response? I'd say go with what you've got and if you need it faster, have something that regularly extracts the data to a cache.
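To make the trade-off concrete, here is a minimal Python sketch with made-up IDs: the normalised adjacency list needs one lookup per level (three joins in SQL) to find a version's project, while a denormalised row answers it in one lookup, at the cost of having to rewrite those ancestor references whenever something moves:

```python
# Plain adjacency list: each row knows only its direct parent.
versions = {"v1": "shot1"}
shots    = {"shot1": "sceneA"}
scenes   = {"sceneA": "projectX"}

def project_of_version(vid):
    """Walk the fixed-depth hierarchy: 3 lookups (3 joins in SQL)."""
    return scenes[shots[versions[vid]]]

# Denormalised alternative: each version also stores all ancestors.
# If sceneA is reassigned to another project, every such row under
# it must be updated - the normalisation concern from the answer.
versions_denorm = {
    "v1": {"shot": "shot1", "scene": "sceneA", "project": "projectX"},
}

print(project_of_version("v1"))          # projectX, via 3 hops
print(versions_denorm["v1"]["project"])  # projectX, via 1 lookup
```

With a fixed depth of four levels, the three-join query is usually fast enough in practice, which is why "go with what you've got" is a reasonable default.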
Try a method called Modified Preorder Tree Traversal: http://www.sitepoint.com/hierarchical-data-database/

ArangoDB: (1 Graph with several Edge Definition) Vs (1 Edge Definition per Graph)

I was wondering if there is any advantage to having several edge definitions in one single graph vs. having several graphs, each with a single edge definition.
Thanks for your help,
There are different reasons for using multiple edge definitions instead of only one:
To show differences in content: You may want to have different edge collections for bought and watched. This is also possible by using a label, however, and comes down to personal preference.
Edge definitions allow you to restrict the collections on the in- and out-side of the edges. So you can, for example, say that your bought edges always start in a document from the people collection and go to the products collection. You would otherwise need to enforce that inside your application.
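For illustration, this is roughly the shape of the edge definitions involved, written as JSON-like Python dicts (the collection names bought, watched, people, and products are the ones from the answer), together with a toy check sketching the restriction the server would enforce:

```python
# Edge definitions as passed to ArangoDB's graph (Gharial) API:
# each edge collection lists which vertex collections its edges
# may start from and point to.
edge_definitions = [
    {"collection": "bought",  "from": ["people"], "to": ["products"]},
    {"collection": "watched", "from": ["people"], "to": ["products"]},
]

def valid_edge(defn, from_doc, to_doc):
    """Toy version of the server-side check: both endpoint
    collections (the part of the _id before '/') must be listed."""
    return (from_doc.split("/")[0] in defn["from"]
            and to_doc.split("/")[0] in defn["to"])

print(valid_edge(edge_definitions[0], "people/alice", "products/tv"))  # True
print(valid_edge(edge_definitions[0], "products/tv", "people/alice"))  # False
```

With a single graph holding both definitions, the database rejects a bought edge pointing the wrong way; with separate label-based edges you would have to write that check yourself.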

How to model nested lists with many items using Google Drive Realtime API?

I'd like to model ordered nested lists of uniform items (like what you would see in a standard tree widget) using the Google Drive realtime API. These trees could get quite large, ideally working well with many thousands of items.
One approach would be:
Item:
title: CollaborativeString
attributes: CollaborativeMap
children: CollaborativeList // recursively holds other items
But I'm unsure if this is feasible when dealing with a large number of items.
An alternative might be to store all items in tree order in a single CollaborativeList and add an additional "level" attribute, then reconstruct the tree structure from that level on the client. That would change from having to maintain thousands of CollaborativeLists to just a single big one. There are probably lots of other alternatives that I don't know about.
Thanks for any pointers on the best way to model this in the Google Drive Realtime API.
So long as the total size of the document is within the size limits, there shouldn't be a significant performance difference between the approaches from a framework perspective. (One caveat, using ObjectChangedListeners with a highly connected graph may slow things down. Prefer registering listeners on the specific objects instead.)
Modeling it as a real tree makes sense, since that will be the easiest to work with, and you can use the new move operation to atomically rearrange items in the lists.
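Should you still want the flattened alternative from the question (one list in tree order plus a "level" attribute), the client can rebuild the nesting in a single pass with a stack of the current ancestors. A minimal Python sketch with made-up item titles:

```python
# Flattened representation: items in preorder, each with a level.
flat = [
    {"title": "root", "level": 0},
    {"title": "a",    "level": 1},
    {"title": "a1",   "level": 2},
    {"title": "b",    "level": 1},
]

def to_tree(flat):
    """Rebuild the nested structure in one pass: keep a stack of
    (node, level); each item attaches to the nearest shallower node."""
    root = {"title": None, "children": []}
    stack = [(root, -1)]
    for item in flat:
        node = {"title": item["title"], "children": []}
        # Pop until the top of the stack is this item's parent.
        while stack[-1][1] >= item["level"]:
            stack.pop()
        stack[-1][0]["children"].append(node)
        stack.append((node, item["level"]))
    return root["children"]

tree = to_tree(flat)
print(tree[0]["title"])                                # root
print([c["title"] for c in tree[0]["children"]])       # ['a', 'b']
print(tree[0]["children"][0]["children"][0]["title"])  # a1
```

The cost of this layout is that moves and level edits must keep the list order and the level attributes consistent, which is why the real-tree model with the atomic move operation is the easier one to work with.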