Difference between types of HTN planners (total, partial, unordered) - artificial-intelligence

In the context of HTN (hierarchical task network) planners, what is the difference between Total order, Partial order, and Unordered planning?

Related

Sorting groups in solr by top result in each group

Is there a way to sort groups in solr based on just the top result in the group and not all the members? For example, I'm grouping product variants by model, and sorting each group by the number of resellers. This gives me the product variant with the most resellers at the top of each group. Next I'm applying a low price sort to the entire result set. Currently all the groups are being ordered by the lowest price within the group, rather than the lowest price of the top product.
For faceting you can do group.truncate=true if you are just concerned with the top result, but I don't see anything similar for sorting.
Is this possible, or will I just have to do my own sorting in a custom request handler?
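For reference, a rough sketch of the setup described above, sent from Python with requests (the core name and the field names model, resellers, and price are assumptions taken from the question):

import requests

# Standard Solr result-grouping parameters; core and field names are hypothetical.
params = {
    "q": "*:*",
    "group": "true",
    "group.field": "model",          # group product variants by model
    "group.sort": "resellers desc",  # top document in each group = most resellers
    "group.limit": 1,
    "sort": "price asc",             # overall sort applied to the grouped result set
    "wt": "json",
}
response = requests.get("http://localhost:8983/solr/products/select", params=params)
print(response.json())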

ArangoDB - Graph based recommender system

I am using ArangoDB and I am trying to build a graph-based recommender system with it.
The data model just contains users, items and ratings (edges).
I therefore want to calculate the affinity of a user to a movie with the Katz measure.
Eventually I want to do this:
Get all (or a certain number of) paths between a user and an item
For all of these paths do the following:
Multiply each edge's rating with a damping factor (e.g. 0.7)
Sum up all calculated values within a path
Calculate the average of all calculated path values
The result is some kind of affinity between a user and an item, weighted with the intermediary ratings and damped by a defined factor.
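A plain-Python sketch of that computation (here each path is given simply as the list of its edge ratings, and the names paths and damping are illustrative placeholders):

def path_affinity(paths, damping=0.7):
    # paths: a list of paths, each path given as the list of edge ratings along it.
    path_values = []
    for ratings in paths:
        # Multiply each edge's rating by the damping factor and sum within the path.
        path_values.append(sum(r * damping for r in ratings))
    # Average over all paths between the user and the item.
    return sum(path_values) / len(path_values) if path_values else 0.0

print(path_affinity([[4, 5], [3]]))  # ((4 + 5) * 0.7 + 3 * 0.7) / 2 = 4.2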
I was trying to realize something like that in AQL, but my attempts were either wrong or much too slow. What could an algorithm like this look like in AQL?
From a performance point of view there might be better choices for graph-based recommender systems. If someone has a suggestion (e.g. ItemRank or other algorithms), it would also be nice to get some ideas here.
I love this topic, but sometimes I reach my limits.
In the following, @start and @end are bind parameters representing the two endpoints; for simplicity, I've assumed that:
the maximum admissible path length is 10000
"rates" is the name of the "edges" collection
"rating" is the name of the property giving a weight to an edge
the "damping" factor is as per the requirements
// Traverse outbound paths from @start over the "rates" edge collection,
// visiting each vertex at most once per path.
FOR v, e, p IN 0..10000 OUTBOUND @start rates
  OPTIONS { uniqueVertices: "path" }
  FILTER v._id == @end                       // keep only paths that end at the target item
  LET r = AVERAGE(p.edges[*].rating) * 0.7   // per-path value: damped average of the edge ratings
  COLLECT AGGREGATE avg = AVERAGE(r)         // average over all matching paths
  RETURN avg
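A possible way to run this query from Python with the python-arango driver (the connection details, database name, and document IDs are placeholders):

from arango import ArangoClient

client = ArangoClient(hosts="http://localhost:8529")          # placeholder connection
db = client.db("recommender", username="root", password="")   # placeholder credentials

query = """
FOR v, e, p IN 0..10000 OUTBOUND @start rates
  OPTIONS { uniqueVertices: "path" }
  FILTER v._id == @end
  LET r = AVERAGE(p.edges[*].rating) * 0.7
  COLLECT AGGREGATE avg = AVERAGE(r)
  RETURN avg
"""

# Bind the two endpoints, e.g. a user document and an item document.
cursor = db.aql.execute(query, bind_vars={"start": "users/123", "end": "items/456"})
print(list(cursor))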

Are there different types of queries?

Linear queries (or linear aggregation queries) are of the form q = (q1, q2, ...), where q is a real-valued vector; the answer to the query on a data vector x is the inner product q·x = q1*x1 + q2*x2 + ...
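For instance, with NumPy the answer to a linear query is just a dot product:

import numpy as np

q = np.array([1.0, 0.0, 2.0])  # query weights q1, q2, q3
x = np.array([3.0, 5.0, 4.0])  # data vector x1, x2, x3
print(np.dot(q, x))            # 1*3 + 0*5 + 2*4 = 11.0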
Are there any other types of queries that can be classified like this?

combination of smote and undersampling on weka

According to the paper by Chawla et al. (2002),
the best performance for balancing data comes from combining under-sampling with SMOTE.
I've tried to balance my dataset using under-sampling and SMOTE,
but I am a bit confused about the attribute for under-sampling.
In Weka there is the Resample filter to decrease the majority class.
Resample has an attribute:
biasToUniformClass -- Whether to use bias towards a uniform class. A value of 0 leaves the class distribution as-is, a value of 1 ensures the class distribution is uniform in the output data.
When I use the value 0, the majority class shrinks and so does the minority class; when I use the value 1, the majority class decreases but the minority class increases.
I tried using the value 1 for that attribute without applying SMOTE to increase the minority-class instances, because the data is already balanced, and the result is good too.
So, is that the same as combining SMOTE and under-sampling, or do I still have to use the value 0 for that attribute and then apply SMOTE?
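Outside Weka, for comparison, the combination described above (SMOTE on the minority class plus under-sampling of the majority class) can be sketched in Python with the imbalanced-learn package; the sampling ratios below are illustrative, not taken from Chawla et al.:

from collections import Counter
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification

# Toy imbalanced dataset: roughly 90% majority class, 10% minority class.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
print("before:", Counter(y))

# First oversample the minority class with SMOTE, then under-sample the majority class.
resample = Pipeline([
    ("smote", SMOTE(sampling_strategy=0.5, random_state=42)),
    ("under", RandomUnderSampler(sampling_strategy=1.0, random_state=42)),
])
X_res, y_res = resample.fit_resample(X, y)
print("after:", Counter(y_res))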
For undersampling, see the EasyEnsemble algorithm (a Weka implementation was developed by Schubach, Robinson, and Valentini).
The EasyEnsemble algorithm allows you to split the data into a certain number of balanced partitions. To achieve this balance, set the numIterations parameter equal to:
numIterations = (# of majority instances) / (# of minority instances)
For example, if there are 30 total instances with 20 in the majority class and 10 in the minority class, set the numIterations parameter equal to 2 (i.e., 20 majority instances / 10 instances equals 2 balanced partitions). These 2 partitions should each contain 20 instances; each has the same 10 minority instances along with a different 10 instances from the majority class.
The algorithm then trains a classifier on each of the balanced partitions and, at test time, ensembles this batch of classifiers for prediction.
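A minimal NumPy sketch of the partitioning scheme described above (illustrative only; the actual Weka/EasyEnsemble implementation differs in details such as how the majority subsets are sampled):

import numpy as np

def balanced_partitions(X, y, majority_label, minority_label, seed=0):
    rng = np.random.default_rng(seed)
    maj_idx = rng.permutation(np.flatnonzero(y == majority_label))
    min_idx = np.flatnonzero(y == minority_label)
    n_iter = len(maj_idx) // len(min_idx)   # numIterations from the formula above
    parts = []
    for chunk in np.array_split(maj_idx[:n_iter * len(min_idx)], n_iter):
        idx = np.concatenate([chunk, min_idx])  # one majority chunk + all minority instances
        parts.append((X[idx], y[idx]))
    return parts

# 20 majority (label 0) and 10 minority (label 1) instances -> 2 balanced partitions of 20 each.
X = np.arange(30).reshape(30, 1)
y = np.array([0] * 20 + [1] * 10)
print([len(part_y) for _, part_y in balanced_partitions(X, y, 0, 1)])  # [20, 20]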

Motivation for k-medoids

Why would one use the k-medoids algorithm rather than k-means? Is it only the fact that
the number of metrics that can be used in k-means is very limited, or is there something more?
Is there an example of data for which it makes much more sense to choose the best representatives
of a cluster from the data rather than from R^n?
The problem with k-means is that it is not interpretable. By interpretability I mean that the model should also be able to explain why it produced a certain output.
Let's take an example.
Suppose there is a food-review dataset with two possibilities, a positive review or a negative review, so we can say k = 2, where k is the number of clusters. If you go with k-means, the third step of the algorithm is the update step, where you update your k centroids as the mean of the points that lie in a particular cluster. The example we have chosen is a text problem, so you would also apply some kind of text featurization scheme such as bag-of-words (BOW) or word2vec, and for every review you would get a corresponding vector. The centroid c_i that you get after running k-means is then the mean of the vectors in that cluster, and from that centroid you cannot interpret much, or rather anything at all.
But if you apply k-medoids to the same problem, you choose your k centroids/medoids from the dataset itself; say you choose the point x_5 from your dataset as the first medoid. Interpretability increases, because the medoid is now an actual review.
This is the foremost motivation for introducing k-medoids.
As for the metrics, you can apply all the metrics that you use for k-means.
Hope this helps.
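A small sketch of that interpretability difference, assuming scikit-learn and scikit-learn-extra are available (the toy reviews are made up for illustration):

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn_extra.cluster import KMedoids

reviews = [
    "great food loved it",
    "delicious meal great service",
    "awful food terrible service",
    "terrible experience never again",
]
X = CountVectorizer().fit_transform(reviews).toarray()  # bag-of-words vectors

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
kmedoids = KMedoids(n_clusters=2, random_state=0).fit(X)

print(kmeans.cluster_centers_)                         # mean vectors: hard to interpret
print([reviews[i] for i in kmedoids.medoid_indices_])  # actual reviews from the dataset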
Why would we use k-medoids instead of k-means in case of (squared) Euclidean distance?
1. Technical justification
For relatively small data sets (since the complexity of k-medoids is higher): to obtain a clustering that is more robust to noise and outliers.
Example 2D data showing that:
The graph on the left shows clusters obtained with K-medoids (sklearn_extra.cluster.KMedoids method in Python with default options) and the one on the right with K-means for K=2. Blue crosses are cluster centers.
The Python code used to generate green points:
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=32)

# Two dense groups (b, c) and two small, more spread-out groups (a, d).
a = rng.random((6, 2)) * 2.35 - 3 * np.ones((6, 2))
b = rng.random((50, 2)) * 0.25 - 2 * np.ones((50, 2))
c = rng.random((100, 2)) * 0.5 - 1.5 * np.ones((100, 2))
d = rng.random((7, 2)) * 0.55

points = np.concatenate((a, b, c, d))
plt.plot(points[:, 0], points[:, 1], "g.", markersize=8, alpha=0.3)  # green points
plt.show()
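The clustering step itself is not shown above; continuing the snippet, a sketch that reproduces it under the stated assumptions (KMedoids with default options, K=2) could be:

from sklearn.cluster import KMeans
from sklearn_extra.cluster import KMedoids

kmedoids = KMedoids(n_clusters=2).fit(points)   # default options, as in the answer
kmeans = KMeans(n_clusters=2, n_init=10).fit(points)

plt.plot(points[:, 0], points[:, 1], "g.", markersize=8, alpha=0.3)
# Blue markers are the cluster centers (shown in two separate panels in the original figure).
plt.plot(kmedoids.cluster_centers_[:, 0], kmedoids.cluster_centers_[:, 1], "bx", markersize=12, label="k-medoids")
plt.plot(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], "b+", markersize=12, label="k-means")
plt.legend()
plt.show()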
2. Business case justification
Here are some example business cases showing why we would prefer k-medoids. They mostly come down to the interpretability of the results and the fact that in k-medoids the resulting cluster centers are members of the original dataset.
2.1 We have a recommender engine based only on user-item preference data and want to recommend to the user those items (e.g. movies) that other, similar people enjoyed. So we assign the user to his/her closest cluster and recommend the top movies that the cluster representative (an actual person) watched. If the cluster representative weren't an actual person, we wouldn't have a history of actually watched movies to recommend; each time we would additionally have to search, e.g., for the closest actual person in the cluster. Example data: the classic MovieLens 1M dataset.
2.2 We have a database of patients and want to pick a small representative group of size K to test a new drug with them. After clustering the patients with k-medoids, the cluster representatives are invited to the drug trial.
The difference is that in k-means the centroid (cluster center) is calculated as the average of the vectors contained in the cluster, whereas in k-medoids the medoid (cluster center) is an actual record from the dataset, the member that best represents its cluster (minimizing total distance to the other members). So if you need the cluster center to be a record from your data, use k-medoids; otherwise use k-means (the underlying concept of the two algorithms is the same).
The K-Means algorithm uses a distance function such as Euclidean distance or Manhattan distance, computed over vector-based instances. The K-Medoids algorithm instead uses a more general (and less constrained) distance function: a pairwise distance function.
This distinction works well in contexts like complex data types or relational rows, where the instances have a high number of dimensions.
High dimensionality problem
In standard clustering libraries and in the k-means algorithm, the distance computation phase can spend a lot of time scanning the entire vector of attributes belonging to an instance; for instance, in document clustering with the standard TF-IDF representation, computing the cosine similarity means scanning all the words that appear in the whole collection of documents, which in many cases amounts to millions of entries. This is why, in this domain, some authors [1] suggest restricting the words considered to a subset of the N most frequent words of the language.
Using K-Medoids, there is no need to represent and store the documents as vectors of word frequencies.
As an alternative representation for the documents, it is possible to use the set of words appearing at least twice in the document, and as a distance measure one can use the Jaccard distance.
(By contrast, a word-frequency vector representation is as long as the number of words in the dictionary.)
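A minimal sketch of that representation and distance measure (the sample documents are made up):

from collections import Counter

def doc_to_set(text):
    # Represent a document as the set of words appearing at least twice in it.
    counts = Counter(text.lower().split())
    return {word for word, n in counts.items() if n >= 2}

def jaccard_distance(a, b):
    # 1 - |intersection| / |union|; defined as 0.0 for two empty sets.
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

d1 = doc_to_set("the cat sat on the mat the cat slept")
d2 = doc_to_set("the dog and the dog ran on the mat on grass")
print(jaccard_distance(d1, d2))  # {the, cat} vs {the, dog, on} -> 1 - 1/4 = 0.75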
Heterogeneity and complex data types
There are many domains where it is considerably better to abstract away the implementation of an instance:
clustering of graph nodes;
car driving behaviour, represented as GPS routes.
Complex data types allow the design of ad-hoc distance measures that fit the data domain better.
[1] Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.
Source: https://github.com/eracle/Gap
