I was testing the GraphFrames BFS toy example:
val g: GraphFrame = examples.Graphs.friends
val paths: DataFrame = g.bfs.fromExpr("name = 'Esther'").toExpr("name <> 'Esther'").run()
The result I get is:
+-------------+------------+------------+
| from| e0| to|
+-------------+------------+------------+
|[e,Esther,32]|[e,f,follow]|[f,Fanny,36]|
|[e,Esther,32]|[e,d,friend]|[d,David,29]|
+-------------+------------+------------+
That's pretty weird, since Fanny and David also have outgoing edges, and the vertices linked to them have outgoing edges too. I expected the result DataFrame to contain not just one-hop paths, but all paths from the source vertex.
I myself created a toy graph:
1 2
2 3
3 4
4 5
And when I do the same kind of query:
g.bfs.fromExpr("id = 1").toExpr("id <> 1").run()
I still get only the one-hop neighbors. Am I missing something? I also tested other operators that stand for "not equal", without success. A wild guess: maybe when BFS reaches the source vertex again (it should look at it, but not visit its neighbors), the vertex does not match the "toExpr" expression and the search aborts.
Another question: GraphFrames is directed, isn't it? In order to get an "undirected" graph, I should add reciprocal edges, shouldn't I?
Upon reaching Fanny and David, you've found the shortest path from Esther to a non-Esther node, so the search stops.
According to the GraphFrames User Guide, the bfs method "finds the shortest path(s) from one vertex (or a set of vertices) to another vertex (or a set of vertices). The beginning and end vertices are specified as Spark DataFrame expressions."
In the graph you're using, the shortest path from Esther to a non-Esther node is just one hop, so the breadth-first search stops there.
Consider your numeric toy graph. You're finding this (one hop):
import org.graphframes.GraphFrame
val edgesDf = spark.sqlContext.createDataFrame(Seq(
  (1, 2),
  (2, 3),
  (3, 4),
  (4, 5)
)).toDF("src", "dst")
val g = GraphFrame.fromEdges(edgesDf)
g.bfs.fromExpr("id = 1").toExpr("id <> 1").run().show()
+----+-----+---+
|from| e0| to|
+----+-----+---+
| [1]|[1,2]|[2]|
+----+-----+---+
Suppose you queried it like this instead:
g.bfs.fromExpr("id = 1").toExpr("id > 3").run().show()
+----+-----+---+-----+---+-----+---+
|from| e0| v1| e1| v2| e2| to|
+----+-----+---+-----+---+-----+---+
| [1]|[1,2]|[2]|[2,3]|[3]|[3,4]|[4]|
+----+-----+---+-----+---+-----+---+
Now the bfs method takes three hops. This is the shortest path from 1 to a node whose id is greater than 3. Even though there's an edge from 4 to 5 (and 5 > 3), it doesn't continue, because that would be a longer path (four hops).
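If what you actually want is every path of a given length, rather than shortest paths, GraphFrames' motif finding is a better fit than bfs. Here is a minimal sketch, assuming the same g as above, that enumerates all two-hop paths starting at vertex 1:
// Motif "(a)-[e1]->(b); (b)-[e2]->(c)" matches every two-edge path a -> b -> c.
val twoHop = g.find("(a)-[e1]->(b); (b)-[e2]->(c)").filter("a.id = 1")
twoHop.show()
Motifs are fixed-length patterns, so to collect paths of several lengths you would run one motif query per length and union the results.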
Another question: GraphFrames is directed, isn't it? In order to get an "undirected" graph, I should add reciprocal edges, shouldn't I?
I think it depends on the algorithm you want to apply to the graph. Someone could write an algorithm that ignores the direction in the underlying edges DataFrame. But if an algorithm assumes a directed graph, then I think you're right: you'd have to add reciprocal edges.
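For example, here is a minimal sketch of symmetrizing the edge list, assuming the standard GraphFrames edge schema with src and dst columns:
import org.graphframes.GraphFrame
import org.apache.spark.sql.functions.col
// Swap src and dst to create the reciprocal of every edge.
val reversed = g.edges.select(col("dst").as("src"), col("src").as("dst"))
// Union the two directions; an algorithm that assumes a directed graph
// will now effectively treat each edge as undirected.
val undirected = GraphFrame(g.vertices, g.edges.select("src", "dst").union(reversed))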
You may get a better response (from someone else) if you ask this as a separate question.
Related
I have a list of cities and an energy expenditure between each pair of them. I want to find the "best" (shortest) path between some specific pairs, the one that leads to the least energy loss. All roads are two-way roads with the same energy expenditure in either direction, but in the "best" path each city should be visited only once, to prevent looping around the same city.
I've tried building a directed adjacency-list graph and using Bellman-Ford, but I am indeed detecting negative cycles, making the shortest path non-existent, and I can't figure out how to make the algorithm go through each node only once to prevent looping. I've thought about using something like BFS to just print all the possible paths from a source node to a destination and somehow skip going through the same node (summing up the weights afterwards, perhaps). Any ideas on how I could solve this, either by modifying Bellman-Ford or the BFS, or by using something else?
The problem of finding the lowest-cost simple path (a path that doesn’t repeat nodes) in a graph containing negative cycles is NP-hard, which means that at present we don’t have any efficient algorithms for the problem and it’s possible that none exists. (You can prove NP-hardness by a reduction from the Hamiltonian path problem: if you assign each edge in a graph cost -1, then there’s a Hamiltonian path from a node u to a node v if and only if the lowest-cost simple path between them has length n-1, where n is the number of nodes in the graph.)
There are some techniques to find short paths using at most a fixed number of hops. Check out the “Color Coding” algorithm for an example of one of these.
1. Add the absolute value of the most negative weight to EVERY weight.
2. Apply Dijkstra's algorithm: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
3. Subtract the absolute value of the most negative weight times the number of hops from the cost of the optimal path found in step 2.
What is the exact difference between Dijkstra's and Prim's algorithms? I know Prim's will give an MST, but the tree generated by Dijkstra will also be an MST. Then what is the exact difference?
Prim's algorithm constructs a minimum spanning tree for the graph, which is a tree that connects all nodes in the graph and has the least total cost among all trees that connect all the nodes. However, the length of a path between any two nodes in the MST might not be the shortest path between those two nodes in the original graph. MSTs are useful, for example, if you wanted to physically wire up the nodes in the graph to provide electricity to them at the least total cost. It doesn't matter that the path length between two nodes might not be optimal, since all you care about is the fact that they're connected.
Dijkstra's algorithm constructs a shortest path tree starting from some source node. A shortest path tree is a tree that connects all nodes in the graph back to the source node and has the property that the length of any path from the source node to any other node in the graph is minimized. This is useful, for example, if you wanted to build a road network that made it as efficient as possible for everyone to get to some major important landmark. However, the shortest path tree is not guaranteed to be a minimum spanning tree, and the sum of the costs on the edges of a shortest-path tree can be much larger than the cost of an MST.
Another important difference concerns what types of graphs the algorithms work on. Prim's algorithm works on undirected graphs only, since the concept of an MST assumes that graphs are inherently undirected. (There is something called a "minimum spanning arborescence" for directed graphs, but algorithms to find them are much more complicated). Dijkstra's algorithm will work fine on directed graphs, since shortest path trees can indeed be directed. Additionally, Dijkstra's algorithm does not necessarily yield the correct solution in graphs containing negative edge weights, while Prim's algorithm can handle this.
Dijkstra's algorithm doesn't create an MST; it finds the shortest path.
Consider this graph
       5     5
  s *-----*-----* t
     \         /
      ---------
          9
The shortest path is 9, while the MST is a different 'path' at 10.
Prim's and Dijkstra's algorithms are almost the same, except for the "relax" function.
Prim:
MST-PRIM (G, w, r) {
    for each u ∈ G.V
        u.key = ∞
        u.parent = NIL
    r.key = 0
    Q = G.V
    while (Q ≠ ø)
        u = Extract-Min(Q)
        for each v ∈ G.Adj[u]
            if (v ∈ Q)
                alt = w(u,v)            <== relax function, pay attention here
                if alt < v.key
                    v.parent = u
                    v.key = alt
}
Dijkstra:
Dijkstra (G, w, r) {
    for each u ∈ G.V
        u.key = ∞
        u.parent = NIL
    r.key = 0
    Q = G.V
    while (Q ≠ ø)
        u = Extract-Min(Q)
        for each v ∈ G.Adj[u]
            if (v ∈ Q)
                alt = w(u,v) + u.key    <== relax function, pay attention here
                if alt < v.key
                    v.parent = u
                    v.key = alt
}
The only difference is the one pointed out by the arrow: the relax function.
Prim, which searches for a minimum spanning tree, only cares about minimizing the total weight of the edges that cover all the vertices. Its relax function is alt = w(u,v).
Dijkstra, which searches for minimum path lengths, cares about the accumulated edge weight. Its relax function is alt = w(u,v) + u.key.
Dijkstra's algorithm finds the minimum distance from node i to all nodes (you specify i). So in return you get the minimum-distance tree from node i.
Prim's algorithm gets you the minimum spanning tree for a given graph: a tree that connects all nodes while the sum of all costs is the minimum possible.
So with Dijkstra you can go from the selected node to any other with the minimum cost; you don't get that with Prim's.
The only difference I see is that Prim's algorithm stores a minimum cost edge whereas Dijkstra's algorithm stores the total cost from a source vertex to the current vertex.
Dijkstra gives you a way from the source node to the destination node such that the cost is minimum. However Prim's algorithm gives you a minimum spanning tree such that all nodes are connected and the total cost is minimum.
In simple words:
So, if you want to deploy a train line to connect several cities, you would use Prim's algorithm. But if you want to go from one city to another saving as much time as possible, you'd use Dijkstra's algorithm.
Both can be implemented using exactly the same generic algorithm, as follows:
Inputs:
G: Graph
s: Starting vertex (any for Prim, source for Dijkstra)
f: a function that takes vertices u and v, returns a number
Generic(G, s, f)
    Q = Enqueue all V with key = infinity, parent = null
    s.key = 0
    While Q is not empty
        u = dequeue Q
        For each v in adj(u)
            if v is in Q and v.key > f(u,v)
                v.key = f(u,v)
                v.parent = u
For Prim, pass f = w(u, v) and for Dijkstra pass f = u.key + w(u, v).
Another interesting thing is that the Generic routine above can also implement breadth-first search (BFS), although it would be overkill, because an expensive priority queue is not really required. To turn the Generic algorithm into BFS, pass f = u.key + 1, which is the same as forcing all weights to 1 (i.e. BFS gives the minimum number of edges required to traverse from point A to B).
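As a concrete illustration, here is a hedged Scala sketch of the Generic routine. It is my own formulation, not a canonical one: it uses lazy re-insertion instead of a decrease-key operation, and it only processes vertices reachable from s. Pass f = (uKey, w) => w for Prim, f = (uKey, w) => uKey + w for Dijkstra, and f = (uKey, _) => uKey + 1 for BFS:
import scala.collection.mutable

def generic(adj: Map[Int, Seq[(Int, Double)]], s: Int,
            f: (Double, Double) => Double): (Map[Int, Double], Map[Int, Int]) = {
  // key(v) plays the role of v.key, parent(v) of v.parent.
  val key = mutable.Map.empty[Int, Double].withDefaultValue(Double.PositiveInfinity)
  val parent = mutable.Map.empty[Int, Int]
  val done = mutable.Set.empty[Int]                    // vertices already dequeued
  val pq = mutable.PriorityQueue.empty[(Double, Int)](
    Ordering.by[(Double, Int), Double](_._1).reverse)  // min-heap on key
  key(s) = 0.0
  pq.enqueue((0.0, s))
  while (pq.nonEmpty) {
    val (_, u) = pq.dequeue()
    if (done.add(u)) {                                 // skip stale queue entries
      for ((v, w) <- adj.getOrElse(u, Nil) if !done(v)) {
        val alt = f(key(u), w)                         // the one-line difference
        if (alt < key(v)) {
          key(v) = alt
          parent(v) = u
          pq.enqueue((alt, v))                         // re-insert with the better key
        }
      }
    }
  }
  (key.toMap, parent.toMap)
}
For example, generic(adjacency, 1, (uKey, w) => uKey + w) returns Dijkstra's distances from vertex 1 in the first map, while generic(adjacency, 1, (_, w) => w) yields Prim's MST as (child, parent) edges in the second map.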
Intuition
Here's one good way to think about the above generic algorithm: we start with two buckets, A and B. Initially, put all your vertices in B, so bucket A is empty. Then we move one vertex from B to A. Now look at all the edges from vertices in A that cross over to vertices in B. We choose one of these cross-over edges using some criterion and move the corresponding vertex from B to A. Repeat this process until B is empty.
A brute-force way to implement this idea would be to maintain a priority queue of the edges for the vertices in A that cross over to B. Obviously that would be troublesome if the graph is not sparse. So the question is: can we instead maintain a priority queue of vertices? We can, because our decision is ultimately about which vertex to pick from B.
Historical Context
It's interesting that the generic version of the technique behind both algorithms is conceptually as old as 1930, even though electronic computers weren't around.
The story starts with Otakar Borůvka, who needed an algorithm for a family friend trying to figure out how to connect cities in Moravia (now part of the Czech Republic) with minimal-cost electric lines. He published his algorithm in 1926 in a mathematics journal, as computer science didn't exist then. It came to the attention of Vojtěch Jarník, who thought of an improvement on Borůvka's algorithm and published it in 1930. He had in fact discovered the algorithm that we now know as Prim's algorithm; Prim re-discovered it in 1957.
Independently of all this, in 1956 Dijkstra needed to write a program to demonstrate the capabilities of a new computer his institute had developed. He thought it would be cool to have the computer find connections for travelling between two cities of the Netherlands. He designed the algorithm in 20 minutes. He created a graph of 64 cities with some simplifications (because his computer was 6-bit) and wrote code for this 1956 computer. However, he didn't publish his algorithm, primarily because there were no computer science journals and because he thought it might not be very important. The next year he learned about the problem of connecting the terminals of new computers such that the length of the wires was minimized. He thought about this problem and re-discovered Jarník/Prim's algorithm, which uses the same technique as the shortest path algorithm he had discovered a year earlier. He mentioned that both of his algorithms were designed without using pen or paper. In 1959 he published both algorithms in a paper that is just two and a half pages long.
Dijkstra's finds the shortest path between its beginning node and every other node. So in return you get the minimum-distance tree from the beginning node, i.e. you can reach every other node as efficiently as possible.
Prim's algorithm gets you the MST for a given graph, i.e. a tree that connects all nodes while the sum of all costs is the minimum possible.
To make a long story short, with a realistic example:
Dijkstra wants to know the shortest path to each destination point by saving traveling time and fuel.
Prim wants to know how to efficiently deploy a train rail system i.e. saving material costs.
Directly from the Wikipedia article on Dijkstra's algorithm:
The process that underlies Dijkstra's algorithm is similar to the greedy process used in Prim's algorithm. Prim's purpose is to find a minimum spanning tree that connects all nodes in the graph; Dijkstra is concerned with only two nodes. Prim's does not evaluate the total weight of the path from the starting node, only the individual edges.
Here's what clicked for me: think about which vertex the algorithm takes next:
Prim's algorithm takes next the vertex that's closest to the tree, i.e. closest to some vertex anywhere on the tree.
Dijkstra's algorithm takes next the vertex that is closest to the source.
Source: R. Sedgewick's lecture on Dijkstra's algorithm, Algorithms, Part II: https://coursera.org/share/a551af98e24292b6445c82a2a5f16b18
I was bothered by the same question lately, and I think I'll share my understanding...
I think the key difference between these two algorithms (Dijkstra's and Prim's) is rooted in the problems they are designed to solve: the shortest path between two nodes, and the minimal spanning tree (MST). The former is to find the shortest path between, say, nodes s and t, and a natural requirement is to visit each edge of the graph at most once; however, it does NOT require us to visit all the nodes. The latter (MST) requires us to visit ALL the nodes (at most once), with the same natural requirement of visiting each edge at most once.
That being said, Dijkstra allows us to "take a shortcut" as long as we can get from s to t, without worrying about the consequences: once we get to t, we are done! Although there is also a path from s to t in the MST, that path is created with all the remaining nodes in mind, and can therefore be longer than the s-t path found by Dijkstra's algorithm. Below is a quick example with 3 nodes:
        2       2
(s) o ----- o ----- o (t)
    |               |
    -----------------
            3
Let's say each of the top edges has a cost of 2, and the bottom edge has a cost of 3. Then Dijkstra will tell us to take the bottom path, since we don't care about the middle node. On the other hand, Prim will return an MST with the top 2 edges, discarding the bottom edge.
Such a difference is also reflected in a subtle difference between the implementations: in Dijkstra's algorithm, one needs a bookkeeping step (for every node) to update the shortest path from s after absorbing a new node, whereas in Prim's algorithm, there is no such need.
The simplest explanation is that in Prim's you don't specify the starting node, but in Dijkstra's you do: you need a starting node, and you have to find the shortest path from that node to all other nodes.
The key difference between the basic algorithms lies in their different edge-selection criteria. Generally, both use a priority queue for selecting the next node, but they have different criteria for selecting the adjacent nodes of the node currently being processed: Prim's algorithm requires that the next adjacent node also still be in the queue, while Dijkstra's algorithm does not:
def dijkstra(g, s):
    q <- make_priority_queue(VERTEX.distance)
    for each vertex v in g.vertex:
        v.distance <- infinite
        v.predecessor <- nil
        q.add(v)
    s.distance <- 0
    while not q.is_empty:
        u <- q.extract_min()
        for each adjacent vertex v of u:
            ...

def prim(g, s):
    q <- make_priority_queue(VERTEX.distance)
    for each vertex v in g.vertex:
        v.distance <- infinite
        v.predecessor <- nil
        q.add(v)
    s.distance <- 0
    while not q.is_empty:
        u <- q.extract_min()
        for each adjacent vertex v of u:
            if v in q and weight(u, v) < v.distance:  // <------- selection --------
                ...
The calculation of vertex.distance is the second point of difference.
Dijkstra's algorithm is used only to find the shortest path.
With a minimum spanning tree (Prim's or Kruskal's algorithm) you get the minimum set of edges, with minimum total edge weight.
For example: consider a situation where you want to create a huge wired network, for which you will require a large number of wires. The amount of wire can be worked out with a minimum spanning tree (Prim's or Kruskal's algorithm), i.e. it will give you the minimum number of wires needed to build the network at minimum cost.
Whereas Dijkstra's algorithm will be used to get the shortest path between two nodes while connecting any nodes with each other.
Dijkstra's algorithm solves the single-source shortest path problem between nodes i and j, while Prim's algorithm solves the minimum spanning tree problem. Both algorithms use the programming concept known as a 'greedy algorithm'.
If you want to look up these notions, see:
Greedy algorithm lecture note : http://jeffe.cs.illinois.edu/teaching/algorithms/notes/07-greedy.pdf
Minimum spanning tree : http://jeffe.cs.illinois.edu/teaching/algorithms/notes/20-mst.pdf
Single source shortest path : http://jeffe.cs.illinois.edu/teaching/algorithms/notes/21-sssp.pdf
@templatetypedef has covered the difference between MST and shortest path. I've covered the algorithmic difference in another SO answer by demonstrating that both can be implemented using the same generic algorithm, which takes one more parameter as input: a function f(u,v). The difference between Prim's and Dijkstra's algorithms is simply which f(u,v) you use.
At the code level, the other difference is the API.
You initialize Prim with a source vertex s, i.e., Prim.new(s); s can be any vertex, and regardless of s, the end result, the set of edges of the minimum spanning tree (MST), is the same. To get the MST edges, we call the method edges().
You initialize Dijkstra with the source vertex s, i.e., Dijkstra.new(s), from which you want shortest paths/distances to all other vertices. The end results, the shortest paths/distances from s to all other vertices, differ depending on s. To get the shortest distance/path from s to any vertex v, we call the methods distanceTo(v) and pathTo(v) respectively.
They both create trees with the greedy method.
With Prim's algorithm we find a minimum cost spanning tree; the goal is to find the minimum cost to cover all nodes.
With Dijkstra's we find the single-source shortest paths; the goal is to find the shortest path from the source to every other node.
Prim's algorithm works exactly like Dijkstra's, except that:
1. It does not keep track of the distance from the source.
2. It stores the edge that connects the frontier of visited vertices to the next closest vertex.
The vertex used as the "source" for Prim's algorithm is going to be the root of the MST.
The definition of an admissible heuristic is one that "does not overestimate the cost of reaching the goal".
I am attempting to write a Pac-Man heuristic for finding the fastest method to eat dots, some of which are randomly scattered across the grid. However it is failing my admissibility test.
Here are the steps of my algorithm:
sum = 0, list = grid.getListofDots()
1. Find nearest dot from starting position (or previous dot that was removed) using manhattan distance
2. add to sum
3. Remove dot from list of possible dots
4. repeat steps 1 - 3 until list is empty
5. return the sum
Since I'm using manhattan distance, shouldn't this be admissible? If not, are there any suggestions or other approaches to make this algorithm admissible?
As said, your heuristic isn't admissible. Another example:
Your cost is 9 but the best path has cost 6.
A very, very simple admissible heuristic is:
number_of_remaining_dots
but it isn't very tight. A small improvement is:
manhattan_distance_to_nearest_dot + dots_left_out
Other possibilities are:
distance_to_nearest_dot // Found via Breadth-first search
or
manhattan_distance_to_farthest_dot
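For instance, here is a minimal Scala sketch of the manhattan_distance_to_nearest_dot + dots_left_out heuristic (representing positions as (x, y) grid pairs is my own assumption):
// Admissible: reaching the nearest dot costs at least its manhattan
// distance, and each remaining dot costs at least one more move.
def heuristic(pos: (Int, Int), dots: Seq[(Int, Int)]): Int =
  if (dots.isEmpty) 0
  else {
    val nearest = dots.map { case (x, y) =>
      (x - pos._1).abs + (y - pos._2).abs
    }.min
    nearest + (dots.length - 1)
  }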
Problem: Start with a set S of size 2n+1 and a subset A of S of size n. You have functions addElement(A,x) and removeElement(A,x) that can add or remove an element of A. Write a function that cycles through all the subsets of S of size n or n+1 using just these two operations on A.
I figured out that there are (2n+1 choose n) + (2n+1 choose n+1) = 2 * (2n+1 choose n) subsets that I need to find. So here's the structure for my function:
for (int k=0; k<2*binomial(2n+1,n); ++k) {
    if (k mod 2) {
        // somehow choose x from S-A
        A = addElement(A,x);
        printSet(A,n+1);
    } else {
        // somehow choose x from A
        A = removeElement(A,x);
        printSet(A,n);
    }
}
The function binomial(2n+1,n) just gives the binomial coefficient, and the function printSet prints the elements of A so that I can see if I hit all the sets.
I don't know how to choose the element to add or remove, though. I tried lots of different things, but I didn't get anything that worked in general.
For n=1, here's a solution that I found that works:
for (int k=0; k<6; ++k) {
    if (k mod 2) {
        x = S[A[0] mod 3];
        A = addElement(A,x);
        printSet(A,2);
    } else {
        x = A[0];
        A = removeElement(A,x);
        printSet(A,1);
    }
}
and the output for S = [1,2,3] and A=[1] is:
[1,2]
[2]
[2,3]
[3]
[3,1]
[1]
But I can't even get this to work for n=2. Can someone give me some help on this one?
This isn't a solution so much as it's another way to think about the problem.
Make the following graph:
Vertices are all subsets of S of sizes n or n+1.
There is an edge between v and w if the two sets differ by one element.
For example, for n=1, you get the following cycle:
  {1} --- {1,3} --- {3}
   |                 |
   |                 |
{1,2} --- {2} --- {2,3}
Your problem is to find a Hamiltonian cycle:
A Hamiltonian cycle (or Hamiltonian circuit) is a cycle in an undirected graph which visits each vertex exactly once and also returns to the starting vertex. Determining whether such paths and cycles exist in graphs is the Hamiltonian path problem, which is NP-complete.
In other words, this problem is hard.
There are a handful of theorems giving sufficient conditions for a Hamiltonian cycle to exist in a graph (e.g. if all vertices have degree at least N/2, where N is the number of vertices), but none that I know of immediately implies that this graph has a Hamiltonian cycle.
You could try one of the myriad algorithms for determining whether a Hamiltonian cycle exists. For example, from the Wikipedia article on the Hamiltonian path problem:
A trivial heuristic algorithm for locating Hamiltonian paths is to construct a path abc... and extend it until no longer possible; when the path abc...xyz cannot be extended any longer because all neighbours of z already lie in the path, one goes back one step, removing the edge yz and extending the path with a different neighbour of y; if no choice produces a Hamiltonian path, then one takes a further step back, removing the edge xy and extending the path with a different neighbour of x, and so on. This algorithm will certainly find a Hamiltonian path (if any) but it runs in exponential time.
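For small n, that backtracking idea is easy to try directly on the graph described above. Below is a hedged Scala sketch (the bitmask encoding of subsets is my own choice); the search is exponential, so it is only useful for small n:
import scala.collection.mutable

val n = 2
val m = 2 * n + 1
// Vertices: all subsets of S with n or n+1 elements, encoded as bitmasks.
val verts = (0 until (1 << m)).filter { s =>
  val c = Integer.bitCount(s); c == n || c == n + 1
}.toVector
val vertSet = verts.toSet
// Two subsets are adjacent iff they differ by exactly one element.
def neighbors(s: Int): Seq[Int] =
  (0 until m).map(i => s ^ (1 << i)).filter(vertSet.contains)

val start = (1 << n) - 1                      // the subset {0, ..., n-1}
val visited = mutable.Set(start)
val path = mutable.ArrayBuffer(start)

// Depth-first extension with backtracking, as in the quoted heuristic.
def extend(): Boolean =
  if (path.length == verts.length)
    neighbors(path.last).contains(start)      // all vertices used: close the cycle
  else neighbors(path.last).exists { w =>
    if (visited.add(w)) {
      path += w
      extend() || { path.remove(path.length - 1); visited.remove(w); false }
    } else false
  }

if (extend()) path.foreach { s =>
  println((0 until m).filter(i => (s & (1 << i)) != 0).mkString("{", ",", "}"))
}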
Hope this helps.
Good News: Though the Hamiltonian cycle problem is difficult in general, this graph is very nice: it's bipartite and (n+1)-regular. This means there may be a nice solution for this particular graph.
Bad News: After doing a bit of searching, it turns out that this problem is known as the Middle Levels Conjecture, and it seems to have originated around 1980. As best I can tell, the problem is still open in general, but it has been computer verified for n <= 17 (and I found a preprint from 12/2009 claiming to verify n=18). These two pages have additional information about the problem and references:
http://www.math.uiuc.edu/~west/openp/revolving.html
http://garden.irmacs.sfu.ca/?q=op/middle_levels_problem
This sort of thing is covered in Knuth Vol 4A (which, despite Charles Stross's excellent Laundry novels, is now openly available). I think your request is satisfied by a section on monotonic binary Gray codes described in section 7.2.1.1. There is an online preprint with a PDF version at http://www.kcats.org/csci/464/doc/knuth/fascicles/fasc2a.pdf
When searching in a tree, my understanding of uniform cost search is that for a given node A with child nodes B, C, D and associated costs (10, 5, 7), my algorithm will choose C, as it has the lowest cost. After expanding C, I see nodes E, F, G with costs (40, 50, 60). It will choose E (cost 40), as it has the minimum value of the three.
Now, isn't it just the same as doing a Greedy-Search, where you always choose what seems to be the best action?
Also, when defining the costs of going from certain nodes to others, should we consider the whole cost from the beginning of the tree to the current node, or just the cost of going from node n to node n'?
Thanks
Nope. Your understanding isn't quite right.
The next node to be visited in the case of uniform-cost search would be D, as that has the lowest total cost from the root (7, as opposed to 5 + 40 = 45 for E).
Greedy Search doesn't go back up the tree - it picks the lowest value and commits to that. Uniform-Cost will pick the lowest total cost from the entire tree.
In a uniform cost search you always consider all unvisited nodes you have seen so far, not just those that are connected to the node you just looked at. So in your example, after choosing C, you would find that visiting E has a total cost of 5 + 40 = 45, which is higher than the cost of starting again from the root and visiting D, which has cost 7. So you would visit D next.
The difference between them is that the Greedy picks the node with the lowest heuristic value while the UCS picks the node with the lowest action cost. Consider the following graph:
If you run both algorithms, you'll get:
UCS
Picks: S (cost 0), B (cost 1), A (cost 2), D (cost 3), C (cost 5), G (cost 7)
Answer: S->A->D->G
Greedy:
*supposing it chooses A instead of B; A and B have the same heuristic value
Picks: S , A (h = 3), C (h = 1), G (h = 0)
Answer: S->A->C->G
So it's important to differentiate between the action cost to reach a node and its heuristic value, which is a piece of information added to the node based on an understanding of the problem domain.
Greedy search (for most of this answer, think of greedy best-first search when I say greedy search) is an informed search algorithm, which means the function that is evaluated to choose which node to expand has the form of f(n) = h(n), where h is the heuristic function for a given node n that returns the estimated value from this node n to a goal state. If you're trying to travel to a place, one example of a heuristic function is one that returns the estimated distance from node n to your destination.
Uniform-cost search, on the other hand, is an uninformed search algorithm, also known as a blind search strategy. This means that the value of the function f for a given node n, f(n), for uninformed search algorithms, takes into consideration g(n), the total action cost from the root node to the node n, that is, the path cost. It doesn't have any information about the problem apart from the problem description, so that's all it can know. You don't have any information that can help you decide how close one node is to a goal state, only to the root node. You can watch the nodes expanding here (Animation of the Uniform Cost Algorithm) and see how the cost from node n to the root is used to choose which nodes to expand.
Greedy search, just like any greedy algorithm, takes locally optimal solutions and uses a function that returns an estimated value from a given node n to the goal state. You can watch the nodes expanding here (Greedy Best First Search | Quick Explanation with Visualization) and see how the return of the heuristic function from node n to the goal state is used to choose which nodes to expand.
By the way, sometimes the path chosen by greedy search is not a global optimum. In the example in the video, node A is never expanded because there are always nodes with smaller values of h(n). But what if A has a high heuristic value even though the path through it leads to the global optimum? That can happen with a bad heuristic function. Getting stuck in a loop is also possible. A*, another informed search algorithm, fixes this by making use of both the path cost (which implies knowing the nodes already visited) and a heuristic function, that is, f(n) = g(n) + h(n).
If it's still not clear at this point HOW uniform-cost search knows there is another path that looks better locally but not globally, note that if all step costs are the same, uniform cost search is the same thing as breadth-first search (BFS): it would expand nodes in exactly the same order as BFS.
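To see the strategies side by side in code, here is a hedged sketch (my own generic formulation, not a canonical one) of best-first search parameterized by the priority function: pass (g, _) => g for uniform-cost search, (_, n) => h(n) for greedy best-first search, and (g, n) => g + h(n) for A*:
import scala.collection.mutable

def bestFirst[N](start: N,
                 isGoal: N => Boolean,
                 succ: N => Seq[(N, Double)],       // (neighbor, step cost)
                 priority: (Double, N) => Double    // (path cost g, node) => key
                ): Option[(List[N], Double)] = {
  case class Entry(node: N, g: Double, path: List[N])
  val frontier = mutable.PriorityQueue.empty[Entry](
    Ordering.by[Entry, Double](e => priority(e.g, e.node)).reverse)  // min-heap
  val expanded = mutable.Set.empty[N]
  frontier.enqueue(Entry(start, 0.0, List(start)))
  while (frontier.nonEmpty) {
    val e = frontier.dequeue()
    if (isGoal(e.node)) return Some((e.path.reverse, e.g))
    if (expanded.add(e.node))                        // expand each node at most once
      for ((m, c) <- succ(e.node) if !expanded(m))
        frontier.enqueue(Entry(m, e.g + c, m :: e.path))
  }
  None
}
With the tree from the question and the priority (g, _) => g, the frontier after expanding C holds D with key 7 and E with key 45, so uniform-cost search dequeues D next, exactly as described above.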
UCS cares about history,
Greedy does not.
In your example, after expanding C, the next node would be D according to UCS, and that is because of history: UCS can't forget the past, and it remembers that the total cost of D is much lower than that of E.
Don't be greedy. Be UCS, and if going back is really the better choice, don't be afraid of going back!