Number of recursive calls in gcd() function - C

Recently I was given a gcd() function, written in the C programming language, which takes two arguments n and m and computes the GCD of the two numbers using recursion. I have been asked: "How many recursive calls are made by the function if n >= m?" Can anyone provide a solution with an explanation, as I am unable to figure it out?
Here is the source code of the function :
int gcd(int n, int m)
{
    if (n % m == 0)
        return m;
    else
        n = n % m;
    return gcd(m, n);
}

The Euclidean algorithm gives the number of steps as
T(a, b) = 1 + T(b, r_0) = 2 + T(r_0, r_1) = … = N + T(r_{N-2}, r_{N-1}) = N
where a and b are the inputs and r_i is the i-th remainder. We used the fact that T(x, 0) = 0.
Running an example on paper will help you get a better grasp of the aforementioned equation:
gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21
So a = 1071 and b = 462, and we have:
T(a, b) =
1 + T(b, a % b) = 1 + T(b, r_0) = (1)
2 + T(r_0, b % r_0) = 2 + T(r_0, r_1) =
3 + T(r_1, r_0 % r_1) = 3 + T(r_1, r_2) = (2)
3 + T(r_1, 0) =
3 + 0 =
3
which says that we needed to take 3 steps to compute gcd(1071, 462).
(1): notice that the 1 accounts for the step already performed, i.e. the expansion of T(a, b)
(2): r_2 is equal to 0 in this example
You could work through a plethora of examples on paper and see how this unfolds, and eventually you will be able to see the pattern, if you don't see it already; the sketch below automates the counting.
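For instance, here is a minimal Python sketch that counts the division steps programmatically. Note it uses the T(x, 0) = 0 convention from the formula above rather than the C code's n % m == 0 test, so it counts exactly the steps the formula measures:

def gcd_steps(n, m):
    """Return (gcd, steps), following T(a, b) = 1 + T(b, a % b) with T(x, 0) = 0."""
    if m == 0:
        return n, 0
    g, steps = gcd_steps(m, n % m)
    return g, steps + 1

>>> gcd_steps(1071, 462)
(21, 3)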
Note: While @IanAbbott's comments are also correct, I decided to present this approach, since it's more generic and can be applied to any similar recursive method.

Is there a way to improve this pygame colour filter algorithm

I've made a function to find a color within an image, and return x, y. Now I need to add a new function where I can find a color with a given tolerance. Should be easy?
Code to find color in image, and return x, y:
from PIL import ImageGrab

def FindColorIn(r, g, b, xmin, xmax, ymin, ymax):
    image = ImageGrab.grab()
    for x in range(xmin, xmax):
        for y in range(ymin, ymax):
            px = image.getpixel((x, y))
            if px[0] == r and px[1] == g and px[2] == b:
                return x, y

def FindColor(r, g, b):
    image = ImageGrab.grab()
    size = image.size
    pos = FindColorIn(r, g, b, 1, size[0], 1, size[1])
    return pos
Outcome:
Taken from the answers, the usual methods of comparing two colors are Euclidean distance and Chebyshev distance.
I decided to mostly use (squared) Euclidean distance with several different color spaces: LAB, deltaE (LCH), XYZ, HSL, and RGB. In my code, most color spaces use squared Euclidean distance to compute the difference.
For example, with LAB, RGB, and XYZ a simple squared Euclidean distance does the trick:
if ((X-X1)^2 + (Y-Y1)^2 + (Z-Z1)^2) <= (Tol^2) then
...
LCH and HSL are a little more complicated, as both have a cylindrical hue, but a little math solves that; then it's on to using squared Euclidean distance there as well (see the sketch below).
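For the hue, the usual trick is to take the difference around the circle; a minimal sketch of the idea (assuming hue in degrees; the hue_mod and light_mod weights are hypothetical knobs, in the spirit of the modifiers mentioned below):

def hsl_distance_sq(hsl1, hsl2, hue_mod=1.0, light_mod=1.0):
    # Squared distance between two HSL colours; hue is circular, in degrees
    dh = abs(hsl1[0] - hsl2[0])
    dh = min(dh, 360.0 - dh)  # wrap around the hue circle
    ds = hsl1[1] - hsl2[1]
    dl = hsl1[2] - hsl2[2]
    return (hue_mod * dh) ** 2 + ds ** 2 + (light_mod * dl) ** 2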
In most of these cases I've added separate tolerance parameters for each channel (using one global tolerance and alternative modifiers, e.g. HueTol := Tolerance * hueMod or LightTol := Tolerance * LightMod).
Color spaces built on top of XYZ (LAB, LCH) seem to perform best in many of my scenarios. HSL yields very good results in some cases, and it is much cheaper to convert to from RGB; RGB is also great and fills most of my needs.
Computing distances between RGB colours in a way that's meaningful to the eye isn't as easy as just taking the Euclidean distance between the two RGB vectors.
There is an interesting article about this here: http://www.compuphase.com/cmetric.htm
The example implementation in C is this:
typedef struct {
    unsigned char r, g, b;
} RGB;

double ColourDistance(RGB e1, RGB e2)
{
    long rmean = ((long)e1.r + (long)e2.r) / 2;
    long r = (long)e1.r - (long)e2.r;
    long g = (long)e1.g - (long)e2.g;
    long b = (long)e1.b - (long)e2.b;
    return sqrt((((512+rmean)*r*r)>>8) + 4*g*g + (((767-rmean)*b*b)>>8));
}
It shouldn't be too difficult to port to Python.
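For instance, a direct port might look something like this (a sketch; assumes e1 and e2 are (r, g, b) tuples with 8-bit integer values):

from math import sqrt

def colour_distance(e1, e2):
    # Port of the C ColourDistance above; channels in 0..255
    rmean = (e1[0] + e2[0]) // 2
    r = e1[0] - e2[0]
    g = e1[1] - e2[1]
    b = e1[2] - e2[2]
    return sqrt((((512 + rmean) * r * r) >> 8) + 4 * g * g + (((767 - rmean) * b * b) >> 8))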
EDIT:
Alternatively, as suggested in this answer, you could use HLS and HSV. The colorsys module seems to have functions to make the conversion from RGB. Its documentation also links to these pages, which are worth reading to understand why RGB Euclidean distance doesn't really work:
http://www.poynton.com/ColorFAQ.html
http://www.cambridgeincolour.com/tutorials/color-space-conversion.htm
EDIT 2:
According to this answer, this library should be useful: http://code.google.com/p/python-colormath/
Here is an optimized Python version adapted from Bruno's answer:

def ColorDistance(rgb1, rgb2):
    '''Weighted distance between two colors; rgb1 and rgb2 are numpy arrays of length 3.'''
    rm = 0.5 * (rgb1[0] + rgb2[0])
    d = sum((2 + rm, 4, 3 - rm) * (rgb1 - rgb2) ** 2) ** 0.5
    return d
usage:
>>> import numpy
>>> rgb1 = numpy.array([1,1,0])
>>> rgb2 = numpy.array([0,0,0])
>>> ColorDistance(rgb1,rgb2)
2.5495097567963922
Instead of this:
if px[0] == r and px[1] == g and px[2] == b:
Try this:
if max(map(lambda a,b: abs(a-b), px, (r,g,b))) < tolerance:
Where tolerance is the maximum difference you're willing to accept in any of the color channels.
What it does is subtract each channel from your target values, take the absolute values, and then the max of those.
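For example, with a target of (200, 100, 50) and a tolerance of 10, a pixel of (205, 95, 52) matches, since the worst channel only differs by 5:

>>> px = (205, 95, 52)
>>> max(map(lambda a, b: abs(a - b), px, (200, 100, 50))) < 10
True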
Assuming that rtol, gtol, and btol are the tolerances for r, g, and b respectively, why not do:
if abs(px[0] - r) <= rtol and \
   abs(px[1] - g) <= gtol and \
   abs(px[2] - b) <= btol:
    return x, y
Here's a vectorised Python (numpy) version of Bruno and Developer's answers (i.e. an implementation of the approximation derived here) that accepts a pair of numpy arrays of shape (x, 3) where individual rows are in [R, G, B] order and individual colour values ∈[0, 1].
You can reduce it to a two-liner at the expense of readability. I'm not entirely sure whether it's the most optimised version possible, but it should be good enough.
import numpy as np

def colour_dist(fst, snd):
    rm = 0.5 * (fst[:, 0] + snd[:, 0])
    drgb = (fst - snd) ** 2
    t = np.array([2 + rm, 4 + 0 * rm, 3 - rm]).T
    return np.sqrt(np.sum(t * drgb, 1))
It was evaluated against Developer's per-element version above, and produces the same results (save for floating precision errors in two cases out of one thousand).
A cleaner Python implementation of the function stated here. The function takes two image paths, reads them using cv.imread, and outputs a matrix in which each cell holds the colour difference at that pixel. You can easily change it to just compare two colours.

import numpy as np
import cv2 as cv

def col_diff(img1, img2):
    # OpenCV reads as B, G, R; cast to float to avoid uint8 overflow in the arithmetic
    img_bgr1 = cv.imread(img1).astype(np.float64)
    img_bgr2 = cv.imread(img2).astype(np.float64)
    r_m = 0.5 * (img_bgr1[:, :, 2] + img_bgr2[:, :, 2])
    delta_rgb = np.square(img_bgr1 - img_bgr2)
    cols_diffs = (delta_rgb[:, :, 2] * (2 + r_m / 256)
                  + delta_rgb[:, :, 1] * 4
                  + delta_rgb[:, :, 0] * (2 + (255 - r_m) / 256))
    cols_diffs = np.sqrt(cols_diffs)
    # normalize the values to the range [0, 1]
    cols_diffs_min = np.min(cols_diffs)
    cols_diffs_max = np.max(cols_diffs)
    cols_diffs_normalized = (cols_diffs - cols_diffs_min) / (cols_diffs_max - cols_diffs_min)
    return np.sqrt(cols_diffs_normalized)
Simple:

def eq_with_tolerance(a, b, t):
    return a - t <= b <= a + t

def FindColorIn(r, g, b, xmin, xmax, ymin, ymax, tolerance=0):
    image = ImageGrab.grab()
    for x in range(xmin, xmax):
        for y in range(ymin, ymax):
            px = image.getpixel((x, y))
            if eq_with_tolerance(r, px[0], tolerance) and eq_with_tolerance(g, px[1], tolerance) and eq_with_tolerance(b, px[2], tolerance):
                return x, y
From the pyautogui source code:

def pixelMatchesColor(x, y, expectedRGBColor, tolerance=0):
    r, g, b = screenshot().getpixel((x, y))
    exR, exG, exB = expectedRGBColor
    return (abs(r - exR) <= tolerance) and (abs(g - exG) <= tolerance) and (abs(b - exB) <= tolerance)
You just need a little fix and you're ready to go.
Here is a simple function that does not require any libraries:
def color_distance(rgb1, rgb2):
    rm = 0.5 * (rgb1[0] + rgb2[0])
    rd = ((2 + rm) * (rgb1[0] - rgb2[0])) ** 2
    gd = (4 * (rgb1[1] - rgb2[1])) ** 2
    bd = ((3 - rm) * (rgb1[2] - rgb2[2])) ** 2
    return (rd + gd + bd) ** 0.5
assuming that rgb1 and rgb2 are RGB tuples
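For example, assuming (as in the numpy variants above) channel values scaled to [0, 1]:

>>> round(color_distance((1, 0, 0), (0, 1, 0)), 3)
4.717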

I am getting unexpected output

I expected the output of the following code to be 0, but I am getting 3.
#include <stdio.h>

int num_digit(int n);

int num_digit(int n)
{
    if (n == 0)
        return 0;
    else
        return 1 + num_digit(n / 10);
}

int main() {
    int k = num_digit(123);
    printf("%d\n", k);
    return 0;
}
The following link provides an excellent resource for learning C recursion and, as @MFisherKDX pointed out, helped resolve my confusion.
https://www.programiz.com/c-programming/c-recursion
Each time a recursive call returns, it adds 1 to the value returned by the call below it. Adding up all the values:
0 + 1 = 1
1 + 1 = 2
2 + 1 = 3
gives the answer 3.
This is basic recursion. Just try to draw the recursion tree for the program you have written and you should be able to figure out why the output is 3.
You are expecting 0 as the answer based only on the last recursive call (the terminating condition), but when a recursive call happens, activation records are maintained on a stack.
The recursion tree will look like this:
num_digits(123) = 1 + num_digits(12)
num_digits(12) = 1 + num_digits(1)
num_digits(1) = 1 + num_digits(0)
num_digits(0) = 0
Using substitution:
num_digits(123) = 1 + (1 + (1 + (0)))
Follow the parentheses above carefully and you should be able to understand the output of the code you wrote.
The recursion stack for your code looks like this:
1 + num_digit(123/10);
1 + num_digit(12/10);
1 + num_digit(1/10); //at this point your code will return 0 for num_digit(1/10)
and backtracking is like below
1+0=1
1+1=2
1+2=3
Hence the final answer is 3
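If it helps, here is a small Python sketch of the same function with a trace added, so you can watch each call return and the 1s accumulate on the way back up:

def num_digit(n, depth=0):
    indent = "  " * depth
    print(indent + "num_digit(%d) called" % n)
    if n == 0:
        print(indent + "num_digit(0) returns 0")
        return 0
    result = 1 + num_digit(n // 10, depth + 1)
    print(indent + "num_digit(%d) returns %d" % (n, result))
    return result

num_digit(123)  # traces the calls for 123, 12, 1, 0 and returns 3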

Matlab: Help understanding sinusoidal curve fit

I have an unknown sine wave with some noise that I am trying to reconstruct. The ultimate goal is to come up with a C algorithm to find the amplitude, dc offset, phase, and frequency of a sine wave but I am prototyping in Matlab (Octave actually) first. The sine wave is of the form
y = a + b*sin(c + 2*pi*d*t)
a = dc offset
b = amplitude
c = phase shift (rad)
d = frequency
I found this example, and in the comments John D'Errico presents a method for using least squares to fit a sine wave to data. It is a neat little algorithm and works remarkably well, but I am having difficulty understanding one aspect. The algorithm is as follows:
Algorithm
Suppose you have a sine wave of the form:
(1) y = a + b*sin(c+d*x)
Using the identity
(2) sin(u+v) = sin(u)*cos(v) + cos(u)*sin(v)
We can rewrite (1) as
(3) y = a + b*sin(c)*cos(d*x) + b*cos(c)*sin(d*x)
Since b*sin(c) and b*cos(c) are constants, these can be wrapped into constants b1 and b2.
(4) y = a + b1*cos(d*x) + b2*sin(d*x)
This is the equation that is used to fit the sine wave. A function is created to generate regression coefficients and a sum-of-squares residual error.
(5) cfun = @(d) [ones(size(x)), sin(d*x), cos(d*x)] \ y;
(6) sumerr2 = @(d) sum((y - [ones(size(x)), sin(d*x), cos(d*x)] * cfun(d)) .^ 2);
Next, sumerr2 is minimized for the frequency d using fminbnd with lower limit l1 and upper limit l2.
(7) dopt = fminbnd(sumerr2, l1, l2);
Now a, b, and c can be computed. The coefficients to compute a, b, and c are given from (4) at dopt
(8) abb = cfun(dopt);
The dc offset is simply the first value
(9) a = abb(1);
A trig identity is used to find b
(10) sin(u)^2 + cos(u)^2 = 1
(11) b = sqrt(b1^2 + b2^2)
(12) b = norm(abb([2 3]));
Finally the phase offset is found
(13) b1 = b*cos(c)
(14) c = acos(b1 / b);
(15) c = acos(abb(2) / b);
Question
What is going on in (5) and (6)? Can someone break down what is happening in pseudo-code or perhaps perform the same function in a more explicit way?
(5) cfun = @(d) [ones(size(x)), sin(d*x), cos(d*x)] \ y;
(6) sumerr2 = @(d) sum((y - [ones(size(x)), sin(d*x), cos(d*x)] * cfun(d)) .^ 2);
Also, given (4) shouldn't it be:
[ones(size(x)), cos(d*x), sin(d*x)]
Code
Here is the Matlab code in full. Blue line is the actual signal. Green line is the reconstructed signal.
close all
clear all
y = [111,140,172,207,243,283,319,350,383,414,443,463,483,497,505,508,503,495,479,463,439,412,381,347,311,275,241,206,168,136,108,83,63,54,45,43,41,45,51,63,87,109,137,168,204,239,279,317,348,382,412,439,463,479,496,505,508,505,495,483,463,441,414,383,350,314,278,245,209,175,140,140,110,85,63,51,45,41,41,44,49,63,82,105,135,166,200,236,277,313,345,379,409,438,463,479,495,503,508,503,498,485,467,444,415,383,351,318,281,247,211,174,141,111,87,67,52,45,42,41,45,50,62,79,104,131,163,199,233,273,310,345,377,407,435,460,479,494,503,508,505,499,486,467,445,419,387,355,319,284,249,215,177,143,113,87,67,55,46,43,41,44,48,63,79,102,127,159,191,232,271,307,343,373,404,437,457,478,492,503,508,505,499,488,470,447,420,391,360,323,287,254,215,182,147,116,92,70,55,46,43,42,43,49,60,76,99,127,159,191,227,268,303,339,371,401,431,456,476,492,502,507,507,500,488,471,447,424,392,361,326,287,287,255,220,185,149,119,92,72,55,47,42,41,43,47,57,76,95,124,156,189,223,258,302,337,367,399,428,456,476,492,502,508,508,501,489,471,451,425,396,364,328,294,259,223,188,151,119,95,72,57,46,43,44,43,47,57,73,95,124,153,187,222,255,297,335,366,398,426,451,471,494,502,507,508,502,489,474,453,428,398,367,332,296,262,227,191,154,124,95,75,60,47,43,41,41,46,55,72,94,119,150,183,215,255,295,331,361,396,424,447,471,489,500,508,508,502,492,475,454,430,401,369,335,299,265,228,191,157,126,99,76,59,49,44,41,41,46,55,72,92,118,147,179,215,252,291,328,360,392,422,447,471,488,499,507,508,503,493,477,456,431,403]';
fs = 100e3;
N = length(y);
t = (0:1/fs:N/fs-1/fs)';
cfun = @(d) [ones(size(t)), sin(2*pi*d*t), cos(2*pi*d*t)]\y;
sumerr2 = @(d) sum((y - [ones(size(t)), sin(2*pi*d*t), cos(2*pi*d*t)] * cfun(d)) .^ 2);
dopt = fminbnd(sumerr2, 2300, 2500);
abb = cfun(dopt);
a = abb(1);
b = norm(abb([2 3]));
c = acos(abb(2) / b);
d = dopt;
y_reconstructed = a + b*sin(2*pi*d*t - c);
figure(1)
hold on
title('Signal Reconstruction')
grid on
plot(t*1000, y, 'b')
plot(t*1000, y_reconstructed, 'g')
ylim = get(gca, 'ylim');
xlim = get(gca, 'xlim');
text(xlim(1), ylim(2) - 15, [num2str(b) ' cos(2\pi * ' num2str(d) 't - ' ...
num2str(c * 180/pi) ') + ' num2str(a)]);
hold off
(5) and (6) are defining anonymous functions that can be used within the optimisation code. cfun returns an array that is a function of t, y and the parameter d (that is the optimisation parameter that will be varied). Similarly, sumerr2 is another anonymous function, with the same arguments, this time returning a scalar. That scalar will be the error that is to be minimised by fminbnd.
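If it helps to see (5) and (6) spelled out, here is a rough NumPy equivalent (a sketch; MATLAB's backslash on an overdetermined system is a least-squares solve, and t and y are passed explicitly instead of being captured):

import numpy as np

def cfun(d, t, y):
    # Design matrix [1, sin(2*pi*d*t), cos(2*pi*d*t)], one row per sample
    A = np.column_stack([np.ones_like(t), np.sin(2*np.pi*d*t), np.cos(2*np.pi*d*t)])
    # MATLAB's A \ y: least-squares estimate of the coefficients [a, b2, b1] from (4)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def sumerr2(d, t, y):
    # Sum of squared residuals of the linear fit at this trial frequency d
    A = np.column_stack([np.ones_like(t), np.sin(2*np.pi*d*t), np.cos(2*np.pi*d*t)])
    return np.sum((y - A @ cfun(d, t, y)) ** 2)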

How do I use Metropolis Sampling in MATLAB to calculate an integral?

I am trying to write a MATLAB function to solve a test integral using the Metropolis method. My function is listed below.
The integral is the integral from 0 to infinity of x*e^(-x^2), divided by the integral from 0 to infinity of e^(-x^2).
My function converges to ~0.5 (notably, it fluctuates a little about this answer), however the analytical solution is ~0.5642, or 1/sqrt(pi).
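As a sanity check on the target value, both integrals can be evaluated with ordinary quadrature (a sketch using scipy, outside MATLAB):

import numpy as np
from scipy.integrate import quad

num, _ = quad(lambda x: x * np.exp(-x**2), 0, np.inf)  # = 1/2
den, _ = quad(lambda x: np.exp(-x**2), 0, np.inf)      # = sqrt(pi)/2
print(num / den, 1 / np.sqrt(np.pi))                   # both ~0.5642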
The code I use to run the function is also below.
What have I done wrong? How do I use the Metropolis method to correctly solve this test integral?
% Metropolis Method for Integration
% Written by John Furness - Computational Physics, KTH

function [I S1 S2] = metropolis(f,a,b,n,sig)
% This function calculates an integral using the Metropolis method.
% It takes as input a function f on an interval between a and b,
% where n is the number of points.

% Defining burn-in
%burnin = n/20;
burnin = 0;

% Finding maximum point
x = linspace(a,b,1000);
f1 = f(x);
max1 = max(f1);

% Setting up x-vector and mu
x(1) = rand(1);
mu = 0;

% Generating random points for x with a Gaussian distribution.
% The proposal distribution will be the normal distribution.
strg = 'exp(-1*((x-mu)/sig).^2)';
norm = inline(strg,'x','mu','sig');

for i = 2:n
    % This loop generates a new state from the proposal distribution.
    y = x(i-1) + sig*randn(1);
    % Generate a uniform for comparison
    u = rand(1);
    % Alpha is the acceptance probability
    alpha = min([1, (f(y))/((f(x(i-1))))]);
    if u <= alpha
        x(i) = y;
    else
        x(i) = x(i-1);
    end
end

% Discarding burn-in
%x(1:burnin) = [];

%I = ((inside)/length(x))*max1*(b-a);
I = (1/length(f(x)))*((sum(f(x))))/sum(norm(x,mu,sig));

% My investigation variables to see what's happening
%S1 = sum(f(x));
%S2 = sum(norm1(x,mu,sig));
S1 = min(x);
S2 = max(x);
end
Code used to run the above function:
% Code for Running Metropolis Method
% Written by John Furness - Computational Physics
% Clearing Workspace
clear all
close all
clc
% Equation 1
% Changing parameters for Equation 1
a1 = 0;
b1 = 10;
n1 = 10000;
sig = 2;

N1 = @(x)(x.*exp(-x.^2));
D1 = @(x)(exp(-x.^2));

denom = metropolis(D1,a1,b1,n1,sig);
numer = metropolis(N1,a1,b1,n1,sig);
solI1 = numer/denom

Dijkstra's Algorithm: Why is it needed to find minimum-distance element in the queue

I wrote this implementation of Dijkstra's algorithm which, at each iteration of the while loop (while Q is not empty), takes the head of the queue instead of finding the minimum-distance element.
Here is the code I wrote:
#include <stdio.h>
#include <limits.h>

#define INF INT_MAX

int N;
int Dist[500];
int Q[500];
int Visited[500];
int Graph[500][500];

void Dijkstra(int b){
    int H = 0;
    int T = -1;
    int j, k;
    Dist[b] = 0;
    Q[T+1] = b;
    T = T + 1;
    while (T >= H){
        j = Q[H];
        Visited[j] = 1;
        for (k = 0; k < N; k++){
            if (!Visited[k] && Dist[k] > Graph[j][k] + Dist[j] && Graph[j][k] != -1){
                Dist[k] = Dist[j] + Graph[j][k];
                Q[T+1] = k;
                T = T + 1;
            }
        }
        H = H + 1;
    }
}

int main(){
    int src, target, m;
    int a, w, b, i, j;
    scanf("%d%d%d%d", &N, &m, &src, &target);
    for (i = 0; i < N; i++){
        for (j = 0; j < N; j++){
            Graph[i][j] = -1;
        }
    }
    for (i = 0; i < N; i++){
        Dist[i] = INF;
        Visited[i] = 0;
    }
    for (i = 0; i < m; i++){
        scanf("%d%d%d", &a, &b, &w);
        a--;
        b--;
        Graph[a][b] = w;
        Graph[b][a] = w;
    }
    Dijkstra(src-1);
    if (Dist[target-1] == INF){
        printf("NO");
    } else {
        printf("YES\n%d", Dist[target-1]);
    }
    return 0;
}
I ran this on all the test cases I could find and it gave correct answers.
My question is: why do we need to find the min at all? Can anyone explain this to me in plain English? I also need a test case that proves my code wrong.
Take a look at this sample:
1 --(6)-- 2 --(7)-- 3
 \                 /
 (7)            (2)
   \             /
    +---- 4 ----+
I.e. you have an edge with length 6 from 1 to 2, an edge with length 7 from 2 to 3, an edge with length 7 from 1 to 4, and an edge with length 2 from 4 to 3. I believe your algorithm will think the shortest path from 1 to 3 has length 13, going through 2, while actually the best solution has length 9, going through 4.
Hope this makes it clear.
EDIT: sorry, this example did not break the code. Have a look at this one:
8 9 1 3
1 5 6
5 3 2
1 2 7
2 3 2
1 4 7
4 3 1
1 7 3
7 8 2
8 3 2
Your output is YES 8, while the path 1->7->8->3 costs only 7. Here is a link on ideone.
I think your code has the wrong time complexity: it compares (almost) all pairs of nodes, which is quadratic.
Try adding 10000 nodes with 10000 edges and see if the code can execute within 1 second.
It is always mandatory to find the unvisited vertex with minimum distance, otherwise you will get at least one of the edges wrong. For example, consider the following case:
4 4
1 2 8
2 4 5
1 3 2
3 2 1
  (8)   (5)
1-----2-----4
 \   /
(2)\ /(1)
    3
and we start with vertex 1:
distance[1] = 0
When you have visited vertex 1, you have relaxed vertex 2 and vertex 3, so now:
distance[2] = 8 and distance[3] = 2
After this, if we don't select the minimum and choose vertex 2 instead, we get:
distance[4] = 13
Then we select vertex 3, which gives:
distance[2] = 3
But vertex 2 has already been visited, so we end up with distance[4] = 13 when it should have been:
distance[4] = 8
Hence we should choose the minimum from the unvisited vertices at each stage of Dijkstra, which can be done efficiently using a priority_queue.
If you run the algorithm on the following graph, the result depends on the order in which children are visited. Say we are looking for the shortest path from 1 to 4.
If you start from the queue with 1:
dist[1] = 0
dist[2] = 21
dist[3] = 0
and seen = {1}, while 2 and 3 are pushed onto the queue. Now if we consume 2 from the queue, it will make dist[4] = 51, seen = {1, 2}, q = [1, 2, 3, 4], and next time, when 3 is consumed from the queue, 2 won't be added again since it is already in seen. Hence the algorithm will later update the distance to 12 + 31 = 43 via the path 1->3->5->4, whereas the shortest path is 32, on 1->3->2->4.
Let me discuss some other aspects with code examples. Let's say we have a connection list of (u, v, w) entries, where node u has a weighted, directed edge to v with weight w. Let's prepare the graph and edges as below:
graph, edges = {i: set() for i in range(1, N+1)}, dict()
for u, v, w in connection_list:
    graph[u].add(v)
    edges[(u, v)] = w
ALGORITHM1 - Pick any child to add if not visited
from collections import deque

q = deque([start])
seen = set()
dist = {i: float('inf') for i in range(1, N+1)}
dist[start] = 0
while q:
    top = q.pop()
    seen.add(top)
    for v in graph[top]:
        dist[v] = min(dist[v], dist[top] + edges[(top, v)])
        if v not in seen:
            q.appendleft(v)
This one is already discussed above and it will give us the incorrect result 43 instead of 32 for the shortest path between 1 and 4.
The problem was that 2 was never re-added to the queue, so let's get rid of seen and add the children unconditionally.
ALGORITHM2 - Add all children to the queue again
while q:
    top = q.pop()
    seen.add(top)
    for v in graph[top]:
        dist[v] = min(dist[v], dist[top] + edges[(top, v)])
        q.appendleft(v)
This works in that case, but only for this example. There are two issues with this algorithm:
We are adding the same nodes again, so for a bigger example the complexity will depend on the number of edges E instead of the number of nodes V, and for a dense graph we can assume O(E) = O(N^2).
If the graph has cycles, it would run forever, since there is no check to stop. So this algorithm is not a fit for cyclic graphs.
So that's why we have to spend extra time picking the minimum child. If we do it with a linear search we end up with the same complexity as above, but if we use a priority queue we can reduce the min search to O(log N) instead of O(N). Here is the linear-search update of the code.
ALGORITHM3 - Dirty Dijkstra's Algorithm with linear minimum search
q = [start]
seen = set()
dist = {i: float('inf') for i in range(1, N+1)}
dist[start] = 0
while q:
    min_dist, top = min((dist[i], i) for i in q)  # linear O(N) search for the minimum
    q.remove(top)
    seen.add(top)
    for v in graph[top]:
        dist[v] = min(dist[v], dist[top] + edges[(top, v)])
        if v not in seen:
            q.append(v)
Now that we know the thought process, we can use a heap to get the optimal Dijkstra's algorithm next time.
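For completeness, here is a heap-based sketch of the same loop (using Python's heapq and the same graph/edges structures as above):

import heapq

def dijkstra(start, N, graph, edges):
    dist = {i: float('inf') for i in range(1, N + 1)}
    dist[start] = 0
    q = [(0, start)]  # heap of (distance, node) pairs
    seen = set()
    while q:
        d, top = heapq.heappop(q)  # O(log N) extraction of the minimum
        if top in seen:
            continue  # stale entry; this node was already finalised
        seen.add(top)
        for v in graph[top]:
            nd = d + edges[(top, v)]
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(q, (nd, v))
    return dist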
