Dijkstra's Algorithm vs A* Algorithm

Scott Robinson
Mila Lukic

Every single search algorithm consists of:

  1. The current state of the problem
  2. Possible actions that can be done in order to change that state
  3. The ability to recognize the final state - our goal

When it comes to Artificial Intelligence, there are two types of search algorithms:

  • Uninformed search algorithms
  • Informed search algorithms

In this lesson, we'll go over some of the differences between two of the most popular algorithms, one from each category:

  1. Dijkstra's algorithm: uninformed search algorithm
  2. A* (A Star) algorithm: informed search algorithm

Uninformed Search Algorithms

As we already mentioned, a search algorithm has to be able to:

  • Identify the current state of the problem
  • Use a set of actions to modify the current state
  • Identify the final state

Uninformed in this context means that the algorithm doesn't have any additional information that helps it determine where it should go. Think of it like a near-sighted person trying to navigate the streets of a city they're not familiar with. All the information they have is what's right in front of them, and nothing else.

Common examples are the Depth First Search algorithm, as well as the Breadth First Search algorithm. However, in this lesson, we are going to only focus on Dijkstra's algorithm.

Dijkstra's algorithm

Dijkstra's algorithm is an algorithm that finds the shortest path between nodes A and B in a directed graph with non-negative edge weights. In a nutshell, it does this by finding the shortest paths from one node A to all other nodes, which will, of course, include B.

To denote the length of the current shortest path from node A to any given node, we need to set "costs" for all nodes in the graph. All of the costs will be set to infinity at the beginning, so that any real path cost we find later is guaranteed to be smaller. The only exception is the cost of the starting node - this node will have a cost of 0.

Let's go over this algorithm step by step:

  1. First, we will create a set of visited nodes (visited) to keep track of all of the nodes that have been assigned their correct shortest path from node A. We need this set so that we don't backtrack to nodes that have already been assigned their shortest path

  2. Until we get to the final node B, we do the following:

    1. We pick the unvisited node K with the smallest current cost and visit it (add it to the visited nodes set)
    2. We update the costs of all of K's adjacent nodes that are not visited yet: we do this by comparing their current cost with the sum of K's cost and the weight of the edge between K and the adjacent node in question, and keeping the smaller of the two
  3. When we get to the final node B, the algorithm is done and the cost assigned to the final node is guaranteed to be the length of the shortest path from the start node to it (a minimal code sketch of these steps follows right after this list)
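
To make these steps concrete, here is a minimal sketch of the algorithm on a small, made-up weighted graph. The graph, the node names, and the use of Python's heapq module as the priority queue are illustrative assumptions - the actual puzzle implementation comes later in this lesson:

import heapq

# A hypothetical weighted graph: node -> list of (neighbour, edge weight) pairs
graph = {
    'A': [('B', 4), ('C', 1)],
    'B': [('D', 1)],
    'C': [('B', 2), ('D', 5)],
    'D': []
}

def dijkstra_sketch(graph, start, goal):
    costs = {node: float('inf') for node in graph}  # every cost starts at infinity...
    costs[start] = 0                                # ...except the start node, which costs 0
    visited = set()
    pq = [(0, start)]                               # priority queue of (cost, node) pairs

    while pq:
        cost, node = heapq.heappop(pq)              # pick the node with the smallest current cost
        if node == goal:
            return cost                             # the cost assigned to the goal is the shortest path
        if node in visited:
            continue                                # skip stale queue entries
        visited.add(node)
        for neighbour, weight in graph[node]:       # update the costs of the unvisited neighbours
            if neighbour not in visited and cost + weight < costs[neighbour]:
                costs[neighbour] = cost + weight
                heapq.heappush(pq, (costs[neighbour], neighbour))

    return float('inf')                             # the goal is unreachable

print(dijkstra_sketch(graph, 'A', 'D'))  # 4 (A -> C -> B -> D)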

There is an interesting way to visualize the idea behind this algorithm, a great analogy from Predrag Janičić and Mladen Nikolić's book "Artificial Intelligence":

Let's say that each node of a graph is represented by a sphere. Two spheres are connected by a thread if and only if there is an edge between them in a graph. The lengths of these threads are directly proportional to the weight of the corresponding edges. All of these spheres are laying on the exact same spot on the ground.

We pick up the sphere that corresponds to the starting node, and as it goes up, it pulls the other spheres with it. The spheres leave the ground one by one, and at the moment a sphere leaves the ground, the length of the taut chain of threads connecting it to the starting sphere equals the length of its shortest path from the starting node.

In terms of Dijkstra's algorithm, this is what we actually do: we've got two groups of spheres - the ones still on the ground and the ones that have already been lifted. In each iteration, we pick up one of the spheres from the ground and calculate its distance from the first sphere.

Predrag Janičić, Mladen Nikolić - Artificial Intelligence

The complexity of this algorithm is O(|E| + |V|log|V|) when implemented with a Fibonacci-heap priority queue, where |E| represents the number of edges and |V| represents the number of nodes.

Informed Search Algorithms

Informed search algorithms don't just use the information about possible actions to modify the current state of the problem, but also additional information that directs the search towards the goal. This additional information is some kind of score, an indicator of the quality of a given state. This indicator is not necessarily exact - it's usually just an approximation.

This approximation is usually called a heuristic, derived from the Greek word "heurisko", meaning "to search" or "to discover".

A function that calculates the score (quality) of a certain state is called an evaluation function. If we denote this function by f, then f(n) represents the score of state n.

Each time an informed search algorithm enters a state, it calculates the value f(n) for each of its neighbor states, and then moves to the neighbor with the best value of f(n).

One of the most popular informed search algorithms is definitely the A* algorithm. Let's go through it right now!

A* (A Star) Algorithm

The A* algorithm uses a heuristic to navigate the search, but unlike many similar heuristic-based algorithms (for example, Best-First Search), it is both complete and (under certain conditions) optimal:

  • A complete algorithm is an algorithm that is guaranteed to find an answer for any valid input, if an answer exists
  • An optimal algorithm is an algorithm that is guaranteed to return the best possible (in our case, the cheapest) answer

The optimality of this algorithm depends heavily on the quality of its evaluation function: A* is optimal when its heuristic is admissible, meaning it never overestimates the true cost of reaching the goal.

For the A* algorithm, the evaluation function has a specific form:

$$
f(n) = g(n) + h(n)
$$

Where g(n) represents the cost of the cheapest path found so far from the start node to node n, and h(n) represents the heuristic estimate of the cost of getting from node n to the goal.

Depending on the problem, different heuristic approximations may be used in order to achieve optimality. Choosing a quality heuristic is one of the most important steps when it comes to implementing this algorithm.
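
For example (the numbers here are purely illustrative), if the cheapest path we have found so far to some state n costs g(n) = 4 moves, and the heuristic estimates that h(n) = 3 more moves are needed to reach the goal, then:

$$
f(n) = g(n) + h(n) = 4 + 3 = 7
$$

The algorithm would then prefer expanding this state over any state whose f value is greater than 7.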

Now, let's get to the A* algorithm itself! To start, we'll go over it step by step:

  1. In order to avoid infinite loops, guarantee completeness, and make sure that we can improve already-found paths, we need two lists:

    • Closed state list: a list in which we keep the visited states whose neighbors are all also visited
    • Open state list: a list in which we keep the visited states whose neighbors are not necessarily all visited

    At the beginning, the open state list only contains the start state (with the calculated value f(start_state)), while the closed state list is empty.

  2. We choose the state n from the open list with the best value of f(n). If state n is also our final state, we are done. If not, we go over its direct neighbors.

  3. For each neighbor m of the state n, we check whether it's in one of the two lists. If it's in neither, we put it in the open states list, mark n as the parent of m, and calculate g(m) and f(m). If it is in one of the two lists, we check whether the path from the start state to m through n is shorter than the currently known path to m. If it is, we mark n as the parent of m and update g(m) and f(m); and if m was in the closed list, we move it back to the open list.

  4. Finally, we put the current node in the closed state list.

  5. As long as there are elements in the open states list, we repeat the steps 2, 3, and 4.

  6. If the open states list becomes empty before we reach the final state, a path to the final state doesn't exist (a minimal code sketch of these steps follows below)
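
Here is a minimal sketch of these steps on the same small, made-up graph we used for Dijkstra's algorithm, with equally made-up heuristic values. For brevity, it only tracks path costs, not parents:

import heapq

# The same hypothetical weighted graph as before, plus assumed heuristic estimates of the cost to reach D
graph = {
    'A': [('B', 4), ('C', 1)],
    'B': [('D', 1)],
    'C': [('B', 2), ('D', 5)],
    'D': []
}
heuristic = {'A': 3, 'B': 1, 'C': 2, 'D': 0}

def astar_sketch(graph, heuristic, start, goal):
    g = {node: float('inf') for node in graph}      # cheapest known path cost to each node
    g[start] = 0
    open_list = [(heuristic[start], start)]         # open states, ordered by f(n) = g(n) + h(n)

    while open_list:
        f, node = heapq.heappop(open_list)          # choose the state with the best f(n)
        if node == goal:
            return g[node]
        for neighbour, weight in graph[node]:
            if g[node] + weight < g[neighbour]:     # found a cheaper path to this neighbour
                g[neighbour] = g[node] + weight
                heapq.heappush(open_list, (g[neighbour] + heuristic[neighbour], neighbour))

    return float('inf')                             # open list exhausted: no path exists

print(astar_sketch(graph, heuristic, 'A', 'D'))  # 4 (A -> C -> B -> D)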

Loyd's Puzzle

Now that we've gone over the two algorithms, we can best show their differences using an example - in this case, Loyd's puzzle, also known as the 15 puzzle.

Loyd's puzzle is a sliding puzzle consisting of a 4x4 frame and 15 square tiles numbered 1-15, which leaves one tile position unoccupied. A tile adjacent to the empty position can be moved by sliding it horizontally or vertically into that position. The goal of the puzzle is to arrange the tiles in ascending numerical order, using only these sliding moves.

In this example, we are going to use a smaller, 3x3 version of Loyd's puzzle (the 8 puzzle) in order to go over each step more easily.

This is going to be our board's starting state:

[[2, 3, 5],
[1, 4, 0], 
[7, 8, 6]]

We want our final state to be the following:

[[0, 1, 2],
[3, 4, 5],
[6, 7, 8]]

The number 0 in our board represents the empty tile.
The state of the board changes each time we slide a tile. For example, the only states reachable from our starting state are:

[[2, 3, 5],
[1, 0, 4],
[7, 8, 6]]


[[2, 3, 0],
[1, 4, 5],
[7, 8, 6]]


[[2, 3, 5],
[1, 4, 6],
[7, 8, 0]]

Since there are 9!/2 = 181,440 reachable states for this board, it would be tedious to show a step-by-step trace of how these two algorithms work. This is why we are going to explain them only intuitively and through their implementations.

Implementation

Before we begin implementing these algorithms, we must import a couple of Python modules:

from collections import defaultdict
import copy
from queue import PriorityQueue

Now, let's talk about the representation of our board. In this code, we are going to represent it in two ways: the first being serialized - as a single colon-delimited string - and the second being deserialized - as a matrix (i.e. a 2-dimensional list, just like the examples above).

For the sake of clarity, this is what the final state is going to look like in deserialized representation:

[[0, 1, 2],
[3, 4, 5],
[6, 7, 8]]

And this is what our final state is going to look like serialized:

'0:1:2:3:4:5:6:7:8'

The reason we need the serialized representation is that it's a bit easier to work with - being a string, it is hashable and can be stored directly in sets and used as a dictionary key, which results in much cleaner code.

Now, we need to write functions that allow us easy conversion from one representation to the other:

def serialize(state):
    # Flatten the 3x3 matrix into a colon-delimited string, e.g. '2:3:5:1:4:0:7:8:6'
    result = []
    for row in state:
        for col in row:
            result.append(str(col))
    return ':'.join(result)

def deserialize(serialized):
    # Convert the colon-delimited string back into a 3x3 matrix of integers
    splitted = serialized.split(':')
    splitted = [int(x) for x in splitted]
    return [splitted[:3], splitted[3:6], splitted[6:]]
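
As a quick sanity check, converting our starting board back and forth should give us the original matrix (the snippet below is just an illustration of how the two helpers fit together):

start_state = [[2, 3, 5], [1, 4, 0], [7, 8, 6]]

serialized = serialize(start_state)
print(serialized)                               # 2:3:5:1:4:0:7:8:6
print(deserialize(serialized) == start_state)   # True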

Another additional function we need in order to implement these algorithms more efficiently is a function that allows us to get possible next states - the neighbors of the current state.

def get_neighbours(state):
    deserialized = deserialize(state)
    neighbours = []
    blank_i = -1
    blank_j = -1

    # Find the coordinates of the empty tile (the 0)
    for i in range(0, 3):
        for j in range(0, 3):
            if deserialized[i][j] == 0:
                blank_i, blank_j = i, j
                break

    i = blank_i
    j = blank_j

    # Slide the tile above the empty tile down
    if i > 0:
        new_matrix = copy.deepcopy(deserialized)
        new_matrix[i][j] = new_matrix[i - 1][j]
        new_matrix[i - 1][j] = 0
        neighbours.append(serialize(new_matrix))
    # Slide the tile below the empty tile up
    if i < 2:
        new_matrix = copy.deepcopy(deserialized)
        new_matrix[i][j] = new_matrix[i + 1][j]
        new_matrix[i + 1][j] = 0
        neighbours.append(serialize(new_matrix))
    # Slide the tile to the left of the empty tile right
    if j > 0:
        new_matrix = copy.deepcopy(deserialized)
        new_matrix[i][j] = new_matrix[i][j - 1]
        new_matrix[i][j - 1] = 0
        neighbours.append(serialize(new_matrix))
    # Slide the tile to the right of the empty tile left
    if j < 2:
        new_matrix = copy.deepcopy(deserialized)
        new_matrix[i][j] = new_matrix[i][j + 1]
        new_matrix[i][j + 1] = 0
        neighbours.append(serialize(new_matrix))

    # Every move has a cost of 1
    return zip(neighbours, [1 for x in neighbours])

As we already mentioned before, the neighbor states are the states we can get to by switching the places of the 0 (the empty tile) with one of the tiles around it (left, right, up or down).

In this function, we work with a deserialized representation, but we return a serialized representation because it's the one we need in our main functions (for A* and Dijkstra's algorithm).

First, we go through our matrix and find the coordinates of the empty tile. Then, we use deepcopy in order to make a new matrix, and then we switch around every possible tile.

Of course, it's important to check that each move is valid (i.e. that the empty tile isn't on the edge of the board in that direction) before we make it.
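
For example, calling the function on our serialized starting state yields the same three neighbor states we listed earlier, each with a move cost of 1:

start_state = serialize([[2, 3, 5], [1, 4, 0], [7, 8, 6]])

for neighbour, cost in get_neighbours(start_state):
    print(neighbour, cost)

# 2:3:0:1:4:5:7:8:6 1   (the empty tile moved up)
# 2:3:5:1:4:6:7:8:0 1   (the empty tile moved down)
# 2:3:5:1:0:4:7:8:6 1   (the empty tile moved left)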

Dijkstra

Now that we've gone over the additional functions, let's get into the actual algorithms!

The code for Dijkstra's algorithm for this problem looks like this:

def dijkstra(start_node, target_node):
    start_node = serialize(start_node)
    target_node = serialize(target_node)

    visited = set()
    D = defaultdict(lambda: float('inf'))   # distance of every state from the start state
    D[start_node] = 0

    pq = PriorityQueue()
    pq.put((0, start_node))

    parent = dict()
    parent[start_node] = None
    path_found = False
    iteration = 0
    while not pq.empty():
        (dist, current_node) = pq.get()     # take the state with the smallest cost
        if current_node == target_node:
            path_found = True
            break
        visited.add(current_node)

        for (neighbour, distance_from_current_node) in get_neighbours(current_node):
            if neighbour not in visited:
                old_cost = D[neighbour]
                new_cost = D[current_node] + distance_from_current_node
                if new_cost < old_cost:
                    pq.put((new_cost, neighbour))
                    D[neighbour] = new_cost
                    parent[neighbour] = current_node
        iteration += 1

    # Reconstruct the path by walking back through the parent dictionary
    path = []
    if path_found:
        path.append(target_node)
        while True:
            parent_node = parent[target_node]
            if parent_node is None:
                break
            path.append(parent_node)
            target_node = parent_node
        path.reverse()
    return (path, iteration)

As we already mentioned, we'll be using serialized data. Besides that we'll need a set of visited states, as well as a dictionary that maps a state to its distance from the starting state. The default value of a state in this dictionary is going to be inf.

We put the start state in the dictionary and map it to 0, since its distance from itself is 0.

Another thing we need is a priority queue that will sort the distances of the unvisited states. We put the distance of the start state and the state itself in the priority queue.

We also need a dictionary (parents) that maps a state to its parent in order to easily reconstruct a path from the start state to the final state once we find it.

As long as there are states in the priority queue, we take the one with the smallest cost and check whether it's the final state. If it is, we break the loop, since we've found the path. If not, we add the state to the visited set and calculate the new costs of all of its unvisited neighbors.

The new cost of a neighbor is equal to the sum of the cost of the current state and the distance between the current state and the neighbor in question. If the new cost is smaller than the old cost, we put the neighbor with the new cost in a priority queue and update the dictionary of costs, as well as the dictionary of parents.

Once we've found the correct path, we reconstruct it easily with the 'parent' dictionary.

In this function, we also count the number of iterations so that we can compare it to the number of iterations the A* algorithm needs. That's why we return it along with the path.

Let's call Dijkstra's algorithm from our main function now, using the example of a board we mentioned before:

if __name__ == '__main__':
    start_state = [[2,3,5], [1,4,0], [7,8,6]]
    target_state = [[0,1,2],[3,4,5],[6,7,8]]

    print('Dijkstra benchmark')
    (path, iteration) = dijkstra(start_state, target_state)
    print(path, iteration)

This is going to produce the following result:

Dijkstra benchmark
['2:3:5:1:4:0:7:8:6', '2:3:5:1:4:6:7:8:0', '2:3:5:1:4:6:7:0:8', '2:3:5:1:0:6:7:4:8', '2:0:5:1:3:6:7:4:8', '0:2:5:1:3:6:7:4:8', '1:2:5:0:3:6:7:4:8', '1:2:5:3:0:6:7:4:8', '1:2:5:3:6:0:7:4:8', '1:2:0:3:6:5:7:4:8', '1:0:2:3:6:5:7:4:8', '0:1:2:3:6:5:7:4:8', '3:1:2:0:6:5:7:4:8', '3:1:2:6:0:5:7:4:8', '3:1:2:6:4:5:7:0:8', '3:1:2:6:4:5:0:7:8', '3:1:2:0:4:5:6:7:8', '0:1:2:3:4:5:6:7:8'] 
12649

As we can see, we needed 12649 iterations before finding the shortest path from the first state to the final state of Loyd's puzzle.

A*

Now, let's go over the more efficient A* algorithm. This algorithm is heavily dependent on its heuristic, so we have to be very careful when choosing one. The heuristic is what steers the search towards a more optimal solution.

For each state, we are going to calculate a quality score. That score determines which state from the open set we expand next.

We can actually think of Dijkstra's algorithm as a special case of the A* algorithm - where the heuristic is 0 for each state.
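
A quick way to see this in code, once the astar_lloyd function is implemented below, is to pass it a heuristic that always returns 0. The h_zero helper here is a hypothetical illustration, not part of the benchmark we'll run later:

def h_zero(state):
    # A zero heuristic carries no information about how far we are from the goal
    return 0

# With no guidance, A* expands states purely by their path cost -
# exactly what Dijkstra's algorithm does:
# (path, iterations) = astar_lloyd(start_state, target_state, h_zero)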

Now, let's think about how we might write an evaluation function for this particular problem.

We are going to use the sum of the Manhattan Distances between the current location of each tile and the location where that same tile should end up. This is one of the simplest and most intuitive heuristics, which is why it is often used as a starting point.

The Manhattan Distance is calculated as the sum of the absolute differences between the corresponding components of two vectors.

$$
d((1,5,3),(4,7,2)) = |1-4| + |5-7| + |3-2| = 3 + 2 + 1 = 6
$$

It is named after Manhattan, where the buildings are laid out in square blocks and the straight streets intersect at right angles, so movement there can only be done in 4 directions - left, right, up, and down. This resembles the way tiles move in Loyd's puzzle, which is why this heuristic in particular works so well for our problem.

This is what our evaluation function is going to look like:

def h(state):
    deserialized = deserialize(state)
    H = 0
    for i in range(0, 3):
        for j in range(0, 3):
            # The tile with value v belongs at row v // 3, column v % 3 in the goal state
            H += abs(deserialized[i][j] % 3 - j) + abs(deserialized[i][j] // 3 - i)
    return H
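
For our starting board, this evaluates as follows (note that, as written, the sum also counts the empty tile's distance from its target position in the upper-left corner):

start_state = serialize([[2, 3, 5], [1, 4, 0], [7, 8, 6]])
print(h(start_state))  # 14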

Now, another additional function we might need is a function that returns a state in an open set with the lowest heuristic guess.

This function's input is the open set, as well as the dictionary that maps the state to its heuristic guess.

def in_open_set_with_lowest_heuristic_guess(open_set, heuristic_guess):
    result, min_guess = None, float('inf')
    for v in open_set:
        if v in heuristic_guess:
            guess = heuristic_guess[v]
            if guess < min_guess:
                result = v
                min_guess = guess
    return result

Now that we got that out of the way, let's get to the actual A* algorithm implementation:

def astar_lloyd(start_node, target_node, h):
    start_node = serialize(start_node)
    target_node = serialize(target_node)

    open_set = set([start_node])

    parents = {}
    parents[start_node] = None

    cheapest_paths = defaultdict(lambda: float('inf'))
    cheapest_paths[start_node] = 0

    heuristic_guess = defaultdict(lambda: float('inf'))
    heuristic_guess[start_node] = h(start_node)

    path_found = False
    iteration = 0
    while len(open_set) > 0:
        # Linear scan over the open set to find the state with the lowest f value
        current_node = in_open_set_with_lowest_heuristic_guess(open_set, heuristic_guess)
        if current_node == target_node:
            path_found = True
            break

        open_set.remove(current_node)
        for (neighbour_node, weight) in get_neighbours(current_node):
            new_cheapest_path = cheapest_paths[current_node] + weight
            if new_cheapest_path < cheapest_paths[neighbour_node]:
                parents[neighbour_node] = current_node
                cheapest_paths[neighbour_node] = new_cheapest_path
                heuristic_guess[neighbour_node] = new_cheapest_path + h(neighbour_node)
                if neighbour_node not in open_set:
                    open_set.add(neighbour_node)
        iteration += 1

    path = []
    if path_found:
        while target_node is not None:
            path.append(target_node)
            target_node = parents[target_node]
        path.reverse()
    return (path, iteration)

Just like in the last algorithm, we use a serialized representation of the board.

In the general description of the A* algorithm we used two lists: an open list of discovered states whose neighbors are not necessarily all visited, and a closed list of states whose neighbors have all been visited. This implementation keeps only an open set - a state is removed from it once it has been processed, and simply re-added later if a cheaper path to it is found.

At the beginning, the open set contains only the start state.

Just like in the implementation of Dijkstra's algorithm, we keep a dictionary 'parents' that helps us reconstruct the path, as well as a dictionary for cheapest paths.

In addition to that, we will need the dictionary of heuristic guesses previously mentioned.

As long as there are states in the open set, we pick the state with the lowest heuristic guess as the next current state. If it is the target state, we break the loop and reconstruct the path. Otherwise, we remove it from the open set, go through its neighbors, and calculate their new cheapest paths.

The cheapest path of a neighbor is equal to the sum of the cheapest path of the current state and the distance between the current state and the neighbor.

If this new cheapest path is smaller than the cheapest path the neighbor had before, we update the parent, heuristic guess and cheapest path dictionaries.

If the neighbor state is not in the open set, we put it in there.

Once we've found the correct path, we reconstruct it easily with the 'parents' dictionary.

In this function, we also count the number of iterations so that we can compare it to the number of iterations Dijkstra's algorithm needed. That's why we return it along with the path.

Let's call the A* algorithm from our main function now, using the same example board as before:

if __name__ == '__main__':
    start_node = [[2,3,5], [1,4,0], [7,8,6]]
    target_node = [[0,1,2],[3,4,5],[6,7,8]]
    print('Astar benchmark')
    (path, iteration) = astar_lloyd(start_node, target_node, h)
    print(path, iteration)

This is going to produce the following output:

Astar benchmark
['2:3:5:1:4:0:7:8:6', '2:3:5:1:4:6:7:8:0', '2:3:5:1:4:6:7:0:8', '2:3:5:1:0:6:7:4:8', '2:0:5:1:3:6:7:4:8', '0:2:5:1:3:6:7:4:8', '1:2:5:0:3:6:7:4:8', '1:2:5:3:0:6:7:4:8', '1:2:5:3:6:0:7:4:8', '1:2:0:3:6:5:7:4:8', '1:0:2:3:6:5:7:4:8', '0:1:2:3:6:5:7:4:8', '3:1:2:0:6:5:7:4:8', '3:1:2:6:0:5:7:4:8', '3:1:2:6:4:5:7:0:8', '3:1:2:6:4:5:0:7:8', '3:1:2:0:4:5:6:7:8', '0:1:2:3:4:5:6:7:8'] 
217

As we can see, we only needed 217 iterations before finding the same shortest path from the first state to the final state of Loyd's puzzle!

Conclusion

Even though Dijkstra's algorithm and the A* algorithm find the same shortest path, the A* algorithm gets there almost 60 times faster! While Dijkstra's algorithm produced the output after 12649 iterations, the A* algorithm needed only 217.

However, it should be noted that the efficiency of the A* algorithm is highly dependent on its evaluation function, and with a poor heuristic the results can be even worse than Dijkstra's algorithm.

To sum it all up: given a good heuristic for our problem, the A* algorithm is definitely more efficient than Dijkstra's algorithm, although this won't always be the case, as it depends heavily on the problem at hand.

References

  1. Predrag Janičić, Mladen Nikolić - Artificial Intelligence