Mathematical Proof of Algorithm Correctness and Efficiency

Introduction

When designing a completely new algorithm, a very thorough analysis of its correctness and efficiency is needed.

The last thing you would want is your solution not being adequate for a problem it was designed to solve in the first place.

Note: This is not in any way, shape, or form meant for direct application. It is Computer Science Theory, and is only meant for a deeper understanding of certain fields of practical programming.

Mathematical Induction

Mathematical induction (MI) is an essential tool for proving the statements we use to establish an algorithm's correctness. The general idea of MI is to prove that a statement is true for every natural number n.

What does this actually mean?

This means we have to go through 3 steps:

  1. Induction Hypothesis: Define the rule we want to prove for every n, let's call the rule F(n)
  2. Induction Base: Proving the rule is valid for an initial value, or rather a starting point - this is often proven by solving the Induction Hypothesis F(n) for n=1 or whatever initial value is appropriate
  3. Induction Step: Proving that if we know F(n) is true, then F(n+1) is also true

If you followed these steps, you now have the power to loop! No, really, this basically gives us the ability to do something like this:

for i in range(n):
    T[i] = True

Basic Example

Problem:

If we define S(n) as the sum of the first n natural numbers, for example S(3) = 3+2+1, prove that the following formula can be applied to any n:

$$
S(n)=\frac{(n+1)*n}{2}
$$

Let's trace our steps:

  1. Induction Hypothesis: S(n) defined with the formula above

  2. Induction Base: In this step we have to prove that S(1) = 1:

    $$
    S(1)=\frac{(1+1)*1}{2}=\frac{2}{2}=1
    $$

  3. Induction Step: In this step we need to prove that if the formula applies to S(n), it also applies to S(n+1) as follows:

    $$ S(n+1)=\frac{(n+1+1)*(n+1)}{2}=\frac{(n+2)*(n+1)}{2} $$

This is known as an implication (a => b), which just means that we have to prove b is correct provided we know a is correct.

$$
S(n+1)=S(n)+(n+1)=\frac{(n+1)*n}{2}+(n+1)=\frac{n^2+n+2n+2}{2}
$$

$$
=\frac{n^2+3n+2}{2}=\frac{(n+2)*(n+1)}{2}
$$

Note that S(n+1) = S(n) + (n+1) just means we are recursively calculating the sum. Example with literals:
S(3) = S(2) + 3 = S(1) + 2 + 3 = 1 + 2 + 3 = 6

Q.E.D.
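If you'd like a quick numerical sanity check on top of the proof (checking examples is not a substitute for induction, but it's reassuring), a few lines of Python comparing the recursive definition to the closed-form formula will do; the function names are mine, purely illustrative:

# Sanity check (not part of the proof): compare the recursive definition
# S(n) = S(n-1) + n against the closed-form formula S(n) = (n+1)*n/2
def S_recursive(n):
    return 0 if n == 0 else S_recursive(n - 1) + n

def S_formula(n):
    return (n + 1) * n // 2

for n in range(1, 101):
    assert S_recursive(n) == S_formula(n)
print("Formula holds for n = 1..100")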

Proof of Correctness

Because the method we are using to prove an algorithm's correctness is math-based, or rather function-based, the closer the solution is to a real mathematical function, the easier the proof.

Why is this, you may ask? Well, practical imperative programming has this thing called a state, which means a program's output depends on 3 things:

  1. Its sequence of instructions

  2. Its input values

  3. Its state, or rather, all previously initialized variables that can in any way change the output value

Program State Example

def foo(x):
    x = y + 1
    return x

If I asked you to give me the output value of this function for x=1, naturally you would say:

Well golly gee sir, how would we know the output value if we don't know that gosh darn y value.

You see, that's exactly the point. This (imperative) program, like any other, has a state, which is defined by a list of variables and their corresponding values. Only then is this program's output truly deterministic.

Deterministic - a system with no random factors

This opens up a whole new story about programming paradigms that have a completely transparent state, or in other words, have NO VARIABLES. This might seem insane, but it does exist, and it is semi-frequently used, especially in functional programming languages like Haskell.

But because we don't traditionally have functional concepts at our disposal in imperative programming, we settle for the next best thing for proving correctness: recursion. Recursion is very easy to interpret mathematically because it's equivalent to recurrence relations (more on recurrence relations in the following segments).

Recursion Example

def factorial(n):
    if (n==0):
        return 1
    else:
        return n*factorial(n-1)

Converted to recurrence form:

$$
Factorial(n)=n*Factorial(n-1),\ \ Factorial(0)=1
$$

Loop Invariants

This all sounds fine and dandy, but up until now, we haven't said anything about representing loops and program states as math formulas. Variables in a program's state pose a problem because all of them need to be kept in check all the time, just in case one goes haywire.

Also, loops pose a problem because there are very few mathematical equivalents to them. This means we have to incorporate mathematical induction into our Algorithm Analysis Model, because it's the only method we know that can iteratively increment values in math, like in actual code.

The simplest way of solving both problems (with mathematical induction) is Loop Invariants:

A loop invariant is a logic formula or just a set of rules that is true before, during, and after the loop in question (so it's unbiased to iteration). It is imperative for it to contain rules for all the variables that occur in the said loop, because we need to bind them all to the set of values we want them to have.

Loop Invariant Selection

A loop invariant can be as complicated or as simple as you want it to be. However, the point is that it should be constructed to resemble the problem at hand as closely as possible.

For example, I can always say that the following is a loop invariant:

(x > y) ∨ (x < y) ∨ (x == y)

But by using a tautology (a logic formula that is always true) as a loop invariant, we don't really achieve anything; the only reason it is technically classified as a loop invariant is that it fits all 3 requirements:

  1. The formula is correct BEFORE loop execution
  2. The formula is correct DURING loop execution, including all the steps in between
  3. The formula is correct AFTER loop execution

Example:

Let's take a look at the following code and determine the optimal loop invariant:

x = 10
y = 4
z = 0
n = 0
while(n < x):
    z = z+y
    n = n+1

Logically, this code just calculates the value x*y and stores it in z, which means that z = x*y. Another condition we know will always be true is n <= x (more on why it's <= rather than < in the proof below). But do these two really only apply after the program is done computing?

The n value is essentially the number of loops that were already executed, but also, it's the number of times that the z value has been increased by y.

So this means that both z = n*y and n <= x might apply at all times. The only thing left to do is to check whether they really can be used as a loop invariant.
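As a quick empirical sketch (checking a run is not a proof, but it helps catch a badly chosen invariant early), we can encode both candidate rules as assert statements and run the loop:

# Empirical sketch: assert the candidate invariant (z == n*y and n <= x)
# before the loop, after every iteration, and after the loop has finished
x = 10
y = 4
z = 0
n = 0
assert z == n * y and n <= x  # holds before entering the loop
while n < x:
    z = z + y
    n = n + 1
    assert z == n * y and n <= x  # holds after every iteration
assert n == x and z == x * y  # after the loop, n == x, therefore z == x*y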

Loop Invariant Example - Proof by Induction

First, we need to prove that the loop invariant is true before entering the loop (which is the equivalent of proving an induction base):

# <=> - logical equivalence, left and right sides of the equation have the same logical value (True or False)
# <= - less or equal (not to be confused with implication, which also looks like an arrow to the left)
x = 10
y = 4
z = 0
n = 0
# RULE 1: z == n*y
# 0 == 0*4 <=> True
# so RULE 1 applies

# RULE 2: n <= x
# 0 <= 10 <=> True
# so RULE 2 applies, therefore the invariant is valid before entering the loop

Second, we need to check if the invariant is true after every finished iteration (excluding the last one). We do this by observing the transition from z, n to z', n', where z' and n' are the values of z and n after the next iteration has executed.

Therefore, z' = z+y and n' = n+1. We need to essentially prove that if we know that the invariant is true for z and n, it's also true for z' and n':

$$
z'=z+y,\ \ z=n*y,\ \ n'=n+1
$$

If the following is valid, the invariant is valid:

$$
z'=n'*y \iff z'=(n+1)*y=n*y+y=z+y
$$

Third, we need to check if the invariant is true after the last iteration of the loop. Because n is an integer and we know that n-1<x is true, but n<x is false, that means that n=x (this is the reason why the invariant has to include n<=x, not n<x).

Therefore we know that z = x*y.

Q.E.D.

Efficiency Analysis: Recurrence Relations

When talking about algorithm efficiency, the first thing that comes up is recurrence relations. This just means that a function such as f(n) is defined in terms of its preceding values, such as f(n-1) and f(n-2).

The simplest example of this kind of function would be the Fibonacci sequence:

$$
Fibonacci(n)=Fibonacci(n-1)+Fibonacci(n-2)
$$

You might actually recognize this concept from my article on Dynamic Programming. And yes, the problem is very similar, however, the solving method is very different.

When analyzing algorithm efficiency, there are basically two types of relations you will be solving:

  1. Linear homogeneous recurrence relations
  2. Non-linear recurrence relations - Master Theorem use case

Solving Homogeneous Linear Recurrence Relations

When reading the title above, you may be asking yourself

"What in the name of god is this math gibberish?!?!"

Well, first let's take a look at the general formula:

$$
F(n)=a_1F(n-1)+a_2F(n-2)+...+a_kF(n-k)
$$

Now let's break up the definition into byte-size pieces (pun intended):

  1. Linear refers to the fact that the function elements F(something) are to the first power
  2. Homogeneous refers to the fact that every term has the form a*F(something), meaning a free-standing constant term cannot be present

These recurrence relations are solved by using the following substitution:

$$
(1) \ \ F(n) = r^n
$$

  • r being a conveniently chosen (complex) number

I'll be enumerating useful formulas so that I can more easily reference them in the example

We are using a complex number because we need a value that can cycle through a number of values, all of which can (but don't have to) be different. All of those values are roots (solutions) of the characteristic equation we get from this substitution.

Clarification:

  • complex numbers have the form x = a + bi, x being the complex number, a and b being real numbers, and i being the constant:
    $$
    \begin{align}
    i=\sqrt{-1}
    \end{align}
    $$
  • as you can notice, i is a very particular number, meaning its powers actually cycle:
    $$
    \begin{align}
    i&=\sqrt{-1}\\
    i^2&=-1\\
    i^3&=-i\\
    i^4&=1\\
    i^5&=i
    \end{align}
    $$
  • this means the powers of i repeat in a cycle of length 4
  • other complex numbers can be tailor-made to have a precise cycle, where no two elements are the same (except the starting and ending elements)

Using the above mentioned substitution, we get the characteristic polynomial:

$$
r^k-a_1r^{k-1}-a_2r^{k-2}-...-a_k=0
$$

This represents a very convenient equation, where r can have k possible solutions (roots). Also, we can represent F(n) as a linear combination of all of its predecessors (the proof of this formula's correctness will not be shown for the sake of your and my own sanity):

$$
F(n)=\sum_{i=1}^{k}c_ir_i^n
$$

  • c_i being unknown coefficients that indicate which r_i has the most impact when calculating the value of F(n)

Also, if a root's value (r for example) comes up more than once, we say that r has a multiplicity (m) greater than 1.

This slightly alters the previous equation:

$$
(2) \ \ F(n)=\sum_{i=1}^{s}h_i(n)
$$

  • s being the number of distinct roots, and h_i being the term that contains r_i, which is calculated (with multiplicity in mind) according to the formula:

$$
(3) \ \ h_i(n)=(C_{i,0}+C_{i,1}n+C_{i,2}n^2+...+C_{i,m_i-1}n^{m_i-1})*r_i^n
$$

Congratulations, now we can solve the most bare bones recurrence equations. Let's test it out!
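Here is a small test case of my own (not from the formulas above, just built with them): take F(n) = 5F(n-1) - 6F(n-2) with F(0) = 0 and F(1) = 1. The characteristic polynomial r^2 - 5r + 6 = 0 has the roots r1 = 2 and r2 = 3, and solving for the coefficients gives the closed form F(n) = 3^n - 2^n, which we can verify numerically:

# Verify the closed form F(n) = 3^n - 2^n against the recurrence
# F(n) = 5*F(n-1) - 6*F(n-2), with F(0) = 0 and F(1) = 1
def F_recursive(n):
    if n < 2:
        return n
    return 5 * F_recursive(n - 1) - 6 * F_recursive(n - 2)

def F_closed(n):
    return 3 ** n - 2 ** n

for n in range(15):
    assert F_recursive(n) == F_closed(n)
print("Closed form matches the recurrence for n = 0..14")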

Computer Science Master Theorem

Remember when I said that the above were only the bare bones recurrence relations? Well now we'll be looking at a more complicated, but much more useful type of recurrence relation.

The basic form of this new type of recurrence relation being:

$$
T(n) = aT(\frac{n}{b})+ cn^k
$$

  • where all of the constants are greater than or equal to zero, a, b, c, k >= 0, and b ≠ 0

This is a much more common recurrence relation because it embodies the divide and conquer principle (it calculates T(n) by breaking the problem down into much smaller subproblems of the form T(n/b)).

The formula we use to calculate T(n) in the case of this kind of recurrence relation is as follows:

$$
T(n)=
\begin{cases}
O(n^{\log_b a}) & \text{for } a > b^k \\
O(n^k \log n) & \text{for } a = b^k \\
O(n^k) & \text{for } a < b^k
\end{cases}
$$

Because the formula above is intuitive enough, and because the proof isn't really trivial, I would advise you to just remember it as is... but if you still want to see the proof, read Theorem 5.1 proof 1-2 in this article.
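If you'd rather not apply the three cases by hand every time, here is a small helper of my own (illustrative only, not a library function) that classifies a relation of the form T(n) = aT(n/b) + cn^k:

import math

# Illustrative helper: classify T(n) = a*T(n/b) + c*n^k using the Master Theorem
def master_theorem(a, b, k):
    if a > b ** k:
        return "O(n^{:g})".format(math.log(a, b))  # O(n^(log_b a))
    elif a == b ** k:
        return "O(n^{} * log n)".format(k)         # O(n^k * log n)
    else:
        return "O(n^{})".format(k)                 # O(n^k)

print(master_theorem(2, 2, 1))  # merge sort: T(n) = 2T(n/2) + n   ->  O(n^1 * log n)
print(master_theorem(1, 2, 0))  # binary search: T(n) = T(n/2) + 1 ->  O(n^0 * log n) = O(log n)
print(master_theorem(8, 2, 2))  # T(n) = 8T(n/2) + n^2             ->  O(n^3)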

Example: Binary Search

Say we have a sorted array A of length n and we want to find out how much time it would take to find a specific element, let's call it z. First we need to take a look at the code we'll be using to find said element using Binary Search:

# leftIndex and rightIndex indicate which part of the original array
# we are currently examining, the initial function call is find(A, z, 0, n - 1)
import math
def find(A, z, leftIndex, rightIndex):
    # if our search range is narrowed down to one element,
    # we just check if it's equal to z, target being the index of the wanted element
    # A[target]=z
    if leftIndex == rightIndex:
        if A[leftIndex] == z:
            return leftIndex
        else:
            return -1
    else:
        middlePoint = math.ceil((leftIndex + rightIndex) / 2)
        print("{} {} {}".format(leftIndex, rightIndex, middlePoint))
        # because the array is sorted, we know that if z < A[middlePoint],
        # the target has to be to the left of said element, so we call
        # find(A, z, leftIndex, middlePoint - 1); otherwise (z >= A[middlePoint])
        # we search the right half with find(A, z, middlePoint, rightIndex)
        if z < A[middlePoint]:
            return find(A, z, leftIndex, middlePoint - 1)
        else:  # z >= A[middlePoint]
            # leaving the middle point in this call is intentional
            # because there is no middle point check
            # except when leftIndex==rightIndex
            return find(A, z, middlePoint, rightIndex)

def main():
    A = [1, 3, 5, 7, 8, 9, 12, 14, 22]
    z = 12
    target = find(A, z, 0, len(A) - 1)
    print("Target index: {}".format(target))

if __name__ == "__main__":
    main()

The most time-intensive part of this search is the recursion, which means that we can represent the time it takes the Binary Search algorithm to search through an array of length n using the following recurrence relation:

$$
T(n)=T(\frac{n}{2})+1
$$

The 1 represents a constant-time operation, like value checking (e.g. leftIndex == rightIndex); this constant isn't really that important, considering it's a great deal smaller than both T(n) and T(n/b).

From matching the master theorem basic formula with the binary search formula we know:

$$
a=1, b=2, c=1, k=0
$$

Using the Master Theorem formula for T(n) we get that:

$$
T(n) = O(\log n)
$$
So, binary search really is more efficient than standard linear search.

Example: Dynamic Programming VS Recursion

Let's take one final look at the Fibonacci sequence (last time, I promise):

$$
Fibonacci(n)=Fibonacci(n-1)+Fibonacci(n-2)
$$

Dynamic programming, as we know from my last article, has a time complexity of O(n) because it uses memoization and generates the array linearly, with no look-backs (it constructs the array from the ground up).
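For reference, a minimal sketch of that bottom-up approach looks something like this (the function name is mine, just for illustration):

# Bottom-up dynamic programming: every element of the table is computed
# exactly once, so the whole run takes O(n) time
def fibonacci_dp(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fibonacci_dp(10))  # 55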

Now let's prove that it's way more efficient to use Dynamic Programming.

Fibonacci Time Complexity Analysis

Let's say that T(n) represents the time that is needed to calculate the n-th element of the Fibonacci sequence.

Then we know that the following formula applies:

$$
T(n)=T(n-1)+T(n-2)
$$

First, we need to get the implicit form of the equation (math talk for plonk everything on one side, so that the other side only has a zero):

$$
T(n)-T(n-1)-T(n-2)=0
$$

Now, let's use the standard substitution (formula (1)):

$$
r^n-r^{n-1}-r^{n-2}=0
$$

To further simplify the equation, let's divide both sides by r raised to the lowest power present in the equation (in this case, n-2):

$$
r^n-r^{n-1}-r^{n-2}=0 \ \ \ / \ r^{n-2}
$$

$$
r^{n-(n-2)}-r^{n-1-(n-2)}-r^{n-2-(n-2)}=0
$$

$$
r^2-r^1-r^0=0
$$

$$
r^2-r-1=0
$$

This step is done so that we can boil the problem down to a quadratic equation.

Using the quadratic equation formula we get the following possible values for r:

$$
r_1=\frac{1+\sqrt{5}}{2},\ \ r_2=\frac{1-\sqrt{5}}{2}
$$

Now, using formula (2), we determine the general form of T(n):

$$
T(n)=C_1r_1^n+C_2r_2^n
$$

Because we know that Fibonacci(0) = 0 and Fibonacci(1) = 1, we have T(0) = 0 and T(1) = 1 (technically, T(0) and T(1) could be any constant number of operations needed to calculate their values, but it doesn't really impact the result that much, so for the sake of simplicity they're 0 and 1, just like Fib(0) and Fib(1)). We can use them to solve the equation above for C1 and C2:

$$
T(0)=0=C_1r_1^0+C_2r_2^0=C_1+C_2
$$

Which means: C1 = -C2

Then, using T(1):

$$
T(1)=1=C_1r_1^1+C_2r_2^1=C_1r_1+C_2r_2
$$

Because we know the values of r1 and r2, and that C1 = -C2, we can substitute them into the equation:

$$
1=-C_2*\frac{1+\sqrt{5}}{2}+C_2*\frac{1-\sqrt{5}}{2}
$$

When we solve the above equation for C2 we get:

$$
C_1=\frac{1}{\sqrt{5}},\ \ C_2=-\frac{1}{\sqrt{5}}
$$

Which means that now we have the final solution to the recurrence relation:

$$
T(n)=\frac{1}{\sqrt{5}}*\left(\frac{1+\sqrt{5}}{2}\right)^n-\frac{1}{\sqrt{5}}*\left(\frac{1-\sqrt{5}}{2}\right)^n
$$
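As a quick sanity check (my addition, not part of the derivation), we can compare the closed form against the recurrence for small n; round() absorbs the floating-point error:

import math

# Compare the closed-form solution to the recurrence T(n) = T(n-1) + T(n-2)
def T_closed(n):
    sqrt5 = math.sqrt(5)
    return (((1 + sqrt5) / 2) ** n - ((1 - sqrt5) / 2) ** n) / sqrt5

def T_recursive(n):
    return n if n < 2 else T_recursive(n - 1) + T_recursive(n - 2)

for n in range(20):
    assert round(T_closed(n)) == T_recursive(n)
print("Closed form matches the recurrence for n = 0..19")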

Deducing Algorithm Complexity from Recurrence Relation

T(n) represents the number of steps a program needs to calculate the n-th element in the sequence, where n is also the input value (or, more commonly, the input size in bits). The solution above tells us that the algorithm we are using has exponential complexity.

Fun fact:

The method above can also be used to find the closed-form formula for the value of the n-th element of the Fibonacci sequence (the function would then represent the value of the n-th element rather than the number of operations needed to calculate it).

Because O(a^n) (recursive - exponential time complexity) is a much greater order of magnitude than O(n) (dynamic programming - linear time complexity), we now have a definitive answer as to why dynamic programming is superior time-wise to traditional recursion.
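To make that gap tangible, here is a small illustrative counter (my addition) that tallies how many function calls the naive recursion makes; the dynamic programming version needs only about n steps for the same input:

# Count the calls the naive recursive Fibonacci makes; the count itself
# grows roughly like 1.618^n, while the DP version needs ~n steps
def fib_calls(n):
    if n < 2:
        return 1
    return 1 + fib_calls(n - 1) + fib_calls(n - 2)

for n in (10, 20, 30):
    print(n, fib_calls(n))  # 10 -> 177, 20 -> 21891, 30 -> 2692537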

Conclusion

I know that this article might seem a little redundant. But proofs of correctness and efficiency are the cornerstones of modern Computer Science Theory, and the main reason why this field keeps going forward at a rapid rate.

Computer Science isn't the same thing as programming, it's just one of its many use cases. And I think that it would be nice for people to get a better understanding of what it really is, at least solely through this article.
