Time Complexity of Algorithms

Time complexity is an important concept in computer science that analyzes how the running time of an algorithm grows as the input size increases. Understanding time complexity allows for comparing algorithms and selecting efficient solutions. This article provides a beginner-friendly introduction to time complexity as applied to algorithms, with code examples and real-world applications.

What is Time Complexity?

Time complexity represents the amount of time taken by an algorithm to run as a function of the input size (n). It measures how the run time increases as the input grows.

We analyze time complexity by counting the number of elementary operations in an algorithm. These elementary operations take constant time, so counting them gives us time complexity.

For example, a function that sums the first n natural numbers performs n additions. As n increases, the number of additions grows linearly. This linear time complexity can be represented as O(n) in Big O notation.
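
A minimal sketch of that example in Python: the function below performs one addition per value from 1 to n, so the number of operations grows linearly with n.

Summing the first n numbers - O(n)

def sum_first_n(n):
  total = 0
  for i in range(1, n + 1):  # loop runs n times
    total += i               # one addition per iteration
  return total

print(sum_first_n(5))  # 15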

Formal Notation

Big O notation formally describes the time complexity of an algorithm and provides an upper bound. O(f(n)) means that, for sufficiently large inputs, the algorithm performs at most a constant multiple of f(n) operations on an input of size n.

Some common Big O complexities are:

  • O(1) - Constant time

  • O(log n) - Logarithmic time

  • O(n) - Linear time

  • O(n log n) - Linearithmic time

  • O(n^2) - Quadratic time

  • O(2^n) - Exponential time

For example, accessing an array element by the index is O(1), while searching an unsorted array is O(n).

Array Access

Accessing an array element by the index is O(1) because it takes the same amount of time regardless of how large the array is. Here is how array access works:

  1. The array starts at some address in memory, let's call it X

  2. To get the element at index i, we calculate the memory address as X + i * size_of_element

  3. Load the data from that address into a register

No matter how big the array is, these three steps take the same amount of time. There is no need to iterate through the array at all. The address calculation takes constant time, and loading from memory is also a constant-time operation.

Therefore, accessing any array index is always O(1) or constant time. The size of the input (the array) does not affect the time needed.
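
As a simplified illustration of the address arithmetic above (the addresses and element size below are hypothetical; the real memory layout is handled by the language runtime), the calculation is just one multiplication and one addition:

Address calculation - O(1)

base_address = 1000      # X: hypothetical start address of the array
element_size = 4         # assume each element occupies 4 bytes

def address_of(i):
  # one multiplication and one addition, regardless of array length
  return base_address + i * element_size

print(address_of(3))  # 1012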

Here is a code example to further demonstrate O(1) complexity:

Array access - O(1)

arr = [6, 3, 8, 14, 9] 

# Access element at index 3
print(arr[3]) 

# This takes the same time no matter how big 
# the array is

The key line that makes this O(1):

print(arr[3])

Indexing into an array takes constant time, regardless of the array size.

Searching an Unsorted Array

Searching an unsorted array is O(n) because in the worst case, we have to check every single element before finding the target. Here is how linear search on an unsorted array works:

  1. Start a loop counter at 0

  2. Check if array[counter] equals the target

    2a. If so, return true

    2b. Else, increment the counter

  3. If we reach the end of the array, return false

In the worst case, the target is not in the array. So we have to increment the counter n times before figuring that out and returning false.

As the array grows bigger, the number of increments needed also grows linearly. If the array has 5 elements, we need up to 5 checks. For an array with 10 elements, we need up to 10 checks.

Therefore, searching an unsorted array depends linearly on the number of elements, making it O(n) time complexity.

Here is a code example to further demonstrate O(n) complexity:

Linear search - O(n)

def linear_search(arr, target):
  for i in range(len(arr)):
    if arr[i] == target:
      return i

  return -1

arr = [8, 13, 27, 1, 3]
print(linear_search(arr, 27))

# Worst case is target not in array, so we have to 
# traverse whole array before returning -1

The key line that makes this O(n):

for i in range(len(arr)):

We loop through the entire array in the worst case. As the array grows bigger, the number of iterations grows linearly.

NB

When it comes to counting elements, arrays have an advantage over linked lists in terms of time complexity. In arrays, elements are stored contiguously in memory and the number of elements is stored alongside the data, so counting the elements simply means reading that stored size. This takes constant time, denoted as O(1). No matter how large the array is, the time taken to count its elements remains the same, making arrays highly efficient for this operation.

On the other hand, a plain linked list stores elements in nodes scattered across memory, connected through pointers, and does not keep a separate record of its size. To count the elements, we need to traverse the entire list, incrementing a counter as we move through each node. This takes linear time, denoted as O(n), where n is the number of elements in the linked list. As the linked list grows, the time taken to count its elements grows linearly, making it less efficient than arrays for this particular operation.

However, when it comes to printing the elements, both arrays and linked lists exhibit the same time complexity. To print the elements, we need to visit each element in the data structure and display its value. For both arrays and linked lists, this process requires visiting all n elements, resulting in a time complexity of O(n). Therefore, when it comes to printing, the performance of arrays and linked lists is on par, as both take linear time.

In conclusion, when you need to count the elements in a data structure, arrays have a clear advantage over linked lists due to their constant-time access. However, if the task is simply printing the elements, both data structures share the same time complexity. The choice between arrays and linked lists should be based on the specific operations and performance requirements of your application.
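
Here is a small sketch of this difference. The Node class and count_linked_list function are minimal illustrative helpers, not library types: len() on a Python list reads the stored size in constant time, while counting a linked list requires walking every node.

Counting elements - O(1) for an array, O(n) for a linked list

class Node:
  def __init__(self, value):
    self.value = value
    self.next = None

def count_linked_list(head):
  # Must visit every node, so this is O(n)
  count = 0
  node = head
  while node is not None:
    count += 1
    node = node.next
  return count

arr = [6, 3, 8, 14, 9]
print(len(arr))  # O(1): the length is stored with the list

# Build a small linked list: 6 -> 3 -> 8
head = Node(6)
head.next = Node(3)
head.next.next = Node(8)
print(count_linked_list(head))  # O(n): traverses all nodes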

Analyzing Algorithms

To analyze an algorithm's time complexity, follow these steps:

  1. Determine the input size n

  2. Break the algorithm into elementary operations

  3. Count how many operations are performed in relation to n

  4. Express the relationship in Big O notation

Let's go through an example of analyzing a function that checks if a number is prime:

from math import sqrt

def is_prime(n):
  if n < 2:
    return False

  for i in range(2, int(sqrt(n)) + 1):
    if n % i == 0:
      return False

  return True

  1. Input size n is the number being checked

  2. Elementary operations are comparisons, the modulus calculation, and the loop increment

  3. Operations grow roughly proportionally to sqrt(n)

    • The loop runs about sqrt(n) times

    • A constant number of operations is performed inside the loop

  4. Therefore, complexity is O(sqrt(n))

Examples of Time Complexities

  • O(1) - Get element at array index

  • O(log n) - Binary search on sorted array (see the sketch after this list)

  • O(n) - Linear search on unsorted array

  • O(n log n) - Efficient sorting algorithms like merge sort

  • O(n^2) - Nested loops, sorting algorithms like insertion sort

  • O(2^n) - Recursive Fibonacci calculation
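
To make the O(log n) entry concrete, here is a minimal sketch of iterative binary search. Each iteration halves the remaining search range, so a sorted array of n elements needs at most about log2(n) + 1 iterations.

Binary search - O(log n)

def binary_search(arr, target):
  low, high = 0, len(arr) - 1
  while low <= high:
    mid = (low + high) // 2    # middle of the current range
    if arr[mid] == target:
      return mid
    elif arr[mid] < target:
      low = mid + 1            # discard the lower half
    else:
      high = mid - 1           # discard the upper half
  return -1

arr = [1, 3, 8, 13, 27]        # must already be sorted
print(binary_search(arr, 13))  # 3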

Multiple Inputs

For algorithms with multiple inputs, we express the complexity in terms of each input size that affects the number of operations.

Searching a 2D array with n rows and m columns takes O(nm) time. Multiplying two n x n matrices with the straightforward algorithm takes O(n^3) time.

Searching a 2D array - O(nm)

def search_2d(arr, target):
  for i in range(len(arr)):
    for j in range(len(arr[0])): 
      if arr[i][j] == target:
        return True

  return False

arr = [[1, 5, 7], 
       [3, 6, 9],
       [2, 4, 8]]

print(search_2d(arr, 4))

Looping through a 2D array with n rows and m columns takes O(nm) time. The two key lines are:

for i in range(len(arr)):
  for j in range(len(arr[0])):

This demonstrates how multiple inputs affect time complexity.
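
The O(n^3) bound for matrix multiplication mentioned earlier follows the same pattern, only with three nested loops. Here is a minimal sketch of the straightforward algorithm for n x n matrices:

Matrix multiplication - O(n^3)

def matrix_multiply(A, B):
  n = len(A)                            # assume both matrices are n x n
  result = [[0] * n for _ in range(n)]
  for i in range(n):                    # n iterations
    for j in range(n):                  # n iterations for each i
      for k in range(n):                # n iterations for each (i, j) pair
        result[i][j] += A[i][k] * B[k][j]
  return result

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matrix_multiply(A, B))  # [[19, 22], [43, 50]]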

Real-World Applications

Understanding time complexity has many real-world benefits:

Comparing algorithms - Big O provides a standard way to compare algorithms independently of implementation details, hardware, or specific data. For a task like sorting, we can select the most efficient algorithm by comparing complexities.

Optimizing performance - Developers can optimize areas that have the highest time complexity to achieve the greatest gains in speed. Improving an O(n^2) algorithm to O(n log n) provides much more impact than optimizing an O(log n) section.

Estimating runtime - Knowing the time complexity of an algorithm allows estimating how its running time will grow for larger inputs. For example, an O(n) algorithm can be expected to take roughly 100,000 times longer on 1 million elements than on 10 elements.

Identifying bottlenecks - For complex programs with many functions, identifying the parts with the highest time complexity reveals optimization opportunities. The 80/20 rule applies - most time is spent in a small fraction of code.

Conclusion

Analyzing time complexity is an essential skill for designing and evaluating algorithms. This article provided an introduction to time complexity analysis using Big O notation with a focus on algorithms. The key takeaways are:

  • Time complexity represents how long algorithms take to run based on input size.

  • Big O notation formally expresses the upper bound.

  • The analysis involves counting elementary operations in relation to input size n.

  • Real-world applications include comparing algorithms, optimizing performance, estimating runtime, and finding bottlenecks.

Understanding the fundamental time complexity of algorithms will help you write more efficient code and better understand the performance of systems.

Read Also:

Space Complexity of Algorithms