70. Climbing Stairs
You are climbing a staircase. It takes n steps to reach the top.
Each time you can either climb 1 or 2 steps. In how many distinct ways can you climb to the top?
Input: n = 2
Output: 2
Explanation: There are two ways to climb to the top.
1. 1 step + 1 step
2. 2 steps
Input: n = 3
Output: 3
Explanation: There are three ways to climb to the top.
1. 1 step + 1 step + 1 step
2. 1 step + 2 steps
3. 2 steps + 1 step
Based on the problem statement, it takes n steps to reach the top of the staircase, with the condition that we can only take 1 or 2 steps at a time. Our task is to figure out how many distinct ways we can climb up the staircase.
There are several ways we can solve this problem, but in order to arrive at the optimized approach logically, let’s start with the Brute Force approach.
This problem is very similar to the Fibonacci Sequence: if we were to write the recursive code for Fibonacci, the base cases would be an input of 0 or 1, because the Fibonacci of 0 is 0 and the Fibonacci of 1 is 1. As these values are absolute, we can use them to ultimately calculate the Fibonacci of larger numbers.
Similarly, with the Climbing Stairs problem, our base cases are 1 and 2 steps because these are absolute: there is exactly 1 way to climb 1 step and exactly 2 ways to climb 2 steps, and we can only take either 1 or 2 steps at a time as stated in the problem.
If we were to write the recursive code, we'd establish our base cases of 1 and 2, then add two recursive calls: one passed n-1 and the other passed n-2. The way of thinking here is Top-Down, or in other words, Top-Down recursion. We're ultimately building a decision tree, starting from the given input (say, 5 steps to reach the top) and expanding it in a top-down fashion with 2-branch recursion. In other words, we have 2 decisions at each step; hence, 2 branches stem from each node of the tree at every level until we reach one of our base cases.
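The Top-Down recursion described above could be sketched in Python like this (the function name `climb_stairs_brute` is my own; the problem statement only fixes the behavior, not the naming):

```python
def climb_stairs_brute(n: int) -> int:
    """Brute-force Top-Down recursion: exponential O(2^n) time."""
    # Base cases: exactly 1 way to climb 1 step, 2 ways to climb 2 steps.
    if n == 1:
        return 1
    if n == 2:
        return 2
    # Two decisions at every node: take 1 step or take 2 steps.
    return climb_stairs_brute(n - 1) + climb_stairs_brute(n - 2)
```

Running this on the examples above gives `climb_stairs_brute(2) == 2` and `climb_stairs_brute(3) == 3`, matching the two and three distinct ways listed.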
In terms of the space-time complexity of this algorithm, the time complexity would be O(2^n), or exponential, because we have 2 decisions at every node and the maximum height of the tree is n, the given input. The space complexity would be O(n), the height of the tree, because recursion utilizes an implicit call stack. When figuring out the space complexity of a recursive algorithm, we want to take a snapshot of the call stack when it is fully populated with as many stack frames, or activation records, as it could possibly contain. In this case, the maximum depth of the tree is n, because we'd have to go down n levels starting from the root.
Optimized Approach — Dynamic Programming
As we've seen with the Brute Force approach above, an exponential algorithm is not very efficient, although in some scenarios it may be the best we can come up with for a given problem. For this particular problem, however, the solution can definitely be further optimized with Dynamic Programming. Specifically, we can implement Top-Down Dynamic Programming using a technique called Memoization: we cache the result of each subproblem we've fully calculated, so that as we traverse the tree recursively (generally from left to right), whenever we encounter the same subproblem again we retrieve the saved value from an auxiliary data structure instead of redoing the entire calculation. This eliminates any unnecessary, redundant work, and with the addition of Memoization our Brute Force algorithm improves from exponential to linear time.
The space complexity is also linear, because we no longer explore every single node in the tree; we reuse the subproblem results we've already calculated. The way to visualize it: we traverse from the root node down the left subtree to its full depth, and as those stack frames (activation records) bubble back up the call stack, we cache in our auxiliary data structure what each subproblem returns. Then, once we start exploring right subtrees, we retrieve those cached results instead of redoing the work. Ultimately, the work we actually perform looks more like a linear chain, similar to a Linked List, running from the root node down to the leftmost node in the left subtree.
For the auxiliary data structure where we cache the subproblem results, it is common practice to use either a Hash Map or a dynamic array. In the code below, I've used a Hash Map because insertion, deletion, and search on a Hash Map take expected O(1) time. Since we only perform constant-time operations along what is effectively a linear chain of subproblems after applying memoization, the total time complexity is O(n).
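Here is a minimal Python sketch of the memoized version, using a dict as the Hash Map described above (the name `climb_stairs_memo` and the `memo` parameter are my own choices; any equivalent caching structure would work):

```python
def climb_stairs_memo(n: int, memo=None) -> int:
    """Top-Down Dynamic Programming with Memoization: O(n) time, O(n) space."""
    if memo is None:
        memo = {}  # Hash Map caching subproblem -> number of distinct ways
    # Base cases: 1 way to climb 1 step, 2 ways to climb 2 steps.
    if n == 1:
        return 1
    if n == 2:
        return 2
    # Reuse a cached subproblem result instead of recomputing the subtree.
    if n in memo:
        return memo[n]
    memo[n] = climb_stairs_memo(n - 1, memo) + climb_stairs_memo(n - 2, memo)
    return memo[n]
```

Each distinct subproblem is computed once and then served from the cache, which is why the recursion collapses into the linear chain described above.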
Thank you for reading!
If you haven't yet joined my Discord server, where I host free daily sessions on data structures & algorithms, please join via the link shared below.
If you have any questions or comments, please don’t be afraid to connect with me on any of the platforms below or feel free to send me an email at firstname.lastname@example.org