Using the properties of logs, we see that log₂ n = (log n)/(log 2) = Θ(log n). Similarly, log₃ n = (log n)/(log 3) = Θ(log n). So when considering asymptotics relating to logarithms, the base does not matter (as long as it is some constant).
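As a quick numeric sanity check of the change-of-base identity, the ratio between log₂ n and the natural log of n is the same constant for every n (a small sketch, not part of the worksheet):

```python
import math

# The ratio log2(n) / ln(n) equals the constant 1 / ln(2) for every n,
# so switching log bases only changes Theta(log n) by a constant factor.
for n in [10, 1000, 10**6]:
    ratio = math.log2(n) / math.log(n)
    print(round(ratio, 4))  # same value every iteration
```

Each printed ratio is 1/ln(2) ≈ 1.4427, regardless of n.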
Problem 2
This is essentially fib with 3 recursive calls instead of 2. The runtime is Θ(3^n).
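One way to see the Θ(3^n) bound is to count calls: with three recursive calls per level, the call count satisfies C(n) = 1 + 3·C(n−1), which solves to (3^(n+1) − 1)/2 = Θ(3^n). A sketch (the worksheet's exact function is not shown, so this is a hypothetical stand-in):

```python
def tri_calls(n):
    """Count the total calls made by a fib-like function that makes
    three recursive calls, each on a problem of size n - 1.
    (Hypothetical model of the worksheet's function.)"""
    if n <= 0:
        return 1          # base case: a single call, no recursion
    return 1 + 3 * tri_calls(n - 1)  # this call plus three subtrees
```

For example, tri_calls(3) = 1 + 3 + 9 + 27 = 40 = (3^4 − 1)/2, matching the closed form.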
Procedural
Find the runtime of running print_fib for arbitrarily large n.
From lecture, we know that fib(i) runs in roughly 2^i time. If we run fib for each i from 1 to n, we get total work of 2^0 + 2^1 + ... + 2^n. Note that this is a geometric sum with last term 2^n, so the overall runtime is actually still Θ(2^n).
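A sketch of what print_fib is assumed to look like (the worksheet's code is not shown): a loop that recomputes fib(i) from scratch on each iteration, so the work forms the geometric sum above.

```python
def fib(n):
    # Naive recursive Fibonacci: roughly 2^n time.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

def print_fib(n):
    # Assumed structure: each iteration redoes ~2^i work;
    # the sum 2^1 + ... + 2^n is dominated by its last term, Theta(2^n).
    for i in range(1, n + 1):
        print(fib(i))
```

The geometric sum is at most twice its largest term, which is why the loop costs no more (asymptotically) than the final fib(n) call alone.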
Problem 2
Again, we know that fib(i) takes about 2^i time. However, this time the loop calls fib(n) on each of its n iterations. This gives a runtime of Θ(n·2^n).
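The assumed variant, sketched below: the loop body calls fib(n) rather than fib(i), so every one of the n iterations does ~2^n work.

```python
def fib(n):
    # Naive recursive Fibonacci: roughly 2^n time.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

def print_fib_variant(n):
    # Hypothetical version matching this problem: n iterations,
    # each doing ~2^n work, for Theta(n * 2^n) total.
    for _ in range(n):
        print(fib(n))
```

Unlike the previous problem, the per-iteration cost no longer shrinks for earlier iterations, so the factor of n cannot be absorbed into the geometric sum.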
Problem 3
This function creates two recursive branches of half the size per call, with each node taking linear work. As such, each level has n total work, and since we halve the input each time, there are log n total levels, for a runtime of Θ(n log n).
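This is the recurrence T(n) = 2T(n/2) + n. A small sketch that tallies the work done by such a function (a model, not the worksheet's code) confirms the n log n growth:

```python
def work(n):
    """Total work for T(n) = 2*T(n/2) + n: linear work at this call,
    plus two recursive calls on inputs of half the size."""
    if n <= 1:
        return 1              # constant work at a leaf
    return n + 2 * work(n // 2)
```

For n = 8 this gives 8 + 2·(4 + 2·(2 + 2·1)) = 32, i.e. n log n for the internal levels plus n for the leaves, which is Θ(n log n).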
Metacognitive
What would the runtime of modified_fib be? Assume that values is an array of size n. If a value in an int array is not initialized to a number, it is automatically set to 0.
This is an example of dynamic programming, a topic covered in more depth in CS170. Note that since values is saved across calls, we compute each Fibonacci value only once. Computing a single value takes constant time, since we either add two already-computed values or do an array access. As such, the overall runtime is linear: Θ(n).
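A sketch of how modified_fib is assumed to work (the worksheet's code is not shown): results are cached in values, and an entry of 0 means "not yet computed," matching the problem's note about int arrays defaulting to 0.

```python
def modified_fib(n, values):
    """Memoized Fibonacci. `values` persists across calls;
    a 0 entry means that index has not been computed yet
    (assumption based on the problem's default-to-0 note)."""
    if n <= 1:
        return n
    if values[n] != 0:        # already computed: O(1) array access
        return values[n]
    values[n] = modified_fib(n - 1, values) + modified_fib(n - 2, values)
    return values[n]

values = [0] * 100            # shared cache, one slot per subproblem
print(modified_fib(30, values))  # → 832040
```

Each index of values is filled in at most once, and each fill does O(1) work, so n subproblems give Θ(n) total, versus Θ(2^n) for the naive recursion.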