# 13.10 Exercises

1. Analyze the runtime of the following code in terms of `N`:

```java
for (int i = 0; i < N; i++) {
    int j = 0;
    while (j < N) {
        j = N;
    }
}
```
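One way to sanity-check your analysis is to instrument the code with an operation counter. The sketch below is ours, not part of the exercise; the `ops` counter and the `RuntimeCheck` harness are added purely for illustration.

```java
public class RuntimeCheck {
    public static void main(String[] args) {
        for (int N : new int[]{1000, 2000, 4000}) {
            long ops = 0;
            for (int i = 0; i < N; i++) {
                int j = 0;
                while (j < N) {
                    j = N;   // count each execution of the inner loop body
                    ops++;
                }
            }
            // If the count doubles whenever N doubles, the growth is linear.
            System.out.println("N = " + N + ", inner-loop executions: " + ops);
        }
    }
}
```

Note that `j = N` runs on the first pass through the `while` loop, so pay attention to how many times the inner loop can actually execute per outer iteration.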

2. Let $f(N) = 2N$. Which of the following statements are true?
    - $f(N) \in \Theta(1)$
    - $f(N) \in \Theta(N)$
    - $f(N) \in \Theta(N^2)$
    - $f(N) \in O(1)$
    - $f(N) \in O(N)$
    - $f(N) \in O(N^2)$
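As a reminder (this recap is ours, not part of the original exercise), the definitions to apply are:

$$
f(N) \in O(g(N)) \iff \exists\, c > 0,\ N_0 > 0 \text{ such that } f(N) \le c \cdot g(N) \text{ for all } N \ge N_0
$$

$$
f(N) \in \Theta(g(N)) \iff f(N) \in O(g(N)) \text{ and } f(N) \in \Omega(g(N))
$$

For instance, $2N \le 2 \cdot N$ for all $N \ge 1$ is a witness for one of the $O$ memberships above, with $c = 2$ and $N_0 = 1$.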

3. **True or false.** Suppose we have a function $f$, and we are told $f(N) \in \Theta(N^2)$. If we run $f$ on an input of size $N$, then on an input of size $2N$, it will take roughly 4 times as long.
4. **True or false.** Suppose we have a function $f$, and we are told $f(N) \in \Theta(N^2)$. If we run $f$ on an input of size 100, then on an input of size 200, it will take roughly 4 times as long.
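The intuition behind these two questions (this derivation is added as a hint, not as the official answer): if $f(N) \in \Theta(N^2)$, then for sufficiently large $N$ the runtime behaves like $c \cdot N^2$ for some constant $c$, so doubling the input scales the leading term by

$$
\frac{c \cdot (2N)^2}{c \cdot N^2} = 4.
$$

Keep in mind that $\Theta$ only guarantees this behavior asymptotically; whether it holds at a specific size like $N = 100$ depends on whether lower-order terms still matter at that size.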

5. Why do we use asymptotics instead of empirical timing (for example, the `Stopwatch` class from Lab 3)?

There are several advantages to using asymptotics over empirical timing. See if you can come up with more beyond the list below!

- Different computers run at different speeds. Depending on the architecture, hardware components, and even room temperature, the same code can execute with vastly different empirical times.
- It may not be feasible to test code on extremely large inputs.
- Asymptotics are language-agnostic. The same algorithm may have different empirical runtime depending on which language it's written in (for example, C is usually around 10-400x faster than Python), but will have the same asymptotic runtime.
- The worst-case runtime may only occur in certain cases that are hard to measure empirically.
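To see the first point concretely, here is a minimal timing harness. The Lab 3 `Stopwatch` class is not reproduced here, so this sketch uses `System.nanoTime()` as a stand-in; the `TimingDemo` class and the workload inside it are our assumptions, added only for illustration.

```java
public class TimingDemo {
    public static void main(String[] args) {
        int N = 50_000_000;
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < N; i++) {
            sum += i;  // work proportional to N
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // The printed time varies from run to run and machine to machine,
        // while the asymptotic runtime is Theta(N) everywhere.
        System.out.println("sum = " + sum + ", elapsed: " + elapsedMs + " ms");
    }
}
```

Running this twice in a row, or on two different machines, will typically report noticeably different times for identical code, which is exactly why asymptotic analysis is the more portable yardstick.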
