Definitions

In computing, an algorithm is said to run in subquadratic time if its running time $f\left(n\right)$ is in $o\left(n^2\right)$; that is, it grows asymptotically more slowly than $n^2$.

For example, most naïve sorting algorithms are quadratic (e.g., insertion sort), but more advanced algorithms are subquadratic (e.g., merge sort). No general-purpose sorts run in linear time, but the change from quadratic to the common $O\left(n \log n\right)$ is of great practical importance.
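As an illustration of a subquadratic sort, here is a minimal sketch of merge sort in Python, which achieves $O\left(n \log n\right)$ time by recursively halving the input and merging the sorted halves in linear time:

```python
def merge_sort(a):
    """Sort a list in O(n log n) time by recursive halving and merging."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

By contrast, insertion sort may shift a linear number of elements for each of the $n$ insertions, giving its quadratic worst case.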

Since big O notation essentially captures how well an algorithm scales, subquadratic algorithms scale to larger problem sizes much better than algorithms of quadratic or higher complexity. For example, say we have two algorithms (call them A and B) which compute a function $f\left(n\right)$. Algorithm A computes $f$ in $10n\log_2\left(n\right) + 3n + 4$ steps, rounded up (making it an $O\left(n \log\left(n\right)\right)$ algorithm), and algorithm B computes it in $n^2 + 2n + 1$ steps, making it a quadratic ($O\left(n^2\right)$) algorithm.

In this case, algorithm B outperforms algorithm A for all integer values of $n < 61$. On the other hand, for $n=100$, the quadratic algorithm is doing about $47\%$ more work than the $O\left(n \log\left(n\right)\right)$ algorithm, and the difference becomes even more dramatic for higher values of $n$. By the time $n=1000$, the quadratic algorithm is doing almost $10$ times the work of the subquadratic algorithm.
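The crossover point and the growing gap can be checked directly by evaluating the two step-count formulas above (the function names here are for illustration only):

```python
import math

def steps_A(n):
    # Hypothetical algorithm A: 10*n*log2(n) + 3n + 4 steps, rounded up.
    return math.ceil(10 * n * math.log2(n) + 3 * n + 4)

def steps_B(n):
    # Hypothetical algorithm B: n^2 + 2n + 1 steps.
    return n * n + 2 * n + 1

# Smallest n at which the subquadratic algorithm A pulls ahead of B.
crossover = next(n for n in range(2, 10**6) if steps_A(n) < steps_B(n))
print(crossover)                       # 61

# How much extra work the quadratic algorithm does at larger n.
print(steps_B(100) / steps_A(100))     # ~1.47, i.e. about 47% more work
print(steps_B(1000) / steps_A(1000))   # ~9.76, almost 10 times the work
```

Running this confirms the figures in the text: B is ahead for $n < 61$, and A's advantage compounds as $n$ grows.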