> At the root of the fast transform is the simple fact that
Actually... no? That's a constant-factor optimization; the second expression has 75% of the operations of the first. The FFT is algorithmically faster: it's O(N·log2(N)) in the number of samples instead of O(N²).
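To put rough numbers on that, here's a back-of-envelope comparison (a sketch in Python, assuming a power-of-two N and ignoring constant factors):

    import math

    # Back-of-envelope operation counts for a length-N transform.
    for n in (1 << 10, 1 << 20):
        naive = n * n                  # direct DFT: every output touches every input
        fast = n * int(math.log2(n))   # FFT: log2(N) levels of O(N) work each
        print(f"N={n:>9,}:  N^2 = {naive:>15,}   N*log2(N) = {fast:>12,}")

At N = 2^20 that's roughly a trillion operations versus about twenty million, which is why the difference is algorithmic rather than a constant factor.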
That property doesn't come from factorization per se, but from the fact that the factorization can be applied recursively by creatively ordering the terms.
It's the symmetry that gives recursive opportunities to apply the optimization. It's the same optimization folded over and over again. Butterfly diagrams are great for understanding this.
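If you want it concrete, a minimal radix-2 Cooley-Tukey sketch (assuming the input length is a power of two, with no pretense of being an optimized implementation) shows both ideas at once: the even/odd reordering and the butterfly that reuses each twiddle product for two outputs.

    import cmath

    def fft(x):
        # Recursive radix-2 FFT; assumes len(x) is a power of two.
        n = len(x)
        if n == 1:
            return list(x)
        # "Creative ordering": split into even- and odd-indexed samples and recurse.
        even = fft(x[0::2])
        odd = fft(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            # One butterfly: the same product feeds two output bins.
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

Each level does O(N) butterfly work and there are log2(N) levels of recursion, which is exactly where the N·log2(N) comes from.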
https://news.ycombinator.com/item?id=45291978 has pointers to more in-depth explorations of the idea.
"Digits" are constant in an FFT (or rather ignored, really, precision is out of scope of the algorithm definition).
In practice these are implemented as machine words (pairs of them for a complex FFT, though real-valued DCTs are much more common), and modern multipliers and adders pipeline at one result per cycle.