Unitary Operators: Grammar of Reality (Ep-3)

Fair warning: This is going to be a bit of a lengthy post. It's worth it, trust me!
You know what's cool? There are these mathematical transformations that can completely rearrange things while keeping every single geometric relationship intact. No distortion, no information loss, no breaking of the underlying structure, just perfect, pristine symmetry.
These are called unitary operators, and here's the mind-blowing part: every single one of them, no matter how complex or abstract, is fundamentally just a change of orthonormal coordinate system. Sometimes this looks like a literal rotation, sometimes like a reflection or more exotic rearrangement, but the core insight is the same. You're never distorting the space; you're just viewing it through a different orthonormal lens.
And that’s exactly why this isn’t “just another linear algebra article.” The Ivethium style is about something bigger. We’re not stockpiling abstract theorems for their own sake. We’re laying down the grammar of reality. Once these fundamentals settle in, the math behind physics will stop feeling like cryptic symbols and start reading like a story you already know.
Note: If you are new to Ivethium, we recommend starting with these articles:
👉 The L2 Norm and Inner Products
👉 Cracking Hilbert Space
👉 Adjoint Operators
From Adjoint to Unitary
Let's start with a quick refresher (from our previous post). The adjoint $T^\dagger$ of an operator $T$ is the unique operator that "moves $T$ from one side of the inner product to the other":
$$\langle Tx, y \rangle = \langle x, T^\dagger y \rangle$$
Think of the adjoint as the operator's "mirror image" in the geometry of inner products. It captures how $T$ interacts with the fundamental notion of angle and length in our space.
Now, a unitary operator $U$ is one that satisfies a beautifully simple condition:
$$U^\dagger = U^{-1}$$
Equivalently, we can say that $U$ preserves inner products exactly:
$$\langle Ux, Uy \rangle = \langle x, y \rangle \quad\text{for all } x, y$$
As a little exercise, pause here and try to see why this is the case!
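If you want to see the equivalence with your own eyes before proving it, here is a tiny NumPy sketch (just a numerical sanity check, not a proof): build a random unitary from a QR factorization and test both characterizations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random unitary: the Q factor of a QR decomposition of a
# random complex matrix has orthonormal columns.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

# Condition 1: the adjoint is the inverse, U^dagger = U^{-1}
print(np.allclose(U.conj().T, np.linalg.inv(U)))           # True

# Condition 2: inner products are preserved, <Ux, Uy> = <x, y>
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)
print(np.allclose(np.vdot(U @ x, U @ y), np.vdot(x, y)))   # True
```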
The Finite-Dimensional Mental Model
Before we dive into infinite dimensions, let's build intuition with familiar examples. In finite dimensions, unitary operators are exactly the unitary matrices (their real cousins are the orthogonal matrices), and there's a beautiful characterization that makes everything clear:
Theorem: A finite-dimensional matrix $U$ is unitary if and only if its columns form an orthonormal set (equivalently, its rows form an orthonormal set).
This gives us a powerful geometric insight: a unitary matrix is exactly an orthogonal change of coordinates. The orthonormal columns of $U$ form a basis, and applying the adjoint $U^\dagger$ reads off the coordinates of a vector $x$ in that basis:
$$U^\dagger x = \begin{bmatrix} \langle \mathbf{c}_1, x \rangle \\ \langle \mathbf{c}_2, x \rangle \\ \vdots \\ \langle \mathbf{c}_n, x \rangle \end{bmatrix}$$
where $\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_n$ are the orthonormal columns of $U$. Each entry is the component of $x$ along one column, and these components rebuild $x$ exactly: $x = \sum_k \langle \mathbf{c}_k, x \rangle\, \mathbf{c}_k$. That is the orthogonal decomposition hiding inside every unitary matrix.
Example 1: Rotations
Consider a rotation by $60^\circ$ in the plane:
$$R_{60^\circ} = \begin{pmatrix} \cos(60^\circ) & -\sin(60^\circ) \\ \sin(60^\circ) & \cos(60^\circ) \end{pmatrix} = \begin{pmatrix} \tfrac{1}{2} & -\tfrac{\sqrt{3}}{2} \\ \tfrac{\sqrt{3}}{2} & \tfrac{1}{2} \end{pmatrix}$$
The columns are
$$\mathbf{c}_1 = \begin{pmatrix} \tfrac{1}{2} \\ \tfrac{\sqrt{3}}{2} \end{pmatrix}, \quad \mathbf{c}_2 = \begin{pmatrix} -\tfrac{\sqrt{3}}{2} \\ \tfrac{1}{2} \end{pmatrix}$$
Check orthonormality:
$$|\mathbf{c}_1|^2 = \left(\tfrac{1}{2}\right)^2 + \left(\tfrac{\sqrt{3}}{2}\right)^2 = \tfrac{1}{4} + \tfrac{3}{4} = 1 \quad \checkmark$$
$$|\mathbf{c}_2|^2 = \left(-\tfrac{\sqrt{3}}{2}\right)^2 + \left(\tfrac{1}{2}\right)^2 = \tfrac{3}{4} + \tfrac{1}{4} = 1 \quad \checkmark$$
$$\langle \mathbf{c}_1, \mathbf{c}_2 \rangle = \tfrac{1}{2}\left(-\tfrac{\sqrt{3}}{2}\right) + \tfrac{\sqrt{3}}{2}\left(\tfrac{1}{2}\right) = -\tfrac{\sqrt{3}}{4} + \tfrac{\sqrt{3}}{4} = 0 \quad \checkmark$$
Example 2: Reflections
Reflection across the line $y = x$:
$$\text{Ref} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
The columns are
$$\mathbf{c}_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad \mathbf{c}_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
Check orthonormality:
$$|\mathbf{c}_1|^2 = 0^2 + 1^2 = 1 \quad \checkmark$$
$$|\mathbf{c}_2|^2 = 1^2 + 0^2 = 1 \quad \checkmark$$
$$\langle \mathbf{c}_1, \mathbf{c}_2 \rangle = 0 \cdot 1 + 1 \cdot 0 = 0 \quad \checkmark$$
Example 3: Permutations
Swapping the first two coordinates in $\mathbb{R}^3$:
$$P = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
The columns are
$$\mathbf{c}_1 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad \mathbf{c}_2 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{c}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$
These are just the standard basis vectors $\mathbf{e}_2, \mathbf{e}_1, \mathbf{e}_3$ in rearranged order, clearly orthonormal!
Each of these transformations preserves distances and angles perfectly because they're just changing which orthonormal coordinate system we use to measure everything. A circle remains a circle, perpendicular lines stay perpendicular, and the "size" of any geometric object is unchanged. We're just viewing it through different orthonormal "rulers."
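Here is a quick NumPy sketch (a numerical sanity check of the examples above, nothing more): each of the three matrices has orthonormal columns, preserves norms, and its column coefficients rebuild any vector exactly.

```python
import numpy as np

# The three examples: 60-degree rotation, reflection across y = x,
# and the permutation that swaps the first two coordinates.
R = np.array([[0.5, -np.sqrt(3) / 2],
              [np.sqrt(3) / 2, 0.5]])
Ref = np.array([[0.0, 1.0],
                [1.0, 0.0]])
P = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

rng = np.random.default_rng(1)
for U in (R, Ref, P):
    n = U.shape[0]
    x = rng.normal(size=n)
    print(np.allclose(U.T @ U, np.eye(n)),                       # orthonormal columns
          np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # norm preserved

# Orthogonal decomposition: the coefficients of x along the columns of R
# are (R^T x)_k = <c_k, x>, and they rebuild x exactly.
x = np.array([2.0, -1.0])
coeffs = R.T @ x
print(np.allclose(R @ coeffs, x))   # x = sum_k <c_k, x> c_k
```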
The Geometry of Unitaries
The condition $U^\dagger = U^{-1}$ is deceptively simple, but it reveals a treasure trove of geometric properties. Let's derive them quickly:
Property 1: Isometry
From $U^\dagger U = I$, we get:
$$|Ux|^2 = \langle Ux, Ux\rangle = \langle x, U^\dagger Ux\rangle = \langle x, x\rangle = |x|^2$$
So $\|Ux\| = \|x\|$ for all $x$. Unitary operators preserve norms exactly. They're perfect isometries.
Property 2: No Information Loss
If $Ux = 0$, then $\|x\| = \|Ux\| = 0$, which forces $x = 0$. Therefore $\ker U = \{0\}$ (recall from the previous article that the kernel is the set of vectors an operator kills, i.e. sends to zero). Unitary operators are injective: they never collapse distinct vectors to the same point.
Property 3: Onto Everything
In finite dimensions, injectivity already forces surjectivity; more generally, since $U$ has a two-sided inverse ($U^\dagger$), we get $\text{range}(U) = \mathcal{H}$. Unitary operators are surjective: they hit every point in the space.
Property 4: Spectral Geometry
All eigenvalues of $U$ lie on the unit circle in the complex plane. If $Ux = \lambda x$ with $x \neq 0$, then $|\lambda|\,\|x\| = \|\lambda x\| = \|Ux\| = \|x\|$, and dividing by $\|x\| \neq 0$ gives $|\lambda| = 1$. (We have not yet properly discussed eigenvalues and eigenvectors, so feel free to skip this part for now.)
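If you'd like a concrete glimpse anyway, here is a minimal check (assuming NumPy) on the rotation matrix from Example 1: its eigenvalues come out as $e^{\pm i\pi/3}$, which indeed sit on the unit circle.

```python
import numpy as np

# Eigenvalues of the 60-degree rotation from Example 1
R = np.array([[0.5, -np.sqrt(3) / 2],
              [np.sqrt(3) / 2, 0.5]])
eigvals = np.linalg.eigvals(R)

print(eigvals)                             # roughly 0.5 +/- 0.866j, i.e. e^{+/- i pi/3}
print(np.allclose(np.abs(eigvals), 1.0))   # True: every |lambda| = 1
```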
The picture emerging is clear: unitary operators are the "perfect symmetries" of Hilbert spaces. They preserve all geometric structure while potentially rearranging everything.
How Do We Prove an Operator Is Unitary?
Now that we understand what unitary operators are, how do we actually prove that a specific operator is unitary? It turns out there are exactly two properties we need to verify:
- Isometry — it preserves inner products: $\langle Ux, Uy \rangle = \langle x, y \rangle$, which implies $\|Ux\| = \|x\|$ for all $x$.
- Surjectivity — its range covers the whole Hilbert space.
An operator is unitary if and only if it's both an isometry and onto (surjective). Different operators call for different mathematical tools to prove each property, but this two-step framework always works.
The Fourier Transform: A Complete Worked Example
Let's see this framework in action with the continuous Fourier transform on $L^2(\mathbb{R})$. We map a time-domain function $f(t)$ to its frequency representation:
$$\hat{f}(\xi) = \int_{-\infty}^\infty f(t)\, e^{-2\pi i \xi t} \, dt$$
To prove the Fourier transform $\mathcal{F}$ is unitary, we need to show:
- Isometry: $$\langle \mathcal{F} f, \mathcal{F} g \rangle_{L^2} = \langle f, g \rangle_{L^2}$$
- Surjectivity: $$\mathcal{F}(L^2) = L^2$$
In other words, the total energy of a signal is preserved when moving between time and frequency domains, and every $L^2$ frequency-domain function comes from exactly one $L^2$ time-domain function.
Step 1: Proving Isometry (Plancherel's Theorem)
For the Fourier transform, isometry is established by Plancherel's theorem:
$$\int_{-\infty}^\infty |f(t)|^2 \, dt = \int_{-\infty}^\infty |\hat{f}(\xi)|^2 \, d\xi$$
This is exactly the statement
$$\|\mathcal{F} f\|_{L^2} = \|f\|_{L^2}$$
—the Fourier transform preserves the $L^2$ norm perfectly.
(For Fourier series on the circle, this corresponds to Parseval's theorem, which says the sum of squared Fourier coefficients equals the squared $L^2$ norm over one period.)
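Here is a rough numerical sketch of Plancherel (assuming NumPy; the grid sizes and the Gaussian test function are arbitrary choices, not part of the theorem): approximate the transform of $f(t) = e^{-\pi t^2}$ on a grid and compare the energy on both sides.

```python
import numpy as np

# Rough check of Plancherel for the Gaussian f(t) = exp(-pi t^2).
t = np.linspace(-8, 8, 4001)
dt = t[1] - t[0]
f = np.exp(-np.pi * t**2)

xi = np.linspace(-4, 4, 401)
dxi = xi[1] - xi[0]
# f_hat(xi) = integral of f(t) exp(-2 pi i xi t) dt, approximated on the grid
f_hat = (f * np.exp(-2j * np.pi * np.outer(xi, t))).sum(axis=1) * dt

energy_time = np.sum(np.abs(f) ** 2) * dt
energy_freq = np.sum(np.abs(f_hat) ** 2) * dxi
print(energy_time, energy_freq)   # both close to 1/sqrt(2) ~ 0.7071
```

With the $2\pi$-in-the-exponent convention used above, this Gaussian is its own Fourier transform, so both energies come out near $1/\sqrt{2}$.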
Step 2: Proving Surjectivity (Fourier Inversion Theorem)
The Fourier inversion theorem provides the key to surjectivity:
$$f(t) = \int_{-\infty}^\infty \hat{f}(\xi) e^{2\pi i \xi t} d\xi$$
This formula tells us that if a function $\hat{f}$ came from some $f$ via the Fourier transform, we can reconstruct $f$ completely.
In the $L^2$ setting, this extends (via density arguments) to all square-integrable functions, ensuring that $\mathcal{F}$ is onto.
Important note: The inversion theorem handles surjectivity, while Plancherel handles norm preservation. These are separate mathematical results that together give us unitarity!
Step 3: The Unitary Conclusion
Combining our results:
- From Plancherel's theorem: $$\mathcal{F}^\dagger \mathcal{F} = I \quad \text{(isometry)}$$
- From Fourier inversion theorem: $$\mathcal{F} \mathcal{F}^\dagger = I \quad \text{(surjectivity)}$$
Together:
$$\mathcal{F} \text{ is unitary on } L^2(\mathbb{R}).$$
This two-step framework (prove isometry, then prove surjectivity) works for any candidate unitary operator, though the specific theorems you'll need vary from case to case.
The "Circus": Same Act, Different Venues
Let me show you how this same two-step dance plays out across completely different mathematical contexts. It's like a travelling circus; the same core performance (isometry + surjectivity), but each venue requires different mathematical tools.
Case 1: Finite-Dimensional Matrices
Space: $\mathbb{C}^n$ with standard inner product
Operator: $U$ given by an $n \times n$ complex matrix
- Isometry tool: Through direct computation, check $U^\dagger U = I$ (columns are orthonormal).
- Surjectivity tool: In finite dimensions, isometry $\implies$ surjective automatically!
Case 2: Discrete Fourier Transform (DFT)
Space: $\mathbb{C}^N$ ($N$ equally spaced samples)
Operator:
$$(Uf)_k = \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} f_j e^{-2\pi i jk / N}$$
- Isometry tool: Discrete Parseval identity:
$$\sum_{j=0}^{N-1} |f_j|^2 = \sum_{k=0}^{N-1} |(Uf)_k|^2$$
- Surjectivity tool: Explicit inverse formula (finite Fourier inversion); both properties are easy to check numerically, as in the sketch below.
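Here is the promised numerical check (a sketch assuming NumPy; $N = 8$ is an arbitrary choice): build the DFT matrix explicitly and verify $U^\dagger U = I$, Parseval, and that $U^\dagger$ undoes $U$.

```python
import numpy as np

N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N))
# The unitary DFT matrix: entries exp(-2 pi i j k / N) / sqrt(N)
U = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)

# Isometry: U^dagger U = I (the columns are orthonormal)
print(np.allclose(U.conj().T @ U, np.eye(N)))                          # True

# Discrete Parseval: the energy of a random signal is preserved
rng = np.random.default_rng(2)
f = rng.normal(size=N) + 1j * rng.normal(size=N)
print(np.isclose(np.sum(np.abs(f) ** 2), np.sum(np.abs(U @ f) ** 2)))  # True

# Surjectivity via the explicit inverse: U^dagger undoes U
print(np.allclose(U.conj().T @ (U @ f), f))                            # True
```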
Case 3: Continuous Wavelet Transform (CWT)
Space: $L^2(\mathbb{R})$
Operator:
$$W_\psi f(a,b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} f(t)\, \overline{\psi\!\left(\frac{t-b}{a}\right)}\, dt$$
- Isometry tool: Wavelet admissibility condition
$$C_\psi = \int_{0}^{\infty} \frac{|\hat{\psi}(\xi)|^2}{\xi} \, d\xi < \infty$$
leads to a Parseval-like identity.
- Surjectivity tool: Wavelet reconstruction formula
$$f(t) = \frac{1}{C_\psi} \int_{\mathbb{R}^*} \int_{\mathbb{R}} W_\psi f(a,b)\, \psi_{a,b}(t)\, \frac{da\, db}{a^2}$$
Case 4: Quantum Time Evolution
Space: Hilbert space of quantum states, $L^2(\mathbb{R}^3)$ or similar
Operator:
$$U(t) = e^{-iHt/\hbar}$$
where $H$ is a self-adjoint Hamiltonian.
- Isometry tool: Stone's theorem + self-adjointness of $H$ $\implies$ $U(t)$ is unitary.
- Surjectivity tool: One-parameter unitary group property
$$U(t)^{-1} = U(-t)$$
ensures surjectivity.
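A minimal finite-dimensional sketch of this (assuming NumPy and SciPy, with $\hbar = 1$ and a random $4 \times 4$ "Hamiltonian" standing in for the real thing): exponentiating $-iHt$ for a self-adjoint $H$ produces a unitary, and $U(t)^{-1} = U(-t)$ holds.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian" on a 4-dimensional toy state space
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2          # self-adjoint by construction

t = 1.3                           # arbitrary time, in units where hbar = 1
U = expm(-1j * H * t)             # U(t) = exp(-i H t)

print(np.allclose(U.conj().T @ U, np.eye(4)))            # unitary: U^dagger U = I
print(np.allclose(np.linalg.inv(U), expm(1j * H * t)))   # U(t)^{-1} = U(-t)
```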
The Fundamental Theorem for Unitary Operators
Now we reach a key result that encapsulates everything we've learned. For any unitary operator $U$ on a Hilbert space $\mathcal{H}$:
$$\ker(U^\dagger) = \{0\}, \quad \text{range}(U^\dagger) = \mathcal{H}$$
$$(\ker U)^\perp = \text{range}(U^\dagger), \quad (\text{range}\, U)^\perp = \ker(U^\dagger)$$
Background note: If you're not familiar with the concepts of kernel ($\ker$) and range of linear operators, we covered these fundamental ideas in detail in our previous article on adjoint operators.
Since $(\ker U)^\perp = \text{range}(U^\dagger) = \mathcal{H}$ and $(\text{range}\, U)^\perp = \ker(U^\dagger) = \{0\}$, this theorem guarantees that unitary operators are:
- Injective: no vector collapses to zero
- Surjective: they cover the entire space
In essence, unitary operators are the isomorphisms of Hilbert space geometry. They can rearrange everything, but they preserve the fundamental relationships between vectors.
My Favourite Theorem
Here, the geometric intuition becomes mathematically precise with this fundamental characterization that I absolutely love:
Theorem (My personal favourite):
Let $\mathcal{H}$ be a Hilbert space, and let
$$\mathcal{B} = \{ e_i \mid i \in I \}, \quad \mathcal{B}' = \{ f_i \mid i \in I \}$$
be two orthonormal bases of $\mathcal{H}$. Here $I$ is the set that indexes the orthonormal basis elements of the Hilbert space $\mathcal{H}$.
Then there exists a unique unitary operator $U : \mathcal{H} \to \mathcal{H}$ such that
$$U e_i = f_i \quad \forall i \in I$$
Moreover, $U$ preserves inner products:
$$\langle Ux, Uy \rangle = \langle x, y \rangle \quad \forall x,y \in \mathcal{H}$$
Conversely, if $U$ is a unitary operator on $\mathcal{H}$, then
$$\mathcal{B}' = \{U e_i\}_{i \in I}$$
is an orthonormal basis whenever $\mathcal{B} = \{e_i\}_{i \in I}$ is one.
Additional note on $I$ if you are looking for rigor:
- If $\mathcal{H}$ is finite-dimensional of dimension $n$, then $I$ can be taken as $\{1,2,\dots,n\}$.
- If $\mathcal{H}$ is infinite-dimensional and separable (like $\ell^2$ or $L^2(\mathbb{R})$), then $I$ can be taken as $\mathbb{N}$.
- If $\mathcal{H}$ is non-separable, then $I$ may be a larger set (even uncountable).
Connecting to Our Finite-Dimensional Insight
This theorem is the perfect infinite-dimensional generalization of our earlier discovery! Remember how we showed that finite-dimensional unitary matrices have orthonormal columns? Here's the beautiful connection:
Finite case: The columns of a unitary matrix $U$ are orthonormal vectors $\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_n$.
Infinite case: A unitary operator $U$ maps any orthonormal basis $\{e_i\}$ to another orthonormal basis $\{U e_i\}$.
In the finite-dimensional setting, if $\{e_1, e_2, \ldots, e_n\}$ is the standard basis, then the columns of the matrix representation of $U$ are exactly $\{U e_1, U e_2, \ldots, U e_n\}$, which our theorem guarantees form an orthonormal basis!
So our finite-dimensional insight about orthonormal columns is literally a special case of this general theorem about orthonormal basis transformations.
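In finite dimensions we can even write the unitary of the theorem down explicitly: if the two orthonormal bases are stored as the columns of matrices $E$ and $F$, then $U = F E^\dagger$ is unitary and sends $e_i \mapsto f_i$. A small sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Two orthonormal bases of C^3, stored as the columns of E and F
E, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
F, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# The unique unitary sending e_i -> f_i is U = F E^dagger
U = F @ E.conj().T

print(np.allclose(U.conj().T @ U, np.eye(n)))   # U is unitary
print(np.allclose(U @ E, F))                    # U e_i = f_i for every basis vector
```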
Why This Theorem is Pure Mathematical Beauty
This theorem reveals the deep truth: every unitary operator is exactly a change between orthonormal coordinate systems. When we apply a unitary transformation, we're not distorting the space. We're simply viewing the same mathematical objects through a different orthonormal lens.
A Showcase Proof
Now for the payoff: a beautiful application that showcases the power of our unitary operator theory. We will see how this works for both the discrete (Fourier series) and continuous (Fourier transform) cases.
We are going to show that the underlying systems, the complex exponentials behind Fourier series and the plane waves behind the Fourier transform, are complete.
Case 1: Fourier Series
Proof via Unitary Theory:
Let $\mathcal{F}_s$ be the Fourier series operator that maps $f$ to its sequence of Fourier coefficients $\{\hat{f}(n)\}_{n \in \mathbb{Z}}$.
By Parseval's identity, $\mathcal{F}_s$ preserves the $L^2$ norm:
$$\sum_{n} |\hat{f}(n)|^2 = \|f\|_2^2$$
Combined with the inverse relationship, this makes $\mathcal{F}_s$ unitary.
Apply our fundamental theorem:
- $\ker(\mathcal{F}_s) = \{0\}$ means no function is orthogonal to all exponentials except the zero function. This is precisely the completeness property.
- $\text{range}(\mathcal{F}_s) = \ell^2(\mathbb{Z})$ means every square-summable sequence comes from some function. This gives us spanning.
The orthonormality is built into the definition of the Fourier coefficients.
Therefore, the exponentials form a complete orthonormal basis for $L^2(\mathbb{T})$.
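As a quick numerical sanity check of Parseval (a sketch assuming NumPy; the band-limited test signal is an arbitrary choice): for a trigonometric polynomial, the FFT recovers the Fourier coefficients exactly, and both sides of Parseval agree.

```python
import numpy as np

# Parseval for Fourier series on a band-limited signal:
# f(t) = cos(2 pi t) + 0.5 sin(6 pi t) on [0, 1).
N = 64
t = np.arange(N) / N
f = np.cos(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)

# For a trigonometric polynomial with fewer than N/2 harmonics,
# the FFT divided by N recovers the Fourier coefficients exactly.
coeffs = np.fft.fft(f) / N

sum_sq_coeffs = np.sum(np.abs(coeffs) ** 2)   # sum over n of |f_hat(n)|^2
mean_sq_signal = np.mean(np.abs(f) ** 2)      # integral over one period of |f(t)|^2
print(sum_sq_coeffs, mean_sq_signal)          # both equal 0.625
```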
Case 2: Fourier Transform on the Real Line
Proof via Unitary Theory:
Let $\mathcal{F}$ be the Fourier transform:
$$(\mathcal{F}f)(\xi) = \int_{\mathbb{R}} f(x)\, e^{-2\pi i x \xi}\, dx$$
By Plancherel's theorem, $\mathcal{F}$ preserves the $L^2$ norm:
$$\|\mathcal{F}f\|_2 = \|f\|_2$$
Combined with the inverse transform, this makes $\mathcal{F}$ unitary.
Apply our fundamental theorem:
- $\ker(\mathcal{F}) = \{0\}$ means no function has Fourier transform equal to zero except the zero function itself. Equivalently, if
$$\int_{\mathbb{R}} f(x) e^{-2\pi i x \xi} \, dx = 0 \quad \forall \xi \in \mathbb{R}$$
then $f = 0$. This is exactly the completeness of the plane wave system $\{e^{2\pi i \xi x}\}_{\xi \in \mathbb{R}}$.
- $\text{range}(\mathcal{F}) = L^2(\mathbb{R})$ means every square-integrable function arises as the Fourier transform of some other function.
Therefore, the plane waves form a complete system for $L^2(\mathbb{R})$.
This is why the unitary framework is so powerful: without it, proving completeness of the Fourier transform is notoriously difficult (at least in the ways I have tried and seen)! Traditional approaches often require deep harmonic analysis results, showing that Schwartz functions are dense in $L^2$, or sophisticated applications of Stone-Weierstrass. (Honestly, I have never fully gone through that proof. I just know it exists.)
Bridge to the Spectral Theorem
We’ve seen that unitary operators are “perfect changes of basis.” They rearrange vectors while preserving all geometric relationships. This already hints at something profound: many important operators can be understood by finding the right unitary transformation that reveals their structure.
This idea sits at the heart of the spectral theorem, which tells us that normal operators (including self-adjoint and unitary operators) can be diagonalized (don't worry about this word yet) by a unitary change of basis. In other words, every normal operator becomes as simple as multiplication by a function once you step into the right coordinate system.
It’s like having a master key that unlocks the hidden geometric unity running through mathematics. Whenever I see a transformation that preserves norms and inner products, I can’t help but think: “Aha! This is just a rotation in some Hilbert space.”
But let me pause here and share the bigger picture: I’m not writing these posts just to appreciate some abstract beauty in mathematics. There’s a deeper mission. Once we finish laying down a handful of crucial fundamentals, the real grammar that underlies mathematics, we will suddenly find that the mathematics of physics becomes almost natural. Equations in quantum mechanics, wave analysis, relativity, they will stop feeling like foreign scripts and instead read like poetry in a language you already speak.
That’s why this episode is titled “Grammar of Reality.” We’re building the alphabet and syntax of the universe itself. There are a few more topics left in this foundational arc, but I promise to keep the same Ivethium style: intuitive and always tied to the bigger wonder of why we care.