Why Signals are Vectors

The Thing They Never Explained in Engineering School
In my second year of engineering, during Digital Signal Processing class, the professor suddenly announced:
“Signals are vectors.”
A few confused faces around the room. Someone brave enough to speak up asked: “How?”
The professor rattled off: “Hilbert spaces, inner products, linear operators, basis functions…”
Then he quickly moved on to convolution, Fourier transforms, and filter design.
The class? We just shrugged and copied down the formulas. After all, we had exams to pass. Who has time to understand the beautiful intuition when you can just memorize the math?
Nobody really bothered to dig deeper. The formulas worked. We could solve problems. We could pass tests.
Sound familiar?
Here’s What They Should Have Told You
This isn’t just some arbitrary mathematical abstraction. Understanding why signals are vectors is the master key that unlocks everything in digital signal processing.
Once you truly get this, suddenly:
👉 Fourier transforms stop being mysterious formulas and become intuitive dot products
👉 Convolution makes sense as a form of matrix multiplication
👉 Filter design becomes geometric intuition in high-dimensional spaces
👉 Orthogonal basis functions aren’t just math jargon — they’re the reason we can compress audio and images
This is the general principle that connects all the seemingly complex DSP concepts you’ve been struggling with.
🌱 Let’s Start Where You’re Comfortable
When you first heard the word vector, you probably thought of high school physics class.
You know the scene:
- Draw an arrow.
- Label it “force” or “velocity.”
- Break it into x and y components.
- Done!
Vectors = arrows. That was the whole story.
But really — that was just chapter one.
But Math is Never Content with Just 2D 🤔
Mathematicians looked at those arrows and said:
“Why stop at 2D? Why not 3D? Or 10D? Or n-dimensions?”
And sure enough — it all still worked!
- Vector addition.
- Scalar multiplication.
- Dot products.
- Projections.
- Orthogonality.
✅ Same rules.
✅ Same intuition.
✅ Just with more components.
First Principles: What Makes Something a Vector?
This is the important bit.
A vector in math isn’t defined by how it looks. It’s defined by how it behaves. There are 8 rules (axioms) that define a vector space.
If your mathematical objects obey these rules? 👉 You’ve got a vector space!
💡 Signals Pass the Test
Let’s actually check signals against the vector space axioms. For something to be a vector space, it needs to satisfy these 8 fundamental rules:
Axiom 1: Closure under Addition
Rule: If you add two vectors, you get another vector in the same space.
For signals: Add two audio signals → you get another audio signal.
Example: Mix two music tracks; the result is still an audio signal.
Axiom 2: Addition is Commutative
Rule: u + v = v + u
For signals: signal₁(t) + signal₂(t) = signal₂(t) + signal₁(t)
Example: Playing two sounds simultaneously gives the same result regardless of order.
Axiom 3: Addition is Associative
Rule: (u + v) + w = u + (v + w)
For signals: (s₁ + s₂) + s₃ = s₁ + (s₂ + s₃)
Example: Adding three signals together; it doesn’t matter how you group them.
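Want to see these three addition axioms hold for real? Here’s a minimal sketch, assuming NumPy and the usual convention of storing a sampled signal as an array of amplitude values (the tones and sample rate are just illustrative choices):

```python
import numpy as np

# Treat a sampled signal as an array of amplitude values.
t = np.linspace(0, 1, 8000)              # 1 second at (roughly) 8 kHz
s1 = np.sin(2 * np.pi * 440 * t)         # a 440 Hz tone
s2 = np.sin(2 * np.pi * 660 * t)         # a 660 Hz tone
s3 = 0.5 * np.sin(2 * np.pi * 220 * t)   # a quieter 220 Hz tone

# Axiom 1 (closure): mixing two signals yields another signal of the same length.
mix = s1 + s2
assert mix.shape == s1.shape

# Axiom 2 (commutativity): the order of mixing does not matter.
assert np.allclose(s1 + s2, s2 + s1)

# Axiom 3 (associativity): the grouping does not matter.
assert np.allclose((s1 + s2) + s3, s1 + (s2 + s3))
```

If the asserts pass silently, the addition axioms hold for these sampled signals (up to floating-point precision).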
Axiom 4: Zero Vector Exists
Rule: There’s a special vector 0 such that v + 0 = v
For signals: The zero signal is silence. Add it to any signal and you get the original signal back.
Example: Your music + silence = your music.
Axiom 5: Additive Inverse Exists
Rule: For every vector v, there’s a -v such that v + (-v) = 0
For signals: For every signal, there’s its negative (inverted amplitude) that cancels it out.
Example: Active noise cancellation, where your headphones play the “negative” of the outside noise.
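Here’s the same kind of quick check for the zero signal and the additive inverse. The 60 Hz “hum” is just a stand-in; real noise cancellation is much messier than this idealized sketch:

```python
import numpy as np

t = np.linspace(0, 1, 8000)
music = np.sin(2 * np.pi * 440 * t)
noise = 0.3 * np.sin(2 * np.pi * 60 * t)   # a 60 Hz hum standing in for noise

# Axiom 4 (zero vector): the zero signal (silence) changes nothing.
silence = np.zeros_like(music)
assert np.allclose(music + silence, music)

# Axiom 5 (additive inverse): flipping the amplitude cancels the signal.
# Idealized noise cancellation: play the exact negative of the noise.
anti_noise = -noise
assert np.allclose((music + noise) + anti_noise, music)
```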
Axiom 6: Closure under Scalar Multiplication
Rule: Multiply a vector by a scalar, get another vector in the same space.
For signals: Scale a signal’s amplitude → you still get a signal.
Example: Turn the volume up or down; you still have an audio signal.
Axiom 7: Scalar Multiplication is Associative
Rule: a(bv) = (ab)v (and scaling by 1 changes nothing: 1v = v)
For signals: Scaling by 2, then by 3 = scaling by 6 directly.
Example: Double the volume, then triple it = multiply by 6 directly.
Axiom 8: Distributive Properties
Rule: a(u + v) = au + av and (a + b)v = av + bv
For signals: Scaling a sum of signals = the sum of the scaled signals.
Example: Amplifying two mixed tracks = mixing two amplified tracks.
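And a sketch of the three scaling axioms, under the same NumPy-arrays-as-signals assumption:

```python
import numpy as np

t = np.linspace(0, 1, 8000)
s = np.sin(2 * np.pi * 440 * t)
u = np.sin(2 * np.pi * 660 * t)

# Axiom 6 (closure under scaling): a volume change still gives a signal.
louder = 2.0 * s
assert louder.shape == s.shape

# Axiom 7 (scalar associativity): double, then triple = multiply by 6.
assert np.allclose(3.0 * (2.0 * s), 6.0 * s)
# ...and scaling by 1 leaves the signal unchanged (the identity rule).
assert np.allclose(1.0 * s, s)

# Axiom 8 (distributivity): amplifying a mix = mixing the amplified tracks.
assert np.allclose(2.0 * (s + u), 2.0 * s + 2.0 * u)
assert np.allclose((2.0 + 3.0) * s, 2.0 * s + 3.0 * s)
```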
Why Does This Matter for DSP?
Because this lets us use all the powerful tools of linear algebra:
- Inner products (dot products) → This is what Fourier transforms really are (see the sketch after this list)
- Projections onto basis functions → This is how we decompose signals
- Orthogonality → This is why we can reconstruct signals perfectly
- Decompositions → This is the math behind compression algorithms
- Linear transformations → This is how filters actually work
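As a teaser for that first point, here’s a sketch showing that a single DFT coefficient is literally a dot product between the signal and a complex sinusoid. It assumes NumPy; the signal and bin index are arbitrary illustrative choices:

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.sin(2 * np.pi * 5 * n / N)          # a test signal with energy at bin 5

# Bin k of the DFT is the inner product of x with a complex sinusoid.
k = 5
basis = np.exp(-2j * np.pi * k * n / N)    # the k-th Fourier "basis vector"
coeff = np.dot(x, basis)                   # just a dot product

# It matches what the library FFT computes for that bin.
assert np.allclose(coeff, np.fft.fft(x)[k])
```

The full FFT simply computes this same dot product for every bin, just very efficiently.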
Everything you loved about vectors in 2D or 3D?
✅ Still works.
✅ Just in much richer spaces.
A Little Bit About the History
I love history. It helps me feel like I’m not alone in trying to figure this stuff out. I want to know that there were people like me who started with basic questions, and that when they hit limits, someone else came along to resolve those issues and push the understanding forward. I don’t want to feel like I’m being left behind in math because I didn’t just memorize formulas. I want to see how it all fits together.
If you’re like me, this part is for you.
Hermann Grassmann (1844)
Let’s rewind.
People first thought of vectors as arrows in space. Great for 3D physics.
But Hermann Grassmann, a Prussian schoolteacher, asked:
“Why limit ourselves to 3 dimensions? Let’s make a system that works for any dimension.”
In 1844, he published Die Ausdehnungslehre, introducing linear combinations and generalized dimension.
Grassmann said:
“You can add and scale anything if you follow the rules.”
Sadly, people thought his writing was… let’s say dense. Historians have called it “unreadable.” The result? Most ignored him at the time.
Giuseppe Peano (1888)
Fast forward a few decades.
Giuseppe Peano, an Italian mathematician, took Grassmann’s messy-but-brilliant idea and said:
“Let’s define this properly.”
In 1888, Peano gave the first modern axiomatic definition of a vector space over a field.
He laid down the exact rules (the 8 axioms we just saw with signals):
- Addition rules: Commutative, associative, has zero element, has inverses
- Scaling rules: Associative, distributive over vector addition and scalar addition
- Closure: Operations stay within the space
Peano’s genius insight:
“I don’t care what your objects look like. If they follow these 8 rules, they’re vectors.”
If your set and operations obey those rules? It’s a vector space!
No need to look like arrows. Just obey the rules.
David Hilbert Enters the Chat (Early 1900s)
Ok, Peano nailed finite-dimensional spaces.
But what about functions? Signals? Waves?
David Hilbert saw the potential.
“Wait. Functions are like infinite-dimensional vectors. Let’s make that rigorous.”
He created the concept of Hilbert spaces:
⭐ Infinite-dimensional analogues of Euclidean space.
⭐ Spaces of functions with inner products.
⭐ Complete, well-behaved mathematical structures.
Thanks to Hilbert:
✅ We can treat signals (like audio) as vectors in function spaces.
✅ We can talk about orthogonality, projections, and Fourier expansions for functions (see the quick check after this list).
✅ We can handle quantum wavefunctions in the same mathematical language.
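To make that orthogonality claim slightly less abstract, here’s a tiny numerical check: approximate the function-space inner product ⟨f, g⟩ = ∫ f(t)g(t) dt by a Riemann sum over samples (NumPy again; the two sinusoids are arbitrary picks):

```python
import numpy as np

# Approximate the function-space inner product <f, g> = integral of f(t) g(t) dt
# with a Riemann sum over uniformly spaced samples.
t, dt = np.linspace(0, 1, 10_000, retstep=True)
f = np.sin(2 * np.pi * t)        # sin(2*pi*t)
g = np.sin(2 * np.pi * 2 * t)    # sin(4*pi*t)

inner = np.sum(f * g) * dt
print(inner)                     # ~0: the two sinusoids are orthogonal on [0, 1]
assert abs(inner) < 1e-6
```

An inner product of (approximately) zero is exactly what “orthogonal” means in a function space.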
Why This Matters
This is why your squiggly audio waveform, your image data, or your quantum state is, deep down, a vector.
- Because it obeys the same axioms.
- Because it fits in the same framework.
- Because you can use all the same powerful math.
So the next time someone says, “Signals are vectors,” they’re not being sloppy.
They’re standing on the shoulders of Grassmann, Peano, and Hilbert, who spent decades making sure we could say exactly that.
What’s Coming Next?
This post is just the first step in expanding our intuition. In this blog series, we’ll go from first principles to really understanding:
- Why Fourier transforms are dot products (and why this makes them so powerful)
- Why convolution is like matrix multiplication (and how this connects to filtering)
- How quantum mechanics is linear algebra in infinite-dimensional spaces
- Why orthogonal basis functions are the secret to compression
All built on the simple, powerful idea:
If it obeys the vector space rules, it is a vector. It doesn’t matter how weird or unintuitive it may look.
And once you truly understand this, all those “complex” DSP concepts become intuitive geometric operations in high-dimensional spaces.
Stay tuned!