When Vectors Go Wild

Vectors aren’t just arrows—they’re a universal language defined by simple rules of addition and scaling. Once you strip away the visual baggage of direction and length, you realize that polynomials, matrices, functions, operators, sequences—even quantum states—are all vectors too. Size and angle? Optional add-ons you can define as needed. This shift in thinking doesn’t just expand your idea of what a vector is—it unlocks powerful, unified tools across math, physics, and engineering. The moment you stop seeing vectors as shapes and start seeing them as structures, the whole mathematical universe starts to make sense.

From Arrows...

In our last post, we discovered why signals are vectors. Later I realized that it didn’t quite deliver the punch needed to knock some of you out of your geometric addiction. So let me try harder this time.

Let’s rewind to where it all started: we all thought vectors were arrows.
You did. I did. Everyone did.
Simple 2D arrows. 3D arrows if you were feeling fancy.

But then we should ask: "What makes a vector... a vector?"

Breaking the Euclidean Geometric Cage

This question led to a revolutionary insight: geometric intuition was holding us back.

When we think "vector = arrow," we automatically think:

  • It has a direction
  • It has a length
  • It lives in 2D or 3D space

But mathematicians realized these were just accidents of our visual representation, not essential properties.

The 8 Axioms: The Real Definition

As we covered in the last post, they distilled vectors down to 8 fundamental rules about addition and scaling. That's it.

No mention of:

  • ❌ Direction
  • ❌ Length
  • ❌ Geometric space
  • ❌ Visual representation
  • ❌ Size (this is crucial - we'll come back to this!)

Just pure algebraic behavior. Seriously, it's just that!

Notice what's missing? Nothing in those 8 axioms talks about the "size" of a vector. You don't need to measure anything. You don't need to know how "big" a vector is. Size is completely irrelevant to being a vector.

The 8 Vector Space Axioms:

  1. Closure under addition: $u + v$ is in the vector space
  2. Commutative: $u + v = v + u$
  3. Associative: $(u + v) + w = u + (v + w)$
  4. Zero vector exists: There's a $\mathbf{0}$ where $v + \mathbf{0} = v$
  5. Additive inverse: For every $v$, there's $-v$ where $v + (-v) = \mathbf{0}$
  6. Closure under scaling: $c \cdot v$ is in the vector space
  7. Scalar associative: $a(b \cdot v) = (ab) \cdot v$
  8. Distributive: $c(u + v) = cu + cv$ and $(a + b)v = av + bv$
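
If you like seeing rules verified rather than just stated, here's a tiny numerical spot-check of a few of these axioms (a sketch in Python using plain number-tuples as the vectors; the helper names are mine, purely for illustration):

```python
import random

# Vectors as plain tuples of numbers; add and scale are defined component-wise.
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

def close(u, v, tol=1e-9):
    return all(abs(a - b) < tol for a, b in zip(u, v))

random.seed(0)
u, v, w = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(3)]
a, b = 2.0, -3.5
zero = (0.0, 0.0, 0.0)

assert close(add(u, v), add(v, u))                                 # commutative
assert close(add(add(u, v), w), add(u, add(v, w)))                 # associative
assert close(add(v, zero), v)                                      # zero vector
assert close(add(v, scale(-1, v)), zero)                           # additive inverse
assert close(scale(a, scale(b, v)), scale(a * b, v))               # scalar associative
assert close(scale(a, add(u, v)), add(scale(a, u), scale(a, v)))   # distributive
print("Axiom spot-checks passed.")
```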

Decoupling Size and Vector-ness

Here's something crucial that blew my mind: you don't need "length" to be a vector.

Length (or "norm" as we covered in our previous post) is an additional structure you can add to a vector space. But it's optional!

Think about it:

  • You can add vectors without measuring their length
  • You can scale vectors (multiply by numbers) without needing to know their "size"
  • The 8 axioms work perfectly fine without any concept of length or magnitude

What is scaling anyway? Scaling a vector by a number means multiplying each component by that number. For a 2D vector [3, 4], scaling by 2 gives [6, 8]. We're not measuring how "big" the original vector was - we're just applying a uniform multiplication rule to all its components.
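
In code, that rule really is nothing more than component-wise multiplication (a trivial sketch, just to make the point):

```python
v = [3, 4]
scaled = [2 * x for x in v]
print(scaled)  # [6, 8] - no notion of "length" was needed anywhere
```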

This decoupling opened the floodgates. If you don't need arrows, direction, or even length... what CAN'T be a vector?

Welcome to the Vector Zoo

Let me show you a few examples to give you a taste of how wild this can get.

The Space of All Polynomials

The vectors: All polynomials like 3x² + 2x - 1, x⁵ - 7x + 4, etc.

Addition: (3x² + 2x) + (x² - x) = 4x² + x ✅
Scaling: 5 × (x² + 2x) = 5x² + 10x ✅

(We could check all 8 axioms here, but I'll keep this brief - they all work perfectly!)
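
If you want to see this with your own eyes, store each polynomial as its list of coefficients and let the component-wise rules do the work. A quick sketch using NumPy's polynomial helper (coefficients are listed from the constant term up):

```python
from numpy.polynomial import Polynomial as P

p = P([0, 2, 3])    # 3x^2 + 2x
q = P([0, -1, 1])   # x^2 - x

print((p + q).coef)             # [0. 1. 4.]   ->  4x^2 + x
print((5 * P([0, 2, 1])).coef)  # [0. 10. 5.]  ->  5x^2 + 10x
```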

Wait, what's the "length" of x² + 3x - 2?

Pause here and think about it...

How do you even define a "size" for a polynomial? The question doesn't even seem to make intuitive sense!

But it's still a perfectly valid vector space.

The Space of All Matrices

The vectors: Matrices of a fixed size - 2×2, 3×3, whatever shape you like (just pick one shape per space, since you can only add matrices of the same shape)

$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 3 \\ 5 & 4 \end{bmatrix}$$

Scaling: Multiply every entry by a number.

All 8 vector axioms work perfectly:

  • Commutative addition: A + B = B + A ✅
  • Zero matrix exists: Add the all-zeros matrix to any matrix ✅
  • Additive inverse: For every matrix A, there's -A that cancels it ✅
  • Distributive: 3(A + B) = 3A + 3B ✅
  • (And the other 4 axioms...)

Matrices are vectors! Wild, right?
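
Concretely, with NumPy (a sketch - any array library behaves the same way):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [2, 0]])

print(A + B)     # [[1 3] [5 4]] - entry-wise addition, just like coordinates
print(3 * A)     # every entry scaled by 3

print(np.array_equal(A + B, B + A))                 # True: commutative
print(np.array_equal(3 * (A + B), 3 * A + 3 * B))   # True: distributive
```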

The Space of All Functions

The vectors: sin(x), cos(x), e^x, ln(x), your favorite function (all defined on a common domain)

Addition: sin(x) + cos(x) = another function ✅
Scaling: 3 × sin(x) = 3sin(x) ✅

All vector axioms hold beautifully:

  • Zero function exists: $f(x) = 0$ for all $x$ (adding it to any function leaves it unchanged) ✅
  • Additive inverse: For $f(x)$, there's $-f(x)$ that cancels it out ✅
  • Associative: $(\sin(x) + \cos(x)) + e^x = \sin(x) + (\cos(x) + e^x)$ ✅
  • Distributive: $2(\sin(x) + \cos(x)) = 2\sin(x) + 2\cos(x)$ ✅

Every function you've ever worked with is a vector in this space.
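
You can even make "functions are vectors" executable: represent each vector as a Python function and define addition and scaling pointwise. A minimal sketch (the combinator names are mine):

```python
import math

def f_add(f, g):
    return lambda x: f(x) + g(x)     # (f + g)(x) = f(x) + g(x)

def f_scale(c, f):
    return lambda x: c * f(x)        # (c*f)(x) = c * f(x)

h = f_add(math.sin, math.cos)        # the "vector" sin + cos
h3 = f_scale(3, math.sin)            # the "vector" 3*sin

x = 1.2
print(h(x), math.sin(x) + math.cos(x))   # identical: addition is pointwise
print(h3(x), 3 * math.sin(x))            # identical: scaling is pointwise
```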

The Space of Differential Operators

Now we're getting really wild.

The vectors: Operators like ∇ (nabla), d/dx, ∂²/∂x², Laplacian ∇²

Wait, what? Operators can be vectors?

Let's take the gradient operator ∇ and check all 8 axioms:

1. Closure under addition: $\nabla + \frac{\partial^2}{\partial x^2} = $ another operator ✅

2. Commutative: $\nabla + \frac{\partial^2}{\partial x^2} = \frac{\partial^2}{\partial x^2} + \nabla$ ✅

3. Associative: $\left(\nabla + \frac{\partial}{\partial x}\right) + \frac{\partial^2}{\partial x^2} = \nabla + \left(\frac{\partial}{\partial x} + \frac{\partial^2}{\partial x^2}\right)$ ✅

4. Zero operator exists: The operator that does nothing to functions ✅

5. Additive inverse: For $\nabla$, there's $-\nabla$ that cancels it ✅

6. Closure under scaling: $3\nabla = $ another operator ✅

7. Scalar associative: $2(3\nabla) = (2 \times 3)\nabla = 6\nabla$ ✅

8. Distributive: $3\left(\nabla + \frac{\partial}{\partial x}\right) = 3\nabla + 3\frac{\partial}{\partial x}$ ✅

When we write (∇ + ∂²/∂x²)f, we mean: "take the gradient of f, then add the second derivative of f." (Strictly, you can only add operators whose outputs live in the same space - d/dx and d²/dx² both map functions to functions, for instance - but the idea is the same.)

Yes! The gradient operator ∇, the divergence, the curl - they're all vectors in operator space.

And here's the mind-bender: What's the "size" of the gradient operator?

The question seems absurd! ∇ isn't a number, it's not a geometric object you can measure with a ruler. It's a rule for transforming functions.

(Now, differential operators can have norms defined on them, but these are highly abstract constructions - not anything you'd intuitively call "size.")

Yet ∇ is still perfectly, rigorously a vector because it follows all 8 axioms.
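
To make "operators are vectors" tangible, here's a sketch where an operator is simply a Python function that takes a function and returns a function; adding and scaling operators is then defined exactly the way the axioms demand. (The finite-difference derivatives and helper names are my own illustrative choices, not anything standard.)

```python
import math

def D(f, h=1e-5):
    """First-derivative operator, approximated by a central difference."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def D2(f, h=1e-4):
    """Second-derivative operator, approximated by a central difference."""
    return lambda x: (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def op_add(A, B):
    """(A + B)f = Af + Bf: operator addition, defined result-wise."""
    return lambda f: (lambda x: A(f)(x) + B(f)(x))

def op_scale(c, A):
    """(cA)f = c * (Af): scaling an operator."""
    return lambda f: (lambda x: c * A(f)(x))

L = op_add(D, D2)                      # the "vector" d/dx + d^2/dx^2
print(L(math.sin)(0.5))                # approx cos(0.5) - sin(0.5)
print(op_scale(3, D)(math.sin)(0.5))   # approx 3*cos(0.5)
```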

The Space of Linear Transformations

The vectors: Rotations, reflections, projections, any linear transformation

Addition: (T + S)(v) = T(v) + S(v) - apply both transformations to the same input and add the results
Scaling: (cT)(v) = c × T(v) - make a transformation's effect "stronger" or "weaker"

The other 6 axioms follow similarly.

The transformations themselves are vectors!
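
Since any linear transformation of a finite-dimensional space can be written as a matrix, one concrete way to see "transformations add by their effects" is to check that applying the sum equals summing the applications. A small sketch with a rotation and a projection (my own choice of examples):

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],   # rotation by 45 degrees
              [np.sin(theta),  np.cos(theta)]])
P = np.array([[1.0, 0.0],                        # projection onto the x-axis
              [0.0, 0.0]])

v = np.array([2.0, 1.0])

print(np.allclose((R + P) @ v, R @ v + P @ v))   # True: (T + S)(v) = T(v) + S(v)
print(np.allclose((2 * R) @ v, 2 * (R @ v)))     # True: (cT)(v) = c * T(v)
```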

The Space of All Sequences

The vectors: (1, 2, 3, 4, ...), (1, 1/2, 1/4, 1/8, ...), any infinite sequence

Addition: Add sequences term by term
Scaling: Multiply every term by a number

All 8 axioms work beautifully:

  • Zero sequence exists: (0, 0, 0, 0, ...) - adding it to any sequence leaves it unchanged ✅
  • Additive inverse: For (1, 2, 3, ...), there's (-1, -2, -3, ...) that cancels it ✅
  • Commutative: (1, 2, 3, ...) + (0, 1, 0, ...) = (0, 1, 0, ...) + (1, 2, 3, ...) ✅
  • Associative: Grouping doesn't matter when adding three sequences ✅
  • Distributive: 3((1, 2, 3, ...) + (0, 1, 0, ...)) = (3, 6, 9, ...) + (0, 3, 0, ...) ✅
  • (Plus the other 3 axioms...)

Even infinite lists of numbers are vectors.
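
A neat way to play with this on a computer is to represent an infinite sequence by the function n ↦ aₙ and define addition and scaling term by term. A small sketch (helper names are illustrative):

```python
def seq_add(a, b):
    return lambda n: a(n) + b(n)       # add sequences term by term

def seq_scale(c, a):
    return lambda n: c * a(n)          # multiply every term by c

naturals = lambda n: n                 # 1, 2, 3, 4, ...
halves = lambda n: 0.5 ** (n - 1)      # 1, 1/2, 1/4, 1/8, ...

s = seq_add(naturals, seq_scale(3, halves))
print([s(n) for n in range(1, 6)])     # first five terms of the combined sequence
```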

The Mind-Bending Reality

Here's what's beautiful and terrifying about this:

Anything that follows the 8 axioms is a vector.

  • Polynomials? Vectors.
  • Matrices? Vectors.
  • Functions? Vectors.
  • Operators? Vectors.
  • Probability distributions? Vectors.
  • Solutions to differential equations? Vectors.
  • Quantum states? Vectors.

Why This Matters

This isn't just mathematical showing off. Each of these vector spaces gives us powerful tools:

  • Polynomial spaces → approximation theory, computer graphics
  • Function spaces → signal processing, quantum mechanics
  • Operator spaces → differential equations, physics

When you realize these are all vector spaces, you can apply the same geometric intuition and linear algebra tools to all of them.

But What About Size?

Remember, we deliberately decoupled the concept of "size" from vectors. But here's the beautiful part: you can add it back whenever you want!

As we discussed in our L2 norm post, "size" (or "norm") is additional structure you layer on top of a vector space. And you have complete freedom to define your "size" however makes sense for your problem - as long as it follows the 4 norm axioms:

  1. Non-negative: ||v|| ≥ 0 (size can't be negative)
  2. Zero only for zero vector: ||v|| = 0 if and only if v = 0
  3. Homogeneous: ||cv|| = |c| × ||v|| (scaling the vector scales the size)
  4. Triangle inequality: ||u + v|| ≤ ||u|| + ||v|| (direct path is shortest)
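
As a quick sanity check (not a proof!), here's a sketch that tests these properties numerically for the familiar Euclidean norm on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
norm = np.linalg.norm   # the Euclidean norm, as one concrete example

for _ in range(1000):
    u, v = rng.normal(size=3), rng.normal(size=3)
    c = rng.normal()
    assert norm(v) >= 0                               # non-negative
    assert np.isclose(norm(c * v), abs(c) * norm(v))  # homogeneous
    assert norm(u + v) <= norm(u) + norm(v) + 1e-12   # triangle inequality
assert norm(np.zeros(3)) == 0                         # the zero vector has size 0
print("Norm axiom spot-checks passed.")
```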

Some wild examples of norms:

  • For matrices: The Frobenius norm (√sum of squares of all entries) or nuclear norm (sum of singular values)
  • For functions: The maximum absolute value the function reaches (the L∞ norm)
  • For sequences: The sum of absolute values of all terms (L1 norm)
  • For polynomials: The largest coefficient in absolute value (coefficient norm)
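
Here's how a few of those choices look in code (a sketch; the function in the last line is only sampled on a grid, so its L∞ norm is approximate):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.norm(A, 'fro'))   # Frobenius norm: sqrt of the sum of squared entries
print(np.linalg.norm(A, 'nuc'))   # nuclear norm: sum of singular values

seq = np.array([1.0, -0.5, 0.25, -0.125])   # a (truncated) sequence
print(np.abs(seq).sum())                    # L1 norm: sum of absolute values

coeffs = np.array([1.0, 3.0, -2.0])         # x^2 + 3x - 2
print(np.abs(coeffs).max())                 # coefficient norm: largest |coefficient|

x = np.linspace(0, 2 * np.pi, 1001)
print(np.abs(np.sin(x) + 0.5 * np.cos(x)).max())   # approximate L-infinity norm
```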

Each choice of norm gives you a different notion of "distance" and "angle" in your vector space, opening up different geometric insights. There's no right or wrong here - just different perspectives on the same mathematical structure.

The power is in your hands to define what "size" means in your particular mathematical universe!

What About Dot Products and Projections?

But wait, there's more! What about those other geometric concepts we love - dot products, projections, angles between vectors?

Just like with norms, these concepts can also be added back to any vector space through something called an inner product (the generalized version of the familiar dot product).

An inner product is an operation that takes two vectors and gives you a number, following its own set of axioms:

  • Symmetry: ⟨u, v⟩ = ⟨v, u⟩ (order doesn't matter)
  • Linearity: ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩ (distributes over addition and scaling)
  • Positive definite: ⟨v, v⟩ ≥ 0, and equals 0 only when v = 0

Once you define an inner product, you automatically get:

  • Angles between vectors (through the inner product formula)
  • Projections (projecting one vector onto another)
  • Orthogonality (when ⟨u, v⟩ = 0)
  • A natural norm (||v|| = √⟨v, v⟩)

The beautiful part? You can define completely different inner products on the same vector space, giving you different notions of "angle" and "projection"!

Some wild examples:

  • For functions: $\langle f, g \rangle = \int f(x)g(x)\, dx$ (the L2 inner product we covered before)
  • For polynomials: $\langle p, q \rangle = p(1)q(1) + p(2)q(2) + p(3)q(3)$ (evaluate at specific points and sum - a genuine inner product on polynomials of degree at most 2)
  • For matrices: $\langle A, B \rangle = \text{trace}(A^T B)$ (sum of products of corresponding entries)
  • For sequences: $\langle (a_1, a_2, ...), (b_1, b_2, ...) \rangle = \sum \frac{a_i b_i}{i^2}$ (weighted by position)

Each choice creates a completely different geometry on the same mathematical objects!
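
And here's what "using" an inner product looks like in practice: take the trace inner product on 2×2 matrices, and the induced norm, the angle, and the projection all fall out immediately (a sketch with arbitrarily chosen matrices):

```python
import numpy as np

def inner(A, B):
    return np.trace(A.T @ B)   # the trace inner product on matrices

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

norm_A = np.sqrt(inner(A, A))                              # induced norm ||A|| = sqrt(<A, A>)
cos_angle = inner(A, B) / (norm_A * np.sqrt(inner(B, B)))  # "angle" between two matrices
proj_A_on_B = (inner(A, B) / inner(B, B)) * B              # projection of A onto B

print(norm_A)        # sqrt(30)
print(cos_angle)     # about 0.645
print(proj_A_on_B)   # [[0.  2.5] [2.5 0. ]]
```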

The Journey of Understanding

This post series is fundamentally about careful demystification.

Too often in engineering and mathematics education, we're told what to think without understanding why. Professors write "signals are vectors" on the board and move on. Students memorize formulas without grasping the beautiful underlying principles.

But when you truly understand these ideas to the core—when you see that vectors aren't about arrows but about behavior, when you realize that anything following simple rules can be a vector—the entire mathematical universe opens up.

Our goal isn't just to teach you facts. It's to build your intuition from the ground up, so that when you encounter Fourier transforms, quantum mechanics, or machine learning, you don't see mysterious formulas. You see familiar friends: vectors, norms, and geometric operations in different clothing.