For the longest time, I never really understood what limits were. School taught me how to "solve" them, but never why they existed in the first place. Looking back, the way limits were presented was not just incomplete—it was misleading.
The name itself feels like a betrayal. "Limit" suggests finality, precision, an ultimate value you reach when you squeeze infinitely close. So naturally, I assumed that taking the limit of a function was about discovering its most exact, most absolute value.
Wrong!
The whole point of limits is the opposite. Limits were invented because we couldn't get an exact value at some point. Either the function blew up there (division by zero), had no defined value, jumped, or did something too wild to handle directly. And yet, we still wanted to say something meaningful about it. So we created a system to describe not the value at a point, but what it approaches as you get infinitely close.
It’s a philosophy hack. A controlled form of vagueness that’s still usable in rigorous math.
Limits are a way to make broken things workable. They let you study what a function is trying to do near a point, without needing it to behave perfectly at that point.
If you’ve ever seen something like this:
lim (x -> 2) (x^2 - 4)/(x - 2)
You were likely told to cancel terms, simplify, and plug in 2. What they didn't tell you is that you're not actually evaluating the function at 2, because it's undefined there. You're watching what it becomes as it gets close. That's the limit.
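Concretely: x^2 - 4 factors as (x - 2)(x + 2), so everywhere except x = 2 the expression behaves exactly like x + 2, which heads toward 4. Here's a minimal sketch in plain Python that makes the "watching" literal (the step sizes are arbitrary, chosen just for illustration):

```python
# Approach x = 2 from both sides and watch the value of (x^2 - 4)/(x - 2).
# The function is undefined at exactly x = 2, but the trend is unmistakable.

def f(x):
    return (x**2 - 4) / (x - 2)

for step in [0.1, 0.01, 0.001, 0.0001]:
    left = f(2 - step)    # approaching from below
    right = f(2 + step)   # approaching from above
    print(f"x = 2 ± {step}: f = {left:.6f} (left), {right:.6f} (right)")

# Both columns close in on 4 -- the limit -- even though calling f(2)
# itself would raise a ZeroDivisionError.
```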
In algebra, math is absolute. You plug in, you solve, you get an answer. But limits are behavioral. They are about trends, approaching, tendencies. It’s the first time you’re not solving a problem by direct substitution or transformation, but by asking, “Where is this going?”
That subtle switch is huge.
It’s why limits are the backbone of derivatives and integrals. You can’t define instantaneous change (like velocity) or exact area under a curve (integral) unless you embrace this kind of infinitesimal behavior.
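The derivative, for instance, is literally a limit: the average rate of change (f(x + h) - f(x)) / h as h shrinks toward 0. A rough sketch of that idea, again in plain Python (the example function and point are made up for illustration):

```python
# The derivative as a limit: shrink h and watch the difference quotient settle.

def f(x):
    return x**2          # f'(x) = 2x, so the slope at x = 3 should approach 6

x = 3.0
for h in [0.1, 0.01, 0.001, 0.0001]:
    slope = (f(x + h) - f(x)) / h    # average rate of change over [x, x + h]
    print(f"h = {h}: slope ≈ {slope:.6f}")

# There is no h you can actually plug in to get "instantaneous" change
# (h = 0 would divide by zero); the limit is what makes the answer 6 meaningful.
```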
Here's what’s even more interesting: computers don’t “understand” limits. They simulate them. Numerically, they take tiny steps close to the point and observe behavior. Symbolically, they apply transformation rules until the expression behaves nicely. But in both cases, they’re mimicking inference. There’s no magic. Just behavior-watching done mechanically.
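The numeric route is essentially the loop shown earlier. For the symbolic route, here's a rough illustration using SymPy (a real Python library; this is just one way to do it). It rewrites the expression until the singularity cancels, rather than ever plugging in 2:

```python
# Symbolic limits: apply rewriting rules instead of sampling nearby points.
# Requires SymPy (pip install sympy).
from sympy import symbols, limit, simplify

x = symbols("x")
expr = (x**2 - 4) / (x - 2)

print(simplify(expr))        # x + 2  (the cancellation we did by hand)
print(limit(expr, x, 2))     # 4      (the limit, not the value at x = 2)
```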
In a way, we’ve taught computers to do what school never taught us: understand what the limit is really about.
Limits don’t solve problems that have answers. They make unsolvable things usable.
They don’t give you perfect values. They give you the next best thing: a direction, a behavior, a tendency.
Limits turn broken math into usable math.
They’re not the pinnacle of precision. They’re the legalized approximation engine that lets the rest of calculus exist.
No one told me this. Not in high school. Not in undergrad. I had to piece it together from frustration, programming, and intuition.
And once you get it, you start to realize: limits weren’t just a new technique. They were a philosophical shift in what math even is.
If you felt like limits never made sense, maybe it’s not you. Maybe it’s the way we’ve all been taught to fake-understand them.