AI: Episode 1 — What Makes Something Smart?

The line between a machine that follows orders and one that figures things out on its own — and why that distinction is the whole ballgame.


Your dog can learn tricks. Your thermostat adjusts the temperature. A chess computer can beat every grandmaster alive. But which of them is actually intelligent?

That question sounds philosophical. Fluffy and abstract. It’s not. The answer is the single most important idea in artificial intelligence, and everything else — machine learning, neural networks, ChatGPT — falls out of it like dominoes. So let’s get it right.

The thermostat is not smart

Thermostat feedback loop diagram
A thermostat checks one rule, endlessly. That’s not intelligence — it’s a recipe on a loop.

Start simple. Your thermostat checks the temperature. If it drops below 68, the heat kicks on. Hits 68 again, the heat shuts off. It’s sensing its environment and reacting. Looks a little bit like thinking, if you squint.

But it’s not. The thermostat is running one rule a human programmed into it. It doesn’t know what temperature is. Doesn’t know why 68 matters. Move it to a different room, break the heater, change the seasons — it just keeps checking the same rule, oblivious. That’s not intelligence. That’s a recipe on a loop.
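The whole "mind" of a thermostat fits in a few lines. Here's a minimal sketch (the 68° threshold and the rule itself are hard-coded by a human; nothing here ever changes from experience):

```python
# A thermostat is one hard-coded rule on a loop. The threshold is the
# programmer's knowledge, not the machine's -- it never updates itself.
def thermostat(temperature, threshold=68):
    """Return the heater command for the current temperature reading."""
    return "heat on" if temperature < threshold else "heat off"

# The same rule, forever -- regardless of room, season, or broken heater.
for reading in [70, 69, 67, 66, 68, 71]:
    print(reading, "->", thermostat(reading))
```

Move it to Alaska, point it at a broken heater, it runs the identical rule. That's the baseline to keep in mind for everything that follows.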

Your dog, though

Dog learning to sit through repetition
Nobody programmed the dog. It figured out the rule from experience.

Hold up a treat and say “sit.” First few times, nothing. The dog stares at you, maybe jumps, maybe barks. Then at some point — almost by accident — it sits. Treat. Do it again. Again. After maybe half a dozen repetitions, the dog sits the second you say the word.

Nobody programmed the dog. Nobody opened up its brain and typed if (sound == "sit") { lower(hindquarters); }. The dog figured it out from experience. It noticed a pattern: make this shape when the human makes that sound, and good things happen.
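You can sketch that difference in a few lines of code. This toy "learner" is purely illustrative (a real dog's brain works nothing like this): it is never given the rule, it just remembers which action earned a treat after each sound, and starts preferring it:

```python
from collections import defaultdict

# Reward counts per (sound, action) pair -- the learner's only "memory".
rewards = defaultdict(int)
actions = ["jump", "bark", "sit"]

def respond(sound, trial):
    """Prefer whatever has earned a treat; otherwise just try things in turn."""
    best = max(actions, key=lambda a: rewards[(sound, a)])
    if rewards[(sound, best)] > 0:
        return best
    return actions[trial % len(actions)]   # flailing around -- no rule yet

# Training: the human gives a treat only when the dog sits on "sit".
for trial in range(10):
    if respond("sit", trial) == "sit":
        rewards[("sit", "sit")] += 1       # treat!

print(respond("sit", 0))  # after training: "sit", every time
```

Notice there's no `if sound == "sit"` rule anywhere. The association between the sound and the action exists only because rewarded experience put it there.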

That distinction — right there — is the foundation of this entire field. There’s a massive difference between following a rule someone handed you and figuring out the rule yourself from experience. The thermostat was told its rule. The dog discovered its own.

And that, at the most basic level, is what separates regular software from AI. Traditional software follows rules a programmer wrote. AI figures out the rules from examples.

Everything else we’ll talk about in this series — machine learning, deep learning, large language models — those are specific techniques for how machines figure out rules. But the core idea never changes.

The intelligence spectrum

Intelligence spectrum from calculator to human
Intelligence isn’t binary. It’s a spectrum — and a weird one.

It’s tempting to think of intelligence as a binary. Smart or not smart. But it’s a spectrum, and a weird one.

A calculator can do math billions of times faster than you, but it has zero understanding of what numbers mean. Your dog builds mental models of the world — it knows the sound of the leash means a walk, that some neighbors give belly rubs and others don’t, that your car keys mean a ride is probably coming. A three-year-old can recognize faces, learn languages by osmosis, pick up a cup without crushing it, and tell when someone’s sad from their tone of voice.

And then there’s Deep Blue. In 1997, it beat world chess champion Garry Kasparov. It could evaluate something like 200 million board positions per second. Better at chess than any human who ever lived.

But it couldn’t recognize a cat in a photograph. Couldn’t understand a sentence of English. Couldn’t pick up a chess piece on a physical board. It was simultaneously the best chess player on Earth and one of the dumbest machines in existence.

Moravec’s Paradox: hard is easy, easy is hard

This leads to one of AI’s most important ideas. Hans Moravec, a robotics researcher at Carnegie Mellon, pointed out something deeply counterintuitive in the 1980s: the stuff humans find intellectually hard — chess, calculus, formal logic — is actually easy for computers. And the stuff humans find effortless — recognizing a face, walking across a room, catching a ball — is devastatingly hard for computers.

Think about catching a ball. You don’t think about it. But your eyes are tracking a moving object, your brain is calculating trajectory adjusted for wind and spin, your arm is extending to the right spot, your fingers are closing with exactly enough force. Dozens of muscles coordinating in real time, in about half a second. More physics and motor control than a room full of engineers could describe in equations. And you do it without thinking.

We’ve spent decades and billions of dollars trying to build robots that can do things like that. They’re still worse at it than a six-year-old. Moravec’s Paradox tells you something fundamental: what looks simple from the outside is often mind-bogglingly complex underneath. (Actually, that’s not quite right — it’s not just complex, it’s complex in ways we can’t even articulate. Which turns out to be the whole problem, as we’ll see in Episode 5.)

The Turing Test: famous but flawed

Alan Turing at a typewriter
Alan Turing proposed his famous test in 1950. Elegant idea — but it only tests one thing.

Alan Turing proposed the test in 1950. You’re having a text conversation, and you don’t know whether it’s a human or a machine on the other end. If you can’t tell the difference, the machine passes.

Elegant idea. But it only tests one thing: language. A machine could be a brilliant conversationalist while understanding absolutely nothing it’s saying — just stringing together words that statistically tend to follow each other. And actually, that’s surprisingly close to what modern language models do. We’ll get into exactly how in later episodes.

The bigger issue: the Turing Test frames intelligence as pass/fail. But we’ve already established it’s a spectrum. The better question isn’t “can this machine think?” It’s “can this machine do something useful that looks like thinking?”

And in 2026, the answer to that is an overwhelming yes.

You use AI every day (you just don’t notice)

Montage of everyday AI: spam filters, recommendations, face unlock
Spam filters, word predictions, Netflix recommendations, face unlock — all AI you use every day.

Your spam filter? A machine learning classifier trained on millions of emails. It learned the patterns of spam from examples — nobody programmed a list of suspicious words.
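A toy version of the idea — loosely in the spirit of Graham’s Bayesian filter, with a made-up six-email training set instead of millions — counts how often each word appears in spam versus legitimate mail and scores new messages by those learned counts. Note there’s no hand-written word list anywhere:

```python
from collections import Counter

# Tiny made-up training set -- real filters learn from millions of emails.
spam = ["win free money now", "free prize click now", "claim your free money"]
ham  = ["meeting moved to noon", "lunch tomorrow maybe", "draft attached for review"]

# The "training": just count which words show up in which pile.
spam_counts = Counter(w for msg in spam for w in msg.split())
ham_counts  = Counter(w for msg in ham for w in msg.split())

def spam_score(message):
    """Higher when the message's words were seen more often in spam."""
    words = message.split()
    s = sum(spam_counts[w] for w in words)
    h = sum(ham_counts[w] for w in words)
    return s / (s + h) if s + h else 0.5   # 0.5 = no evidence either way

print(spam_score("free money now"))     # high: these words came from spam
print(spam_score("meeting tomorrow"))   # low: these came from real mail
```

Feed it different training emails and it learns different patterns. The programmer wrote the counting procedure, not the rules.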

Your phone’s word suggestions? A small language model predicting what word probably comes next, trained on billions of sentences.
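The real thing is a neural network trained on billions of sentences, but the core move — predict the likely next word from counts of what followed what — can be sketched with a toy corpus:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; your phone's model trained on billions of sentences.
corpus = ("the dog sat on the mat . the dog ran to the park . "
          "the dog barked .").split()

# Learn, for each word, which word followed it -- counted, not hand-written.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def suggest(word):
    """Suggest the word that most often followed `word` in the training text."""
    return following[word].most_common(1)[0][0] if word in following else ""

print(suggest("the"))   # "dog" -- the most frequent follower in this corpus
print(suggest("sat"))   # "on"
```

Change the corpus and the suggestions change with it — the model has no idea what a dog or a mat is, only which words tend to follow which.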

Netflix recommendations? A system that finds patterns between your viewing history and millions of other users. It doesn’t know what shows are about. It knows that people who watch A, B, and C tend to enjoy D.
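That "people who watch A, B, and C tend to enjoy D" logic can be sketched with nothing but overlap counts. The users and shows below are made up, and real recommenders are vastly more sophisticated, but this is the core intuition:

```python
# Made-up viewing histories. The system knows nothing about the shows
# themselves -- only which ones appear together in people's histories.
histories = [
    {"A", "B", "C", "D"},
    {"A", "B", "D"},
    {"A", "C", "D"},
    {"B", "E"},
]

def recommend(watched):
    """Suggest the unwatched show seen most often alongside what you watched."""
    scores = {}
    for history in histories:
        overlap = len(watched & history)       # how similar is this viewer?
        for show in history - watched:         # their shows you haven't seen
            scores[show] = scores.get(show, 0) + overlap
    return max(scores, key=scores.get)

print(recommend({"A", "B", "C"}))  # "D" -- it co-occurs most with A, B, and C
```

Viewers who overlap with you the most get the biggest vote on what you see next. No plot summaries, no genres — just patterns in who watched what.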

Face unlock on your phone? A deep learning model called a convolutional neural network. It maps faces to a mathematical representation of their features — not a simple picture match — and compares each new scan against the scans of your face it captured from multiple angles during setup. The fact that this happens in a fraction of a second on a phone in your pocket is, honestly, kind of wild.

Every one of these is a machine that learned rules from data instead of being told the rules by a programmer. Different techniques, different architectures, same umbrella: machines that learn.

The AI taxonomy (it matters)

The term “AI” gets thrown around so loosely it’s nearly meaningless. When a company says their product “uses AI,” that could mean six completely different things. So here’s the hierarchy:

  • Artificial Intelligence — the umbrella. Any system that does things we’d normally say require intelligence, whether it learned them or had them hand-coded.
  • Machine Learning — the broad category of systems that learn from data.
  • Deep Learning — machine learning using layered neural networks, loosely inspired by the brain.
  • Large Language Models — deep learning systems trained specifically on text (ChatGPT, Claude, Gemini).
  • Generative AI — any AI that creates new content: text, images, music, video.

These are mostly nested. All large language models use deep learning. All deep learning is machine learning. All machine learning is AI. But not all AI is machine learning — and generative AI cuts across the categories: an image generator is generative AI and deep learning, but not a language model. Throughout this series, we’ll always be specific about which one we’re talking about.

One thing to remember

If someone asks you what AI is, here’s your answer: AI is a machine that figures out the rules from experience, instead of being told the rules by a programmer. Everything else — machine learning, ChatGPT, self-driving cars — is a specific technique for how machines do that learning.

And whether a machine that “learns” actually understands anything, or is just doing very sophisticated math on very large datasets? That’s genuinely one of the great philosophical debates of our time. Smart people disagree. But the practical impact is real either way.


Listen to this episode: [Zeroth: AI, Episode 1 — What Makes Something Smart?]


Next up: Every AI system ever built runs on hardware that only knows two things: zero and one. How do you get from zeros and ones to a machine that writes poetry? That’s Episode 2.


Sources

  1. Russell, S. & Norvig, P., Artificial Intelligence: A Modern Approach, 4th ed., Pearson, 2020. The standard definition distinguishing AI (learning from experience) from traditional programmed systems.
  2. IBM Research, “Deep Blue,” project archives, 1997. Deep Blue evaluated approximately 200 million positions per second during the Kasparov match.
  3. Moravec, H., Mind Children, Harvard University Press, 1988. Moravec’s Paradox: sensorimotor skills that seem easy to humans require enormous computational resources, while abstract reasoning is comparatively cheap.
  4. Turing, A., “Computing Machinery and Intelligence,” Mind, Vol. 59, No. 236, 1950, pp. 433-460. The original proposal of the imitation game (Turing Test).
  5. Graham, P., “A Plan for Spam,” 2002. The essay that popularized Bayesian spam filtering, demonstrating pattern-based classification over rule-based approaches.
