
Did You See the Gorilla?

There's a famous experiment by Christopher Chabris and Daniel Simons. You watch a video of people passing a basketball and your job is to count the passes. Halfway through, a person in a gorilla suit walks across the screen, beats their chest, and leaves.

About half the people watching don't see the gorilla.

They were told to count passes. Their attention locked onto the ball and everything else got filtered out, including a gorilla standing right there in the open. Psychologists call this inattentional blindness. When you're focused hard enough on one thing, you can completely miss something obvious happening right in front of you.

Kahneman took this further. He argued that attention can be deliberately allocated and trained. I'd put it even more simply: attention is programmable. Tell someone what to look for, and that's what they'll see.

This applies to AI agents too

Give an agent one job, say labeling transactions as business or personal, and it nails it. Picks up on patterns you'd miss. Consistent. Doesn't get bored after the 400th receipt.

Now give that same agent three jobs. Label the transactions, calculate the quarterly totals, flag anomalies. Suddenly it's hallucinating numbers. Mislabeling things it would have nailed with a narrower scope. The model didn't get dumber; you asked it to track too many things at once, and the edge cases started slipping through. Same as the gorilla experiment.

Most of life is categorization anyway

Once you see this, you start noticing it everywhere. Filing your taxes? Categorizing transactions. Organizing a reading list? Categorizing books. Managing a project? Sorting tasks by status. Even thinking works this way. You have a new idea, and the first thing your brain does is figure out what kind of thing it is. That decision shapes everything that happens next.

AI is built for pattern matching, and categorization is pattern matching. A focused agent crushes these problems. But the moment you dilute its focus, it has too many patterns to match against at once, and accuracy drops.

The fix is focus, not firepower

In Scribbles, when you type something like "just watched Oppenheimer, incredible film," the first step just figures out what kind of thing you're talking about. A movie. That's the only question it answers. The next step extracts the title. The next one looks up the details. Each step is one categorization problem, and one categorization problem is something AI can do reliably all day long.
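The step-per-question pipeline above can be sketched in a few lines. This is a minimal illustration, not Scribbles' actual implementation: the function names are hypothetical, and the keyword matching stands in for what would really be a model call answering one narrow question per step.

```python
# Sketch of a pipeline where each step answers exactly one question.
# The keyword checks below are stand-ins for focused model calls.

def classify(note: str) -> str:
    """Step 1: what kind of thing is this? That's the only question."""
    if any(word in note for word in ("watched", "film", "movie")):
        return "movie"
    return "unknown"

def extract_title(note: str) -> str:
    """Step 2: pull out the title. Here we naively take the word
    after 'watched'; a real system would ask a model this one question."""
    words = note.split()
    for i, word in enumerate(words):
        if word == "watched" and i + 1 < len(words):
            return words[i + 1].strip(",.")
    return ""

def process(note: str) -> dict:
    """Run the steps in sequence; each one is a single narrow problem."""
    result = {"kind": classify(note)}
    if result["kind"] == "movie":
        result["title"] = extract_title(note)
    return result

print(process("just watched Oppenheimer, incredible film"))
# {'kind': 'movie', 'title': 'Oppenheimer'}
```

The point isn't the string matching; it's the shape. Each function owns one categorization problem, so no single step has to juggle the whole task.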

This is how we think about AI in our tools. You set up focused rules: "if this is about a movie, route it to my movie list." "If it has an arxiv link, send it to my research space." You decide what the system pays attention to. The AI does the sorting. Each rule is one narrow question, so it gets it right.
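The rule setup described above amounts to a list of narrow predicates, each paired with a destination. A rough sketch, with hypothetical names and an assumed item shape (a dict with optional "kind" and "link" fields):

```python
# Focused routing rules: each rule asks one narrow question.
# These names and the item structure are illustrative assumptions.

rules = [
    (lambda item: item.get("kind") == "movie", "movie list"),
    (lambda item: "arxiv.org" in item.get("link", ""), "research space"),
]

def route(item: dict) -> str:
    """Check each rule in order; the first match decides the destination."""
    for matches, destination in rules:
        if matches(item):
            return destination
    return "inbox"  # default when no rule fires

print(route({"kind": "movie", "title": "Oppenheimer"}))
# movie list
```

Because each predicate is one yes/no question, adding a new rule never makes the existing ones harder to answer. You program the attention; the sorting stays narrow.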

Program the attention, and let it run.
