4 Comments
Taylor T Black

Thanks brother! Stoked to dive deep with you!

David Hallowell, PhD

This made me think about a lot of really interesting dimensions of intelligence I hadn’t pondered. It also spurred an insight for me about isomorphic instantiations of intelligibility, and how stochastic activations uncover isomorphisms that may never have occurred in a human knower. It also raises some of the compelling, somewhat uncomfortable questions about bootstrapping and what differentiates different kinds of knowing (especially human vs LLM). Thanks for sharing!

Diglio Simoni

Your discussion of Gap 1 reminded me of how I once tried to explain this to my 8-year-old...

Imagine you have a toy robot that sorts colored blocks. You teach it by giving it gold stars when it gets it right, and it gets better and better at sorting. But the robot doesn't know it's sorting blocks. It doesn't know anything. It just... gets better.
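
For the technically inclined, the whole gold-star loop fits in a few lines of Python. This is a minimal sketch, assuming a tabular epsilon-greedy setup; the names and numbers here are illustrative, not anything from the post:

```python
import random

COLORS = ["red", "green", "blue"]
BINS = ["red", "green", "blue"]   # the right bin simply matches the block's color

# The robot's entire "mind": one preference number per (color, bin) pair.
scores = {(c, b): 0.0 for c in COLORS for b in BINS}

def choose_bin(color, epsilon=0.1):
    # Mostly pick the best-scoring bin; occasionally try a random one.
    if random.random() < epsilon:
        return random.choice(BINS)
    return max(BINS, key=lambda b: scores[(color, b)])

for _ in range(5000):
    block = random.choice(COLORS)
    chosen = choose_bin(block)
    reward = 1.0 if chosen == block else 0.0   # the gold star
    # Nudge the preference toward the reward just received.
    scores[(block, chosen)] += 0.1 * (reward - scores[(block, chosen)])

# After training, the table sorts almost perfectly. Yet nothing in it
# represents "sorting", "blocks", or "me": just numbers that drifted upward.
```

Print `scores` afterwards and you see the whole "improvement": nine numbers, three of them near 1.0. That, as I read it, is the point of Gap 1. The getting-better is real, but there is no one home it is happening to.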

Now imagine your sister watches the robot and says: "Oh! I bet the robot doesn't really think — it just follows rules."

Here's the funny thing. To say that, your sister had to actually think. She had to look at the robot, look at herself, and notice something connecting them. That's a real thought — a real "aha!"

So if someone says "thinking is just a machine following rules" — they used real thinking to say it. The thought is doing the very thing they're claiming doesn't exist.

The robot can never do that. The robot can get really, really good at sorting blocks. But it will never sit back and go, "huh, I wonder if I'm really sorting, or just following rules." It just... keeps sorting.

That's the difference. The "aha!" can look at itself. The robot's improvement can't.