This made me think about many dimensions of intelligence I hadn’t pondered before. It also spurred an insight for me about isomorphic instantiations of intelligibility, and how stochastic activations can uncover isomorphisms that may never have occurred to a human knower. It also raises some compelling, somewhat uncomfortable questions about bootstrapping and what differentiates different kinds of knowing (especially human vs LLM). Thanks for sharing!
Thanks brother! Stoked to dive deep with you!
Likewise!