Making Sense of the World: "Be Intelligent" and the Art of Insight
Insight is often the result of rigorous questioning and a refusal to settle for the superficial.
In our first post, we explored Bernard Lonergan's cognitional guiding principle "Be Attentive." We looked at how attentiveness extends beyond raw sense data to the emotional and conscious layers that are often neglected in product development. Today, we take the next step in Lonergan's cognitive journey: "Be Intelligent." This precept invites us to interpret the data gathered through attentiveness—to make sense of it and derive meaning.
For innovators and product builders, "Be Intelligent" isn't just about identifying a pattern in the data; it's about deeply understanding it. This means moving from surface-level definitions, which can be misleading or superficial, to deeper, explanatory insights that transform our understanding. We’ll explore this in the context of generative AI, discussing the difference between nominal and explanatory definitions and why this distinction matters for building truly insightful products.
From Data to Insight: The Challenge of Being Intelligent
The challenge of being intelligent begins when we move beyond merely collecting data—whether from user interactions, market research, or internal discussions—and start interpreting it. Intelligence, in this context, involves spotting connections, raising questions, and proposing hypotheses that fit the data. It’s about making leaps, connecting the dots, and allowing ourselves to imagine the deeper structures at play.
Imagine a scenario where your AI product’s user engagement metrics are down. At first, being attentive gives us a detailed picture: perhaps we notice that drop-off rates spike after a particular interaction. But to be intelligent is to ask why—to seek the underlying reasons, whether they stem from usability issues, emotional friction, or unmet user expectations.
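To make the attentive step concrete before asking why, here is a minimal sketch over a hypothetical event log (the schema, user IDs, and step names are assumptions for illustration). It shows where users leave the funnel, which is exactly the point at which intelligent questioning begins:

```python
import pandas as pd

# Hypothetical interaction events: which funnel steps each user reached.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "step":    ["open", "prompt", "review", "open", "prompt",
                "open", "prompt", "review", "open"],
})

# Count unique users reaching each step, in funnel order.
funnel_order = ["open", "prompt", "review"]
reached = events.groupby("step")["user_id"].nunique().reindex(funnel_order)

# Drop-off between consecutive steps: attentiveness shows *where* users
# leave; intelligence asks *why* they leave at that step.
drop_off = (1 - reached / reached.shift(1)).round(2)
print(pd.DataFrame({"reached": reached, "drop_off": drop_off}))
```

The output tells us that a quarter of users never enter a prompt and a third never review an output. That is where the data ends and the questioning starts.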
Nominal vs. Explanatory Definition
A critical aspect of intelligence lies in how we define what we observe. Here, Lonergan introduces an important distinction between nominal definitions and explanatory definitions—a distinction that is crucial for anyone working with technology and innovation.
Nominal Definition: A nominal definition essentially assigns a label to an observed phenomenon. It describes what something is, often in terms of surface attributes. For instance, if we define a "user" as simply "a person interacting with our AI tool," we have given a nominal definition. It tells us what the term means in a basic sense, but it doesn’t explain the underlying dynamics. Nominal definitions are useful for communication, but they can limit our thinking to fixed categories without digging deeper into the nature of what we are observing.
Explanatory Definition: In contrast, an explanatory definition delves deeper, explaining why something is the way it is by exploring its underlying cause or structure. Using our earlier example, an explanatory definition of a "user" might involve understanding the user's underlying motivations, their emotional state while interacting with the product, and the larger context in which they use the tool. Explanatory definitions attempt to answer why the observed phenomena happen the way they do, rather than simply what they are.
In the tech world, relying solely on nominal definitions can be dangerous. They give us convenient but ultimately shallow categories—labels that obscure complexity. Explanatory definitions, however, push us to understand the relationships, functions, and purposes that drive user behavior. In the case of generative AI, defining a user's journey nominally might lead us to focus only on inputs and outputs. By contrast, an explanatory definition asks us to consider how users interpret the model's responses, how they integrate the outputs into their creative processes, and even how their backgrounds shape their expectations.
Generative AI: Nominal Versus Explanatory Insights
To illustrate, let’s consider a generative AI application that writes poetry. A nominal definition of what the application does might be: "The model generates text that resembles human poetry based on a given prompt." While technically accurate, this tells us very little about what is truly happening under the hood.
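Notice how little code the nominal definition demands. Here is a minimal sketch using the Hugging Face transformers library; the model and prompt are illustrative choices, not part of any real product:

```python
# The nominal definition -- "generates text resembling poetry from a
# prompt" -- fits in a few lines. Model and prompt are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Write a poem about autumn rain:", max_new_tokens=60)
print(result[0]["generated_text"])

# Everything explanatory -- how learned patterns are recombined, how a
# reader judges the output -- lies outside these lines.
```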
An explanatory approach would dig deeper: it might explore how the model processes input prompts, recognizes semantic patterns, and recombines learned elements in a way that captures something poetic. Beyond this, it would consider how a user evaluates the output—is the poetry judged based on its emotional resonance, originality, or conformity to traditional poetic forms?
More importantly, an explanatory definition can extend to the broader impact of the generated poetry. How does this AI-assisted creativity affect the user's own creative process? Does it inspire or inhibit them? By moving beyond nominal definitions, we can create products that are not just functionally capable but that also resonate deeply with the human motivations underlying their use.
Intelligence in Practice: Moving Beyond Labels
So, how do we apply "Be Intelligent" practically when building or refining AI products? The first step is to recognize when we’re relying on nominal definitions and when we need to move toward something explanatory. Here are some ways to foster intelligence in product teams:
Challenge the Nominal: Whenever you identify a label—whether it’s "user," "engagement," or "success"—ask whether it merely names a phenomenon or whether it actually explains it. Digging into metrics, for example, often yields nominal definitions ("engagement dropped by 10%"). To be intelligent, you need to start asking deeper questions about why and how this happened, seeking the relationships beneath the data (see the sketch after this list).
Cross-Disciplinary Dialogue: Often, the best way to move from nominal to explanatory definitions is through collaboration with people who have different perspectives. Designers, data scientists, psychologists, and marketing experts can all bring different lenses to the same problem, helping to challenge surface-level labels and uncover deeper insights.
Hypothesis Generation: Intelligence is fundamentally about curiosity and the courage to form and test hypotheses. When analyzing a user interaction, an intelligent approach means hypothesizing possible reasons behind observed behaviors. For example, if users are abandoning a feature, is it due to poor UX design, or could it be that they don’t perceive enough value in the task itself? Hypotheses lead to experiments, and through experimentation we move toward deeper, explanatory understanding (the sketch after this list tests one such hypothesis).
User Stories as Explanatory Tools: Instead of simply analyzing users via metrics, incorporating qualitative insights into your understanding can lead to more explanatory definitions. Consider creating detailed user stories that explore a user's emotional state, intent, and the broader context of their life. These stories help transform metrics into meaningful narratives that foster intelligence.
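To ground the first and third of these practices, here is a minimal sketch that starts from the nominal insight ("engagement dropped") and tests one explanatory hypothesis about it. The counts, cohort names, and the hypothesis itself are all hypothetical:

```python
# Nominal insight: "abandonment is up." Explanatory move: segment it and
# test a hypothesis, e.g. "new users abandon more than returning users."
from scipy.stats import chi2_contingency

# Hypothetical counts of [abandoned, completed] sessions per cohort.
new_users       = [180, 420]   # 30% abandonment
returning_users = [ 90, 510]   # 15% abandonment

chi2, p_value, dof, expected = chi2_contingency([new_users, returning_users])
print(f"chi2={chi2:.1f}, p={p_value:.4f}")

# A small p-value says the cohorts genuinely differ; it does not say why.
# The explanatory work is interpreting what new users run into that
# returning users do not: onboarding friction, unmet expectations.
```

Note that the statistics only confirm that a difference exists. The leap from "new users abandon more" to "because onboarding sets the wrong expectations" is the act of insight Lonergan is pointing at, and no test performs it for you.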
The Role of Generative AI in Enhancing Intelligence
Interestingly, generative AI itself can be both a product of and a tool for enhancing human intelligence. When used intelligently, AI can assist us in generating hypotheses, revealing patterns that might not be immediately apparent, and suggesting correlations for us to explore further. For example, sentiment analysis on user reviews can reveal not just that users are dissatisfied (nominal insight) but can help us identify specific emotional triggers and potential design flaws (explanatory insight).
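As a sketch of that move from nominal to explanatory insight, consider grouping negative reviews by the design aspect they implicate. The reviews, the aspect lexicon, and its labels below are all hypothetical, and a production pipeline would use a proper aspect-based sentiment model rather than keyword matching:

```python
# A toy sketch: all reviews, aspects, and trigger terms are hypothetical.
from collections import Counter

negative_reviews = [
    "the output felt generic and soulless",
    "waited forever for a response, then it ignored my prompt",
    "it rewrote my poem instead of helping me improve it",
]

# Map design aspects to trigger vocabulary (an assumed, hand-built lexicon).
aspects = {
    "authenticity": {"generic", "soulless", "bland"},
    "latency":      {"waited", "slow", "forever"},
    "agency":       {"rewrote", "ignored", "overrode"},
}

triggers = Counter()
for review in negative_reviews:
    words = set(review.split())
    for aspect, vocab in aspects.items():
        if words & vocab:   # any trigger term present in this review?
            triggers[aspect] += 1

# Nominal: "users are dissatisfied." Explanatory candidates: which aspect
# of the experience drives the dissatisfaction, and why.
print(triggers.most_common())
```

Even this crude grouping shifts the question from "are users unhappy?" to "which part of the experience is making them unhappy, and why?"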
However, it is ultimately up to human teams to interpret these AI-generated insights. AI can suggest patterns, but making sense of those suggestions requires intelligence in the Lonerganian sense—asking "What does this pattern mean?" and "How does this relate to the experiences we want to create?"
Being Intelligent in the Age of AI
Being intelligent means striving for depth in understanding, especially in the way we define and categorize the world around us. It requires moving beyond nominal labels and definitions to grasp the underlying dynamics at play. In product development, this often means questioning assumptions, looking for root causes, and being willing to explore multiple levels of meaning and explanation.
For those working in AI and innovation, Lonergan’s call to "Be Intelligent" is a reminder that insight is often the result of rigorous questioning and a refusal to settle for the superficial. Generative AI can help surface data, but it’s up to us to make that leap from mere information to true understanding.
Conclusion
"Being Intelligent" is about transforming the data we've gathered through attentiveness into something meaningful—something that offers insight. It’s about pushing beyond superficial definitions and diving into the root causes, structures, and relationships that govern user behaviors and product performance. In the next post, we’ll explore the third cognitional guiding principle: "Be Reasonable," where we move from understanding to making sound judgments about what’s true.
What questions do you have as we continue on this journey through Lonergan's philosophy, uncovering insights that make us better creators and innovators?