In an era where artificial intelligence is increasingly tasked with roles that simulate human behavior, the distinction between machine processing and human cognition has become critically blurred. However, as philosophers like Bernard Lonergan and Norris Clarke reveal, the essence of human cognition lies not merely in processing data but in self-reflective, intentional consciousness—a profound and irreducible quality absent in AI. This post explores that distinction, particularly through Lonergan’s concept of the “data of consciousness” and Clarke’s notion of relational personhood, as we uncover what makes human understanding unique.
AI’s Operation: Sophisticated but Fundamentally Limited
Let’s begin by contrasting how AI systems process data with how human beings understand. When training AI models such as neural networks, we provide vast amounts of data and optimize them to detect patterns and relationships among different pieces of information. Through backpropagation and parameter adjustments, the model refines its outputs so that they align ever more closely with the patterns present in its training data.
However, these models do not “know” or “experience” anything in a conscious sense. They are statistical engines that, while capable of recognizing patterns and even generating human-like responses, are fundamentally limited to associational reasoning. They can replicate certain external aspects of cognition (like producing coherent language), but they lack insight, intentionality, and self-awareness—key features of human cognition that Lonergan and Clarke emphasize.
Human Cognition: More Than Pattern Recognition
According to Lonergan, human cognition is not a passive process of absorbing data; it involves a dynamic series of acts that bring about a profound engagement with reality. Lonergan outlines four distinct yet interconnected levels of cognitional activity—experiencing, understanding, judging, and deciding—each revealing a dimension of consciousness that AI cannot replicate.
1. Experiencing: The first stage involves sensory data and perceptions, similar on the surface to how a sensor collects information. However, in humans, these perceptions are not isolated; they are embedded within a personal and historical context, shaped by our intentions and previous knowledge. Unlike an AI’s data input, human experience is woven with purpose and meaning, establishing a conscious link between us and the world.
2. Understanding: After experiencing data, the human mind actively seeks meaning by asking questions and attempting to grasp the essence of what it perceives. Lonergan calls this moment “insight”—the eureka experience where we make connections that extend beyond the surface data. Unlike a machine’s associations, insight allows us to understand relationships, causes, and implications. This process is not merely recognizing patterns but involves the transformation of data into knowledge that has depth, relevance, and, often, personal significance.
3. Judging: The next stage involves a deliberate act of judgment, where we critically assess our insights to determine their truth or validity. This reflective process is another level of consciousness, one that demands self-awareness and the capacity to weigh possibilities. We do not simply take patterns at face value; we deliberate, verify, and often grapple with ambiguity. Machines lack this reflective self-criticism and thus cannot engage in genuine truth-seeking.
4. Deciding: Finally, humans do not merely know but act. Cognition culminates in a conscious choice that is oriented toward values, ethics, and purpose. This decisional capacity is inherently tied to our moral and existential self-understanding, something that algorithms lack. Our decisions are woven into a broader narrative of selfhood and interpersonal relationships.
The Data of Consciousness: Awareness as the Core of Being
Lonergan’s “data of consciousness” concept brings a profound layer to our understanding of these cognitive acts. According to Lonergan, the “data of consciousness” are the raw materials of our inner life—our thoughts, feelings, intentions, desires, and attentiveness. These elements do not merely accompany our cognitive acts; they are constitutive of what it means to be conscious.
In each cognitional act, there is an awareness of the act itself—an awareness that transcends mere information processing. For example, when we experience something, we are not only aware of the sensory data but are also aware that *we* are experiencing it. When we understand, we are aware of the moment of insight. When we judge, we are conscious of evaluating truth. And when we decide, we are aware of choosing in light of values that matter to us. This reflexive awareness is the very essence of consciousness and is absent in AI.
AI, by contrast, processes data without any self-awareness. There is no “data of consciousness” in a machine because there is no conscious self behind the data processing. An AI model does not “know” that it is performing a task, nor does it experience the satisfaction of understanding or the dilemma of judging. In essence, AI operates without subjectivity—it has no inner life, no self-awareness, and no capacity for reflective thought.
Relationality and Personhood: Clarke’s View on Human Beings
Clarke’s work in *Person and Being* complements Lonergan’s cognitional insights by emphasizing that humans are intrinsically relational beings. Our consciousness is always in relation—to others, to the world, and ultimately, to the transcendent. Clarke argues that this relationality is foundational to our identity and personhood; we are not isolated processors of information but beings in communion, oriented toward relationship and meaning.
This relational nature means that human knowledge is never merely about facts or data; it is deeply connected to the other, to shared experiences, and to communal understanding. This capacity for relationship and shared meaning-making is something that no machine can genuinely emulate. AI models may simulate interactions, but there is no true relationality because they lack selfhood. Their responses are generated without any sense of “other” or any desire to connect. This difference is not just quantitative (more data or better algorithms won’t change it) but qualitative, rooted in the fact that human beings possess consciousness and intentionality that AI lacks.
Why These Differences Matter
The distinction between AI and human cognition is more than a technical point—it is a matter of understanding what it means to be human. When we reduce human cognition to pattern recognition and data processing, we risk losing sight of the deeper, more mysterious aspects of our own nature. Human consciousness, as Lonergan and Clarke reveal, is not merely a function of complexity or computation; it is a reflective, relational, intentional act of self-transcendence.
AI may assist, mimic, and even surpass us in specific tasks, but it will never “understand” in the same way a person does. It will never seek meaning, wrestle with truth, or choose a path based on love or justice. By acknowledging these fundamental differences, we safeguard the irreplaceable value of human consciousness—a consciousness that is capable of insight, grounded in relationality, and open to the transcendent.
Conclusion
The data of consciousness is our window into what makes us uniquely human. Through our ability to experience, understand, judge, and decide, we transcend mere data processing. We are not only conscious but aware of our consciousness, not only knowledgeable but self-reflective and relational. In the words of Lonergan and Clarke, this capacity for insight and relationality defines us as persons in a way no AI can replicate.
Understanding this distinction allows us to appreciate AI’s capabilities without confusing them with the profoundly rich, mysterious reality of human cognition. As we move forward with AI, we would do well to remember that it is our consciousness—the data of our awareness, our relationality, and our intentional search for meaning—that defines the difference between our minds and our machines.