Taking Action: "Be Responsible" in Tech and Innovation
Why action is necessary for authentic human insights
We’ve journeyed through three of Bernard Lonergan’s cognitional imperatives: "Be Attentive," "Be Intelligent," and "Be Reasonable." Each one builds on the last, guiding us from gathering data to generating insights to making informed judgments, all in pursuit of the unrestricted desire to know. Today, we reach the fourth step: "Be Responsible."
Lonergan argues that we must be responsible in the pursuit of the unrestricted desire to know because our intellectual activity is not morally neutral—it’s embedded in a world of human action and ethical implications. The unrestricted desire to know, at its core, seeks truth, and this pursuit brings with it an obligation to ensure that the knowledge we gain, and how we apply it, serves the greater good and respects the dignity of others.
Here’s why responsibility is crucial in Lonergan's framework:
1. The Human Condition Is Relational
Lonergan recognizes that human beings are deeply interconnected. Our pursuit of knowledge, whether through science, philosophy, or any other domain, affects others. Knowledge doesn’t exist in a vacuum; it has practical consequences for individuals, societies, and institutions. Therefore, it’s not enough to seek understanding or truth for its own sake. Once we acquire knowledge, we must take responsibility for how that knowledge impacts others.
This responsibility is especially critical when it comes to technologies like AI or innovations in science, where new knowledge can have far-reaching effects on society. Acting responsibly means considering these wider consequences, not just following curiosity wherever it leads, but aligning that curiosity with the ethical demands of living in community.
2. Knowledge and Freedom Are Linked
For Lonergan, the pursuit of knowledge is tied to the notion of human freedom. The more we understand the world, the more capable we become of making choices that shape it. However, with this freedom comes the burden of responsibility: the choices we make based on our knowledge can contribute either to human flourishing or to harm.
Responsibility, therefore, serves as a check on the use of our freedom. Without it, the desire to know could lead to destructive outcomes—think of the ethical implications of scientific discoveries like nuclear energy or genetic engineering. Lonergan would argue that the unrestricted desire to know must be guided by moral considerations, ensuring that the freedom knowledge grants is used in ways that respect the well-being of others.
3. The Desire to Know Is an Ethical Imperative
The unrestricted desire to know, for Lonergan, is inherently linked to human authenticity. This desire drives us not only toward understanding the world but also toward discovering what is truly good, what promotes justice, and what aligns with the flourishing of all people. It’s not enough to simply accumulate facts or insights—the pursuit of knowledge is also a moral endeavor aimed at discovering what we ought to do once we know.
In this sense, being responsible is part of the natural progression of the desire to know. The deeper we understand reality, the more we are called to act in ways that reflect that understanding. Ignoring the ethical dimension of knowledge leads to inauthenticity, where we might use what we know for selfish or harmful purposes. Being responsible ensures that our actions align with the deeper moral truths we discover through our inquiry.
4. Intellectual Integrity Requires Ethical Action
Lonergan believes that true intellectual integrity requires us to recognize the moral dimensions of our knowing. This means that simply knowing something is not enough; we must integrate that knowledge into our broader moral framework. If we fail to be responsible with what we know, we betray the very essence of the intellectual pursuit, which is to align ourselves with the truth—and the truth, in Lonergan's view, always points us toward the good.
For example, in fields like AI development or bioethics, the unrestricted pursuit of knowledge could lead to groundbreaking discoveries. But intellectual integrity demands that we apply these discoveries with care, ensuring that they serve humanity rather than harm it. This is why responsibility is essential: it ensures that our pursuit of knowledge remains authentic, aligned with both truth and goodness.
5. The Common Good and the Responsibility of Knowledge
Finally, Lonergan argues that knowledge, when properly pursued, contributes to the common good. The unrestricted desire to know should, ideally, lead to insights that improve human life and advance collective well-being. However, this can happen only if we pursue and apply knowledge responsibly. Knowledge used irresponsibly, without concern for its social, ethical, or environmental impact, can erode the common good rather than promote it.
Responsibility, in Lonergan’s framework, ensures that the pursuit of knowledge contributes to the common good by aligning it with justice, fairness, and care for others. It prevents the unrestricted desire to know from becoming a purely self-serving or exploitative endeavor.
In many ways, this precept is the most pivotal because it requires action. It asks us to translate our understanding and judgments into behavior. For those of us working in tech and innovation, "being responsible" involves ethical decision-making, understanding the consequences of our work, and recognizing our impact—not just on users but on society as a whole.
For Lonergan, the unrestricted desire to know is a central feature of what it means to be human, but it comes with a profound moral dimension. Being responsible in our pursuit of knowledge ensures that our desire for truth is tempered by a commitment to justice and the well-being of others. It’s not enough to simply seek understanding; we must ensure that our knowledge is applied in ways that are ethical, constructive, and aligned with the common good. In this sense, responsibility is not just a moral add-on to the desire to know—it’s an integral part of it.
Responsibility in Product Development: Real-World Considerations
Let’s look at a specific example—a team developing a generative AI tool for health and wellness recommendations. This tool can offer personalized advice based on user inputs, leveraging AI to simulate the kind of guidance a health coach might provide. To be responsible, the team needs to consider several critical aspects:
User Safety: How do we ensure that the recommendations provided by the AI are safe? Could the advice be harmful if taken out of context? Being responsible means having a robust framework to ensure safety—perhaps limiting the tool to general wellness advice and explicitly cautioning users against relying on it for medical diagnoses.
Misuse and Misinformation: Is there potential for users to misuse the tool, or for it to propagate misinformation? For instance, could users input symptoms and receive potentially dangerous advice? Addressing these risks might involve designing the experience to nudge users toward a medical professional and implementing explicit disclaimers; a minimal sketch of such a guardrail follows this list.
Equitable Access: Being responsible also means considering who benefits from the technology and who might be left out. Does this tool work equally well for different groups, or does it disproportionately benefit those who already have better access to technology? Are there inherent biases in the data that could skew recommendations in ways that disadvantage certain users? Responsibility includes ensuring that technology is designed with inclusivity in mind.
Environmental Impact: One aspect often overlooked in tech development is environmental responsibility. Training generative AI models can be energy-intensive. Teams need to consider the carbon footprint of their projects and explore ways to make them more efficient. Responsibility extends to how our choices today will impact the broader environment and future generations; a back-of-envelope estimate follows below.
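To make the safety and misuse points above concrete, here is a minimal sketch of a guardrail that could wrap the model’s output. Everything in it is an assumption for illustration: the function names, the keyword list, and the disclaimer wording. A real product would replace the string matching with a trained intent classifier and clinically reviewed policy.

```python
DISCLAIMER = (
    "This tool offers general wellness information only. It is not a "
    "substitute for professional medical advice, diagnosis, or treatment."
)

# Very rough signal that a request is drifting toward diagnosis.
MEDICAL_INTENT_KEYWORDS = {
    "diagnose", "diagnosis", "symptom", "prescription", "dosage",
}

def looks_medical(text: str) -> bool:
    """Crude keyword check for medical-diagnosis intent (illustrative only)."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in MEDICAL_INTENT_KEYWORDS)

def guard_response(user_input: str, model_output: str) -> str:
    """Append the disclaimer and redirect diagnosis-like requests."""
    if looks_medical(user_input):
        return (
            "I can't help with diagnoses or medication questions. "
            "Please speak with a qualified medical professional.\n\n"
            + DISCLAIMER
        )
    return model_output + "\n\n" + DISCLAIMER
```

The key design choice is that the guardrail sits outside the model, so the safety behavior does not depend on the model itself behaving well.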
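On the environmental point, a team can get a rough sense of scale with the standard energy-times-carbon-intensity estimate. Every number below is a hypothetical placeholder, not a measurement of any real training run.

```python
def training_co2_kg(
    gpu_count: int,
    hours: float,
    gpu_power_kw: float = 0.4,    # assumed average draw per GPU, in kW
    pue: float = 1.2,             # assumed datacenter power usage effectiveness
    grid_kg_per_kwh: float = 0.4, # assumed grid intensity, kg CO2 per kWh
) -> float:
    """Estimated kg of CO2 for a training run: energy times grid intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_per_kwh

# e.g., 64 GPUs running for two weeks (336 hours):
print(round(training_co2_kg(64, 336)))  # ~4129 kg CO2 under these assumptions
```

Even a rough figure like this makes the trade-offs visible when a team chooses model size or training duration.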
The Role of Accountability in Being Responsible
Responsibility also implies accountability—taking ownership of the outcomes of the technologies we create. In practice, this means having mechanisms in place to address issues when they arise. For example:
Feedback Loops: Building systems that allow users to report problems or concerns. If users are experiencing negative consequences, there should be clear and accessible ways for them to communicate that, and for the product team to respond; a minimal sketch follows this list.
Post-Launch Audits: After a product launch, being responsible means conducting audits to see if the technology is being used as intended, and to identify any unexpected harms. These audits should be an integral part of the development lifecycle, rather than an afterthought.
Regulatory Compliance and Advocacy: Responsibility also includes staying abreast of and complying with regulations, as well as advocating for responsible standards in the industry. The tech world is evolving faster than regulations can keep up, which means that as creators, we need to hold ourselves to high standards—often beyond what is legally required.
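As a sketch of the feedback loop mentioned above, the structure below captures a user report and triages it so the most serious harms reach a human first. The field names and severity rules are assumptions; a real system would persist reports and route urgent ones to an on-call reviewer.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackReport:
    user_id: str
    output_shown: str  # what the product told the user
    concern: str       # the user's description of the problem
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Illustrative terms that should jump the queue for human review.
HIGH_SEVERITY_TERMS = {"unsafe", "harmful", "dangerous", "medical"}

def triage(report: FeedbackReport) -> str:
    """Route a report so the most serious harms are reviewed first."""
    text = report.concern.lower()
    if any(term in text for term in HIGH_SEVERITY_TERMS):
        return "urgent-human-review"
    return "standard-queue"

report = FeedbackReport(
    user_id="u123",
    output_shown="Try a 48-hour fast to reset your metabolism.",
    concern="This advice seems unsafe for people with diabetes.",
)
print(triage(report))  # -> urgent-human-review
```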
A Framework for Responsible Innovation
To embed responsibility into product development, it helps to have a framework that teams can use to evaluate their work at each step of the journey:
Ethical Evaluation: Incorporate ethical assessments into every stage of product development. When formulating new features, ask how they align with core ethical values. Use ethical frameworks like the TARES Test (Truthfulness, Authenticity, Respect, Equity, Social responsibility) to ensure the feature meets ethical standards; one lightweight way to operationalize this appears in the sketch after this list.
User Empowerment: Responsible technology empowers users rather than exploiting them. Ask whether your product helps users achieve their goals meaningfully. Does it offer transparency about how it works, so users can make informed decisions?
Impact Forecasting: Before deploying a product or feature, anticipate its broader impact. What are the potential negative outcomes? What populations could be affected? Using methods like impact mapping can help visualize the ripple effects of the technology, ensuring that the consequences are well considered.
Establish Accountability Paths: Clearly outline the ways in which your team is accountable for its product. This could involve public transparency reports, collaborating with independent auditors, or engaging with user communities to understand their experiences and act on feedback.
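One way to operationalize the TARES Test is to treat it as a review gate a feature must pass before release. The sketch below assumes a simple boolean per criterion; how a team actually scores each one is the hard, human part, and the field names here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class TARESReview:
    feature: str
    truthfulness: bool           # are claims made to users accurate?
    authenticity: bool           # are we acting in good faith?
    respect: bool                # does it respect user autonomy and dignity?
    equity: bool                 # is it fair across user groups?
    social_responsibility: bool  # does it serve the wider community?

    def passes(self) -> bool:
        """A feature ships only if every criterion holds."""
        return all([
            self.truthfulness, self.authenticity, self.respect,
            self.equity, self.social_responsibility,
        ])

review = TARESReview(
    feature="personalized sleep tips",
    truthfulness=True, authenticity=True, respect=True,
    equity=False,  # e.g., recommendations skew toward one demographic
    social_responsibility=True,
)
assert not review.passes()  # the equity gap blocks release
```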
Being Responsible in Generative AI: A Case Study
Consider a team developing an AI that helps generate realistic conversation scenarios, which could be used for training in customer service, crisis negotiation, or even language learning. The tool can generate human-like responses, adapt to different user inputs, and create dynamic interaction experiences.
Being responsible in this scenario involves asking critical questions at each phase:
During Development: Are we training this model on diverse and ethically sourced data? Are the training conversations inclusive of different cultures, dialects, and perspectives, or do they risk being biased?
Before Deployment: What are the potential negative consequences of releasing this model? Could it be used for malicious purposes, such as generating abusive content or impersonating others? Do we need to put safeguards in place to limit these risks?
After Deployment: Are we continuously monitoring the AI’s interactions? Are there feedback mechanisms for users to report unethical or harmful outputs? How do we address these issues promptly? A minimal monitoring sketch follows this list.
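Here is a minimal sketch of the post-deployment monitoring described above: sample outputs, flag suspect ones, and alert when the flagged rate crosses a threshold. The classifier stub and the threshold are hypothetical stand-ins for a real moderation model and alerting pipeline.

```python
def flag_harmful(output: str) -> bool:
    """Stub for a moderation check; a real system would use a classifier."""
    blocked_phrases = ("impersonate", "abusive")  # illustrative only
    return any(phrase in output.lower() for phrase in blocked_phrases)

def harmful_rate(outputs: list[str]) -> float:
    """Share of sampled outputs flagged for human review."""
    if not outputs:
        return 0.0
    return sum(flag_harmful(o) for o in outputs) / len(outputs)

ALERT_THRESHOLD = 0.01  # hypothetical: alert at 1% of sampled outputs

def check_and_alert(sampled_outputs: list[str]) -> None:
    """Escalate to the team when flagged outputs cross the threshold."""
    rate = harmful_rate(sampled_outputs)
    if rate > ALERT_THRESHOLD:
        print(f"ALERT: {rate:.1%} of sampled outputs flagged for review")
```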
Being responsible in this context means taking ownership not only of the intended use of the AI but also the unintended misuse that might occur. It means being proactive in shaping the ethics of use through design, transparency, and accountability.
Conclusion
"Being Responsible" challenges us to take our understanding and judgments and translate them into ethical action. It calls for a deep awareness of the impact our technologies have on individuals, societies, and the world. In the fast-moving environment of tech and innovation, it’s easy to lose sight of the broader consequences of our work. Responsibility anchors us, ensuring that we build products that don’t just work well but work well for humanity.
In our next and final post, we will explore Lonergan’s fifth cognitional imperative: "Be Loving." This step takes us beyond responsibility into the realm of human flourishing, encouraging us to develop technology that enhances the human experience and contributes to the well-being of others. Stay tuned as we conclude this journey through Lonergan’s transformative insights for the tech world.