What Societal Permission Actually Requires
On Impact, Work, and the Preservation of Human Judgment
This is the third of three essays responding to Satya Nadella’s year-end reflection, “Looking Ahead to 2026.” The first addressed his call for a new theory of mind that accounts for humans equipped with cognitive tools. The second examined the agency question raised by the shift from models to systems. This essay takes up what “societal permission” actually requires—and what we risk if we misunderstand it.
Satya Nadella closes his reflection with an observation that cuts against the grain of how the technology industry usually talks about itself. AI, he argues, requires “societal permission,” and that permission must be earned through “real world eval impact.” We face choices about where to deploy scarce resources—energy, compute, talent—and those choices will matter. This is a socio-technical issue demanding consensus.
The instinct here deserves appreciation. Against the quiet determinism that pervades the industry—the assumption that what can be built will be built, that capability flows inevitably into deployment—Nadella insists on deliberation. We choose where to point these tools. The pointing is ours to do well or badly. It would be easy for a technology executive to speak as though momentum were destiny; that he doesn’t is worth noticing.
But the phrase “societal permission” opens questions it does not answer. Permission from whom, granted how, on what basis? And what happens if the technology seeking permission has already begun reshaping the society from which permission is sought?
Start with impact, since that is the proposed currency of permission. The logic of “real world eval impact” is straightforward: demonstrate that the technology makes things better, and society will grant its blessing.
The trouble is that “better” is not self-interpreting. Every impact metric embeds prior assumptions about what counts—whose experience matters, which outcomes register, what timeframe applies. Productivity gains look unambiguously good until you ask: productive of what, for whom, at what cost to what else? Engagement metrics look like evidence of value until you notice that addiction also engages. The measurement apparatus is not neutral; it encodes choices about value that the measurements then appear to validate.
This is not a complaint about metrics per se. Measurement matters; rigor matters; we should want to know whether interventions work. The problem is mistaking measurement for evaluation. Evaluation asks whether what we’re measuring is what we should care about. That question requires judgment operating prior to measurement—judgment about human flourishing, about goods that resist quantification, about whose experience counts and why. No accumulation of impact data answers whether the impacts serve genuine human good or merely optimize for proxies we’ve confused with the real thing.
E.F. Schumacher saw this decades ago when he noticed that modern economics recognizes only one purpose for work: the production of goods and services. But work, he argued, serves at least two other purposes that this framing renders invisible. It gives people a chance to utilize and develop their faculties—to become more capable, more skilled, more fully themselves through what they do. And it enables them to join with others in common tasks, overcoming isolation through shared endeavor. An economic calculus that registers only output will count as gain any efficiency that increases production, even if it eliminates the formative and social dimensions of work entirely.
AI makes this concrete. A system that automates tasks previously requiring human judgment may score well on productivity metrics—more output per hour, lower cost per unit—while eliminating the occasions through which people developed competence and participated in shared enterprise. The work got done; the workers got hollowed out. Impact measurement that sees only the output will record this as progress. But progress toward what? Schumacher would say we’ve become more efficient at something while losing track of what the something was for.
Wendell Berry has spent a lifetime pressing a related question: what happens to knowledge when we abstract it from the places and practices where it lives?
His concern began with farming but extends wherever work involves care, judgment, and attention to particulars. The good farmer knows this land—its contours, its seasons, what it will bear and what exhausts it. That knowledge accumulates through generations of presence, through failure and adaptation, through the kind of learning that happens only when you stay long enough to see consequences unfold. It cannot be fully captured in transferable rules or scalable systems because it is knowledge of this, not knowledge of any.
When we override such knowledge with distant expertise—the consultant who has never seen the field, the algorithm trained on averages—we do not merely substitute one kind of knowing for another. We destroy the conditions under which placed knowledge develops. The next generation inherits tools that work without understanding why, and gradually the understanding vanishes. The farm still produces; the farmer has become an operator executing procedures someone else designed. Whether this counts as progress depends on what you think farmers are for.
The parallel to AI should be clear. Systems that replace human judgment with algorithmic decision-making may achieve consistency and scale, but they eliminate the contexts in which practical wisdom forms. The physician who learns to trust the diagnostic algorithm over her own perception stops developing the perception. The teacher who follows the adaptive learning system’s recommendations stops learning to read a classroom. The craftsman whose work is decomposed into optimizable steps stops being a craftsman and becomes a component. Each gains efficiency; each loses something that efficiency cannot measure.
Berry is sometimes dismissed as a nostalgist, but his argument is fundamentally epistemological. Certain kinds of knowledge exist only in practice, only in place, only in the patient attention of someone who has stayed. Abstract that knowledge into transferable systems and you have not preserved it; you have replaced it with something else—something useful, perhaps, but not the same. The question is whether we know what we’re trading away.
There is a tradition of social thought that has been asking these questions systematically for over a century. Catholic Social Teaching developed in response to industrialization’s disruptions, and its animating concern—the dignity of the human person as criterion for evaluating economic arrangements—speaks directly to the AI moment.
The dignity at stake is not abstract. It is the concrete capacity of actual people to flourish: to develop their gifts, to participate in community, to exercise meaningful agency in their own lives. When John Paul II wrote that labor has priority over capital, he meant that work is not merely a factor of production to be optimized but an expression of personhood. What we do to work, we do to workers. Arrangements that treat labor as a cost to be minimized may succeed economically while failing humanly.
Two principles from this tradition bear directly on the question of societal permission. Subsidiarity holds that decisions should be made at the lowest level capable of addressing them effectively; what individuals, families, and communities can handle should not be absorbed by larger systems without compelling reason. The principle does not oppose scale as such, but it insists on justification. When AI systems concentrate decision-making—pulling judgment out of distributed human hands and into centralized algorithms—subsidiarity asks what is gained and what is lost. Efficiency is not automatic justification; the question is whether the efficiency serves the people whose agency it displaces.
Solidarity insists that the common good includes everyone, particularly those most vulnerable to exploitation or exclusion. A transformation that benefits some while rendering others superfluous has not demonstrated its goodness merely by benefiting some. Those displaced, diminished, or made marginal by technological change have claims that productivity gains do not automatically override. The farmer pushed off the land by industrial agriculture, the factory worker replaced by automation, the knowledge worker whose judgment is absorbed by AI—solidarity requires that their flourishing count in the calculus, not merely their productivity.
What emerges from this tradition is an evaluative framework richer than impact metrics can capture. Integral human development—the full flourishing of persons in their material, social, cultural, and spiritual dimensions—cannot be reduced to measurable outcomes without losing what makes it integral. The judgment required to assess whether AI serves such development is not algorithmic; it is the kind of practical wisdom that weighs incommensurable goods, attends to what quantification obscures, and remains accountable to those whose experience the numbers miss.
So who grants societal permission, and through what process? The question is harder than it looks.
“Society” is not a subject that deliberates. It has no unified will, no moment of collective decision. What we call societal permission emerges from accumulated choices—individual adoption, institutional procurement, regulatory action, market dynamics, cultural drift. The emergence happens through countless interactions that no one controls and no one fully perceives. By the time we recognize that permission has been granted, the granting has already occurred through processes that were never framed as permission-granting.
This diffusion creates room for distortion. Those who benefit from a technology’s deployment have strong incentives to advocate for it and typically possess resources to make their advocacy effective. Investors, technologists, early adopters, those whose work is augmented rather than replaced—these voices are loud, articulate, well-positioned. Those who bear costs often lack comparable standing. Their experience surfaces as lagging indicators: displacement statistics, community decline, mental health trends observed after the fact. By the time the costs become legible, deployment has achieved momentum that makes course correction difficult.
What passes for societal permission may be the permission of the advantaged, mistaken for consensus. The voices that dominate do not represent the whole. Berry’s farmers, Schumacher’s craftsmen, the workers whose practical wisdom is being optimized away—they are not absent from the conversation because they have nothing to say. They are absent because the processes through which “consensus” forms systematically underweight them. A permission that emerges from such processes is not society’s permission. It is power ratifying itself.
But there is a still deeper problem, one that loops back on itself in ways that resist easy resolution.
Genuine permission requires judgment—the capacity to assess what is being permitted, to understand its implications, to weigh considerations, to reach a warranted conclusion. Permission is an evaluative act. Someone must understand enough to judge, and judge well enough to grant or withhold meaningfully.
But judgment is precisely what unreflective AI deployment threatens to erode. We have seen this in earlier essays: AI systems that substitute for human cognitive operations remove the occasions for exercising those operations, and exercise is what maintains capacity. Outsource attention and attention attenuates. Outsource judgment and judgment atrophies. The degradation is gradual, invisible in any given instance, legible only in retrospect when we reach for capacities and find them diminished.
What this means for societal permission is troubling. The technology seeking permission may have already degraded the evaluative capacity on which permission depends. The society being asked to judge is a society already shaped by prior deployments—attention fragmented by platforms optimized for engagement, critical thinking eroded by information environments designed for persuasion, practical wisdom thinned by systems that perform without explaining. The judge has been compromised by the defendant.
And so permission becomes nominal. It is granted by people who no longer possess the operations that meaningful granting requires. The form persists—consultation processes, impact assessments, regulatory review—while the substance leaches away. Consent without comprehension is not genuine consent, but it looks enough like consent to satisfy the procedural requirements. The box gets checked; the capacity to check well has vanished.
This circularity describes dynamics already underway. And breaking it requires more than better metrics or more inclusive consultation. It requires preserving and cultivating the human capacities on which judgment depends—attention, understanding, evaluation, decision. These capacities are formed through education, practice, community, culture. They are precisely what efficiency-maximizing systems tend to treat as friction.
What would genuine permission require? Not a one-time license but an ongoing relationship. Not mere acceptance but comprehending assessment. Not the preferences of beneficiaries but the judgment of communities attending to their own flourishing.
This demands, first, that we protect the capacity for judgment itself. The institutions that form people in careful attention, rigorous thought, honest evaluation—schools, universities, religious communities, professional guilds, the informal mentorship through which practical wisdom passes—these are not peripheral to the technology question. They are central to it, because they produce the evaluators on whom meaningful permission depends. AI deployed in ways that undermine these institutions undermines the conditions of its own legitimate acceptance.
It demands, second, that we make space for voices the dominant processes exclude. Those whose work is threatened, whose communities are being reorganized, whose children will inherit a world where certain kinds of knowing have been eliminated—they must be heard before permission is granted, not discovered afterward as collateral damage. Subsidiarity means they participate in decisions affecting them. Solidarity means their flourishing counts. These are conditions without which permission is merely power declaring itself welcome.
It demands, third, time. Not delay for its own sake, but the time that genuine understanding requires. Insight cannot be rushed; judgment needs room to develop; evaluation matures through reflection that efficiency forecloses. A society pressured to decide before it understands will decide without understanding, and the decision will not truly be its own. We must resist the tempo that technology imposes when that tempo is too fast for wisdom.
And it demands, fourth, the ongoing possibility of revocation. Permission that cannot be withdrawn is not permission but subjugation. Society grants provisionally, continues to assess, reserves the right to change course when assessment warrants. This requires maintaining the capacity for assessment—the institutions, the time, the judgment—across the duration of the technology’s deployment. The permission is not a single act but a continuing relationship of evaluation and accountability.
Nadella is right that the choices about deploying AI will matter. Where we direct scarce resources reflects what we value and shapes what we become. The choices are genuinely ours to make well or badly.
But there are two sets of choices, not one. The first concerns where to deploy AI—which problems, which sectors, which applications. These are the choices Nadella names, and they matter. The second concerns whether to preserve the human capacities on which meaningful choice depends. These choices are less visible, less often named, but they condition the possibility of the first. A society that has lost the capacity for judgment cannot choose wisely about AI deployment, no matter how many consultation processes it conducts.
The technology seeking permission reshapes the context in which permission is sought. It forms habits, alters capacities, reorganizes work, shifts what we notice and what we ignore. The society granting permission today is not the society that will live with consequences, because the technology will have reshaped that future society in the meantime. To grant wisely now requires anticipating who we are becoming and whether we want to become that.
Schumacher asked whether our tools remain scaled to human capacity to understand and direct. Berry asked whether we know what we trade away when we abstract knowledge from place and practice. The tradition of Catholic Social Teaching asks whether our arrangements respect the dignity of persons and the integrity of communities. These are not antiquarian concerns. They are precisely the questions that genuine societal permission requires us to answer.
If we cannot answer them—if the pace of deployment outruns our capacity to evaluate, if the processes through which consensus forms systematically exclude those who bear costs, if the technology itself erodes the judgment on which meaningful permission depends—then what we call permission is something else. It is momentum mistaken for choice, acquiescence dressed as consent, the powerful granting themselves welcome in the name of a society that was never genuinely asked.
The alternative is harder and slower. It requires protecting the human capacities that evaluation demands. It requires hearing from those whom efficiency would silence. It requires time that productivity pressures constantly foreclose. And it requires the honesty to recognize when what we call permission is not permission at all.
Taylor Black writes about AI, human flourishing, and the Catholic intellectual tradition. He serves as head of AI & venture ecosystems in Microsoft’s Office of the CTO and is Founding Director of the Institute for AI & Emerging Technologies at Catholic University of America.

