Ready Player Two – How I Use AI as an Adversarial Thought Partner
Reflections on a Year Spent Sparring with AI, and Wondering Whether Polymathic, Agentic Orchestration Might Become a New Baseline Skill
I use an AI for almost everything I write. Emails to my team, notes to my family, strategy documents, project plans, research reviews, code comments, even rough sketches of business ideas — all of it gets run through an AI at some point. But I don’t use it as an omniscient assistant or a magical answer box. I use it as an adversarial thought partner. In practice, this means every idea I have meets a tireless sparring partner that questions assumptions, offers counterpoints, suggests rewrites, and generally refuses to let me off easy. The AI is always available, mostly objective, occasionally wrong, sometimes insightful, and never truly tired. It’s like having a colleague on call 24/7 who will critique your work bluntly but never get offended if you ignore the advice. This approach has reshaped how I work — and, more broadly, how I think about productivity and creativity in an AI-mediated era.
Turning AI into an Adversarial Collaborator
Harnessing an AI as a thought partner isn’t plug-and-play. It takes intentional effort to get the AI to behave in a useful way. Early on, we all discovered that if we just ask for an answer or a paragraph, we’ll get a generic or surface-level response. That’s not much better than a quick Google search. Instead, I learned to engage in an open-ended conversation with the AI, treating it less like a search engine and more like a collaborator. I prompt it to stress-test my ideas: “What are all the reasons this strategy might fail?” or “Attack this hypothesis and tell me where it’s weak.” In response, the AI will break apart my arguments, identify blind spots, and even rebuild stronger versions of my proposals.
This adversarial stance is key. Rather than seeking confirmation or quick answers, I invite the AI to argue with me. For example, when drafting an important email or memo, I’ll ask the AI to play devil’s advocate. If I propose a plan in writing, I prompt: “List the top 5 critiques a smart skeptic might raise about this plan.” The feedback comes instantly — perhaps noting I’ve overlooked a dependency, or that my tone might not land well. Likewise, for a personal email that I’ve written in frustration, I might ask the AI to review it and point out anything that could be misinterpreted or offensive. It will politely highlight, for instance, that my phrasing sounds accusatory rather than constructive. In all these cases, the AI acts as a gut-check and sounding board, probing my thinking from different angles. It’s an adviser who’s empathetic to my goals but brutally honest about my ideas’ weaknesses.
Crucially, I give the AI plenty of context to work with. A generic prompt yields generic output. So I feed in background: the goals of the email, the audience’s perspective, the constraints I’m facing. When working on a strategy document, I’ll paste in the outline or key data and say, “Given this context, what holes can you poke in these assumptions?” If I’m brainstorming a product idea, I share the market research or user persona info I have and ask the AI to role-play as a skeptical investor questioning the idea’s viability. The more specific the scenario I set, the more pointed (and useful) the AI’s adversarial feedback becomes.
One technique I use often is asking the AI to adopt particular personas or frameworks. For instance, I’ll prompt: “How would a CFO tackle this problem?” or “If Satya and Pope Francis were co-CEOs, what policy would they set here?” This forces the model to step out of the default and respond from a specific viewpoint. It’s a way of injecting diverse perspectives into my solo brainstorming. A question Google could never answer – imagining two famous people co-authoring a policy – the AI will cheerfully engage with. The results aren’t gospel, of course, but they often jolt me into considering angles I hadn’t before. By role-playing scenarios or experts, the AI helps me simulate feedback from a wide range of voices at the click of a button.
The Mechanics of an Intentional AI Workflow
Using AI as a ubiquitous collaborator has required me to develop a whole workflow around it. I’ve become a part-time AI wrangler, spending significant effort to prompt and steer the model deliberately. This is far from “one-shot” usage; it’s an iterative process. I maintain saved chat workspaces for different projects so that each has its own context. For example, I have one thread where the AI and I only discuss my ongoing marketing plan; another thread dedicated to a software design I’m prototyping. This separation allows me to pick up right where I left off on a topic without needing to re-explain everything each time. It’s a bit like having multiple dedicated advisers, each immersed in a different workstream of mine. It also makes context-switching between tasks much easier – I can jump from editing a contract to brainstorming a blog post by hopping into the respective AI “workspace,” with minimal mental reset. The AI remembers the last discussion in that thread, so I don’t have to. That ability to effortlessly carry context between workstreams is a small superpower in my day-to-day flow.
I also invest time up front in prompt engineering and what I call meta-prompts – prompts about how to prompt. Before diving into a complex task with the AI, I often give it an initial set of instructions about style or constraints. For instance, I might start a session by saying: “You are an editor who specializes in concise, witty prose. Our goal is to draft a press release in that style. Ask me questions if you need more info.” (Although my prompts end up being much longer and more precise.) This “setup” prompt serves as a mini system directive, shaping the AI’s behavior throughout our session. It’s my way of nudging the model into an intentional stance (concise and witty, in this case) rather than relying on its default tone. Throughout the interaction, I’ll reinforce or adjust these constraints. If it starts rambling or getting generic (which it totally does), I interject with a reminder like, “Focus on the strategic insight; skip the fluff.” If its first output is off-base, I’ll clarify the prompt or even show it a quick example of the tone I want. Essentially, I treat prompt-writing as an interactive dialogue where I gradually sculpt the AI’s responses to fit the need.
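For the technically inclined, here’s a minimal sketch of what that “setup” prompt looks like when driven through the OpenAI Python SDK rather than a chat window. The editor persona, the example prompts, and the model name are placeholders, not my actual prompts.

```python
# A minimal sketch of the "setup prompt" idea using the OpenAI Python SDK.
# The persona, the prompts, and the model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

setup = (
    "You are an editor who specializes in concise, witty prose. "
    "Our goal is to draft a press release in that style. "
    "Ask me questions if you need more information."
)

messages = [{"role": "system", "content": setup}]

def turn(user_text: str) -> str:
    """Send one user turn; the persistent system message shapes every reply."""
    messages.append({"role": "user", "content": user_text})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = resp.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(turn("Here are the key facts for the press release: ..."))
# Mid-session course corrections are just more turns in the same thread:
print(turn("Focus on the strategic insight; skip the fluff."))
```

The point is simply that the system message persists across turns, so the stance I set at the start keeps shaping the output without me restating it.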
Sometimes I even chain models against each other. This is a favorite trick for truly high-stakes or creatively demanding work. I might use one AI model (say, GPT-4.5) to generate a detailed plan or draft, then bring in another model (say, Claude) solely to critique that output. Model A produces, model B evaluates. In one instance, I drafted a proposal with GPT-4 and then prompted Claude with: “You are a tough investor. Here’s a proposal – tear it apart.” The second AI proceeded to identify several weaknesses and questions that the first draft hadn’t addressed. By bouncing the task between different AI systems, I essentially set up an AI-on-AI debate, with me as the referee who synthesizes the outcome. This multi-model choreography plays on their differing strengths and biases. One model might be better at structured logic, another at empathetic language or strategic insight. Using both means I get a richer review. In fact, when I prepared a recent board presentation, I ran my slide deck through two separate AI agents in “board member” mode – the combined feedback from OpenAI’s and Anthropic’s models covered nearly 90% of the same points our real board eventually brought up. That kind of validation – two different AIs and then actual humans all converging on similar critiques – gives me a lot more confidence that I’ve not missed something obvious.
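To show the mechanics, a bare-bones version of that “model A produces, model B evaluates” hand-off might look like the sketch below, using the OpenAI and Anthropic Python SDKs. The model names, the brief, and the prompts are placeholders; in practice I usually do this interactively across two chat windows rather than in a script.

```python
# Bare-bones sketch of the "model A produces, model B evaluates" pattern.
# Model names, the brief, and the prompts are placeholders; both clients read
# their API keys from the environment.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

brief = "Proposal: launch a subscription tier for our analytics product ..."

# Model A drafts.
draft = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Draft a one-page proposal from this brief:\n{brief}"}],
).choices[0].message.content

# Model B critiques the draft adversarially.
critique = claude_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"You are a tough investor. Here is a proposal. Tear it apart:\n\n{draft}",
    }],
).content[0].text

# I referee: read both, keep the critiques that land, and fold them into the next revision.
print(critique)
```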
Finally, I’ve integrated a level of tool orchestration around the AI. Beyond just chat interfaces, I use plugins and scripts to extend what the AI can do. I hook it up with my notes and knowledge bases so it can pull in facts rather than hallucinate. I sometimes have it draft an email in one tool, then use another AI tool to analyze the sentiment of that draft. These chains of tools and prompts mean that on any given task, there’s a mini assembly line of AI and software working together under my direction. It’s not fully automated by any stretch — I’m the one breaking the task into sub-tasks and checking each output — but it feels like conducting an orchestra of semi-intelligent agents. This approach, admittedly, is a bit advanced and can introduce overhead. Yet I suspect that this polymathic orchestration of multiple AI helpers is going to be a foundational skill going forward: the ability to coordinate various specialized AIs (and human skills) to achieve an outcome. In a sense, I’ve become less of a specialist in any single thing and more of an integrator of many tools and domains. My value is shifting from knowing an answer, to knowing how to ask the right questions and combine the right resources – a mix of talents, amplified by AI, that lets a single individual tackle problems that used to require a team.
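As a rough illustration of that assembly-line idea, the sketch below chains two passes over the same piece of work: one call drafts an email from bullet points, a second call reviews the draft’s tone before I look at it. The prompts and model name are placeholders, and the step where I pull facts from my own notes is omitted for brevity.

```python
# Rough sketch of a two-step assembly line: draft first, then tone-check the draft.
# Prompts and model name are placeholders; retrieval from my notes is omitted.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One single-turn call; each pipeline step is just another call."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: turn my bullet points into a draft email.
draft = ask(
    "Turn these bullet points into a short, friendly email:\n"
    "- the project slipped a week\n"
    "- we need a decision on scope by Friday"
)

# Step 2: a separate pass that only evaluates tone and sentiment.
review = ask(
    "Assess the tone of this email. Flag anything that could read as "
    f"defensive or blaming:\n\n{draft}"
)

print(draft)
print(review)
```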
Research at Scale, Grounded in Reality
One of the most profound changes in my workflow is how I conduct research with AI assistance. I have learned (sometimes the hard way) that letting an AI just invent facts or sources is a recipe for disaster. These models will confidently produce text that looks authoritative but can be completely fabricated. To counter this, I now pair the AI with source-grounded research practices. In practical terms, this means I start any research-heavy task by supplying the AI with real reference material. Before asking for an analysis or a summary, I feed it verified data: excerpts from articles, statistics from reports, quotes from experts, or even entire PDFs of reference books. Rather than trusting the AI to conjure the knowledge base, I hand it the library I want it to use.
For example, if I’m writing a market analysis, I’ll begin by giving the AI the latest market research PDF or a few key market statistics. I explicitly instruct: “Use only the data provided below for your analysis.” The transformation is immediate — the AI’s output becomes more factual, specific, and (crucially) traceable to real sources. I often ask it to cite where a particular insight came from, which it can do if the source text was in the prompt. This technique drastically reduces hallucinations. The model shifts from being a storyteller to being an analytical engine working on the raw materials I’ve given it. It’s akin to giving your human assistant a stack of articles and saying “read these and tell me the common themes,” instead of asking them to wing it from memory.
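In code, the grounding step is little more than careful prompt assembly: include only the excerpts I’ve vetted, tell the model to use nothing else, and ask it to point back at the excerpt numbers. A minimal sketch with the OpenAI SDK follows; the excerpts, the question, and the model name are all placeholders.

```python
# Minimal sketch of a source-grounded prompt: the model sees only excerpts
# I have vetted and is told to cite them by number. All content is placeholder.
from openai import OpenAI

client = OpenAI()

excerpts = [
    "[1] Market report excerpt: segment revenue grew 12% year over year ...",
    "[2] Analyst note excerpt: churn in the mid-market tier rose last quarter ...",
]

instructions = (
    "Use only the data provided below for your analysis. "
    "Cite the excerpt number, e.g. [1], for every claim. "
    "If the excerpts do not support an answer, say so instead of guessing."
)

prompt = (
    instructions
    + "\n\n"
    + "\n".join(excerpts)
    + "\n\nQuestion: What are the two biggest risks to next year's plan?"
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```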
I also use AI to accelerate literature reviews and exploratory research. I’ll pour in a bunch of my own notes or excerpts from papers I trust, then have a dialogue with the AI to draw connections. It’s especially good at synthesizing across sources: “Based on these five journal abstracts I provided, what are the recurring research questions and how do they differ?” or “Summarize the consensus and the main point of debate from these sources.” Because the context is constrained to the material I vetted, I can trust that the summary is grounded in something real. Essentially, I let the AI do the heavy lifting of chewing through text, but I make sure I choose the diet. In this way, the AI becomes a powerful research assistant that works at scale and speed — scanning dozens of documents in seconds — yet I retain control over the veracity of the information pipeline.
This approach addresses one of AI’s biggest weaknesses: truthfulness. By tethering the AI to a reliable knowledge source, I bypass a lot of the well-known pitfalls of it generating plausible nonsense. I’ve also become more adept at spotting when the AI is BS-ing. The moment I see an ultra-specific claim with no reference, or a quote that sounds too perfect, I pause and verify it. Often, a quick cross-check (sometimes even asking the AI “source?” explicitly) reveals whether it’s genuine or hallucinated. Yes, this adds a bit of friction — I can’t just accept its output blindly — but it’s a necessary discipline. In fact, it mirrors what any good researcher or analyst should do: trust, but verify. AI hasn’t removed that need; if anything, it has heightened my vigilance. I’ve essentially trained my AI partner to know that I will fact-check it. Over time, with the right prompting and the inclusion of citations, the model has become more upfront about its limits, often warning me when it’s unsure or when a claim would need to be verified. When it doesn’t, the responsibility is mine to be skeptical. This dynamic, ironically, has made me a more critical thinker. I don’t take even the AI’s confident answers at face value unless I see the grounding. And when the source of truth is provided, I can dive deeper into it if needed. It’s a collaborative research method: I curate and validate, the AI synthesizes and suggests.
Drafting, Refining, and Iterating to My Own Voice
Perhaps the most common way people imagine using AI in writing is as a draft generator — to beat the blank page. Indeed, one of the immediate advantages I found is that I never have to start from scratch anymore. For any given piece of writing, from a blog post to a project proposal, I can ask the AI for a first pass: “Here are bullet points for what I want to cover, please turn this into a rough draft.” The draft I get back is rarely great prose, but that’s not the point. It gives me something to react to. It’s much easier to start with something on the page than stare at an empty screen, and AI reliably provides that “something”. Often it’s full of clichés or sections that make me cringe — but even that is useful, as a contrast. Seeing a bland paragraph helps me realize, “No, that’s not the tone I want; I want it more punchy.” A quick prompt to rewrite in a more energetic tone, or simply me rewriting a sentence or two myself, and the copy improves. We bounce back and forth like this, the AI and I, refining each paragraph. This iterative loop continues until the piece starts to sound like my voice and meets my standards.
I’ve gone through as many as a dozen iterations on an important document. Each cycle, I might focus on a different aspect. One round to get the structure and flow right (where AI might help suggest a more logical ordering of sections). Another round to tighten the language (where I prompt, “shorten this section and make it more direct”). Another to inject a bit of wit or personality (sometimes I’ll say, “give me two humorous analogies for this point” and then I’ll choose one I like). It truly feels like co-writing: the idea and intention remain mine throughout, but the AI offers endless rephrasings, alternatives, and even challenges that push me closer to what I envision. I’ve had it happen that after enough iterations, very little of the AI’s original phrasing remains — I might have replaced or edited most of it — yet the process of bouncing off the AI was what propelled me there. As one writer noted, the iteration process is one of idea refinement; the AI enables me to develop the idea thoroughly “free of stylistic concerns” at first, ensuring the core concept is sound before I polish the final wording. This means by the end, what I’m saying has been pressure-tested and clarified, not just nicely worded. The final product is in my voice, but it’s undeniably influenced by this back-and-forth with a tireless editor who never runs out of suggestions.
It’s important to stress that I never trust the first draft from AI as the final output. That initial draft is a starting point, not an end point. It’s “momentum-building” material to overcome blank-page paralysis, but it’s usually far from client-ready or publish-ready. AI text often has a certain generic style — a bit sterile, lacking specific anecdotes, or using formulaic transitions. My job in the iterative process is to infuse the human elements that the AI cannot generate on its own: the personal story that illustrates the point, the nuanced tone that aligns with my intent, the judicious selection of which facts to highlight. Sometimes the AI drafts a perfectly logical paragraph that just doesn’t feel right for the narrative — so I change it or ask the AI to try again with a different angle. By the time I declare something “done,” I’ve usually touched every sentence, either directly or via directive. In many cases I’ll do a final pass entirely by myself, reading it aloud to catch awkward phrasing (a trick from old-school writing) and making final tweaks. The end result might be a piece that sounds like I wrote it from scratch, even if an AI helped assemble every sentence along the way. That’s the goal: the final product reflects my standards and voice, not the AI’s generic fingerprint. I’ll use whatever the AI gives me as raw clay, but the sculpting and polishing — that’s on me as the human and author.
Interestingly, this iterative co-writing has made my overall writing better even when I’m not using AI. Working with a model that can generate ten variants of a sentence at will has expanded my sense of possibility in phrasing. It’s like training with a partner who is drastically different from you — you pick up new moves. I’ve internalized some of the AI’s strengths (like how to structure an argument clearly, or how to simplify complex jargon for a broader audience) while also becoming more aware of its weaknesses (like when language becomes too generic or when an argument lacks emotional resonance). In a way, editing AI’s output has trained me to edit my own work more ruthlessly. I find myself asking, even when writing solo, “Is this section fluff? Would I cut this if AI had written it?” and often the answer is yes, leading me to tighten the draft. The iterative partnership is making me a more critical and flexible writer. It’s a strange symbiosis: by teaching the machine to adopt my voice, I’ve also sharpened my own voice.
Fast Prototyping and Creative Experiments
Beyond writing and planning, one area where this AI thought partner approach truly shines is in rapid prototyping of ideas. This goes for written ideas as well as more technical or design-oriented ones. In the past, if I had a concept for a tool or a process, I might sketch it on a whiteboard or write a lengthy spec and then wonder if it actually made sense. Now, I often try a quick AI-driven experiment. For instance, I was curious about creating a small chatbot agent that could handle a specific task (like a bot that helps schedule meetings in a quirky, personable way). Rather than embark on a full software project, I described the idea to the AI and essentially had it simulate how the agent would behave. With a series of prompts, I had ChatGPT role-play the entire interaction: I gave it a fake meeting request scenario and prompted it step-by-step to respond as my hypothetical scheduling agent would. This exercise surfaced a bunch of things I hadn’t considered — the AI highlighted conflicts in the schedule, asked clarifying questions, even made a joke to apologize for a double-booking. In 30 minutes of “conversation prototyping,” I identified improvements for the design of this agent. It’s a form of speculative design with AI as the simulator.
When it comes to actual prototyping of software or workflows, AI dramatically compresses the iteration loop. I’ve used it to whip up rough code snippets and even working mini-apps in a fraction of the time it would take me normally. I was once a professional full-stack programmer, but I’ve gotten rusty after a decade or so of not living in SublimeText every day. With AI I can quickly get back to about 80% of what I used to be able to do. I’ll say, “Generate a simple HTML page with a form that does X,” and it will give me starter code. I then test and refine by describing the changes: “Now make that button do Y instead, and handle input Z.” It’s not flawless — I often have to debug or correct things — but the speed is astounding. Recently, AI-assisted tools have emerged that can turn natural language into running prototypes almost instantly. A colleague built a 2D game with an AI opponent in 10 minutes just by describing what should happen, step by step. I’ve tried similar approaches for web app ideas: describing an interface or a feature to the AI and letting it generate the skeleton, which I then tweak. The feeling is that of having a super-fast junior developer who follows instructions literally and doesn’t complain about redoing the work. The real limitation becomes how clearly I can articulate what I want — a great exercise in sharpening the vision.
Even for non-technical creative experiments, AI is a catalyst. I can brainstorm five variations of a logo tagline, or ten hypothetical product names, in a single prompt. If I’m designing a workshop curriculum, I’ll ask the AI to pretend to be a student and react to the outline, catching whether it’s engaging or dull. For conceptual thought experiments, I sometimes chain multiple AIs with each other. I had an idea about simulating an argument between two historical figures on a modern problem — a sort of Turing Test theater. I assigned ChatGPT to be Person A and another instance to be Person B, gave them each a brief (and some historical context), and then moderated a debate between them by alternating prompts. The result was messy but enlightening: it was like quickly prototyping a mini Socratic dialogue without writing the script myself. This kind of idea stress-testing and exploration, done in minutes, would have taken me days of writing or programming to simulate by hand. The AI, as a universally pliable simulator, lowers the cost of trying out ideas to nearly zero. That means I can afford to kill more bad ideas early and let the better ones evolve faster. If an experiment with the AI falls flat, I’ve lost an hour at most — far better to discover that a concept doesn’t work at prototype stage than after investing weeks. Conversely, if something shows promise in the AI-driven simulation, it gives me confidence (and often a blueprint) to pursue it further in the real world.
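Mechanically, the debate trick is just two separate conversation histories where each speaker is fed the other’s last statement. A stripped-down sketch of that alternating loop with the OpenAI SDK is below; the personas, the topic, the turn count, and the model name are stand-ins, and in my actual experiment I moderated by hand rather than running a loop.

```python
# Stripped-down sketch of a two-persona debate: two histories, alternating turns.
# Personas, topic, turn count, and model name are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"

def speak(history: list, heard: str) -> str:
    """Give one debater the other's last statement and return the reply."""
    history.append({"role": "user", "content": heard})
    resp = client.chat.completions.create(model=MODEL, messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

person_a = [{"role": "system", "content": (
    "You are Person A, a 19th-century economist. Stay in character and "
    "respond in two short paragraphs per turn.")}]
person_b = [{"role": "system", "content": (
    "You are Person B, a present-day urban planner. Stay in character and "
    "respond in two short paragraphs per turn.")}]

statement = "Opening question: how should a modern city fund public transit?"
for _ in range(3):  # three exchanges each, moderated by the loop
    statement = speak(person_a, statement)
    print("A:", statement, "\n")
    statement = speak(person_b, statement)
    print("B:", statement, "\n")
```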
Three Key Advantages (and a New Kind of Productivity)
Stepping back, these practices — using AI as an adversarial partner across emails, plans, research, drafts, and prototypes — amount to more than just efficiency tricks. Together they herald a broader shift in how productivity and creativity unfold in knowledge work. I often summarize the shift for skeptics in terms of three big gains I’ve personally experienced:
Seamless Context-Switching: I can jump between different workstreams with far less mental friction. Each project has an always-on AI deputy that “remembers” where we left off, so I spend less time reorienting myself. The cognitive load of resuming a complex task after an interruption is lower because the AI can recap and carry the thread forward. In a world where we all juggle multiple hats, this makes a huge difference in maintaining momentum.
No More Blank Pages: The AI generates imperfect but usable first drafts for virtually anything – emails, memos, slides, brainstorm lists. I don’t stall at the starting line anymore. Instead of dreading the blank page, I leverage the model to put something down, which I can then reshape. It’s like always having a rough sketch to react to. This means I get from zero to one much faster on creative tasks, and I rarely feel the paralysis of not knowing how to begin. Even if the draft is mediocre, it kickstarts the process and gets my own creative juices flowing.
Rapid Vetting of Ideas: Perhaps most transformative is how ruthlessly efficient this workflow is at killing bad ideas early and refining good ones. By throwing every idea into the ring with an adversarial AI, I immediately see the weak spots and can decide whether to iterate or abort. It’s the venture studio model at its best: only the ideas that survive the onslaught of critique and stress-testing move forward. This saves me from investing too much time down dead ends. In effect, I’ve added an automated sounding board that filters out a lot of noise in my creative process. The outcome is that when I do commit to a plan or a piece of writing or a design, it’s already been through mini “trial by fire” sessions with the AI. I have more confidence in its robustness (or I’ve consciously chosen to take a risk if the AI flagged issues). This accelerates learning by failure – I can fail an idea in 30 minutes via AI, rather than in 30 days of real-world piloting.
Taken together, these gains point to a new kind of productivity that is less about outputting more stuff (though that can happen too) and more about amplifying cognitive agility. Easier context switching means my attention is less fragmented; quick first drafts mean my creative engines start roaring sooner; rapid idea vetting means I allocate my time and energy more intelligently. There’s a compounding effect as well – since I’m rarely stuck or idle, I find myself tackling more diverse tasks in a given week than I used to. The AI acts like a lubricant in the gears of my workflow, reducing the friction that traditionally slows knowledge work (writer’s block, research drudgery, decision paralysis). The result isn’t that I’ve become a superhuman solo worker who doesn’t need others – in fact, I deeply value human collaboration even more for what AI can’t provide – but it does mean that my independent throughput and learning velocity have increased. In concrete terms, I finish workstreams faster and with more confidence in their quality. Strategically, I can take on more ambitious projects because I know I have this ever-present “coach” to help me think and create.
The Friction and the Skepticism
It would be dishonest to portray this AI-augmented way of working as all upside. In truth, it comes with its own set of frictions and challenges. For one, driving an AI like this requires a lot of mental effort and patience. Prompting well is a skill that I had to cultivate, and it can be frustrating. There are days when I just can’t get the AI to output what I need — it misunderstands, or gets stuck repeating the same point, or produces a block of text that’s technically correct but contextually useless. Achieving the right tone or depth can feel like coaxing a very stubborn, very literal-minded assistant. I’ll rewrite the prompt five times, only to realize the issue was that I didn’t provide a crucial piece of context. At those moments I sometimes think, “Wouldn’t it be easier if I just did this myself from scratch?” The answer might be yes for that single instance, but I remind myself that it’s an investment: I’m effectively training my digital assistant through these iterations. Still, the struggle is real. This isn’t a flawless Jarvis from the Iron Man films; it’s more like a precocious student who needs clear instructions and occasionally goes on tangents.
There’s also the matter of trust and skepticism, both mine and others’. Early on, I was wary of trusting any content that came from the AI without double-checking. Over time I’ve built some trust in its capabilities, but I maintain a healthy skepticism. I fact-check important things, as mentioned, and I’m always aware that the AI doesn’t truly understand truth — it just predicts likely text. This means I am ultimately accountable for everything we produce together. If a mistake slips through, that’s on me. I’ve had a couple of close calls where a misremembered fact from the AI almost made it into a deliverable. Those instances reinforced the need for vigilance. I also encountered the overconfidence problem: after a string of successes using AI, I’d start trusting it too implicitly and speed through tasks, only to catch a subtle error later. It’s a bit like working with a very knowledgeable but occasionally unreliable colleague — you must verify the critical pieces.
Moreover, not everyone around me is immediately comfortable with this mode of work. There’s been some skepticism from colleagues when I mention that “AI helped with this draft” or that I used a chatbot to sanity-check a plan. Some worry about reliability; others, more bluntly, don’t trust something if they know AI had a hand in it. In fact, over half of workers in one survey said they don’t yet trust outputs from AI tools like ChatGPT in the workplace. I’ve felt this directly: handing a document to a peer, I sometimes get asked, “This is good — but was it you or the AI?” The subtext is clear: there’s a stigma or doubt that if an AI was involved, maybe it’s less valid or it might contain hidden inaccuracies. To address this, I’ve learned to be transparent about my process without overselling it. I’ll say, “I drafted this in collaboration with an AI assistant, and I’ve verified the content.” Over time, as I consistently deliver quality, the skepticism diminishes. People care about the result, and if the results are good, the tools matter less. But I don’t blame folks for being cautious — after all, AI can spout nonsense, and not everyone is going to have insight into how carefully I managed the process.
There’s also a subtle friction in terms of workflow disruption. Using AI deeply in everything means I’m constantly switching between my normal thinking and “prompt mode.” Sometimes, in the middle of writing, I have to stop and craft a prompt to ask the AI for some detail or alternative phrasing. It interrupts the flow a bit, like pausing a conversation to look something up. Ideally, the AI would be more seamlessly integrated (and it’s getting better with things like voice interfaces or plugins), but currently it can feel like juggling — my own train of thought alongside guiding the AI. I’ve had to develop a rhythm: type a bit myself, call on AI for a suggestion, evaluate it, continue typing, and so on. When it works, it feels like a dance. When it doesn’t, it can break my concentration more than it helps. I suspect these kinks will smooth out as tools improve and as I refine my prompting habits. But right now, friction still exists in the man-machine collaboration. It’s not a mind-meld; it’s a dialogue, and dialogue can be messy.
Finally, it’s worth noting the limits of the AI’s capabilities. Despite the buzz about recent models, they still have obvious holes. They lack genuine common sense in many cases. They can’t truly understand emotional subtext (though they try), which means if I’m writing something with a lot of human nuance — say, a letter about a sensitive personal matter — the AI’s suggestions might miss the point entirely, or worse, strike the wrong tone. I encountered this when I attempted to use the AI to draft a condolence message; the output was grammatically fine but emotionally off, almost hollow. I scrapped it and wrote from the heart instead. That’s a reminder that the AI is a tool, not a replacement for human empathy or judgment. It augments my work; it doesn’t replace the need for me to care about the work. I remain the final editor, the moral compass, and the decision-maker in the process.
The Polymathic, Agentic Future of Work
Stepping back, I realize that what I’m doing is cultivating an integrated cognitive toolkit – an orchestra of different AI capabilities harmonizing with my own skills. This goes beyond just “learning to prompt.” Yes, prompt literacy is crucial, but the real goal is a mindset where AI is woven into how you think and work. I suspect this will become a baseline skill in the future of work: a kind of polymathic, agentic orchestration. By polymathic, I mean the ability to comfortably straddle multiple domains – something AI makes far more feasible, since it can provide on-demand expertise in areas outside my primary specialty. With AI assistants, a single individual can dip their toes in law, coding, design, marketing – not to masquerade as an expert in all, but to leverage tools that are experts or can bring in the knowledge from those fields. By agentic orchestration, I’m referring to the skill of coordinating various semi-autonomous agents or tools towards your goals. In the future, you might have one AI agent managing your schedule, another researching investment opportunities, another monitoring your codebase for bugs – and you, the human, will conduct this symphony of AIs. In many ways, I’m already doing a proto-version of that in my daily workflow.
This shift will likely redefine knowledge work. Just as basic computer literacy became a must-have skill decades ago, AI orchestration will become the new literacy. We’ll all need to be a bit more like conductors and a bit more like polymaths, comfortable handing off subtasks to AI, but also integrating the results into a coherent bigger picture. Importantly, this isn’t a call for everyone to suddenly do everything themselves – rather, it’s a recognition that cross-domain work with AI cooperation will be commonplace. The individuals who thrive will be those who can weave together different threads of expertise (human or machine) into novel solutions. In my case, using AI pervasively has already made me a more interdisciplinary thinker. I don’t hesitate to explore a new field, because I know I have a safety net (or perhaps a booster rocket) in the form of AI assistance.
There is, of course, a learning curve and a need for judgment. Cultivating this integrated toolkit means learning each tool’s strengths and weaknesses, much like a craftsperson with their instruments. It also means developing the discernment to know when to trust the AI, when to double-check, and when to turn it off and think deeply on my own. That discernment – the uniquely human editor-in-chief role – remains paramount. But once you embrace AI as a thought partner and not just a fad or a threat, you unlock an amplified way of working. I often reflect on how my output today compares to a few years ago, and it’s dramatically richer. More importantly, I’ve expanded the scope of what I consider myself capable of doing. That, to me, is the most exciting part of all this.
In the end, using AI in the way I’ve described isn’t about offloading my brain – it’s about extending it. It’s like having a team of bright (if sometimes erratic) colleagues who can be called upon at any hour, for any task. The future of work will belong to those who know how to direct such a team, how to question it, how to collaborate with it, and how to merge its strengths with their own. I’m still learning every day how to better integrate AI into my life, but one thing feels clear: this hybrid human-AI approach is not a party trick or productivity hack du jour; it feels like a fundamental shift in how we create and solve problems. And as more people cultivate their own cognitive toolkits, I’m optimistic that we’ll see an explosion of creativity and efficiency – a future where being polymathic is just the way we all operate, by default.