The AI Jobs Chart That Actually Explains What's Coming for Your Career
Every few months a new report claims “X% of jobs will be automated by AI.” Every few months the number is different, the methodology is vague, and the takeaway is either panic or dismissal. Neither helps you figure out what to actually do.
The EBRD’s latest Transition Report takes a more useful approach. Instead of a single exposure number, it maps every occupation on two axes:
- AI exposure — how much of the job’s day-to-day tasks AI can currently perform
- Human-AI complementarity — how well humans and AI can work together in that role to improve productivity
The result is a quadrant that tells a more honest story than any single percentage.
The four quadrants
Top right — AI as a force multiplier
High exposure + high complementarity. These roles see productivity gains. Examples: CEOs, general managers, doctors, IT managers, business analysts, engineers.
In these jobs, AI handles the cognitive grunt work — data synthesis, pattern recognition, draft generation — while humans provide judgment, strategy, accountability, and relationships. The combination produces more than either alone. A doctor who uses AI for diagnostic imaging interpretation sees more patients and catches more edge cases. A CEO who uses AI for scenario planning makes faster, better-informed decisions. The human isn’t replaced; they’re amplified.
Bottom right — pressure to reskill
High exposure + low complementarity. These roles face genuine disruption. Examples: secretaries, administrative assistants, accounting clerks, auditors, data entry workers.
Here, AI doesn’t just assist — it substitutes. Scheduling, transcription, reconciliation, routine correspondence, form processing — AI does these end-to-end, without needing a human in the loop to add value. There’s already evidence that generative AI is reducing entry-level positions in these categories. The work isn’t disappearing immediately, but the volume of humans needed to do it is declining.
Left — largely unaffected for now
Low AI exposure, regardless of complementarity (this covers both of the chart's left quadrants). Examples: structural metal workers, plumbers, maids, personal care workers, dentists.
This quadrant surprises people. Dentists? Safer than accountants? Yes — the work requires manual dexterity, tactile judgment, physical presence, and patient communication under stress. AI can assist with imaging analysis but can’t hold a drill or read patient anxiety. Physical and social complexity is the moat, not educational level.
The question most analysis gets wrong
Most AI jobs analysis asks: can AI do this task?
The EBRD framework asks a better question: when AI does this task, does the human become more valuable or less?
That distinction explains almost every counterintuitive result in the data. A radiologist (high exposure — AI reads scans well) lands in the top right because the human’s job shifts toward clinical judgment, rare case identification, patient consultation, and accountability. The AI makes the radiologist’s decisions faster and better-informed. Complementarity is high.
A data entry clerk (high exposure — AI does this trivially) lands in the bottom right because there’s no complementary human skill left. When AI processes the form perfectly, the human’s presence added nothing. Complementarity is zero.
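The two-axis logic is simple enough to sketch in a few lines. In this minimal illustration, the 0.5 threshold and the per-occupation scores are hypothetical assumptions for demonstration, not values taken from the EBRD report:

```python
# Sketch of the EBRD-style two-axis mapping. The 0.5 threshold and the
# example scores below are illustrative assumptions, not report data.

def quadrant(exposure: float, complementarity: float) -> str:
    """Map an occupation's (exposure, complementarity) scores to a quadrant."""
    if exposure >= 0.5:
        return "force multiplier" if complementarity >= 0.5 else "pressure to reskill"
    return "largely unaffected (for now)"

# Hypothetical scores, chosen only to mirror the article's examples.
occupations = {
    "radiologist":     (0.8, 0.9),  # high exposure, high complementarity
    "data entry clerk": (0.9, 0.1), # high exposure, low complementarity
    "plumber":          (0.2, 0.4), # low exposure
}

for name, (exp, comp) in occupations.items():
    print(f"{name}: {quadrant(exp, comp)}")
```

The point of the sketch: exposure alone never determines the outcome; the second axis does the sorting among high-exposure roles.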
What this means practically
If you’re in the top right: Your job is getting more powerful. The risk is complacency — if you don’t actively use AI tools, a colleague who does will outperform you. The mandate is to build your AI leverage while maintaining the judgment, relationships, and accountability that create complementarity.
If you’re in the bottom right: The honest message is that the core tasks of your role are under pressure. The path is not to avoid AI but to move toward the parts of your role (or adjacent roles) that require higher complementarity — managing AI systems, client relationships, quality oversight, edge case handling. Workers who learn to direct AI rather than compete with it move from bottom right toward top right.
If you’re in the left quadrant: Lower urgency, but not zero. Physical roles will be affected by robotics eventually. The timeline is longer, but the underlying principle is the same.
The skills that create complementarity
Across the data, the same capabilities appear repeatedly in high-complementarity roles:
- Strategic judgment — synthesizing ambiguous information into decisions with real consequences
- Interpersonal trust — relationships, negotiations, communication in high-stakes situations
- Physical presence and dexterity — irreplaceable for now
- Creative synthesis — generating genuinely novel combinations, not just recombinations
- Accountability — being the person responsible when things go wrong
- AI system management — directing, evaluating, and correcting AI outputs
The last one is new and worth emphasizing. “Prompt engineer” is a caricature, but the underlying skill — knowing how to get reliable, useful outputs from AI systems, knowing when to trust them and when not to — is becoming a high-complementarity skill in almost every domain.
The connection to AI reliability
This framework takes on additional weight in the context of the OpenAI hallucination research we covered recently. The finding that more capable models hallucinate more confidently is exactly why high-stakes roles retain humans. A CEO can’t delegate the decision to an AI that might be confidently wrong 48% of the time on factual questions. A doctor can’t sign off on a diagnosis from a system with no mechanism for saying “I’m not sure.”
Complementarity isn’t just about productivity — it’s about accountability and reliability. The roles that keep humans in the loop are often the ones where a confident wrong answer has irreversible consequences.
The Paul Conyngham case, revisited
Our recent post on Paul Conyngham’s cancer vaccine pipeline is a live example of this quadrant in action. Conyngham is a data scientist (top right — high AI exposure, high complementarity). AI handled literature review, protein structure prediction, and neoantigen candidate generation. Human judgment handled strategy, institutional relationships, ethical oversight, and synthesizing outputs from multiple AI systems into a coherent plan. Neither alone would have produced the vaccine.
The EBRD framework predicts exactly this outcome: in high-complementarity roles, AI doesn’t replace expertise — it makes expertise more productive.
Source: EBRD Transition Report 2025-26
Related: AI Hallucinations Are Mathematically Inevitable · Paul Conyngham’s Cancer Vaccine Pipeline · OpenClaw-RL — AI That Learns From Being Used