2026 AI Reality Check
A clear-eyed view of AI’s coming impact on work, leadership, and the human skills that matter most.
By Splendid Torch (see contributor list below)
If your inbox is anything like ours, it’s currently overflowing with breathless January predictions about how AI is going to revolutionize, disrupt, or entirely replace... well, everything.
This is not one of those reports.
At Splendid Torch, we are a collective of senior organizational designers, operators, and technologists. We don’t just watch the future happen; we are in the trenches helping real teams navigate real change. And if there is one thing we know for certain, it’s this:
Technology doesn’t change organizations—people do.
As our colleague Joe O’Connor put it this week, “We are less advisors around developing an AI strategy, and more about developing a strategy for a world being shaped by AI.”
So, instead of asking what the technology will do in 2026, we asked our collective of seasoned experts what leaders and employees will actually do with it. We wanted the ground truth—the messy, human, and occasionally hilarious reality of adopting AI in the workplace.
We asked for the unvarnished truth in three categories: The Good (what will stick), The Bad (the friction and fear), and The Absurd (the innovation theatre).
What follows isn’t a prediction about tech specs or breakthroughs—it’s a forecast about us: how we work, how we lead, and how we stay adaptive in a world that won’t slow down.
🚀 1. ADOPTION
If 2024 was the year of “Wow” and 2025 was the year of “How,” 2026 is shaping up to be the year of “Why?”
The Good: Work Design is the New Strategy
In 2026, the winners won’t simply roll out AI tools; they’ll redesign how work gets done. The shifts that stick will require dropping the idea of an “AI strategy” altogether.
Quiet Gains are Durable Gains: When workflows are redesigned properly—rather than AI being bolted onto broken processes—the gains are quiet, but durable. This is where AI actually creates value, not noise.
Time Dividends: A small number of organizations will start recognizing work time reduction as an AI strategy. By explicitly sharing productivity gains through “time dividends,” they will see faster adoption and less resistance.
The Prompting Bonus: Paradoxically, learning the art of AI prompting will force leaders to finally give better context. Defining the desired outcome and style for an AI is exactly what human teams have needed all along!
The Bad: The Illusion of Progress
The biggest pitfall will be the belief that AI activity equals impact.
Busy Dashboards, Empty Strategy: Tool adoption will massively outpace outcome measurement. Dashboards will look busy and pilots will multiply, creating a convincing illusion of progress while core issues like misalignment and poor leadership remain untouched.
Resistance & Burnout: Employees will be expected to “adopt AI” on top of already unsustainable workloads. Leaders will underestimate how much workflow redesign and manager enablement are required to turn usage into outcomes.
The Absurd: Shadow AI & The Policy Paradox
“We’re AI-first – but you can’t use ChatGPT.”
Organizations will ban external models while everyone continues to use them anyway, unofficially and without governance. Leadership will act shocked when this surfaces, while rolling out AI policies that were hastily written… by AI.
⚡ 2. PRODUCTIVITY
The rise of the “Disposable App” and the “Vampire Economy.”
The Good: The “Dreamweaver” Phase
Just as the internet democratized information, AI is democratizing capability.
The Disposable App: We are entering an era where non-engineers will stop filing tickets for minor tools and start building “disposable apps” themselves. Interns and ops teams will spin up single-use AI tools to kill the backlog. It won’t be pretty code, but it will free up actual engineers to do the hard work.
Data for All: Teams will finally be able to mine their own data for insights without relying on expensive data scientists.
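To make the “disposable app” idea concrete, here is a minimal, hypothetical sketch of the kind of throwaway script a non-engineer might produce with AI assistance. The ticket data, similarity threshold, and function name are all invented for illustration; the point is the shape of the tool, not its polish.

```python
# Hypothetical "disposable app": a single-use script an ops person might
# generate to flag near-duplicate support tickets, instead of filing an
# engineering ticket and waiting for a proper deduplication feature.
from difflib import SequenceMatcher

def find_duplicates(subjects, threshold=0.8):
    """Return pairs of ticket subjects whose similarity exceeds threshold."""
    pairs = []
    for i in range(len(subjects)):
        for j in range(i + 1, len(subjects)):
            # Compare lowercased subjects so casing differences don't matter.
            score = SequenceMatcher(
                None, subjects[i].lower(), subjects[j].lower()
            ).ratio()
            if score >= threshold:
                pairs.append((subjects[i], subjects[j]))
    return pairs

tickets = [
    "VPN won't connect on Mac",
    "VPN wont connect on mac",
    "Password reset request",
]
print(find_duplicates(tickets))
```

It isn’t elegant code, and that is the point: it exists to kill one backlog item today and be thrown away tomorrow, freeing actual engineers for the hard work.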
The Bad: The Work About the Work
With low-value tasks vanishing, organizations will rush to fill the void with more projects.
Intentional Subtraction: Unless companies make a concerted effort to subtract work, people will feel acute pressure to produce more.
The Vampire Economy: We will see the rise of AI agents that consume knowledge to solve problems without ever visiting the original source. Organizations will wake up to find their public documentation sucked dry by bots that extract value but contribute nothing back.
The Absurd: The Dead Internet
AI hallucinations and “slop” will blend in with real work, creating an environment where the sheer volume of content becomes overwhelming.
AI Talking to AI: Most internal communication will quietly be written and sent by AI, and subsequently summarized by AI, with humans barely reading what’s being sent.
🧠 3. COMPLEXITY
The risk of eroding judgment and the fight for truth.
The Good: The 360-Degree View
Leaders will lean into the power of AI to review potential solutions from a wide variety of perspectives, quickly identifying risks, ethical considerations, and stakeholder misalignment that might otherwise have been missed.
The Bad: The Erosion of Judgment
The real hidden cost of AI isn’t just jobs; it’s our collective ability to make difficult decisions without a chatbot holding our hand.
Atrophy of Discernment: With ready answers always at hand, we risk losing the ability to sit with uncertainty.
Dysfunction Accelerated: AI is an accelerant. If your culture is toxic or your processes are broken, AI won’t fix them—it will just make them broken faster.
The Absurd: The Reconciliation Recess
Board meetings will grind to a halt as members refuse to approve minutes until their personal AI assistants agree on the “truth.”
We will see the rise of the “Reconciliation Recess”—a dedicated 20-minute block where everyone sits in silence while their agents argue in the cloud over who actually promised to send the Q2 follow-up email.
The “Fake” Candidate: People will pass through rigorous hiring filters without possessing the requisite skills, simply because they can rehearse the “script” of success. Convincingly speaking about a job is becoming more valuable than the ability to do the job.
❤️ 4. HUMANITY
The more fake the digital world gets, the more valuable the human world becomes.
The Good: Humanity as a Premium
As machines handle the grunt work, the uniquely human skills—trust, ethics, coaching, and strategic narrative—aren’t just “soft skills” anymore. They are the only skills that matter.
The Human Double-Down: Humans will focus on what only humans can do: build trust, align across functions, and stay accountable for outcomes.
The Bad: The Trust Backfire
When teams struggle with foundational elements of communication and trust, attempts to use AI to strengthen fundamentally human relationships will inevitably implode.
The Absurd: The Avatar Backlash
The most disruptive shift in 2026 will be video and audio avatars. They will trigger a huge backlash, but paradoxically that backlash will drive a craving for the real: as people struggle to know what’s authentic online, they will seek out more in-person experiences and trust-based relationships, not fewer.
FROM PREDICTION TO ACTION
Three Strategic Imperatives for the Adaptive Organization
If these predictions tell us anything, it’s that 2026 will not be kind to the passive. The gap between the “AI Busy” (high activity, low impact) and the “AI Smart” (strategic, measured, humane) is widening.
Looking across our collective forecast, three massive meta-themes—and financial imperatives—emerge:
Work Design is the Profit Lever: You cannot buy “AI success.” You have to build it by tearing down outdated workflows. The leaders who win won’t be the ones with the best software licenses; they will be the ones brave enough to perform “intentional subtraction” to capture the efficiency gains AI promises.
The “Trust Premium” is Real Risk Mitigation: As AI generates infinite content and policy, authentic human connection is becoming a luxury good. Investing in human-centric leadership isn’t just “nice to have”—it is your primary hedge against the brand risk of deepfakes, hallucinations, and the “uncanny valley” of automated customer service.
Competence Over Titles: The rigid job description is dying. We are seeing a shift toward “Swiss Army Knife” talent—people who don’t need a script, just a problem to solve and the judgment to know when to use the bot and when to use their brain.
How to move from “Shadow AI” to Stewardship
As a leadership team, ask yourself these three questions. If you can’t answer them, you are likely running on “Innovation Theatre” rather than strategy.
The Usage Test: “Can we distinguish between activity (prompts/logins) and impact (time saved/quality improved)?”
The Shadow Test: “Do we know exactly where our data is going, or is our ‘AI Strategy’ actually just 500 employees pasting proprietary data into free tools?”
The Value Test: “Have we explicitly stopped doing low-value work to make space for AI, or have we just added AI on top of a burnout culture?”
Ready to turn AI from noise into measurable impact? Schedule a strategic briefing with Splendid Torch to:
Diagnose where your AI initiatives are merely “busy work” versus driving measurable gains.
Identify gaps in leader and manager enablement to turn AI usage into real, sustainable outcomes.
Build human-centric leadership practices that safeguard your culture while accelerating innovation.
Connect with us at hello@splendidtor.ch to explore how to keep humanity front and center in your AI journey.
🎙️ The Contributors
Featuring insights from the Splendid Torch collective:
Hannah Keen | Global Operator turned AI Strategist
Joe O’Connor | CEO & Co-founder, Work Time Revolution; Author, “Do More in Four”
Julie Clow | Leadership & People Advisor; Author, “The Work Revolution”
Kurt Collins | Operations Strategist & Engineering Leader
Patty Simonton | Founder, Big Table Institute
Sinéad Condon | Former Chief People Officer and Change Advisor
Usha Gubbala | Organizational Strategist & Executive Coach



