The Human Alignment Problem
We’ve seen a flurry of activity in generative AI over the past three years, from massive capital expenditure investments to technology stock whiplash on Wall Street. Every leader, consultant, and coach now seems to have an AI solution, yet reports show less than stellar outcomes.
Despite limited measurable success, AI expenditures continue to rise. Meanwhile, a white-collar recession continues, with Wall Street Journal headlines like “Job Hunters Are So Desperate That They’re Paying to Get Recruited,” all against the backdrop of a supposed agentic revolution in AI just over the horizon.
A second approach is “partial play,” which shades into ignoring or avoiding the technology altogether. Leaders in this category may be lightly experimenting with AI, but nothing significant is happening. The underlying sentiment is mild curiosity. AI is not at the forefront of understanding, strategy, or planning; it sits in the background behind other priorities, sometimes by default, sometimes by choice.
In my experience, the primary reason leaders land here is lack of time (ironically, the very thing AI promises to give back). Sometimes there is true and justified avoidance, often tied to intense regulatory/legal or industry issues and/or legitimate ethical and moral concerns. For others, though, it’s a deliberate decision to wait and see or let others figure it out first. There’s no right or wrong here, so long as there’s a conscious strategy.
The biggest risk is the assumption that just because your company doesn’t approve of AI usage or have an AI policy, your people aren’t using AI. They are. Business Insider reports that 57 percent of employees are already using it without leadership visibility—and that number is growing. If you are not providing secure access behind your own firewall, your intellectual property may already be leaking, and liabilities may be growing without your awareness.
The third style of approaching AI tools is to prepare, plan, and execute judiciously. These leaders are steady and informed, and they integrate AI on their own terms. They start small, focus on one or two high-value use cases, and invest heavily upfront in both human and technical alignment. They understand that AI implementation is not just a technology rollout. It’s a change management initiative.
Research and history both support this approach. The primary risk here is overemphasizing only the technology and missing the human piece. If you are working with a strong partner or internal team, focus on the human side. That’s where the long-term gains will be won.
AI is adding speed to systems and workflows that weren’t made for it, and humans aren’t ready for the increased intensity. AI is putting managers and teams who are already overwhelmed and unable to focus under even greater pressure. In a controlled study from MIT’s Media Lab, participants who relied heavily on ChatGPT showed lower neural engagement, weaker memory integration, and less original output compared to those who used search tools or worked unaided. Over time, reliance increased while cognitive effort decreased. When asked to reproduce their work without AI, recall dropped significantly.
Business Insider also recently wrote about what AI engineers are calling “AI fatigue,” a paradox where output increases but exhaustion rises with it. As one engineer put it, “The AI doesn’t get tired between problems. I do.”
Harvard Business Review’s latest research reflects the same pattern: AI doesn’t reduce work. It intensifies it by expanding scope, accelerating pace, and quietly increasing cognitive strain.
And this is the real problem.
Consider the fact that we’re already dealing with a significant lack of availability and accountability at work (see The Wall Street Journal’s article “Your Boss Doesn’t Have Time to Talk to You”). Our current reality is that leaders don’t have time to work with their teams, burnout and overwhelm are already at all-time highs, engagement has dropped below 2020 levels, and our collective focus and attention are dwindling. Combine all of this, and we get a recipe for poor strategy, poor implementations, and ultimately poor outcomes.
If we look historically, most work on programs and projects was front-loaded, with quality checks handled later. With AI, that model flips. We need more thinking time and more review cycles built into workflows earlier. Research shows the best outcomes come from people actively reviewing, pushing back on, and refining AI outputs throughout the work. That means front-loading decision quality as a core operating discipline—and prioritizing the time and space needed for higher cognitive functioning. If we don’t, both outcomes and people will suffer—at 10x the speed.
As we navigate the months ahead, leaders will need to get very clear on how to hone their team’s time, focus, and energy with AI. And they will need to redesign work to enable more capacity for critical thinking at the individual and team level.
These steps are foundational to successful AI integration. When humans have clarity and capacity, they integrate AI more effectively. Space. Time. Thinking. Clarity. This is what our teams need. When teams are overworked and fragmented, speed only increases complexity and errors, as highlighted in Harvard Business Review’s February article “AI Doesn’t Reduce Work—It Intensifies It.”
A second step is to avoid scattered solutions and be precise in your focus. Let history be the guide. Just as any lasting technology solutions need to be tied to a clear business case and be outcome driven, AI needs well-defined use cases, aligned workflows, and intentional integration. Indiscriminate rollouts are not performing well. In fact, the most recent Deloitte study shows only 5 percent of AI implementations are meeting expectations.
A good example is Copilot, Microsoft’s chat assistant, which is seeing mixed results—not because there isn’t value there, but because blanket implementation rarely delivers strong outcomes. Simply giving teams access to a tool and hoping value emerges is not a strategy. As with any technology, outcomes depend on how precisely the tool is applied to a clearly defined problem. Find your business case, map the solution, and be diligent in implementing key performance indicators. And don’t forget to build in time for iterative team training, touchpoints, and focused deliverable reviews; much more time will be needed for quality reviews when using AI.
The third step is to train and educate while focusing on the human side. On the technology side, teams need training on AI tools: how to use them, and when to use which tools and why. Platforms like ChatGPT, Claude, and Gemini operate differently from each other, and building custom assistants (i.e., chatbots or AI agents, called custom GPTs by OpenAI, Projects in Anthropic’s ecosystem, and Gems in Google’s) requires training and skill. Clear company policies around usage, privacy, transparency, and ethics are also essential.
However, on the human side, our declining attention levels are already creating major challenges, and AI will only increase them. Leaders need to start building workflows to pre-emptively address attention and focus issues now, not later. Today’s work worlds are designed to distract: hundreds of emails, instant messages, notifications, and constant interruptions. It’s no wonder adults and children alike are struggling with attention.
As we look forward, we need to create workflows that are built for both collaborative and concentrated focus time. These structural changes can be built internally and sustained through facilitated discussions, or supported externally through seasoned professionals. The future of work is focused and collaboratively creative, with strong, high-quality reviews integrated throughout. Our workflows are not designed for that yet, but it’s on the horizon. AI is here and driving change fast. The question is whether our systems and teams have been built strong enough to handle the speed.