Executives Think Their AI Strategy Is Working. The Workforce Doesn’t Agree.

Sarah McKenna · 5 min read



The most uncomfortable finding in Section’s January 2026 AI Proficiency Report is not about tools, training, or even productivity. It is about perception. Leaders overwhelmingly believe their AI deployments are succeeding. The rest of the organisation, particularly individual contributors, do not share that view.

This is not a small discrepancy. It is a structural gap in awareness.

According to the report, 81 percent of C-suite respondents believe they have a clear, actionable AI policy. Only 53 percent of individual contributors agree. Eighty percent of executives believe tools are available through a clear access process; just over half of contributors say the same. Seventy-one percent of executives believe policies are enforced and connected to strategy, compared with fewer than half of contributors. The divergence continues across questions about adoption, encouragement, and clarity of direction.

From the top of the organisation, AI appears embedded, supported and progressing well. From the operational core, it feels uneven, unclear and underpowered.

That difference matters because executive perception drives strategic confidence. When leadership believes a deployment is broadly successful, scrutiny softens. Investment decisions stabilise. The organisation moves on to the next frontier. But if the foundation is weaker than assumed, progress becomes performative rather than transformative.

The headline adoption numbers help explain the optimism. ChatGPT now reports close to 900 million monthly users globally, and 55 percent of knowledge workers in the survey say they use AI at least weekly. On paper, that looks like penetration. It suggests behaviour change is well underway.

Yet when the report examines proficiency and impact, the optimism becomes harder to justify. Seventy percent of the workforce are classified as “AI experimenters.” Twenty-eight percent are “AI novices.” Fewer than 3 percent qualify as practitioners or experts. In effect, 97 percent of the workforce are using AI poorly or not at all in ways that materially affect business outcomes.

The gap between usage and value is stark. Twenty-five percent report saving no time with AI. Nearly half say they would be fine never using it again. Only 15 percent of reported use cases are judged likely to generate ROI. The most common use cases remain surface-level: replacing Google search, drafting and editing text, summarising documents. Automation and deeper workflow integration barely register.

In other words, access and experimentation are not the same as transformation.

Executives, who tend to use AI daily and report high levels of enthusiasm and trust in its outputs, may reasonably conclude that the organisation is moving in the right direction. Their own experience reinforces the belief. But the report shows that this experience is not widely shared. Individual contributors are significantly less likely to have tool access, reimbursement, or formal training. Only 32 percent report clear access to AI tools, compared with 80 percent at C-suite level. Only 27 percent have received training, compared with 81 percent of executives.

At the same time, contributors are more likely to feel anxious or overwhelmed by AI and less likely to trust its impact. The people closest to repetitive, automatable work are the least supported in using the tools that could change it.

This inversion should give leaders pause. AI transformation is not an executive productivity play. Its real economic leverage sits in the daily workflows of teams. If those teams remain in experimentation mode while leadership believes transformation is underway, the organisation is operating on an illusion of progress.

The report’s data suggests the root cause is not apathy. Companies are investing in policies, access and training. Employees at companies with a formal AI strategy are 1.6 times more proficient. Those whose managers expect AI usage are 2.6 times more proficient. These interventions move the needle. But even after training, average proficiency scores remain low: employees who have completed AI training score just 40 out of 100.

The issue is that many organisations are still teaching foundational skills. How to use an LLM safely. How to write a prompt. How to comply with policy. Those were appropriate objectives in 2024 and 2025. In 2026, AI proficiency means integrating AI into meaningful, value-adding work every week. It means redesigning workflows. Identifying bottlenecks. Automating processes. Reallocating human effort.

That shift has not happened at scale.

The strategic risk lies in mistaking readiness for impact. When adoption metrics become the proxy for success, leaders stop asking harder questions. How many hours per employee are actually being saved? Which workflows have been fundamentally redesigned? Where has AI changed decision quality or speed in measurable ways? How many processes have been eliminated rather than augmented?

Without those measures, success becomes narrative rather than evidence.

The report closes with a set of leadership imperatives that read less like operational advice and more like a corrective to executive overconfidence. Stop measuring success by access and weekly usage. Treat use case development as a structured organisational competency rather than a personal responsibility. Prioritise individual contributor enablement. Close the awareness gap through regular, direct exposure to how AI is used in day-to-day work.

At its core, this is not a technology story. It is a governance story.

If the C-suite believes deployment is succeeding while individual contributors report minimal impact, then the organisation has a visibility problem. And visibility problems in transformation programmes tend to widen over time. The longer leaders assume progress, the harder it becomes to acknowledge stagnation.

AI deployment is not measured by policy documents or tool licences. It is measured by whether the economics of work have shifted. Whether time is being materially reclaimed. Whether workflows are being re-architected. Whether output quality and speed have changed in ways that are visible in performance metrics.

The Section report should be read as calibration rather than criticism. It surfaces a misalignment that many organisations are likely experiencing but not naming.

Executives are confident. The workforce is cautious. The data sits somewhere in between.

Closing that gap will require humility at the top, discipline in measurement, and sustained investment in workflow redesign rather than surface adoption.

Until then, many organisations will continue to report AI progress while quietly wondering why the promised returns have not arrived.



© 2025 Machine & Folk