Boards today are drowning in advice about innovation and artificial intelligence. The problem is not a shortage of information. It is that most of what boards receive falls into three unhelpful buckets: it is wildly theoretical, it is overly technical or operational, or it tells boards how to “do AI” instead of how to govern it.
Boards do not build AI systems. They do not run transformation programmes. They do not design new products. Their job is governance: to ask the right questions, hold the right tensions, and make sure the organisation has the capabilities, conditions and culture to innovate and adopt AI safely and successfully.
What follows is a boardroom lens, grounded in ten years of digital product work and the last five years designing AI-led products and workflows in complex, regulated organisations. It is practical rather than theoretical, and written for governance rather than delivery.
It revolves around three questions every board should be able to answer.
Do we understand innovation properly, and are we pairing it with enough transformation to realise value?
Do we understand what AI really changes, and how mature we are as an organisation?
Do we know where AI value actually sits, and are we creating the capabilities, conditions and culture for it to emerge?
Everything in this article is designed to help you answer those three questions.
Innovation Is a Definition Problem Before It Is a Delivery Problem
Most organisations innovate far less than they think, even when the board papers say otherwise. What shows up as an “innovation challenge” is often a language problem that has quietly become an investment problem.
Inside most organisations, the following words are used interchangeably: invention, improvement, optimisation, transformation, innovation. They do not mean the same thing, and each creates a different kind of value. When boards treat them as synonyms, they set the wrong expectations and fund the wrong work.
Invention is creating something entirely new, whether that is a product, a capability, or a technical approach. It can be exciting, but it does not automatically create value. A familiar failure pattern is teams falling in love with the new thing rather than the business case, and optimising for originality instead of outcomes.
Improvement is fixing something broken. It restores value that should have been there already. It matters, often deeply, but it is not new value. The common trap is treating “getting back to where we should have been” as innovation, then wondering why there is no step change.
Optimisation is making something that already exists work better. Traditionally, that meant marginal gains. Small percentage improvements that compound over time. With AI, certain types of optimisation can deliver larger jumps, particularly where work is high-volume, repeatable and information-heavy. But even then, it is still an improvement to an existing way of working, and boards should not expect it to behave like a new growth engine.
Transformation is what allows invention and innovation to scale. It is the unglamorous work of changing workflows, decision rights, incentives, systems, data flows and governance. Transformation does not create new value by itself. It unlocks value at scale. Without it, innovation collapses back to proofs of concept. A familiar failure pattern is treating transformation as communications rather than redesigning the conditions for delivery, so the organisation reverts to legacy behaviours.
Innovation, properly defined, is new customer value successfully delivered. It is new value that produces new behaviours, new markets or new revenue lines. Innovation is not the app, platform or tool. It is the value the app, platform or tool creates.
This is where boards often miss the main risk. Even when organisations build the right thing, value evaporates because the environment around it does not change. The operating model stays the same. The data and systems do not support the new way of working. Governance is too rigid. Incentives reward legacy behaviour. Transformation funding is thin. The organisation creates the future, then drops it into the past.
Boardroom test: Ask management to take a handful of major initiatives and classify them honestly. Which are invention, improvement, optimisation, transformation and innovation, and which create genuinely new customer value? Then ask what transformation work sits underneath each innovation and whether it is properly funded.
What AI Changes Is Cognition, Not Cleverness
For many years, “AI” in daily life meant narrow assistants that were easy to dismiss. Today’s models feel different because they can do useful cognitive work: summarising, synthesising, interpreting, guiding. They are not intelligent in the human sense, but they are capable enough to behave like a thinking partner in many contexts.
You can see this in how people use tools like ChatGPT. They are not only asking for summaries or shortcuts. They are using them to think through performance conversations, career decisions, conflict, complex leadership problems, and emotionally charged situations. The point is not the specific prompt. It is the category of prompt. People are using AI to think with them, and that is a step change.
The productivity evidence backs this up. A field experiment with 758 consultants, documented through Harvard Business School research, found that consultants using generative AI completed more tasks, worked faster, and produced higher-quality outputs on the tasks within the tool’s effective range. The reported effects were 12.2% more tasks completed, 25.1% faster completion time, and over 40% higher quality compared to the control group.
Zoom out further and the macro estimates point the same way. McKinsey’s analysis of 63 use cases estimated that generative AI could add $2.6 trillion to $4.4 trillion annually in value. PwC, drawing on the World Economic Forum’s Future of Jobs survey, notes that generative AI could reshape work in a way that potentially affects up to 40% of total global working hours.
Boards do not need to debate exact numbers. They need to register what the numbers imply. AI is not just “making the best people faster.” It is lifting the baseline, changing the shape of work, and altering what “good” looks like in routine knowledge tasks.
And yet most organisations are still early. Many are experimenting, with pockets of progress and large areas where little is moving. That is normal. The governance risk is not being behind today. It is staying there, or allowing informal use to grow unchecked because formal enablement is too slow.
Boardroom test: Ask for an honest baseline of how AI is used today, formally and informally. Do not ask for a glossy success story. Ask where measurable productivity gains exist, where policy is inhibiting basic use, and where shadow use is emerging.
Boards Do Not Need to “Do AI”
They Need to Know What Kind of AI Is Being Proposed, and Why
Two terms in board papers often get blurred: machine learning and generative AI. They are related, but distinct, and they are useful for different types of problems.
Machine learning excels at pattern recognition in structured data. It is often used for forecasting, classification, risk scoring, fraud detection, and price optimisation.
Generative AI excels with unstructured information like language, images, audio, and code. It is often used for drafting and editing content, summarising documents, knowledge assistants, and producing synthetic scenarios.
Boards do not need technical depth here. They need the ability to ask a basic governance question: are we using the right category of AI for the right category of problem, and do we understand the risk profile?
Two generative AI patterns are also worth knowing at a high level because they change the control surface.
Retrieval augmented generation, often called RAG, combines a model with trusted documents or data so answers are grounded in relevant sources. It is commonly used for internal knowledge, policy, compliance, legal summarisation, and other contexts where traceability matters.
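To make the pattern concrete, a minimal sketch of a RAG flow is shown below. The function and index names are hypothetical assumptions, not a reference to any specific product; the point is the shape of the flow, in which answers are built from a governed document store rather than from the model’s general training data.

```python
# Illustrative RAG flow: retrieve trusted sources first, then generate an
# answer grounded in (and citing) those sources. All names are hypothetical.

def answer_with_rag(question, document_index, generate):
    # 1. Retrieve the most relevant passages from a governed document store
    #    (policies, contracts, procedures).
    passages = document_index.search(question, top_k=5)

    # 2. Build a prompt that instructs the model to answer only from the
    #    retrieved passages and to cite them, so outputs stay traceable.
    context = "\n\n".join(f"[{p.source}] {p.text}" for p in passages)
    prompt = (
        "Answer the question using only the sources below. "
        "Cite the source in brackets for every claim. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate the answer and return it alongside the sources used,
    #    which is what makes the output auditable.
    return generate(prompt), [p.source for p in passages]
```

The governance point sits in the last step: because every answer is tied back to named sources, it can be checked.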
Agents go further. They pursue a goal, plan steps, call tools, access knowledge sources, and execute multi-step workflows. They can be powerful, but they also raise governance questions about permissions, auditability, and failure modes.
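A simplified agent loop, sketched below, shows why the control surface changes. The tool registry, permission check and audit log are hypothetical names rather than any standard framework; the point is that the model chooses the next action, so permissions, logging and stop conditions become governance concerns in their own right.

```python
# Illustrative agent loop: the model plans the next action, the organisation
# controls which tools it may call, and every step is logged. Hypothetical names.

def run_agent(goal, plan_next_step, tools, audit_log, max_steps=10):
    history = []
    for _ in range(max_steps):                    # hard step limit as a guardrail
        step = plan_next_step(goal, history)      # model decides what to do next
        if step.action == "finish":
            return step.result

        tool = tools.get(step.action)
        if tool is None or not tool.permitted:    # explicit permission check
            audit_log.append(("blocked", step.action, step.arguments))
            history.append((step, "action not permitted"))
            continue

        result = tool.run(**step.arguments)       # e.g. look up a record, draft an email
        audit_log.append(("executed", step.action, step.arguments))
        history.append((step, result))

    return None  # escalate to a human if the goal is not reached within the limit
```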
Boardroom test: For any major AI proposal, ask whether it is machine learning or generative AI, and if generative AI, whether it relies on RAG, agents, or both. Then ask where the data comes from, who owns the outputs, and how control and assurance are handled.
Digital Transformation Is Quietly Becoming AI Transformation
For thirty years, most digital transformation was about building systems for humans to use. Screens, journeys, dashboards, forms, and flows.
AI changes the basic assumption. Increasingly, the “user” of systems will be an agent acting on behalf of a person or a team. That shift is quiet, but profound, because it changes how work is distributed between people and machines.
In traditional workflows, humans do most steps and systems record outcomes. In AI-enabled workflows, detection, triage, drafting, routing, and logging can happen automatically inside guardrails, and humans handle exceptions and oversight. This changes roles, training needs, accountability, and risk management. It also changes what good governance looks like.
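As an illustration of that division of labour, the sketch below shows one hypothetical pattern: an AI step handles routine items inside guardrails, and anything low-confidence, high-value or policy-flagged is routed to a person. The thresholds and function names are assumptions for the sake of the example, not a reference design.

```python
# Illustrative AI-enabled workflow step: routine items are drafted and handled
# automatically; exceptions go to a human queue. Names and thresholds are
# hypothetical.

CONFIDENCE_FLOOR = 0.85     # below this, a person reviews
VALUE_CEILING = 10_000      # above this, a person approves regardless of confidence

def handle_item(item, classify, draft_response, human_queue, send, log):
    category, confidence = classify(item)        # AI triage

    needs_human = (
        confidence < CONFIDENCE_FLOOR
        or item.value > VALUE_CEILING
        or category in {"complaint", "regulatory"}   # policy-defined exceptions
    )

    if needs_human:
        human_queue.add(item, reason="exception", category=category)
        log(item.id, "routed_to_human", category, confidence)
        return

    response = draft_response(item, category)    # AI drafts inside the guardrails
    send(response)
    log(item.id, "auto_handled", category, confidence)  # every step is logged
```

The questions for boards fall out of the shape of the code: who sets the thresholds, who owns the exception queue, and who reviews the log.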
Boardroom test: Ask management to bring one or two real workflows and map them as they operate today, then sketch what they could look like with AI augmenting or compressing the workflow. You are not looking for perfection. You are looking for fluency and realism.
Where AI Value Actually Sits
A Mountain, Not a Single Peak
It helps to think about AI value as a mountain with tiers, because boards often overfocus on the summit.
At the base is personal productivity. People use AI to draft, summarise, analyse, and prepare. Done well, this has low risk and high upside because it builds fluency and reveals where AI genuinely helps. It also reduces the temptation for shadow behaviour, which often arises when employees feel personal productivity at home but face blunt restrictions at work.
The middle tier is workflow automation, where productivity improvements of one to two times can appear when repeatable processes are automated and supported by good change management. Here, the governance challenge is less about the model and more about workflow clarity, controls, and ownership.
At the summit is core process automation and custom AI, including agents embedded into core systems, proprietary models, and operating models where AI handles routine work and people manage exceptions. This is where the largest potential returns sit, and where risk is highest. It is also where weak foundations tend to show up, especially fragmented data, unclear platform strategy, and missing product ownership.
The mistake many organisations make is trying to jump to the summit without building base camp. Boards can prevent that by insisting on staged maturity and honest readiness.
Boardroom test: Ask management which tier the organisation is truly operating in today, what is stopping wider adoption at the base and middle tiers, and whether any summit-level ambitions are being pursued before foundations are ready.
The Board’s Real Job
Capabilities, Conditions, and Culture
Boards do not implement AI. They shape the environment in which AI is adopted.
That environment has three parts.
Capabilities are the skills and fluency people need: AI literacy, data literacy, automation skills, product thinking, and comfort with experimentation.
Conditions are the systems and incentives that support progress: governance that can flex, transformation funding that is realistic, accessible and well-governed data, evaluation criteria and kill points, and IT that enables rather than blocks.
Culture is the set of norms that determine behaviour: psychological safety to experiment, leaders who model curiosity rather than fear, and a low tolerance for unnecessary bureaucracy.
When one of these is missing, AI adoption becomes either unsafe or stalled. Sometimes both.
The DORA 2024 report highlights the interplay between AI, platform engineering, developer experience and organisational transformation, reinforcing a point boards often underestimate: technical change and organisational conditions move together.
Boardroom test: Ask for a short, honest assessment of current capabilities, conditions and culture for AI adoption, followed by three practical steps management will take in the next six months.
A Closing Frame for the Boardroom
AI is not simply another technology wave. It changes how work is done, how decisions are made, and how value is created. Boards do not need to become AI experts. They do need to govern in a world where AI is present, moving fast, and increasingly normalised in people’s personal lives.
If you hold onto three questions, you will keep cutting through noise.
Do we understand innovation properly, and are we pairing it with enough transformation to realise value?
Do we understand what AI really changes, and how mature we are as an organisation?
Do we know where AI value actually sits, and are we building the capabilities, conditions and culture for it to emerge?
You will not have all the answers. No one does. But you will be asking the right questions, in the right way, at the right level.
That is what good governance looks like in an AI-led world.
