The argument against someday: A realistic path to building AI value

Edmundo Ortega · 8 min read

Stop treating AI enablement as a distant vision or software rollout. Progress stalls because organizations avoid hard choices today. Treat AI as an operating model change. Pick one real workflow, build with small teams, learn fast in 90 days, and let execution—not hype—shape strategy.

Someday the future will come.

The Problem with Someday

The word “someday” is doing a lot of work in AI conversations right now.

Someday, AI will run the back office. Someday, every employee will have a copilot. Someday, decisions will be faster, cheaper, and better. Someday, work will look nothing like it does today.

All of that might be true. In fact, some version of it probably will be.

The problem is that someday has become a comfortable place to live. It's far enough away that we don't have to make hard choices today, but close enough that we can convince ourselves we're being strategic just by talking about it.

This isn't an argument against vision. Vision matters. But vision without a near-term path tends to produce stalled initiatives, scattered investments, and a quiet sense of disappointment that shows up a year or two later as "AI hasn't really worked for us."

This is an argument for execution—specifically, realistic execution in a moment where the technology is powerful but immature, and changing fast.

Why Execution Keeps Stalling

Ask most executives what AI will mean for their business in five years, and you'll get a thoughtful answer. Ask what it will change in the next twelve months, and the conversation gets a lot quieter.

That gap isn't a failure of leadership. It's structural, and it comes from three converging pressures:

The technology is genuinely immature. There are very few enterprise AI use cases that are both well understood and broadly repeatable. The ground is moving under everyone's feet, which makes it hard to know what's worth building versus what will look naïve in six months.

The impatience is real. Boards want movement. Employees want quality tools and training, but also job security. Nobody wants to be the company that "missed AI." That impatience pushes organizations toward buying AI software as a shortcut to progress—software that's often just as immature as the technology itself. The result is a familiar pattern: adoption without impact, pilots that never scale, and a creeping narrative that AI is impressive but not actually useful.

This isn't a technology upgrade. Gen AI doesn't behave like previous enterprise technologies. It changes how work happens—how people draft, analyze, decide, communicate, and coordinate. That puts it much closer to an operating model shift than a software implementation. And unlike prior waves, there's no clear end-state to plan toward. Leaders are being asked to act without knowing what "done" looks like.

The distance to "someday" lets us talk about transformation without immediately confronting these tradeoffs. Strategy offsites get booked. Task forces get formed. Decks get approved. And yet very little actually changes in how work gets done.

The Question That Clarifies Everything

Here's a question worth sitting with:

Will your organization operate the same way in three to five years?

Most leaders will say no. Even if you're skeptical of the hype, AI alone makes the status quo hard to imagine.

What's less clear is how you get from today to that future. Many organizations acknowledge that work will change without having a credible path for changing it.

That gap is where strategy is supposed to live. And filling it requires treating AI as what it actually is: an organizational change, not a technology project.

If you treat Gen AI like a technology rollout, you'll almost certainly miss its impact. AI adoption reshapes roles, decision rights, workflows, and incentives. It changes who does what, how decisions get made, and what "good work" looks like. Those are organizational questions, not IT ones.

The technology itself is often the easiest part. The hard part is changing behavior—especially in organizations optimized for stability and efficiency.

What to Do Monday Morning: A 90-Day Playbook

If you're leading an organization through this transition, here's what actionable strategy actually looks like right now.

Week 1: Stop Preparing, Start Choosing

Most organizations are stuck in "readiness mode"—training programs, governance frameworks, data initiatives. All of these things matter. None of them matter without a concrete objective they're meant to support.

Without that objective, readiness efforts tend to consume time and goodwill before anyone sees value. People get tired. Skepticism grows. Alignment and excitement don't come from being prepared. They come from pulling something off.

Your first action: Pick one high-value workflow where AI could create real leverage. Not the flashiest use case. Not the one that looks best in a board deck. The one where:

  • You deeply understand the current process
  • Success would be visible and meaningful
  • You have close access to the people doing the work
  • Failure would be contained and recoverable

This is not a six-month discovery process. You already know where work is painful, slow, or expensive in your organization. Pick something that matters.

Week 2: Apply the Filter

Before you commit resources, run your chosen workflow through these five questions. If you can't answer "yes" to at least four of them, pick a different workflow.

1. Can you clearly explain the current process in 2-3 sentences?

If you can't describe how the work happens today without hedging or hand-waving, you don't understand it well enough to change it. AI won't fix a process you can't articulate.

2. Is there a clear quality standard that everyone agrees on?

"Good work" needs a definition. If people disagree about what makes a good contract review, RFP response, or customer segmentation, AI will just inherit that ambiguity. You need to know what success looks like before you can measure improvement.

3. Does the workflow create a tangible bottleneck or cost?

The best early AI initiatives attack visible pain—work that's slow, expensive, error-prone, or blocks other activities. If you can't point to a real consequence of the current process (missed deadlines, high costs, customer complaints, burned-out employees), it's probably not worth starting here.

4. Can 2-3 people use this in their daily work within 8 weeks?

Avoid workflows that require enterprise-wide coordination, major system integration, or six-month timelines. You want something small enough to build quickly but meaningful enough that people will actually use it. If you can't get to real usage in two months, the scope is wrong.

5. Will you know within 30 days if it's working?

You need fast feedback loops. Can you measure cycle time, error rates, output quality, or user behavior within a month of deployment? If the only way to know if it worked is a quarterly business review, pick something with tighter feedback.

Common mistakes this filter catches:

  • "Let's build an AI strategy assistant for executives" → Fails #2 and #4. No clear quality standard, impossible to get real usage quickly.
  • "AI-powered enterprise search across all our documents" → Fails #1 and #3. Too broad to understand, unclear what problem it solves.
  • "Automate our entire procurement process" → Fails #4 and #5. Too big to build quickly, too slow to measure.

What passes the filter:

  • "Use AI to draft initial responses to standard customer support inquiries" → Clear process, clear quality standard, creates measurable bottleneck (response time), can deploy to small team quickly, daily feedback.
  • "Help sales team generate first-draft pricing proposals from client requirements" → Understood process, agreed-upon format, saves time on repetitive work, can test with 2-3 sellers, weekly feedback on quality and speed.

Weeks 3-4: Assemble the Smallest Possible Team

You don't need a transformation program. You need 3-5 people who can build something real:

  • Someone who knows the workflow intimately (not their manager—the person who actually does it)
  • Someone who can build with AI tools (this might be an internal engineer, a sharp analyst, or a technical consultant)
  • Someone with decision authority to change how the work gets done

Notice what's missing: elaborate governance, cross-functional steering committees, change management consultants. You're not changing the enterprise yet. You're learning whether AI can actually improve one specific thing.

Critical: Set a 4-week deadline for a working prototype. Not a proof of concept. Not a demo. Something people can actually use to do real work.

Weeks 4-8: Build in Production, Not in a Lab

Here's where most AI initiatives die: they build perfect pilots that never touch reality.

Your team should be building with real data, real constraints, and real users from day one. Yes, this feels inefficient. Yes, you'll rebuild things. That inefficiency is far cheaper than spending six months building something nobody wants.

AI-enabled workflows break the assumptions behind linear planning. It's extremely difficult to predict second- and third-order effects on paper. The only way to understand how AI will change a process is to put it into the process and see what happens. Traditional waterfall approaches assume requirements can be known upfront. With Gen AI, that assumption doesn't hold.

What "working" looks like:

  • 2-3 actual users doing real work with the AI-enabled workflow
  • Clear before/after metrics (time, quality, cost—whatever matters for this workflow); a measurement sketch follows this list
  • A log of what works, what doesn't, and what surprised you
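
Fast feedback loops don't require a BI project. Here is a sketch of the kind of lightweight before/after measurement that's enough at this stage. It assumes a hypothetical CSV task log with start and finish timestamps and a phase column; the file name and field names are illustrative, not prescribed.

```python
# Lightweight before/after comparison: enough signal for a 90-day decision.
# Assumes a hypothetical CSV log with one row per completed task:
#   task_id, started_at, finished_at, phase   (phase is "before" or "after")
import csv
from datetime import datetime
from statistics import median

def cycle_times(path: str) -> dict[str, list[float]]:
    """Group task durations in minutes by phase ("before" vs. "after")."""
    durations: dict[str, list[float]] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            start = datetime.fromisoformat(row["started_at"])
            finish = datetime.fromisoformat(row["finished_at"])
            minutes = (finish - start).total_seconds() / 60
            durations.setdefault(row["phase"], []).append(minutes)
    return durations

if __name__ == "__main__":
    for phase, mins in cycle_times("task_log.csv").items():
        print(f"{phase}: n={len(mins)}, median cycle time={median(mins):.1f} min")
```

Median cycle time by phase is crude, but crude numbers reviewed weekly beat polished numbers reviewed quarterly—which is exactly the tradeoff question 5 of the filter asks about.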

You're not trying to prove ROI yet. You're trying to surface the second- and third-order effects that no plan could have predicted.

Weeks 8-12: Decide Based on Signals, Not Stories

At the end of 90 days, you won't have a scaled solution. You'll have something much more valuable: signal.

When outcomes are uncertain, progress needs a different definition. Look for these indicators rather than immediate ROI:

  • Did the people using it change their behavior? (Not "did they say they liked it"—did they actually work differently?)
  • Did it surface new questions or possibilities you hadn't considered?
  • What broke that you didn't expect to break?
  • Where did human judgment matter more than you thought? Where did it matter less?

Based on what you learned, you have three options:

  1. Scale this use case. You've found something that works and the path to broader adoption is clear.
  2. Pivot. The workflow wasn't quite right, but you learned something that points to a better opportunity.
  3. Stop. It didn't work, and that's valuable information. Kill it, document why, and redirect resources.

All three count as success. The only failure is spending 90 days and learning nothing.

What You're Actually Building

This isn't just about one workflow. You're building:

  • Organizational muscle for working with immature technology
  • Credibility with your team that AI efforts are serious and grounded
  • Pattern recognition for what works in your specific context
  • A decision framework for the next initiative

After your first 90 days, you'll know more about AI in your organization than six months of vendor demos and strategy offsites could teach you. And you'll have something to show for it.

On Buying vs. Building

This playbook assumes you're building custom solutions, at least initially. That's deliberate, but it requires explanation.

Buying enterprise AI software can move you forward. It can also lock you into assumptions about workflows, roles, and value that haven't been tested in your organization. When that happens, disappointment gets blamed on the technology rather than the expectations attached to it.

The vendor market hasn't settled yet. Most AI software is packaging uncertainty into something that looks complete and future-ready. Demos are compelling. Case studies are polished. The organizational effort required to make any of it work is often... implied.

Here's a practical approach:

  • For your first 2-3 initiatives, bias toward building. You need to learn how AI works in your specific context before you can evaluate vendor claims effectively. Building teaches you what questions to ask.

  • Buy selectively once you have pattern recognition. After you've built a few things, you'll know what "good" looks like. Then vendor solutions become easier to evaluate. You'll recognize which promises are real and which are aspirational.

  • Avoid broad platform commitments early. Enterprise AI platforms promise to solve everything. In practice, they often solve nothing particularly well. Start narrow. Expand only after you've proven value in specific use cases.

This isn't anti-vendor. It's anti-premature-commitment. The best time to buy is after you've learned enough to be a sophisticated buyer.

Leadership's New Responsibility

In this environment, leadership looks different.

It's less about having the answers and more about making sense of what's emerging. Leaders set the tone for experimentation, manage expectations, and create psychological safety for learning. Just as importantly, they decide where not to act.

What this looks like in practice:

  • Champion small bets over transformation programs. One of the most counterintuitive moves right now is to slow down decision-making while accelerating delivery. Taking time to really understand how your business operates today is not a luxury. It's the only way to identify where AI could create real leverage instead of novelty. Ironically, focusing on fewer, better bets is often the fastest way forward.

  • Manage expectations without killing momentum. Overpromising is one of the fastest ways to lose trust. Unrealistic timelines and inflated claims don't just disappoint—they make future efforts harder. Credibility, once lost, is expensive to regain. The balance is honesty with ambition: clear about what's unknown, optimistic about what can be learned. Trust compounds faster than hype.

  • Model comfort with uncertainty. Waiting for certainty is a choice. So is moving without intent. Your team is watching to see which kind of leader you are. If you need everything de-risked before you act, they'll learn to hide uncertainty from you. If you reward thoughtful experimentation—including well-executed failures—you'll get signal instead of theater.

Strategy, in this moment, isn't a single plan. It's a series of intentional steps. The goal isn't precision. It's adaptability. Not knowing exactly where you'll land isn't a failure. It's the starting condition.

A Final Reflection

AI will change how work gets done. That part is easy to say.

The harder question is whether you're actively shaping that change—or just talking about it.

Having an AI strategy doesn't mean having all the answers. It means knowing what you're trying to learn, why it matters, and how you'll adjust as you go.

If that feels messy, it's because it is. But it's also where real progress starts.

The distance between here and "someday" isn't crossed with vision documents and vendor contracts. It's crossed with small teams doing real work, learning quickly, and building the organizational muscle to do it again.

You don't need to know the full path. You just need to know the next 90 days.

