AI uncertainty is a leadership problem

If you feel exposed by how fast this is moving, you're not alone.

You're leading a successful organisation, but AI feels different from anything you've dealt with before.

Everyone is talking about it. Boards are asking questions, articles promise breakthroughs and your team is experimenting. And yet, when it comes to your own business, it's hard to point to anything concrete that actually moves the needle.

You might find yourself saying things like:

"We're looking at AI, but we need to be careful."

"Everyone's talking about it, but I haven't seen anything that genuinely changes how we work."

"I'm seeing people use ChatGPT, and I've no idea what's going into it."

Privately, the concern runs deeper.

You know this matters. You feel the responsibility to lead it properly. But you don't yet have a clear, grounded answer to what your approach should be, or how to talk about it with confidence. You don't want to rush. You don't want to get it wrong. And you don't want AI quietly creating risk inside the organisation.

That feeling often sharpens in two moments.

The first is when someone asks what your AI strategy is, how you're planning to adopt it, or where you're starting, and you realise you don't yet have a clear response.

The second is when you become aware that AI is already being used across the business, informally and without visibility, and you're not entirely sure what that might mean for risk, quality, or reputation.

This isn't about being behind. It's about control.

You're responsible for credibility, for keeping people safe, and for steering change in a way that makes sense. And right now, AI introduces a level of uncertainty that makes that harder than it should be.

If this feels familiar, you're not alone.

You don't need to understand the engine to drive

AI becomes risky when nobody is clearly leading it.

One of the reasons AI feels so uncomfortable is that it's often framed as a technical problem.

Something you're supposed to understand in detail before you're allowed to lead it.

But that framing is misleading.

You don't need to understand exactly how a car engine works to drive safely from your house to the shops. You do need to know where you're going, how to steer, and what the rules of the road are.

AI is the same.

You don't need to become an expert in how it works internally. What matters is understanding what it can do, where it can help, and how it should and should not be used inside your organisation.

Most people didn't learn to drive by themselves. They had lessons and were guided. They practised in safe conditions before being expected to operate confidently.

Yet with AI, many leaders are trying to work it out alone, while their teams experiment informally around them.

That's where the real risk creeps in.

Doing nothing doesn't remove risk. It just hands control to informal, unmanaged use.

AI doesn't become risky because it exists. It becomes risky when nobody is clearly leading it.

In reality, AI is already in your organisation. The question isn't whether it will show up. The question is whether you're leading it deliberately, or discovering how it's being used by accident.

Most leaders have never actually seen AI used well, on a real business problem, in a way they'd feel comfortable copying. They've seen demos and heard bold claims, but they haven't seen it applied, calmly and properly, to the kind of work that really matters.

Once you see AI used properly, with judgement and clear boundaries, it stops feeling abstract. It starts to feel leadable.

That's when control begins to return.

Why I take this so seriously

I saw early what happens when AI is applied properly to real work.

In March 2023, I had an experience that fundamentally changed how I thought about AI.

At the time, I was running a business development agency. Our work attracted the "big fish" for our clients: senior leaders from organisations such as Etsy, Spotify and Laurent-Perrier.

We were very good at what we did. One particular piece of work sat at the centre of that success.

It was a lead generation approach that helped attract exactly the right person. It took four to five weeks to create, and the end result was a single piece of A4 paper. Simple as it looked, it was deeply rooted in human psychology and exceptionally effective. It was the kind of work that benefited from years of experience, judgement and intuition.

Out of curiosity, and a little scepticism, I wondered whether AI could help with it.

Not as a shortcut or as an experiment, but to solve a real, important business problem.

Most people have a surface-level experience with AI and find it useful for rewriting emails.

I went deep and worked with AI as a collaborator to solve a significant problem.

Two things happened. First, I created a process that got the work done in an hour.

But that wasn't the thing that hit me.

What really stung was that the quality was exceptional. And this was with old models, years ago.

Like a slap around the face, I realised that, at some point, anybody with a £20 AI subscription would be able to replicate my 25+ years of hard-won expertise.

I knew everything about work would change. And I'm being proved right.

That realisation came with consequences.

I assumed our clients, who were mostly large, established organisations, would move quickly once they saw what was possible. In reality, they've been slow. The reputational and regulatory risk around AI was, and still is, very real, and it can be paralysing.

That meant we couldn't simply bring everyone with us.

We pivoted fully. In effect, we closed the agency overnight and rebuilt around a very different future, one where expert-level thinking would no longer be scarce, but judgement and leadership would matter more than ever.

Over time, another pattern became clear.

The organisations with the greatest opportunity weren't the biggest. They were the ones where owners and CEOs were closely involved, able to run small experiments, see real results, and make decisions from evidence rather than theory.

Since then, I've seen the same behaviour change repeatedly.

People who believe AI is technical and hard realise it's neither. They stop guessing. They start asking better questions. The quality of their work improves, not because AI replaces them, but because it shows them what they weren't seeing before.

Pressure drops. Thinking space returns. Judgement improves.

A CTO from a billion-pound organisation summed it up perfectly.

He told me he hired people for their brilliance. He could see it clearly in interviews. But once they started work, that brilliance was drowned out by the sheer weight of day-to-day demands.

By using AI as a thought partner in their daily work, he was freeing them to do their best work.

Used properly, AI didn't make those people smarter. It removed the drudgery that was hiding their thinking.

Much of this work has been done in environments where mistakes matter: pharmaceuticals, legal firms, healthcare and other highly regulated industries. Places where speed without control isn't an option.

That's why I deliberately don't start with technology.

I don't believe AI is the strategy. I believe the strategy starts with understanding your business, your priorities and what's slowing your people down. Only then does it make sense to ask whether AI can help, and how to apply it safely.

Done well, AI doesn't replace human capability. It restores it.

How change should feel

Calm, step-by-step, with clarity for your people.

People don't fear AI. They fear the uncertainty it brings.

We're all told AI is going to replace our jobs, but that we have to use it. And then it lies to us. There's a huge lack of psychological safety.

So whilst people will use it, they'll use it in very limited ways. It feels safe to rewrite an email, or to summarise meeting notes.

To unlock both your people and AI, they need psychological safety within your organisation: the freedom to experiment and make mistakes.

And they also need psychological safety with the AI itself, otherwise they'll only ever use it as a fancy Google search.

When AI change is done well, it shouldn't feel frantic or overwhelming. It should feel calm. Step by step. With clarity about what's happening and why.

That clarity matters because people look to their leaders for it.

Not because leaders are expected to have all the answers, but because uncertainty, left unaddressed, creates anxiety. And anxiety is what makes change feel threatening.

Good leadership in moments like this isn't about being the most technical person in the room. It's about creating enough clarity and direction that people can do their best work without fear of getting it wrong.

Sometimes AI is ignored, quietly, while people experiment on their own. Other times it's reduced to vague instructions like "use AI more" or "AI-ify your role", without clear guidance, boundaries or support.

Neither approach is fair on the people expected to work with it.

A good leader doesn't need to have all the answers about AI. But they do need to understand what it can do, how it's likely to impact their industry, and how to guide their people through that change responsibly.

This isn't about moving fast or chasing trends.

It's about leading change in a way that's human, deliberate and in control.

A safe way to regain control

Start with clarity, then build fluency, then prove value in focused experiments.

When leaders first engage with AI, some let their teams experiment freely.

Others spend months reading articles and watching videos, hoping clarity will emerge.

In practice, both approaches tend to create fog.

Without clarity on how AI is already being used, where it can genuinely help, and what it means for your industry, it's very hard to move forward with confidence.

Nobody walks confidently into a fog.

Regaining control starts with orientation, not action.

Once clarity exists, people can build practical fluency and trust with AI. Focused experiments can then remove friction from existing workflows, using AI to do the heavy lifting while people apply judgement and insight.

It's critical that value is proven before anything is scaled, so your decisions are made from evidence rather than theory.

Control comes from sequence, not speed.

Start with one real problem

A short working session gives you clarity without commitment.

You don't have to figure this out on your own.

A sensible place to start isn't with tools, training programmes or big decisions. It's with one real problem that matters right now.

AI Clarity Working Session

A focused, one-to-one working session to help you regain control of AI by applying it to a real business problem.

You bring a current challenge. We work through it together using AI, in context, so you can see what good use actually looks like and what's possible.

This is not a demo. It's not a sales call. It's a working session to see if we can solve your challenge quickly and efficiently.

What typically happens:

  • You see how capable AI can be when used properly
  • You understand why AI may not have felt useful so far
  • You see how you and your team are likely using it in limiting ways
  • You gain clarity on a better, safer way forward

The session lasts 60 minutes and is a fixed, one-off fee of £300.

If you don't leave with clearer thinking and a useful next step, I'll refund you immediately.

Clarity comes much faster when you work through a real problem instead of reading about possibilities.