By Stephen Downward, Head of Digital Experience, Atomic 212°
In the past 12 months, we’ve seen many organisations pour money, time and executive attention into AI programs. They’ve bought the tools, hired the consultants, run the workshops, built the playbooks and written the policies. And most of them have very little to show for it beyond a few impressive demos.
The problem here isn’t the technology. The tools work and, in most cases, they’re getting better all the time. The problem is that organisations have been measuring their AI progress by counting how many tools they’ve deployed rather than asking how many of their people are actually using them – and whether those people’s work has changed in any meaningful way as a result.
The early adopters have woven AI into their daily workflow and can’t imagine going back. But then there’s everyone else, who attended the training session, bookmarked the prompt library and then went back to doing things the way they’ve always done them. That divide is where most AI programs go to die, and it’s not openly talked about because the metrics on the leadership dashboard (tools deployed, licences activated, workshops completed) all look good.
The reason AI adoption stalls so often comes down to something that doesn’t show up in any implementation roadmap: people are uncertain about what AI means for how they work, and nobody is giving them a clear answer. When someone has spent years building expertise in a particular craft like media planning, content strategy or SEO, and they’re suddenly told that a machine can produce a version of their output in seconds, the rational response is a bit of anxiety. Not panic, necessarily, but a recalibration of where they fit. And if the organisation’s only message is “AI will make you more productive”, that doesn’t actually resolve the uncertainty.
The organisations making better progress are the ones where leadership has been specific and honest about what AI changes and what it doesn’t. They’ve said, “This is where we expect you to use it, this is where we don’t, this is how we’ll evaluate quality, and this is what your role looks like going forward”. That kind of clarity doesn’t come from a policy document or a lunch-and-learn. What you need is leaders who use the tools themselves, visibly and regularly, and who talk about AI in the context of actual work rather than in the abstract.
We recently saw a content marketer use an LLM to turn raw population data into a fully interactive, JavaScript-driven landing page. She did it in an afternoon, not because someone told her to, but because her team had clear permission to experiment, clear guidance on expectations, and a leader who was doing the same thing alongside her.
That example illustrates something the industry is underestimating. Yes, AI speeds up existing workflows, but it also collapses the boundaries between roles entirely. The traditional handoff (strategy briefs creative, creative briefs dev, dev builds the thing) assumes a division of labour that AI is making optional.
The organisations that recognise this will operate at a fundamentally different speed and cost base to those that continue as they are. But change only works if people trust the process, and trust only comes from leaders who are willing to go first. Set the rules clearly. Use the tools publicly. Be honest about what’s changing. Stop measuring rollout and start measuring whether people’s actual work is different from what it was six months ago. If it isn’t, your AI program is decoration.