Nina Christian, marketing futurist and strategist
I was at an executive briefing recently where the presenter made a series of sweeping declarations: the days of strategy are over, the days of functional business units are over, the days of marketing are over. His argument was that agentic AI will handle all of it. AI agents would find the best options, make the assessments, and do the choosing. Marketing to (and by) humans would become obsolete because agents would do the job instead. Inevitability was the vibe. And the room was full of people responsible for deploying AI in their organisations. People with budgets, mandates and accountability for getting it right.
That’s what got under my skin. Not the provocation itself, but the absolutism of it. He’s not the only one. This narrative is being presented in boardrooms and executive briefings everywhere, to capable leaders under real pressure to act, and to business owners trying to make the best decisions to keep their companies competitive. Whether you’re a senior leader with a transformation mandate or running a business while navigating the choppy, uncertain waters of the “everything is moving so fast” world we’re now in, you deserve nuance, not absolutes.
The end of marketing? Not quite.
Of all the claims made that morning, the claim that AI marks the end of marketing is the one that rattled in my head all day. Maybe because I’ve spent nearly three decades in this space. Maybe because I co-founded an AI company in the marketing space, and I’m watching AI make brand, trust and differentiation more important, not less. But mostly because the claim misunderstands something fundamental about what marketing actually is.
Here’s what the narrative misses. Strategy and marketing are both fundamentally about differentiation. Michael Porter built an entire body of work on the premise that strategy is about choosing what not to do, making deliberate trade-offs that create a unique position. Marketing, at its best, is how that position gets communicated, felt and remembered. But sitting above both is brand. Brand is the reason someone chooses you when the options all look similar on paper. It’s the accumulated experience of how you show up, what you stand for, how you make people feel, and whether they trust you enough to come back or refer you. It’s not a logo or a tagline. It’s reputation, built over time through consistency and conviction.

“The essence of strategy is choosing what not to do.” – Michael Porter. Image: file
AI can optimise. It can’t differentiate.
And this is where the “AI will handle it” argument falls apart. AI optimises, synthesises, and scales. Those are powerful capabilities. But differentiation comes from the human decisions about what to stand for, who to serve, and which trade-offs to make. Brand is built through those decisions and the way they’re expressed over time. AI can support that process. It cannot originate it. Underneath the question of whether marketing survives is a deeper question about what brands are actually built on. And that’s trust.
What trust is made of
Before we look at how trust plays out in real decisions, it’s worth being clear about what trust actually is. Because the speaker’s claim that trust becomes rational in an AI-mediated world only holds if you define trust narrowly, as reliability or verification. And that’s not what the research says trust is. Stephen M.R. Covey breaks trust into two dimensions: character and competence. Competence is about capabilities and results. Can you deliver? Character is about integrity and intent. Do you mean well? Are you honest? Do you care? AI can handle the competence side. It can be fast, reliable, and consistent. But it can’t demonstrate character. It has no intent. No integrity in the human sense. No benevolence. It has nothing at stake.
The Edelman Trust Barometer measures trust through a similar lens: competence (can this institution deliver?) and ethics (is it doing the right thing?). Their most recent data shows something that should give every AI advocate pause: trust in AI is low and declining in many markets, even as adoption accelerates. People are using it more and trusting it less.
This has a direct commercial implication. Covey’s argument is that high trust increases speed and reduces cost, while low trust makes everything slower and more expensive. Many of the people advocating for AI are doing so precisely because it’s faster and cheaper. But if the way you deploy it erodes trust, you may end up with the opposite outcome. Slower decisions, more friction, higher cost to win and retain clients. And here’s the distinction that matters most: AI can be efficient and reliable, but it cannot care about you. Benevolence is a core element of trust. If you build an organisation optimised for profit with no humans making decisions, no one who genuinely cares about the client relationship, you don’t have a high-value business. You have a transactional one. And transactional businesses are easy to leave.
How trust actually shows up in decisions
The frameworks explain what trust is made of. But the more revealing question is how it actually behaves when people make decisions. Because even when every rational box is ticked, the choice rarely comes down to the rational boxes.
Boards and C-suites don’t choose partners purely on optimised criteria. Relationship history matters. Reputational comfort matters. The feeling that someone understands your context, your culture, your risk appetite. Anyone who’s been around business long enough has seen the “best option on paper” lose to the one that felt right. The partner whose leadership gave them confidence they’d be well supported over the long term. The one with the wisdom, integrity and maturity to weather storms. These are significant inputs into high-stakes decisions, and they’re rarely rational in the way an agent would model them. Trust, by its very nature, is emotional. We don’t have to speculate about what happens when this gets ignored. We’ve already seen it play out.

Stephen M.R. Covey: “Change moves at the speed of trust.” Image: file
Klarna: when efficiency outpaces understanding
Klarna, the Swedish fintech, became one of the most visible case studies in aggressive AI deployment when it announced in 2024 that its AI assistant was handling two-thirds of customer service chats, performing the equivalent work of 700 full-time agents. The company reduced overall headcount from around 5,000 to approximately 3,800, with a stated ambition to halve its workforce. The logic was straightforward: faster, cheaper, better. But the reality was more complicated. While Klarna initially reported equivalent customer satisfaction scores, the picture grew more mixed as the AI encountered complex, emotionally sensitive or multi-layered queries it couldn’t handle well.
By late 2024, reports indicated the company was quietly bringing human agents back for these interactions. The lesson here isn’t that AI can’t handle customer service. It’s that deploying at speed, without fully understanding the human trust dynamics at play, creates costs that don’t appear in the efficiency model. The savings look compelling on a spreadsheet until your customers start leaving, or your brand reputation starts eroding in ways far more expensive to repair than the headcount you reduced.
How AI changes the mechanics of trust
AI agents don’t eliminate trust. They change how it’s built. Agents will increasingly do the searching, filtering and shortlisting. But what are they drawing on? The same cues humans use: who’s being referenced, who has a coherent body of work, who others are recommending. The mechanism has shifted, but the job of building trust and credibility is more important than ever. The buying journey is becoming more private, not more automated. People research and evaluate behind closed doors, through AI summaries, peer conversations, and their own quiet investigation. By the time someone reaches out, they’ve already decided whether they trust you.
Brands need to be findable, coherent and shareable across every surface where that private evaluation happens. That’s still marketing. And coherence is what AI rewards. When someone asks their favourite LLM to recommend an expert, what gets surfaced? Consistent, well-referenced thinking. A body of work that hangs together. Peer endorsement. The brands that treat AI as the end of marketing will become invisible to the very systems they think will replace them.
Marketing to machines is smart. Stopping there isn’t.
I want to be clear about something. I am not saying you shouldn’t market to machines. Or have machines help you with your marketing. You absolutely should. AI systems are increasingly the first filter. They decide what gets surfaced, what gets shortlisted, what gets recommended. So it makes sense to use machines to help your message be more intelligible to machines. If your brand isn’t legible to them, you won’t make it to the human conversation.
But that’s the point. You still need to make it to the human conversation. Machines will look at a lot of the same cues humans do: consistency, clarity, coherence, a body of work that hangs together. So optimising for AI discovery and optimising for human trust aren’t as far apart as people think. But they’re not the same thing either.
The claim that marketing to humans will become obsolete assumes that the machine’s recommendation is the end of the journey. It rarely is, especially for high-stakes, high-value, or complex decisions. The machine might get you on the list. The human still decides whether to trust you enough to pick up the phone, send the message or fill in the contact form. This is actually where the opportunity lies for brands willing to do the harder work. Because if it were easy, everyone would do it. The brands that can be both machine-intelligible and deeply human will be the ones that differentiate. The rest will compete on price.
The questions leaders should be asking before they deploy
The current AI market has the energy of a gold rush, with companies everywhere looking to stake their claim and capitalise on the fear and urgency driving adoption. And the reality is it’s moving faster than most organisations’ ability to evaluate it. Vendor claims are broad, credentials are hard to verify, and the pressure from boards and stakeholders to “have an AI strategy” is intense. When someone positions themselves as an “AI specialist” without specifying what problem they solve or for whom, it should raise questions, not confidence. Especially when there’s no evidence of the difference it’s actually made, to their clients’ brands, their bottom line, or anything beyond “59% faster document turnaround” and “46% fewer hours on routine tasks.”
It’s understandable that organisations default to speed when the pressure is this intense. But speed without clarity is expensive. Hasty, fear-led deployment rarely produces the outcomes the business case promised. The cost usually shows up in trust, not in the deployment budget. That’s the Wild West we’re operating in right now. And fear has always been a terrible deployment strategy. AI’s acceleration is real. I’m not suggesting anyone slow down. But there’s a meaningful difference between pushing AI because the capability exists and using it to solve real human problems in partnership with the people it affects. That means being willing to slow down on the questions that matter: the ethical, cultural and data sovereignty considerations that are largely being sidelined in the rush to deploy. So what does a more considered approach look like? Three things.
Understand enough of the landscape to make your own decisions. You don’t need to understand everything. But you should be across the areas most likely to affect your business meaningfully. Look at how and why others in your space are using AI. What’s working and what isn’t. How is it affecting the sentiment of their customers? Stay across developments in the fields that matter to you. The goal isn’t expertise in AI. It’s enough literacy to ask the right questions and spot when someone is selling you fear instead of strategy.
Remember that you have agency. You get to choose what could, should, and will be delegated to AI. Fear of being left behind might create urgency, and that’s fine. But it shouldn’t be the driving force behind how you approach it.
Partner with people who can help you work through the nuance, people who aren’t selling fear, aren’t blanket critics dismissing AI outright, and aren’t blind enthusiasts cheerleading it uncritically either.
Strategy is about choices and trade-offs, what you’re going to do and what you’re not. Protect trust and brand alongside the efficiency metrics. Workflow improvements, operational efficiency, faster production, cost savings, short-term revenue gains: these are all legitimate metrics. But they shouldn’t be measured in isolation. If your brand isn’t getting stronger, if client trust isn’t growing, if your reputation in the market isn’t improving, then the efficiency gains are hollow. You might be saving money now while quietly eroding your long-term brand and business viability, making it easier for customers to leave later. The two things that protect your revenue into the future are trust and brand. Don’t sacrifice them for a dashboard that looks good this quarter.
Instead of asking “how fast can we deploy AI?”, leaders should be asking:
• Where are we augmenting human expertise, and where have we started replacing it?
• Do our customers know the difference?
• Are we deploying because we’ve identified a meaningful problem, or because we’re afraid of being left behind?
• What happens to trust if this goes wrong?
• Is our brand stronger because of how we’re using AI, or weaker?
Strategy is choice. Good marketing is differentiation. Brand is trust earned over time. None of those things are going away. They’re getting harder, more important, and more valuable. Just because someone tells you the old rules are dead doesn’t mean they are. Those rules still decide who wins. AI just changes how the game is played. The competitive advantage belongs to whoever deploys AI in a way that makes people trust them more. And trust, despite what that presenter claimed, isn’t rational. It never has been.
Feature image: Nina Christian, marketing futurist and strategist. Supplied.
Nina Christian is a marketing futurist and award-winning strategist who helps leaders bring more humanity and strategic intelligence into how they market and lead. A Certified Practising Marketer (CPM), Fellow, and Life Member of the Australian Marketing Institute, she is the creator of Marketing Me® and co-founder of Virtually Myself®, an agentic AI company building in the marketing space. She is the author of Marketing Me: Take Charge of Your Personal Brand and Make Your Mark on the World and Solar System Marketing, a framework for people who are the brand and need a simpler, more sustainable way to build visibility with clarity and commercial focus. Through keynotes and advisory work, she helps founders, executives and organisations navigate the intersection of AI, trust and long-term brand value, so the efficiency gains of AI don’t come at the cost of the trust that makes businesses durable and the potency that makes their message memorable and impactful.