AI hallucinations cost Aussie brands millions in lost sales


Atlas Digital and Ryan McMillan reveal ‘AI reputation breaches’ that cost brands revenue and stature.

Just when the corporate world thought it grasped search engine optimisation, a new, invisible threat is quietly wrecking brand reputations. New data from Atlas Digital reveals that AI platforms misrepresent the majority of Australian businesses.

This creates a commercial black hole for companies asleep at the wheel.

The global growth partner dubs the phenomenon an ‘AI reputation breach’. The issue arises when generative engines like ChatGPT, Google Gemini, Perplexity AI, and Claude produce incorrect, outdated, or completely fabricated information about a business.

Recent Atlas Digital audits across the technology and financial services sectors uncovered a startling reality.

A staggering 72% of brands suffered at least one factual error in AI-generated responses. Meanwhile, 70% completely failed to appear in AI recommendations for their own specific categories.


If enterprise spending on AI misinformation mitigation hits $30 billion by 2028, where is that money coming from? Image: supplied

The invisible reputation breach

To make matters worse, most users blindly trust the bots and never check the information’s accuracy.

Mediaweek spoke with Atlas Digital founder and chief executive officer Ryan McMillan, who warns that AI shapes business reputations and compliance positions without any corporate oversight.

“AI is now a front door to decision-making, but it is not always getting the facts right,” McMillan said. “There is no alert, no complaint. The customer simply chooses a competitor, and the impact goes unseen.”

The analysis shows the issue hits complex businesses hardest. McMillan highlighted the severe risk for businesses operating in tightly regulated industries.

“We had one client that only had a 44% hit rate in terms of accuracy, and they are in the fintech space,” McMillan said. “You can actually imagine in a highly regulated financial space when there are inaccuracies. In their case, a lot of it was around features they offered and pricing.”

The commercial cost of hallucination

Atlas Digital organic product lead Alla Lvovich highlights that the commercial implications accelerate daily as AI rapidly embeds itself into everyday consumer behaviour.


Alla Lvovich says, “AI referral traffic to websites has increased by 1,200% year-on-year”. Image: supplied

“ChatGPT alone now reaches more than 900 million weekly users, while Google’s AI Overviews are seen by more than two billion people globally,” Lvovich said. “In Australia, AI referral traffic to websites has increased by 1,200% year-on-year.”

The real kicker lies in the conversion metrics.

While traditional search might only muster a 5% to 10% conversion rate for highly targeted ads, AI platforms blow those numbers out of the water.

“With an LLM, we are seeing 17% to 25%, and I have seen as high as 40% to 50% conversion rates on average for LLM results coming through,” McMillan noted. “People seem to trust an LLM recommendation more like that of a friend than of an advertisement. And actually, 80% of people do not double-check the LLM results anyway, so they just take it at face value.”

The new cybersecurity

Model accuracy is failing to keep pace with consumer adoption.

Hallucination rates across major AI models range from 15% to 52%, with engines frequently repeating and amplifying fabricated claims over time. Worse still, bad actors actively exploit these vulnerabilities.

“There have been several tests done by cybersecurity firms around the world cracking LLMs and exploiting them,” McMillan said. He pointed to astroturfing tactics on platforms like Reddit, where bad actors create fake accounts to flood channels with seeded opinions.

“There was a university research organisation in Zurich that actually ran a test to see if they could influence that using Reddit, and they did,” McMillan added. “So there are real vulnerabilities out there for people to sway opinions.”

Taking control of the narrative

With tech moving at breakneck speed, McMillan urges businesses to treat their AI-generated presence as a core pillar of their commercial strategy rather than waiting for lawmakers to catch up.

“There is obviously a massive lack of regulation,” McMillan said. “The technology seems to be moving a lot faster than the regulation anyway. I think it is in their favour to just go and try to take control of that rather than hope that regulation will help them.”

Taking control starts with a basic, internal audit. McMillan advises marketers to step into the shoes of their customers and manually test the platforms to see exactly how the bots treat their brand.

“Businesses need to jump into ChatGPT, Gemini, and Claude and literally ask, ‘What are the pros and cons of my business?’ or ‘Who are the best providers in my industry?’,” McMillan said. “You need to see exactly what your customers see when they do their research.”
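For teams that want to repeat this audit regularly, the customer-style questions McMillan describes can be scripted rather than typed by hand. The sketch below is a minimal illustration in Python; the brand and category names are placeholders, and actually submitting the prompts to each platform's API (and logging the answers for comparison over time) is left to the reader.

```python
# Sketch: generate the audit questions McMillan suggests for a given brand.
# "Example Fintech Co" and "Australian fintech" are illustrative placeholders.

def build_audit_prompts(brand: str, category: str) -> list[str]:
    """Return the customer-style questions to put to each AI platform."""
    return [
        f"What are the pros and cons of {brand}?",
        f"Who are the best providers in {category}?",
        f"What does {brand} offer, and at what price?",
    ]

if __name__ == "__main__":
    # Run the same question set against ChatGPT, Gemini, and Claude,
    # then compare the answers against the brand's actual facts.
    for prompt in build_audit_prompts("Example Fintech Co", "Australian fintech"):
        print(prompt)
```

Repeating the same prompt set monthly gives a brand a baseline it can track, which is the point of the exercise: a drop in accuracy shows up before customers do.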

Once a brand establishes its baseline, it can identify dangerous hallucinations and critical knowledge gaps. From there, companies can begin actively feeding accurate, structured data and digital PR signals back into the ecosystem to correct the record.
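One common form that "accurate, structured data" takes is schema.org markup published on the brand's own site, which crawlers and AI models can ingest as a canonical statement of facts. The snippet below sketches a JSON-LD Organization block built in Python; the field values are illustrative placeholders, and this is a general structured-data technique rather than Atlas Digital's specific method.

```python
import json

# Sketch: a schema.org Organization record in JSON-LD, one structured-data
# signal a brand can publish so models pick up correct facts about it.
# All values below are illustrative placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Fintech Co",
    "url": "https://www.example.com",
    "description": "Payments platform for Australian small businesses.",
    "sameAs": [
        "https://www.linkedin.com/company/example-fintech-co",
    ],
}

# Embed the output in the site's HTML inside a
# <script type="application/ld+json"> tag.
print(json.dumps(organization, indent=2))
```

Keeping fields like description and pricing pages accurate at the source matters most in regulated sectors, where the fintech example above showed how feature and pricing errors creep in.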

“Do not wait for the platform to fix it for you,” McMillan added. “Take control of your narrative now, because your competitors probably already are.”

Feature image: Ryan McMillan, Atlas Digital founder and chief executive officer. Image: supplied

