Claire Fenner: CMOs must find moral compass in AI arms race

By Claire Fenner, CEO at Atomic 212°

Marketers around the world have kicked off 2024 running on their digital hamster wheels: an endless effort to keep up with the evolving artificial intelligence landscape.

While it’s indisputable that this innovation can deliver some incredible results, it’s important to remember that AI has a dark side which, if handled incorrectly, can breed distrust among consumers. That means finding your moral compass in AI innovation should be treated as every bit as important as the structures in place to implement the tech itself.

The latest Edelman Trust Barometer indicated that whether people embrace AI comes down to one thing: how effectively regulated they perceive it to be. The vast majority of people worldwide believe innovation is mismanaged and feel their government regulators lack an adequate understanding of emerging technologies to regulate them effectively.

As AI and machine learning see increasingly widespread adoption, it’s crucial that leaders ensure the technology is developed and implemented responsibly. But to realise its potential fully, regulators must also focus on safeguarding users, organisations and society from the pitfalls.

Trust and transparency are everything.

So how does this apply to your business? 62% of people globally said they expect CEOs to manage changes occurring in society, not just those occurring in their business. 78% want their CEO to speak publicly about issues such as the impact of automation on jobs, and 79% want them to speak publicly about the ethical use of technology.

But overcoming ethical challenges with AI will be a constant exercise in diligence, collaboration and a willingness to address issues proactively. Rather than rushing large-scale deployment, leaders need to prioritise demonstrating value safely. One way to do this is to assemble a multidisciplinary team with expertise across technical, ethical and domain-specific matters to help balance competing priorities.

As the technology progresses, so too must research and standards. Prioritising education and discussion of AI risks and mitigations in the workplace can be genuinely empowering for employees. The Edelman Trust Barometer tells us that when people feel in control of how innovations affect their lives, they are more likely to embrace them than resist them. Leaders should listen to concerns and be open to questions. With openness to new strategies and partnerships across industries, businesses can stay on top of their responsibilities as technologies continue to evolve rapidly.

Earning consumers’ trust is one thing; holding onto it in a digital world full of cybersecurity breaches is quite another. We’ve seen some catastrophic data breaches in the past year, emphasising just how important it is to establish robust data privacy and governance practices. According to IBM, the average data breach in 2023 cost AUD$6.77 million – a 15.3% jump from 2020.

Once someone forks over their most sensitive information, the onus is on the organisation to handle that data ethically – in how it trains staff and in how it develops and deploys systems – a risky operation if protection controls are insufficient.

So how can organisations maintain governance and oversight of how user data is accessed and used? Techniques like anonymisation can help when sharing data is necessary, while access controls and auditing make it possible to monitor who is using data and for what purposes. At the very least, well-defined policies are a must to ensure data is only ever used as agreed, and to the benefit of the organisation and its users.
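
To make that concrete, here is a minimal sketch of what pseudonymisation plus access auditing can look like in practice. It assumes a managed secret, and the function and field names are illustrative – a hedged example, not a reference implementation.

```python
# Minimal sketch of two governance techniques mentioned above:
# pseudonymising identifiers before data is shared, and logging every
# access so usage can be audited. Names are illustrative assumptions.
import hashlib
import hmac
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("data-access-audit")

SECRET_SALT = b"rotate-me-and-store-in-a-vault"  # assumption: a managed secret


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email) with a keyed hash,
    so shared datasets cannot be trivially re-identified."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()


def access_record(user: str, purpose: str, record: dict) -> dict:
    """Gate access behind an audit entry: who used the data, and why."""
    audit_log.info(
        "user=%s purpose=%s record_id=%s at=%s",
        user, purpose, record["id"], datetime.now(timezone.utc).isoformat(),
    )
    # Share only the pseudonymised view, never the raw identifier.
    return {"id": record["id"], "email": pseudonymise(record["email"])}


if __name__ == "__main__":
    row = {"id": 42, "email": "customer@example.com"}
    print(access_record("analyst-7", "campaign-measurement", row))
```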

Once trust is established and data security is well under control, it’s time to harness AI systems to create unique and personalised outputs. Sure, off-the-shelf solutions provide speed and ease of implementation, but these copy-paste responses typically fail to capture an individual organisation’s distinct expertise, culture and customer relationships. That can lead to outputs that don’t address user needs or queries appropriately, and such homogeneous results tend to undermine user trust and brand consistency.

Make sure AI applications are customised with internal knowledge to produce results that hit the right notes. Test systems to evaluate how well outputs meet requirements and then refine approaches where needed.
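
As one hedged illustration of that testing step, the sketch below scores a draft output against explicit requirements – brand terms that must appear, claims that must not – before anything ships. The checks and the sample draft are hypothetical placeholders, not a prescribed workflow.

```python
# Lightweight sketch of the "test, then refine" step: score a model's
# output against explicit requirements before it ships. Requirements
# and the sample draft are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Requirement:
    name: str
    passes: bool


def evaluate(output: str, must_include: list[str], must_avoid: list[str]) -> list[Requirement]:
    """Check that required terms appear and banned claims do not."""
    text = output.lower()
    results = [Requirement(f"includes '{t}'", t.lower() in text) for t in must_include]
    results += [Requirement(f"avoids '{t}'", t.lower() not in text) for t in must_avoid]
    return results


if __name__ == "__main__":
    draft = "Our new plan gives you guaranteed savings on every bill."
    checks = evaluate(
        draft,
        must_include=["savings"],
        must_avoid=["guaranteed"],  # e.g. a claim legal has not approved
    )
    for check in checks:
        print(("PASS" if check.passes else "FAIL"), check.name)
    # A FAIL on 'guaranteed' flags this draft for human review and refinement.
```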

Finding your moral compass in murky AI waters is challenging, but it’s not impossible. At the end of the day, it comes down to giving people what they want: transparency and trust, data security, and unique, personalised outputs. And for competitive CMOs, the outcome can be well worth the effort.
