Men's lifestyle site Man of Many continues to stay on the front foot of online publishing as the industry is reshaped by the adoption of AI. Today it announced it has been selected to join the APAC Newsroom AI Catalyst, an accelerator program supported by OpenAI, along with memberships of WAN-IFRA and the News/Media Alliance.
Last month Man of Many revealed it had implemented a mandatory user sign-in for articles on its site, along with a refreshed newsletter offering of eight topic-focused emails. The moves gave Man of Many a greater opportunity for data collection and a stronger connection with its readership.
Co-founder Scott Purcell has been vocal about the need for independent publishers to safeguard themselves against the impact of changes to search engines and the LLM content harvesting fuelled by the AI gold rush currently underway. Mediaweek reached out to Purcell this morning after he announced the memberships of the World Association of News Publishers (WAN-IFRA) and the North American-focused News/Media Alliance, along with acceptance into the current APAC Newsroom AI Catalyst intake.
1) What prompted your interest in joining WAN-IFRA and the News/Media Alliance? Was this something you sought out, or did they reach out to you?
Joining WAN-IFRA was actually a condition of being accepted into the OpenAI Newsroom AI Catalyst, which is what set this in motion. However, it was a natural step for us regardless. We’ve entered their awards in the past and have immense respect for the work they do, so becoming a full member felt like the right evolution for Man of Many. We won gold for Best Native Advertising and silver for both Best Lifestyle Site and Best Newsletter at the 2024 WAN-IFRA Digital Media Awards Asia.
Our decision to join the News/Media Alliance was entirely proactive and strategic. We were drawn to the calibre of publishers in their network (The New York Times, The Wall Street Journal, Condé Nast, The Associated Press, etc.) and, crucially, to the vital work they do in lobbying for publishers’ rights globally. In an era where our industry faces existential challenges from platform shifts and AI, being part of a collective voice that champions the value of original content is more important than ever.
2) How does joining OpenAI’s accelerator program work at a practical level? Will the learnings also help you create content that performs well across competing LLMs?
On a practical level, three of our team members will attend the in-person training, and from there, a series of online workshops will be made available to our entire team. The goal is to build a foundational level of AI literacy across the business.
However, I want to be very clear about our intention here. This isn’t about using AI to write articles. We remain completely committed to our ‘100% Human’ initiative and our strict Responsible AI Usage Policy. For us, the value of this program is in accelerating automation on the back-end: streamlining processes like commercial reporting, data analysis, and on-site content recommendations. It’s about using AI to make our human team more efficient and powerful, not to replace them.
To your second point, the learnings will absolutely help us engage with all LLMs more effectively. The strategy isn’t to create content with AI, but to structure our human-created content to be more visible to AI. By optimising our articles to be ‘AI-ready’, we’re teaching the algorithms what quality, authoritative content looks like, which increases our chances of being surfaced in AI overviews and generating high-intent AI referrals.
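Purcell doesn’t detail exactly what ‘AI-ready’ optimisation involves at Man of Many, but structured data is one common way publishers make human-written articles easier for crawlers and LLM retrieval systems to interpret. The sketch below is a rough illustration only: the function, publisher name and field values are hypothetical placeholders, not Man of Many’s actual markup.

```python
import json

def article_jsonld(headline: str, author: str, published: str, url: str) -> str:
    """Build a schema.org NewsArticle JSON-LD snippet for embedding in a page's <head>."""
    data = {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "publisher": {"@type": "Organization", "name": "Example Publisher"},  # placeholder
        "datePublished": published,  # ISO 8601 date string
        "mainEntityOfPage": url,
    }
    # The <script> tag is what actually gets embedded alongside the human-written article.
    return f'<script type="application/ld+json">{json.dumps(data, indent=2)}</script>'

if __name__ == "__main__":
    print(article_jsonld(
        "Example headline",
        "Jane Writer",
        "2025-01-01",
        "https://example.com/example-headline",
    ))
```

Markup like this doesn’t change what readers see; it simply labels the authorship, publisher and publication date that signal the ‘quality, authoritative content’ Purcell refers to.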
3) Can you speak to the multi-pronged approach of putting up a paywall to aid data collection (and using that to build a stronger relationship with the audience), while also facilitating the sharing of information with platforms like OpenAI?
They might seem like opposing actions, but they are two sides of the same coin: securing the future of our business.
Firstly, to clarify, we’re implementing a free registration wall, not a hard paywall. The primary goal is to shift from ‘rented’ audiences on social and search to ‘owned’, direct relationships with our community. This allows us to collect valuable first-party data in a privacy-compliant way, which is essential for personalising user experience and future-proofing our advertising model post-cookies.
Simultaneously, our engagement with AI platforms isn’t about giving our content away freely. Personally, I use AI in my day-to-day work, so it would be completely illogical for us to block our content from platforms that people are increasingly using to find answers. The data already backs this up: we’re seeing 15% month-on-month growth in our referral traffic from these AI platforms.
So, our strategy is to build a strong presence there, but on the condition that it leads to a framework for fair value exchange. That’s why we’re actively working with partners like TollBit and ProRata.AI to explore monetisation, but we’re also realists. Frankly, I don’t believe pay-per-crawl models will be truly effective until there is a legislative push compelling platforms to pay for scraping. An AI bot isn’t going to pull out a credit card if it hits a block.
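Pay-per-crawl schemes of the kind Purcell mentions generally operate at the HTTP layer: a known AI crawler is refused content (typically with a 402 Payment Required response) unless a licensing arrangement is in place. The sketch below is a hypothetical illustration of that idea only; the user-agent list, licence-token header and server wiring are assumptions, not how TollBit or ProRata.AI actually integrate.

```python
# Hypothetical pay-per-crawl gate: known AI crawler user agents receive
# 402 Payment Required unless they present a licence token.
from wsgiref.simple_server import make_server

AI_CRAWLER_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot")  # illustrative list
LICENSED_TOKENS = {"example-licence-token"}                    # placeholder store

def app(environ, start_response):
    user_agent = environ.get("HTTP_USER_AGENT", "")
    token = environ.get("HTTP_X_CRAWL_LICENCE", "")  # hypothetical header name

    is_ai_crawler = any(bot in user_agent for bot in AI_CRAWLER_AGENTS)
    if is_ai_crawler and token not in LICENSED_TOKENS:
        # Refuse the crawl until a commercial agreement is in place.
        start_response("402 Payment Required", [("Content-Type", "text/plain")])
        return [b"Crawling this content requires a licensing agreement.\n"]

    # Human readers and licensed crawlers get the full article.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html><body>Full article body served here.</body></html>"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```

As Purcell notes, the gate itself is the easy part; the open question is whether anything compels a blocked crawler to come back with payment rather than simply moving on.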
So while we engage and experiment, our core belief remains firm: meaningful protection for original content will come from legislation. That’s the only way to ensure that the companies building multi-billion-dollar technologies on our work are required to compensate us for its value.