How moderation can protect ROI in social media
By Roy Brockman, Chief Revenue Officer, Bodyguard.ai
Social media is big business, for business. Almost 5 billion people worldwide use social media, a figure that grew by 4.2% in 2022. With so many potential consumer eyeballs at stake, it’s no wonder that brands, clubs and businesses invest so much time, energy and money into building their online communities and into social media advertising – a market projected to reach more than $268 billion this year.
For many brands, their owned social media channels are an integral part of their communications strategy. They’re an opportunity to build engagement, loyalty and connection with fans and customers. They can also be a crucial conduit for feedback – both good and bad.
But this is not without risk, both to reputation and revenue. There have been countless high-profile social media storms in which toxic and hateful content posted by bad actors has damaged brand reputations and hurt the very communities that brands want to protect. It’s worth remembering that 40% of people leave a platform on their first encounter with toxic content. Left unchecked, online toxicity is a genuine commercial problem for brands.
What’s more, it’s easy for racist, sexist or inflammatory comments to grab the headlines. And while it’s imperative that brands address this, they should be careful not to overlook other, more insidious types of toxic content.

Take sports and entertainment, for example. Digital piracy in this sector hurts both the fan viewing experience and clubs’ commercial revenues. In fact, a 2019 study conducted by American sponsorship valuation firm GumGum Sports and London-based digital piracy experts MUSO showed that illegal streaming causes Premier League clubs to lose approximately £1 million in sponsorship for each match.
Illegal ticket resales and copycat products also represent potential financial losses. Arcom estimates that illegal ticket sales account for a 1% loss of the global market, equating to around €20m a year. And brand protection company MarkMonitor suggests online counterfeiting is worth about £10bn annually in the UK. Yet research by Marketing Week showed that a quarter of marketers have no process for monitoring or enforcing anti-counterfeiting action.
With more and more sales moving online, social media channels are a natural habitat for fraudsters to lie in wait. And the sheer scale and pace of online conversations make it almost impossible for human moderation teams to keep up. If a trained human moderator takes around ten seconds to read, process and moderate a post, and the average lifespan of a tweet is just 16 minutes, it’s easy to see why keeping brand channels free of toxicity is an uphill struggle. It’s even more complex when you consider that spam and junk links aren’t always obvious at first glance.
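To put that scale problem in numbers, here is a back-of-envelope calculation. The ten-second review time and 16-minute tweet lifespan are the figures above; the match-day post volume is purely an illustrative assumption.

```python
import math

# Back-of-envelope scale check using the figures above:
# ~10 seconds for a trained moderator to handle one post,
# ~16 minutes average lifespan of a tweet.
SECONDS_PER_REVIEW = 10
TWEET_LIFESPAN_SECONDS = 16 * 60

# Posts one moderator can review within a single tweet's lifespan.
posts_per_lifespan = TWEET_LIFESPAN_SECONDS // SECONDS_PER_REVIEW   # 96

# Illustrative assumption (not a measured figure): a club channel
# receiving 500 posts a minute during a live match.
POSTS_PER_HOUR = 500 * 60
capacity_per_moderator_hour = 3600 // SECONDS_PER_REVIEW            # 360
moderators_needed = math.ceil(POSTS_PER_HOUR / capacity_per_moderator_hour)

print(f"One moderator covers {posts_per_lifespan} posts per tweet lifespan")
print(f"{moderators_needed} moderators needed, working simultaneously")  # 84
```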
To moderate social media at scale, brands need to automate the process, lifting the burden from their community managers onto technology. Using AI and large language models (LLMs), moderation technology can analyse thousands of posts a second, automatically removing up to 90% of toxic and hateful content in real time, before it has a chance to do any real damage. It can also detect spam and junk links that lead customers away from your platforms onto dangerous or illegal sites, where they may become victims of fraud or be directed to low-quality, illegal goods that are poor copies of your brand’s products.
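As a rough illustration of what such a pipeline involves, here is a minimal sketch. The classify_toxicity() stub, the thresholds and the domain allow-list are all illustrative assumptions standing in for a production model and platform API – not Bodyguard.ai’s actual system.

```python
import re

# Minimal sketch of an automated moderation pipeline. classify_toxicity()
# is a trivial placeholder for a real model (an LLM or trained classifier);
# all thresholds, helper names and the domain allow-list are assumptions.

TOXIC_TERMS = {"idiot", "scum"}                          # tiny illustrative lexicon
URL_PATTERN = re.compile(r"https?://(\S+)", re.IGNORECASE)
TRUSTED_DOMAINS = {"example-brand.com", "youtube.com"}   # illustrative allow-list
REMOVE_THRESHOLD = 0.85                                  # auto-remove above this
REVIEW_THRESHOLD = 0.50                                  # human review above this

def classify_toxicity(text: str) -> float:
    """Placeholder scorer; a real system would call a trained model here."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return 1.0 if words & TOXIC_TERMS else 0.0

def is_junk_link(text: str) -> bool:
    """Flag posts linking outside the brand's trusted domains."""
    for match in URL_PATTERN.finditer(text):
        domain = match.group(1).split("/")[0].lower().removeprefix("www.")
        if domain not in TRUSTED_DOMAINS:
            return True
    return False

def moderate(post: str) -> str:
    """Return an action for one post: 'remove', 'review' or 'allow'."""
    score = classify_toxicity(post)
    if score >= REMOVE_THRESHOLD or is_junk_link(post):
        return "remove"                  # hide before it does damage
    if score >= REVIEW_THRESHOLD:
        return "review"                  # borderline: a human decides
    return "allow"

print(moderate("Great match today!"))                     # allow
print(moderate("You absolute idiot"))                     # remove
print(moderate("Cheap kits here https://knockoffs.biz"))  # remove (junk link)
```

The key design point is triage: the clear-cut cases are handled in milliseconds, so human moderators only see the borderline fraction that genuinely needs judgement.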
Moderation can also help brands identify fake followers in their own communities. A 2021 study on bot management by Netacea found that automated bots operated by malicious actors cost businesses 3.6% of their annual revenue on average. For the worst-affected businesses, this equated to at least $250 million every year. With brands dedicating huge budgets to building loyal and engaged communities, it makes little sense to allow bots and trolls to undermine that spend. Instead, brands can weed these out at the root to ensure they have an authentic online community that buys into their product and identity, which in turn should lead to better customer conversions.
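For a sense of how fake accounts can be flagged, here is a simple heuristic sketch. The signals and thresholds (account age, follower-to-following ratio, posting rate) are common rules of thumb chosen for illustration, not Netacea’s or any vendor’s actual methodology.

```python
from dataclasses import dataclass

# Illustrative heuristics for spotting likely bot accounts. Real bot-management
# tools combine far more signals; every threshold here is an assumption.

@dataclass
class Account:
    age_days: int
    followers: int
    following: int
    posts_per_day: float
    has_avatar: bool

def bot_score(a: Account) -> int:
    """Count crude bot signals; a higher score means more suspicious."""
    score = 0
    if a.age_days < 30:
        score += 1        # brand-new account
    if a.following > 0 and a.followers / a.following < 0.01:
        score += 1        # follows thousands, followed by almost no one
    if a.posts_per_day > 100:
        score += 1        # inhuman posting rate
    if not a.has_avatar:
        score += 1        # default profile picture
    return score

suspect = Account(age_days=5, followers=2, following=4_000,
                  posts_per_day=250, has_avatar=False)
print(bot_score(suspect))  # 4 of 4 signals -> flag for review or removal
```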
For brands that don’t make moderating content a priority, the ramifications are far-reaching, and the stats show that content moderation is one of the most powerful weapons a brand can have in its tech arsenal. Nurturing social media and an online following is an essential part of any communications strategy; by moderating content, brands can make sure they don’t fall foul of the potential pitfalls that come with it.