WORDS —
MERILEE KERN
With industries across the board becoming increasingly adept at using AI-driven Large Language Models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude 2 and Meta AI’s Llama 2, one obvious question arises: how can we tell whether the content we encounter was generated by AI?
“The future of AI hinges on our capacity to distinguish fact from fiction in the online realm,” says AI and technology expert Alex Fink. “If we can use AI to create things like deepfakes, we can (and should) also use AI to stop misinformation. Knowing how to separate the signal from the noise is crucial for advancing AI ethically.”
According to Axios, “By some estimates, AI-generated content could soon account for 99% or more of all information on the internet, further straining already overwhelmed content moderation systems.” Being an “AI sleuth” is relatively easy thanks to some emerging smart SaaS tools. Such solutions will become crucial for those seeking to identify the use of AI in content creation, to handle false positives in AI content detection, to discern whether AI material has already been detected, and even to spot when AI “distilled” content has been reconfigured to read as more neutral in sentiment.
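For the technically curious, the basic workflow of such tools is simple to picture. The Python sketch below is purely illustrative: the endpoint URL, the API key placeholder and the response field are hypothetical stand-ins, not the actual API of Originality.ai or any other vendor, and the “inconclusive” band is one assumed way of guarding against false positives.

```python
# Sketch: screening text against a hypothetical AI-detection API.
# The URL, key and response field below are illustrative placeholders,
# not any real vendor's API.
import requests

API_URL = "https://api.example-detector.com/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential


def detect_ai(text: str) -> str:
    """Return a verdict, treating mid-range scores as inconclusive
    rather than auto-labelling them, to reduce false positives."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"content": text},
        timeout=30,
    )
    resp.raise_for_status()
    score = resp.json()["ai_probability"]  # assumed field: 0.0 (human) to 1.0 (AI)

    if score >= 0.85:
        return f"likely AI-generated ({score:.0%})"
    if score <= 0.15:
        return f"likely human-written ({score:.0%})"
    # Anything in between is flagged for human review instead of being labelled.
    return f"inconclusive ({score:.0%}) - route to a human reviewer"
```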
A study by Originality.ai found that when some of the most popular LLMs, such as ChatGPT, are used to rewrite or paraphrase another text, they make the content more neutral in sentiment and, in doing so, notably alter the nature and objective of the initial written work.
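One way to observe this flattening for yourself is to score a passage before and after paraphrasing. The sketch below uses the open-source VADER sentiment analyzer as a stand-in; Originality.ai has not disclosed its exact tooling, so this illustrates the idea rather than reproducing the study.

```python
# Sketch: measuring how paraphrasing flattens sentiment, using the
# open-source VADER analyzer (pip install vaderSentiment) as a stand-in
# for whatever tooling the study actually used.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

original = "The devastating floods destroyed hundreds of homes overnight."
paraphrase = "The floods affected a number of homes during the night."

# polarity_scores() returns a 'compound' score from -1 (negative) to +1 (positive).
orig_score = analyzer.polarity_scores(original)["compound"]
para_score = analyzer.polarity_scores(paraphrase)["compound"]

print(f"original:   {orig_score:+.2f}")
print(f"paraphrase: {para_score:+.2f}")

# A compound score drifting toward zero after rewriting is the
# "sentiment neutralisation" the study describes.
if abs(para_score) < abs(orig_score):
    print("Paraphrase is more neutral than the original.")
```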
“Employing LLMs to rewrite or paraphrase another text can certainly offer speed and ease in content production, but it comes with caveats,” says Originality.ai founder and CEO Jonathan Gillham. “For example, there might be a sound reason for coverage of a news event to have highly negative or positive sentiment. Dampening those qualities might prevent readers from perceiving how potentially troublesome or heartening an event might be. Outside of news content, publishers might desire to convey a particular kind of sentiment to evoke feelings in readers, and a neutral-scoring story might struggle to do so.”
Other research conducted by the company examined large publishing and media companies like Condé Nast and Red Ventures that dominate Google search results. “Our analysis shows that 46% of major publishers are using AI for content creation,” continues Jonathan. “On one particular globally popular sports magazine site, an article was identified by Futurism as being penned by a ‘fake’ author, which our algorithm substantiated.”
A tactic known as “parasite SEO” is being increasingly used for a content marketing strategy “where reputable domains host content – even wholly AI created – primarily for search engine ranking purposes”. “Of course,” adds Jonathan, “for readers of such content expecting a human was at the helm, this kind of gratuitous and unimpassioned information propagation, especially that with extreme reach, can be disheartening.”
Other reports indicate that more than three-quarters of consumers are concerned about AI-driven misinformation, and “content sentiment degradation is emerging as an insidious facet of that overarching threat”.
“Although it’s a stealthier and less considered offense,” warns Jonathan, “the restructuring of content in a way that mitigates, or outright eliminates, the intended emotional tonality – the humanity – of a text is exacerbating the Misinformation Age of AI.”