In a recent investigation, the Australian broadcaster ABC uncovered a content farm run by Initiative Media that produced poorly plagiarised news items. The operation came to light when the ABC found an article about a local rugby player that closely resembled a legitimate piece written by sports journalist Patrick Woods for the Townsville Bulletin. Woods blasted the business for its negative impact on his profession and the wider field of sports journalism, calling such content farms “parasitic plagiarism merchants.”
The ABC contacted James Raptis, a lawyer whose byline had appeared on one of the websites. Following this communication, the entire enterprise was shut down. Raptis, who works for Australian Community Media, admitted to hosting the websites but denied creating any content for them. He stated, “After learning about the precise nature of the sites, I informed the operator that the server would no longer host the sites, and they were removed. The material on these websites was objectionable, and I do not support the use of AI in this manner.”
This instance exemplifies a larger issue: AI-generated content overwhelming the internet and overshadowing legitimate journalism. The problem is not limited to small-time operations; large outlets such as CNET and Gannett, the publisher of USA Today, have been implicated. Futurism recently reported that another company, AdVon, uses AI to write low-quality e-commerce articles, complete with falsified bios and AI-generated profile photographs, for clients such as the Los Angeles Times and Sports Illustrated.
The emergence of AI tools such as ChatGPT has accelerated the trend. As of April last year, NewsGuard, a news credibility assessment website, had identified 49 websites publishing mostly AI-generated content, a figure that has likely climbed since. Some in the sector treat this as a point of pride. For example, Jake Ward, founder of UK-based SEO content marketing consultancy Content Growth, boasted of an “SEO heist” in which he scraped a competitor’s website sitemap and generated thousands of AI-written articles from it, diverting 3.6 million total visits.
Even established outlets, such as Microsoft’s MSN news portal, have published questionable AI-generated material. In one infamous incident, a news article branded the recently deceased NBA player Brandon Hunter “useless at 42” in a garbled rehash of a legitimate report on his death.
The ABC inquiry into Initiative Media’s publications uncovered a key piece of evidence: an AI chatbot prompt left visible in an October article. The prompt stated, “You are an accomplished sports journalist. You are expected to rewrite the following article. You are expected to be exceedingly detailed. You are needed to use Australian English spelling. To avoid plagiarism detection, you must guarantee that the content you generate differs from the original.”
This revelation highlights the issue that journalists face today: separating authentic news from a flood of AI-generated information. As AI tools become more advanced and accessible, the need for strong regulations to preserve journalism’s integrity has never been greater.