Should AI Chatbots Write News Articles? Google's Former Safety Boss Sounds Off

[Hero image: artificial hand and book]
In a world fascinated by the capabilities of AI, a former Google employee is warning about the dangers of using AI chatbots to write news articles. Arjun Narayan, a former Trust and Safety Lead at ByteDance and Google, says AI-generated articles are still prone to "hallucinations," in which the model fabricates information that is untrue or describes things that do not exist, making such articles a liability.

It is no secret that some news outlets are already using AI to write some of their content. CNET has been found publishing AI-written stories and then having to go back and correct factual errors in them. There are also websites, known as content farms, that rely entirely on AI-generated articles. As it becomes harder for even mainstream news outlets to turn a profit these days, the prospect of replacing more expensive human writers is becoming increasingly attractive to some.

[Image: transparent robotic hand]

In an interview with Gizmodo, Narayan was asked what he saw as the biggest unforeseen challenges posed by generative AI from a trust and safety perspective. He replied that there are a couple of risks, one being ensuring "AI systems are trained correctly and trained with the right ground truth." Narayan also remarked that it is much more difficult to work backward to understand why an AI system makes the decisions it does.

Asked what dangers or challenges he sees in recent efforts by news organizations to publish AI-generated content, Narayan pointed out that it can be difficult to tell which stories are written entirely by AI and which are not. He added, "That distinction is fading."

Narayan feels it is important to have some principles, such as letting readers know when an article has been generated using AI. A second principle he touched on was having a competent editorial team that can prevent "hallucinations" from making it into print or online. Editors would need to check for factual errors and for things such as political slant, just as they would with a human writer. In short, publications cannot simply take AI at its word.

[Image: laptop running ChatGPT]

AI-written content is not only an issue for those reading news articles, however. Australian universities have had to change how they run exams and assessments because students were using AI software to write essays; their solution was to return to pen-and-paper exams. New York public schools have also taken precautions against students using AI software in class, banning software like OpenAI's ChatGPT across all devices in the school system.

Just as teachers struggle to tell whether a student has used something like ChatGPT to write an essay, Narayan says it is becoming extremely difficult to discern whether a news outlet has had AI software write an article in full. He poses some ethical questions as well, such as "Who has that copyright, who owns that IP?"

While Narayan does not see anything wrong with using AI to write articles, he does believe there needs to be transparency. He remarked, "It is important for us to indicate either in a byline or in a disclosure that content was either partially or fully generated by AI. As long as it meets your quality standards or editorial standards, why not?"

To be fully transparent, this article was written entirely by a human.
Tags: Google, Chat, AI, OpenAI, ethics