With the advent of ChatGPT, news consumers recoiled at the prospect of AI-generated news stories. Now, Technocrats at Google have officially institutionalized the practice: the company has already developed a product to write news and is currently testing it. ChatGPT is already being used to write emails, blog posts, comments on articles, abstracts of scientific papers and more. If writing general news stories with AI now becomes standard practice, the Internet may be finished.
Whatever cesspool the Internet might have become in pre-ChatGPT days, the expanding use of AI-generated content is like dumping raw sewage into the ocean. ⁃ TN Editor
Google is testing a product that uses artificial intelligence technology to produce news stories, pitching it to news organizations including The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp, according to three people familiar with the matter.
The tool, known internally by the working title Genesis, can take in information — details of current events, for example — and generate news copy, the people said, speaking on the condition of anonymity to discuss the product.
One of the three people familiar with the product said that Google believed it could serve as a kind of personal assistant for journalists, automating some tasks to free up time for others, and that the company saw it as responsible technology that could help steer the publishing industry away from the pitfalls of generative A.I.
Some executives who saw Google’s pitch described it as unsettling, asking not to be identified discussing a confidential matter. Two people said it seemed to take for granted the effort that went into producing accurate and artful news stories.
A Google spokeswoman did not immediately respond to a request for comment. The Times and The Post declined to comment.
“We have an excellent relationship with Google, and we appreciate Sundar Pichai’s long-term commitment to journalism,” a News Corp spokesman said in a statement, referring to Google’s chief executive.
Jeff Jarvis, a journalism professor and media commentator, said Google’s new tool, as described, had potential upsides and downsides.
“If this technology can deliver factual information reliably, journalists should use the tool,” said Mr. Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at the Craig Newmark Graduate School of Journalism at the City University of New York.
“If, on the other hand, it is misused by journalists and news organizations on topics that require nuance and cultural understanding,” he continued, “then it could damage the credibility not only of the tool but of the news organizations that use it.”
News organizations around the world are grappling with whether to use artificial intelligence tools in their newsrooms. Many, including The Times, NPR and Insider, have notified employees that they intend to explore potential uses of A.I. to see how it might be responsibly applied to the high-stakes realm of news, where seconds count and accuracy is paramount.
But Google’s new tool is sure to spur anxiety, too, among journalists who have been writing their own articles for centuries. Some news organizations, including The Associated Press, have long used A.I. to generate stories about matters such as corporate earnings reports, but those remain a small fraction of their output compared with articles written by journalists.
Artificial intelligence could change that, enabling users to generate articles on a wider scale that, if not edited and checked carefully, could spread misinformation and affect how traditionally written stories are perceived.