FTC probes ChatGPT over possible consumer harms
The U.S. Federal Trade Commission is demanding documents from OpenAI as part of a probe into whether the company’s conversational AI tool ChatGPT harms consumers, according to a report first published by the Washington Post.
The investigation reportedly will examine whether ChatGPT has violated consumer protection laws by inadequately safeguarding users’ data.
As ChatGPT has exploded in popularity in the eight months since its release — and kicked off competition among Silicon Valley’s tech companies to develop competing AI chatbots — concerns have been rising over the technology’s potential to go awry. ChatGPT is trained on reams of text from the web, which helps it generate human-like responses to queries, but it already has a track record of issuing falsehoods.
The FTC probe into the Microsoft Corp.-backed startup marks the first official inquiry into a generative AI tool. FTC Chair Lina Khan, who testified before Congress on Thursday, has been a vocal critic of the popular AI chatbot, warning that regulators must “be vigilant early” in the field of artificial intelligence.
The FTC declined to comment on the nonpublic investigation. Microsoft also declined to comment.
In response to a request for comment, OpenAI referred to a tweet OpenAI Chief Executive Sam Altman sent Thursday afternoon. “It is very disappointing to see the FTC’s request start with a leak and does not help build trust,” Altman wrote. “That said, it’s super important to us that [our] technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”
The probe comes after congressional hearings in May, during which Altman testified and called for more regulation and independent audits of artificial intelligence. Altman has floated the idea that the government form a separate agency to oversee AI regulation.
“I think if this technology goes wrong, it can go quite wrong ... we want to be vocal about that,” Altman said. “We want to work with the government to prevent that from happening.”
In March, the Center for Artificial Intelligence and Digital Policy, a prominent tech ethics group, filed a complaint with the FTC. The group requested an investigation into the fast-developing technology and called for a pause in the training of AI models for six months to “ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”
“What we need them to do is enjoin OpenAI to prevent further releases of GPT until adequate safeguards are available,” said Marc Rotenberg, the center’s leader and a longtime privacy advocate.
Federal regulation of artificial intelligence has lagged behind the technology’s development. Khan has said the FTC, which enforces both antitrust and consumer protection laws, is looking at myriad issues, including how these tools can harm consumers and whether the leading tech companies behind generative AI are using data to discriminate against rivals.
In a May op-ed for the New York Times, Khan wrote that there was little doubt that AI’s potential will be “highly disruptive” but cited Facebook’s and Google’s growth in the mid-2000s as cautionary tales. She called those tech companies’ free services innovative, though they “came at a steep cost” as companies tracked and sold users’ personal data.
“What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security,” Khan wrote. She cited collusion, monopolization, mergers and price discrimination as areas where she already sees potential dangers in the application of generative AI. Fair competition is also a major concern, as a handful of businesses control the cloud services, data and computing power needed to develop AI tools.
In its demand for documents from OpenAI, the FTC asked the company to submit details on all of the complaints it had received of ChatGPT making “false, misleading, disparaging or harmful” statements about people, according to the Post. The FTC is looking into whether the company engaged in unfair or deceptive practices that caused “reputational harm” to consumers, according to the Post.
Silicon Valley’s tech leaders have issued their own calls for AI regulation. Google, the developer of rival chatbot Bard, advocated a “multi-layered, multi-stakeholder approach to AI governance” rather than what it called a “Department of AI” in a comment filed with the National Telecommunications and Information Administration.
Microsoft, like Altman, has called for a new, centralized government agency to oversee AI regulation.
The Senate Judiciary Committee held its first hearing Wednesday on AI and copyright issues, with music and tech executives testifying on topics such as fair use and intellectual property protection.
In the congressional hearing Thursday, Khan faced a barrage of attacks from Republican lawmakers who labeled her a “bully” for her aggressive antitrust stance, questioned her ethics and declared her leadership “a disaster.”
Bloomberg contributed to this report.