As Lawyers and Lawmakers Tackle AI, the 1990s Loom Large
AI innovation mirrors the early “Wild West” days of the internet – is regulation soon to come?
The 1990s seem like another world: boy band NSYNC was charting, flannel shirts were in and nascent internet businesses were flourishing under a light regulatory scheme.
Governments and citizenry alike viewed the future of this transformative technology almost entirely with promise. Then the dot-com bubble burst, after a period of intense growth that also saw online commerce in illicit products and bad ideas flourish. Now, with artificial intelligence ushering in another boom, governments and citizens alike are far more skeptical and eager to impose regulatory restrictions on the companies behind it.
“It was a forgiving regulatory environment,” recalled Larry Sonsini, a corporate lawyer in Palo Alto, California, who is often called the “godfather of Silicon Valley.”
“At the time, it didn’t feel that way, because the kind of technologies that we had prior to that time, beginning with the semiconductors in the 70s and software and computers and personal computers in the 80s, there wasn’t a lot of regulation there. The concern about unintended consequences wasn’t prevalent with those kinds of technologies.”
Sonsini, who cofounded Wilson Sonsini Goodrich & Rosati PC and advised Google and Netscape in the 1990s, said that, in hindsight, the light-touch approach – or in some cases the complete absence of regulation – was fundamental to the internet’s success.
“I think it was necessary, because, as you look at the growth of the internet and the services in the 90s and 2000s, it gave us a great economic acceleration,” he said.
Most experts agree that the light regulatory touch on internet businesses envisioned by the Clinton administration, and later largely modeled by governments around the world, was crucial to the creation of the online world we live in today. But they also say that approach is unlikely to be repeated for the AI generation, for reasons both political and economic.
“Nobody wants to be quoted as only an AI evangelist that turns a blind eye to potential problems, and a lot of that comes from having seen what happened in the internet, but we’re in a very different zone in terms of being able to get things done as well,” said Kenton J. King, another corporate lawyer in Palo Alto.
King, a partner at Skadden, Arps, Slate, Meagher & Flom LLP and Affiliates, advised companies including AltaVista and Yahoo during the late 1990s. Regulation of the internet “wasn’t as much of a thing, to be honest,” he said.
“The environment was so different. There was an environment where tech could do no wrong. Yes, there was a valuation bubble – valuations got ahead of where things were – but there was real optimism about what the internet could accomplish and what tech could accomplish,” he said.
In line with that optimism, the Clinton administration took steps in the 1990s to support the rapid growth of the emerging internet. In 1996, it backed a sweeping overhaul of telecommunications law, the Telecommunications Act of 1996. While not specifically aimed at the internet, the act deregulated telecommunications and relaxed media ownership rules, opening the market to more companies and driving the expansion of internet infrastructure. The same legislation included the Communications Decency Act, which contained one of the most influential provisions for the internet’s growth – Section 230. Still central to internet law today, Section 230 shields online platforms from liability for content posted by their users.
Section 230 was “foundational in many ways to the growth of the internet by taking that liability away from platform providers on the internet and remains to this day an important piece of the architecture governing it,” King said.
Contrast that approach with today’s government efforts to grapple with artificial intelligence, where concerns about mitigating unintended consequences loom large.
Aside from a handful of guiding principles issued by the White House and federal regulatory agencies – including the Blueprint for an AI Bill of Rights released in 2022 – efforts to govern the technology have been led by a patchwork of states. Sonsini said this state-led approach was very different from what occurred in the 1990s.
These efforts have primarily targeted specific concerns, such as potential bias in AI-driven hiring processes, the creation of AI deepfakes and the use of name, image and likeness. Comprehensive attempts at regulation have been less successful: California’s SB 1047, which would have made companies legally liable for harm caused by their AI models, was vetoed by Gov. Gavin Newsom in September.
Overseas, the European Union was the first to enact a comprehensive law governing AI. It categorizes AI systems by risk level, imposing stricter requirements on high-risk applications – such as those used in critical infrastructure or biometric identification – and holding their providers liable for harm.
Taken collectively, these efforts tend to emphasize the potential risks of artificial intelligence. That emphasis reflects lessons learned in the decades since the internet’s mass adoption.
“Nobody at the inception [of the internet], or really until fairly recently, really understood a lot of the kind of societal harms that we needed to potentially be more thoughtful about, more proactive in terms of protecting our, you know, our young ones or others,” said King’s colleague, Skadden partner Ken D. Kumayama.
Kumayama, whose practice focuses on artificial intelligence, privacy and intellectual property, added that these societal harms, combined with the dystopian images of artificial intelligence promulgated by science fiction, meant that many people had preconceived notions about AI.
The societal harms now being litigated were likely among the “unintended consequences” of permissive regulation, Sonsini said.
“The downside was that it gave bad actors a microphone for their views. It created a pathway for disinformation, and it also, you know, captured a lot of young people with the iPhone, with the way that they related to some of the technology and depended on it for their own relations and their own education, and I think those are the things that started to worry us. Those were hard to foresee at the time,” he said.
The realization of technology’s potential for harm is evident even in the way the industry talks about itself: King noted that no AI presentation is complete without a reference to balancing safety and benefits.
Both approaches to AI regulation – the United States’ state-by-state patchwork and the EU’s broad framework – have their advantages and disadvantages.
Kumayama gave privacy legislation as an example of the shortcomings of a jurisdiction-by-jurisdiction approach: “To date, there is no nationwide privacy law [in the U.S.], which makes it very difficult, frankly, for companies to comply with regulations, when you have [a] patchwork of potentially 50 states that are engaged in regulation.”
But the issue with broad-scale, pre-emptive regulation at a national or supranational level, such as the European Union’s, is that regulators can’t keep up with the technology and may stamp out innovation by being too prescriptive.
“Even Europeans will often say, you know, Europe has no innovation and lots of regulation, and the U.S. has no regulation, lots of innovation. And so, while that’s a pithy statement, there, frankly, there’s some truth to it,” King said.
However, the risk of a lack of regulation in the United States is that “Europe ends up filling that space and becoming a default,” he said.
“You had the unintended consequences of technology, but you also have the unintended consequences of regulation,” Sonsini said. He added that he thinks “Europe is being aggressive too fast.”
Eric Goldman, associate dean for research and professor at Santa Clara University School of Law, cautioned that the volume and extent of AI regulations, both passed and proposed, represent a regulatory “tsunami” with the potential to kill the generative AI industry in its nascent phase.
“There’s technologies that never emerged because they got overregulated earlier,” he said.
He gave the example of digital audio tapes, or DATs, a technology that faced significant pushback from the music recording industry upon their introduction in the 1980s.
“You’ve never heard of them because they were regulated so early. No one ever wanted to touch them, and so that technology just faded,” he said. “Every single one of these new major innovations, we have the same question: when do we regulate? How much do we know about the technology at the time of regulating, and what will the regulation do to the evolution of the technology?
“We don’t really know what that looks like, because we’re thinking about it with our very limited perspectives that will only be proven right or wrong with hindsight.” He cautioned that overregulation could also result in AI being captured by governments “and weaponized for government purposes, most obviously in something like censorship.”
Kumayama said that it appeared that U.S. leaders were at least cognizant of the need to balance innovation with public safety, citing Gov. Newsom’s decision to veto SB 1047.
“I think that California is trying to strike that balance. But it’s tough. They’re trying to be a lot more open to innovation than the EU,” he said.
Given the pivotal role Section 230 played in allowing companies to innovate, is there a role for a similar provision applicable to artificial intelligence companies?
“We need something like Section 230 that provides some liability breathing room for generative AI model makers,” Goldman said. “If they are liable for the contents that they publish, that is an untenable situation. And as a result, government interventions are necessary to provide that breathing space.”
He added that this may not be the case if courts determine AI products are covered by First Amendment protections, and he noted that this relatively novel legal argument is only in the early stages of being litigated, citing a federal judge’s recent decision blocking California’s AB 2839, which allows individuals to sue for damages over election deepfakes. Kohls v. Bonta, 2:24-cv-02527-JAM-CKD (E.D. Cal., filed Sept. 17, 2024).
“230 really opened the door for continued innovation and taking the risk because of the protections that 230 gave,” Sonsini said. “I think that there may be the same kind of need, I’m not sure yet, and 230 is a roadmap that we can learn from … it’s on the table, but I think it’s open.”
“This is an area that, like so many other areas of the law, it would be logical and it would be reasonable to have some sort of an accommodation,” Kumayama said.
However, he cautioned, “I don’t think anybody really is seriously saying Section 230 just ought to automatically apply to AI. That’s sort of a square peg and round hole.”
Given the present lack of clarity, Sonsini said he’s excited but sober about the prospects of AI.
“It’s a complicated question, but I’ll tell you, the energy is there. The investment drive is there. The innovation is there,” he said.
King and Kumayama both described themselves as “AI optimists.”
“I think that it has tremendous potential to change the world in a positive way,” King said.
“That level of potential impact, that does, I think, demand some level of regulation or significant level of regulation,” he added.
“I do worry about how, in our current political system, we’re able to achieve that in a sensible way.”
– Jack Needham, Daily Journal Associate Editor
The Los Angeles/San Francisco Daily Journal is a publication for lawyers practicing in California, featuring updates on the courts, regulatory changes, the State Bar and the legal community at large.