
DNA scientists once halted their own apocalyptic research. Will AI researchers do the same?

AI app icons, including one for ChatGPT, on a smartphone screen. (Olivier Morin / AFP via Getty Images)

In the summer of 1974, a group of international researchers published an urgent open letter asking their colleagues to suspend work on a potentially dangerous new technology. The letter was a first in the history of science — and now, half a century later, it has happened again.

The first letter, “Potential Hazards of Recombinant DNA Molecules,” called for a moratorium on certain experiments that transferred genes between different species, a technology fundamental to genetic engineering.

The letter this March, “Pause Giant AI Experiments,” came from leading artificial intelligence researchers and notables such as Elon Musk and Steve Wozniak. Just as in the recombinant DNA letter, the researchers called for a moratorium on certain AI projects, warning of a possible “AI extinction event.”


Some AI scientists had already called for cautious AI research back in 2017, but their concern drew little public attention until the arrival of generative AI, which reached a mass audience with the public release of ChatGPT. Suddenly, an AI tool could write stories, paint pictures, conduct conversations, even write songs — all previously unique human abilities. The March letter suggested that AI might someday turn hostile and possibly even become our evolutionary replacement.

Although 50 years apart, the debates that followed the DNA and AI letters have a key similarity: In both, a relatively specific concern raised by the researchers quickly became a public proxy for a whole range of political, social and even spiritual worries.

The recombinant DNA letter focused on the risk of accidentally creating novel fatal diseases. Opponents of genetic engineering broadened that concern into various disaster scenarios: a genocidal virus programmed to kill only one racial group, genetically engineered salmon so vigorous they could escape fish farms and destroy coastal ecosystems, fetal intelligence augmentation affordable only by the wealthy. There were even street protests against recombinant DNA experimentation in key research cities, including San Francisco and Cambridge, Mass. The mayor of Cambridge warned of bioengineered “monsters” and asked: “Is this the answer to Dr. Frankenstein’s dream?”


In the months since the “Pause Giant AI Experiments” letter, disaster scenarios have also proliferated: AI enables the ultimate totalitarian surveillance state, a crazed military AI application launches a nuclear war, super-intelligent AIs collaborate to undermine the planet’s infrastructure. And there are less apocalyptic forebodings as well: unstoppable AI-powered hackers, massive global AI misinformation campaigns, rampant unemployment as artificial intelligence takes our jobs.

The recombinant DNA letter led to a four-day meeting at the Asilomar Conference Grounds on the Monterey Peninsula, where 140 researchers gathered to draft safety guidelines for the new work. I covered that conference as a journalist, and the proceedings radiated history in the making: a who’s who of top molecular geneticists, including Nobel laureates as well as younger researchers who added 1960s idealism to the mix. The discussion in session after session was contentious; careers, work in progress and the freedom of scientific inquiry were all potentially at stake. But there was also the implicit fear that if researchers didn’t draft their own regulations, Congress would do it for them, in a far more heavy-handed fashion.

With only hours to spare on the last day, the conference voted to approve guidelines that would then be codified and enforced by the National Institutes of Health; versions of those rules still exist today and must be followed by any research organization that receives federal funding. The guidelines also indirectly influence the commercial biotech industry, which depends in large part on federally funded science for new ideas. The rules aren’t perfect, but they have worked well enough. In the 50 years since, we’ve had no genetic engineering disasters. (Even if the COVID-19 virus escaped from a laboratory, its genome did not show evidence of genetic engineering.)


The artificial intelligence challenge is a more complicated problem. Much of the new AI research is done in the private sector, by hundreds of companies ranging from tiny startups to multinational tech mammoths — none as easily regulated as academic institutions. And existing laws on cybercrime, privacy, racial bias and more already cover many of the fears around advanced AI; how many new laws are actually needed? Finally, unlike the genetic engineering guidelines, the AI rules will probably be drafted by politicians. In June the European Parliament passed its draft AI Act, a far-reaching proposal to regulate AI that could be ratified by the end of the year but that has already been criticized by researchers as prohibitively strict.

No proposed legislation so far addresses the most dramatic concern of the AI moratorium letter: human extinction. But the history of genetic engineering since the Asilomar Conference suggests we may have some time to consider our options before any potential AI apocalypse.

Genetic engineering has proven far more complicated than anyone expected 50 years ago. After the initial fears and optimism of the 1970s, each decade has confronted researchers with new puzzles. A genome can contain huge runs of repetitive, nearly identical DNA sequences, for reasons still not fully understood. Human diseases often involve hundreds of individual genes. Epigenetics research has revealed that external circumstances — diet, exercise, emotional stress — can significantly influence how genes function. And RNA, once thought to be simply a chemical messenger, turns out to play a much more powerful role in the genome.


That unfolding complexity may prove true for AI as well. Even the most humanlike poems or paintings or conversations produced by AI are generated by a purely statistical analysis of the vast database that is the internet. Producing human extinction would require much more from AI: specifically, a self-awareness able to ignore its creators’ wishes and act in its own interests instead. In short, consciousness. And, like the genome, consciousness will surely prove far more complicated the more we study it.

Both the genome and consciousness evolved over millions of years, and to assume that we can reverse-engineer either in a few decades is a tad presumptuous. Yet if such hubris leads to excess caution, that is a good thing. Before we actually have our hands on the full controls of either evolution or consciousness, we will have plenty of time to figure out how to proceed like responsible adults.

Michael Rogers is an author and futurist whose most recent book is “Email from the Future: Notes from 2084.” His fly-on-the-wall coverage of the recombinant DNA Asilomar conference, “The Pandora’s Box Congress,” was published in Rolling Stone in 1975.
