China calls for stronger security measures to deal with risks from AI
BEIJING — China’s ruling Communist Party has warned of potential risks from advances in artificial intelligence and called for stronger national security measures.
The statement issued after a meeting chaired by President Xi Jinping on Tuesday underscores the tension between the government’s determination to seize global leadership in cutting-edge technology and concerns about the possible social and political harms of such technologies.
It also followed a warning by scientists and tech industry leaders in the U.S., including high-level executives at Microsoft and Google, about the perils that artificial intelligence poses to humankind.
The meeting in Beijing discussed the need for “dedicated efforts to safeguard political security and improve the security governance of internet data and artificial intelligence,” the official Xinhua News Agency said.
“It was stressed at the meeting that the complexity and severity of national security problems faced by our country have increased dramatically. The national security front must build up strategic self-confidence, have enough confidence to secure victory, and be keenly aware of its own strengths and advantages,” Xinhua said.
“We must be prepared for worst-case and extreme scenarios,” it said.
Xi, who is China’s head of state, commander of the military and chair of the party’s National Security Commission, called at the meeting for “staying keenly aware of the complicated and challenging circumstances facing national security.”
China needs a “new pattern of development with a new security architecture,” Xinhua reported Xi as saying.
China already dedicates vast resources to suppressing any perceived political threats to the party’s dominance, with spending on the police and security personnel exceeding that devoted to the military.
Although the party relentlessly suppresses in-person protests and censors online criticism, citizens have continued to express dissatisfaction with government policies, most recently the draconian lockdowns imposed to combat the spread of COVID-19.
China has been cracking down on its tech sector in an effort to reassert party control, but like other countries, it is scrambling to find ways to regulate fast-developing AI technology.
The most recent party meeting reinforced the need to “assess the potential risks, take precautions, safeguard the people’s interests and national security, and ensure the safety, reliability and ability to control AI,” the official Beijing Youth Daily newspaper reported Tuesday.
Worries about artificial intelligence systems outsmarting humans and slipping out of control have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT.
Sam Altman, chief executive of ChatGPT-maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures in the U.S. who signed the statement posted Tuesday on the Center for AI Safety’s website.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.
More than 1,000 researchers and technologists, including Elon Musk, who is currently on a visit to China, had signed a much longer letter earlier this year calling for a six-month pause on AI development.
The missive said AI poses “profound risks to society and humanity,” and some involved in the debate have proposed a United Nations treaty to regulate the technology.
China warned as far back as 2018 of the need to regulate AI, but has nonetheless funded a vast expansion in the field as part of efforts to seize the initiative on cutting-edge technologies.
A lack of privacy protections and strict party control over the legal system have also resulted in near-blanket use of facial, voice and even walking-gait recognition technology to identify and detain those seen as threatening, particularly political dissenters and religious minorities, especially Muslims.
Members of the Uyghur and other mainly Muslim ethnic groups have been singled out for mass electronic monitoring, and more than 1 million people have been detained in prison-like political reeducation camps that China calls de-radicalization and job training centers.
AI’s risks are seen mainly in its potential to control autonomous weapons, financial tools and the computer systems that run power grids, health centers, transportation networks and other critical infrastructure.
China’s unbridled enthusiasm for new technology, its willingness to tinker with imported or stolen research, and its stifling of inquiries into major events such as the COVID-19 outbreak all heighten concerns over its use of AI.
“China’s blithe attitude toward technological risk, the government’s reckless ambition, and Beijing’s crisis mismanagement are all on a collision course with the escalating dangers of AI,” technology and national security scholars Bill Drexel and Hannah Kelley wrote in an article published this week in the journal Foreign Affairs.