Over 1,000 Public Figures Sign Petition Calling for a Ban on Superintelligence
In late October 2025, a joint statement on artificial intelligence (AI) drew global attention. The non-profit Future of Life Institute launched an appeal urging a pause in the research and development of "superintelligence" until the scientific community reaches broad consensus on its "safe and controllable development." Within a single day the number of signatories exceeded 3,000, reaching 3,193 by noon on October 23, and the weight of this list has reshaped public perception of the debate surrounding AI.
The lineup of signatories spans fields and national borders: it includes pioneering scholars who laid the field's technical foundations, such as Geoffrey Hinton; business leaders like Steve Wozniak, co-founder of Apple, and Richard Branson, chairman of the Virgin Group; authorities in policy and academia, including economist Daron Acemoglu and former U.S. National Security Advisor Susan Rice; and public figures with global influence, such as Prince Harry and his wife Meghan, and Steve Bannon. More notably, leading Chinese scholars, including Yao Qizhi, academician of the Chinese Academy of Sciences, and Zhang Yaqin, chair professor of intelligent science at Tsinghua University, have also joined, elevating the appeal beyond geographical and ideological divides into a collective global reflection on the future of AI.
So what exactly is the "superintelligence" that has alarmed elites worldwide? It is not the AI tools we are familiar with today, but a form of artificial intelligence that would outperform humans in all cognitive tasks, from scientific research and strategic decision-making to creative production and social governance. The core of the controversy lies precisely in the unknown risks behind this "overwhelming" capability.
The statement points out that many top AI companies have made developing superintelligence a goal, and that this pursuit carries multiple hidden dangers: economically, humans may face large-scale unemployment and disenfranchisement as skills become obsolete; socially, individuals may lose freedom, civil liberties, dignity, and autonomy to AI-driven decisions; in national security terms, uncontrolled technology could trigger unpredictable geopolitical conflicts; and, most critically, some experts argue that without effective regulation, superintelligence poses a potential risk of "human extinction."
"There is currently no solid scientific evidence or practical method to ensure the safety of superintelligence," said Zeng Yi, president of the Beijing Academy of Artificial Intelligence Security and Governance. This statement captures the core logic of the appeal. When technological advancement outpaces humanity's ability to understand and control risks, a "pause in research and development" is not conservatism, but a rational choice responsible for humanity's future. After all, no one dares to casually touch a "switch" that could alter the course of civilization before having a proper "safety valve" in place.
This cross-border, cross-field initiative is, at its core, a collective awakening in the face of a technological revolution: we pursue the convenience and progress that AI brings, but we must also hold the line that humans remain in control of the future. The development of superintelligence should not be a "race" among companies, but a carefully safeguarded breakthrough requiring global collaboration. Only by first building consensus and sound rules can technology truly serve humanity, rather than becoming a Sword of Damocles hanging over our heads.