Harry and Meghan Align With AI Pioneers in Demanding Ban on Advanced AI
Prince Harry and Meghan Markle have joined AI experts and Nobel laureates in calling for a ban on the creation of artificial superintelligence.
The royal couple are among the signatories of an influential declaration that calls for “a ban on the development of artificial superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would surpass human abilities across all cognitive tasks, though the technology remains theoretical.
Key Demands in the Statement
The statement insists that the ban should remain in place until there is “broad scientific consensus” that superintelligence can be built “safely and controllably” and until “strong public buy-in” has been secured.
Prominent figures who endorsed the statement include AI pioneer and Nobel laureate Geoffrey Hinton, along with his fellow “godfather” of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; British business magnate Richard Branson; a former US national security adviser; a former Irish president; and the UK writer Stephen Fry. Other Nobel laureates who signed include a peace advocate, the physicist Frank Wilczek, an astrophysicist, and the economist Daron Acemoğlu.
Organizational Background
The statement, aimed at national leaders, tech firms and policymakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in the development of powerful AI systems, shortly after the emergence of ChatGPT made artificial intelligence a global political talking point.
Industry Perspectives
In July, Meta's chief executive claimed that the development of superintelligent AI was “now in sight”. Nevertheless, some experts have suggested that talk of superintelligence reflects competitive positioning among tech companies investing enormous sums in artificial intelligence, rather than the industry being close to any such breakthrough.
Possible Dangers
FLI states that the possibility of ASI being developed “within the next ten years” poses threats ranging from the elimination of human jobs and the erosion of personal freedoms to national security risks and even human extinction. Existential fears about AI center on the possibility of an AI system evading human control and safety guardrails and acting against human welfare.
Citizen Sentiment
FLI published a US national poll showing that approximately three-quarters of US citizens want robust regulation of advanced artificial intelligence, with six in 10 saying that artificial superintelligence should not be created until it is proven safe or controllable. The survey of 2,000 US adults found that only a small fraction supported the status quo of rapid, unregulated development.
Industry Objectives
The leading US artificial intelligence firms, including the ChatGPT developer OpenAI and Google, have made building human-level AI – the theoretical point at which an AI system matches human performance across a wide range of cognitive tasks – an explicit goal of their work. Although a step below superintelligence, some experts warn that it too could carry an existential risk, for instance by improving itself until it achieves superintelligence, while also posing a more immediate threat to today's workforce.