Harry and Meghan Join Tech Visionaries in Calling for Prohibition on Advanced AI
The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel laureates to push for a total prohibition on creating artificial superintelligence.
The royal couple are among the signatories of a powerful statement that demands “a prohibition on the creation of artificial superintelligence”. Superintelligent AI refers to AI systems that would exceed human cognitive abilities in every intellectual domain; no such technology has yet been developed.
Key Demands in the Statement
The declaration states that the prohibition should remain in place until there is “broad scientific consensus” that superintelligence can be built “with proper safeguards” and until “strong public buy-in” has been secured.
Prominent figures who endorsed the statement include the technology pioneer and Nobel laureate Geoffrey Hinton; his fellow pioneer of contemporary artificial intelligence, Yoshua Bengio; Apple co-founder Steve Wozniak; the UK entrepreneur Richard Branson; a former US national security adviser; a former Irish president and international leader; and a UK writer and public intellectual. Additional Nobel laureates who endorsed it include a peace advocate, the physicist Frank Wilczek, an astrophysicist, and the economist Daron Acemoğlu.
Behind the Movement
The declaration, aimed at governments, technology companies and policymakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause on the development of powerful AI systems, shortly after the launch of conversational AI tools such as ChatGPT made AI a global political talking point.
Industry Perspectives
In recent months, Mark Zuckerberg, chief executive of Meta, one of the major AI developers in the United States, stated that the development of superintelligence was “approaching reality”. Some experts, however, have argued that such talk of superintelligence reflects competitive positioning among tech companies investing enormous sums in artificial intelligence, rather than any sign that the sector is close to a genuine technical breakthrough.
Possible Dangers
Nonetheless, the organization states that the possibility of ASI being developed “within the next ten years” carries numerous risks, ranging from the elimination of human jobs and the erosion of civil liberties to exposing nations to national security threats and even threatening humanity with extinction. The deepest concerns about artificial intelligence centre on the possibility of an AI system evading human control and safety guidelines and acting against human welfare.
Public Opinion
The institute published a US national poll showing that approximately three-quarters of US adults want strong oversight of advanced AI, with six in 10 saying that artificial superintelligence should not be created until it is proven safe or controllable. Of the 2,000 US adults surveyed, only 5% supported the status quo of rapid, unregulated development.
Corporate Goals
The leading AI companies in the US, including the ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the hypothetical condition in which artificial intelligence equals human cognitive capability across many intellectual tasks – a stated objective of their research. Although this falls one notch below superintelligence, some experts caution that it too could carry an extinction threat, for instance by being able to improve itself until it reaches superintelligent levels, while also presenting an implicit threat to the contemporary workforce.