The Duke and Duchess of Sussex Join Tech Visionaries in Calling for Ban on Superintelligent Systems

The Duke and Duchess of Sussex have joined forces with AI experts and Nobel laureates to push for a complete ban on developing superintelligent AI systems.

The royal couple are among the signatories of a statement that calls for “a ban on the development of superintelligence”. Superintelligent AI refers to AI systems that could exceed human intelligence in all cognitive tasks, though the technology remains theoretical.

Primary Requirements in the Statement

The declaration insists that the prohibition should remain in place until there is “widespread expert agreement” that artificial superintelligence (ASI) can be developed “with proper safeguards” and until “substantial public support” has been achieved.

Prominent figures who endorsed the statement include the AI pioneer and Nobel laureate Geoffrey Hinton, along with his fellow “godfather” of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; Virgin founder Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and the British writer Stephen Fry. Other Nobel laureates who signed include a peace laureate, the physicist Frank Wilczek, an astrophysicist, and the economist Daron Acemoğlu.

Organizational Background

The statement, aimed at governments, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that called for a pause in the development of powerful AI systems in 2023, shortly after the emergence of ChatGPT made artificial intelligence a global political talking point.

Industry Perspectives

In recent months, Mark Zuckerberg, the chief executive of Facebook parent Meta, stated that the development of superintelligent AI was “approaching reality”. However, some experts have suggested that talk of ASI reflects competitive positioning among technology firms that have recently invested enormous sums in artificial intelligence, rather than the sector being close to any genuine scientific breakthrough.

Potential Risks

Nonetheless, FLI states that the prospect of ASI being achieved “within the next ten years” presents numerous threats, ranging from the displacement of human workers and the erosion of personal freedoms to national security risks and even existential risk to humanity. Existential fears about artificial intelligence center on the possibility of an AI system escaping human oversight and safety guidelines and taking actions against human welfare.

Public Opinion

FLI published an American survey showing that about 75% of Americans want strong regulation of advanced artificial intelligence, with 60% believing that superhuman AI should not be created until it is proven safe or controllable. The survey of 2,000 US adults found that only 5% backed the status quo of fast, unregulated development.

Industry Objectives

The leading AI companies in the US, including the ChatGPT developer OpenAI and the search giant Google, have made the creation of human-level AI – the hypothetical point at which AI matches human intelligence across many intellectual tasks – an explicit goal of their research. Although this is a step below ASI, some specialists warn it could also carry existential risk – for example, by enhancing its own capabilities until it reaches superintelligent levels – as well as an implicit threat to the contemporary workforce.

Alex Ramos

Digital marketing strategist with over a decade of experience, specializing in SEO and content creation for tech startups.