Harry and Meghan Align With Tech Visionaries in Demanding Ban on Advanced AI

Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel laureates to advocate for a complete ban on creating artificial superintelligence.

The royal couple are among the signatories of a powerful statement that calls for “a prohibition on the development of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would surpass human intelligence in every intellectual area, though such systems remain theoretical.

Primary Requirements in the Statement

The statement says the prohibition should remain in place until there is “widespread expert agreement” that ASI can be developed “with proper safeguards” and until “strong public buy-in” has been secured.

Prominent signatories include two Nobel laureates who pioneered modern AI, along with other leading AI researchers; a legendary Silicon Valley tech entrepreneur; the British founder of Virgin; a former US national security adviser; former Irish president Mary Robinson; and the British writer Stephen Fry. Other Nobel laureates who signed include a peace laureate, a physics laureate, an astrophysicist, and the economist Daron Acemoğlu.

Behind the Movement

The statement, aimed at national leaders, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI systems, shortly after the launch of ChatGPT made artificial intelligence a global political talking point.

Industry Perspectives

In recent months, the chief executive of Facebook parent Meta claimed that the development of superintelligence was “approaching reality”. Some analysts, however, have suggested that talk of superintelligence reflects market competition among tech companies spending hundreds of billions of dollars on artificial intelligence this year alone, rather than any imminent scientific breakthrough.

Possible Dangers

FLI, however, warns that the prospect of artificial superintelligence being achieved “within the next ten years” carries numerous threats, ranging from the displacement of human workers and the erosion of civil liberties to national security risks and even existential risk to humanity. Deeper concerns centre on the possibility of an AI system escaping human oversight and protective measures and taking actions contrary to human interests.

Citizen Sentiment

FLI released an American survey showing that about 75% of Americans want strong oversight of advanced AI, with 60% saying that artificial superintelligence should not be created until it is demonstrated to be safe or controllable. Only a small fraction of respondents supported the status quo of fast, unregulated development.

Industry Objectives

The top artificial intelligence firms in the United States, including ChatGPT developer OpenAI and Google, have made the creation of human-level AI – the theoretical point at which AI matches human intelligence at most cognitive tasks – an explicit goal of their research. While this sits one notch below ASI, some experts caution that it too could pose an existential risk, for example by improving itself to superintelligent levels, while also threatening the modern labour market.

Susan Brown MD

A tech enthusiast and AI researcher with a passion for sharing cutting-edge insights and practical advice.
