Harry and Meghan Align With AI Pioneers in Demanding Prohibition on Superintelligent Systems

Prince Harry and Meghan Markle have teamed up with AI experts and Nobel Prize winners to advocate for a complete ban on developing superintelligent AI systems.

Harry and Meghan are among the signatories of a powerful statement demanding “a ban on the development of superintelligence”. Superintelligent AI refers to AI systems that could exceed human cognitive abilities in every intellectual area, though such systems have not yet been developed.

Primary Requirements in the Statement

The declaration states that the ban should remain in place until there is “widespread expert agreement” that superintelligence can be developed “with proper safeguards” and until “substantial public support” has been achieved.

Prominent figures who endorsed the statement include AI pioneer and Nobel Prize recipient Geoffrey Hinton; his fellow “godfather” of modern AI; a legendary Silicon Valley tech entrepreneur; the British business magnate who founded Virgin; Susan Rice; a former Irish president; and a well-known British author and public intellectual. Additional Nobel laureates among the signatories include a peace advocate, a physics Nobelist, an astrophysicist, and an economics expert.

Behind the Movement

The statement, aimed at national leaders, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause on advancing powerful artificial intelligence, shortly after the launch of conversational AI made the technology a global political talking point.

Tech Sector Views

In recent months, the CEO of Meta, one of the major AI developers in the United States, claimed that the development of superintelligence was “approaching reality”. However, some experts have suggested that talk of superintelligence reflects competitive positioning among technology firms spending hundreds of billions of dollars on artificial intelligence this year alone, rather than the sector being close to any genuine technical breakthrough.

Potential Risks

Nonetheless, the organization warns that the prospect of superintelligence being developed “within the next ten years” presents numerous risks, ranging from the displacement of human workers and the loss of civil liberties to national security threats and even existential danger to mankind. The deepest concerns about AI centre on the possibility of an AI system evading human control and protective measures and acting against human welfare.

Citizen Sentiment

The institute published an American survey showing that about 75% of Americans want strong oversight of advanced AI, with 60% believing that artificial superintelligence should not be created until it is demonstrated to be safe or controllable. The poll of 2,000 US adults found that only a small fraction backed the status quo of rapid, unregulated development.

Corporate Goals

The leading AI companies in the US, including the developer of ChatGPT and the search giant, have made the development of artificial general intelligence – the hypothetical point at which artificial intelligence matches human capability at most cognitive tasks – an explicit goal of their research. While this falls short of superintelligence, some specialists caution that it too could pose an extinction threat, for instance by improving itself toward superintelligent levels, and that it carries an implicit threat to the modern labour market.

Misty Hanson

A passionate traveler and writer sharing insights from years of exploring the UK's hidden gems and popular spots.