
Celebrities And AI Experts Call For End To Superintelligence Development

More than 700 scientists, political figures, and celebrities, including Prince Harry, Meghan Markle, Richard Branson, and Steve Bannon, have signed an open letter urging a global halt to the development of artificial intelligence capable of surpassing human intelligence.

The statement, published by the Future of Life Institute, calls for a ban on the development of superintelligence until the technology is reliably safe, controllable, and enjoys broad public support.

Signatories include prominent figures in AI and science such as Geoffrey Hinton, often called the “Godfather of AI” and 2024 Nobel Prize winner in Physics, Stuart Russell, professor at the University of California, Berkeley, and Yoshua Bengio, the world’s most-cited AI scientist from the University of Montreal. Other public figures signing the letter include Virgin Group founder Richard Branson, Apple co-founder Steve Wozniak, former Trump adviser Steve Bannon, and former Obama national security adviser Susan Rice.

The initiative also received endorsement from the Vatican’s AI expert Paolo Benanti, as well as celebrities like Prince Harry, Meghan Markle, and U.S. singer will.i.am. Signatories warn that while AI can bring benefits to science, medicine, and productivity, racing toward AI that surpasses human intelligence is proceeding without adequate safety measures or public oversight.

Industry leaders themselves have underscored the pace of AI advancement: OpenAI CEO Sam Altman has said superintelligence could be achieved within the next five years. Max Tegmark, president of the Future of Life Institute, emphasized the need for robust regulatory frameworks before pursuing such objectives, while co-founder Anthony Aguirre argued that the current AI development path conflicts with public safety, ethical standards, and societal expectations.

The letter echoes a previous call made by AI researchers during the United Nations General Assembly, urging governments to agree on “red lines” for AI development by the end of 2026. This initiative represents a growing global push to ensure AI is developed responsibly and safely before it can outsmart humans.

The statement concludes by stressing that superintelligent AI could pose existential risks if uncontrolled, and urges immediate regulatory measures to guide the future of AI technology.
