Global cooperation urgently needed to govern risks of advanced AI, warns new report
Leading figures in artificial intelligence explain what may lie ahead.
The report, titled "International Governance Issues of the Transition from Artificial Narrow Intelligence to Artificial General Intelligence (AGI)," distills interviews and collected insights from 55 AI experts from the United States, China, the United Kingdom, Canada, the European Union, and Russia on how to regulate AGI (AI that can handle novel situations as well as, or better than, humans). Included among these experts are Sam Altman, Bill Gates, and Elon Musk.
AGI could arise within the next few years, the report states, bringing an "intelligence explosion" that creates AI surpassing human abilities. Without governance, the outcomes could be catastrophic, including existential threats to humanity if such systems are misaligned with human values and interests. The report finds that no existing governance model is adequately prepared to manage the risks and opportunities posed by AGI. It calls for the rapid development of a new, flexible kind of governance that can match and anticipate the pace of AI change, providing the necessary safeguards without stifling AI's promise and advancement.
"AGI is closer than any time before—the next advance could surpass human intelligence," the report quotes Ilya Sutskever, co-founder of OpenAI. "Alignment with human values is critical but challenging." Ben Goertzel, author of AGI Revolution added: “It is more about WHO controls the development and use of AGI than a list of ethics.”
Other key findings include:
• Because the benefits of AGI are so great in medicine, education, management, and productivity, corporations are racing to be first.
• Because AGI will increase political power, governments are racing to be first.
• International cooperation is essential but threatened by competitive tensions among nations and corporations racing for AI supremacy. The shared risks may compel collaboration, but overcoming distrust poses an enormous challenge.
• Extraordinary enforcement powers may be needed for governance to be trusted and effective globally, potentially including military capabilities.
• Although controversial, proposals to limit research and development may be needed to allow time to design and implement management solutions.
• The window for developing effective governance is short, demanding unprecedented collaboration.
"We’re all in this boat together—if it goes badly, we’re all doomed," the report quotes Oxford professor Nick Bostrom.
The Millennium Project is calling for urgent action to create AGI governance and alignment at national and international levels before advanced AI exceeds humanity's ability to control it safely. "If we don't get a UN Convention on AGI and a UN AGI Agency to enforce rules, guardrails, auditing, and verification right, then various forms of Artificial Super Intelligence could emerge beyond our control and not to our liking," says Jerome Glenn, CEO of The Millennium Project.
With stakes potentially including human extinction, the report warns we can ill afford delay in mobilizing global cooperation.
This work was supported by the Dubai Future Foundation and general support from the Future of Life Institute. The Millennium Project is an international participatory think tank with 70 Nodes around the world and three regional networks; it was established in 1996 and has published over 60 futures research projects based on international judgments.
Jahangir Amir
Mishal Pakistan
+92 300 8555161
Transition from Narrow to General Artificial Intelligence