
AI companies make fresh safety promise at Seoul summit, nations agree to align work on risks

A screen shows an announcement of the AI Seoul Summit in Seoul, South Korea, Tuesday, May 21, 2024. World leaders are expected to adopt a new agreement on artificial intelligence when they gather virtually Tuesday to discuss AI’s potential risks but also ways to promote its benefits and innovation. (AP Photo/Ahn Young-joon)

SEOUL, South Korea (AP) — Leading artificial intelligence companies made a fresh pledge at a mini-summit Tuesday to develop AI safely, while world leaders agreed to build a network of publicly backed safety institutes to advance research and testing of the technology.

Google, Meta and OpenAI were among the companies that made voluntary safety commitments at the AI Seoul Summit, including pulling the plug on their cutting-edge systems if they can’t rein in the most extreme risks.

The two-day meeting is a follow-up to November’s AI Safety Summit at Bletchley Park in the United Kingdom, and comes amid a flurry of efforts by governments and global bodies to design guardrails for the technology amid fears about the potential risk it poses both to everyday life and to humanity.

Leaders from 10 countries and the European Union will “forge a common understanding of AI safety and align their work on AI research,” the British government, which co-hosted the event, said in a statement. The network of safety institutes will include those already set up by the U.K., U.S., Japan and Singapore since the Bletchley meeting, it said.

U.N. Secretary-General Antonio Guterres told the opening session that seven months after the Bletchley meeting, “We are seeing life-changing technological advances and life-threatening new risks — from disinformation to mass surveillance to the prospect of lethal autonomous weapons.”

The U.N. chief said in a video address that there needs to be universal guardrails and regular dialogue on AI. “We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few people — or worse, by algorithms beyond human understanding,” he said.

The 16 AI companies that signed up for the safety commitments also include Amazon, Microsoft, Samsung, IBM, xAI, France’s Mistral AI, China’s Zhipu.ai, and G42 of the United Arab Emirates. They vowed to ensure the safety of their most advanced AI models with promises of accountable governance and public transparency.

It's not the first time that AI companies have made lofty-sounding but non-binding safety commitments. Amazon, Google, Meta and Microsoft were among a group that signed up last year to voluntary safeguards brokered by the White House to ensure their products are safe before releasing them.

The Seoul meeting comes as some of those companies roll out the latest versions of their AI models.

The safety pledge includes publishing frameworks setting out how the companies will measure the risks of their models. In extreme cases where risks are severe and “intolerable,” AI companies will have to hit the kill switch and stop developing or deploying their models and systems if they can't mitigate the risks.

Since the U.K. meeting last year, the AI industry has “increasingly focused on the most pressing concerns, including mis- and disinformation, data security, bias and keeping humans in the loop,” said Aidan Gomez, CEO of Cohere, one of the AI companies that signed the pact. “It is essential that we continue to consider all possible risks, while prioritizing our efforts on those most likely to create problems if not properly addressed.”

Governments around the world have been scrambling to formulate regulations for AI even as the technology makes rapid advances and is poised to transform many aspects of daily life, from education and the workplace to copyrights and privacy. There are concerns that advances in AI could eliminate jobs, spread disinformation or be used to create new bioweapons.

This week's meeting is just one of a slew of efforts on AI governance. The U.N. General Assembly has approved its first resolution on the safe use of AI systems, while the U.S. and China recently held their first high-level talks on AI and the European Union's world-first AI Act is set to take effect later this year.

__

Chan contributed to this report from London. Associated Press writer Edith M. Lederer contributed from the United Nations.
