GENEVA (AP) — OpenAI CEO Sam Altman was a star speaker Thursday at AI for Good, the U.N. telecommunications agency’s annual conference on how to tap the societal promise of artificial intelligence technology.
But Altman spent part of his virtual appearance fending off thorny questions about governance, an AI voice controversy and criticism from ousted board members.
Altman’s appearance to talk about AI’s benefits comes as his company has been battling a rising tide of concern about its business practices and how it handles AI safety.
He was among the tech leaders joining the Geneva gathering, a two-day event featuring speeches and talks on artificial intelligence applications in robotics, medicine, education, sustainable development and more.
The latest round of discontent at OpenAI coincided with the company’s new product showcase earlier in May that drew the ire of Hollywood star Scarlett Johansson, who said she was shocked to hear ChatGPT’s voice sounding “eerily similar” to her own despite having previously rejected Altman’s request that she lend her voice to the system.
Altman talked at length about AI-related themes in an interview with Nicholas Thompson, CEO of The Atlantic. But he sidestepped questions on a few sensitive topics, including the ChatGPT maker’s use of an actor’s voice that resembled Johansson’s.
“It's not her voice. It's not supposed to be. I'm sorry for the confusion. Clearly you think it is,” Altman said, noting that people are going to have different opinions about how much voices sound alike.
"Not sure what else to say,” Altman added. Thompson, whose magazine struck a product and content deal with OpenAI a day earlier, didn't follow up.
He later asked Altman about governance at OpenAI, including an idea for a governance board.
“We continue to talk about how to implement governance. I probably shouldn’t say too much more right now,” Altman said.
“Say a little bit more,” Thompson replied, drawing laughs from the audience.
“I will pass. I'm sorry,” Altman responded.
Oversight for big AI companies like OpenAI has been an increasingly heated topic. Soon after the Johansson controversy, departing researcher Jan Leike said the company was letting safety “take a backseat to shiny products” and cited disagreements with top leaders that reached a “breaking point.”
Leike’s departure came days after co-founder Ilya Sutskever quit. The two jointly led OpenAI’s “Superalignment” team, centered on the organization’s founding mission of safely developing better-than-human AI, known as artificial general intelligence, or AGI. That team has since been disbanded and replaced by a new safety committee.
Thompson asked about comments from Helen Toner, a Georgetown University researcher, who was among a group of OpenAI board members ousted last year following a chaotic power struggle with Altman. Toner, part of a group that held reservations about AI safety risks, criticized Altman in a podcast for withholding information, misrepresenting what was happening at OpenAI or “outright lying.”
In one example, she said when ChatGPT was released in November 2022, “the board was not informed in advance about that. We learned about ChatGPT on Twitter.”
“I disagree with her recollection of events,” Altman said. Toner “genuinely cares about a good AGI outcome, and I appreciate that about her, and I wish her well. I probably don’t want to get into a line-by-line refutation here.”
OpenAI has been riding a new wave of generative AI technology that burst into public view 18 months ago when it launched ChatGPT, an early standout among AI systems that churn out novel text, images and video based on a vast database of online writings, books and other media. Other corporate titans and AI startups have since launched their versions of the technology.
While the U.N.’s “AI for Good” efforts predate the latest generative AI craze, the technology’s rapid advancement and commercialization have attracted the world’s attention.
“Artificial intelligence is changing our world and our lives,” U.N. Secretary-General António Guterres said by video, noting its promise for things like education and health care in remote areas, increasing crop yields, and early warning systems for natural disasters.
Political leaders around the world are particularly concerned about AI tools that could supercharge the spread of online disinformation: With a few typed commands and requests, computer-generated texts and images can be spread on social media and across the internet — blurring the line between fake news and reality.
“Transforming its potential into reality requires AI that reduces bias, misinformation and security threats, instead of aggravating them,” Guterres said, insisting it must also involve helping developing countries “harness AI for themselves” and “connecting the unconnected” around the world.
——
Chan reported from London. AP Technology Writer Matt O’Brien in Providence, Rhode Island, contributed to this report.
——
The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.
——
This story has been corrected to show Thompson’s title is CEO, not editor, of The Atlantic.