Boss of OpenAI calls for US to regulate artificial intelligence

OpenAI CEO Sam Altman urged lawmakers to regulate artificial intelligence during a Senate panel hearing Tuesday, describing the technology's current boom as a potential "printing press moment" but one that required safeguards. 

"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said in his opening remarks before a Senate Judiciary subcommittee.

Altman's appearance comes after the viral success of ChatGPT, his company's chatbot tool, renewed an arms race over AI and sparked concerns from some lawmakers about the risks posed by the technology.

Sen. Richard Blumenthal kicked off Tuesday's hearing with a fake recording of his own voice, illustrating the potential risks of the technology. The recording, which featured remarks written by ChatGPT and audio of Blumenthal's voice produced using recordings of his actual floor speeches, argued that AI cannot be allowed to unfold in an unregulated environment.

Blumenthal explained that while ChatGPT produced an accurate reflection of the real lawmaker's views, it could just as easily have produced "an endorsement of Ukraine's surrendering or Vladimir Putin's leadership." That, he said, "would've been really frightening."

A growing list of tech companies has deployed new AI tools in recent months, with the potential to change how we work, shop and interact with each other. But these same tools have also drawn criticism from some of tech's biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.

In his remarks Tuesday, Altman said the potential for AI to be used to manipulate voters and target disinformation is among "my areas of greatest concern," especially because "we're going to face an election next year and these models are getting better."

One way the US government could regulate the industry is by creating a licensing regime for companies working on the most powerful AI systems, Altman said on Tuesday. This "combination of licensing and testing requirements," Altman said, could be applied to the "development and release of AI models above a threshold of capabilities."

Also testifying Tuesday were Christina Montgomery, IBM's vice president and chief privacy and trust officer, and Gary Marcus, a former New York University professor and a self-described critic of AI "hype."

Montgomery warned against creating a new era of "move fast and break things," the longtime mantra of Silicon Valley giants such as Facebook. "The era of AI cannot be another era of 'move fast and break things,'" Montgomery told lawmakers. Still, she said, "We don't have to slam the brakes on innovation either."

Both Altman and Montgomery said AI may eliminate some jobs but also create new ones.

"There will be an impact on jobs," Altman told Blumenthal. "We try to be very clear about that, and I think it'll require partnership between industry and government, but mostly action by government, to figure out how we want to mitigate that. But I'm very optimistic about how great the jobs of the future will be."

OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled 'Oversight of A.I.: Rules for Artificial Intelligence' on Capitol Hill in Washington. Photo credit: Reuters

As the CEO of OpenAI, Altman, perhaps more than any other single figure, has come to serve as the face of a new crop of AI products that can generate images and text in response to user prompts.

Altman's remarks come the day after he met with more than 60 House lawmakers over dinner. The bipartisan gathering, featuring roughly an even split of Republicans and Democrats, saw Altman demonstrating various uses of ChatGPT "to much amusement," according to a person in the room who described lawmakers as "riveted" by the event.

Most of those in attendance acknowledged that regulation of AI will be necessary, the person added.

California Democratic Rep. Ro Khanna, whose district includes Silicon Valley, said Altman stressed during the dinner that AI is a tool, not a "creature," and that AI "can help with tasks, not jobs."

"Altman's most helpful contribution was ramping down the hype," Khanna told CNN.

In a reflection of how AI has taken Congress by storm, even as the Judiciary subcommittee was questioning the OpenAI and IBM executives, the Senate Homeland Security and Governmental Affairs Committee was holding a separate, simultaneous hearing on the use of artificial intelligence in government.

Earlier this month, Altman was one of several tech CEOs to meet with Vice President Kamala Harris and, briefly, President Joe Biden as part of the White House's efforts to emphasize the importance of ethical and responsible AI development.

In interviews this year, Altman has presented himself as someone who is mindful of the risks posed by AI and even "a little bit scared" of the technology. He and his company have pledged to move forward responsibly.

Others want Altman and OpenAI to move more cautiously. Elon Musk, who helped found OpenAI before breaking from the group, joined dozens of tech leaders, professors and researchers in signing a letter calling for artificial intelligence labs like OpenAI to stop the training of the most powerful AI systems for at least six months, citing "profound risks to society and humanity."

Altman has said he agreed with parts of the letter. "I think moving with caution and an increasing rigor for safety issues is really important," Altman said at an event last month. "The letter I don't think was the optimal way to address it."

CNN