OpenAI Chief Concerned About AI Used To Compromise Elections

WASHINGTON, May 16 (Reuters) – The CEO of OpenAI, the startup behind ChatGPT, told a Senate panel on Tuesday that the use of artificial intelligence to interfere with election integrity is a “significant area of concern”, adding that it needs regulation.

“I am nervous about it,” CEO Sam Altman said of elections and AI, adding that rules and guidelines are needed.

For months, companies large and small have raced to bring increasingly versatile AI to market, throwing endless data and billions of dollars at the challenge. Some critics fear the technology will exacerbate societal harms, among them prejudice and misinformation, while others warn AI could end humanity itself.

“There’s no way to put this genie in the bottle. Globally, this is exploding,” said Senator Cory Booker, one of many lawmakers with questions about how best to regulate AI.

Senator Mazie Hirono noted the danger of misinformation as the 2024 election nears. “In the election context, for example, I saw a picture of former President Trump being arrested by NYPD and that went viral,” she said, pressing Altman on whether he would consider the faked image harmful.

Altman responded that creators should make clear when an image is generated rather than factual.

Speaking before Congress for the first time, Altman suggested that, in general, the U.S. should consider licensing and testing requirements for development of AI models.

Altman, asked to opine on which AI should be subject to licensing, said a model that can persuade or manipulate a person’s beliefs would be an example of a “great threshold.”

He also said companies should have the right to say they do not want their data used for AI training, which is one idea being discussed on Capitol Hill. Altman said, however, that material on the public web would be fair game.

Altman also said he “wouldn’t say never” to the idea of advertising but preferred a subscription-based model.

The White House has convened top technology CEOs including Altman to address AI. U.S. lawmakers likewise are seeking action to further the technology’s benefits and national security while limiting its misuse. Consensus is far from certain.

An OpenAI staffer recently proposed the creation of a U.S. licensing agency for AI, which could be called the Office for AI Safety and Infrastructure Security, or OASIS, Reuters has reported.

OpenAI is backed by Microsoft Corp (MSFT.O). Altman is also calling for global cooperation on AI and incentives for safety compliance.

Christina Montgomery, International Business Machines Corp (IBM.N) chief privacy and trust officer, urged Congress to focus regulation on areas with the potential to do the greatest societal harm.

Reporting by Diane Bartz in Washington and Jeffrey Dastin in Palo Alto, California; Editing by Matthew Lewis and Edwina Gibbs
