Microsoft, OpenAI, Google and others agree to combat election-related deepfakes

A coalition of 20 technology companies signed an agreement on Friday to help prevent AI deepfakes during the critical 2024 elections taking place in more than 40 countries. OpenAI, Google, Meta, Amazon, Adobe and X are among the companies joining the pact to prevent and combat AI-generated content that could influence voters. However, the agreement's vague language and lack of binding enforcement raise questions about whether it goes far enough.

The list of signatories to the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” includes companies that create and distribute AI models, as well as the social platforms on which deepfakes are most likely to appear. The signatories are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly Twitter).

The group describes the agreement as “a set of commitments to deploy technology to combat harmful AI-generated content intended to mislead voters.” The signatories agreed to the following eight commitments:

  • Developing and implementing technology to mitigate risks related to deceptive AI election content, including open-source tools where appropriate

  • Assessing models in scope of the accord to understand the risks they may present regarding deceptive AI election content

  • Seeking to detect the distribution of this content on their platforms

  • Seeking to appropriately address this content detected on their platforms

  • Fostering cross-industry resilience to deceptive AI election content

  • Providing transparency to the public regarding how the company addresses it

  • Continuing to engage with a diverse set of global civil society organizations and academics

  • Supporting efforts to foster public awareness, media literacy and all-of-society resilience

The agreement will apply to AI-generated audio, video and images. It targets content that “falsifies or deceptively alters the appearance, voice or actions of political candidates, election officials and other key stakeholders in a democratic election, or that provides false information to voters about when, where and how they can vote.”

The signatories say they will work together to create and share tools to detect and combat the online distribution of deepfakes. Additionally, they plan to run educational campaigns and “provide transparency” to users.

Sam Altman, CEO of OpenAI (FABRICE COFFRINI via Getty Images)

OpenAI, one of the signatories, already announced last month that it plans to combat election-related misinformation globally. Images generated with the company's DALL-E 3 tool will be encoded with a provenance classifier providing a digital watermark that clarifies their origin as AI-generated. The ChatGPT maker said it will also work with journalists, researchers and platforms to get feedback on the classifier. It also plans to prevent its chatbots from impersonating candidates.

“We are committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” wrote Anna Makanju, vice president of global affairs at OpenAI, in the group's joint press release. “We look forward to working with industry partners, civil society leaders and governments around the world to help protect elections from misleading use of AI.”

Notably absent from the list is Midjourney, the company behind the AI image generator of the same name, which currently produces some of the most convincing fake photos. However, the company said earlier this month that it is considering banning political generations altogether during election periods. Last year, Midjourney was used to create a viral fake image of Pope Francis unexpectedly strutting down the street in a puffy white jacket. One of Midjourney's closest competitors, Stability AI (maker of the open-source Stable Diffusion), did sign on. Engadget has reached out to Midjourney for comment on its absence, and we will update this article if we receive a response.

Apple is the only one of Silicon Valley's “Big Five” missing from the list. However, that may be because the iPhone maker has not yet launched any generative AI products and does not host a social media platform where deepfakes could spread. Regardless, we reached out to Apple PR for clarification but did not hear back by the time of publication.

Although the general principles agreed to by the 20 companies look like a promising start, it remains to be seen whether a loose set of agreements, without binding enforcement, will be enough to combat a nightmare scenario in which bad actors use generative AI to sway public opinion and elect aggressively anti-democratic candidates in the US and elsewhere.

“The language is not as strong as one might have expected,” Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, told The Associated Press on Friday. “I think we should give credit where credit is due and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

AI-generated deepfakes have already been used in the US presidential election. As early as April 2023, the Republican National Committee (RNC) aired an ad using AI-generated images of President Joe Biden and Vice President Kamala Harris. The campaign of Ron DeSantis, who has since dropped out of the GOP primary, followed with AI-generated images of his rival and likely nominee Donald Trump in June 2023. Both included easy-to-miss disclaimers that the images were AI-generated.

In January, New Hampshire voters were greeted with a robocall of an AI-generated imitation of President Biden's voice, urging them not to vote. (Anadolu via Getty Images)

In January, an AI-generated deepfake of President Biden's voice was used by two Texas-based companies to robocall voters in New Hampshire, urging them not to vote in the state's January 23 primary. The clip, generated using ElevenLabs' voice cloning tool, reached up to 25,000 NH voters, according to the state attorney general. ElevenLabs is among the signatories of the pact.

The Federal Communications Commission (FCC) acted quickly to prevent further abuse of voice-cloning technology in fake campaign calls. Earlier this month, it voted unanimously to ban AI-generated robocalls. The US Congress, seemingly in perpetual gridlock, has not passed any AI legislation. In December, the European Union (EU) agreed on the AI Act, a sweeping safety and development bill that could influence regulatory efforts in other countries.

“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” Microsoft vice chair and president Brad Smith wrote in a press release. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”
