Tech giants sign voluntary pledge to fight election-related deepfakes

Tech companies are pledging to fight election-related deepfakes as policymakers ramp up pressure.

Today at the Munich Security Conference, vendors including Microsoft, Meta, Google, Amazon, Adobe and IBM signed an accord signaling their intention to adopt a common framework for responding to AI-generated deepfakes intended to mislead voters. Thirteen other companies joined in signing the accord, including AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs and Stability AI; social media platforms X (formerly Twitter), TikTok and Snap; chipmaker Arm; and security firms McAfee and Trend Micro.

The undersigned said they’ll use methods to detect and label misleading political deepfakes when they’re created and distributed on their platforms, sharing best practices with one another and providing “swift and proportionate responses” when deepfakes start to spread. The companies added that they’ll pay special attention to context in responding to deepfakes, aiming to “[safeguard] educational, documentary, artistic, satirical and political expression” while maintaining transparency with users about their policies on deceptive election content.

The accord is effectively toothless and, some critics may say, amounts to little more than virtue signaling: its measures are entirely voluntary. But the fanfare shows the tech sector's wariness of landing in regulators' crosshairs over elections, in a year when 49% of the world's population will head to the polls in national elections.

“There’s no way the tech sector can protect elections by itself from this new type of electoral abuse,” Brad Smith, vice chair and president of Microsoft, said in a press release. “As we look to the future, it seems to those of us who work at Microsoft that we’ll also need new forms of multistakeholder action … It’s abundantly clear that the protection of elections [will require] that we all work together.”

No federal law in the U.S. bans deepfakes, election-related or otherwise. But 10 states around the country have enacted statutes criminalizing them, with Minnesota’s being the first to target deepfakes used in political campaigning.

Elsewhere, federal agencies have taken what enforcement action they can to combat the spread of deepfakes.

This week, the FTC announced that it’s seeking to modify an existing rule that bans the impersonation of businesses or government agencies to cover all consumers, including politicians. And the FCC moved to make AI-voiced robocalls illegal by reinterpreting a rule that prohibits artificial and prerecorded voice message spam.

In the European Union, the bloc’s AI Act would require all AI-generated content to be clearly labeled as such. The EU’s also using its Digital Services Act to force the tech industry to curb deepfakes in various forms.

Meanwhile, deepfakes continue to proliferate. According to data from Clarity, a deepfake detection firm, the number of deepfakes created has increased 900% year over year.

Last month, AI robocalls mimicking U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election. And in September, just days before Slovakia’s elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election.

In a recent poll from YouGov, 85% of Americans said they were very concerned or somewhat concerned about the spread of misleading video and audio deepfakes. A separate survey from The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults think AI tools will increase the spread of false and misleading information during the 2024 U.S. election cycle.


