Fight Deepfakes With Trust Tech, Rules: Experts

Technologies are emerging to combat deepfakes, but rules might be needed, panelists said at a Tuesday webinar hosted by the Convention of National Associations of Electrical Engineers of Europe (EUREL). Deepfake technology has enabled some beneficial uses, but it's increasingly difficult to distinguish between real and fake content and people, said Sebastian Hallensleben, chairman of German EUREL member VDE e.V.

One common argument is that AI fabrications aren't a problem because other AI systems can be used to detect them, but as deepfakes become more sophisticated, there will be more countermeasures, creating a "detection arms race," Hallensleben said. What's needed is a "game-changer" that shows what's real online and what isn't, he said. He's working on "authentic pseudonyms," identifiers guaranteed to belong to a given physical person and to be singular in a given context. This could be done through restricted identification along the lines of citizens' ID cards; a second route is self-sovereign identity (SSI). If widely used, authentic pseudonyms would avoid the "authoritarian approach" to deepfakes, Hallensleben said.

SSI is a new paradigm for creating digital identity, said Technical University of Berlin professor Axel Küpper. The ID holder (a person) becomes her own identity provider and can decide where to store her identity documents and which services to use. The underlying infrastructure is a decentralized, tamper-proof distributed ledger. The question is how to use the technology to mitigate the misuse of automated content creation, Küpper said.

Many perspectives besides technology must be considered for a cross-border identification infrastructure, including regulation, governance, interoperability and social factors, said Tanja Pavleska, a researcher at the Jožef Stefan Institute Laboratory for Open Systems and Networks in Slovenia. Trust applies in all those contexts, she said. Asked whether the proposed EU AI Act should classify deepfakes as high-risk technology, she said such fakes aren't created by just a single player or type of actor, so rules aimed at a single point might be difficult. All panelists agreed the EU general data protection regulation should be interpreted to cover voice and facial data.
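
The core property Hallensleben described for authentic pseudonyms, one stable identifier per person per context that is unlinkable across contexts, can be illustrated with a minimal sketch. It assumes a per-person secret issued through some trusted enrollment step (for example, an ID-card check); the names and the HMAC construction below are illustrative assumptions, not VDE's actual scheme.

import hashlib
import hmac

def authentic_pseudonym(person_secret: bytes, context: str) -> str:
    # The pseudonym is stable for one person within a context but looks
    # unrelated across contexts, since HMAC-SHA256 is a one-way mapping.
    return hmac.new(person_secret, context.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical secret bound to one verified physical person at enrollment.
secret = b"secret issued after an ID-card check"
print(authentic_pseudonym(secret, "news-comments.example"))   # same identifier every time here
print(authentic_pseudonym(secret, "social-network.example"))  # a different identifier there

A second person, holding a different secret, would get different pseudonyms everywhere, which is what makes each identifier "singular in a given context" without revealing the person behind it.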
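
Küpper's description of SSI, where the holder keeps her own credentials and a verifier checks them without calling a central identity provider, can be sketched as a signed-credential check. This is an illustrative outline only: the issuer, the did:example identifier and the claim are made up, and a real SSI stack would follow standards such as W3C Verifiable Credentials, with the issuer's public key anchored on a distributed ledger.

# Requires the "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An issuer (say, a population registry) signs a claim about the holder.
issuer_key = Ed25519PrivateKey.generate()
credential = b'{"holder": "did:example:alice", "claim": "is_a_real_person"}'
signature = issuer_key.sign(credential)

# The holder stores the credential wherever she chooses and presents it
# directly to a service. The verifier checks it against the issuer's public
# key; in an SSI setup that key would be looked up on a tamper-proof ledger
# rather than requested from a central identity provider.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, credential)
    print("credential accepted")
except InvalidSignature:
    print("credential rejected")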