While some aspects of the episode remain confined to the realm of sci-fi, others have already started to unfold in reality. In 2019, the Zao app was introduced to the Chinese market. Zao—which means to make, build, or fabricate in Mandarin Chinese—enables users to digitally graft their faces onto the bodies of actors and actresses in movies, television shows, and music videos. This feature requires only a set of selfies or profile photos, and takes about ten seconds to generate a video.
Shortly after its launch, Zao went viral for its novelty, though Chinese superapp WeChat quickly banned users from sharing material generated with the face-swapping app. Citing privacy concerns, WeChat sought to nip the problem in the bud, bringing Zao's virality to a premature end. But that is just the start of a deepfake problem that looks set to affect the world.
While deepfakes are a global problem, the world is split on how to solve it.
China wants to solve it with regulations, enacting new laws in December 2022 to govern the use of deep synthesis technology. Issued by the Cyberspace Administration of China, the rules significantly restrict the use of AI-generated media, with consent, disclosure, and identity authentication among the key tenets. Intriguingly, one rule requires that synthetic media carry identifiers, such as watermarks. One might wonder how well that will hold up: if AI can be used to superimpose faces onto bodies, wouldn't it be equally capable of erasing watermarks from media?
Meanwhile, the West is seemingly singing a different tune from China. While countries like the US are equally cognizant of the dangers that deepfakes pose, the solutions proposed there so far tend to take on a more technological spin. For years, the US government has been collaborating with various research institutions to develop tools that can reliably identify and counter deepfakes.
One example is PhotoGuard, a preemptive defense that uses “perturbations” to disrupt the manipulative capabilities of AI models. Like computers, AI models see images as complex sets of data points describing pixel color and position. PhotoGuard immunizes images against manipulation by making minute changes to the way they are mathematically represented. These changes are invisible to the human eye, preserving the image's visual integrity, yet they prevent manipulation if the image is fed to an AI model.
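To give a sense of the scale involved, the toy sketch below adds a tiny perturbation to an image's pixel values. This is not PhotoGuard's actual algorithm, which optimizes the perturbation against a target model rather than using random noise; the function name and epsilon value here are illustrative assumptions. The point is simply that a change of a couple of intensity levels out of 255 is imperceptible to the eye, yet the data an AI model ingests is no longer the same.

```python
import numpy as np

def perturb(image, epsilon=2.0, seed=0):
    """Add an imperceptibly small pseudo-random perturbation to each pixel.

    Illustrative only: real immunization schemes compute the perturbation
    by optimizing against a specific model's representation of the image.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Keep values in the valid 0-255 intensity range.
    return np.clip(image.astype(float) + noise, 0.0, 255.0)

image = np.full((64, 64, 3), 128.0)   # a flat gray 64x64 RGB image
protected = perturb(image)

# The largest per-pixel change stays within +/- epsilon (invisible)...
print(np.abs(protected - image).max())
# ...but the arrays a model would see are no longer identical.
print(np.array_equal(protected, image))
```

Visually the two images are indistinguishable, but numerically every pixel has shifted, which is the property a model-targeted perturbation exploits.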