The term “deepfakes” has now entered the realm of popular discourse as yet another technology-driven threat to society’s mutual understanding of just what facts are. In a talk sponsored by World Innovation Lab (WiL), Intertrust’s Yutaka Nagao, Vice-President of Global Technology Initiatives and General Manager, Japan, presented one potential solution to deepfakes: using immutable ledger and other software security technologies to establish a trusted chain of custody to help the viewer determine whether a piece of media can be trusted or not.
It’s Not a New Phenomenon
As Nagao pointed out, altered images are nothing new. The first known altered image was created by Hippolyte Bayard, who superimposed an image of himself on a corpse in 1840. Deepfakes have taken the concept into the 21st century by using deep learning technologies to create falsified videos that are difficult to distinguish from real ones. Nagao said that initially (around 2017), deepfakes could be identified because of issues like the lips on superimposed faces not moving in sync with the voice audio track. In 2020, that is no longer the case, since AI-powered deepfake video technology has improved to the point where the falsified footage is truly lifelike. Even more concerning is the fact that anyone with a personal computer can create these videos using easily available tools such as Deepfakes web β, a cloud service that can be rented out for $2 an hour.
Nagao did stress one often-overlooked point, which is that there are arguably legitimate uses for deepfake videos. One example he showed was an anti-malaria public service message in which the actor delivered the message in several languages even though he obviously didn’t speak them. Still, given the potential damage deepfakes can cause by spreading disinformation, calling into doubt media used as evidence in courtrooms, and even enabling corporate fraud through deepfake audio, there is a need for a way to identify them.
Potential Solutions: AI & Trusted Chains of Custody
One solution to deepfakes that is often touted is using AI techniques to find deepfake videos. Given the massive number of videos uploaded to distribution platforms like YouTube, as well as the fact that even Facebook had difficulty dealing with a deepfake video of Mark Zuckerberg, Nagao doubts that this is an effective approach. The solution he presented instead focuses on securing the “data supply chain.”
A simplified supply chain of a typical video distributed on the Internet is as follows: someone takes a video on a device, the software on the device packages it, and it is uploaded to a video distribution service. The video is then repackaged for delivery by the distribution service and delivered to the viewer. Nagao said that someone could create a deepfake by modifying the video software on the device or by attacking the video distribution service software to introduce altered videos. These sorts of attacks could be thwarted by using hardware security and software application shielding technologies such as whiteCryption products on the device side. On the cloud service side, distribution services can use data rights management platforms such as Modulus to securely distribute video.
An additional approach is to create a trusted chain of custody for media. The trusted chain of custody is not a new idea; for example, it’s already used by supermarkets in Japan that place QR codes on produce, then use the QR codes to track the produce as it travels through the supermarkets’ supply chains. For a piece of video, at each point the video passes through on its way from the originating device to the viewer, digitally signed hashes are created: digests of the video and its metadata, which record who handled the content, when, and what was created or modified. These hash digests are then cached in immutable ledgers such as blockchains once the digital signature is successfully verified by the trusted third parties that maintain the ledgers. The software on a viewer’s device can then be set up to query the ledgers associated with the video and its metadata, determine whether there is any break in the chain of trust, and report to the viewer how likely it is that the video can be trusted.
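The scheme above can be sketched in code. This is a minimal illustration, not Intertrust’s actual implementation: it assumes HMAC as a stand-in for real asymmetric digital signatures, and a plain Python list as a stand-in for an immutable ledger; all function and field names are hypothetical.

```python
# Hypothetical chain-of-custody sketch. HMAC stands in for asymmetric
# signatures; LEDGER stands in for a trusted immutable ledger.
import hashlib
import hmac
import json

LEDGER = []  # stand-in for a ledger maintained by a trusted third party


def custody_record(video_bytes, metadata, key, prev_digest=""):
    """Create a signed digest covering the video, its metadata
    (who/when/what), and the previous link in the chain."""
    digest = hashlib.sha256(
        video_bytes
        + json.dumps(metadata, sort_keys=True).encode()
        + prev_digest.encode()
    ).hexdigest()
    signature = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": signature, "metadata": metadata}


def append_if_valid(record, key):
    """The ledger maintainer verifies the signature before caching
    the digest; invalid records are rejected."""
    expected = hmac.new(key, record["digest"].encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, record["signature"]):
        LEDGER.append(record)
        return True
    return False


def verify_chain(video_bytes, metadata_list, key):
    """Viewer-side check: recompute every digest and confirm each link
    matches the ledger; any break means the video cannot be trusted."""
    prev = ""
    for record, metadata in zip(LEDGER, metadata_list):
        expected = hashlib.sha256(
            video_bytes
            + json.dumps(metadata, sort_keys=True).encode()
            + prev.encode()
        ).hexdigest()
        if expected != record["digest"]:
            return False
        prev = expected
    return True
```

In this sketch each custody point (the capturing device, the distribution service, and so on) appends a record that chains back to the previous digest, so tampering with the video at any stage breaks every subsequent link.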
How Intertrust Fits In
To show that this was more than just an idea, Nagao showed a demonstration created by Intertrust of an original video and the same video that had been modified with a mark. Each video segment was authenticated by querying the ledgers associated with the video and analyzing the replies to determine whether the video was the original or had been modified by unauthorized parties. Nagao also said that a prototype of the technology had been created using an IoT sensor that produced image data and associated metadata needed to populate one part of the trusted chain of custody.
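The segment-by-segment authentication in the demonstration can be illustrated roughly as follows. This is a hedged sketch under simplifying assumptions, not the demo’s actual code: each segment’s SHA-256 digest is assumed to have been registered in a ledger at capture time, and a plain list stands in for that ledger.

```python
# Hypothetical per-segment authentication sketch. A list of digests
# stands in for ledger entries registered when the video was created.
import hashlib


def register_segments(segments):
    """Capture side: record each segment's digest in the ledger."""
    return [hashlib.sha256(seg).hexdigest() for seg in segments]


def authenticate(segments, ledger_digests):
    """Viewer side: mark each segment True (original) or False (modified)
    by recomputing its digest and comparing it with the ledger entry."""
    results = []
    for i, seg in enumerate(segments):
        ok = (
            i < len(ledger_digests)
            and hashlib.sha256(seg).hexdigest() == ledger_digests[i]
        )
        results.append(ok)
    return results
```

A video modified with a mark, as in the demonstration, would fail the check only on the altered segments, letting the viewer see exactly where the content diverges from the original.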
Nagao noted that there are still a number of issues that need to be resolved before this technology reaches the market, such as who will actually run the trusted immutable ledgers and the business models for the technology. Still, the approach does seem to be one worth exploring to help maintain trust in our society.