A chance for a new platform technology to uphold authenticity, determine provenance and guarantee integrity of digital content through secure and interoperable environments
The glitchy video of Catherine, Princess of Wales, announcing a cancer diagnosis has netizens and technology enthusiasts pausing, zooming in, zooming out, and (alas) theorizing conspiracies. BBC Studios apparently filmed the announcement (and not the occasionally bungling royal press office). And yet why does the video have such low resolution? Were all the HD cameras at the BBC out for repair? And where were the experienced lighting techs? No self-respecting lighting tech would intentionally use “the bad outdoor light” on a sick princess, would they? The hair people, for their part, did an excellent job: Not a single strand moved in the outdoor breeze. Not a single one. And #WheresWilliam? Surely, there was enough room on that garden bench for a doting prince on leave from royal duties to hold the princess' hand during her difficult announcement. Wasn't there? Perhaps the hand with the disappearing and reappearing Diana Spencer sapphire engagement ring?
On the other hand, if an algorithm isn’t trained to mimic stray hairs moving in the breeze…if it is easier to manipulate video images with low contrast and few shadows…if deep faking two subjects at once is very difficult with current technology…
...is Catherine’s video even real?
In an era where technology continues to blur the lines between reality and fabrication, the rise of realistic albeit fake content poses a significant challenge to the veracity of digital content. Princess Catherine’s video is reigniting concerns about the authenticity of media in the digital age: We’ve already very likely seen deep fakes–highly realistic but entirely fabricated videos created using artificial intelligence (AI) and machine learning–and not known it. The sophistication of deep fake technology has advanced rapidly, making it increasingly difficult to distinguish between real and utterly fabricated content. A University of Washington study found that subjects had great difficulty distinguishing between real and AI-generated faces from the website This Person Does Not Exist (https://thispersondoesnotexist.com/), with most individuals performing no better than chance.
To create a deep fake, machine learning trains algorithms on large datasets of images and videos of real people, of which there is no shortage for a public figure like Princess Catherine. The algorithm then generates entirely new content that convincingly mimics the appearance and behavior of the base subjects. The system synthesizes facial expressions, movements, and speech patterns from source material and overlays what it has learned onto a target person. This process can involve altering facial expressions, generating entirely new audio and visual content, or even swapping faces onto a highly realistic digital "puppet."
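The "overlay" described above is often built from a shared encoder and per-identity decoders. The toy sketch below illustrates only that data flow: real systems use deep convolutional networks trained on thousands of frames, whereas here each "network" is just a random linear map, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM, LATENT_DIM = 64, 16  # toy sizes; real models work on image tensors

# One encoder shared by both identities captures pose and expression...
encoder = rng.standard_normal((LATENT_DIM, FACE_DIM))
# ...while each identity gets its own decoder that reconstructs appearance.
decoder_a = rng.standard_normal((FACE_DIM, LATENT_DIM))
decoder_b = rng.standard_normal((FACE_DIM, LATENT_DIM))

def encode(face: np.ndarray) -> np.ndarray:
    """Map a face vector into the shared latent (pose/expression) space."""
    return encoder @ face

def swap(frame_of_a: np.ndarray) -> np.ndarray:
    """The core face-swap trick: encode person A's frame, then decode it
    with person B's decoder, yielding B's appearance with A's expression."""
    return decoder_b @ encode(frame_of_a)

frame_of_a = rng.standard_normal(FACE_DIM)
fake_frame = swap(frame_of_a)
print(fake_frame.shape)  # (64,) -- a synthesized frame in face space
```

The point of the shared encoder is that expressions learned from one person's footage transfer directly to the other identity, which is why abundant public footage of a subject makes the attack easier.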
Fueled by advancements in AI and machine learning, deep fakes are sometimes designed to entertain, but often to deceive and manipulate. From political propaganda to celebrity scandals, the proliferation of deep fakes will further erode trust in media and other institutions. And, as the technology continues to evolve at a breakneck pace, deep fakes pose significant challenges to the authenticity of digital media and have the potential to be used for various deceptive purposes, including spreading misinformation, impersonating individuals, and manipulating public opinion.
In the case of Princess Catherine's video announcement, the stakes are particularly high. As a prominent public figure, any doubts about the authenticity of her statements will have far-reaching consequences, both personally and politically. The possibility of widespread dissemination of deep fake content could undermine public trust in the royal family and other established institutions, including the BBC and the U.K. government. Unchecked, misinformation is bound to spread.
Amidst the challenge of realistic fabricated content, we need robust mechanisms to establish veracity and to confidently and conveniently verify the authenticity of digital content. One very promising avenue lies in the concept of data spaces. A data space is a secure and interoperable environment that facilitates the sharing and analysis of diverse data sources while preserving privacy and security.
A data space is a network of separate, independent, secure nodes that can enable the verification of authenticity through multi-modal data analysis. Data spaces offer a potential solution to the deep fake dilemma by integrating various data sources–audio, video, metadata, IoT feeds, and other integrated sources–to provide a comprehensive view of the provenance and integrity of digital content. Through advanced techniques such as blockchain technology and cryptographic hashing, data spaces can create immutable records that attest to the originality and integrity of media assets.
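The immutable records mentioned above can be sketched as a hash chain: each provenance event commits to the hash of the previous one, so altering any earlier record invalidates everything after it. The record fields and flow below are illustrative assumptions, not a real data-space API.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_record(chain: list, event: dict) -> dict:
    """Append a provenance event; each record commits to the previous one."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash, "ts": time.time()}
    digest = sha256_hex(json.dumps(body, sort_keys=True).encode())
    record = {**body, "record_hash": digest}
    chain.append(record)
    return record

def verify_chain(chain: list) -> bool:
    """Recompute every hash link; True only if nothing was altered."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("event", "prev_hash", "ts")}
        digest = sha256_hex(json.dumps(body, sort_keys=True).encode())
        if rec["prev_hash"] != prev_hash or rec["record_hash"] != digest:
            return False
        prev_hash = rec["record_hash"]
    return True

chain: list = []
append_record(chain, {"actor": "camera-1234", "action": "capture",
                      "asset": sha256_hex(b"raw footage bytes")})
append_record(chain, {"actor": "editor", "action": "edit",
                      "asset": sha256_hex(b"edited footage bytes")})
print(verify_chain(chain))  # True

chain[0]["event"]["action"] = "forged"  # any tampering breaks the chain
print(verify_chain(chain))  # False
```

A blockchain adds distributed consensus on top of this same linking idea, so no single node can quietly rewrite the history.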
In the case of an important public announcement like Princess Catherine's video, data spaces could play a crucial role in validating its authenticity. The creation of such video content already generates a mountain of disparate data. For example, the hypothetical videographer Jane Smith, who holds the necessary security clearance ABC, enters the secure BBC offices with her keycard, checks out camera number 1234, and grabs the keys for van XYZ–which carries a GPS tracker–on her way to the job. The van logs the drive to Windsor Castle with Jane and the camera inside. Once filming wraps, Jane sends raw, timestamped footage to the BBC editor Bob Jones from a known IP address. And so on for every activity involved, from the hair and makeup staff to the royal PR team to the princess herself, many of whom very likely carry IoT devices–Apple Watches, smartphones, AirTags–pinging and confirming their location and identity.

A data space offers the opportunity to analyze the data created during this endeavor in order to authenticate the resulting video: It analyzes the video's metadata, such as timestamps, geolocation data, and device identifiers, and then verifies whether the video was captured under legitimate circumstances by triangulating other relevant data. And it does all of this without compromising private information–for example, using Jane's Apple Watch geolocation data without revealing Jane Smith's name. Additionally, by cross-referencing the video with other trusted sources, like the official royal press office or trusted independent news outlets, data spaces could further corroborate the content's veracity or alert those organizations to false content.
In practice, implementing a data space to verify digital content could look like this: Imagine that Princess Catherine's video announcement is uploaded to a data space platform specifically designed to authenticate digital content. The platform utilizes advanced technologies such as blockchain and cryptographic hashing to create immutable records–from Jane Smith’s clocking into work that morning to Bob Jones’ uploading the final version for release–that attest to the originality and integrity of media assets.
The process might work like this:
1. Metadata Analysis: The data space platform analyzes the metadata embedded within the video file, including timestamps, geolocation data, and device identifiers. This information provides details about when and where the video was recorded and what equipment was used.
2. Chain of Custody Verification: The platform traces the chain of custody for the video, documenting each step of its creation and dissemination. For example, it tracks the videographer who filmed the video, the equipment used, the location where it was filmed, and the individuals involved in editing and distributing the video.
3. Privacy-Preserving Verification: Importantly, the platform ensures the privacy and security of individuals involved in the creation and distribution of the video. It anonymizes personal information while still providing a digital trail of verified activities, protecting the privacy rights of all parties involved.
4. Cross-Referencing with Trusted Sources: The platform cross-references the video with other trusted sources, such as official royal communications, independent news outlets, and social media posts from verified accounts. This helps corroborate the authenticity of the video by confirming its alignment with other credible sources.
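The four steps above can be condensed into one hypothetical verification pass. The field names, trusted-source list, and pass/fail rules are assumptions for illustration; a production data space would pull attestations from cryptographically signed feeds rather than plain dictionaries.

```python
import hashlib

TRUSTED_SOURCES = {"royal-press-office", "bbc-newsroom"}  # assumed registry

def verify_release(video_bytes: bytes, metadata: dict,
                   custody_chain: list, attestations: list) -> dict:
    fingerprint = hashlib.sha256(video_bytes).hexdigest()

    # 1. Metadata analysis: required provenance fields must be present.
    metadata_ok = all(k in metadata
                      for k in ("timestamp", "geolocation", "device_id"))

    # 2. Chain of custody: every step must reference this exact asset.
    custody_ok = bool(custody_chain) and all(
        step["asset"] == fingerprint for step in custody_chain)

    # 3. Privacy preservation: steps carry pseudonymous actors, never names.
    privacy_ok = all("name" not in step for step in custody_chain)

    # 4. Cross-referencing: at least one trusted source attests to this hash.
    corroborated = any(a["source"] in TRUSTED_SOURCES
                       and a["asset"] == fingerprint for a in attestations)

    return {"fingerprint": fingerprint,
            "authentic": metadata_ok and custody_ok
                         and privacy_ok and corroborated}

video = b"final cut bytes"
fp = hashlib.sha256(video).hexdigest()
result = verify_release(
    video,
    metadata={"timestamp": "2024-03-22T14:00Z",
              "geolocation": "windsor", "device_id": "camera-1234"},
    custody_chain=[{"actor": "a1b2", "action": "capture", "asset": fp},
                   {"actor": "c3d4", "action": "edit", "asset": fp}],
    attestations=[{"source": "royal-press-office", "asset": fp}],
)
print(result["authentic"])  # True
```

Swapping in even one tampered byte of video changes the fingerprint, so the custody chain and attestations no longer match and the verdict flips to inauthentic.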
By leveraging data spaces in this way, stakeholders can confidently verify the authenticity of digital content like Princess Catherine's video announcement. This not only helps combat the spread of misinformation and deep fakes but also reinforces trust in digital media and public institutions.
The emergence of deep fakes represents a profound challenge to the integrity of digital media and the information we consume. As the mushrooming conspiracy theories around Princess Catherine's video announcement demonstrate, the spread of misinformation fueled by deep fake technology can have far-reaching consequences for modern societies. By harnessing the power of data spaces, we may have a chance to uphold authenticity in the new digital age and safeguard against the proliferation of deceptive content. However, achieving this goal will require collaboration and innovation across sectors to develop and implement robust mechanisms for verifying the authenticity of digital content.