Rwanda Forensic Institute has the capacity to detect deepfakes, according to an official. A deepfake is an image or recording that has been convincingly altered or manipulated to misrepresent someone as doing or saying something they did not actually do or say.
David Irasubiza, a quality assurance specialist at Rwanda Forensic Institute, said the institute uses technology that addresses the growing threat of deepfake content by providing a means to verify the authenticity of videos and pictures.
"We provide a reliable and scientifically backed method for determining whether a video, or image, is original or manipulated. We aim to alleviate the distress and uncertainty experienced by victims of deepfake attacks,” he said.
This technology, he said, "will enable individuals to confidently refute false claims and protect their reputation and privacy," and its findings could support victims seeking legal counsel.
The pledge to spot deepfakes comes after OpenAI, an artificial intelligence (AI) research organisation, unveiled in a blog post samples of a new voice-generation tool, Voice Engine, that can mimic human voices with startling accuracy.
The tool uses a 15-second sample of someone speaking to generate a convincing replica of their voice. Users can then provide a paragraph of text, and the tool will read it in the AI-generated voice; it can also reproduce the same voice in different languages.
OpenAI's blog post includes an audio clip of a human reading a passage about friendship, alongside AI-generated clips that sound like the same person reading the same passage in Spanish, Mandarin, German, French, and Japanese.
In each of the AI-generated samples, the tone and accent of the original speaker are maintained. This follows Sora, the text-to-video tool OpenAI unveiled in February, which can produce lifelike videos of up to 60 seconds based on text instructions, depicting scenes with various characters, specific movements, and detailed backgrounds.
OpenAI’s ChatGPT can also generate images from a text prompt.
These AI tools have a range of potential applications, including accessibility services. However, they also raise concerns, as they could fuel the creation of disinformation or make it easier to perpetrate scams.
This was seen in a trending deepfake video purporting to show Elizabeth Ann Warren, an American politician and former law professor who has served as the senior United States senator from Massachusetts since 2013, claiming on MSNBC, an American news-based television channel, that "allowing Republicans to vote in the 2024 presidential election could threaten the integrity of the election and the safety of the electorate."
Shocking and terrifying
These fake videos can be weaponised for blackmail, extortion, or harassment, putting individuals at risk of reputational harm and personal exploitation that can persist for a long time.
A victim of deepfakes in Rwanda, who preferred anonymity, said it was "shocking and terrifying" and deeply distressing to receive a message containing a fake nude video of herself, along with a demand for money.
"I was overcome with fear and panic with the thought of such a fake private and intimate video being shared without my consent, it was incredibly upsetting, I felt violated and vulnerable, not knowing what to do or who to turn to for help as they told me that if I told anyone they would publish it still,” she said.
She said the creators threatened to share the video with co-workers if she did not pay a huge amount of money.
Later, a friend suggested she take the risk of stopping the payments to see what would happen. The creators never published the video; by then, they had already collected around Rwf580,000 from her.
How to spot deepfake AI-generated content
Detecting deepfake AI content poses a significant challenge due to its highly convincing nature and sophisticated manipulation techniques.
However, according to Norton 360, an "all-in-one" device privacy and security product, there are some fairly simple things you can look for when trying to spot a deepfake.
An awkward-looking body or posture is one of the main giveaways: if a person's body shape doesn't look natural, or the positioning of the head and body is awkward or inconsistent, be suspicious.
This may be one of the easier inconsistencies to spot because deepfake technology usually focuses on facial features rather than the whole body.
Unnatural body movement is another sign: if someone looks distorted or off when they turn to the side or move their head, or their movements are jerky and disjointed from one frame to the next, you should suspect the video is fake.
Other signs include blurring or misalignment along the edges of images, for example where someone's face meets their neck, and inconsistent audio and noise, since deepfake creators usually spend more time on the video imagery than on the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciations, digital background noise, or even the absence of audio.
Stanford University has also developed AI tools that can detect the lip-synching techniques frequently used to put words in a victim's mouth that they never spoke.
Furthermore, the development of blockchain-based solutions and digital authentication techniques offers promise in verifying the authenticity of digital content and detecting instances of manipulation.
By creating immutable records of original content and tracking its provenance, blockchain technology can help establish trust and transparency in the digital landscape.
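As a rough illustration of the provenance idea, and not of any specific blockchain platform, the sketch below registers the SHA-256 fingerprint of an original file in a toy append-only ledger in which each entry commits to the previous one; the class and function names are hypothetical:

```python
import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """Compute the SHA-256 digest of a media file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

class ProvenanceLedger:
    """Toy append-only ledger: each entry includes the hash of the
    previous entry, which is what makes tampering with old records
    evident, as on a blockchain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def register(self, path: str, creator: str) -> dict:
        """Record the fingerprint of an original piece of content."""
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "content_hash": fingerprint(path),
            "creator": creator,
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def is_registered(self, path: str) -> bool:
        """A copy checks out only if its hash matches a ledger entry."""
        h = fingerprint(path)
        return any(e["content_hash"] == h for e in self.entries)
```

Because even a one-bit change to a file produces a completely different SHA-256 digest, a manipulated copy of a registered video or image would fail the check.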
To check hash discrepancies, there's a cryptographic algorithm that helps video creators show that their videos are authentic. The algorithm inserts hashes at certain places throughout a video; if those hashes change, you should suspect the video has been manipulated.
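The article doesn't name the algorithm, but the principle can be sketched as follows, assuming for illustration that the creator splits the video into fixed-size segments and publishes one hash per segment; the segment size and function names are assumptions, not part of any standard:

```python
import hashlib

# Assumed segment size for illustration; real schemes may align
# segments with keyframes or timestamps instead.
SEGMENT_SIZE = 1_000_000  # 1 MB

def segment_hashes(path: str) -> list[str]:
    """Hash the video segment by segment, so a change anywhere in the
    file flips the hash of the segment containing it."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(SEGMENT_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def looks_manipulated(path: str, published: list[str]) -> bool:
    """Compare a copy against the creator's published per-segment
    hashes; any mismatch suggests manipulation."""
    return segment_hashes(path) != published
```

Publishing per-segment hashes rather than a single whole-file hash also localises the tampering: a mismatch in one segment points to roughly where in the video the edit was made.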