"If it walks like a duck and quacks like a duck, it is a duck.” It used to be that if we saw something with our very eyes and heard something with our very ears, that was reality. The thing is, that isn’t the case anymore.
With AI (Artificial Intelligence) tools that can generate realistic images, videos, and voices, the tools that human beings have used for millennia to interpret the world are fast becoming defunct. Just last week, I saw a tweet that sent a shiver down my spine.
The tweet, by a media company called Dexerto, showed just how capable AI has become at creating hyper-realistic videos from scratch.
In the frankly frightening tweet, AI is asked to create a video of Hollywood actor, Will Smith, eating a plate of pasta.
In the first iteration, the AI-generated video was shaky, mismatched, and frankly weird; no one would think that was the “real” Will Smith. However, the same prompt, ‘Will Smith eating pasta,’ fed to the video generator today produces footage that looks almost lifelike.
Right now, I’m not in full panic mode because I can still tell whether an image is AI-generated, but bearing in mind the speed at which AI is progressing, it wouldn’t shock me if, in less than two years, computers are able to generate images that humans can’t distinguish as “real” or “fake.” Think I’m overestimating AI’s capability?
Well, today AI can generate a voice so accurate that if someone wanted a deepfake of me saying something like, “the color of grass is red,” they could produce something so realistic I bet even my own spouse couldn’t tell it was fake.
I’m bringing this up because local lawyer Jean-Paul Ibambe is petitioning the Supreme Court to strike down Article 39 of the law on prevention and punishment of cybercrimes. That particular article states that “Any person who, knowingly and through a computer or a computer system, publishes rumors that may incite fear, insurrection or violence amongst the population, or that may make a person lose their credibility, commits an offence.”
His overall reasoning is that it is illogical to criminalize this particular misuse of information technology when defamation was decriminalized by the Supreme Court in 2019.
Setting aside the fact that I think his central thesis—that spreading rumors online and defamation are one and the same, and therefore both need to be decriminalized—is not legally sound, I worry that if the Supreme Court agrees with him, we will be in a world of trouble, especially given the ever-advancing capabilities of artificial intelligence.
If we cannot tell what is real or fake, how then can we make the right decisions? We saw just how dangerous misinformation was during the Covid pandemic; remember when a Rwandan exile in France spread rumors that President Kagame had passed away and that the person people were seeing was a clone? That rumor spread like wildfire online.
Now imagine if, in addition to a rumor, there were lifelike images as well. I remember that around the height of that particular rumor, President Kagame did an interview with the Rwanda Broadcasting Agency. That interview quieted the rumor because everyone, friend and foe, could see that the president was alive and well.
I worry that in less than 36 months, AI will be able to generate a fake presidential interview that is indistinguishable from a real one. So, the question I’d like to ask our lawmakers is this: what guardrails are you putting in place for us? Western legislators are already drafting such protections for their citizens. So, how about us?
We have a small window to put in place laws and regulations that protect us from the dangers of AI-generated images and sounds while Rwandans can still tell what is real and what isn’t. I hope we don’t wait until something bad happens as a result of AI misuse (and abuse) before our MPs act. Just a reminder to them: the clock is ticking.
The author is a socio-political commentator