Criminal investigators in Rwanda have said they are bracing for tough years ahead in terms of cybercrime as technologies like Artificial Intelligence (AI) continue to evolve.
The message was shared during the conclusion of a three-day investigators’ retreat on Thursday, June 29 in Nyamata, Bugesera District, where top officials from the Rwanda Investigation Bureau (RIB) met to reflect, evaluate, and renew their commitment towards realising the institutional vision of becoming a professional investigative institution.
"The times are changing; we are entering into a world where technology changes everything, be it how crimes are done or how evidence can be distorted. The Artificial Intelligence era is here and it requires us to be ready,” said RIB’s Secretary General Jeannot Ruhunga in a media interview.
He noted that advanced technologies present not only threats but also opportunities for investigators, since they can be used to ease their work.
"We need to move fast. The one who moves faster – between us and the criminals – will have the advantage,” he noted.
According to Ruhunga, there are ongoing efforts to build the capacity of the investigators, giving them skills to investigate cybercrimes.
Justice Minister Emmanuel Ugirashebuja, who spoke at the event, highlighted the need to equip RIB officials with skills to combat emerging crimes, especially in the era of the fourth industrial revolution driven by AI.
Despite its many advantages, AI presents potential harms and disruptions, according to tech researchers. For instance, a 2020 study by University College London (UCL)’s Dawes Centre for Future Crime identified 20 ways AI could be used to facilitate crime over the following 15 years.
These were ranked in order of concern – based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out, and how difficult they would be to stop.
The authors of the study ranked fake audio or video content as the most worrying use of AI in terms of its potential applications for crime.
They noted that fake content would be difficult to detect and stop and that it could have a variety of aims – from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call.
Such content, they said, may lead to widespread distrust of audio and visual evidence, which would in itself be a societal harm.
In a previous interview, Collins Okoh, an intern at the Centre for Intellectual Property and Information Technology Law, told The New Times how deepfake technology can negatively affect judicial processes.
A deepfake is a type of AI-generated content used to create fake but convincing image, audio, and video hoaxes.
"We have a situation where machine learning is growing. Artificial intelligence and deep algorithms can alter a lot of things. That leads to the discussion of deep fakes which are alterations made to either video or pictures to change them in such a way that you can only use deep-fake detectors to actually know that a particular image or video has been altered,” he said.