Labelling AI-generated content: Why content creators in Rwanda should embrace the move
Saturday, November 23, 2024
Digital content creators and distributors may be required to label Artificial Intelligence (AI)-generated content, if the proposed media policy is eventually approved and starts being implemented in Rwanda.

The proposed policy, prepared in 2023, presents guidelines on the country’s next steps in media development and management, including how to handle media in the age of AI and citizen journalism.

ALSO READ: Major AI tools to ease your content creation work

Labelling, a commonly proposed strategy for reducing the risks of generative AI, involves applying visible content warnings to alert users to the presence of AI-generated media online, whether on social media, news sites, or search engines.

Why label AI-generated content?

Innocent Muramira, a Kigali-based lawyer, pointed out that “law is made out of space and time,” and he believes this is the right time to introduce regulations governing online content. But such matters, he said, need to be handled by experts.

"To enforce such regulations, we will need well trained experts since there is a lot of emerging technology,” he noted.

According to Irina Tsukerman, a security lawyer, business analyst, and president of Scarab Rising, Inc., a media and security strategic advisory specializing in information warfare, reputational management, emerging threats, private intelligence gathering and analysis, and geopolitical analysis, AI-generated content is becoming increasingly realistic.

She pointed out that ChatGPT-generated text, in particular, is becoming nearly indistinguishable from human-written text in some cases, and that, for this reason, AI is being widely used for propaganda or to otherwise mislead and manipulate public opinion.

"While videos and images have not yet reached the exact level of realism, some companies are already producing extremely realistic content, which could be confusing for people if it is incorporated well into human-backed content or is shown very quickly,” she noted.

"Requiring labelling is important to underscore potential conflicts with intellectual property rights, and as a reminder to check sources and avoid being pulled in by "hallucinated" (fabricated) material,” she pointed out.

ALSO READ: Rwanda to integrate artificial intelligence in school curriculum

Labelling also makes the audience aware that the content is machine-generated and, as a result, may differ significantly from what a human would have produced, she noted.

Philbert Murwanashyaka, a business entrepreneur and tech enthusiast with experience in virtual reality and AI, said regulations can mitigate the risk of disinformation and maintain trust in the digital ecosystem. For him, easy differentiation between human-created and AI-generated content is important so that people approach each with a different mindset.

Murwanashyaka noted that if the policy is implemented, content creators will need to adopt labelling tools, such as watermarks or metadata, that do not overwhelm consumers with complex details.
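For illustration only, the sketch below shows one way such a metadata label could work in practice, using the Pillow imaging library for Python. The key names “ai-generated” and “generator” are hypothetical, chosen for this example rather than taken from the proposed policy or any standard; a production scheme would more likely follow an industry specification such as C2PA content credentials, and metadata alone is easy to strip, which is why it is often combined with visible watermarks.

```python
# Minimal sketch (assumed scheme, not part of Rwanda's proposed policy):
# embedding a plain-text "AI-generated" label in a PNG image's metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str, tool_name: str) -> None:
    """Copy a PNG image, adding text chunks that mark it as AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")  # hypothetical key name
    metadata.add_text("generator", tool_name)  # e.g. the model that produced it
    image.save(dst_path, pnginfo=metadata)     # dst_path should end in .png


def is_labelled_ai_generated(path: str) -> bool:
    """Check whether an image carries the hypothetical 'ai-generated' label."""
    image = Image.open(path)
    # PNG text chunks are exposed via the .text attribute on PngImageFile;
    # fall back to an empty dict for formats without text chunks.
    return getattr(image, "text", {}).get("ai-generated") == "true"
```

A consumer-facing app could then check this label when displaying an image and show a simple “AI-generated” badge, keeping the complex provenance details out of the viewer’s way.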

Noting that it is getting harder to differentiate AI-generated content from human-generated content, he said there should be measures to prevent AI-generated content from being used as evidence in places such as courts.

"People need to understand to ensure that the regulation is effective. They need to understand AI content and how to approach it because, in general, it is fake content. It can be seen as true but it is not true,” he noted.