Rwanda is actively contributing to shaping and operationalising responsible Artificial Intelligence (AI) principles and practices in relevant international and regional platforms, according to the AI Readiness and Maturity Framework for Rwanda 2023 by the Ministry of ICT and Innovation (MINICT).

Although AI generates many opportunities and has great potential to promote development, many of its applications carry significant risks. The National AI Policy therefore also includes ethical considerations, aiming to capture the opportunities for economic development while mitigating the risks.

Victor Muvunyi, Senior Technologist in Emerging Technologies at MINICT, offered insight into the nature of AI, noting that its decision-making can be influenced by societal constructs. He illustrated this with an example, saying, “In the system of loans, AI may leverage data to favour individuals with stable employment, inadvertently marginalising those with unstable employment.”

He highlighted how such AI-driven assessments can introduce biases, leading to instances where individuals within the system may unjustly be denied loans. Muvunyi also pointed out that AI can generate highly convincing images, enabling impersonation and the fabrication of actions falsely attributed to individuals who are innocent of any wrongdoing.

He emphasised, “To employ AI responsibly, we rely on ethical guidelines of AI which include beneficence, nonmaleficence, autonomy, justice, and explainability, among others. These principles serve as a compass, guiding our actions and enabling us to meticulously audit the model.”

To ensure responsible use of AI, he added, “Users must understand that these AI tools rely on algorithms and the data provided by companies or governments, often containing sensitive information.
Maintaining privacy is paramount. People should exercise caution when integrating AI into their work processes.”

He also stressed the need for vigilance in using AI tools, noting that users often accept their outputs unquestioningly, viewing them as absolute truths. He gave the example of a doctor relying solely on AI services for patient treatment, which can be problematic, and urged users to avoid blind reliance and always cross-check the information provided by AI tools before making critical decisions.

Theoneste Murangira, an assistant lecturer in the Department of Computer Science at the University of Rwanda, stated that AI, like any technology, can be a force for good or ill. Highlighting its profound impact on various facets of life, including privacy, he underscored the imperative of conscientious deployment to ensure positive outcomes and protect individual rights.

“In the hands of malicious people, AI becomes a weapon, capable of inflicting harm through avenues like cybersecurity breaches and the proliferation of deepfakes. These technologies disrupt not only privacy and dignity but also perpetuate discrimination. Vigilance in AI’s deployment is paramount to safeguarding our collective well-being,” he added.

Echoing Muvunyi, Murangira emphasised AI’s tendency to depict certain groups while neglecting others. This selective representation underscores the importance of diversity and inclusivity in AI development to ensure equitable outcomes for all. He advocated for responsible AI usage, emphasising the critical role of ethical adherence and the combating of biases, and stressed the shared responsibility of data users in mitigating risks, underscoring the need for collective efforts to minimise potential harms and foster a more equitable digital landscape.
Audace Niyonkuru is CEO of Digital Umuganda, a Rwanda-based AI and open data company committed to enhancing access to information and services in local African languages, which prioritises inclusivity in its approach. He emphasised that their first step as an AI company is to curate datasets, ensuring adequate representation of both women and men as well as individuals with different accents.

He stressed the importance of adapting to local circumstances rather than simply rolling out solutions without considering a community’s particular requirements, and noted that their work ensures inclusivity by focusing on local languages, not just foreign ones, to prevent discrimination.

Niyonkuru highlighted their efforts to promote equitable AI usage, saying, “Our work revolves around addressing biases, with our clients being responsible users. We have a duty towards our clients to ensure proper usage, avoiding representation that favours one segment of the population over another.”

Furthermore, in their data collection, particularly during quality assurance, they prioritise identifying and flagging biases to ensure fairness and accuracy.