In Rwanda, as everywhere else in the world, Artificial Intelligence (AI) has become a prominent subject of discussion. Its rapid, exponential progress and unpredictable evolution have instilled a deep-seated nervousness among policymakers and the public alike. This AI anxiety has, in turn, amplified into ‘urgent calls’ for robust and comprehensive regulatory frameworks on a global scale. It is undeniable that AI presents global risks that warrant a unified global response. What is interesting about these ‘urgent calls’, however, is that they are often framed as global concerns, seemingly unrelated to any specific geography or people. That framing is not entirely accurate. The status, need, benefits, and risks of AI vary significantly across countries and regions. For example, while some countries primarily consume AI technologies, others are active producers; some have already integrated AI systems and tools into their daily operations, while others have chosen to start with, or concentrate on, regulation; some countries have progressed beyond the experimental phase of AI adoption and diffusion, while others are still in the early stages of understanding the subject. It is precisely because of these diverse realities, on questions such as AI’s status, desirability, necessity, and potential, that different countries have dealt with its risks and benefits in different ways.

For Rwanda, I suggest that we take a step back and adopt a more inward-looking approach to AI regulation, grounded in what I term ‘regulatory realism’ as opposed to ‘abreast regulatory paranoia’. Let me explain what the two terms mean.

First, ‘abreast regulatory paranoia’. It has become a habit for us, Rwandan regulators, to act swiftly. This is of course commendable, and it is what regulation is supposed to be: a real-time exercise of checking and balancing a specific issue. Otherwise, regulation becomes mere reaction, acting after the fact. Swiftness, however, does not do away with the sound and very necessary time needed to study the issue at hand rationally and realistically and to arrive at the best regulatory measures. It is this delicate equilibrium between swift response and ex-post reaction that we have failed to attain or maintain. That is not all, though. If that were the only problem, I would be able to propose a straightforward solution, but the issue is more nuanced. We have not only failed to attain or maintain this regulatory equilibrium; we have gone further and developed a form of abreast paranoia: anxiously reacting to barely existing, or sometimes completely non-existent, issues.

This abreast regulatory paranoia stems from a commendable culture that Rwandans have chosen to embrace: a culture of striving to move faster, embrace innovation, and stay abreast of global trends. This proactive stance has so far registered excellent results. But there is a flip side. In our eagerness to remain ahead of the curve, we sometimes find ourselves addressing challenges that do not directly pertain to us. We tend to adopt challenges simply because they are global, not necessarily because they are relevant to our context. Of course, I am not suggesting that Artificial Intelligence is not our problem.
In fact, given Rwanda’s bold and innovative technological ambitions, including initiatives like ‘proof of concept’, akin to trial and error in the medical industry, Rwanda is likely to be significantly impacted by AI, possibly more so than other African countries. I am instead arguing about the extent and nature of AI as a problem for Rwanda, that is, the risks, threats, and uses it poses here compared, for example, to the United States or the European Union.

The approaches to AI regulation in the U.S. and the EU reflect differing priorities, regulatory motives, and influences, shaped by factors such as pressure from big tech corporations and how each conceptualizes notions like data privacy and innovation. On one hand, the U.S. approach is mostly characterized by a decentralized, sector-specific risk-management strategy overseen by various federal agencies, and by non-legislative measures such as the recent Executive Order on AI. This approach is generally more industry-friendly and self-regulatory, and it is plausibly influenced by the significant role large tech companies play in the U.S. economy and innovation landscape. On the other hand, the EU’s approach is more comprehensive, with a range of legislation tailored to specific digital environments, including the long-pending EU AI Act. As one can safely discern, the EU’s AI regulation strategy is shaped by its strong stance on data privacy and protection, as evidenced by the General Data Protection Regulation (GDPR) and related acts. The EU’s regulatory framework suggests that its priority lies in protecting its citizens’ data privacy and ensuring transparency in AI applications.

Coming back to ‘abreast regulatory paranoia’: what really are Rwanda’s priorities as far as AI is concerned, and what regulatory motive lies behind, for example, the newly published National AI Policy? Whatever the case, our regulators need to be contextually aware and responsive. We need to ask ourselves more relevant and practical questions: how many AI systems and tools have actually been deployed and are in use in the public and private sectors? How many actual risks and harms have resulted from those few deployments, or do the threats and fears remain speculative? Is AI regulation truly among our most urgent needs, or is it AI research and development? Answering these questions makes us regulatory realists rather than overcautious, paranoid rheostats.

To look at things through the lens of ‘regulatory realism’ is to account for the context and for the actual, compelling needs as they prevail. The recently published Rwanda National AI Policy serves as an exemplary starting point, as it reflects well-considered priorities for the country. The policy ambitiously aims to develop a highly skilled workforce equipped with 21st-century skills and AI literacy, strengthen AI education and research at universities, enhance storage and compute capabilities, improve data quality for AI training, and promote Trustworthy AI in the public sector. These focus areas are mostly about R&D, adoption, and deployment of AI, which aligns well with what is needed. Yet there is an even broader view that regulators must keep in mind, even when they are tempted to act swiftly or are moved by bandwagon interests. This view involves contemplating why we need AI (purpose), how much of it we need (extent), and how much, and how far, we can compromise to achieve concrete, real AI development (sacrifice).
We therefore need to explore different avenues. For instance, should we activate the ‘proof of concept’ framework for AI so that it serves as a ‘free innovation economic zone’ (FIEZ), modelled on the classical free economic zone (FEZ) concept? In that way, we could prioritize the development of AI over stringent regulation, which in turn would encourage rigorous building and testing of AI systems and tools, shielding them from overly restrictive rules. Everyone has been here, at the try-and-fail phase. It has indeed taken significant risk-taking and regulatory accommodation to allow companies such as OpenAI to blossom.

A choice, therefore, will have to be made. I hold that regulating AI in the coming few years is going to be a very nationalistic and state-personalized task, in which countries will look inward and regulate AI according, for example, to how they make sense of it; the value and conceptual weight they attach to things such as personal data and the right to privacy; whether the country is culturally and socially innovation-friendly or technologically skeptical; whether they start off understanding AI as a negative force or a positive one; whether they think AI produces more social benefits than social harms; and whether they believe they can achieve the AI regulatory equilibrium or have surrendered their fate to copying and pasting solutions.

Lastly, Rwanda and Africa in general need to be even more cautious about whatever regulatory approach they take, particularly considering the continent’s historical sidelining from key ‘civilizational moments’. Africa’s trajectory has been profoundly marked and affected by its exclusion and disruption during the most life-altering of these moments, the Agricultural and Industrial Revolutions. During those two critical turning points, Africa was embroiled in exploitation through slavery and colonialism, which significantly weakened the continent’s ability to catch up with the rest of the world. It was left behind, indeed left behind by a good distance. This disruption systematically diverted Africa from significant global contributions and advancements. It is thus not far-fetched, in the context of AI, to ask whether the current push for AI regulation, in the absence of corresponding and equivalent AI development, is another iteration of such historical diversions and distractions. Africa’s approach to AI must therefore be thoughtfully considered, grounded in a clear understanding of its own interests and needs, to ensure meaningful participation in the AI revolution.

Michael Butera is Associate Director, Certa Foundation, and LL.M. Candidate, Harvard Law School.