Responsible AI: the challenges of development adapted to African cultures and languages
At the 4th edition of GITEX Africa Morocco, the issue of responsible artificial intelligence was a recurring theme in discussions about the continent's technological future. Invited to participate in several panels, Philippe Beaudoin, a Canadian professor and researcher, former Google engineer, and AI specialist, shared his perspective on the challenges related to developing technological tools adapted to African realities.
When asked about the concept of aligned artificial intelligence, the researcher offered a definition centered on human experience. According to him, truly aligned AI relies on cultural and social proximity to its users. He described this approach as follows: “An aligned artificial intelligence (AI) is an AI used by people from different cultures and backgrounds, all over the world. Each of these people interacts daily with others. They share certain values and common beliefs. An aligned AI is one with which we have the same feelings. We feel we could interact with it every day without being surprised. It conveys jokes, values, and beliefs that we recognize.”
This perspective leads directly to the question of responsibility in the development of artificial intelligence technologies. Philippe Beaudoin explains that the increasing use of these systems in decision-making demands a higher level of rigor in their design. He observes that companies are already delegating numerous tasks to these digital tools.
“If we don’t create responsible AI, the problem is this: we are delegating more and more tasks to AI, for decision-making, for programming, and we see it today with the rise of agentic AI. If we don’t have an AI that is responsible and aligned, each of the decisions made by that AI will resemble us less and less. We will have the impression that we have delegated these tasks to people who do not understand us, or to a system that does not understand us as we are, that does not understand our reality or our way of seeing the world,” he says.
The expert clarifies that this issue isn't about a spectacular vision of technology. He believes that a lack of alignment would primarily prevent these tools from functioning correctly in the contexts where they are used. He states: "So the real importance of building ethical, responsible, and benevolent AI has nothing to do with whether it will save humanity or not. It's simply that it won't even be an AI that operates correctly in the environment in which we want it to operate—our country, our jurisdiction, etc."
The issue of technological sovereignty was another point addressed during the interview. Philippe Beaudoin believes that the African continent has room to maneuver in developing its own artificial intelligence systems. He does, however, point out that several technical conditions must be met.
The AI specialist indicates that this sovereignty rests first and foremost on essential infrastructure. The existence of data centers, access to energy, and data availability are prerequisites. The researcher also emphasizes the linguistic and cultural dimension. According to him, AI systems must be able to understand local languages and respect cultural contexts to meet the expectations of the population.
He describes a progressive technical approach that could facilitate this development. The researcher suggests building systems in successive layers. The first step involves a generic training phase focused on language comprehension. Additional layers can then be added during a process called post-training.
These subsequent phases could be implemented at a much lower cost than the initial training, which, he argues, paves the way for broader participation of African stakeholders in the development of these technologies. Cooperation between countries or institutions would also allow for the creation of a common foundation based on open-source models, which could then be enriched with cultural and linguistic data specific to the continent.
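The layered approach Beaudoin describes, a shared generic foundation followed by lighter, locale-specific post-training, can be sketched in miniature. The snippet below is a toy illustration only, not a real training pipeline: the "model" is just a table of word weights, and "post-training" re-weights it with a small, hypothetical local corpus.

```python
from collections import Counter

def pretrain(corpus):
    """Expensive generic phase: learn word weights from a large, generic corpus."""
    return Counter(word for text in corpus for word in text.split())

def post_train(base_weights, local_corpus, boost=5):
    """Cheaper added layer: start from the shared base and re-weight it
    with a much smaller, culture/language-specific corpus."""
    tuned = Counter(base_weights)  # copy the shared foundation
    for text in local_corpus:
        for word in text.split():
            tuned[word] += boost  # each local example counts for more
    return tuned

# Shared foundation (stands in for an open-source base model).
base = pretrain(["the market opens early", "the report is due today"])

# Lightweight local layer (hypothetical culturally specific usage).
local = post_train(base, ["the market teranga spirit"])

print(local["teranga"] > base["teranga"])  # prints True: the local term now carries weight
```

The design point mirrors the article's argument: the expensive step (here, `pretrain`) is done once and shared, while each community pays only for its own `post_train` layer.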
The issue of bias in artificial intelligence systems remains a central concern in public debate. Philippe Beaudoin believes that these biases do not stem solely from the data used during the initial learning phases. "But the truth is, we often focus on biases inherent in the data itself. However, what we now know through technology is that these biases can gradually disappear with subsequent training, with post-training," he states.
The data collected online would primarily serve to establish a baseline for understanding the language. Biases may appear at this stage, but they can be corrected later through interactions with human trainers. These trainers question the system, evaluate the responses, and adjust the model's behavior when necessary.
The researcher points out, however, that the total absence of bias remains impossible. He believes that all forms of language or decision-making necessarily involve a degree of subjectivity. "There will always be biases. Because speaking means having biases. Deciding means having biases. We don't have a choice. But the goal is to ensure that these biases reflect those of the population we are studying," he summarizes.
According to him, this cultural adaptation occurs precisely during the post-training phase. The systems can then integrate cultural, social, or linguistic references specific to the societies that use them. Biases initially present in the data can also be progressively corrected during this process.
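The feedback loop described above, trainers rating responses and the system's behavior shifting toward local norms, can also be sketched as a toy update rule. All names and values below are hypothetical, chosen only to show how repeated human ratings can progressively override a bias inherited from web data.

```python
def apply_feedback(scores, ratings, lr=0.5):
    """One post-training round: for each rated response, move its score
    a fraction (lr) of the way toward the human trainer's rating."""
    updated = dict(scores)
    for response, rating in ratings.items():
        old = updated.get(response, 0.0)
        updated[response] = old + lr * (rating - old)
    return updated

# Initial scores stand in for biases absorbed during generic pre-training.
scores = {"greeting_formal": 0.2, "greeting_local": 0.1, "greeting_slang": 0.8}

# Hypothetical trainers repeatedly prefer the locally appropriate greeting.
round1 = apply_feedback(scores, {"greeting_local": 1.0, "greeting_slang": 0.0})
round2 = apply_feedback(round1, {"greeting_local": 1.0, "greeting_slang": 0.0})

print(max(round2, key=round2.get))  # prints greeting_local
```

After two rounds the initially dominant option has been demoted and the locally preferred one promoted, which is the mechanism, in caricature, by which post-training can steer biases toward those of the population being served.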
The cost of developing artificial intelligence systems remains another frequent topic in discussions about technological sovereignty. Philippe Beaudoin acknowledges that these projects require significant investments. However, he qualifies the idea that they are reserved for global superpowers.
“Creating AI is expensive. Or it may seem expensive. That’s actually, in a certain way, what they want you to believe: what the big American companies want you to believe. It’s so expensive. It’s so complex. Don’t do it. Let us do it,” he argues.
To support his claims, he cites international initiatives that have reduced these costs, notably DeepSeek in China. According to him, the most expensive phase involves data collection and pre-training the models; subsequent stages become progressively more affordable.
With all this in mind, the researcher encourages collective initiatives. Cooperation between African countries could establish common foundations for the development of artificial intelligence systems. These platforms could then incorporate characteristics specific to each society. "I would tell you not to be misled by people who say it's too expensive, too expensive, too expensive. But to realize that we can do it, at least partially. And above all, we can join forces. Because the interests of African countries, or even countries in Europe, lie in creating this common foundation on which we can develop AI that reflects our own culture," he concludes.