
Day 2 - Successfully harnessing AI in Africa

December 2 – December 4 2024 | WP3458


Session 3: Maximising the potential while mitigating the risks of AI

As AI technologies become more advanced, accessible, and embedded in Africans’ daily lives, ensuring their safe, ethical, and equitable use becomes increasingly urgent. This session examined how African governments, the private sector, and civil society can develop inclusive AI ecosystems that mitigate risks such as disinformation, biased data, labour exploitation, and cybersecurity threats. Speakers discussed AI governance models, trust-building strategies, and the potential for Africa-led regulatory frameworks that balance innovation with protection.

It was highlighted that AI’s economic potential is ultimately unlocked by removing barriers such as language and coding expertise. This is already happening: for example, non-technical African creators can use AI to build apps and to reach new markets. However, scaling AI use further requires a focus on trust and safety, including such elements as clear product policies, content moderation systems, and AI model testing. Google’s AI Principles outline risk-reduction strategies such as watermarking AI-generated content, red-teaming, and using classifiers to detect abuse.

Operationalizing these principles presents many challenges. It was noted, for example, that frameworks like “AI for Good” do not necessarily address labour market disruptions and inequality as effectively as desired. In addition, bias in training data can reinforce discrimination. Governments can help address these challenges by promoting open government data and prioritizing inclusive regulations that protect vulnerable communities, such as low-income workers performing data-labelling tasks.

There is also a need to promote the reliability of data, to ensure AI offers Africans better information than they may receive “offline” through neighbours, frontline health workers, educators, and other service providers – and that these traditional sources of information are also equipped to access reliable data faster through AI.

The key recommendations from the session to mitigate the potential risks of AI in Africa included:

  • Adopting risk-based governance frameworks for AI: Governments should implement dynamic regulatory systems that adapt as AI technologies evolve, through safety benchmarks, interoperable standards, and AI testing protocols.
  • Ensuring convergent national data policies and data privacy: Governments should create African-led data governance frameworks that emphasize open data initiatives while safeguarding personal and sensitive information.
  • Building awareness and mainstreaming AI ethics in AI development: AI training programs should include AI ethics, social science, and human rights considerations alongside technical AI skills, and stress the protection of human rights and vulnerable communities in operationalizing AI.

Session 4: AI in service of citizens: What governments can do to improve services with AI

AI has the potential to transform public service delivery by African governments. This session explored practical steps for integrating AI into public services to improve delivery, administrative efficiency, and transparency. Case studies from governmental leaders in Rwanda, the UK, and South Africa illustrated how governments are already experimenting with AI to enhance public services while managing technological and ethical challenges.

A representative from Rwanda discussed the government’s national AI policy, issued in 2023, which aims to add six percentage points to Rwanda’s GDP through AI adoption. Rwanda has successfully deployed AI-powered chatbots in healthcare and supports telemedicine and emergency logistics through drone-based medical deliveries. However, data availability remains a significant barrier.

The UK government shared lessons from its AI deployment strategy, which is guided by its Generative AI Framework. The government maps AI applications on a two-by-two model, with the complexity of the work (low to high) on one axis and its proximity to citizens (low to high) on the other. For example, teachers and doctors perform high-complexity, high-proximity work and need different AI applications than low-complexity services such as updating vehicle permits. In the UK, AI applications such as customer service chatbots and document processing have boosted productivity by 5–10 percent. These gains translate into real improvements in public servants’ working lives – for example, a nurse can use the time saved to see another patient.

However, challenges like AI hallucinations in chatbots have led to a cautious, experimentation-driven approach. This echoes the early internet era, when technological promise outpaced policy safeguards.

South Africa’s government outlined its work on national AI policy, which is envisioned to stress AI ethics, digital identity systems, and data infrastructure. Collaboration with universities has advanced public-sector AI tools, including a chatbot for e-government services. However, concerns remain about workforce displacement, digital rights protections, and balancing AI-driven automation with job retention.

There was a vibrant discussion on uses of AI in government. Some called for governments to test AI for various use cases even while it remains imperfect, while others advocated a more gradual approach. The UK has struck a balance, scaling AI in simpler tasks while experimenting and iterating in more complex, citizen-facing settings.

Key recommendations from the session included:

  • Establishing goals for AI in the government: Governments should implement AI action plans that set clear goals for digital transformation while balancing data security, transparency, and service equity.
  • Promoting responsible AI use in the public sector: Policymakers should adopt AI tools through iterative experimentation, rolling out low-risk, high-impact applications like administrative automation, health diagnostics, and agricultural forecasting while iterating more carefully on higher-risk applications such as the provision of personalized healthcare. AI policies must align with citizen rights, ensuring public trust and accountability.
  • Using public-private partnerships to promote public service delivery: Collaborations with tech companies, universities, and NGOs can accelerate AI adoption while reducing costs and sharing technical expertise. Governments should consider funding AI research through multi-sector innovation funds.

The World Café at the end of day 2 focused on small group collaboration exploring AI use-cases in agriculture and food systems, assistive tech, health, and education.

