Imagine this. It is not a real case, but it could happen any day: the AI said the lump was harmless. The surgeon believed it. The lump was cancer.
By the time doctors realised the mistake, the patient was dying. The family sued.
The hospital pointed at the software company. The company blamed the doctor. The doctor blamed the machine.
The judge had no similar case to guide him. The law had no answer. And the dead man had no justice.
This is not science fiction. This is the reality Nigerian healthcare is walking into—without a legal roadmap.
AI Is Already Here
Artificial Intelligence (AI) is not the future of medicine in Nigeria. It is the present—and it is growing fast.
Experts say Africa’s AI industry could be worth between $13 billion and $18 billion by 2030.
Nigeria is well placed to lead, if we build the right foundation.
Look At What Nigerians Are Already Doing
Professor Ibraheem Adeola Katibi at the University of Ilorin has created an AI system that reads ECG heart tests for African patients. Unlike foreign machines trained on Western data, his works for us.
Okiemute Obodo built an AI platform that has cut diagnostic delays by more than 60 per cent. It even spotted a tuberculosis cluster two weeks before traditional methods found it.
You may have used Wellvis—an app that uses AI to help figure out what might be wrong with you and suggest next steps.
More than 77 per cent of Nigerian healthcare workers believe AI can improve how we deliver care.
And with fewer than four doctors for every 10,000 Nigerians, AI is not a luxury. It is a necessity.
But necessity without law is dangerous.
Who Is Responsible When AI Gets It Wrong?
Here is the problem: Nigeria has no law that specifically deals with artificial intelligence.
Other places do. The European Union’s AI Act puts medical AI in the “high-risk” category and forces strict safety and transparency rules.
We have the Nigeria Data Protection Act of 2023. It says no one should be subjected to a decision made purely by automation without human involvement. That is a start. But it has never been used in an AI-medical error case. Not once.
Our Medical and Dental Practitioners Act was written long before computers could think. It controls human doctors, not digital assistants.
Nigerian legal scholar Abba Elgujja has pointed out that AI “operates within a medico-legal framework designed for exclusively human decision-making.”
He has proposed something called a “Standard of Reasoned Justification” for AI-driven care—a test that would ask: was the machine’s decision based on solid evidence? Could it be explained? Did it respect the patient’s rights? Who was ultimately responsible?
The silence in our laws gets louder as AI spreads.
In September 2025, Nigeria released a National Artificial Intelligence Strategy. In January 2026, the government announced plans to start regulating AI. But plans are not laws. And laws that do not yet exist cannot be enforced.
Where Does The Blame Fall?
Let us break it down.
The doctor? She relied on the AI. But modern AI is often a “black box”—not even its creators can fully explain how it reaches a conclusion.
Can we really expect a busy doctor in an underfunded hospital to overrule a system trained on millions of pieces of data?
The hospital? It bought the software from a vendor, often overseas. Hospitals rarely have the power to audit how the AI really works.
The developer? If the AI was trained on foreign data that does not reflect Nigerian bodies and diseases, is that bad design or bad deployment?
The machine? The law does not recognise computers as people. You cannot sue an algorithm.
The patient? They suffer the harm. And the law gives them no clear path to justice.
The Risks Are Real — And Nigerian
Two big dangers stand out.
First, algorithmic bias. The Federal Government has warned that many global AI platforms under-represent African data.
A machine trained mainly on European or American patients may miss symptoms that look different in us. It might say “nothing wrong” when something is very wrong.
Second, lack of informed consent. Patients often have no idea that an AI is involved in their diagnosis or treatment. No one tells them. No one explains the limits. No one gives them the right to say, “I want a human doctor to decide.”
Is it dignified to be diagnosed by a machine without knowing it? To get treatment advice from an algorithm you cannot question or hold accountable?
Dignity is not just about the final result. It is about how you get there. It is about having a say. It is about trust. And you cannot trust what you cannot hold responsible.
What Must Change
We do not need to reject AI. We need to govern it.
First, pass a dedicated AI Liability Act. It should spell out who is responsible when something goes wrong—developers for bad design, hospitals for poor oversight, doctors for final judgment.
And it should create a no-fault compensation fund for patients harmed when no one is clearly at fault.
Second, make mandatory disclosure a rule. Every patient must be told when AI is being used in their care, what role it plays, and what its limits are. Consent must be real, not assumed.
Third, keep humans at the centre. The law should say clearly: AI assists, but a qualified human clinician makes the final call.
Fourth, set up regulatory standards for medical AI. Before any algorithm can be used in a Nigerian hospital, it must be tested for safety, accuracy, and whether it actually works for our population.
Fifth, guarantee data governance and transparency. Patients have the right to understand decisions about their health—and to challenge them when they are wrong.
The Bottom Line
The machine that misdiagnosed that patient had no bad intentions. It had no feelings. It had no awareness.
But it had power. And power without accountability is dangerous—whether the power is held by a human or a computer.
Technology will keep improving. AI will get smarter, more widespread, more essential. But one thing must never change: the law must keep up.
Because when machines decide and humans suffer, staying silent is not an option.
The question is no longer whether AI will shape healthcare in Nigeria. It already has. The question is whether our laws will catch up—before we leave trust, dignity, and lives behind.