By Confidence Mbang
Abstract
The rapid adoption and proliferation of Artificial Intelligence (AI) systems across Nigeria’s sectors, including healthcare, finance, transport, and the workplace, have raised questions about the determination of liability where AI-related harm occurs. Utilising the doctrinal methodology, this article adopts the analytical and comparative methods in examining the adequacy of existing common law liability principles in addressing AI-related harms in Nigeria. It evaluates extant legislation relevant to AI, identifying critical gaps that current doctrine fails to bridge. The article finds that Nigerian law is ill-equipped to provide meaningful redress to victims of AI-caused harm. Drawing from selected jurisdictions, such as the European Union and the United Kingdom, the article maps the contours of the liability/responsibility vacuum and calls for urgent scholarly and legislative attention. It is intended as a forerunner contribution – diagnostic in purpose – to a conversation that Nigerian jurisprudence can no longer defer.
Keywords: Artificial Intelligence, Liability, Responsibility, Regulatory Gap, Emerging Technologies.
As an operational reality, AI is no longer a distant technological prospect for Nigeria. Algorithmic systems now perform numerous tasks across diverse sectors: credit-scoring evaluation in fintech companies, diagnostic processes in healthcare institutions, decision-making in public administrative institutions, and even functions in workplaces shaping employment relations – underscoring AI’s cyber-physical impact. Together with other emerging technologies, AI has transformed public administration, social relations, and commercial practices. Generally, the defining features of AI, such as opacity, autonomy, and distributed agency, have made its regulation complicated.[1] ‘Black box’ machine learning models limit the ability of courts to determine fault based on conventional evidentiary standards. As AI systems become more autonomous and capable of self-learning and decision-making, it becomes increasingly difficult to exercise control over them and to impose liability.[2] Moreover, the factors for determining liability in semi-autonomous and autonomous systems differ significantly[3], as traditional models of responsibility based on human action, intent, and foreseeability, such as liability in tort[4], contract[5], administrative law, and criminal law[6], are no longer adequate to regulate AI and determine its liability regime. These shortcomings are equally evident in the Nigerian legal system, where cybersecurity threats and attacks have been amplified by AI systems and applications.[7]
In November 2023, the OECD launched its AI Incidents Monitor, which has recorded over 11,000 incidents and hazards to date – underscoring why compensating the harms that occur must be a primary concern if responsible AI and AI safety are to become a reality.[8] This is troubling for the most populous Black nation in Africa, which has positioned AI as a cornerstone of its National Digital Economy Policy and Strategy (NDEPS) 2020–2030. The federal government has nonetheless continued to develop initiatives and programmes such as the 3 Million Technical Talent (3MTT) programme and AI Trust, amongst others. But beyond this expanding development and deployment, there remains one under-examined issue: who is liable, and through what mechanisms can AI liability and responsibility be determined?
This issue is not merely academic but practical. As noted above, the features of AI strain traditional liability principles and expose regulatory lacunae. This article examines those strains. It maps the existing Nigerian legal framework against the distinctive liability challenges posed by AI systems, identifies the gaps that emerge from that mapping, and draws selectively on comparative developments – particularly the European Union’s Artificial Intelligence Act and the United Kingdom’s evolving regulatory posture – not to prescribe transplanted solutions, but to sharpen the diagnostic picture. The article is expressly descriptive in orientation: its contribution is to name and frame the problem with sufficient clarity that future scholarship, judicial reasoning, and legislative effort may proceed on firmer ground. Nigeria needs this conversation. This article is its forerunner.
Artificial Intelligence broadly refers to computational systems designed to undertake tasks that would ordinarily demand human cognition, such as reasoning, pattern recognition, decision-making, and natural language processing. Within this broad categorisation, machine learning and deep learning neural networks present the most acute liability challenges. Three features of AI create particular difficulties for liability doctrines and existing laws.
First, opacity. AI systems, especially deep neural networks, function as ‘black boxes’, producing outputs whose derivation from the inputted data cannot be traced or explained, even by their designers. This directly undermines the fault-based liability regime, which demands that the claimant prove a breach of the standard of care – an exercise that is practically impossible when the system’s reasoning is inscrutable.[9]
Second, autonomy. Unlike conventional software, which executes pre-programmed instructions, AI exhibits a distinct degree of behavioural adaptability and independence that may lead to decisions or actions not anticipated by the designer or deployer.[10]
Third, distributed agency. Agency is distributed across multiple actors in the development and deployment chain of AI systems. A typical AI deployment in Nigeria involves a developer who creates the underlying model, a vendor who adapts it for a specific application, an operator – such as a hospital or bank – who deploys it in a professional context, and an end-user who relies on its outputs. When harm occurs, the causal contribution of each actor is difficult to isolate, and responsibility may be diluted across the chain in ways that leave victims without a clear defendant. These features constitute the core liability challenge that must be confronted.
This section briefly analyses the existing policies and AI-related legislation in Nigeria. It also discusses the traditional common law doctrines and how they are strained when applied to AI-related harms and risks.
In Africa, Nigeria has been at the forefront of regulating AI and its liability implications. Although there is no specific AI legislation, tangible efforts have been made signalling the nation’s interest. The Federal Government, through the Federal Ministry of Communications, Innovation, and Digital Economy (FMCIDE) – expanding on the National AI Policy (NAIP) outlined in its 2023 whitepaper – released the National AI Strategy (NAIS) 2024 as a central policy document and strategic blueprint designed to make Nigeria a global AI leader by 2030.[11] The NAIS identifies four broad categories of AI risk: economic, ethical, societal, and model-related, and provides a roadmap for developing a robust framework to support the ethical and responsible use of AI.[12]
There is also the NITDA Draft Code of Practice for AI. The Draft Code of Practice for Artificial Intelligence is a NITDA-led regulatory instrument intended to provide sector-agnostic standards, risk-based obligations, and governance processes for the development, deployment and use of AI systems in Nigeria. The draft aligns Nigeria’s AI governance with national priorities (data protection, fundamental rights, cybersecurity) while proposing conformity, transparency, and accountability measures to mitigate harm from AI.[13]
There are also several legislative bills under review at the National Assembly. For instance, the Control of Usage of Artificial Intelligence Technology Bill (HB 942) seeks to create a unified legal framework for AI licensing and incident reporting, while the Bill on the Establishment of the National Institute for AI and Robotic Studies focuses on creating a national institute for research, standard-setting, and capacity building. This signifies a drift from a policy-driven regulatory pattern to a legislation-driven regulatory regime.
Aside from these, there are sector-specific regulations, guidelines, and policies, such as the Nigeria Data Protection Act General Application and Implementation Directive (NDPA-GAID), 2025, the NBA Guidelines for AI in the Legal Profession, 2024, the Code of Practice for Interactive Computer Service Platforms, and the National Digital Economy Policy and Strategy (NDEPS).
Nigeria has also attempted to regulate AI through reliance on existing legislation, such as the Nigeria Data Protection Act (NDPA), 2023, the Cybercrimes Act, 2015 (as amended, 2024), the Federal Competition and Consumer Protection Act (FCCPA), 2018, the Nigerian Communications Act, 2003, and the National Information Technology Development Agency Act, 2007, amongst others. The application of the foregoing legislation is usually imperfect and often leaves victims without compensation.
The doctrine of negligence, anchored on the foundational principle that a person must take reasonable care to avoid foreseeable harm to another[14], is one of the doctrinal vehicles through which AI liability may be addressed in Nigeria. Renowned scholars such as Diamantis have argued in its favour as the best model.[15] However, in Nigeria, the immediate difficulty with negligence as applied to AI is the breach inquiry. Establishing that a defendant’s conduct fell below the standard of the reasonable person presupposes that the defendant’s conduct (the defendant here may be the developer, deployer, designer, or user), rather than the system’s output, is the operative cause.
This difficulty is more apparent for autonomous systems operating without human agency than for semi-autonomous systems, where human involvement can be identified. Yet even in semi-autonomous systems, identifying the precise human decision that constituted the breach is deeply problematic. Did the developer breach a duty by releasing an under-tested model? Did the operator breach a duty by deploying it in a high-risk context without adequate oversight? Did the user breach a duty by over-relying on its outputs? Nigerian negligence law offers no principled mechanism for resolving these questions as applied to AI.
Vicarious liability – which renders an employer or principal liable for the torts of employees or agents acting within the scope of their authority – is another doctrinal vehicle that might hold operators liable for the deployment of AI systems. While this doctrine has received considerable attention globally[16], its application to AI is strained and imperfect: it demands legal personhood and volitional conduct, and an AI system is neither an employee nor an agent in any cognisable legal sense. Extending the doctrine by analogy would therefore require substantial judicial innovation that Nigerian courts have shown no appetite to undertake without legislative direction.

