AI governance remains a major concern, writes SONNY ARAGBA-AKPORE
The future of technology is getting more interesting as the adoption of Artificial Intelligence (AI) and internet of things (IoT) takes centre stage.
But while the fast adoption creates excitement for scientists and those who desire to deploy technology for everyday use, there are manifest fears of possible abuse of AI if strategies are not put in place to guide both promoters and users.
 Questions of ethics and compliance are being raised and these have created worries for everyone.
And to douse these fears and create a semblance of comfort for all, AI governance is becoming necessary to stem a potential unwholesome practice.
"AI governance encompasses the frameworks, policies and practices that promote the responsible, ethical and safe development and use of AI systems. It establishes the guardrails that enable innovation while protecting stakeholders from potential harm," analysts agree.
Responsible AI governance considers, among other things, ethical standards, which define AI governance policies to promote human-centric and trustworthy AI and ensure a high level of protection of health, safety and fundamental human rights.
On regulations and policies, boards consider compliance with the applicable legal frameworks that govern AI usage where they operate, or intend to operate, such as the European Union (EU) AI Act.
Governance also addresses accountability and oversight, ensuring that organizations assign responsibility for AI decisions, maintain human oversight and prevent misuse and abuse.
Chief technology officers, risk officers, chief legal officers and their boards must develop a governance approach that protects data and prevents unauthorized access, so that AI systems don't become a cybersecurity threat.
As AI fast emerges as one of the most pressing strategic challenges facing boards and everyday living today, its governance remains a major concern.
In the Q4 2025 Business Risk Index conducted by Diligent Institute and Corporate Board Member, "60% of legal, compliance and audit leaders now cite technology as their top risk concern, well ahead of economic factors (33%) and tariffs (23%). Yet despite this urgency, only 29% of organizations have comprehensive AI governance plans in place."
Although there's currently no wide-scale governing body to write and enforce these rules, many technology companies have adopted their own version of AI ethics or an AI code of conduct.
 AI ethics are the set of guiding principles that stakeholders (from engineers to government officials) use to ensure artificial intelligence technology is developed and used responsibly. This means taking a safe, secure, humane, and environmentally friendly approach to AI.
A strong AI code of ethics can include avoiding bias, ensuring privacy of users and their data, and mitigating environmental risks. Codes of ethics in companies and government-led regulatory frameworks are two main ways that AI ethics can be implemented.
By covering global and national ethical AI issues, and laying the policy groundwork for ethical AI in companies, both approaches help regulate AI technology.
The future will see large parts of our lives influenced by Artificial Intelligence technology. Machines can execute repetitive tasks with complete precision, and with recent advances in AI, machines are gaining the ability to learn, improve and make calculated decisions in ways that will enable them to perform tasks previously thought to rely on human experience, creativity, and ingenuity.
"AI innovation will be central to the achievement of the United Nations' Sustainable Development Goals (SDGs) by capitalizing on the unprecedented quantities of data now being generated on sentiment behaviour, human health, commerce, communications, migration and more," according to International Telecommunication Union (ITU) documents.
The ITU said it will provide a neutral platform for government, industry and academia to build a common understanding of the capabilities of emerging AI technologies and the consequent needs for technical standardization and policy guidance.
"Countries must put in conscious efforts to mitigate the dangers of deployment if they want to achieve positive results," the ITU said.
AI governance is designed to prevent bias: AI models can inherit biases from training data, leading to unfair hiring, lending, policing and healthcare outcomes.
The report states that governance proactively identifies and mitigates these biases.
Beyond that, AI governance prioritizes accountability: when AI makes decisions, someone must be held responsible.
Governance holds humans accountable for AI-driven actions, preventing harm from automated decision-making.
PricewaterhouseCoopers (PwC) Head of AI Public Policy and Ethics, Maria Axente, was quoted as saying: "We need to be thinking, 'What AI do we have in the house, who owns it and who's ultimately accountable?'"
AI governance should also protect privacy and security, since AI relies on vast amounts of data, a particular risk for healthcare and financial organizations handling sensitive information.
Governance establishes guidelines for data protection, encryption and ethical use of personal information.
Governance prepares for AI's environmental, social and governance (ESG) impact: "Generative AI has a significant environmental impact, requiring massive amounts of electricity and water for every query. It also reshapes job markets and corporate operations." Governance helps create policies that balance AI's opportunities with its ESG risks, and it promotes transparency and trust. Many AI systems are considered "black boxes" with little insight into their decision-making; governance encourages transparency and helps users trust and interpret AI outcomes.
Governance also balances innovation and risk: as AI holds immense potential for progress in healthcare, finance and education, governance weighs innovation alongside ethical considerations and possible public harm.
As AI becomes a way of life, Geneva, Switzerland is fast becoming the global headquarters for AI, according to the ITU, the global telecommunications regulator.
From July 7 through 10, 2026, the world will converge on Geneva to deliberate and conclude talks on AI governance. The city will host the seventh edition of the "AI for Good Summit" as governments and institutions crystallize strategies for the future of AI across industries, homes, governments and the workplace.
Aragba-Akpore is a member of THISDAY Editorial Board

