5 AI Auditing Frameworks to Encourage Accountability

Artificial Intelligence (AI) is proliferating across industries and deepening its impact on many aspects of our personal lives — from voice assistants like Apple’s Siri and Amazon’s Alexa to driver-assist systems like Tesla Autopilot. While AI’s growing ubiquity delivers competitive advantages and other benefits, it also brings commensurate risks and pitfalls.

To position organizations to reap the benefits of AI, internal audit functions need to consider how to incorporate control monitoring to prevent and mitigate potential risks. Failing to effectively provide assurance over AI initiatives may have far-reaching impacts. Regulators have signaled that they will be watching and may hold organizations accountable for discrimination and bias in their AI algorithms. 

This article provides a quick overview of the stages in the AI life cycle where internal audit can provide assurance, and highlights five AI auditing frameworks that internal audit functions can leverage to capture the benefits of AI while protecting organizational assets and reputation.

When Should Internal Audit Get Involved With Artificial Intelligence? 

Too often, questions about an AI-enabled system are asked only after it has been designed, developed, and deployed, but waiting that long introduces unnecessary risk. A best practice is to ask questions as a team moves through each stage of the AI life cycle: design, development, deployment, and monitoring.

By conducting assessments throughout the entire AI life cycle, not solely at ad hoc points in time, internal audit can surface system-wide issues before they are unknowingly overlooked.

What AI Auditing Frameworks Can Internal Audit Leverage?

Protecting an organization against a diverse and ever-expanding range of risks may be daunting, but avoiding AI and the risks it presents is not a viable option in today’s highly digitized business environment.

Guidance on how to audit AI initiatives is becoming more commonplace, even if it remains in relative infancy: several AI auditing frameworks have been published by a mix of international organizations and governments that can aid the internal audit function.

  1. Control Objectives for Information and Related Technologies (COBIT) Framework 
  2. Committee of Sponsoring Organizations (COSO) Enterprise Risk Management Framework 
  3. US Government Accountability Office (GAO) AI Framework 
  4. Institute of Internal Auditors (IIA) Artificial Intelligence Auditing Framework 
  5. Singapore Personal Data Protection Commission (PDPC) Model AI Governance Framework 

We will provide a brief introduction to each of these potential AI auditing frameworks below, in no particular order. You may benefit from exploring the guidance and education provided by multiple frameworks, even if you select just one as your primary framework for auditing AI-enabled initiatives.

1. COBIT Framework 

The most recent version of the COBIT framework, COBIT 2019, was released by ISACA (the Information Systems Audit and Control Association) in 2018 to replace its predecessor, COBIT 5. Considered an “umbrella” framework — and recognized internationally for the governance and management of enterprise information and technology — it includes process descriptions, desired outcomes, base practices, and work products across nearly all IT domains. This broad IT applicability makes it well positioned to serve as a starting point for the internal audit function when auditing AI-enabled initiatives.

2. COSO ERM Framework 

Updated by the Committee of Sponsoring Organizations in 2017, the COSO ERM framework’s five components — governance and culture; strategy and objective-setting; performance; review and revision; and information, communication, and reporting — and 20 principles provide internal audit functions with an integrated and comprehensive approach to risk management. COSO ERM’s risk management approach can offer guidance for governing AI and effectively managing its associated risks for the benefit of the organization.

Additionally, COSO and Deloitte’s white paper, “Realize the Full Potential of Artificial Intelligence,” sets out a helpful five-step framework to follow when establishing an AI audit program: 

  1. Establish a governance structure and identify a senior executive to lead the AI program and provide risk and performance oversight. 
  2. Collaborate with stakeholders throughout the organization to draft an AI risk strategy that defines roles, responsibilities, controls, and mitigation procedures. 
  3. Complete an AI risk assessment for each AI model in use, understanding how it uses data and whether it introduces any unintended bias. 
  4. Develop a view of risks and opportunities such as those pertaining to model malfunction. 
  5. Specify an approach to manage risks and the associated risk-reward trade-offs. 
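These steps describe an organizational process rather than code, but an internal audit team that wants to track them systematically could start from a minimal model inventory like the sketch below. This is purely illustrative: the class names, fields, and example models are our assumptions, not part of the COSO or Deloitte guidance.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRiskAssessment:
    """Step 3: one risk assessment per AI model in use."""
    model_name: str
    data_sources: List[str]
    bias_reviewed: bool                               # was unintended bias evaluated?
    risks: List[str] = field(default_factory=list)    # step 4: e.g. model malfunction
    response: str = "accept"                          # step 5: accept / mitigate / avoid

@dataclass
class AIAuditProgram:
    executive_owner: str      # step 1: senior executive providing oversight
    risk_strategy_doc: str    # step 2: documented roles, responsibilities, controls
    assessments: List[ModelRiskAssessment] = field(default_factory=list)

    def models_missing_bias_review(self) -> List[str]:
        """Flag models whose risk assessment skipped the bias review."""
        return [a.model_name for a in self.assessments if not a.bias_reviewed]

# Hypothetical program with two models, one of which lacks a bias review.
program = AIAuditProgram("Chief Risk Officer", "ai-risk-strategy-v1.md")
program.assessments.append(ModelRiskAssessment(
    "credit-scoring", ["loan history"], bias_reviewed=False,
    risks=["disparate impact"], response="mitigate"))
program.assessments.append(ModelRiskAssessment(
    "ticket-routing", ["support tickets"], bias_reviewed=True))
print(program.models_missing_bias_review())  # ['credit-scoring']
```

Even a simple inventory like this gives the audit function a single place to see which models have an owner, a documented strategy, and a completed risk assessment.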

3. U.S. Government Accountability Office AI Framework 

Developed by the U.S. Government Accountability Office (GAO) and published in June 2021, the Artificial Intelligence Accountability Framework for Federal Agencies and Other Entities was created to “help managers ensure accountability and responsible use of AI in government programs and processes.” Although this AI auditing framework focuses on accountability for the government’s use of AI, it is anchored in existing control and government auditing standards, so internal audit functions can easily adapt it to their organization’s needs. 

Defining the basic conditions for accountability across the entire AI life cycle, the GAO AI framework is organized around four complementary principles: 

  • Governance — Promote accountability by establishing processes to manage, operate, and oversee the implementation. 
  • Data — Ensure the quality, reliability, and representativeness of data sources, origins, and processing. 
  • Performance — Produce results that are consistent with program objectives. 
  • Monitoring — Ensure reliability and relevance over time. 

For each of the four principles (governance, data, performance, and monitoring), the GAO report provides specific questions and audit procedures. 

4. IIA Artificial Intelligence Auditing Framework 

Comprising three overarching components — Strategy, Governance, and the Human Factor — and seven elements — Cyber Resilience, AI Competencies, Data Quality, Data Architecture & Infrastructure, Measuring Performance, Ethics, and The Black Box — the Institute of Internal Auditors’ (IIA) AI auditing framework aids the internal audit function in fulfilling the role of “helping an organization evaluate, understand, and communicate the degree to which artificial intelligence will have an effect (negative or positive) on the organization’s ability to create value in the short, medium, or long term”. 

In its three-part series “Artificial Intelligence – Considerations for the Profession of Internal Auditing” (Part I, Part II, Part III), the IIA provides detailed recommendations for each of these overarching components, including engagement objectives and procedures internal audit can use as it formulates an AI program aligned with its organization’s risk profile and strategic objectives. 

5. Singapore PDPC Model AI Governance Framework 

Created by Singapore’s Personal Data Protection Commission (PDPC) in conjunction with the World Economic Forum (WEF) Centre for the Fourth Industrial Revolution, the Model Artificial Intelligence Governance Framework, 2nd Edition, focuses on four broad areas — internal governance structures and measures, human involvement in AI-augmented decision-making, operations management, and stakeholder interaction and communication — as a baseline set of considerations and measures for organizations to adopt as they roll out AI initiatives. 

Though not created specifically with the internal audit function in mind, the framework and its related Implementation and Self-Assessment Guide for Organizations (ISAGO) provide auditors with useful information when setting up or evaluating their organization’s AI program. Additionally, the accompanying Compendium of Use Cases contains detailed, practical illustrations of the framework in action across varying sectors, including specific examples of how organizations have put accountable AI governance in place. 

Getting AI Auditing Right for Your Organization 

Whether you are starting an AI program from the ground up or adding an auditing structure to an existing one, these AI auditing frameworks can serve as a starting point or, more generally, a point of reference. No single framework needs to be used alone or in its entirety; we recommend taking the pertinent components and ideas from each to build an AI auditing approach best suited to your organization and its needs. 

Mai-Ann Nguyen

Mai-Ann Nguyen is an Associate Director on CrossCountry Consulting’s Risk and Compliance team. For over 13 years, Mai-Ann has supported her clients with internal audit, risk management, and SOX implementation across a variety of industries in North America and the Australia-Pacific region. Connect with Mai-Ann on LinkedIn.

Philip McKeown

Philip J. McKeown is a Managing Consultant on CrossCountry Consulting’s Intelligent Automation and Data Analytics team. For over 10 years, McKeown has driven digital transformation strategy and execution across a broad range of industries and verticals for clients such as the Royal Bank of Canada, Bank of America, and Duke Energy. Connect with Philip on LinkedIn.