5 AI Auditing Frameworks to Encourage Accountability

Artificial intelligence (AI) is transforming industries at a dizzying pace, and it’s not just limited to futuristic robots or self-driving cars. From voice assistants like Siri and Alexa to Tesla’s Autopilot, algorithms are creeping into our daily lives faster than auditors consume coffee during tax season. But with great AI comes great accountability. Without comprehensive internal audit procedures, organizations risk bias, misuse, and, worse, regulatory roadblocks.

Why AI Auditing is Essential

AI auditing ensures that organizations minimize risks while maximizing the benefits of their AI systems. As AI’s influence expands, regulators increasingly expect internal controls that limit the discrimination and bias that can exist in AI algorithms. Internal audit functions must integrate AI auditability—yes, that’s a word we own—into their workflows to satisfy these expectations. But it’s not only about compliance. It’s also about ensuring governance and reducing security risks with robust frameworks that cover the entire AI lifecycle—from design and development through deployment and monitoring.

Managing AI risk is more than just documentation. It includes tracking inputs, outputs, and training data while evaluating the methodologies behind machine learning models. Mishandling datasets can lead to disastrous outcomes, especially in sensitive areas like healthcare, where patient data privacy and algorithmic auditing take center stage. Developing AI the right way is critical for aligning AI technologies with ethical practices and regulatory expectations.

When Should Internal Audit Engage With AI?

If your auditing team waits until the AI system is deployed, it is already in a reactive rather than a proactive position. The black-box nature of AI demands continuous assessment throughout its lifecycle. This proactive approach enables auditors to uncover issues before they evolve into public relations nightmares. Governance, monitoring, and policy management must take priority while aligning with enterprise risk management objectives. And don’t forget cybersecurity—a breached algorithm is just as damaging as a miscalculated balance sheet (ask anyone who’s faced a ransomware attack).

Regulations like the AI Act and GDPR emphasize the importance of protecting data privacy and implementing comprehensive risk management frameworks. Auditors must also consider human factors, ensuring AI doesn’t sideline ethical decision-making while trying to achieve total automation. A structured audit process that evaluates AI models, data collection practices, and information security protocols is essential for maintaining organizational integrity.

Top AI Auditing Frameworks for Internal Audit

Here are five frameworks that will make auditors breathe a little easier (and maybe even help them sleep better at night):

1. COBIT Framework

COBIT (Control Objectives for Information and Related Technologies) isn’t just an IT framework—it’s the Swiss Army knife of governance. Developed by ISACA, COBIT 2019 provides detailed guidelines on internal controls, risk metrics, and performance measures. It’s perfect for organizations looking to streamline operational risk management and AI governance without losing their sanity.

2. COSO ERM Framework

The COSO Enterprise Risk Management (ERM) framework brings structure to chaos. Its emphasis on governance, strategy, and stakeholder collaboration makes it indispensable for AI risk management. Need to conduct AI risk assessments or monitor model performance? COSO has you covered with tools that optimize decision-making processes.

Additionally, COSO and Deloitte’s white paper, “Realize the Full Potential of Artificial Intelligence,” sets out a helpful five-step framework to follow when establishing an AI audit program:

  1. Establish a governance structure and identify a senior executive to lead the AI program and provide risk and performance oversight.
  2. Collaborate with stakeholders throughout the organization to draft an AI risk strategy that defines roles, responsibilities, controls, and mitigation procedures.
  3. Complete an AI risk assessment for each AI model in use, understanding how it uses data and whether it introduces any unintended bias.
  4. Develop a view of risks and opportunities such as those pertaining to model malfunction.
  5. Specify an approach to manage risks and the associated risk-reward trade-offs.
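Step 3’s check for unintended bias can be made concrete. As a minimal sketch (the group labels, decision records, and the four-fifths threshold below are illustrative assumptions, not part of the COSO/Deloitte guidance), an auditor might compare approval rates across demographic groups in a model’s output:

```python
from collections import defaultdict

def selection_rates(records):
    """Compute per-group approval rates from (group, outcome) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        approved[group] += outcome
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(records, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's.
    Ratios below ~0.8 commonly flag potential adverse impact
    (the 'four-fifths rule' used in employment-discrimination analysis)."""
    rates = selection_rates(records)
    return rates[protected] / rates[reference]

# Hypothetical loan decisions: (group, 1 = approved / 0 = denied)
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 75% approved
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 25% approved

ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 ≈ 0.33
```

In practice this kind of check would run against a representative sample of real model decisions, with the auditor documenting both the metric and the threshold chosen.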

3. U.S. Government Accountability Office (GAO) AI Framework

The GAO’s AI Accountability Framework might sound federal, but it’s highly adaptable for private organizations. Focused on governance, data quality, performance, and monitoring, this framework enhances compliance while improving oversight. It’s like having a GPS for AI accountability—no wrong turns here.

Defining the basic conditions for accountability across the entire AI life cycle, the GAO AI framework is organized around four complementary principles:

  • Governance — Promote accountability by establishing processes to manage, operate, and oversee the implementation.
  • Data — Ensure the quality, reliability, and representativeness of data sources, origins, and processing.
  • Performance — Produce results that are consistent with program objectives.
  • Monitoring — Ensure reliability and relevance over time.

The framework expands on each of its four principles — governance, data, performance, and monitoring — with specific questions and audit procedures that auditors can apply directly.
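The Monitoring principle in particular lends itself to automation. One drift statistic auditors commonly encounter is the Population Stability Index (PSI); the GAO framework does not prescribe it, so treat the sketch below — including the bin count, the assumed 0–1 score range, and the alert thresholds — as illustrative assumptions:

```python
import math

def psi(baseline, current, bins=5):
    """Population Stability Index: how far the current score distribution
    has drifted from the baseline. Common rule of thumb:
    < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant drift."""
    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int(s * bins), bins - 1)  # scores assumed in [0, 1]
            counts[idx] += 1
        n = len(scores)
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / n, 1e-4) for c in counts]

    base, cur = proportions(baseline), proportions(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, cur))

baseline = [i / 100 for i in range(100)]           # scores seen at deployment
drifted = [min(s + 0.3, 0.99) for s in baseline]   # scores shifted upward

print(f"PSI vs. deployment baseline: {psi(baseline, drifted):.3f}")
# Well above the 0.25 'significant drift' threshold, so monitoring would
# escalate this model for review.
```

A monitoring control built on a metric like this gives the audit trail a quantitative trigger rather than relying on ad hoc judgment about when a model has degraded.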

4. IIA Artificial Intelligence Auditing Framework

The Institute of Internal Auditors (IIA) brings strategy, governance, and ethics to the forefront with its AI framework. Covering everything from cyber resilience to data architecture, it’s designed to align AI initiatives with corporate objectives. Plus, it helps tackle the challenges of measuring performance and addressing the human factor in AI. Adherence to auditing standards ensures consistency and reliability in the evaluation of AI technologies.

In its three-part series, “Artificial Intelligence – Considerations for the Profession of Internal Auditing” (Part I, Part II, Part III), the IIA provides detailed recommendations for each of these overarching components, including engagement objectives and procedures internal audit can use as it formulates an AI program in line with its organization’s risk profile and strategic objectives.

5. Singapore PDPC Model AI Governance Framework

Leave it to Singapore’s Personal Data Protection Commission (PDPC) to set the gold standard. Their Model AI Governance Framework emphasizes transparency, stakeholder communication, and policy management. Auditors can use this framework to safeguard reputations and ensure ethical AI implementation. Bonus points for its practical use cases, which bring theory into action and highlight the responsible use of AI.

Though not created specifically with the internal audit function in mind, the framework and its related Implementation and Self-Assessment Guide for Organizations (ISAGO) provide auditors with useful information when setting up or analyzing their organization’s AI program. Additionally, their Compendium of Use Cases contains detailed practical illustrations of the framework in action across varying sectors, including specific examples of how organizations have effectively put in place accountable AI governance.

Leveraging AI Auditing Frameworks for Better Accountability

AI-enabled systems are as complex as they are powerful. To navigate these intricacies, internal auditors should blend elements from multiple frameworks. Tailored methodologies ensure regulatory compliance, mitigate risks, and drive value creation. And let’s not forget the practical side: incorporating use cases and real-world applications into audits showcases AI’s value while addressing automation’s inherent pitfalls.

Remember, auditors: Big data might be intimidating, but so is balancing the books during the year-end close. Take it one step at a time. Whether addressing AI models or fine-tuning an audit process, success lies in the details.

Frequently Asked Questions about AI Auditing

  1. How can AI be used in auditing? AI enhances auditing by automating repetitive tasks, improving fraud detection, and providing insights through data analytics. Think of it as your overachieving intern who never sleeps.
  2. How to audit AI algorithms? Auditors can assess AI algorithms by examining data sources, evaluating model performance, and ensuring compliance with governance frameworks. Spoiler: It’s a lot more exciting than auditing expense reports.
  3. Which auditing firms use AI? Leading firms like Deloitte, PwC, and EY are embracing AI to streamline processes and enhance risk assessments. Turns out, AI is just as good at spotting errors as your most nitpicky team member.
  4. How is generative AI used in auditing? Generative AI supports auditors by creating dynamic reports, simulating scenarios, and generating predictive analytics. It’s like having a crystal ball for audit outcomes.
  5. Will auditors be replaced by AI? Not likely. While AI can assist with audits, it can’t replace the judgment, ethics, and strategic thinking humans bring to the table. Plus, robots don’t get accounting jokes—and that’s half the fun.
Mai-Ann

Mai-Ann Nguyen is an Associate Director within CrossCountry Consulting’s Risk and Compliance team. For over 13 years, Mai-Ann has supported her clients with internal audit, risk management, and SOX implementation across a variety of industries in North America and the Australia-Pacific region. Connect with Mai-Ann on LinkedIn.

Philip

Philip J. McKeown is a Managing Consultant within CrossCountry Consulting’s Intelligent Automation and Data Analytics team. For over 10 years, McKeown has driven digital transformation strategy and execution across a broad range of industries and verticals for clients such as the Royal Bank of Canada, Bank of America, and Duke Energy. Connect with Philip on LinkedIn.