From AI to IA: An Internal Audit Guide to Artificial Intelligence

Organisations integrating artificial intelligence into their standard operating procedures are in the same position as a builder starting new construction.

Building a house starts with its foundation. With proper planning, quality materials, and sound construction, a well-laid groundwork allows builders to confidently erect a stable structure. But rather than working with architectural schematics and concrete, organisations must lay the foundation of AI with effective controls and strong data governance.

Internal audit plays a critical role in shaping those controls and establishing parameters so that the organisation can reap AI’s benefits while mitigating its inherent risks.


The Role of Internal Audit

There are myriad use cases for AI across almost every industry and service sector, promising enhanced efficiency, improved productivity, and automated processing. Attaining those benefits requires careful navigation of the AI onboarding process, and internal audit is distinctly positioned to take a macro view, acting as a compass that guides the organisation toward enabling the technology’s capabilities while identifying its risks. In that regard, internal audit’s function remains unchanged from its traditional one. What does change, however, are the types of risk auditors must be aware of, chief among them the risks AI can present.

Those can include bias within AI models, inadequate training for users, failure to properly vet outputs, and other issues that emerge as the technology evolves. Looking specifically at the risk of bias within AI models, there are three types that internal auditors and their organisations need to be aware of:
  • Socioeconomic Marginalisation - Exclusion from societal participation due to economic status, limiting access to resources, opportunities, and services.
  • Racial Discrimination - Unfair treatment based on race or ethnicity, manifesting in prejudiced attitudes, biased behaviours, and discriminatory policies.
  • Demographic Exclusion - Systematic exclusion of demographic groups from opportunities, resources, or decision-making, leading to underrepresentation and marginalisation.
If left unchecked, these biases can create a ripple effect that, at a minimum, limits the efficacy of AI and may even lead to legal repercussions.


Keeping an Eye on Bias

One of the reasons bias within AI can be so detrimental is how easily it can sneak into a model. If AI is trained using biased data, it will incorporate that information into its outputs, leading to skewed, inaccurate results. And if someone working with that AI assumes the results are accurate, it can lead to erroneous reporting and conclusions.

For example, an AI model used to review homeowners insurance applications may automatically offer less coverage to applicants from hurricane-prone states, or deny them outright, based on its training. If the training data showed a large volume of claims activity from one or two states but not others, the algorithms could learn an unintended correlation between frequent, high-value claims and those states. This could lead to rejections for applicants who reside in those states but do not live in areas at risk of severe hurricane damage.
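The mechanics of that scenario can be sketched in a few lines. The states, figures, and decision rule below are invented for illustration; the point is that a model trained only on state-level claims history treats every applicant in a high-claims state identically, regardless of individual risk.

```python
# Hypothetical illustration: a naive underwriting rule learned from
# geographically skewed claims data. All states and figures are invented.
from collections import defaultdict

# Training data: (state, filed_claim) pairs. Claims history is dominated
# by two coastal states, so state of residence is the only "risk" signal.
training_data = [
    ("FL", True), ("FL", True), ("FL", True), ("FL", False),
    ("LA", True), ("LA", True), ("LA", False),
    ("OH", False), ("OH", False), ("OH", True),
]

def learn_state_claim_rates(rows):
    """Learn claim frequency per state -- the only feature available."""
    totals, claims = defaultdict(int), defaultdict(int)
    for state, filed in rows:
        totals[state] += 1
        claims[state] += filed
    return {s: claims[s] / totals[s] for s in totals}

rates = learn_state_claim_rates(training_data)

def naive_decision(state, threshold=0.5):
    """Deny any applicant from a state whose historical claim rate
    exceeds the threshold, regardless of their individual exposure."""
    return "deny" if rates.get(state, 0.0) > threshold else "approve"

# An inland Florida applicant far from hurricane exposure is still denied,
# because the model conflates state of residence with individual risk.
print(naive_decision("FL"))  # deny
print(naive_decision("OH"))  # approve
```

A human reviewer, or an evaluation checkpoint that tests the model against known low-risk applicants in high-claims states, would surface this conflation before it reached production.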

In that scenario, there are multiple AI risks that internal auditors should identify. The first is taking AI’s output at face value without assessing it with a human eye. AI is far from perfect, and it requires knowledgeable, experienced professionals who understand the subject matter to review its results. The second risk comes from the biased or “poisoned” model itself. Once an organisation realises its algorithms contain bias, it must take steps to retrain the AI with higher-quality data. That leads to the third risk internal audit can help address: establishing checkpoints and regular model evaluations so that if any bias does creep in, the organisation can correct it before it becomes firmly ingrained in the model.


The Consequences of Missing Risk and How To Address It

When left unchecked, AI risks can lead to a raft of potentially damaging consequences, ranging from fines to the Federal Trade Commission prohibiting an organisation from using AI altogether. In one notable example, the FTC banned a national retailer from using facial recognition software after bias in its AI disproportionately identified women and people of colour as shoplifters.

The failure of that organisation’s risk identification ultimately led to severe repercussions once the biased algorithm was put to work in a real-world application. The case also highlights another risk the company failed to address: inadequate employee training, which led to a lack of scrutiny of the AI’s faulty results. What’s more, without a holistic view of AI output or regular monitoring, the company missed identifiable patterns that would have revealed a problem with the number of false positives being generated.
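As a sketch of what such monitoring could look like, the following compares false-positive rates across demographic groups and raises an alert when the disparity exceeds a tolerance. The group labels, records, and 2x tolerance ratio are all illustrative assumptions, not the retailer’s actual data or method.

```python
# Hypothetical monitoring checkpoint: compare false-positive rates across
# groups and flag the model for review when the gap exceeds a tolerance.
from collections import defaultdict

# Each record: (group, flagged_by_model, actually_shoplifting)
outcomes = [
    ("group_a", True, False), ("group_a", True, False),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", False, False), ("group_b", False, False),
    ("group_b", True, True),
]

def false_positive_rates(records):
    """False-positive rate per group: flagged innocents / all innocents."""
    innocents, false_pos = defaultdict(int), defaultdict(int)
    for group, flagged, guilty in records:
        if not guilty:
            innocents[group] += 1
            false_pos[group] += flagged
    return {g: false_pos[g] / innocents[g] for g in innocents}

def disparity_alert(records, max_ratio=2.0):
    """True when the worst group's false-positive rate exceeds the best
    group's by more than max_ratio -- a signal to pause and retrain."""
    rates = false_positive_rates(records)
    lo, hi = min(rates.values()), max(rates.values())
    if lo == 0:
        return hi > 0
    return hi / lo > max_ratio

print(false_positive_rates(outcomes))
print(disparity_alert(outcomes))  # True
```

Run as a routine checkpoint, a check like this would have surfaced the skewed false-positive pattern long before regulators intervened.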

Auditors should ask questions about access and controls, particularly during the training stage of AI model creation. Proper controls allow organisations to maintain a reliable structure throughout the modeling phase while making it easier to establish what data the model takes in.

How Internal Audit Can Establish Effective AI Risk Management:
  • Build a strategic plan for the adoption of AI and place a governance model around it.
  • Align the organisation’s strategic plan to the internal governance model.
  • Identify government and regulatory rules that already exist around AI.
  • Develop a trusted AI testing program to confirm that AI tools are built on diversified models, that they are free of software errors, and that inaccuracies are identified throughout the development process.


Adapting to Change

It’s also crucial that organisations approach AI with a change management plan. Companies bringing AI into their standard operating procedures for the first time will need to pinpoint acceptable use cases and train employees to use the new tools responsibly without exposing the organisation to harm. For instance, generative AI can analyse text, including programming code, and suggest improvements. But submitting proprietary information to a publicly connected system exposes it to anyone in the world with access, potentially unveiling trade secrets and nonpublic information.

It’s not just users who require additional training, either. Auditors themselves will need to stay abreast of emerging risks and the different ways their organisations may be exposed to them. While the general skills of risk identification and mitigation still apply, AI’s rapid evolution will require auditors to apply their core competencies to unprecedented new challenges.


Regulations Are Changing

Regulatory bodies around the world are adapting to the changing AI landscape in their own ways. The European Union Artificial Intelligence Act (EU AI Act) is the world’s first comprehensive set of AI regulations, and it has become a de facto standard for organisations to follow, especially companies with a presence in any EU country. Auditors must understand how regulations abroad affect their organisations and what influence foreign laws may have on domestic legislation that follows. Further, the EU AI Act takes a risk-based approach to AI, so companies whose audit teams understand its rules are well positioned to bring AI services into their operations.


Building Atop the Foundation

AI is, at its core, still a technology developed by people. While it may be tempting to see AI as a cyber risk, it’s also an inherently human one. It’s prone to the same errors that any other tool is, and mistakes in model creation can have a domino effect that is difficult to undo.

Internal audit can help the organisation realise AI’s potential while navigating around the pitfalls that hinder progress. Auditors are not, and should not be seen as, obstacles to AI use, but rather enablers who help their organisations adopt AI responsibly and build a foundation for future success.
 
Learn how BDO Malta can help your internal audit team prepare for AI’s opportunities and risks.

Want to know more?
Contact us


Original content provided by BDO USA