What Are the Risks of Artificial Intelligence?

Philip McKeown

Artificial intelligence (AI) comes up routinely as technology advances and software providers build and integrate AI into their products. Deloitte’s Global Artificial Intelligence Industry whitepaper predicts that the world will see AI-driven GDP growth of $15.7 trillion by 2030, and AI applications continue to spread across organizations and industries.

As the benefits of AI systems increase, so do the risks that come with these tools. This article addresses two main questions: What are the risks of artificial intelligence? And do the benefits of artificial intelligence outweigh the risks?

What Are the Risks of Artificial Intelligence?

Artificial intelligence has many potential risks — and as AI’s capabilities and pervasiveness expand, the associated risks will continue to evolve. For the purposes of this article, I will focus on five of the most common AI risks that exist today. 

1. Lack of AI Implementation Traceability

From a risk management perspective, a natural starting point is an inventory of the systems and models that incorporate artificial intelligence. Maintaining such a risk universe allows us to track, assess, prioritize, and control AI risks.
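To make this concrete, here is a minimal sketch of what an AI risk register might look like in Python. The record fields and the likelihood-times-impact scoring scale are hypothetical conventions for illustration, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI risk register."""
    name: str
    owner: str
    uses_ai: bool
    likelihood: int  # 1 (rare) to 5 (almost certain), illustrative scale
    impact: int      # 1 (minor) to 5 (severe)

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact scoring, a common risk-matrix convention.
        return self.likelihood * self.impact

def prioritize(register: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return AI-enabled systems sorted by descending risk score."""
    return sorted((r for r in register if r.uses_ai),
                  key=lambda r: r.risk_score, reverse=True)

register = [
    AISystemRecord("Invoice OCR", "Finance", uses_ai=True, likelihood=3, impact=4),
    AISystemRecord("Chat assistant", "Support", uses_ai=True, likelihood=4, impact=3),
    AISystemRecord("Payroll", "HR", uses_ai=False, likelihood=2, impact=5),
]

for record in prioritize(register):
    print(record.name, record.risk_score)
```

Even a simple register like this gives risk management a single place to ask whether a new purchase or upgrade quietly added an AI component.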

Unfortunately, the growing popularity of shadow IT means that technology is increasingly being implemented outside the purview of the official IT team. A McAfee study indicates that 80% of enterprise employees use non-approved SaaS (Software-as-a-Service) applications at work. Most often, this is done not maliciously but to increase productivity. Departments may opt for easy-to-purchase, cloud-based systems that include an AI component; at other times, a routine system upgrade may introduce AI into an application. In either case, artificial intelligence can be introduced without risk management or IT knowing.
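As a rough illustration of how a team might surface shadow SaaS usage, the sketch below compares application domains observed in network traffic against an approved list. The domain names and the idea that the observed set comes from proxy or firewall logs are assumptions made for the example.

```python
# Hypothetical data: in practice the observed set would come from
# proxy/firewall logs or a cloud access security broker (CASB) export.
approved = {"salesforce.com", "office365.com", "slack.com"}
observed = {"salesforce.com", "slack.com",
            "ai-notetaker.example", "quicktranscribe.example"}

# Unapproved applications are simply the set difference.
for domain in sorted(observed - approved):
    print(f"Unapproved SaaS detected: {domain} -- review for embedded AI features")
```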


2. Introducing Program Bias into Decision Making

One of the more damaging risks of artificial intelligence is introducing bias into decision-making algorithms. AI systems learn from the datasets on which they are trained, and depending on how a dataset was compiled, it may reflect assumptions or biases. Those biases can then influence the system’s decisions. For example, a hiring model trained on an organization’s historical hiring records may learn to penalize the kinds of candidates the organization previously passed over, encoding past discrimination as a prediction rule.
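One common screening step (useful but by no means sufficient) is to compare a model’s outcome rates across groups. The sketch below computes approval rates on hypothetical loan decisions and applies the “four-fifths” rule of thumb borrowed from US employment practice; the data and the 80% threshold are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical (group, approved) pairs produced by a model.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # bool counts as 0 or 1

rates = {g: approvals[g] / totals[g] for g in totals}
print("Approval rates:", rates)

# Four-fifths rule of thumb: flag any group whose rate falls below
# 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential disparate impact for group {group}: {rate:.0%} vs {best:.0%}")
```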

3. Data Sourcing and Violation of Personal Privacy

With the International Data Corporation predicting that the global datasphere will grow from 33 zettabytes (33 trillion gigabytes) in 2018 to 175 zettabytes (175 trillion gigabytes) by 2025, vast amounts of structured and unstructured data are available for companies to mine, manipulate, and manage. As this datasphere continues its exponential growth, the risk of exposing customer or employee data will only increase, and personal privacy will become harder to protect. When data leaks or breaches occur, the fallout can significantly damage a company’s reputation and may constitute a legal violation, as many legislative bodies now pass regulations that restrict how personal data can be processed. A well-known example is the General Data Protection Regulation (GDPR) adopted by the European Union in April 2016, which subsequently influenced the California Consumer Privacy Act passed in June 2018.
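One small, common mitigation is to pseudonymize direct identifiers before data is mined or shared. The sketch below replaces an email address with a keyed hash; the record layout and key handling are hypothetical, and pseudonymized data is generally still personal data under GDPR, so this reduces exposure rather than eliminating obligations.

```python
import hashlib
import hmac

# Hypothetical secret; in practice this key would live in a secrets vault
# and be rotated according to policy.
SECRET_KEY = b"store-me-in-a-vault-not-in-source"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed HMAC-SHA256 digest."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane.doe@example.com", "spend_usd": 1280.50}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)  # analysis can proceed without the raw identifier
```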

4. Black Box Algorithms and Lack of Transparency

The primary purpose of many AI systems is to make predictions, and the underlying algorithms can be so complex that even the people who created them cannot thoroughly explain how the input variables combine to produce a given prediction. This lack of transparency is why some algorithms are referred to as a “black box,” and why legislative bodies are beginning to investigate what checks and balances may need to be put in place. If, for example, a banking customer is rejected based on an AI prediction about the customer’s creditworthiness, the company runs the risk of being unable to explain why.
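Even when the algorithm itself is opaque, model-agnostic probes can recover a partial explanation. The sketch below uses scikit-learn’s permutation importance to rank which inputs a fitted model leans on most; the model choice and the synthetic, credit-flavored feature names are assumptions for illustration, not a complete explainability program.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a creditworthiness dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["income", "debt_ratio", "age", "tenure", "inquiries"]  # illustrative

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops;
# large drops mark the inputs the "black box" depends on most.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda item: item[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

A ranking like this is not a full causal explanation, but it gives auditors a starting point for challenging a model’s behavior.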

5. Unclear Legal Responsibility

The risks discussed so far lead to a further question: who bears legal responsibility for an AI system’s outcomes? If a system is built on opaque algorithms, and machine learning allows its decision making to refine itself, is the company, the programmer, or the system itself responsible? This risk is not theoretical: in 2018, a self-driving test car hit and killed a pedestrian. In that case, the car’s human backup driver, who had not been paying attention, was the one held responsible when the AI system failed.

Do the Benefits of Artificial Intelligence Outweigh the Risks?

The risks of artificial intelligence are significant, but the growth of these tools is also inevitable. Their benefits go beyond simple efficiency gains: when algorithms are trained to avoid bias, AI can even make decision making more equitable. As we deepen our understanding from risk management and audit perspectives, we should look for three key features in AI systems:

  • AI systems should include clear design documentation.
  • Machine learning should include testing and refinement (see the sketch after this list).
  • AI control and governance should take priority over algorithms and efficiency.
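On the second point, model testing can be wired into the same automated checks used for conventional software. The sketch below is a minimal pytest-style regression test asserting that a retrained model still clears an agreed accuracy floor; the synthetic data, the logistic regression model, and the 0.75 threshold are all illustrative assumptions.

```python
def test_model_accuracy_floor():
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the team's training data.
    X, y = make_classification(n_samples=400, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Fail the build if accuracy regresses below the agreed floor
    # (0.75 here is purely illustrative).
    assert model.score(X_te, y_te) >= 0.75
```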

We all have a responsibility to learn more about the risks of artificial intelligence and control those risks. The topic is not going away, and the risks will continue to grow and change as technology becomes more advanced and more pervasive. Organizations that embrace the three key points above will be better equipped to manage AI system risks that could otherwise have devastating legal and reputational consequences. 

Philip McKeown

Philip J. McKeown is a Managing Consultant within CrossCountry Consulting’s Intelligent Automation and Data Analytics team. For over 10 years McKeown has driven digital transformation strategy and execution across a broad range of industries and verticals for clients such as the Royal Bank of Canada, Bank of America, and Duke Energy. Connect with Philip on LinkedIn.
