AuditBoard recently commissioned The Harris Poll to survey over 1,200 employed Americans on the use of AI-powered tools in the workplace compared to the presence of basic risk management controls. The results highlight interesting trends and potential concerns in AI adoption in the workplace.
Regarding adoption, the survey found that about half of employed Americans (51%) use AI-powered tools (e.g., ChatGPT, DALL·E 2, Grammarly) for work, which can be viewed as a positive trend as employees seek more efficient ways of working.
However, only about a third of employed Americans (37%) say their company has a formal policy governing the use of non-company-supplied AI-powered tools for work. From a risk management perspective, this gap leaves an unmitigated risk: employees may use AI however they choose, including with sensitive company information.
Another survey result confirms this concern: nearly half of employed Americans (48%) have entered company data into an AI-powered tool their company did not supply to help them do their work. This finding highlights risks to data security, privacy, and AI reliability when employees choose their own AI tools without proper vetting by IT security professionals.
A further statistic underscores an even greater concern: nearly two-thirds of employed Americans (64%) say using AI-powered tools for work is safe and secure, despite the gaps in policy and oversight noted above.
The survey results draw attention to a major risk tied to AI: a human cognitive bias known as the Dunning-Kruger effect. The Dunning-Kruger effect describes our tendency toward overconfidence in areas where we lack knowledge. In an AI context, this bias could lead employees who do not understand the technology to overestimate the capabilities of an AI tool.
John Wheeler, Senior Advisor, Risk and Technology at AuditBoard, points out that “AI systems, due to their human influences and inherent lack of self-awareness, can inadvertently reflect the Dunning-Kruger effect — leading to overconfidence in their capabilities.” To illustrate this point, an employee might use an unapproved AI tool to analyze confidential company data in a manner that produces an unintended result. The tool may provide calculations or conclusions without knowing its limitations, similar to the Dunning-Kruger effect in humans. The employee could take the inaccurate results at face value, placing too much trust in the AI’s capabilities.
Overconfidence and a lack of understanding of AI’s limitations can pose severe risks to any organization. The survey findings underscore the need for robust Integrated Risk Management strategies that account for the Dunning-Kruger effect in AI. At a minimum, organizations need clear guidelines for AI tool usage and data handling, along with employee education on AI’s limitations. AI tools will continue to spread through the workplace with or without formal intervention, so a comprehensive approach to risk management and policy development is crucial as organizations continue to integrate them.
About the Survey
This survey was conducted online within the United States by The Harris Poll on behalf of AuditBoard from June 22–26, 2023, among 1,217 employed U.S. adults ages 18 and older. Harris online polls’ sampling precision is measured using a Bayesian credible interval. This study’s sample data is accurate to within +/- 3.2 percentage points using a 95% confidence level. For complete survey methodology, including weighting variables and subgroup sample sizes, please contact firstname.lastname@example.org.