Shadow AI is widespread and executives use it the most
The use of unauthorized AI platforms, known as shadow AI, is a significant problem facing businesses across sectors today, according to an international report from UpGuard.
In a remarkable development, UpGuard found that roughly one-quarter of workers consider their AI tools to be “their most trusted source of information,” nearly on par with their manager and higher than their colleagues or search engines. Employees in manufacturing, finance and health care reported the highest levels of trust in AI tools.
That trust perspective has consequences. “Employees who view AI tools as their most trusted source of information are far more likely to use shadow AI tools as part of their regular workflow,” UpGuard said.
Companies in a wide range of industries have shadow AI issues, with consistently high percentages of employees reporting periodic and regular unauthorized AI use across finance, IT, manufacturing and health care, among other sectors. Mid-level managers and low-level employees had the highest levels of overall shadow AI use, while executives had the highest levels of regular use.
All corporate departments use a lot of shadow AI, UpGuard’s report found, although marketing and sales teams reported using it to a greater extent than operations and finance personnel.
For security teams trying to reduce the prevalence of shadow AI, one of UpGuard’s findings is particularly notable: Employees use unapproved tools because they think they know enough to manage the risks.
“We found a positive correlation between users reporting that they understood AI security requirements and that they regularly used unapproved AI tools,” UpGuard said. “This data suggests that as employees’ knowledge of AI risks increases, so does their confidence in making judgments about that risk – even at the expense of following company policies.”
The correlation suggests that security awareness training is not a sufficient safeguard against threats, according to the report. “Such programmes need new approaches in order to succeed.”
Indeed, fewer than half of workers said they knew and understood their companies’ policies about AI usage. Meanwhile, 70% said they were aware of employees inappropriately sharing sensitive data with AI tools. That rate was even higher for security leaders, according to the report.
UpGuard’s report is based on two 2024 surveys of 1,500 security leaders and lower-level employees in the US, the UK, Canada, Australia, New Zealand, Singapore and India.
Cybersecurity Dive