Artificial intelligence

Risky shadow AI use remains widespread

Netskope report says combination of novel threats and legacy security concerns will define threat landscape in 2026

6 January 2026

Shadow AI has been a known issue for years, but it remains a persistent challenge for organisations that are racing to incorporate AI into their workflows.

Nearly half (47%) of employees using generative AI platforms are doing so through personal accounts that their companies aren’t overseeing, according to Netskope’s report, which is based on cloud security analytics from October 2024 to October 2025. Unmonitored AI use creates gaps in companies’ security defences that hackers could exploit.

“A substantial share of employees are relying on tools such as ChatGPT, Google Gemini and Copilot, using credentials not associated with their organisation,” Netskope said.

The data painted a mixed picture of trends in personal AI use. On one hand, the percentage of people using personal AI apps dropped significantly from the prior year, from 78% to 47%. Meanwhile, the percentage of people using company-approved accounts rose from 25% to 62%. On the other hand, the percentage of people switching between personal and enterprise accounts more than doubled year over year, from 4% to 9%. That finding, Netskope said, indicated that companies “still have work to do to provide the levels of convenience or features that users desire.”

Personal AI use in corporate environments creates multiple risks, including incomplete regulatory compliance and unsecured API connections between external AI services and internal company servers. Data exposure remains one of the most common consequences of unvetted AI use, and Netskope said it observed a year-over-year doubling in “the number of incidents of users sending sensitive data to AI apps,” with the average company experiencing 223 such incidents per month.
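Netskope does not describe its detection logic, but the class of incident it counts can be illustrated with a minimal pre-send check. The Python sketch below is a hypothetical illustration, not Netskope’s method: the SENSITIVE_PATTERNS table and check_prompt helper are invented for this example, and a real data loss prevention engine would use far broader detection than a few regular expressions.

import re

# Illustrative patterns only -- a real DLP engine uses classifiers,
# fingerprinting and exact-match dictionaries, not just regexes.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def send_to_ai(prompt: str) -> None:
    findings = check_prompt(prompt)
    if findings:
        # A real deployment would block, redact or log the request
        # for review rather than just printing a message.
        print("Blocked: prompt contains " + ", ".join(findings))
        return
    print("Prompt allowed")  # forward to the external AI service here

send_to_ai("Summarise this thread: reach me at jane.doe@example.com")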

Security experts say the best way for organisations to crack down on shadow AI use and prevent such incidents is to prioritise the adoption of AI governance processes.

“The shift toward managed [AI] accounts is encouraging,” Netskope said, “yet it also highlights how quickly employee behaviour can outpace governance”. The company recommended that organisations implement “clearer policies, better provisioning, and ongoing visibility into how AI tools are actually being used across the workforce”.
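As a rough illustration of what that “ongoing visibility” could look like in practice, the sketch below tallies AI-bound requests by account type from a web-proxy log export. The CSV schema, the field names and the domain list are all assumptions made for this example; a real deployment would use a maintained URL-category feed and the organisation’s actual log format.

import csv
import io
from collections import Counter

# Hypothetical list of generative AI domains to watch.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "copilot.microsoft.com"}

def summarise_ai_usage(log_file) -> Counter:
    """Tally AI-bound requests by account type from proxy-log rows.

    Assumes CSV rows with 'domain' and 'account_type' columns, where
    account_type is 'managed' or 'personal' (an illustrative schema).
    """
    counts = Counter()
    for row in csv.DictReader(log_file):
        if row["domain"] in AI_DOMAINS:
            counts[row["account_type"]] += 1
    return counts

# Inline sample standing in for a real log export.
sample_log = io.StringIO(
    "domain,account_type\n"
    "chat.openai.com,personal\n"
    "gemini.google.com,managed\n"
    "copilot.microsoft.com,managed\n"
    "news.example.com,personal\n"
)

usage = summarise_ai_usage(sample_log)
total = sum(usage.values())
for account_type, n in usage.most_common():
    print(f"{account_type}: {n} AI requests ({100 * n / total:.0f}%)")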

Cybersecurity Dive
