
Organisations don’t know how to shield their data from ChatGPT
A survey from compliance specialist Legal Island has found that organisations do not know how to stop ChatGPT from training on their data, despite a freely available setting that prevents it from doing so.
The survey of more than 100 organisations found that just 4% of users knew how to disable ChatGPT’s training function, a simple privacy toggle that prevents OpenAI from reusing sensitive data.
Before using ChatGPT, users can disable its training mode, a setting that, when left on, allows OpenAI to store and use input data to refine future responses. The report found that employers are allowing the use of ChatGPT without providing proper training or making employees aware of the importance of turning the training function off before using the tool.
Barry Phillips, chairman of Legal Island and author of a new book, ChatGPT in HR, said: “When the training feature is left switched on, OpenAI can capture the information entered into ChatGPT and recycle it to improve future outputs. If your staff are using ChatGPT with the training function left on, you’re potentially leaking commercially sensitive data into a giant AI engine. That data could pop up in someone else’s prompt next week. It’s a legal, reputational, and regulatory mess waiting to happen.
“While it’s encouraging that employees are embracing ChatGPT and teaching themselves how to use it, the lack of formal training is alarming. Our research shows a worrying knowledge gap as most employees in Ireland don’t even know the tool has a training function, let alone how to disable it.”
Kellie Shields, chief compliance officer at Legal Island, added: “People treat GenAI like a harmless toy; it’s anything but. Without proper training, it’s a data breach in the making. This issue is too important to ignore, so we’re encouraging employers to take action today and avail of the free compliance training.”
TechCentral Reporters