
ChatGPT errors prove it doesn’t have ‘all the answers’

Some tech bros are about to get a lesson in defamation law

6 April 2023

It looks like ChatGPT is getting into a bit of bother on a number of fronts. First, the Italian government temporarily banned the chatbot. As far as I am aware, there is no truth in the suggestion by one wag that the chatbot had been asked if it was ever acceptable to put pineapple on pizza and replied that “it was a matter of personal choice”. According to reports, the rationale for the Italian government’s decision appears to be a combination of privacy and security concerns.

The Italian data protection regulator, known as Garante, pointed to a data breach at OpenAI, ChatGPT’s owner, which allowed users to view conversations other people were having with the chatbot.

Garante warned that there didn’t appear to be any legal basis “underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies”. It was also concerned by the lack of age restrictions and the number of factually incorrect responses ChatGPT provided.

Ireland’s Data Protection Commission (DPC) and the French privacy regulator have already been in contact with Garante. A spokesperson for the DPC confirmed it was “following up with the Italian regulator. We will coordinate with all EU data protection authorities in relation to this matter.”

Germany’s commissioner for data protection, Ulrich Kelber, told the Handelsblatt newspaper that Germany could impose a ban similar to Italy’s in the future.

At an EU level, work is currently being undertaken on how to include generative AI in the forthcoming AI Act.

ChatGPT’s tendency to get its answers wrong could also prove costly. A mayor in Australia is threatening legal action against OpenAI because the chatbot falsely claimed he had served time in prison for bribery. In fact, Brian Hood, mayor of Hepburn Shire, 120km northwest of Melbourne, was the person who notified the authorities of the bribery.

As of 6 April, OpenAI had not responded to Hood’s letter threatening a possible defamation lawsuit.

It should, perhaps, not come as a surprise that ChatGPT is suddenly facing potentially significant legal hurdles in the realms of privacy and accuracy. It’s a common fault of the ‘tech bros’ that their enthusiasm for the possibilities of a technology (and the dollars it might earn them) often eclipses their concern for its legal consequences.

When Microsoft announced that the next version of Bing would be powered by OpenAI, CEO Satya Nadella stated: “It’s not just a search engine; it’s an answer engine – because we’ve always had answers, but with these large models, the fidelity of the answers just gets so much better.” Well, if mayor Brian Hood is correct, maybe not quite.

There are discussions about how people can be made aware when they are interacting with AI-generated content, and an organisation called the Partnership on AI has set out voluntary recommendations that OpenAI and others have signed up to. But as Hany Farid, professor at the University of California, Berkeley, told the MIT Technology Review, “voluntary guidelines and principles rarely work”.

And as he points out, the recommendations acknowledge that AI could have seriously adverse consequences, so “why aren’t they asking the question ‘Should we do this in the first place?’”.

Maybe that’s a question someone should ask ChatGPT. Hopefully, it won’t get the answer wrong.
