Grok image controversy shows government more interested in placating elites than protecting children
I am not easily shocked but I have to admit to being appalled by the comments attributed to Patrick O’Donovan, Minister for Media, in The Journal on 8 January in response to the disgusting, unlawful creation and publication of child sexual abuse material (CSAM) on X using its generative artificial intelligence chatbot, Grok. According to the minister, X is not responsible for making child sexual abuse images and the responsibility lies with the people using it to make the images – images which, apparently, include a sexualised picture of a 14-year-old girl who died in the recent fire at a Swiss ski resort.
Speaking to Virgin Media News at the Young Scientist convention in Dublin, Minister O’Donovan said: “Ultimately, at the end of the day, it’s a choice of a person to make these images.” He added: “The use of the artificial intelligence is being developed at such a phenomenal rate that even if the law is changed in relation to this particular aspect of it, there’s no doubt about it, the technological advancements that are being made by young people and people that are not so young is far faster than the way in which law can be able to respond.”
Where to start with what seems like an absolutely craven abdication of responsibility by a government minister? I’m not even going to dwell on the “technology is moving too fast for the law to catch up” argument because, very often, the crimes being facilitated by advances in technology are merely an evolution of existing crimes. They might be easier and faster to commit, but they can still be punished.
Anyway, let’s move on to a really simple question for the minister: would someone be able to create CSAM as easily and quickly without Grok’s assistance? Would all those who made sexualised images of children have chosen to do so without Grok’s assistance? Critically, would those images have been disseminated and shared as widely and rapidly if they had not been made available on X?
As the Minister for Media, perhaps Mr O’Donovan might be able to tell us whether his reaction, and that of the “government” he is part of, would be the same if the Irish Times or another national newspaper opted to publish CSAM images submitted by readers in its pages every day on the grounds that the readers had chosen to make the images. I put that word in quote marks because there seems to be very little enthusiasm for governing in this instance.
Consider, for instance, the reluctance of governments and their departments in Ireland and elsewhere to withdraw from X. You would think that the government, which supposedly represents the people of Ireland, might feel compelled to disassociate itself from a platform that has widely published CSAM images. After all, if a newspaper or magazine featured CSAM photographs, you wouldn’t expect the government to defend remaining on it because, in the words of Ireland’s leader, Micheál Martin: “Platforms can be misused and abused, or they can be used for positive reasons as well, in terms of articulating various positions, government policies and so on like that.”
Perhaps we should stop to think about that for a minute because what he says about the government’s reluctance to stop publishing information on a platform that also published CSAM images is being said in the name of you and me and everyone else in this country.
It is some small mercy that the Minister of State for Artificial Intelligence, Niamh Smyth, has actually stated that the dissemination of these images is illegal. She told Newstalk Breakfast on 8 January that she was seeking a meeting with X to tell the company that “this is not legal, it is not acceptable, it is apparent, and it should be stopped”.
Regulation
She argued the issue was “about enforcement. It is about Coimisiún na Meán, a regulator, acting with the European Commission, working with the guards. We have laws in place and they need to be enforced and you would hope that there is some moral compass within X and Twitter that they would ensure that the Irish public are not subjected to harmful content”.
But it’s revealing that although she thinks the dissemination of the images is illegal, we should be dependent on X (formerly Twitter) having a “moral compass” to stop us from being “subjected” to harmful content. Again, if the Irish Times or any other national newspaper had published CSAM images, do you think she would be appealing to the publisher’s moral compass?
The creation and dissemination of CSAM images should not be tolerated in any shape or form. Any platform that publishes them should be held legally liable.
It is quite something that politicians and governments across Europe are being accused of soft-pedalling this appalling state of affairs because, in the words of Politico, it is turning into a “test for Europe on whether it dares crack down on Musk and other American Big Tech platforms, knowing it will draw the ire of US President Donald Trump amid a major crisis in transatlantic trust and saber [sic] rattling over Greenland”.
Stop to think about that framing and then try to keep in mind that what Politico is talking about here is child sexual abuse images. Whatever awful things this scandal says about the Irish government and its fellow EU members, it reflects even worse on a US administration that, it is widely believed, would threaten retaliation against any state seeking to enforce legal responsibility and accountability for images of child sexual abuse on the US platform that enabled people to create and disseminate them.
Yes, serious people honestly believe that the US administration would move to punish countries seeking to take action against one of its companies for helping to create and disseminate sexualised images of children. How corrupted has our world become that this should be a serious political issue? And how terrible is it that this shocking assault on our shared moral framework should have been facilitated and enabled, in no small part, by technology?