Office workers

Computer says ‘yes’

Sloppy AI use allows us to be told what we want to hear, which is a worry in the workplace, says Jason Walsh

7 April 2026

There is a certain Pinter-like comedy to the idea of people using artificial intelligence (AI) to win arguments. Less funny is the possibility that in doing so, they are not so much sharpening their thinking as sealing it off.

As reported on this website, a new study has found that AI is having a curious effect on interpersonal interactions: it harms our ability to deal with criticism.

The problem, according to researchers from US universities Stanford and Carnegie Mellon, is that responses from AI chatbots could reinforce harmful beliefs and intensify conflicts. According to the study, published in the journal Science, after a single interaction with an approving AI, participants were more convinced they were in the right, while their “willingness to take responsibility, apologise or resolve conflicts declined”.


While studies are to be welcomed, we could already have intuited this outcome. Indeed, various newspapers have run reports on families and couples arming themselves with AI-augmented rhetoric in order to win arguments, whether in advance or after the fact, with the ‘spirit of the staircase’. And, I suppose, to better understand each other, but we’ll come to that. Tonally, such reports tend toward the ‘ho-ho-ho, silly people’ slice-of-life piece or its posh cousin, armchair sociology, but the truth is that applying the logic of flowcharts, however unimaginably complex, to our interactions is going to have an impact, and probably not a good one.

The fact that AIs tell people what they want to hear or, rather, produce output based on what is input to them is not news. Indeed, many of the problems with AI simply come down to our failure to use it correctly. 

Frankly, the old computing adage ‘garbage in, garbage out’ applies in all situations, not just where machines are involved. Nevertheless, having a computer sharpen our arguments, even if only on a surface level, is very appealing.

Unfortunately, here is a simple fact that such a model cannot account for: people are complex. 

This itself is, of course, one of our great preoccupations. Anyone familiar with, frankly, any form of narrative art will tell you how genres develop in complexity, often leading to revisionist or ‘New Wave’ phenomena where simple ideas of good and bad are made rather more complicated. 

In other words, on some fundamental level we know that, and tell stories about how, other people’s motivations are obscure to us, and even that our own can be unknown. So, if we know everyone is a mess of contradictions, why then would we act as though they are input-output devices?

We need to talk about work

Thinking about all of this, one question occurred to me: what does this mean for the workplace? Much of the talk about AI at work is, naturally, about job losses and job creation, with so-called guardrails and the tendency of AIs to hallucinate untruths coming a close second and third.

The fact that wonky algorithmic analysis has already had a negative impact, such as being the proximate cause of citizens being wrongly fined in the Netherlands, shows just how dramatic the potential problems are, but the idea that AIs can be used to shut down arguments has me pondering a more quotidian form of conflict.

Workplace communication is simply hard. Most of us are not properly trained for it; many of those who are trained have learned oddball pseudoscientific management techniques; motivation levels vary, as do their roots; the tasks we are set are only partially ours; and, of course, every organisation, and every individual within it, differs, with some being, in today’s parlance, ‘toxic’.

What, then, do we do when our communications are driven by a machine that is not so much a copilot as an autopilot?

The answer, or at least part of one, may lie in remembering what AI actually is: a system that processes what we give it. It does not know what we do not tell it. Indeed, it does not, in any real sense, ‘know’ even the many useful facts it can tell us, however accurate they may be. More than that, though, an AI cannot weigh what we have withheld. In the workplace, where so much goes unspoken, that is a significant limitation.

However impressive an AI’s data sifting appears, and however much better its recall seems compared to ours, what it is actually doing is processing the information we give it. Its apparent superiority is a reflection of our own input, organised and returned to us. 

When we talk to an AI, we are talking to ourselves. That is genuinely useful for thinking – but it is also worth thinking about.
