Lambs to the Slaughter

Part 2 of 5 in a blog series looking at AI’s journey since the release of ChatGPT in November 2022.

After November 2022, many people who normally acted with care and caution seemed to lose the plot.

It was clear that Large Language Models (LLMs) were often misleading or just plain wrong. The ChatGPT user interface, like others later on, said so right up front. And there was no shortage of social media posts illustrating how an LLM had confidently invented a biography, fact, or citation – what came to be called LLM hallucinations.

And then everyone carried on using them anyway – not just for fun, but in the workplace. Why?

When a new tool saves you time, it offers more than convenience. It offers relief from the cognitive burden you carry all day, every day. It offers to make you faster, more articulate, more competent, more productive. All this for free, at a time when the modern world is relentlessly making you ever more overloaded, interrupted, and anxious.

So we all started using LLMs to summarise documents and draft new ones. Then to help us make decisions, at every level from forming company strategy to deciding on new hires. Human judgement dies not through ignorance, but through exhaustion.

Plausible is not the same as true

An LLM in its raw form doesn’t generate truth, but patterned plausibility. It creates something that sounds right, without any internal mechanism to say whether it’s true or not. With equal confidence, an LLM will compose a fluent essay about Shakespeare and cite non-existent legal cases or statistics. But fluent doesn’t mean reliable. Years on from ChatGPT’s public debut, hallucinations are as prevalent as ever. A major study in October 2025 showed that almost half of all AI answers contain errors, and a third have missing, misleading, or incorrect attributions.

Startlingly, attorneys have repeatedly been accused of citing fabricated legal authorities in official briefs. In a Connecticut case in early 2026, a lawyer was accused of including over a dozen non-existent cases in a motion, then amending it only to introduce further inaccuracies. In similar UK and US federal filings, lawyers apologised for including AI-generated citations that didn’t exist at all. These aren’t interns but qualified legal practitioners expected to verify facts before they appear in front of judges. Courts globally are now moving beyond warnings to imposing significant sanctions for AI hallucinations in legal filings, primarily fabricated case law, non-existent statutes, and fake quotes.

Other professional associations around the world are now debating standards and policies for AI use in research and drafting. Regulatory attention is increasing. News reports now routinely highlight AI errors in healthcare, public health information, and legal domains. Inaccurate AI health summaries, for example, could dangerously mislead millions.

But the underlying problem remains. LLMs still hallucinate, providing confident but incorrect responses without transparent links to verifiable evidence. And studies continue to find high error rates across AI-produced material in critical domains such as news, law, research, and policy. So why do we continue to trust LLMs?

The psychology of misplaced trust

The psychological dynamic runs deep. Humans are almost always cognitively overloaded – certainly in the modern workplace. Further, we respect fluency and instinctively trust prose that sounds authoritative. When an AI gives a concise, well-structured answer, many of us feel, emotionally, that we don’t need to verify every fact – especially if the answer conforms to our prior expectations and prejudices, which LLM responses are designed to do. As every successful politician knows, this is the basis of rhetoric.

And as many politicians learn to their dismay, the rubber hits the road when smooth and plausible statements are used as the basis for professional decisions. That’s when the system breaks and careers suddenly come crashing to an end.

The silent surrender of critical thinking

The real shift here isn’t about truth. People, like LLMs, have always made mistakes. The fundamental change is in accountability. People across the world are starting to believe that an LLM has greater authority than they do personally, ceasing to take responsibility for their decisions, and losing sight of the fact that they remain accountable for the consequences. The world has already seen how, whatever happens to ChatGPT, Sam Altman won’t lose his job. But you might.

For the foreseeable future, the real threat of AI isn’t runaway superintelligence taking over the world. It’s our willingness to outsource human judgment to tools that have only the veneer of authority – and none of the accountability.

To find out more about why we invented Dedoctive, keep reading this blog series.

