Part 1 of 5 in a blog series looking at AI’s journey since the release of ChatGPT in November 2022.
In the noughties, I started blogging about the action research into collaboration I had been conducting for over a decade. After being approached by a US publisher, I published my first book, “Human Interactions”, in 2005. I spent the next few years on the road, explaining the formal theory of human collaborative work to academics, businesses, and governments. One conversation, with a professor of management at Oxford University, was particularly memorable.
“So,” he said, once I had finished setting out key ideas, “you want to change the way that everyone does everything.”
“Uh, yes, I guess,” I replied.
“Well, good luck with that,” he said.
Prophetic words. Over the two decades since, I have had as many failures as successes, and often the successes reverted to failures once I stepped away personally. However, there is an upside: you learn a lot more from failure than from success. In my case, I have come to understand vested interests, and how they can render history circular.
By contrast, time and again I have worked with people who achieved immediate success and took away from the experience a powerful belief in their own judgement and ability. Unfortunately, success is typically not transferable from a specific domain at a specific period to different areas of life, different aims, and different situations.
This makes successful people dangerous. They treat their past triumph as evidence that they are always right, when in fact it simply means they got it right once. They may have just been lucky enough to be in the right place at the right time – and to get out before the limitations of what they provided became apparent.
The AI industry as a whole is heading full-tilt in this direction. Astonishing early demonstrations of what an LLM can do have led pioneers to aim higher and higher. Platform providers train models in ever more areas of human expertise, and are aiming for broad-based human capability (aka Artificial General Intelligence). Meanwhile, the foundations are more like sand than rock.
Every day we hear a new example of LLM failure – or rather, of a failed workflow that uses LLM technology. People are coming to recognise the danger of relying on LLM responses, but the attraction – if only to save precious time in the working day – is so powerful that they rely on them anyway.
LLMs alone have limitations. That’s why we built Dedoctive – to enable a foundational technology of huge potential value to be used safely, with confidence, and without creating a public scandal.

In the next instalment of this blog series, I’ll look at the limitations of LLMs, which we designed Dedoctive to remedy.
