Part 5 of 5 in a blog series looking at AI’s journey since the release of ChatGPT in November 2022.
In the working world, there’s a secret sauce to sustainability: trust. Without it, every decision has to be verified, safeguarded, and hedged. That takes time, costs money, and burns energy.
This matters enormously for AI. We often talk about AI as if it were a neutral tool that just happens to consume electricity. In reality, the degree of trust people can place in AI outputs has direct energy consequences that ripple through organisations and the systems around them.
Imagine generating a report with AI. If the output is trustworthy, you can review the main arguments and act. If it isn’t, you must verify every fact, check every source, correct every error, and run additional searches to ensure completeness. You must then try to reconstruct the reasoning that led to the conclusions – and since that reasoning is usually opaque, you may need to involve colleagues or external reviewers just to regain confidence.
All this turns humans into brake pads — constantly absorbing friction, slowing processes, and dissipating energy through repetition. AI may promise to reduce human labour, but when it can’t be trusted, it does the opposite: it increases human work and total energy use.
Trustworthiness, then, is not a nice-to-have feature bolted on at the end of a workflow. It’s fundamental. Like safety in engineering, it must be designed in from the start. In mature systems, trust comes from traceability, auditability, and accountability. Scientists cite sources. Engineers document failure modes. Lawyers trace claims to authority.
Mainstream AI systems, however, don’t work this way: they produce conclusions without exposing sources, reasoning, or uncertainty. That may be tolerable for drafting a shopping list, but it’s disastrous for decisions with financial, legal, environmental, or societal consequences. So people respond by adding layers of human activity that eliminate the potential energy savings.
For AI to become energy-efficient, it must draw only from validated data, use reliable processes, and provide explainable outputs. This cuts rework, collapses review cycles, and prevents organisations from piling on governance layers just to manage risk. It frees human time for higher-value tasks, reducing total energy use across offices, meetings, processes, and data centres.
Making AI sustainable means a lot more than cleaner power or smarter infrastructure. It means making it trustworthy.
Learn more about how Dedoctive wraps any LLM to make it trustworthy.