The silent cost of AI efficiency: the judgement imperative


In part two of this two-part series, we unpack how accounting firms protect expertise in an AI-first world, writes Andrew Cooke.

14 May 2026 By Andrew Cooke

Read part one of this series here.

Are the people inside your firm getting better or worse at the work that makes your AI tools worth having?

That question is not rhetorical. It is the design question that separates firms building something durable from firms running a sophisticated efficiency exercise.

This article provides the practical response: a five-point framework for firms that want to capture AI’s productivity benefits without dismantling the professional expertise those benefits depend on.

Two distinctions that change everything

Every AI application in an accounting practice is either automative or augmentative.

Automative AI replaces a task entirely. The professional exits the cognitive loop. Work gets done faster, but the friction that forms expertise is removed. Augmentative AI enhances a task while keeping the professional actively engaged. First-pass analysis is handled by the AI. The thinking remains human.


The Stanford study of ADP payroll data behind the 13 per cent decline in entry-level employment found that when AI augments rather than automates, employment effects for young workers were muted. The problem concentrates where AI substitutes for labour, not where it complements it.

The three capabilities AI erodes without replacing

  1. Technical judgement: The capacity to apply APES 110, the Corporations Act, and tax legislation to ambiguous situations as an active reasoning process atrophies precisely where it matters most. AI handles clear-cut cases accurately. It handles ambiguous cases with confident-sounding outputs that may be materially wrong. When practitioners rely on AI for borderline calls, expertise erodes at the point clients are paying for it.
  2. Contextual intelligence: Reading a client situation, understanding what is genuinely at stake, and knowing what advice they need versus what they want to hear is built through years of exposure. AI has no history with your clients. The experienced accountant does. That accumulated understanding cannot be transferred to a model.
  3. Professional scepticism: The trained instinct to question what looks right is the foundation of audit and assurance. A 2025 review in Artificial Intelligence Review found that AI-assisted practitioners showed progressive disengagement from complex cognitive tasks, reducing both technical proficiency and nuanced judgment. Scepticism atrophies fastest when confident-looking AI outputs arrive without challenge.

Myth v reality

Myth: “If something goes wrong with an AI output, we have senior review as our backstop. The controls are in place.”

Reality: Senior review is only a meaningful control when the reviewer has the underlying expertise to detect the error. That expertise is built through the kind of junior-level work AI has automated. When a manager reviews AI-generated tax analysis without the foundational experience of building that analysis manually, their review is not a control – it is a performance of confidence. The governance exists on paper. The capability it depends on may not exist in practice.

The five-point expertise protection framework

  1. Map every AI workflow as automative or augmentative. For each workflow, ask explicitly: is the professional still doing the thinking, or has AI removed it entirely? Automative uses require deliberate compensating mechanisms. Most firms have never made this distinction systematically.
  2. Redesign junior roles around what AI cannot do. If AI handles reconciliations, what replaces that experience for graduates? Exception handling and root-cause analysis. Client-facing explanation of complex outcomes. Compliance narrative drafting with structured senior commentary. The tasks change – the expertise-formation requirement does not.
  3. Build deliberate friction into AI-assisted review. Before a manager approves an AI-generated tax position, ask them to articulate, without reading the AI output, what they would expect the answer to be and why. If they cannot, the review has no technical substance. Singapore’s National University Health System introduced mandatory AI-free clinical periods after observing exactly this pattern. Accounting needs an equivalent discipline.
  4. Track expertise development with a metric most practices never measure. Utilisation rates tell you whether AI is working commercially, not whether your people are developing. Track the number of times per quarter a team member documented an override of an AI recommendation – with written reasoning – that was subsequently validated. The ability to disagree with AI correctly is a professional capability worth protecting.
  5. Make AI literacy a non-negotiable professional baseline. Understanding what AI cannot do – where it fails and how to verify outputs – is now foundational competency. ASIC’s REP 798 found 50 per cent of licensees had no policies addressing AI fairness or bias. Firms whose people understand AI’s failure modes are the firms whose senior review is worth something.

The forward answer

Within a single firm generation, practices that deliberately address this will have something distinctive: practitioners doing higher-value work because AI handles volume, and governance that is robust because the people doing oversight actually understand what they are overseeing.

The alternative produces a firm that is quick to generate outputs that no one inside it is fully equipped to question.

The question contains the answer. Firms that can answer it honestly, specifically, and with evidence are already ahead of most.

Andrew Cooke is the founder of Growth & Profit Solutions AI (GPS-AI).
