
When the draft becomes the truth: how to stop AI from weakening professional judgement


Your biggest AI risk isn't that it will get something wrong – it's that it will get everything almost right, writes Andrew Cooke.

19 February 2026 | By Andrew Cooke | 10-minute read

A partner recently shared a file where the "AI-assisted" draft sailed through to the client. The logic felt sound, the tone confident, the structure partner-ready. Nobody questioned it because the writing looked finished.

That's the problem.

The real risk generative AI introduces isn't that it will replace accountants, or even that it will sometimes be wrong. The bigger threat is that polished drafts quietly become accepted truth, and professional judgement weakens with every unchallenged output.

AI is 20 per cent technology, 80 per cent psychology – and this is exactly where the psychology becomes dangerous.

Why this matters right now

APES 110's technology-related revisions took effect on 1 January 2025, explicitly bringing technology into how accountants think about competence, due care, and ethics. APESB's guidance is unambiguous: using AI doesn't reduce a member's ethical obligations. If anything, it makes supervision more important because work can look "right" when the reasoning is thin.

CPA Australia warns that "undue reliance on AI threatens adherence to the fundamental principles of APES 110."


This lands exactly where accountants earn trust: estimates, assumptions, judgements, and the discipline of asking "what could make this wrong?"

The myth firms are believing

Myth: AI outputs just need light review like any junior's work.

Reality: AI produces fluent nonsense that defeats normal review processes. A junior's weak reasoning shows up in hesitant language and gaps in logic. AI delivers confident garbage with perfect grammar and formatting. The signals that trigger deeper review – awkward phrasing, uncertain tone, structural problems – are absent. Partners are reading with the wrong mental model, and judgement errors slip through.

What's actually happening

AI output arrives with two seductive qualities: speed and confidence. That combination encourages cognitive offloading – firms delegate the thinking, and reviewers become editors of someone else's reasoning rather than owners of their own.

Research confirms what practitioners see: heavier AI use correlates with reduced critical thinking. The more firms let the tool do the mental heavy lifting, the less practice professionals get with the judgement muscles that accounting depends on.

When used as an "answer machine," AI creates false mastery. Successful firms put guardrails around it.

A Melbourne mid-tier firm discovered this during an annual file review. Three memos drafted using AI sailed through despite containing circular reasoning and unsupported assumptions. The memos looked polished, cited appropriate standards, and sounded authoritative. The partner's review instincts – honed over 20 years – failed to trigger because the warning signs were absent.

Where AI belongs, and where it doesn’t

Problems first, platforms second. Here's the line successful firms draw:

Low-judgement work (AI is safe with light review): Drafting emails, agendas and routine reports, and summarising background material.

High-judgement work (AI must be constrained): Revenue recognition, impairment reasoning, going concern, tax positions, audit planning, expert reports – anything scrutinised by regulators, insurers, or courts.

If the output could change a decision or client outcome, treat AI as a second brain you don't fully trust.
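For firms that want to hard-wire that line into their workflow tooling, the triage reduces to a lookup. A minimal Python sketch; the category names and review paths below are illustrative assumptions, not a prescribed taxonomy:

```python
# Sketch of the low- vs high-judgement triage described above.
# The categories and review levels are illustrative, not a standard.

HIGH_JUDGEMENT = {
    "revenue recognition", "impairment", "going concern",
    "tax position", "audit planning", "expert report",
}

def review_level(work_type: str) -> str:
    """Return the review path an AI-assisted draft must follow."""
    if work_type.lower() in HIGH_JUDGEMENT:
        return "constrained: judgement check + workings transparency + partner sign-off"
    return "light review"

print(review_level("impairment"))      # constrained: judgement check + ...
print(review_level("meeting agenda"))  # light review
```

The point isn't the code – it's that the routing decision happens before anyone opens a prompt window.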

3 controls firms are implementing

1. A mandatory "judgement check" in the review workflow

Before any AI-assisted analysis leaves the firm, leading practices require the preparer to answer in their own words:

  • What are the critical facts, and how were they verified?
  • What assumptions is this relying on, and are they valid for this client?
  • Which standard is doing the heavy lifting?
  • What could be missing that would change the conclusion?
  • If challenged, what would the firm defend and what would need revisiting?

This isn't admin. It's a safeguard against fluent uncertainty.
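Firms that run their review workflow through software can make this checkpoint a blocking step rather than a reminder. A minimal sketch, assuming a firm encodes the five questions as required free-text fields (the field names are hypothetical):

```python
from dataclasses import dataclass, fields

@dataclass
class JudgementCheck:
    # Each field holds the preparer's own-words answer; empty means unanswered.
    critical_facts_and_verification: str = ""
    assumptions_and_validity: str = ""
    standard_doing_heavy_lifting: str = ""
    what_could_change_conclusion: str = ""
    defence_if_challenged: str = ""

    def is_complete(self) -> bool:
        """The draft cannot leave the firm until every answer is filled in."""
        return all(getattr(self, f.name).strip() for f in fields(self))

check = JudgementCheck(
    critical_facts_and_verification="Confirmed revenue contracts against signed agreements."
)
assert not check.is_complete()  # four answers still missing: draft stays blocked
```

Because the answers are the preparer's own words, a reviewer reads judgement, not a pasted AI summary.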

2. "Workings transparency" for AI-assisted outputs

When staff use AI to draft, better-governed firms require them to attach the prompts, source materials, and a reasoning trail explaining how they accepted or rejected key points.

This turns AI use from hidden dependence into supervised practice. Reviewers can see whether judgement happened or the draft simply "sounded right."
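One way to operationalise this is to attach a structured record to every AI-assisted work paper. The sketch below assumes a firm defines its own record; the field names are illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIWorkingsRecord:
    """Attached to an AI-assisted draft so reviewers can see the reasoning trail."""
    prompts: list[str]               # exact prompts given to the tool
    source_materials: list[str]      # documents the preparer supplied or relied on
    accepted_points: dict[str, str]  # point -> why the preparer accepted it
    rejected_points: dict[str, str]  # point -> why the preparer rejected it
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = AIWorkingsRecord(
    prompts=["Draft an impairment memo for the client using the attached trial balance."],
    source_materials=["trial_balance_FY25.xlsx", "prior-year impairment memo"],
    accepted_points={"discount rate of 9%": "matches board-approved WACC paper"},
    rejected_points={"indefinite useful life": "asset has a 10-year licence term"},
)
```

Recording rejected points alongside accepted ones is deliberate: it shows the reviewer where the preparer pushed back on the tool.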

3. AI as red-team, not author, for high-risk matters

For judgement-heavy areas, effective firms flip the prompt. They don't ask AI to conclude. They ask it to challenge:

  • "List the strongest counterarguments to this position."
  • "What evidence would be needed to be confident in this conclusion?"
  • "What are the common failure modes in this type of memo?"
  • "What questions would an auditor, regulator, or opposing expert ask?"

The accountant does the thinking. The tool raises the quality of the challenge.
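In code, the red-team pattern is nothing more than a fixed set of challenge prompts wrapped around the accountant's draft. The sketch below is deliberately vendor-agnostic: ask_model is a placeholder for whatever client the firm already uses, not a real API:

```python
from typing import Callable

CHALLENGE_PROMPTS = [
    "List the strongest counterarguments to this position.",
    "What evidence would be needed to be confident in this conclusion?",
    "What are the common failure modes in this type of memo?",
    "What questions would an auditor, regulator, or opposing expert ask?",
]

def red_team(draft: str, ask_model: Callable[[str], str]) -> dict[str, str]:
    """Ask the AI to challenge the draft, never to conclude for it."""
    return {
        prompt: ask_model(f"{prompt}\n\n---\n{draft}")
        for prompt in CHALLENGE_PROMPTS
    }

# Usage: plug in the firm's existing client, e.g.
# challenges = red_team(memo_text, ask_model=my_llm_client)
# The accountant then answers each challenge in the work papers.
```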

Human-led AI, not AI-led humans

The firms that win will build deliberate friction into their AI-enabled workflows, so speed never comes at the cost of scepticism. The approach isn't complicated: governance as enabler, not blocker. Make AI safe by making judgement mandatory.

The 5-step human check

Successful firms implement this checkpoint before any AI-assisted work gets finalised:

  1. Have core facts been independently confirmed?
  2. What's the strongest counterargument to this position?
  3. What would need to be different to change this conclusion?
  4. Can the preparer defend this reasoning without referencing the AI output?
  5. Is there a clear trail showing where judgement was applied?

If the answer to any step is "no" or "uncertain," the work isn't ready regardless of how polished it looks.
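Encoded as a gate, the check is a hard stop: anything short of an explicit "yes" keeps the work in draft. A minimal sketch, with the three-valued answers an assumption about how a firm might record them:

```python
from enum import Enum

class Answer(Enum):
    YES = "yes"
    NO = "no"
    UNCERTAIN = "uncertain"

FIVE_STEPS = [
    "Core facts independently confirmed?",
    "Strongest counterargument identified?",
    "Conditions that would change the conclusion identified?",
    "Preparer can defend reasoning without the AI output?",
    "Clear trail showing where judgement was applied?",
]

def ready_to_finalise(answers: dict[str, Answer]) -> bool:
    """Work is ready only if every step has an explicit 'yes'."""
    return all(answers.get(step) is Answer.YES for step in FIVE_STEPS)

answers = {step: Answer.YES for step in FIVE_STEPS}
answers[FIVE_STEPS[1]] = Answer.UNCERTAIN
assert not ready_to_finalise(answers)  # one 'uncertain' blocks finalisation
```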

AI won't replace accountants. But poorly governed AI can quietly erode the judgement that makes accountants valuable. The difference between firms that thrive with AI and those that stumble isn't the technology they choose – it's the human judgement they protect.

Andrew Cooke is the principal consultant at Growth & Profit Solutions AI.
