A note before you read: the scenarios below are illustrative, not legal advice. If you think this applies to your business, talk to a privacy lawyer before December.
Somewhere in your claims process, an algorithm is making a decision about your client.
It might be deciding whether to auto-settle a straightforward claim. It might be scoring the claim and routing it to an adjuster with a recommended action already attached. It might be generating the written summary the adjuster reads before they’ve looked at anything else.
In most cases, your client doesn’t know this is happening. In many cases, neither do you — not in enough detail to know what it means for you legally.
From 10 December 2026, that changes.
What the law now requires.
The Privacy and Other Legislation Amendment Act 2024 received Royal Assent on 10 December 2024. Most provisions came into effect immediately. The automated decision-making requirements have a two-year grace period — which ends on 10 December 2026.
From that date, if an individual’s personal information is used in a substantially automated decision that significantly affects their rights or interests, they are entitled to know. The entity responsible must spell that out clearly in its privacy policy, not leave it buried in a document written before any of this technology existed.
An insurance claim decision — whether to pay, how much, and how fast — affects a client’s rights under their policy contract. That puts it squarely in scope.
The software your operation is probably already using.
This isn’t theoretical. The tools doing this work are already inside Australian insurance operations.
Gallagher Bassett — Waypoint and Luminos
Gallagher Bassett is the largest third-party claims administrator in Australia. Their Waypoint system embeds machine-learning recommendations directly into the claims workflow at specific decision points. When a Resolution Manager opens a file, they see a system-generated recommended action before they’ve formed their own view. Their Luminos platform generates AI summaries of claims from submitted documents — meaning the adjuster is reading an AI brief, not the raw file.
At the time of writing (May 2026), both tools are in active use in Australian claims operations.
AIA Australia — SCOR VClaims
AIA completed the rollout of SCOR’s VClaims automated decisioning tool across several of its Australian funds in 2025. VClaims is not a recommendation tool. It makes the decision at lodgment. Simple claims that meet predefined criteria are processed and the policyholder is notified of the outcome — without a human adjuster having reviewed the file. The policyholder is not told the decision was made by a rules engine.
At the time of writing (May 2026), VClaims is in active deployment across AIA’s Australian funds.
Salesforce — Agentforce
Salesforce’s own Australian documentation describes the capability plainly: agentic AI can automate claims from start to finish — first notice of loss, approval, and payment. A policyholder submits photos of a damaged vehicle. The AI handles the claim from intake to resolution and triggers a payout. All of this, their documentation states, can happen without waiting for human approval.
At the time of writing (May 2026), Agentforce is in active use in Australian insurance operations.
The response most broking firms will have — and why it doesn’t hold.
“We have claims managers who review everything.”
Here is the problem with that.
Straight-through processing rates for simple claims have reached 70–90% in operations that have scaled AI. Those claims are never reviewed by a human. The client submits, the system decides, the client receives an outcome.
For complex claims that do reach an adjuster, consider what they actually see when they open the file: a queue ordered by AI prioritisation, an AI-generated brief summarising the submission, and a system recommendation already attached. The human is making a decision — but the algorithm has already shaped what they see, in what order, with what steer.
The Act does not require a decision to be made entirely by a machine. The question is whether personal information was used in a substantially automated process to produce an outcome that significantly affected the individual. A decision shaped by AI before a human ever sees the file still qualifies.
What needs to change before December.
Three things, at minimum.
Your privacy policy almost certainly predates AI claims processing. If it doesn’t disclose that automated decision-making is used in claims assessment, it needs to.
Your clients need a genuine way to find out whether their claim was assessed algorithmically — not a theoretical right buried in documentation, but a process that actually works.
And they need a real path to request human review if they believe the automated process produced an unfair outcome.
Most operations have none of these in place. Not because anyone decided not to — but because the tools arrived faster than the governance did.
The question worth sitting with.
Do you know, specifically, at what point in your claims process the algorithm is making or shaping the decision?
And does your current privacy policy say anything about it?
If the answer to either is uncertain, that’s the gap December is coming for.
This is one in a series of scenarios examining what Australian businesses are doing today that will require attention before 10 December 2026.