Predictive analytics promises speed. The risks compound quietly.
The promise
Predictive triage models in workers compensation claims sort incoming claims by likely complexity, expected duration, or risk of dispute. The promise is operational. Faster triage. Better load balancing. Earlier intervention on claims likely to escalate. Cleaner data for management reporting.
A handful of Australian schemes are running production deployments. Many more are in pilot. The technology is no longer experimental. It is, in pockets, business as usual.
This piece is a risk analysis, not a hit piece. The point is that the promise is real and the risks compound quietly. Both need to be understood.
Where the value sits
Three areas where predictive triage genuinely improves operations.
Earlier intervention on complex claims. Models are reasonably good at predicting which claims are likely to involve longer durations or higher complexity. Earlier case manager engagement on these claims, with appropriate seniority, demonstrably improves outcomes for both claimants and schemes.
Load balancing across teams. Schemes where caseload complexity varies widely across case managers benefit from triage that distributes claims by complexity. The model does not have to be perfectly accurate to be useful here. It just has to be better than random.
Pattern detection at the portfolio level. Predictive models that summarise patterns across thousands of claims surface things that case-by-case review cannot. Emerging cohorts. Unusual concentrations. Process gaps. The portfolio view is genuinely informative.
Where the risk sits
Six categories of risk arise in practice. Each is real, even where the model is good.
Risk one. Procedural fairness drift. Section 14 and section 19 determinations require the case manager to consider the specific evidence on the specific claim. A triage score that influences the case manager's starting position can subtly shape the consideration that follows. The risk is not that the score is used illegally. The risk is that it shapes attention in ways that show up only in aggregate.
Risk two. Training data bias. Models are trained on historical data. If historical case management practice contained biases (for example, in the way claims from particular industries or demographic groups were handled) the model will reproduce those biases unless they are deliberately removed. Bias removal is hard. Most production models have not done it well.
Risk three. Reasoning trail dilution. Where a model assigns a triage band that influences the team's handling, the case manager's reasoning trail must include the role the model played. Most schemes have not yet adapted their file note conventions to capture this.
Risk four. Vendor opacity. Many predictive triage models are vendor-supplied. The scheme operator does not always have full visibility of the training data, the feature set, or the model architecture. Vendor opacity is a governance risk that compounds with use.
Risk five. Calibration decay. Models drift. The patterns the model was trained on shift over time. Without ongoing calibration monitoring, the model's predictions slowly become less accurate, in ways that are not visible in production until something goes wrong.
Risk six. Concentration of effect. Where a single model is used across an entire scheme, any error in the model affects every claim. This is different from human variance, where individual case manager errors are dispersed. Concentration is a governance risk on its own, separate from the underlying model quality.
De-identification callout. Where this article describes a worked triage scenario, all claim-level details are placeholders. [CLAIMANTNAME], [CLAIMNUMBER], [CONDITION] are used in any sample. Predictive models are typically trained on de-identified data, but the protection is only as good as the de-identification process. Confirm with your privacy officer.
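The placeholder convention above can be enforced mechanically before claim data reaches a triage tool. The sketch below is a minimal illustration; the field names are assumptions, not any scheme's actual schema, and it does not substitute for a proper de-identification review.

```python
# Minimal de-identification sketch using the article's placeholder tokens.
# The field list is an illustrative assumption; confirm the actual schema
# and the adequacy of the process with your privacy officer.
PLACEHOLDERS = {
    "claimant_name": "[CLAIMANTNAME]",
    "claim_number": "[CLAIMNUMBER]",
    "condition": "[CONDITION]",
}

def de_identify(record: dict) -> dict:
    """Return a copy of a claim record with direct identifiers replaced."""
    return {key: PLACEHOLDERS.get(key, value) for key, value in record.items()}
```

Non-identifying fields pass through untouched, so the model still receives the features it needs while direct identifiers never leave the scheme's systems.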
A worked scenario
A scheme operator has deployed a triage model that assigns each new claim a band of green, amber, or red on intake. Green claims go to standard handling. Amber claims go to the priority queue. Red claims are routed to senior case managers.
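The routing logic in this scenario is simple enough to sketch. The thresholds and queue names below are illustrative assumptions, not any vendor's actual configuration; the point is only that banding and routing are separable, auditable steps.

```python
# Sketch of band-based routing on intake. Score thresholds and queue
# names are hypothetical, for illustration only.

def assign_band(risk_score: float) -> str:
    """Map a model risk score in [0, 1] to a triage band."""
    if risk_score >= 0.8:   # assumed red threshold
        return "red"
    if risk_score >= 0.5:   # assumed amber threshold
        return "amber"
    return "green"

def route(band: str) -> str:
    """Route a banded claim to a handling queue."""
    return {
        "green": "standard-handling",
        "amber": "priority-queue",
        "red": "senior-case-managers",
    }[band]
```

Keeping the score-to-band mapping explicit, rather than buried inside the vendor model, is what later makes the override register and calibration metrics possible.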
A new claim arrives for [CLAIMANTNAME] in respect of [CONDITION], claim [CLAIMNUMBER]. The model assigns red. The senior case manager who picks it up reads the file and concludes that, on the evidence, the claim is straightforward. There is a divergence between the model and the case manager's judgement.
The right response, in the framework below, is for the case manager to make the determination on the evidence and document the divergence in the file note. The override is recorded. The model's prediction is captured. The reasoning trail is intact.
Over time, the override register surfaces patterns. If a particular type of claim is being systematically over-triaged, the model gets recalibrated. If it is being systematically under-triaged, the same. The model improves because the humans are paying attention.
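An override register of this kind needs very little structure to be useful. The sketch below shows one way to log divergences and surface systematic over-triage; the field names are assumptions for illustration.

```python
# Minimal override register sketch. Each divergence between the model's
# band and the case manager's judgement is logged with a reason, then
# summarised by claim type. Field names are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Override:
    claim_type: str     # e.g. a condition or industry category
    model_band: str     # band the model assigned
    manager_band: str   # band the case manager judged appropriate
    reason: str         # brief free-text reason from the file note

def overtriage_patterns(register: list[Override]) -> Counter:
    """Count claim types the model banded higher than the case manager did."""
    order = {"green": 0, "amber": 1, "red": 2}
    return Counter(
        o.claim_type for o in register
        if order[o.model_band] > order[o.manager_band]
    )
```

A symmetric query over under-triage completes the picture; the quarterly review then looks for claim types that dominate either count.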
A governance framework
A defensible deployment of predictive triage covers six things.
Model description. A written description of the model, the training data, the features, the intended use, and the known limitations. This document exists before deployment.
Calibration monitoring. Ongoing measurement of the model's accuracy against actual outcomes. Quarterly review at minimum.
Override register. Every divergence between the model's prediction and the case manager's judgement is logged, with a brief reason. The register is reviewed for patterns.
Bias monitoring. The model's predictions are tested for systematic differences across cohorts that should not, on the evidence, be predictive. Examples include claims from particular industries, regions, or claimant demographics.
Reasoning trail integration. The file note conventions for AI assisted determinations are extended to capture the role the model played. The reasoning trail includes the score and the case manager's position on it.
Privacy Impact Assessment. The training data, the production data flow, and the model output are all covered by a Privacy Impact Assessment. This is not a one-off document. It is reviewed annually.
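The calibration monitoring item above reduces to a small, repeatable computation once the scheme defines what the "actual" band for a closed claim was. The sketch below assumes a retrospective banding exists; that definition is the scheme's to make, not the model's.

```python
# Calibration check sketch: per-band hit rate of predicted bands against
# the band each claim actually warranted, judged retrospectively at
# closure. The outcome definition is a scheme-specific assumption.
from collections import defaultdict

def accuracy_by_band(records):
    """records: iterable of (predicted_band, actual_band) pairs.
    Returns each predicted band's hit rate, the quarterly review metric."""
    hits, totals = defaultdict(int), defaultdict(int)
    for predicted, actual in records:
        totals[predicted] += 1
        if predicted == actual:
            hits[predicted] += 1
    return {band: hits[band] / totals[band] for band in totals}
```

Tracked quarterly and broken down by cohort, a falling hit rate in any band is the early signal of the calibration decay described in risk five.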
The case manager's role
For practitioners working with a triage model, three things matter.
First, the model assigns a starting position, not a determination. The case manager's analysis is what produces the determination. The model's prediction is one input among many.
Second, divergences are valuable. Where the case manager's view differs from the model, the divergence is information. It is not a failure of the model and it is not defiance of operational expectation. It is the workflow doing what it should.
Third, the file note matters. A short note recording the model's prediction, the case manager's view, and the basis for any divergence is the line of defence that protects every individual determination. It is also the data that improves the model over time.
Risks and guardrails
The three risks most likely to come up in audit are listed above as risks one, three, and four. The guardrails are listed in the governance framework. The point is to deploy with the framework intact, not to retrofit it after a problem surfaces.
For practitioners
- Triage scores are inputs to your judgement, not replacements for it
- Document any deviation from a model recommendation in the file note
- Confirm de-identification covers any data fed into the triage tool
- Flag any outcome that feels inconsistent with the model assumption
- Escalate cases where the model and your judgement diverge meaningfully
For governance leads
- Require a documented model description before any production deployment
- Sample-audit triage decisions for procedural fairness consistency
- Track outcome metrics by triage band and watch for systematic drift
- Confirm a Privacy Impact Assessment covers the underlying training data
- Maintain an override register that captures every divergence from a model
SRC Act sections referenced
- Section 14, compensation for injuries (general liability)
- Section 19, compensation for injuries resulting in incapacity
These are the determinations most likely to be influenced by triage routing. Practitioners should always check the current Act text before relying on any specific provision.
What good looks like
A scheme operator running predictive triage well shows three external signals.
Signal one. Override patterns are visible. The operator can produce, on request, a register of model overrides over the last quarter, with summary patterns. This shows the model is being used as an input to judgement, not a substitute for it.
Signal two. Calibration metrics are tracked. The operator can produce, on request, the model's accuracy against actual outcomes by triage band, broken down by reasonable cohorts. This shows the model is being monitored, not assumed.
Signal three. The reasoning trail is intact. The operator can produce, on request, sample file notes from AI assisted triage decisions that walk through the model's prediction, the case manager's view, and the reasoning that led to the determination. This shows the human remains at the centre of the decision.
Operators that can produce all three are running the model well. Operators that can produce one or two are partway there. Operators that cannot produce any have a governance gap that needs attention.
The cohort question
The single most uncomfortable question in predictive triage is whether the model is producing systematically different outcomes for different cohorts of claimants in ways that are not justified by the evidence. This question is uncomfortable because the answer is almost never simple.
Three honest considerations apply.
Consideration one. Some cohort differences reflect real differences in claim profiles. Industries with higher rates of complex injury will appear differently in the model's predictions, and that may be appropriate. The question is whether the differences track real differences in the evidence or whether they reflect historical bias.
Consideration two. Bias detection is technical work, not a one-page report. Doing it properly requires statistical analysis of the model's outputs against benchmarks. Most scheme operators are not equipped to do this internally. Engaging external expertise is often the right answer.
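As a flavour of what the statistical work involves, the sketch below runs a two-proportion z-test on red-band rates between two cohorts. This is deliberately simplified: a real analysis must control for genuine differences in claim mix (consideration one) and will usually involve external expertise, as noted above.

```python
# Simplified cohort comparison sketch: z statistic for the difference in
# red-band rates between cohorts A and B. Stdlib only. A production
# analysis would adjust for claim mix before drawing any conclusion.
import math

def two_proportion_z(red_a: int, n_a: int, red_b: int, n_b: int) -> float:
    """z statistic for the difference in red-band proportions.

    red_a, red_b: red-banded claims in each cohort.
    n_a, n_b: total claims in each cohort.
    """
    p_a, p_b = red_a / n_a, red_b / n_b
    pooled = (red_a + red_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

A large |z| flags a cohort difference worth investigating; it does not by itself say whether the difference tracks real evidence or historical bias. That distinction is the hard part.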
Consideration three. The remediation path is not always retraining. Sometimes the right response to a detected bias is to change the inputs the model receives, sometimes it is to add a post-processing adjustment, sometimes it is to deprecate the model. The choice depends on the specific finding.
This work is not optional in any deployment that survives external scrutiny. It is also not a project that finishes; it is a monitoring discipline.
When to switch a model off
Three conditions, in our reading, justify switching off a deployed predictive triage model.
Condition one. Calibration drift exceeds tolerance. Where the model's accuracy has decayed to the point that its predictions are no longer reliably informative, the model is doing more harm than good and should be paused for retraining.
Condition two. A bias finding is not promptly addressable. Where a systematic bias has been identified and the remediation path is not clear, the model should be paused while the remediation is worked out.
Condition three. Vendor changes the underlying behaviour. Where the vendor updates the model in ways the scheme operator has not assessed, the model should be paused until the new behaviour is understood and approved.
A model that has been switched off can be switched back on. A model that has caused a procedural fairness incident is more expensive to recover from than the inconvenience of a temporary pause.
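Condition one can be made mechanical rather than discretionary. The sketch below assumes a tolerance threshold and the per-band accuracy figures from routine calibration monitoring; the threshold value is an illustrative assumption a scheme would set for itself.

```python
# Switch-off check sketch for condition one: pause the model when any
# band's calibration accuracy decays below a set tolerance.
DRIFT_TOLERANCE = 0.70  # minimum acceptable per-band hit rate (assumed)

def should_pause(band_accuracy: dict) -> bool:
    """True if any band's hit rate has fallen below tolerance.

    band_accuracy: mapping of band name to hit rate, e.g. {"red": 0.65}.
    """
    return any(acc < DRIFT_TOLERANCE for acc in band_accuracy.values())
```

Writing the tolerance down before deployment is the point: the pause decision then rests on a pre-agreed number, not on an argument after an incident.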
Communicating with claimants
A question that comes up in policy discussions is whether claimants should be told that a predictive triage model has been involved in their claim. The answer is not yet settled across schemes.
Three considerations apply.
Consideration one. Legal duty. There is no current legal requirement under the SRC Act to disclose use of a predictive triage model. There may be transparency obligations under the Privacy Act if the model uses personal information in particular ways. Operators should take their own legal advice on disclosure.
Consideration two. Practical clarity. A model that affects only internal routing and prioritisation is a different kind of artefact from a model that influences a determination. Disclosure expectations may differ. Most current models sit in the routing category, but as use matures, the category line is likely to shift.
Consideration three. Trust posture. Schemes that take a transparent posture on AI use, even where disclosure is not strictly required, generally build more durable trust with claimant communities. Trust, once lost, is expensive to rebuild.
The default posture for most operators is internal-only at present, with a watching brief on the policy direction. As the regulatory environment matures, expect the disclosure question to become more pointed.
The bottom line
Predictive triage is an operational tool that, in well-governed deployments, demonstrably improves both efficiency and outcomes. In poorly governed deployments, it concentrates legal, ethical, and procedural fairness risk in ways that are not visible until something goes wrong.
The technology is moving faster than most scheme operators' governance frameworks. The discipline is to deploy with the framework intact, not to deploy and then catch up.
Build the framework first. Then turn the model on.
---
Content disclaimer: This article is for general educational purposes only and does not constitute legal advice, liability determination guidance, or a substitute for professional judgement. Workers compensation decisions must be made by appropriately qualified and authorised persons under the Safety, Rehabilitation and Compensation Act 1988. All AI outputs described in this article require human review before use in any claims management context.
TheAICommand. Intelligence, At Your Command.
