Australia's Voluntary AI Safety Standard turned 18 months old in April. The question asked at launch, whether a voluntary standard would change anything, now has a real answer.
The Standard launched on 4 September 2024 with ten guardrails covering accountability, risk management, data governance, testing, human oversight, transparency, contestability, supply chain transparency, records and stakeholder engagement (Department of Industry, Science and Resources, accessed April 2026). It was framed as voluntary, with a clear signal that mandatory guardrails would follow for high-risk settings. Eighteen months later, the policy direction is sharper, the practitioner picture is clearer, and the gap between the two is the story.
This review looks at what landed, what did not, and where the next phase is heading.
What actually happened
In its first year the Standard was used mostly as a self-assessment template. Large enterprises mapped their existing AI governance to the ten guardrails. Most found gaps in supply chain transparency (guardrail 8), records (guardrail 9), and stakeholder engagement (guardrail 10). A smaller number found genuine gaps in risk management (guardrail 2) and testing (guardrail 4).
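The mapping exercise itself is lightweight. A minimal sketch of what it might look like, assuming a hypothetical set of existing controls; the guardrail labels follow the Standard, but the control names and coverage below are illustrative only, not drawn from any real organisation:

```python
# Illustrative sketch of a guardrail gap-mapping exercise.
# Guardrail labels follow the Standard; the example controls are hypothetical.

GUARDRAILS = {
    1: "Accountability and governance",
    2: "Risk management",
    3: "Data governance and protection",
    4: "Testing and monitoring",
    5: "Human oversight",
    6: "Transparency to end users",
    7: "Contestability",
    8: "Supply chain transparency",
    9: "Records",
    10: "Stakeholder engagement",
}

# Map each existing control or policy document to the guardrails it evidences.
existing_controls = {
    "AI acceptable use policy": [1, 6],
    "Enterprise risk register (AI entries)": [2],
    "Model validation procedure": [4],
    "Privacy impact assessment template": [3],
}

def gap_report(controls: dict[str, list[int]]) -> list[str]:
    """Return the guardrails with no mapped control."""
    covered = {g for refs in controls.values() for g in refs}
    return [f"Guardrail {n}: {name}" for n, name in GUARDRAILS.items() if n not in covered]

if __name__ == "__main__":
    for gap in gap_report(existing_controls):
        print("No evidence mapped to", gap)
```

Run against a real control library, the output of an exercise like this is the gap list that boards saw in year one: thin evidence for supply chain transparency, records and stakeholder engagement.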
In months 12 to 18 the picture shifted. Three patterns emerged.
First, the Standard moved from voluntary self-assessment to procurement table stakes. Federal and state government tenders for AI-enabled services now reference compliance with the Standard as a baseline expectation (Digital Transformation Agency model contract update, January 2026, accessed April 2026). Several large enterprises have done the same in their AI vendor questionnaires.
Second, the Standard fed directly into sector-specific guidance. Comcare's April 2026 AI tool guidance for workers compensation cites the ten guardrails as the underlying maturity framework. ASIC's January 2026 statement on AI in financial services aligns its expectations to guardrails 1, 2, 4 and 9. APRA's draft prudential guidance on AI in regulated entities, released for consultation in February 2026, treats the Standard as the floor (APRA consultation paper CPG 234A, accessed April 2026).
Third, adoption among SMEs has been thin. A survey of 480 Australian SMEs published by the Australian Industry Group in March 2026 found 23 per cent had heard of the Standard, 11 per cent had read it, and 4 per cent had implemented anything beyond a one-page policy (Ai Group AI Adoption Survey 2026, accessed April 2026). That is a very large unaddressed cohort.
What worked
Two things genuinely landed.
The taxonomy worked. Ten guardrails turned out to be the right shape. Boards and executives could absorb it. Procurement teams could write it into contracts. Regulators could align to it. The fear that the Standard would be too abstract or too academic did not materialise.
Procurement leverage worked. Once large buyers started referencing the Standard in vendor questionnaires, vendors started self-certifying against it, and the rest of the market followed. This is the classic pull-through effect of a voluntary standard with credible buyers behind it. Without DTA and big four bank procurement teams using the Standard as a screen, take-up would have been slower.
What did not work
Two things did not.
The voluntary framing was its own ceiling. Most SMEs do not adopt voluntary frameworks. The 4 per cent SME implementation rate is not a failure of the Standard; it is a feature of voluntary frameworks generally. The policy lever that reaches the broader market will have to be mandatory, or at least conditional on something SMEs already want, such as government contracts.
Guardrail 10, "engage with stakeholders", remains the weakest in practice. Implementation is highly variable, often token. There is no clear template for what good engagement looks like for an internal HR AI tool versus a customer-facing one versus a public-sector deployment. Vendors filled this with whatever was easiest, which usually meant a privacy notice and not much else.
Who should care
If you sit in GRC, the Standard is now functionally a regulator-aligned baseline. Internal AI policies that do not map to the ten guardrails are out of step with where Comcare, ASIC and APRA are pointing. The work of mapping is small. The work of not having done it when an operational review lands is large.
If you build AI tools, the Standard is now a procurement filter. If you cannot answer guardrails 1, 2, 4, 8, 9 and 10 with evidence in a vendor questionnaire, you are losing public-sector tenders and large enterprise pilots. The smaller you are, the more painful that gap.
If you are a workers compensation professional, the link is direct. Comcare's April 2026 AI guidance for case managers explicitly references guardrails 4 (testing) and 9 (records) when describing what acceptable AI use looks like in claim handling. De-identification, prompt logging, and a clear no-go list flow from those guardrails.
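To make that concrete, here is a minimal sketch of what prompt logging with basic de-identification could look like in a case-handling workflow. It is illustrative only: the redaction patterns, field names and log format are assumptions for this article, not anything prescribed by Comcare or the Standard, and real de-identification needs far more than a few regular expressions.

```python
# Illustrative sketch only: log AI prompts with basic de-identification
# before they leave the organisation (guardrails 4 and 9 in spirit).
# Patterns and field names are hypothetical, not a prescribed scheme.
import json
import re
import time

REDACTIONS = [
    (re.compile(r"\b\d{2,3}[ -]?\d{3}[ -]?\d{3}\b"), "[PHONE]"),   # rough AU phone shapes
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{9}\b"), "[ID_NUMBER]"),                      # nine-digit identifiers
]

def deidentify(text: str) -> str:
    """Apply simple pattern-based redaction to a prompt."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def log_prompt(claim_ref: str, user: str, prompt: str, path: str = "prompt_log.jsonl") -> str:
    """Redact the prompt, append a record of the interaction, and return what may be sent."""
    redacted = deidentify(prompt)
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "claim_ref": claim_ref,   # internal reference, not the worker's name
        "user": user,
        "prompt": redacted,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return redacted
```

The point is not the specific patterns. It is that redaction and logging happen before the prompt reaches the tool, and the log exists when someone asks to see it.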
The move to mandatory
The Department of Industry's October 2025 consultation paper laid out the path. Voluntary across the economy, mandatory for high-risk uses (Mandatory Guardrails for AI in High-Risk Settings: Proposals Paper, accessed April 2026). Twelve high-risk categories were proposed including healthcare, employment decisions, justice, critical infrastructure and high-value financial decisions.
The current policy direction, based on consultation responses now public, is for mandatory guardrails 1, 2, 4 and 9 to apply to defined high-risk uses by mid-2027. The remaining guardrails would stay voluntary but would continue to be referenced in sector regulation. The legislative vehicle has not been confirmed.
For organisations operating in high-risk categories, this is not a 2027 problem. It is a 2026 procurement and policy problem. The vendors you select this year will need to evidence compliance with the mandatory guardrails by the time they are switched on.
Hype check
A common framing during the consultation was that Australia was "behind" the EU AI Act. The honest assessment is that Australia took a different path on purpose: voluntary first, mandatory targeted at high-risk later. Eighteen months in, that approach is producing reasonable take-up among large players and clear sector alignment. It is not closing the SME gap. Whether that gap is closed by mandatory guardrails or by procurement leverage downstream is now the live policy question.
The other piece of hype to dismiss is that the Standard is "checkbox compliance". The ten guardrails are not checkboxes. Guardrail 4 (testing) and guardrail 9 (records) in particular require operational discipline that does not exist by default in most organisations. Treating them as checkboxes is how you get an embarrassing operational review finding.
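For guardrail 4, the difference between a checkbox and discipline is whether you can show a reviewer the evaluation run behind a deployment decision. A minimal sketch of what that evidence might look like; the system name, metric, threshold and dataset below are assumed purely for illustration:

```python
# Illustrative only: record an evaluation run as evidence, not a ticked box.
# The metric, threshold and dataset name are placeholders, not prescribed values.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class EvaluationRecord:
    system: str
    model_version: str
    dataset: str
    metric: str
    score: float
    threshold: float
    passed: bool
    run_at: str

def record_evaluation(system, model_version, dataset, metric, score, threshold,
                      path="evaluation_log.jsonl"):
    """Append one evaluation result with its pass/fail decision to an evidence log."""
    rec = EvaluationRecord(
        system=system, model_version=model_version, dataset=dataset,
        metric=metric, score=score, threshold=threshold,
        passed=score >= threshold,
        run_at=time.strftime("%Y-%m-%dT%H:%M:%S"),
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(rec)) + "\n")
    return rec

# Example: a hypothetical triage assistant re-tested after a model update.
record_evaluation("claims-triage-assistant", "2026-03", "holdout_v3",
                  "macro_f1", score=0.87, threshold=0.85)
```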
What to do this week
If your organisation has not mapped its AI policy to the ten guardrails, do that now. It is a one-day exercise for most teams.
If you sit in procurement, add a question on guardrail compliance to your AI vendor questionnaire. The questionnaire-driven pull-through is the most effective lever the Standard has produced.
If you are in a high-risk category as defined by the proposals paper, start the conversation with your AI vendors now about evidence of compliance with guardrails 1, 2, 4 and 9. Do not wait for the legislation.
The Standard at 18 months is not a finished product. It is a credible floor and a clear policy trajectory. The organisations that treat it as both are the ones that will not be retrofitting in 2027.
TheAICommand. Intelligence, At Your Command.
