
AI News / Posts
Rapid Updates When AI News Breaks
Concise updates when AI news breaks. 150-200 words, no filler, straight to the signal. Available on the site and cross-posted to X and LinkedIn.
Intelligence, At Your Command.
Federal Court draws a line on AI-generated evidence
A Federal Court judgment handed down in the NSW registry on 22 April 2026 has set the first detailed Australian test for the admissibility of AI-generated evidence. The case, a commercial dispute over contract performance, turned on document briefs that one party had summarised using a commercial generative AI tool and tendered without disclosing the tool, the prompts, or any verification step. Justice Henderson did not exclude AI-assisted material outright. The judgment instead lays out a three-part admissibility test that is likely to travel quickly across other registries. The model and version used must be identified on the record. The prompts and inputs that produced the output must be available for inspection. And a human verification step, performed by a person who can speak to the work, must be capable of being demonstrated. Material that fails any one of the three is open to challenge on weight, and may be excluded outright where reliance is heavy. The practical implication for in-house legal and compliance teams is immediate. Prompt logs are now a discovery item. Treat them like one from today. *TheAICommand. Intelligence, At Your Command.*
APRA puts AI model risk on every regulated CEO's desk
APRA has written to the chief executive of every regulated entity setting formal expectations for AI and machine learning model risk under CPS 220 and CPS 230. The letter, dated 17 April 2026, names four things APRA wants to see by the end of the financial year. A board-approved model risk framework that explicitly covers third-party AI. A registered model inventory with materiality ratings. Independent validation cadence tied to that materiality. And incident reporting that triggers on degradation, not just outage. APRA also signals it will run thematic reviews on banking and insurance subsets later in 2026, with findings published. None of this is new in spirit. CPS 220 has always covered models, and CPS 230 has covered third-party operational risk since 2025. What is new is that AI is now named, the expectation is written down, and the regulator is going to ask. If your model inventory still lives in a single team's spreadsheet, the next twelve weeks are going to be busy. Boards now own the framework, not just the policy. *TheAICommand. Intelligence, At Your Command.*
Microsoft splits Copilot into a cheaper everyday tier
Microsoft has restructured Microsoft 365 Copilot pricing for the first time since general availability, introducing a Copilot Standard tier at fourteen US dollars per user per month. The new tier sits beneath the existing thirty-dollar Copilot Pro seat, which remains the full-feature option. The split is the substance. Standard includes chat inside Word, Excel, PowerPoint, and Outlook, plus Teams meeting summaries, plus a monthly cap on agent runs. Pro keeps unlimited agent invocations, image generation through Designer, and the higher model-call ceiling that power users actually consume. Two procurement signals are worth reading. First, Microsoft is publicly conceding what every CFO already suspected. The full thirty-dollar seat is overkill for most knowledge workers, and one-size-fits-all licensing was leaving money on the table. Second, the Standard tier is positioned to claw deployments back from Google Workspace AI and from leakage to ChatGPT Enterprise. For finance and procurement leads, the move opens a real re-segmentation question this quarter. Map your users to the right tier before the next renewal locks the wrong number in. *TheAICommand. Intelligence, At Your Command.*
OpenAI and Anthropic reset the enterprise floor price
OpenAI and Anthropic have both raised enterprise floor pricing inside the same fortnight. OpenAI lifted ChatGPT Enterprise minimums by roughly 28 per cent on 4 April, with the new floor applying to fresh contracts and renewals from May. Anthropic followed on 14 April with a roughly 22 per cent increase to Claude Enterprise minimums and a tighter committed-spend tier replacing the old pay-as-you-grow option. The published rationales differ. OpenAI cited inference cost and capacity allocation. Anthropic cited an expanded enterprise feature set, including Sydney-based contracting. The pattern, however, is not coincidence. Both vendors are signalling pricing power for the first time since launch. Demand for production-grade enterprise AI is firm, frontier capacity is constrained, and the era of pilot-budget seats is ending. Two practical takeaways for procurement and finance leaders. If your renewal lands within the next ninety days, lock terms now before the floor moves again. If your team is still on a pilot SKU, model the production-tier number into your 2026 budget this week. The benchmark just shifted. *TheAICommand. Intelligence, At Your Command.*
NSW rewrites its AI procurement playbook for 2026
The NSW Department of Customer Service has published version 2 of its AI Procurement Framework, replacing the 2024 guidance that has shaped most state agency AI buying for the last eighteen months. The headline change is structural. A new mandatory AI schedule now attaches to standard ICT contracts whenever an agency buys a solution that uses generative AI. The schedule requires vendors to disclose model provenance at version level, training data sources at category level, the jurisdiction in which inference happens, red-team evidence specific to the agency use case, and a defined incident reporting obligation. Two clauses have real teeth. Silent model upgrades, where the underlying foundation model changes materially without notice, now trigger an agency exit right with refund. And vendors must provide ninety days' notice before any feature deprecation that affects the contracted use case. The framework lands at the point where most agencies are renewing 2024-era pilots into production. If you sell AI into NSW, the RFP questions just got sharper. If you buy AI anywhere, this schedule is a defensible baseline. *TheAICommand. Intelligence, At Your Command.*
Anthropic plants a Sydney flag aimed at regulated work
Anthropic has opened its first Australian office, planted in Sydney and aimed squarely at enterprise customers and APRA-regulated entities. The launch announcement, published on 9 April 2026, names three priorities. A regional go-to-market team led from Sydney, with coverage roles posted across compliance, public sector, and financial services. Active conversations with hyperscale cloud providers on local inference availability for Claude. And a stated intent to engage with the Australian Privacy Principles, the AI Safety Standard, and APRA prudential settings as part of standard contracting. The substance under the announcement is the data residency conversation. Australian regulated buyers have spent the last two years routing Claude through US contracting and US inference, which has been a friction point in CPS 230 third-party reviews and in any procurement that touches sensitive workloads. A local entity changes the contract surface. Two knock-ons matter for compliance buyers. Procurement now has a real counterparty in country. And the OpenAI versus Anthropic enterprise contest, already sharpening on price, just localised on the trust and assurance dimension that actually decides regulated deals. *TheAICommand. Intelligence, At Your Command.*
EU AI Act draws first blood with €18m fine
The European Commission has issued the first enforcement action under the EU AI Act: an €18 million fine against a Dutch recruitment platform that shipped a high-risk hiring system into the EU market without a fundamental rights impact assessment, and with bias testing the Commission described as materially incomplete. The decision, published on 4 April 2026, is the first under Article 99 since the Act's high-risk obligations bit on 2 August 2025. Three findings drove the size of the fine. No fundamental rights impact assessment on file at deployment. Bias testing limited to a single protected attribute, with no intersectional analysis. And operator-side logging that the regulator's auditors could not reconstruct from the records held. The platform has indicated it will appeal. The signal is the part Australian operators should read closely. The Commission is not waiting for harm. The penalty is being sized off documentation gaps alone. If you are deploying high-risk AI into the EU under the Brussels effect, the next audit conversation just got more expensive. *TheAICommand. Intelligence, At Your Command.*
TheAICommand Brief
One weekly edition. Four sections. ~1,200 words.
General AI updates, practical takeaways for a rotating audience, a practitioner-grade deep dive, and a prompt you can use. Free, human edited, source links on everything.
Free. No spam. Unsubscribe any time.