DDO and AI-Driven Personalisation: Where the Boundary Sits

AI personalisation is moving fast inside Australian financial services. The Design and Distribution Obligations were not written with adaptive recommendation engines in mind. The boundary between targeting and personal advice is the line GRC teams need to govern.


GRC content. Written for compliance, risk, and audit professionals in Australian financial services. General information. Not legal or compliance advice.

The personalisation engine sits between two regulatory regimes. Both apply.

Context for general readers: When a bank or superannuation fund wants to sell a product, two sets of rules apply. The Design and Distribution Obligations (DDO) require the issuer to define who the product is right for and only distribute to that group. The personal advice rules require that anyone giving personalised recommendations to a specific person hold an Australian Financial Services Licence with an advice authorisation, and meet best-interest duties. These regimes were built before AI personalisation engines were common. The grey zone is now where most adaptive customer experience design lives.

The DDO regime has been live since October 2021. ASIC has been actively supervising it, with notable enforcement activity around investment products and a steady stream of supervisory communication. AI-driven personalisation is the next pressure point.

Why the boundary matters

The legal architecture in Australia separates product design and distribution (DDO, Part 7.8A of the Corporations Act 2001; see also RG 274) from personal advice (Chapter 7, Part 7.6). Different licensing requirements, different supervisory expectations, different liability pictures. The distinction is not pedantic. It determines which licence the entity needs, which conduct obligations apply, and which dispute resolution and remediation expectations attach to the customer interaction.

DDO is conceptually about classes of consumer. The issuer defines a target market (for example, "consumers with a long investment horizon, moderate risk tolerance, and basic investment knowledge") and the distribution chain operates within that target. The obligation is satisfied at the level of consumer cohorts, not individuals.

Personal advice is conceptually about a specific person. A recommendation is personal advice when it takes into account an individual's objectives, financial situation, or needs, or when a reasonable person might expect it to have taken those matters into account; either limb triggers Chapter 7's licensing, conduct, and best-interests obligations.

AI personalisation engines create a problem because they operate at the level of the individual, but the institutions deploying them often do not hold a personal advice authorisation extending to the specific personalisation use case. The engine reads individual data, makes individual decisions, and presents individualised content. The legal question is whether the output crosses the personal advice line.

Where AI personalisation creates DDO risk

Four operational patterns are worth examining.

1. Recommendation engines for in-app product offers

A retail banking app uses an AI recommendation engine to decide which products to surface to which customers. The engine considers the customer's transaction history, demographics, and product holdings. The output is a ranked list of in-app product offers.

Inside DDO, this is generally manageable. The issuer's target market determination defines the eligible cohort. The recommendation engine is a distribution control if it suppresses offers to customers outside the target. The supervisory question is whether the engine's logic is documented, governed, and consistent with the target market determination.

The risk pattern: an engine optimised for conversion (rather than target market fit) can systematically drift toward customers near or outside the target boundary, particularly where the conversion model finds its easiest wins among customers in marginal financial positions. ASIC's REP 762 review of investment product distribution flagged exactly this kind of distribution drift.
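
To make the distribution control concrete, here is a minimal sketch, assuming hypothetical TMD criteria fields and a pre-existing conversion model (both named for illustration only, not any institution's actual schema). The design point is ordering: the TMD-derived filter runs before the conversion-optimised ranker, so optimisation cannot widen the distributed cohort beyond the target market.

```python
from dataclasses import dataclass

@dataclass
class TMDCriteria:
    """Eligibility attributes derived from a target market determination."""
    min_horizon_years: int
    allowed_risk_tolerances: set      # e.g. {"moderate", "high"}
    excluded_attributes: set          # attributes making the product inappropriate

@dataclass
class CustomerProfile:
    horizon_years: int
    risk_tolerance: str
    attributes: set

def in_target_market(customer: CustomerProfile, tmd: TMDCriteria) -> bool:
    """Hard gate applied before any ranking; the TMD is the source of truth."""
    return (customer.horizon_years >= tmd.min_horizon_years
            and customer.risk_tolerance in tmd.allowed_risk_tolerances
            and not (customer.attributes & tmd.excluded_attributes))

def rank_offers(customer, product_ids, tmd_for, conversion_score):
    """Suppress out-of-target offers first, then rank what remains."""
    eligible = [p for p in product_ids if in_target_market(customer, tmd_for[p])]
    return sorted(eligible, key=lambda p: conversion_score(customer, p), reverse=True)
```

The inverse ordering, ranking first and applying the target market only as a soft penalty, is precisely the drift pattern described above: the optimiser learns to route around the control.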

2. Generative AI assistants in customer service

A bank deploys a generative AI assistant accessible through online chat. A customer asks a question framed broadly ("which credit card should I get?"). The assistant responds with a tailored answer that considers the customer's current product mix, balance, and transaction patterns.

This is where the line blurs. If the response is framed as general information about product options consistent with the customer's stated criteria, it can sit inside the general advice or factual information envelope. If the response considers the customer's circumstances and is presented in a way the customer reasonably expects to take those circumstances into account, it can be personal advice, regardless of any disclaimer.

The supervisory expectation, drawing from RG 175, is that the test is what a reasonable person in the customer's position would expect. A generative AI assistant that knows the customer's account holdings and uses them in its response is operating in a way that creates a strong expectation of personalisation.
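
One way to hold that line operationally is a context gate in front of the assistant. The sketch below is an assumption-laden illustration (the intent labels and field names are hypothetical): for product-related turns, an institution without a personal advice authorisation withholds individual account data from the model entirely, rather than relying on a disclaimer after the fact.

```python
PRODUCT_INTENTS = {"product_recommendation", "product_comparison", "product_switching"}

def build_turn_context(intent: str, customer_data: dict,
                       has_advice_authorisation: bool) -> dict:
    """Decide what the assistant may see for this conversational turn.

    If the model never receives holdings, balances, or transaction patterns
    for a product-related question, its answer cannot take individual
    circumstances into account, and a reasonable person has less basis to
    expect that it did.
    """
    if intent in PRODUCT_INTENTS and not has_advice_authorisation:
        return {
            "customer_data": None,                  # general information only
            "response_frame": "general_information",
            "footer": "General information; does not consider your circumstances.",
        }
    return {"customer_data": customer_data, "response_frame": "contextual"}
```

The gate belongs upstream of generation because the characterisation turns on the substance of the response, not on what the footer says.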

3. Triggered communication and next-best-action engines

Many institutions use AI to decide when to communicate with a customer and what to say. A model identifies that a customer's offset balance has declined and triggers a personalised message about a savings product.

Whether this is general communication or personal advice depends on the framing of the message. A factual statement about a product feature that may be relevant to the customer's situation tends to sit inside DDO and general information. A statement that the product would be beneficial for the customer given their specific circumstances moves toward personal advice.

The supervisory direction of travel: ASIC's recent communication has emphasised that the substance of the communication matters more than the formal classification applied by the institution. Disclaimers cannot, on their own, change how the law characterises a communication.

4. Adaptive content and dashboard personalisation

A particularly subtle category involves AI-driven personalisation of the customer's dashboard or in-product experience. The AI engine decides what insights to surface, what spending categories to highlight, and what comparative benchmarks to present. The customer sees a curated view of their financial position.

This is operationally helpful and broadly acceptable as a customer experience design choice. The DDO and personal advice considerations enter when the curated view influences a product decision. A dashboard that surfaces a "you could be earning more interest" insight followed by a product comparison is shading toward advice territory; the same dashboard surfacing the insight without product comparison is more comfortably general.

The pattern that has emerged across the major Australian retail banks is to separate the insight surface (which can be highly personalised) from the product surface (which is more constrained by DDO and personal advice rules). Where the two surfaces are merged in a single AI-driven experience, the boundary becomes harder to govern.
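
A minimal sketch of that separation expressed as a declarative surface policy (the keys and surface names are illustrative assumptions, not any bank's actual configuration):

```python
# Insight surface: may read individual data, may not carry product content.
# Product surface: the inverse, and always behind the TMD filter.
SURFACE_POLICY = {
    "insight": {"personal_data": True, "product_content": False},
    "product": {"personal_data": False, "product_content": True, "tmd_filtered": True},
}

def component_allowed(surface: str, uses_personal_data: bool,
                      shows_product_content: bool) -> bool:
    """Reject any component whose data use or content exceeds its surface."""
    policy = SURFACE_POLICY[surface]
    return ((not uses_personal_data or policy["personal_data"])
            and (not shows_product_content or policy["product_content"]))
```

Merging the two surfaces into a single AI-driven experience means this check has nothing to attach to, which is the governance difficulty the preceding paragraph describes.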

Target market determinations and the AI question

Target market determinations (TMDs) are the central artefact of DDO compliance. A TMD describes the class of consumer for whom the product is appropriate, including financial situation, objectives, needs, and any consumer attributes that would make the product inappropriate.

AI personalisation engines interact with TMDs in two ways. First, an engine that filters product offers should be configured to filter against the TMD. The TMD is the source of truth; the engine's distribution rules should derive from it, not run in parallel. Where the engine's distribution rules drift from the TMD (for example, because the engine is optimising for conversion), the entity has a control failure that supervisors can pursue.

Second, an engine that personalises product content (for example, drafting a personalised explanation of why a product might suit the customer) is implicitly making representations about target market fit. Those representations need to be consistent with the TMD's articulation of who the product is for.

For many entities, the TMD was written before AI personalisation engines were deployed at scale. A practical action: review the TMDs for material products against the actual operation of the personalisation engines that distribute them. Where the engine is creating distribution outcomes the TMD did not anticipate, either the engine or the TMD needs to be updated.

ASIC's enforcement posture

ASIC has taken civil action under the DDO regime several times since the regime commenced in October 2021, including in relation to investment products and complex retail products. The enforcement focus has primarily been on TMD adequacy and distribution chain compliance. AI-driven distribution has not yet been the explicit subject of an ASIC enforcement matter, but supervisory engagement on AI is now active and the enforcement pathway is open.

The institutions most likely to attract supervisory attention are those whose personalisation engines have demonstrably influenced product distribution outcomes inconsistent with the TMD. Where an engine systematically reaches customers outside the target market, ASIC has a clear enforcement narrative.

Practical implications this quarter

For GRC and compliance teams supporting AI personalisation deployments, three actions are sensible:

  1. Map every AI-driven customer touchpoint against the DDO and personal advice framework. The mapping should identify, for each touchpoint, whether the design intent is general information, general advice, or personal advice, and whether the operational behaviour matches the design intent (a minimal register sketch follows this list).
  2. Implement governance over recommendation logic, not just over outputs. Periodic testing of what the engine actually does, against what the target market determination says, is essential. The engine's optimisation function may diverge from the target market intent over time.
  3. Establish a clear policy on generative AI customer interactions. The default position for most institutions without a personal advice authorisation should be that generative AI assistants do not consider individual circumstances in product-related responses. Where this is not the design intent, the institution needs a documented basis for staying inside DDO or general advice.
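
As a starting point for item 1, here is a minimal register sketch with hypothetical touchpoint names and classifications. The useful property is that design intent and observed behaviour sit side by side, so a mismatch escalates mechanically rather than waiting for someone to notice:

```python
from dataclasses import dataclass
from enum import Enum

class AdviceClass(Enum):
    GENERAL_INFORMATION = "general information"
    GENERAL_ADVICE = "general advice"
    PERSONAL_ADVICE = "personal advice"

@dataclass
class Touchpoint:
    name: str
    design_intent: AdviceClass
    observed_behaviour: AdviceClass    # from periodic output sampling

register = [
    Touchpoint("in-app offer carousel", AdviceClass.GENERAL_INFORMATION,
               AdviceClass.GENERAL_INFORMATION),
    Touchpoint("chat assistant", AdviceClass.GENERAL_ADVICE,
               AdviceClass.PERSONAL_ADVICE),   # behaviour has crossed the line
]

for tp in register:
    if tp.design_intent != tp.observed_behaviour:
        print(f"ESCALATE: {tp.name} designed as {tp.design_intent.value}, "
              f"operating as {tp.observed_behaviour.value}")
```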

Documentation that supports the line

Beyond the operational design, three documentation artefacts make the boundary defensible.

The first is the AI personalisation policy: a documented policy describing what AI tools can and cannot do in customer-facing contexts, with clear demarcation between general information, general advice, and personal advice surfaces. The policy should be specific enough to apply operationally, not a set of aspirations.

The second is the engine logic documentation. The recommendation engine's optimisation function, training data sources, and decision-rule overlays should be documented in a form that the conduct compliance function can review. Where the engine is operated by a third party, the entity needs to document its understanding of the engine's behaviour, even if the third party will not provide full transparency.

The third is the customer outcome testing protocol: a periodic protocol that examines whether the engine's actual distribution outcomes are consistent with the TMD, with sample-based investigation of edge cases. The testing should produce a report that compliance, business, and (potentially) regulators can read.
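
A sketch of what that protocol can look like in code, assuming the testing function is handed the same TMD-derived eligibility check the engine uses at runtime (a hypothetical in_target_market function, as in the earlier sketch) and a tolerance threshold set by the compliance function:

```python
import random

def outcome_test(distributed_customers: list, tmd, in_target_market,
                 sample_size: int = 500, tolerance: float = 0.01) -> dict:
    """Sample recent distribution outcomes and measure the out-of-target rate.

    Running the runtime eligibility check against realised outcomes catches
    drift that the runtime control itself missed or was bypassed on.
    """
    sample = random.sample(distributed_customers,
                           min(sample_size, len(distributed_customers)))
    if not sample:
        raise ValueError("no distribution outcomes to test")
    breaches = [c for c in sample if not in_target_market(c, tmd)]
    rate = len(breaches) / len(sample)
    return {
        "sample_size": len(sample),
        "out_of_target_rate": rate,
        "within_tolerance": rate <= tolerance,
        "edge_cases_for_review": breaches[:20],   # manual investigation queue
    }
```

The dictionary it returns is deliberately flat: it maps directly onto the report the protocol is meant to produce.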

Direction of travel

ASIC has signalled active supervisory interest in AI in financial services through 2026. The DDO regime is one of the lenses through which that supervision will operate. Where AI personalisation creates outcomes that are inconsistent with the target market determination, or that move customers toward products outside their target, supervisory engagement is likely.

The boundary between AI-driven personalisation and personal advice is one of the most consequential governance decisions a regulated institution can make about its AI stack. It cannot be answered once and filed. It needs to be maintained as the engine learns and the regulatory expectation evolves.


Context

DDO (the Design and Distribution Obligations) sits in Part 7.8A of the Corporations Act 2001. It requires issuers and distributors of financial products to design products for an identified target market and distribute them consistently with that target. ASIC enforces it. The obligation is product-centric (not advice-centric), but AI-driven personalisation can blur the line, because adaptive recommendation engines can move from defining a target market to producing something that looks like a personal recommendation.

AI angle

AI-driven personalisation engines are now used widely in Australian retail banking and superannuation. They drive product offers, content prioritisation, and next-best-action prompts. The behaviour of these engines determines whether the institution stays inside DDO or crosses into the personal advice regime under Chapter 7 of the Corporations Act.


Content disclaimer: This article is for general educational and informational purposes only. It does not constitute legal advice, regulatory guidance, or a substitute for professional compliance judgement. Regulatory obligations vary by entity type, licence, and circumstance. Always refer to primary source guidance from APRA, ASIC, or the relevant regulatory authority.