Stu Long, CTO, Orro
There is a meaningful difference between an AI tool that answers questions and an AI agent that takes actions — and most organisations’ governance frameworks were written for the former, not the latter. This agentic AI governance gap is not a future concern. For organisations that have already deployed Microsoft Copilot, workflow automation platforms, or AI-assisted customer service systems, it exists right now in their production environments.
Key Takeaways: Closing the Agentic AI Governance Gap
- Most AI governance frameworks and Acceptable Use Policies were designed for conversational AI tools where a human reviews and acts on every output. They do not address AI agents that take actions autonomously across an organisation’s systems without a human in the loop for each step.
- The governance gap in agentic AI is not primarily a technical failure — it is a policy failure. Most current policies do not define what actions an agent can take without approval, what data it can access, or who is accountable when it acts incorrectly.
- The OWASP Top 10 for LLM Applications (2025) identifies Excessive Agency and Prompt Injection as two of the most significant documented risks for AI systems operating with real-world permissions — both are directly relevant to enterprise agentic deployments.
- Effective governance for agentic AI requires controls at the API and identity layer: least-privilege permission scoping per agent, audit logging below the UI layer, and explicitly defined human-in-the-loop thresholds for high-consequence actions.
- Under Australian law, organisations are accountable for how their systems handle personal information — not only how their employees do. An AI agent that accesses or surfaces personal data outside its intended scope may constitute a notifiable breach under the Privacy Act 1988, regardless of whether a human directed the action.
The Practical Difference Between an AI Assistant and an AI Agent
The distinction sounds simple. An AI assistant receives a prompt and returns a response: a drafted email, a summarised document, an answer to a question. A human then decides what to do with that output. An AI agent receives a goal and works autonomously toward it — querying systems, sending communications, booking meetings, modifying files, triggering API calls, and escalating workflows — with the human reviewing the outcome rather than each individual step.
That distinction is the source of the governance problem. When a human reviews every AI-generated output before acting on it, the governance framework needs to address what the AI says. When an AI agent acts without that review, the governance framework needs to address what the AI does — and most frameworks in Australian enterprise environments were not designed with that requirement in mind.
The tools driving this shift are not hypothetical. Microsoft 365 Copilot, now widely deployed across Australian enterprise environments, includes agent capabilities that allow it to query SharePoint repositories, summarise and send emails, schedule meetings, and interact with connected business applications. Workflow automation platforms with AI-driven decision nodes are in production across financial services, retail, healthcare, and logistics. AI-assisted customer service platforms with action capabilities — booking, modifying, escalating — are live in consumer-facing environments. The category of technology being governed has changed, but many governance documents have not caught up.
The Governance Gap in Current AI Policies
The AI Acceptable Use Policies that most organisations put in place over 2023 and 2024 were a responsible first step, but they were written for a different technology. They address employee behaviour: what data employees can share with AI tools, what kinds of content they can generate, how they should review AI outputs before using them. The fundamental assumption embedded in those policies is that a human remains the actor. The AI advises; the human decides.
That assumption does not hold for agentic AI deployments. When an AI agent is configured to manage a workflow, there is no human decision point for each action the agent takes. The agent decides — within whatever permission boundaries it has been granted — and executes. Current policies typically do not answer the questions that matter for these deployments: What actions can an AI agent take without human approval? What data can an agent access, and under what conditions? Who is accountable when an agent takes an incorrect or harmful action? How are agent actions logged, and are those logs auditable at the level where actions actually occur? What categories of action require a human to review and approve before the agent proceeds?
These are not abstract governance philosophy questions. They are operational requirements for any organisation with agentic AI in production environments. ASD’s ACSC guidance on engaging with artificial intelligence states that organisations need to understand the constraints of their AI systems and manage them within a defined security framework — but the specific challenge of agentic permissions requires governance that goes beyond general AI security hygiene. (ASD’s ACSC, Engaging with Artificial Intelligence)
What the Governance Gap Makes Possible
The risks created by an agentic governance gap are specific and documented, not speculative. Three failure modes are worth examining in detail.
The first is data leakage during task execution. An AI agent completing a legitimate task — compiling a report, responding to a customer query, summarising documents across a file repository — may access and include data it was not intended to surface. This happens not because the agent is malfunctioning, but because its access is determined by the permissions of the account or identity it operates under. If that identity has broad access, the agent inherits that access. It will include whatever it can reach that appears relevant to the task. The OWASP Top 10 for LLM Applications (2025) identifies this directly under LLM06: Excessive Agency, noting that an LLM extension designed for an individual user that connects to downstream systems using a generic high-privileged identity has access to data belonging to all users. (OWASP GenAI Security Project, OWASP Top 10 for LLM Applications 2025)
The second failure mode is bypassing human approval processes. AI agents operating in workflow automation contexts may be configured to execute actions that would normally require human sign-off — purchase approvals, contract execution, access provisioning, communications sent on behalf of an organisation — if the approval logic is not explicitly embedded in the agent’s permission architecture. The agent is not circumventing governance intentionally; it is executing within whatever it has been granted. OWASP identifies excessive autonomy — the absence of human-in-the-loop verification for high-impact actions — as a root cause of Excessive Agency. An agent that can delete files, send external communications, or modify access permissions without a confirmation step will do exactly that, at speed, whenever a task requires it.
The third is prompt injection — a specific and documented attack vector that organisations deploying agentic AI need to treat as a concrete operational risk. OWASP LLM01:2025 Prompt Injection describes how malicious instructions embedded in content that an agent processes — an email, a shared document, a web page, a calendar entry — can redirect the agent’s actions without the user’s awareness. An agent instructed to summarise incoming emails and act on them may encounter an email containing carefully crafted instructions that redirect it to forward sensitive information, modify access settings, or take actions the legitimate user never requested. The agent’s inability to distinguish malicious instructions embedded in processed content from legitimate directives is a known architectural limitation, and the mitigation is governance at the permission layer, not trust in the agent’s judgement.
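Governance at the permission layer can be sketched in code. The idea below is a minimal, hypothetical enforcement gate (the names `AgentScope` and `enforce` are illustrative, not a real library): every action the agent proposes is checked against a fixed scope before it executes, so even if injected content steers the agent toward forwarding data externally, the scope — which injected text cannot widen — blocks the action.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentScope:
    """Actions and system domains a single agent identity may use."""
    allowed_actions: frozenset
    allowed_domains: frozenset

def enforce(scope: AgentScope, action: str, target_domain: str) -> bool:
    """Permission-layer gate. It does not try to judge whether an instruction
    was legitimate; it only checks the proposed action against the scope."""
    return action in scope.allowed_actions and target_domain in scope.allowed_domains

# An email-summarisation agent scoped to read mail and post summaries internally.
summariser = AgentScope(
    allowed_actions=frozenset({"read_email", "post_summary"}),
    allowed_domains=frozenset({"mail.internal", "chat.internal"}),
)

# A legitimate step passes; an injected "forward externally" step is blocked.
assert enforce(summariser, "read_email", "mail.internal")
assert not enforce(summariser, "send_email", "partner.external")
```

The design point is that the gate sits outside the model: the agent's judgement is never the control.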
Orro observes that the organisations most exposed to these failure modes are typically not those that have ignored AI governance — they are organisations that have implemented thoughtful policies for conversational AI tools and assumed those policies extend to agentic deployments. The assumption is understandable; the tools often look similar at the surface. The permission architecture underneath is categorically different.
Evidence Snapshot: Agentic AI Security in Australian Enterprise
Agentic AI Adoption and Deployment
- Australian SME AI adoption reached 40% in Q4 2024, a 5% increase over the prior quarter, with the Department of Industry’s AI Adoption Tracker noting a clear gap between responsible AI practices organisations intend to implement and those they have actually deployed. (Department of Industry, Science and Resources, AI Adoption Tracker Q4 2024, 2025)
- The Reserve Bank of Australia’s November 2025 survey of 100 medium and large Australian firms found growing interest in agentic AI tools, but noted that practical adoption of such tools has so far been low, with enterprise-wide AI transformation remaining the exception rather than the norm. (Reserve Bank of Australia, Technology Investment and AI: What Are Firms Telling Us?, November 2025)
Documented AI Security Risks and Attack Vectors
- OWASP Top 10 for LLM Applications 2025 identifies Prompt Injection (LLM01) and Excessive Agency (LLM06) as two of the most significant security risks for LLM-based systems. Excessive Agency results from excessive functionality, excessive permissions, or excessive autonomy granted to agents, and can affect confidentiality, integrity, and availability. (OWASP GenAI Security Project, Top 10 for LLM Applications 2025)
- The NIST AI Risk Management Framework (AI RMF 1.0, 2023), expanded via a Generative AI Profile in July 2024, provides structured governance guidance for AI system risk management, emphasising access controls, accountability mechanisms, and oversight commensurate with the impact of AI system actions. (NIST, Artificial Intelligence Risk Management Framework AI RMF 1.0, 2023)
- The average global cost of a data breach reached USD 4.88 million in 2024 — a 10% increase over 2023 and the largest annual increase since the pandemic — with customer personally identifiable information involved in 46% of all breaches. (IBM, Cost of a Data Breach Report 2024)
Regulatory and Compliance Obligations for AI Systems
- ASD’s ACSC, in conjunction with international partners, has published guidance recommending organisations apply AI security practices alongside the Essential Eight framework, understand their AI systems’ constraints, and establish clear governance of AI-related data handling. (ASD’s ACSC, Engaging with Artificial Intelligence)
- APRA CPS 234 applies to all APRA-regulated entities and requires information security controls commensurate with threats to information assets — obligations that extend to AI systems operating in regulated environments. The Financial Accountability Regime (FAR), active from March 2024, makes individual executives personally accountable for compliance with these obligations. (APRA, Prudential Standard CPS 234 Information Security)
What Agentic Guardrails Look Like in Practice
Closing the agentic governance gap does not require a complete overhaul of existing security architecture. It requires applying well-understood access control principles to a new category of actor, and extending existing policies to cover what AI agents can do rather than only what employees can ask.
The starting point is least-privilege access, applied per agent and per task. AI agents are frequently provisioned with user-level credentials — the credentials of the person who configured them, or a generic service account with broad access — rather than purpose-scoped identities that have access only to what the specific task requires. The NIST AI Risk Management Framework (AI RMF 1.0) provides a governance structure for managing this kind of AI system risk, emphasising that trustworthy AI deployment requires appropriate access controls and oversight mechanisms commensurate with the potential impact of the system’s actions. (NIST, Artificial Intelligence Risk Management Framework AI RMF 1.0, 2023) In practice, this means each agent should operate under an identity scoped to the specific systems and data it needs to complete its designated task — and nothing more.
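One way to picture per-task scoping is as an intersection: the agent identity is minted from what the task needs, what the configuring user holds, and what policy permits — anything the user could do but the task does not need is dropped. The sketch below is illustrative (the function name and permission strings are invented), not a specific platform's API.

```python
def mint_agent_identity(user_permissions: set,
                        task_requirements: set,
                        policy_allowed: set) -> set:
    """Grant only permissions that the task needs AND the user holds
    AND policy allows — never the user's full permission set."""
    return task_requirements & user_permissions & policy_allowed

user_perms = {"read:sharepoint/*", "send:email", "delete:files", "modify:acl"}
task_needs = {"read:sharepoint/*", "send:email"}
policy     = {"read:sharepoint/*", "send:email", "read:calendar"}

agent_scope = mint_agent_identity(user_perms, task_needs, policy)
assert agent_scope == {"read:sharepoint/*", "send:email"}
assert "delete:files" not in agent_scope  # broad user rights are not inherited
```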
The second requirement is audit logging at the API layer. Organisations that rely on user-level or UI-layer logging to monitor AI activity will miss agent actions that occur below that layer, which is where most consequential agentic actions execute. API-layer logging, capturing each system call the agent makes, each data object it accesses, and each action it triggers, is the only approach that provides genuine visibility into what agentic systems are doing. Without it, governance frameworks produce the appearance of oversight without the substance.
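The shape of API-layer logging can be shown with a simple wrapper: every call the agent makes is recorded — the agent identity, the call, its arguments, and a timestamp — before it executes, at the layer where the action actually occurs rather than the UI layer above it. All names here are hypothetical stand-ins for real system calls.

```python
import time

AUDIT_LOG = []  # in production this would be an append-only store, not a list

def audited(agent_id: str, fn):
    """Wrap an API call so it is logged at the layer where it executes."""
    def wrapper(*args, **kwargs):
        AUDIT_LOG.append({
            "agent": agent_id,
            "call": fn.__name__,
            "args": [repr(a) for a in args],
            "ts": time.time(),
        })
        return fn(*args, **kwargs)
    return wrapper

def fetch_document(doc_id):  # stand-in for a real system call
    return f"contents of {doc_id}"

fetch = audited("copilot-hr-01", fetch_document)
fetch("policies/leave.docx")

assert AUDIT_LOG[0]["agent"] == "copilot-hr-01"
assert AUDIT_LOG[0]["call"] == "fetch_document"
```

The point is architectural: if the agent's system calls do not pass through an instrumented layer like this, no amount of policy wording produces an audit trail.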
The third requirement is explicitly defined human-in-the-loop thresholds — and this is where governance documentation needs to be most specific. Not every agent action requires human review; requiring approval for every step would negate the operational value of the technology. But some categories of action should always require it: actions with financial consequences above a defined threshold; actions that modify access permissions for any user or system; actions involving external communications; actions that affect personal data or data classified as sensitive. These thresholds need to be defined in governance documentation and enforced at the technical layer, not simply described in policy. Policy without technical enforcement is a statement of intent, not a control.
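Technically enforced thresholds can be expressed as rules evaluated against each proposed action: high-consequence categories route to a human approval queue, everything else proceeds. The categories and the dollar threshold below are illustrative examples, not recommended values.

```python
# Each rule takes a proposed action (a dict) and says whether it needs approval.
APPROVAL_RULES = {
    "financial":      lambda a: a.get("amount", 0) > 1000,  # example threshold
    "access_change":  lambda a: True,   # always requires human sign-off
    "external_comms": lambda a: True,
    "personal_data":  lambda a: True,
}

def requires_human_approval(action: dict) -> bool:
    rule = APPROVAL_RULES.get(action["category"])
    return bool(rule and rule(action))

assert requires_human_approval({"category": "financial", "amount": 5000})
assert not requires_human_approval({"category": "financial", "amount": 50})
assert requires_human_approval({"category": "access_change"})
assert not requires_human_approval({"category": "summarise"})  # low-consequence
```

Because the rules live in code, the governance document and the enforcement point can be kept in lockstep rather than drifting apart.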
Finally, most AI Acceptable Use Policies need a specific addition: a section defining Agentic Permissions. This section should articulate what categories of action an AI agent can take without human-in-the-loop approval, what categories require it, what data each deployed agent is authorised to access, and who is accountable when an agent takes an action outside those boundaries. This is the policy layer that supports and gives meaning to the technical controls. Both are necessary; neither is sufficient without the other.
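An Agentic Permissions section becomes most useful when it is written as structured data that both humans and enforcement code can read. The sketch below shows one possible shape — the agent name, owner address, and permission strings are invented for illustration.

```python
# A hypothetical "Agentic Permissions" policy section expressed as data.
AGENTIC_PERMISSIONS = {
    "agents": [
        {
            "name": "invoice-triage-agent",
            "owner": "finance-systems@company.example",  # accountable party
            "data_access": ["erp:invoices:read"],
            "autonomous_actions": ["tag_invoice", "route_to_queue"],
            "approval_required": ["approve_payment", "email_supplier"],
        },
    ],
}

def can_act_autonomously(policy: dict, agent_name: str, action: str) -> bool:
    """Look up whether a named agent may take an action without approval."""
    agent = next(a for a in policy["agents"] if a["name"] == agent_name)
    return action in agent["autonomous_actions"]

assert can_act_autonomously(AGENTIC_PERMISSIONS, "invoice-triage-agent", "tag_invoice")
assert not can_act_autonomously(AGENTIC_PERMISSIONS, "invoice-triage-agent", "approve_payment")
```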
Regulatory and Reputational Consequences
The stakes of ungoverned agentic AI are not confined to operational disruption. They extend to regulatory liability and reputational exposure in ways that are directly relevant to board-level risk owners.
Under the Privacy Act 1988 (Cth), organisations are responsible for how their systems handle personal information — not only how their employees do. If an AI agent accesses, processes, or surfaces personal information outside its intended scope during an otherwise legitimate task, that may constitute an eligible data breach requiring notification to the Office of the Australian Information Commissioner under the Notifiable Data Breaches scheme. The agent’s autonomy is not a mitigating factor; organisations are accountable for the systems they deploy.
For organisations subject to APRA oversight, the obligation is more specific. APRA Prudential Standard CPS 234 Information Security requires APRA-regulated entities to maintain information security controls commensurate with the threats their information assets face — and those obligations extend to AI systems operating within their technology environments. (APRA, Prudential Standard CPS 234 Information Security, 2019) An AI agent that processes financial data, customer records, or other regulated information without appropriate permission scoping and audit controls is a CPS 234 compliance exposure, not only an operational risk. CPS 230, which came into force on 1 July 2025, extends these obligations further to operational risk management across the enterprise.
The reputational dimension is equally important. An AI agent taking a publicly visible incorrect action — sending an inappropriate communication to a customer, triggering an unintended external transaction, surfacing confidential data in a customer-facing context — will be attributed to the organisation, not to the AI. The public and regulators do not distinguish between human-directed errors and system errors in their assessment of organisational accountability. The reputational consequence of an AI agent’s action is the same as the consequence of an employee’s action, and in some cases more significant, because it implies systematic rather than individual failure.
Orro works with organisations across utilities, financial services, healthcare, and retail that are grappling with exactly these governance questions as their agentic AI deployments move from pilot to production. The consistent finding is that the technical capability to govern these systems appropriately exists and is well understood — the gap is almost always in the policy layer and in the clarity of escalation thresholds, not in the availability of technical controls.
The Practical First Step
Organisations do not need to begin with a comprehensive agentic AI governance programme. The highest-value first move is a targeted audit of current AI deployments with a specific question in mind: which AI tools currently in use within the organisation have action capabilities, not just generative capabilities?
That audit should map every deployed AI tool that can take actions — send, modify, create, delete, provision, communicate — identify the identity or credentials under which each operates, and assess whether those permissions are appropriately scoped for the tasks the tool is intended to perform. In most organisations, this exercise will surface agents running under user-level credentials with access broader than any specific task requires, workflow automations with no defined approval thresholds, and AI integrations with no API-layer audit logging in place.
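The audit described above amounts to a simple inventory pass: for each deployed tool, record whether it can take actions, what identity it runs under, and whether API-layer logging exists, then flag the gaps. The sketch below uses invented example data and naming conventions (`user:`/`svc:` identity prefixes) purely to illustrate the shape of the exercise.

```python
inventory = [
    {"tool": "copilot-m365", "can_act": True,  "identity": "user:jsmith",       "api_logging": False},
    {"tool": "chat-faq-bot", "can_act": False, "identity": "svc:faq-readonly",  "api_logging": True},
    {"tool": "workflow-ap",  "can_act": True,  "identity": "svc:broad-admin",   "api_logging": True},
]

def remediation_list(inventory):
    """Flag action-capable tools running under user-level or broad identities,
    or without API-layer audit logging."""
    flags = []
    for t in inventory:
        if not t["can_act"]:
            continue  # generative-only tools are out of scope for this audit
        if t["identity"].startswith("user:") or "admin" in t["identity"]:
            flags.append((t["tool"], "over-broad identity"))
        if not t["api_logging"]:
            flags.append((t["tool"], "no API-layer logging"))
    return flags

flags = remediation_list(inventory)
assert ("copilot-m365", "over-broad identity") in flags
assert ("copilot-m365", "no API-layer logging") in flags
assert ("workflow-ap", "over-broad identity") in flags
assert not any(tool == "chat-faq-bot" for tool, _ in flags)
```

The resulting flag list is exactly the prioritised remediation list the next paragraph describes.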
The output of that audit becomes the prioritised remediation list. It does not require replacing tools or rearchitecting platforms; it requires scoping agent identities appropriately, adding approval gates to high-consequence action categories, and ensuring logging captures what agents are doing at the layer where their actions execute.
Governance documents that do not reach this layer are not governing agentic AI. They are describing it.
If this article has raised questions about whether your current AI governance framework addresses what your AI tools can do — not only what your employees can ask them — how your deployed agents are permissioned and audited, or whether your human-in-the-loop thresholds are technically enforced or simply documented, Orro’s team is available for a confidential discussion. There are no obligations involved — just a conversation with practitioners who work across these environments every day.
Is Your AI Governance Ready for Agents?
Orro’s Cloud Security and Managed IT practices help Australian organisations build governance frameworks that reach the layer where agentic AI risk actually lives — access controls, API-layer audit logging, and defined human-in-the-loop thresholds. Download the 2026 Australian Governance & Privacy Risk Checklist or speak with Orro’s team to assess your current agentic AI governance posture.