AI has compressed specific stages of the attack lifecycle in measurable, well-documented ways, and the consequence for defenders is concrete: the assumptions built into traditional assessment cycles no longer hold.
Key Takeaways
- AI has not invented new attack techniques, but it has materially shortened the time and skill required to execute existing ones, particularly in reconnaissance, vulnerability identification, and social engineering at scale.
- The window between vulnerability disclosure and active exploitation has narrowed sharply. VulnCheck’s analysis of 2024 data found that nearly a quarter of known exploited vulnerabilities were being exploited on or before the day their CVE was published, and exploitation volumes rose 20% year on year.
- Shadow AI introduces an underappreciated internal attack surface: nearly half of employees use unsanctioned AI tools at work, and many routinely share sensitive organisational data through platforms operating outside IT governance.
- AI-assisted defensive tools deliver measurable outcomes. Organisations using AI and automation extensively contained breaches 98 days faster and incurred $1.88 million less in breach costs, according to IBM’s 2024 Cost of a Data Breach Report — but those tools perform best when deployed on sound foundations.
- Continuous validation is the appropriate structural response to compressed attacker timelines. Organisations that rely on periodic assessments operate with an exposure picture that may already be out of date by the time it reaches a decision-maker.
AI Has Changed the Attacker’s Operational Tempo
The most useful framing is not that AI has transformed the threat landscape, but that it has meaningfully accelerated specific capabilities that were already part of the attacker’s toolkit. Automated reconnaissance can now map an organisation’s external attack surface in minutes. AI-assisted vulnerability research reduces the time required to identify exploitable weaknesses from hours or days to near-real-time. And social engineering content, historically constrained by the effort required to produce credible, personalised communications, can now be generated at scale with quality that once required significant human expertise.
ASD’s Annual Cyber Threat Report 2024–25 (ASD’s ACSC, Annual Cyber Threat Report 2024–25, 2025) notes that AI is amplifying attacker capability across the board, with deepfakes, fraudulent KYC documentation, and AI-generated phishing now routine components of criminal tradecraft in Australia. This is not theoretical: the report documented a 16% increase in calls to the Australian Cyber Security Hotline and an 11% increase in cyber security incidents responded to by the ACSC in a single year.
What AI has changed is the economics of attack. Techniques that previously required skilled operators to execute at scale can now be automated. This lowers the barrier to entry for a wider range of threat actors while simultaneously increasing the volume and velocity of attacks that skilled operators can conduct. The consequence is a threat environment where the operational tempo has shifted, and where defences calibrated to a slower pace are structurally mismatched.
The Exploitation Window Has Narrowed Significantly
VulnCheck’s analysis of 2024 exploitation data (VulnCheck, 2024 Trends in Vulnerability Exploitation, 2025) identified 768 CVEs first reported as exploited in the wild during the year, a 20% increase from 2023. More significantly, nearly a quarter of those known exploited vulnerabilities were being exploited on or before the day their CVE was published — before most defenders had any formal notification that the vulnerability existed. VulnCheck’s 2026 State of Exploitation report (VulnCheck, State of Exploitation 2026, 2026) confirmed this pattern held in 2025, with the proportion rising further to 29%.
Mandiant’s M-Trends 2025 report (Google Cloud Security, M-Trends 2025, 2025), drawn from more than 450,000 hours of incident response investigations, documented multiple critical vulnerabilities being exploited by more than a dozen distinct threat groups within two weeks of disclosure. Exploits remained the most common initial infection vector for the fifth consecutive year, accounting for 33% of all intrusions investigated. In the Asia-Pacific region specifically, exploits accounted for 64% of initial infection vectors, nearly double the global average.
The trend in exploitation speed is not new: it has been worsening for years. Rapid7’s longitudinal vulnerability research (Rapid7, 2022 Vulnerability Intelligence Report, 2023) established that 56% of high-significance vulnerabilities were exploited within seven days of public disclosure as far back as 2022, an 87% rise over 2020, with a median time to exploitation of just one day. AI-assisted tooling has accelerated that trajectory further. An organisation running a quarterly vulnerability assessment cycle and allowing 30 days for prioritisation and remediation is operating with a 90 to 120-day window that overlaps almost entirely with the exploitation lifecycle for the most actively targeted vulnerabilities.
Orro observes this pattern across the environments it supports: organisations that have invested in asset discovery and detection capabilities still find themselves making remediation decisions based on exposure data that is days or weeks old. In a threat environment where high-severity vulnerabilities can be weaponised within a fortnight of disclosure, the latency built into traditional assessment cadences is a structural risk.
Shadow AI Is an Underappreciated Attack Surface
The AI threat to Australian organisations is not only external. As AI adoption accelerates inside enterprises, often faster than governance frameworks can keep pace, new attack surfaces are emerging that many security teams have not yet fully mapped.
Shadow AI, the informal use of AI tools outside IT oversight and approval, is now widespread. A January 2026 BlackFog survey of 2,000 workers (BlackFog, Shadow AI Threat Grows Inside Enterprises, 2026) found that nearly half used unsanctioned AI tools, with 33% having shared organisational research or datasets, 27% having shared employee data, and 23% having shared financial information through those platforms. Cisco’s 2025 AI security research (Cisco, AI Security Study, 2025) found that 46% of organisations had already experienced internal data leaks through generative AI tools. IBM’s Cost of a Data Breach Report 2025 (IBM Security, Cost of a Data Breach Report 2025, 2025) found that 97% of AI-related security breaches involved AI systems lacking proper access controls, and that most breached organisations had no governance policies to manage shadow AI — both factors directly driving up breach costs.
The attack vectors introduced by ungoverned AI adoption extend beyond data exfiltration. Prompt injection attacks against AI systems, the compromise of AI platforms through which sensitive data flows, and the use of employee AI interactions to profile and target organisations are all documented and active techniques. OWASP’s Top 10 for LLM Applications (OWASP, Top 10 for LLM Applications 2025, 2024) identifies prompt injection as the leading security risk in AI system deployments.
This is not a reason to restrict AI adoption. The productivity case for AI tools is clear, and organisations that refuse to engage will face their own risks. It is a reason to govern it. The principle is the same as it has always been in security: inventory, visibility, and policy applied consistently. Shadow AI cannot be managed if it cannot be seen, and it cannot be seen without deliberate governance structures in place.
Defensive AI Delivers Genuine Value, on Sound Foundations
AI is also being applied defensively, and the outcomes data is credible. IBM’s 2024 Cost of a Data Breach Report (IBM Security, Cost of a Data Breach Report 2024, 2024) found that organisations making extensive use of AI and automation in their security operations contained breaches 98 days faster on average than those that did not, and incurred $1.88 million less in average breach costs. The use of AI specifically in prevention workflows (attack surface management, posture management, and proactive exposure reduction) generated the largest cost savings, at $2.2 million less per breach compared to organisations not using AI in those areas.
These are not marginal gains. They reflect genuine capability improvement in threat detection, telemetry correlation, and prioritisation of exposures against real-time threat intelligence. AI can process the volume of signals generated in a modern enterprise environment at a speed no human team can match, and surface patterns that would otherwise remain buried in noise.
The honest limits matter here, though. AI-assisted security tools perform best when the underlying data is clean, the asset inventory is complete, and the telemetry stack covers the full environment. Orro works with organisations that have deployed AI security tooling into environments with fragmented visibility and incomplete asset discovery, and the result is rarely improved outcomes. AI amplifies existing blind spots as readily as it surfaces genuine threats. The foundation has to come first. An organisation that cannot reliably enumerate its external attack surface or its internal asset inventory will find that AI-assisted analysis produces confident, fast, and sometimes wrong answers.
Evidence Snapshot: What Research Shows About AI-Accelerated Cyber Threats
Exploitation timeline compression
- Nearly a quarter of known exploited vulnerabilities in 2024 were being exploited on or before the day their CVE was published, rising to 29% in 2025. (VulnCheck, State of Exploitation 2026, 2026)
- 768 CVEs were reported as exploited in the wild for the first time in 2024, a 20% year-on-year increase. (VulnCheck, 2024 Trends in Vulnerability Exploitation, 2025)
- Exploits were the most common initial infection vector for the fifth consecutive year, accounting for 33% of intrusions globally and 64% in the Asia-Pacific region. (Google Cloud Security, M-Trends 2025, 2025)
- 56% of high-significance vulnerabilities were exploited within seven days of disclosure as far back as 2022, an 87% rise over 2020, with a median time to exploitation of one day. (Rapid7, 2022 Vulnerability Intelligence Report, 2023)
Shadow AI and internal exposure
- Nearly half of employees use unsanctioned AI tools; 33% have shared organisational research or datasets through them. (BlackFog, Shadow AI Threat Grows Inside Enterprises, 2026)
- 46% of organisations reported internal data leaks through generative AI tools. (Cisco, AI Security Study, 2025)
- 97% of AI-related security breaches involved AI systems that lacked proper access controls; most breached organisations had no shadow AI governance policies. (IBM Security, Cost of a Data Breach Report 2025, 2025)
Defensive AI outcomes and limits
- Organisations using AI and automation extensively contained breaches 98 days faster and incurred $1.88 million less in average breach costs. (IBM Security, Cost of a Data Breach Report 2024, 2024)
- AI in prevention workflows produced the largest cost reduction: $2.2 million less per breach compared to organisations not deploying AI in prevention. (IBM Security, Cost of a Data Breach Report 2024, 2024)
- Australia’s critical infrastructure entities received more than 190 ACSC notifications of potentially malicious activity in 2024–25, a 111% increase on the prior year. (ASD’s ACSC, Annual Cyber Threat Report 2024–25, 2025)
Australian Critical Infrastructure Faces a Specific and Documented Risk
ASD’s Annual Cyber Threat Report 2024–25 (ASD’s ACSC, Annual Cyber Threat Report 2024–25, 2025) documents a 111% increase in notifications to critical infrastructure entities of potentially malicious activity: more than 190 notifications in a single year. Critical infrastructure accounted for 13% of all cyber incidents responded to by the ACSC, with DDoS attacks against critical infrastructure increasing by 280%. State-sponsored actors, including APT40, a PRC-linked group that ASD and international partners have publicly attributed and profiled, are identified as a persistent and active threat to Australian government, critical infrastructure, and business networks.
The Security of Critical Infrastructure Act 2018 creates specific obligations for critical infrastructure operators to maintain and implement risk management programmes. A programme built on point-in-time assessments cannot demonstrate ongoing risk visibility, and ASD’s guidance is explicit: organisations should operate with an “assume compromise” mindset and implement continuous threat intelligence practices. The recommendation to “adopt continuous threat intelligence” is one of ASD’s four strategic priorities for Australian organisations. It reflects the same operational reality the exploitation window data illustrates: the pace of the threat environment has outrun the cadence of periodic assessment.
For operators in utilities, resources, and related sectors served by Orro’s critical infrastructure practice, this is not a generalised observation. It is a specific, documented, and escalating operational reality.
Continuous Validation Is the Structural Response to AI-Accelerated Threats
The appropriate response to compressed attacker timelines is not faster periodic assessments. It is eliminating the periodic model altogether in favour of continuous validation: a state in which an organisation knows its current exposure at any given moment, and can respond because it already has the picture.
When threat velocity increases, the organisations with continuous exposure validation in place retain the ability to act. The relevant question is not “when was our last assessment?” but “what is our exposure state right now?” Continuous Threat Exposure Management (CTEM) provides the operational discipline to make that question answerable. It covers the full cycle: scoping the attack surface, discovering exposures continuously, validating which exposures are genuinely exploitable, prioritising them against real-world threat intelligence, and mobilising remediation accordingly. As AI tools accelerate specific stages of that process (particularly discovery and validation), they amplify the value of a sound CTEM programme rather than substitute for one.
The organisations that will navigate AI-accelerated threat velocity most effectively are not necessarily those with the most sophisticated AI security tooling. They are those whose exposure management foundations are sound enough that additional capability, whether human or AI-assisted, can act on accurate, current information. That is the operational advantage that continuous validation provides, and in an environment where the exploitation window can be measured in days, it is the advantage that matters most.
Orro’s Visibility & Response and National Cyber Defence Centre capabilities are designed to support this model: continuous telemetry, validated exposure data, and the operational depth to turn intelligence into action.
If this article has raised questions about whether your current assessment cadence is keeping pace with the threat environment, how AI-introduced attack surfaces are being managed in your organisation, or how continuous validation maps to your existing security operations, Orro’s team is available for a confidential discussion. There are no obligations — just a conversation with practitioners who work across these environments every day.
Orro’s Continuous Threat Exposure Management service helps Australian organisations maintain continuous exposure visibility in an environment where threat velocity is increasing. Download the Continuous Exposure Playbook or book a consultation with Orro’s security team to explore what continuous validation looks like in your environment.
Sources & Further Reading
- ASD’s ACSC, Annual Cyber Threat Report 2024–25, Australian Signals Directorate, 2025
- Google Cloud Security, M-Trends 2025, Mandiant/Google, 2025
- VulnCheck, 2024 Trends in Vulnerability Exploitation, VulnCheck, 2025
- VulnCheck, State of Exploitation 2026, VulnCheck, 2026
- Rapid7, 2022 Vulnerability Intelligence Report, Rapid7, 2023
- IBM Security, Cost of a Data Breach Report 2024, IBM, 2024
- IBM Security, Cost of a Data Breach Report 2025, IBM, 2025
- BlackFog, Shadow AI Threat Grows Inside Enterprises, BlackFog/BusinessWire, January 2026
- OWASP Top 10 for LLM Applications 2025, OWASP, 2024
Further reading:
- Security of Critical Infrastructure Act 2018 (SOCI Act), Australian Government
- ASD Advisory: APT40 Tradecraft, ASD’s ACSC, 2024
- ASD’s ACSC, Annual Cyber Threat Report 2023–24, Australian Signals Directorate, 2024
- Orro: Visibility & Response | Strategy & Risk Management | National Cyber Defence Centre | Critical Infrastructure