Two years ago, the most advanced AI models available could barely complete beginner-level cybersecurity tasks. As of April 2026, AI models in controlled evaluations are autonomously executing multi-stage network attacks, from initial reconnaissance through to full network takeover, completing work that human security professionals estimate would take 20 hours. For business leaders who think of cybersecurity as an IT department concern, that timeline compression deserves attention.
The AI Security Institute (AISI) published findings on April 13, 2026, from controlled cybersecurity evaluations of frontier AI models. The institute has been tracking AI cyber capabilities since 2023, building increasingly complex evaluation environments to keep pace with AI development.
Their most complex test, a 32-step simulated corporate network attack called “The Last Ones,” spans initial reconnaissance through full network takeover. The institute estimates that a human professional would take approximately 20 hours to complete the scenario.
In controlled testing in which AI models were explicitly directed and given network access, the leading model completed the full 32-step sequence in 3 of 10 attempts, averaging 22 completed steps across all runs. On expert-level capture-the-flag challenges, tasks that no AI model could complete before April 2025, the same model now succeeds 73% of the time.
The AISI is clear about the boundaries of what these results show. The test environments lack active defenders, defensive tooling, and security alert monitoring. Real enterprise environments with hardened security postures present meaningfully different challenges. The institute states it cannot confirm whether these AI capabilities would translate to well-defended systems.
The specific benchmark numbers matter less than the rate of change behind them. The AISI has been running these evaluations since 2023. The progression from models that could barely complete beginner tasks to models that could autonomously complete expert-level attack sequences occurred in roughly two years.
The institute also notes that performance on these evaluations continues to improve as more computing power is applied. The capability ceiling has not yet been reached. Future frontier models, in AISI's assessment, will be even more capable.
For CFOs and business owners, the relevant question is not whether your organization faces the specific threat profile tested in AISI’s lab environment. It is whether your current cybersecurity posture was built with this rate of capability improvement in mind, and whether it will hold up as that curve continues.
The AISI’s practical guidance focuses on fundamentals, not exotic defenses. Their recommendations center on regular application of security updates, robust access controls, security configuration, and comprehensive logging. The institute references the UK National Cyber Security Centre’s Cyber Essentials framework as a baseline.
These are not new recommendations. What has changed is the urgency behind them. Systems with a weak security posture, unpatched software, poor access controls, or gaps in logging are now demonstrably more vulnerable to automated, AI-directed attack sequences than they were 24 months ago. The time it takes to move from identifying a vulnerability to exploiting it is compressing.
For mid-market businesses, the most immediate implication is not necessarily about sophisticated AI-directed attacks. It is about the downstream effect: as AI tools make attack execution faster and more accessible, the volume and frequency of attacks on organizations of all sizes tend to increase. Weak security posture becomes more costly to maintain, not less.
The AISI acknowledges that AI cybersecurity capabilities are dual-use. The same tools that bad actors can use to target systems can also accelerate defensive work, including vulnerability discovery, penetration testing, log analysis, and threat detection. The institute is actively researching how cyber defenders can both use and prepare for frontier AI capabilities.
For business leaders, this is a useful frame. The question is not simply whether AI creates new risks (it does), but whether your organization is positioned to benefit from AI-enhanced defense at the same pace that AI-enhanced offense is developing.
Wiss works with mid-market companies through our technology advisory practice and Wiss Labs innovation division to evaluate technology risk, assess systems and process vulnerabilities, and implement the financial and operational controls that support sound governance. For organizations reassessing their technology infrastructure in light of a rapidly shifting risk environment, our team can help you understand where your exposure lies and what a prioritized response looks like.
Contact the Wiss technology advisory team.
AI Disclosure: This article was produced with AI writing assistance and reviewed by the Wiss editorial team. Original research published April 13, 2026 by the AI Security Institute (AISI). This article does not constitute cybersecurity or legal advice. Organizations should consult qualified cybersecurity professionals regarding their specific risk posture.