AI Won't Kill Tech Jobs. It Will Define the Next Generation of Them.

Eviant
8 min read
AI · Cybersecurity Careers · Human-in-the-Loop · Industry Trends · Security Operations

One Week, Two Shockwaves

The first shockwave hit the global tech industry on February 20, when Anthropic launched Claude Code Security — an AI-powered vulnerability scanner built into Claude Code that can reason through codebases the way a human security researcher would, identifying subtle logic flaws and access control failures that rule-based tools miss. The announcement triggered a sell-off across established cybersecurity companies as investors weighed the risk of AI agents cannibalising the market for traditional threat detection: CrowdStrike fell 8%, Cloudflare slumped more than 8%, SailPoint shed 9.4%, and Okta dropped 9.2%. Billions in market value evaporated in a single session.

Then, two days later, the other shoe dropped. Block — the company behind Square, Cash App, and Afterpay — announced it was cutting 40% of its workforce, citing “intelligence tools” as the reason. Jack Dorsey warned that most companies would follow within a year. Amazon, which has already cut around 30,000 corporate roles in recent months, continues to shed headcount while simultaneously pouring billions into AI infrastructure. The market has responded enthusiastically to each announcement. The message Wall Street is sending is unambiguous: fewer humans, more AI, higher margins.

So are tech and office jobs finished? The data tells a more complicated story.

The Part the Market Conveniently Ignored

A central feature of Claude Code Security is mandatory human review. No change is applied automatically — developers and security teams must review each suggested patch and explicitly approve it. Anthropic emphasises that the tool identifies problems and suggests solutions, but the final decision always rests with humans. This is not a design limitation they plan to remove. It is a deliberate architectural choice, and it reflects something the market reaction conveniently ignored: AI-powered security tools do not operate in a vacuum. They surface findings. Humans decide what to do with them.

Research confirms this. Studies have found that agentic AI models perform best when developers review outputs after key checkpoints, rather than running fully autonomous sessions. Without those checkpoints, models produced longer, less maintainable codebases and missed security constraints entirely. The same dynamic applies to security operations more broadly. An AI that flags 500 vulnerabilities is not a replacement for a security engineer — it is a force multiplier for one. The question is not whether you need a human in the loop. It is whether that human is skilled enough to know what to do when the AI hands them the wheel.
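
The shape of that checkpoint is easy to sketch. The Python fragment below is purely illustrative (the SuggestedPatch class, the review_gate function, and the approval hook are our own names, not Anthropic's API), but it captures the architectural point: the AI proposes a fix, a named human approves or rejects it, and nothing is applied without that decision.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class SuggestedPatch:
    """A finding surfaced by an AI scanner, paired with its proposed fix."""
    finding_id: str
    description: str
    diff: str
    severity: str  # e.g. "low", "medium", "high", "critical"


def apply_patch(patch: SuggestedPatch) -> None:
    # Placeholder for the real apply step: open a PR, run CI, merge on green.
    print(f"APPLIED {patch.finding_id}")


def review_gate(patch: SuggestedPatch,
                approve: Callable[[SuggestedPatch], bool]) -> bool:
    """Apply an AI-suggested patch only after an explicit human decision.

    `approve` is whatever review step the team already uses: a pull-request
    approval, a ticket sign-off, or an interactive prompt. The AI's output
    never reaches production without it.
    """
    print(f"[{patch.severity}] {patch.finding_id}: {patch.description}")
    print(patch.diff)
    if approve(patch):
        apply_patch(patch)
        return True
    print(f"REJECTED {patch.finding_id} (kept on file for audit)")
    return False


if __name__ == "__main__":
    patch = SuggestedPatch(
        finding_id="VULN-042",
        description="Missing ownership check on /api/orders/{id}",
        diff="+ if order.owner_id != current_user.id: raise Forbidden()",
        severity="high",
    )
    # Interactive approval; in practice this would be a pull-request review.
    review_gate(patch, approve=lambda p: input("Apply? [y/N] ").strip().lower() == "y")
```

In a real pipeline the approval hook is a pull-request review or a change ticket rather than a terminal prompt; the point is that approval is an explicit, attributable act.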

The Counterintuitive Hiring Reality

Here is where the counterintuitive part of this week’s story comes in. While Block is cutting 40% of its workforce and cybersecurity stocks are in freefall, the very companies building the tools responsible for this disruption — Anthropic, OpenAI — are on aggressive hiring sprees, competing fiercely for engineers, researchers, and security professionals. OpenAI recently filled a head of preparedness role paying up to $555,000 a year — specifically to ensure AI systems are safely developed and that the risks they pose are properly managed. These are not token hires. They represent a structural reality: the more powerful AI becomes, the more critical the humans who govern, oversee, and interrogate it become too.

The pattern is clear if you look past the headlines. The jobs being cut are routine, process-driven roles that AI can genuinely automate — data entry, basic ticket triage, repetitive code generation, and manual testing. The jobs being created (and paid at extraordinary rates) are roles that require judgement, context, and accountability: security architecture, AI safety, threat analysis, incident response, and governance. The workforce is not shrinking. It is reshaping.

Smaller Teams, Higher Skill, Greater Impact

The model that is emerging — and that we believe will define the next decade — is not AI replacing humans. It is smaller teams of highly skilled humans, augmented by AI, making decisions that AI cannot make alone. Consider what this looks like in practice for a security team.

Before AI augmentation, you might have a team of 10 analysts manually reviewing logs, triaging alerts, writing detection rules, and investigating incidents. Most of their time is spent on repetitive, low-value work — false positive triage, report writing, compliance checkbox exercises — and senior engineers are constantly pulled into mundane tasks because there aren’t enough junior staff to handle the volume.

After AI augmentation, that same operational surface can be covered by a team of 4–5 highly skilled engineers, each leveraging AI tools. The AI handles initial triage, pattern recognition, log correlation, and report drafting. Human analysts focus on what they do best: making judgement calls about risk, communicating findings to stakeholders, designing security architecture, and investigating complex incidents that require creative reasoning.
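
As a rough illustration of that division of labour, here is a minimal triage-routing sketch in Python. The fields, thresholds, and routing labels are assumptions chosen for illustration rather than any product's real schema: the model handles the volume work of scoring and summarising, while anything severe, or anything the model itself is unsure about, lands in front of an analyst.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    alert_id: str
    source: str           # e.g. "edr", "waf", "netflow"
    summary: str          # AI-drafted summary of the correlated events
    ai_severity: str      # model-assigned: "low" | "medium" | "high" | "critical"
    ai_confidence: float  # how sure the model is about its own classification


def triage(alert: Alert) -> str:
    """Route an alert based on the model's output.

    The model does the volume work (correlation, scoring, a draft summary);
    the analyst queue receives anything severe, and anything the model is
    unsure about. Only confidently benign, low-severity alerts are closed
    without a human looking at them, and even those stay on the audit trail.
    """
    if alert.ai_severity in ("high", "critical"):
        return "escalate_to_analyst"      # judgement call required
    if alert.ai_confidence < 0.90:
        return "escalate_to_analyst"      # the model is guessing; a human decides
    return "auto_close_with_audit_log"    # low severity, high confidence


if __name__ == "__main__":
    alerts = [
        Alert("A-1001", "edr", "PowerShell spawned from a Word macro", "critical", 0.97),
        Alert("A-1002", "waf", "Single failed login from a known office IP", "low", 0.99),
        Alert("A-1003", "netflow", "Unusual outbound volume to a new ASN", "medium", 0.62),
    ]
    for alert in alerts:
        print(alert.alert_id, "->", triage(alert))
```

The exact thresholds matter less than the principle: the queue a human sees is smaller and better curated, not empty.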

The total headcount is lower, but the skill floor is higher, the output per person is dramatically greater, and the quality of decisions is better because humans are spending their time on problems that actually require human thinking. This is the human-in-the-loop model — not human-on-the-sideline, not human-as-rubber-stamp, but human as the critical decision-maker, augmented by AI that handles the cognitive load that used to consume 80% of their day.

What AI Cannot Do in Cybersecurity

The stock market’s reaction to Claude Code Security suggests investors believe AI can replace security professionals wholesale. Here’s what that analysis misses:

AI cannot assess business risk

An AI can tell you that a vulnerability exists in a production API. It cannot tell you whether patching it right now carries more operational risk than the vulnerability itself — because that requires understanding the business context, the deployment schedule, the customer impact, and the risk appetite of the organisation.

AI cannot communicate to a board

When a breach occurs, someone has to stand in front of the board and explain what happened, what it means for the business, and what the organisation is doing about it. That person needs to translate technical findings into language that drives decision-making. AI generates text. It does not take accountability for it.

AI cannot exercise professional judgement under pressure

During an active incident, the decisions that matter most — whether to isolate a system, whether to engage law enforcement, whether to notify regulators, whether to communicate publicly — are judgement calls with incomplete information and significant consequences. These decisions require experience, ethics, and accountability that cannot be delegated to a model.

AI cannot build trust

Clients, regulators, and partners trust people, not algorithms. When an Australian business needs to demonstrate to APRA, the OAIC, or a cyber insurer that their security posture is sound, they need humans who can explain their controls, justify their decisions, and take responsibility for their advice.

The Real Risk: Underinvesting in the Human Layer

The real danger of this week’s market reaction is not that AI will replace security professionals. It’s that businesses will draw the wrong conclusion and underinvest in the human capability that makes AI effective. When you deploy AI security tools without skilled humans to operate them, the problems compound quickly. Alert fatigue intensifies because AI generates more findings faster, and without experienced analysts to triage and prioritise, the noise increases and real threats get buried. Leadership develops false confidence — they see an AI tool running and assume the organisation is protected, when the tool is only as good as the humans interpreting its output. Governance gaps emerge because nobody is validating AI suggestions against regulatory requirements and business context. And when an incident occurs, the response collapses because there are no humans who understand the environment well enough to coordinate across technical, legal, and communications teams.

The organisations that will navigate this transition successfully are not the ones that replace their security teams with AI tools. They are the ones that upskill their teams to work alongside AI — building the judgement, context, and decision-making capability that turns AI output into better security outcomes.

What This Means for Australian Businesses

For Australian businesses thinking about where to invest in their security capability right now, the answer is not to wait and see whether AI makes the human role redundant. The answer is to build the human capability that makes AI useful in the first place.

Practically, this means:

  • Invest in people who can interpret AI output — not just run the tools, but understand the context, assess the risk, and make the call.
  • Build or partner for security operations — whether through an internal team or a managed service like Spectra, ensure you have skilled humans in the loop monitoring your environment.
  • Don’t confuse automation with security — deploying an AI scanner is not the same as having a security programme. The tool is one component. The people, processes, and governance around it are what determine whether it actually reduces risk.
  • Prepare for the skills shift — the security professionals your organisation needs in 2027 will look different from those you needed in 2023. Prioritise analytical thinking, incident response capability, communication skills, and the ability to work effectively with AI tools.

The Bottom Line

The headlines this week painted a picture of an industry in freefall — cybersecurity stocks cratered, a major fintech company cut nearly half its people, and the narrative was clear: AI is coming for your job. But look beneath the surface and the reality is different. The companies building these AI tools are hiring aggressively. The tools themselves are designed with mandatory human review. The research shows that AI performs best with human oversight, not without it.

The future of cybersecurity — and technology work more broadly — is not humans versus AI. It is humans with AI. Smaller teams, higher skill, greater impact. The security engineer is not finished. The one who refuses to adapt might be — but the one who learns to leverage AI as a force multiplier, who develops the judgement and context that AI cannot replicate, who can make the decisions that matter when the stakes are highest, is more valuable than ever. The question for every business is the same one it has always been: do you have the right people making the right decisions about your security? AI changes the tools those people use. It does not change the fact that you need them.

