Cracking open the black box: why AI-powered cybersecurity still needs human eyes

As phishing threats accelerate, the next stage of defence requires transparent systems, accountable decision-making, and AI that is continually strengthened through human verification

Artificial intelligence (AI) is rapidly transforming the cybersecurity landscape, promising unmatched speed as organisations face a surge in sophisticated threats. Phishing emails once exposed by clumsy errors are now crafted in seconds by threat actors using modern generative tools, arriving polished, personalised, and nearly indistinguishable from legitimate communication. In an effort to keep pace, many organisations lean heavily on AI-driven detection, hoping automation alone can outsmart attackers.

But beneath the promise of efficiency lies a growing set of risks. Fully automated systems still struggle with nuance, context, and entirely new attack patterns. Even advanced filtering technologies often miss subtle or emerging phishing threats, producing false negatives that never trigger an alert and quietly slip into employee inboxes. In some cases, additional malicious emails surface only after human-verified review, revealing gaps that automation alone failed to catch.

A deeper concern is the opacity of many AI defences. These systems often act as sealed decision engines, blocking or releasing emails without offering any explanation. That lack of visibility creates real challenges: security teams can’t assess accuracy, compliance officers can’t audit decisions, and leadership can’t fully understand the risks being accepted on the organisation’s behalf. Matters are further complicated by the enormous volumes of behavioural data some systems collect, from communication patterns to relational “social graphs,” used to determine what constitutes “normal” activity. Without transparency, organisations can’t be certain whether their data is handled ethically, securely, or within regulatory bounds.

These limitations reveal why human judgement remains essential. Humans bring context, intuition, and the ability to recognise evolving patterns long before an AI model has been retrained to detect them. Pairing automation with human oversight strengthens accuracy, reduces false positives, and allows security teams to focus on the threats that truly matter. It also reinforces a strong reporting culture, an often underestimated but critical layer of defence that gives organisations early visibility into attacks that breach automated filters.

Just as important is maintaining control over sensitive data. Once emails or behavioural insights enter third-party platforms, organisations may lose visibility into how that information is stored, used, or shared. Flexible deployment options and transparent service models ensure that even when AI is involved, businesses retain ownership and oversight of their own intelligence.

Control of AI must operate on two levels. At the micro level, organisations need clear visibility into how automation and AI systems make decisions, ensuring they can trace and govern the logic behind every action within their security stack. This kind of transparency is essential not only for operational confidence but for demonstrating responsible data handling.

At the macro level, oversight is increasingly shaped by global and regional regulations that define what systems may access, process, or retain. Although AI-specific regulation remains limited, frameworks such as the EU's GDPR and NIS2, together with an increasingly stringent regulatory stance across Europe, set boundaries on how personal and behavioural data can be used, demanding explicit accountability, consent, and auditability from the technologies that touch that data. As these regulations evolve, they serve as both guardrails and expectations for the tools organisations adopt. At board level, the same stipulations should be applied to all AI solutions so that best practice is embedded now; this will make compliance with future regulation far easier and help prevent or limit reputational and financial risk.

Adopting these micro and macro controls together ensures that AI operates not as an unchecked black box, but as a governed, transparent component of a broader, compliant security strategy.

Across Europe and beyond, data protection frameworks like GDPR, DORA, and HIPAA reinforce the need for responsible, transparent AI. These regulations don’t just govern how organisations collect and process personal data; they set clear expectations for how decision-making must be explained, monitored, and challenged. As AI-driven security tools ingest communication patterns, behavioural signals, and sensitive internal data, compliance becomes more than a legal obligation; it becomes a fundamental part of maintaining trust. Under these regulations, organisations must ensure that automated systems can demonstrate fairness, provide meaningful explanations for their decisions, and give individuals visibility into how their information is used. In this environment, deploying AI without human oversight or transparency isn’t just risky; it’s incompatible with the regulatory standards shaping modern cybersecurity.

The future of phishing defence is not about choosing between humans and AI. It’s about building an ecosystem where each strengthens the other, where automation provides speed, humans provide clarity, and transparency ties it all together. By demanding openness, preserving data control, and keeping humans firmly in the loop, organisations can build security programmes that are not only faster and more adaptive but trustworthy by design.

As AI continues to advance and reshape the threat landscape, its capabilities will undoubtedly grow more powerful. But no matter how quickly these systems evolve, humans will remain indispensable, providing the context, oversight, and strategic judgement that anchor innovation to transparency and responsible governance. In the end, the strength of our defences will rely not on automation alone, but on how intentionally we guide it.

AI will continue to evolve, but humans will always be needed in the process.

Further information
Produced with support from Cofense. To find out more about its enterprise-grade threat-intelligence and phishing-response services, visit www.cofense.com.
