What Mexico’s giant data breach tells us about the new hacking age
Ian Copeland
- Published
- Opinion & Analysis

A huge hack of Mexican government systems exposed nearly 195 million identities and showed how everyday AI tools are being used to build attacks step by step. Have ChatGPT and other platforms opened hacking up to the masses? asks Ian Copeland, Techno-Sociology & Futures correspondent.
Between December 2025 and January 2026, a sustained intrusion across at least 10 Mexican government agencies and a financial institution led to the extraction of roughly 150GB of data. The haul became one of the largest public-sector data breaches on record, including voter files, taxpayer records and other personally identifiable information relating to nearly 195 million identities.
According to some reports, the breach may have been carried out by a single attacker using “jailbroken” AI chatbots, pushed past their built-in safeguards, to develop and refine the attack.
In simple terms, they kept asking tools such as Claude and ChatGPT how to get in, refining their prompts step by step until something worked. More than a thousand prompts were reportedly used to push the systems into suggesting ways to exploit weaknesses, with the attacker moving from one model to another whenever a platform refused to help.
The approach, therefore, relied on natural language rather than specialist coding skill. That matters because it shows how the barrier to entry is changing. When an attack can be built through repeated prompts and incremental adjustment using widely available AI platforms, often at little or no cost, it no longer depends on a high level of technical expertise.
It would be a mistake to treat this as a problem with one company or one model, or to focus on which system failed first. The attacker could use any sufficiently capable model, switching between them as needed.
AI has supported cybercrime for years, but what has changed is the ease and scale. Attacks can now be developed, tested and refined quickly and at low cost using widely available systems, while defences and regulations remain built for a world in which those activities required far more time, skill and resource.
The consequence is a broader and more accessible attack surface that is harder to control.
Cybersecurity has always favoured the attacker. Those defending systems have to protect everything, while an attacker only needs to find a single weakness. What kept that risk in check, until recently, was difficulty. Breaking in took time, skill and sustained effort, which limited how many people could do it and reduced the chances of success.
Widespread access to AI does not make people malicious, but it lowers the barrier for those who choose to act maliciously and continues to expand the pool of people able to carry out attacks.
More importantly, it makes experimentation cheap.
Attackers can quickly and easily probe, adjust, reframe, and try again. They can roleplay as authorised testers. They can present themselves as bug bounty researchers. They can iterate through hundreds of variations until something slips through. And if one system closes the door, another may not.
Once inside, AI can help extend access, identify further weaknesses, and adapt to defensive responses.
There is already a familiar policy response taking shape, centred on tighter safeguards around the technology itself. That includes stronger guardrails, controls built into the models, identity checks for access to more advanced systems, and greater use of licensing, auditing and reporting requirements.
These measures do have value in that they set clearer minimum standards, make it more difficult for bad actors to operate openly, help expose fraudulent providers and introduce a level of accountability that has often been missing.
Regulation, as it stands, focuses on controlling a limited number of known providers. The problem is that the tools being used are not confined to those providers, which means a large part of the risk sits outside that control.
We may be entering cybercrime’s “Napster moment”, less about one breach and more about a familiar pattern. Capability has escaped the place it was meant to be controlled. We have seen this before. The difference is that this time, what is spreading is not content, but the ability to act.
Open-source variants, which attackers can run locally with no oversight, are becoming increasingly powerful. While they often lag behind in general capability, they can be modified, specialised, and have their restrictions weakened or removed entirely.
These are systems that sit largely outside traditional regulatory reach.
And, even if they could be regulated, are regulations going to help anyway? Regulation moves slowly by design. It is negotiated, constrained, and jurisdiction-bound. That makes it poorly suited to a capability that evolves quickly, crosses borders easily, and does not remain inside formal systems.
In what is probably the fastest moving industry of all time, how likely is it that regulations will be able to keep up?
There is a tempting counterpoint, however: that AI itself could accelerate regulation. It is possible, but perilous. Delegating oversight to the same class of systems that is expanding the attack surface introduces its own risks. How much would it be relied upon? And where does it end?
We are left in an uncomfortable middle ground. Regulation matters, but it does not reach the full capability landscape. It governs what is visible, not the full, widening range of what is possible.
That leaves defenders with a more practical question: if you cannot fully constrain capability, what are you actually optimising for?
The traditional answer has been prevention. Harden the perimeter, patch vulnerabilities, block access, and stop the breach from happening at all.
That does not go away. It cannot. But it is no longer sufficient as the primary strategy.
When experimentation becomes cheap and capability expands, the focus shifts beyond prevention alone. It becomes a question of how quickly unusual behaviour can be detected, how fast compromised systems can be isolated, how effectively any spread can be contained, and how much the overall impact can be limited. In other words: how well can you operate under the assumption that something will eventually get through?
The advantage may not go to the organisation that prevents every intrusion. It may go to the one that can recognise, contain, and recover before iterative probing turns a foothold into a systemic failure.
To understand how organisations are responding to this shift in practice, I spoke to Marcin Zbozien (CISM, CGEIT, CDPSE), Information Security Officer at METCLOUD, who pointed to the growing need for defences that can operate at the same speed and scale as the threats they are designed to counter:
“As artificial intelligence enables faster, more sophisticated, and highly scalable attacks, effective response mechanisms must be equally robust: capable of detecting threats, containing them, and delivering appropriate countermeasures in real time. In this context, there is no viable alternative to adopting equally, if not more, advanced AI-driven defences,” he told me.
“It is important that governance frameworks evolve in parallel. Organisational policies, risk management models, and decision-making processes need to strike a balance between control and agility, ensuring that advanced defensive capabilities can operate effectively within a responsive and enabling organisational environment.”
Parts of the industry are already moving in this direction. AI-driven systems are being used to identify unusual patterns, simulate potential attack paths and automate responses. The same capabilities that can be used to develop attacks can also be applied to detect, contain and respond to them.
As I explored in The Exodus Directive, AI is not inherently good or bad. Rather, it is a tool shaped by those who use it. That is precisely what makes this moment difficult.
And into that uncertainty steps the market.
Cybersecurity has always had a theatre problem. The less visible the threat, the easier it is to sell certainty. AI makes that worse. As the attack surface becomes harder to map, vendors may increasingly claim visibility they cannot fully provide, and protection they cannot reliably guarantee.
As Ian Vickers, CEO of METCLOUD, put it: “AI is no longer optional in cybersecurity; it is foundational. As attackers industrialise AI to scale speed, precision, and deception, organisations must respond in kind. Regulation remains essential, but it cannot fully anticipate what is rapidly becoming possible.
“This is where regulatory sandboxes play a critical role, giving innovators a controlled environment to test technologies, validate business models, accelerate time to market, and embed consumer protection from the outset. The future of defence lies in pairing responsible oversight with decisive adoption of AI across its full spectrum.”
If no one has a clear view of the full attack surface, it becomes difficult to judge how effective a security provider really is. In that situation, there is a risk that protection is based on assumptions rather than a complete understanding of the threat. That question becomes more important as AI is used on both sides, increasing complexity and reducing visibility.
It also brings the focus back to people. Your data is no longer only at risk from highly trained specialists working at the limits of technical capability. It is increasingly exposed to repeated attempts, trial and error, and AI-assisted exploration by people who would previously have been held back by complexity. The distance between simple curiosity and a successful intrusion has narrowed.
Not everyone will cross it. Most will not. But more people can approach it, test it, and, with enough persistence, step over it.
That is the real threshold change.
If the limiting factor is no longer skill but persistence, the uncomfortable reality is this: we are no longer securing systems against a fixed class of attackers but against a capability that is getting cheaper, faster and harder to see.

Ian Copeland is a British technologist, entrepreneur and author with more than two decades’ experience designing complex enterprise IT and digital systems. Founder of a UK-based digital agency and author of The Exodus Directive, he specialises in artificial intelligence, blockchain infrastructure, quantum computing and digital identity. As Techno-Sociology & Futures Correspondent for The European, he writes on AI governance, decentralised systems, automation, digital power structures and the long-term societal consequences of emerging technologies.
Main Image: Beka Ichkiti/Pexels