The email arrives on a Tuesday morning, formatted exactly like every other internal communication the employee has received from the executive team. The sender name is correct. The writing style is familiar. The request is unusual — an urgent wire transfer, a request to forward credentials, an instruction to click a link and verify account access — but it is framed with the kind of context that makes it feel legitimate. The project is real. The pressure is plausible. The only thing that is not real is the sender.
This is what phishing looks like in 2026, and it is categorically different from what it looked like five years ago. The generic greeting, the awkward phrasing, the obvious urgency, the mismatched sender domain — the tells that security awareness training taught employees to recognize — have been eliminated by AI tools that can generate convincing, personalized, contextually appropriate deception at industrial scale. The arms race between attackers and defenders has shifted, and the implications for how organizations protect themselves are significant.
The uncomfortable reality is that security awareness training, delivered as the primary defense against phishing, is no longer adequate. It remains valuable. But it is not sufficient, and organizations that rely on it as the centerpiece of their email security strategy are accepting a level of residual risk that the current threat environment does not justify.

The traditional model of phishing detection relied on pattern recognition. Phishing emails had detectable patterns: structural anomalies, linguistic errors, mismatched metadata, generic content that was not tailored to the specific recipient. Security awareness training worked by teaching employees to recognize those patterns and pause before acting on messages that exhibited them. For the phishing campaigns that most employees encountered through the 2010s and early 2020s, this approach was reasonably effective.
Generative AI has systematically eliminated the patterns that made phishing detectable. Large language models can produce prose that is grammatically correct, stylistically consistent with the impersonated sender, and contextually specific to the target organization. Public information about the organization, its leadership, its clients, its ongoing projects, and its internal communication style is readily available across LinkedIn, company websites, press releases, and regulatory filings. AI tools can synthesize that information into a phishing message that reads like it was written by a person who knows the organization well — because, in a meaningful sense, it was.
The result is that the tell-tale signs of a phishing attempt that security training teaches employees to look for are increasingly absent from the most sophisticated attacks. The email looks right. The context is plausible. The request, while unusual, is within the range of things that people in the organization actually do. The employee who clicks is not failing because they are untrained. They are failing because the attack was designed to defeat a trained human evaluator.
AI did not merely make phishing better. It made it indistinguishable from legitimate communication. That is a different problem entirely.
The Numbers Behind the Shift
Social engineering and business email compromise attacks grew from 20 percent to nearly 26 percent of all security incidents in 2026, a trajectory driven directly by the availability of AI tools that lower the skill threshold for creating convincing deception. Security researchers analyzing Microsoft 365 environments found more than 200,000 malicious attachments sitting in organizational mailboxes, a significant portion of which had bypassed standard email filtering because they did not match known malicious signatures.
Business email compromise, the category of attack in which a fraudulent email impersonates a trusted party to redirect payments, harvest credentials, or authorize fraudulent transactions, resulted in losses exceeding $2.9 billion in reported incidents in 2024. The actual figure, accounting for incidents that were never reported, is substantially higher. BEC works because it does not require malware. It does not require a technical exploit. It requires a convincing email and a recipient who has not been given the technical controls that would prevent the action the email requests. Training does not prevent the action. Technical controls do.
Why Training Alone Is Structurally Insufficient
The case for security awareness training is not wrong. Employees who understand phishing tactics make better decisions at the margin. Organizations that run phishing simulations identify employees who need additional support and create cultural awareness of the threat. Training reduces click rates and improves the likelihood that suspicious messages are reported. These are real benefits.
The structural problem is that training assumes a human evaluator can reliably distinguish a sophisticated phishing attempt from legitimate communication, given sufficient knowledge and motivation. In an environment where AI-generated phishing is designed specifically to defeat that evaluation, the assumption breaks down. And it breaks down at exactly the moments of highest risk: when employees are busy, when the request comes from someone with authority, when the context provided in the email is accurate and plausible, and when the urgency of the request creates pressure to act quickly rather than verify carefully.
Security professionals describe this as the human factor problem, and it is well understood in the field. Humans are not reliable security controls. They make decisions under cognitive load, under social pressure, under time constraints, and under conditions where the cost of incorrect skepticism — offending a senior leader, delaying an urgent business process — is perceived as higher than the cost of compliance. Technical controls do not have these vulnerabilities. They evaluate every message against the same criteria, without fatigue, without social pressure, and without the cognitive shortcuts that make human judgment exploitable.
Training improves human judgment at the margin. Technical controls eliminate the dependency on human judgment entirely.
What a Real Defense Looks Like in 2026
Effective email security in the current environment requires a layered architecture in which technical controls intercept threats before they reach the human decision point, and in which the human layer is the last line of defense rather than the primary one.
Email filtering and sandboxing at the perimeter. Advanced email security platforms analyze every inbound message before delivery, detonating links and attachments in a sandboxed environment where potential payloads can execute safely so the platform can determine whether they are malicious. Messages that carry malicious content are blocked before they reach the inbox. This layer catches the majority of commodity phishing, the high-volume, lower-sophistication campaigns that make up the bulk of phishing traffic, without any human involvement.
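To make the perimeter layer concrete, here is a deliberately simplified sketch of the kinds of checks a filtering gateway performs. Real platforms go far deeper (sandbox detonation, sender reputation, ML classifiers); the function name `screen_message`, the `RISKY_EXTENSIONS` list, and the heuristics are illustrative assumptions, not any vendor's actual logic:

```python
import re

# Illustrative only: a toy perimeter filter showing the category of checks
# a real email security gateway performs at far greater depth.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".iso", ".html"}
URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def screen_message(sender_domain, reply_to_domain, body, attachments):
    """Return a list of reasons to quarantine; empty means deliver."""
    reasons = []
    # Header mismatch: replies silently routed to a different domain.
    if reply_to_domain and reply_to_domain != sender_domain:
        reasons.append("reply-to domain differs from sender domain")
    # Links whose host does not belong to the claimed sender.
    for host in URL_PATTERN.findall(body):
        if not host.lower().endswith(sender_domain):
            reasons.append(f"link to external host: {host}")
    # Attachment types commonly used to deliver payloads.
    for name in attachments:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext in RISKY_EXTENSIONS:
            reasons.append(f"risky attachment type: {name}")
    return reasons
```

A message that trips any check would be quarantined or routed to sandbox analysis rather than delivered; a clean message passes through with no human involvement at all.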
Behavioral anomaly detection that identifies unusual patterns. Sophisticated BEC attacks frequently do not carry malicious content. They are textually benign messages making fraudulent requests. Detecting them requires analyzing the behavioral patterns around the message: Is this sender communicating with this recipient in an unusual way? Is the request inconsistent with the established communication pattern between these parties? Is the message asking for an action that falls outside normal business process? AI-powered anomaly detection can evaluate these questions at scale, flagging messages for human review when the behavioral signature is inconsistent even when the content is clean.
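The behavioral questions above can be reduced to a toy baseline for illustration. Production systems model many more signals (timing, thread history, writing style) with machine learning; the `CommunicationBaseline` class, the keyword list, and the `min_history` threshold here are all assumptions chosen to show the shape of the idea:

```python
from collections import Counter

# Toy behavioral baseline: flags sensitive requests from senders who have
# little or no established history with the recipient.
class CommunicationBaseline:
    def __init__(self):
        self.pair_counts = Counter()  # (sender, recipient) -> messages seen

    def observe(self, sender, recipient):
        """Record one legitimate message between a sender/recipient pair."""
        self.pair_counts[(sender, recipient)] += 1

    def is_anomalous(self, sender, recipient, body, min_history=5):
        """True when a sensitive request arrives over a weak relationship."""
        history = self.pair_counts[(sender, recipient)]
        sensitive = any(kw in body.lower() for kw in
                        ("wire transfer", "gift card", "credentials", "urgent payment"))
        # The same request is routine from an established contact but
        # suspicious from a first-time or lookalike sender.
        return sensitive and history < min_history
```

Note what this catches that content filtering cannot: the lookalike-domain message is textually clean, but the relationship behind it has no history, so it gets flagged for review.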
Multi-factor authentication as the backstop for credential phishing. Credential phishing — attacks designed to capture usernames and passwords — is defeated by MFA even when the phishing attack succeeds and the employee enters their credentials on a fraudulent site. If accessing the account requires a second factor that the attacker does not have, the stolen credential is not actionable. MFA does not prevent credential phishing. It prevents credential phishing from resulting in account compromise, which is the outcome the attacker is seeking.
Incident response capability that contains damage quickly. In an environment where some phishing attacks will succeed despite the best technical controls, the speed and quality of incident response determines the scale of the damage. An organization that detects a compromised account within minutes and contains it before lateral movement occurs experiences a very different outcome than one that detects it days later, after the attacker has had time to establish persistence, identify valuable data, and prepare the next stage of the attack. palmiq's managed detection and response capability provides continuous monitoring that identifies compromised accounts and anomalous behavior in real time, enabling rapid containment before the initial access becomes a broader incident.

palmiq deploys Acronis Email Security as the core of our clients' email protection architecture, integrated with the broader Acronis Cyber Protect Cloud platform so that email security signals are shared with endpoint protection, user behavior analytics, and incident response capabilities. The integration matters because phishing attacks that bypass email filtering may still be detected when the payload executes on the endpoint, or when the behavior of the account that was compromised deviates from its established baseline.
For clients in regulated industries, the Acronis platform produces the compliance documentation that frameworks like HIPAA and CMMC require for email security controls — including evidence of filtering capability, records of detected and blocked threats, and audit trails for security incidents — as a natural output of operating the environment rather than as a separate documentation effort.
Security awareness training remains part of palmiq's recommended security program because the human layer, even when it is not the primary defense, is still a meaningful contributor to overall resilience. Employees who know what phishing looks like, who report suspicious messages rather than simply deleting them, and who apply appropriate skepticism to unusual requests add value at the margin. But they add that value within an architecture where technical controls have already filtered the volume and sophistication of what reaches them. The training improves a defense that is already working. It is not the defense itself.
The question most organizations are asking about email security is whether their employees can identify a phishing email. It is the wrong question. The right questions are whether technical controls are intercepting phishing before it reaches employees, whether the controls that are in place were evaluated against the current generation of AI-generated phishing rather than the previous generation, whether MFA is deployed comprehensively enough that credential phishing cannot result in account compromise, and whether the monitoring capability is sufficient to detect a compromised account before the attacker has time to do meaningful damage.
For most organizations, a candid assessment of these questions surfaces gaps. Not because the IT team has been negligent, but because the threat has evolved faster than most security programs have adapted to it. The organizations that have closed the gap are the ones that recognized the mismatch between the threat environment and their defenses early enough to address it proactively, rather than in the aftermath of an incident that made the gap undeniable.
palmiq works with organizations across the Americas to build email security architectures that are designed for the threat environment that exists today, not the one that training programs were written for three years ago. That conversation is available to any organization willing to take an honest look at whether its current defenses are adequate.
Is your email security keeping up with today's threats?
Contact palmiq for an email security assessment — palmiq.com | info@palmiq.com
