Phishing has a few basic conceptual problems which no one seems to want to address:
- You don't really need to be "fooled" by phishing, not in any deep sense. You just need to be tired one morning and click without looking. Even if you know how to check for phishing, you might need to click on content from tens to hundreds of emails per day. Scale that out over a year, and even the most educated among us can fail due to an honest mistake we otherwise could have prevented.
- Part of the problem is just that a normal workflow is: receive email --> click on URL --> enter credentials into 3rd party website. ie, this is intentional and valid behavior for most white collar workers on a daily basis. This behavioral pattern is why phishing works, and in reality, email should not be a vector for this path. Until companies and technologies stop assuming this makes sense, phishing will continue to be successful.
> - Part of the problem is just that a normal workflow is: receive email --> click on URL --> enter credentials into 3rd party website. ie, this is intentional and valid behavior for most white collar workers on a daily basis.
Here's a crazy story of that happening...
I had a friend who was an employee of a Fortune 100 corporation. Part of employee training was not to click on links in emails. In the 1990s, with the rise of the internet, they had an internal security "red team" periodically send fake phishing emails to employees. If an employee mistakenly clicked on a link in one of those emails, the red team would send a notice to the employee's manager. It worked well because employees would not want to be embarrassed by a manager having to review the security policy with them to get their access back.
When she retired, all that training became useless and she was phished by a fake AT&T email. Why? Because with the rise of smartphones, every _legitimate_ company started sending emails that had useful tappable links. With a touchscreen, you can't hover your finger over the link to see what the underlying URL is. People just normalized pressing links in transactional emails as a convenient thing to do. E.g. Amazon sends an email with a link to the order status. A legit bank will send an email with a link for "Please review your security settings."
Smartphones reversed 15 years of not clicking on email links.
The first company I worked for as a developer was like this, except worse.
We got hit with some Christmas virus. One of the devs was talking about how he had mistakenly clicked on the link, but nothing happened. We were at lunch and suddenly we were all looking at each other like, "Dave, this isn't good!" We told him to call support, because we had all seen the emails from security saying not to click on any links in emails while so many of these were making the rounds.
They took his laptop, reimaged it, and gave it back to him. The funny part was the Outlook team disabled any links in any emails he got from then on. Not sure how they did it, but if you wanted to send him a link, you had to send it to his personal email or over one of his social media accounts. Any time he got a link, if it was for business, he would have to call support, open a ticket, and then an hour later they would send him the link to open.
It drove the guy nuts. He asked repeatedly to have them enable the links, but they basically told him that once you were on the list, you were on it for good. He quit after four months and said one of the most infuriating things was security never allowing him to get off of the "naughty" list.
Re: the receive email -> click URL -> enter credentials.
We need SSO to stop being gated behind enterprise tiers. The SSO tax is real, and removing it would help solve this problem. I've moaned about this before as the leader of an IT team at a medium-sized company reliant on a lot of SaaS.
Enterprise plans are too much (both in terms of cost and features) for us, but we are smart enough to have security requirements, and one of those is SSO & SCIM. Very few SaaS products offer that on anything but the most expensive "call for quote" tiers. That's a huge problem.
That whole email invite->click link->enter credentials workflow is gone with proper SCIM provisioning and SSO. It's the bare minimum a SaaS product should offer and should be on the lowest available tier.
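For anyone unfamiliar with it: with SCIM the identity provider creates and deactivates accounts through an API instead of the app emailing an invite link, and the user then signs in through SSO. Below is a minimal sketch of what that provisioning call looks like; the endpoint, token, and user details are all made up, so treat it as an illustration rather than any particular vendor's API.

```python
import requests  # third-party: pip install requests

# Hypothetical SCIM 2.0 endpoint and bearer token issued by the SaaS vendor.
SCIM_BASE = "https://example-saas.invalid/scim/v2"
TOKEN = "..."

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jane.doe@corp.example",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jane.doe@corp.example", "primary": True}],
    "active": True,
}

# The IdP pushes the account; the employee signs in via SSO and never has to
# follow an "accept your invite" link from their inbox.
resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=new_user,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
    timeout=10,
)
resp.raise_for_status()
print("provisioned user id:", resp.json()["id"])
```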
The other problem is services like DocuSign, which offer free trials that are abused to send out fake documents. The user gets a legitimate email from DocuSign's domain, clicks on it, and opens a real document on the real DocuSign site, but the doc has a link to the phishing site.
All DocuSign needs to do is require a credit card for the trial, or require contacting sales for one, and the problem is solved. But they don't, so as far as I'm concerned they are complicit in enabling phishing.
Unfortunately, SSO often gets half-assed as a compliance exercise, and now you have to enter your SSO username/password and your MFA token in random places a dozen times per day.
The fact that your employer might direct you to a URL that doesn't look like their normal domain (or through some kind of link shortener, so you can't see it without clicking) for legitimate reasons basically undoes all that security, yeah. Why can't security teams focus on correcting those parts?
The normal workflow is so ingrained in our company culture that I received an email from our IT team about not clicking on embedded links, and that email had an embedded link to "learn more". ;-)
I’m somewhat surprised that enterprise email solutions still allow links… like, at all, in general.
The servers should scan emails for links and not allow them. If a link somehow slips through, the client should not render it as something you can click on and follow.
On work machines where everything is managed by IT, there shouldn’t be any need to send links around anyway. If anyone thinks they need to send a link around as an ongoing process, then that’s the sign that the process still needs to be designed.
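As a very rough sketch of that "scan for links and refuse them" idea: a gateway could do nothing more than look for URL-shaped strings in the text parts and reject or quarantine on a match. This is only an illustration (the regex is deliberately crude, and the actual policy decision is left to whatever hook calls it).

```python
import email
import re
from email import policy

URL_RE = re.compile(r"\bhttps?://\S+|\bwww\.\S+", re.IGNORECASE)

def contains_link(raw_message: bytes) -> bool:
    """Return True if any text part of the message contains something URL-shaped."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.walk():
        if part.get_content_type() in ("text/plain", "text/html"):
            if URL_RE.search(part.get_content()):
                return True
    return False

# A milter or gateway hook could reject, quarantine, or just tag the message
# based on this result.
```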
This sounds quite short-sighted to me. You can’t imagine needing links sent around in the everyday workflow at the office, yet I can’t imagine not using links in emails.
How would people interact with vendors and salespeople that send links to product specs, troubleshooting articles, etc?
If it is a vendor you are buying hardware from, they could send a part number, for example. The workflow should be: go to their site and search it up.
I don’t think it is short-sighted. Actually, I think if it has a flaw it is the opposite one. Workflows that involve mailing around links are convenient for quick little in-the-moment thrown-together actions. It’s liberating. I’ve done it too, sure. But in the long run everything should be integrated somehow or another, and sending links should not be necessary. One might say it is ridiculous to expect every process to reach that end state. Possibly true, but it is a good goal…
In general, if a program is not a Web browser, I do not want that program getting clever with displaying a URL as an active link. Just show it as the URL, https prefix and all, so I can see where it will be taking me if I copy it into the browser. (This is one of my very few gripes with Google docs.)
I don't know how exactly IT at work did it, but all links in emails (except some internal links) get replaced with a link to a URL "checker", I guess to block the link if it goes to a known phishing domain (I guess; I don't think any link ever got blocked that way). The issue is, the original link is part of the URL and sometimes gets mangled, and it's annoying when you just want to copy the link to paste it somewhere else.
Many corporations do have systems that click every link in inbound emails to check them, I assume against huge lists or heuristics of suspicious domains. Not too different from “endpoint security” solutions that check every link you click in a browser.
Exchange ought to have the capability of rewriting the links' hrefs to a "link gateway" where a sandboxed renderer presents the outside page, maybe running over rdp and purged after the end of every session.
The local Blink (or WebKit) renderer should be for internal or whitelisted sites only.
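Microsoft's Safe Links and similar products already rewrite URLs in roughly this spirit, just without the sandboxed rendering. Here is a minimal sketch of the rewriting half only, assuming a hypothetical gateway at linkcheck.corp.example and an internal allowlist; a real implementation would parse the HTML instead of using a regex.

```python
import re
from urllib.parse import quote

GATEWAY = "https://linkcheck.corp.example/open?url="           # hypothetical gateway
INTERNAL = ("https://intranet.corp.example", "https://wiki.corp.example")

def rewrite_hrefs(html: str) -> str:
    """Point every external href at the link gateway; leave internal links alone."""
    def repl(m: re.Match) -> str:
        url = m.group(2)
        if url.startswith(INTERNAL):
            return m.group(0)
        return f'{m.group(1)}{GATEWAY}{quote(url, safe="")}{m.group(3)}'
    return re.sub(r'(href=")([^"]+)(")', repl, html)

print(rewrite_hrefs('<a href="https://evil.example/login">Review your account</a>'))
```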
And then what? The end goal of phishing is either that the victim enters credentials or downloads a malicious file. Neither of which would be prevented by your scheme (even antivirus products are imperfect). Phishing with the goal of exploiting a zero-day in the browser is exceedingly rare.
> After sending 10 different types of phishing emails over the course of eight months, the researchers found that embedded phishing training only reduced the likelihood of clicking on a phishing link by 2%.
Company: Stop clicking on links to third party sites.
Also Company: All of IT, HR, benefits, cloud storage, customer management, and employee portal is moving to its own third party platform!
Yeah. Even worse(?) is banks like Citizens sending customers emails and text messages with links to shady-seeming domain names. No wonder so many people fall for phishing attacks.
Definitely worse. This sort of thing still happening in 2025 is completely bonkers to me. Recently a financial institution sent me an email asking me to "re-validate my ownership" of a linked account by uploading a bank statement. The link was to a completely unrelated and unknown domain (not even a shortener). The message itself didn't address me by name but simply said "Dear Customer". It also didn't include any legitimate info like partial account numbers. And when I logged into my account, there was no notice or message mentioning any re-validation requirement. I was convinced it was a low/medium-effort phishing attempt and submitted it to their support channel so others could be warned. It turns out it was actually their legitimate email. I told the CS rep that they're basically training their customers to fall for the next real phishing attack. Won't do any good, I'm sure.
"Company: Stop clicking on links to third party sites.
Also Company: All of IT, HR, benefits, cloud storage, customer management, and employee portal is moving to its own third party platform!"
Smart companies validate and tag those third-party emails as "partner" or similar. That way users only need to apply the extra scrutiny to non-partner external emails.
Yes although this runs the risk of what you commonly see at daycares and schools.
There'll be a sign that says "Peanut free zone" and everyone will read it and respect it.
Then there'll be a sign that says "Please be sure to pick your kid up by x o'clock." And everyone will read it and respect it and silently stop looking at it cause they know.
And then there will be a sign that says "Please keep your child at home if you suspect they might be sick." And everyone will read it and be a little offended because why would they do that knowingly?
After a while the entrance will be plastered with notices and warnings that get put up and not taken down. And nobody reads them because they probably already know and it's not worth spending 20 minutes reading the entire wall.
I get the external/partner emails. And a notice that outlook removed extra line breaks from the message (whew). And a notice that if there are problems reading the email I can view it in a web browser. And a helpful suggestion that Copilot can give me the tldr.
What’s to stop a phishing email putting a “verified by IT anti-phishing software” line at the top of the email? People don’t pay attention to special verification flags when they are there, so they don’t see them when they’re missing.
You have ingress filters that strip the subject tag out of anything and only add it back if it is verified. It's really not that hard and the training is supposed to train people. Nothing is perfect, nor does it need to be with defense in depth.
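As a concrete sketch of that ingress step (the domain list, tag text, and authentication result below are all placeholders; in practice this lives in the mail gateway's rule engine rather than in code you write yourself):

```python
PARTNER_DOMAINS = {"payroll-vendor.example", "benefits-vendor.example"}  # maintained by IT
TAG = "[Partner]"

def tag_subject(subject: str, from_domain: str, dmarc_pass: bool) -> str:
    # Strip any pre-existing tag so an outsider can't smuggle one in.
    clean = subject.replace(TAG, "").strip()
    # Re-add it only for authenticated mail from a known partner domain.
    if dmarc_pass and from_domain.lower() in PARTNER_DOMAINS:
        return f"{TAG} {clean}"
    return clean

print(tag_subject("[Partner] Your W-2 is ready", "evil.example", dmarc_pass=True))
# -> "Your W-2 is ready"  (forged tag stripped, nothing added back)
```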
Why is clicking the link a failure? I thought this was the point of keeping my browser up to date, so I can trust the sandbox!
A couple of times, I got emails that seemed suspicious, but I figured I would click the link to investigate further. I was on high alert and would not have entered login credentials or opened an executable or anything like that, I just wanted to check it out and see.
Of course, it was a phishing audit and I failed. WTF?
Phishers are working completely blind, thus any amount of info going back to the phishers is a benefit to them.
Just getting server logs from an opened link lets them know their messages aren't being quarantined and their server is reachable through the target's firewall.
The user agent and how the links are accessed give info about who is opening them (a few every couple minutes == all good; 10 links sent to 10 different employees all opened within seconds with a non-standard user agent == you're being investigated and should burn the domain).
It's been a few years since I've done phishing engagements so details may vary with how things are done today.
But the goal is to limit any information going to the bad guys. Let them think their messages are being blocked until they go elsewhere.
*edit: That being said, phishing at least one person at a large company is not particularly hard. There's too many companies using domains indistinguishable from shady links for one thing. Limiting engagement is good, but companies also need to be prepared for the eventuality that somebody will get fooled.
The reason might be that the training programs are just ridiculously bad. I clicked on a pretend phishing link out of interest to see what happens. I was treated to a lecture of how clicking on links in emails is always bad and to never do it.
That advice would be fine (albeit maybe extreme) if it weren't the case that for the last year I have been spammed by emails from said training company telling me to click on the included link to complete the next cybersecurity course. Even worse, they use some nondescript, weirdly named domain that isn't their own to host the training courses. So if anything, the courses are training people to click on phishing emails.
I recently got reported for clicking a phishing link three times. Looking at the audit log all three of these clicks supposedly happened within seconds of each other.
My suspicion is the training company realized no one was falling for the obvious bait anymore and they needed to gin up the numbers to keep the company convinced that paying for their services was worthwhile.
Meanwhile, all corporate teams use the same VLAN and DHCP address pool. There is zero separation of departments on the network. A lot of companies, like ours, get this exactly backwards.
Your company should get a better company, mine either asks to download and run a file or presents a login form. I'm not sure what they do after that because I've never failed that badly...
The only anti-phishing program I've ever seen that was even a little effective was at one company I worked at, where there was an ongoing phishing test.
Users were randomly selected to get the test, and each phish was hand-crafted to trick people specifically at our company (but using only publicly available information). Anonymized results were posted quarterly, divided by department.
I only got fooled once, but man, it felt so bad to see Engineering show up on the dashboard with one hit that quarter.
(Sales was usually at the top of the list, which makes sense, since they interface with a lot of folks outside the org)
These are exactly the kind of campaigns that studies show to be ineffective (or even counterproductive). "Effective" doesn't mean "manages to successfully phish" (you'll always eventually be successful); it means reducing the likelihood that concerted attacks will be successful.
The actual response to phishing is to use authentication mechanisms that resist phishing.
Although, why limit it to publicly available information? Security is an onion. If somebody gets access to internal documentation, HR lists, etc, the organization should still be resistant to their phishes.
> If somebody gets access to internal documentation, HR lists, etc,
It's hard to be resistant to phishing at that point and you have bigger problems.
What if Susan in HR falls victim to token theft (let's say conditional access/MDM policies don't catch it or aren't configured, which many businesses don't bother with)? Her email account is now pwned, and when the company gets an email from her, it passes all verification checks because it really is from her account.
It's still phishing, but the users have no way to know that. They don't know Susan just got compromised and the email they got from her isn't real. If this is a human attacker and not just bots, they can really target the attack based on the info in her inbox/past emails and anything else she has access to.
So there's no way for the organization to be resistant at that point until IT/security can see the account compromise and stop it. Ideally, it's real-time and there's a SOC ready to respond. In practice, most companies don't invest that much into security, or they are too small and don't have the budget for a huge security operation like that.
HR shouldn’t be sending links anyway. They should send instructions: go to the portal (on your corporate controlled laptop, so this could be your new tab page) click on the paystubs link, blah blah.
Somebody in every big company is compromised already.
We got hit in a similar way. They didn't use HR's account to email but they grabbed the mobile phone numbers of everyone in the directory. They then started a text message campaign, pretending to be our CEO, demanding that employees go to Target and buy gift cards on behalf of a client.
One person actually did fall for it but decided to physically bring the cards to the CEO's office. Thankfully that exposed the attack and effectively halted any damage.
I've noticed the gift card stands at Target and other stores around here now have a sign stating "If you received a text from your boss telling you to buy gift cards, you are being scammed" or similar.
I’m assuming it’s the “easy” mode and they still have many successful phishing attempts, so it didn’t make sense to go to the next level if the company still fails in easy level.
Corporate practices are the primary form of cybersecurity training. I have seen too many corporations (including critical infrastructure corps) that force employees to login to foreign domains with corporate credentials. This includes email services, two factor authentication, team chat, LMS, dashboards, surveys, web meetings, code forges, ticket tracking, VPN, etc.
Corporations outsource almost every single tool used by their employees and train them to cough up their corporate credentials no matter what url the browser identifies. In essence, they phish their employees 100 times a day. Then they force employees to sit through training twice a year to identify phishing attacks. Every legitimate training will create cognitive dissonance with employees' every day work experiences.
I've argued for a while: the value of these programs is to solve the management problem.
When you propose a security solution, someone is going to say "oh, my users are too smart to be phished, don't worry about this". I've had this argument when rolling out MFA at nearly every company I've worked with.
Most companies would have a much easier time with phishing if they quit sending official correspondence that mimics phishing. Sure, phishing is always evolving to look legitimate, but putting C͟l͟i͟c͟k͟ h͟e͟r͟e͟!͟ in literally every official email, when whatever it is you need to do _should_ be reachable via known links, doesn't help. All the "click here"s and "please see attached" tricks would quit working if they weren't normal.
I'm of the opinion that most (not all) email phishing could be solved if we all just collectively admitted that HTML email was a mistake, went back to text only, and enforced that everywhere.
No more logos, no more masked links (you have to actually copy and paste the text, giving you a chance to review the URL), no more QR code phishing, no more realistic-looking but fake DocuSigns. Get rid of attachments while we are at it; there are other, better ways to share files within an office environment (because ultimately, if we enforce text only, all phishing would then arrive via attachment in the form of a PDF or rich Word doc with the fake logos and a clickable link).
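Enforcing this doesn't even need the sender's cooperation; a gateway or client add-in can keep only the text/plain alternative and refuse to render anything else. A minimal sketch of that idea using the standard library, assuming you're handed the raw message:

```python
import email
from email import policy

def text_only(raw_message: bytes) -> str:
    """Return just the text/plain body; HTML, images, and attachments are ignored."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    plain = msg.get_body(preferencelist=("plain",))
    if plain is not None:
        return plain.get_content()
    # No plain-text part at all: show a stub rather than falling back to HTML.
    return "[message had no plain-text part and was suppressed]"
```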
Giving up HTML emails? Preposterous. Users won't accept it now that they are this used to it. Retraining that is unrealistic. Someone think of the metrics!
The only solution (which will solve the problem, as it is marketed as phishing-resistant) is to remove passwords entirely and force everyone to use passkeys.
My university routinely sends notifications about required annual phishing training that violate almost every point in the training about how to avoid getting phished. It's been happening for years. Urgency. Appeals to authority. Grammatical errors. Mystery click-me links that go outside the domain to training service providers that we do not use in any other context. References to alternative ways to get to the training that don't work.
I've reported it multiple times over the last few years but our IT security team blows off the concern, insists that I follow the link, and changes nothing. And no, it isn't just them testing people to see if they will fall for it. I am also in a position to see the tracking reports and be in meetings where expectations are discussed.
Our program is explicitly training people to get phished.
The part about sharing among other employees when an internal phishing test is active is intriguing to me. In my organization, when someone gets a phishing lure - they tell everyone around them to watch out for it. I wonder how this impacts success rates.
My employer gives my credentials to LinkedIn, Github, Microsoft, Google, Slack, Amazon, AuthIAM, SuperSecure, TrustMe, Cisco, Oracle, SAP, Peoplesoft, Shopify, Salesforce, and a dozen others. Then they gripe because my coworker gave their credentials to login.ad.azure.microsft.com.
I had an exec at a tech company once send out an email with the subject line "Important." All there was, was an attached .docx file, and a sentence saying to read it immediately. This guy should have been fired for this level of incompetence. No, it wasn't a phishing test.
Then Microsoft sends out e-mail advertisements with fucking QR codes in them to everybody to get people to install software without IT department's knowledge. So you not only can't see the link, you can't even de-obfuscate it by hovering over it.
There's a really easy fix for this. It's so fucking easy it hurts my brain.
Disable HTML e-mails. Disable hyperlinks. Feel free to send URLs, but make people copy and paste the link. This way they have to at least select the link. When they get a 6000 character link and can't copy paste it? That's good! Because they have no idea what the link actually is.
Nobody will do it, and I don't get why not. Do you really need to market to your internal employees so badly with images and links? That's what a portal is for. Post updates on your portal and stop bombarding my goddamn email box.
I always hated that clicking the phishing link in the email is considered a fail.
I don't think that's right, at least not from a phishing point of view. From a 0-day point of view, yes.
But because we get flooded by emails it's easy to miss something in an email, only for it to be apparent on the page itself. Primarily because the URL will be off, or that my password manager doesn't autofill stuff.
And the flood of emails got worse when people started sending emails to group addresses in BCC instead of in To. At least in Exchange you have no idea whether the sender put your email in BCC or the group in BCC (VERY low priority).
At least I found out that the phishing emails have a recognizable header in the email, allowing me to automatically filter those.
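If anyone wants to do the same: most simulation platforms stamp an identifying header on their mail (the header name below is made up; look at a caught sample to find your vendor's). A minimal sketch of the check:

```python
import email
from email import policy

MARKER = "X-Phish-Simulation"  # hypothetical; the real header name varies by vendor

def is_simulated_phish(raw_message: bytes) -> bool:
    """True if the message carries the training platform's marker header."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    return MARKER in msg

# Wire this into whatever does your filtering (a Sieve script, an IMAP sweep,
# or a local rule) to move matches out of the inbox.
```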
I work in security and have even performed phishing simulations. Want to know how to get me to click on your link? Send me a newsletter mail with an unsubscribe link. I will click 100% of the time no matter how weird the domain looks. (I won't enter credentials or download any files though.)
Corporate recently told me I'm specifically not allowed to unsubscribe from newsletters (probably for this reason), so now I have set up mutt to open links in a containerized browser, but that's as far as I'll go.
I used to always click on newsletter unsubscribe links too. Then I wondered if they were taking a shot in the dark at a real user on the end of my email and me unsubscribing confirmed it.
Here's what I do now. If I don't remember ever subscribing to something, I look for the Unsubscribe button in the email client, which is part of the headers. It feels a little less "phishy" than the link in the email. It uses the message’s hidden “List-Unsubscribe” header, thus no tracking.
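That header is standardized (RFC 2369, with the one-click variant in RFC 8058), so you can also inspect it yourself before trusting it. A small sketch of pulling the targets out of a saved message:

```python
import email
from email import policy

def unsubscribe_targets(raw_message: bytes) -> list[str]:
    """Return the mailto:/https URIs from the List-Unsubscribe header, if present."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    header = msg.get("List-Unsubscribe", "")
    # The header is a comma-separated list of URIs in angle brackets, e.g.
    # <mailto:unsub@example.com>, <https://example.com/unsub?id=...>
    return [part.strip().strip("<>") for part in header.split(",") if part.strip()]
```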
The idea of not clicking on links is frankly preposterous. The company pays you to click on links in emails; it's the nature of your job! And it's also core to how we use the internet in our spare time.
One of the reasons IT doesn't want you clicking on unsubscribe links is that they have been turned into malicious session-token-stealing attacks. Hackers subscribe a victim to dozens of newsletters, many of which do not follow the netiquette of sending a confirmation email for you to confirm your desire to receive the newsletter, or worse, the confirmation email contains a second session-token-stealing attack which you are even more likely to click on because you did NOT subscribe to begin with and your hackles are up... so you click and BOOM, Business Email Compromise! It's happened in a couple of incidents I've been a part of earlier this year and led to thousands of dollars in wire transfer losses! Banks have to add MFA to all wire transfers (but don't - credit unions, are you listening here!!!). So there are good reasons not to trust unsubscribe links. If you didn't subscribe, and there's no clear confirmation email from the domain in question, report the newsletter as SPAM and block it.
Notice: I'm a virtual CISO at CyberHoot (and co-founder here) providing security program development and Incident Response services.
I created an internal tool for my company. It's a plugin installed in the browsers on company-owned computers, and it reports the domain whenever you open a new window or tab with a URL that you did not manually type. If the domain is blacklisted, it completely blocks the browser. If it is the first time we have seen the domain, we display a "think before you act" pop-up and the IT department digs in before whitelisting it.
The first week there was a bit of noise while we whitelisted the common domains our users rely on. After that, it really puts you back on alert when a click in an email takes you to a new domain - one that could be used for phishing.
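I can't speak to their implementation, but the decision logic is basically three buckets. A minimal sketch of that check (the domain lists and storage here are placeholders, and a real deployment would persist the "seen" set centrally):

```python
BLOCKLIST = {"evil.example", "login-micros0ft.example"}   # known-bad domains
ALLOWLIST = {"intranet.corp.example", "github.com"}        # vetted by IT
seen: set[str] = set(ALLOWLIST)                            # domains observed so far

def classify(domain: str) -> str:
    """Return the action the plugin should take for a freshly navigated-to domain."""
    domain = domain.lower()
    if domain in BLOCKLIST:
        return "block"      # hard stop
    if domain in seen:
        return "allow"      # business as usual
    seen.add(domain)
    return "prompt"         # "think before you act" + queue for IT review

print(classify("login-micros0ft.example"))  # -> block
print(classify("new-vendor.example"))       # -> prompt
```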
I looked at the paper. How it's being reported is highly misleading. There were 4 different active training groups. One of the groups benefitted from the training and one of the groups actually got worse. So as a whole phishing training only has a 2% boost. However the message is not that phishing training is useless, only that if applied incorrectly it is useless.
No, it's not. Phishing isn't a social problem, it's a technological problem. Whether or not you can intercept my credentials shouldn't be a question of how much I trust my IT department or how well I'm trained; the credentials simply shouldn't allow that to happen. That's the entire reason U2F was invented, and then WebAuthn and FIDO2.
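The property that makes those phishing-resistant is that the browser, not the user, binds the credential to the site: the signed clientDataJSON records the origin the user was actually on, and the relying party rejects anything else. A stripped-down sketch of just that one check (real WebAuthn libraries also verify the challenge and the signature; the origin below is hypothetical):

```python
import base64
import json

EXPECTED_ORIGIN = "https://login.corp.example"  # the only origin this credential serves

def origin_ok(client_data_json_b64url: str) -> bool:
    """Check the origin the browser recorded inside clientDataJSON."""
    padded = client_data_json_b64url + "=" * (-len(client_data_json_b64url) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    # A pixel-perfect fake at login.corp.example.evil.test still produces a
    # different origin here, so a stolen assertion is useless to the phisher.
    return client_data.get("origin") == EXPECTED_ORIGIN
```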
I think the conclusion of this article is slightly flawed. The issue isn't with engagement with the training (although, the typical corporate training material is pretty bad), rather how we go about teaching cybersecurity.
I take a page from Jayson E. Street's DefCon talk from a few years ago with my students: promote "Security Awareness", not Security Training. Get people to think about what is being asked of them and the consequences of said actions. People tend to take "Security Training" as "I need to remember A, B, C, etc." Humans are bad at this sort of thing, typically.
I admit that "Security Awareness" isn't all that easy, but clearly our current approaches leave much to be desired.
I always suspected technical tools were more effective (time, effort, money) than the training programs. However, only company-wide training programs provide visibility to the CISO, so they tend to be popular even if ineffective.
Because you cannot fix humans, technology is the most effective approach.
I also love "safe link" protections. You actually try to check the link, but instead it is mangled beyond recognition. Then you try to squint and figure out how it is encoded... and just give up...
It's not about preventing the phishing, it's about preventing the liability from the phishing. If someone can show you didn't follow cybersecurity training best practices, you may be liable for any failure of cybersecurity. Best way to prevent that is to follow the best practices, even if they don't work. A lot of things in the corporate world work this way.
The point of trainings is only secondarily to stop these things from happening. The goal is for the institution to avoid liability by transfer responsibility for their having happened to others.
I received an email this week which read at the top in red text "THIS IS NOT A PHISHING EMAIL." I thought...isn't that exactly what a phishing email would say?
Seems like they counted it as a failure if the user just clicked the link in the email. But what are they supposed to do? Never click links in emails? Only click links to some whitelist of domains they hold in their head? I would think clicking a link is fine, but entering credentials is not.
It's no surprise people didn't engage with training material on the pretend phishing site!! At that stage, they're told it was a trap and they shouldn't even be there so of course they're going to get out asap.
Clicking a link can be more than enough to “get hacked”; you don’t always need to enter credentials. So yes, unfortunately the correct answer is either to have the whitelist of domains in your head (BUT this is also very risky due to homograph attacks [1]), or simply never click links in mails.
The secure thing to do is: Read mail that tells you to click link to whatever online tool you work with. Then instead of clicking link in mail you open a browser and manually visit the site the link was pointing to. If there is a message, notification, or something else that the emails wants you to look at, then it will also be there when you login “directly”.
Is there a good way (right now) to defend against this? I'm willing to live with a browser that only accepts ASCII in the address bar, and disables Unicode in email (replaced with �?)
No browser or browser extension that I know of, but it may exist.
I always circumvent it by just never clicking links sent to me (mail, sms, WhatsApp, etc). If I get a mail from, for example, Netflix that says there is a problem with my billing or whatever. I open a browser myself, go to Netflix’s site and login. If there really is a billing issue then I can see it after logging in. The links are actually never needed if you think about it.
Other than that, use MFA (multi-factor) everywhere you can. It doesn't defeat phishing attacks completely, but it is good protection. (Hackers can buy tools that provide them with a UI to build and execute phishing campaigns, even ones that include handling MFA.)
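Not a complete defense, but you can at least make lookalike hostnames visible. Here is a minimal sketch that flags URLs whose hostname is punycode-encoded or contains non-ASCII characters, which is where homograph tricks live:

```python
from urllib.parse import urlsplit

def looks_homographic(url: str) -> bool:
    """Flag URLs whose hostname uses punycode (xn--) or non-ASCII characters."""
    host = urlsplit(url).hostname or ""
    if not host.isascii():
        return True                       # raw Unicode hostname
    return any(label.startswith("xn--") for label in host.split("."))

print(looks_homographic("https://xn--pple-43d.com/login"))  # punycode lookalike -> True
print(looks_homographic("https://apple.com/login"))         # -> False
```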
There’s something else I notice in my daily work with all kinds of different people, which I like to call “tech avoidance.”
For example, this week I helped someone set up an account on an online library platform we use. I had to tell them multiple times not to tap the buttons in the email, website, or app right away, but to read them first. They were clearly nervous, and you could tell they just wanted to finish as quickly as possible and get out of “that very techie situation” to simply use the apps.
I mean, yeah, I get it. Technology isn’t for everyone. But the (sad) fact is that we live in a world largely dominated by it. And although it has created many problems we now need to solve with even more technology, it also helps us solve many of the problems we had before.
My hope is that AI will evolve to the point where it can become a kind of companion for those people, guiding them through situations involving technology that they find difficult or intimidating.
The point of these trainings is to satisfy compliance requirements and to deflect responsibility when someone inevitably fucks up. All HR mandated training courses are to protect the company, by allowing them to blame the employee when something goes wrong. It's not our fault, we told them not to, here's the proof.
> Overall, 75% of users engaged with the embedded training materials for a minute or less. One-third immediately closed the embedded training page without engaging with the material at all.
To call this "training" is highly misleading.
It's no surprise that the mere existence of training materials does not help if nobody reads and studies the training materials.
They should preface the training materials with "$100,000 USD will be transferred to your bank account if you read this and successfully answer the questions at the end."
Companies seriously concerned about security must include a standard disclaimer which reads "Never click on links or pictures in emails" in every email, before the actual plaintext message.
This doesn't concern amazon, google, or banks, probably
So many "offers" and "promotions" to throw around with convenient links
Edit: "Go to our website and find more information under your account about lorem ipsum... "
"Cybersecurity Training Programs Don’t Prevent Employees from Falling for Phishing Scams - Click Here to Find Out How to Really Protect Your Employees"
You can tell if an email is from a training program just by looking at the email headers. I have a filter in outlook and those emails don’t even hit my inbox.
It’s more secure this way.
Completely agreed, and I think it's telling that so few email clients or webmail services actually allow you to always render as plain text.
Outlook is beginning to feel like daycare.
It's an impressive level of DGAF.
IIUC, some of them will pre-load a page by opening it for you.
It's a really hard problem to solve
These criminals are relatively clever.
> When you propose a security solution, someone is going to say "oh, my users are too smart to be phished, don't worry about this".
Phishing tests give you the "well actually" data.
> The only solution (which will solve the problem, as it is marketed as phishing-resistant) is to remove passwords entirely and force everyone to use passkeys.
/s
If there's trust and respect, they'll reach out without fear of reprisal and inform right away when there's a problem.
If there's a culture of punishment, they'll fear the IT gestapo and try to cover up mistakes that could cost them their job.
It really is that simple.
And then they put fucking Mimecast in front of everything so I legit can't do what they are training me to do...
So yeah, the training is worthless and just there to tick a box.
Kurt Got Got - https://news.ycombinator.com/item?id=45520615 - Oct 2025 (216 comments)
1: https://en.wikipedia.org/wiki/IDN_homograph_attack
https://www.rfc-editor.org/rfc/rfc3514
In my previous company they literally had an X-PHISHING-ID header.
In my current company the phishing emails don’t have a single Received header.