When we did annual pen testing audits for my last company, the security audit company always offered to do phishing or social engineering attacks, but advised against it because they said it worked every single time.
One of the most memorable things they shared is they'd throw USB sticks in the parking lot of the company they were pentesting and somebody would always put the thing into a workstation to see what was on it and get p0wned.
> they'd throw USB sticks in the parking lot of the company they were pentesting and somebody would always put the thing into a workstation to see what was on it and get p0wned.
One of my favorite quotes is from an unnamed architect of the plan in a 2012 article about Stuxnet/the cyber attacks on Iran's nuclear program:
"It turns out there is always an idiot around who doesn't think much about the thumb drive in their hand."
Our company does regular phishing attacks against our own team, which apparently gets us a noteworthy 90% ‘not-click’ rate (don’t quote me on numbers).
Never mind that that 10% is still 1500 people xD
It’s gone so far that they’re now sending them from our internal domains, so when the banner to warn me it was an external email wasn’t there, I also got got.
At a previous position, I had a rather strained relationship with the IT department - they were very slow to fill requests and maintained an extremely locked-down Windows server that we were supposed to develop for. It wasn't the worst environment, but the constant red tape was pretty frustrating.
I got got when they sent out a phishing test email disguised as a survey of user satisfaction with the IT department. Honestly I couldn't even be mad about it - it looked like all those other sketchy corporate surveys complete with a link to a domain similar to Qualtrics (I think it was one or two letters off).
My former company would send out rewards as a thank you to employees. It was basically a “click here to receive your free gift!” email. I kept telling the security team that this was a TERRIBLE precedent but it continued nonetheless. The first time I got one I didn’t open it for ages, even after confirming the company was real. It was only after like the 5th nagging email that I asked security about it and they confirmed that it was in fact a real thing the company was using. I got a Roomba, a nice outdoor chair, and some sweet headphones. =)
I'm so surprised by this, not because I don't think that many people would fall for a phishing attempt, but because the corporate "training" phishing emails are so glaringly obvious that I think it does a disservice to the people being tested. I feel like it gives a false impression you can detect phishing via vibes when the real ones will be much stealthier.
Are your phishing emails good? If so, would you mind name-dropping the company so I can make a pitch to switch to them?
If you are getting pwned by running random executables found on USB drives, passkeys aren’t going to save you. Same if the social engineering is going to get you to install random executables.
If you're getting pwned, a physical Security Key still means the bad guys don't have the actual credential (there's no way to get that), and they have to work relatively hard to even create a situation where they can maybe get you to let them use the credential you do have (inside the Security Key) while they're in position to exploit you.
These devices want a physical interaction (this is called "User present") for most operations, typically signified by having a push button or contact sensor, so the attacker needs to have a proof of identity ready to sign, send that over - then persuade the user to push the button or whatever. It's not that difficult but it's one more step and if that doesn't work you wasted your shot.
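For anyone curious what that flow looks like in code, here's a rough browser-side sketch of the WebAuthn assertion request (illustrative only; in a real flow the challenge and credential ID come from the server, and `savedCredentialId` here is just a placeholder):

```ts
// Sketch of asking a security key / passkey to sign a login challenge.
// The browser only offers credentials whose rpId matches the site you're
// actually on, and the authenticator won't sign until the user touches it
// ("user present"), so a look-alike domain gets nothing useful out of it.
declare const savedCredentialId: BufferSource; // placeholder: stored at registration time

async function signIn(): Promise<PublicKeyCredential> {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the server
      rpId: "example.com",                                    // must match the registering site
      allowCredentials: [{ type: "public-key", id: savedCredentialId }],
      userVerification: "preferred",
      timeout: 60_000,
    },
  });
  // assertion.response carries the signature plus clientDataJSON, which
  // embeds the page's real origin for the server to check.
  return assertion as PublicKeyCredential;
}
```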
But, haven’t there been bugs where operating systems will auto run some executable as soon as the USB is plugged in? So, just to be paranoid, I’d classify just plugging the thing in as “running random executables.” At least as a non-security guy.
I wonder if anyone has tried going to a local Staples or Best Buy or something, and slipping the person at the register a bribe… “if anyone from so-and-so corp buys a flash drive here, put this one in their bag instead.”
Anyway, best to just put glue in the USB ports I guess.
What I heard about the Stuxnet attack was different from what you are saying:
The enrichment facility had an air-gapped network, and just like our air-gapped networks, they had security requirements that mandated continuous anti-virus definition updates. The AV updates were brought in on a USB thumb drive that had been infected, because it WASN'T air-gapped when the updates were loaded. Obviously their AV tools didn't detect Stuxnet, because it was a state-sponsored, targeted attack, and not in the AV definition database.
So they were a victim of their own security policies, which were very effectively exploited.
A USB device can pretend to be just about any type of device to get the appropriate driver installed and loaded. It can then send malformed packets to that driver to trigger some vulnerability and take over the system.
There are a _lot_ of drivers for devices on a default Windows install. There are a _lot more_ if you allow Windows Update to install drivers for devices (which it does by default). I would not trust all of them to be secure against a malicious device.
I know this is not how Stuxnet worked (it instead used a vulnerability in how LNK files were shown in explorer.exe as the exploit), but that just goes to show how much surface there is to attack using this kind of USB stick.
And yeah, people still routinely plug random USBs in their computers. The average person is simultaneously curious and oblivious to this kind of threat (and I don't blame them - this kind of threat is hard to explain to a lay person).
Stuxnet deployment wasn't just a USB stick, though. It was a USB stick w/ a zero-day in the Windows shell for handling LNK files to get arbitrary code execution. That's not to say that random thumb drives being plugged-in by users is good, but Stuxnet deployment was a more sophisticated attack than just relying on the user to run a program.
I've seen someone do a live, on stage demo of phishing audit software, where they phished a real company, and showed what happens when someone falls for it.
Live. On stage. In minutes. People fall for it so reliably that you can do that.
When we ran it we got fake vouchers for "cost coffee" with a redeem link, new negative reviews of the company on "trustplot" with a reply link, and abnormal activity on your "whatapp" with a map of Russia, and a report link. They were exceptionally successful even despite the silly names.
Ever since I almost got phished (wasn't looking closely enough at the domain to notice a little stress mark over the "s" in the domain name, thankfully I was using a hardware wallet that prevented the attack entirely), I realized that anyone can get phished. They just rely on you being busy, or out, or tired, and just not checking closely enough.
Counterpoint: don't use passkeys, they're a confused mess and add limitations while not giving any benefits over a good long password in a password manager.
They prevent you from being one of these: copy-pasting the password from the password manager into the wrong input field, something that still happens often, with many websites not properly auto-filling from password managers.
> They just rely on you being busy, or out, or tired, and just not checking closely enough
It's far too common for websites to redirect to some separate domain for sign-in which isn't the one originally used to sign up, getting users used to "oh, gotta copy the password again" as a totally normal thing that happens.
Password managers rarely are able to autofill 100% of the time. Autofill breaking is not a very strong indicator of a phishing attempt, people are used to manually filling the password in sometimes for totally legit sites.
I'm used to 1Password not being able to autofill, yes. But I'm not used to no account showing up at all when I open the UI panel. If that happens, I immediately know I'm on the wrong domain.
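That "no account shows up at all" signal is just a domain match, and the core of it is simple to sketch (simplified; real password managers match stored URLs against the Public Suffix List rather than the tiny hard-coded suffix set below):

```ts
// Rough version of the "should I offer this saved login here?" check.
const TWO_LEVEL_SUFFIXES = new Set(["co.uk", "com.au"]); // illustrative only

function registrableDomain(hostname: string): string {
  const labels = hostname.toLowerCase().split(".");
  const lastTwo = labels.slice(-2).join(".");
  return labels.slice(TWO_LEVEL_SUFFIXES.has(lastTwo) ? -3 : -2).join(".");
}

function shouldOfferLogin(savedUrl: string, currentUrl: string): boolean {
  return (
    registrableDomain(new URL(savedUrl).hostname) ===
    registrableDomain(new URL(currentUrl).hostname)
  );
}

shouldOfferLogin("https://x.com/login", "https://x.com/i/flow");        // true
shouldOfferLogin("https://x.com/login", "https://members-x.com/login"); // false, so no autofill
```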
Yes, PKC authentication is good, but the way passkeys have been implemented is not great. Way too much trust built into the protocol; way too much power granted to relying parties; much harder for users to form a correct mental model.
So what happened exactly? Did Kurt enter his twitter password manually after clicking on that phishing link? Did he not get his sus detector going off after the password manager didn't suggest the password?
Unfortunately, this does not work. I see no end of banks and financial institutions, let alone random companies, who for some reason keep their authentication on a different domain than the main company, and sometimes they have initial registration (which gets recorded in the password manager) on one domain and subsequent logins on another, and sometimes it depends on how you arrived at the site, or which integration you are planning to use, etc. I wish there were a rule of "one company - one auth domain" but it's just not true.
Example: Citibank has citibankonline.com, citi.com, citidirect.com, citientertainment.com, etc. Would you be suspicious of a link to citibankdirect.com? Would you check the certificate for each link going there, and trace it down, or just assume Citi is up to their shenanigans again and paste the password manually? It's a jungle out there.
That happened to me as well, I put it down to "fucking password manager, it's broken again".
For example, Bitwarden has spent the past month refusing to autofill fields for me. Bugs are really not uncommon at all; I'd think my password manager was broken before I'd think I was getting phished (which is exactly how they get you).
For me it wasn't even a phone, it was on the desktop, I'm just so used to everything being buggy that it didn't trigger any alarms for me.
Luckily the only things I don't use passkeys or hardware keys for are things I don't care about, so I can't even remember what was phished. It goes to show, though, that that's what saved me, not the password manager, not my strong password, nothing.
Yep. A half-baked technical solution to a problem that has been solved since its inception. Really just feels like FAANG exists to invent new ways to charge rent...
So, in that case the browser (correctly) did not autofill? Is that a common occurrence for legit traffic from X? And no complaint about the website's identity from the browser -- the expected "lock" icon left of the URL?
This is exactly how not to defend against phishing. The meaningful defense is to foreclose on it entirely, not to just get super good at spotting fakes.
This is why properly working password managers are important, and why as a web site operator you should make sure to not break them. My password not auto-filling on a web site is a sufficient red flag to immediately become very watchful.
Code-based 2FA, on the other hand, is completely useless against phishing. If I'm logging in, I'm logging in, and you're getting my 2FA code (regardless of whether it's coming from an SMS or an app).
And if you read the story, it's because he ignored the fact that the password manager didn't prompt auto-fill.
"I went to the link which is on mailchimp-sso.com and entered my credentials which - crucially - did not auto-complete from 1Password. I then entered the OTP and the page hung. Moments later, the penny dropped, and I logged onto the official website, which Mailchimp confirmed via a notification email which showed my London IP address:"
> the 1Password browser plugin would have noticed that “members-x.com” wasn’t an “x.com” host.
But shared accounts are tricky here, like the post says it's not part of their IdP / SSO and can't be, so it has to be something different. Yes, they can and should use Passkeys and/or 1password browser integration, but if you only have a few shared accounts, that difference makes for a different workflow regardless.
Yes; 1Password was used. And it worked properly. But because humans are fallible, a human made a mistake anyways.
"Properly working password managers" do not provide a strong defense against real world phishing attacks. The weak link of a phishing attack is human fallibility.
Precisely. 1Password's browser integration would have noticed a domain mismatch and refused to autofill the password -- but in a panic, Kurt apparently opened 1Password and then copied/pasted the credentials manually.
Correct. The moral of the story is that hardware MFA and/or passkeys are a necessity in today's world. An infinitely complex password and 2FA are no match for attacks that leverage human psychology.
This is how they got my Steam account credentials, although I realized the stupid shit I did the second I clicked submit form, and reset my password to random 32 characters using bitwarden. Me! Someone who is deeply technical AND paranoid.
The key here is the hacker must create the most incisive, scary email that will short circuit your higher brain functions and get you to log in.
I should have noticed the fact that Bitwarden did not autofill and taken that as a sign.
Same thing happened to me (not with Steam), but it's also the thought that "this could never happen to me" that leads you to assign an almost zero probability to the problem being a phishing attempt.
Because CEOs at startups are notorious for trying to problem solve aggressively by "just" doing the thing rather than throwing it at a person who _might_ have made the same mistake, but might be more primed to be confused as to why they are not logged into x dot com and why 1password's password prompt doesn't show up and why the passkey doesn't work or whatever.
It's always possible to have issues, of course, and to make mistakes. But there's a risk profile to this kind of stuff that doesn't align well with how certain people work. Yet those same people will jump on these to fix it up!
I have almost been got, a couple of times. I'm not sure, but I may have realized that I got got about 0.5 seconds after clicking[0], and was able to lock down before they were able to grab it.
[0] https://imgur.com/EfQrdWY
This "content violation on your X post" phishing email is so common, we get about a dozen of those a week, and had to change the filters many times to catch them (because it's not easy to just detect the letter X and they keep changing the wording).
We also ended up dropping our email security provider because they consistently missed these. We evaluated/trialed almost a dozen different providers and finally found one that did detect every X phishing email! (Check Point fyi, not affiliated)
It was actually embarrassing for most of those security companies because the signs of phishing are very obvious if you look.
I was reading this and wondering why it was posted so high (I didn’t recognize the company name), and then I got to the name at the bottom. I think the lesson here is “if it could happen to Kurt, it could happen to anyone.” Yeah, the consequences here were pretty limited, but everyone’s got Some vulnerability, and it’s usually in the junk pile in the corner that you’re ignoring. If the attacker were genuinely trying to do damage (as opposed to just running a two-bit crypto scam), assuming the company’s official account is a fine start to leverage for some social engineering.
It genuinely took me a second - I was midway through writing a very different comment. Apparently reading comprehension is not on my skills list today…
That's some impressive work on the attackers' part, having that whole fake landing page ready to go, and a pretty convincing phishing email.
I don't know much about crypto, so I'm not sure what makes them call the scam 'not very plausible' and say it 'probably generated $0 for the attackers'. Is that something that can be verified by checking the wallet used on that fake landing page?
I want to say again that the key thing in this post is that anything "serious" at Fly.io couldn't have gotten phished: your SSO login won't work if you don't have mandatory phish-resistant 2FA set up for it. What went wrong here is that Twitter wasn't behind that perimeter, because, well, we have trouble taking Twitter seriously.
We shouldn't have, and we do take it seriously now.
Twitter isn't an operational dependency of ours and we don't attest to it at all.
It also doesn't require we do that: what SOC2 actually demands of vendor security practices is much more complicated (and performative) than that. If Twitter were a real vendor dependency of ours, most of what we'd need would be a SOC2 attestation from them.
I'm always glad to see when companies, developers and CEOs make a heartfelt and humanistic mea culpa.
We would like to think that we're the smart ones and above such low level types of exploits, but the reality is that they can catch us at any moment on a good or bad day.
I think you'll be led astray thinking this is CEO-specific.
The whole theory of phishing, and especially targeted phishing, is to present a scenario that tricks the user into ignoring the red flags. Usually, this is an urgent call to action that something negative will happen, coupled with a tie-in to something that seems legit. In this case, it was referencing a real post that the company had made.
A parallel example is when parents get phone calls saying "hey it's your kid, I took a surprise trip to a tiny island nation and I've been kidnapped, I need you to wire $1000 immediately or they're going to kill me". That interaction is full of red flags, but the psychological hit is massive and people pay out all the time.
I razz CEOs in jest, but my point is: This is an example of a good phishing attempt? ChatGPT could surely find and fix most of the red flags I called out. Perhaps the red flags ensure they don't phish more people than they can productively exploit.
There are certainly phishing attempts that are pixel perfect, but I'd say way more energy tends to go into making phishing websites perfect. The goal of the email is to flip people into action as quickly as possible, with as little validation as possible.
CEO here, I also almost got taken by a fake legal notice about a Facebook post. My password manager would not auto enter my password so I tried manually entering it like a dummy. Fortunately, it was the wrong one.
Isn’t turning off auto enter exacerbating the problem?
The avenue for catching this is that the password manager’s autofill won’t work on the phishing site, and the user could notice that and catch that it’s a malicious domain
Yes. This is the problem with the "just use a password manager" answer to phishing-resistance. They can be a line of defense, situationally, but you have to have them configured just right, and if you're using phishing-resistant authentication you don't need that line of defense in the first place.
Isn't this backwards? If the autocomplete doesn't show up that's a flag that the password is going somewhere it doesn't belong. If you're always copy-pasting from a password manager then you're not getting that check "for free".
Obviously SSO-y stuff is _better_, but autofill seems important for helping to prevent this kind of scam. Doesn't prevent everything of course!
None of this password manager configuration stuff matters; we've just got Passkeys set up for the account now, which is what we should have done, but didn't, because we spent the last 2 years with one foot out the door on Twitter altogether.
Since this attack happened despite Kurt using 1Password, I'm really not all that receptive to the idea that 1Password is a good answer to this problem.
Autofill doesn't always work for every site. So, now you're having to store in your mind where it works and where it doesn't. By disabling it, it forces you to go the extra step (command-shift-L) every time.
You're right. The point is that hotkey makes me think and observe more. Again, I don't have to remember if the site previous worked with autofill, or not.
It doesn't seem irrelevant to me at all. Security these days isn't just one action, it is a multitude of actions and steps and thought processes.
By removing the expectation that my password manager is going to autofill something, I'm now making the conscious decision to always try to fill it myself.
This makes me think more about what I'm doing, and prevents me from making nearly as many mistakes. I don't let my guard down to let the tools do all the work for me. I have to think: ok, I'll autofill things now, realize that it isn't working, and then look more closely at why it wasn't working as I expected.
I won't just blindly copy/paste my credentials into the site because whoops, I think it might have worked previously.
No, that's the opposite of the moral of that story. If the person you responded to had listened to the fact that the auto-enter didn't auto-enter, they wouldn't have been at any risk. Likewise in the article, the problem was that the CEO copy-pasted the password into the phishing page's password field, NOT that the auto-enter prompted him to do so.
As I mention below: Autofill doesn't always work for every site. So, now you're having to store in your mind where it works and where it doesn't. By disabling it, it forces you to go the extra step (command-shift-L) every time.
if anyone @ x.com infosec is here, my buddy got her account phished / there is someone in CS selling creds. Then it was used to pump a crypto scam and she has been trying for months to get it sorted. She's had the account for 16 plus years, it's surprising it's this hard to fix.
It's x.com/leighleighsf, we've tried every channel but for filing a small claims lawsuit in Texas to get her account back.
Like with occupational safety, we should worry about near misses as well as actual hacks. If you realize you just logged into X from a link in an email, you should berate yourself for could-have-been-hacked. Never enter credentials into links from emails!
... could we get webauthn / yubikeys prioritized for fly? afaik (don't want to disable 2fa to find out), it only supports totp.
For everyone reading though, you should try fly. Unaffiliated except for being a happy customer. 50 lines of toml is so so much better than 1k+ lines of cloudformation.
We don't like TOTP, at all, for reasons even more obvious now, but our standard answer for advanced MFA has been OIDC, which is what most people should do rather than setting up bespoke U2F/FIDO2/Passkeys.
Fly has consistently surprised me at how late they have been to doing the "standard company" stuff. Their sort of lack of support engineering teams for a while affected me way more though.
You gotta take the Legos away from the CEO! Being CEO means you stop doing the other stuff! Sorry!
And yes they have their silly disclaimer on their blog, but this is Yet Another "oh lol we made a whoopsie" tone that they've taken in the past several times for "real" issues. My favorite being "we did a thing, you should have read the forums where we posted about it, but clearly some of you didn't". You have my e-mail address!
Please.... please... get real comms. I'm tired of the "oh lol we're just doing shit" vibes from the only place I can _barely_ recommend as an alternative to Heroku. I don't need the cuteness. And 60% of that is because one of your main competitors has a totally unsearchable name.
I don't know where the official list of "standard company" stuff is, but I'd wager that for small to medium sized tech companies, it's relatively unsurprising for "leadership" to still be in the weeds on various operational projects and systems.
We've had an unusually large security team for the size of our company since 2021. I'm sorry if you don't like the way I communicate about it but I have no plans to change that. We take security extremely seriously. We just didn't take Twitter that seriously.
The "CEO" thing is just a running joke. Kurt's an engineer. Any of us could have been taken by this. I joke about this because I assume everybody gets the subtext, which is that anything you don't have behind phishing-resistant authentication is going to get phished. You apparently took it on the surface level, and believe I'm actually dunking on Kurt. No.
I'm not talking security, which I generally feel like is probably being done correctly.
I was thinking about, IIRC, back in 2023[0], where you all were suffering a lot of issues. And I _believe_ I saw some chatter about Fly building out a team of support/devops-y/SRE engineers around that time. And I had just assumed up until there that, as a company about operations, that you would already have a team that is about reliability.
I am not a major user of you (You're only selling me like 40 bucks a month of compute/storage/etc), but I had relatively often been hitting weird stuff. Some of it was me, some of it was your side. But... well... I was using Heroku for this stuff before and it seemed to run swimmingly for very long. So I was definitely a bit like "oh OK so you just didn't care about reliability until then?" I mean this lightly, but I started basically anti-recommending you after the combo of the issues and the statements your team was making (both on this kind of operations and also communications after the fact).
I think you all generally do this better now though, so maybe I'm just bringing up old grudges.
[0]: https://community.fly.io/t/reliability-its-not-great/11253
> You apparently took it on the surface level, and believe I'm actually dunking on Kurt.
No, I took it in the same tone I take a lot of your company's writing.
> The "CEO" thing is just a running joke. Kurt's an engineer.
I think if you are the CEO of a company above a certain (very low!) headcount you put down the Legos. There are enough "running a company" things to do. Maybe your dynamics are different, since your team is indeed quite small according to the teams page.
Every startup engineer has had to deal with "The CEO is the one with admin rights on this account and he's not doing the thing because somehow we haven't pried the credentials from him so that the people doing the work can do it". And then the dual of this, "The CEO fixes the thing at 2AM but does it the wrong way and now the thing is weird". A way you avoid this is by yanking all credentials from the CEO.
I'm being glib here, because obviously y'all have your success, the Twitter thing "doesn't matter", etc. I just want to be able to recommend you fully, and the issues I hit + the amateur hour comms in response (EDIT: in the past) gets on my nerves and prevents me from doing it!
> This is, in fact, how all of our infrastructure is secured at Fly.io; specifically, we get everything behind an IdP (in our case: Google’s) and have it require phishing-proof MFA.
Every system is only as secure as its weakest link. If the company's CEO is idiotic enough to pull credentials from 1Password and manually copy-paste them into a random website whose domain does not match the service that issued it, what's to say they won't do the same for an MFA token?
They literally explain in the article that they're using FIDO MFA, which is phishing-proof because the key authenticates the website (it's not your run-of-the-mill SMS 2FA; it uses WebAuthn to talk to your MFA device).
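Concretely, the relying-party side of that check looks something like this (a trimmed sketch, not Fly.io's or X's actual code, and using x.com purely as the example origin; the final signature verification over authenticatorData plus the clientDataJSON hash is elided):

```ts
import { createHash } from "node:crypto";

// The browser, not the user, fills in `origin` inside clientDataJSON, and the
// authenticator signs over a hash of the rpId, so a look-alike domain can't
// produce an assertion these checks will accept.
const EXPECTED_ORIGIN = "https://x.com";
const EXPECTED_RP_ID = "x.com";

function checkAssertion(clientDataJSON: Buffer, authenticatorData: Buffer, expectedChallenge: string): void {
  const clientData = JSON.parse(clientDataJSON.toString("utf8"));
  if (clientData.type !== "webauthn.get") throw new Error("wrong ceremony type");
  if (clientData.challenge !== expectedChallenge) throw new Error("challenge mismatch"); // both base64url
  if (clientData.origin !== EXPECTED_ORIGIN) throw new Error(`phishy origin: ${clientData.origin}`);

  const rpIdHash = createHash("sha256").update(EXPECTED_RP_ID).digest();
  if (!authenticatorData.subarray(0, 32).equals(rpIdHash)) throw new Error("rpId hash mismatch");
  // ...then verify the signature over authenticatorData || SHA-256(clientDataJSON)
  // with the public key stored at registration.
}
```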
> they'd throw USB sticks in the parking lot of the company they were pentesting and somebody would always put the thing into a workstation
Phishing isn't really that different.
Great reminder to set up Passkeys: https://help.x.com/en/managing-your-account/how-to-use-passk...
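Worth doing. If you ever wire it up yourself, the registration side is only a handful of lines (a browser-side sketch; the challenge, user id, and names below are placeholders that a real server would supply):

```ts
// Sketch of creating a passkey. The browser ties the new credential to the
// rp.id of the page you're actually on, which is why it can't later be
// coaxed into signing for members-x.com or any other look-alike.
async function registerPasskey(): Promise<Credential | null> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // normally server-issued
      rp: { id: "x.com", name: "X" },
      user: {
        id: new TextEncoder().encode("user-1234"),           // stable, opaque user handle
        name: "fly_io",
        displayName: "Fly.io",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
      authenticatorSelection: { residentKey: "preferred", userVerification: "preferred" },
    },
  });
  // Send credential.response (attestationObject + clientDataJSON) to the
  // server so it can store the new public key.
  return credential;
}
```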
> people still routinely plug random USBs in their computers. The average person is simultaneously curious and oblivious to this kind of threat
(They will run programs, though. They always do.)
Use passkeys for everything, like Thomas says.
I’d like to write a follow-up that covers authentication apps/devices, but I need to do some research, and find free versions.
A few years ago, I managed to get our InfoSec head phished (as a test). No one is safe :)
It's pretty incredible the level of UI engineering that went into it.
Some screenshots I took: https://x.com/grinich/status/1963744947053703309
Blaming some attribute of the user for why they fell for a phishing attempt is categorically misguided.
Good write up
They literally admit they pay a Zoomer to make memes for Twitter. I think you are falling for the PR.
* "We've received reports about the latest content" - weird copy
* "which doesn't meet X Terms of Service" - bad grammar lol
* "Important:Simply ..." - no spacing lol
* "Simply removing the content from your page doesn't help your case" - weird tone
* "We've opened a support portal for you " - weird copy
There are so many red flags here if you're a native English speaker.
There are some UX red flags as well, but I admit those are much less noticeable.
* Weird and inconsistent font size/weight
* Massive border radius on the twitter card image (lol)
* Gap sizes are weird/small
* Weird CTA
We can always make mistakes of course. And yeah, sometimes we just haven't done something.
Whether that’s via a hotkey or not seems totally irrelevant.
tru tru
We will get to this though.
https://fly.io/blog/tokenized-tokens/
Still using fly, just annoyed.
The "CEO" thing is just a running joke. Kurt's an engineer. Any of us could have been taken by this. I joke about this because I assume everybody gets the subtext, which is that anything you don't have behind phishing-resistant authentication is going to get phished. You apparently took it on the surface level, and believe I'm actually dunking on Kurt. No.
I was thinking about, IIRC, back in 2023[0], where you all were suffering a lot of issues. And I _believe_ I saw some chatter about Fly building out a team of support/devops-y/SRE engineers around that time. And I had just assumed up until there that, as a company about operations, that you would already have a team that is about reliability.
I am not a major user of you (You're only selling me like 40 bucks a month of compute/storage/etc), but I had relatively often been hitting weird stuff. Some of it was me, some of it was your side. But... well... I was using Heroku for this stuff before and it seemed to run swimmingly for very long. So I was definitely a bit like "oh OK so you just didn't care about reliability until then?" I mean this lightly, but I started basically anti-recommending you after the combo of the issues and the statements your team was making (both on this kind of operations and also communications after the fact).
I think you all generally do this better now though, so maybe I'm just bringing up old grudges.
> You apparently took it on the surface level, and believe I'm actually dunking on Kurt.
No, I took it in the same tone I take a lot of your company's writing.
> The "CEO" thing is just a running joke. Kurt's an engineer.
I think if you are the CEO of a company above a certain (very low!) headcount you put down the Legos. There are enough "running a company" things to do. Maybe your dynamics are different, since your team is indeed quite small according to the teams page.
Every startup engineer has had to deal with "The CEO is the one with admin rights on this account and he's not doing the thing because somehow we haven't pried the credentials from him so that people doing the work does it". And then the dual of this, "The CEO fixes the thing at 2AM but does it the wrong way and now thing is weird". A way you avoid this is by yanking all credentials from the CEO.
I'm being glib here, because obviously y'all have your success, the Twitter thing "doesn't matter", etc. I just want to be able to recommend you fully, and the issues I hit + the amateur hour comms in response (EDIT: in the past) gets on my nerves and prevents me from doing it!
Anyways, I want you all to succeed.
> we get everything behind an IdP (in our case: Google’s) and have it require phishing-proof MFA
With this setup, you can't fuck up.
That’s what makes it phishing-resistant.