> Real cardholders almost never buy something for exactly $1.00. Coffee is $4.73, gas is $52.81. The roundness is the signal.
Surely this depends on how the vendor sets their prices? If you're going to buy something from a website to test a stolen credit card you don't just get to make up your own prices.
And I think you may be over-indexing on the US "prices don't include tax" thing. Elsewhere, round-number prices are extremely common.
In fact a lot of the rest of the stuff in the post seems like it wouldn't work very well either. (E.g. you're flagging anyone who has done a transaction in the last 90 days outside the range of hours at which they have 2+ transactions? Wouldn't that be like 50% of people?).
It's unclear to me whether this article is an attempt at breaking down complex expertise into over-simplified SQL queries, or whether it is all speculative and made up.
There is a conflict between "Six SQL patterns I use to catch transaction fraud" and "Nothing here comes from anything I’ve actually worked on or seen".
The "transaction outside usual hour range" seems pretty basic.
I don't usually buy gas, coffee or snacks at 2am. But on the very rare occasion that I do, I'm dealing with some kind of personal emergency and don't also want to have to call my bank.
I get that that's also a time opportunistic thieves, etc, might be operating. But the cost of false positives is also a thing.
On the other hand, for online transactions I frequently do them outside the usual hour range.
However, before going to a distant country, which was also in a very different time zone, I warned the bank that issued the card I intended to use, so that they would not consider either the place or the time of the transactions suspicious.
Two things I have found that absolutely positively freak the shit out of my bank:
Buying a full tank of gas and then a full tank of petrol (dual-fuel vehicle) in two separate card transactions, one after the other. Can't use the same pump, annoyingly, at least with the old system at Morrisons. Don't know if it's different since their petrol stations were bought out by Motor Fuel Group.
Similarly, buying a full tank of gas at Morrisons in Bradford, where the supermarket chain's headquarters are, then driving five hours north and refuelling again at a different Morrisons, which shows the transaction as coming from their banking systems in Bradford but tagged as a city in Scotland. This is apparently because it's implausible to drive from central England to central Scotland in a few hours and then need to refuel.
They are (or at least were, when last checked a year ago) 100% repeatable.
Coffee usually _is_ a round number in my experience, and I know of people who aim for round numbers when filling their car, and of fuel stations which require a pre-set value, often €10, €20, €50, etc.
Yes, as your parent comment points out, the article centers itself on US transactions, where listed prices seldom include tax and are frequently a cent below a round number. For example, the menu says a dish is $15.00 but the restaurant charges $18.83 after tax and tip. Globally, there's no doubt the US is the exception rather than the norm.
That sounds reasonable for some states, but five states have no sales tax and many states have exclusions to sales tax. Many of those are also likely to have rural areas where small businesses like to use even amounts.
All of that is easy to account for, all of the metadata you need is available. This also applies to the sibling comment about rounding up to charity at the grocery store, the data is all there, even if it's e.g. the fraud analyst at the bank or credit card company instead of the fraud analyst at the grocery store.
Yeah I was in a bar one night and was peckish, so tried to buy a packet of crisps. They said minimum spend on card was £5, so I said just charge me the £5 it's fine.
Card got blocked as they thought it was fraud. Annoying! And not something inebriated me wanted to deal with at 2am.
Ok. Maybe they protected me from myself, but still!
I do not know if this is still used anywhere, but in the past there were places, e.g. hotels or car rental services, where the validity of a credit card was tested by a $1.00 transaction, before booking a room or renting a car.
In North America they typically charge a much bigger deposit for these, with a hold until everything clears. This makes sense; it's more important that you have the credit than just a functioning CC, but it doesn't really help with fraud.
Their approach to “Suspicious merchants” also confuses me: the description doesn’t make logical sense to me and doesn’t match the abuse pattern as far as I understand it.
> When a skimmer compromises a card reader at, say, a gas pump, you don’t get one fraud case. You get dozens. Every card swiped at that pump for the next few weeks is now in someone’s database. So the symptom from the merchant side is: an unusual number of unrelated cards spending more than usual, in a short window.
So he checks for hour-bucketed increases in high-value transactions originating from that merchant.
Seems to me like a good way to catch a sale, an opening, a launch event, a product “drop,” or a single high-value sale that somebody spreads across several cards… less so a good way to detect a steady trickle of stolen card data that’s inexplicably used back at the same merchant.
If you’re installing a card skimmer, why would you charge the stolen cards at the same business where you’re stealing them? And why would you concentrate your spending into bursts if the skimmer’s harvesting all day every day?
If you’re the merchant doing the skimming in order to spend at your own store, wouldn’t it be easier to punch a higher amount into the terminal? If you’re a skimming ring, wouldn’t you prefer to have purchasing power rather than this $5000 threshold (?!) of extra gas (plus a giant neon sign advertising where you placed your skimmer)?
Wouldn’t a more sensible approach involve something like looking for merchant clusters in the combined transaction histories of known-stolen accounts?
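For illustration, that merchant-clustering idea (the classic "common point of purchase" analysis) can be sketched in a few lines of SQL. The table and column names below (`transactions`, `stolen_cards`, `merchant_id`) are made up for the sketch, and it's run via SQLite just to keep it self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table transactions (card_id text, merchant_id text, created text);
create table stolen_cards (card_id text);
""")
# Three cards later reported stolen all passed through merchant M1;
# one of them also shopped at M2. An unrelated card used M3.
conn.executemany("insert into transactions values (?,?,?)", [
    ("c1", "M1", "2024-01-01"), ("c2", "M1", "2024-01-02"),
    ("c3", "M1", "2024-01-03"), ("c1", "M2", "2024-01-04"),
    ("c9", "M3", "2024-01-05"),
])
conn.executemany("insert into stolen_cards values (?)",
                 [("c1",), ("c2",), ("c3",)])

# Common point of purchase: which merchants appear in the histories of
# the most distinct known-stolen cards?
rows = conn.execute("""
    select t.merchant_id, count(distinct t.card_id) as stolen_cards_seen
    from transactions t
    join stolen_cards s on s.card_id = t.card_id
    group by t.merchant_id
    order by stolen_cards_seen desc
""").fetchall()
print(rows)  # [('M1', 3), ('M2', 1)]
```

The skimmed merchant surfaces at the top because it is the one merchant the stolen cards have in common, regardless of where the fraudsters later spend.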
The LLM runs so strong in this whole enterprise… I want to give the person the benefit of the doubt, but I can’t resist the sneaking suspicion that LLM fabulism to push a slop novel just wasted 15 minutes of my life.
> a) trivial to bypass by adding dither to the test transactions and
I know someone who worked in fraud detection of financial transactions. He told me that indeed lots of the filters that are applied mostly test for anomalies. The thing is that most criminals are not insanely smart, and commonly don't have a lot of inside knowledge about accounting, banking, the financial system, etc., so criminals often have bad intuition about the more subtle things that fraud detection looks at.
But if you are a very dedicated criminal with lots of inside knowledge about, say, accounting, banking, finance system, ..., you could likely outsmart these filters. But these people typically have much better career options (even if they want a career as a "big fish criminal": just look at the history of accounting scandals, stock manipulations, Ponzi schemes, ...).
I'm seeing a few stores here and there which have a "round up to donate" option. I guess I'm a bit of a sucker and I always use that option. My groceries are always a round number as a result.
Agree - I don't think a giant multinational should get the cumulative charitable donation through their "Gavin Belson Foundation", and frankly, it coming while you're checking yourself out and navigating all the dark-pattern "share your email for an e-receipt?", "want our deal of the day?", "enter your loyalty card?", "fill out this poll?", "are you collecting stickers?" nonsense really grinds my gears. I just want cheaper groceries!
The point of these drives is to get more people to give to charity. Then you use a lullaby-word, as if setting up a charitable donation is as easy as saying "yes" when the checker asks if you want to give a small donation.
You're overthinking this. There's a publicity element to it, but the money just gets given to charity, like they say it does. It's not a conspiracy or a tax accounting trick.
Same way I can tell that my next door neighbor isn't poisoning my water supply or that Tesco doesn't have a secret chemical weapons program. That major supermarket chains are running scam charity appeals just isn't a hypothesis worth entertaining in the absence of any evidence for it.
> Border crossings inside 10 minutes. International rings.
Or normal people living in Europe in border-adjacent areas.
Also, I guess you don't include card-not-present transactions in this, but you incorrectly assume that every merchant has their location set correctly. And that every sale happens in a brick-and-mortar establishment, not from travelling salespeople or whatever. And that all transactions happen online.
Reading this to the very end uncovers empty and contradictory advice. I'm almost sure it's LLM generated.
We learn simultaneously that 'your team' shouldn't rely on any one of those patterns ('none of them is enough'), but that pattern 1 'alone will surface a useful amount of fraud'.
We also read strange sentences like "Every analyst on your team will use them (ie window functions) once they exist, and adding the next fraud pattern stops being a project. [end of paragraph]"
Or irrelevant discussions about how filtering by "IS NULL" might not be applicable, when almost none of the provided examples uses it (and the one which does uses it in a different context).
"Fixel Smith" is an AI-generated person, with an article that has very little to do with fraud analysis. 'This' is also a music artist (1), novelist (2), fraud analyst (3), influencer (4), and whatever else you can imagine.
220+ points and 70 comments, and very few notice that it's essentially a fake post, and no one that it's an AI-generated person?
Hacker News has recently developed a frustrating habit of upvoting such low-quality AI-slop submissions.
Makes me wonder if this AI flood uncovers an unflattering truth about this community's acuteness, or whether it's only a failure of the existing guardrails and we just need to change them.
This is something that genuinely interests me, but from a slightly different perspective. I regularly participate in tech/AI/fraud/security conferences, and I was curious whether people are really listening to and understanding the panelists, or just pretending to.
Last week I was on a privacy panel discussion where one of the speakers had a bad microphone, and honestly no one in the audience could actually hear what he was talking about for five minutes. Afterward, the audience, perhaps out of politeness, perhaps because no one really cared, reacted and applauded as usual.
This post, and especially its reception, feels like a caricature of what's happening in the tech industry right now. Hundreds upvote, very few really try to understand what the context is about, much less have the real experience to judge it, and yet here we are on the front page of HN.
Now the East Coast will wake up, and I'm really curious whether anything will change in this thread.
I was checking the submission on the phone and only peeked at the comments section. While it's not always easy to judge if something is AI-generated or edited, here it was obvious at first glance from the quotes. Assuming that all of the comments were done in good faith, I think that the low AI literacy even here is really concerning.
A cursory glance does make it appear like either a prolific individual or a bot. The fact that the novel bears little relation to the analytics posts, which seem to bear the style of LLM prose, makes the whole thing fishy. Ironic, given the subject matter of TFA.
I'd be more surprised to hear that most folks made a habit of investigating the people whose articles we read. To be honest, I usually don't even look at the byline, let alone the rest of the website.
I'm one of the creators of an open-source security framework (1). I've been eating online fraud for breakfast for 8 years. The article is delusional enough that I had to visit the top page of the domain (2).
It's definitely a real post. Yes it's obviously LLM-written, but if the worst thing you can say about an article is that it looks LLM-written, then maybe you don't have any real criticisms.
Whether the contents are made up or not is unclear, but you can criticise the content of the article without needing to speculate on whether it was written by an LLM or whether it is a work of fiction. It has plenty of much more concrete flaws.
The fact that the 'person' behind this post managed to publish a novel, a music album, and a few posts on fraud prevention all within a few days this month is enough for me to redflag it.
> you don't have any real criticisms.
Please check once again, I've already given my opinion on the article itself below. Here it is again for your convenience.
> I question the described approaches. For example, while impossible travel is a legitimate and widely used technique, it's related to online user behaviour based on IP address. Moreover, tirreno, for example, has separate rules for cases where the IP clearly comes from Apple Relay or VPN/Tor — those are separate flags. I assume some or all examples are LLM-generated, as the context is mixed up and no one actually collects GPS location in bulk for card swipes.
I could imagine a person having or doing all of these over time, people do have many interests, but a cursory glance does give an impression of AI. The Instagram account uses a lot of it at least, and the top domain was likely made in conjunction with AI, given the style.
Kind of fascinating, though it could still be a person doing this using AI as opposed to an entirely generated persona. Thanks for bringing it up.
We develop tirreno (1), an open-source security framework.
I question the described approaches. For example, while impossible travel is a legitimate and widely used technique, it's related to online user behaviour based on IP address. Moreover, tirreno, for example, has separate rules for cases where the IP clearly comes from Apple Relay or VPN/Tor — those are separate flags. I assume some or all examples are LLM-generated, as the context is mixed up and no one actually collects GPS location in bulk for card swipes.
> Fraud detection in transaction data is mostly SQL. Not machine learning, not graph databases, not whatever Gartner is hyping this year. SQL, run against the right tables, with the right joins, looking for the right shapes.
It's also not all program-integrity, which is the only work that could justify such blanket statements. Worse is better as long as it addresses the problem domain.
Fintech clients are generally interested in knowing whether a transaction happening _right now_ is fraud. They want to know that in a few milliseconds, for high-dimensional data. It's work done at a scale where relational databases cannot meet these real-time constraints, and instead find other uses like historical data loading. That's how you end up with in-memory databases, stream-processing engines, and yes, even machine learning.
Having said that, some of the author's points are valid, and I'm looking forward to their next writings; in particular, dealing with noisy alerts is a general problem beyond performance engineering.
In my experience, what you're describing would more specifically be called Fraud Prevention rather than Fraud Detection. Both tend to coexist and are complementary in a mature setup.
For Prevention, you're always going to be constrained by latency requirements, available data and an incomplete picture of user behaviour. You make a quick decision using ML and rules that deals with the majority of cases. But those constraints make it impossible to precisely prevent all fraud.
Detection deals with the downstream consequences of this. A team of analysts will typically analyse the accepted transactions for signs of fraud. This is particularly important for fraud types where you don't get an external signal like a chargeback or customer complaint. Platform integrity is one such example. But fintechs will also see this when building anti-money-laundering systems: you need to go looking for the fraud. This is the process the article is describing.
I say they're complementary because the detected transactions become the labels for training and evaluating the next iteration of prevention models.
Swiping a card (or inserting, or tapping) is a "card present" transaction. Online shopping, where you type in the card number, is a "card not present" transaction. Retailers and banks can tell the difference.
Fraudulent transactions will eventually cost the bank (when they would have to reverse/reimburse it and eat the loss). A denied transaction only results in an angry customer who will quickly forget after they complained - so the customer bears the brunt of the externalized cost.
Therefore, the bank's incentive is to err on the side of caution and deny transactions, even at the cost of false positives.
Spend retention is by far the highest priority. This is why they overnight FedEx replacement cards.
The worst scenario for a credit card issuer is when a customer, for whatever reason, starts using another bank’s card in their wallet as their daily driver.
Isn't the point of ML that you learn these rules from the data? The right approach to me would be to use ML models to detect patterns that correspond with fraud and then evaluate them to see if any make sense. This way you might discover new hypotheses.
Anything that can't be explained and iterated deterministically is too risky for the business of declining financial transactions.
Human analysts need to be able to explain to compliance in a single 5 minute email why a specific transaction was declined, and most importantly, what could have been done differently to avoid the adverse decision.
Fixing one problem with ML often creates two new problems that aren't quite obvious yet. SQL tends to have fewer surprises with regard to regressions and unexpected side effects as things change over time.
In my experience, Visa support can’t tell me why my legitimate transaction was tagged as fraudulent, other than to say it triggered an AI thing. They also can’t tweak the settings like they used to do, but they can manually allow specific transactions one by one on an ad hoc basis.
Recently, they've stopped even being able to allow specific transactions through for me. They can tag the flagged transaction as legitimate and hope the AI picks up on that, but that hasn't worked once in the last ~15 calls for me. I've just stopped trying to use Visa as my primary card online, a habit that bled into in-person purchases as well.
Umm, no, they submit thousands of random pages of business communication and system specs in discovery. This does not include the source code of their algorithm, which in any case is not stored in any form that can be recreated and shared. If you paid a lawyer a million bucks to read them all, the answer would be that they don't know how the algo works. At the same time, they offer you low four digits to make the case go away, if you have a case. If you don't have a valid case at all, they rapidly spend $250,000 on filings and motions, forcing you to spend $100,000 just to stay in the game.
I agree the post looks a little AI-written, but generally this kind of analysis is quite common. Leaving aside human heuristics that are generally too well known to catch real scammers (like impossible travel, or "7 days", which is bad because weekly patterns are often important, so at the very least look at 10 days) and actually have low precision, what I find odd is that all results just return a user ID.
So this is really just surfacing cases, but with not enough context to be useful to prioritise. I would expect a score to be included.
Apart from that it misses a lot of signals like refunds, declines, disputes etc [1].
Thanks for clarifying the rule - I wasn't aware of that and will follow it next time. But I also don't think it's a marketing link. The post explains why, specifically in the case of transaction fraud, other signals are important, with specific SQL examples, and that should translate to any other payment provider.
I actually find it intriguing that you work for Stripe and therefore presumably understand the content of the article you're referring to, but continue to pretend that the SQL examples somehow have value for fraud prevention purposes.
OK, let's take a look at this SQL. I took a random example:
> select
> date_format(date_trunc('week', d.created), '%Y-%m-%d') as week_iso,
> r.rule_id,
> r.predicate,
> count(distinct d.charge_id) as count_total_charges
> from rule_decisions d
> join radar_rules r on r.rule_id = d.rule_id
> where d.created >= date_add('month', -3, current_date)
> and d.action = 'block'
> group by 1,2,3
> order by 1;
The example above is about grouping rule decisions by Radar rules for performance optimisation, and has no value for any other fraud prevention techniques.
Overall, the test is simple: the link is called 'How to continuously improve your fraud management with Radar for Fraud Teams and Stripe Data' and the article itself is in the 'Product resources' category. It is not a general example, and using Stripe is necessary to get any value from it.
All of this makes the article marketing material, and given that you're employed by this company, that must be disclosed.
After 35 years of building software systems I've learned to temper my hubris. These days I rarely assume things to be "definitely true".
For example "Impossible travel": these days you can add your credit card to your phone and use Apple Pay. Well, this is useful for many things, one of them being adding your credit card to your kid's (teenager) phone, so that your kid can use your card in case of need/emergency when they are away from you. I did exactly that recently and actually worried about fraud control systems when my child paid using my card in Boston while I was in Europe.
Many things which you think are true might not be.
Anecdotally, US banks are terrible at building fraud control systems. It seems US banks assume any transaction that is charged by an entity outside the US is fraud. In my 10-year history of running a SaaS, the US banks and their "fraud control" systems have been one of the biggest billing problems.
Apple Pay & Google Wallet are actually considered lower risk by card brands than other card transactions, because Apple and Google have so much tracking and biometrics on you, and the phone must be unlocked and a PIN entered to pay. My company gets lower rates on these types of transactions than regular card transactions; rates are lower because fraud is paid for by the fees, so less fraud for a transaction type means lower merchant rates.
So likely those transactions your kid does on their phone are flying way below the fraud threshold to trigger, even if it hits one trigger like “impossible travel distance”.
>Anecdotally, US banks are terrible at building fraud control systems. It seems US banks assume any transaction that is charged by an entity outside the US is fraud. In my 10-year history of running a SaaS, the US banks and their "fraud control" systems have been one of the biggest billing problems.
This rings home so true, as a Canadian company I am SO TIRED of US banks flagging our transactions as fraud. We have done so much to try to prevent it too. We have a mail forwarding office address in the US. A bank account in USD in the US registered to that address, the merchant account tied to that charging in USD, and still we get these fraud flags. And we’re over the 10 year mark now, I think almost 15. You would think we would have built up some trust at these banks, but nope.
My next biggest hassle lately is that we are a “tokenize and bill later” type service, and we don’t charge the same recurring amount every month; it depends on the user's incurred charges in that period. And lately it seems most Americans leave their cards permanently locked and only unlock them to allow a charge, which means most of our charges decline initially until the user unlocks their card and retries the payment. A real support headache. If anyone has a fix to either of these problems, I would pay good money for it.
The card processor collects a lot of data; presumably they would have a flag for whether the card was used via a phone or real plastic. I suspect the "card used in 2 locations" thing is pretty old. Cards were supposed to have switched to chips many years ago now. AFAIK magstripes are the only ones that can be cloned.
In reality, most banks perform a lot of these transaction checks in real time to block fraudulent txes up-front, instead of validating tx legitimacy retroactively at a point where the money is already gone. Some 15 years ago a security rep with Nordea (a large Nordic bank) called me late at night asking if I was currently in South Korea and had just a minute ago used my card in a shop. Someone had initiated a "card present" purchase with my card for 1337 SEK (I'm certain this amount was intentional), which Nordea automatically blocked as it was near the edge of possibility relative to my previous card swipe in Sweden earlier in the day, and they wanted to make sure they weren't about to mistakenly strand me abroad by blocking the card.
I had this happen once - I flew to a city about 8 hours of driving time away to buy a motorcycle and landed late in the evening. My card was declined when I got gas a little after midnight and I had no cash or other card with me so I called the 24 hour support line. I had a quick conversation with a support agent explaining that I was traveling and the card needed to be reactivated right away. Within five minutes the card was working and I was back to working my way down a long chain of mistakes.
As the tail end of the article explains, these are independent pieces of evidence, not independent proofs: most of them can be legitimate operations (even the speed one: airliners cruise at that speed, and if you get to ride a long-range business jet they can cruise faster).
All of them can be genuine use, these are fraud signals not fraud proofs, and the article does cover this:
> What works is running them all and scoring each transaction across the signals. A transaction that fails on three or four of them is almost always fraud. A transaction that fails on one might be your grandma being weird with her debit card on vacation.
> If a card swipes in Chicago and seven minutes later swipes in Los Angeles, one of those swipes is fake. The card is cloned. This is the most uncontroversial fraud signal you’ll find — there’s almost no legitimate reason a single card is in two distant places in seven minutes.
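As a rough sketch, the check that quote describes boils down to computing the implied travel speed between consecutive swipes. The coordinates below are approximate city centers, and the 900 km/h threshold is an assumption standing in for airliner cruise speed:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def implied_speed_kmh(distance_km, minutes):
    return distance_km / (minutes / 60)

# Chicago -> Los Angeles, swipes seven minutes apart.
d = haversine_km(41.88, -87.63, 34.05, -118.24)
print(round(d), "km apart")            # roughly 2800 km
print(implied_speed_kmh(d, 7) > 900)   # far beyond any aircraft -> flag
```

Seven minutes for that distance implies a speed in the tens of thousands of km/h, which is why this particular signal is about as uncontroversial as fraud signals get.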
The Apple/Google Pay cards have a DPAN (device account number) that is different to the CPAN of the physical card. It keeps the same issuer (first 6 digits) and the same "last 4" digits, but the others are different.
The DPAN is translated into the CPAN by software at the issuing bank, so it's not identifiable by the merchants.
Merchants get the "last 4" digits, but that's not enough to identify specific CPANs.
> A transaction that fails on three or four of them is almost always fraud. A transaction that fails on one might be your grandma being weird with her debit card on vacation.
The article states that the particular item is a clear sign of fraud. If that was true, then it should be treated in a special manner. A more paranoid bank could enforce it without adhering to this guidance of multi-factor detection.
It isn't though, so balancing it with other rules is fine.
The main problem with these SQL calculations is that they are deterministic shortcuts for a probabilistic problem. Fraud is not usually a “true because rule X matched.” It is more like "what is the probability this is fraudulent"? SQL patterns are useful, but they are blunt instruments. I really don't think banks use deterministic heuristics but more data science stuff.
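To make the contrast concrete: one middle ground between hard rules and a full probabilistic model is a weighted score over the same deterministic signals. The weights and threshold below are purely illustrative assumptions, not calibrated on anything:

```python
# Each signal contributes a weight instead of a hard yes/no; a threshold
# on the combined score stands in for "fails on three or four of them".
SIGNAL_WEIGHTS = {
    "impossible_travel": 0.9,
    "round_amount": 0.2,
    "outside_usual_hours": 0.3,
    "merchant_spike": 0.5,
}

def fraud_score(signals):
    return sum(SIGNAL_WEIGHTS[s] for s in signals)

REVIEW_THRESHOLD = 1.0

# One weak signal alone stays below the review threshold...
print(fraud_score(["outside_usual_hours"]))  # 0.3
# ...but several together cross it.
print(fraud_score(["impossible_travel", "round_amount",
                   "outside_usual_hours"]) >= REVIEW_THRESHOLD)  # True
```

A real system would learn the weights from labelled outcomes (which is essentially what a logistic model does), but even this crude version captures the "no single rule decides" idea without leaving SQL-expressible territory.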
I have a fair amount of experience in this industry, albeit a couple of years old now. I worked at Square on their payment risk team in 2015 and 2016, and at Plaid on their ACH fraud API product called Signal from 2021 to 2024. At Plaid I was involved in client meetings and learned how many companies were already handling risk, and I've interviewed with a handful of other companies' risk teams when I was looking for a new role.
Basically it's not just banks and formal financial institutions doing this, and how they do it depends on the company size. Size tends to correlate not only with how many resources you have for a risk team, but also with whether fraud rings are targeting you.
Usually what I've seen is that companies start with some kind of batch SQL/simple logic process that runs daily and tends to flag accounts for manual review and block automatic events like settlement or trading (or whatever the platform does) until that review has been done. Then over time the company will transition to an ML-based approach that still mostly flags things for manual review. The goal of the ML is to improve the precision of the flagging without hurting dollar recall or fraud event recall too much. Depending on the payment system companies may be sensitive to both (for example, in ACH if you get too many returns, even very low dollar payment returns, you're going to get a hard time from your partner bank and you risk not being able to use ACH anymore).
This takes me back: fighting telephone fraud back when folks used to accept CC numbers over the phone. We used similar patterns but only had phone numbers and the white pages: crossings of state boundaries inside similar time frames, and categorizing similar merchant types. It’s fun to see these same patterns still in use 20 years later for the same purpose.
This is very cool to read. Although I've never truly worked in fraud prevention, I stumbled into automating a lot of similar pattern checks to catch collusion and fraud when I wrote and ran a poker site / casino. Window functions were not available then so the queries were LONG. One way I'd deal with it was to assign uuids to every pair of players who'd ever shared a poker table, and then run nightly analysis of how much their betting deviated from expected norms and their own baseline on each stage of the game if they were in the same hand as each other. This could actually be done in one or two magnificent 100+ line SQL queries on the history table, on a read replica.
Lagging window functions and/or lateral joins probably would have reduced it to 1/4 the size but definitely increased the cost versus just narrowing the sets into smaller tables first.
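For what it's worth, the baseline-deviation part of that analysis is exactly what a lagging window frame shortens today. A toy sketch (the `bets` table and amounts are hypothetical, run via SQLite, which has supported window functions since 3.25):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table bets (player text, hand_no int, amount real)")
conn.executemany("insert into bets values (?,?,?)", [
    ("alice", 1, 10), ("alice", 2, 12), ("alice", 3, 11), ("alice", 4, 90),
    ("bob",   1, 50), ("bob",   2, 55), ("bob",   3, 52), ("bob",   4, 54),
])

# Compare each bet against that player's running average over prior
# hands: a large ratio flags deviation from the player's own baseline.
rows = conn.execute("""
    select player, hand_no, amount,
           amount / avg(amount) over (
               partition by player
               order by hand_no
               rows between unbounded preceding and 1 preceding
           ) as vs_baseline
    from bets
    order by vs_baseline desc
""").fetchall()
print(rows[0][:2])  # ('alice', 4): hand 4 is ~8x her own baseline
```

The frame clause (`rows between unbounded preceding and 1 preceding`) is the part that replaces the old self-join against a per-player history table.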
These all seem pretty elementary TBH. They focus on identifying fraudulent transactions, vs (IME) the more valuable task of deciding whether a transaction is fraudulent. This is totally doable today. Example: instead of some sort of "outside of normal hours" flag, you can confidently determine that "coffee at 2am" is likely fine if they also bought gas 10 minutes earlier from the same merchant, dinner 300 miles away at 7pm, and gas again 8 hours ago in their home town.
This seems interesting, but has so many signs of AI writing that I worry it's not just edited but generated from whole cloth. Probably still a lot of truth in there but it does give me pause!
Oh shyte, I use (and have used) these for a long time. Guess everything is classed as AI nowadays; just yield and use it (everyone thinks you do anyway).
This comment certainly does not scan as AI! Look, this isn't perfect, but it's the best we've got, and so long as AI writing is meaningfully worse than human writing, people are going to try to tell the difference.
side topic: why should AI slop be banned or eschewed when 80% of the articles on here are ABOUT AI? In other words, why is HN so keen on promoting the pseudo-science of making AI work, then recoil when presented the fruits of their own labor? Seems contradictory and elitist. Like trying to force the board of McD's to Mukbang 3+ happy meals each.
AI != slop. Just because I'm positive about AI doesn't mean I want to consume slop. I personally use AI for productive means, not to write and publish low-quality articles.
pushing slop is "productive means" for many people. they can sell the accounts/identities to election-manipulators or advertisers. bots age like fine wine.
Article was written by AI (dozens of giveaways), so take the content with a grain of salt! The author could not be bothered to write it by hand, yet demands your attention to read it. I'm not even sure which parts are based on his prompt and which were hallucinated... How to butcher what could have been an interesting article... sigh
This is the sort of thing I used to love doing and I often gaze at raw data analysis and sometimes wish my career had pivoted towards working with data like this.
But I must admit there was a point where I suddenly lost my love for SQL and it was pretty much when the OVER PARTITION BY syntax appeared.
It never clicks. I always have to look up how it works, I always find it unintuitive. I've never understood why I hate it so much.
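For anyone else stuck on it: `OVER (PARTITION BY ...)` is essentially a `GROUP BY` that keeps every row instead of collapsing them. A toy illustration (Python's sqlite3, SQLite 3.25+; invented table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table txns (card text, amount real);
insert into txns values ('a', 10), ('a', 30), ('b', 5);
""")

# GROUP BY would collapse this to one row per card; PARTITION BY computes
# the same per-card aggregate but leaves one output row per input row.
rows = conn.execute("""
select card, amount,
       avg(amount) over (partition by card) as card_avg
from txns
order by card, amount
""").fetchall()

print(rows)
```

Each row keeps its own `amount` while carrying its group's average alongside, which is exactly what lets you write "this transaction vs this card's baseline" in one pass.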
> Most people are creatures of habit when they spend money. A nine-to-fiver doesn’t suddenly start buying gas at 3am.
Breaking out of a habit once in a while is what keeps one's mind sharp.
A big "fuck you" to financial analysts with those groundhog-day mindsets for making my life much more miserable than it needs to be and for adding a chilling effect to those little getaways that make life interesting and worthwhile. I despise you for this.
Similarly, buying a full tank of gas at Morrisons in Bradford, where the supermarket chain's headquarters are, then driving five hours north and refuelling at a different Morrisons, which shows the transaction as coming from their banking systems in Bradford but tagged with a city in Scotland. This is apparently because it's implausible to drive from central England to central Scotland in a few hours and then need to refuel.
They are (or were, at least, when last checked a year ago) 100% repeatable.
Coffee usually _is_ a round number in my experience, and I know of people who aim for round numbers when filling their car, and of fuel stations which require a pre-set value, often €10, €20, €50, etc.
> Real cardholders almost never buy something for exactly $1.00. Coffee is $4.73, gas is $52.81. The roundness is the signal.
Card got blocked as they thought it was fraud. Annoying! And not something inebriated me wanted to deal with at 2am.
Ok. Maybe they protected me from myself, but still!
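For reference, the roundness heuristic the article hints at is a one-liner; the thresholds below are invented, and as noted above it collapses wherever prices are round by default:

```python
# Toy version of the "roundness" signal: zero cents and a small round
# dollar figure, the classic shape of a card-testing charge.
TEST_AMOUNTS = {1, 2, 5, 10}  # hypothetical dollar values to watch

def looks_like_test_charge(cents):
    return cents % 100 == 0 and cents // 100 in TEST_AMOUNTS

amounts = [100, 473, 5281, 1000]  # $1.00, $4.73, $52.81, $10.00
print([looks_like_test_charge(c) for c in amounts])
# [True, False, False, True]
```

In a VAT-inclusive market where €10.00 is an ordinary coffee-and-cake total, this check alone is mostly noise, which is the parent's point.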
> When a skimmer compromises a card reader at, say, a gas pump, you don’t get one fraud case. You get dozens. Every card swiped at that pump for the next few weeks is now in someone’s database. So the symptom from the merchant side is: an unusual number of unrelated cards spending more than usual, in a short window.
So he checks for hour-bucketed increases in high-value transactions originating from that merchant.
Seems to me like a good way to catch a sale, an opening, a launch event, a product “drop,” or a single high-value sale that somebody spreads across several cards… and less so a good way to detect a steady trickle of stolen card data that's inexplicably used back at the same merchant.
If you’re installing a card skimmer, why would you charge the stolen cards at the same business where you’re stealing them? And why would you concentrate your spending into bursts if the skimmer’s harvesting all day every day?
If you’re the merchant doing the skimming in order to spend at your own store, wouldn’t it be easier to punch a higher amount into the terminal? If you’re a skimming ring, wouldn’t you prefer to have purchasing power rather than this $5000 threshold (?!) of extra gas (plus a giant neon sign advertising where you placed your skimmer)?
Wouldn’t a more sensible approach involve something like looking for merchant clusters in the combined transaction histories of known-stolen accounts?
The LLM runs so strong in this whole enterprise… I want to give the person the benefit of the doubt, but I can’t resist the sneaking suspicion that LLM fabulism to push a slop novel just wasted 15 minutes of my life.
a) trivial to bypass by adding dither to the test transactions and
b) trivial to improve upon with proper statistical analysis and
c) shouldn't this kind of heuristic pattern recognition with no expectation of near-100% accuracy be what AI is good at?
I know someone who worked in fraud detection of financial transactions. He told me that indeed lots of filters that are applied mostly test for anomalies. The thing is that most criminals are not insanely smart, and commonly don't have a lot of inside knowledge about accounting, banking, finance system etc., so criminals often have a bad intuition about more subtle things that are looked at for fraud detection.
But if you are a very dedicated criminal with lots of inside knowledge about, say, accounting, banking, finance system, ..., you could likely outsmart these filters. But these people typically have much better career options (even if they want a career as a "big fish criminal": just look at the history of accounting scandals, stock manipulations, Ponzi schemes, ...).
Just set up a direct debit to your favourite charity.
The point of these drives is to get more people to give to charity. Then you use a lullaby-word, as if setting up a charitable donation were as easy as saying "yes" when the checker asks if you want to give a small donation.
It's actually very easy to give £5/month direct to a charity. Takes about 2 minutes; you just gotta do it.
Or normal people living in Europe in border-adjacent areas.
Also, I guess you don't include card-not-present transactions in this, but you incorrectly assume that every merchant has their location set correctly. And that every sale happens in a brick-and-mortar establishment, not from travelling salespeople or whatever. And that no transactions happen online.
We learn simultaneously that 'your team' shouldn't rely on any one of those patterns ('none of them is enough'), but that pattern 1 'alone will surface a useful amount of fraud'.
We also read strange sentences like "Every analyst on your team will use them (i.e. window functions) once they exist, and adding the next fraud pattern stops being a project. [end of paragraph]"
Or irrelevant discussions about how filtering by "IS NULL" might not be applicable, when almost none of the provided examples uses it (and the one that does uses it in a different context).
This is low quality and too long.
"Fixel Smith" is an AI-generated persona, with an article that has very little to do with fraud analysis. This "person" is also a music artist (1), novelist (2), fraud analyst (3), influencer (4), and whatever else you can imagine.
220+ points and 70 comments, and very few notice it's quite a fake post — and no one notices that it's an AI-generated person?
1. https://www.amazon.it/Forged-Soundtrack-Explicit-Fixel-Smith...
2. https://fixelsmith.com
3. https://analytics.fixelsmith.com/
4. https://www.instagram.com/fixeltales/
Makes me wonder if this AI flood uncovers an unflattering truth about this community's acuity, or whether it's only a failure of existing guardrails and we just need to change them.
Last week I was on a privacy panel discussion where one of the speakers had a bad microphone, and honestly no one in the audience could actually hear what he was talking about for five minutes. Afterward, the audience, perhaps out of politeness, perhaps because no one really cared, reacted and applauded as usual.
This post, and especially its reception, feels like a caricature of what's happening in the tech industry right now. Hundreds upvote, very few really try to understand what the context is about, much less have the real experience to judge it, and yet here we are on the front page of HN.
Now the East Coast will wake up, and I'm really curious whether anything will change in this thread.
Well, sure. But some people come here just for the comments and don't read the articles.
It's definitely a real post. Yes it's obviously LLM-written, but if the worst thing you can say about an article is that it looks LLM-written, then maybe you don't have any real criticisms.
Whether the contents are made up or not is unclear, but you can criticise the content of the article without needing to speculate on whether it was written by an LLM or whether it is a work of fiction. It has plenty of much more concrete flaws.
The fact that the "person" behind this post managed to publish a novel, a music album, and a few posts on fraud prevention, all within a few days this month, is enough for me to red-flag it.
> you don't have any real criticisms.
Please check once again, I've already given my opinion on the article itself below. Here it is again for your convenience.
> I question the described approaches. For example, while impossible travel is a legitimate and widely used technique, it's related to online user behaviour based on IP address. Moreover, tirreno, for example, has separate rules for cases where the IP clearly comes from Apple Relay or VPN/Tor — those are separate flags. I assume some or all examples are LLM-generated, as the context is mixed up and no one actually collects GPS location in bulk for card swipes.
A fake blog post would be something that purports to be a blog post but actually isn't. This is definitely a real post.
Kind of fascinating, though it could still be a person doing this using AI as opposed to an entirely generated persona. Thanks for bringing it up.
Bunch of thresholds, no data proving those thresholds are meaningful.
I question the described approaches. For example, while impossible travel is a legitimate and widely used technique, it's related to online user behaviour based on IP address. Moreover, tirreno, for example, has separate rules for cases where the IP clearly comes from Apple Relay or VPN/Tor — those are separate flags. I assume some or all examples are LLM-generated, as the context is mixed up and no one actually collects GPS location in bulk for card swipes.
1. https://github.com/tirrenotechnologies/tirreno
It's also not all program-integrity, which is the only work that could justify such blanket statements. Worse is better as long as it addresses the problem domain.
Fintech clients are generally interested in knowing whether a transaction happening _right now_ is fraud. They want to know that in a few milliseconds, for high-dimensional data. It's work done at a scale where relational databases cannot meet these real-time constraints, and instead find other uses like historical data loading. That's how you end up with in-memory databases, stream-processing engines, and yes, even machine learning.
Having said that, some of the author's points are valid, and I'm looking forward to their next writings; in particular, dealing with noisy alerts is a general problem beyond performance engineering.
For Prevention, you're always going to be constrained by latency requirements, available data and an incomplete picture of user behaviour. You make a quick decision using ML and rules that deals with the majority of cases. But those constraints make it impossible to precisely prevent all fraud.
Detection deals with the downstream consequences of this. A team of analysts will typically analyse the accepted transactions for signs of fraud. This is particularly important for fraud types where you don't get an external signal like a chargeback or customer complaint. Platform integrity is one such example. But Fintechs will also see this building anti-money laundering systems - you need to go looking for the fraud. This is the process the article is describing.
I say they're complementary because the detected transactions become the labels for training and evaluating the next iteration of prevention models.
Can also imagine an edge case: couple shares an online account, one is traveling and purchases with the saved card details.
This is an underrated CX factor: if my card gets denied when I'm a new customer or exhibiting a new pattern, I'm impressed with their software.
However, if they deny a transaction where there is any previous history of me authenticating, then I'm frustrated by their naive, paranoid algorithm.
Fraudulent transactions will eventually cost the bank (when they would have to reverse/reimburse it and eat the loss). A denied transaction only results in an angry customer who will quickly forget after they complained - so the customer bears the brunt of the externalized cost.
Therefore, the bank's incentive is to err on the side of more caution and deny transactions, accepting the false positives.
The worst scenario for a credit card issuer is when a customer, for whatever reason, starts using another bank’s card in their wallet as their daily driver.
Human analysts need to be able to explain to compliance in a single 5 minute email why a specific transaction was declined, and most importantly, what could have been done differently to avoid the adverse decision.
Fixing one problem with ML often creates two new problems that aren't quite obvious yet. SQL tends to have fewer surprises with regard to regressions and unexpected side effects as things change over time.
So this is really just surfacing cases, but without enough context to usefully prioritise them. I would expect a score to be included.
Apart from that it misses a lot of signals like refunds, declines, disputes etc [1].
1) https://stripe.com/gb/guides/improve-fraud-management-with-r...
If you're affiliated with Stripe as an Integration Engineer, that should come before the marketing link to Stripe website.
I actually find it intriguing that you work for Stripe and therefore presumably understand the content of the article you're referring to, but continue to pretend that the SQL examples somehow have value for fraud prevention purposes.
OK, let's take a look at this SQL. I took a random example:
> select
> date_format(date_trunc('week', d.created), '%Y-%m-%d') as week_iso,
> r.rule_id,
> r.predicate,
> count(distinct d.charge_id) as count_total_charges
> from rule_decisions d
> join radar_rules r on r.rule_id = d.rule_id
> where d.created >= date_add('month', -3, current_date)
> and d.action = 'block'
> group by 1,2,3
> order by 1;
The example above is about grouping rule decisions by Radar rules for performance optimisation, and has no value for any other fraud prevention techniques.
Overall, the test is simple: the link is called 'How to continuously improve your fraud management with Radar for Fraud Teams and Stripe Data' and the article itself is in the 'Product resources' category. It is not a general example, and using Stripe is necessary to get any value from it.
All of this makes the article marketing material, and given that you're employed by this company, that must be disclosed.
For example "Impossible travel": these days you can add your credit card to your phone and use Apple Pay. Well, this is useful for many things, one of them being adding your credit card to your kid's (teenager) phone, so that your kid can use your card in case of need/emergency when they are away from you. I did exactly that recently and actually worried about fraud control systems when my child paid using my card in Boston while I was in Europe.
Many things which you think are true might not be.
Anecdotally, US banks are terrible at building fraud control systems. It seems US banks assume any transaction that is charged by an entity outside the US is fraud. In my 10-year history of running a SaaS, the US banks and their "fraud control" systems have been one of the biggest billing problems.
This rings home so true; as a Canadian company I am SO TIRED of US banks flagging our transactions as fraud. We have done so much to try to prevent it, too. We have a mail-forwarding office address in the US, a bank account in USD in the US registered to that address, and the merchant account tied to that charging in USD, and still we get these fraud flags. And we're over the 10-year mark now, I think almost 15. You would think we would have built up some trust at these banks, but nope.
My next biggest hassle lately is that we are a “tokenize and bill later” type service, and we don't charge the same recurring amount each month; it depends on the user's incurred charges in that period. And lately it seems most Americans leave their cards permanently locked and only unlock them to allow a charge, which means most of our charges decline initially until the user unlocks their card and retries the payment. A real support headache; if anyone has a fix to either of these problems I would pay good money for it.
What about the tables?
> What works is running them all and scoring each transaction across the signals. A transaction that fails on three or four of them is almost always fraud. A transaction that fails on one might be your grandma being weird with her debit card on vacation.
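That scoring idea sketches out to almost nothing; the individual signals and the cut-off below are invented stand-ins, not the article's rules:

```python
# Toy version of "run every pattern and score the hits": each check
# returns a boolean, and only multi-signal transactions get escalated.
SIGNALS = [
    lambda t: t["amount_cents"] % 100 == 0,      # suspiciously round amount
    lambda t: not (6 <= t["hour"] <= 23),        # outside usual hours
    lambda t: t["miles_from_last"] > 500,        # implausible travel
    lambda t: t["merchant_new"],                 # never-seen merchant
]

def score(txn):
    return sum(bool(check(txn)) for check in SIGNALS)

# Grandma on vacation trips two signals; a card-testing bot trips all four.
grandma = {"amount_cents": 1237, "hour": 14, "miles_from_last": 900,
           "merchant_new": True}
tester  = {"amount_cents": 100, "hour": 3, "miles_from_last": 2000,
           "merchant_new": True}
print(score(grandma), score(tester))  # 2 4
```

With a threshold of 3, grandma's vacation survives and the $1.00-at-3am charge gets flagged, which is the behaviour the quote describes.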
The DPAN is translated into the CPAN by software at the issuing bank, so it's not identifiable by the merchants.
Merchants get the "last 4" digits, but that's not enough to identify specific CPANs.
It isn't though, so balancing it with other rules is fine.
Basically it's not just banks and formal financial institutions doing this, and how they do it depends on the company size. Size tends to correlate not only with how many resources you have for a risk team, but also with whether fraud rings are targeting you.
Usually what I've seen is that companies start with some kind of batch SQL/simple logic process that runs daily and tends to flag accounts for manual review and block automatic events like settlement or trading (or whatever the platform does) until that review has been done. Then over time the company will transition to an ML-based approach that still mostly flags things for manual review. The goal of the ML is to improve the precision of the flagging without hurting dollar recall or fraud event recall too much. Depending on the payment system companies may be sensitive to both (for example, in ACH if you get too many returns, even very low dollar payment returns, you're going to get a hard time from your partner bank and you risk not being able to use ACH anymore).
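A toy version of that first-stage batch process; the schema and the 10x-volume rule are invented, purely illustrative:

```python
import sqlite3

# Hypothetical nightly job: flag accounts whose latest-day volume dwarfs
# their prior daily average, and hold settlement pending manual review.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table transfers (account text, amount real, day integer);
insert into transfers values
  ('a', 50, 1), ('a', 60, 2),
  ('b', 50, 1), ('b', 9000, 2);
""")

# Accounts whose day-2 total exceeds 10x their earlier daily average.
flagged = [r[0] for r in conn.execute("""
select account
from transfers
group by account
having sum(case when day = 2 then amount else 0 end)
     > 10 * avg(case when day < 2 then amount end)
""")]

print(flagged)  # ['b']
```

Everything `flagged` would go to a human queue rather than being auto-blocked, matching the "improve precision without hurting recall" goal described above.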
> The roundness is the signal.
> Slight pain, same result.
to point at a few.
And my favourite most hated pattern, the no no no:
> Not machine learning, not graph databases, not whatever Gartner is hyping this year.
This is Claude talking, isn't it?
How do you deal with vacations and online shopping? You could be in another country or two in a few hours, and purchase from across the world.
Machine learning systems also learn your pattern. The article gives simple SQL rules. Don't dismiss this article as worthless.
Signals he can check? So some random dude is looking at my credit card purchase history while playing around with his SQL queries?
Or, the cardholder is trying to do the cannonball run:
https://www.youtube.com/shorts/Dx5WPNIEwiE
chargeback-mcp
or would you turn it all into a markdown file and call it a skill?