Specifically, I was curious about how Harvard's endowment has grown from its initial £780 in 1638, so I asked Google to calculate compound interest for me. A variety of searches all yield a reasonable formula which is then calculated to be quite wrong. For example:

{calculate the present value of $100 compounded annually for 386 years at 3% interest} yields $0.736.

{how much would a 100 dollar investment in 1638 be worth in 2025 if invested} yields $3,903.46.

{100 dollars compounded annually for 386 years at 3 percent} yields "The future value of the investment after 386 years is approximately $70,389."

And my favorite: {100 dollars compounded since 1638} tells me a variety of outcomes for different interest rates:

    A = 100 * (1 + 0.06)^387
    A ≈ 8,090,950.14
    A = 100 * (1 + 0.05)^387
    A ≈ 10,822,768.28
    A = 100 * (1 + 0.04)^387
    A ≈ 14,422,758.11
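For reference, evaluating those same formulas correctly takes two lines of Python; a quick sanity check of my own, just to show how far off the quoted numbers are:

    # A = P * (1 + r)^n, for the rates/periods quoted above
    for r, n in [(0.03, 386), (0.04, 387), (0.05, 387), (0.06, 387)]:
        print(f"100 * (1 + {r})^{n} = {100 * (1 + r) ** n:,.2f}")
    # 3% for 386 years is ~9.0 million (not $70,389, and not $0.736),
    # and 6% for 387 years is ~6.2e11 (not $8,090,950.14).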
How can we be so reasonable and yet so bad!?
Aside from the general limitations of this technology, Google needs this to be quite cheap if it runs for every request.
There is not a lot of revenue for a single search, and right now the AI results are actually pushing the links people are paying Google to display further down the page.
The AI Overview is worse than useless. It either hallucinates things or treats shitposts as just as valid a source of information as anything else.
As a side note, though, Harvard's endowment probably wasn't sitting in a bank account at a flat 3% interest rate for a few hundred years...
"slopsquatting" is the term coined for this.
Essentially, bad actors are registering these packages and uploading malware. If you happen to just blindly follow the AI, there's a chance your system gets infected.
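A cheap defense is to verify that a suggested package actually exists before installing it. A minimal sketch against PyPI's JSON API; the endpoint is real, but the "treat missing as hallucinated" policy is just my assumption of a sane default:

    import sys
    import requests

    def check_package(name: str) -> None:
        # PyPI returns 404 for packages that don't exist: a strong hint
        # that an LLM-suggested name was hallucinated, or is slopsquat
        # bait if it suddenly appears later.
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        if resp.status_code == 404:
            print(f"{name}: not on PyPI, possibly hallucinated")
            return
        resp.raise_for_status()
        info = resp.json()["info"]
        print(f"{name}: exists, latest version {info['version']}")

    if __name__ == "__main__":
        for pkg in sys.argv[1:]:
            check_package(pkg)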
I saw one example where someone asked about the fastest way to boil water, and the AI overview confidently stated that adding salt lowers the boiling point significantly, making it boil faster. It sounds vaguely scientific but gets a fundamental concept completely backward! That's the kind of error that's more worrying than just bad math – it confidently misrepresents basic, easily verifiable science.
It's a strange feeling having to approach Google search results, which used to be the gold standard for getting quickly pointed to reliable info, with a layer of skepticism now. The AI Overview feels like a glossy, sometimes misleading advertisement for the links I actually wanted in the first place.
Btw Google, you're welcome: I'm clicking links and making you money. For my needs, you're a great search engine.
His example was "A swan won't prevent a hurricane meaning"
https://simonwillison.net/2025/Apr/23/meaning-slop/
Like, people asked "does Lululemon use <name of some Chinese company> to make its products" and Google says "yes", with no source except one TikTok video that falsely claims it in order to boost sales in the face of tariffs (ignoring that the company isn't in the actual supplier list Lululemon publishes on its site).
Which basically means people see that TikTok, go to Google to fact-check whether it's true, and the AI Overview says "yes" (plus paragraphs of text that no one reads), citing that same TikTok.
A vicious circle of LLM fact-checking. Google used to be immune to it, until it started shoving chatbot output in people's faces.
Also, the assumption of "3% interest" is wrong. There are records of stretches where the endowment returned 15% for several years running, reaching 23% in 2007, for example.
https://www.bloomberg.com/news/articles/2005-01-11/harvard-l...
https://www.wsj.com/articles/SB118771455093604172
This was 2 minutes of old school search, no LLM needed.
What I am saying is that asking an LLM to do an interest calculation is absurd in itself, let alone in the absurd setting of trying to calculate returns across four centuries and different currencies.
My point is that, in seeking to understand the growth of the Harvard endowment, it would be much more rational to search for factual information about its modern history. And if you wanna do abstract financial-modelling exercises, just use a spreadsheet. Either way, LLMs are a hilariously bad fit.
780 compounded by 3% per year for ~400 years is about 100 million, by the way. The endowment is actually worth tens of billions today, so ignoring all else, that's off by at least two orders of magnitude.
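A quick check of that figure, ignoring inflation and the pound-to-dollar question entirely:

    print(780 * 1.03 ** 400)   # ≈ 1.06e8, i.e. about 100 million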
Also, I don't quite get it:
> everything was beautifully accurate up until the end when the number felt suspicious
The LLM-generated text about compounding interest over 400 years, from early modern British pounds to modern dollars, was accurate? How is it possible to be accurate about an absurd operation?
I've been having my AI stuff successfully do math since the early GPT-3 days with this method, even before "tool calling."
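The comment doesn't spell out the method, but the classic pre-tool-calling pattern is to have the model emit a bare arithmetic expression and then evaluate it locally, rather than trusting the model's own arithmetic. A minimal sketch of that pattern (my assumption, not necessarily what the parent does):

    import ast
    import operator

    # Whitelist of allowed operations; anything else is rejected.
    OPS = {
        ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg,
    }

    def safe_eval(expr: str) -> float:
        """Evaluate a model-emitted arithmetic expression without eval()."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.operand))
            raise ValueError("disallowed expression")
        return walk(ast.parse(expr, mode="eval"))

    # Prompt the model to answer with just an expression, then:
    print(safe_eval("100 * (1 + 0.03) ** 386"))   # ≈ 9.0e6, computed correctly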
Why?
One item said 25/7/25, the other one said 25/7/24. As you can imagine, I was sure the first one was safe, but the second one was confusing.
It told me that it's safe to eat because the Japanese date format is Year/Month/Day.
I looked up the Japanese date format on Google (with the Overview) just to confirm. I guess we'll find out. Will report back soon.
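For what it's worth, the ambiguity is easy to demonstrate by parsing the same string under both conventions (a toy illustration, nothing to do with how Google read it):

    from datetime import datetime

    s = "25/7/24"
    # Japanese convention: year/month/day
    print(datetime.strptime(s, "%y/%m/%d").date())   # 2025-07-24
    # day/month/year reading of the very same string
    print(datetime.strptime(s, "%d/%m/%y").date())   # 2024-07-25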
And even if you do ask a legitimate question, you then have to hope the system knows what you actually mean, rather than taking every word in your question literally and returning a complete non-answer. So you might ask "was [actor name] in Chicago (the movie)?", only for Google to say "no, [actor name] doesn't live in Chicago".
Add in the dangerous misinformation, the extremist answers it sometimes generates, and its habit of making up sources when it can't find any, and, well, it's basically useless for just about everything.
We're all in IT. We know what an LLM is. But still we're fooled!?