ChatGPT serves ads. Here's the full attribution loop

(buchodi.com)

114 points | by lmbbuchodi 1 hour ago

18 comments

  • didip 5 minutes ago
    So the news about OpenAI's demise is real. They can't sustain themselves without ads.
  • WD-42 1 hour ago
    Since they are served as distinct events, I would think they should be easy to block.

    Once the ads are injected directly into the main response is when things get interesting.

    • kardos 21 minutes ago
      > Once the ads are injected directly into the main response is when things get interesting.

      This would be where you post-process the LLM response with a second LLM to remove the ads.
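
      A minimal sketch of what that second pass could look like. A toy keyword check stands in for the second LLM's judgment; the marker list and function names are hypothetical:

```python
# Sketch: post-process a chat response and drop segments flagged as ads.
# A real pipeline would ask a second LLM "is this paragraph an ad?";
# a toy keyword check stands in for that call here.

AD_MARKERS = ("sponsored", "promoted", "partner offer")

def looks_like_ad(segment: str) -> bool:
    """Stand-in for the second LLM's ad/not-ad judgment."""
    lowered = segment.lower()
    return any(marker in lowered for marker in AD_MARKERS)

def strip_ads(response: str) -> str:
    """Keep only the paragraphs the classifier does not flag."""
    paragraphs = response.split("\n\n")
    return "\n\n".join(p for p in paragraphs if not looks_like_ad(p))
```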

      • tempest_ 8 minutes ago
        This is already how email works in the corporate world.

        A writes email with chatgpt to B.

        B sees big blob of text and summarizes email with chatgpt.

        Adding an LLM in the middle is just the next step.

        • torben-friis 2 minutes ago
          It's like one of those memes about the worst possible date picker, except for a communication system.
    • lmbbuchodi 1 hour ago
      you can block these URLs: ||bzrcdn.openai.com^, ||bzr.openai.com^. It won't blanket-block everything, but it will significantly reduce the telemetry collected.
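
      For setups without an adblocker, the same two domains (taken from the comment above) can be null-routed with hosts-file entries; whether this catches everything depends on how the client resolves DNS:

```
# /etc/hosts — null-route the reported ad/telemetry endpoints
0.0.0.0 bzrcdn.openai.com
0.0.0.0 bzr.openai.com
```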
    • TZubiri 14 minutes ago
      Blocking transparent ads is not a good idea. The consequence is that you will be fed opaque ads.
  • blackjack_ 9 minutes ago
    It is one of the eternal lessons: all tech business plans eventually lead to serving ads. At least until we ban pixels / third-party tracking.
  • benleejamin 48 minutes ago
    I'd always thought that ChatGPT ads would be indistinguishable from actual content.
    • ticulatedspline 8 minutes ago
      I think that's where they want to be. Feels like everyone knows it too: the long-term expectation is basically being able to buy ad words and have LLMs lean responses toward whatever people bought.

      Seems the playing field is a bit too open, though: models are more fungible than the companies would hope, so most of the current moat is brand-based, and it seems they're not ready to go all "Black Mirror" on us just yet.

    • irjustin 36 minutes ago
      This would be a breach of trust; it would work great short term, but long term it's too detrimental.

      The same thing could've been said for search results, so at least that part is still "safe".

      • SchemaLoad 19 minutes ago
        Long term all of the major LLM platforms will have invisible ads, influences, and propaganda woven into the content. The temptation will be irresistible for these companies.
      • bix6 34 minutes ago
        Oh, you think trust matters? This is capitalism, not trustism.
        • PradeetPatel 18 minutes ago
          Long-term retention is built on brand trust and usability; then the ensh*ttification happens.
        • nalekberov 17 minutes ago
          No, this is late stage capitalism without regulation.
    • senectus1 2 minutes ago
      I'm pretty sure that will be an eventual evolution of the product. The business model can't sustain itself as it is at the moment; eventually ChatGPT won't be the product... we the users will be.
  • Aurornis 35 minutes ago
    The ads are in the free tier and the new ad-supported $8/month plan.

    Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan, which warns you that it includes ads when you sign up.

    • ceejayoz 14 minutes ago
      Cable TV was once ad free. So was Netflix. Companies just can’t help themselves.
    • darepublic 28 minutes ago
      Wouldn't it require a lot of training to blend ads into the convo without it being too obvious / effing up the results?
  • infinite_spin 53 minutes ago
    I see OpenAI making a significantly larger amount from defense contracts than from advertisements pumped into chats. So I wonder whose bright idea it was to create a public perception risk.
    • peddling-brink 47 minutes ago
      Maybe the negative press from ads is better than the negative press from powering murderbots?
      • tayo42 32 minutes ago
        Bad press from a contract like that happens once and everyone forgets. Ads are in your face every time.
    • Larrikin 42 minutes ago
      Every single MBA can show that revenue is up for at least one quarter after they introduce ads. They do not care what happens after, as long as they can plan their career around it.
  • djmips 1 hour ago
    And it begins.
  • keyle 1 hour ago
    Can't wait for "watch this ad for 90s to use xxhigh on your next prompt!"
  • dankwizard 33 minutes ago
    Really well written, technical post. Good read.
  • vicchenai 1 hour ago
    Figured this was inevitable once they started the free tier. The attribution loop being a separate event stream is actually kind of clever engineering, though: it means they can A/B test ad formats without touching the core model response.
  • avaer 53 minutes ago
    Remember that ads are the "last resort" for OpenAI, and they're doing this despite the fact that it's "uniquely unsettling", according to Sam.

    Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.

    • Aurornis 34 minutes ago
      The ads are only for the free and $8/month plans. They basically added an ad-supported super discount level that you can ignore if you’re paying for the normal plans.
      • RussianCow 17 minutes ago
        But the fact that they've added an ad-supported tier this early into their life as a company means they're desperate for revenue. You start inserting ads when you're optimizing for profit, not when you're still growing. It took how long for Netflix to introduce an ad-supported plan?
        • milkshakes 13 minutes ago
          when did netflix offer a free tier?
  • mock-possum 23 minutes ago
    Not to me they don’t, cause I canceled my account and stopped using their products when they made the announcement.
  • singingtoday 1 hour ago
    I don't like anything about this.
  • BoredPositron 40 minutes ago
    I don't get what's wrong with charging for your product. Like, get rid of the free tier and make a small tier with an easy-to-serve model for like 5 bucks. Is it still the DAU rage of the 2010s that's driving the money burning?
    • teaearlgraycold 34 minutes ago
      How do you pick up new paying users without letting people use the service for free for a while first? Freemium is popular because it works well.
  • uriahlight 1 hour ago
    Let the enshittification commence!
  • gxs 1 hour ago
    This is gross

    It feels like we’ve been in the golden age and the window is coming to a close

    Let the enshittification begin, I guess

    • dannyw 59 minutes ago
      How do you expect the spend & COGS for free LLM inference to be funded? For users who don't want to pay, or maybe can't pay?
      • derektank 42 minutes ago
        Perhaps it’s a glib and easy thing to say, but after a teaser period, I would simply not offer free LLM inference. Agreeing to serve ads just completely re-aligns your interests away from providing the best possible user experience to something else entirely.
      • infinite_spin 49 minutes ago
        From things like defense/private contracts

        e.g. colleges pay for institutional subscriptions

        • 2ndorderthought 41 minutes ago
          The average person doesn't benefit from defense contracts ... Like ever.
          • IX-103 15 minutes ago
            The average person is slightly more female than male and has 2.1 children, but they do benefit from defense contracts since it makes up a small percentage of their salary.
    • iammrpayments 49 minutes ago
      It has begun ever since they nerfed chatgpt4 before releasing 4o
    • 2ndorderthought 1 hour ago
      In the past month, local models have been ramping up in a major way, while the name-brand providers have upped prices, gone offline randomly, and started doing slimier and slimier things.

      I really think the future is local compute, or at least self-hosted models.

      • SchemaLoad 1 hour ago
        The hosted ones still have the advantage of being able to search the internet for live info rather than being limited to a knowledge cut off date.
        • gbear605 1 hour ago
          I’m not sure why a model needs to be hosted in order to make network calls?
          • hansvm 1 hour ago
            Is there a library of good tools for LLMs to call? I have to imagine the bot-detection avoidance mechanisms are a major engineering effort and not likely to work out of the box with a simple harness and a random local LLM.
            • ossa-ma 1 hour ago
              Even the hosted ones are blocked from searching certain sites, for example Claude is banned from searching Reddit:

              `Error: "The following domains are not accessible to our user agent: ['reddit.com']."`

            • wyre 57 minutes ago
              Tavily, Exa, Firecrawl, Perplexity, and Linkup are all tools for agents to search the web.

              I've been building a harness over the past few months; it supports them all out of the box with an API key.
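
              As a sketch of how such a harness can stay provider-agnostic (the real Tavily/Exa/etc. clients and response shapes differ; every name here is made up):

```python
# Sketch: wrap any web-search provider behind one interface so an
# agent harness can swap Tavily, Exa, etc. without code changes.
# All names are illustrative, not a real client library.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SearchResult:
    title: str
    url: str
    snippet: str

# A provider is just a function from query to results.
Provider = Callable[[str], List[SearchResult]]

def make_search_tool(provider: Provider, max_results: int = 5):
    """Return a tool function the model can call."""
    def search(query: str) -> List[SearchResult]:
        return provider(query)[:max_results]
    return search

# Stub provider: lets you test the harness without an API key.
def fake_provider(query: str) -> List[SearchResult]:
    return [SearchResult(f"Result for {query}", "https://example.com", "...")]
```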

              • goosejuice 20 minutes ago
                Kagi also has an API. People who hate ads are probably the same folk that should be paying for Kagi. That's the sane alternative world where companies respect their users.
        • darepublic 1 hour ago
          Local ones that support tool use can do the same
        • eightysixfour 1 hour ago
          You can do that locally too!
      • CSMastermind 1 hour ago
        What's the rough equivalent of a local model? Are we talking GPT-4?
        • 2ndorderthought 47 minutes ago
          Qwen 3.6, which was released this month, is large but still on the smaller side. Supposedly it's at about Sonnet level when configured correctly, and it can run on commodity hardware without purchasing a data center. https://www.reddit.com/r/LocalLLaMA/comments/1so1533/qwen36_...

          Then there are mid-size ones, which require multiple GPUs; those are comparable to GPT's latest flagships.

          Then there is Kimi 2.6, a monster that is beating Opus in some benchmarks. https://www.reddit.com/r/LocalLLaMA/comments/1sr8p49/kimi_k2...

          It's basically whatever you can afford. Any trash-heap laptop can run code-autocomplete models locally, no problem. The rest require some level of investment: an idle gaming PC, or serious money.

        • Terretta 1 hour ago
          Depends on your VRAM or "unified" memory for how smart it is, and CPU/GPU for how quick it is.

          128GB of RAM? Sure, the early to mid 4s releases, except maybe 4o. And on an M5 Max, about the same speed.

          I wouldn't really bother under 64GB (meaning 32GB or less) except for entertainment value (chats, summaries, tasky read-only agent things).

        • kay_o 1 hour ago
          GLM 5.1 and DeepSeek 4 are acceptable, but the hardware and energy costs are high enough that, depending on your use case, you may as well just purchase tokens. They get useless and stupid rapidly if you quantize them enough to run on a single 16-24GB GPU.
    • rnxrx 1 hour ago
      The arc of the technological universe is short, but it bends toward enshittification.
  • jesse_dot_id 49 minutes ago
    That's cool, I'll never see them.