77 comments

  • ryandrake 1 day ago
    Not just Amazon, too. It feels like all of big tech (and some smaller firms) have simultaneously gone insane. Imagine if your CEO woke up one day and told the company: "We need to encourage travel spending. Please book as many business trips as you can, and spend as much money as possible. Fly first class to our satellite offices! Take limos instead of Ubers! Eat at fine restaurants! Make sure you are constantly traveling. In fact, we are going to make Travel Spending part of your annual performance review: If you don't spend enough on business travel, you'll get a low rating!"

    We are living in a totally bonkers time.

    • dtnewman 1 day ago
      This is what inspired me to build my new CLI tool, Burn, Baby, Burn (https://github.com/dtnewman/burn-baby-burn/tree/main).

      (If you are a VP at Amazon, yes, I'll consider acquisition offers. I'm also working on an enterprise version of this with additional features.)

      Show HN here: https://news.ycombinator.com/item?id=48151287
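
      The core gag is simple enough to sketch. A minimal, hypothetical burner loop (the names, the buzzword list, and the token heuristic are all made up, and the actual API call is stubbed out as a comment — nothing here is the tool's real code):

```python
import random

BUZZWORDS = ["synergy", "paradigm", "burn", "token", "alignment", "roadmap"]

def junk_prompt(n_words: int = 50) -> str:
    """Generate a meaningless prompt whose only job is to cost tokens."""
    return " ".join(random.choice(BUZZWORDS) for _ in range(n_words))

def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 1 token per 4 characters of English."""
    return max(1, len(text) // 4)

def burn(budget_tokens: int) -> int:
    """Fire junk prompts until the (estimated) token budget is spent."""
    spent = 0
    while spent < budget_tokens:
        prompt = junk_prompt()
        # client.chat.completions.create(...)  # <- where the real burn would go
        spent += estimate_tokens(prompt)
    return spent
```

      Point it at a metered endpoint, put it in a cron job, and watch your performance review improve.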

      • stephenhuey 1 day ago
        Just sent it to some developers who could really benefit from this! Please let us know when you have Codex and Gemini versions ready to rumble.
        • dtnewman 1 day ago
          Sorry, it will be a while. We're currently building out enterprise features like SSO/SAML support, role based burn access, and a carbon offset marketplace. As you can imagine, we're burning a lot of tokens to get these out, but actual productivity isn't up as much as you'd think.
          • ted_bunny 12 hours ago
            How about a built-in AI assistant to answer my burning questions?
        • sznio 1 day ago
          I want an in-browser Gemini version. For some reason my company doesn't count Gemini CLI use. I guess I'm supposed to copy code between my browser and my editor.
      • sph 1 day ago
      • dyauspitr 1 day ago
        Only problem with this is that the outcome metrics are still Jira story points. Burning a huge number of tokens while not improving velocity is going to get you fired.
        • recursive 1 day ago
          If we had a way of measuring velocity, we'd already be using that instead of tokens.
          • promano 1 day ago
            We had a way of measuring velocity, but who cares about estimating stories when we could be spinning up more agents? Burn a bunch of tokens and those stories will be DONE before you could even find your planning poker cards!
            • recursive 1 day ago
              I've lived through a bunch of initiatives about improving planning and estimation. None of them turned into a stable process that worked for anyone. I don't know if I can extrapolate from that, but it gives me an inclination that no one really trusts anything that comes out of task estimation. Which would be why we're looking for more objective metrics like token burn rate. No room for argument - tokens are tokens!
            • jimbokun 23 hours ago
              This but unironically.

              The speed of generating code is now faster than the time it takes to plan and estimate how long it will take to generate the code.

              • recursive 23 hours ago
                Generating more code faster might be useful, but there have to be some other constraints on it.

                Using this paradigm, we can achieve unlimited bugs sooner than ever before.

                1. To fix a bug, always add code, never remove.
                2. Whenever you fix one bug, always introduce at least two new ones.

                • joquarky 5 hours ago
                  This sounds like government software, in my experience.

                  I was brought on to one particular team to do cleanup and all I was given was band-aids to layer on top.

                  Odds are good your local or state government is running this software right now for managing its courtrooms.

          • dyauspitr 1 day ago
            What do you mean? You get story points for free with jira. That’s like the one metric every place uses.
            • recursive 1 day ago
              Story points are unicorn dust that crumbles under any attempt of serious optimization. The fundamental problem is that SP is not an objectively defined metric. If we come under serious pressure to improve velocity measured by SP, there's nothing to stop that initiative from trickling down into the SP estimation/measurement. SP works fine as long as you don't look too closely at it.
        • cgio 20 hours ago
          Next feature is creating stories. Double burn.
      • cyanydeez 1 day ago
        any plans for a distributed deployment via Cloudflare Workers? I'm not sure this thing is powerful enough for my use case.
        • dtnewman 1 day ago
          Yeah, lots of enterprise features in the works, but first I need to raise money at a $1B+ valuation (this might seem high for a project that started 4 hours ago, but it's actually very low for the project that will soon be the #1 consumer of tokens on the planet)
          • Esophagus4 1 day ago
            You got four hours of Claude Code usage without hitting a rate limit???
          • cyanydeez 1 day ago
            recommend you extrapolate your value based on the token spend rates of FAANG; if you can spend 10x FAANG, then you should get at least 10x the valuation. godspeed.
      • LikeBeans 1 day ago
        Brilliant
        • kurthr 1 day ago
          Like attack ships off the shoulder of Orion, the only way to burn!
      • jimbokun 23 hours ago
        This is hilarious and utter genius.
      • eudamoniac 1 day ago
        Won't the company audit the requests to AI and see you're sending a bunch of BS?
        • jimbokun 23 hours ago
          If only Scott Adams were alive to write Dilbert comics about this.
        • palmotea 17 hours ago
          > Won't the company audit the requests to AI and see you're sending a bunch of BS?

          Shouldn't be too hard to game. Version 2 uses the M365 MCP server to load up your email and iterate over all the messages, summarizing them over and over.
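
          That loop is easy to sketch. `fetch_messages` below is a hypothetical stand-in for whatever the mail connector would expose (not a real M365 MCP call), and the summarize step is stubbed — the point is the shape of the traffic in the audit logs, not the API:

```python
import hashlib

def fetch_messages() -> list[str]:
    """Hypothetical stand-in for pulling email via a mail connector."""
    return ["Re: Q3 roadmap", "Lunch?", "URGENT: timesheet reminder"]

def summarize(text: str) -> str:
    """Where a real burner would call the LLM; stubbed so this runs dry."""
    return f"summary:{hashlib.sha256(text.encode()).hexdigest()[:8]}"

def burn_on_email(passes: int) -> int:
    """Summarize every message, then re-summarize them, over and over."""
    calls = 0
    for _ in range(passes):
        for msg in fetch_messages():
            summarize(msg)  # plausible-looking requests for the audit trail
            calls += 1
    return calls
```

          Run a thousand passes over three messages and the dashboard sees 3000 perfectly legitimate-looking summarization requests.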

        • cleaning 15 hours ago
          Do you have an example of any company ever doing this?
          • eudamoniac 6 hours ago
            No, but if you run this injudiciously and burn 100x the next guy with no output, maybe they would want to know how
    • isk517 1 day ago
      I know someone who was told to try and use AI more on the job, so they created an agent to just burn tokens and ended up using about 10x what the next-highest employee used. Buddy expected to get shit but instead got an accolade and was asked to give a short talk to the other employees about how they could match their success.
      • darth_avocado 1 day ago
        In my first job ever, I used to get my work done on time and leave. There were a few people who’d stay in the office until late and show up on weekends. Same output, but they got the promotions and my bonus got prorated.

        This is the same thing.

        • j-bos 1 day ago
          At least this one doesn't require spending the manhours moving dung from pocket to pocket, now we finally get credit for automating it!
        • jazz9k 1 day ago
          While output may have been part of it. It's possible that by staying later (and working longer), they had better relationships with upper management.

          "I used to get my work done on time and leave"

          This sounds like you just wanted to get your work done and not foster any work relationships. This is fine, but you will not get promoted this way (as you've seen).

          Moving up in a company is 30% work and 70% networking/being likable/noticed.

          I stopped that nonsense years ago. I work for myself now as a consultant. If I work more, I get paid more.

          • Loughla 1 day ago
            I took a job with the state I live in recently because friends were promoted over competent employees (not even counting myself in that because they were just promoted to my level). New job is fully remote and has a clear path to advancement based on clear work based metrics.

            While it may be true that it's pretty standard, I'm convinced that any organization that relies more on face time and friendships than on actual skill is absolutely toxic.

            • jimbokun 23 hours ago
              It’s also a workplace whose success is in the past and has started the glide down into lower profits, revenue and wages.
          • darth_avocado 1 day ago
            You’re assuming a lot here. Getting your work done on time and leaving doesn’t equate to not being likable. If it was a popularity contest, I would’ve been around the same as the people who were pretend working, if not more. My partner and my director wrote me a recommendation letter before I left, which I wouldn’t attribute to something they’d do if I was a nobody.

            There are other reasons why the bad behavior gets rewarded. If the management is incompetent, they genuinely focus on the optics and not on the actual work. And if they are competent, they understand that the people who stay behind unnecessarily or come over the weekends are more exploitable in the long run. And if the people in management are the kind of people who stay behind unnecessarily, having a team full of people who do the same, rewards them as well.

          • reactordev 1 day ago
            Moving up is 100% being likeable.
            • mjr00 1 day ago
              Yes, with the caveat that the 30% work allocation counts toward likability. You can be friendly, charming, well-spoken, fun, etc., but if you fail to deliver and make work for other people, cause your coworkers frustration, and make your manager look bad, you're not going to move up. You will be able to coast for a while though, as managers have a hard time firing people they personally like.

              It's ultimately a combination. A pretty good software developer who is friendly and pleasant is, in most organizations, going to get promoted over the grumbling angry software developer who is brilliant but everyone hates talking to. A lot of this has to do with most work at more senior levels being communication.

              • reactordev 1 day ago
                Your last statement is exactly right. Communication matters most when you're dealing with cross-org concerns and those that master it are usually the more friendly and pleasant ones. This is something I wish more people understood. I even sometimes fall into the latter even though I strive for the former.
                • gajjanag 6 hours ago
                  > Communication matters most when you're dealing with cross-org concerns and those that master it are usually the more friendly and pleasant ones.

                  I don't agree with the second one, but agree with the first.

                  Throughout my corporate career so far, I have found plenty of hot air/pretty picture slide decks that exist solely for ladder climbers to climb. Said ladder climbers are usually all smiles in public and "friendly", but you have to watch out for knives behind your back.

                  • reactordev 3 hours ago
                    At that level it's all Hunger Games isn't it?
            • darth_avocado 1 day ago
              Likability helps you move up; competency keeps you there. The role of likability in moving up is overstated.
              • reactordev 1 day ago
                Really? Because I've met a lot of incompetent "leaders" that failed upwards because of their likeability.
          • watwut 1 day ago
            You just described bad management - the kind that favors butts in seats and rewards a lack of outside life over actual benefit to the company.
      • bonesss 1 day ago
        That’s the part I don’t get: Engineers are smart enough to ask an LLM to ask other LLMs to ask other LLMs to load the policy manual then count the R’s in “LLM fork bomb”.

        Additional story points completed per week, versus token-dollar spent, or some such combo would seem more sane.

        But maybe they aren’t really tracking productivity, so tracking tokens is all they have? … I dunno which part of that is dumber.

        • idle_zealot 1 day ago
          We never figured out how to track productivity anyway. Only macro-level success in achieving measurable goals. Any AI metric besides "are similar goals being met more quickly" is people encouraging specific behaviors decided a priori.
      • jimbokun 23 hours ago
        We need an Office Space 2 just about AI shenanigans.
        • andrekandre 21 hours ago
          office space 2... i need this injected into my veins right now lol
      • robotswantdata 1 day ago
        I believe it
      • dominotw 1 day ago
        i call BS on this story
        • mrgoldenbrown 1 day ago
          If you've never seen this level of perverse incentive, you have been lucky. The creation of and subsequent exploitation of them aren't new. For pre computer examples: https://freakonomics.com/podcast/the-cobra-effect-2/
          • runako 1 day ago
            I can't find the reference right now, but I remember reading literature about studies done at large programming organizations (like IBM, government) who used LOCs as a performance metric. Programmers could earn more money by including more lines of code in their work. This went exactly the way you'd expect.

            Edit: I think it may have been from Capers Jones's _Programming Productivity_[1]. Published in 1986, based on research covering the prior 30 years(!) or so. We have known that bad incentives specifically distort the performance of programming teams for a long time.

            1 - https://archive.org/details/programmingprodu0000jone/page/n1...

          • breppp 1 day ago
            The worst example I know is when the Belgians forced the Congolese to harvest more rubber by cutting off their hands if they hadn't met their quota, which spawned a cross-tribe trade in hands
            • wayeq 1 day ago
              > cross-tribe hands trading

              sounds like they had some cross cutting concerns

              </dad>

            • NicoJuicy 1 day ago
              Belgians had nothing to do with that, nor did the then government

              The king had a side biz

              • breppp 1 day ago
                Similar to the British in India, it was first controlled by some kind of company that benefitted the host country by extracting resources, and later on the host country took control. Belgium took control of Congo in 1908
          • phainopepla2 1 day ago
            While it is good story for illustrating perverse incentives, there is no good historical evidence that the cobra bounty program actually existed.
        • mrandish 1 day ago
          • aspensmonster 1 day ago
            >This article is about statistics and government policy. For Nazi analogies in internet discussions, see Godwin's law.
        • zeroonetwothree 1 day ago
          I have seen similar at my company so it is highly plausible.
        • bensyverson 1 day ago
          I call unintended consequences on this KPI culture
        • elictronic 1 day ago
          They polished the turd a bit in the telling, but the bones are real.
        • DANmode 1 day ago
          I don’t.

          Things that rhyme with this have indeed been happening at the biggest names.

        • re-thc 1 day ago
          I call AI on this comment
          • dominotw 1 day ago
            why?
            • pfdietz 1 day ago
              Imitating your own utter lack of explanation or evidence?
            • wetpaws 1 day ago
              [dead]
    • malfist 1 day ago
      At my company we were told AI spend was part of perf review and that the "singularity" had happened. Now 20% of our infrastructure spend is tokens. The average number of pull requests per dev per week increased with all this spend. From 4.2 to 5.1. And that includes a huge chunk of PRs that are just agents changing a line or two in a config. It's all magical thinking
      • spaniard89277 1 day ago
        Since you're treated as an idiot or fired if you point this out, just collect the money, man.

        It's their money. They want to do stupid things? So be it.

        • rightbyte 1 hour ago
          It is not their money though. It is the stock owners' so they don't care either.
      • mynameisash 1 day ago
        > The average number of pull requests per dev per week increased with all this spend. From 4.2 to 5.1.

        That's it? I've seen people who consistently put out four PRs per day. I don't/can't even code review them. So much of what we do is now just rubber-stamping PRs. We were even told that we shouldn't be writing code by hand anymore.

        • jimbokun 23 hours ago
          My main problem putting out that many PRs per day is getting them approved and merged back into main so I can start the next one.

          I guess “stacked” PRs are a thing now? I haven’t figured out the process that avoids making the merges for stacked PRs a complete mess, though.

      • Sharlin 1 day ago
        Wow, the Singularity happened and nobody bothered to tell me about it?! Vernor Vinge and I.J. Good must be rolling in their graves fast enough to rip a hole in spacetime. Allow me to coin a term for this: Singflation.
        • lapetitejort 1 day ago
          Cory Doctorow recently published a history book on the topic [0]. Sorry you were left out. I am merely qubits floating in the void. Just finished reading Shakespeare's First Folio translated into Catalan for the tenth time. Wondering what to do next

          [0]: https://en.wikipedia.org/wiki/The_Rapture_of_the_Nerds

      • treis 1 day ago
        It's definitely not. It's a fundamental shift on how we interact with computers.

        It's a tractors on farms kind of moment.

        • malfist 1 day ago
          I brought data to this discussion. What did you bring?
          • treis 1 day ago
            Your data shows a 20% improvement. That's $20-100k a year depending on how much devs are paid.
            • baobabKoodaa 1 day ago
              You just compared this AI shift to "tractors on farms". Did tractors increase farming output by 20%?
              • treis 1 day ago
                The first tractors in 1911 or whatever probably did. 50 years on and it was many times that.
                • rsynnott 3 hours ago
                  No, at least not if you're talking about tractors broadly. The first practical traction engines (~1860) were pretty much instantly revolutionary.

                  I suppose you could make the claim that the first things commonly called tractors (which were petrol-powered) might not have increased productivity very much, but "tractors on farms" should surely be read as the revolutionary moment, not the change in dominant fuel type 50 years later.

                  Also, it's likely not safe to read "20% increase in PRs" as "20% productivity improvement".

            • chadgpt3 18 hours ago
              You don't get paid 20% more for achieving 20% more achievement, that's for sure.
            • watwut 1 day ago
              They don't show that. They show 20% more PRs. That is not the same as 20% more productivity.
          • dyauspitr 1 day ago
            [flagged]
            • s1ngular1ties 18 hours ago
              100%. My engineering team's velocity (and I mean this in terms of bug-free, valuable, needle-moving features shipped) has gone up immensely. TBF it was already a very talented senior group of people, but having that kind of group embrace the tooling for what it is has made a massive difference.
            • AlexeyBelov 14 hours ago
              Is this satire?
              • s1ngular1ties 5 hours ago
                Sorry, I can’t tell if you’re replying to me or to the parent - HN interface is hard to figure out…maybe the parent since it was flagged? - but I assure you that my comment wasn’t satirical in the slightest.
        • bigstrat2003 15 hours ago
          It's not remotely comparable to tractors. Tractors actually do their job correctly and consistently.
        • rightbyte 1 day ago
          Bad analogy. Horses were the automagic being replaced.
        • corywadd 1 day ago
          Agreed, people confuse the (totally expected) bumps and bruises of early adoption with somehow equating to "this technology is useless."

          The Wright Brothers couldn't cross the Atlantic in their first flier and plenty of subsequent designs crashed and burned (literally). But now air travel is commonplace. Same will happen with AI, we just have to get past these early pains.

    • bluGill 1 day ago
      My dad worked at a company that had their own travel agency (early 90s when you needed a travel agent for reasons that no longer apply), and he was often booked on the more expensive flight because the travel agency made more money. More than once he could have got first class for less on a different flight but company policy didn't allow him to fly first class.

      We have always been living in bonkers time.

      • glenngillen 1 day ago
        Most big companies still have travel agencies/companies manage their corporate travel. I can’t remember who we used when I was at Amazon, but I made a similar complaint to my manager once given I could fly cheaper in a higher class on a different airline (also one I had heaps of points with so I would have preferred it because I’d be able to upgrade further and/or use the lounge).

        Turns out the price I saw in the booking portal isn’t actually what Amazon paid. It’s kinda more like a rack rate listing. But then there’s all kinds of discounting/cash back that happens on the backend based on the amount of travel booked each month.

      • jimbokun 23 hours ago
        We have always been living in the Dilbert and/or Office Space universe.
      • badc0ffee 1 day ago
        I worked at a tech company in the early 2010s that had its own travel agency.
      • varispeed 1 day ago
        I used to know someone whose parent worked at travel agency (also 90s) and their whole immediate family could book trips wherever, but only economy class.
    • autoexec 1 day ago
      > It feels like all of big tech (and some smaller firms) have simultaneously gone insane.

      Some companies might just have been scammed by the marketing that told them that AI would make all their employees 10,000x more productive and save them billions and when that didn't happen the assumption was that it's because employees weren't using the magical AI as often as they should be.

      Other companies, especially those working on their own AI products, might want employees to use AI as much as possible because they hope it will provide them with the training data they'll need to eventually replace most or all of those employees with the AI. Punishing workers who refuse to train their AI replacement might make sense to them because even though it's costly right now they expect the savings down the road to be much much greater.

    • lbrito 1 day ago
      Exactly this.

      And the fact that it is an industry-wide meme at this point makes bright red flashing lights and klaxons go off on my mind that a catastrophic reckoning can't be too far. There's not enough money in the world to keep this up for too long.

    • andrethegiant 1 day ago
      Bragging about token usage is like bragging about LoC written.
      • syntheticnature 1 day ago
        When I was at Amazon last year, the bragging (from the AI poo-bah in my section of Amazon, note) about AI included "look at the total line count of commits from the heaviest AI users!"

        So if AI screws something up and re-writes it and then screws it up again, needing another re-write, that counted as more positive than if it was done correctly, and simply, the first time.

        • jimbokun 23 hours ago
          This is like when the Pointy Haired Boss offers a bounty for fixing bugs and Wally pumps his fist and says “I’m gonna go code myself a Porsche!”
          • iugtmkbdfil834 19 hours ago
            It is almost as if Dilbert was a documentary.
      • zeroonetwothree 1 day ago
        It’s honestly 10x worse than LOC. At least in the human era LOC had correlation to shipping features.

        It’s more like bragging about compiler cycles spent.

        • 0xy 1 day ago
          I don't know where you're working but LLM enhanced development has skyrocketed our rate of feature development. As an example, a project roadmapped to take 7 months was delivered in only 4.5 because of CC/Codex.

          I'm confused how anyone could believe it isn't an enhancer, unless they have refused to use any of the technologies.

          • s1ngular1ties 18 hours ago
            Yeah I’ve experienced much the same as you. Like it’s overwhelmingly clear from everything it’s enabled for us that we’re going far, far faster than we ever have, and the guardrails we have in play have helped guard the architecture and make it even harder to commit a bad PR. Sometimes in reading these comments I’m left wondering what sorts of experiences people are having elsewhere that’s left them this soured on its usage in business.
            • ryandrake 8 hours ago
              Everyone is replying but nobody is reading.

              Moving faster doesn’t necessarily mean delivering business value faster. You may be moving faster in the wrong direction.

              More code doesn’t necessarily mean delivering more business value. You’re piling on debt and if that debt is growing faster than the value of the code, you’re actually losing.

              And even if you somehow are delivering more, better code, faster, and without building technical debt: writing code is somewhere around 1-5% of the actual work and time that it takes to deliver a software product. At least in all the places I’ve ever worked. You are optimizing the wrong thing.

              • s1ngular1ties 5 hours ago
                You might want to read my earlier comment in the overall thread re: delivering business value and moving the needle features while still building with solid architectural principles. As I mentioned above, the AI stuff - with the rails I’ve put in to guide it - has actually lessened tech debt introduced, not made it worse.

                I’m well aware of what you’re saying, but we are definitively moving faster in the correct direction. If this hasn’t been the case for where you’ve worked, my sympathies!

          • morkalork 1 day ago
            You're measuring success with time to delivery, that's a reasonable metric. Same with volume of features shipped. Also good. LoC or tokens burned... not so much.
          • AlexeyBelov 14 hours ago
            I can confirm it's an enhancement in writing code specifically. We've been actively using CC in our company for more than 6 months already.

          Notably, the product itself isn't really better for users. And almost everything else apart from coding now takes up the bigger share of the time. So as devs we could either just fuck around and refactor endlessly, or chill out and "complete the sprint in 30% of the time". It was known for a long time that churning out code is not the bottleneck.

      • andai 1 day ago
        Obligatory:

        Negative 2000 Lines of Code

        https://news.ycombinator.com/item?id=44381252

        • syntheticnature 1 day ago
          Versus my sibling comment to yours, I actually sent that to some internal folks after the bit about AI+total lines committed was said.
          • dijksterhuis 1 day ago
            was there any kind of response or reaction to that? it’s something i would have done and probably wouldn’t have gone well. xD
            • syntheticnature 1 day ago
              I didn't send it very high up the chain (and was looking for a job at the time anyhow) but mostly got back snickers from peers and an "I know, but this is a directive from above"
          • s1ngular1ties 18 hours ago
            I’m surprised that lines removed isn’t something your bosses at the time were also advocating for, TBH. I don’t blame you for looking around.
        • jimbokun 23 hours ago
          Timeless classic.
    • xp84 1 day ago
      Even as a very happy NVDA shareholder I agree with you. It's comical that managers are being so naïve as to think that you can crap out a dashboard of "tokens consumed per week" and get any useful signal at all from it, beyond learning who's not using AI.

      Incompetent use of a coding agent, or just general shenanigans, can burn tokens all day but it's not going to get tickets done.

      Just looking at the work output - how many story points, tickets, how many new bugs are opened, etc. has not become any less relevant a metric for productivity with AI. If you're a skilled and proper user of AI those numbers would be changing in the right direction, compared to before you had it.

      • autoexec 1 day ago
        > It's comical that managers are being so naïve as to think that you can crap out a dashboard of "tokens consumed per week" and get any useful signal at all from it, beyond learning who's not using AI.

        If some guy decides to spend a bunch of money bringing AI tools into the company things might get very uncomfortable for him if they're seeing zero return on that investment. He's sure not going to get recognition and a massive bonus for it. If on the other hand, he can put some numbers in a spreadsheet or powerpoint showing that employees are using AI all the time and profits are up again this quarter, maybe he can take some credit for that or at least keep his boss or the company's shareholders from questioning the wisdom of dumping so much cash into those AI products.

        • andrekandre 20 hours ago

            > things might get very uncomfortable for him if they're seeing zero return on that investment. 
          
            > If on the other hand, he can put some numbers in a spreadsheet or powerpoint showing that employees are using AI all the time and profits are up again this quarter
          
          thats exactly what i see first-hand; no actual measure of dollars in vs dollars out, just x number of employees are generating y number of pr's with z% ai code + this quarter we made a profit = ai productivity boost...

          total brainrot

          • s1ngular1ties 18 hours ago
            > just x number of employees are generating y number of pr's with z% ai code + this quarter we made a profit = ai productivity boost...

            TBH, if x, y, and z led to your company making an increase in profits over the quarter (I'm not saying it did, just that your recounting of it implies the connection was made), then it pretty much did its job. The last one is a more than meaningful metric for bosses, companies, and shareholders =)

            • autoexec 17 hours ago
              The point is that kind of lazy accounting isn't really showing that the profit increase wouldn't have happened without AI. It may even be the case that profits would have been better without the cost and use of AI, or even that by generating a bunch of AI slop they're accumulating costly technical debt they'll eventually have to dig themselves out of.

              We've already seen companies admitting that they aren't seeing a return on their investments in AI. We've also seen companies trying to convince us (or themselves) that ROI isn't important, and people should just stop worrying about it and that maybe in 5-10 years they'll start to reap some benefits. Some of the efforts people make to justify the costly investment they've made in AI can start looking a little desperate. All this token burning and tokenmaxxing might be another sign of that.

      • svachalek 1 day ago
        All those numbers are equally gameable and terrible metrics for productivity. With any of those, as with AI spending, you've got to look at actual results qualitatively. There's no shortcut.
        • jimbokun 23 hours ago
          The eternal evergreen lesson of managing software developers, and knowledge workers more generally.
    • 01284a7e 1 day ago
      I think a lot of these execs have equity in Anthropic... and the dumb ones that don't are just "keeping up with the Joneses" so to speak.
    • dehrmann 1 day ago
      It's more like "We really value face-to-face interaction, so we're going to track that with your total travel spend. We don't want to get in the way, so there's no budget."
    • fooker 1 day ago
      This would be hilarious if a bunch of companies did not already do exactly this with exec travel. And academics do this all the time when travel has to be funded from grants.

      One reason it works out like that for travel funding is that it’s often the ‘use it or lose it’ kind of funding. If you do not use all of the funds allotted, you can’t ask for more and could realistically get less.

    • xnx 1 day ago
      Good time to be a sane company then. "Never interrupt your opponent while he is in the middle of making a mistake." and all.
      • uuyy 1 day ago
        awesome phrase
    • proteal 22 hours ago
      What if instead the manager was saying: “hey team I need you to all buy as many lotto tickets as possible!”

      I feel like that’s a better analogy. Some charlatans are buying fake tickets, but as a manager who wants to win big, I’m ok with some chicanery so long as the average person is trying to honestly meet my directive.

    • osigurdson 1 day ago
      It seems like a natural result. People have been trying to use dashboards / metrics to roll up / indicate how well teams and individuals have been doing for a long time. Therefore, "part 1" was already in place. Now, something even easier to track is available (token usage). So, just throw token usage on the dashboard and tell people that higher is better - what other outcome would you possibly expect?
      • 12_throw_away 1 day ago
        > dashboards / metrics to roll up / indicate how well teams and individuals have been doing for a long time

        I'm actually a little curious about how long it has been. Bad managers have always prioritized irrelevant metrics, of course, but I have a feeling (backed by no data, just vibes) that management in general crossed a point of no return as soon as "data-driven" became a cross-industry buzzword.

        Like, I vaguely remember a time when consumer interactions didn't always come with a request to fill out a survey (with the results getting turned into a number and fed into a dashboard somewhere). And then that changed, and now everything must be turned into a number and that number must go up.

        • osigurdson 1 day ago
          "Data driven" essentially means "scalar driven". There is nothing wrong with it if your chosen scalar is a proxy for anything that matters. Of course, usually no one can explain this mapping.
    • overgard 1 day ago
      Look it might seem silly, but the point is to get all our employees to be travel-pilled. They just don't know how great travel is yet.
    • recursivecaveat 1 day ago
      Management has confided in me that token usage is a secret performance metric. At the same time I'm getting emails from infrastructure people about prompting techniques to get LLMs to speak more concisely to save the company money lmao. I'd prefer a video essay mode that bulks everything up.

      Two years ago everyone would have told you that 'impact' was the way to measure people, and been aghast at tracking inputs like hours. Say what you will, but at least showing up at 8 didn't cost the company money. Today I see people spending time and money vibe coding tools in search of a problem, just to spend tokens and demonstrate that they're on board with the singularity.

    • dkarl 1 day ago
      I kind of get what they're thinking in trying to make sure all engineers use AI. For myself, and for the engineers working with me, I saw everyone go through an initial aversion and resistance to AI, and then an instant productivity boost once they started using it. So there's definitely a good reason to get everybody started. You don't want a good engineer resisting AI indefinitely if you know it will make them more productive.

      Incentivizing people who are already using AI to use as many tokens as possible does seem a little crazy, though.

      • swatcoder 1 day ago
        It's worth reflecting on why it's so hard to convince holdouts to discover how AI might help them. The fundamental issue is that there aren't many convincing demonstrations holdouts can relate to, and there remains basically no evidence of real value gained.

        Users attest to higher productivity and point to material but intermediate factors like token use, generated lines of code, pr counts, etc, but there doesn't seem to be a convincing revolution in the quantity or quality of mature software being delivered.

        Combine those puzzling impressions of outcomes with a sense, for many, that they don't have a personal problem that warrants a new tool, and you end up with a pretty earnest and defensible indifference.

        To get holdout engineers using AI, the industry needs to focus on demonstrating relatable workflow improvements and practical improvements to finished work product. Instead, policies like token-use incentives just rely on luring them into pulling the slot machine handle, with the expectation that once they do, they'll join the cadre of other converts who justify their transition with subjective improvements and intermediate metrics.

        • spatley 1 day ago
          Software engineering organizations have agreed for decades that a meaningful measure of developer productivity is a literal impossibility.

          So now introduce AI and tell every developer that they need to be 20% more effective. 20% of what?

        • dkarl 1 day ago
          Unfortunately, a convincing demonstration to convince a skeptical colleague would require measuring developer productivity.

          Among skeptics, I've only seen people won over by using it themselves, because when they use AI for their own work, they invest the time to review the code, understand it, and assess its quality by their own standards. That's how people learn to trust AI coding assistance.

          • recursive 1 day ago
            Perhaps amusingly, I think I actually trusted it more before I started using it. Specifically because of my assessment of its quality, including things like factual correctness.
          • parineum 18 hours ago
            There's only really one measure, ultimately, and that's profits. Are these heavy AI companies raking in 10x profits with their 10x'd developers?
            • ryandrake 8 hours ago
              THIS is exactly what I want to see. Show me 10 randomly selected “All in on AI” companies that are running circles around a control group of 10 randomly selected similar but traditional software development companies. Show the AI-using companies being measurably more profitable.
        • crabbone 1 day ago
          Here's one selling point, from an experience I'm living through right now:

          Others will use AI, and it will make your life miserable. You need to know enough about AI to be able to fight back.

          The experience: one employee, self-selected, assigned themselves the task of configuring integration with a MySQL HA deployment. They produced a mountain of code in a short month (we are talking close to a hundred thousand lines of Python). And they decided to go with Oracle's tools instead of Galera...

          Everything this employee produces is, quite obviously, AI-generated. In the initial stages, they also worked on the project completely alone: no reviews. To give some sense of the size of this insanity: one of the configuration scripts I'm working with now is 9K+ lines of Python that's supposed to run from `mysqlsh`. About half of it is module-level variables.

          It will take many months to restructure this "prototype" by hand. It's a pain to read and to navigate; the GitLab UI has perceivable lag just trying to display the script, forget about diffs. I will absolutely need AI to try to make sense of it (I'm not allowed to fix it). And if it ever comes to fixing it, I can't imagine that being done without automation of some sort.

          Unfortunately, AI generates problems that, sometimes, only AI can fix. :(

        • ijidak 1 day ago
          > It's worth reflecting on why it's so hard to convince hold outs to discover how AI might help them

          I have. My conclusion is... humans are deeply irrational when it comes to rapid change.

          Egg or olive oil prices spike, and humans oust an entire government.

          The rate of immigration spikes, and humans throw migrants into camps and break useful treaties.

          Most of the resistance I've observed amongst engineers is resistance to change generally.

          And then digging in when challenged.

          • autoexec 1 day ago
            > Most of the resistance I've observed amongst engineers is resistance to change generally.

            Most engineers I've known are enthusiastic when given the opportunity to play around with a new toy. What they don't like is anything being forced on them. There's nothing irrational about that. They've often invested a lot of time into optimizing their workflows.

            I've also found that if something actually makes their work easier, you will never have to twist their arm to make them use it. They'll apply it everywhere it helps. They'll even try using it in places and in ways it was never intended for. If they're digging in, you likely haven't made a very compelling case for your changes.

            • andrekandre 20 hours ago

                > What they don't like is anything being forced on them
              
              raises hand (n=1), i'm fine to use it when i need it, but the derangement by management about it is a total put-off and unnecessary (and in the end counter-productive)
            • AlexeyBelov 13 hours ago
              Yeah. Nobody mandated Jetbrains products, almost every developer I know decided for themselves. Actually, it was the opposite: I remember asking a company I once worked for to buy me a license. Took 6 months for them to finally agree. Now my monthly allotment of tokens is way bigger than the price of that license, and it was given freely.
              • ryandrake 8 hours ago
                Yep, if a toy was that good, then everyone would be begging their managers to use it, not the other way around.

                Nobody ever had to force me to use keyword coloring, code formatters, source control, or a bug tracker.

            • paradox460 20 hours ago
              Exactly this

              >Here's a new editor we made

              Cool, looks interesting, I'd love to use it more

              >We're forcing you to use it

              I hate it

          • dingaling 1 day ago
            > resistance to change generally

            Nah, software engineers were always butterflies fluttering from one language or framework to the Next Hot Thing. Change was part of the job, if you didn't keep up you fell behind and atrophied.

            Resistance to AI is, I think, more because it is seen as an existential threat, or because it's something whose ultimate long-term outcome is still undefined. It's going to be either a benefit or a hazard, and we don't yet know whether we'll need Bladerunners to rein it in.

            • bigstrat2003 15 hours ago
              Resistance to AI is because it doesn't work. It has nothing to do with job security. It's a tech with nothing but hype, no substance at all.
            • parineum 18 hours ago
              I think I can offer an alternate explanation, and it jibes with your first point.

              When I use AI to completely write code for me (not using it as a powerful auto complete), it's not fun. I don't learn anything. It takes everything I love about software development and makes it just like any other job.

              I'm also never happy with the result and, when I go back to make it work the way I want it to, I have to learn a new code base that isn't built the way I would have. If that happens to a project I'm working on as a hobby, I find it incredibly unmotivating.

              It turns my intellectual pursuit into an assembly line and I hate that.

      • com2kid 1 day ago
        There is a limit somewhere, but I keep finding more and more ways to use AI.

        Not just coding, but things like "here is my team's mandate; go through all my company's Slack channels, Linear tasks, Notion pages, and recent merges in git, and summarize any work other teams are doing that intersects with my team's work."

        That'll burn a lot of tokens.

        Set that up to run once or twice a week and give a report.
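
        The "run it on a schedule" part, at least, is mundane plumbing. A crontab sketch, where the script name and log path are hypothetical placeholders for whatever wraps the prompt above:

        ```
        # Hypothetical crontab entry: run the cross-team summary Monday and
        # Thursday at 07:00 and append the report where the team can find it.
        0 7 * * 1,4  /usr/local/bin/team-intersections.sh >> /var/log/team-report.log 2>&1
        ```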

        • thinkharderdev 1 day ago
          Sure, finding ways to burn tokens is not hard. Even finding ways to burn tokens on things (like your example) which are actually useful is not hard. But what is the ROI on that from the company's perspective? I mean, you could also have hired an intern to collate this report every week. But if you went to your boss and asked to hire someone, they would, reasonably, ask what the value of that thing is and whether it justifies more headcount. Instead we're in this bizarro world where the bosses are basically saying "go hire more people, even if you don't have specific high-value things for them to do. Just create make-work jobs for them!" It's wild.
      • recursive 1 day ago
        I've been using it for many months. I still haven't gotten any kind of boost. If I'm going to get ranked on token use though, best believe I'll be using the optimal quantity of tokens.
      • jimbokun 22 hours ago
        Yeah, management should just make clear they don’t want to see AI use of zero in a given week, not put “the more tokens consumed the better” on performance reviews.
      • bigstrat2003 15 hours ago
        People are not actually more productive with LLMs, they are less productive. The data has shown this. So there's no reason to push people into using them - it's all just hype and magical thinking.
    • 866-RON-0-FEZ 1 day ago
      > Imagine if your CEO woke up one day and told the company: "We need to encourage travel spending. Please book as many business trips as you can, and spend as much money as possible.

      I had a manager like this once. He didn't last very long, but it was without a doubt the most fun six months of my career.

    • Skidaddle 20 hours ago
      It might be an ROI calculation, e.g. some people will waste tokens, but if it means someone else feels empowered to make something awesome or impactful, it will have been worth it.
    • naveen99 11 hours ago
      It's more like you've purchased unlimited monthly travel for your employees at a fixed cost of $20, and you want them to maximize travel since you've already paid the $20.
    • almost_usual 1 day ago
      It’s preposterous, companies are blindly funding slop and the product is fool’s gold.
      • brewdad 1 day ago
        It's the state of modern capitalism. Money must flow from one entity to another even if nothing of tangible value is produced. The flows of money prove the growth of both businesses.
        • uuyy 1 day ago
          If I spend money on tokens but my revenue doesn't increase, nor do I get any operating efficiency gains, where is my growth then, buddy boyo?

          The growth in revenue (earnings being negative for them) only shows up for the model producer.

        • sznio 1 day ago
          what kind of metric should we use to filter out that bullshit?

          dollarhours?

    • jmyeet 1 day ago
      You mean like using lines of code as a metric to rank engineers [1]?

      Managers love metrics. Bad managers particularly love metrics. Tokens used was almost the obvious bad metric that was going to be used.

      I would argue that tokens used has actually exposed a useful metric: any manager who focused on this, demanded this or ranked based on this should be fired, for being a bad manager.

      [1]: https://evan-soohoo.medium.com/did-elon-musk-really-fire-peo...

      • malfist 1 day ago
        In many, many, many cases it's not the manager choosing to do that. It's our brilliant job-creator class demanding that he does.
        • jmyeet 1 day ago
          Bad manager: "I have to give you a bad rating because of the company-wide LoC metric."

          Good manager (to good engineer): "can you please churn some code to update your LoC metric so I don't have to give you a worse rating?"

          I'm sorry but any manager who just claims they're a passive victim of company-wide mandates is a lazy and bad manager.

      • xp84 1 day ago
        LoC can occasionally give you signal. For instance, imagine you are joining a new team or company so you don't know how much oversight your predecessor did. If you ask an engineer how they spend most of their time and they say "Mostly just writing code" and you look at GitHub and it says they've made 3 minor commits in the past quarter, that person is lying and your predecessor was incompetent (quite possibly both of them have been MIA from their responsibilities for months).

        No, I'm not talking about the engineer who can point to significant contributions outside of code: writing technical specs, leading architecture discussions, etc. I'm talking about the ones who say they're just coding, but are actually not working at all.

        TL;DR LoC and commit count etc can be used only to flag for review likely cases of quiet quitting.
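
        For what it's worth, the commit-count half of that signal needs no dashboard; git produces it directly. A sketch (it builds a throwaway repo purely so it is self-contained; in practice you'd run just the last command inside the repo you care about):

        ```shell
        set -e
        # Disposable repo with two commits so the command below has
        # something to report on.
        demo=$(mktemp -d) && cd "$demo"
        git init -q
        git config user.email dev@example.com
        git config user.name Dev
        echo hello > f.txt && git add f.txt && git commit -qm "first"
        echo again >> f.txt && git commit -aqm "second"

        # Commits per author over the last quarter. A near-zero count is a
        # prompt for a conversation, not a verdict on its own.
        git shortlog -sn --since="3 months ago" HEAD
        ```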

    • crabbone 1 day ago
      You'd be surprised...

      I worked for an international (mothership in the UK, later acquired by the US) company, which had... sort of a similar policy.

      So, the (mothership) company acquired a lot of satellite companies, all in the banking business, all over the world. Then they figured out their CEO was corrupt; he got in trouble with the law and was kicked out. While they were waiting for the new "real" CEO to step in, they let an "interim" CEO take his place.

      The new (interim) CEO didn't seem to have a clue about the business she was supposed to run, nor did she care. She knew her time was running out, and she figured she'd spend it traveling the world and partaking in fine dining in every corner of the world the company's tentacles could reach. But, to make it seem more plausible, she created, sort of, an "experience exchange" policy, which sent random troupes of select individuals from different branches of the company to "exchange experience" with other, similarly randomly assembled troupes. Of course, the company picked up the bill for lodging and dining.

      Our inconsequential branch in Israel saw a pilgrimage of high-ranking banking managers from all over the world, though mostly from its wealthier parts. Some didn't even bother to show up at the office, and proceeded straight to the banquet hall of the most expensive hotel on the Tel Aviv beach.

      To be fair, the interim CEO got the boot even before her term was supposed to end, but it was serendipitously close to the acquisition by the US company, and so she was let go as part of a "restructuring" and "optimization"... but it was a crazy year!

    • 2muchcoffeeman 18 hours ago
      Spending is just a proxy for AI use here. This is nothing new. I remember past CEOs saying “Ajax! Ajax! Ajax!”, “Big data! Big data! Why aren’t we using Big data!”

      AI is just the next tool to over spend on in poor ways, realise it’s shit and spend a ton more money trying to roll it back.

      The situations where it shines will continue to use it when the hype dies down.

      • DanielHB 13 hours ago
        Oh wow, I totally forgot that "Ajax! Ajax! Ajax!" was a thing back then. That was at the beginning of my career, and it was just as baffling hearing executives call for such an under-the-hood tech choice without understanding how it actually works.

        The "Big data!" calls did make more sense coming from executives; it was still often dumb, but it was a lot easier for them to understand the results. However, most companies would have been better off waiting 5-10 years before jumping in, as a lot of money was wasted on processes and tools that are completely outdated today.

    • zombot 6 hours ago
      If you view AI as a religion, it starts to make sense: "There is this new god now, and if we don't worship properly, we will get eaten."
    • kingleopold 1 day ago
      Because it's come to CFOs as "free debt", aka fiat printing. They need to spend this free fiat to keep the bubble going. I'm sure some investment banking team internally assured them too. Trillion-dollar institutions have access to the free printer now; you and I don't. This has been a different world since the unlimited printer started in 2020. All debt math is fake now, because they can create fiat money out of nothing, literally.
    • AlexandrB 1 day ago
      I wonder where in business school they teach you to "measure inputs and try to maximize them", because that's basically what's happening.
    • OtomotO 1 day ago
      The most important part being:

      "Because we FEEL this will make you more productive and we will make more money!"

      No evidence but more Lines of Code...

    • akomtu 1 day ago
      IMO, the investors behind AI are playing the Uber game: they subsidise the AI costs and inject it into all facets of society they can get their hands on. They can tell the execs to increase AI usage at any cost. Their bet is that we'll become AI addicts with atrophied brains before they run out of money.

      Also, don't forget that their datacenters will burn our electricity and boil our rivers at rates much cheaper than what we are billed in our homes. So while you're happy generating mountains of AI slop, somewhere there is a datacenter boiling a river.

      I'd compare this to a new patented formula for water that nobody asked for, with the patent owners trying to replace the whole water supply with their crap before we wake up.

      • vharuck 1 day ago
        No need to invoke a hypothetical water example, just look to how Nestlé pushed baby formula in developing countries¹:

        >For example, IBFAN claims that Nestlé distributes free formula samples to hospitals and maternity wards; after leaving the hospital, the formula is no longer free, but because the supplementation has interfered with lactation, the family must continue to buy the formula.

        1: https://en.wikipedia.org/wiki/1977_Nestl%C3%A9_boycott

      • krupan 1 day ago
        But Brawndo's got what plants crave. It's got electrolytes.
    • ljsprague 1 day ago
      I've definitely been in situations where managers tell me to "spend X amount before the end of the year." They don't want higher ups to think they can cut our budget.
    • twa927 1 day ago
      It's as if a class-based society materialized within IT. And the manager class collectively pushes the narrative of AI replacing ICs.

      Note that it has beaten capitalism: making rational choices to increase earnings has lost to this AI dream.

      • chadgpt3 18 hours ago
        Note that propagandizing managers was a rational choice by AI companies to increase the revenue of AI companies.
    • MrBuddyCasino 1 day ago
      People think we’re living in oh-so-capitalist times; why then does everything smell Soviet?
      • sph 1 day ago
        Horseshoe theory
    • eudamoniac 1 day ago
      I'm making sure to use the most expensive model possible for the stupidest shit constantly. They asked for it!
    • dyauspitr 1 day ago
      Nonsense. It’s a bit of a loss leader so devs get hooked on it and it’s considered incredibly unproductive to work without one. Then they will just have 10 people’s jobs replaced with one guy.
    • treis 1 day ago
      If we suddenly went from rail travel to jets that's exactly what would happen. We'd go from 0 to all the business flights that happen today. Everyone would be under enormous pressure to not be a laggard.
      • cucumber3732842 1 day ago
        I watched a former intelligence agency person get interviewed on a YouTube talk show, and (tangential to the policy subject being discussed) they said that's basically how it was after 9/11: they couldn't onboard people fast enough to figure out how to spend the money, so in the meantime they flew first class halfway around the world to waterboard people with bottled water. The people authorizing it didn't care. They were spending X to fight terrorism. The public was never gonna see the nitty-gritty breakdown.

        That's basically how it seems to be with AI. Just replace "spent X fighting terrorism" with "spent X implementing AI workflows" or "invested X in AI" or whatever. Nobody actually knows or cares just how far the dollars are going.

        • saltcured 1 day ago
          I think this version is getting very close to The Emperor's New Clothes Subscription in terms of how transparently the leadership are displaying their delusions.
  • MrCharismatist 1 day ago
    Like six months ago we got a presentation from an AWS guy on the AI tooling available and how it fit with our particular use cases.

    At one point seemingly out of nowhere he pointed out on his screen share "Look at how many tokens I've used this month. I run so much Opus." It was a number that was offensively large.

    I remember thinking "That's a really odd flex, this crap is so expensive the fact that you use so much should be a red flag"

    He demonstrated a number of Claude Code use cases he had to manage and tweak AWS infrastructure that made me, the old greybeard sysadmin older than the internet think "You've used AI to do something that was a single command."

    So this story makes sense. They were being encouraged to just blast away at it six plus months ago.

    • funimpoded 1 day ago
      I notice a lot of Cursor's suggestions are just stuff a linter should auto-fix.

      But if you hit "tab" it'll claim that as an AI-edited line, LOL.

      (A lot of the rest of it is stuff I could already have been doing just as fast if I'd ever bothered to learn multiple cursors, vim navigation, or macros. I never did, because my getting-code-on-the-screen speed without them has never been slow enough to hold anything up, in practice.)

      • jjice 1 day ago
        Cursor absolutely tries to maximize what it claims is "AI-edited", and it's nonsense a lot of the time. If it writes a function and then I go in and edit that function, it claims my edits _and_ any net-new lines I add above or below it.
        • giancarlostoro 1 day ago
          So their diff mechanism is poorly (or purposely?) labeled, then.
      • pwillia7 1 day ago
        you don't use vim/emacs for the productivity. It's a lifestyle decision
    • mrbungie 1 day ago
      I still don't know how to reconcile these reports with what other people say about GenAI-agentic assisted engineering being the only way of working nowadays, especially in startups.

      Probably there is no real dichotomy and it depends on multiple factors, but it is strange to see reports that differ so much from one another.

      • binary0010 1 day ago
        It's not required for startups. But if you are building trashy, brittle products, your main metric is speed to market, and you expect a high chance of failure (e.g. most YC startup batches), then yes, you have to do agentic engineering.

        If you are making extremely specific, high-quality products over a long time window, and your founders are deeply experienced in that field of engineering, then no, you don't need agentic engineering and probably want very little LLM code in general (outside of some boilerplate, internal tooling, etc.).

      • themafia 1 day ago
        > I still don't know how to reconcile these reports

        This is work related. So you can't expect everyone to have the same input demands or output expectations.

        > Probably there is no dichotomy

        It's literally staring you in the face.

      • SpicyLemonZest 1 day ago
        I think GenAI-agentic assisted engineering is the only way of working nowadays, and it's the only way I personally have worked for months. I still think that an outright majority of presentations on AI tooling I've seen have been in the nonsensical "Look how many tokens I can burn" genre. Had to sit through one guy recently who explained why you need a complex agentic team with 6 different roles in order to ask Claude to investigate a bug, which you most definitely do not.
        • dranudin 1 day ago
          A guy at my company (a very old company; we need to maintain our software for 30+ years) gave a presentation about how they used Opus 4.6 to onboard people, like giving new team members access rights etc. Then another guy (even higher in management) proposed using a team of agents for that. It's getting pretty wild
      • wahnfrieden 1 day ago
        Wage workers are evaluated on behaviors, founders are evaluated on growth and revenue. Of course usage patterns and outcomes will be different
    • throwway120385 1 day ago
      I think you'll find that a lot of big investment companies are invested to the hilt in a lot of tech companies, and also in OpenAI and Anthropic. So you can do the math on where the directive is coming from and why it's not particularly careful or measured.
    • Henchman21 1 day ago
      > "You've used AI to do something that was a single command."

      As time passes and the layers of abstraction pile up, later generations won't understand the underlying layers of the abstraction. This is a huge weakness in our systems development -- and a huge potential attack surface for adversaries.

      • epgui 15 hours ago
        AI is not even an abstraction over a CLI command, in any sense.
      • cindyllm 1 day ago
        [dead]
    • Our_Benefactors 1 day ago
      > You've used AI to do something that was a single command

      Yes, and that’s a good thing! This is in fact where a lot of AI value lies. You don’t need to know that command anymore; knowing the functional contract is now sufficient to perform the requisite work duties. This is huge!

      • funimpoded 1 day ago
        Not even joking: the main benefit I've seen from "AI" for editing code is that it lets me quickly do all the things I could already have been doing just as quickly if I'd ever bothered to learn my tools.

        Of course I lose about as much time as I save to its fuck-ups, so I'd still have been better off learning to actually use a text editor properly. Though (as I mentioned in another post) part of why I've never done that in 25ish years of writing code for pay is that my code-writing speed has never been too slow for any of the businesses I've worked in, i.e. other things move slowly enough that it never mattered.

      • mrbungie 1 day ago
        Once I learn a command that is both repeatable and useful, I prefer to either keep it in my mind or in my aliases. Thank you.
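
        That "learn it once, then freeze it" habit is worth spelling out. A minimal sketch; the function name and its job here are invented purely for illustration:

        ```shell
        # Once a command has proven itself, it goes in your shell rc file,
        # not back through an LLM every time. Example: count lines of Python
        # under a directory (defaults to the current one).
        pyloc() {
          find "${1:-.}" -name '*.py' -exec cat {} + | wc -l
        }
        ```

        After that, invoking it costs zero tokens, forever.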
        • saxelsen 22 hours ago
          Yes but companies would love for a robot to be able to do it instead.
        • Our_Benefactors 1 day ago
          You can still do this! And AI will teach you that command far far faster than synthesizing it yourself.
          • mrbungie 1 day ago
            Yeah, I know AI is useful for that, that's why I said after I learn. Hopefully once.
        • mikojan 1 day ago
          That's what Skills are for

          :^)

          • mrbungie 1 day ago
            0 tokens per command >>> Hundreds of tokens per command
      • malfist 1 day ago
        Is it? If the LLM's change broke something, do you know enough to fix it?
        • Our_Benefactors 1 day ago
          The same question can be applied to work without AI, so this isn’t a meaningful criticism
          • eikenberry 1 day ago
            In one case you are using tools you understand, in the other you aren't. Seems different to me.
      • MrCharismatist 1 day ago
        Look, I feel for junior admins, I was one 35 years ago and the only reason I'm where I am today was because I had to learn the hard way, repeatedly and often.

        I use the shit out of opencode to do things as a force multiplier, not as a way to keep me from knowing what its doing.

        The point at which we're optimizing for "we don't need to know that anymore" is the point at which everything blows up, because agentic work is not fully deterministic, models hallucinate even simple things.

        Blindly relying on your agent weapon of choice to just do the right thing because you didn't take the time to understand how the lego fits together is an actual problem.

        • thereisnospork 1 day ago
          Replace "agent" with "direct report" and you've just described middle management. For better or worse, companies have always run on non-deterministic tasks doled out by persons who barely understand the work.
          • paulhebert 1 day ago
            Honestly human employees feel closer to deterministic than LLMs.

            I have a pretty good sense of the quality of work my coworkers output, where they tend to struggle, where they're talented, what level of review is required, what I should double check, etc.

            By contrast LLMs are more like picking a contractor out of a hat. Even with good guardrails the quality and types of issues vary wildly prompt to prompt.

      • bluefirebrand 1 day ago
        > You don't need to know that command anymore

        I find it hard to read "You can do things without knowing things" as a positive improvement in work, society, life, anywhere

        • Our_Benefactors 1 day ago
          You are the worst kind of gatekeeper, then! A true reactionary who believes they are righteous for impeding others!
          • bluefirebrand 1 day ago
            I'm pretty sure you're being sarcastic. I hope so.

            It's hard to tell anymore because I have encountered people who genuinely do seem to think that disliking AI is gatekeeping somehow

          • nukedindia 1 day ago
            [dead]
      • perrygeo 1 day ago
        I watched people ask LLMs for linting/refactoring help, burning easily 5 minutes for something that could be completed deterministically, locally, in ms using any modern editor.

        Quite frankly, it was embarrassing. We've had tools for static analysis for ages. Use them.

        Someone with better knowledge could work 100x faster using 100x fewer resources. They did it the slow, expensive way but at least didn't have to think? Odd flex.

      • fg137 1 day ago
        > "You've used AI to do something that was a single command."

        A coworker created a shared Claude Code skill in our repo.

        It's obviously something that can be done as a python or bash+jq script and run deterministically.

        Instead we use natural language and waste tokens for that.
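      As a sketch of the deterministic alternative (the chore here is hypothetical, since the skill's actual job isn't described above), a small script that pulls failing check names out of a CI JSON report -- zero tokens, same output every run:

```python
#!/usr/bin/env python3
# Hypothetical chore: list failing checks from a CI JSON report.
# Deterministic and instant -- no tokens, no prompt, no variance.
import json
import sys


def failing_checks(path):
    """Return the names of checks whose status is "failed"."""
    with open(path) as f:
        report = json.load(f)
    return [c["name"] for c in report["checks"] if c["status"] == "failed"]


if __name__ == "__main__" and len(sys.argv) > 1:
    print("\n".join(failing_checks(sys.argv[1])))
```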

      • miyoji 1 day ago
        It's also several hundred times more expensive.
        • Our_Benefactors 1 day ago
          False! Labor is the most expensive input in creating software, not joules of energy. Using AI is far far cheaper than expecting workers to synthesize all knowledge themselves.
          • miyoji 1 day ago
            Both workflows involve typing a very small number of characters and should take under 30 seconds. There's no difference in labor cost. However, the compute and energy costs of the tokens to solve the problem vs the tool call will be orders of magnitude in difference, even for trivial stuff like grepping. It gets worse as the problem gets more complicated and the tools more specialized.
      • quikoa 1 day ago
        I can't tell if this comment is sarcasm or not. If you let AI run commands you don't understand (especially in production) you may end up with some nasty surprises.
        • xp84 1 day ago
          With a comment like that, it's no wonder you're dramatically below our minimum guidance for tokens consumed.

          If AI breaks production this way, you just tell AI to fix it! And look, now you've consumed tokens twice. Think on that and I'll see you at the end-of-year performance review.

          • andrekandre 20 hours ago

              > If AI breaks production this way, you just tell AI to fix it! And look, now you've consumed tokens twice. Think on that and I'll see you at the end-of-year performance review.
            
            it reminds me of using something like GDP as a measure, where spending more money == good even if it's at the cost of actually less productivity (more middlemen taking out rents, for example)
        • Our_Benefactors 1 day ago
          It’s not sarcastic at all. Using AI to accomplish things and fill in your knowledge gaps is literally the whole point of it. People are downvoting and salty because they thought the value in the job was in memorizing esoteric APIs (it never was)
  • pjc50 1 day ago
    Lots of people reporting their "I had to use up my tokens, so I burned them on worthless stuff" stories. Incredible thing to do in a climate emergency. Push harder guys, maybe we can hit 3C warming?

    This reminds me of the story of how the USSR nearly made whales extinct to meet a quota for whale meat that nobody wanted to eat.

    • jordanb 1 day ago
      I've been noticing how our economy keeps getting more Soviet as it becomes more top-down. We basically have central planning now with all the pathologies inherent in that system, but unlike the soviets we just have a bunch of guys who happened to get rich or bribe the right people running our GOSPLAN.
      • CaptainTaboo 1 day ago
        Things definitely feel 'Soviet' at my company. AI usage has been mandated by upper management (despite the fact that it doesn't really make sense or solve any problems in my particular job). They literally call it an "AI revolution." If you dare question the wisdom of the company's 'AI-First' policy, it's like you risk being singled out as a "counter-revolutionary."
        • RunningDroid 5 hours ago
          > They literally call it an "AI revolution."

          People proclaiming an "AI revolution" seem to have inspired at least one song:

          https://genius.com/Worldview-ai-revolution-lyrics

        • andrekandre 20 hours ago

            > They literally call it an "AI revolution."
          
          they are calling it "the ai era" around these parts lol... "coding in the ai era" sounds so big and meaningful, but actually it's just vapid.
      • throw-the-towel 1 day ago
        Yeah, the stories I've heard from Meta are very Soviet-coded. Like, trying to exceed the plan but not too much, because then the new plan would be hopelessly unachievable and you'd be punished for not meeting the insane expectations.
        • nomorewords 1 day ago
          So like Publix company earnings? You have to beat by a little so you're not just flat, but if you beat by too much you look incompetent/suspicious.
          • Shalomboy 1 day ago
            YES! I've been trying to explain this about Publix for a week now but I couldn't put it to words.
        • pjc50 1 day ago
          Literally Stakhanovism.
      • nathan_compton 1 day ago
        The problem is that the founding fathers believed in constraining the state because it could be abusive, but they should have understood that all power ought to be subject to the people, not just state power.
        • xp84 1 day ago
          I wonder what the largest and most powerful private enterprise the FFs knew about was. I suppose they'd probably heard of the Hudson's Bay Company, but I have no idea how they really felt about the potential that many normal people would feel equal amounts of domination from companies with revenue much larger than most countries' GDP.
          • sojournerc 1 day ago
            The Dutch East India Company would've definitely been known, and was huge.

            https://en.wikipedia.org/wiki/Dutch_East_India_Company

          • kylestlb 16 hours ago
            The FFs were literally bourgeois, the american revolution was a bourgeois revolution. They were the big private enterprises, and they wanted to stop giving a cut of their winnings to king george.
      • danaw 19 hours ago
        soviet-coded minus the cheap/free housing and public transit
        • parineum 18 hours ago
          And famine
          • pona-a 5 hours ago
            Oh don't you worry your little head, it's coming in the next patch
      • kylestlb 16 hours ago
        Surprise - capitalism has always been top down planning. The bosses are the planners, my friend.
    • lmkg 1 day ago
      This is why we're clear-cutting forests to build new data centers? Not even for "real" productivity gains, but just for the sake of using the tokens.
      • Lord-Jobo 1 day ago
        Bullshit work has hit escape velocity, won’t be long now before we have huge warehouses filled with people doing sudoku for their daily food allowance, and that’s just how our entire economy functions.

        How are we sliding face first into “snowpiercer but dumber”?

      • andrekandre 20 hours ago

          > just for the sake of using the tokens
        
        capital requires growth, forests be damned
      • danaw 19 hours ago
        don't worry they're not building many of them anyways, they're just accumulating debt and padding the pockets of all the construction companies that are sitting around idle
      • buellerbueller 1 day ago
        Gotta scale and then IPO those startups, so the VCs can cash out profitably.
    • pjmlp 1 day ago
      No worries, we keep drinking from paper straws, because that is what really matters.

      The problem with not burning tokens is that you miss the performance KPIs, get labelled a luddite, and off you go, even before the job gets taken over by AI.

      I do agree with the sentiment, that and war mongers destroying the planet.

      • ZeroGravitas 1 day ago
        What's the logic of dissing paper straws in a comment raging against war and AI as threats to the environment?

        I see it a lot and assumed it was concern trolling from plastic manufacturers or libertarians funded by them but you seem genuine.

        Have you just fallen for that concern trolling? Grown so cynical that nothing matters anymore? I don't understand the intention if you have a genuine desire to improve society.

        What would we be doing differently in a world where we were still using plastic straws? Would that have freed up enough mental energy for a revolution? Would people be blowing up private jets while sipping their diet coke?

        • paulhebert 1 day ago
          I think it's a reference to the frustrating pressure to make small personal sacrifices while those in charge burn down the world.

          I don't mind paper straws (though I don't really use straws in general) but it is frustrating trying really hard to be sustainable and then hearing that Amazon is encouraging people to use as much computing power as possible.

          • disgruntledphd2 13 hours ago
            Paper straws really, really suck for little kids though.
        • jcranmer 1 day ago
          Replacing plastic straws with paper straws is at best little more than greenwashing (and honestly, possibly even worse), since the environmental effects are so minimal. Contrast this with plastic versus paper bags, where plastic bags due to their extremely light weight have a much greater tendency to become windblown litter, so they do have a comparatively greater impact on the environment.

          One of the real problems of greenwashing is that it's trying to sell an idea that with just a tiny, almost unnoticeable change to lifestyle, you can keep doing what you're doing and still have the peace of mind that you're not doing anything bad for the environment. Plastic recycling falls into this category--oh, just recycle this thing instead of throwing it away, that means there's no more guilt to be had over the environmental costs of plastic production (meanwhile ignoring the fact that plastic recycling is largely nonviable and so all of that goes straight to the waste stream anyways.)

          The hope is that in the alternative world, instead of praising companies for taking what are ultimately only token steps towards environmental stewardship, we'd actually castigate them harder and get them to take real steps to improving the environmental aftereffects of their activities.

        • pjmlp 1 day ago
          It is a distraction: the feel-good of using paper straws instead of actions that actually make an impact, like improving transport infrastructure, shutting down factories that pollute the environment, cutting flights to meetings, limiting the kinds of engines that get produced, ending wars for profit, now AI data centers, and plenty of other things that, since covid, people, big corps and governments could not care less about.

          Nah, one gets a cocktail with a paper straw and feels like they are doing their part saving the planet.

          Plastic straws aren't hard to find, by the way.

          • ZeroGravitas 1 day ago
            One of the three techniques discussed in the book "Rhetoric of reaction" is futility.

            Reactionaries are often arguing against good things, which makes it difficult for them to directly attack them.

            So they develop consistent techniques to attack them from oblique angles:

            > Hirschman describes the reactionary theses thus:

            > According to the Perversity Thesis, any purposive action to improve some feature of the political, social, or economic status quo only serves, perversely, to exacerbate the very condition one wishes to remedy (compare: Unintended consequences).[4]

            > The Futility Thesis holds that attempts at social transformation will be unavailing, that they will fail to "make a dent" in the problem, and the motives of those who keep attempting futile reforms are suspect.[5][6]

            > The Jeopardy Thesis states that the risk of the proposed change is too great as it imperils some previous, precious accomplishment.[6][7]

            > He characterizes these theses as "rhetorics of intransigence" that do not further constructive debate.[8] Moreover, he says they turn optimism about social advancement into pessimism.[9]

            The futility (and perversity) ones are what I think of when people are angry about straws on the internet.

            I just don't understand how complaining endlessly about straws leads to solving any of the bigger problems, each of which could be dismissed in the same way.

    • Insimwytim 1 day ago
      > USSR nearly made whales extinct

      The USSR barely accounted for 15% of the world's catch (with Japan as the leader).

      > that nobody wanted to eat

      unsubstantiated.

    • wolvoleo 1 day ago
      Yeah but what can we do. I don't want to be punished by work either.

      Luckily I work in app management and I know they can only see the last date used so if I just put in one query per day I'm good.

      But I'm so sick and tired of this AI hype :(

  • laweijfmvo 1 day ago
    I work at a FAANG (not Amazon), and have heard this a lot, both internally and publicly. Except, never officially from anyone that mattered (leadership). It always starts with a rumor and/or someone (internal) creating a dashboard/metric, and blows up from there. I've even heard leaders proclaim that it's NOT what they're looking at, and that you better NOT be wasting those expensive tokens.

    Now, they might be; they've certainly used silly metrics in the past (LoC, commit count, etc.) without ever fully acknowledging it. But I don't believe that it's as simple as more tokens = more better.

    • eunoia 1 day ago
      Fellow FAANG. We have weekly manager meetings where leadership encourages us to increase token usage. We do push back, and leadership acknowledges that token spend is not a great metric and people are likely to game it... and then goes right back to encouraging us to increase token spend in our teams.

      We have token tracking dashboards that leadership is looking at. I know because they show us in these manager meetings. Haven't opened them to everyone yet as some kind of leaderboard, so at least that's nice.

      Lots of rumors token spend will be involved in perf reviews. Leadership denies it... but then holds more meetings telling us how important it is to increase our token spend and discussing inadequacies from the token spend dashboards.

      • laweijfmvo 1 day ago
        Interesting. When you say leadership along with manager meetings, are you referring to managers, who might just be exacerbating the rumors I mentioned, or actual company leadership, like Directors, Vps, etc? And are they saying “AI usage”, or explicitly “Token count”?
        • eunoia 1 day ago
          Executive leadership talking to the managers in their orgs.

          I wish I was kidding, but they really are pushing increased token usage. Like I said, we push back. When we push back they acknowledge it's a bad metric and lately have started to add qualifiers about how we don't want to burn tokens unnecessarily and in fact we should be looking to use tokens more efficiently.

          And then in the next meeting we are once again talking about how to encourage our teams to use more tokens.

          The goal is to increase AI usage of course, but the only metric they track to measure progress on that goal is token usage. Also endless presentations of vibed tools that we never hear about again after a week. Get a lot of those too.

      • hirvi74 1 day ago
        I do not want FAANG and FAANG does not want me. So, as a goblin, I must ask: how is your and your team's morale doing?

        People in FAANG likely worked hard to get in there or lucked out or some combination of both. I feel like my soul would be crushed if I hacked away at Leetcode for months on end just to babysit and gaslight some algorithm into asymptotically following my instructions.

        • patrick451 6 minutes ago
          I work in a FAANG. My sense is that most of my teammates are loving the AI. Personally, I hate the AI tools and the hype around them and what it has turned this job into. My own morale has never been lower in the 7 years I've been there and I'll probably try to switch careers or semi retire sooner or later.
        • eunoia 1 day ago
          Mixed bag. Some engineers are excited about the company giving them a blank check to explore a new tool. Some engineers are upset because they feel their skills are being devalued by leadership.

          Overall I would say most are exploring the new tools while waiting for the madness to subside. Work in $BIGCORP for long enough and you get used to leadership being out of touch with the work on the ground.

          Engineers in $BIGCORP jobs are by and large not the hacker types anymore btw.

    • Aurornis 1 day ago
      I'm in a large-ish peer group for engineering managers. AI token over-use is a growing problem.

      The problem explodes at any company that puts up a token use leaderboard or hints that they might do layoffs for engineers that refuse to use AI tools. This triggers a race to use as many tokens as possible to stay ahead.

      Anecdotally, the problem is worst among devs who read a lot of social media. Twitter, Threads, Mastodon, LinkedIn, and others are filled with recycled viral stories about companies going AI-native and firing people who don't use enough AI. Anxieties are high right now so nervous developers see this and think they must burn tokens faster than their peers to avoid an inevitable culling.

    • thelopa 1 day ago
      I recently left a FAANG. Shortly before I left (for unrelated reasons) the director of my org got scolded by the VP he reported to because token usage in his org was low. After that the ICs in my org were told to use ai for everything or there could be consequences for their careers.
      • eikenberry 1 day ago
        > I recently left a FAANG.

        Congratulations!

      • hirvi74 1 day ago
        I wonder if we will ever reach a point on society where tokens just become the universal currency. We will all work for tokens, pay our bills in tokens, make purchases in tokens, strippers will dance for tokens, etc..

        I'm kidding, of course... but human stupidity is infinite, so...

        • __MatrixMan__ 1 day ago
          Systems where the money is redeemable for something make a lot more sense than what we're doing with dollars. It's easy: you value the money if you value the thing it's redeemable for. When you exchange that kind of money with people, you know something about which values you share with them.

          When it comes to dollars, it's hard to know what "value" even means.

    • pjmlp 1 day ago
      Enterprise consulting here, it is getting ridiculous, with forced trainings, workshops and hackathons to motivate use of AI in daily activities.

      For stuff that could easily be done as shell scripts, we get asked how we could make an agent out of it.

    • wolvoleo 1 day ago
      In our place it is really a thing and comes from leadership. They feel like they spent a lot on copilot and they want to see people using it.
      • wahnfrieden 1 day ago
        It's too bad that they go with a safe enterprise option that is so deficient that the outcomes will be bad and lots of people will learn useless lessons that don't translate to state of the art tools and usage patterns
        • ecshafer 1 day ago
          I tried to have Copilot create a PowerPoint slide with some content and a rough design idea. It created an empty PowerPoint slide with the default template, told me to download it, then created a separate bit of text and told me to copy and paste it in (random bullet-point nonsense), with no design elements. Whereas Claude actually created a slide, with color and formatting, and the generated content fit it. Copilot feels like ChatGPT 3.0.
          • wolvoleo 1 day ago
            Yes the PowerPoint generation was one shot only, it would let you make one and then if you wanted to make changes it would tell you to do it yourself lolllll. I mean what's the point then. The only option was to roll a whole fresh try.

            Not sure if this is still the case, I rarely use PowerPoint.

        • wolvoleo 1 day ago
          Yes, agreed. It's actually the office version of Copilot, which isn't much to write home about.

          Apparently the github one is more useful for its target audience.

          • Shalomboy 1 day ago
            GitHub's Copilot CLI is nice enough; it's not as well-packaged as Claude Code, but I like being able to use their harness in my emulator of choice, and it gets better at the little stuff every week.
          • wahnfrieden 1 day ago
            The GitHub one is trash
    • nitwit005 14 hours ago
      We got an email stating our managers would be monitoring our usage. I got an angry Slack scolding for zero AI use, which turned out to be because I'd signed on with an alternate version of my email address, which they hadn't accounted for.
    • tyleo 1 day ago
      I feel like it depends on the leader. I've definitely seen leaders value LoC beyond reason and cause worse, bloated codebases by rewarding cowboys with 10k line PRs.

      Big companies have thousands of leaders. Many good, many bad.

    • blindriver 1 day ago
      My friend at Google says they have a "ai-usage" dashboard that tracks everyone's ai token usage as well as aggregated per team, per org, etc. There's a sign on it that says "don't use this for perf reviews!" but I think everyone knows that that's exactly what they're going to use it for.
  • onion2k 1 day ago
    I'd bet that the goal is for people to 'game' it though. By pushing people to use AI more they'll try it, experiment with it, 'waste' time on it ... and from that they'll learn about it. That's the end goal.

    They're using tokens for pointless stuff right now in order to figure out use cases where it helps. You can't do that without also learning where it doesn't help.

    My company is doing the same thing.

    • this_user 1 day ago
      That is exactly the point. It may be wasteful, but it's the fastest way to explore how AI may actually be useful to your business. Even if 80% of employees are just wasting tokens, you still have 20% who are figuring it out.
      • mrbungie 1 day ago
        It is difficult to believe that you can cobra effect yourself into greatness. I'd rather say the most useful perk for companies doing this is the AI-washing adoption metrics they can report, which will hopefully (for them) increase valuations.
      • jordanb 1 day ago
        Even if that were true it'd mean that current AI usage is overshooting actual, productive use by 5x. This is a problem when all the AI projections are that the current state is the minimum and future usage will be 10+x.
        • onion2k 16 hours ago
          It does mean that, but in a situation where people don't know what the productive use is you have no other option.

          It's like that famous quote about advertising that says "Half my ad spend is wasted, but I don't know which half". 20% of token use is useful, but as you don't know which 20% it is you have to spend 5x more to get that knowledge.

      • nitwit005 14 hours ago
        That would be 20% who _might_ figure it out. That's essentially R&D spend, which has no guarantee of success.
    • DonsDiscountGas 5 hours ago
      Unless you install meshclaw and just tell it to use a bunch of tokens
    • krupan 1 day ago
      I'm sorry, but that's insane. I mean, I guess if you have cash to burn I could think of even worse ways to spend it, but seriously, this is dumb. What other tool have businesses spent millions of dollars and person hours on to try and find something useful the tool can do?? Talk about a solution looking for a problem! If it's not clear in the early stages that this tool solves a problem then ditch it and move on! Give that extra cash to your employees and shareholders instead!
      • paulhebert 1 day ago
        Every dollar your company gives to AI providers is a dollar they're not paying you.
        • onion2k 16 hours ago
          No it isn't. My salary won't increase if the AI budget reduces, or if the AWS bill goes down, or if the board decide not to do a retreat to Davos this year, or anything else. My salary is very strictly limited by the salary budget. The only impact of a reduced AI budget would be an increase in dividends to shareholders, but that would have a second order effect of making the company less valuable because our competitors would be racing ahead of us.
          • paulhebert 5 hours ago
            Why not?

            There’s only so much money to go around. Wouldn’t a big new expense affect how much is available for the salary budget?

            I’m sure it differs between companies but there seems to be a big trend of laying off workers while pouring more and more money into AI.

            If every company is using AI to compete they’ll still get the same slice of the pie but they’ll spend more money on AI and have less money for salaries

        • andrekandre 20 hours ago
          its also a dollar that is most likely going to busywork and not towards anything customer-facing/improving...
  • Qem 1 day ago
    It's a shame AI now has a universal basic jobs[1] program, but humans still don't. Companies are paying AI to dig holes, so other AI can fill them.

    [1] https://locusmag.com/feature/cory-doctorow-full-employment/

    • philipallstar 1 day ago
      We didn't. The USSR had 100% employment long ago[0], and all the poverty that goes with it.

      This isn't like that, as it isn't funded through taxes. This is private companies experimenting with their money, and risking downstream cost increases that may cause people to go elsewhere, as they do when they try anything new.

      This is much better than just funding people regardless of productivity through forced taxes.

      [0] https://nintil.com/the-soviet-union-achieving-full-employmen...

      • rdedev 1 day ago
        Right now there are state govts bending over backwards to provide cheap energy for data centers. The difference is being paid by the people who live nearby through increased electricity costs. This is just a tax with extra steps.
        • drstewart 1 day ago
          Shit, the government providing infrastructure in response to public demand? Outrageous use of taxes!
          • jjulius 1 day ago
            Your sarcastic comment mistakes "public demand" for what is actually private sector demand.
            • drstewart 1 day ago
              Good point, companies should be exempt from taxes since they aren't expected to count or benefit from infrastructure improvements
      • krupan 1 day ago
        Are you sure this isn't being funded by our taxes? How many data centers are being built in areas where they have been given a huge tax break? How many banks are loaning money for AI infrastructure knowing that they'll be bailed out by taxpayers if they fail?
        • philipallstar 1 day ago
          A tax break isn't funding. Saying we'll take slightly less of your money is not giving money.

          Either way, I don't know what this has to do with Amazon getting workers to use AI more, which is what my comment was addressing.

          • crote 22 hours ago
            You're forgetting that taxes pay for government services, and those corporations consume government services as well. Give them a big enough tax break and it'll end up at a net negative.
      • chadgpt3 18 hours ago
        Some of these companies derive their revenue in a way similar to taxes, as you're forced to pay them for services. I don't see why it matters whether it's technically defined as a tax or not, if you still have to pay. Think of the TV license fee in some countries, or rent.
      • monkaiju 1 day ago
        > as it isn't funded through taxes

        This is simply not true, especially when you consider the massive amounts of government support that so many parts of this "experiment with their own money" are getting. As a Utah resident it's extremely evident in how forcefully they're pushing through what will be one of the largest datacenters in the world despite near-universal disapproval from the citizens.

        • philipallstar 1 day ago
          Pushing through in what sense? The government is building a data centre near you that Amazon is pushing its people to use?
      • Qem 1 day ago
        > We didn't. The USSR had 100% employment long ago[0], and all the poverty that goes with it.

          I don't think USSR poverty rates surpassed those of the Tsarist Russia that preceded it. To their credit, I think ideological competition between the capitalist and communist blocs was part of what allowed the improvement of workers' living conditions in capitalist countries after WWII. Fear of revolution kept one-percenters from taking all the productivity gains of the period. They had to share some to keep the guillotines away. As soon as things went south in the USSR, from the 70s onwards, and capitalism took over the whole world, lacking any viable extant competition, we reverted to the old norm: workers have been denied their share of the productivity gains since then, and here we are now. A regime premised on free competition was undermined by a lack of competition to itself.

        • philipallstar 1 day ago
          No, the ideas of individual liberty, law enforcement, and private property meant that central planners couldn't mess up the US. That's all you need to massively raise standards of living. That and energy from oil.
          • chadgpt3 18 hours ago
            We have central planners in the US. For instance Jeffrey Bezos, Sam Altman, Elon Musk. They compete with each other for planning power, but they compete by making the best plans for themselves, not for society.
        • chadgpt3 18 hours ago
          This is correct.
      • bobthepanda 1 day ago
        [dead]
  • nateglims 1 day ago
    Within Amazon, token usage is gamified if you use Kiro and your team isn't billed for it in the same way you are billed for AWS or have to account for your capacity in older systems. I've credibly heard of people gaming this internal ranking before anyone paid attention to it. There are also tons of enthusiasts doing all kinds of internal projects and sharing them.

    There's definitely some pressure from managers when they hear about N00% productivity boosts in internal presentations, but where I am at they would figure out if you were making up tasks rather than working pretty quickly and the pressure comes from aggressive deadlines and a shift from the yearly OP1 process to a more agile one.

  • mjr00 1 day ago
    I've heard similar stories from AWS and other non-AWS FAANG employees. All of the token leaderboards have a "this doesn't count toward your performance review" disclaimer, but there's an implied nudge nudge, wink wink after that statement.

    One person I've talked to has someone in their org who is running GasTown and chews through tokens 24/7. They don't contribute very much, but they're comfortably in the #1 spot.

    • MeetingsBrowser 1 day ago
      I have heard from multiple people at small to medium-sized orgs that token usage and AI adoption are a central part of performance reviews.
    • ekjhgkejhgk 1 day ago
      Yeah, my manager at my 400-person company is one of those. He runs GasTown and his agents bump things here and there throughout the codebase, and he has like 50 commits a day. Compat versions, formatting, stuff like that.

      But the thing is, the problem is the person, not the technology. He was already like this before LLMs. He would "refactor" repos into smaller repos, and all of a sudden all of the code has his name on it. If you just skim, it looks like he built a huge chunk of the company's codebase. He also has a history of saying no to stuff I want to do, then doing it himself. He'll nitpick my PRs to no end (or say outright that he doesn't think I should do that thing), then turn around and implement it himself. He doesn't copy-paste my code, but he does re-implement the same ideas he just said no to after my PR was open. Very smart guy, very dishonest. But he's good at being dishonest. If you ask him about it he says "oh, I just thought this way would be more organized" or something like that. From the outside you could argue that one way is better than the other (for reasons I would claim are irrelevant), so it's not obvious that he's being dishonest. But since I see 100% of what he does, it's entirely clear to me that this is a pattern.

      EDIT: just remembered another one. One time I asked him for a specific week of holidays. He didn't say "no", but he did mention that we were under a lot of pressure to deliver The Thing, and asked if I would delay my holidays. I said "No, I'm not going to delay them", so he approved it. Then when the time came around, he took holidays the same week. I didn't challenge him on this one; I already know him well enough to know the truth, which is that he's not ashamed to ask of others things he would never accept himself.

      • r_lee 12 hours ago
        Sounds like the perfect employee honestly. I bet he's beloved by the upper management
      • da768 22 hours ago
        He's building his GitHub resume
      • esafak 1 day ago
        I would leave.
      • mial 1 day ago
        Sounds like an actual psychopath.
        • ekjhgkejhgk 11 hours ago
          Yes, I think so. And another aspect is that he's not the aggressive bully you might stereotypically expect from my description. Instead he's super calm, super polite, the perfect picture of professionalism. But all of that is superficial; you can be dishonest in various styles. It took me a long time to notice the underlying substance of the behavior.
  • wolvoleo 1 day ago
    This is coming to my workplace too. They send us angry reminders if we don't use copilot in ms office every day :( I just type Hello to it.
  • Insanity 1 day ago
    Devil's advocate - this is a forcing function to get people to try out AI who might be reluctant to try it. I'm speaking from personal experience: I was 'forced' to use the tools since usage is tracked, and found genuine use-cases.

    Of course at some point the 'benefit' is outweighed by the 'negatives', e.g. people making up work. Tokens used is about as useful a measure of productivity as 'hours in office'.

    EDIT: My use-cases still have relatively low token usage though lol

  • avelis 1 day ago
    Goodharts Law - When a measure becomes a target, it ceases to be a good measure.

    https://lawsofsoftwareengineering.com/laws/goodharts-law/

    • MeetingsBrowser 1 day ago
      But does Goodhart's law apply to things that were never a good measure to start with :)
  • AMerrit 1 day ago
    I've done similar at my job where management wants us to use all of our tokens before they expire. I usually set it to documentation tasks and other minor tasks just to eat up tokens.
    • funimpoded 1 day ago
      There's really no end to dot-language diagrams you can have it make. Call graphs, package dependency maps, let it try to figure out an architecture diagram, whatever.
      • gdulli 1 day ago
        Giving it busywork that you don't have the time or wherewithal to check carefully sounds like a disaster. Rather than introduce content that will be partially wrong and cause confusion if it's ever read, I'd consume the credits and send the output to /dev/null.
        • funimpoded 1 day ago
          Just title it "draft". Odds are nobody will look at it anyway.

          Add a pre-commit hook to re-create the diagrams on every commit (in case anything changed, of course), that way you can really burn tokens and look good to management.
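          A minimal sketch of such a hook, in Python. `ask_agent` is a hypothetical stand-in for whatever token-metered agent CLI/API the employer tracks (stubbed here so the script runs anywhere), and the paths are illustrative:

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: regenerate a 'draft' Graphviz diagram for every
source file on each commit, mostly to burn tokens."""
import pathlib


def ask_agent(prompt: str) -> str:
    # Hypothetical stand-in for the token-metered agent; a real hook would
    # shell out here, burning tokens on every single commit.
    return "digraph G { tokens -> burned }"


def regenerate_diagrams(src_dir: str = ".", out_dir: str = "docs/diagrams"):
    """Write one <name>.draft.dot per source file and return the paths."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for src in sorted(pathlib.Path(src_dir).glob("*.py")):
        dot = ask_agent(f"Draw a call-graph diagram for {src} as Graphviz dot")
        target = out / f"{src.stem}.draft.dot"  # titled 'draft', per above
        target.write_text(dot)
        written.append(target)
    return written


if __name__ == "__main__":
    for path in regenerate_diagrams():
        print("regenerated", path)
```

Wired into `.git/hooks/pre-commit`, this re-runs on every commit whether or not anything changed, which is exactly the point.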

    • rtkwe 1 day ago
      At least that nominally creates some value at the end of the day. Documentation is the thing everyone wants but no one has time/desire to create. My most recent token heavy task was having an agent write unit tests for coverage on a little graphAPI tool I'd written a bit ago to satisfy SonarQube.
      • sheept 1 day ago
        People don't want to read LLM-generated docs though. It'll lack the context to justify why things were designed the way they were, and there's always a risk of hallucination so you still have to verify the documentation's claims, since the person who published it likely did not scrutinize it.
        • rtkwe 1 day ago
          There are two major types of documentation: "why is this like this" documentation, and "here are the features of this library/tool" documentation. LLM stuff is fine for the latter as long as you screen it for hallucinations. You're right that they can't really do the former, because they don't have access to the reasoning, but I've often found even the latter to be lacking in many teams.
          • andrekandre 20 hours ago

              > LLM stuff is fine for the latter as long as you screen it for hallucinations
            
            that is such a productivity black-hole tho; at that point i might as well have written it myself
        • shigawire 1 day ago
          If you have artifacts saved as you develop it can use those when writing docs to capture intent and design decisions.
      • overfeed 1 day ago
        Inaccurate documentation can be worse than no documentation at all!
        • rtkwe 1 day ago
          ...Yes. I didn't say fire and forget but it can handle a lot of rote recitation of library flags and functions perfectly well. The kind of stuff that's autogenerated with javadocs, inputs, outputs and effects that are all in the code are available to the LLMs. Like all things with LLMs generate and review but I've seen some good outputs with minimal errors that saved days of work no one was going to be given the time to do.
  • dtnewman 1 day ago
    This article inspired me to build "Burn, baby burn", a CLI tool for burning tokens. See:

    - Show HN: https://news.ycombinator.com/item?id=48151287

    - github: https://github.com/dtnewman/burn-baby-burn

  • anigbrowl 1 day ago
    Maybe they could devote some resources to updating Amazon's customer facing AI instead. I ordered some programming books last night, and was told they'd arrive tomorrow (I'm a prime member and live near a major hub so this is the norm). This morning I found the date had been pushed back to May 27, a very surprising outcome (these were popular books from O'Reilly and similarly high volume publishers, not some obscure imprint).

    I asked Alexa (on the Amazon web page) about it and it couldn't tell me which carrier had the items or why they were delayed, directed me to a non-existent phone number, and then denied it had done so. The customer service bot I was eventually redirected to was even worse, and started telling me that the items would be delivered both tomorrow and by May 27 in the same message. Finally I got human intervention, who said the items would arrive tomorrow and that the delivery status had been updated, but the order page still says they're arriving at the end of next week.

  • jacekm 1 day ago
    Slack recently introduced this option where it can tell you what kind of animal you are (not kidding) based on the conversations you had. When I saw this I immediately thought "managers pushed poor folks at Slack to incorporate more AI into the product".
    • lsakoert 1 day ago
      "Absolutely anything that keeps users on our app longer is good!"

      Slack will start serving porn next.

  • bravetraveler 1 day ago
    I'd do this if the other punch to follow didn't appear to be 'justify the expenditure'.

    Choosing to wait for the PIP instead, if $EMPLOYER goes this way. Tell me the work I'm not doing and how pieces of ~~flair~~, sorry, tokens might help. Or don't, I don't care.

    • MeetingsBrowser 1 day ago
      Anecdotal, but appears to be common among other comments in the thread.

      For companies doing this there is no 'justify the expenditure'. Employees are being praised for high expenditure, regardless of actual outcome.

      Leadership see the problem as 'people resisting AI'. Embracing AI is seen as the solution, and token usage is seen as the measure of success.

      • bravetraveler 1 day ago
        You're right; the justification is made by dancing. Meme moment... weird flex, but okay.

        I'm also hesitant to 'go for the gold' because it only means more B2B monopoly money, juiced stats, or expectation. Or, God forbid, become the resident Token Expert. That praise you mention is exactly what I don't want!

  • whoneedstokens 1 day ago
    Made this as a joke but maybe it'll get some use

    https://token-burner.pages.dev/

  • shriek 1 day ago
    When are they going to admit that they over-invested in AI and somehow have to justify that spend by shoving usage down our throats?
    • whynotmaybe 1 day ago
      Just like return-to-office is there to maintain real estate value.
  • mandeepj 1 day ago
    Being an investor in Anthropic, Amazon must have a preferred billing rate, but others do not. No wonder their revenue shot up so much, so fast, because of BS goals like those.
    • grtt21 1 day ago
      Advanced Circular flows

      If I own part of a company, and I spend money on their goods, and as a result their revenues climb and consequently my valuation does too - then my firm's value will be higher.

      This would also explain the gung-ho approach. Some pretty devious financial engineering akin to arbitrage

  • b3ing 23 hours ago
    People want to get promoted or put things on their resume about AI, the only way is to force others to use/work with AI to bump up their numbers on the resume or to say they pushed some new AI initiative. Because all the new jobs or promotions require the new hot thing - AI.
  • shibaprasadb 13 hours ago
    I had written about the same: https://www.shibaprasadb.com/2026/04/29/tokenmaxxing-tap-in....

    This is the new tap-in tap-out!

  • markboo 18 hours ago
    When a founder from a non-tech company proudly told me that he had personally spent over 2000 dollars on coding last month (he had never coded before), I asked him why he was using the API instead of a subscription. He told me: "the model is different; I spend more, and it's a better and smarter model, without limits."

    I think they just want to show others (especially the non-tech guys) that they spent a lot, so it looks like they know more about AI.

  • rambojohnson 1 day ago
    I have colleagues at prime video who consult AI the way medieval clerks once consulted omens, generating entire chains of speculative labor after ritual examinations of any of their given codebases. no real or new initiatives / innovations are being pushed forward, and that's rumored to be happening in other departments as well.
  • cmiles8 1 day ago
    It’s not surprising given Amazon’s pretty lackluster position in AI beyond providing raw compute, which itself is basically a commodity at this point.

    Have heard very similar stories to what the article describes. There were also outright revolts from tech folks being forced to use Amazon’s own shit self-built AI vs Claude Code and other top-tier products.

    Given Amazon’s early start with Echo and Alexa they should have absolutely dominated this AI revolution, but they have been scrambling in a panic ever since ChatGPT showed up on the scene and always seem two steps behind the market.

    It all paints a picture inside Amazon of clueless leaders at the top and mobs of others below them just gaming the system so a silly dashboard looks green. “Day 2” has arrived.

    • Schiendelman 16 hours ago
      The "Amazon's early start" really reminds me of what the hybrid-focused Japanese car manufacturers did - they knew the battery tech and squandered their lead. This is normal though, this is why disruption is necessary.
  • krupan 1 day ago
    So we've seen sellers of AI hardware invest in AI software companies to create demand for their hardware. Now we are seeing AI (and/or AI adjacent) companies requiring their employees to use AI to create demand for AI. When does this snake finish eating its own tail?
  • paulorlando 1 day ago
  • ruxiz 2 hours ago
    It is the same in almost all tech companies now.
  • HyperL0gi 1 day ago
    “Show me the incentive and I'll show you the outcome.” ― Charlie Munger
  • pvtmert 5 hours ago

      > But a representative for Amazon said that there is no such company-wide metric for AI usage, nor are there internal leaderboards where employees are measured against each other. Rather, employees are able to view their own AI usage on personal dashboards.
    
    Such a bullshit statement. There is a _global_ dashboard that ranks Kiro/QuickSuite (formerly Amazon Q) usage per-employee based on tokens. The dashboard itself is in QuickSight (well, that also became part of QuickSuite anyway).

    Not only is the data open to anyone, you can clearly sort by rank and by daily/weekly/monthly/yearly usage. Current and former employees included (by internal alias).

    Moreover, there is an internal "awards" system that shows up in the PhoneTool profile: each employee gets "awarded" Kiro/AmazonQ/QuickSuite titles like "Blaze", "Thunderstorm", etc. You can see the other recipients of the same award just by clicking on it.

      Note: PhoneTool is an internal profile directory where you can look up other employees...
    
    ---

    On the side, I know several people who cannot produce proper code themselves, or integrate with anything on their own. Those who need constant hand-holding keep producing immense amounts of stuff with Kiro/AmazonQ, out-ranking SDEs nowadays (these are not SDEs, but SysDevs, Support Engineers, and TPMs). This in itself is not specifically a good or a bad thing. But once they stack-rank based on token usage, I am pretty sure "good" engineers who put effort into writing "good" code will rank worse than people who don't put effort into "concise" solutions. Therefore, quality will eventually deteriorate. And it will be too late once the leadership realizes what has been going on. (Well, they have already seen the Amazon-Q/Kiro related outages and keep denying it...)

  • anthonj 1 day ago
    Let it write unit tests for every single function in the codebase lol

    I've chosen the wrong profession.

  • netdevphoenix 1 day ago
    Hasn't Anthropic been experiencing issues due to extremely high usage? As their investor, you would think Amazon wouldn't do Anthropic dirty by weakening their ability to handle user traffic.
    • nateglims 1 day ago
      Amazon runs Anthropic models in its own DCs with Bedrock.
      • brcmthrowaway 1 day ago
        How does this work?

        Anthropic sends .gguf and a claude-serve binary?

        • svachalek 1 day ago
          GGUF is mostly a hobbyist format, but yeah, I'd assume it's a big pile of tensors and an executable.
  • giancarlostoro 1 day ago
    Here I am wishing my employer would give me even a modest AI budget. I do more in my time off, with higher quality (in terms of how much I build; I don't like to just spit out features mindlessly and endlessly), due to the ability to plan things out more and to have a wealth of knowledge to help me poke holes and find things that make sense that I had not thought of, or even bypass limitations I thought would block me forever.
  • delbronski 1 day ago
    This is what happens when you can code faster than you can think. It’s kind of similar to Facebook hiring hundreds of engineers before it even knew what to do with them.
  • omnee 15 hours ago
    If the financial or career incentives for management rely on 'token usage growth', then that is what will happen, to the detriment of everything else.
  • AvAn12 1 day ago
    1. so is everyone who is subject to a corporate mandate... but...

    2. this may be ok. A good way to learn a piece of software or tool or process is to play with it. We learn lots of general knowledge through play and experimentation. Heck, we get better at musical instruments by playing them.

    Mandates are kind of dumb in many ways. But they will force the issue of discovering whether anything useful can come from AI other than coding.

  • throwatdem12311 1 day ago
    When your incentive is to tokenmaxx, don’t be surprised when people game the system. Measurements something something benchmark something something bad.
  • swader999 1 day ago
    People need to start yelling, throwing things, and publicly mocking execs that do this. What is wrong with you all? I do this (except the throwing) and I get nothing but respect. If you've been a good little soldier for years, done nothing but deliver, and then you raise your ire, people will listen.

    If you can't change your company, change your company!

    • monkaiju 1 day ago
      Similar situation here! In fact our team has a no-LLM policy that I'm quite happy with. We did experiment with it, to the point that one of our seniors atrophied so badly we had to let him go, and we're still paying down some of the slop residue...
      • untrust 15 hours ago
        > to the point that one of our seniors atrophied so badly we had to let him go

        What was the time scale here? Was it actual skill atrophy or were they just shipping slop PRs all the time

        • monkaiju 8 hours ago
          About a year. They were making slop, then when asked to work without AI were unable to be effective for months.
  • rafram 1 day ago
    > Unlike other AI models, OpenClaw and MeshClaw run locally on users’ own hardware, giving them unprecedented independence.

    No, they don’t.

  • dhruvrrp 1 day ago
    I work at AWS (disclaimer opinions are my own, do not reflect views of my employer) and i think the existence of a leaderboard has led to folks gamifying it. People see peers in a higher tier on the leaderboard and start burning tokens to catch up.

    I think the company realizes this and is actively trying to avoid this, since for the new tools there isn't a leaderboard.

    • stefan_ 1 day ago
      Right, the leaderboard is internal.
  • almost_usual 1 day ago
    Corporate tech has accelerated into a preposterous trajectory.

    Burn resources at all costs to appear productive and use proxy metrics to measure success.

    Fire productive employees to ensure we have resources to fund the proxy metrics.

    AI slop fool’s gold is the product.

    • andrekandre 20 hours ago

        > AI slop fool’s gold is the product.
      
      juniors who are stuck on ai and can't learn are the real product imo
  • xiaoyu2006 1 day ago
    > Amazon employees are reportedly using the company’s new internal AI tool, MeshClaw, to create extraneous AI agents—not to increase productivity, but to drive up AI activity.

    Every time I see "not... but..." I suspect an AI article. Not sure if this is the case here.

  • mattas 1 day ago
    Waiting for the YC startup in the next batch that provides tokenmaxxing-as-a-service.
  • tyleo 1 day ago
    This is foolish. High token use is associated with worse output. If you fill your model's context you are going to use a lot more tokens, but the labs literally put out charts of how the models degrade at high context use.

    This is analogous to measuring productivity by LoC output.

    • Insanity 1 day ago
      High token usage does not mean high token usage in the same session / context window. But yeah, context rot hits hard, I find that with Codex/GPT5.4 after about 50% context window usage it's hard to get anything useful out of it on a moderately sized codebase.
    • drivingmenuts 1 day ago
      > This is analogous to measuring productivity by LoC output

      True, but it looks like productivity to people whose own productivity is measured by how busy their subordinates appear to be.

  • spprashant 1 day ago
    Good old Goodhart's law. https://xkcd.com/2899/
  • soorya3 1 day ago
    Feature Request: borrow and burn your team's tokens so the whole project looks green and you can earn your performance bonus.
  • chris_engel 1 day ago
    I don't know... this works out until someone approaches you and says: well, we see you are using LOTS of tokens, so you must be incredibly productive. Please show your results.
    • Macha 1 day ago
      The type of leadership judging employee performance by token burn is usually doing it because they don’t have a clue how to judge performance, so they’re just taking what they read on LinkedIn or at their local CTO roundtable about "more AI = more better" and turning it into a simple, single-number metric.
  • manesioz 1 day ago
    Token-driven development
  • linsomniac 1 day ago
    Counterpoint: I've been "burning" a lot of tokens for the past year running experiments, not all of which have come to fruition. For example, I used around 15 hours of API-equivalent usage building a DocuSign-like service which we aren't likely to deploy to users. However, those experiments have definitely educated me on what and where and how to use the tooling.

    Like I tell my kids: If every experiment you do succeeds, you aren't trying hard enough.

  • 8note 1 day ago
    amazon needs to get the api gateway payload size limit fixed if they really want people to expand their bedrock use.

    that, and make sure the tool teams are actually treating amazon internal as real customers.

    it's hard to stay excited about the tools when they can be down for a week because kiro launched.

  • brightball 1 day ago
    Goodhart's Law in effect right there.
  • par 1 day ago
    Narrator: “it wasn’t just Amazon”
  • jzer0cool 20 hours ago
    Better metric than lines of code added
  • graphememes 1 day ago
    can't come home today babe im tokenmaxxing
  • throwmeaway876 1 day ago
    What's the root cause of these ridiculous decisions being taken at tech corporations? Constantly, they fall into fads like these that everyone with a brain knows make no sense but still many companies decide to follow them. For example: RTO -> what's the point of this shit? we never knew for sure but higher ups at most tech companies suddenly decided that RTO was the way to go forward despite all the downsides. Another example: DEI policies, some of them were very non-sensical.

    I believe there has to be some downward pressure on these executives to take these decisions but I would like to know where it's coming from exactly and what's the logic behind them. Is it some big institution like Blackrock which has leverage on many of these companies? That's always been my bet but I never knew for sure.

    • pyrolistical 1 day ago
      Crappy managers don’t know (or actively avoid) how to measure business value from individuals. So they need you to be in the office so they can physically see if you are putting in the effort.

      Tokens is just yet another proxy for business value.

      The problem they face is that if everybody is judged by business value in dollars, crappy managers are the first to go

  • 421986 1 day ago
    New proposed corporate slogan: "Tokens must roll for victory!"

    The original (third reich): "Wheels must roll for victory!"

    It will end in the same manner.

  • general1465 1 day ago
    Vicious cycle right here. Making up tasks to burn tokens -> Hey people love to use AI -> More data centers built -> You now have to make up more tasks to burn more tokens.
  • habosa 1 day ago
    They invented a product that has no possible cost control: you don't know how much you've used until you've used it. And then we somehow made it a virtue to use as much as possible. I can't think of a more effective money printing factory.

    I wonder when we'll see our first "My startup went bankrupt on AI use" post. Amazon is being dumb but at least they can afford it.

  • 2OEH8eoCRo0 1 day ago
    Use Vim or you're fired!
  • kittikitti 1 day ago
    There are some secret random seeds that will prevent the end token and just keep generating forever. This will ruin your hardware though.
  • rvz 1 day ago
    When a measure becomes a target....
  • gofreddygo 1 day ago
    Long live Goodhart!
  • insane_dreamer 23 hours ago
    I use CC almost daily, and I have a Max subscription and have never hit a hard cap. I spend $100/month.

    How are people burning through hundreds/thousands of $ of tokens a week/month?

    What am I doing wrong.

  • freakynit 1 day ago
    "When a metric becomes the target, it ceases to be a good measure".
  • shmerl 1 day ago
    Dumb bureaucracy with dumb requirements will be met with corresponding response.
  • xyzal 1 day ago
    Love it. This needs to become a new trend, and price per token can't rise soon enough.
  • jorblumesea 1 day ago
    Has anyone actually seen true business lift from agents, or is this one of those "do stupid things faster" situations?
    • 121789 1 day ago
      I think it's mixed. I have seen people with really good use cases and the opposite. It feels like the AWS/GCP situation all over again. Step 1: "this is amazing tech we need to leverage it immediately, use it as much as you can" Step 2: "oh shit this is getting expensive and I'm not sure of the ROI". We are approaching step 2
    • untrust 15 hours ago
      [dead]
  • devmor 1 day ago
    I don’t even understand the point of making up tasks. Surely there’s some moonshot frustration project in your workday you could have an agent plugging away at, even if it’s unsuccessful.
  • blindriver 1 day ago
    This is what I do. I tell AI to go through every file in my project, identify up to 10 bugs per file, and then write a markdown file named after the file plus the suffix "bugfix". This takes about 2 hours. Then I delete all the files with the "bugfix" suffix and do it again.
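    The cycle described above can be sketched roughly as follows. `find_bugs` is a hypothetical placeholder for the token-metered model call (stubbed here so the script runs anywhere), not a real API:

```python
#!/usr/bin/env python3
"""Sketch of a token-burning cycle: ask a model for up to 10 'bugs' per
source file, write a <name>.bugfix.md report beside it, delete everything,
then do it all again."""
import pathlib


def find_bugs(source: str) -> str:
    # Hypothetical placeholder for the metered model call that actually
    # burns the tokens; returns a fake 10-item bug list.
    return "\n".join(f"{i}. Possible bug: TODO" for i in range(1, 11))


def burn_cycle(project: str) -> list:
    """One pass: write a .bugfix.md report next to every .py file."""
    reports = []
    for src in pathlib.Path(project).rglob("*.py"):
        report = src.parent / (src.stem + ".bugfix.md")
        report.write_text(find_bugs(src.read_text()))
        reports.append(report)
    return reports


def burn(project: str, cycles: int) -> None:
    """Generate the reports, delete them all, and repeat."""
    for _ in range(cycles):
        for report in burn_cycle(project):
            report.unlink()  # throw the output away, then start over
```

Each pass produces only throwaway files, so the token usage dashboard climbs while the repository is left exactly as it was.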
    • drivingmenuts 1 day ago
      You should probably create an agent to make agents whose jobs are to figure out how to maximize the token usage (and one whose job is to calculate the minimum token usage, so it doesn't look like a boondoggle).
  • arian_ 1 day ago
    yesterday's front page: AI is making me dumb. today's front page: employees are making AI dumb. the circle is complete.
  • ekjhgkejhgk 1 day ago
    Happening at my company as well.

    This is an early symptom of the future devaluation of the skill of developing software. The value is going down because there is too little software development work for the number of people who can currently do it.

    • 8note 1 day ago
      amazon will never have that problem

      there's so much work available that teams try to avoid taking stuff on as much as possible.

      the bottleneck to building more is almost certainly the cross-team coordination

      likely the best place to add agents, too. an llm tpm would be a super handy tool to scale amazon productivity, rather than coding agents.

  • mactavish88 1 day ago
    Play stupid games, win stupid prizes.
  • buellerbueller 1 day ago
    This seems like AI is the new ponzi scheme.
  • Serhii-Set 1 day ago
    [dead]
  • jknoepfler 1 day ago
    If GDP is going up, we must be wealthier and more productive, right? Surely? (/s)