59 comments

  • bkryza 2 hours ago
    They have an interesting regex for detecting negative sentiment in users prompt which is then logged (explicit content): https://github.com/chatgptprojects/claude-code/blob/642c7f94...

    I guess these words are to be avoided...

    • moontear 1 hour ago
      I don't know about avoided; this basically implements the WTF-per-minute code quality measurement. When I write WTF as a response to Claude, I would actually love it if an Anthropic engineer took a look at the mess Claude has created.
      • conception 27 minutes ago
        /feedback works for that i believe
    • BoppreH 2 hours ago
      An LLM company using regexes for sentiment analysis? That's like a truck company using horses to transport parts. Weird choice.
      • floralhangnail 0 minutes ago
        Well, regex doesn't hallucinate....right?
      • stingraycharles 1 hour ago
        Because they want it to be executed quickly and cheaply without blocking the workflow? Doesn’t seem very weird to me at all.
        • _fizz_buzz_ 1 hour ago
          They probably have statistics on it and saw that certain phrases happen over and over so why waste compute on inference.
          • mycall 54 minutes ago
            The problem with regex is multi-language support, and how much the regex will bloat if you want to support even 10 languages.
            • doublesocket 11 minutes ago
              Supporting 10 different languages in regex is a drop in the ocean. The regex can be generated programmatically and you can compress regexes easily. We used to have a compressed regex that could match any placename or street name in the UK in a few MB of RAM. It was silly quick.
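(As an aside on doublesocket's point, here is a minimal sketch of generating one alternation regex programmatically from per-language word lists. All names are invented for illustration; this is not the leaked code.)

```typescript
// Illustrative sketch (names invented, not from the leak): compile one
// case-insensitive alternation from per-language word lists, so adding a
// language is a data change rather than a hand-edited pattern.
const escapeWord = (w: string) => w.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');

function buildSentimentRegex(wordLists: Record<string, string[]>): RegExp {
  const words = Object.values(wordLists).flat().map(escapeWord);
  return new RegExp(`\\b(?:${words.join('|')})\\b`, 'i');
}

const re = buildSentimentRegex({
  en: ['wtf', 'ffs'],
  de: ['mist'],
});
```

A single pass over the prompt then covers every language in the table, and the `\b` boundaries keep short entries from firing inside longer words.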
            • TeMPOraL 43 minutes ago
              We're talking about Claude Code. If you're coding and not writing or thinking in English, the agents and people reading that code will have bigger problems than a regexp missing a swear word :).
              • MetalSnake 27 minutes ago
                I talk to it in non-English, but I have rules that everything in code and documentation stays in English; only the conversation with me uses my native language. Why would that be a problem?
              • formerly_proven 7 minutes ago
                In my experience agents tend to (counterintuitively) perform better when the business language is not English / does not match the code's language. I'm assuming the increased attention mitigates the higher "cognitive" load.
            • b112 46 minutes ago
              Did you just complain about bloat, in anything using npm?
        • Foobar8568 1 hour ago
          Why do you need to do it at the client side? You are leaking so much information on the client side. And considering the speed of Claude Code, if you really want to do it on the client side, a few seconds won't be a big deal.
          • plorntus 2 minutes ago
            Depends what it's used by. If I recall, there's an `/insights` command/skill built in (or whatever you want to call it) that generates an HTML file. I believe it gives you stats on when you're frustrated with it, and (useless) suggestions on how to "use claude better".
          • matkoniecz 51 minutes ago
            > a few seconds won't be a big deal

            it is not that slow

        • orphea 1 hour ago
          It looks like it's just for logging, why does it need to block?
          • jflynn2 45 minutes ago
            Better question - why would you call an LLM (expensive in compute terms) for something that a regex can do (cheap in compute terms)

            Regex is going to be something like 10,000 times quicker than the quickest LLM call, multiply that by billions of prompts

            • orphea 6 minutes ago
              You assume this regex is doing a good job. It is not. Also you can embed a very tiny model if you really want to flag as many negatives as possible (I don't know anthropic's goal with this) - it would be quick and free.
      • codegladiator 1 hour ago
        what you are suggesting would be like a truck company using trucks to move things within the truck
        • argee 1 hour ago
          That’s what they do. Ever heard of a hand truck?
          • eadler 1 hour ago
            I never knew the name of that device.

            Thanks

            • freedomben 41 minutes ago
              Depending on the region you live in, it's also frequently called a "dolly"
          • istoleabread 1 hour ago
            Do we have a hand llm perchance?
      • blks 1 hour ago
        Because they actually want it to work 100% of the time and cost nothing.
        • orphea 1 hour ago
          Then they made it wrong. For example, "What the actual fuck?" is not getting flagged, neither is "What the *fuck*".
      • draxil 1 hour ago
        Good to have more than a hammer in your toolbox!
      • throwaw12 34 minutes ago
        because the impact of WTF might be lost in the analysis results if you rely solely on an LLM.

        Parsing WTF with regex also signals the impact and reduces the noise in the metrics.

        "Determinism > non-determinism" when you are analysing sentiment, so why not make some things more deterministic?

        A cool thing about this solution is that you can evaluate LLM sentiment accuracy against the regex-based approach and analyse discrepancies.

      • kjshsh123 32 minutes ago
        Using regex with LLMs isn't uncommon at all.
      • pfortuny 40 minutes ago
        They had the problem of sentiment analysis. They use regexes.

        You know the drill.

      • ojr 1 hour ago
        I used regexes in a similar way, but my implementation was vibecoded. Hmmm, by your analysis, Claude Code writes code by hand.
      • sumtechguy 1 hour ago
        hmm not a terrible idea (I think).

        You have a semi-expensive process, but you want to keep particular known context out of it. So you put a quick-and-dirty search just in front of the expensive process: instead of 'figure sentiment (20 seconds)', you have 'quick sentiment check (<1 sec)', then 'figure sentiment v2 (5 seconds)'. Now, if it were pure regex all the way down, then the analogy would hold up just fine.

        I could see me totally making a design choice like that.
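(sumtechguy's two-stage idea can be sketched in a few lines; everything below is hypothetical, assuming a cheap regex gate in front of an expensive classifier, and none of it is from the leak.)

```typescript
// Hypothetical two-stage sentiment pipeline: a sub-millisecond regex
// pre-check handles the obvious cases, and only the remainder pays for
// the expensive path.
const QUICK_NEGATIVE = /\b(wtf|ffs|broken)\b/i;

// Stub standing in for the slow, expensive classifier so the sketch runs.
async function expensiveSentimentModel(_prompt: string): Promise<string> {
  return 'neutral';
}

async function classifySentiment(prompt: string): Promise<string> {
  if (QUICK_NEGATIVE.test(prompt)) return 'negative'; // fast path, ~free
  return expensiveSentimentModel(prompt);             // slow path
}
```

The fast path costs essentially nothing per prompt, which is the whole appeal at billions of requests.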

      • lou1306 2 hours ago
        They're searching for multiple substrings in a single pass, regexes are the optimal solution for that.
        • noosphr 1 hour ago
          The issue isn't that regex are a solution to find a substring. The issue is that you shouldn't be looking for substrings in the first place.

          This has buttbuttin energy. Welcome to the 80s I guess.

          • 8cvor6j844qw_d6 1 hour ago
            Very likely vibe coded.

            I've seen Claude Code go with a regex approach for a similar sentiment-related task.

        • BoppreH 1 hour ago
          It's fast, but it'll miss a ton of cases. This feels like it would be better served by a prompt instruction, or an additional tiny neural network.

          And some of the entries are too short and will create false positives. It'll match the word "offset" ("ffs"), for example. EDIT: no it won't, I missed the \b. Still sounds weird to me.

          • hk__2 1 hour ago
            It’s fast and it matches 80% of the cases. There’s no point in overengineering it.
          • vharuck 1 hour ago
            The pattern only matches if both ends are word boundaries. So "diffs" won't match, but "Oh, ffs!" will. It's also why they had to use the pattern "shit(ty|tiest)" instead of just "shit".
            • BoppreH 1 hour ago
              You're right, I missed the \b's. Thanks for the correction.
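(The `\b` behavior discussed above is easy to verify; the patterns below mimic the shape described in the thread and are not copied from the leak.)

```typescript
// \b matches the boundary between a word character and a non-word
// character, so the short entries only fire as standalone words.
const ffs = /\bffs\b/i;
console.log(ffs.test('Oh, ffs!')); // true: space and "!" are boundaries
console.log(ffs.test('diffs'));    // false: "ffs" sits inside a word

// With boundaries, plain /\bshit\b/ would miss "shitty", which is why
// the variants have to be enumerated explicitly.
console.log(/\bshit\b/i.test('shitty'));            // false
console.log(/\bshit(ty|tiest)\b/i.test('shitty'));  // true
```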
    • ozim 32 minutes ago
      There is no „stupid”. I often write „(this is stupid|are you stupid) fix this”.

      And Claude had „user is frustrated” in its chain of thought, so I wrote to it that I am not frustrated, just testing prompt optimization, where acting like one is frustrated should yield better results.

    • 1970-01-01 20 minutes ago
      Hmm.. I flag things as 'broken' often and I've been asked to rate my sessions almost daily. Now I see why.
    • alex_duf 30 minutes ago
      Everyone here is commenting on how odd it looks to use a regexp for sentiment analysis, but it depends what they're trying to do.

      It could be used as feedback when they run an A/B test: they can compare which version of the model gets more insults than the other. It doesn't matter if the list is exhaustive or even sane; what matters is how one version compares to the other.

      Perfect? No. A good and cheap indicator? Maybe.

    • francisofascii 29 minutes ago
      Interesting that expletives and words that are more benign like "frustrating" are all classified the same.
    • speedgoose 34 minutes ago
      I guess using French words is safe for now.
    • sreekanth850 1 hour ago
      Glad the abusive words on my list are not in there. But it's surprising that they use regex for sentiment.
    • stainablesteel 8 minutes ago
      i dislike LLMs going down that road, i don't want to be punished for being mean to the clanker
    • nodja 2 hours ago
      If anyone at anthropic is reading this and wants more logs from me add jfc.
    • raihansaputra 2 hours ago
      i wish that's just for their logging/alerts. i definitely gauge the model's performance by how many of those words i type when i'm frustrated driving claude code.
    • samuelknight 1 hour ago
      Ridiculous string comparisons on long chains of logic are a hallmark of vibe-coding.
      • dijit 59 minutes ago
        It's actually pretty common for old sysadmin code too..

        You could always tell when a sysadmin started hacking up some software by the if-else nesting chains.

      • TeMPOraL 40 minutes ago
        Nah, it's a hallmark of your average codebase in pre-LLM era.
    • dheerajmp 1 hour ago
      Yeah, this is crazy
    • smef 1 hour ago
      so frustrating..
  • cedws 2 hours ago

        ANTI_DISTILLATION_CC
        
        This is Anthropic's anti-distillation defence baked into Claude Code. When enabled, it injects anti_distillation: ['fake_tools'] into every API request, which causes the server to silently slip decoy tool definitions into the model's system prompt. The goal: if someone is scraping Claude Code's API traffic to train a competing model, the poisoned training data makes that distillation attempt less useful.
    • nialse 11 minutes ago
      Paranoia. And also ironic considering their base LLM is a distillation of the web and books etc etc.
      • petcat 2 minutes ago
        They stole everything and now they want to close the gates behind them.

        "I got the loot, Steve!"

        I feel like the distillation stuff will end up in court if they try to sue an American company about it. We'll see what a judge says.

      • spiderfarmer 2 minutes ago
        That isn't irony, it's hypocrisy.
  • treexs 3 hours ago
    The big loss for Anthropic here is how it reveals their product roadmap via feature flags. A big one is their unreleased "assistant mode" with code name kairos.

    Just point your agent at this codebase and ask it to find things and you'll find a whole treasure trove of info.

    Edit: some other interesting unreleased/hidden features

    - The Buddy System: Tamagotchi-style companion creature system with ASCII art sprites

    - Undercover mode: Strips ALL Anthropic internal info from commits/PRs for employees on open source contributions

    • BoppreH 1 hour ago
      Undercover mode also pretends to be human, which I'm less ok with:

      https://github.com/chatgptprojects/claude-code/blob/642c7f94...

      • mrlnstk 1 hour ago
        But will this be released as a feature? For me it seems like it's an Anthropic internal tool to secretly contribute to public repositories to test new models etc.
        • BoppreH 1 hour ago
          I don't care who is using it, I don't want LLMs pretending to be humans in public repos. Anthropic just lost some points with me for this one.

          EDIT: I just realized this might be used without publishing the changes, for internal evaluation only as you mentioned. That would be a lot better.

      • 0x3f 1 hour ago
        You'll never win this battle, so why waste feelings and energy on it? That's where the internet is headed. There's no magical human verification technology coming to save us.
        • RockRobotRock 51 minutes ago
          >There's no magical human verification technology coming to save us.

          Except for the one Sam Altman is building.

        • matkoniecz 43 minutes ago
          Even if it is impossible to win, I am still feeling bad about it.

          And at this point it is more about how much of the space will remain usable and how much will be bot-controlled wasteland. I'd prefer the spaces important to me to survive.

      • sandos 1 hour ago
        This is my pet peeve with LLMs: they almost always fail to write like a normal human would, mentioning logs or other meta-things which are not at all interesting.
        • sgc 16 minutes ago
          I had a problem to fix and one not only mentioned these "logs", but went on about things like "config", "tests", and a bunch of other unimportant nonsense words. It even went on to point me towards the "manual". Totally robotic monstrosity.
      • shaky-carrousel 1 hour ago
        > Write commit messages as a human developer would — describe only what the code change does.

        The undercover mode prompt was generated using AI.

        • kingstnap 32 minutes ago
          All these companies use AIs for writing these prompts.

          But AI aren't actually very good at writing prompts imo. Like they are superficially good in that they seem to produce lots of vaguely accurate and specific text. And you would hope the specificity would mean it's good.

          But they sort of don't capture intent very well. Nor do they seem to understand the failure modes of AI. The "-- describe only what the code change does" is a good example. It is specific, but it also distinctly reads like someone who doesn't actually understand what makes AI writing obvious.

          If you compare that vs human written prose about what makes AI writing feel AI you would see the difference. https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

          The above actually feels like text from someone who has read and understands what makes AI writing AI.

      • vips7L 1 hour ago
        That whole “feature” is vile.
    • TIPSIO 32 minutes ago
      If this is true, my old personal-agent Claude Code setup that I open sourced last month will finally be obsolete (1 month, lol):

      https://clappie.ai

      - Telegram Integration => CC Dispatch

      - Crons => CC Tasks

      - Animated ASCII Dog => CC Buddy

    • denimnerd42 16 minutes ago
      all these flags are findable by pointing claude at the binary and asking it to find feature flags.
    • charcircuit 12 minutes ago
      People already can look at the source without this leak. People have had hacked builds force enabling feature flags for a long time.
    • avaer 3 hours ago
      (spoiler alert)

      Buddy system is this year's April Fool's joke, you roll your own gacha pet that you get to keep. There are legendary pulls.

      They expect it to go viral on Twitter so they are staggering the reveals.

      • cmontella 57 minutes ago
        lol that's funny, I have been working seriously [1] on a feature like this after first writing about it jokingly [2] earlier this year.

        The joke was the assistant is a cat who is constantly sabotaging you, and you have to take care of it like a gacha pet.

        The seriousness though is that actually, disembodied intelligences are weird, so giving them a face and a body and emotions is a natural thing, and we already see that with various AI mascots and characters coming into existence.

        [1]: serious: https://github.com/mech-lang/mech/releases/tag/v0.3.1-beta

        [2]: joke: https://github.com/cmontella/purrtran

      • JohnLocke4 2 hours ago
        You heard it here first
      • ares623 2 hours ago
        So close to April Fool's too. I'm sure it will still be a surprise for a majority of their users.
    • ben8bit 3 hours ago
      [dead]
  • kschiffer 1 hour ago
    • Gormo 52 minutes ago
      I'm glad "reticulating" is in there. Just need to make sure "splines" is in the nouns list!
      • avaer 46 minutes ago
        Relieved to know I'm not the only one who grepped for that. Thank you for making me feel sane, friend.
    • moontear 50 minutes ago
      What's going on with the issues in that repo? https://github.com/instructkr/claude-code/issues
      • avaer 30 minutes ago
        It seems human. It taught me 合影, which seems to be Chinese slang for just wanting to be in the comments. Probably not a coincidence that it's after work time in China.

        Really interesting to see Github turn into 4chan for a minute, like GH anons rolling for trips.

      • g947o 18 minutes ago
        There have been massive GitHub issue spams recently, including in Microsoft's WSL repository.

        https://github.com/microsoft/WSL/issues/40028

      • proactivesvcs 19 minutes ago
        I saw this on restic's main repository the other day.
      • Quarrel 26 minutes ago
        trying to get github to nuke the repo? at a guess.

        certainly nothing friendly.

      • tommit 37 minutes ago
        oh wow, there are like 10 opened every minute. seems spam-y
    • world2vec 18 minutes ago
      Did they remove that in some very recent commit?
    • bonoboTP 1 hour ago
      It's not hard to find them; they are in clear text in the binary. You can search for known ones with grep and find the rest nearby. You could even replace them in place (but now it's configurable).
    • spoiler 1 hour ago
      Random aside: I've seen a 2015 game be accused of AI slop on Steam because it used a similar concept... And mind you, there's probably thousands of games that do this.

      First it was punctuation and grammar, then linguistic coherence, and now it's tiny bits of whimsy that are falling victim to AI accusations. Good fucking grief

      • moron4hire 1 hour ago
        To me, this is a sign of just how much regular people do not want AI. This is worse than crypto and metaverse before it. Crypto, people could ignore and the dumb ape pictures helped you figure out who to avoid. Metaverse, some folks even still enjoyed VR and AR without the digital real estate bullshit. And neither got shoved down your throat in everyday, mundane things like writing a paper in Word or trying to deal with your auto mechanic.

        But AI is causing such visceral reactions that it's bleeding into other areas. People are so averse to AI they don't mind a few false positives.

        • bonoboTP 1 hour ago
          It's like how people resisted CGI back in the day. What people dislike is low quality. There is a loud subset who are against it on principle, just as there are people who insist on analog music, but regular people are much more practical; they just don't post about it all day on the internet.
          • gunsle 12 minutes ago
            I think literally everyone could agree CGI has been detrimental to the quality of films.
          • trial3 18 minutes ago
            perhaps one important detail is that cassette tape guys and Lucasfilm aren’t/weren’t demanding a complete and total restructuring of the economy and society
          • Gigachad 1 hour ago
            Not really. The scale is entirely different. I think less of someone as a person if they send me AI slop.
        • sunaookami 1 hour ago
          No, there is a very loud minority of users who are very anti-AI, who hate on anything even remotely connected to AI and let everyone know with false claims. See the game Expedition 33, for example.
          • neutronicus 15 minutes ago
            Especially true in gaming communities.

            IMO it's a combination of long-running paranoia about cost-cutting and quality, and a sort of performative allegiance to artists working in the industry.

    • Unfrozen0045 32 minutes ago
      [dead]
  • avaer 3 hours ago
    Would be interesting to run this through Malus [1] or literally just Claude Code and get open source Claude Code out of it.

    I jest, but in a world where these models have been trained on gigatons of open source I don't even see the moral problem. IANAL, don't actually do this.

    https://malus.sh/

    • rvnx 1 hour ago
      Malus is not a real project btw, it's a parody:

      “Let's end open source together with this one simple trick”

      https://pretalx.fosdem.org/fosdem-2026/talk/SUVS7G/feedback/

      Malus translates code into text, and text back into code.

      It gives the illusion of a clean-room implementation, which some companies abuse.

      The irony is that ChatGPT/Claude answers are all actually directly derived from open-source code, so...

    • sumeno 22 minutes ago
      No real reason to do that, they say Claude Code is written by Claude, which means it has no copyright. Just use the code directly
    • NitpickLawyer 3 hours ago
      The problem is the oauth and their stance on bypassing that. You'd want to use your subscription, and they probably can detect that and ban users. They hold all the power there.
      • avaer 3 hours ago
        You'd be playing cat and mouse like yt-dlp, but there's probably more value to this code than just a temporary way to milk claude subscriptions.
        • esperent 56 minutes ago
          If you're using a claude subscription you'd just use claude code.

          The real value here will be in using other cheap models with the cc harness.

        • stingraycharles 1 hour ago
          I don’t think that’s a good comparison. There isn’t anything preventing Anthropic from, say, detecting whether the user is using the exact same system prompt and tool definition as Claude Code and call it a day. Will make developing other apps nearly impossible.

          It’s a dynamic, subscription based service, not a static asset like a video.

      • woleium 3 hours ago
        Just use one of the distilled claude clones instead https://x.com/0xsero/status/2038021723719688266?s=46
        • echelon 2 hours ago
          "Approach Sonnet"...

          So not even close to Opus, then?

          These are a year behind, if not more. And they're probably clunky to use.

      • pkaeding 2 hours ago
        Could you use claude via aws bedrock?
    • dahcryn 1 hour ago
      I love the irony of seeing the contribution counter at 0.

      Who'd have thought: the audience that doesn't want to give back to the open-source community, giving 0 contributions...

      • larodi 1 hour ago
        It reads attribution really?
    • aizk 41 minutes ago
      This has happened before. It was called anon kode.
    • kelnos 1 hour ago
      Oh god, I was so close to believing Malus was a real product and not satire.
      • magistr4te 1 hour ago
        It is a real product. They take real payments and deliver on what's promised. Not sure if it's an attempt to subvert criticism by using satirical language, or if they truly have so little respect for the open source community.
      • otikik 1 hour ago
    • TIPSIO 25 minutes ago
      Eh, the value is the unlimited Max plan which they have rightfully banned from third-party use.

      People simply want Opus without fear of billing nightmare.

      That’s like 99% of it.

    • gosub100 59 minutes ago
      What are they worried about? Someone taking the company's job? Hehe
  • mohsen1 2 hours ago
    src/cli/print.ts

    This is the single worst function in the codebase by every metric:

      - 3,167 lines long (the file itself is 5,594 lines)
      - 12 levels of nesting at its deepest
      - ~486 branch points of cyclomatic complexity
      - 12 parameters + an options object with 16 sub-properties
      - Defines 21 inner functions and closures
      - Handles: agent run loop, SIGINT, rate-limits, AWS auth, MCP lifecycle, plugin install/refresh, worktree bridging, team-lead polling (while(true) inside), control message dispatch (dozens of types), model switching, turn interruption recovery, and more
    
    This should be at minimum 8–10 separate modules.
    • mohsen1 42 minutes ago
      here's another gem. src/ink/termio/osc.ts:192–210

        void execFileNoThrow('wl-copy', [], opts).then(r => {
          if (r.code === 0) { linuxCopy = 'wl-copy'; return }
          void execFileNoThrow('xclip', ...).then(r2 => {
            if (r2.code === 0) { linuxCopy = 'xclip'; return }
            void execFileNoThrow('xsel', ...).then(r3 => {
              linuxCopy = r3.code === 0 ? 'xsel' : null
            })
          })
        })
      
      
      are we doing async or not?
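(For comparison, the same fallback chain written with await; `execFileNoThrow` is stubbed below, with its signature guessed from the quoted snippet rather than taken from the leak.)

```typescript
type ExecResult = { code: number };

// Stub with the shape implied by the snippet (resolves with an exit code
// instead of rejecting); invented here so the sketch is runnable.
async function execFileNoThrow(
  cmd: string, _args: string[], _opts?: object,
): Promise<ExecResult> {
  return { code: cmd === 'xclip' ? 0 : 1 }; // pretend only xclip exists
}

// The nested .then() chain, flattened: try each clipboard tool in order
// and remember the first one that exits 0.
async function detectLinuxCopy(opts?: object): Promise<string | null> {
  for (const cmd of ['wl-copy', 'xclip', 'xsel']) {
    const r = await execFileNoThrow(cmd, [], opts);
    if (r.code === 0) return cmd;
  }
  return null;
}
```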
    • siruwastaken 10 minutes ago
      How is it that a AI coding agent that is supposedly _so great at coding_ is running on this kind of slop behind the scenes. /s
    • phtrivier 1 hour ago
      Yes, if it was made for human comprehension or maintenance.

      If it's entirely generated / consumed / edited by an LLM, arguably the most important metric is... test coverage, and that's it?

      • grey-area 1 hour ago
        LLMs are so so far away from being able to independently work on a large codebase, and why would they not benefit from modularity and clarity too?
      • mdavid626 47 minutes ago
        Oh boy, you couldn't be more wrong. If anything, LLMs need MORE readable code, not less. Do you want to burn all your money on tokens?
      • konart 1 hour ago
        Can't we have LLM-generated code be more human-maintainable?
      • mrbungie 46 minutes ago
        Can't wait to have LLM-generated physical objects that explode in your face and that no engineer can fix.
      • Bayko 53 minutes ago
        Yeah, I honestly don't understand his comment. Is it bad code? Pre-2026? Sure. In 2026? Nope. Is it going to be a headache for some poor person on call? Yes. But then again, are you "supposed" to go through every single line in 2026? Again, no. I hate it, but the world is changing, and until the bubble pops this is the new norm.
  • hk__2 1 hour ago
    For a combo with another HN homepage story, Claude Code uses… Axios: https://x.com/icanvardar/status/2038917942314778889?s=20

    https://news.ycombinator.com/item?id=47582220

    • ankaz 19 minutes ago
      [dead]
  • lukan 2 hours ago
    Neat. Coincidentally, I recently asked Claude about the Claude CLI, whether it is possible to patch some annoying things (like not being able to expand Ctrl+O more than once, so some lines can never be seen, and in general to have more control over the context), and it happily proclaimed that it is open source and that it could do it ... and started doing something. Then I checked a bit and saw: nope, not open source. And by the wording of the ToS, patching it might break some of its terms. But Claude said "no worries", it only breaks the ToS technically. So by saving that conversation I would have some defense if I started messing with it, but I felt a bit uneasy and stopped the experiment. Also, Claude got into a loop, but if I pointed that out, it might work, I suppose.
    • mikrotikker 2 hours ago
      I think you do not need to feel uneasy at all. It is your computer, and it is your memory space that the data is stored and operating in; you can do whatever you like to the bits in that space. I would encourage you to continue that experiment.
      • lukan 2 hours ago
        Well, the thing is, I do not just use my computer; I connect to their computers, and I do not want to get banned. I suppose simple UI things like expanding source files won't change a thing, but the more interesting things, like editing the context, do carry that risk; I have no idea if they look for it or enforce it. Their position is that if I want full control, I need to use the API directly (way more expensive), and what I want to do basically circumvents that.
        • mattmanser 59 minutes ago
          It doesn't matter what defence you can think of, if they want to ban you, they'll ban you.

          They won't even read your defence.

          • lukan 54 minutes ago
            I know. All I could do in that case is write a blog post, "Claude banned me for following Claude's instructions!", and hope it goes viral.
      • singularity2001 2 hours ago
        You are not allowed to use the assistance of Claude to manufacture hacks and bombs on your computer
  • Painsawman123 1 hour ago
    Really surprising how many people are downplaying this leak! "Google and OpenAI have already open sourced their agents, so this leak isn't that relevant." What Google and OpenAI have open sourced is their agents SDK, a toolkit, not the secret sauce of how their flagship agents are wired under the hood! Expect the takedown hammer on the tweet, the R2 link, and any public repos soon.
  • zurfer 20 minutes ago
    too much pressure. the author deleted the real source code: https://github.com/instructkr/claude-code/commit/7c3c5f7eb96...
  • Squarex 3 hours ago
    Codex and gemini cli are open source already. And plenty of other agents. I don't think there is any moat in claude code source.
    • rafram 2 hours ago
      Well, Claude does boast an absolutely cursed (and very buggy) React-based TUI renderer that I think the others lack! What if someone steals it and builds their own buggy TUI app?
      • loveparade 2 hours ago
        Your favorite LLM is great at building a super buggy renderer, so that's no longer a moat
  • dheerajmp 3 hours ago
    • zhisme 2 hours ago
      https://github.com/instructkr/claude-code

      this one has more stars and is more popular

      • 101008 5 minutes ago
        which has already been deleted
      • moontear 50 minutes ago
        Popular, yes... but have you seen the issues? SOMETHING is going on in that repo: https://github.com/instructkr/claude-code/issues
        • nubinetwork 37 minutes ago
          Looks like mostly spam making fun of the code leak.
      • treexs 2 hours ago
        won't they just try to DMCA or take these down, especially if they're more popular?
        • paxys 1 hour ago
          Which is why you should clone it right now
        • panny 2 hours ago
            They can't. AI-generated code cannot be copyrighted. They've stated that Claude Code is built with Claude Code. You can take this and start your own Claude Code project now if you like. There's zero copyright protection on this.
          • krlx 1 hour ago
            Given that from 2026 onwards most code is going to be computer-generated, doesn't that open some interesting implications?
          • 0x3f 1 hour ago
            I'm sure it's not _entirely_ built that way, and practically speaking, GitHub will almost certainly take it down rather than doing some kind of deep research about which code is which.
            • panny 55 minutes ago
              That's fine. File a false DMCA claim and that's felony perjury :) They know for a fact that there is no copyright on AI-generated code; the courts have affirmed this repeatedly.
          • nananana9 37 minutes ago
            Try not to be overly confident about things that even the experts in the field (copyright lawyers) are uncertain about.

            There are no major lawsuits about this yet; the general consensus is that even under current regulations it's a grey area. And even if you turn out to be right, and let's say 99% of this code is AI-generated, you're still breaking the law by using the other 1%, and good luck proving in court what was human-written and what wasn't (especially when sued by the company that literally has the LLM logs).

  • vbezhenar 3 hours ago
    LoL! https://news.ycombinator.com/item?id=30337690

    Not exactly this, but close.

    • ivanjermakov 2 hours ago
      > It exposes all your frontend source code for everyone

      I hope it's common knowledge that _any_ client-side JavaScript is exposed to everyone. Perhaps minified, but still easily reverse-engineerable.

      • Monotoko 2 hours ago
        Very easily these days. Even when minified code is difficult for me to reverse engineer, Claude has a very easy time finding exactly what to patch to fix something.
  • mesmertech 2 hours ago
    Was searching for the rumored Mythos/Capybara release, and what even is this file? https://github.com/chatgptprojects/claude-code/blob/642c7f94...
  • karimf 3 hours ago
    Is there anything special here vs. OpenCode or Codex?

    There were/are a lot of discussions on how the harness can affect the output.

    • simonklee 1 hour ago
      Not really, except that they have a bunch of weird things in the source code and people like to make fun of it. OpenCode and Codex generally don't have this, since they have been open-source projects from the get-go.

      (I work on OpenCode)

  • agile-gift0262 6 minutes ago
    time to remove its copyright through malus.sh and release that source under MIT
  • bob1029 3 hours ago
    Is this significant?

    Copilot on OAI reveals everything meaningful about its functionality if you use a custom model config via the API. All you need to do is inspect the logs to see the prompts they're using. So far no one seems to care about this "loophole", presumably because the only thing that matters is for you to consume as many tokens per unit time as possible.

    The source code of the slot machine is not relevant to the casino manager. He only cares that the customer is using it.

  • dev213 16 minutes ago
    Undercover mode is pretty interesting and potentially problematic: https://github.com/sanbuphy/claude-code-source-code/blob/mai...
  • dhruv3006 2 hours ago
    I have a feeling this is like llama.

    The original Llama models leaked from Meta. Instead of fighting it, they decided to publish them officially. It was a real boost to the open-source/open-weights model movement, and they led it for a while after that.

    It would be interesting to see that same thing with CC, but I doubt it'll ever happen.

    • jkukul 1 hour ago
      Yes, I also doubt it'll ever happen considering how hard Anthropic went after Clawdbot to force its renaming.
  • sudo_man 6 minutes ago
    How did this leak happen?
    • sbarre 4 minutes ago
      It's literally explained in the tweet, in the repo and in this thread in many places.
  • cbracketdash 2 hours ago
    Once the USA wakes up, this will be insane news
    • echelon 2 hours ago
      What's special about Claude Code? Isn't Opus the real magic?

      Surely there's nothing here of value compared to the weights except for UX and orchestration?

      Couldn't this have just been decompiled anyhow?

      • derwiki 20 minutes ago
        I think pi has stolen the top honors, but people consider the Claude Code harness very good (at least, better than Cursor).
        • sbarre 5 minutes ago
          Pi is the best choice for experts and power users, which is not most people.

          Claude Code is still the dominant (I didn't say best) agentic harness by a wide margin I think.

  • VadimPR 50 minutes ago
    The Anthropic team does an excellent job of speeding up Claude Code when it slows down, but for the sake of RAM and system resources, it would be nice to see it rewritten in a more performant framework!

    And now, with Claude on a Ralph loop, you can.

  • gman83 1 hour ago
    Gemini CLI and Codex are open source anyway; I doubt there was much of a moat there. The cool kids are using things like https://pi.dev/ these days.
  • napo 17 minutes ago
    The autoDream feature looks interesting.
  • Sathwickp 1 hour ago
    They do have a couple of interesting features that haven't been publicly heard of yet:

    Like KAIROS, which seems to be an inbuilt AI assistant, and Ultraplan, which seems to enable remote planning workflows, where a separate environment explores a problem, generates a plan, and then pauses for user approval before execution.

  • sbochins 1 hour ago
    Does this matter? I think every other agent CLI is open source. I don't even know why Anthropic insists on keeping theirs closed source.
  • Diablo556 2 hours ago
    Haha... Anthropic needs to hire a fixer from vibecodefixers.com to fix all that messy code... lol
    • derwiki 19 minutes ago
      I don’t think they can hear you over the billions of dollars they are generating, and definitely not over them redefining what SWE means.
  • temp7000 11 minutes ago
    There are some rollout flags (via GrowthBook, Tengu, Statsig), though I'm not sure if they're used for A/B testing or not.
  • tekacs 1 hour ago
    In the app, it now reads:

    > current: 2.1.88 · latest: 2.1.87

    Which makes me think they pulled it - although it still shows up as 2.1.88 on npmjs for now (cached?).

  • mapcars 3 hours ago
    Are there any interesting/unique features present in it that aren't in the alternatives? My understanding is that it's just a client for the powerful LLM.
    • swimmingbrain 3 hours ago
      From the directory listing having a cost-tracker.ts, upstreamproxy, coordinator, buddy and a full vim directory, it doesn't look like just an API client to me.
  • hemantkamalakar 23 minutes ago
    Today being March 31st, is this a genuine issue or just perfectly timed April Fools noise? What do you think?
  • q3k 3 hours ago
    The code looks, at a glance, as bad as you expect.
    • tokioyoyo 2 hours ago
      It really doesn't matter anymore. I'm saying this as a person who used to care about it. It does what it's generally supposed to do, and it has users. Those are the two things that matter in this day and age.
      • samhh 2 hours ago
        It may be economically effective but such heartless, buggy software is a drain to use. I care about that delta, and yes this can be extrapolated to other industries.
        • tokioyoyo 2 hours ago
          Genuinely I have no idea what you mean by buggy. Sure there are some problems here and there, but my personal threshold for “buggy” is much higher. I guess, for a lot of other people as well, given the uptake and usage.
          • mattmanser 49 minutes ago
            Two weeks ago typing became super laggy. It was totally unusable.

            Last week I had to reinstall Claude Desktop because every time I opened it, it just hung.

            This week I am sometimes opening it and getting a blank screen. It eventually works after I open it a few times.

            And of course there's people complaining that somehow they're blowing their 5 hour token budget in 5 messages.

            It's really buggy.

            There's only so long their model will be their advantage before they all become very similar, and then the difference will be how reliable the tools are.

            Right now the Claude Code code quality seems extremely low.

      • ghywertelling 23 minutes ago
        Do compilers care about making their generated assembly look good? We will soon reach that state with all production code. LLMs will be the compiler, and today's human-written code will be replaced by LLM-generated "assembly", kinda sorta human-readable.
      • FiberBundle 2 hours ago
        This is the dumbest take there is about vibe coding: that managing complexity in a codebase doesn't matter anymore. I can't imagine a competent engineer coming to that conclusion. There is actually some evidence that coding agents struggle the same way humans do as the complexity of the system increases [0].

        [0] https://arxiv.org/abs/2603.24755

        • tokioyoyo 2 hours ago
          I agree, there is obviously "complete burning trash" and then there's this. The Anthropic team has a system going where they can still extend the codebase. When the time comes, I assume they'd be able to rewrite it, since the feature set would be more solid by then, assuming they've been adding tests as well.

          Reverse-engineering through tests has never been easier, which could collapse the complexity and clean up the code.

        • maplethorpe 53 minutes ago
          Well, what is Anthropic doing differently to deal with this issue? Apparently they don't write any of their own code anymore, and they're doing fine.
      • hrmtst93837 2 hours ago
        Users stick around on inertia until a failure costs them money or face. A leaked map file won't sink a tool on its own, but it does strip away the story that you can ship sloppy JS build output into prod and still ask people to trust your security model.

        'It works' is a low bar. If that's the bar you set, you are one bad incident away from finding out who stayed for the product and who stayed because switching felt annoying.

        • tokioyoyo 2 hours ago
          "It works and it's doing what it's supposed to do" encompasses the idea that it's also not doing what it's not supposed to do.

          Also, "one bad incident away" never works in practice. The last two decades have shown that people will use the tools that get the job done, no matter what kind of privacy leaks or destructive things those tools have done to the user.

    • breppp 2 hours ago
      Honestly when using it, it feels vibe coded to the bone, together with the matching weird UI footgun quirks
      • tokioyoyo 2 hours ago
        The team has been extremely open about how it has been vibe coded from day 1. Given the insane pace of releases, I don't think that would have been possible otherwise.
        • catlifeonmars 1 hour ago
          It’s not a particularly sophisticated tool. I’d put my money on one experienced engineer being able to achieve the same functionality in 3-6 months (even without the vibe coding).
          • derwiki 17 minutes ago
            Kinda reads like the Dropbox launch thread
        • breppp 2 hours ago
          I don't really care about the code being an unmaintainable mess, but as a user there are some odd choices in the flow which feel like they could benefit from human judgement.
    • loevborg 3 hours ago
      Can you give an example? Looks fairly decent to me
      • Insensitivity 3 hours ago
        The "useCanUseTool.tsx" hook is definitely something I would hate seeing in any codebase I come across.

        It's extremely nested; it's basically an if-statement soup.

        `useTypeahead.tsx` is even worse: extremely nested, with a ton of if/else statements. I doubt you'd look at it and think this is sane code.

        • Overpower0416 2 hours ago

            export function extractSearchToken(completionToken: {
              token: string;
              isQuoted?: boolean;
            }): string {
              if (completionToken.isQuoted) {
                // Remove @" prefix and optional closing "
                return completionToken.token.slice(2).replace(/"$/, '');
              } else if (completionToken.token.startsWith('@')) {
                return completionToken.token.substring(1);
              } else {
                return completionToken.token;
              }
            }
          
          Why even use else if with return...
          • kelnos 1 hour ago
            I always write code like that. I don't like early returns. This approximates `if` statements being an expression that returns something.
            • whilenot-dev 1 hour ago
              > This approximates `if` statements being an expression that returns something.

              Do you care to elaborate? "if (...) return ...;" looks closer to an expression to me:

                export function extractSearchToken(completionToken: { token: string; isQuoted?: boolean }): string {
                  if (completionToken.isQuoted) return completionToken.token.slice(2).replace(/"$/, '');
              
                  if (completionToken.token.startsWith('@')) return completionToken.token.substring(1);
              
                  return completionToken.token;
                }
            • catlifeonmars 1 hour ago
              I’m not strongly opinionated, especially with such a short function, but in general early return makes it so you don’t need to keep the whole function body in your head to understand the logic. Often it saves you having to read the whole function body too.

              But you can achieve a similar effect by keeping your functions small, in which case I think both styles are roughly equivalent.

          • worksonmine 1 hour ago
            > Why even use else if with return...

            What is the problem with that? How would you write that snippet? It is common in the new functional JS landscape, even if it is pass-by-ref.

            • Overpower0416 1 hour ago
              Using guard clauses. Way more readable and easy to work with.

                export function extractSearchToken(completionToken: {
                  token: string;
                  isQuoted?: boolean;
                }): string {
                  if (completionToken.isQuoted) {
                    return completionToken.token.slice(2).replace(/"$/, '');
                  }
                  if (completionToken.token.startsWith('@')) {
                    return completionToken.token.substring(1);
                  }
                  return completionToken.token;
                }
        • duckmysick 1 hour ago
          I'm not that familiar with TypeScript/JavaScript - what would be a proper way of handling complex logic? Switch statements? Decision tables?
          • catlifeonmars 1 hour ago
            Here I think the logic is unnecessarily complex. isQuoted is doing work that is implicit in the token.
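
            A minimal sketch of that simplification (hypothetical `extractToken`, not taken from the leaked code): derive the quoted state from the token text itself, so the separate `isQuoted` flag disappears.

```typescript
// Hypothetical sketch: the quote state is implicit in the token text,
// so no separate isQuoted flag is needed.
function extractToken(token: string): string {
  if (token.startsWith('@"')) {
    // Quoted form: strip the @" prefix and an optional closing quote.
    return token.slice(2).replace(/"$/, '');
  }
  if (token.startsWith('@')) {
    return token.slice(1);
  }
  return token;
}
```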
        • luc_ 3 hours ago
          Fits with the origin story of Claude Code...
          • werdnapk 1 hour ago
            insert "AI is just if statements" meme
        • loevborg 3 hours ago
          useCanUseTool.tsx looks special, maybe it's codegen'ed or copy 'n pasted? `_c` as an import name, no comments, use of promises instead of async functions. Or maybe it's just bad vibing...
          • Insensitivity 3 hours ago
            Maybe, I do suspect _some_ parts are codegen or source map artifacts.

            But if you take a look at the other file, for example `useTypeahead`, you'd see that even with a few codegen/source-map artifacts, the core logic and behavior is just a big bowl of soup.

        • matltc 2 hours ago
          Lol even the name is crazy
      • wklm 2 hours ago
        have a look at src/bootstrap/state.ts :D
      • q3k 3 hours ago

          1. Randomly peeking at process.argv and process.env all around. Other weird layering violations, too.
          2. Tons of repeat code, eg. multiple ad-hoc implementations of hash functions / PRNGs.
          3. Almost no high-level comments about structure - I assume all that lives in some CLAUDE.md instead.
        • delamon 2 hours ago
          What is wrong with peeking at process.env? It is a global map, after all. I assume, of course, that they don't mutate it.
          • lioeters 1 hour ago
            > process.env? It is a global map

            That's exactly why, access to global mutable state should be limited to as small a surface area as possible, so 99% of code can be locally deterministic and side-effect free, only using values that are passed into it. That makes testing easier too.

          • withinboredom 1 hour ago
            environment variables can change while the process is running and are not memory safe (though I suspect Node tries to wrap access with a lock). Meaning if you check a variable at point A, enter a branch, and check it again at point B, it's not guaranteed that the two reads will see the same value. This can cause you to enter "impossible" conditions.
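
            A toy sketch of that hazard (hypothetical names, nothing from the leaked source): snapshotting the variable once makes both checks agree, while two separate reads of the environment can disagree if something mutates it in between.

```typescript
// Toy illustration: two reads of the same variable can disagree if the
// environment is mutated between them; reading it once avoids that.
function modeUnsafe(env: Record<string, string | undefined>): string {
  if (env.MODE === 'fast') {
    // ...if something mutated env.MODE here, the re-read below could
    // land us in an "impossible" branch...
    return env.MODE === 'fast' ? 'fast path' : 'impossible?';
  }
  return 'slow path';
}

function modeSafe(env: Record<string, string | undefined>): string {
  const mode = env.MODE; // read exactly once
  return mode === 'fast' ? 'fast path' : 'slow path';
}
```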
          • hu3 2 hours ago
            For one it's harder to unit test.
          • q3k 2 hours ago
            It's implicit state that's also untyped - it's just a String -> String map without any canonical single source of truth about what environment variables are consulted, when, why and in what form.

            Such state should be strongly typed, have a canonical source of truth (which can then also be reused to document the environment variables the code supports, and e.g. allow reading the same options from configs, flags, etc.), and then be explicitly passed to the functions that need it, e.g. as function arguments or as members of an associated instance.

            This makes it easier to reason about the code (the caller will know that some module changes its functionality based on some state variable). It also makes it easier to test, both mechanically (having to set environment variables is gnarly) and because once again you know that the code changes its behaviour based on some state/option, and both cases should probably be tested.
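
            A minimal sketch of that pattern (hypothetical names, nothing from the leaked source): read the environment once at startup into a typed config object, then pass it explicitly to whatever needs it.

```typescript
// Hypothetical sketch: one canonical, typed view of the environment,
// read once at startup instead of ad-hoc process.env lookups everywhere.
interface AppConfig {
  verbose: boolean;
  apiBaseUrl: string;
}

function loadConfig(env: Record<string, string | undefined>): AppConfig {
  return {
    verbose: env.MYTOOL_VERBOSE === '1',
    apiBaseUrl: env.MYTOOL_API_URL ?? 'https://api.example.com',
  };
}

// Callers receive the config explicitly, which also makes testing trivial.
function describeRequest(config: AppConfig, path: string): string {
  return `${config.apiBaseUrl}${path}${config.verbose ? ' (verbose)' : ''}`;
}
```

            In tests you would just call `loadConfig({})` with whatever map you like; no environment mutation needed.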

        • loevborg 3 hours ago
          You're right about process.argv - wow, that looks like a maintenance and testability nightmare.
          • darkstar_16 2 hours ago
            They use claude code to code it. Makes sense
        • s3p 2 hours ago
          It probably exists only in CLAUDE or AGENTS.md since no humans are working on the code!
    • PierceJoy 3 hours ago
      Nothing a couple /simplify's can't take care of.
    • linesofcode 1 hour ago
      Code quality no longer carries the same weight as it did pre-LLMs. It used to matter because humans were the ones reading/writing it, so you had to optimize for readability and maintainability. But these days what matters is that the AI can work with it and you can reliably test it. Obviously you don't want code quality to go totally down the drain, but there is a fine balance.

      Optimize for consistency and a well thought out architecture, but let the gnarly looking function remain a gnarly function until it breaks and has to be refactored. Treat the functions as black boxes.

      Personally the only time I open my IDE to look at code, it’s because I’m looking at something mission critical or very nuanced. For the remainder I trust my agent to deliver acceptable results.

  • CookieJedi 16 minutes ago
    Hmmm, dont like the vibe
  • zoobab 48 minutes ago
    Just a client written in JS, nothing to see here; the LLM is still secret.

    They could have written it in curl+bash and that wouldn't have changed much.

  • theanonymousone 2 hours ago
    I am waiting now for someone to make it work with a Copilot Pro subscription.
  • aiedwardyi 1 hour ago
    Interesting to see cost-tracker.ts in there. Makes you wonder why they track usage internally but don't surface it to users in any meaningful way.
    • jsmith45 19 minutes ago
      Cost tracking is used if you connect Claude Code with an API key instead of a subscription. It powers the /cost command.

      It is tricky to meaningfully expose a dollar-cost-equivalent value for subscribers in a way that won't confuse users into thinking they will get a bill for that amount. This is especially true with overages enabled: a session that used overages was likely partially covered by the plan (and thus zero-rated) with the rest at API prices, and the client can't really know the breakdown.

    • fcarraldo 36 minutes ago
      They do, you can just type /cost
  • artdigital 21 minutes ago
    Now waiting for someone to point Codex at it and rebuild a new Claude Code in Golang to see if it would perform better
  • LeoDaVibeci 3 hours ago
    Isn't it open source?

    Or is there an open source front-end and a closed backend?

    • dragonwriter 3 hours ago
      > Isn't it open source?

      No, it's not even source-available.

      > Or is there an open source front-end and a closed backend?

      No, it's all proprietary. None of it is open source.

    • avaer 3 hours ago
      No, it was never open source. You could always reverse engineer the CLI app, but you didn't have access to the source.
    • karimf 3 hours ago
      The GitHub repo is only an issue tracker.
      • matheusmoreira 3 hours ago
        Wow it's true. Anthropic actually had me fooled. I saw the GitHub repository and just assumed it was open source. Didn't look at the actual files too closely. There's pretty much nothing there.

        So glad I took the time to firejail this thing before running it.

    • agluszak 3 hours ago
      You may have mistaken it with Codex

      https://github.com/openai/codex

    • yellow_lead 3 hours ago
      No
  • thefilmore 1 hour ago
    400k lines of code per scc
  • DeathArrow 2 hours ago
    Why is Claude Code, a desktop tool, written in JS? Is the future of all software JS or TypeScript?
  • ChicagoDave 2 hours ago
    I hope everyone provides excellent feedback so they improve Claude Code.
  • anhldbk 2 hours ago
    I guess it's time for Anthropic to open source Claude Code.
    • DeathArrow 2 hours ago
      And while they are at it, open source Opus and Sonet. :)
  • bdangubic 57 minutes ago
    I have 705 PRs ready to go :)
  • DeathArrow 2 hours ago
    I wonder what will happen with the poor guy who forgot to delete the code...
    • orphea 38 minutes ago

        the poor guy
      
      Do you mean the LLM?
    • epolanski 2 hours ago
      Responsibility goes upwards.

      Why weren't proper checks in place in the first place?

      Bonus: why didn't they set up their own AI-assisted tools to harness the release checks?

    • matltc 2 hours ago
      Ha. I'm surprised it's not a CI job
  • jedisct1 2 hours ago
    It shows that a company you and your organization are trusting with your data, and allowing full control over your devices 24/7, is failing to properly secure its own software.

    It's a wake-up call.

    • prmoustache 1 hour ago
      It is a client running in an interpreted language on your own computer; there is nothing to secure or hide, as the source was provided to you already. Or am I mistaken?
      • jedisct1 1 hour ago
        It was heavily obfuscated, keeping users in the dark about what they’re installing and running.
  • isodev 2 hours ago
    Can we stop referring to source maps as leaks? It was packaged in a way that wasn’t even obfuscated. Same as websites - it’s not a “leak” that you can read or inspect the source code.
    • kelnos 1 hour ago
      If it was included unintentionally, then it's a leak.
    • bmitc 2 hours ago
      The source is linked to in this thread. Is that not the source code?
    • echelon 2 hours ago
      The only exciting leak would be the Opus weights themselves.
  • mergeshield 7 minutes ago
    [dead]
  • obelai 18 minutes ago
    [dead]
  • mergeshield 3 hours ago
    [dead]
  • imta71770 1 hour ago
    [dead]
  • kevinbaiv 2 hours ago
    [dead]
  • psihonaut 2 hours ago
    [dead]
  • sixhobbits 2 hours ago
    [dead]
  • phtrivier 1 hour ago
    Maybe the OP could clarify. I don't like reading leaked code, but I'm curious: my understanding is that this is the source code for "Claude Code", the coding assistant that remotely calls the LLMs.

    Is that correct? The weights of the LLMs are _not_ in this repo, right?

    It sure sucks for Anthropic to get pwned like this, but it shouldn't affect their bottom line much?

    • treexs 1 hour ago
      Yes, it's the Claude Code CLI tool / coding-agent harness, not the weights.

      This code wasn't open source until now, and it contains information like the system prompts, internal feature flags, etc.

    • 59nadir 1 hour ago
      > I don't like reading leaked code

      Don't worry about that, the code in that repository isn't Anthropic's to begin with.