12 comments

  • mike_hearn 20 minutes ago
    Impressive. It's worth reading despite the slight AI sheen to the writing, as it's unusually informative relative to most security articles. The primary takeaway from my POV is to watch out for "helpful" string normalization calls in security-sensitive software. Strings should be bags of bytes as much as possible. A lot of the exploits boil down to treating security identifiers as text instead of fixed numeric sequences. Also, even things that look trivial, like file paths in error messages, can be deadly.
    • progbits 12 minutes ago
      My take on the normalization is that it happens in the wrong place - you should not do it ad hoc.

      If your input from the user is a string, define a newtype like UserName and do all validation and normalization once to convert it. All subsequent code should use that type and not raw strings, so it will be consistent everywhere.
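      A minimal sketch of that newtype idea (Python rather than Vault's Go, and names like `UserName.parse` are invented for illustration): normalize exactly once at the trust boundary, then pass only the wrapped type around so lockout counters, policy lookups, and backend queries all agree on identity.

```python
import unicodedata
from dataclasses import dataclass

@dataclass(frozen=True)
class UserName:
    """Username normalized once, at construction time (illustrative sketch)."""
    value: str

    @classmethod
    def parse(cls, raw: str) -> "UserName":
        # Normalize exactly once at the trust boundary:
        # Unicode NFKC, case folding, and trimming, then reject surprises.
        normalized = unicodedata.normalize("NFKC", raw).casefold().strip()
        if not normalized or any(c.isspace() for c in normalized):
            raise ValueError("invalid username")
        return cls(normalized)

# "Admin", "ADMIN", and " admin " all collapse to one canonical identity,
# so every later comparison is consistent by construction.
```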

  • procaryote 1 hour ago
    > This default is 30 seconds, matching the default TOTP period. But due to skew, passcodes may remain valid for up to 60 seconds (“daka” in Hebrew), spanning two time windows.

    Wait, why would I care this is "daka" in Hebrew? Is this a hallucination or did they edit poorly?

    • tecleandor 36 minutes ago
      Also... what is "daka"? 60 seconds? Passcodes that remain valid for two time windows? I've been checking the dictionary, and "daka" might mean "minute".
    • 1a527dd5 1 hour ago
      Yeah, it read slightly weird before I got to that point, and then it was obvious it was AI slop.
      • neom 1 hour ago
        Maybe just being cute. Author is Yarden Porat from Cyata, an Israeli cybersecurity company.
    • adhamsalama 1 hour ago
      [flagged]
  • neuralkoi 27 minutes ago

        In non-CA mode, an attacker who has access to the private key of a pinned certificate can:
    
           Present a certificate with the correct public key
    
           Modify the CN in the client certificate to any arbitrary value
    
           Cause Vault to assign the resulting alias.Name to that CN
    
    I agree that this is an issue, but if an attacker has access to the private key of a pinned certificate, you might have some bigger issues...
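    For illustration of why that still matters (hypothetical names, Python rather than Vault's Go): one defensive pattern is to bind identity to the pinned certificate itself rather than to the CN the presenter chose, so even a key holder can't mint an arbitrary alias name.

```python
import hashlib

def identity_from_cert(pinned: dict, cert_der: bytes, cert_cn: str) -> str:
    """Look up identity by the pinned cert's fingerprint (illustrative sketch).

    `pinned` maps sha256(cert DER) -> identity, fixed at pin time.
    Returning `cert_cn` instead (roughly the reported bug) would let
    anyone holding the private key choose their own alias name.
    """
    fp = hashlib.sha256(cert_der).hexdigest()
    if fp not in pinned:
        raise PermissionError("certificate not pinned")
    return pinned[fp]
```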
  • neom 1 hour ago
    The post covers 9 CVEs from May-June 2025 (full chain from default user > admin > root > RCE):

    CVE-2025-6010 - [REDACTED]

    CVE-2025-6004 - Lockout Bypass https://feedly.com/cve/CVE-2025-6004

    Via case permutation in userpass auth; via input normalization mismatch in LDAP auth

    CVE-2025-6011 - Timing-Based Username Enumeration https://feedly.com/cve/CVE-2025-6011

    Identify valid usernames

    CVE-2025-6003 - MFA Enforcement Bypass https://feedly.com/cve/CVE-2025-6003

    Via username_as_alias configuration in LDAP

    CVE-2025-6013 - Multiple EntityID Generation https://feedly.com/cve/CVE-2025-6013

    Allows LDAP users to generate multiple EntityIDs for the same identity

    CVE-2025-6016 - TOTP MFA Weaknesses https://feedly.com/cve/CVE-2025-6016

    Aggregated logic flaws in TOTP implementation

    CVE-2025-6037 - Certificate Entity Impersonation https://feedly.com/cve/CVE-2025-6037

    Existed for 8+ years in Vault

    CVE-2025-5999 - Root Privilege Escalation https://feedly.com/cve/CVE-2025-5999

    Admin to root escalation via policy normalization

    CVE-2025-6000 - Remote Code Execution https://feedly.com/cve/CVE-2025-6000

    First public RCE in Vault (existed for 9 years), via plugin catalog abuse: https://discuss.hashicorp.com/t/hcsec-2025-14-privileged-vau...
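    To make the TOTP time-window point concrete, an RFC 6238-style sketch in Python (illustrative only, not Vault's implementation): accepting one window of clock skew on either side stretches a 30-second code's effective lifetime well past its nominal period.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, t: int, period: int = 30) -> int:
    # Standard HOTP truncation over the time counter (RFC 6238 / RFC 4226).
    counter = struct.pack(">Q", t // period)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return code % 1_000_000

def verify(secret: bytes, code: int, now: int, skew_windows: int = 1) -> bool:
    # Tolerating +/- one window means a code minted in one 30s window is
    # still accepted in the adjacent ones -- the multi-window validity
    # the article describes.
    return any(totp(secret, now + d * 30) == code
               for d in range(-skew_windows, skew_windows + 1))
```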

  • gtirloni 1 hour ago
    Something feels odd reading the article. It's so verbose, like it's trying to explain things as if the reader is a 5-year-old.
    • plantain 1 hour ago
      AI written, or edited.
      • Cthulhu_ 14 minutes ago
        I'd say edited. I did wonder if they used AI to find the issues in the first place, but then they would brag about that front and center and pivot to an AI-first security company within seconds. Then again, maybe they used AI to help them map out what happens in the code, even though it's Go code and it should be pretty readable / obvious what happens.

        That said, I think it's weird; the vulnerabilities seem to have been found by doing a thorough code review and comprehension, why then cut corners by passing the writeup through AI?

  • unwind 1 hour ago
    This feels like a dupe of https://news.ycombinator.com/item?id=44821250.

    Edit: replaced link with link to HN post, not the article in that post.

  • edoceo 1 hour ago
    But does it affect Bao? Could test there since they are so closely related.
    • satoqz 1 hour ago
      OpenBao maintainer here - the majority of these do affect us, more or less. Unfortunately, it seems that we did not receive any prior outreach regarding these vulnerabilities before publication... make of that what you will. We've been hard at work the past few days trying to get a security release out, which will likely land today.
      • Scandiravian 1 hour ago
        Thanks for the great work and swift communication

        I'm very disappointed to hear that the researchers did not disclose these findings to the OpenBao project before publishing them, so you now have to rush a release like this

        Will you reach out to the researchers for an explanation after you've fixed the issues?

        • wafflemaker 1 hour ago
          I can explain* for the researchers (and myself, though I have nothing to do with it): we both learned about OpenBao today.

          explanation ≠ excuse

          • Scandiravian 1 hour ago
            Thank you for the explanation. It's obviously not great that this was missed, but finger-pointing now doesn't really help anyone, so I'll focus on what seems to me like the root issue

            My impression is that there is an information gap around forked projects that led to this issue.

            I'm on vacation right now, but when I'm back I'll try to set up a small site that lists forks of popular projects and maybe some information on when the project was forked.

            Hopefully something like that can make it more likely that these things are responsibly disclosed to all relevant projects

    • Scandiravian 1 hour ago
      It sounds like these issues are from before the fork, in which case they will be present in OpenBao as well.

      It also doesn't sound like the researchers made an effort to safely disclose these findings to the OpenBao project before publishing them, which I think would have been the right thing to do

  • v5v3 2 hours ago
    Fantastic work guys. Thank you.
  • maxall4 2 hours ago
    Mmm AI writing gotta love it… /s
    • markasoftware 1 hour ago
      It really does have that AI writing style, and these are the sorts of bugs I imagine an AI could have found... I wonder if that's what they did (though they claim it was all manual source code inspection).
      • darkwater 1 hour ago
        Having the blog post explaining the findings written - or aided - by an AI doesn't necessarily mean that the findings themselves were found using AI.

        Edit: even if the TLD they use is .ai and they heavily promote themselves as revolutionary AI security firm yadda yadda yadda

        • neomantra 27 minutes ago
          From reading it and mostly from the introduction, it felt like they rolled up their sleeves and really dug into the code. This was refreshing versus the vibe-coding zeitgeist.

          I would be curious what AI tools assisted in this and also what tools/models could re-discover them on the unpatched code base now that we know they exist.

  • tiedemann 1 hour ago
    TLDR: string parsing is hard, and most of us are vulnerable to bad assumptions and/or never get around to doing those fuzz tests properly when checking that input is handled correctly.
    • compressedgas 1 hour ago
      I don't see any parsing going on here. They failed to normalize the input values the way the LDAP server does before applying rate limiting, resulting in an effectively higher-than-expected login attempt rate limit.
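      A toy Python sketch of that mismatch (hypothetical names; the real code paths are in Vault's Go): if the lockout counter keys on the raw input string while the backend matches usernames case-insensitively, each case permutation buys a fresh set of login attempts.

```python
from collections import defaultdict

MAX_ATTEMPTS = 5

class Lockout:
    """Toy lockout counter; key_fn decides which inputs share a counter."""
    def __init__(self, key_fn):
        self.key_fn = key_fn
        self.attempts = defaultdict(int)

    def try_login(self, username: str) -> str:
        key = self.key_fn(username)
        if self.attempts[key] >= MAX_ATTEMPTS:
            return "locked"
        self.attempts[key] += 1
        return "forwarded to backend"

buggy = Lockout(lambda u: u)             # keys on the raw input string
fixed = Lockout(lambda u: u.casefold())  # keys the way a case-insensitive backend matches

for _ in range(MAX_ATTEMPTS):            # burn the "admin" attempt budget on both
    buggy.try_login("admin")
    fixed.try_login("admin")
# With the buggy key, "Admin" now gets a fresh counter; with the fixed
# key, it is already locked out.
```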
    • procaryote 1 hour ago
      A lot of these follow the pattern of normalising input as late as possible, which is an odd choice for a security product.
      • Cthulhu_ 7 minutes ago
        I'd argue it's odd that they (or LDAP) normalise input in the first place. I can sort of understand username normalization to avoid having both "admin" and "Admin" accounts, but that check only needs to be done when creating an account; when logging in, it should not accept "Admin" as valid for account "admin".

        But I'm neither a security person nor have I done much with authentication since my 2000s PHP hobbying. I suspect an LDAP server has to deal with, or try to manage, a lot of garbage input because of the sheer number of integrations they often have.

      • LtWorf 29 minutes ago
        I mean… it's hashicorp… did you expect sanity?

        One of the vault backends has a size limit, so secret keys larger than 2048 bits would not fit. Amazing tool.