Impressive. It's worth reading despite the slight AI sheen to the writing, as it's unusually informative relative to most security articles. The primary takeaway from my POV is to watch out for "helpful" string normalization calls in security-sensitive software. Strings should be bags of bytes as much as possible. A lot of the exploits boil down to treating security identifiers as text instead of fixed numeric sequences. Also, even things that look trivial, like file paths in error messages, can be deadly.
My take on the normalization is that it happens in the wrong place - you should not do it ad hoc.
If your input from the user is a string, define a newtype like UserName and do all validation and normalization once when converting to it. All subsequent code should use that type and not raw strings, so it will be consistent everywhere.
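A minimal sketch of that newtype idea in Go (names like NewUserName and the exact normalization rules are mine, purely for illustration):

    package user

    import (
        "errors"
        "strings"
    )

    // UserName can only be constructed via NewUserName, so every value of
    // this type has been validated and normalized exactly once.
    type UserName struct {
        value string
    }

    var ErrInvalidUserName = errors.New("invalid user name")

    // NewUserName is the single place where normalization happens.
    func NewUserName(raw string) (UserName, error) {
        normalized := strings.ToLower(strings.TrimSpace(raw))
        if normalized == "" {
            return UserName{}, ErrInvalidUserName
        }
        return UserName{value: normalized}, nil
    }

    // String exposes only the canonical form; the raw input never leaks out.
    func (u UserName) String() string { return u.value }

Since the struct field is unexported, code outside the package physically cannot create an unvalidated UserName, which is what makes the guarantee stick.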
> This default is 30 seconds, matching the default TOTP period. But due to skew, passcodes may remain valid for up to 60 seconds (“daka” in Hebrew), spanning two time windows.
Wait, why would I care this is "daka" in Hebrew? Is this a hallucination or did they edit poorly?
Also... what is "daka"? 60 seconds? Passcodes that remain valid for two time windows? I've been checking the dictionary, and "daka" might mean "minute".
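Setting the odd parenthetical aside, the skew mechanics in that quote are easy to pin down. A toy Go sketch (my own illustration, not Vault's actual code) of why one step of allowed skew means a passcode spans two windows:

    package main

    import "fmt"

    const period = 30 // seconds, the default TOTP time step

    // accepted reports whether a passcode minted at time-step codeCounter
    // still validates when the server is at serverCounter, given one step
    // of skew tolerance: the code survives its own 30s window plus the
    // next one, i.e. up to 2 * period = 60 seconds.
    func accepted(codeCounter, serverCounter int64) bool {
        return serverCounter == codeCounter || serverCounter == codeCounter+1
    }

    func main() {
        minted := int64(1000) // counter at the moment the code was shown
        fmt.Println(accepted(minted, minted))   // true: same window
        fmt.Println(accepted(minted, minted+1)) // true: skew window
        fmt.Println(accepted(minted, minted+2)) // false: expired
        fmt.Printf("valid for up to %d seconds\n", 2*period)
    }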
In non-CA mode, an attacker who has access to the private key of a pinned certificate can (a sketch follows this list):
- Present a certificate with the correct public key
- Modify the CN in the client certificate to any arbitrary value
- Cause Vault to assign the resulting alias.Name to that CN
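Roughly the shape of that forgery in Go; forgeCert is my own hypothetical helper, not code from the article, but it shows why pinning only the key pair leaves the CN entirely attacker-controlled:

    package sketch

    import (
        "crypto/ecdsa"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "time"
    )

    // forgeCert self-signs a fresh certificate with an arbitrary CN using
    // the compromised key pair. The public key still matches the pin, so
    // non-CA verification passes while alias.Name follows the forged CN.
    func forgeCert(stolenKey *ecdsa.PrivateKey, cn string) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: cn}, // attacker-chosen CN
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
        }
        // Template doubles as parent: a self-signed certificate.
        return x509.CreateCertificate(rand.Reader, tmpl, tmpl,
            &stolenKey.PublicKey, stolenKey)
    }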
I agree that this is an issue, but if an attacker has access to the private key of a pinned certificate, you might have some bigger issues...
I'd say edited. I did wonder if they used AI to find the issues in the first place, but they would brag about that front and center and pivot to an AI-first security company within seconds. Then again, maybe they used AI to help them map out what happens in the code, even though it's Go and it should be pretty readable / obvious what happens.
That said, I think it's weird; the vulnerabilities seem to have been found by doing a thorough code review and comprehension, so why cut corners by passing the writeup through AI?
Edit: even if the TLD they use is .ai and they heavily promote themselves as a revolutionary AI security firm, yadda yadda yadda.
OpenBao maintainer here - The majority of these do affect us, more or less. Unfortunately, it seems that we did not receive any prior outreach regarding these vulnerabilities before publication... make of that what you will. We've been hard at work the past few days trying to get a security release out, which will likely land today.
I'm very disappointed to hear that the researchers did not disclose these findings to the OpenBao project before publishing them, so you now have to rush a release like this
Will you reach out to the researchers for an explanation after you've fixed the issues?
explanation ≠ excuse
Thank you for the explanation. It's obviously not great that this was missed, but finger-pointing now doesn't really help anyone, so I'll focus on what seems to me like the root issue.
My impression is that there is an information gap about forked projects that led to this issue.
I'm on vacation right now, but when I'm back I'll try to set up a small site that lists forks of popular projects, with maybe some information on when the project was forked.
Hopefully something like that can make it more likely that these things are responsibly disclosed to all relevant projects.
It sounds like these issues are from before the fork, in which case they will be present in OpenBao as well.
It also doesn't sound like the researchers made an effort to safely disclose these findings to the OpenBao project before publishing them, which I think would have been the right thing to do
It really does have that AI writing style, and these are the sorts of bugs I imagine an AI could have found... I wonder if that's what they did (though they claim it was all manual source code inspection).
From reading it, and mostly from the introduction, it felt like they rolled up their sleeves and really dug into the code. That was refreshing compared to the vibe-coding zeitgeist.
I would be curious what AI tools assisted in this and also what tools/models could re-discover them on the unpatched code base now that we know they exist.
I can imagine they could have used AI to analyze, describe, and map out what exactly happens in the code. Then again, it's Go; following the flow of the code and seeing what exactly is being checked is pretty straightforward (see e.g. https://github.com/hashicorp/vault/blob/main/vault/request_h... which was mentioned in the article)
TL;DR: string parsing is hard, and most of us are vulnerable to bad assumptions and/or never get around to doing those fuzz tests properly when checking that input is handled correctly.
I don't see any parsing going on here. They failed to normalize the input values the way the LDAP server does before applying rate limiting, resulting in an effectively higher-than-expected login attempt rate limit.
I'd argue it's odd that they (or LDAP) normalise input in the first place. I can sort of understand username normalization to avoid having both "admin" and "Admin" accounts, but that check only needs to happen when creating an account; when logging in, it should not accept "Admin" as valid for the account "admin".
But I'm neither a security person nor have I done much with authentication since my 2000s PHP hobby days. I suspect an LDAP server has to deal with, or try to manage, a lot of garbage input because of the sheer number of integrations they often have.
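For what it's worth, the bug shape described in the article looks roughly like this in Go (names are mine, invented for illustration): the lockout counter is keyed on the raw string while the LDAP server authenticates against the normalized one.

    package sketch

    import "strings"

    const lockoutThreshold = 5

    // failures counts failed logins per username, keyed on raw input.
    var failures = map[string]int{}

    func recordFailure(rawUsername string) {
        failures[rawUsername]++ // BUG: unnormalized key
    }

    func lockedOut(rawUsername string) bool {
        // "Admin", "ADMIN" and "admin" all authenticate against the same
        // LDAP account, yet each permutation gets its own fresh counter.
        return failures[rawUsername] >= lockoutThreshold
    }

    // The fix: apply the same normalization the backend applies, once,
    // and key everything on the canonical form.
    func normalize(rawUsername string) string {
        return strings.ToLower(strings.TrimSpace(rawUsername))
    }

Every case permutation of a username buys the attacker a fresh counter, which is exactly the "effectively higher-than-expected login attempt rate limit" above, and it's also why the validate-once newtype approach from earlier in the thread would have prevented this.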
CVE-2025-6010 - [REDACTED]
CVE-2025-6004 - Lockout Bypass - https://feedly.com/cve/CVE-2025-6004
- Via case permutation in userpass auth
- Via input normalization mismatch in LDAP auth
CVE-2025-6011 - Timing-Based Username Enumeration - https://feedly.com/cve/CVE-2025-6011
- Identify valid usernames
CVE-2025-6003 - MFA Enforcement Bypass - https://feedly.com/cve/CVE-2025-6003
- Via username_as_alias configuration in LDAP
CVE-2025-6013 - Multiple EntityID Generation - https://feedly.com/cve/CVE-2025-6013
- Allows LDAP users to generate multiple EntityIDs for the same identity
CVE-2025-6016 - TOTP MFA Weaknesses - https://feedly.com/cve/CVE-2025-6016
- Aggregated logic flaws in the TOTP implementation
CVE-2025-6037 - Certificate Entity Impersonation - https://feedly.com/cve/CVE-2025-6037
- Existed for 8+ years in Vault
CVE-2025-5999 - Root Privilege Escalation - https://feedly.com/cve/CVE-2025-5999
- Admin-to-root escalation via policy normalization
CVE-2025-6000 - Remote Code Execution - https://feedly.com/cve/CVE-2025-6000
- First public RCE in Vault (existed for 9 years)
- Via plugin catalog abuse
- https://discuss.hashicorp.com/t/hcsec-2025-14-privileged-vau...
Edit: replaced link with link to HN post, not the article in that post.
One of the Vault backends has a size limit, so secret keys larger than 2048 bits would not fit. Amazing tool.