ChatGPT still does not display per-message timestamps (time of day / date) in conversations.
This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments and upvotes and deleted threads, yet remains unimplemented.
Can any of you think of a reason (UX-wise) for it not to be displayed?
I actually agree there's an issue here. I feel we've dumbed down interfaces so much that people who in previous generations would barely have written, and whose words wouldn't have reached anyone beyond their close friends and family, now have their voices algorithmically amplified to millions. And since the algorithms care only about engagement rather than eloquence (let alone veracity), these people end up believing their thoughts are just as valid regardless of substance, and that there's nothing to be gained by learning numeracy.
EDIT: It's not a new issue, and Asimov phrased it well back in 1980, but I feel it has gotten much worse.
> Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge'.
My honest opinion, which may be entirely wrong but remains my impression, is:
User Engagement Maximization At Any Cost
Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that the system is optimized for.
I often observe (whether I'm perceiving it correctly or not) that among the behaviors I suspect are there to boost engagement is a tendency to withhold vital information, and to give longer, more complex procedures higher priority than simpler, cleaner solutions.
Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation has been provided to the system as tooling.
I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable patterns from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would expect from a psychotic interlocutor.
EDIT: Much of the above was observed after putting the system under scrutiny. On one astonishing and memorable occasion, GPT recommended I call a suicide hotline because I questioned its veracity and logic.
Just a note to those adding the time to their personalization instructions: it's inaccurate. In an existing chat, the time it gives is close to when that chat session was last active; in a new chat, it can be off by ±15 minutes for some reason.
I was using one continuous ChatGPT conversation to keep track of my lifts, and then I realized it never knows what day I'm talking to it. There's no consistency; it might as well assume it's the date of the first message you sent.
I think that’s exactly why they’re not including timestamps. If timestamps were shown in the UI, users might expect some form of “time awareness” that it doesn’t quite have. Yes, you can add the time to the context, but I imagine that might degrade other metrics.
Another possible reason is that they want to discourage users from using the product in a certain way (one big conversation) because that’s bad for content management.
It’s an incredible tool for weightlifting. I use it all the time to analyze my workout logs that I copy/paste from Apple Notes.
Example prompts:
- “Modify my Push #2 routine to avoid aggravating my rotator cuff”
- “Summarize my progression over the past 2 months. What lifts are progressing and which are lagging? Suggest how to optimize training”
- “Are my legs hamstring or glute dominant? How should I adjust training”
- “Critique my training program and suggest optimizations”
That said, I would never log directly in ChatGPT since chats still feel ephemeral. Always log outside of ChatGPT and copy/paste the logs when needed for context.
That's brilliant. I've had an injury for a while now, and I change my routine on the fly at the gym depending on whether I still feel pain. It would be much better to change it before the next time I go, so I don't waste time figuring out what to replace.
Time stamps? lol
They still don’t have the option to search your previous history.
Luckily I built an extension that stores all chats locally in a database, so I can reference and view them offline if I want to. Timestamps included.
You only need that info if you know you need it in your RAG. Over the last two years of usage I can't recall where I'd have needed those timestamps, but I know there are cases. Still, this would have to be an option, because otherwise it would be a waste of tokens. We also have to consider that they're competing on both the quality AND the length of responses, even when a shorter response would be better. There's a pretzel of considerations here.
Imagine you started having back pain months ago and you remember asking ChatGPT questions when it first started.
Now you’re going to the doctor and you forgot exactly when the pain started. You remember that you asked ChatGPT about the pain the day it started.
So you look for the chat, and discover there are no dates. It feels like such an obvious thing that’s missing.
Let’s not over-complicate things. There aren’t that many considerations. It’s just a date; it doesn’t need to be stuffed into the context of the chat. I'm not sure why the quality or length of the chat would be affected.
I built a single-page website that copies the current time to my clipboard, and I paste it into my messages. It's inconvenient, and I don't do it consistently.
I'll have to look into the extension described in the link. Thank you for sharing. It's nice to know it's a shared problem.
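For what it's worth, the same trick works as a tiny local script instead of a web page. A minimal sketch in Python (the timestamp format is my own choice, and it assumes pbcopy/xclip/clip is available depending on the OS):

    import subprocess
    import sys
    from datetime import datetime

    def copy_to_clipboard(text: str) -> None:
        # Hand the text to the platform's clipboard utility.
        if sys.platform == "darwin":
            cmd = ["pbcopy"]
        elif sys.platform.startswith("linux"):
            cmd = ["xclip", "-selection", "clipboard"]
        else:
            cmd = ["clip"]
        subprocess.run(cmd, input=text.encode("utf-8"), check=True)

    if __name__ == "__main__":
        # Local-time stamp, ready to paste at the start of a ChatGPT message.
        stamp = datetime.now().astimezone().strftime("[%Y-%m-%d %H:%M %z] ")
        copy_to_clipboard(stamp)
        print("Copied " + stamp + "to clipboard")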
Beyond the lack of timestamps, ChatGPT produces oddly formatted text when you copy answers. It’s neither proper markdown nor rich text. The formatting is consistently off: excessive newlines between paragraphs, strangely indented lists, and no markdown support whatsoever.
I regularly use multiple LLM services including Claude, ChatGPT, and Gemini, among others. ChatGPT’s output has the most unusual formatting of them all. I’ve resorted to passing answers through another LLM just to get proper formatting.
Other than the potential liability, cost may also be a factor.
Back in April 2025, Altman mentioned people saying "thank you" was adding “tens of millions of dollars” to their infra costs. Wondering if adding per-message timestamps would cost even more.
Altman was being dumb; being polite to LLMs makes them produce higher-quality results, which means less back-and-forth and saves money in the long run.
I think "thank you" are used for inference in follow-up messages, but not necessarily timestamps.
I just asked ChatGPT this:
> Suppose ChatGPT does not currently store the timestamp of each message in conversations internally at all. Based on public numbers/estimates, calculate how much money it will cost OpenAI per year to display the timestamp information in every message, considering storage/bandwidth etc
The answer it gave was $40K-$50K. I am too dumb and inexperienced to go through everything and verify if it makes sense, but anyone who knows better is welcome to fact check this.
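For a crude sanity check, here's the kind of back-of-the-envelope arithmetic I'd expect behind a number like that, with made-up round figures (none of these are real OpenAI numbers). The main takeaway is that raw storage and bandwidth for a per-message timestamp come out to a rounding error either way:

    # All constants below are illustrative assumptions, not published figures.
    MESSAGES_PER_DAY = 1_000_000_000      # assume ~1B messages/day platform-wide
    TIMESTAMP_BYTES = 8                   # a 64-bit epoch timestamp stored per message
    EXTRA_JSON_BYTES = 40                 # assume ~40 bytes on the wire for an ISO-8601 field

    STORAGE_USD_PER_TB_MONTH = 20         # rough cloud storage list price
    EGRESS_USD_PER_TB = 90                # rough cloud egress list price

    storage_tb = MESSAGES_PER_DAY * 365 * TIMESTAMP_BYTES / 1e12    # ~2.9 TB added per year
    egress_tb = MESSAGES_PER_DAY * 365 * EXTRA_JSON_BYTES / 1e12    # ~14.6 TB sent per year

    storage_cost = storage_tb * STORAGE_USD_PER_TB_MONTH * 12       # ~$700/yr, rough upper bound
    egress_cost = egress_tb * EGRESS_USD_PER_TB                     # ~$1,300/yr

    print(f"storage: {storage_tb:.1f} TB/yr -> ~${storage_cost:,.0f}/yr")
    print(f"egress:  {egress_tb:.1f} TB/yr -> ~${egress_cost:,.0f}/yr")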
Just like on a piece of hardware that doesn't have an RTC, we rely on NTP. Maybe we just need an NTP MCP for the agents. It looks like there are several open-source projects already, but I'm not linking to them because I can't vouch for their quality or trustworthiness.
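If someone did want to go that route, a "what time is it" tool is about as small as MCP servers get. A minimal sketch using the official Python SDK's FastMCP helper (this just reads the host clock, which is itself usually NTP-synced, rather than querying NTP directly; the server and tool names are my own, so check the SDK docs before relying on the exact API):

    from datetime import datetime, timezone

    from mcp.server.fastmcp import FastMCP   # official modelcontextprotocol Python SDK

    mcp = FastMCP("time")                    # hypothetical server name

    @mcp.tool()
    def current_time() -> str:
        """Return the current UTC time in ISO 8601 format."""
        return datetime.now(timezone.utc).isoformat()

    if __name__ == "__main__":
        mcp.run()                            # serves over stdio for a local agent to call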
It's ugly. The fact that it isn't at least exposed as an opt-in for power users makes me wonder whether timestamps would give some advantage to an inference scraper, or whether their service APIs simply don't have contemporaneous access to the metadata available from the web interface.
What annoys me even more is that ChatGPT doesn't alert you when you near the context window limit. I have a chat I've worked on for a year that has now hit the context window. I worked around this by doing a GDPR download of all messages, reconstructing the conversation inside a markdown file, and then giving that file to Claude to create a summarized/compacted version of the chat...
The HTML file is just a big JSON with some JS rendering, so I wrote a bash script that adds the timestamp before each conversation title.
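Roughly the same idea as a Python sketch (assuming the export's conversations.json is a list of conversation objects, each with a "title" and a "create_time" Unix-epoch field; adjust to whatever your export actually contains):

    import json
    from datetime import datetime

    # Load the conversations from the data export (assumed structure: a JSON array
    # of objects with at least "title" and "create_time" keys).
    with open("conversations.json", encoding="utf-8") as f:
        conversations = json.load(f)

    # Prepend each conversation's creation date to its title.
    for convo in conversations:
        created = datetime.fromtimestamp(convo["create_time"])
        convo["title"] = f"[{created:%Y-%m-%d %H:%M}] {convo['title']}"

    with open("conversations_with_dates.json", "w", encoding="utf-8") as f:
        json.dump(conversations, f, ensure_ascii=False, indent=2)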
I can imagine a legal reason (beyond UX). If the LLM messes up big time[1], timestamps could help build the case against it and make investigation work easier.
[1] https://www.ap.org/news-highlights/spotlights/2025/new-study...
Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.
What does this even mean?
It's irresponsible for OpenAI to let this issue be solved by extensions.
https://github.com/Hangzhi/chatgpt-timestamp-extension
https://chromewebstore.google.com/detail/kdjfhglijhebcchcfkk...
I do something similar. I give it my cardio goals, current FTP, days to train, and injuries to avoid; it builds three lift-day programs with tracking, progressing over 8 weeks, and loops my PT into warm-ups. It offers alternate suggestions, and I use the whole sheet to get an overview of how the last 8 weeks went and then change things up.
I’m not suggesting this is sufficient; I’m just noting that there is somewhere in the user interface where it is displayed.
I would be very surprised if they don’t already store date/time metadata. If they do, it’s just a matter of exposing it.
if response == 'thank you': print('your welcome')
As companies sprint toward AGI as the goal, the floor for acceptable customer service has never been lower. These two things are not unrelated.
Claude Sonnet is my favorite, despite occasionally going into absurd levels of enthusiasm.
Opus is... Very moody and ambiguous. Maybe that helps with complex or creative tasks. For conversational use I have found it to be a bit of a downer.