Am I the only one who is getting tired of all these LLM generated landing pages with their hallmark indigo backgrounds/gradients, unnecessary and tasteless transitions, and meaningless marketing sell points?
This is something I really wish were just built into Claude Code. I want it built in because I don't want to have to think about it beforehand. I should be able to jump back in conversation history and have the state of the code jump back with me, so it's restored to the same state it was originally at that point in the conversation.
(There does also need to be a way to jump back in the conversation history without reverting the code, there are times that is useful too!)
A few weeks ago I asked gemini cli to do something pretty simple and it ran for like 12 minutes and then failed with an exception. Haven't tried it again since.
Cline gives you the ability to jump back to any point in the task. The three options are "Restore task", "Restore files", and "Restore task and files".
A common experience with these tools is that if you realize you want to change the direction you're heading, it's better to jump back to that point in the work and redo it than it is to try to redirect the tool from where you are. Here's a great post about it on the Cline blog
Hi, the developer here. I'm already thinking about a way to add it as a background task that can communicate with multiple instances at once. As long as it's part of CLAUDE.md, every new project would have it automatically included. Not part of Claude Code, but a good step closer?
The Claude models are just a part of Claude Code. I've worked with both Copilot using the Claude models and Claude Code itself. Claude Code is way more capable, and has a greater likelihood of successfully completing a task.
This was a pain point when coming from Aider to CC: how do you get diffs of the changes once CC has made them? Having git commits done the way Aider does it would have saved me a lot of time.
What business? This seems to be completely free, with no pricing, in-app purchases, or anything. That being said, it's strange that it doesn't seem to be open-source.
I tend to have auto-accept on for edits, and once Claude is done with a task I'll just use git to review and stage the changes, sometimes commit them when it's a logical spot for it.
I wouldn't want to have Claude auto-commit everything it does (because I sometimes revert its changes), nor would I want to YOLO it without any git repo... This seems like a nice tool, but for someone who has a very different workflow.
"Checkpoints for Claude Code" uses git under the hood, but stores its data in a .claudecheckpoints folder so it doesn't mess with your own git repo. It adds itself to .gitignore.
It auto-commits locally, with a commit message for the changes made through MCP.
As someone who doesn't use CC, auto-commit seems like it would be the easiest way to manage changes. It's easy enough to revert or edit a commit if I don't like what happened.
It's also very easy to throw away actual commits, as long as you don't push them (and even then not so difficult if you're in a context where force-pushing is tolerable).
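For instance, dropping unpushed local commits is a single command (a sketch; assumes the throwaway work is the last two commits):

```shell
# Inspect what you're about to discard
git log --oneline -n 2

# Throw away the last two commits and their changes (safe only if unpushed)
git reset --hard HEAD~2

# If the branch was already pushed and force-pushing is tolerable:
# git push --force-with-lease
```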
True, but it's harder to reject changes in one file, make a quick fix, etc. I like to keep control over my git repo as it's a very useful tool for supervising the AI.
I don't know what this is, but isn't git enough? Incidentally, I'm not convinced by Jujutsu (jj) for my day-to-day, but from what I understand about how it works, I've been wanting to give it a try for agent-based coding, based on the way it defaults to saving everything and letting you sort it out after. I do like how Aider commits everything so you can easily roll back, although it ends up with a few too many commits IMHO.
I've been wanting to experiment also with getting an agent to go back and rebase history, rewrite commits etc in the context of where the project ended up, to make a more legible history, but I don't know if that's doable, or even all that useful.
Git won't catch new files the agent is adding. To get around that you can of course always add all new files, but then you'll potentially have your repo polluted with a bunch of temporary scratch files instead.
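There is a middle ground worth knowing: `git add --intent-to-add` (`-N`) registers new paths without staging their content, so an agent's new files show up in `git diff` without being committed or cluttering the index:

```shell
# Record new files as "intent to add" -- their content stays unstaged
git add --intent-to-add .

# New files now appear in the diff alongside modified ones
git diff
```

You still decide later which of those files to actually commit and which to throw away.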
You can typically go back and edit git history, but it requires a force push and is a breaking change for anyone tracking the branch. It also takes some care to ensure the agent doesn't make a mistake, because a botched history rewrite can leave your repo broken.
Best way to do that is probably to have it work on branches and then squash merge those.
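Sketched out, with a hypothetical branch name:

```shell
# Let the agent commit freely on its own branch
git switch -c agent/feature-x
# ... agent makes many small commits here ...

# Fold all of it into a single commit on the main branch
git switch main
git merge --squash agent/feature-x
git commit -m "feat: add feature X (squashed agent work)"
```

`merge --squash` stages the combined diff without creating a merge commit, so the agent's noisy intermediate history never touches main.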
Another problem I inadvertently dodged by using Jujutsu with Claude Code :)
I tend to send a lone "commit" message to Claude when I think I'm in a spot I may want to return to in the future, in case the current path doesn't work out. Then Claude commits it with a decent message. It knows how to use jj well enough for most things. Then it's really easy to jj new back to a previous change and try again.
Yup, that's what I do. Even for personal projects, with the flurry of changes Claude/other AI assistants make, a branch makes it easier for me to compare changes.
Often I have a branch with multiple commits on it, with each commit corresponding to a message in a conversation with AI on Cursor trying to get a new feature built.
In the end, I can diff the branch against the main branch, and see the sum total of changes the AI agent has made.
Maybe edit/improve manually on my own afterwards. And then, merge.
I always squash and reorganise the commits from Aider. It is, however, awesome that everything is in git directly from the agent. I can't imagine why all these tools don't do this!
I just commit with a “wip!”-prefaced message whenever the LLM pauses and says it’s finished, including new files. You can squash and cleanup later, or revert back to a state before it screwed up.
Also doubles as a way to cohesively look at the changes it made without all the natural language and recursive error/type fixing it does while working.
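A minimal sketch of that wip-then-squash flow (the commit count and messages are illustrative):

```shell
# After each LLM pause: checkpoint everything, including new files
git add -A
git commit -m "wip! LLM checkpoint"

# Later: collapse the last three wip commits into one real commit
git reset --soft HEAD~3
git commit -m "feat: the actual change, cleaned up"
```

`reset --soft` keeps all the changes staged, so the cleanup commit is just a matter of writing a proper message.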
I don’t understand why people are making it so complicated. You’re saving a minute per iteration with the LLM, tops, at risk of losing control or introducing hard to find issues. It is the definition of diminishing returns.
I don't think Jujutsu would help with this use case -- Jujutsu will not save everything, because it is not running constantly on your repo. It snapshots the working tree only when you run a `jj` command. Ineffective if an agent is doing the work.
I recently started using Aider and had that thought about too many commits. What I realized though was: (1) if I'm going to contribute to a project, I should be working in a local branch and interactively rebasing to clean up my history anyway (and of course carefully reviewing Aider's work first) and (2) if I'm working on my own thing WITHOUT LLM, I tend to prefer to commit every dang little change anyway, I just don't remember to do it because I'm in the zone and then inevitably wish I had at some point.
> I tend to prefer to commit every dang little change anyway, I just don't remember to do it because I'm in the zone and then inevitably wish I had at some point.
That’s what I did too, until I developed a practice of breaking work into thematic commits as I realize I need them. And if I don’t, I just git reset to the beginning and use git gui to commit the lines and hunks that are relevant to a given piece of work. With experience, though, I rarely do the full breakdown - I generally don’t even bother creating commits until I have a rough sense of what the desired commit history should be.
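That reset-and-recommit flow, roughly (the commit count, paths, and message here are hypothetical; `git gui` or `git add -p` does the hunk-level picking interactively):

```shell
# Dissolve the last few commits but keep every change in the working tree
git reset --mixed HEAD~3

# Re-stage and commit in thematic chunks
git add src/parser.c
git commit -m "refactor: extract parser helpers"
```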
T-shirt estimation doesn't make any sense for AI dev, not one bit. Agents get epic-long features done in hours, and all the shirt sizing comes from the cases where the agent circles the drain and needs to be guided, which isn't predictable.
The shirt sizes now are for manual acceptance testing.
My experience with AI tooling is that while it's really useful and great, I don't think I've ever seen an LLM complete an epic-long feature well, full stop.
Don't get me wrong, it's definitely improved my workflow and efficiency, but you must be winning at roulette if the model is performing well on anything that can't be googled and implemented within a similar amount of time.
Unless it's Claude, where even simple styling changes seem to become epics when it wants to spit out an extra few thousand lines of code.
If you go back and forth with ChatGPT/Gemini on architectural details first, then get ChatGPT to produce a hyper-detailed spec (like, almost a program Claude can execute), you can get Claude to run for 2-3 hours at a shot (particularly with a hook to prevent early stopping). Require >85% test coverage, and bake very clear e2e test paths into the spec, and Claude can come surprisingly close to one-shotting big things.
Fair enough! I'll need to give it a try. I tend to mostly use these agents as idea testers, as I've found them limiting beyond those concepts, but it sounds like they may be quite useful with this spec approach, thanks!
> T shirt estimation doesn't make any sense for AI dev
It doesn't make sense for NI (natural intelligence) dev, either. Even Scrum doesn't make much sense. The only Agile thing that really makes sense is Kanban, which is known to computer science as a dispatch queue.
In the 60s, OS researchers spent time figuring out how to optimally schedule resources for computation. Today, almost nobody uses these techniques. (This is known as "waterfall" in PM parlance.)
It turns out, the cheapest way to schedule computing resources is a simple dispatch queue. Why spend extra time figuring out in what order things need to be done, or how long they will take, if they need to be done anyway? It never made sense and it doesn't matter whether the agent is NI or AI.
Under the hood, is this simply checkpointing the files in the Claude target folder, or are you also checkpointing the Claude context? One of my biggest pain points: after a few compactions/edits to CLAUDE.md, Claude has made a few mistakes, and all the context-window cruft from fixes it attempted and reverted seems to confuse it further. It would be nice to reset to a known happy place, both code and context, and retry from there.
Not to trivialize the work being done here but isn’t this as simple as a hook on edit and write tool calls that commits to git? I’m not sure I see the need for a whole app around this vs just the standard git workflow
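For what it's worth, a rough sketch of that hook idea: a small script that checkpoints the repo after every change, which could be wired to Claude Code's PostToolUse hook matched on the Edit/Write tools (the settings.json wiring, and whether you want auto-commits at all, is left as an assumption):

```shell
#!/bin/sh
# checkpoint.sh -- hypothetical post-edit hook: commit the working tree
cd "$(git rev-parse --show-toplevel 2>/dev/null)" || exit 0
git add -A
# Commit only when something actually changed
git diff --cached --quiet || git commit -q -m "checkpoint: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
```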
Hi, the developer here. It's a very early version, so there could be a lot of bugs, but I like to use it myself (I've already found several bugs, and an updated version is on its way).
Switching from Cursor to Claude Code, this was the biggest loss. I've tried to improve on the Cursor functionality, with features I missed.
I would love any feedback on what you're missing, etc.
Some of the user interaction borders on "disaster" IMO. One puts up with it because it's not a show-stopper for the core value proposition of the software (an LLM agent completing tasks for you), and the core value proposition of the software is really valuable.
The noticeable issues are (1) unpredictable scrolling of the terminal window and (2) a super-buggy text box for inputting the prompt.
In particular, if I mash the arrow keys too fast while moving around and editing the prompt, CC's and my terminal's ideas of where the cursor is get out of sync somehow; it's tricky to get them re-aligned, and I can't actually input text until I do. The vim mode lets me bypass this, but it has its own bugs and is missing a ton of features that I expect. Visual selection in particular seems to be missing? I'm not entirely certain which things I'm used to are stock vim features vs Spacemacs features, but I'm pretty sure visual mode is the former. Regardless, only the very basics seem to actually work: "w", "b", "e", "cw/b/e", "dw/b/e", "esc/i".
So for the most part I actually just edit CC prompts in emacs and paste them.
I resort to this workaround because I am very motivated to use Claude Code. For a less-useful piece of software I would probably just give up.
I really love Claude Code, but it's wild to me if others aren't seeing this.
Is Ctrl+R usable at all? I've given up on it, the whole screen just starts scrolling madly most of the time. Not that I have to press Ctrl+R to get that bug to happen, it's just the most reliable way to do so.
And I've had the input box stuck not accepting input or not allowing me to delete past a certain point a hundred times. By now I know how to get it unstuck (although I couldn't tell you - my fingers figured it out but my brain doesn't know).
I've built terminal applications and when not using a dedicated alternate buffer, things like multiline text input and navigation are so easy to screw up. Not to mention when you have to do all the tricks to properly detect key strokes, pastes, etc. It's a mess of printing special codes and carriage returns.
I'm guessing they're using an abstraction of some sort, but IMO they've shipped a lot of great features, and it's definitely usable.
That being said- they could just build / use something more like a jupyter notebook and have a wildly more stable and rich experience. Or a classic tui app, but pros and cons.
> That being said- they could just build / use something more like a jupyter notebook and have a wildly more stable and rich experience.
Right, part of the reason it stands out is that we're conditioned to much more functional text input in claude.ai (or competing web apps like ChatGPT).
I assume part of the motivation for the terminal app concept is that all the tool calls run in a deterministic environment (whatever was the environment of the shell where you launched "claude"). A Jupyter-type approach would really muddle up that whole picture (at least from a user perspective).
While "disaster" is strong language, Claude Code isn't really a well-engineered product; they're just kinda trying shit, and they don't have a clear long-term vision. The core prompts and agent loop are good, though. It's too bad it's not open source, so that someone could implement them in a client with good UX/engineering (at least not without disassembling Claude Code, which is legally questionable).
You can extract prompts with mitmproxy/netcat, and AFAIK there isn't much more to it (bash and todo list are all you need in terms of tools), there's already a lot of simpler tools with better ux:
- sst/opencode and charmbracelet/crush -- related "cc clones" with top tier UX; opencode has near feature parity with cc, crush is more barebones
- block/goose -- a lot of multi-model features and extensions (it's practically a framework), but UI is pretty basic
- antinomyhq/forge -- similar to goose, but last week they merged some PRs with agent-agent communication, yet to see how it works out
- openai/codex, gemini-cli -- both somehow don't even have a way to resume a conversation
- avante.nvim with mcphub.nvim -- neovim plugin that emulates cursor to a degree; has a crazy good hack that makes even older models like gpt4.1 "more agentic" -- it keeps reprompting the model with "STFU and write code" until the model calls a "task_completed" tool; gets diagnostics, formatting and anything else neovim can do "for free"
For the sake of completeness, closed-source:
- amp-cli -- absolutely barebones, zero configuration (they even decide what model you're using for you); one problem -- closed source, no BYOK or subscription, pay per token only
- cursor-cli -- atm unusable, can't even set a global context file
- codebuff -- yet to try it myself, but they have some sort of an overengineered setup with 5+ different models (reasoner/coder/file picker (!)/fast apply/...), curious to see how it works in practice (I'm assuming this setup is strictly worse than a single sonnet4/gpt5, but much cheaper)
Claude does have a lot of unique/rare (for now) features -- hooks, sub-agents, background jobs, planning mode, per-prompt reasoning effort controls, executable bash in slash commands.
Only half of them are really useful IMHO, but I wouldn't know that if they didn't have them.
I replied to a sibling comment with my observations - the upshot is the actual user interaction is quite buggy in my experience.
If you typically compose prompts in a separate editor and paste them in you aren't likely to even notice. But it's the kind of thing that would drive me up the wall in a piece of software whose primary function was less impressive.
Interesting to watch the explosion of projects, even whole startups, which are just a feature addition to Claude Code. Shows how beloved it has become.
Great idea, but I've set it up and the app is pretty unusable for me; there is some sort of blocking process which runs every few seconds and freezes the UI, so you can't interact with it properly.
Ref: https://github.com/google-gemini/gemini-cli/blob/main/docs/c...
https://cline.bot/blog/how-i-learned-to-stop-course-correcti...
It's fine if you just rebase at the end manually, but if you don't, your history will be cluttered and as hard to read as the codebase.
Eventually most people who use coding tools will have little knowledge of what is being generated, and then they'll probably never rebase either...
How long until we start seeing software products for scrum management and T-shirt-size estimation for Claude Code?
Introduce waterfall methodology to the LLM!
(These people don't realise that there are a lot of tradeoffs that pop up during implementation.)
- no large context
- no zipfile uploads
- no multi file downloads