Intent Weaving for AI Coding Agents

(autohand.ai)

21 points | by igorpcosta 16 hours ago

3 comments

  • igorpcosta 16 hours ago
    Hey HN community, I’m Igor, co-founder of Autohand.ai in New Zealand.

    We’re building an open stack that lets AI coding agents deliver work with the discipline senior engineers expect. Our latest write-up, “Intent Weaving for AI Coding Agents,” breaks down how we encode strategy, policy, and telemetry into machine-executable intent, plus an honest inventory of where current agents fail (reasoning, repo awareness, testing, etc.).

    Highlights:

    - Mission compiler that turns business objectives into guardrail-rich plans for agents.
    - Knowledge graph + policy DSL so automation stays inside governance envelopes.
    - Pain-points matrix from real deployments; new benchmarks that punish regressions, not just passing unit tests.
    - Open-source pieces as we release them; Commander is already MIT-licensed.
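    To make the "policy DSL inside a governance envelope" idea concrete, here is a minimal sketch of what checking a proposed change against declarative policies could look like. This is purely illustrative: the rule names, fields, and `check_diff` helper are hypothetical, not taken from Autohand's actual DSL.

```python
# Hypothetical policy rules; field names are illustrative only.
POLICIES = [
    {"rule": "no_secrets_in_diff", "deny_patterns": ["AWS_SECRET", "PRIVATE KEY"]},
    {"rule": "tests_required", "min_new_tests": 1},
]

def check_diff(diff_text: str, new_test_count: int) -> list[str]:
    """Return the list of policy rules a proposed change violates."""
    violations = []
    for policy in POLICIES:
        # Deny-list rules: reject diffs containing forbidden substrings.
        for pat in policy.get("deny_patterns", []):
            if pat in diff_text:
                violations.append(policy["rule"])
                break
        # Coverage rules: require a minimum number of new tests.
        if "min_new_tests" in policy and new_test_count < policy["min_new_tests"]:
            violations.append(policy["rule"])
    return violations

print(check_diff("+ AWS_SECRET=abc123", new_test_count=0))
# -> ['no_secrets_in_diff', 'tests_required']
```

    An agent's output would only be merged when the violations list is empty; anything else gets routed back for revision.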

    We’d love feedback from folks shipping agentic workflows or wrestling with AI codegen drift. Where should we push harder? What failure modes have we missed?

    Link to our manifesto: https://autohand.ai/manifesto

    Thanks for reading, and be kind. Creating a new category means stretching before the skills are perfect.

    • ripped_britches 14 hours ago
      Please build patterns like manager/worker AI agent pairs. You spec a task and they work together on it in a loop, reviewing the code, etc.
  • visarga 14 hours ago
    Nice idea, I came up with a similar system. The idea is to map the "state space" of the agent and describe a number of discrete states, then assign a policy to each one. Both the state space mapping and the policies are generated by the agent after a discussion with the human: a chat-driven, LLM-based expert system, a problem-specific bunch of "when in situation X, do Y" rules.
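    The "when in situation X, do Y" dispatch described above might be sketched roughly like this. The state names, predicates, and policies are made-up examples for a codegen agent, not from any real system:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class State:
    name: str
    matches: Callable[[dict], bool]  # predicate over the agent's observation
    policy: str                      # directive to follow in this situation

# A problem-specific set of "when in situation X, do Y" rules (hypothetical).
STATE_SPACE = [
    State("tests_failing",
          lambda obs: obs.get("failing_tests", 0) > 0,
          "Fix the failing tests before writing new code."),
    State("missing_context",
          lambda obs: not obs.get("repo_indexed", False),
          "Index the repository and reread the task before acting."),
    State("ready",
          lambda obs: True,  # fallback state
          "Proceed with the planned change."),
]

def route(observation: dict) -> str:
    """Map the current observation to the first matching state's policy."""
    for state in STATE_SPACE:
        if state.matches(observation):
            return state.policy
    raise ValueError("observation matched no state")

print(route({"failing_tests": 2, "repo_indexed": True}))
# -> "Fix the failing tests before writing new code."
```

    In the scheme described here, the LLM would propose both the state predicates and the attached policies during the conversation with the human, rather than a developer hand-writing them.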
    • igorpcosta 14 hours ago
      That's very cool, I hadn't thought about that. How do you score the policies if they conflict or override previous directives?
      • visarga 14 hours ago
        My state space mapping approach, which is reminiscent of expert systems and tabular RL, only makes sense when you repeat the task in the same environment, so you can gradually discover the states and their policies. You can look at execution traces to make targeted policy adjustments after each run.

        Here is an example of a state space map rendered in 2D by PCA. It maps LLM research papers from 2025. It does not have policies attached to state positions yet, but can be used as a visual map.

        The projection: https://i.imgur.com/a9ESiXs.png

        The map itself: https://pastebin.com/pmGzFcPM

        A cool thing about both intent weaving and the state-space policy approach is that they do not prescribe a sequence of steps; they are more like a GPS map that allows rerouting toward the goal state at any moment. This is a more flexible description than a static procedure.
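        The GPS analogy can be made concrete: store a graph of states and legal transitions, and recompute a route to the goal from wherever the agent currently is, instead of following a fixed script. The states and edges below are hypothetical:

```python
from collections import deque

# Hypothetical state graph for a codegen task; edges are legal transitions.
EDGES = {
    "task_received":  ["context_loaded"],
    "context_loaded": ["patch_drafted"],
    "patch_drafted":  ["tests_passing", "tests_failing"],
    "tests_failing":  ["patch_drafted"],   # loop back and revise
    "tests_passing":  ["change_merged"],
    "change_merged":  [],                  # goal state
}

def reroute(current: str, goal: str = "change_merged") -> list[str]:
    """BFS from the current state to the goal; callable after any detour."""
    queue = deque([[current]])
    seen = {current}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []  # goal unreachable from here

# Rerouting works from any state, including after a failure:
print(reroute("tests_failing"))
# -> ['tests_failing', 'patch_drafted', 'tests_passing', 'change_merged']
```

        Because the route is recomputed from the current state rather than replayed from the start, a failure mid-task just means planning again from the state the agent actually landed in.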

  • raminf 12 hours ago
    Feel like we're revisiting heuristic planning and General Problem Solving by Simon, Shaw, and Newell.