Show HN: Container Use for Agents

(github.com)

71 points | by aluzzardi 1 day ago

5 comments

  • lmeyerov 19 hours ago
    Interesting. I have been doing a poor man's version of this with multiple git clone folders and 'docker compose -p'. Making that smoother is attractive, especially if it can be made opaque to our more junior teammates.
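
    Roughly this, in Python terms (the repo URL and agent names are made up):

        import subprocess

        REPO = "git@example.com:org/repo.git"  # hypothetical repo

        # One clone per agent; `docker compose -p` scopes each stack's
        # containers, networks, and volumes to its own project name.
        for agent in ["agent-a", "agent-b"]:
            subprocess.run(["git", "clone", REPO, agent], check=True)
            subprocess.run(["docker", "compose", "-p", agent, "up", "-d"],
                           cwd=agent, check=True)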

    On one hand, I have been curious about getting multiple agents to work on the same branch, but realized I can just wait until they do that natively.

    More so, all this feels like a dead end. I think OpenAI and GitHub are right to push toward remote development, so these local setups stop mattering. E.g., mark up a PR or branch in GitHub, come back as necessary, and do it all from my phone. If I want an IDE, it can be remote SSH.

  • shykes 1 day ago
    Hi all, we open-sourced this live on stage today at the AI Engineer World's Fair (great event, by the way).

    If you're interested, here's the keynote recording: https://www.youtube.com/live/U-fMsbY-kHY?t=3400s

  • steeve 1 day ago
    Very cool that this runs as an MCP server, and a very cool demo.
    • dboreham 21 hours ago
      Seems odd that the LLM is so clever it can write programs to drive any API, yet so dumb that it needs a new special-purpose protocol proxy to access anything behind such an API...
      • sharifhsn 21 hours ago
        It’s about resilience. LLMs are prone to hallucination. Although they can be very intelligent, their output isn’t 100% correct unaided. The protocol increases the resilience of the output, so there’s more of a guarantee that the LLM will stay within the lines you’ve drawn around it.
        • beardedwizard 17 hours ago
          That's really not true. Context is one strategy to keep a model's output constrained, and tool calling allows dynamic updates to that context. MCP is a convenience layer around tool calls and the systems they integrate with.
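
          To sketch what that layer wraps, in the common chat-completions tool format (the tool name and schema are illustrative):

              # A tool is just a schema the runtime advertises to the model.
              tools = [{
                  "type": "function",
                  "function": {
                      "name": "get_current_weather",  # hypothetical tool
                      "parameters": {
                          "type": "object",
                          "properties": {"location": {"type": "string"}},
                          "required": ["location"],
                      },
                  },
              }]

              # The model answers with a structured call; the runtime executes
              # it and appends the result to the context for the next turn.
              tool_call = {"name": "get_current_weather",
                           "arguments": '{"location": "Poland/Warsaw"}'}
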
      • nsonha 8 hours ago
        > LLM is so clever it can write programs to drive any API

        It is not. Name one piece of software that has an LLM generating code on the fly to call APIs. Why do people have this delusion?

        • TeMPOraL 2 hours ago
          Every runtime executing LLMs with tool support does it, starting with the first update to the ChatGPT app/webapp that made use of the earliest version of "function calling". Even earlier, there were third-party runtimes/apps (including scripts people wrote for themselves) that used OpenAI models via the API, with a prompt teaching the LLM a syntax it could use to "shell out", which the runtime would scan for.

          If you tell a model it can use some syntax, e.g. `:: foo(arg1, arg2) ::`, to make the runtime call an API, and then, based on the context of the conversation, the model outputs `:: get_current_weather("Poland/Warsaw") ::`, that is "generating code on the fly to call APIs". How `:: get_current_weather("Poland/Warsaw") ::` gets turned into a bunch of cURL invocations against e.g. the OpenWeather API is an implementation detail of the runtime.
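
          A minimal sketch of such a runtime loop (the regex, dispatch table, and weather stub are all illustrative):

              import re

              def get_current_weather(location):
                  # Stand-in for the runtime's real work, e.g. an HTTP
                  # request to some weather API.
                  return f"(weather for {location})"

              TOOLS = {"get_current_weather": get_current_weather}
              CALL = re.compile(r"::\s*(\w+)\((.*?)\)\s*::")

              def scan_and_dispatch(model_output):
                  # Scan the model's text for the agreed-upon syntax and
                  # route each match to the registered function.
                  for name, raw_args in CALL.findall(model_output):
                      args = [a.strip().strip('"') for a in raw_args.split(",") if a.strip()]
                      print(TOOLS[name](*args))

              scan_and_dispatch('Sure! :: get_current_weather("Poland/Warsaw") ::')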

        • ErikBjare 1 hour ago
          This is basically just function calling?
  • rahimnathwani 23 hours ago
    I'm curious: what do containers add over and above whatever you'd get using worktrees on their own?
    • shykes 23 hours ago
      They're complementary. git worktrees isolate file edits; containers isolate execution: building, testing, running dev instances.

      container-use combines both forms of isolation, containers and git worktrees, in a seamless system that agents can use to get work done.
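
      A toy sketch of combining the two, not the actual implementation (paths, branch name, and image are illustrative):

          import subprocess

          def make_env(branch):
              # File isolation: a dedicated worktree per agent branch.
              path = f"/tmp/worktrees/{branch}"
              subprocess.run(["git", "worktree", "add", path, "-b", branch], check=True)
              # Execution isolation: a container whose workspace is that worktree.
              subprocess.run(["docker", "run", "-d", "--name", branch,
                              "-v", f"{path}:/workspace", "-w", "/workspace",
                              "python:3.12", "sleep", "infinity"], check=True)

          make_env("agent-feature-x")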

    • brunoqc 23 hours ago
      I would guess isolation/safety.
  • kamikaz1k 20 hours ago
    Page is crashing my mobile Chrome.
    • akshayKMR 19 hours ago
      Freezing for me on desktop Safari. I think the culprit is the SVG-based demo in the README.md.
      • shykes 17 hours ago
        Sorry about that! We'll fix it.
      • meling 17 hours ago
        On iPad as well.