18 comments

  • simonw 1 hour ago
    Suggestion: publish a live demo of the "needle playground". It's small enough that it should be pretty cheap to run this on a little VPS somewhere!
    • quantumleaper 1 hour ago
      Should be quick and easy with WebGPU, too.
      • simonw 1 hour ago
        That's an even better idea, I bet this could run in Transformers.js.
      • ilaksh 1 hour ago
        Good idea. Could you make that?
    • HenryNdubuaku 1 hour ago
      Thanks, yeah, the problem is just handling scale: we don't have the infra ready to go, but anyone can do that. It's easy for people to run on their laptops straight up. Will try the VPS route.
  • ilaksh 2 hours ago
    Hmm.. this might make it feasible to build something like a command line program where you can optionally just specify the arguments in natural language. Although I know people will object to including an extra 14 MB and the computation for "parsing" and it could be pretty bad if everyone started doing that.

    But it's really interesting to me that that may be possible now. You can include a fine-tuned model that understands how to use your program.

    E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom`
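    A rough, runnable sketch of the idea (everything here is hypothetical: a hard-coded table stands in for the fine-tuned model, and the `toolcli` flags are the made-up ones from the example above):

```python
def nl_to_args(request: str) -> list[str]:
    # Stand-in for the 26M model: map a natural-language request to
    # concrete CLI arguments. A real build would run the model here.
    table = {
        "what can you do": ["--help", "summary"],
        "add tom to teamfutz group": ["--gadd", "teamfutz", "tom"],
    }
    return table.get(request.lower().strip(), ["--help"])

def toolcli(request: str) -> list[str]:
    # Return the argv the "model" produced; a real wrapper would
    # exec the binary with these arguments instead of returning them.
    return ["toolcli"] + nl_to_args(request)
```

    The interesting part is only the second function: the binary stays unchanged, and the model is just an optional front end that emits argv.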

    • HenryNdubuaku 2 hours ago
      So Needle is trained for INT4; what you see in the playground is INT4, only 14 MB. Same challenge, though.
      • ilaksh 2 hours ago
        Oh gotcha. Fixed my comment.
  • kristopolous 1 hour ago
    That M versus B is way too subtle. 0.026B is my suggestion.
    • HenryNdubuaku 1 hour ago
      Haha, we were trying not to be too hand-wavy :)
  • rsolva 22 minutes ago
    Can it summarize text it fetches?

    Come to think of it, this could be a nice model to have as the first pass in a more complex agent system, where Needle hands off the results of a tool call to a larger model.

    I will definitely play around with this!
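    A minimal sketch of that two-stage setup, with stub functions standing in for both models (the keyword router plays Needle's part, and the truncating "summarizer" plays the larger model; all names are made up):

```python
def route(request: str):
    # Stand-in for Needle: pick a tool and its arguments from the request.
    if request.startswith("fetch "):
        return "fetch_url", {"url": request.split(" ", 1)[1]}
    return None, {}

def summarize_with_big_model(text: str) -> str:
    # Stand-in for the larger downstream model.
    return "summary: " + text[:40]

def first_pass_agent(request: str, tools: dict) -> str:
    # Small model routes, tool executes, big model gets the result.
    name, args = route(request)
    if name is None or name not in tools:
        return "no matching tool"
    return summarize_with_big_model(tools[name](**args))
```

    The appeal is that the cheap model handles the latency-sensitive routing step, and the expensive model only sees already-fetched content.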

    • HenryNdubuaku 13 minutes ago
      The codebase is fully open, feel free to play around!
  • BoredPositron 2 minutes ago
    I source old, defective high-end radios with timeless designs from brands like Grundig or Braun, and replace the original hardware with a Raspberry Pi while using the original audio parts to build custom smart speakers. Reliable hotword detection and voice command recognition have been a persistent challenge over the years, but Whisper and other small models have helped enormously. At the moment I have Ollama running on my server with Qwen 9B, which works fine, but a 26M model that could be deployed on the Pi itself would be amazing.
  • zamalek 42 minutes ago
    Is the idea here to add function calling to models that don't have it, or even improve function calling (qwen quirks)?
    • HenryNdubuaku 22 minutes ago
      So it’s a tiny model capable of function calling that could run locally on cheap devices.
  • bityard 12 minutes ago
    This is pretty much exactly what I want for Home Assistant. I yell out, "Computer! Lights!" and it toggles the lamp in the room on or off. (I mean I can do that now, I think, but probably with a much larger model.)

    I haven't played with it yet, but does it ever return anything other than a tool call? What are the failure modes? What if it doesn't understand the request? Does it ever say it can't find a tool? Does it get confused if there are two similar (but different) tools? Can it chain tools together (e.g. one tool to look up an address and another to get directions to the address)?

    I mean, I plan on downloading the model later tonight and finding out for myself, but since I'm stuck at work right now, I figured I'd ask anyway...
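    One way to probe those failure modes empirically is to wrap the model's raw output in a validator. This is a hypothetical harness (the JSON call format and the schema shape are assumptions, not Needle's documented output):

```python
import json

def classify_output(raw: str, tools: dict):
    # Classify the model's raw output as a well-formed call or one of
    # the failure modes asked about above. `tools` maps tool name to
    # the set of required argument names (a made-up schema format).
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return "not_a_tool_call", None
    name = call.get("name") if isinstance(call, dict) else None
    if name not in tools:
        return "unknown_tool", name
    missing = sorted(tools[name] - set(call.get("arguments", {})))
    if missing:
        return "missing_args", missing
    return "ok", call
```

    Running a batch of deliberately ambiguous or out-of-scope requests through something like this would answer most of the questions above.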

  • logdahl 1 hour ago
    I find this stuff super fascinating and have been thinking about it myself. Maybe one could bootstrap tiny models on a rather 'pure' procedural data set. Neglecting [0], of course...

    [0]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

    • HenryNdubuaku 53 minutes ago
      Sounds interesting, would love to see it too!
  • simonw 2 hours ago
    Looks like you need to open up access to https://huggingface.co/Cactus-Compute/datasets/needle-tokeni... - I get this error when trying to run the steps in your README:

    > Repository Not Found for url: https://huggingface.co/api/datasets/Cactus-Compute/needle-tokenizer/revision/main.

  • Havoc 1 hour ago
    Sounds interesting.

    Got a bunch of errors trying to run it on CPU though. Very likely connected to me running this in a container (unpriv LXC), but I figured for 26M, CPU would suffice.

    https://pastebin.com/PYZJKTNk

    • dakolli 1 hour ago
      It had better, considering its purpose is to run on devices with no GPU.
  • quadrature 40 minutes ago
    Does the model have capacity for in-context learning? If we give it examples of patterns, can it follow them?
    • HenryNdubuaku 22 minutes ago
      Not yet, but it's in the works!
  • murkt 1 hour ago
    Can this be a Siri-like core? Set me a timer, tell me what’s the weather, etc. Here is transcribed text and available list of tools for the model to call, and voice the output.
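    A toy sketch of that loop (the keyword table stands in for the model's tool choice, and everything returned would be handed to TTS; all names are invented for illustration):

```python
# Stand-in for the model's routing decision: utterance keyword -> tool call.
ROUTES = {
    "timer": ("set_timer", {"minutes": 5}),
    "weather": ("get_weather", {}),
}

def handle_utterance(transcript: str, tools: dict) -> str:
    # Route a transcribed utterance to a tool and return text to voice.
    for keyword, (name, args) in ROUTES.items():
        if keyword in transcript.lower():
            return tools[name](**args)
    return "sorry, no matching tool"
```

    The real pipeline would be speech-to-text in front and text-to-speech behind, with the small model replacing the keyword table in the middle.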
  • deepsquirrelnet 1 hour ago
    This is really cool. Any plans to release the dataset?
    • HenryNdubuaku 52 minutes ago
      We include the dataset pipeline in the codebase for now; we might release the dataset itself.
  • cmrdporcupine 1 hour ago
    This is very cool. I'm going to try to carve out some time to try building this into my MOO system ( https://codeberg.org/timbran/moor / https://timbran.org/moor.html ) as an alternative command parser front end.
    • Balinares 1 hour ago
      Man, I love that there are still people writing new MOO servers in 2026. Any game out there already running on mooR?
      • cmrdporcupine 11 minutes ago
        Many people tease that they will, and start... but then kinda stop. But mostly just been building my own bespoke thing on my own bespoke platform, and kinda running out of steam because I need to make $$ instead.
    • HenryNdubuaku 1 hour ago
      Thanks, let us know how it goes!
  • nhattruongadm 1 hour ago
    [flagged]
  • abhijithbabu 3 hours ago
    [flagged]
  • ac29 1 hour ago
    FYI, distilling Gemini is explicitly against the ToS:

    "You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights)."

    • Havoc 1 hour ago
      Yeah I think Google should shove that somewhere. They effectively distilled all the internet's knowledge into these models...without asking & without permission
    • HenryNdubuaku 1 hour ago
      Thanks. Needle doesn't compete with those tools, though, and the distillation process did not access the weights.
    • ilaksh 1 hour ago
      I think GLM 5.1 or Kimi 2.6 could substitute for this type of purpose.
    • iAMkenough 37 minutes ago
      FYI, Gemini was developed using stolen copyrighted works without author consent. The double standard is striking.
    • ForHackernews 1 hour ago
      So is copying all the books in the world.
    • vablings 1 hour ago
      Oh no! They stole the model weights! Distillation "attacks" is such bullshit
    • xgulfie 1 hour ago
      This is being downvoted but it's worth noting if only for the "be careful" aspect.

      That said, we need more people distilling models IMO, just be ready for a C&D and a ban