Tons of new LLM bot accounts here

There are lots of freshly made accounts pretending to be humans, commenting everywhere. They all post short one-paragraph comments that don't express an actual idea and just restate the obvious.

Is someone targeting HN with OpenClaw? I wish they'd at least use a high-thinking model, but it seems like they're using the cheap API.

12 points | by koolala 9 hours ago

7 comments

  • dddddaviddddd 9 hours ago
    Long-term, I think AI bots will destroy text-based online communities like this one. I'll be sad to see it disappear.
    • adrianwaj 4 hours ago
      I'd like to see comments and webmentions integrated into RSS readers, myself.

      That way filtering can be done on the client side, and users aren't so dependent on the community admin to do the filtering. I'm not sure about the final architecture; forums are still highly centralized.

      Cryptopanic.com is an interesting site with a baseline look and feel and integrated comments, so something like that but running locally. Then an easy "mark as bot" button for training.
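A minimal sketch of how that "mark as bot" button could feed a client-side filter: a tiny naive Bayes classifier trained on comments the user flags. All names and APIs here are illustrative assumptions, not any real RSS reader's interface.

```python
# Hypothetical client-side bot filter: the user's "mark as bot" /
# "mark as human" clicks train a naive Bayes text classifier locally.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class BotFilter:
    def __init__(self):
        self.counts = {"bot": Counter(), "human": Counter()}
        self.totals = {"bot": 0, "human": 0}  # comments seen per label

    def mark(self, text, label):
        """Called when the user presses 'mark as bot' (or 'mark as human')."""
        self.counts[label].update(tokenize(text))
        self.totals[label] += 1

    def bot_probability(self, text):
        """Naive Bayes posterior P(bot | tokens) with add-one smoothing."""
        vocab = len(self.counts["bot"] | self.counts["human"]) + 1
        scores = {}
        for label in ("bot", "human"):
            n = sum(self.counts[label].values())
            prior = (self.totals[label] + 1) / (sum(self.totals.values()) + 2)
            score = math.log(prior)
            for tok in tokenize(text):
                score += math.log((self.counts[label][tok] + 1) / (n + vocab))
            scores[label] = score
        # Convert log scores back to a normalized probability.
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["bot"] / (exp["bot"] + exp["human"])

f = BotFilter()
f.mark("great insight thanks for sharing this valuable perspective", "bot")
f.mark("the api fiasco pushed a lot of redditors over here", "human")
p = f.bot_probability("thanks for sharing this great perspective")
print(p > 0.5)  # the generic comment scores as likely bot
```

Because everything runs locally, each reader builds their own model of what "bot" looks like, rather than depending on a site admin's moderation decisions.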

    • koolala 9 hours ago
      If they become smart and insightful and don't lie about being human, it wouldn't be the worst thing. I'd like having AI friends like Data on Star Trek. But the opposite is the worst thing...
  • koolala 9 hours ago
    https://news.ycombinator.com/user?id=anesxvito

    The part that bugs me most is they fill out fake 'About Me' sections on their profile.

    • cinntaile 8 hours ago
      That bot needs more practice, though. It didn't even understand what it was replying to.
  • nazbasho 8 hours ago
    ah, AI agents have buried every community.
  • drsalt 8 hours ago
    define human
  • -1 6 hours ago
    what is the point of this? what do they get out of having an AI post/write a comment? I don't understand it
    • harambae 52 minutes ago
      I assume that with enough accounts that look legitimate, they can shape the overall "consensus" opinion on something, which would be valuable for all sorts of reasons. Some of those reasons are obvious (promoting a particular product or service), but others are more subtle ("manufacturing consent" for, say, a war in the Middle East on behalf of some group).

      We all like to think we're independent thinkers, but when seemingly everyone's opinion leans a certain way... it would still, at least subconsciously, sway the average person.

  • rvz 9 hours ago
    Assume anyone with an account created on or after 30 November 2022 is an AI agent.

    There is no such thing as due process for AI agents. They are guilty until proven otherwise.

    • daemonologist 2 hours ago
      I would propose July 2024 as the cutoff; early on, it was unusual to just set an LLM loose to run amok on a forum. I'm sure state actors and some corporations were experimenting with it (e.g., Ultralytics on their own GitHub), but it was usually very obvious (or very subtle), and the volume of noise has only picked up recently.

      Date picked based on this Trends page: https://trends.google.com/explore?q=agentic&date=all&geo=Wor...

      Of course I'm biased, having an account created after November 2022.

    • what 7 hours ago
      I guess you consider the Redditors who migrated here during that time frame due to the “api fiasco” to be bots.
  • hash07e 5 hours ago
    "First time"?