
  • gregjor 1 hour ago
    Most people in the broad sense don't use AI at all, at least not willingly. From your article I think "most people" in this context actually refers to a very small number of your colleagues and people you have shown your work to.

    Many people who have used AI find it useless, in the sense that it doesn't add to their workflow and just creates more distractions and sources of error. Plenty of people have tried using AI, perhaps mistaking novelty for efficiency or increased "productivity" (always mentioned but never measured). So far the current crop of LLMs has managed to attract public interest (now waning) and piles of money (seeking a return), but has failed to achieve widespread adoption because it mostly doesn't add much value. Almost every big company announced AI initiatives and plastered "AI" all over their press releases, but almost none have found a "killer app" for AI.

    Of course if AI could "actively observe, understand context, and offer assistance at the right moment" -- the Holy Grail of current AI work -- then more people might find it useful. But that looks more and more like a chasm current LLM technology can't cross rather than something we'll see in the next version or two. I expect AI will turn out like full self-driving, just around the corner for decades, and maybe never really happening.

    • MakerBi 44 minutes ago
      Yes, it may be true that LLMs will never cross that chasm. But I wonder: if LLMs provided continuous help and suggestions, even if some of those suggestions were useless, would people still consider that a helpful AI assistant?
      • gregjor 39 minutes ago
        I can get continuous and mostly useless "help and suggestions" from my children or parents. I don't consider that helpful in terms of getting my work done.

        If LLMs could do what you describe then maybe they would get more traction. Right now they seem to hold our interest out of novelty, like a talking bird, and maybe help some people who have worse skill issues than the LLMs do. I may have it wrong, but I think the leap from current LLMs to software that truly understands and can reason like a person will prove very difficult.

  • MakerBi 2 hours ago
    Hi HN! Author here. I wrote this piece to share my observations about why AI tools, despite their capabilities, haven't been widely integrated into daily workflows. The key insight is that the current "passive" interaction paradigm, where users must actively initiate AI assistance, creates unnecessary cognitive overhead.

    I believe the next evolution in AI products will be "proactive AI" - tools that actively observe, understand context, and offer assistance at the right moment, similar to how Cursor AI works for coding. With AI model costs dropping significantly, this shift is becoming technically feasible.

    I'd love to hear your thoughts on this paradigm shift. Have you experienced similar friction with AI tools? What are your concerns about more proactive AI assistants? And for those building AI products, how are you thinking about reducing the cognitive burden for users?