7 comments

  • woolion 37 minutes ago
    > We assert that artificial intelligence is a natural evolution of human tools developed throughout history to facilitate the creation, organization, and dissemination of ideas, and argue that it is paramount that the development and application of AI remain fundamentally human-centered.

    While this is a noble goal, it seems obvious that this isn't how it usually goes. For instance, "free market" is often used as a dogma against companies that are actively harmful to society, as "globalization" often is: framed as an unstoppable force, so that any form of opposition is "luddite behavior". Another example is easier transport and remote communication, which have arguably broken down the social fabric. Or social media wreaking havoc on teens' minds. From there, it's easy to see why the technological system might be seen as an inherent evil. In Erewhon (1872), Butler already described the technological system as a force that human society could no longer contain once it tolerated it. There are already many companies penalizing their employees for not using AI enough, even when the employee's response is that the quality of the AI's output is not good enough for the work at hand, rather than any ideological objection.

    I'm neither optimistic nor pessimistic about the changes that AI might bring, but hoping for it to become "human-centered" seems almost as optimistic as hoping for "humane wars".

    • Izikiel43 12 minutes ago
      Globalization was great for poor countries, not so much for developed economies.
      • js8 2 minutes ago
        No it wasn't. Look at Joseph Stiglitz (Globalization and Its Discontents) and Ha-Joon Chang (Bad Samaritans, Kicking Away the Ladder) for counter-examples.
    • cowpig 16 minutes ago
      > "free market" is often used as a dogma against companies that are actively harmful to society

      This is a predominantly America-specific piece of propaganda, and it's pretty recent.

      Adam Smith's ideas are primarily arguments against mercantilism (e.g. things like using tariffs to wield self-interested state power), something he showed to be against the common good. The "invisible hand" concept is used to show how self-interested action can, under conditions of *competitive markets*, lead to unintentional alignment with the common good.

      Obviously that's a significant departure from the way it's commonly used today, where Thiel's book has influenced so many entrepreneurs into believing Monopolies are Good.

      But the history of this is very Cold War-influenced, where "free markets" were politically positioned as alternatives to the USSR's "planned economy", and slowly pushed to depart further and further from Adam Smith's original argument about moral philosophy.

  • sendes 18 minutes ago
    > We assert that artificial intelligence is a natural evolution of human tools.

    While this is actually asserted nowhere in the paper but the abstract, a whiggish narrative about a genuinely unprecedented technology, one such that it can replace and supersede human "labour" altogether (one is reminded of The Evolution of Human Science by Ted Chiang), sounds naive at best and dangerous at worst.

    • Zigurd 6 minutes ago
      I'm glad I can still count on HN to come across the correct use of a lesser-known definition of a word.
    • jebarker 14 minutes ago
      I don’t see why “natural evolution of human tools” implies “such that it can replace and supersede human labor altogether”. Can you clarify?
      • sendes 7 minutes ago
        A common error in historical thinking is to see human tools essentially as a positive linear plot of time against progress. But until AI, these tools shared the property of enhancing human cognition, because they couldn't do the thinking _for you_. AI can do just that, and for all the benefit it brings, seeing it simply as the next step in the "natural evolution of human tools" is alarmingly disarming coming from frontier thinkers.
  • GodelNumbering 11 minutes ago
    > Today, unlike in the Luddites’ time, we are already seeing skilled workers replaced not with lower-wage human labor, but with AI.

    To me this is the weakest claim in the article. It has been thrown around endlessly without proof.

    https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE

    Software engineer job openings, for instance, are at a two-year high (still far lower than the covid dislocations, though), yet arguably all enterprise AI was built or deployed in the last two years. We should have seen a crash in job openings if the AI job-replacement claim were correct.

    This is something I've spent some time thinking about (personally written article, not AI slop): https://www.signalbloom.ai/posts/why-task-proficiency-doesnt...

  • gradstudent 48 minutes ago
    I skimmed the paper a couple of times, hoping to find the promised (from the abstract)

    > pathway to integrating AI into our most challenging and intellectually rigorous fields to the benefit of all humankind.

    There's very little insight here, though. It seems mostly a retread of conversations we've been having in the academic community for a few years now. In particular, I was hoping to see some discussion of how we might restructure our educational institutions around this technology, given that the machines rob students of the opportunity to develop critical thinking skills. Right now our best idea seems to be a retreat to oral and written examinations, an idea which doesn't scale and which ignores the supposed benefits of human+AI reasoning. The alternative suggestion I've seen is to teach prompt engineering, which (a) seems hard for foundational subjects and (b) again outsources much of the thinking to the AI instead of extending the reach of human thought.

    • BDPW 40 minutes ago
      Physical classrooms don't really scale either, is that really a fundamental problem?
      • lo_zamoyski 22 minutes ago
        Indeed. Education isn't supposed to "scale". We've mucked around with education so much and subjected it to tech fad after tech fad that we hardly have anything resembling education.

        Because this has been going on so long, most people's reference point for what constitutes "education" is simply off, mistaking "training" or something like that for it. But the purpose of education is intellectual formation, the ability to reason competently, and the comprehension of basic reality, which enables genuine intellectual freedom (there are moral presuppositions, too; immorality deranges the mind). This is what the classical liberal arts were about.

        The very bare minimum criterion (and it is a very bare minimum) for someone to be able to claim to be educated is not only knowledge of their field, but knowledge of the intellectual nature, foundations, and basis of their field in the greater intellectual scope. I would not hold someone with only that bare minimum in especially high esteem vis-a-vis education, but even that bar is higher than what education today provides.

  • zaikunzhang 3 hours ago
    • anotherpaulg 1 hour ago
      Recorded 10 February 2026. Terence Tao of the University of California, Los Angeles, presents "Machine assistance and the future of research mathematics" at IPAM's AI for Science Kickoff.
  • bluecheese452 1 hour ago
    Enough Terence Tao spam.