Wow, the difference in comments between ALife and AI stories on HN is really interesting.
For some of you out there, there's a great book that really hasn't gotten enough attention, "The Self-Assembling Brain" [1], which explores intelligence (artificial or otherwise) from the perspectives of AI, ALife, robotics, genetics, and neuroscience.
I hadn't realized the divide was as sharp as it is until I saw the difference in comments: this one[2] about GPT-5 has over 1000 emotionally intense comments, while comments on the OP story are significantly less "intense".
The thing is, if you compare the fields, you quickly realize that what we call AI has very little in common with intelligence. It can't even habituate to stimuli. A little more cross-disciplinary study would help us get better AI sooner.
Happy this story made it to the front page.
[1]: https://a.co/d/hF2UJKF
[2]: https://news.ycombinator.com/item?id=42485938
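For readers who haven't met the term, habituation just means that the response to a repeated, unchanging stimulus fades, while a novel stimulus still gets a full response. A minimal toy sketch in Python of that behaviour (the class name, decay/recovery constants, and stimuli are all invented for illustration; they aren't from the book or any particular model):

    # Toy illustration of "habituation": the response to a repeated,
    # unchanging stimulus decays, while a novel stimulus still gets a
    # full response. All names and numbers are made up for this sketch.
    class HabituatingUnit:
        def __init__(self, decay=0.6, recovery=0.1):
            self.decay = decay        # how fast a repeated stimulus loses effect
            self.recovery = recovery  # slow spontaneous recovery per step
            self.sensitivity = {}     # per-stimulus response strength (1.0 = full)

        def respond(self, stimulus):
            # every known stimulus recovers a little toward full sensitivity each step
            for s in self.sensitivity:
                self.sensitivity[s] = min(1.0, self.sensitivity[s] + self.recovery)
            strength = self.sensitivity.get(stimulus, 1.0)
            # the stimulus just seen habituates: weaker response next time
            self.sensitivity[stimulus] = strength * self.decay
            return strength

    unit = HabituatingUnit()
    print([round(unit.respond("tap"), 2) for _ in range(5)])  # decaying: 1.0, 0.7, 0.52, ...
    print(round(unit.respond("light"), 2))                    # novel stimulus: back to 1.0

A stateless forward pass over the same input, by contrast, returns the same output every time, which is roughly the gap being pointed at here.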
Apart from the obvious distinction that many of us on HN are making (or trying to make) money on LLMs, I think you've also hit on a broader point.
There appears to be a class of articles that has a relatively high ratio of votes to comments, and concerns topics such as Programming Language Theory or high-level physics. These are of broad interest and probably widely read, but are difficult to make a substantial comment on. I don't think there are knee-jerk responses to be made on Loop Quantum Gravity, so even asking an intelligent question requires background, thought, and reading the fine article. (Unless you're complaining about the website design.)
The opposite is the sort of topic that generates bikeshedding and "political" discussion, along with genuinely worthwhile contributions. AI safety, libertarian economics, and Californian infrastructure fall into this bucket.
This is all based on vibes from decades of reading HN and its forerunners like /., but I would be surprised if someone hasn't done a statistical analysis that supports the broad point. In fact I half remember dang saying that the comments-to-votes ratio is used as an indicator of topics getting too noisy and veering away from the site's goals.
I'd also highlight the misalignment between creating better AI and working towards AGI on the one hand, and extracting value right now from LLMs (and money from investors) on the other.
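As an aside on the comments-to-votes ratio mentioned a couple of comments up: it's easy to eyeball for any story via the public HN Firebase API. A hedged sketch follows; the endpoint and the "score"/"descendants" fields are the HN API as I understand it, so treat the exact field names as an assumption.

    # Hedged sketch: comments-to-votes ratio for an HN story via the public API.
    import json
    import urllib.request

    def comment_vote_ratio(item_id):
        url = f"https://hacker-news.firebaseio.com/v0/item/{item_id}.json"
        with urllib.request.urlopen(url) as resp:
            item = json.load(resp)
        votes = item.get("score", 0)
        comments = item.get("descendants", 0)  # total comment count for a story
        return comments / votes if votes else float("inf")

    # e.g. the GPT-5 thread linked as [2] above
    print(comment_vote_ratio(42485938))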
Thanks for your resources. I am myself concerned with the question of artificial life, and I wonder whether it is even possible to search for it, or whether it will rather emerge on its own. Perhaps, in a sense, it is already emerging, and we humans are its substrate...
I'm not even sure that the goal of Artificial Life is actually "life", although that may be the AGI equivalent of ALife -- AGL or "Artificial General Life"? In practice I think the discipline is much closer to the current LLM hype around "Agentic AI", but with more of a focus on the environment in which the agents are situated and the interactions between communities of agents.
Much like the term "Artificial Intelligence", the term ALife is somewhat misleading in terms of the actual discipline.
The overlap between "agentic AI" and ALife is so strong it's amazing to me that there is so little discussion between the fields. In fact it's closer to borderline disdain!
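To make that framing a bit more concrete, here is a tiny sketch of the ALife style of "agentic" setup: the individual agents are trivial, and the interesting part is the shared environment and the population-level dynamics. The world size, energy rules, and constants below are all invented for illustration and aren't taken from any specific ALife system.

    import random

    WORLD_SIZE = 30   # cells arranged in a ring
    FOOD_RATE = 0.2   # chance an empty cell grows food each step

    class Agent:
        def __init__(self, pos, energy=5):
            self.pos = pos
            self.energy = energy

        def step(self, food):
            self.pos = (self.pos + random.choice([-1, 1])) % WORLD_SIZE  # wander
            if food[self.pos]:            # eat whatever is at the new cell
                food[self.pos] = False
                self.energy += 3
            self.energy -= 1              # metabolic cost of existing

    def run(steps=100):
        food = [random.random() < 0.5 for _ in range(WORLD_SIZE)]
        agents = [Agent(random.randrange(WORLD_SIZE)) for _ in range(5)]
        for _ in range(steps):
            # the environment has its own dynamics, independent of the agents
            for i in range(WORLD_SIZE):
                if not food[i] and random.random() < FOOD_RATE:
                    food[i] = True
            for a in list(agents):
                a.step(food)
                if a.energy <= 0:         # starvation removes the agent
                    agents.remove(a)
                elif a.energy >= 10:      # surplus energy: reproduce
                    a.energy -= 5
                    agents.append(Agent(a.pos, energy=5))
        return len(agents)

    print("population after 100 steps:", run())

Nothing here is optimizing a task; whatever "behaviour" shows up emerges from the environment's rules and the community of agents, which is where the overlap with (and difference from) the current agentic-AI framing sits.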
I think it's a pretty natural reaction for a lot of people who have spent a significant amount of time and energy in the "artificial life"/"AI" space. My first introduction to artificial life was Steven Levy's book (1992), which captivated me over a decade ago when I was much younger.
Fast forward to now: after grinding away using "AI" in various capacities well before LLMs were what they are today, suddenly everyone and their brother is an "AI expert".
I guess the best parallel would be (assuming you are a professional full-time SWE/IC) if for some reason you were talking to elementary school students and found out that they are all being handed PhDs in software engineering at graduation because no one stopped them, and now everyone is an engineer. It's super bizarre.
Some people have a low bar for fun, for example, learning something new that connects to something they already knew, and saying to themselves, "Neat!"
Isn't this basically what (we'd like to think) HN is?
I actually found artificial life: Crocs. They keep on reproducing effectively and walking around (symbiotically with humans), with some mutation through the polysexual recombination process of Product Manager design reviews.
But I can't deny that it's beautiful. Unlike crocs.
https://youtu.be/-tDQ74I3Ovs?si=1m0JV8gZEl4WFedG
https://youtu.be/SFxIazwNP_0?si=R7yZroSNbw5Jjc0H
https://youtu.be/wwhTfyX9J34?si=ceXh_aehsjQPklUT
I mean it's Japanese for Fish, but yeah, perhaps we need a database of false cognates sorted by number-of-languages-that-consider-it-vulgar
As for Portuguese, GPTo3 tells me "depending on context it can mean “bastard,” “scumbag,” “dirty-minded jerk,” or imply that someone is a lecherous creep. It’s essentially an insult calling someone sleazy or untrustworthy."
Would you say that's about right?
> perhaps we need a database of false cognates sorted by number-of-languages-that-consider-it-vulgar
Or, like most people, we can assume the intent from the context and if someone says "Use git", we know they're not telling us to use a bum/rat/scum/whatever but the SCM :)