>Generative AI doesn't have a coherent understanding of the world
The very headline rests on a completely faulty assumption: that AI has any capacity for understanding at all, which it doesn't. That would require self-directed reasoning and self-awareness, both of which it lacks based on any available evidence. (Though there's no shortage of irrational defenders here who leap to claiming there's no difference between human consciousness and the pattern matching of today's AI technology, because they happened to have a "conversation" with ChatGPT, etc.)
That is emphatically not true - animals and small children that can't speak yet understand object permanence. If something has come from over there and is now here, then it's no longer there.
LLMs do not have that concept, and you'll notice it very quickly if you ask chemistry questions: an atom will appear twice in a reaction, and the LLM just won't notice. The approach has to change for AI to be useful in the physical sciences.
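A minimal sketch of the conservation check that comment is describing, i.e. that every atom on the left of a reaction must still be there on the right (the formulas and the deliberately simple regex parser here are illustrative assumptions, not something from the comment):

    import re
    from collections import Counter

    def count_atoms(formula):
        # Count atoms in a simple formula like "H2O" or "C6H12O6"
        # (no parentheses or hydrates -- deliberately minimal).
        counts = Counter()
        for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
            counts[element] += int(number) if number else 1
        return counts

    def is_balanced(reactants, products):
        # Every element must occur the same number of times on both sides.
        left = sum((count_atoms(f) for f in reactants), Counter())
        right = sum((count_atoms(f) for f in products), Counter())
        return left == right

    print(is_balanced(["H2", "O2"], ["H2O"]))               # False: an O atom vanished
    print(is_balanced(["H2", "H2", "O2"], ["H2O", "H2O"]))  # True: 2H2 + O2 -> 2H2O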
Yeah, I was going to call out MIT for pointing out the obvious, but there's enough noise and misunderstanding out there that this kind of article can lead to the 'I get it' moment for someone.
A map, if it is useful (which the subjective human experience of reality tends to be, for most people most of the time), is by definition a coherent view of the territory. Coherent doesn't imply perfect objective accuracy.
To be honest… it’s amazing it can have any understanding at all, given that the only “senses” it has are the 1024 inputs of d_model!
Imagine trying to understand the world if you were simply given books and books in a language you had never read… and you didn’t even know how to read or even talk!
So it’s pretty incredible it’s got this far!
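For what it's worth, a tiny sketch of what those "senses" amount to, using PyTorch purely as an illustrative assumption (the vocabulary size and token ids below are made up): every token the model ever perceives arrives as nothing but a d_model-wide vector of numbers.

    import torch

    d_model = 1024        # the embedding width the comment refers to
    vocab_size = 50_000   # hypothetical vocabulary size

    embed = torch.nn.Embedding(vocab_size, d_model)
    token_ids = torch.tensor([[101, 2023, 2003, 102]])  # hypothetical token ids
    x = embed(token_ids)
    print(x.shape)  # torch.Size([1, 4, 1024]) -- each token is just a 1024-number vector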
But what you describe is basically what every human on Earth does: you are born not knowing how to read or talk, and after years of learning, reading all those books may give you some understanding of the world.
Unlike LLMs, though, human brains don’t have a virtually infinite energy supply, cannot be parallelized, and have to dedicate their already scarce energy to many things other than reading books: moving in a 3D world, living in a society, feeding themselves, doing their own hardware maintenance (sleep…), paying attention not to die every single day, and so on.
So, for sure, LLM _algorithms_ are really incredible, but they are only useful if you throw a lot of hardware and energy at them. I’d be curious to see how long you’d need to train (not use) a useful LLM with only 20 W of power, which is more or less what we estimate the brain uses to function. (A rough sketch of that arithmetic follows below.)
We can still be impressed by the results, but not really by the speed. And when you have to learn the entire written corpus in a matter of weeks or months, speed is pretty useful.
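A rough back-of-envelope version of that 20 W question; the ~1,300 MWh figure is an assumed, commonly cited estimate for GPT-3-scale training energy, not something from this thread:

    # How long would GPT-3-scale training take on a brain's power budget?
    training_energy_wh = 1_300e6   # assumed ~1,300 MWh training-energy estimate, in Wh
    brain_power_w = 20             # the ~20 W figure from the comment above

    hours = training_energy_wh / brain_power_w
    years = hours / (24 * 365)
    print(f"{years:,.0f} years")   # roughly 7,400 years at a constant 20 W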
Pretty sure the human brain is far more parallelized than a regular CPU or GPU. Our image recognition, for example, probably doesn't take shortcuts like convolution (which exist because of the cost of processing each "pixel"); we do it all directly with those millions of eye neurons. Well, to be fair, there is a lot of post-processing and "magic" involved in getting the final image.
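A rough illustration of why convolution counts as a cost-saving shortcut (the image size, channel counts, and kernel size below are arbitrary assumptions): a dense, every-pixel-to-every-output connection needs orders of magnitude more weights than one small shared kernel.

    # Weight counts for a 224x224 RGB input producing 64 output channels.
    H, W, C_in, C_out, k = 224, 224, 3, 64, 3

    fully_connected = (H * W * C_in) * (H * W * C_out)  # every input value wired to every output
    convolution = (k * k * C_in) * C_out                # one 3x3 kernel shared across all positions

    print(f"{fully_connected:,}")  # ~483 billion weights
    print(f"{convolution:,}")      # 1,728 weights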
https://www.science.org/doi/10.1126/science.adg9774
https://www.technologyreview.com/2024/10/18/1105880/the-race...