> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
My intelligence is trained by paying close attention to who is doing the talking. Some people know a lot about one topic, which means they didn't spend all of that time learning other things. Many don't know this about themselves.
Wikipedia had some comical instances where high-quality contributors accidentally ventured into other areas where they spontaneously transformed into ignorant trolls.
These companies are sitting on a never-ending stream of human created data. What do you think happens to your conversations or other interactions with AI? Quality might be a bit sus though.
I'd imagine it's really low quality data. Most or all of my conversations with an LLM are questions or telling it to do something, with varying levels of specificity
I'm not sure what they'd get from training on that
> If human response is "That's BS", "fuck off", or something similar, mark as bad assistant message.
Marking is not a trivial task though. Use some AI system to mark it and you get a 99.something% filter, maybe, but whatever that remainder is leaks through. Over time your filter may get worse as a result.
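A toy illustration of the leak (all numbers made up, and this ignores the separate worry that the filter itself degrades):

    # Toy model, all numbers assumed: each round some share of candidate data is
    # synthetic and a filter catches 99% of it; the 1% that slips through still
    # lands in the training set, and the synthetic share of the web keeps growing.
    catch_rate = 0.99
    synthetic_share = 0.10  # assumed starting share of synthetic candidate data

    for generation in range(1, 6):
        leaked = synthetic_share * (1 - catch_rate)
        kept_human = 1 - synthetic_share
        leaked_fraction = leaked / (leaked + kept_human)  # synthetic share of what survives the filter
        print(f"generation {generation}: {leaked_fraction:.3%} of the filtered set is synthetic")
        synthetic_share = min(0.9, synthetic_share + 0.15)  # assumed growth of slop on the web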
I'm in the process of messing around with a new distro where things are not quite what I am used to, and the usual suspects have been pretty helpful there... except for when they just make shit up
Grok is the only one that swore back at me. I kinda liked that. The others are way too polite, "Artificial Intelligence? Artificial Canadians, more like", my uni-going kid joked.
Every time you tell it to do something, it does it, and you don't correct it, that's a weakly positive signal. If you tell it to do it again with further clarification, that's also a signal. Sometimes I feel like I am giving them free work when chatting... I guess the trade is sort of equitable. Answers in exchange for data.
I sometimes wonder if they're vulnerable to a coordinated effort of deliberately upvoting shit assistant turns and praising in the next user turn - how much does that actually contribute to future training, if at all?
I had a very basic React question about useState while porting some vanilla code last week, and all models of all stripes I've tried it on have been confidently and completely incorrect about it, up to stating the code absolutely will not work, even when I take a turn to assert that I ran it and it does. So there's plenty of shit in there already.
Most of the human-created data is also very low quality. But it's also limited in other ways, such as how a lot of so-called high-quality data online is typically the finished answer to a question, with no serialization of the thought process that led to that answer.
I think he was referring not to finished content, but to the prompts humans put in when using chatbots. The prompts would show some of the thought process, but then they won't really show the answer (as that's output by the chatbot and not the human prompting it).
You can't. That appears to be a dark pattern by OAI, most likely designed to deceive you into uploading your sensitive material unaware that it's being used for training.
The real process involves submitting a request on another one of OpenAI's sites and awaiting a confirmation email (either their privacy or platform site).
Feel deceived and violated? Yeah, you, me and millions of other people, welcome to the club.
There wasn't any known active AI back then, but statistics on popular ideas and internet content were already a thing, and speech pollution based on those assessments had already started to spread fast, produced manually.
Sure, a lot of good content came out since then. But the amount of garbage... it's immense and very difficult to sort out automatically.
The major issue is that this garbage then _became_ the norm. Only people who lived back then can remember what it was like. For new folk, it looks just like a generational shift. However, it is quite obvious that some aspects of this shift were... unnatural (in the sense of not being spontaneous cultural manifestations).
I mentioned explicitly that I see what happened as distinct from a natural generational shift.
There are many phenomena around that era to support what I am saying. Like, for example, the first massive political campaign to leverage the internet as its primary vehicle.
Not sure why you're getting downvoted; content farms have been a thing for a long time, and many a spam website used crappy Markov chains to generate even more "content". Anything that could be marketed by a company had its search results drowned in hand-crafted bland marketing slop, and even before ChatGPT got popular, searching for things like recipes (or, god forbid, generic Windows error messages) was a nightmare. And a lot of that garbage is in LLMs' training data.
A scarier thought is that people will "talk" so much with these AIs that they'll start talking like ChatGPT. So we may still end up with some AI enshittification fixed point in the future, but one of the feedback paths will be human brains becoming enshittified.
Imagine you time travel 20 years into the future and find out everyone around you talks the same, and they all sound like ChatGPT.
If someone earnestly starts using those pointless platitudes LLM-generated slop is filled with (“You're absolutely right. Here's where I was wrong …”), I suspect they will quickly find that violence was never far off.
No need to do imaginary time travel; here are articles from almost 10 years ago with the exact same concerns about how Alexa fosters rudeness in children:
https://qz.com/701521/parents-are-worried-the-amazon-echo-is...
https://www.wsj.com/articles/alexa-dont-let-my-2-year-old-ta...
Kids are social creatures, and I don't think the interaction from AIs is going to be so overwhelming. At least looking back, I'd blame social media for today's brain rot more than Alexa, like these articles feared.
This reminds me of the Monsanto case, where they sued a farmer (and won) for using patented seeds that the farmer obtained from a local grain elevator which happened to contain some of Monsanto's seeds.
Should it eventually happen for LLM outputs, I hope we name it Slop Wars.
Maybe they do use watermarks, and the vendors that only offer hosted models can just log everything they've ever generated, but there are enough players all working on this stuff independently of each other that filtering out their own noise would only get them so far.
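For what it's worth, a minimal sketch of what "filtering out their own noise" could look like, assuming a vendor kept a log of everything it ever generated; the shingling and the threshold are my own guesses, not anything a lab has described:

    import hashlib

    def shingles(text: str, n: int = 8) -> set[str]:
        """Hash overlapping n-word windows so comparisons are cheap and a bit fuzzy."""
        words = text.lower().split()
        return {
            hashlib.sha1(" ".join(words[i:i + n]).encode()).hexdigest()
            for i in range(max(1, len(words) - n + 1))
        }

    def looks_self_generated(candidate: str, own_output_shingles: set[str],
                             threshold: float = 0.3) -> bool:
        """Flag a crawled document if too many of its shingles match logged outputs."""
        cand = shingles(candidate)
        overlap = len(cand & own_output_shingles) / len(cand)
        return overlap >= threshold

    # own_output_shingles would be built offline from the generation logs.

Exact paraphrases and every other vendor's output still sail straight through, which is the limit being pointed at here.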
I noticed that a big chunk of the default Llama 4 system prompt is devoted to suppressing various GPT-isms, which to me implies they weren't able to keep their newer training set from being contaminated by competing models.
> You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these.
Y'know, I've been writing double dashes and having them converted into em dashes about 50% of the time on whatever platform I'm using for decades. It's bizarre that this is suddenly supposed to be a shibboleth.
Apparently the new ageist insult beyond "boomer" is "double-spacer" -- people who were taught in school to always follow the period at the end of a sentence with two spaces when composing the next sentence. If you went to elementary school after the internet became widespread, you are not likely to have been taught that. So double-spacing has now also become a shibboleth, albeit indicating the typist's age, distinguishing early millennials and Xers, who are now entering middle/old age, from the younger generations.
Right? I've never associated "double-spacer" with boomer. Maybe anally retentive? Someone who is trying too hard? The only thing I associate with boomers is ALL-CAPS writing. Which I assume is a holdover from typewriter days. But I kind of like ALL CAPS. It conveys some level of importance to the message.
Considering most recent models' general knowledge cutoffs are still in the late 2023/early 2024 range, I'm guessing the answer is "yes, and AI companies are very much aware of it".
A good way to harvest new training material is to eavesdrop on real human conversations from non-polluted sources (such as microphones listening to people talk in public places, or texts), transcribe them, and feed them to LLMs.
It doesn't take much to clean up, say, 95% of mistakes, I reckon, as it tends to be pretty repetitive, and unless there's a bunch of wordplay happening, intention can be discerned.
In fact I wouldn't be surprised if this tainted information somehow enriches a dataset by providing an extra dimensionality for training specialized heuristics. Maybe this would turn out to be how LLM hallucination can be solved: by being able to accurately identify AI-generated material and, as a result, becoming more robust against both the identification and generation of nonsense.
Humans learn to discern what/who to best pay attention to via all manner of heuristics. I don't see in principle why LLMs (or something like them in the future) won't eventually be able to do the same.
This line of thought was exacerbated by that one paper that was then parroted (hah!) by every influencer / negativist in the space. It didn't matter that the paper was badly executed, that its setup was flawed, and that it got rendered moot by the existence of the Llama 3 models. People still quote it, or the "articles" stemming from it.
Not when it comes to math/programming/reasoning. You can of course generate infinite new problem-and-solution examples that are based on existing knowledge, but they build on top of it rather than distilling it.
A simple example would be chess AI. The core knowledge is the rules of the game. We have human-generated examples of play, but we don't really need them: we can (and we did) synthesize data to train the AI.
A similar pattern can be used for all math/physics/programming/reasoning.
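A minimal sketch of that kind of synthesis, using the python-chess library with a random policy standing in for real self-play; the point is only that positions and outcome labels come from the rules, not from humans:

    import random
    import chess  # pip install python-chess

    def random_selfplay_game(max_plies: int = 200):
        """Play random legal moves until the game ends; return (positions, result)."""
        board = chess.Board()
        positions = []
        while not board.is_game_over() and len(positions) < max_plies:
            positions.append(board.fen())
            board.push(random.choice(list(board.legal_moves)))
        return positions, board.result(claim_draw=True)  # "1-0", "0-1", "1/2-1/2", or "*" if unfinished

    # Label every position with the game's final result: synthetic (position, outcome)
    # pairs produced purely from the rules. A real pipeline would pick moves with the
    # current model instead of random.choice, and would drop the unfinished "*" games.
    dataset = []
    for _ in range(10):
        positions, result = random_selfplay_game()
        dataset.extend((fen, result) for fen in positions)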
> A similar pattern can be used for all math/physics/programming/reasoning.
No it can't, the pattern for chess worked since it was an invented problem where we have simple outcome checks; we can't do the same for natural problems where we don't have easily judged outcomes.
So you can do it for arithmetic and the like, where you can generate tons of questions and answers, but you can't use this for things that are fuzzier, like physics or chemistry or the choice of math theorems. In the end we don't really know what a good math theorem looks like; it has to be useful, but how do you judge that? Not just any truthy mathematical statement is seen as a theorem; most statements don't lead anywhere.
Once we have a universal automated judge that can judge any kind of human research output, then sure, your statement is true and we can train research AI that way. But we don't have that, or science would look very different than it does today. But I'd argue that such a judge needs to be AGI on its own, so it's circular.
> No it can't, the pattern for chess worked since it was an invented problem where we have simple outcome checks; we can't do the same for natural problems where we don't have easily judged outcomes.
You might be interested in some of the details of how AlphaGo (and especially the followup version) works.
Go is a problem where it's very difficult to judge a particular position, but they were still able to write a self-improving AI system that can reach _very_ high quality results starting from nothing, and only using computing power.
There does not appear to me to be any fundamental reason the same sort of techniques can't work for arbitrary problems.
> But I'd argue that such a judge needs to be AGI on its own, so it's circular.
But is it circular in a way that means it can't exist, or can it run in circles like AlphaGo and keep improving itself?
> Once we have a universal automated judge that can judge any kind of human research output, then sure, your statement is true,
If you've noticed, most LLM interfaces have a "thumbs up" or "thumbs down" response. The prompt may provide novel data. The text generated is synthetic. You don't need an automated judge, the user is providing sufficient feedback.
I’m extremely skeptical that “thumbs up” and “thumbs down” plus replies to chatbots is sufficiently informative to train models to the same level of quality as models trained on user generated content.
I know they're training with synthetic data; I didn't realize that it has been done at scale for long enough to really know if it improved things (assuming the metrics it's improving are defined well).
Yes, there's a podcast with the post-training lead for Llama 3 where he mentions this. Lemme try and find it.
edit: found it. The money quote is below, but I really recommend the entire podcast since it's full of great tidbits and insights.
https://www.latent.space/p/llama-3
> Thomas [00:33:44]: You mean between supervised fine-tuning like supervised fine-tuning annotation and preference annotation? Yeah. So 100% to RLHF. In fact, that's quite interesting. You start for Llama 2 with a pre-trained model and you have to have an instruction model to chat model. Otherwise, like the model is just like continue finishing sentences. So you need that to start RLHF. So we had to annotate like 10,000 examples. What did we do for Llama 3? You start with a new pre-trained model and then you want, before starting the RLHF, to have now a chat model, which is not too bad. The option one was, let's do human annotation again, like SFT stage. But in fact, by the principle I said before, the annotation would be actually worse than Llama 2. So what we did is that we generated all the data on the prompts with Llama 2 and we applied like basically the last round of Llama 2 we had to kick off and start Llama 3 post-training. So Llama 3 post-training doesn't have any like human written answers there basically, almost. It's just leveraging pure synthetic data from Llama 2.
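Roughly, the loop he's describing looks like the sketch below. The helper functions are stand-ins, not Meta's actual pipeline; the point is that the previous model writes the answers and humans only supply prompts and preferences.

    # Hypothetical sketch of the Llama 2 -> Llama 3 bootstrap described above.
    # The three helpers are stand-ins for the previous chat model, a reward model,
    # and an SFT run; none of this is Meta's actual code.

    def generate(prev_model, prompt):            # stand-in for the previous-generation chat model
        return f"[{prev_model} answer to: {prompt}]"

    def reward_model_score(prompt, answer):      # stand-in for a preference/reward model
        return 0.9

    def build_sft_set(prompts, prev_model, n_samples=4, min_score=0.8):
        """The previous model writes the answers; keep only the best-scored one per prompt."""
        examples = []
        for prompt in prompts:
            candidates = [generate(prev_model, prompt) for _ in range(n_samples)]
            best = max(candidates, key=lambda a: reward_model_score(prompt, a))
            if reward_model_score(prompt, best) >= min_score:
                examples.append({"prompt": prompt, "response": best})
        return examples

    sft_set = build_sft_set(["Explain RLHF in one paragraph."], prev_model="llama-2-chat")
    # The new pre-trained model is then fine-tuned on sft_set before RLHF starts,
    # i.e. "model n-1 classifies, filters and enhances the dataset for model n".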
Synthetic data ought to be viewed as an extension of the training process rather than a genuinely new phenomenon. It can definitely help smooth things out and reinforce wanted behavior, but it's still derivative of the real data.
I mean, imagine linear least squares on a 2D graph.
I have a best fit line. Then I take random data on that line to train a new line.
I pretty much get the same line.
From an intuitive perspective... it doesn't get worse. At worst it stays the same.
Now imagine something a bit more complex. I have a best fit curve that's very close to a line.
I use random data from that curve to train a new best fit line.
I get something different now. Not necessarily worse.
I mean literally just take all your ideas of ML and imagine them on the 2D plane doing curve fitting. Retraining new lines from generated data doesn't necessarily make things worse.
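The 2D intuition is easy to check with numpy: fit a line, sample points from the fitted line, refit, and the coefficients barely move (the numbers below are just an illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # "Real" data: a noisy line.
    x = np.linspace(0, 10, 100)
    y = 2.0 * x + 1.0 + rng.normal(0, 1.0, size=x.shape)
    slope1, intercept1 = np.polyfit(x, y, deg=1)

    # "Synthetic" data: points sampled from the fitted line itself, then refit.
    x_syn = rng.uniform(0, 10, size=100)
    y_syn = slope1 * x_syn + intercept1
    slope2, intercept2 = np.polyfit(x_syn, y_syn, deg=1)

    print(slope1, intercept1)  # roughly 2.0 and 1.0
    print(slope2, intercept2)  # essentially identical: the refit doesn't get worse here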
Today we have humans being trained on LLM garbage: kids using it to do their homework, programmers using it to "learn" how to code, med students cheating their way through med school, etc. So the content these humans are producing and will produce is really just LLM statistical word jumbles, i.e. human-generated content will soon be as useless as LLM-generated content.
Yeah, I think it's because people want it to be true that LLMs will stop improving and regress. If nothing else, they can always just access data from the before times if it ever became an actual issue.
So much burying of the head in the sand from people in this industry, and so much wishful thinking that AI will stop before whatever they're good at. A little reminder: a couple of years ago most people hadn't even heard of LLMs.
Most of us wish we could stop hearing about LLMs. A little reminder that every year since 2022 has been the last year that anyone will ever do anything by hand according to you prophets.
I'll give it 3-4 years before it's gone the way of crypto: still exists, still used (and still useful in the right cases), not something most people are talking about. It's going to be really nice.
I don't really understand this take. You're unhappy because people are excited? May I ask, what do you work on and what do you get excited about that you'd rather advance? Do you think you're part of the group that is highly motivated and advancing ideas and people around them? Because I work with a lot of people in this group and this is the most exciting time I've experienced in my 30+ years. I am waking up and living and breathing this moment in time. I'm devouring more information than I ever have in my entire life, and it both feels like I can't get enough and like there's so much I'll never be able to keep up with. So I really hope whatever you have in your pocket is just as interesting and excites people this much.
Is the cynical depression coming from watching people build your future without you while you're not a part of the conversation? Correct me if I'm wrong.
If "most of us" wish we'd hear less about LLMs how come it's always the hottest topic? That doesn't compute. Wouldn't have most of you taken over the conversation by now and changed the topic? Or perhaps whatever else you're trying to advance is just not that interesting to most of us?
I'm not unhappy that anyone is excited, I'm frustrated by the amount of time and effort being wasted right now, and especially by how out of control people's expectations about these tools are. I say this as a daily user of multiple different LLM tools, including state of the art coding agents.
My take is neither cynical nor depressed (projection on your part perhaps?), it's seeing through the hype and applying critical thinking. Same thing that I've been doing for social and tech trends for the past 20 years. My work has nothing to do with any of this, but fwiw I do work in one of the most competitive and technically advanced industries in the world, and there is absolutely 0 chance of LLMs doing anything meaningful in the field.
It's the hottest topic on HN because of the specific type of person you get here. Check out lobste.rs for a much more representative and balanced take on LLMs, for example. "Most of us" meant humans, not tech bros.
The "core reasoning" part of AI may be increasingly important to improve, and its "database of factual knowledge" aspects may be less and less important, maybe increasingly a hindrance. So more focused and specialized training may take over toward increasing reasoning precision, and not this never-ending stream of new data.
So maybe we'll get better reasoning and therefore better generated data/content in the wild, without this negative feedback loop everyone is worried about.
If AI is meant to sound nearly identical to a human, you don't need more training data.
If it's meant to act as a natural language encyclopedia, we'll never get there with LLMs, which amount to natural language processing on top of a massively compressed dataset.
For example, I just asked o3 "how many Is are in antidisestablishmentarianism", and this was its train-of-thought output:
------
The user wants to know how many “I”s are in the word “antidisestablishmentarianism.” They probably mean the letter "I" in lowercase, though the question was a little broader with "Is."
I’ll treat it case-insensitively and count the total occurrences of the letter "i" (both uppercase and lowercase). I can count them manually now, or use a tool for faster analysis.
Analyzed
word = "antidisestablishmentarianism"
word_lower = word.lower()
word_lower.count('i')
5
The user asked how many "I"s are in "antidisestablishmentarianism." I’m taking it to mean whether it’s counting "i" or "I", and I’ll consider them as the same for simplicity. So, I’ll count the 5 occurrences of "i" in the word. I manually broke it down and verified there are indeed five "i"s, so the final count should be 5. Pretty straightforward, now that it's all clear!
Done
There are 5 “i”s in “antidisestablishmentarianism.”
------
You can come up with complicated definitions of "core reasoning" where this doesn't count, and I understand there's great demand for such definitions, but this clearly satisfies common-sense understanding of the term.
Then I guess Ubuntu has had reasoning for several decades:
sudp
Command 'sudp' not found, did you mean:
command 'sudo' from deb sudo (1.9.9-1ubuntu2.4)
command 'sudo' from deb sudo-ldap (1.9.9-1ubuntu2.4)
command 'sup' from deb sup (20100519-3)
command 'sfdp' from deb graphviz (2.42.2-6)
Try: sudo apt install <deb name>
I might just be on the opposite side of the aisle, but to me chain-of-thought is better understood as simply more context.
Of course there is ambiguity though, more context would be hard to distinguish from core-reasoning and vice versa.
I think LLMs/AI mean we can substitute reasoning with vast accumulations and relations between contexts.
Remember, RLHF gives the models some, and perhaps most, of these chains-of-thought, when there isn't sufficient text to scrape for each family of problems. When I see that chain-of-thought, the first thing I think of is my peers who had to write, rewrite, nudge, and correct these chains of thought, not core reasoning.
The CoT has that same overexplained step-by-step style so many RLHF'ers will be accustomed to, and much of it was authored/originated by them. And due to the infinite holes it feels like plugging, I don't call that RL reasoning.
Has the quality of art gone down since art was invented? Or has the quality of written text gone down since writing was invented? I think the answer is clearly no.
Humans have been trained on "human-generated data" (cultural artifacts) for centuries, and quality is not down. AI is only an accelerator of this process, but there is nothing inherent in creating "artifacts" that would pollute the original training data.
If anything, we should be worried about destroying nature, because that's the original inspiration for human-produced artifacts.
AI-generated content contains mistakes and hallucinations. Over time those mistakes will compound, because GAI doesn't consider truth or have a way of judging truth.
So yes, you can’t compare humans generating and picking influential content to AIs doing so.
GAI is a dead end IMO anyway; we've seen much more success with machine learning. GAI is good for fooling humans into thinking they see glimmers of intelligence.
Unfortunately, I don't really know if I can trust academics to analyze the development of large language models. No academic team has built an LLM. So... do people working at Stanford or Oxford really have good insight into how LLMs are developed?
If people at OpenAI, Anthropic, or Google said this, that would be interesting. But I don't think it makes sense any more to treat academic computer scientists as relevant experts here.
My understanding is that those building them don't really know how they work. Research into interpretability has fallen way behind as funding went towards features and scale.
Any understanding of how they work is largely theoretical, so that seems like a reasonable place for academics to lean in and join the conversation.
It doesn't really make sense to trust what OpenAI and friends say about this either, when admitting to any kind of scaling limits would go against the narrative propping up their multi-hundred-billion-dollar valuations. I guess we're just flying blind for now.
Recursively retrained their own LLM on its own output until it descends into gibberish in amusing or artistic ways?
https://en.wikipedia.org/wiki/The_Disintegration_Loops
With selection criteria, it's called evolution.
> I'm not sure what they'd get from training on that
If human response is "That's BS", "fuck off", or something similar, mark as bad assistant message.
If human response is "huh" or "cool", mark as good assistant message.
If on ChatGPT, watch how much scrolling the user does. If there's a lot, it's somewhat likely that the LLM outputted something useful.
That strategy would have holes, of course, but as long as it's better than guessing, something like that would be a useful heuristic.
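A minimal sketch of that kind of labeler; the cue lists and the scroll threshold are invented for illustration, not anything the vendors have described:

    # Toy labeler: judge an assistant turn by what the user does next.
    # Cue lists and the scroll threshold are invented for illustration.
    NEGATIVE_CUES = ("that's bs", "fuck off", "wrong", "that's not")
    POSITIVE_CUES = ("huh", "cool", "thanks", "nice", "works")

    def label_assistant_turn(next_user_message: str, scroll_px: int = 0) -> str:
        msg = next_user_message.strip().lower()
        if any(cue in msg for cue in NEGATIVE_CUES):
            return "bad"
        if any(msg.startswith(cue) for cue in POSITIVE_CUES):
            return "good"
        if scroll_px > 2000:  # user scrolled a lot, so the output was at least worth reading
            return "probably-good"
        return "unknown"

    print(label_assistant_turn("cool, thanks"))           # good
    print(label_assistant_turn("That's BS, it crashes"))  # bad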
Even very weak human signals can be immensely valuable over large enough datasets.
In Gemini you can turn off Gemini Apps Activity (warning: this deletes your chat log, so you need to copy-paste everything into notes).
Highly recommended.
"I previously opted out of model training by writing to the support team. Will you continue to honor my opt-out?
Yes. If you opted out by contacting support or using our privacy form, your account will represent that request."
https://help.openai.com/en/articles/7730893-data-controls-fa...
https://en.m.wikipedia.org/wiki/Eternal_September
Pay attention.
> Apparently the new ageist insult beyond "boomer" is "double-spacer"

Says who? I've seen "boomer" everywhere, but it's the first time I've heard about that one.
> In fact I wouldn't be surprised if this tainted information somehow enriches a dataset by providing an extra dimensionality for training specialized heuristics. [...]
This sounds like nonsensical word salad.
Heuristics are pattern matching. LLMs pattern match. LLMs may identify the patterns that indicate something is AI-generated.
What about this is confusing you?
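As a trivial sketch of that kind of pattern matching (the phrase list echoes the Llama 4 system prompt quoted upthread, and the scale is arbitrary):

    # Toy detector: score text by the density of stock LLM phrases.
    SLOP_PHRASES = (
        "it's important to", "it's crucial to", "it's essential to",
        "it's worth noting", "as an ai language model", "i hope this helps",
    )

    def slop_score(text: str) -> float:
        t = text.lower()
        hits = sum(t.count(p) for p in SLOP_PHRASES)
        return hits / max(1, len(t.split())) * 100  # stock phrases per 100 words

    print(slop_score("It's important to note that it's crucial to stay hydrated."))  # high
    print(slop_score("Drink water when you're thirsty."))                            # zero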
No. Synthetic data is being used to improve LLMs.
That doesn't mean there aren't ways to train a model incorporating synthetic data without seeing model collapse.
Llama 3 was post-trained on almost entirely synthetic data. Yes, it works. No, the model doesn't collapse (unless you want it to, of course).
What they did is use model n-1 to classify, filter, and enhance the datasets for model n.
> almost entirely synthetic data
Is that a documented thing?
(Most of my disks are still MBR as they're not big enough to be worth the hassle of using GPT.)
What is AI good at already — I mean apart from making a few people very rich and using tremendous amounts of resources to generate slop?
> So maybe we'll get better reasoning and therefore better generated data/content in the wild, without this negative feedback loop everyone is worried about.
Are you sure?
... and sigh ... it looks like I'm twiddling with a noddy!
> You can come up with complicated definitions of "core reasoning" where this doesn't count, and I understand there's great demand for such definitions, but this clearly satisfies common-sense understanding of the term.
Did we read the same response? It did write a lot of reasons, but it didn't do any reasoning at all; it just suddenly wrote "5" here.
There was no reasoning at all to arrive at 5, so no, your example just proves how these models are great at faking reasoning.