We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
“and then retrain on that”
That’s called model collapse.
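The collapse is easy to demo even with a toy model. This is a hypothetical sketch, nothing like an actual LLM pipeline: fit a Gaussian to a small dataset, then "retrain" each generation only on samples drawn from the previous generation's fitted model. The estimated spread quietly dies off.

```python
import random
import statistics

random.seed(1)

n, generations = 10, 200          # tiny "dataset", many retraining rounds
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # gen 0: real data

history = []
for _ in range(generations):
    # "train" a model: fit mean and std to the current data
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    history.append(sigma)
    # "retrain on that": next dataset is sampled only from the fitted model
    data = [random.gauss(mu, sigma) for _ in range(n)]

print(f"std at gen 0: {history[0]:.3f} -> std at gen {generations - 1}: {history[-1]:.6f}")
```

Each generation's std estimate is slightly biased low and the errors compound, so the distribution steadily narrows toward a point: information is only ever lost, never recovered.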
Spoiler: He’s gonna fix the “missing” information with MISinformation.
She sounds hot
Unfortunately she can’t see you because of financial difficulties. You gotta give her money like I do. One day, I will see her in person.
So they’re just going to fill it with Hitler’s world view, got it.
Typical and expected.
I mean, this is the same guy who said we’d be living on Mars in 2025.
In a sense, he’s right. I miss good old Earth.
So just making shit up.
Don’t forget the retraining on the made up shit part!
I wonder how many papers he’s read since ChatGPT released about how bad it is to train AI on AI output.
Grandiose delusions from a ketamine-rotted brain.
“We’ll fix the knowledge base by adding missing information and deleting errors - which only an AI trained on the fixed knowledge base could do.”
Delusional and grasping for attention.
He knows more … about knowledge… than… anyone alive now
Hmm… this doesn’t sound great.
Isn’t everyone just sick of his bullshit though?
US taxpayers clearly aren’t, since they’re subsidising his drug habit.
If we had direct control over how our tax dollars were spent, things would be different pretty fast. Might not be better, but different.
At this point a significant part of the country would decide to airstrike US primary schools to stop wasting money and indoctrinating kids.
More guns?
“adding missing information and deleting errors”
Which is to say, “I’m sick of Grok accurately portraying me as an evil dipshit, so I’m going to feed it a bunch of right-wing talking points and get rid of anything that hurts my feelings.”
That is definitely how I read it.
History can’t just be ‘rewritten’ by A.I. and taken as truth. That’s fucking stupid.
It’s truth in Whitemanistan though
The plan to “rewrite the entire corpus of human knowledge” with AI sounds impressive until you realize LLMs are just pattern-matching systems that remix existing text. They can’t create genuinely new knowledge or identify “missing information” that wasn’t already in their training data.
What he means is correct the model so all it models is racism and far-right nonsense.
Remember the “white genocide in South Africa” nonsense? That kind of rewriting of history.
It’s not the LLM doing that though. It’s the people feeding it information
Try rereading the whole tweet, it’s not very long. It’s specifically saying that they plan to “correct” the dataset using Grok, then retrain with that dataset.
It would be way too expensive to go through it by hand
Yes.
He wants to prompt grok to rewrite history according to his worldview, then retrain the model on that output.
Literally what Elon is talking about doing…
But Grok 3.5/4 has Advanced Reasoning
Surprised he didn’t name it Giga Reasoning or some other dumb shit.
Gigachad Reasoning
To be fair, your brain is a pattern-matching system.
When you catch a ball, you’re not doing the physics calculations in your head; you’re making predictions based on an enormous quantity of input. Unless you’re being very deliberate, you’re not thinking before you speak every word: your brain’s predictive processing takes over and you often literally speak before you think.
Fuck LLMs, but I think it’s a bit wild to dismiss the power of a sufficiently advanced pattern-matching system.
I said literally this in my reply, and the lemmy hivemind downvoted me. Beware of sharing information here I guess.
Generally, yes. However, there have been some incredible (borderline “magic”) emergent generalization capabilities that I don’t think anyone was expecting.
Modern AI is more than just “pattern matching” at this point. Yes at the lowest levels, sure that’s what it’s doing, but then you could also say human brains are just pattern matching at that same low level.
Nothing that has been demonstrated makes me think these chatbots should be allowed to rewrite human history what the fuck?!
Tech bros see zero value in humanity beyond how it can be commodified.
That’s not what I said. It’s absolutely dystopian how Musk is trying to tailor his own reality.
What I did say (and I’ve been doing AI research since the AlexNet days…) is that LLMs aren’t old-school ML systems, and we’re at the point where simply scaling up to insane levels has yielded results no one expected. It was the lowest-hanging fruit at the time: few-shot learning → novel-space generalization is very hard, so the easiest method was just to take what was already being done and make it bigger (à la ResNet back in the day).
Lemmy is almost as bad as reddit when it comes to hiveminds.
You literally called it borderline magic.
Don’t do that? They’re pattern recognition engines, they can produce some neat results and are good for niche tasks and interesting as toys, but they really aren’t that impressive. This “borderline magic” line is why they’re trying to shove these chatbots into literally everything, even though they aren’t good at most tasks.
It’s clear you don’t really understand the wider context and how historically hard these tasks have been. I’ve been doing this for a decade and the fact that these foundational models can be pretrained on unrelated things then jump that generalization gap so easily (within reason) is amazing. You just see the end result of corporate uses in the news, but this technology is used in every aspect of science and life in general (source: I do this for many important applications).
“If we take this 0.84 accuracy model and train another 0.84 accuracy model on it that will make it a 1.68 accuracy model!”
~Fucking Dumbass
1.68 IQ move
More like 0.7056 IQ move.
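For what it’s worth, 0.7056 is just 0.84², and that’s roughly what you get when a “student” model learns from a “teacher’s” labels instead of ground truth. A toy simulation (a made-up labeler setup, assuming teacher and student errors are independent and, with many classes, almost never cancel out):

```python
import random

random.seed(0)

N, K = 100_000, 1_000             # examples, classes (many classes: errors rarely coincide)
TEACHER_ACC = STUDENT_ACC = 0.84

def noisy_copy(label: int, acc: float) -> int:
    """Return the label with probability `acc`, otherwise a random class."""
    return label if random.random() < acc else random.randrange(K)

truth = [random.randrange(K) for _ in range(N)]
teacher = [noisy_copy(t, TEACHER_ACC) for t in truth]      # model trained on real data
student = [noisy_copy(t, STUDENT_ACC) for t in teacher]    # model retrained on teacher output

acc = sum(s == t for s, t in zip(student, truth)) / N
print(f"student accuracy vs ground truth: {acc:.4f}")      # roughly 0.84 * 0.84 = 0.7056
```

The student is right only when the teacher was right *and* it copied correctly, so accuracies multiply instead of adding. Each round of “retrain on that” pushes you further from the truth.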
[My] translation: “I want to rewrite history to what I want”.
That was my first impression, but then it shifted into “I want my AI to be the shittiest of them all”.
Why not both?