Artificial General Intelligence: the pipe dream of a technological intelligence that doesn’t just do one single thing but is generally capable, like a human.
Edit: recommended reading is “Life 3.0”. While I think it is overly positive about AI, it gives a good overview of the AI industry and its innovations, and the ideas behind them. You will have to swallow a massive chunk of Musk fanboyism, although to be fair the book predates Musk’s waving of the fasces.
Well, it does make sense in that the time during which we have AGI would be pretty short, because AGI would soon go beyond human-level intelligence. With that said, LLMs are certainly not going to get there, assuming AGI is even possible at all.
The fact that Microsoft and OpenAI define Artificial General Intelligence in terms of profit suggests they’re not confident about achieving the real thing:
The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. (Source)
Given this definition, when they say they’ll achieve AGI and beyond, they simply mean they’ll achieve more than $100 billion in profit. It says nothing about what they expect to achieve technically.
I get it. I just didn’t know that they were already using “beyond AGI” in their grifting copy.
Yeah, that started a week or two ago. Altman dropped the AGI promise too soon, so now he’s having to become a sci-fi author to keep the con cooking.
Dude thinks he’s Asimov, but anyone paying attention can see he’s just an L. Ron Hubbard.
Hell, I’d help pay for the boat if he’d just fuck off to go spend the rest of his life floating around the ocean.
You say that like Hubbard wasn’t brilliant, morals notwithstanding.
He sure as shit wasn’t a brilliant writer. He was more endowed with the cunning of a proto-Trump huckster-weasel.
We’re never getting AGI from any current or planned LLM or ML framework.
These LLMs and ML programs do exceed human intelligence, but only within a narrow, limited domain.
https://en.m.wikipedia.org/wiki/Superintelligence#Feasibility_of_artificial_superintelligence
Artificial Superintelligence is a term that is getting bandied about nowadays.
Ah ok, yeah, the “beyond” thing is likely pulled straight out of the book I mentioned in my edit.
Well that’s a pretty fucking ridiculous definition lol.
This should be its own post. Very interesting. People are not aware of this, I think.
I think I saw a post about exactly this.
what a joke. can’t wait for the shift and these parasites to go back underground