Over the last three years, companies worldwide have invested between $30 billion and $40 billion in generative artificial intelligence projects. Yet most of these efforts have brought no real business…
This link is about a reasoning system, not reasoning. Reasoning involves actually understanding the knowledge, not just having it, and testing or validating it where the knowledge is contradictory.
An LLM doesn't understand the difference between hard and soft rules of the world. Everything is up for debate; everything is just text and words that can be ordered with certain probabilities.
It cannot check whether something is true; it just 'knows' that someone on the internet talked about it, sometimes with a resolution, but often without one or with contradicting ones…
It is a gossip machine that tries to 'reason' about whatever it has heard people say.
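(A toy sketch of that point, making no assumptions about any real model: a tiny next-word table built purely by counting a made-up corpus. It ranks continuations by how often people said them, with no way to check which one is true.)

```python
# Toy "gossip machine": next-word probabilities from raw counts.
# The corpus below is invented for illustration; real LLMs are far larger
# and more sophisticated, but the table still encodes frequency, not truth.
from collections import Counter, defaultdict

corpus = [
    "the earth is round",
    "the earth is round",
    "the earth is flat",            # contradictory "gossip" gets counted too
    "the moon is made of cheese",
]

counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def next_word_probs(word):
    """P(next | word), estimated purely from how often people said it."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("is"))
# -> {'round': 0.5, 'flat': 0.25, 'made': 0.25}
# Nothing in the table can say which continuation is true,
# only which one was repeated more often.
```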
You partly described reasoning tho
https://en.m.wikipedia.org/wiki/Reasoning_system
If you truly believe that, you fundamentally misunderstand the definition of that word, or you are being purposely disingenuous, as you AI brown-nose folk tend to be. But let's pretend for a second that you genuinely just don't understand: LLMs, the most advanced "AI" they are trying to sell everybody, are as capable of reasoning as any compression algorithm: jpg, png, webp, zip, tar, whatever you want. They cannot reason. They take some input and generate an output deterministically. The reason the output changes slightly is because they deliberately mix randomness into the process, for complicated but important reasons.

Again, to recap: LLMs and similar neural-network "AI" are as capable of reasoning as any other computer program you interact with, knowingly or unknowingly, which is to say not at all. Your silly Wikipedia page is about a very specific term, "reasoning system", which would include things like standard video game NPC AI, such as the zombies in Minecraft. I hope you aren't stupid enough to say those are capable of reasoning.
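(The "deterministic plus injected randomness" part can be sketched in a few lines; the logits below are made-up toy numbers, not any real model's output. The scoring step always produces the same probabilities; the run-to-run variation comes entirely from the sampling step.)

```python
# Minimal sketch: deterministic scoring, randomized sampling.
import math
import random

def softmax(logits, temperature=1.0):
    """Deterministic: the same logits and temperature always give the same probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["round", "flat", "made"]     # hypothetical next-token candidates
logits = [2.0, 0.5, 0.1]               # hypothetical scores from a "forward pass"

probs = softmax(logits, temperature=0.8)   # identical on every run

rng = random.Random()                  # seed it (random.Random(42)) and the picks repeat exactly
choice = rng.choices(tokens, weights=probs, k=1)[0]

print([round(p, 3) for p in probs], "->", choice)
# The probabilities never change between runs; only the sampled pick does.
```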