

I can’t imagine how sternly worded our letter to Russia is going to be! Scary stuff.
How does the madman theory work when your head honcho is an actual, bona fide madman? Don’t go anywhere, we’ll find out right after this quick commercial war!
“A grand transformation into AI is the only way out of growth declines resulting from a population shock,” the ministry said in a statement, referring to South Korea’s record low birthrate.
The funny bit is how “AI companions” are one of the most profitable uses of AI so far. See how THAT increases a country’s birthrate.
This is Analysis-Paralysis. Why should they spend all their time counting past crashes when they are busy increasing the production of new ones?
/s
Ah yes, “The Porn Loophole” was one of my favorites; I should still have it on a DVD somewhere.
LLMs can’t do protein folding. A specifically-trained Machine Learning model called AlphaFold did. Here’s the paper.
Developing, training and fine-tuning that model was a research effort led by two guys who got a Nobel for it. AlphaFold can’t do conversation or give you hummus recipes; it knows shit about the structure of human language, but it can identify patterns in the domain where it has been specifically and painstakingly trained.
It wasn’t “hey ChatGPT, show me how to fold a protein” is all I’m saying, and the “superhuman reasoning capabilities” of current LLMs still fall ridiculously short on much simpler problems.
As a paid, captive squirrel, focusing on spinning my workout wheel and getting my nuts at the end of the day, I hate that AI is mostly a (very expensive) solution in search of a problem. I am being told “you must use AI, find a way to use it”, but my AI successes are very few and mostly non-repeatable (my current AI use case is: “try it once for non-vital, non-time-sensitive stuff; if at first you don’t succeed, just give up; if you do succeed, you saved some time for more important stuff”).
If I try to think like a CEO or an entrepreneur, though, I sort of see where these people might be coming from. They see AI as the new “internet”: something that, for good or bad, is getting ingrained in everything we do, and that could bankrupt your company if you try too hard to do things “the new way”, but could also let it quickly fade into irrelevance if you keep doing things the old way.
It’s easy, with the benefit of hindsight, to say now “haha, Blockbuster could have bought Netflix for $50 million and now they are out of business”, but all these people who have seen it happen see AI as the new disruptive technology that can spell great success or complete doom for their current businesses. All hype? Maybe. But if I were a CEO I’d probably be sweating too (and having a couple of VPs at my company wipe up the sweat with dollar bills).
So, a few months ago China launched DeepSeek and the narrative in US media was all “the fact they didn’t have access to the latest Nvidia GPUs forced them to get creative and develop a model that is more efficient and cheaper”.
Now the US is falling behind in the “AI wars” because China has more energy for huge data centers?
How about the US get creative and develop LLMs that are actually useful and can work without sucking Gigafucks of electricity?
I don’t know how much Musk can be separated from Starlink. Not only because Starlink, as part of SpaceX, is privately held, but also because the main reason they now have a superior service to offer is that they got fucktons of money from government customers, which is also tied to Musk’s actions.
A big part of Musk’s involvement with politics is that everything he does, from EVs to rockets to, now, big energy-guzzling datacenters for AI, needs a lot of government backing, if not in terms of direct contracts then at least in terms of regulation and incentives.
Even his direct involvement with Trump wasn’t because he suddenly became a Nazi (he’s probably always been one, according to his own family) but in order to become even more entangled with government investments, even trying to control NASA directly.
And not only the US government. I remember Musk suddenly being everywhere in Europe pitching Starlink. Meloni’s government in Italy was grilled for allegedly agreeing to a big contract with Starlink.
but not in this order, the reverse would be so much more satisfying
“Trump insider” sounds like a tapeworm
but why am I soft in the middle? The rest of my life is so hard!
thank you for raising awareness on this, I had no clue
I want to boycott Lockheed Martin but man… I was really looking forward to getting that Black Hawk helicopter for Xmas! No, but really, a few of the companies on this list are a relative surprise (Bcom, AirBnB), others are well-known pieces of s*t, and a few are literally in the military industry and are probably involved in every conflict in the world (or are actively trying to be).
but… but… reasoning models! AGI! Singularity! Seriously, what you’re saying is true, but it’s not what OpenAI & Co are trying to peddle, so these experiments are a good way to call them out on their BS.
Congrats then, you write better than an LLM!
Interestingly, your original comment is not much longer and I find it much easier to read.
Was it written with the help of an LLM? Not being sarcastic, I’m just trying to understand whether the (perceived) deterioration in quality was due to the input already being LLM-assisted.
In order to make sure they were wealthy enough, I’m sure he personally tested them one by one, challenging them to send him a big donation in cryptocurrencies.
That’s what a committed President-slash-genius looks like!
A 60% success rate sounds like a very optimistic take. Investing in an AI startup with a 60% chance of success? That’s a VC’s wet dream!
or CEOs