

Congrats then, you write better than an LLM!
Interestingly, your original comment is not much longer and I find it much easier to read.
Was it written with the help of an LLM? Not being sarcastic, I’m just trying to understand if the (perceived) deterioration in quality was due to the fact that the input was already LLM-assisted.
In order to make sure they were wealthy enough, I’m sure he personally tested them one by one, challenging them to send him a big donation in cryptocurrencies.
That’s what a committed President-slash-genius looks like!
60% success rate sounds like a very optimistic take. Investing in an AI startup with a 60% chance of success? That’s a VC’s wet dream!
“Eventually” might be a long time with radiation.
20 years after the Chernobyl disaster, the level of radiation was still high enough to give you a good chance of developing cancer if you went to live there for a few years.
https://www.chernobylgallery.com/chernobyl-disaster/radiation-levels/
I don’t know how much radiation these “tactical” weapons release, but if it’s comparable to Chernobyl, then even if the buildings were not damaged initially, I doubt they would be fit to live in after being abandoned for 30 or 40 years.
It was Anthropic who ran this experiment
the rest of Tokyo is mostly intact
and housing becomes much more accessible too when buildings are intact but their inhabitants have much shorter lives because of radiation
translation from Russian: they will keep bombing schools and hospitals in Ukraine, but now it’s going to be a “reaction” just because they were provoked
Quick recap for future historians:
- for a really brief part of its history, humanity tried to give kindness a go. A half-hearted attempt at best, but there were things like DEI programs, for instance, attempting to create a gentler, more accepting world for everyone. At the very least, trying to appear human to the people they managed was seen as a good attribute for Leaders.
- some people felt that their God-given right to be assholes to everyone was being taken away (it’s right there in the Bible: be a jerk to your neighbor, take away his job and f##k his wife)
- Assholes came back in full force, with a vengeance. Not that they had ever disappeared, but now they relished the opportunity to be openly mean for no reason again. Once again, True Leaders were judged by their ability to drain every drop of blood from their employees and take their still-beating hearts as an offering to the Almighty Shareholders.
I get what you mean and it’s a fair point. But I would still go with Meta as the most immediate threat in a war with the US.
Ignorant as I am, my understanding is that the phone manufacturer has some level of control over the way Android works, so it wouldn’t be as easy for Google to access any individual Samsung or Xiaomi phone as it is for Meta with WhatsApp, an app they fully control, with permission to use (way too many) phone features regardless of brand.
Plus, getting both Google and Apple to cooperate and coordinate sounds harder to me than just going to one company that is basically controlled by a single person.
They are basically at war with the US and there is this piece of US Tech that nearly everyone is carrying around and that can access their communications, precise location, microphone and camera.
It’s also owned by a company, Meta, that has a history of being used as a tool to manipulate public opinion. I have no particular sympathy for Iran’s leadership, but I can understand why they would advise that (and I don’t think WhatsApp is the only way for people to communicate with the outside world).
I can’t tell if it’s “the true cause” of the massive tech layoffs because I know jack shit about US tax law, but it does make more sense than every company realising at the same time that they over-hired, or becoming instant believers in AI-driven productivity.
The only part that doesn’t make sense to me is why hide this from employees. Countless all-hands with uncomfortable CTOs spitting badly rehearsed BS about why 20% of their team was suddenly let go, or why project Y, top of last year’s strategic priorities, was unceremoniously cancelled. Instead of “R&D is no longer deductible, so it costs us much more now”.
I would not necessarily be happier about being laid off, but this would at least be an explanation I feel I’d truly be able to accept.
Machine learning has existed for many years now. The issue is with these funding-hungry new companies taking their LLMs, repackaging them as “AI” and attributing every ML win ever to “AI”.
ML programs designed and trained specifically to identify tumors in medical imaging have become good diagnostic tools. But if you read in the news that “AI helps cure cancer”, it makes it sound like it was a lone researcher who spent a few minutes engineering the right prompt for Copilot.
Yes, a specifically designed and finely tuned ML program can now beat the best human chess player, but calling it “AI” and bundling it together with the latest Gemini or Claude iteration’s “reasoning capabilities” is intentionally misleading. That’s why articles like this one are needed. ML is a useful tool, but far from the “super-human general intelligence” that is meant to replace half of human workers by the power of wishful prompting.
It’s one of those things where periodically someone gets sanctioned and a few others get scared and stop doing it (or tone it down) for a while.
I guess SHEIN are either overdoing it or they’ve crossed the popularity threshold where companies come under more scrutiny.
I’ve never used SHEIN so I can’t tell if they use these practices or how bad they are, but from the article I see they allegedly use fake urgency messaging, which I know has been sanctioned in the EU before (the company I used to work for had to rush to remove it from our eCommerce site).
A company can tell you that the item you’re looking at happens to be the last one in stock, if it’s true. But if they lie about it so that you rush into a decision to buy it before it’s gone, then it’s a deceptive practice.
Depends on what you mean by “valid”. If you mean “profitable”, sure: fraud has always been a profitable business model.
But if you mean “valid” in terms of what Microsoft got out of their $455M investment, not so much: they wanted a great new AI model, not the output of a “human-powered” model pretending to be an AI.
I agree. I almost skipped it because of the title, but the article is nuanced and has some very good reflections on topics other than AI. Every technological advance is a tradeoff. The article mentions cars to get to the grocery store and how there are advantages in walking that we give up when we always use a car. Are cars in general a stupid and useless technology? No, but we need to be aware of where the tradeoffs are. And ultimately, most of these tradeoffs are economic in nature.
By industrializing the production of carpets we might have lost some of our collective ability to produce those hand-made masterpieces of old, but we get to buy ok-looking carpets for cheap.
By reducing and industrializing the production of text content, we have let our mastery of language decline, but we get to read a lot of not-very-good content for free. This pre-dates AI, btw, as can be seen in standardized test results in schools everywhere.
The new thing about GenAI, though, is that it upends the promise that technology was going to do the grueling, boring work for us and free up time for the creative things that give us joy. I feel the roles have been reversed: even when I have to write an email or a piece of code, AI does the creative part and I’m the glorified proofreader and corrector.
Cover letters, meeting notes, some process documentation: the stuff that for some reason “needs” to be done, usually written by people who don’t want to write it for people who don’t want to read it. That’s all perfect for GenAI.
“he’s no longer the sensitive man and caring lover that I used to know”
but… but… reasoning models! AGI! Singularity! Seriously, what you’re saying is true, but it’s not what OpenAI & Co are trying to peddle, so these experiments are a good way to call them out on their BS.