

Isn’t this the same immigrant that got pissed about someone tracking his private jet?
Are the drivers Tokyo drifters? Cause that could explain it
Facebook (Meta) owns WhatsApp. I don’t use WhatsApp, and that’s a hassle because it’s popular here, so being able to send messages to WhatsApp users without having a WhatsApp account or app would be useful.
The same could be said for sending emails to a Gmail account. You wouldn’t be forced to send messages to WhatsApp, but you’d have the option. WhatsApp is popular where I live, so being able to message a WhatsApp account would be useful for me, as I refuse to install or use any Facebook apps.
These seem like reasonable changes
Is this just a weird pregnancy announcement?
Zuck only cares about his privacy
Sweden… more like snitchden… amirite?
I agree, and I think this comes back to the execution of the technology as opposed to the technology itself. For context, I work as an ML engineer, and I was concerned about bias in AI long before ChatGPT. I’m interested in other folks’ perspectives on this technology. The hype and spin from tech companies are a frustrating distraction from the real benefits and risks of AI.
But I don’t think it’s the best option if you consider everyone involved.
Can you expand on this? Do you mean from an environmental perspective because of the resource usage, a social perspective because of job losses, and/or other groups being disadvantaged because of limited access to these tools?
It is the best option for certain use cases. OpenAI, Anthropic, etc. sell tokens, so they have a clear incentive to promote LLM reasoning as an everything solution. LLM reasoning is normally an inefficient use of processor cycles for most tasks. However, because it’s so flexible, it’s still the best option in many cases even though it’s inefficient from a cycle perspective: the current alternatives are even more inefficient (in processor cycles or in human time).
Identifying typos in a project update is a task that LLMs can efficiently solve.
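As a rough sketch of what I mean, here’s one way that use case can look, assuming the OpenAI Python SDK; the file name, model, and prompt are just placeholders, not a specific recommendation:

```python
# Minimal sketch: ask an LLM to flag typos in a project update.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; project_update.md is hypothetical.
from openai import OpenAI

client = OpenAI()

with open("project_update.md") as f:
    update_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any inexpensive chat model is enough for this task
    messages=[
        {
            "role": "system",
            "content": (
                "List any typos or grammatical errors in the following text. "
                "Quote each error and give the corrected form."
            ),
        },
        {"role": "user", "content": update_text},
    ],
)

print(response.choices[0].message.content)
```

For a short update that’s a few hundred tokens, which is why it’s one of the cases where LLM reasoning is cheaper than the alternative (a human proofreading pass).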
I don’t want to get my hopes up but is this Facebook’s MySpace moment?
Microhard?
except genAI has no proven purpose
Generative AI has spawned an awful lot of AI slop, and companies are forcing incomplete products on users. But don’t judge the technology by shitty implementations. There are loads of use cases where, when used correctly, generative AI brings value. For example, document discovery in legal proceedings.
100%, and like any tool, it can be used poorly, resulting in AI bit rot, bugs, unmaintainable code, etc. But when used well, given appropriate context, by users who know what a good solution looks like, it can increase developer efficiency.
The author seems to think that OpenAI having an unsustainable business model means generative AI is a con. Generative AI doesn’t mean OpenAI 🤦♂️ There is a good chance that the VC funds invested in OpenAI will have evaporated in 5 years. But generative AI will exist in 5 years, it will be orders of magnitude more useful, and it will help solve many problems.
Did you try switching your exit point?
It’s likely bad actors causing your VPN’s public IP address to get flagged. Next time change the exit point.
Cheers! Thank you
Encryption is not a crime *unless you’re doing it to someone else’s data to extort them for bitcoins