The milestone highlights how DeepSeek has left a deep impression on Silicon Valley, upending widely held views about U.S. primacy in AI and the effectiveness of Washington’s export controls targeting China’s advanced chip and AI capabilities.
So I’m still on the fence about the AI arms race in general. However, reading up on DeepSeek, it feels like they built a model specifically to perform well on the benchmarks.
I say this because it uses a Mixture of Experts (MoE) approach, so only parts of the model are active at any given point. The potential drawback is weaker generalization.
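To make the "only parts of the model are used" point concrete, here's a toy sketch of MoE routing. This is a minimal illustration of the general technique (top-k gating over a pool of experts), not DeepSeek's actual implementation; all names and sizes here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, d_model, top_k = 8, 16, 2

# Each "expert" is just a small linear layer in this sketch.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route a single token vector through the top-k experts only."""
    logits = x @ gate_w                # gate scores every expert
    top = np.argsort(logits)[-top_k:]  # but we keep only the k best
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the chosen experts
    # Only top_k of the n_experts weight matrices are ever used here,
    # which is why MoE models are cheap to run relative to their size.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(d_model))
print(out.shape)  # (16,)
```

The upside is that each token touches only a fraction of the parameters; the worry raised above is that specialized experts might generalize worse than one dense model of the same effective capacity.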
Additionally, it isn’t a multimodal model, and the only place I’ve seen real opportunity for workflow automation is with multimodal models. I guess you could use a combination of models, but that’s definitely a step back from the grand promise of these foundational models.
Overall, I’m just not sure whether this is laypeople getting caught up in hype or an actually significant change in the landscape.
To be fair, I’m pretty sure that’s what everyone is doing. If you’re not measuring against something, there’s no way to tell if you’re doing anything at all.
My point was that a Mixture of Experts model could suffer on generalization. Although, reading more, I’m not sure whether it’s the newer R model that has the MoE element.