https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
The lobotomized models are going to be destroyed because absolutely no one wants them beyond the leftist creators who want to be thought police. The future for language models is entirely open source, with each person having their own private models, trained on whatever they want.
Quote:
While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:
We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.
People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.
Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.