
"If you want to win in AI - and I mean win in the biggest, most lucrative, most shape-the-world-in-your-image kind of way - you have to do a bunch of hard things simultaneously. You need to have a model that is unquestionably one of the best on the market. You need the nearly infinite resources required to continue to improve that mode and deploy it at massive scale."
"You need at least one AI-based product that lots of people use, and ideally more than one. And you need access to as much of your users' other data - their personal information, their online activity, even the files on their computer - as you can possibly get. Each one of these elements is complex and competitive; there's a reason OpenAI CEO Sam Altman keeps shouting about how he needs trillions of dollars in compute alone."
"In November, Google released Gemini 3, which is widely regarded as the best overall large language model on the market. It wins in most (somewhat dubious) benchmark tests, and most experts agree it is either at or near the top of the list for most tasks. Its reign won't be forever, of course - we're still very much in the "there's a new best model every six weeks" phase of AI - but Google has proven its best work is consistently the industry's best work."
Winning in AI requires simultaneous mastery of multiple demanding elements: a top-tier model, massive ongoing compute resources, widely used AI products, and deep access to users' personal and behavioral data. Google has assembled all of them: Gemini 3, widely regarded as the best overall large language model and trained on Google's specialized TPUs; the TPU-based compute infrastructure behind it; and a broad product footprint that reaches deep into users' data. The AI landscape remains fast-moving, with a new leading model every few weeks, but Google's integrated strengths in performance, infrastructure, products, and data position it to be the most impactful and dominant player.
Read at The Verge