At Sanofi, AI has shifted from experimentation to becoming a vital part of our infrastructure. It now powers our R&D decisions, our supply chain and manufacturing processes, and, most importantly, how we discover and develop medicines. All businesses that have implemented AI in an impactful way face challenges, such as skills gaps and uncertainty, but we moved beyond them by embedding AI deeply into teams and systems. This enables AI to become a key, reliable source of sustained productivity and innovation.
A major difference between LLMs and LTMs is the type of data they're able to synthesize and use. LLMs use unstructured data: think text, social media posts, emails, and so on. LTMs, on the other hand, can extract information or insights from structured data, such as the contents of tables. Since many enterprises rely on structured data, often contained in spreadsheets, to run their operations, LTMs could have an immediate use case for many organizations.
According to Secretary of Defense Pete Hegseth's memorandum on the Strategy, this AI-first status is to be achieved through four broad aims: incentivizing internal DOD experimentation with AI models; identifying and eliminating bureaucratic obstacles in the way of model integration; focusing U.S. military investment to shore up the country's "asymmetric advantages" in areas including AI computing, model innovation, entrepreneurial dynamism, capital markets, and operational data.
"With any person, company, or concept, the general public really only has space in their head for one characteristic of it," says Palantir alum Marc Frankel, cofounder, board member, and former CEO of Manifest, which creates software and AI "bills of materials": think ingredient labels for critical software. "Biden: old. AI: scary. Palantir: secretive." Frankel worked at Palantir from 2013 to 2018.
to break through language barriers and offer more natural interactions. With the latest OpenAI models including GPT-5.2, ServiceNow will unlock a new class of AI-powered automation for the world's largest companies.
The promenade in this ski town turns into a tech trade show floor at WEF time, with the logos of prominent software companies and consulting firms plastered on shopfronts and signage touting various AI products. Last year's Davos was dominated by hype around AI agents and by overwrought hand-wringing over the debut of DeepSeek's R1 model, which landed during the 2025 WEF and raised fears that the capital-intensive plans of U.S. AI companies were for naught. This year's AI discussions seem more sober and grounded.
As enterprise AI spending surges past $100B annually, a critical divide has emerged: while billions flow into horizontal AI platforms promising to solve everything, enterprises deploying these tools face a harsh reality check when their generalist agents struggle with the complexity of real-world operations. The disconnect is particularly acute in customer service, where voice remains the highest-stakes channel: one wrong answer can cost thousands in revenue or irreparably damage a customer relationship.
Over the past few years, I've reviewed thousands of APIs across startups, enterprises and global platforms. Almost all shipped OpenAPI documents. On paper, they should be well-defined and interoperable. In practice, most break down when AI systems try to consume them. They were designed for human readers, not machines that need to reason, plan and safely execute actions. When APIs are ambiguous, inconsistent or structurally unreliable, AI systems struggle or fail outright.
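The kind of ambiguity described above can be checked mechanically. Below is a minimal, hypothetical sketch (not from any real review tooling) of a lint pass over a parsed OpenAPI document: it flags operations missing the fields an AI agent typically needs to plan and execute a call safely, such as a stable `operationId`, a description, and a typed response schema. The example spec is invented for illustration.

```python
def lint_openapi(spec: dict) -> list[str]:
    """Flag operations that would be hard for an AI agent to consume reliably."""
    issues = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            where = f"{method.upper()} {path}"
            # Agents need a stable name to reference when planning calls.
            if "operationId" not in op:
                issues.append(f"{where}: missing operationId")
            # Without a description, an agent cannot infer the operation's intent.
            if not op.get("description"):
                issues.append(f"{where}: missing description")
            # Without a response schema, the output shape is unpredictable.
            ok = op.get("responses", {}).get("200", {})
            has_schema = any(
                "schema" in content for content in ok.get("content", {}).values()
            )
            if not has_schema:
                issues.append(f"{where}: 200 response has no schema")
    return issues


# Hypothetical spec fragment: one well-specified operation, one under-specified.
spec = {
    "paths": {
        "/orders": {
            "get": {
                "operationId": "listOrders",
                "description": "List orders for the authenticated user.",
                "responses": {
                    "200": {
                        "content": {
                            "application/json": {
                                "schema": {"type": "array", "items": {"type": "object"}}
                            }
                        }
                    }
                },
            },
            "post": {"responses": {"200": {}}},  # deliberately under-specified
        }
    }
}

issues = lint_openapi(spec)
for issue in issues:
    print(issue)  # only the POST operation trips the checks
```

A real pipeline would parse the YAML/JSON document with an OpenAPI library and cover far more (parameter schemas, error responses, auth), but even checks this simple catch the structural gaps that cause agents to fail outright.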
Every year, TechCrunch's Startup Battlefield pitch contest draws thousands of applicants. We whittle those applications down to the top 200 contenders, and of them, the top 20 compete on the big stage to become the winner, taking home the Startup Battlefield Cup and a cash prize of $100,000. But the remaining 180 startups blew us away as well, and they compete in their own pitch competitions within their respective categories.
Jain said he had tried to automate internal workflows at Glean, including an effort to use AI to automatically identify employees' top priorities for the week and document them for leadership. "It has all the context inside the company to make it happen," said Jain, adding that he thought AI would "magically" do the work. The idea seemed simple, but it hasn't worked.
For many, enterprise AI adoption depends on the availability of high-quality open-weights models. Exposing sensitive customer data or hard-fought intellectual property to APIs so you can use closed models like ChatGPT is a non-starter. Outside of Chinese AI labs, the few open-weights models available today don't compare favorably to the proprietary models from the likes of OpenAI or Anthropic. This isn't just a problem for enterprise adoption; it's a roadblock to Nvidia's agentic AI vision that the GPU giant is keen to clear.
It is becoming increasingly difficult to separate the signal from the noise in the world of artificial intelligence. Every day brings a new benchmark, a new "state-of-the-art" model, or a new claim that yesterday's architecture is obsolete. For developers tasked with building their first AI application, particularly within a larger enterprise, the sheer volume of announcements creates a paralysis of choice.
They know all too well how Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla have delivered more than half of the S&P 500's gains in recent years, setting a high bar for everyone else to clear. But things change: one minute, Alphabet is behind the curve on AI; the next, Google's latest Gemini launch sparks a "Code Red" from OpenAI's Sam Altman.
"What's driving all of this is the awareness from CEOs and executives that this is the time to invest in AI," Jain said in an exclusive interview before the conference. "Everybody has been looking for a safe, secure, more appropriate version of ChatGPT for their employees. And we bring the capabilities that ChatGPT brings to consumers to business users, and in the context of their company."