Nvidia CEO Jensen Huang forecast that capital expenditure (CapEx) on datacentres would grow roughly tenfold, from the $300-400bn mark today to $3-4tn by 2030.
Red Hat AI Enterprise provides a foundation for modern AI workloads, including AI life-cycle management, high-performance inference at scale, agentic AI innovation, integrated observability and performance modeling, and trustworthy AI with continuous evaluation. It also includes tools for dynamic resource scaling, monitoring, and security.
OpenAI is enlisting some of the world's biggest consulting firms in its fight to dominate the enterprise AI market. Today the AI company announced partnerships with Boston Consulting Group, McKinsey & Company, Accenture, and Capgemini under which the consulting firms will help sell and implement OpenAI's new Frontier AI agent platform. The consultants will help their clients redesign workflows, integrate AI agents with software tools and systems, manage organizational change, and supply industry-specific expertise OpenAI doesn't have in-house.
Dropbox engineers have detailed how they built the context engine behind Dropbox Dash, demonstrating a shift towards index-based retrieval, knowledge-graph-derived context, and continuous evaluation to support enterprise AI knowledge retrieval at scale. The design points to a broader pattern emerging across enterprise assistants: teams are deliberately constraining live tool usage and instead relying more heavily on pre-processed, permission-aware context to cut latency, improve quality, and ease token pressure.
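The pre-processed, permission-aware pattern described above can be sketched roughly as follows. This is a minimal illustration of the general idea, not Dropbox's implementation: the class names, permission model, and keyword-overlap scoring are all assumptions standing in for a real system's ACL resolution and embedding search.

```python
# Illustrative sketch: answer from a pre-built, permission-aware index
# instead of fanning out to live tool calls at query time.
from dataclasses import dataclass

@dataclass
class IndexedDoc:
    doc_id: str
    text: str
    allowed_users: frozenset  # permissions resolved at index time, not query time

def retrieve(index, user, query, limit=3):
    # Filter by permission first, then rank by a trivial keyword-overlap
    # score; a production system would use embeddings and group ACLs.
    visible = [d for d in index if user in d.allowed_users]
    words = query.lower().split()
    scored = sorted(visible, key=lambda d: -sum(w in d.text.lower() for w in words))
    return scored[:limit]

index = [
    IndexedDoc("a", "Q3 launch plan for Dash", frozenset({"alice"})),
    IndexedDoc("b", "Public eng onboarding guide", frozenset({"alice", "bob"})),
]
# bob never sees doc "a", because the filter runs before ranking.
print([d.doc_id for d in retrieve(index, "bob", "onboarding guide")])
```

Because permissions are baked into the index entries, the query path does no live permission lookups, which is where much of the latency saving in this pattern comes from.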
At Sanofi, AI has shifted from experimentation to becoming a vital part of our infrastructure. It now powers our R&D decisions, our supply chain and manufacturing processes, and, most importantly, how we discover and develop medicines. All businesses that have implemented AI in an impactful way face challenges, such as skills gaps and uncertainty, but you move beyond them by embedding AI deeply into teams and systems. This enables AI to become a key, reliable source of sustained productivity and innovation.
A major difference between LLMs and LTMs is the type of data they're able to synthesize and use. LLMs use unstructured data: text, social media posts, emails, and the like. LTMs, on the other hand, can extract information or insights from structured data, such as that contained in tables. Since many enterprises rely on structured data, often held in spreadsheets, to run their operations, LTMs could have an immediate use case for many organizations.
According to Secretary of Defense Pete Hegseth's memorandum on the Strategy, this AI-first status is to be achieved through four broad aims: incentivizing internal DOD experimentation with AI models; identifying and eliminating bureaucratic obstacles in the way of model integration; and focusing U.S. military investment to shore up the country's "asymmetric advantages" in areas including AI computing, model innovation, entrepreneurial dynamism, capital markets, and operational data.
"With any person, company, or concept, the general public really only has space in their head for one characteristic of it," says Palantir alum Marc Frankel, cofounder, board member, and former CEO of Manifest, which creates software and AI "bills of materials" (think ingredient labels for critical software). "Biden: old. AI: scary. Palantir: secretive." Frankel worked at Palantir from 2013 to 2018.
to break through language barriers and offer more natural interactions. With the latest OpenAI models including GPT-5.2, ServiceNow will unlock a new class of AI-powered automation for the world's largest companies.
The promenade in this ski town turns into a tech trade show floor at WEF time, with the logos of prominent software companies and consulting firms plastered across shopfronts, and signage touting various AI products. But while last year's Davos was dominated by hype around AI agents and overwrought hand-wringing that the debut of DeepSeek's R1 model, which coincided with 2025's WEF, meant the capital-intensive plans of the U.S. AI companies were for naught, this year's AI discussions seem more sober and grounded.
As enterprise AI spending surges past $100B annually, a critical divide has emerged: while billions flow into horizontal AI platforms promising to solve everything, enterprises deploying these tools face a harsh reality check when their generalist agents struggle with the complexity of real-world operations. The disconnect is particularly acute in customer service, where voice remains the highest-stakes channel: one wrong answer can cost thousands in revenue or irreparably damage customer relationships.
Over the past few years, I've reviewed thousands of APIs across startups, enterprises and global platforms. Almost all shipped OpenAPI documents. On paper, they should be well-defined and interoperable. In practice, most fail to behave predictably when consumed by AI systems. They were designed for human readers, not machines that need to reason, plan and safely execute actions. When APIs are ambiguous, inconsistent or structurally unreliable, AI systems struggle or fail outright.
Every year, TechCrunch's Startup Battlefield pitch contest draws thousands of applicants. We whittle those applications down to the top 200 contenders, and of them, the top 20 compete on the big stage to become the winner, taking home the Startup Battlefield Cup and a cash prize of $100,000. But the remaining 180 startups impressed us as well, and they compete in their own pitch competitions within their respective categories.
Jain said he had tried to automate internal workflows at Glean, including an effort to use AI to automatically identify employees' top priorities for the week and document them for leadership. "It has all the context inside the company to make it happen," said Jain, adding that he thought AI would "magically" do the work. The idea seemed simple, but it hasn't worked.