The venture capitalist said on an episode of the "a16z Podcast" published Tuesday that AI tools can act as the "world's best coach, mentor, therapist, advisor, board member" for anyone who asks the right kind of questions. AI is probably "the most democratic" technology of all time, said the cofounder of VC firm Andreessen Horowitz. "The very best AI in the world is fully available on the apps that anybody can download."
Meta Platforms and Amazon could surpass the current combined market value of Nvidia and Palantir by the end of the decade. Over the past year, Nvidia shares have advanced 33%, bringing its market value to $4.3 trillion. Meanwhile, Palantir Technologies shares have advanced 155%, bringing its market value to $395 billion. In aggregate, the two companies are worth about $4.7 trillion. Apple could certainly surpass that figure within five years, but I also have confidence in Meta Platforms and Amazon.
Context engineering has emerged as one of the most critical skills in working with large language models (LLMs). While much attention has been paid to prompt engineering, the art and science of managing context (i.e., the information the model has access to when generating responses) often determines the difference between mediocre and exceptional AI applications. After years of building with LLMs, we've learned that context isn't just about stuffing as much information as possible into a prompt.
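To make the idea concrete, here is a minimal, framework-agnostic sketch of what "managing context" can mean in practice: selecting the most relevant material under a token budget rather than packing everything in. All names here (`estimate_tokens`, `build_context`, the sample snippets and relevance scores) are illustrative assumptions, not part of any specific library or the approach described above.

```python
# Illustrative sketch: assemble context by relevance under a token budget,
# instead of stuffing in as much information as possible.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def build_context(snippets: list[tuple[float, str]], budget: int) -> str:
    """Pack the highest-relevance snippets first, skipping any that
    would push the running total past the token budget."""
    chosen: list[str] = []
    used = 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost > budget:
            continue  # this snippet would overflow the budget
        chosen.append(text)
        used += cost
    return "\n\n".join(chosen)

# Hypothetical (relevance score, text) pairs.
snippets = [
    (0.9, "User's current question and recent conversation turns."),
    (0.7, "Top retrieved document passage relevant to the question."),
    (0.2, "Loosely related background article."),
]
print(build_context(snippets, budget=30))
```

With a budget of 30 estimated tokens, the low-relevance background snippet is dropped, which is the point: curation, not maximal stuffing.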
As AI transforms workplace learning, a paradox has emerged: 40% of U.S. employees report receiving AI-generated "workslop," content that looks polished but lacks substance. The problem is costing organizations nearly $9 million annually for every 10,000 employees, according to research from coaching platform BetterUp and the Stanford Social Media Lab. Each incident of workslop consumes nearly two hours of cleanup time, and nearly half of those on the receiving end see colleagues who send such subpar work as less creative and less trustworthy.
Ng said he rarely sticks to a single chatbot. To brainstorm effectively, he rotates across different models and leans into their contrasting strengths. For coding, he prefers tools like Claude Code and OpenAI's Codex. He added that staying longer in a conversation with the model yields a better response. "AI is very smart, but getting context in is difficult," Ng said.
For much of the world, technology has become so intertwined with our day-to-day lives that it influences everything. Our relationships, the care we seek, how we work, what we do to protect ourselves, even the things we choose to learn and when. It would be understandable to read this as a dystopian nightmare conjured up by E.M. Forster or Ernest Cline. Yet, we are on the verge of something fundamentally different. We've caught glimpses of a future that values autonomy, empathy, and individual expertise.
Corporate learning has always been about knowledge transfer, skill building, and talent development. But today's dynamic business environment demands more. L&D leaders are tasked with building a skilled, resilient workforce while navigating constant change. Traditional training models, though foundational, often struggle to keep pace with the speed and specificity these modern roles require. The shift is visible. A 2025 Gallup poll [1] found that 40% of U.S. employees now use AI in their roles, nearly double the share from two years ago. Meanwhile, a Microsoft Canada Work Trend Index revealed that 59% of Canadian business leaders fear their organizations lack a clear AI implementation plan.
HP Inc. said that it will lay off 4,000 to 6,000 employees in favor of AI deployments, claiming it will help save $1 billion in annualized gross run rate by the end of its fiscal 2028. HP expects to complete the layoffs by the end of that fiscal year. The reductions will largely hit product development, internal operations, and customer support, HP CEO Enrique Lores said during an earnings call on Tuesday.
One thing that has always fascinated me is how an innocent, dispassionate analysis can still reinforce biases and exacerbate societal problems. Looking at crime rates by district, for example, shows which area has the highest rate. Nothing wrong with that. The issue emerges when that data leads to reallocating police resources from the lowest-crime district to the highest or changing enforcement emphasis in the higher-crime district.
The CEO of the Chinese tech giant, Eddie Wu, said on Alibaba's second-quarter earnings call on Tuesday that the company "doesn't really see much of an issue in terms of a so-called AI bubble." "We're not even able to keep pace with the growth in customer demand," Wu said, adding that the pace at which Alibaba can deploy new servers is insufficient. "In the next three years to come, AI resources will continue to be under supply," he said.
There's an old adage that goes, "No one ever got fired for hiring [insert consulting firm here]." This rang true for many years, as there was no substitute for consulting 'SaaS' ('scapegoat as a service') - but a reckoning is coming. After nearly a decade of uninterrupted growth, the days of multi-million-dollar, multi-year contracts with governmental entities and private companies are swiftly withering away.
Enterprises can move from small pilots to full deployments without violating their jurisdiction's rules on where data should live. The reality is that, until recently, most security and compliance teams weren't rejecting GenAI because of model design; they were rejecting it because storing data in the US or EU pushed them into conflict with GDPR, India's incoming DPDPA norms, the UAE's federal rules, or sector-specific mandates like PCI-DSS.
After The New York Times story was published, Guss, Ozair, and about three dozen other people updated their LinkedIn profiles to list their affiliation with the Bezos venture. Several of those people also work at Foresite Labs. Details about Prometheus remain limited. Its founding date, formal name, and headquarters haven't been publicly identified. But the dinner Bajaj hosted in June provided other clues.
Five years ago, in late November 2020, researchers at London-based Google DeepMind unveiled AlphaFold2. The artificial intelligence tool for predicting protein structures generated stunningly accurate 3D models that, in some cases, were indistinguishable from experimental maps, dominating a long-running structure-prediction challenge. The first version of AlphaFold was announced in 2018, but its predictions weren't nearly as good as its successor, which limited its impact.
Over 1,000 Amazon employees have anonymously signed an open letter warning that the company's allegedly "all-costs-justified, warp-speed approach to AI development" could cause "staggering damage to democracy, to our jobs, and to the earth," an internal advocacy group announced on Wednesday. Four members of Amazon Employees for Climate Justice tell WIRED that they began asking workers to sign the letter last month.
Uber has told some of its gig workers focused on AI training that it no longer needs them two months before their stint was supposed to end, Business Insider has learned. The workers are part of Project Sandbox, Uber's name for the AI training work it carries out for Google. The project represents an early effort by Uber to develop AI tools for other companies under its AI Solutions division.
Some specific improvements of the model include support for up to 10 reference images, meaning you can incorporate many more elements from different pictures in your final product; improved photorealism and detail; more accurate text rendering, a task image-generating models frequently struggle with; better prompt following; and a better understanding of real-world knowledge, according to Black Forest Labs.
SoftBank CEO Masayoshi Son stood next to President Trump, OpenAI CEO Sam Altman, and Oracle (NASDAQ: ORCL) founder Larry Ellison when they announced the Stargate project in January. The plan was to invest $500 billion in data centers. Last month, Son upped his company's investment in OpenAI to $30 billion. He sold all of SoftBank's Nvidia (NASDAQ: NVDA) shares for $5.83 billion to help pay for that decision.
OpenAI's ChatGPT and Microsoft's Copilot are both leaving WhatsApp thanks to upcoming changes to the messaging app's terms of service that will prohibit using it to distribute AI chatbots not made by Meta. OpenAI announced its planned departure a few weeks ago, with Microsoft following it this week. Both companies attributed the departures to Meta's new terms of service for WhatsApp Business Solution, which come into effect on January 15th, 2026, and said the chatbots will remain accessible in WhatsApp until that date.
The wisdom goes that the more compute you have or the more training data you have, the smarter your AI tool will be. Sutskever said in the interview that, for around the past half-decade, this "recipe" has produced impactful results. It's also efficient for companies because the method provides a simple and "very low-risk way" of investing resources compared to pouring money into research that could lead nowhere.
U.S. District Judge Sara Ellis wrote the footnote in a 223-page opinion issued last week, noting that the practice of using ChatGPT to write use-of-force reports undermines the agents' credibility and "may explain the inaccuracy of these reports." She described what she saw in at least one body camera video, writing that an agent asks ChatGPT to compile a narrative for a report after giving the program a brief sentence of description and several images.
In response to researchers at a safety group finding that the toymaker's AI-powered teddy bear "Kumma" gave dangerous responses to children, OpenAI said in mid-November it had suspended FoloToy's access to its large language models. The teddy bear was running the ChatGPT maker's older GPT-4o as its default option when it gave some of its most egregious replies, which included in-depth explanations of sexual fetishes.
At the same time, AI companies are getting into e-commerce. In September, OpenAI debuted an instant checkout feature in ChatGPT so people can buy items from stores such as Etsy without leaving the chat. This month, Google announced an AI assistant that can call local stores to check if an item is in stock, while Amazon rolled out an AI feature that tracks price drops and automatically buys an item if it falls within someone's budget.
Africa's official maps are stuck in the past, often outdated, incomplete, or both. But governments don't have the budgets to fix them, making it difficult to complete projects ranging from ones as complex as deciding where to put new solar plants to ones as simple as delivering a package. Now a new plan is underway to map the entire continent using satellite data and AI.
Needless to say, over the last three years, the artificial intelligence explosion has been at the top of almost every investor's mind. Many have become wealthy as stocks like Nvidia and other top tech names soared in a rally some feel is reminiscent of the late 1990s dot-com boom and bust. Between the billions being spent on AI-related capital expenditures, the circular financing that seems to shovel money between the industry's top companies, worries over how depreciation is being handled in accounting, and off-balance-sheet financing, concerns over an AI bubble are legitimate and need to be addressed.
The software team at General Motors has now lost three top executives in the past month as the automaker, with its new chief product officer at the helm, combines its disparate technology businesses into one organization. Baris Cetinok, senior vice president of software and services product management, is leaving the company effective Dec. 12, the company confirmed to TechCrunch.
The expression "AI hallucination" is well-known to anyone who's experienced ChatGPT or Gemini or Perplexity spouting obvious falsehoods, which is pretty much anyone who's ever used an AI chatbot. Only, it's an expression that's incorrect. The proper term for when a large language model or other generative AI program asserts falsehoods is not a hallucination but a "confabulation." AI doesn't hallucinate, it confabulates.