We do have a few senior managers who are [experts on AI]; none are that excited about it yet. We have agreed an internal policy to guide us all, which is currently very cautious: for example, we do not allow AI-generated content or AI to be used in our design processes, nor its unauthorized use outside of GW, including in any of our competitions.
According to Li Mandri, ING's centralised approach to AI development has resulted in a high success rate for pilot projects, with 90% moving to production compared to an industry average of 30%. The bank has standardised on cloud-hosted AI models from preferred partners, which are then made available globally, allowing ING to scale. He says the platform is centrally managed with risk controls, guardrails and real-time monitoring.
She learned that experts across fields, from physics and finance to healthcare and law, were now being paid to help train AI models to think, reason, and problem-solve like domain specialists. She applied, was accepted, and now logs about 50 hours a week providing data for Mercor, a platform that connects AI labs with domain experts. Ruane is part of a fast-growing cohort of professionals who are shaping how AI models learn.
When a scientist feeds a data set into a bot and says "give me hypotheses to test", they are asking the bot to be the creator, not a creative partner. Humans tend to defer to ideas produced by bots, assuming that the bot's knowledge exceeds their own. And, when they do, they end up exploring fewer avenues for possible solutions to their problem.
"Collision rates and related costs remain unacceptably high around the world," said Shoaib Makani, co-founder and CEO of Motive. "Organisations need AI-powered driver safety solutions that can perceive and respond in real time. We've added three times more compute, created the first AI dash cam with stereo vision, and added hands-free communication, all in one system, so organisations can detect more risks and act faster. This isn't just a new product; it reflects a shift toward proactive, AI-driven road safety."
This new reality is forcing organizations to undertake careful assessments before making platform decisions for AI. The days when IT leaders could simply sign off on wholesale cloud migrations, confident it was always the most strategic choice, are over. In the age of AI, the optimal approach is usually hybrid. Having openly championed this hybrid path even when it was unpopular, I welcome the growing acceptance of these ideas among decision-makers and industry analysts.
Anthropic has introduced Cowork, a new feature that uses Claude for more than just chat conversations and programming. Cowork is based on the same fundamentals as Claude Code, but focuses on general knowledge work and a broader audience. The feature is available as a beta version for Claude Max subscribers via the macOS desktop app. Claude Code quickly became popular among developers and hobbyists.
The entrepreneur said that within just a few years, we will live in a world marked by a great surplus, where "better medical care than anyone has today" will be "available for everyone within five years." He also said that there will be "no scarcity of goods and services" and you'll be able to learn anything you want. Musk continued, explaining that there will be such a surplus that life will no longer require people to save in order to ensure they are taken care of later on.
"I've always been a big sports fan - basketball, football, Formula One, MMA - and what draws me to all of them is performance. In my free time, I've spent a lot of time thinking about what actually drives human performance. People are very different, but across sports, there are clear patterns in how performance shows up,"
This will also greatly increase the need for AI audit trails: detailed records of what data AI used, what steps it took, what suggestions or decisions it influenced, and who ultimately confirmed the choices. These trails will become crucial for compliance, ethical accountability, and ensuring business integrity. According to Pugh, there will be a clear trend toward transparent AI workflows, and companies will increasingly see that an error in a prediction can be traced back to a specific step in the AI workflow.
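To make the idea concrete, here is a minimal Python sketch of what one entry in such an audit trail might record. The field names are illustrative assumptions chosen to mirror the elements listed above, not a reference to any particular compliance standard or vendor schema.

```python
# Hypothetical audit-trail entry: field names are illustrative assumptions,
# mirroring the elements above (data used, steps taken, decisions influenced,
# and who ultimately confirmed them).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAuditEntry:
    request_id: str          # ties the entry to a specific AI workflow run
    data_sources: list[str]  # what data the AI used
    steps: list[str]         # what steps the pipeline took
    suggestion: str          # what the AI recommended or decided
    approved_by: str         # the human who ultimately confirmed the choice
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

entry = AIAuditEntry(
    request_id="loan-2025-0042",
    data_sources=["crm.customer_profile", "bureau.credit_report"],
    steps=["feature_extraction", "risk_model_v3.score", "policy_rules.check"],
    suggestion="approve with reduced limit",
    approved_by="analyst@example.com",
)
print(entry)
```

A record like this is what lets an error in a prediction be traced back to a specific step in the workflow, as described above.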
Tesla Inc. (NASDAQ: TSLA) stock has been up and down over the past year. It plummeted from $428 in early 2025 to $217 in April, as Elon Musk spent time trying to fix the federal government. After he convinced the market that Tesla was a robotics and artificial intelligence company, not an electric vehicle (EV) maker, the stock hit $490 a month ago. Today, it trades at $450.
"According to the investigations, there is possible anti-competitive conduct of an exclusive nature that arises from the application of the New WhatsApp Terms ("WhatsApp Business Solution Terms") imposed by Meta to regulate the access and offer, by providers of artificial intelligence tools, of its technologies to WhatsApp users," the Conselho Administrativo de Defesa Econômica (CADE) said. CADE said it would investigate if Meta's terms are exclusionary to competitors and unduly favor Meta AI, the company's chatbot that's offered on WhatsApp.
Microsoft 365 comes in several different flavors, for both individuals and organizations. For individuals alone, you can choose among Basic, Personal, Family, and Premium. Excluding Basic, the other three subscriptions offer desktop versions of the core Office apps, namely Word, Excel, PowerPoint, OneNote, and Outlook. With each one, Microsoft doles out a certain amount of OneDrive space. And as the much-ballyhooed icing on the cake, Copilot AI is built in to generate content and answer your questions.
The math is straightforward. At roughly 60.7 billion tokens in circulation, a $10 XRP price target implies a $607 billion market cap. That would vault XRP past Ethereum into second place behind Bitcoin. It's a moonshot scenario that would require massive institutional adoption, XRP ETF inflows, and utility growth on a scale the asset hasn't yet achieved, but in crypto, stranger things have happened.
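For readers who want to check the arithmetic, here is a minimal Python sketch of the same calculation. The circulating-supply figure is the approximate number quoted above; the price target is the hypothetical $10 scenario, nothing more.

```python
# Back-of-the-envelope check of the implied market cap above.
circulating_supply = 60.7e9  # XRP tokens in circulation (approx., per the paragraph above)
price_target = 10.0          # hypothetical price in USD

implied_market_cap = circulating_supply * price_target
print(f"Implied market cap: ${implied_market_cap / 1e9:.0f} billion")  # -> $607 billion
```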
December 2025 closed out a transformative year for artificial intelligence with a flurry of major model releases, significant policy shifts, and massive infrastructure investments. OpenAI and Google went head-to-head with their latest flagship models, the open-source community delivered a stunning array of competitive alternatives, and Apple quietly enabled a new era of local AI clustering. Meanwhile, the US government stepped in to create a national AI policy framework, and the race to build out the physical infrastructure for AI reached a fever pitch. 🚀
For the past few years, artificial intelligence has been discussed almost exclusively in terms of models. Bigger models, faster models, smarter models. More recently, the focus shifted to agents, systems capable of planning, reasoning, and acting autonomously. Yet the real leap in usefulness does not happen at the model level, nor at the agent level. It happens one layer above, at the level of Skills.
Scientists are showing that neuromorphic computers, designed to mimic the human brain, are useful not only for AI but also for complex computational problems that normally run on supercomputers, The Register reports. Neuromorphic computing differs fundamentally from the classic von Neumann architecture. Instead of a strict separation between memory and processing, these functions are closely intertwined, which reduces data transport, a major source of energy consumption in modern computers. The human brain illustrates how efficient such an approach can be.
We're living in a token economy. Each piece of content (words, images, sounds, and so on) is treated by an AI model as an atomic unit of work called a token. When you type a prompt into ChatGPT and receive a paragraph in response, or call an API to do the same thing inside an app you've built, both the input and the output are counted as tokens. As a result, the meter is always running when you use AI, racking up costs per token, and the total bill is set to go higher in aggregate.
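As a rough illustration of how that meter runs, the sketch below tallies a per-request cost from input and output token counts and then scales it across a month of traffic. The per-token rates are placeholder assumptions, not any provider's published prices.

```python
# Illustrative only: per-token pricing varies by provider and model.
# These rates are placeholder assumptions, not real published prices.
PRICE_PER_INPUT_TOKEN = 0.000002   # USD, hypothetical
PRICE_PER_OUTPUT_TOKEN = 0.000008  # USD, hypothetical

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Both the prompt you send and the text you get back are billed as tokens."""
    return (input_tokens * PRICE_PER_INPUT_TOKEN
            + output_tokens * PRICE_PER_OUTPUT_TOKEN)

# A single prompt/response pair...
print(f"One request: ${request_cost(1_200, 800):.4f}")
# ...and the same usage repeated across a month of heavy traffic.
print(f"1M requests/month: ${request_cost(1_200, 800) * 1_000_000:,.2f}")
```

The point of the exercise is the aggregate line: a cost that rounds to a fraction of a cent per request still compounds into a real bill once usage scales.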