"Hyped-up views have gotten us into a bad place, possibly one that's going to lead to a serious economic recession or something like that," Marcus told Business Insider. "And I guess I think that one should work from the facts rather than just trying to cause an alarm."
But when it comes to its own tech being copied, Google has no problem pointing fingers. This week, the company accused "commercially motivated" actors of trying to clone its Gemini AI. In a Thursday report, Google complained it had come under "distillation attacks," with agents querying Gemini up to 100,000 times to "extract" the underlying model - the AI industry's convoluted equivalent of copying somebody's homework, basically.
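The mechanics of distillation are simpler than the term suggests: query a black-box "teacher" model many times, record its answers, and train a "student" model to imitate them. The toy sketch below illustrates the idea with a hidden linear function standing in for the teacher; everything here (the function, the least-squares student) is illustrative, not how Gemini or any real attack works.

```python
import random

# Toy illustration of model distillation: the "teacher" is a black box
# we can only query; the "student" learns to imitate it from
# query/response pairs. Real distillation attacks follow the same
# recipe at vastly larger scale, with LLM outputs instead of numbers.

def teacher(x):
    # Hidden model the attacker cannot inspect directly.
    return 3.0 * x + 1.0

random.seed(0)
queries = [random.uniform(-1, 1) for _ in range(1000)]
responses = [teacher(x) for x in queries]  # "extracted" behaviour

# Fit the student: ordinary least squares for a line y = w*x + b.
n = len(queries)
mean_x = sum(queries) / n
mean_y = sum(responses) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(queries, responses)) \
    / sum((x - mean_x) ** 2 for x in queries)
b = mean_y - w * mean_x

print(round(w, 3), round(b, 3))  # recovers roughly 3.0 and 1.0
```

With enough queries the student reproduces the teacher almost exactly, which is why providers treat high-volume automated querying as an extraction signal.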
In a new viral AI video, Brad Pitt and Tom Cruise pummel each other on a rooftop in a cinematic action sequence. It's not a trailer for a new blockbuster, and it's not actually Pitt and Cruise, though it looks a lot like them. The video is so realistic, in fact, that the clearest sign it's made with AI is the dialogue.
He used chess as an example: 15 to 20 years ago, a human checking AI's output could beat an AI or a human playing alone. Now, AI can beat people without that layer of human supervision. Amodei, who cofounded AI lab Anthropic in 2021, added that the same transition would happen in software engineering. "We're already in our centaur phase for software," Amodei said. "During that centaur phase, if anything, the demand for software engineers may go up. But the period may be very brief."
Unlike traditional software bugs that might crash a server or scramble a database, errors in AI-driven control systems can spill into the physical world, triggering equipment failures, forcing shutdowns, or destabilizing entire supply chains, Gartner warns. "The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal," cautioned Wam Voster, VP Analyst at Gartner.
On a personal level, that means people using AI services want to be able to veto big decisions such as making payments, accessing or using contact details, changing account details, placing orders, or even just seeking clarity during a decision-making process. Extend this way of thinking to the workplace, and the resistance is likely to be equally strong.
London-based deep tech startup Stanhope AI has closed a €6.7 million ($8 million) Seed funding round to advance what it calls a new class of adaptive artificial intelligence designed to power autonomous systems in the physical world. The round was led by Frontline Ventures, with participation from Paladin Capital Group, Auxxo Female Catalyst Fund, UCL Technology Fund, and MMC Ventures. The company says its approach moves beyond the pattern-matching strengths of large language models, aiming instead for systems that can perceive, reason, and act with a degree of context awareness in uncertain environments.
The past few days have been a wild ride for xAI, which is racking up staff and cofounder departure announcements left and right. Over Tuesday and Wednesday, cofounder Yuhuai (Tony) Wu announced his departure, saying it was "time for [his] next chapter," and cofounder Jimmy Ba followed with a similar post, writing that it was "time to recalibrate [his] gradient on the big picture."
Big Tech keeps raising its spending plans for artificial intelligence infrastructure, yet shares of Nvidia Corp., one of the biggest beneficiaries of that flood of cash, have been largely stagnant for months. The stock is up less than 1% since the beginning of the fourth quarter and has been largely range bound despite hitting a record high in late October.
Fifty-four seconds. That's how long it took Raphael Wimmer to write up an experiment that he did not actually perform, using a new artificial-intelligence tool called Prism, released by OpenAI last month. "Writing a paper has never been easier. Clogging the scientific publishing pipeline has never been easier," wrote Wimmer, a researcher in human-computer interaction at the University of Regensburg in Germany, on Bluesky. Large language models (LLMs) can suggest hypotheses, write code and draft papers, and AI agents are automating parts of the research process.
The launch of OpenAI's ads business comes at a critical point in the AI arms race. Companies are burning through billions in training and inference costs for the next big model, while revenue lags behind. Surging demand for AI compute is driving up the cost of RAM and making memory and processors scarcer commodities. These concerns, while less visible to us in the ad world, have in turn put pressure on those companies to create revenue. Hence, ads.
According to the company behind ChatGPT, DeepSeek is systematically attempting to extract knowledge from leading American AI systems in order to improve its own models. In the memo, which OpenAI sent to the U.S. House Select Committee on Strategic Competition between the U.S. and the Chinese Communist Party, OpenAI outlines attempts to circumvent technical and access restrictions. The company claims that accounts linked to DeepSeek employees have developed methods to access AI models via external, obfuscated network routes.
Managers saw the company's engineers getting more done with the technology, so they needed to ensure new hires could do the same. "We just flipped the script and went, 'OK, we're going to invite you to use AI,'" Brendan Humphreys, Canva's chief technology officer, told Business Insider. The result, he said, has been stronger hires better equipped to wield powerful AI tools to help write code and solve problems.
Autonomous vehicles have a lot of potential. As long as you program them right, they won't speed, won't break traffic laws, and won't get drunk, high, abusive, or violent. And the technology has been getting much more capable, even as some of the hype has died down, taking some of the related companies with it. Waymo still easily leads the field and is already operating commercially in six cities across America, with a dozen more (plus London) coming soon.
I attacked it. I started building things - apps, tools, prototypes - with an AI model as my collaborator. No computer science degree. No coding boot camp. Just curiosity and stubbornness. And it worked. Not because I suddenly became technical, but because I refused to let the insecurity win. The big picture: I've always assumed my insecurities are actually superpowers if used right.
What happens under the hood? How can a search engine take that simple query and look through the billions, even trillions, of images available online? How does it find this one photo, or similar ones, from all that? Usually, there is an embedding model doing this work behind the scenes.
Agentic AI workflows sit at the intersection of automation and decision-making. Unlike a standard workflow, where data flows through pre-defined steps, an agentic workflow gives a language model discretion. The model can decide when to act, when to pause, and when to invoke tools like web search, databases, or internal APIs. That flexibility is powerful - but also costly, fragile, and easy to misuse.
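The loop described above - a model deciding when to act, when to invoke a tool, and when to stop - can be sketched minimally. The `fake_model` function below is a hypothetical stand-in for an LLM's decision step; the tool, the state keys, and the step cap are all illustrative assumptions, not any particular framework's API.

```python
# Minimal sketch of an agentic workflow loop. In a real system the
# decision at each step comes from a language model; here a stub
# function plays that role so the control flow is easy to follow.

def fake_model(state):
    # Decide the next step: invoke a tool, or finish with an answer.
    if "search_results" not in state:
        return {"action": "tool", "tool": "web_search",
                "args": state["question"]}
    return {"action": "finish",
            "answer": f"Based on: {state['search_results']}"}

TOOLS = {
    # Each tool is an ordinary function the agent may choose to call.
    "web_search": lambda query: f"top hit for '{query}'",
}

def run_agent(question, max_steps=5):
    state = {"question": question}
    for _ in range(max_steps):  # hard cap guards against runaway loops
        decision = fake_model(state)
        if decision["action"] == "finish":
            return decision["answer"]
        # The model chose a tool; execute it and feed the result back.
        state["search_results"] = TOOLS[decision["tool"]](decision["args"])
    return "step budget exhausted"

print(run_agent("What is agentic AI?"))
```

The step cap and the explicit tool registry are where the "costly, fragile, and easy to misuse" concerns bite: without a budget the loop can run away, and an unrestricted tool set gives the model more discretion than intended.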