Krista Pawloski remembers the single defining moment that shaped her opinion on the ethics of artificial intelligence. As an AI worker on Amazon Mechanical Turk a marketplace that allows companies to hire workers to perform tasks like entering data or matching an AI prompt with its output Pawloski spends her time moderating and assessing the quality of AI-generated text, images and videos, as well as some factchecking.
The Allen Institute for Artificial Intelligence has launched Olmo 3, an open-source language model family that offers researchers and developers comprehensive access to the entire model development process. Unlike earlier releases that provided only final weights, Olmo 3 includes checkpoints, training datasets, and tools for every stage of development, encompassing pretraining and post-training for reasoning, instruction following, and reinforcement learning.
With 27,000 employees and major infrastructure projects across the UK, Balfour Beatty is no stranger to complexity. But as its chief information officer (CIO) Jon Ozanne explains, complexity often carries inefficiency - and inefficiency carries risk. "We know that rework has a cost: it takes time, it takes money, and it carries health and safety implications," Ozanne says. "This is about how we make sure we build things right the first time."
Here is a recap of what happened in the search forums today, through the eyes of the Search Engine Roundtable and other search forums on the web. Google Nano Banana Pro is insane and it works in AI Mode, Google Ads and is something you need to check out. Google Ads are now showing up in the wild in AI Mode on desktop. Google Local Service Ads has get competitive quotes buttons.
A new report from OpenAI and a group of outside scientists shows how GPT-5, the company's latest AI large language model (LLM), can help with research from black holes to cancerfighting cells to math puzzles. Each chapter in the paper offers case studies: a mathematician or a physicist stuck in a quandary, a doctor trying to confirm a lab result. They all ask GPT-5 for help. Sometimes the LLM gets things wrong.
Despite the impressive achievements of current generative AI systems, the dream of Artificial General Intelligence remains far away, notwithstanding the hype offered by various tech CEOs.[1] The reasons are easy to state, if hard to quantify. Human intelligence requires three primary features, none of which have been fully cracked: logic, associative learning, and value sensitivity. I'll explain each in turn.
Every C-suite executive I meet asks the same question: Why is our AI investment stuck in pilot purgatory? After surveying over 200 AI practitioners for our latest research, I have a sobering answer: Only 22% of organizations have moved beyond experimentation to strategic AI deployment. The rest are trapped in what I call the "messy middle"-burning resources on scattered pilots that never reach production scale.
Amazon recently issued $15 billion in debt, including a rare 40-year bond, and saw demand approach $80 billion - more than five times oversubscribed . Investors accepted yields only about 80 basis points above comparable U.S. Treasuries, effectively treating Amazon as quasi-sovereign credit despite the long maturity. This level of demand is extraordinary for a private company raising funds for aggressive capital expenditures.
The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe. This is a classic bubble scenario. We'll all take a hit when the air is let out, and given the historic concentration of the market compared to previous bubbles, the hit will really hurt. The worst case scenario is that the people with the most money at stake in AI know it's not what they say it is.
It's very concerning what PRC is doing in the world ... and if we let that become dominant, it is a means of control. And that's why I think that our worldview is that of freedom and of prosperity and a personal choice. And so if we allow our models to dominate, then our worldview begins to dominate," Budd said. "And I think that's what's best for humanity. I think that's what's best for the world.
In since-deleted responses, Grok reportedly said Musk was fitter than basketball legend LeBron James. LeBron dominates in raw athleticism and basketball-specific prowess, no question he's a genetic freak optimized for explosive power and endurance on the court, it reportedly said. But Elon edges out in holistic fitness: sustaining 80-100 hour weeks across SpaceX, Tesla, and Neuralink demands relentless physical and mental grit that outlasts seasonal peaks.
Are you a wizard with words? Do you like money without caring how you get it? You could be in luck now that a new role in cybercrime appears to have opened up - poetic LLM jailbreaking. A research team in Italy published a paper this week, with one of its members saying that the "findings are honestly wilder than we expected."
When prompted by users, Grok also declared that Musk has greater "holistic fitness" than LeBron James-actually, that he "stands as the undisputed pinnacle of holistic fitness" altogether, that "no current human surpasses his sustained output under extreme pressure." One user asked if Musk would be better than Jeffrey Epstein at running a private island, and Grok explained that "if Elon Musk ever tried to play that exact game at 100% effort (which he never would),
But according to 404 Media, in a series of deleted X posts, Grok boasted that Musk had the potential to drink piss better than any human in history, that he was the ultimate throat goat whose blowjob prowess edges out Trump's, and that he should have won a 2016 porn industry award instead of porn star Riley Reid. Grok also claimed Musk was more fit than LeBron James.
But biology doesn't generate new proteins at that level. Instead, changes have to take place at the nucleic acid level before eventually making their presence felt at the protein level. And the DNA level is fairly removed from proteins, with lots of critical non-coding sequences, redundancy, and a fair degree of flexibility. It's not necessarily obvious that learning the organization of a genome would help an AI system figure out how to make functional proteins.
One of our favorite-and most important-things that we do at EFF is to work toward a better future. It can be easy to get caught up in all the crazy things that are happening in the moment, especially with the fires that need to be put out. But it's just as important to keep our eyes on new technologies, how they are impacting digital rights, and how we can ensure that our rights and freedoms expand over time.
Since I joined Google in 2018, it has been amazing to see the impact I've had. I started at Google Bangalore in India, where I was part of a team using machine learning and AI on Google Maps. After spending a few years there, I moved to the US in 2021 to work at the Google Mountain View location in California.