With its Alpha series of game-playing AIs, Google's DeepMind group seemed to have found a way for its AIs to tackle any game, mastering games like chess and Go through repeated self-play during training. But then some odd things happened as people started identifying Go strategies that would lose against relative newcomers to the game but easily defeat a similar Go-playing AI.
For more than two millennia, mathematicians have produced a growing heap of pi equations in their ongoing search for methods to calculate pi faster and faster. The pile of equations has now grown into the thousands, and algorithms can now generate an infinitude of new ones. Each discovery has arrived alone, as a fragment, with no obvious connection to the others. But now, for the first time, centuries of pi formulas have been shown to be part of a unified, previously hidden structure.
Which Algorithm Is This? If you step back, this maps almost perfectly to the Top K Frequent Elements problem. We usually solve it for integers in a list. Here, the "elements" are audience profiles: age and body-type combinations. First, define what an audience profile looks like: case class Profile(age: Int, height: Int, weight: Int) What we want is a function like this:
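The excerpt cuts off before showing the function itself, so here is a minimal sketch of what it might look like in Scala; the name topKProfiles, the Seq[Profile] input, and the (Profile, count) return shape are assumptions for illustration, not the article's actual code:

```scala
case class Profile(age: Int, height: Int, weight: Int)

// Count how often each profile occurs, then keep the k most frequent.
def topKProfiles(profiles: Seq[Profile], k: Int): Seq[(Profile, Int)] =
  profiles
    .groupBy(identity)                     // bucket identical profiles: Map[Profile, Seq[Profile]]
    .map { case (p, occurrences) => (p, occurrences.size) } // profile -> frequency
    .toSeq
    .sortBy { case (_, count) => -count }  // most frequent first
    .take(k)

// Example usage:
// topKProfiles(Seq(Profile(30, 180, 80), Profile(30, 180, 80), Profile(25, 165, 60)), 1)
// => Seq((Profile(30, 180, 80), 2))
```

This groupBy-then-sort approach is the simplest correct version; for very large inputs, the classic Top K optimization would replace the full sort with a size-k min-heap.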
This is the conundrum of elite chess. The stronger the players, the greater the odds of the match ending in a draw. "What ended up happening," said Mark Glickman, senior lecturer in the Department of Statistics and longtime chess enthusiast, "is that these top players were not having their ratings change very much, just because the games would be drawn all the time."
Five years ago, mathematicians Dawei Chen and Quentin Gendron were trying to untangle a difficult area of algebraic geometry involving differentials, elements of calculus used to measure distance along curved surfaces. While working on one theorem, they ran into an unexpected roadblock: Their argument depended on a strange formula from number theory, but they were unable to solve or justify it. In the end, Chen and Gendron wrote a paper presenting their idea as a conjecture, rather than a theorem.
OpenAI's GPT-5.2 Pro does better at solving sophisticated math problems than older versions of the company's top large language model, according to a new study by Epoch AI, a non-profit research institute.
While it is not a full picture of OpenAI's workforce, the snapshot underscores how heavily frontier AI labs continue to draw from a small cluster of top research universities, and how concentrated elite AI talent remains.
Last year, a talented programmer friend of mine decided to give vibe coding a try. Vibe coding is the practice of describing to an AI chatbot what kind of program you want and letting the AI write it for you. In a matter of minutes you can have new software in front of you and just start using it. At least, in theory. This is what LLMs (Large Language Models) are supposed to be best at: generating usable software for professional developers.