😸 OpenAI solved 5 of 10 “impossible” problems

Now, the ironic part would be if two of the humans on dates with their AIs lock eyes and fall for each other instead. It'd be like that scene from When Harry Met Sally where they each go on a double date, setting the other up with their friends Marie and Jess, only for Marie and Jess to fall for each other instead.

A more modern reference: this gives big Opalite music video vibes… where lonely Domhnall Gleeson and T-Swift meet on separate dates with their pet rock / cactus, and then the rock and cactus end up together! Only in this version, I don't think you're gonna let your iPhones run off together… unless… …you're both rocking one of these bad boys!

Here's what happened in AI today:

- OpenAI's unreleased model solved at least 5 of 10 research-level math problems no AI had seen, and GPT-5.2 produced a verified physics breakthrough.
- India reached 100M weekly ChatGPT users and approved a $1.1B state-backed AI VC fund.
- Anthropic's Super Bowl ads drove Claude to No. 7 on the App Store with 148K downloads in three days.
- OpenAI quietly removed "safely" and its "openly share" commitment from its IRS mission filings over the years.

OpenAI Claims Its Unreleased Model Solved Over Half the Hardest AI Math Test Ever Created

Remember when AI winning a math olympiad felt like a big deal? Eleven of the world's top mathematicians (including a Fields Medalist) decided that was child's play. So they created First Proof, a set of 10 unpublished, research-level math problems pulled straight from their own work, and gave AI one week to solve them.

The catch: none of these problems had ever appeared on the internet. No training data shortcuts. No pattern matching. Just raw mathematical reasoning.

OpenAI's chief scientist Jakub Pachocki says an internal, unreleased model likely solved at least 5 of the 10 problems (he originally claimed 6, but walked one back).
For context, publicly available AI models like ChatGPT and Gemini could only solve 2. Somewhere, a tenured math professor is nervously refreshing their LinkedIn.

Here's what makes this significant:

- The problems were genuinely hard. They spanned 10 different subfields, from algebraic topology to symplectic geometry, and each one took the mathematicians who wrote them weeks to months to solve.
- There was real human oversight, but limited. OpenAI didn't feed the model proof strategies, but experts did review the outputs and asked the model to expand on some answers.
- It all happened in one week. Pachocki called it a "chaotic sprint" and said the methodology "leaves a lot to be desired." Translation: imagine what happens when they actually try.

And that wasn't even OpenAI's only flex that day. On the same February 13th, they published a physics preprint in which GPT-5.2 proposed a formula for gluon particle interactions that physicists had assumed was out of reach for decades. Harvard and Cambridge researchers verified it, and a UC Santa Barbara professor called it "journal-level research advancing the frontiers of theoretical physics."

As Ethan Mollick put it: the shift from "AI can't do science" to "of course AI does science" will follow the same pattern as every other AI transition. First the hype, then the skeptics, then the quiet adoption… then the breakthroughs start falling like dominoes.

The next round of First Proof problems drops March 14. And this time, OpenAI won't be caught off guard.
