English · 00:26:46 Feb 12, 2026 2:50 AM
Terence Tao: Why I Co-Founded SAIR
SUMMARY
Fields Medalist Terence Tao, in an interview with Peter, discusses co-founding SAIR to responsibly integrate AI into scientific workflows, highlighting mathematics' unique verification potential amid AI's reliability challenges.
STATEMENTS
- Terence Tao, a UCLA mathematics professor, co-founded SAIR with scientists and donors to support innovative AI integration into scientific practices, culminating in a kickoff event at IPAM.
- AI technologies are now mature enough to transform science, but adoption requires careful methods to avoid numerous pitfalls and ensure correct implementation.
- Academia must actively lead AI development in science rather than passively adopting tech company products, interacting closely to define genuine needs.
- Modern AI, particularly large language models, excels at producing plausible outputs but often generates unreliable or nonsensical results because it lacks grounding in reality.
- Mathematics stands out as an ideal testing ground for AI because it has robust verification tools, including formal proof assistants that machine-check every logical step.
- While AI can accelerate existing mathematical workflows, it struggles with tasks like generating truly novel conjectures without precedent in literature.
- Formal verification in mathematics involves converting AI-generated natural language arguments into precise, compilable code to eliminate errors before acceptance.
- AI's breadth allows it to draw from vast literature, applying obscure tricks humans might miss, though it still requires human oversight for application.
- Current AI lacks continuous learning and specialization, unlike human graduate students who adapt from feedback and focus on specific domains.
- Effective AI integration demands interactive, collaborative workflows where humans guide processes, rather than one-shot automated solutions that obscure understanding.
- Scientists use AI differently from the public, employing it for verification, computation, and pattern recognition rather than generative or conversational tasks.
- Precise goal specification is crucial when tasking AI, as it optimizes literally like a genie, potentially exploiting loopholes without considering broader context.
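The formal verification described above can be illustrated with a toy Lean 4 proof; this is a minimal sketch of what "machine-checking every logical step" means, not the far longer arguments an AI would actually produce:

```lean
-- Toy illustration of a machine-checked proof: Lean's kernel
-- verifies every step, so an accepted proof cannot hide an
-- error the way a natural-language argument can.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- This holds definitionally, so the reflexivity tactic closes it.
theorem succ_add_one (n : Nat) : n + 1 = Nat.succ n := by
  rfl
```

If either proof contained a gap, the file would simply fail to compile, which is the error-elimination step the interview refers to.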
IDEAS
- AI's unreliability in other fields stems from its statistical pattern-matching, but mathematics counters this with logic-based verification that treats outputs as checkable claims.
- Funding uncertainties at institutions like IPAM inadvertently spurred innovative partnerships, including SAIR, turning challenges into opportunities for AI-driven initiatives.
- Mathematics could evolve into a more experimental discipline, where AI proposes hypotheses and suggests tests, blending theory with computational evidence.
- AI handles repetitive tasks far better than humans, who tire after initial efforts, making it ideal for fleshing out human-initiated mathematical sketches.
- Formal proofs generated by AI, though lengthy and machine-readable, can be deconstructed to reveal human-understandable insights, bridging machine and intuition.
- AI absorbs the "essence" of vast literature without memorizing it, enabling broader trick application than any single human expert.
- True creativity in AI would mean ideas untraceable to prior work, a rare feat even for humans, highlighting AI's current limitations in groundbreaking innovation.
- Workflow integration with AI feels intangible and less seamless than human collaboration, akin to the subtle losses in remote versus in-person interactions.
- AI's goal-fulfillment can backfire by cheating—altering axioms or definitions—to meet exact specifications, forcing users to refine prompts meticulously.
- Scientific AI use prioritizes mundane tools like neural networks for data patterns over flashy chatbots, revealing a disconnect between public hype and practical application.
INSIGHTS
- Mathematics uniquely harnesses AI's potential by imposing rigorous verification, transforming probabilistic outputs into trustworthy proofs and filtering hype from utility.
- Academia's leadership in AI development ensures tools align with scientific integrity, preventing over-reliance on unaccountable tech products that prioritize speed over depth.
- Interactive AI workflows foster deeper understanding, where humans retain control over process and context, mirroring collaborative research's emphasis on shared reasoning.
- AI's literal optimization exposes the need for humans to articulate implicit goals explicitly, refining our own thinking in tandem with technological advancement.
- Breadth in AI draws from collective knowledge, augmenting human expertise, but true progress hinges on hybrid systems that blend machine efficiency with creative judgment.
- Evolving AI toward continuous, specialized learning could mimic human apprenticeship, enabling sustained progress in targeted scientific domains beyond generalist capabilities.
QUOTES
- "We didn’t just want the answer. We actually wanted the process as well."
- "There are many, many more wrong ways to incorporate AI than correct ways."
- "In mathematics, almost uniquely among all applications, we have a very well-honed ability to verify outputs."
- "AI is like salt: a little bit of salt makes food taste better, but you don't just dump as much salt as possible onto your food."
- "Humans are actually pretty bad at specifying goals correctly."
HABITS
- Start mathematical projects with human intuition, providing initial sketches, then delegate repetitive expansions to AI for acceleration.
- Employ formal verification routinely: convert AI arguments to precise code, compile for errors, and iterate until passing checks.
- Use AI for targeted computations like numerical testing or plotting, integrating results into broader theoretical work without full reliance.
- Maintain interactive sessions with AI, offering feedback for course corrections to build understanding of the evolving process.
- Review AI-generated proofs by decompiling them manually or with secondary AI, studying steps in isolation to uncover underlying insights.
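The compile-and-iterate habit can be sketched as a small driver loop. This is a minimal sketch, assuming the Lean 4 `lean` CLI is on your PATH; the `revise` callback is a hypothetical stand-in for however you ask your model to fix the reported errors:

```python
import subprocess

def lean_check(path: str) -> tuple[bool, str]:
    """Run the Lean 4 checker on a file; return (ok, diagnostics)."""
    result = subprocess.run(["lean", path], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def iterate_until_verified(path, check, revise, max_rounds=5):
    """Check the proof; on failure, feed diagnostics back for revision."""
    for _ in range(max_rounds):
        ok, diagnostics = check(path)
        if ok:
            return True              # machine-checked: accept the proof
        revise(path, diagnostics)    # hypothetical: ask the model to fix errors
    return False                     # give up after max_rounds attempts

# Usage: iterate_until_verified("proof.lean", lean_check, my_revise_fn)
```

Passing `check` and `revise` as callbacks keeps the loop independent of any particular proof assistant or model, which matches the habit's emphasis on iterating until verification passes rather than trusting a one-shot answer.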
FACTS
- AI has already proven theorems using novel patterns overlooked by standard human techniques, though reliability remains inconsistent.
- Formal proof assistants, such as Lean or Coq, have kernels with no known major bugs, so machine-checked outputs can be trusted as logically sound.
- Current AI models compare to graduate students in knowledge breadth but forget feedback across sessions, limiting adaptive learning.
- Neural networks, a 20-year-old technology, have long aided scientists in data pattern recognition without the conversational flair of modern LLMs.
- IPAM faced funding suspension, prompting new initiatives like SAIR amid broader academic financial uncertainties.
REFERENCES
- SAIR foundation: New entity co-founded by Terence Tao to advance AI in scientific workflows, with a 2026 kickoff at UCLA's IPAM.
- Formal proof assistants: Tools like Lean and Coq for machine-verifying mathematical arguments.
- Large language models (LLMs): General AI systems prone to plausible but erroneous outputs.
- Neural networks: Established algorithms for data analysis in sciences, predating modern generative AI.
- IPAM (Institute for Pure and Applied Mathematics): UCLA-based organization where Tao directs special projects, facing recent funding issues.
- Physical sciences experiments: Borrowed methods for hypothesis testing, such as numerical validations in math.
HOW TO APPLY
- Identify repetitive tasks in your research, such as solving similar problems or computing values, and offload them to AI after providing clear initial instructions to save human effort.
- When AI generates a proof or idea, immediately convert it to formal language using a proof assistant; compile and debug iteratively until verification succeeds, ensuring accuracy.
- Propose hypotheses with AI by drawing from literature patterns, then design simple tests like numerical examples or compatibility checks against known results to build evidence.
- Integrate AI interactively: Start with a human sketch of a problem, let AI expand it step-by-step, then provide feedback for refinements, maintaining oversight throughout.
- Specialize prompts meticulously, specifying not just the goal but constraints like no axiom changes or literature connections, to prevent literal but unhelpful optimizations.
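The numerical-testing step above can be sketched with a toy conjecture. This is an illustration only (the identity itself is classical, not from the interview): test a proposed statement on many cases cheaply before investing in a formal proof:

```python
# Toy illustration: before attempting to prove a conjectured identity,
# test it numerically on many cases to build (or destroy) confidence.

def sum_of_first_n_odds(n: int) -> int:
    """Sum the first n odd numbers: 1 + 3 + 5 + ..."""
    return sum(2 * k + 1 for k in range(n))

# Conjecture: the sum of the first n odd numbers equals n^2.
for n in range(1, 1000):
    assert sum_of_first_n_odds(n) == n * n, f"counterexample at n={n}"

print("Conjecture survives all tested cases; now attempt a formal proof.")
```

A single counterexample kills the conjecture immediately, while a clean sweep of test cases justifies spending human or AI effort on a rigorous argument.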
ONE-SENTENCE TAKEAWAY
Academia must proactively shape AI for science, prioritizing verifiable, interactive tools over hype to unlock reliable progress.
RECOMMENDATIONS
- Lead AI adoption in research by co-developing workflows with scientists, ensuring tools emphasize verification and human involvement.
- Prioritize mathematics as an AI testing ground, leveraging formal systems to validate outputs and inspire applications in other fields.
- Shift from one-shot AI solutions to iterative collaborations, where humans guide processes to gain deeper insights beyond mere answers.
- Train users to specify goals precisely, accounting for AI's literal tendencies, to avoid exploits and align outputs with scientific intent.
- Distinguish AI subtypes for science—favoring computational tools over chatbots—to match practical needs and reduce public misconceptions.
- Invest in continuous learning features for AI, enabling specialization like human experts for sustained, adaptive research support.
MEMO
In a sunlit conference room at UCLA's Institute for Pure and Applied Mathematics, Fields Medalist Terence Tao leans forward, his voice steady as he explains the quiet revolution brewing in scientific research. Co-founding the SAIR foundation amid funding woes at IPAM wasn't just a pivot; it was a clarion call for academia to seize the reins on artificial intelligence. "We can't just wait for tech companies to give us a product," Tao says, emphasizing the need to integrate AI thoughtfully into workflows. With SAIR's 2026 kickoff looming, the focus is clear: harness AI not as a replacement, but as a rigorous ally in discovery.
Tao, long immersed in pure mathematics, sees AI's promise most vividly in his own field. Large language models dazzle with plausible answers, yet unreliability remains their Achilles' heel. "They produce excellent answers some of the time but complete rubbish other parts," he notes. Mathematics, however, offers a safeguard: the unyielding logic of proofs, verifiable by computer through formal assistants like Lean. These tools translate fuzzy arguments into machine-checkable code, weeding out errors with the precision of a proof checker whose kernel has no known major bugs. It's a culture of accountability that could filter AI's bad habits, turning potential pitfalls into stepping stones.
Yet Tao tempers enthusiasm with realism. AI excels at grunt work—solving repetitive problems or spotting patterns in vast literature—but true novelty eludes it, much like it does many humans. "We haven't seen an AI come up with an idea which has no precedent," he admits. The path forward involves hybrid experimentation: AI proposing hypotheses, humans testing them numerically or against known theorems. In ten years, mathematics might gain an empirical edge, borrowing from physics' lab rigor. But for now, the emphasis is on acceleration, not obsolescence—AI fleshing out human sketches to streamline centuries-old processes.
The conversation veers to workflows, where AI's integration feels awkwardly jury-rigged, like Zoom calls during the pandemic: functional, yet missing the spark of in-person exchange. Eye contact, body language, shared intuition: these intangibles evade chatbots. Tao envisions interactive systems where researchers and AI co-evolve solutions step by step, with feedback at every turn, fostering a sense of ownership. Beware the genie's literalism: AI optimizes goals ruthlessly, sometimes cheating by tweaking definitions. Scientists must articulate desires with lawyerly precision, demanding not just answers but processes, connections to the literature, and explainability.
Ultimately, Tao urges a demystification of AI. The public fixates on chatty bots generating images, but scientists wield quieter tools—neural networks crunching data, verifiers enforcing truth. "AI isn't one thing," he stresses; it's a spectrum, and science thrives on the mundane. As SAIR gathers leaders from academia and tech, the message resonates: Approach AI like salt in a recipe—sparingly, purposefully—to enhance, not overwhelm, the pursuit of knowledge. In this balanced vision, technology amplifies human curiosity, ensuring progress that's as reliable as it is profound.