On the past and future of math

math
AI
opinion
My thoughts on AI for math.
Author

Shengrong Wu

Published

April 2, 2026

Mathematics holds a peculiar status in modern culture. Students who excel at it earn a kind of reverence rarely extended to any other subject. Parents enroll children in olympiad preparation programs. Mathematicians are popularly imagined as solitary geniuses engaged in uncovering the hidden architecture of the universe. The subject has acquired, over centuries, a quality that can only be described as sacred.

This essay begins by taking that sacred status seriously enough to ask where it came from. The historical answer turns out to be both clear and somewhat surprising: mathematics did not become revered because of what modern research actually produces. It became revered through a long chain of symbolic associations — Greek cosmology, medieval university curricula, the Scientific Revolution, the gatekeeping functions of mass education — each of which deposited new layers of meaning onto the subject until it became inseparable from the ideas of intelligence, truth, and cosmic order. Understanding this history exposes a significant tension. The prestige was largely earned at the level of foundational mathematical concepts — conic sections, calculus, group theory, Boolean algebra — ideas that genuinely transformed physics, engineering, and computing. But that earned prestige then gets projected uncritically upward onto all contemporary research, most of which is puzzle-solving inside an existing framework rather than revolutionary breakthrough.[26] The two are not the same thing.

That gap between historical mythology and current practice is the concern of the first two sections. The third section turns to something genuinely new: artificial intelligence is beginning to alter what is scarce and what is abundant in mathematical work, with capital markets already treating formal proof verification as infrastructure for reliable AI systems rather than as academic curiosity.[28] The fourth section then tempers that long-run picture with a sociological argument. Academic institutions change more slowly than their tools, and the near term is likely to be defined by an asymmetry — technically accelerated in method, but institutionally conservative in norm — before any deeper reordering of mathematics takes hold.

Historical reasons: why math is sacred to the public

Math acquired prestige very early because it looked like the clearest form of certain knowledge. In Greek traditions associated with Pythagoreanism and later Platonism, number and proportion were not just practical tools; they were tied to harmony, order, and the structure of reality itself.[1] That already gave mathematics a status above ordinary craft knowledge.

That elevation was not left to philosophy alone. Medieval and early university culture institutionalized the prestige directly into the structure of higher learning, where arithmetic, geometry, astronomy, and music theory formed the quadrivium.[2] So mathematics was not just useful knowledge; it became part of what counted as a properly educated mind. Once a subject is embedded in elite education for centuries, it gains symbolic authority far beyond its day-to-day applications.

Beyond the curriculum, mathematics was also bound up with religion and metaphysics. In late antique and early modern thought, many intellectuals treated mathematical truth as unusually close to divine order. Augustine integrated mathematical objects into a Christian-Platonist framework, and later early modern thinkers often treated the world as a divinely ordered system legible through mathematics.[3] That theological residue matters: even in secular societies, math still inherits some of the aura of timeless, impersonal truth.

The Scientific Revolution then massively amplified this accumulated aura. Galileo’s “book of nature” metaphor[4] and Newton’s Principia[5] made mathematics look like the language in which the universe itself could be read. Once mathematics became visibly connected to accurate prediction of motion, astronomy, mechanics, and later physics, it no longer seemed like just one intellectual game among others. It became the benchmark for what “real knowledge” looks like.

That intellectual authority soon generated concrete material rewards as well. Modern states, economies, and militaries rewarded numeracy: navigation, surveying, engineering, accounting, ballistics, finance, statistics, and later computing all depended on mathematical competence. So even when the average piece of frontier pure math had no immediate worldly impact, society kept seeing mathematics as adjacent to power, control, prediction, and technological competence. That practical prestige then fed back into the symbolic prestige.

As mathematical competence became economically indispensable, mass education absorbed and intensified that signal. In the modern school system, mathematics became a sorting device — and this is perhaps the largest single driver of the contemporary “sacred” feeling. Math is widely used as a gatekeeper for access to advanced classes, university pathways, and high-status careers. Research literature explicitly describes school math and especially algebra as functioning in this gatekeeping role.[6] So people do not merely learn that math is useful; they learn that math judges who is “smart.” That converts a subject into a social symbol.

Underlying all of these historical layers is a final structural feature that makes the whole arrangement self-reinforcing: math is unusually opaque to outsiders. Most people can at least form an opinion about a poem, a painting, or a historical argument. But they cannot easily evaluate a proof, a theorem, or a research paper in PDE or algebraic geometry. When a field is hard to inspect from the outside, prestige often rises because non-specialists substitute mystery for evaluation. In that setting, mathematics becomes a public emblem of “pure intelligence,” not just a discipline. This is partly why good math students are admired even by people who do not care about math itself.

Will the research be useful “100 years later”?

There is a common saying in pure math academia: “even if the research isn’t useful today, it will make a difference 100 years later.” Is it true that math research is eventually applied? Or is this just an excuse for protecting the field’s interests?

Below are examples of mathematical concepts that eventually found use in the physical world.

  • Modular arithmetic to public-key cryptography

    In 1801, Gauss’s Disquisitiones Arithmeticae gave the first systematic account of modular arithmetic and congruence notation, introducing the triple-bar symbol (≡) that remains standard today.[7] RSA was created in 1977 and became a foundational public-key system for secure digital communication.[8]

  • Elliptic-curve theory to elliptic-curve cryptography and Bitcoin

    A foundational milestone for elliptic-function theory is 1829, when Jacobi published Fundamenta Nova Theoriae Functionum Ellipticarum, with Abel independently developing the theory in parallel — the mature 19th-century theory is placed in this Abel–Jacobi period.[9] The use of elliptic curves in cryptography was proposed independently in 1985 by Koblitz and Miller.[10] Bitcoin’s transaction machinery uses ECDSA-based signatures, and the Bitcoin white paper was published in 2008.[11]

  • Non-Euclidean geometry to general relativity

    Lobachevsky’s first publication on non-Euclidean geometry dates to 1829.[12] Einstein’s general relativity, published in 1915, required Riemannian geometry — a form of non-Euclidean geometry — as its mathematical foundation.[13] This is a classic case where a concept first looked purely theoretical and later became part of the geometry of spacetime in physics.

  • Boolean algebra to digital circuits and computers

    The basic rules of Boolean algebra were formulated by George Boole in 1847.[14] Claude Shannon’s master’s thesis, published in 1938, used Boolean algebra to establish the theoretical basis of digital circuits.[15]

  • Conic sections to planetary orbits

    Apollonius of Perga, around the 3rd century BCE, systematized conic sections and introduced the terms ellipse, parabola, and hyperbola.[16] In 1609, Kepler published the result that Mars moves in an ellipse, making conic sections the geometry of planetary motion.[17]

  • Group theory to symmetry methods in modern physics

    The founding of group-theoretic ideas traces to Galois in the early 1830s.[18] Group theory later became invaluable in theoretical physics for classifying subatomic particles and studying atomic and molecular symmetries — applications that belong to the quantum and particle-physics era of the 20th century.

Although there are several successful applications, in every real case the application operates at the “conceptual” level. This distinction is the core of the debate, and once you separate the levels, much of the inflated rhetoric around pure math collapses.

For example, in cryptography, what matters operationally is often modular arithmetic, finite fields, elliptic curves at the conceptual and algorithmic level, complexity assumptions, and implementation details. That is undergraduate-level knowledge, and it does not imply that a very deep current paper on, say, some refined arithmetic-geometry phenomenon has practical importance.
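To make the point concrete, the operational core of RSA really is this undergraduate-level modular arithmetic. A toy sketch with tiny primes (illustrative only, completely insecure; requires Python 3.8+ for the three-argument `pow` modular inverse):

```python
# Toy RSA built from Gauss-style modular arithmetic.
# Tiny primes, purely illustrative -- never use for real security.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

msg = 42
cipher = pow(msg, e, n)    # encrypt: msg^e mod n
plain = pow(cipher, d, n)  # decrypt: cipher^d mod n
assert plain == msg        # round-trip recovers the message
```

Everything here was available, conceptually, in the Disquisitiones; what RSA added was the algorithmic insight that exponentiation mod n is easy while inverting it without d appears hard.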

The conceptual-level contributions are real. Mathematicians invented the ideas, and a century later physicists and engineers applied the core concepts in their projects. At the theorem level, applications become much narrower. A specific theorem may become important if it enters a usable pipeline: theorem → algorithm/model → implementation. But this is much rarer. Finally, at the level of current research problems, the applications largely vanish. Most frontier work is more like paper A → paper B → paper C, circulating inside a research community, with no bridge to the physical world at all.

A rough picture is:

\[\text{Mathematics} =\underbrace{\text{foundational toolkit}}_{\text{often useful}}+\underbrace{\text{classical developed theory}}_{\text{sometimes useful}}+\underbrace{\text{frontier specialization}}_{\text{mostly internal only}}.\]

That is why public rhetoric can feel misleading. People hear “mathematics is fundamental to science and technology,” which is true at the level of the toolkit, and then project that truth upward onto every niche research program, which is not true. So the more precise conclusion is: Mathematics as a civilizational infrastructure is crucial, but most frontier pure math research is not crucial to the physical world. Those two statements are compatible.

Someone may object: “the yield rate is low, so a flood of seemingly meaningless research is necessary to ensure that core conceptual innovations emerge.” The underlying conflict is a difference in research philosophy: “explore math, then apply it to science when needed” versus “explore science, then use math to explain the phenomena.”

The claim “more useless math ⇒ higher chance of later useful math” is a portfolio argument. As a matter of logic, it is plausible: if applications are rare and unpredictable, then a larger exploratory base can increase the chance that some line later becomes important. History does contain cases where abstract mathematics later became central, such as non-Euclidean geometry for relativity, group theory for symmetry in physics and chemistry, and older arithmetic/algebraic ideas for cryptography.

But that argument is much weaker than people often pretend. It does not show that most niche research is important. At best it shows “expected value of exploration > 0” for some broad research ecosystem. That is different from saying “this particular narrow paper matters.” And history suggests the strong “theory first, then application” story is too simple anyway. Innovation scholars and policy institutions explicitly note that the old linear model — basic research to applied research to product — is an oversimplification that misses feedback loops between science, engineering, markets, and institutions.[19]
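The portfolio argument, and its limit, can both be seen with toy numbers (the per-line probability below is invented purely for illustration):

```python
# Portfolio argument, toy numbers: if each independent research line has a
# small probability p of ever becoming externally useful, a larger base of
# N lines raises the chance that *at least one* pays off -- while each
# individual line's chance stays exactly p, no matter how large N gets.
p = 0.001  # hypothetical per-line chance of eventual external application

def chance_of_any_hit(n_lines: int, p_line: float = p) -> float:
    """P(at least one of n_lines independent lines becomes useful)."""
    return 1 - (1 - p_line) ** n_lines

for n in (10, 100, 1000, 5000):
    print(n, round(chance_of_any_hit(n), 3))
```

The ecosystem-level probability climbs toward 1 as N grows, which is the defensible half of the claim; the indefensible half is sliding from that to “this particular narrow paper matters,” since each line remains a 1-in-1000 bet throughout.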

One pathway runs: world / nature / technical problem → mathematical formulation → theory. This is how a lot of major mathematics grew. Mathematics historically evolved from counting, measuring, and describing shapes,[20] and Newton developed calculus as a tool to push forward the study of nature, with strong interaction between mathematics, physics, and astronomy.[21] Probability theory also originated in practical problems of gambling and insurance, before becoming foundational in science.[22] Fourier’s analysis grew out of the theory of heat conduction.[23] So some of the most consequential mathematics did not begin as free-floating abstraction. It began from contact with reality.

The other pathway is: internal mathematical development → later scientific or technical use. That also happens. Conic sections were studied long before Kepler used ellipses in astronomy; non-Euclidean geometry long preceded general relativity; Fourier series outgrew the original heat problem and became central for signal analysis.

So history does not support either extreme: not “all important math comes from physical problems,” and not “just let pure math grow randomly and applications will take care of themselves.”

Even if most pure research never matters externally, there is still a serious defense of it: future bridgeability is hard to predict ex ante. People in 1850 could not reliably know which abstractions would matter in 1915 or 1985. That unpredictability is real. So a society that supports some amount of free mathematical exploration is probably rational. But this only justifies a moderate claim: maintain exploratory capacity.

A new variable: AI

Artificial intelligence may change pure mathematics more deeply than most of academia currently expects. The old image of mathematical research is still built around a world in which the main bottlenecks are technical training, symbolic manipulation, proof construction, and verification. But the current AI market is already signaling a different view. Investors are not just funding “math” as an intellectual pursuit; they are funding mathematics as a foundation for reliable reasoning systems.[28] In March 2026, Axiom announced a $200 million Series A at a valuation above $1.6 billion, explicitly framing its mission as extending formal mathematics into “Verified AI.”[24] Harmonic likewise announced a $120 million Series C at a $1.45 billion valuation in late 2025, built around formally verified mathematical reasoning with Lean.[25]

This matters because it shows that capital markets do not currently view AI-for-math mainly as an academic niche. They view it as infrastructure for trustworthy AI. The investment thesis is not simply that theorem proving is interesting. It is that formal reasoning, proof verification, and theorem extraction may become crucial layers in software reliability, scientific automation, and model safety. Axiom’s public product positioning around proof verification and theorem extraction, and Harmonic’s repeated emphasis on formally verified reasoning, both fit that pattern.
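To make “formally verified reasoning” concrete: in a proof assistant like Lean, a proof is a program that the kernel either accepts or rejects, so correctness becomes mechanical rather than social. A minimal Lean 4 sketch (theorem names are my own; `Nat.add_succ` and the tactics used are standard core Lean):

```lean
-- A machine-checked proof: the kernel accepts or rejects this outright;
-- no referee judgment is involved.
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

-- A slightly less trivial statement, proved by induction on n.
theorem zero_add_left (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

This is exactly the kind of labor the essay argues may become abundant: once statements are formalized, checking them is cheap. Deciding that a statement is worth formalizing is not.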

If that trajectory continues, then the deepest change to pure mathematics is not just that AI will help prove theorems. It is that correctness may become cheap while significance remains scarce. In the traditional model, the central question is often “Can this theorem be proved?” In an AI-rich world, the harder question becomes “Why is this theorem worth proving at all?” That shift is exactly what the funding environment implies. Venture investors are rewarding systems that can formalize, verify, and operationalize reasoning at scale. If those capabilities become abundant, then theorem production itself loses scarcity value. What remains scarce is problem selection, problem formulation, and judgment about what kinds of formal questions open genuinely important possibilities. The market is, in effect, betting that verification will become infrastructure rather than the main bottleneck.

This has serious consequences for pure mathematics. Much of contemporary pure math is organized around niche specialization. Researchers spend years learning the language, literature, and techniques of a narrow domain. That creates depth, but it also creates a structural weakness: specialists often become very good at advancing local frontiers while becoming worse at judging whether those frontiers matter. If AI can increasingly formalize definitions, search proof paths, and verify correctness, then the value of narrow technical fluency declines relative to the value of identifying fertile questions. In that world, the human comparative advantage moves upward, away from symbolic labor and toward framing, taste, and bridge-building.

The capital-markets perspective reinforces this. Investors are not paying billion-dollar valuations because they want a faster version of conventional academic theorem production. They are paying for a pipeline that looks more like “formal reasoning→verified software/science→commercial and strategic value.” That means the financially legible future of mathematics is not “more papers in narrower niches,” but mathematics as a reliability layer for AI systems and scientific workflows. In other words, capital is already selecting for parts of mathematics that connect to proof infrastructure, automation, and trustworthy computation.

This also weakens the old prestige structure of pure mathematics. For a long time, mathematics drew symbolic authority from opacity: most people could not inspect advanced mathematics, so mathematicians inherited status from the subject’s difficulty. But if AI lowers the barrier to formalization and proof production, then mathematics may become less like a closed priesthood and more like an open creative medium.[27] The public, engineers, scientists, and domain users may increasingly contribute valuable problem sources, while AI handles more of the formal execution. The scarce good is no longer the ability to survive inside a narrow literature. The scarce good is the ability to see what matters.

In that sense, AI may push mathematics back toward a more fundamental role. Instead of treating mathematics as an autonomous machine for generating increasingly refined internal abstractions, it may force a renewed emphasis on exploration: what in science, technology, and human life is worth formalizing in the first place? If that happens, then the future of mathematics will not belong simply to those who know the most. It will belong to those who can best identify what is worth knowing.

The public may become far more important to mathematics than the current academic system assumes. Ordinary people encounter real constraints long before they are turned into formal research problems: bottlenecks in logistics, scheduling, pricing, coordination, design, navigation, resource allocation, accessibility, education, and everyday decision-making. In many cases, they can see the friction of the world more clearly than specialists trained inside narrow mathematical literatures. Their comparative advantage is not usually in writing proofs, but in noticing which patterns, failures, and inefficiencies are actually worth formalizing. If AI increasingly lowers the cost of translation from lived experience to mathematical structure, then the public can contribute not only as consumers of mathematical knowledge, but as a major source of new conjectures, optimization problems, geometric questions, and combinatorial structures rooted in reality.

Near-term inertia: why mathematics may look socially stable even if the technology changes fast

The historical forces examined at the start of this essay — elite institutionalization, opacity, the gatekeeping functions of education — do not dissolve the moment technology changes. The social structures built on top of mathematics’ prestige have their own momentum, and understanding them is essential to calibrating how fast the AI-driven changes described above will actually arrive. In the long run, the social meaning of mathematics may change dramatically: if formalization, proof search, and verification become cheap, the public image of mathematics as a domain governed mainly by scarce human symbolic labor could weaken considerably. But the near future will likely look much more conservative. The main reason is not that the technology will be too weak. It is that academic systems usually change more slowly than their tools.

Mathematics is not only a body of results; it is also a social institution. Journals, hiring committees, graduate training, grant systems, seminar culture, and informal prestige networks all help determine what counts as valuable work. These structures are designed to stabilize judgment under uncertainty, and for that reason they do not respond instantly even when a new technology clearly improves what is technically possible. More likely, the first effect of AI will be to increase productivity inside the existing structure rather than to replace the structure itself. Researchers may use AI to read papers faster, explore examples, draft formal proofs, or check arguments more effectively, while still publishing, refereeing, and evaluating work through familiar institutions. The OECD’s Science, Technology and Innovation Outlook 2025 makes a similar point more broadly: AI is increasing researchers’ capabilities in analysis, simulation, and hypothesis generation, but human scientists remain central, and science systems must still adapt institutionally to make transformative use of these tools.[30] Recent work on LLMs in scientific research reaches a consistent conclusion — that AI is already reshaping workflows and productivity, but its role in deeper scientific discovery remains limited and contingent on alignment with human goals.[31]

There is also a specifically sociological reason to expect inertia. The certification of mathematical truth has never depended only on deduction in the abstract; it depends on communities deciding when an argument is trustworthy enough to count as established. A recent sociological study of mathematical peer review shows that mathematicians do not treat publication as a guarantee of correctness: peer review in practice only “adds a bit of certainty,” and results gain or lose trust over time as the community reads, uses, and tests them.[29] The same study documents severe stress on the existing system from an explosion of submitted papers and growing difficulty finding qualified referees.[29] That is directly relevant to the AI era: if AI increases output further, the short-run result may not be a clean transition to formal certainty, but rather even more pressure on already strained mechanisms of certification.

Prestige systems also slow adoption. The sociology of science has long emphasized the “Matthew effect”: recognition and attention accumulate disproportionately around already prestigious people and institutions, while contributions from less prominent researchers receive systematically less visibility.[32] More recent science-of-science work confirms that reputation signaling remains a structural feature of scientific success, including in review and evaluation settings.[33] This suggests that even if formal verification becomes technically available, social judgment will not instantly become reputation-neutral. Formalization may reduce uncertainty about whether a proof is correct, but it does not by itself settle whether a problem is important, whether a result changes the field, or which directions deserve scarce attention. Those judgments will remain partly social, and therefore partly unequal, for some time.

For that reason, the first phase of strong AI in mathematics may be defined by an asymmetry: methods change faster than norms. Researchers will likely adopt AI privately before institutions fully reorganize around it. Some journals may begin to expect stronger correctness checks in certain areas, and formalization may become increasingly desirable for foundational or high-dependence results, but it is unlikely that the profession as a whole will immediately require every paper to be formally verified. The transition will probably be uneven. Fields that already have mature proof libraries, clearer symbolic interfaces, or high downstream reuse may move first, while areas with heavier tacit reasoning, weaker library support, or less immediate reuse may continue to rely mainly on traditional forms of trust. In that sense, the near future of mathematics is likely to be hybrid: technically accelerated, but institutionally conservative.

This near-term conservatism should not be mistaken for evidence that nothing fundamental is changing. It may instead be the calm phase before a deeper reordering. If AI continues to raise the rate of theorem production, then the demand for reliable certification will rise with it. At some point, communities may conclude that informal trust, reputation, and traditional peer review are no longer sufficient to manage the volume. But that shift will happen not simply because the tools exist. It will happen when social institutions are forced to admit that older mechanisms of validation no longer scale.

Conclusion

The sacred image of mathematics rests on a real foundation, but draws the wrong lesson from it. History shows that a handful of foundational mathematical concepts — conic sections, calculus, group theory, Boolean algebra — have been genuinely indispensable to physics, engineering, and computation. That record is what generates the prestige. But prestige, once institutionalized through centuries of elite education, theological association, and the gatekeeping functions of modern schooling, does not calibrate itself against ongoing research output. It becomes self-reinforcing and self-referential, attributing to all of mathematics the significance that belongs only to a fraction of it. The common defense — that even useless research will matter in a hundred years — is partly true at the level of conceptual foundations, but becomes increasingly implausible the deeper into frontier specialization one looks. The honest conclusion is a narrower one: mathematics as a civilizational toolkit is indispensable; most contemporary frontier research is not.

Artificial intelligence introduces a genuinely new element into this picture, and not simply by accelerating proof production. If correctness and formal verification become cheap, the scarcity that has long structured mathematical prestige — the difficulty of constructing arguments that very few others could construct — weakens. What remains scarce is judgment: knowing which questions are worth asking, which formalizations are worth building, which abstractions might eventually bridge to reality. That is a different kind of intelligence from what traditional mathematical training selects for, and one that may be more widely distributed than the current academic structure assumes. In that sense, AI does not just change how mathematics is done. It challenges the implicit theory of what mathematics is fundamentally for.

Yet the sociological reality examined in the final section should temper any expectation of rapid transformation. Institutions designed to stabilize judgment under uncertainty do not reorganize the moment better tools arrive. Journals, hiring committees, and prestige hierarchies will absorb new AI capabilities gradually, unevenly, and partly on their own terms. The near term will probably look hybrid: researchers adopting AI privately while norms lag behind, formalization spreading in fields with strong library infrastructure while tacit-reasoning fields hold to older forms of trust. What this means is that the pace of change will be determined less by what the technology can do than by when social institutions are forced to admit that their older mechanisms of validation no longer scale. That forcing moment has not yet arrived. Whether it will, and what mathematics looks like on the other side, remains genuinely open.


References

[1] Huffman, C.A. (2018). “Pythagoreanism.” Stanford Encyclopedia of Philosophy (Fall 2018 ed.), ed. E.N. Zalta. https://plato.stanford.edu/entries/pythagoreanism/

[2] “Quadrivium.” Encyclopædia Britannica. https://www.britannica.com/topic/quadrivium; “Quadrivium.” Wikipedia. https://en.wikipedia.org/wiki/Quadrivium

[3] Davenport, A.A. (2011). “Mathematical Knowledge and Divine Mystery: Augustine and his Contemporary Challengers.” Christian Scholar’s Review, 40(3). https://christianscholars.com/mathematical-knowledge-and-divine-mystery-augustine-and-his-contemporary-challengers/; Gilson, E. (1960). The Christian Philosophy of Saint Augustine. Random House.

[4] Galilei, G. (1623). Il Saggiatore [The Assayer]. Rome: Accademia dei Lincei. The “book of nature” passage: “Philosophy is written in this grand book—the Universe—which stands continually open to our gaze… It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures.” See: “The Assayer.” Wikipedia. https://en.wikipedia.org/wiki/The_Assayer

[5] Newton, I. (1687). Philosophiae Naturalis Principia Mathematica. London: Royal Society.

[6] Martin, D.B. (2000). “Mathematics as ‘Gate-Keeper’ (?).” Journal for Research in Mathematics Education (ERIC). https://files.eric.ed.gov/fulltext/EJ848490.pdf; Moses, R.P., & Cobb, C.E. (2001). Radical Equations: Math Literacy and Civil Rights. Boston: Beacon Press; Boaler, J. (2016). Mathematical Mindsets. San Francisco: Jossey-Bass.

[7] Gauss, C.F. (1801). Disquisitiones Arithmeticae. Leipzig: Gerh. Fleischer. English translation: Clarke, A.A., trans. (1965). Yale University Press. See also: “Disquisitiones Arithmeticae.” Encyclopædia Britannica. https://www.britannica.com/topic/Disquisitiones-Arithmeticae

[8] Rivest, R.L., Shamir, A., & Adleman, L. (1978). “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems.” Communications of the ACM, 21(2), 120–126. https://people.csail.mit.edu/rivest/Rsapaper.pdf (Received by journal April 4, 1977; published February 1978.)

[9] Jacobi, C.G.J. (1829). Fundamenta Nova Theoriae Functionum Ellipticarum. Königsberg: Borntraeger. See also: “Carl Jacobi.” Encyclopædia Britannica. https://www.britannica.com/biography/Carl-Jacobi; MacTutor History of Mathematics biography: https://mathshistory.st-andrews.ac.uk/Biographies/Jacobi/

[10] Miller, V.S. (1985). “Use of Elliptic Curves in Cryptography.” In Advances in Cryptology — CRYPTO ’85, Lecture Notes in Computer Science, vol. 218, pp. 417–426. Springer; Koblitz, N. (1987). “Elliptic Curve Cryptosystems.” Mathematics of Computation, 48(177), 203–209. https://www.ams.org/journals/mcom/1987-48-177/S0025-5718-1987-0866109-5/

[11] Nakamoto, S. (2008). “Bitcoin: A Peer-to-Peer Electronic Cash System.” https://bitcoin.org/bitcoin.pdf (Published on the Cryptography Mailing List, October 31, 2008.); “Elliptic Curve Digital Signature Algorithm.” Bitcoin Wiki. https://en.bitcoin.it/wiki/Elliptic_Curve_Digital_Signature_Algorithm

[12] Lobachevsky, N.I. (1829–30). “O nachalakh geometrii” [On the Principles of Geometry]. Kazanskii vestnik [Kazan Messenger], nos. 25–28. See also: “Nikolay Ivanovich Lobachevsky.” Encyclopædia Britannica. https://www.britannica.com/biography/Nikolay-Ivanovich-Lobachevsky

[13] Einstein, A. (1915). “Die Feldgleichungen der Gravitation” [The Field Equations of Gravitation]. Sitzungsberichte der Preussischen Akademie der Wissenschaften, 25 November 1915, 844–847. See also: “General relativity.” Wikipedia. https://en.wikipedia.org/wiki/General_relativity

[14] Boole, G. (1847). The Mathematical Analysis of Logic: Being an Essay towards a Calculus of Deductive Reasoning. Cambridge: Macmillan. See also: “George Boole.” Encyclopædia Britannica. https://www.britannica.com/biography/George-Boole; “George Boole.” Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/boole/

[15] Shannon, C.E. (1938). “A Symbolic Analysis of Relay and Switching Circuits.” Transactions of the American Institute of Electrical Engineers, 57(12), 713–723. https://www.cs.virginia.edu/~evans/greatworks/shannon38.pdf; MIT thesis repository: https://dspace.mit.edu/handle/1721.1/11173. Note: The thesis was submitted to MIT in 1937 and published in 1938; it received the Alfred Noble Prize in 1940, which is the common source of the misattributed 1940 publication date.

[16] Apollonius of Perga (c. 200 BCE). Conics (Konika). (Books I–IV survive in Greek; Books V–VII in medieval Arabic translation.) See also: “Apollonius of Perga.” Encyclopædia Britannica. https://www.britannica.com/biography/Apollonius-of-Perga

[17] Kepler, J. (1609). Astronomia Nova. Prague. Modern edition: Donahue, W.H., trans. (1992). New Astronomy. Cambridge University Press. See also: “Astronomia nova.” Wikipedia. https://en.wikipedia.org/wiki/Astronomia_nova

[18] Galois’s manuscripts date to 1831; he was killed in 1832. Published posthumously: Galois, É. (1846). “Œuvres mathématiques d’Évariste Galois.” Journal de Mathématiques Pures et Appliquées, 11, 381–444. See also: “Galois theory.” Wikipedia. https://en.wikipedia.org/wiki/Galois_theory

[19] Stokes, D.E. (1997). Pasteur’s Quadrant: Basic Science and Technological Innovation. Washington, D.C.: Brookings Institution Press. https://www.brookings.edu/books/pasteurs-quadrant/; see also Bush, V. (1945). Science, the Endless Frontier. United States Government Printing Office (the foundational statement of the linear model that Stokes critiques).

[20] Boyer, C.B., & Merzbach, U.C. (1991). A History of Mathematics (2nd ed.). New York: Wiley. (Standard scholarly history documenting mathematics’ origins in measurement, counting, and geometry.)

[21] Newton, I. (1736). The Method of Fluxions and Infinite Series [composed c. 1671]. London: Henry Woodfall. See also: “Method of Fluxions.” Wikipedia. https://en.wikipedia.org/wiki/Method_of_Fluxions

[22] David, F.N. (1962). Games, Gods and Gambling: A History of Probability and Statistical Ideas. London: Griffin. The Pascal–Fermat correspondence (1654) on the “problem of points” is the canonical founding moment; see also: “History of probability.” Wikipedia. https://en.wikipedia.org/wiki/History_of_probability

[23] Fourier, J.B.J. (1822). Théorie analytique de la chaleur [The Analytical Theory of Heat]. Paris: Firmin Didot. English translation: Freeman, A., trans. (1878). Cambridge University Press. See also: “The Analytical Theory of Heat.” Encyclopædia Britannica. https://www.britannica.com/topic/The-Analytical-Theory-of-Heat

[24] “Verifiable AI Startup Axiom Raises $200M to Prove AI-Generated Code Is Safe to Use.” SiliconAngle, March 12, 2026. https://siliconangle.com/2026/03/12/verifiable-ai-startup-axiom-raises-200m-prove-ai-generated-code-safe-use/; Axiom Series A announcement on X: https://x.com/axiommathai/status/2032095751875231822; Axiom on LinkedIn (mission framing): https://www.linkedin.com/posts/axiommath_axiom-came-out-of-stealth-last-october-sharing-activity-7437871128997601280-Atar

[25] “Harmonic Builds Momentum Towards Mathematical Superintelligence with $120 Million Series C.” BusinessWire, November 25, 2025. https://www.businesswire.com/news/home/20251125727962/en/Harmonic-Builds-Momentum-Towards-Mathematical-Superintelligence-with-$120-Million-Series-C; see also: “Harmonic AI Raises $120M at $1.45B Valuation to Advance Mathematical Reasoning.” SiliconAngle, November 25, 2025. https://siliconangle.com/2025/11/25/harmonic-ai-raises-120m-1-45b-valuation-advance-mathematical-reasoning/

[26] Bird, A. (2022). “Thomas Kuhn.” Stanford Encyclopedia of Philosophy (Spring 2022 ed.), ed. E.N. Zalta. https://plato.stanford.edu/entries/thomas-kuhn/. Primary source: Kuhn, T.S. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press. The SEP article summarizes Kuhn’s core argument: in “normal science,” a paradigm “suppl[ies] puzzles for scientists to solve and … the tools for their solution,” with revolutionary science being the exception rather than the rule.

[27] Gowers, T., & Nielsen, M. (2009). “Massively collaborative mathematics.” Nature, 461, 879–881. https://www.nature.com/articles/461879a. Reports on the Polymath Project — initiated in January 2009 on Gowers’s blog — in which many mathematicians collaborated openly online to solve a hard combinatorial problem (the density Hales–Jewett theorem). The article argues that open, distributed collaboration can succeed in mathematics, demonstrating that the field need not be organized exclusively around isolated specialists.

[28] Casado, M., & McCormick, M. (2026). “AI Will Write All the Code. Mathematics Will Prove It Works.” Menlo Ventures Perspective. https://menlovc.com/perspective/ai-will-write-all-the-code-mathematics-will-prove-it-works/. A lead investor in Axiom’s Series A articulates the capital-markets thesis: formal mathematics and proof verification are emerging as infrastructure for trustworthy AI, not merely as academic pursuits.

[29] Greiffenhagen, C. (2024). “Judging Importance before Checking Correctness: Quick Opinions in Mathematical Peer Review.” Social Studies of Science, 54(1), 3–32. https://journals.sagepub.com/doi/10.1177/01622439231203445. A sociological study of how mathematicians evaluate submitted work, finding that peer review in practice adds only incremental certainty about correctness, that trust in results accumulates communally over time, and that the system faces mounting strain from submission volume and referee scarcity.

[30] OECD. (2025). Science, Technology and Innovation Outlook 2025. OECD Publishing. https://www.oecd.org/en/publications/oecd-science-technology-and-innovation-outlook-2025_5fe57b90-en.html. See especially the chapter “How science systems need to adapt to support transformative change,” which argues that AI raises researcher capabilities but that realizing transformative benefit requires deliberate institutional adaptation — not just the presence of new tools.

[31] Zhang, Y., Khan, S.A., Mahmud, A., et al. (2025). “Exploring the role of large language models in the scientific method: from hypothesis to discovery.” npj Artificial Intelligence. https://www.nature.com/articles/s44387-025-00019-5. Reviews LLM applications across the scientific cycle and concludes that deep integration requires alignment with human scientific goals; AI enhances productivity and reshapes workflows, but does not currently displace human agency at the level of problem selection and discovery.

[32] Merton, R.K. (1968). “The Matthew Effect in Science.” Science, 159(3810), 56–63. https://www.science.org/doi/10.1126/science.159.3810.56. The original formulation: eminent scientists receive disproportionate credit in cases of collaboration or simultaneous discovery, while less-established researchers receive systematically less recognition — a cumulative-advantage dynamic that shapes how attention and resources are distributed across the field.

[33] Wang, D., & Barabási, A.-L. (2021). The Science of Science. Cambridge: Cambridge University Press. https://www.cambridge.org/core/books/science-of-science/572A745A6F97B55A263F5E86225E3F70. A comprehensive quantitative treatment of how scientific careers, collaboration, and impact actually work; documents the persistence of the Matthew effect and reputation signaling as structural features of science, including in review and evaluation contexts.