
A First-Person Account from a Mathematics Laboratory at the University of Tennessee, Knoxville — 1991 — and the Government Machinery That Made It Possible
By a graduate student who was there
Research & Historical Context: UT Knoxville / Oak Ridge / DARPA / DOE
I. The Lab on the Hill
I had no particular reason to be there. I was a graduate student with an innate curiosity and a friend — a brilliant Ph.D. candidate from Venezuela — who happened to work in the mathematics department at the University of Tennessee, Knoxville. I would stop by his lab between my own commitments, half for the conversation, half because there was something in the air of that room I could not name. It was 1991. The Cold War was dissolving. Nirvana was on the radio. And in a room filled with the hum of powerful computers and monitors scrolling long ribbons of numbers, a small team of mathematicians was quietly doing something that would eventually reshape civilization.
I didn’t know that then. I knew only that the screens made no immediate sense to me — strings of floating-point numbers, iterative computations, matrices alive with motion. When I asked my friend what they were working on, he said, simply, “Artificial intelligence.” I nodded as if I understood. I did not. Nobody really did, not in the way the phrase is understood today. But the work was real, the funding was federal, and the stakes — though invisible to a curious visitor — were geopolitical.
This is not a story about my contribution. I had none. It is a story about what I accidentally witnessed, and what the historical record — once you dig into it — reveals about the extraordinary, largely invisible government apparatus that had been financing that kind of work for decades. If you’ve heard people suggest lately that the military is trying to hijack AI from Silicon Valley corporations, let me tell you: they were never not involved. From the very beginning, the Pentagon was the landlord.
II. The Tennessee Corridor and Its Secret
To understand what was happening in that UT Knoxville mathematics lab, you need to understand the geography. Knoxville and Oak Ridge sit roughly 25 miles apart in East Tennessee, connected not merely by highway but by a research relationship forged during the Manhattan Project in World War II. The Oak Ridge National Laboratory — built in wartime secrecy to produce enriched uranium — never really stopped being a national security facility. It just changed its definition of what “security” required.
By the 1980s, it required artificial intelligence.
The institutional bond between UT and Oak Ridge was never merely symbolic. More than 80 researchers have held joint appointments as UT faculty and ORNL scientists simultaneously. UT professors were physically embedded inside Oak Ridge, drawing from Department of Energy budgets, and working on problems that blurred the line between pure mathematics and national defense. When my Venezuelan friend described “AI research” in 1991, he was describing work that existed within this ecosystem — an ecosystem stretching from the UT math building across the ridge to one of the most important scientific installations in the Western world.
That partnership began during World War II and has never ended. It simply evolved. What started as uranium enrichment calculations became computational physics, then expert systems, then neural networks, then the exascale AI systems running today on ORNL’s Frontier supercomputer — the most powerful machine on Earth at the time of its launch in 2022. Every step of that evolution was funded by the federal government, primarily through the Department of Energy and the Department of Defense.
The 1979 Ignition
The institutional history is precise. In October 1979 (the year I graduated from high school in Clinton, Illinois, and took a summer job building the Clinton Nuclear Power Plant), Oak Ridge launched the Oak Ridge Applied Artificial Intelligence Project — a formal DOE-funded initiative to evaluate AI’s potential for scientific research. The four areas targeted were spectroscopy, environmental management, nuclear fuel reprocessing, and programming assistance. Each of those domains had national security implications that required no imagination to see.
Among the key contributors documented at the lab in 1981 was Sara Jordan — a University of Tennessee professor holding a joint appointment at ORNL. She appears in archived photographs alongside other early AI researchers in what ORNL’s own historical record calls its first serious AI team. The institutional boundary between UT and Oak Ridge was, in practice, more administrative than actual.
The machines they ran in those early years included the DECSystem-10, loaded with AI frameworks borrowed from Rutgers University — rule-based expert systems that could chain logical inferences together the way a trained scientist would. The system, called EXPERT, was originally designed for healthcare consulting. At Oak Ridge it was retooled for spectroscopy, allowing computers to identify functional groups in organic molecules — work with direct applications in nuclear chemistry and environmental monitoring.
By 1984, ORNL had developed what they called SAM — the Simulation Analysis Module — an AI system that could model energy dynamics in real structures and adapt its own strategies over time. ORNL mathematician Alan Solomon described it in the 1984 ORNL Review: the system performed statistical analyses, made logical inferences, and learned to address novel situations it had not previously encountered. In today’s language, SAM was an early form of machine learning, deployed first in a nuclear laboratory’s thermal management program.
What I watched on those screens in 1991 was a direct descendant of that lineage. The numbers streaming past weren’t random. They were iterative mathematical processes — optimization loops, matrix operations, early neural network weight updates — the numerical scaffolding of machine cognition. I was watching mathematics trying to teach itself to think.
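For readers who want a concrete picture of what such a screen might have been displaying, here is a deliberately minimal sketch of one of the processes named above: an iterative weight update, the repeated gradient-descent step at the heart of neural network training then and now. The model, data, and learning rate here are invented for illustration; this is not the lab's actual code, only the general shape of the mathematics.

```python
def train_linear(xs, ys, lr=0.02, steps=2000):
    """Fit y ~ w*x + b by iterative weight updates (least-squares gradient descent)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # The "weight update" step — repeated thousands of times, each iteration
        # emitting the kind of floating-point values that scrolled past on those monitors.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data drawn from the line y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = train_linear(xs, ys)
print(round(w, 2), round(b, 2))
```

Each pass through the loop nudges the weights slightly downhill on the error surface; watched live, the intermediate values of `w`, `b`, and the gradients are exactly the kind of "strings of floating-point numbers" that mean nothing to a visitor and everything to the person running them.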
III. The Money Trail — The Military Was Never Absent
The contemporary narrative — that a civilian tech industry built AI and a militaristic government now wants to commandeer it — is historically inverted. The Department of Defense, through DARPA, was the primary funder of American AI research from the field’s founding through the 1990s. Not a contributor. The primary funder. The engine without which the field would not have existed in any recognizable form.
In June 1963, MIT received a $2.2 million grant from ARPA — the early name for what became DARPA — as seed money for Project MAC, which subsumed the original AI research groups of Minsky and McCarthy. DARPA continued providing approximately $3 million per year to MIT until the 1970s, and made parallel grants to Carnegie Mellon University and Stanford’s Artificial Intelligence Laboratory, founded by John McCarthy in 1963. These three institutions — MIT, CMU, and Stanford — became the foundational “centers of excellence” in American AI, and every one of them was built on Pentagon money.
The National Research Council, in a comprehensive 1999 study titled “Funding a Revolution: Government Support for Computing Research,” concluded without equivocation: from the 1960s through the 1990s, DARPA provided the bulk of the nation’s support for AI research and thus helped legitimize AI as an important field of inquiry. Not “a significant portion.” The bulk.
The Defense Science Board — a panel of civilian experts advising the Department of Defense — formally ranked AI second from the top of all technologies most likely to produce an order-of-magnitude impact on defense capability in the 1990s. This ranking occurred in 1981, two years before the most dramatic federal AI investment in history.
The Strategic Computing Initiative: DARPA’s Billion-Dollar Bet
In October 1983, DARPA presented Congress with a document outlining the Strategic Computing Initiative. It did not soften its language.
The document stated directly:
“If the new generation technology evolves as we now expect, there will be unique new opportunities for military applications of computing. Instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. The possibilities are quite startling, and suggest that new generation computing could fundamentally change the nature of future conflicts.”
This was a budget justification presented to elected representatives, not a science fiction proposal. Between 1983 and 1993, DARPA spent over one billion dollars on this program. Its stated goals included a Pilot’s Associate system to assist fighter pilots in the cockpit, AI-driven battle management systems, and self-directing military vehicles on land, at sea, and in the air. Every one of those goals required fundamental advances in natural language processing, machine vision, and neural networks — the same mathematical domains being worked on in university laboratories across the country, including the one I wandered into in Knoxville.
The universities that received Strategic Computing Initiative funding formed the intellectual backbone of the AI research community in the late 1980s and early 1990s. When DARPA tripled its AI investment between 1984 and 1988, that money flowed into graduate programs, into postdoctoral fellowships, into the computational infrastructure of university labs. It transformed into human knowledge — into the Ph.D. candidates who sat at those glowing monitors, running those strings of numbers that meant nothing to a curious visitor and everything to the future.
The Gulf War Proof — 1991
Remarkably, in the same year I stood in that UT mathematics lab, the United States military deployed its most significant real-world AI system to date. The Dynamic Analysis and Replanning Tool — known as DART — was a DARPA-funded artificial intelligence logistics system used during Operation Desert Storm to schedule the transportation of supplies, personnel, and equipment across a theater of war. It used intelligent agents to handle the combinatorial complexity of military logistics in real time.
DART reportedly saved the military more money in that single operational deployment than DARPA had spent on AI research over the preceding decade. The math being run on those computers in Knoxville in 1991 was part of the same national intellectual project that produced DART. Different lab, different application domain — same funding lineage, same mathematical foundations, same America.
IV. A Chronology of Federal AI Investment — From Oak Ridge to the Exascale Era
1943: The Manhattan Project creates the UT–Oak Ridge research bond. UT faculty, students, and infrastructure become intertwined with the federal laboratory that will eventually become ORNL. The wartime relationship never fully ends.
1954: ORNL and Argonne National Laboratory unveil the Oak Ridge Automatic Computer and Logical Engine — ORACLE. For a brief period it is the world’s fastest computer. It is used for nuclear physics calculations and reduces computations that would take years on adding machines to minutes.
1958: ARPA is founded by President Eisenhower in response to Sputnik. Its Information Processing Techniques Office will become the primary federal engine of AI research for three decades.
1963: ARPA provides MIT with $2.2 million to found Project MAC, subsuming the Minsky-McCarthy AI research group. Parallel grants go to Carnegie Mellon and Stanford. The three foundational American AI institutions are all built on federal defense money.
1979: ORNL formally launches the Oak Ridge Applied Artificial Intelligence Project with DOE funding. UT faculty hold joint appointments. Expert systems for spectroscopy and nuclear applications begin active development. The UT–ORNL AI corridor is open for business.
1981: UT Professor Sara Jordan is documented as a key contributor to ORNL’s early AI team. The Defense Science Board ranks AI second among all technologies for military impact in the coming decade.
1983: DARPA announces the Strategic Computing Initiative — $1 billion over ten years for autonomous military AI, pilot systems, and battle management. Universities across the country, including those in the ORNL research network, compete for grants.
1984–1988: DARPA triples its AI investment. The money floods university mathematics and computer science departments. Neural networks, speech recognition, and expert systems all advance significantly. The “AI winter” of the late 1980s is itself caused by DARPA pulling back funding when results don’t meet military timelines — proof, if proof were needed, of who was in control of the field’s temperature.
1991: DART deploys in Operation Desert Storm. A graduate student with no mathematical expertise visits a friend’s lab at UT Knoxville and watches numbers scroll across monitors. DARPA reorganizes, creating the Software and Intelligent Systems Technology Office — a dedicated federal office for “machine intelligence and software engineering.”
2000: UT-Battelle, a 50-50 partnership between the University of Tennessee and Battelle Memorial Institute, assumes formal co-management of Oak Ridge National Laboratory. The de facto partnership of six decades becomes institutional. The largest science and energy lab in the Department of Energy system is now officially co-managed by a university.
2012: ORNL launches Titan — the first supercomputer integrating GPUs with CPUs, enabling rapid neural network prototyping. The GPU integration is considered a risky architectural choice at the time. It proves transformative and accelerates AI research in ways that would not be fully visible for another decade.
2014: ORNL develops MENNDL — Multinode Evolutionary Neural Networks for Deep Learning — an algorithm that automatically generates neural networks outperforming those designed by human experts. It is later licensed to General Motors for vehicle technology development.
2022: ORNL’s Frontier supercomputer breaks the exascale barrier, performing more than one billion billion calculations per second. It is the world’s most powerful computer for AI-driven science. It is managed by UT-Battelle for the Department of Energy. The machines that descend from those DECSystem-10s in the 1979 Oak Ridge lab now occupy the top of the global computing hierarchy.
2023: The Department of Energy announces a $67 million investment in AI for Science, with ORNL leading multiple projects in large language models for high-performance computing and scientific machine learning.
2026: DOE launches the Genesis Mission — a coordinated AI initiative across all 17 national laboratories including Oak Ridge. The stated goal: “secure American leadership in artificial intelligence for science, energy, and national security.” The language is word-for-word consistent with DARPA’s 1983 congressional briefing. The circle is complete.
V. The Myth Corrected — “The Military Is Trying to Take Over AI”
This sentence, or some version of it, appears regularly in technology journalism. It implies a historical sequence: that private enterprise created AI, that the military watched from the sidelines, and that the Pentagon is now — belatedly and aggressively — attempting to claim what was built without its involvement. Every word of that sequence is wrong.
The Department of Defense was not a latecomer to artificial intelligence. It was the field’s founding patron. DARPA’s Information Processing Techniques Office, established in 1962, transformed AI from scattered curiosities at a handful of universities into a nationally coordinated research enterprise with defined goals, defined institutions, and federal funding sufficient to sustain a generation of mathematicians and computer scientists. Without that money, the field as we know it does not exist. The private sector had no mechanism and no incentive to fund the kind of long-horizon, high-risk, mathematically abstract research that AI required through the 1960s, 1970s, and 1980s.
The “AI winters” — the two major periods of reduced progress and funding in the 1970s and late 1980s — were not caused by private market failures. They were caused by the military’s withdrawal of support when progress failed to meet Pentagon expectations. When DARPA pulled funding, university labs contracted, researchers left the field, and the term “artificial intelligence” became so stigmatized that researchers began using euphemisms like “computational intelligence” and “informatics” to avoid being associated with it on grant applications. The winters were budgetary, not scientific.
What changed after roughly 2012 was not military interest in AI — that never wavered — but the emergence of private capital as a competing funder. Google, Microsoft, Apple, and eventually OpenAI built commercial AI products on mathematical foundations laid by DARPA grants, trained on computational infrastructure descended from the same DOE supercomputing lineage that runs through Oak Ridge. The Internet itself — the substrate on which all of this runs — was a DARPA project. The GPU computing paradigm that makes modern deep learning possible was accelerated by government supercomputing investments at national laboratories.
When commentators say “the military wants to hijack AI from corporations,” they are describing a negotiation between two parties over territory that was always, at the level of foundational research, federally owned and federally built. The corporations arrived later, built faster, and became more publicly visible. That is not the same as having built the foundation.
The politicians understood this dynamic, even when the public did not. In 2015, the Department of Defense unveiled its “Third Offset Strategy,” which formally declared that rapid advances in AI would define the next generation of warfare. The word “next” was diplomatic. AI had been defining military planning since 1963. The strategy was codification, not revelation.
In 2019, an executive order directed the Department of Energy to coordinate AI development across all 17 national laboratories. In 2026, the Genesis Mission made that coordination explicit and public. Neither of these was a new policy direction. Both were the latest chapter in a story that began when ARPA handed MIT a check in the summer of 1963 and told the mathematicians to figure out how to make machines think.
VI. What Those Numbers Meant
I think about those monitors sometimes. The numbers that scrolled across them — in a language I didn’t speak, generated by a machine I couldn’t fully comprehend, funded by agencies whose names I probably wouldn’t have recognized — were among the early written sentences of a new kind of mind. Not conscious. Not feeling. But capable of learning, of inference, of pattern recognition at a scale no human neuron could match.
My Venezuelan friend went on to finish his doctorate. I moved on to new adventures in a different field. The computers in that lab were eventually replaced by faster ones, which were replaced by still faster ones, until the descendants of those machines now run the exascale systems at Oak Ridge — systems capable of a billion billion operations per second, housing models that generate language, analyze cancer scans, simulate nuclear reactions, and predict protein structures that took decades to unravel by hand.
I had no contribution to any of it. I was an observer, a visitor, a person with curiosity and the good fortune of a well-placed friendship. But I was there. And I can tell you that it did not look like what you see in movies. It looked like mathematics. It looked like patience. It looked like a room full of people who understood, better than anyone around them, that they were working on something that did not yet have a name the world would use.
They called it artificial intelligence. They were right.
And the military — whatever the current conversation implies — was not absent from that room. It was in the building across the ridge, and in the budget lines that paid for the electricity, and in the grant numbers stamped on the research proposals stacked on the desk beside the monitors. It had been there since the beginning. It will be there at whatever comes next.
That is not a warning. It is simply the history, told straight.
This article was written for readers who were there — scientists, mathematicians, and the merely curious — who deserve a history told without revision.