George Boole: A 200-Year View

Today is the 200th anniversary of the birth of George Boole. In our modern digital world, we’re always hearing about “Boolean variables”—1 or 0, true or false. And one might think, “What a trivial idea! Why did someone even explicitly need to invent it?” But as is so often the case, there’s a deeper story—for Boolean variables were really just a side effect of an important intellectual advance that George Boole made.

When George Boole came onto the scene, the disciplines of logic and mathematics had developed quite separately for more than 2000 years. And George Boole’s great achievement was to show how to bring them together, through the concept of what’s now called Boolean algebra. And in doing so he effectively created the field of mathematical logic, and set the stage for the long series of developments that led for example to universal computation.

When George Boole invented Boolean algebra, his basic goal was to find a set of mathematical axioms that could reproduce the classical results of logic. His starting point was ordinary algebra, with variables like x and y, and operations like addition and multiplication.

At first, ordinary algebra seems a lot like logic. After all, p and q is the same as q and p, just as p×q = q×p. But if one looks in more detail, there are differences. Like p×p = p², but p and p is just p. Somewhat confusingly, Boole used the notation of standard algebra, but added special rules to create an axiom system that he then showed could reproduce all the usual results of logic.
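To see the difference concretely, here is a minimal sketch in Python, using explicit 0 and 1 values as a modern reading (not Boole's own framing): restricted to 0 and 1, multiplication obeys the extra rule x×x = x, which ordinary numbers do not.

    # Over the values 0 and 1, multiplication obeys the extra "idempotence"
    # rule x*x == x; for ordinary numbers it generally does not.
    for x in [0, 1]:
        assert x * x == x       # the special rule x^2 = x holds
    for x in [2, 3, 0.5]:
        assert x * x != x       # ...but fails for ordinary (non-Boolean) values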

Boole was rather informal in the way he described his axiom system. But within a few decades, it had been more precisely formalized, and over the course of the century that followed, a few progressively simpler forms of it were found. And then, as it happens, 15 years ago I ended up finishing this 150-year process, by finding—largely as a side effect of other science I was doing—the provably very simplest possible axiom system for logic, that actually happens to consist of just a single axiom.

Boolean logic axiom systems, simplifying down to a single axiom

I thought this axiom was pretty neat, and looking at where it lies in the space of possible axioms has interesting implications for the foundations of mathematics and logic. But in the context of George Boole, one can say that it’s a minimal version of his big idea: that one can have a mathematical axiom system that reproduces all the results of logic just by what amount to simple algebra-like transformations.
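For the record, the axiom itself, written in terms of Nand, is ((p Nand q) Nand r) Nand (p Nand ((p Nand r) Nand p)) = r. As a minimal sanity check, here is a small sketch (in Python, purely as an illustration) that verifies by brute force that the identity holds for every assignment of truth values; showing that it is strong enough to derive all of Boolean algebra is of course the hard part:

    from itertools import product

    def nand(a, b):
        return not (a and b)

    # The single axiom, written with Nand:
    #   ((p Nand q) Nand r) Nand (p Nand ((p Nand r) Nand p)) == r
    # Check that it holds for all 8 assignments of truth values.
    for p, q, r in product([False, True], repeat=3):
        lhs = nand(nand(nand(p, q), r), nand(p, nand(nand(p, r), p)))
        assert lhs == r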

Who Was George Boole?

But let’s talk about George Boole, the person. Who was he, and how did he come to do what he did?

George Boole was born (needless to say) in 1815, in England, in the fairly small town of Lincoln, about 120 miles north of London. His father had a serious interest in science and mathematics, and had a small business as a shoemaker. George Boole was something of a self-taught prodigy, who first became locally famous at age 14 with a translation of a Greek poem that he published in the local newspaper. At age 16 he was hired as a teacher at a local school, and by that time he was reading calculus books, and apparently starting to formulate what would later be his idea about relations between mathematics and logic.

At age 19, George Boole did a startup: he started his own elementary school. It seems to have been decently successful, and in fact Boole continued making his living running (or “conducting” as it was then called) schools until he was in his thirties. He was involved with a few people educated in places like Cambridge, notably through the local Mechanics’ Institute (a little like a modern community college). But mostly he seems just to have learned by reading books on his own.

He took his profession as a schoolteacher seriously, and developed all sorts of surprisingly modern theories about the importance of understanding and discovery (as opposed to rote memorization), and the value of tangible examples in areas like mathematics (he surely would have been thrilled by what’s now possible with computers).

When he was 23, Boole started publishing papers on mathematics. His early papers were about hot topics of the time, such as calculus of variations. Perhaps it was his interest in education and exposition that led him to try creating different formalisms, but soon he became a pioneer in the “calculus of operations”: doing calculus by manipulating operators rather than explicit algebraic expressions.

It wasn’t long before he was interacting with leading British mathematicians of the day, and getting positive feedback. He considered going to Cambridge to become a “university person”, but was put off when told that he would have to start with the standard undergraduate course, and stop doing his own research.

Mathematical Analysis of Logic

Logic as a field of study had originated in antiquity, particularly with the work of Aristotle. It had been a staple of education throughout the Middle Ages and beyond, fitting into the practice of rote learning by identifying specific patterns of logical arguments (“syllogisms”) with mnemonics like “bArbArA” and “cElArEnt”. In many ways, logic hadn’t changed much in over a thousand years, though by the 1800s there were efforts to make it more streamlined and “formal”. But the question was how. And in particular, should this happen through the methods of philosophy, or mathematics?

In early 1847, Boole’s friend Augustus De Morgan had become embroiled in a piece of academic unpleasantness over the question. And this led Boole quickly to go off and work out his earlier ideas about how logic could be formulated using mathematics. The result was his first book, The Mathematical Analysis of Logic, published the same year:

Boole's "The Mathematical Analysis of Logic"

The book was not long—only 86 pages. But it explained Boole’s idea of representing logic using a form of algebra. The notion that one could have an algebra with variables that weren’t just ordinary numbers happened to have just arisen in Hamilton’s 1843 invention of quaternion algebra—and Boole was influenced by this. (Galois had also done something similar in 1832 working on groups and finite fields.)

150 years before Boole, Gottfried Leibniz had also thought about using algebra to represent logic. But he’d never managed to see quite how. And the idea seems to have been all but forgotten until Boole finally succeeded in doing it in 1847.

Looking at Boole’s book today, much of it is quite easy to understand. Here, for example, is him showing how his algebraic formulation reproduces a few standard results in logic:

Boole's algebraic formulations, reproducing standard results in logic

At a surface level, this all seems fairly straightforward. “And” is represented by multiplication of variables xy, “not” by 1−x, and “(exclusive) or” by x+y−2xy. There are also extra constraints like x²=x. But when one tries digging deeper, things become considerably murkier. Just what are x and y supposed to be? Today we’d call these Boolean variables, and imagine they could have discrete values 1 or 0, representing true or false. But Boole seems to have never wanted to talk about anything that explicit, or anything discrete or combinatorial. All he ever seemed to discuss was algebraic expressions and equations—even to the point of using series expansions to effectively enumerate possible combinations of values for logical variables.
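Reading x and y as the modern Boolean values 0 and 1 (again, our interpretation rather than Boole's framing), the algebraic encodings above do reproduce the familiar truth tables. Here is a minimal sketch in Python checking all four cases:

    from itertools import product

    # Boole's encodings over the modern values 0 and 1:
    #   "and" as x*y, "not" as 1 - x, exclusive "or" as x + y - 2*x*y,
    # together with the extra constraint x^2 = x.
    for x, y in product([0, 1], repeat=2):
        assert x * y == int(x == 1 and y == 1)    # and
        assert 1 - x == int(not x)                # not
        assert x + y - 2 * x * y == int(x != y)   # exclusive or
        assert x * x == x                         # x^2 = x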

The Laws of Thought

When Boole wrote his first book he was still working as a teacher and running a school. But he had also become well known as a mathematician, and in 1849, when Queen’s College, Cork (now University College Cork) opened in Ireland, Boole was hired as its first math professor. And once in Cork, Boole started to work on what would become his most famous book, An Investigation of the Laws of Thought:

Boole's "An Investigation of the Laws of Thought"

His preface began: “The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic and construct its method; …”

Boole appears to have seen himself as trying to create a calculus for the “science of intellectual powers” analogous to Newton’s calculus for physical science. But while Newton had been able to rely on concepts like space and time to inform the structure of his calculus, Boole had to build on the basis of a model of how the mind works, which for him was unquestionably logic.

The first part of Laws of Thought is basically a recapitulation of Boole’s earlier book on logic, but with additional examples—such as a chapter covering logical proofs about the existence and characteristics of God. The second part of the book is in a sense more mathematically traditional. For instead of interpreting his algebraic variables as related to logic, he interprets them as traditional numbers corresponding to probabilities—and in doing so shows that the laws for combining probabilities of events have the same structure as the laws for combining logical statements.
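To see that structural correspondence in modern terms, here is a small sketch (my own illustration, in Python, for the simple case of independent events): the rules for combining probabilities take exactly the algebraic forms used above for logic, with P(A and B) = pq, P(not A) = 1 - p, and P(A xor B) = p + q - 2pq.

    import random

    # For independent events A and B with probabilities p and q, the combination
    # rules mirror the logical encodings: P(A and B) = p*q, P(not A) = 1 - p,
    # P(A xor B) = p + q - 2*p*q. Estimate them by simulation as a rough check.
    random.seed(0)
    p, q, n = 0.3, 0.7, 200000
    trials = [(random.random() < p, random.random() < q) for _ in range(n)]
    est_and = sum(a and b for a, b in trials) / n
    est_xor = sum(a != b for a, b in trials) / n
    assert abs(est_and - p * q) < 0.01
    assert abs(est_xor - (p + q - 2 * p * q)) < 0.01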

For the most part Laws of Thought reads like a mathematical work, with abstract definitions and formal conclusions. But in the final chapter Boole tries to connect what he has done to empirical questions about the operation of the mind. He discusses how free will can be compatible with definite laws of thought. He talks about how imprecise human experiences can lead to precise concepts. He discusses whether there is truth that humans can recognize that goes beyond what mathematical laws can ever explain. And he talks about how an understanding of human thinking should inform education.

The Rest of Boole’s Life

After the publication of Laws of Thought, George Boole stayed in Cork, living another decade and dying in 1864 of pneumonia at the age of 49. He continued to publish widely on mathematics, but never published on logic again, though he probably intended to do so.

In his lifetime, Boole was much more recognized for his work on traditional mathematics than on logic. He wrote two textbooks, one in 1859 on differential equations, and one in 1860 on difference equations. Both are clean and elegant expositions. And interestingly, while there are endless modern alternatives to Boole’s Differential Equations, sufficiently little has been done on difference equations that when we were implementing them in Mathematica in the late 1990s, Boole’s 1860 book was still an important reference, notable especially for its nice examples of the factorization of linear difference operators.

What Was Boole Like?

What was Boole like as a person? There’s quite a bit of information on this, not least from his wife’s writings and from correspondence and reminiscences his sister collected when he died. From what one can tell, Boole was organized and diligent, with careful attention to detail. He worked hard, often late into the night, and could be so engrossed in his work that he became quite absent minded. Despite how he looks in pictures, he appears to have been rather genial in person. He was well liked as a teacher, and was a talented lecturer, though his blackboard writing was often illegible. He was a gracious and extensive correspondent, and made many visits to different people and places. He spent many years managing people, first at schools, and then at the university in Cork. He had a strong sense of justice, and while he did not like controversy, he was occasionally involved in it, and was not shy to maintain his position.

Despite his successes, Boole seems to have always thought of himself as a self-taught schoolteacher, rather than a member of the academic elite. And perhaps this helped in his ability to take intellectual risks. Whether it was playing fast and loose with differential operators in calculus, or finding ways to bend the laws of algebra so they could apply to logic, Boole seems to have always taken the attitude of just moving forward and seeing where he could go, trusting his own sense of what was correct and true.

Boole was single most of his life, though finally married at the age of 40. His wife, Mary Everest Boole, was 17 years his junior, and she outlived him by 52 years, dying in 1916. She had an interesting story in her own right, later in her life writing books with titles like Philosophy and Fun of Algebra, Logic Taught by Love, The Preparation of the Child for Science and The Message of Psychic Science to the World. George and Mary Boole had five daughters—who, along with their own children, had a wide range of careers and accomplishments, some quite mathematical.

Legacy

It is something of an irony that George Boole, committed as he was to the methods of algebra, calculus and continuous mathematics, should have come to symbolize discrete variables. But to be fair, this took a while. In the decades after he died, the primary influence of Boole’s work on logic was on the wave of abstraction and formalization that swept through mathematics—involving people like Frege, Peano, Hilbert, Whitehead, Russell and eventually Gödel and Turing. And it was only in 1937, with the work of Claude Shannon on switching networks, that Boolean algebra began to be used for practical purposes.

Today there is a lot on Boolean computation in Mathematica and the Wolfram Language, and in fact George Boole is the person with the largest number (15) of distinct functions in the system named after him.

But what has made Boole’s name so widely known is not Boolean algebra, it’s the much simpler notion of Boolean variables, which appear in essentially every computer language—leading to a progressive increase in mentions of the word “boolean” in publications since the 1950s:

The word "boolean" has appeared in increasing numbers of publications since the 1950s

Was this inevitable? In some sense, I suspect it was. For when one looks at history, sufficiently simple formal ideas have a remarkable tendency to eventually be widely used, even if they emerge only slowly from quite complex origins. Most often what happens is that at some moment the ideas become relevant to technology, and quickly then go from curiosities to mainstream.

My work on A New Kind of Science has made me think about enumerations of what amount to all possible “simple formal ideas”. Some have already become incorporated in technology, but many have not yet. But the story of George Boole and Boolean variables provides an interesting example of what can happen over the course of centuries—and how what at first seems obscure and abstruse can eventually become ubiquitous.

6 comments

  1. booleDay = WolframAlpha["200th birthday of George Boole", {{"Result", 1}, "Plaintext"}];
    mondaysDate = "Monday, November 2, 2015";
    booleDay == mondaysDate

  2. An Investigation of the Laws of Thought was described by the philosopher and mathematician Bertrand Russell as ‘the work in which pure mathematics was discovered’.

  3. // boolean spectrum

    function logicn_arr(op, arr) {
      // Evaluate the n-ary Boolean operator with truth-table number `op` on the
      // inputs in `arr`, where each input is either `max` (true) or 0 (false).
      var i = 0;
      var max = ( ( 1 << ( 1 << arr.length ) ) >>> 0 ) - 1;
      var c = false;
      for ( var x = arr.length - 1; x >= 0; x-- ) {
        if ( arr[x] == max ) {
          i |= ( 1 << ( ( arr.length - 1 ) - x ) );
        } else if ( arr[x] == 0 ) {
          // do nothing
        } else {
          // input is neither true nor false
          c = true;
        }
      }
      if ( !c ) return ( op % ( ( 1 << ( i + 1 ) ) >>> 0 ) ) > ( ( ( 1 << i ) >>> 0 ) - 1 ) ? max : 0;
      return null;
    }

  4. Mary Boole, George Boole’s wife, claimed that there was profound influence — via her uncle George Everest — of Indian thought in general and Indian logic, in particular, on George Boole, as well as on Augustus De Morgan and Charles Babbage:

    “Think what must have been the effect of the intense Hinduizing of three such men as Babbage, De Morgan, and George Boole on the mathematical atmosphere of 1830–65. What share had it in generating the Vector Analysis and the mathematics by which investigations in physical science are now conducted?” (See Mary Everest Boole, “Indian Thought and Western Science in the Nineteenth Century”, in Collected Works, eds. E. M. Cobham and E. S. Dummer, London: Daniel, 1931, pp. 947–967.)

    Navya Nyaya, the Indian school of logic (13th century), had an algebraic formulation for logic centuries prior.

  5. Interesting that this formula is from 1999:

    ((p Nand q) Nand r) Nand (p Nand ((p Nand r) Nand p)) = r.

    I wonder why this formula was not known before 1999.