Dropping In on Gottfried Leibniz

I’ve been curious about Gottfried Leibniz for years, not least because he seems to have wanted to build something like Mathematica and Wolfram|Alpha, and perhaps A New Kind of Science as well—though three centuries too early. So when I took a trip recently to Germany, I was excited to be able to visit his archive in Hanover.

Leafing through his yellowed (but still robust enough for me to touch) pages of notes, I felt a certain connection—as I tried to imagine what he was thinking when he wrote them, and tried to relate what I saw in them to what we now know after three more centuries:

Page of Gottfried Leibniz's notes

Some things, especially in mathematics, are quite timeless. Like here’s Leibniz writing down an infinite series for √2 (the text is in Latin):

Example of Leibniz writing down an infinite series for Sqrt[2]

Or here’s Leibniz trying to calculate a continued fraction—though he got the arithmetic wrong, even though he wrote it all out (the Π was his earlier version of an equals sign):

Leibniz calculating a continued fraction

Or here’s a little summary of calculus, that could almost be in a modern textbook:

Summary of calculus from Leibniz

But what was everything else about? What was the larger story of his work and thinking?

I have always found Leibniz a somewhat confusing figure. He did many seemingly disparate and unrelated things—in philosophy, mathematics, theology, law, physics, history, and more. And he described what he was doing in what seem to us now as strange 17th century terms.

But as I’ve learned more, and gotten a better feeling for Leibniz as a person, I’ve realized that underneath much of what he did was a core intellectual direction that is curiously close to the modern computational one that I, for example, have followed.

Gottfried Leibniz was born in Leipzig in what’s now Germany in 1646 (four years after Galileo died, and four years after Newton was born). His father was a professor of philosophy; his mother’s family was in the book trade. Leibniz’s father died when Leibniz was 6—and after a 2-year deliberation on its suitability for one so young, Leibniz was allowed into his father’s library, and began to read his way through its diverse collection of books. He went to the local university at age 15, studying philosophy and law—and graduated in both of them at age 20.

Even as a teenager, Leibniz seems to have been interested in systematization and formalization of knowledge. There had been vague ideas for a long time—for example in the semi-mystical Ars Magna of Ramon Llull from the 1300s—that one might be able to set up some kind of universal system in which all knowledge could be derived from combinations of signs drawn from a suitable (as Descartes called it) “alphabet of human thought”. And for his philosophy graduation thesis, Leibniz tried to pursue this idea. He used some basic combinatorial mathematics to count possibilities. He talked about decomposing ideas into simple components on which a “logic of invention” could operate. And, for good measure, he put in an argument that purported to prove the existence of God.

As Leibniz himself said in later years, this thesis—written at age 20—was in many ways naive. But I think it began to define Leibniz’s lifelong way of thinking about all sorts of things. And so, for example, Leibniz’s law graduation thesis about “perplexing legal cases” was all about how such cases could potentially be resolved by reducing them to logic and combinatorics.

Leibniz was on a track to become a professor, but instead he decided to embark on a life working as an advisor for various courts and political rulers. Some of what he did for them was scholarship, tracking down abstruse—but politically important—genealogy and history. Some of it was organization and systematization—of legal codes, libraries and so on. Some of it was practical engineering—like trying to work out better ways to keep water out of silver mines. And some of it—particularly in earlier years—was “on the ground” intellectual support for political maneuvering.

One such activity in 1672 took Leibniz to Paris for four years—during which time he interacted with many leading intellectual lights. Before then, Leibniz’s knowledge of mathematics had been fairly basic. But in Paris he had the opportunity to learn all the latest ideas and methods. And for example he sought out Christiaan Huygens, who agreed to teach Leibniz mathematics—after he succeeded in passing the test of finding the sum of the reciprocals of the triangular numbers.
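
Huygens’ test is easy to check: since the nth triangular number is n(n+1)/2, each reciprocal is 2/(n(n+1)) = 2/n − 2/(n+1), so the sum telescopes to 2. Here’s a quick numerical sketch of that (a modern illustration, of course—not anything from Leibniz’s papers):

```python
from fractions import Fraction

# The nth triangular number is n(n+1)/2, so its reciprocal is
# 2/(n(n+1)) = 2/n - 2/(n+1), and the partial sums telescope
# to 2 - 2/(N+1), which tends to 2.
def partial_sum(N):
    return sum(Fraction(2, n * (n + 1)) for n in range(1, N + 1))

assert partial_sum(100) == 2 - Fraction(2, 101)
print(float(partial_sum(1000)))  # already very close to 2
```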

Over the years, Leibniz refined his ideas about the systematization and formalization of knowledge, imagining a whole architecture for how knowledge would—in modern terms—be made computational. He saw the first step as being the development of an ars characteristica—a methodology for assigning signs or symbolic representations to things, and in effect creating a uniform “alphabet of thought”. And he then imagined—in remarkable resonance with what we now know about computation—that from this uniform representation it would be possible to find “truths of reason in any field… through a calculus, as in arithmetic or algebra”.

He talked about his ideas under a variety of rather ambitious names like scientia generalis (“general method of knowledge”), lingua philosophica (“philosophical language”), mathématique universelle (“universal mathematics”), characteristica universalis (“universal system”) and calculus ratiocinator (“calculus of thought”). He imagined applications ultimately in all areas—science, law, medicine, engineering, theology and more. But the one area in which he had clear success quite quickly was mathematics.

To me it’s remarkable how rarely in the history of mathematics notation has been viewed as a central issue. It happened at the beginning of modern mathematical logic in the late 1800s with the work of people like Gottlob Frege and Giuseppe Peano. And in recent times it’s happened with me in my efforts to create Mathematica and the Wolfram Language. But it also happened three centuries ago with Leibniz. And I suspect that Leibniz’s successes in mathematics were in no small part due to the effort he put into notation, and the clarity of reasoning about mathematical structures and processes that it brought.

When one looks at Leibniz’s papers, it’s interesting to see his notation and its development. Many things look quite modern. Though there are charming dashes of the 17th century, like the occasional use of alchemical or planetary symbols for algebraic variables:

Example of Leibniz's use of alchemical or planetary symbols for algebraic variables

There’s Π as an equals sign instead of =, with the slightly hacky idea of having it be like a balance, with a longer leg on one side or the other indicating less than (“<”) or greater than (“>”):

Example of Leibniz using Pi as an equal sign instead of =

There are overbars to indicate grouping of terms—arguably a better idea than parentheses, though harder to type, and typeset:

Leibniz used overbars to indicate grouping of terms

We do use overbars for roots today. But Leibniz wanted to use them in integrals too. Along with the rather nice “tailed d”, which reminds me of the double-struck “differential d” that we invented for representing integrals in Mathematica.

Showing Leibniz's use of overbars in integrals

Particularly in solving equations, it’s quite common to want to use ±, and it’s always confusing how the grouping is supposed to work, say in a±b±c. Well, Leibniz seems to have found it confusing too, but he invented a notation to handle it—which we actually should consider using today too:

Leibniz example of a +- notation
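
The ambiguity Leibniz was wrestling with is real: a±b±c can mean different things depending on whether the signs are chosen independently or in lockstep. A quick enumeration makes the point (the particular numbers are mine, purely for illustration):

```python
from itertools import product

a, b, c = 10, 3, 1

# Independent signs: each +/- is chosen separately -> up to 4 values.
independent = {a + s1 * b + s2 * c for s1, s2 in product((1, -1), repeat=2)}

# Linked signs (the grouping Leibniz's notation made explicit):
# both +/- vary together -> only 2 values.
linked = {a + s * b + s * c for s in (1, -1)}

print(sorted(independent))  # [6, 8, 12, 14]
print(sorted(linked))       # [6, 14]
```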

I’m not sure what some of Leibniz’s notation means. Though those overtildes are rather nice-looking:

Example of Leibniz's notation with overtildes

As are these things with dots:

One example of Leibniz's notation using dots

Or this interesting-looking diagrammatic form:

Diagrammatic form made by Leibniz

Of course, Leibniz’s most famous notations are his integral sign (long “s” for “summa”) and d, here summarized in the margin for the first time, on November 11th, 1675 (the “5” in “1675” was changed to a “3” after the fact, perhaps by Leibniz):

Leibniz’s most famous notations summarized in the margin for the first time

I find it interesting that despite all his notation for “calculational” operations, Leibniz apparently did not invent similar notation for logical operations. “Or” was just the Latin word vel, “and” was et, and so on. And when he came up with the idea of quantifiers (modern ∀ and ∃), he just represented them by the Latin abbreviations U.A. and P.A.:

Leibniz's notation for logical operations

It’s always struck me as a remarkable anomaly in the history of thought that it took until the 1930s for the idea of universal computation to emerge. And I’ve often wondered if lurking in the writings of Leibniz there might be an early version of universal computation—maybe even a diagram that we could now interpret as a system like a Turing machine. But with more exposure to Leibniz, it’s become clearer to me why that’s probably not the case.

One big piece, I suspect, is that he didn’t take discrete systems quite seriously enough. He referred to results in combinatorics as “self-evident”, presumably because he considered them directly verifiable by methods like arithmetic. And it was only “geometrical”, or continuous, mathematics that he felt needed to have a calculus developed for it. In describing things like properties of curves, Leibniz came up with something like continuous functions. But he never seems to have applied the idea of functions to discrete mathematics—which might for example have led him to think about universal elements for building up functions.

Leibniz recognized the success of his infinitesimal calculus, and was keen to come up with similar “calculi” for other things. And in another “near miss” with universal computation, Leibniz had the idea of encoding logical properties using numbers. He thought about associating every possible attribute of a thing with a different prime number, then characterizing the thing by the product of the primes for its attributes—and then representing logical inference by arithmetic operations. But he only considered static attributes—and never got to an idea like Gödel numbering where operations are also encoded in numbers.
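
Leibniz’s prime-number scheme can be sketched concretely: assign each primitive attribute a distinct prime, make a concept’s “characteristic number” the product of its attributes’ primes, and then “A entails B” becomes “B’s number divides A’s number”. Here’s a modern reconstruction of the idea (the attribute names and prime assignments are mine, not Leibniz’s):

```python
# Assign each primitive attribute a distinct prime (the assignment is arbitrary).
attribute_prime = {"animal": 2, "rational": 3, "mortal": 5}

def characteristic_number(attributes):
    # A concept's number is the product of the primes of its attributes.
    n = 1
    for a in attributes:
        n *= attribute_prime[a]
    return n

man = characteristic_number(["animal", "rational", "mortal"])  # 2*3*5 = 30
beast = characteristic_number(["animal", "mortal"])            # 2*5 = 10

# "Every man is an animal": the predicate's number divides the subject's.
assert man % attribute_prime["animal"] == 0
# And since 10 divides 30, "man" has all the attributes of "beast":
assert man % beast == 0
```

The limitation Leibniz ran into is visible even here: the numbers encode only static attributes, not operations on them—the step Gödel later took.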

But even though Leibniz did not get to the idea of universal computation, he did understand the notion that computation is in a sense mechanical. And indeed quite early in life he seems to have resolved to build an actual mechanical calculator for doing arithmetic. Perhaps in part it was because he wanted to use it himself (always a good reason to build a piece of technology!). For despite his prowess at algebra and the like, his papers are charmingly full of basic (and sometimes incorrect) school-level arithmetic calculations written out in the margin—and now preserved for posterity:

Example of basic school-level arithmetic calculations written out in the margin by Leibniz

There were scattered examples of mechanical calculators being built in Leibniz’s time, and when he was in Paris, Leibniz no doubt saw the addition calculator that had been built by Blaise Pascal in 1642. But Leibniz resolved to make a “universal” calculator, that could for the first time do all four basic functions of arithmetic with a single machine. And he wanted to give it a simple “user interface”, where one would for example turn a handle one way for multiplication, and the opposite way for division.

In Leibniz’s papers there are all sorts of diagrams about how the machine should work:

Leibniz's diagrams about how an arithmetic machine should work

Leibniz imagined that his calculator would be of great practical utility—and indeed he seems to have hoped that he would be able to turn it into a successful business. But in practice, Leibniz struggled to get the calculator to work at all reliably. For like other mechanical calculators of its time, it was basically a glorified odometer. And just like in Charles Babbage’s machines nearly 200 years later, it was mechanically difficult to make many wheels move at once when a cascade of carries occurred.
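
The carry problem is easy to see in simulation: in an odometer-style machine, adding 1 to 9999 forces every wheel to move at once. A minimal sketch of such a cascade (this illustrates the general difficulty, not Leibniz’s actual mechanism):

```python
def add_with_carries(wheels, amount):
    """wheels: list of digits, least significant first.
    Returns the new wheel positions and how many carries had to propagate."""
    wheels = wheels[:]
    carries = 0
    wheels[0] += amount
    for i in range(len(wheels)):
        while wheels[i] > 9:
            wheels[i] -= 10
            carries += 1
            if i + 1 < len(wheels):
                wheels[i + 1] += 1
            # (a real machine would simply overflow past the last wheel)
    return wheels, carries

# Adding 1 to 9999 ripples a carry through every single wheel:
print(add_with_carries([9, 9, 9, 9], 1))  # ([0, 0, 0, 0], 4)
```

It’s exactly this worst case—many wheels needing to turn simultaneously from one input motion—that plagued both Leibniz’s machine and Babbage’s.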

Leibniz at first had a wooden prototype of his machine built, intended to handle just 3 or 4 digits. But when he demoed this to people like Robert Hooke during a visit to London in 1673, it didn’t go very well. Still, he kept on thinking he’d figured everything out—for example in 1679 writing (in French) of the “last correction to the arithmetic machine”:

1679 writing (in French) of the last correction to the arithmetic machine

Notes from 1682 suggest that there were more problems, however:

Notes from 1682 suggesting that there were more problems with the arithmetic machine

But Leibniz had plans drafted up from his notes—and contracted an engineer to build a brass version with more digits:

Plans drafted up from Leibniz's notes

It’s fun to see Leibniz’s “marketing material” for the machine:

Leibniz's "marketing material" for the machine

As well as parts of the “manual” (with 365×24 as a “worked example”):

Usage diagrams of the machine

Complete with detailed usage diagrams:

Detailed usage diagram of the machine

But despite all this effort, problems with the calculator continued. And in fact, for more than 40 years, Leibniz kept on tweaking his calculator—probably altogether spending (in today’s currency) more than a million dollars on it.

So what actually happened to the physical calculator? When I visited Leibniz’s archive, I had to ask. “Well”, my hosts said, “we can show you”. And there in a vault, along with shelves of boxes, was Leibniz’s calculator, looking as good as new in a glass case—here captured by me in a strange juxtaposition of ancient and modern:

Leibniz’s calculator

All the pieces are there. Including a convenient wooden carrying box. Complete with a cranking handle. And, if it worked right, the ability to do any basic arithmetic operation with a few minutes of cranking:

Leibniz’s calculator with the cranking handle

Leibniz clearly viewed his calculator as a practical project. But he still wanted to generalize from it, for example trying to make a general “logic” to describe geometries of mechanical linkages. And he also thought about the nature of numbers and arithmetic. And was particularly struck by binary numbers.

Bases other than 10 had been used in recreational mathematics for several centuries. But Leibniz latched on to base 2 as having particular significance—and perhaps being a key bridge between philosophy, theology and mathematics. And he was encouraged in this by his realization that binary numbers were at the core of the I Ching, which he’d heard about from missionaries to China, and viewed as related in spirit to his characteristica universalis.

Leibniz worked out that it would be possible to build a calculator based on binary. But he appears to have thought that only base 10 could actually be useful.

It’s strange to read what Leibniz wrote about binary numbers. Some of it is clear and practical—and still seems perfectly modern. But some of it is very 17th century—talking for example about how binary proves that everything can be made from nothing, with 1 being identified with God, and 0 with nothing.

Almost nothing was done with binary for a couple of centuries after Leibniz: in fact, until the rise of digital computing in the last few decades. So when one looks at Leibniz’s papers, his calculations in binary are probably what seem most “out of his time”:

Leibniz's calculations in binary
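
It’s easy to replicate the kind of base-2 arithmetic Leibniz was doing by hand. Conversion to binary, for instance, is just repeated division by 2—an algorithm well within reach of his margin calculations (this is a modern sketch, not a transcription of his pages):

```python
def to_binary(n):
    # Repeated division by 2, keeping the remainders: the same
    # procedure one would carry out by hand.
    digits = []
    while n > 0:
        digits.append(n % 2)
        n //= 2
    return "".join(str(d) for d in reversed(digits)) or "0"

print(to_binary(11))    # '1011'
print(to_binary(1675))  # '11010001011'
```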

With binary, Leibniz was in a sense seeking the simplest possible underlying structure. And no doubt he was doing something similar when he talked about what he called “monads”. I have to say that I’ve never really understood monads. And usually when I think I almost have, there’s some mention of souls that just throws me completely off.

Still, I’ve always found it tantalizing that Leibniz seemed to conclude that the “best of all possible worlds” is the one “having the greatest variety of phenomena from the smallest number of principles”. And indeed, in the prehistory of my work on A New Kind of Science, when I first started formulating and studying one-dimensional cellular automata in 1981, I considered naming them “polymones”—but at the last minute got cold feet when I got confused again about monads.

There’s always been a certain mystique around Leibniz and his papers. Kurt Gödel—perhaps displaying his paranoia—seemed convinced that Leibniz had discovered great truths that had been suppressed for centuries. But while it is true that Leibniz’s papers were sealed when he died, it was his work on topics like history and genealogy—and the state secrets they might entail—that was the concern.

Leibniz’s papers were unsealed long ago, and after three centuries one might assume that every aspect of them would have been well studied. But the fact is that even after all this time, nobody has actually gone through all of the papers in full detail. It’s not that there are so many of them. Altogether there are only about 200,000 pages—filling perhaps a dozen shelving units (and only a little larger than my own personal archive from just the 1980s). But the problem is the diversity of material. Not only lots of subjects. But also lots of overlapping drafts, notes and letters, with unclear relationships between them.

Leibniz’s archive contains a bewildering array of documents. From the very large:

Very large document from Leibniz's archive

To the very small (Leibniz’s writing got smaller as he got older and more near-sighted):

Very small document from Leibniz's archive

Most of the documents in the archive seem very serious and studious. But despite the high cost of paper in Leibniz’s time, one still finds preserved for posterity the occasional doodle (is that Spinoza, by any chance?):

Documents from the archive with a doodle by Leibniz

Leibniz exchanged mail with hundreds of people—famous and not-so-famous—all over Europe. So now, 300 years later, one can find in his archive “random letters” from the likes of Jacob Bernoulli:

Letter to Leibniz from Jacob Bernoulli

What did Leibniz look like? Here he is, both in an official portrait, and without his rather oversized wig (that was mocked even in his time), that he presumably wore to cover up a large cyst on his head:

Official portrait and statue of Leibniz

As a person, Leibniz seems to have been polite, courtly and even-tempered. In some ways, he may have come across as something of a nerd, expounding at great depth on all manner of topics. He seems to have taken great pains—as he did in his letters—to adapt to whoever he was talking to, emphasizing theology when he was talking to a theologian, and so on. Like quite a few intellectuals of his time, Leibniz never married, though he seems to have been something of a favorite with women at court.

In his career as a courtier, Leibniz was keen to climb the ladder. But not being into hunting or drinking, he never quite fit in with the inner circles of the rulers he worked for. Late in his life, when George I of Hanover became king of England, it would have been natural for Leibniz to join his court. But Leibniz was told that before he could go, he had to start writing up a history project he’d supposedly been working on for 30 years. Had he done so before he died, he might well have gone to England and had a very different kind of interaction with Newton.

At Leibniz’s archive, there are lots of papers, his mechanical calculator, and one more thing: a folding chair that he took with him when he traveled, and that he had suspended in carriages so he could continue to write as the carriage moved:

Folding chair that Leibniz took with him when he traveled

Leibniz was quite concerned about status (he often styled himself “Gottfried von Leibniz”, though nobody quite knew where the “von” came from). And as a form of recognition for his discoveries, he wanted to have a medal created to commemorate binary numbers. He came up with a detailed design, complete with the tag line omnibus ex nihilo ducendis; sufficit unum (“everything can be derived from nothing; all that is needed is 1”). But nobody ever made the medal for him.

In 2007, though, I wanted to come up with a 60th birthday gift for my friend Greg Chaitin, who has been a long-time Leibniz enthusiast. And so I thought: why not actually make Leibniz’s medal? So we did. Though on the back, instead of the picture of a duke that Leibniz proposed, we put a Latin inscription about Greg’s work.

And when I visited the Leibniz archive, I made sure to bring a copy of the medal, so I could finally put a real medal next to Leibniz’s design:

Leibniz’s medal with the original design

It would have been interesting to know what pithy statement Leibniz might have had on his grave. But as it was, when Leibniz died at the age of 70, his political fates were at a low ebb, and no elaborate memorial was constructed. Still, when I was in Hanover, I was keen to see his grave—which turns out to carry just the simple Latin inscription “bones of Leibniz”:

Leibniz's grave

Across town, however, there’s another commemoration of a sort—an outlet store for cookies that carry the name “Leibniz” in his honor:

Outlet store for cookies that carry the name "Leibniz" in his honor

So what should we make of Leibniz in the end? Had history developed differently, there would probably be a direct line from Leibniz to modern computation. But as it is, much of what Leibniz tried to do stands isolated—to be understood mostly by projecting backward from modern computational thinking to the 17th century.

And with what we know now, it is fairly clear what Leibniz understood, and what he did not. He grasped the concept of having formal, symbolic, representations for a wide range of different kinds of things. And he suspected that there might be universal elements (maybe even just 0 and 1) from which these representations could be built. And he understood that from a formal symbolic representation of knowledge, it should be possible to compute its consequences in mechanical ways—and perhaps create new knowledge by an enumeration of possibilities.

Some of what Leibniz wrote was abstract and philosophical—sometimes maddeningly so. But at some level Leibniz was also quite practical. And he had sufficient technical prowess to often be able to make real progress. His typical approach seems to have been to start by trying to create a formal structure to clarify things—with formal notation if possible. And after that his goal was to create some kind of “calculus” from which conclusions could systematically be drawn.

Realistically he only had true success with this in one specific area: continuous “geometrical” mathematics. It’s a pity he never tried more seriously in discrete mathematics, because I think he might have been able to make progress, and might conceivably even have reached the idea of universal computation. He might well also have ended up starting to enumerate possible systems in the kind of way I have done in the computational universe.

One area where he did try his approach was with law. But in this he was surely far too early, and it is only now—300 years later—that computational law is beginning to seem realistic.

Leibniz also tried thinking about physics. But while he made progress with some specific concepts (like kinetic energy), he never managed to come up with any sort of large-scale “system of the world”, of the kind that Newton in effect did in his Principia.

In some ways, I think Leibniz failed to make more progress because he was trying too hard to be practical, and—like Newton—to decode the operation of actual physics, rather than just looking at related formal structures. For had Leibniz tried to do at least the basic kinds of explorations that I did in A New Kind of Science, I don’t think he would have had any technical difficulty—but I think the history of science could have been very different.

And I have come to realize that when Newton won the PR war against Leibniz over the invention of calculus, it was not just credit that was at stake; it was a way of thinking about science. Newton was in a sense quintessentially practical: he invented tools then showed how these could be used to compute practical results about the physical world. But Leibniz had a broader and more philosophical view, and saw calculus not just as a specific tool in itself, but as an example that should inspire efforts at other kinds of formalization and other kinds of universal tools.

I have often thought that the modern computational way of thinking that I follow is somehow obvious—and somehow an inevitable feature of thinking about things in formal, structured, ways. But it has never been very clear to me whether this apparent obviousness is just the result of modern times, and of our experience with modern practical computer technology. But looking at Leibniz, we get some perspective. And indeed what we see is that some core of modern computational thinking was possible even long before modern times. But the ambient technology and understanding of past centuries put definite limits on how far the thinking could go.

And of course this leads to a sobering question for us today: how much are we failing to realize from the core computational way of thinking because we do not have the ambient technology of the distant future? For me, looking at Leibniz has put this question in sharper focus. And at least one thing seems fairly clear.

In Leibniz’s whole life, he basically saw less than a handful of computers, and all they did was basic arithmetic. Today there are billions of computers in the world, and they do all sorts of things. But in the future there will surely be far, far more computers (made easier to create by the Principle of Computational Equivalence). And no doubt we’ll get to the point where basically everything we make will explicitly be made of computers at every level. And the result is that absolutely everything will be programmable, down to atoms. Of course, biology has in a sense already achieved a restricted version of this. But we will be able to do it completely and everywhere.

At some level we can already see that this implies some merger of computational and physical processes. But just how may be as difficult for us to imagine as things like Mathematica and Wolfram|Alpha would have been for Leibniz.

Leibniz died on November 14, 1716. In 2016 that’ll be 300 years ago.  And it’ll be a good opportunity to make sure everything we have from Leibniz has finally been gone through—and to celebrate after three centuries how many aspects of Leibniz’s core vision are finally coming to fruition, albeit in ways he could never have imagined.

19 comments. Show all »

  1.  

    Hi Stephen Wolfram,

    This was a very interesting read. As sort of a design/notation perfectionist myself, I agree with the importance of notation as well as the importance of computation in today’s technological world. It is really exciting that you are coming to Caltech for your talk. I will surely be attending. As a CS major with an interest in physics and mathematics, your interests coincide with mine. It’s no secret that you’ve been my hero for the longest time.

    Joseph Choi

  2.  

    Fantastic post! This blog needs more traffic!

  3.  

    Awesome, it’s very interesting how he tried to use binary coding to create his calculator. I had no idea that calculators this advanced existed (or were even in prototype form) back during this era. I thought the calculating technologies would have been an abacus or a rotating disc type/table type of deal. I also wonder how the cookies taste, I would like to try them.

    Christopher
  4.  

    Neal Stephenson’s Baroque Cycle is a great fictionalised account of Leibniz’s interest in computation.

  5.  

    Thank you for sharing this ! Awesome to see Leibniz and those thinkers of the 17th Century who could see so much more with so much less … but then only because during his time he read forward with much of the moderns or leading edge theories of the time in equal measure as he read backwards to the classics of a thousand or more years before him …. may we see so much more forward today by also looking so much more backwards …!!

    sharing the leibniz summary of the stanford site … http://plato.stanford.edu/entries/leibniz/

    A.J.
  6.  

    Thiis was an excellent piece. I’m a Leibniz fan myself and think he is often far underappreciated. I’m curious about one thing personally of yours though, are the cellular automata that you said you worked with the same as those discussed by Hans Moravec in Mind Children? I’ve just finished that this week and found the possibilities of the subject quite intriguing. If so, what might you recommend as good starting material? Again, thanks for the fantastic work on Leibniz history, hopefully I’m going to Berlin for my PhD work (applied math), if so I am definitely making a trip to Hanover.

    Tristen Wentling
  7.  

    Nice article. But the Wolfram Alpha link didn’t quite explain how to evaluate the sum of reciprocals of triangular numbers. In fact, this is easy to evaluate for a very specific reason.

    A triangular number is n(n+1)/2 and you notice that

    1/(n(n+1)) = 1/n – 1/(n+1)

    Therefore, the infinite sum telescopes

    2/(1 (1+1) + 2/(2 (2+1)) + 2/(3 (3 + 1)) + 2/(4 (4 + 1)) + ….

    =

    2/1 – 2/2 + 2/2 + 2/3 – 2/3

    = 2

    Since only the first term survives.

    This result is just a special case that successive difference of a sequence

    a(n) = A(n+1) – A(n)

    is the inverse of the summation operator, so that

    a(1) + …. + a(n) = A(n+1) – A(1).

    Sort of the discrete version of the fundamental theorem of calculus.

    -ilan

  8.  

    Leibniz cookies: the best of all possible biscuits.

    Taylor
  9.  

    THANKS! Great material !

    marcela
  10.  

    Leibniz and Newton are the two greatest pioneer scientists/philosophers to think up the world as a machine. Of course they were both Christian and thought of God as the operating subject of that machine. But if God doesn´t exist as such, who operates the machine?
    Well, the machine, or the systemic logic behind it, is not the utmost mode of thinking — there is a higher logic that can think this all up without an operating subject because it is an operating subject in itself.
    The Brazilian logician Luiz Sergio Sampaio created a system of logic, called Hyperdialectical Logical System (HLS) that accounts for all modes of thinking and being, and particularly for that highest logic, the hyperdialectical logic. The HLS is capable of accounting for from the structure of atomic particles to human thinking and development, including his social being. His works are as yet all in Portuguese, except for a few articles in English, such as the one on the Higgs boson and the structure of physical forces and atomic particles.
    If anyone is interested in getting to know about HLS, just send me a message: merciogomes@gmail.com. Though I am rather an anthropologist and don´t understand much of physics or mathematics (except for being a friend of Greg Chaitin´s), I work, following Parmenides´ “Thinking and Being are the same”, on the premise that man is capable of figuring out the world because they both somehow share the same logical structure.
    Wolfram´s sensitive view of Leibniz is an example of the hyperdialectical logic at work.

  11.  

    Very nice article. It always amazes me how much we owe Leibniz for some of the notation we take for granted. The symbol for integration demonstrates how simple universal formalism in math can cross borders and cultures. Good ideas are always universally recognized it seems, just as success has many fathers. It is heartening to see that an actual version of his medal having been made, an appropriate tribute to a great man.

    As for our computational prowess as a species, it is exciting to think that we do have the ability to program at the atomic level. IBM’s recent release of “A Boy and His Atom” has certainly pointed a straight arrow to the future and its potential. A friend recently posted a cartoon with a caption that said something to the effect of, “Those who fail to study history are doomed to repeat it, and those who study history are doomed to watch others repeat it.” Which is potentially apropos when it comes to our new powers and what they mean for future generations.

    I see more benefit from future technology than danger. There is a dream that for once our ideals of social equality for all may finally be realizable. Technology as it is developing has the potential to allow for the kind of positive escapism and individual accomplishment that many crave. That all may live the dream is a worthy goal for society, and perhaps one that can now finally be contemplated practically. But as with all things, the loftier the goal, the greater the risk, and we see this fear in our popular culture in the popularity of zombies, aliens and other forms of apocalyptic media.

    Is it too much, or too soon, to have an open societal dialogue on these issues? I don’t know. Much is still viewed as fantasy by many, and the practical reality is that our technology remains much more of a convenience than a burden in most of the world, although current events suggest the tide is beginning to shift. Certainly in competitive environments, a lack of technological savvy is almost intolerable.

    In any case, a very good article about a very great man. A pleasant read for a weekend in May 2013!

  12.  

    Great article. Leibniz, Newton, Galileo: brilliant natural philosophers of a bygone age, and still we owe these scientific giants so much respect.
    Thank you for writing.

    Maurice Butler
  13.  

    Stephen,

    Fascinating post. I remember visiting another mathematics site in Germany, in this case

    http://www.math.uni-goettingen.de/historisches/index_en.html

    many years ago (with Bryce DeWitt and Larry Smarr) on a drive from one conference to another one summer. There appear to be a number of well-kept museums that are really worth a visit for those of us interested in science and math history.

    Steve Christensen
  14.  

    You are lucky to have read Leibniz’s original writings. I am also thankful to those who preserved his writings on paper over three and a half centuries. I bow to a great personality like Leibniz; he is truly a giant of mathematics.

  15.  

    Very nice article, thanks for sharing it.

    I can’t read the correspondence with Jacob Bernoulli in your blog, but there is a sad/funny episode with Johann Bernoulli recounted in a paper by H. J. M. Bos in the Archive for History of Exact Sciences:

    LEIBNIZ, who saw use of the term “integral” for the first time in JAKOB BERNOULLI 1690, tried later to persuade JOHANN BERNOULLI to adopt the terminology of “sums”:
    I leave it to your deliberation if it would not be better in the future, for the sake of uniformity and harmony, not only between ourselves but in the whole field of study, to adopt the terminology of summation instead of your integrals. Then for instance ∫y dx would signify the sum of all y multiplied by the corresponding dx, or the sum of all such rectangles. I ask this primarily because in that way the geometrical summations, or quadratures, correspond best with the arithmetical sums or sums of sequences. (…) I do confess that I found this whole method by considering the reciprocity of sums and differences, and that my considerations proceeded from sequences of numbers to sequences of lines or ordinates. (44 – original Latin footnote)

    This request served as occasion for JOHANN BERNOULLI to explain the origin of the term integral:
    Further, as regards the terminology of the sum of differentials, I shall gladly use in the future your terminology of summations instead of our integrals; I would have done so already much earlier if the term integral were not so much appreciated by certain geometers [a reference to French mathematicians, especially l’Hôpital, who had studied BERNOULLI’s Integral Calculus] who acknowledge me as the inventor of the term. It would therefore be thought that I rather obscured matters, if I indicated the same thing now with one term and now with another. I confess that indeed the terminology does not aptly agree with the thing itself

    (the term suggested itself to me as I considered the differential as the infinitesimal part of a whole or integral; I did not think further about it). (45)
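    In modern notation, the reading Leibniz proposes in that letter amounts to the following (my gloss, not a formula from the correspondence): the integral sign is a stretched “S” for summa, and the quantity it denotes is the sum of the rectangles of height y and width dx,

    ```latex
    \int y \, \mathrm{d}x \;\approx\; \sum_{i} y_i \, \Delta x_i ,
    ```

    which is exactly the “sum of all y multiplied by the corresponding dx” that he asks Bernoulli to read into the symbol.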

    It’s a case of poor terminology that is all too common in the history of mathematics, and it is sad because clearer terminology could help students understand the “integral”…

    On my website I give a modern re-counting of Bernoulli’s proof of Leibniz’ formula for radius of curvature.

    http://homepage.math.uiowa.edu/~stroyan/InfsmlCalculus/Lecture1/HTMLLinks/Lect1_6.html

    I’m really not a historian, but I find some of these old geometrical arguments more compelling than some of the contemporary proofs in basic calculus, and I wanted to see what needed to be added to the old proof to make it “rigorous” in the sense of Robinson’s infinitesimals.

    ==

    You may know that Robinson used Leibniz’ term “monad” for the neighborhood of points infinitely close to a given point. I don’t think it sheds much light on Leibniz’ thinking about monads.

    ==

    Thanks again,

  16.  

    Dear Stephen,
    Thanks for your nice pictures and the whole text.
    Kind regards from sunny Brazil

  17.  

    What a brilliant post and summary of Leibniz’s life and works! Thank you for sharing this with us!

  18.  

    Very interesting post on Leibniz, but shouldn’t you use gloves when handling these valuable old documents?

    @pblakez
  19.  

    Whitehead in Modes of Thought says:

    “There is a book to be written, and its title should be, The Mind of Leibniz.”

    Michael Scott

© Stephen Wolfram, LLC