Mitchell Feigenbaum (1944–2019), 4.66920160910299067185320382…

Mitchell Feigenbaum
(Artwork by Gunilla Feigenbaum)

Behind the Feigenbaum Constant

It’s called the Feigenbaum constant, and it’s about 4.6692016. And it shows up, quite universally, in certain kinds of mathematical—and physical—systems that can exhibit chaotic behavior.

Mitchell Feigenbaum, who died on June 30 at the age of 74, was the person who discovered it—back in 1975, by doing experimental mathematics on a pocket calculator.

It became a defining discovery in the history of chaos theory. But when it was first discovered, it was a surprising, almost bizarre result, that didn’t really connect with anything that had been studied before. Somehow, though, it’s fitting that it should have been Mitchell Feigenbaum—who I knew for nearly 40 years—who would discover it.

Trained in theoretical physics, and a connoisseur of its mathematical traditions, Mitchell always seemed to see himself as an outsider. He looked a bit like Beethoven—and projected a certain stylish sense of intellectual mystery. He would often make strong assertions, usually with a conspiratorial air, a twinkle in his eye, and a glass of wine or a cigarette in his hand.

He would talk in long, flowing sentences which exuded a certain erudite intelligence. But ideas would jump around. Sometimes detailed and technical. Sometimes leaps of intuition that I, for one, could not follow. He was always calculating, staying up until 5 or 6 am, filling yellow pads with formulas and stressing Mathematica with elaborate algebraic computations that might run for hours.

He published very little, and what he did publish he was often disappointed wasn’t widely understood. When he died, he had been working for years on the optics of perception, and on questions like why the Moon appears larger when it’s close to the horizon. But he never got to the point of publishing anything on any of this.

For more than 30 years, Mitchell’s official position (obtained essentially on the basis of his Feigenbaum constant result) was as a professor at the Rockefeller University in New York City. (To fit with Rockefeller’s biological research mission, he was themed as the Head of the “Laboratory of Mathematical Physics”.) But he dabbled elsewhere, lending his name to a financial computation startup, and becoming deeply involved in inventing new cartographic methods for the Hammond World Atlas.

What Mitchell Discovered

The basic idea is quite simple. Take a value x between 0 and 1. Then iteratively replace x by a x (1 – x). Let’s say one starts from x = 1/3, and takes a = 3.2. Then here’s what one gets for the successive values of x:

Successive values

ListLinePlot[NestList[Compile[x, 3.2 x (1 - x)], N[1/3], 50], 
 Mesh -> All, PlotRange -> {0, 1}, Frame -> True]

After a little transient, the values of x are periodic, with period 2. But what happens with other values of a? Here are a few results for this so-called “logistic map”:

Logistic map

GraphicsGrid[
 Partition[
  Table[Labeled[
     ListLinePlot[NestList[Compile[x, a x (1 - x)], N[1/3], 50], 
      Mesh -> All, PlotRange -> {0, 1}, Frame -> True, 
      FrameTicks -> None], StringTemplate["a = ``"][a]],
    {a, 2.75, 4, .25}], 3], Spacings -> {.1, -.1}]

For small a, the values of x quickly go to a fixed point. For larger a they become periodic, first with period 2, then 4. And finally, for still larger a, the values start bouncing around seemingly randomly.
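One can see where that first period doubling comes from with a little algebra (a quick side calculation, added here): the nonzero fixed point of the map is x = 1 – 1/a, and the slope of the map there works out to be 2 – a, so the fixed point is stable only while |2 – a| < 1, which is why the first doubling happens at a = 3:

(* fixed points of the logistic map, and the slope of the map at the nonzero one *)
Solve[a x (1 - x) == x, x]
Simplify[D[a x (1 - x), x] /. x -> 1 - 1/a] (* -> 2 - a *)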

One can summarize this by plotting the values of x reached as a function of the value of a (here the values from 300 steps, after dropping the first 50 to avoid transients):

Period doublings

ListPlot[Flatten[
  Table[{a, #} & /@ 
    Drop[NestList[Compile[x, a x (1 - x)], N[1/3], 300], 50], {a, 0, 
    4, .01}], 1], Frame -> True, FrameLabel -> {"a", "x"}]

As a increases, one sees a cascade of “period doublings”. In this case, they’re at a = 3, a ≈ 3.449, a ≈ 3.544090, a ≈ 3.5644072. What Mitchell noticed is that these successive values aₙ approach a limit (here a∞ ≈ 3.569946) in a geometric sequence, with a∞ – aₙ ~ δ^-n and δ ≈ 4.669.
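One can check the geometric convergence directly (a minimal numerical sketch, using just the period-doubling values quoted above): the ratios of successive gaps between the aₙ should approach δ:

(* ratios of successive gaps between period-doubling points approach δ *)
an = {3, 3.449, 3.544090, 3.5644072};
gaps = Differences[an];
Most[gaps]/Rest[gaps]

Even from these four values the ratios come out around 4.72 and 4.68, already heading for 4.669.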

That’s a nice little result. But here’s what makes it much more significant: it isn’t just true about the specific iterated map x → a x (1 – x); it’s true about any map like that. Here, for example, is the “bifurcation diagram” for x → a sin(π √x):

Bifurcation diagram

ListPlot[Flatten[
  Table[{a, #} & /@ 
    Drop[NestList[Compile[x, a Sin[Pi Sqrt@x]], N[1/3], 300], 50], {a,
     0, 1, .002}], 1], Frame -> True, FrameLabel -> {"a", "x"}]

The details are different. But what Mitchell noticed is that the positions of the period doublings again form a geometric sequence, with the exact same base: δ ≈ 4.669.

It’s not just that different iterated maps give qualitatively similar results; when one measures the convergence rate it turns out to be exactly and quantitatively the same—always δ ≈ 4.669. And this was Mitchell’s big discovery: a quantitatively universal feature of the approach to chaos in a class of systems.

The Scientific Backstory

The basic idea behind iterated maps has a long history, stretching all the way back to antiquity. Early versions arose in connection with finding successive approximations, say to square roots. For example, using Newton’s method from the late 1600s, √2 can be obtained by iterating x → 1/x + x/2 (here starting from x = 1):

Starting from x = 1

NestList[Function[x, 1/x + x/2], N[1, 8], 6]

The notion of iterating an arbitrary function seems to have first been formalized in an 1870 paper by Ernst Schröder (who was notable for his work in formalizing things from powers to Boolean algebra), although most of the discussion that arose was around solving functional equations, not actually doing iterations. (An exception was the investigation of regions of convergence for Newton’s approximation by Arthur Cayley in 1879.) In 1918 Gaston Julia made a fairly extensive study of iterated rational functions in the complex plane—inventing, if not drawing, Julia sets. But until fractals in the late 1970s (which soon led to the Mandelbrot set), this area of mathematics basically languished.

But quite independent of any pure mathematical developments, iterated maps with forms similar to x  a x (1 – x) started appearing in the 1930s as possible practical models in fields like population biology and business cycle theory—usually arising as discrete annualized versions of continuous equations like the Verhulst logistic differential equation from the mid-1800s. Oscillatory behavior was often seen—and in 1954 William Ricker (one of the founders of fisheries science) also found more complex behavior when he iterated some empirical fish reproduction curves.

Back in pure mathematics, versions of iterated maps had also shown up from time to time in number theory. In 1799 Carl Friedrich Gauss effectively studied the map x → FractionalPart[1/x] in connection with continued fractions. And starting in the late 1800s there was interest in studying maps like x → FractionalPart[a x] and their connections to the properties of the number a.
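The continued-fraction connection is easy to see directly (a small illustration added here): each application of x → FractionalPart[1/x] exposes the next continued-fraction term, which can be read off as Floor[1/x]:

(* successive Gauss-map iterates expose the continued-fraction terms of π *)
Floor[1/#] & /@ NestList[FractionalPart[1/#] &, N[Pi - 3, 50], 5]

(* compare the built-in result (apart from the leading 3) *)
ContinuedFraction[Pi, 7]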

Particularly following Henri Poincaré’s work on celestial mechanics around 1900, the idea of sensitive dependence on initial conditions arose, and it was eventually noted that iterated maps could effectively “excavate digits” in their initial conditions. For example, iterating x → FractionalPart[10 x], starting with the digits of π, gives (effectively just shifting the sequence of digits one place to the left at each step):

Starting with the digits of pi...

N[NestList[Function[x, FractionalPart[10 x]], N[Pi, 100], 5], 10]

FractionalPart

ListLinePlot[
 Rest@N[NestList[Function[x, FractionalPart[10 x]], N[Pi, 100], 50], 
   40], Mesh -> All]

(Confusingly enough, with typical “machine precision” computer arithmetic, this doesn’t work correctly: one quickly “runs out of precision”, but the IEEE Floating Point standard says to keep on delivering digits, even though they are completely wrong. Arbitrary precision in the Wolfram Language gets it right.)
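Here’s a small demonstration of the issue (a sketch, starting from the fractional part of π so that each step excavates one decimal digit): with machine precision the digit stream goes wrong after 15 or so steps, while arbitrary precision keeps delivering correct digits:

(* digits excavated with machine-precision arithmetic... *)
Floor[10 #] & /@ NestList[FractionalPart[10 #] &, N[Pi - 3], 25]

(* ...versus with 50-digit arbitrary-precision arithmetic *)
Floor[10 #] & /@ NestList[FractionalPart[10 #] &, N[Pi - 3, 50], 25]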

Maps like x → a x(1 – x) show similar kinds of “digit excavation” behavior (for example, replacing x by sin(π u)^2, x → 4 x(1 – x) becomes exactly u → FractionalPart[2 u])—and this was already known by the 1940s, and, for example, commented on by John von Neumann in connection with his 1949 iterative “middle-square” method for generating pseudorandom numbers by computer.
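The substitution is a one-line symbolic check (the FractionalPart enters only because sin(π u)^2 has period 1 in u):

(* with x = Sin[Pi u]^2, the a = 4 logistic map acts on u as doubling mod 1 *)
FullSimplify[4 Sin[Pi u]^2 (1 - Sin[Pi u]^2) == Sin[2 Pi u]^2] (* -> True *)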

But what about doing experimental math on iterated maps? There wasn’t too much experimental math at all on early digital computers (after all, most computer time was expensive). But in the aftermath of the Manhattan Project, Los Alamos had built its own computer (named MANIAC), that ended up being used for a whole series of experimental math studies. And in 1964 Paul Stein and Stan Ulam wrote a report entitled “Non-linear Transformation Studies on Electronic Computers” that included photographs of oscilloscope-like MANIAC screens displaying output from some fairly elaborate iterated maps. In 1971, another “just out of curiosity” report from Los Alamos (this time by Nick Metropolis [leader of the MANIAC project, and developer of the Monte Carlo method], Paul Stein and his brother Myron Stein) started to give more specific computer results for the behavior of logistic maps, and noted the basic phenomenon of period doubling (which they called the “U-sequence”), as well as its qualitative robustness under changes in the underlying map.

But quite separately from all of this, there were other developments in physics and mathematics. In 1963 Ed Lorenz (a meteorologist at MIT) introduced and simulated his “naturally occurring” Lorenz differential equations, that showed sensitive dependence on initial conditions. Starting in the 1940s (but following on from Poincaré’s work around 1900) there’d been a steady stream of developments in mathematics in so-called dynamical systems theory—particularly investigating global properties of the solutions to differential equations. Usually there’d be simple fixed points observed; sometimes “limit cycles”. But by the 1970s, particularly after the arrival of early computer simulations (like Lorenz’s), it was clear that for nonlinear equations something else could happen: a so-called “strange attractor”. And in studying so-called “return maps” for strange attractors, iterated maps like the logistic map again appeared.

But it was in 1975 that various threads of development around iterated maps somehow converged. On the mathematical side, dynamical systems theorist Jim Yorke and his student Tien-Yien Li at the University of Maryland published their paper “Period Three Implies Chaos”, showing that in an iterated map with a particular parameter value, if there’s ever an initial condition that leads to a cycle of length 3, there must be other initial conditions that don’t lead to cycles at all—or, as they put it, show chaos. (As it turned out, Aleksandr Sarkovskii—who was part of a Ukrainian school of dynamical systems research—had already in 1962 proved the slightly weaker result that a cycle of period 3 implies cycles of all periods.)

But meanwhile there had also been growing interest in things like the logistic maps among mathematically oriented population biologists, leading to the rather readable review (published in mid-1976) entitled “Simple Mathematical Models with Very Complicated Dynamics” by physics-trained Australian Robert May, who was then a biology professor at Princeton (and would subsequently become science advisor to the UK government, and is now “Baron May of Oxford”).

But even though things like sketches of bifurcation diagrams existed, the discovery of their quantitatively universal properties had to await Mitchell Feigenbaum and his discovery.

Mitchell’s Journey

Mitchell Feigenbaum grew up in Brooklyn, New York. His father was an analytical chemist, and his mother was a public-school teacher. Mitchell was unenthusiastic about school, though did well on math and science tests, and managed to teach himself calculus and piano. In 1960, at age 16, as something of a prodigy, he enrolled in the City College of New York, officially studying electrical engineering, but also taking physics and math classes. After graduating in 1964, he went to MIT. Initially he was going to do a PhD in electrical engineering, but he quickly switched to physics.

But although he was enamored of classic mathematical physics (as represented, for example, in the books of Landau and Lifshitz), he ended up writing his thesis on a topic set by his advisor about particle physics, and specifically about evaluating a class of Feynman diagrams for the scattering of photons by scalar particles (with lots of integrals, if not special functions). It wasn’t a terribly exciting thesis, but in 1970 he was duly dispatched to Cornell for a postdoc position.

Mitchell struggled with motivation, preferring to hang out in coffee shops doing the New York Times crossword (at which he was apparently very fast) to doing physics. But at Cornell, Mitchell made several friends who were to be important to him. One was Predrag Cvitanović, a star graduate student from what is now Croatia, who was studying quantum electrodynamics, and with whom he shared an interest in German literature. Another was a young poet named Kathleen Doorish (later, Kathy Hammond), who was a friend of Predrag’s. And another was a rising-star physics professor named Pete Carruthers, with whom he shared an interest in classical music.

In the early 1970s quantum field theory was entering a golden age. But despite the topic of his thesis, Mitchell didn’t get involved, and in the end, during his two years at Cornell, he produced no visible output at all. Still, he had managed to impress Hans Bethe enough to be dispatched for another postdoc position, though now at a place lower in the pecking order of physics, Virginia Polytechnic Institute, in rural Virginia.

At Virginia Tech, Mitchell did even less well than at Cornell. He didn’t interact much with people, and he produced only one three-page paper: “The Relationship between the Normalization Coefficient and Dispersion Function for the Multigroup Transport Equation”. As its title might suggest, the paper was quite technical and quite unexciting.

As Mitchell’s two years at Virginia Tech drew to a close it wasn’t clear what was going to happen. But luck intervened. Mitchell’s friend from Cornell, Pete Carruthers, had just been hired to build up the theory division (“T Division”) at Los Alamos, and given carte blanche to hire several bright young physicists. Pete would later tell me with pride (as part of his advice to me about general scientific management) that he had a gut feeling that Mitchell could do something great, and that despite other people’s input—and the evidence—he decided to bet on Mitchell.

Having brought Mitchell to Los Alamos, Pete set about suggesting projects for him. At first, it was following up on some of Pete’s own work, and trying to compute bulk collective (“transport”) properties of quantum field theories as a way to understand high-energy particle collisions—a kind of foreshadowing of investigations of quark-gluon plasma.

But soon Pete suggested that Mitchell try looking at fluid turbulence, and in particular see whether renormalization group methods might help in understanding it.

Whenever a fluid—like water—flows sufficiently rapidly it forms lots of little eddies and behaves in a complex and seemingly random way. But even though this qualitative phenomenon had been discussed for centuries (with, for example, Leonardo da Vinci making nice pictures of it), physics had had remarkably little to say about it—though in the 1940s Andrei Kolmogorov had given a simple argument that the eddies should form a cascade with a k^(-5/3) distribution of energies. At Los Alamos, though, with its focus on nuclear weapons development (inevitably involving violent fluid phenomena), turbulence was a very important thing to understand—even if it wasn’t obvious how to approach it.
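Where does the -5/3 come from? Just dimensional analysis (a quick sketch of the standard argument): if the energy spectrum scales as E(k) ~ ε^a k^b, with ε the energy dissipation rate per unit mass, then with [E(k)] = L^3/T^2, [ε] = L^2/T^3 and [k] = 1/L, matching powers of length and time fixes the exponents:

(* match powers of length (2a - b == 3) and of time (3a == 2) *)
Solve[{2 a - b == 3, 3 a == 2}, {a, b}] (* -> {{a -> 2/3, b -> -5/3}} *)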

But in 1974, there was news that Ken Wilson from Cornell had just “solved the Kondo problem” using a technique called the renormalization group. And Pete Carruthers suggested that Mitchell should try to apply this technique to turbulence.

The renormalization group is about seeing how changes of scale (or other parameters) affect descriptions (and behavior) of systems. And as it happened, it was Mitchell’s thesis advisor at MIT, Francis Low, who, along with Murray Gell-Mann, had introduced it back in 1954 in the context of quantum electrodynamics. The idea had lain dormant for many years, but in the early 1970s it came back to life with dramatic—though quite different—applications in both particle physics (specifically, QCD) and condensed matter physics.

In a piece of iron at room temperature, you can basically get all electron spins associated with each atom lined up, so the iron is magnetized. But if you heat the iron up, there start to be fluctuations, and suddenly—above the so-called Curie temperature (770°C for iron)—there’s effectively so much randomness that the magnetization disappears. And in fact there are lots of situations (think, for example, melting or boiling—or, for that matter, the formation of traffic jams) where this kind of sudden so-called phase transition occurs.

But what is actually going on in a phase transition? I think the clearest way to see this is by looking at an analog in cellular automata. With the particular rule shown below, if there aren’t very many initial black cells, the whole system will soon be white. But if you increase the number of initial black cells (as a kind of analog of increasing the temperature in a magnetic system), then suddenly, in this case at 50% black, there’s a sharp transition, and now the whole system eventually becomes black. (For phase transition experts: yes, this is a phase transition in a 1D system; one only needs 2D if the system is required to be microscopically reversible.)


GraphicsRow[SeedRandom[234316];
 Table[ArrayPlot[
   CellularAutomaton[<|
     "RuleNumber" -> 294869764523995749814890097794812493824, 
     "Colors" -> 4|>, 
    3 Boole[Thread[RandomReal[{0, 1}, 2000] < rho]], {500, {-300, 
      300}}], FrameLabel -> {None, 
Row[{
Round[100 rho], "% black"}]}], {rho, {0.4, 0.45, 0.55, 0.6}}], -30]

But what does the system do near 50% black? In effect, it can’t decide whether to finally become black or white. And so it ends up showing a whole hierarchy of “fluctuations” from the smallest scales to the largest. And what became clear by the 1960s is that the “critical exponents” characterizing the power laws describing these fluctuations are universal across many different systems.

But how can one compute these critical exponents? In a few toy cases, analytical methods were known. But mostly, something else was needed. And in the late 1960s Ken Wilson realized that one could use the renormalization group, and computers. One might have a model for how individual spins interact. But the renormalization group gives a procedure for “scaling up” to the interactions of larger and larger blocks of spins. And by studying that on a computer, Ken Wilson was able to start computing critical exponents.
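To give a flavor of how such a scaling-up procedure works (a toy sketch, certainly not Wilson’s actual computation): for the 1D Ising chain one can sum out every other spin exactly, turning a chain with coupling K into a half-as-dense chain with coupling K' given by tanh(K') = tanh(K)^2. Iterating this “decimation” map sends the effective coupling to zero, which is the renormalization-group way of seeing that the 1D chain has no finite-temperature phase transition:

(* block-spin ("decimation") renormalization for the 1D Ising chain *)
rg[k_] := ArcTanh[Tanh[k]^2];
NestList[rg, 1., 8] (* the effective coupling flows to 0 *)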

At first, the physics world didn’t pay much attention, not least because they weren’t used to computers being so intimately in the loop in theoretical physics. But then there was the Kondo problem (and, yes, so far as I know, it has no relation to modern Kondoing—though it does relate to modern quantum dot cellular automata). In most materials, electrical resistivity decreases as the temperature decreases (going to zero for superconductors even above absolute zero). But back in the 1930s, measurements on gold had shown instead an increase of resistivity at low temperatures. By the 1960s, it was believed that this was due to the scattering of electrons from magnetic impurities—but calculations ran into trouble, generating infinite results.

But then, in 1975, Ken Wilson applied his renormalization group methods—and correctly managed to compute the effect. There was still a certain mystery about the whole thing (and it probably didn’t help that—at least when I knew him in the 1980s and beyond—I often found Ken Wilson’s explanations quite hard to understand). But the idea that the renormalization group could be important was established.

So how might it apply to fluid turbulence? Kolmogorov’s power law seemed suggestive. But could one take the Navier–Stokes equations which govern idealized fluid flow and actually derive something like this? This was the project on which Mitchell Feigenbaum embarked.

The Big Discovery

The Navier–Stokes equations are very hard to work with. In fact, to this day it’s still not clear how even the most obvious feature of turbulence—its apparent randomness—arises from these equations. (It could be that the equations aren’t a full or consistent mathematical description, and one’s actually seeing amplified microscopic molecular motions. It could be that—as in chaos theory and the Lorenz equations—it’s due to amplification of randomness in the initial conditions. But my own belief, based on work I did in the 1980s, is that it’s actually an intrinsic computational phenomenon—analogous to the randomness one sees in my rule 30 cellular automaton.)

So how did Mitchell approach the problem? He tried simplifying it—first by going from equations depending on both space and time to ones depending only on time, and then by effectively making time discrete, and looking at iterated maps. Through Paul Stein, Mitchell knew about the (not widely known) previous work at Los Alamos on iterated maps. But Mitchell didn’t quite know where to go with it, though having just got a swank new HP-65 programmable calculator, he decided to program iterated maps on it.

Then in July 1975, Mitchell went (as I also did a few times in the early 1980s) to the summer physics hang-out-together event in Aspen, CO. There he ran into Steve Smale—a well-known mathematician who’d been studying dynamical systems—and was surprised to find Smale talking about iterated maps. Smale mentioned that someone had asked him if the limit of the period-doubling cascade a∞ ≈ 3.56995 could be expressed in terms of standard constants like π and √2. Smale related that he’d said he didn’t know. But Mitchell’s interest was piqued, and he set about trying to figure it out.

He didn’t have his HP-65 with him, but he dove into the problem using the standard tools of a well-educated mathematical physicist, and had soon turned it into something about poles of functions in the complex plane—about which he couldn’t really say anything. Back at Los Alamos in August, though, he had his HP-65, and he set about programming it to find the bifurcation points aₙ.

The iterative procedure ran pretty fast for small n. But by n = 5 it was taking 30 seconds. And for n = 6 it took minutes. While it was computing, however, Mitchell decided to look at the aₙ values he had so far—and noticed something: they seemed to be converging geometrically to a final value.

At first, he just used this fact to estimate a∞, which he tried—unsuccessfully—to express in terms of standard constants. But soon he began to think that actually the convergence exponent δ was more significant than a∞—since its value stayed the same under simple changes of variables in the map. For perhaps a month Mitchell tried to express δ in terms of standard constants.

But then, in early October 1975, he remembered that Paul Stein had said period doubling seemed to look the same not just for logistic maps but for any iterated map with a single hump. Reunited with his HP-65 after a trip to Caltech, Mitchell immediately tried the map x → a sin(x)—and discovered that, at least to 3-digit precision, the exponent δ was exactly the same.

He was immediately convinced that he’d discovered something great. But Stein told him he needed more digits to really conclude much. Los Alamos had plenty of powerful computers—so the next day Mitchell got someone to show him how to write a program in FORTRAN on one of them to go further—and by the end of the day he had managed to compute that in both cases δ was about 4.6692.

The computer he used was a typical workhorse US scientific computer of the day: a CDC 6000 series machine (of the same type I used when I first moved to the US in 1978). It had been designed by Seymour Cray, and by default it used 60-bit floating-point numbers. But at this precision (about 14 decimal digits), 4.6692 was as far as Mitchell could compute. Fortunately, however, Pete’s wife Lucy Carruthers was a programmer at Los Alamos, and she showed Mitchell how to use double precision—with the result that he was able to compute δ to 11-digit precision, and determine that the values for his two different iterated maps agreed.

Within a few weeks, Mitchell had found that δ seemed to be universal whenever the iterated map had a single quadratic maximum. But he didn’t know why this was, or have any particular framework for thinking about it. But still, finally, at the age of 30, Mitchell had discovered something that he thought was really interesting.

On Mitchell’s birthday, December 19, he saw his friend Predrag, and told him about his result. But at the time, Predrag was working hard on mainstream particle physics, and didn’t pay too much attention.

Mitchell continued working, and within a few months he was convinced that not only was the exponent δ universal—the appropriately scaled, limiting, infinitely wiggly, actual iteration of the map was too. In April 1976 Mitchell wrote a report announcing his results. Then on May 2, 1976, he gave a talk about them at the Institute for Advanced Study in Princeton. Predrag was there, and now he got interested in what Mitchell was doing.

As so often, however, it was hard to understand just what Mitchell was talking about. But by the next day, Predrag had successfully simplified things, and come up with a single, explicit, functional equation for the limiting form of the scaled iterated map: g(g(x)) = -g(α x)/α, with α ≈ 2.50290—implying that for any iterated map of the appropriate type, the limiting form would always look like an even wigglier version of:

FeigenbaumFunction plot

fUD[z_] = 
  1. - 1.5276329970363323 z^2 + 0.1048151947874277 z^4 + 
   0.026705670524930787 z^6 - 0.003527409660464297 z^8 + 
   0.00008160096594827505 z^10 + 0.000025285084886512315 z^12 - 
   2.5563177536625283*^-6 z^14 - 9.65122702290271*^-8 z^16 + 
   2.8193175723520713*^-8 z^18 - 2.771441260107602*^-10 z^20 - 
   3.0292086423142963*^-10 z^22 + 2.6739057855563045*^-11 z^24 + 
   9.838888060875235*^-13 z^26 - 3.5838769501333333*^-13 z^28 + 
   2.063994985307743*^-14 z^30;
   fCF = Compile[{z}, 
    Module[{\[Alpha] = -2.5029078750959130867, n, \[Zeta]},
     n = If[Abs[z] <= 1., 0, Ceiling[Log[-\[Alpha], Abs[z]]]];
     \[Zeta] = z/\[Alpha]^n;
     Do[\[Zeta] = #, {2^n}];
     \[Alpha]^n \[Zeta]]] &[fUD[\[Zeta]]];
     Plot[fCF[x], {x, -100, 100}, MaxRecursion -> 5, PlotRange -> All]
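One can get a feel for where those coefficients come from by redoing the computation at low order (a rough sketch, assuming a short polynomial ansatz instead of the 30th-order one above): write g(x) = 1 + c2 x^2 + c4 x^4, note that setting x = 0 in the functional equation forces α = -1/g(1), and then choose c2 and c4 so that the x^2 and x^4 coefficients of g(g(x)) + g(α x)/α vanish:

(* low-order polynomial solution of g(g(x)) == -g(α x)/α with g(0) = 1 *)
g[x_] := 1 + c2 x^2 + c4 x^4;
residual = g[g[x]] + g[alpha x]/alpha /. alpha -> -1/g[1];
sol = FindRoot[{SeriesCoefficient[residual, {x, 0, 2}], 
    SeriesCoefficient[residual, {x, 0, 4}]}, {{c2, -1.5}, {c4, 0.1}}];
{c2, c4, -1/g[1]} /. sol

Even at this low order, the numbers come out in the neighborhood of the c2 ≈ -1.5276 and α ≈ 2.5029 above.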

How It Developed

The whole area of iterated maps got a boost on June 10, 1976, with the publication in Nature of Robert May’s survey about them, written independently of Mitchell and (of course) not mentioning his results. But in the months that followed, Mitchell traveled around and gave talks about his results. The reactions were mixed. Physicists wondered how the results related to physics. Mathematicians wondered about their status, given that they came from experimental mathematics, without any formal mathematical proof. And—as always—people found Mitchell’s explanations hard to understand.

In the fall of 1976, Predrag went as a postdoc to Oxford—and on the very first day that I showed up as a 17-year-old particle-physics-paper-writing undergraduate, I ran into him. We talked mostly about his elegant “bird tracks” method for doing group theory (about which he finally published a book 32 years later). But he also tried to explain iterated maps. And I still remember him talking about an idealized model for fish populations in the Adriatic Sea (only years later did I make the connection that Predrag was from what is now Croatia).

At the time I didn’t pay much attention, but somehow the idea of iterated maps lodged in my consciousness, soon mixed together with the notion of fractals that I learned from Benoit Mandelbrot’s book. And when I began to concentrate on issues of complexity a couple of years later, these ideas helped guide me towards systems like cellular automata.

But back in 1976, Mitchell (who I wouldn’t meet for several more years) was off giving lots of talks about his results. He also submitted a paper to the prestigious academic journal Advances in Mathematics. For 6 months he heard nothing. But eventually the paper was rejected. He tried again with another paper, now sending it to the SIAM Journal on Applied Mathematics. Same result.

I have to say I’m not surprised this happened. In my own experience of academic publishing (now long in the past), if one was reporting progress within an established area it wasn’t too hard to get a paper published. But anything genuinely new or original one could pretty much count on getting rejected by the peer review process, either through intellectual shortsightedness or through academic corruption. And for Mitchell there was the additional problem that his explanations weren’t easy to understand.

But finally, in late 1977, Joel Lebowitz, editor of the Journal of Statistical Physics, agreed to publish Mitchell’s paper—essentially on the basis of knowing Mitchell, even though he admitted he didn’t really understand the paper. And so it was that early in 1978 “Quantitative Universality for a Class of Nonlinear Transformations”—reporting Mitchell’s big result—officially appeared. (For purposes of academic priority, Mitchell would sometimes quote a summary of a talk he gave on August 26, 1976, that was published in the Los Alamos Theoretical Division Annual Report 1975–1976. Mitchell was quite affected by the rejection of his papers, and for years kept the rejection letters in his desk drawer.)

Mitchell continued to travel the world talking about his results. There was interest, but also confusion. But in the summer of 1979, something exciting happened: Albert Libchaber in Paris reported results on a physical experiment on the transition to turbulence in convection in liquid helium—where he saw period doubling, with exactly the exponent δ that Mitchell had calculated. Mitchell’s δ apparently wasn’t just universal to a class of mathematical systems—it also showed up in real, physical systems.

Pretty much immediately, Mitchell was famous. Connections to the renormalization group had been made, and his work was becoming fashionable among both physicists and mathematicians. Mitchell himself was still traveling around, but now he was regularly hobnobbing with the top physicists and mathematicians.

I remember him coming to Caltech, perhaps in the fall of 1979. There was a certain rock-star character to the whole thing. Mitchell showed up, gave a stylish but somewhat mysterious talk, and was then whisked away to talk privately with Richard Feynman and Murray Gell-Mann.

Soon Mitchell was being offered all sorts of high-level jobs, and in 1982 he triumphantly returned to Cornell as a full professor of physics. There was an air of Nobel Prize–worthiness, and by June 1984 he was appearing in the New York Times Magazine, in full Beethoven mode, in front of a Cornell waterfall:

Mitchell in New York Times Magazine

Still, the mathematicians weren’t satisfied. As with Benoit Mandelbrot’s work, they tended to see Mitchell’s results as mere “numerical conjectures”, not proven and not always even quite worth citing. But top mathematicians (who Mitchell had befriended) were soon working on the problem, and results began to appear—though it took a decade for there to be a full, final proof of the universality of δ.

Where the Science Went

So what happened to Mitchell’s big discovery? It was famous, for sure. And, yes, period-doubling cascades with his universal features were seen in a whole sequence of systems—in fluids, optics and more. But how general was it, really? And could it, for example, be extended to the full problem of fluid turbulence?

Mitchell and others studied systems other than iterated maps, and found some related phenomena. But none were quite as striking as Mitchell’s original discovery.

In a sense, my own efforts on cellular automata and the behavior of simple programs, beginning around 1981, have tried to address some of the same bigger questions as Mitchell’s work might have led to. But the methods and results have been very different. Mitchell always tried to stay close to the kinds of things that traditional mathematical physics can address, while I unabashedly struck out into the computational universe, investigating the phenomena that occur there.

I tried to see how Mitchell’s work might relate to mine—and even in my very first paper on cellular automata in 1981 I noted for example that the average density of black cells on successive steps of a cellular automaton’s evolution can be approximated (in “mean field theory”) by an iterated map.
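To show concretely what such a mean-field approximation looks like (a sketch using rule 90 as an example, where each new cell is the exclusive or of its two neighbors): if cells were statistically independent with black-cell density p, a new cell would be black with probability 2 p (1 – p), and one can compare iterating that map with densities measured from an actual evolution:

(* mean-field map for rule 90: a cell becomes black iff exactly one neighbor is black *)
meanField = NestList[2 # (1 - #) &, 0.3, 20];

(* measured densities from an actual rule 90 evolution (cyclic boundary conditions) *)
SeedRandom[1234];
init = Boole[Thread[RandomReal[{0, 1}, 10000] < 0.3]];
measured = N[Mean /@ CellularAutomaton[90, init, 20]];

ListLinePlot[{meanField, measured}, PlotLegends -> {"mean field", "measured"}]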

I also noted that mathematically the whole evolution of a cellular automaton can be viewed as an iterated map—though on the Cantor set, rather than on ordinary real numbers. In my first paper, I even plotted the analog of Mitchell’s smooth mappings, but now they were wild and discontinuous:

Rules plot

GraphicsRow[
 Labeled[ListPlot[
     Table[FromDigits[CellularAutomaton[#, IntegerDigits[n, 2, 12]], 
       2], {n, 0, 2^12 - 1}], AspectRatio -> 1, Frame -> True, 
     FrameTicks -> None], 
    Text[StringTemplate["rule ``"][#]]] & /@ {22, 42, 90, 110}]

But try as I might, I could never find any strong connection with Mitchell’s work. I looked for analogs of things like period doubling, and Sarkovskii’s theorem, but didn’t find much. In my computational framework, even thinking about real numbers, with their infinite sequence of digits, was a bit unnatural. Years later, in A New Kind of Science, I had a note entitled “Smooth iterated maps”. I showed their digit sequences, and observed, rather undramatically, that Mitchell’s discovery implied an unusual nested structure at the beginning of the sequences:

Nested

FractionalDigits[x_, digs_Integer] := 
 NestList[{Mod[2 First[#], 1], Floor[2 First[#]]} &, {x, 0}, digs][[
  2 ;;, -1]];
  GraphicsRow[
 Function[a, 
   ArrayPlot[
    FractionalDigits[#, 40] & /@ 
     NestList[a # (1 - #) &, N[1/8, 80], 80]]] /@ {2.5, 3.3, 3.4, 3.5,
    3.6, 4}]

The Rest of the Story

Portrait of Mitchell
(Photograph by Predrag Cvitanović)

So what became of Mitchell? After four years at Cornell, he moved to the Rockefeller University in New York, and for the next 30 years settled into a somewhat Bohemian existence, spending most of his time at his apartment on the Upper East Side of Manhattan.

While he was still at Los Alamos, Mitchell had married a woman from Germany named Cornelia, who was the sister of the wife of physicist (and longtime friend of mine) David Campbell, who had started the Center for Nonlinear Studies at Los Alamos, and would later go on to be provost at Boston University. But after not too long, Cornelia left Mitchell, taking up instead with none other than Pete Carruthers. (Pete—who struggled with alcoholism and other issues—later reunited with Lucy, but died in 1997 at the age of 61.)

When he was back at Cornell, Mitchell met a woman named Gunilla, who had run away from her life as a pastor’s daughter in a small town in northern Sweden at the age of 14, had ended up as a model for Salvador Dalí, and then in 1966 had been brought to New York as a fashion model. Gunilla had been a journalist, video maker, playwright and painter. She and Mitchell married in 1986, and remained married for 26 years, during which time Gunilla developed quite a career as a figurative painter.

Mitchell’s last solo academic paper was published in 1987. He did publish a handful of other papers with various collaborators, though none were terribly remarkable. Most were extensions of his earlier work, or attempts to apply traditional methods of mathematical physics to various complex fluid-like phenomena.

Mitchell liked interacting with the upper echelons of academia. He received all sorts of honors and recognition (though never a Nobel Prize). But to the end he viewed himself as something of an outsider—a Renaissance man who happened to have focused on physics, but didn’t really buy into all its institutions or practices.

From the early 1980s on, I used to see Mitchell fairly regularly, in New York or elsewhere. He became a daily user of Mathematica, singing its praises and often telling me about elaborate calculations he had done with it. Like many mathematical physicists, Mitchell was a connoisseur of special functions, and would regularly talk to me about more and more exotic functions he thought we should add.

Mitchell had two major excursions outside of academia. By the mid-1980s, the young poetess—now named Kathy Hammond—that Mitchell had known at Cornell had been an advertising manager for the New York Times and had then married into the family that owned the Hammond World Atlas. And through this connection, Mitchell was pulled into a completely new field for him: cartography.

I talked to him about it many times. He was very proud of figuring out how to use the Riemann mapping theorem to produce custom local projections for maps. He described (though I never fully understood it) a very physics-based algorithm for placing labels on maps. And he was very pleased when finally an entirely new edition of the Hammond World Atlas (that he would refer to as “my atlas”) came out.

Starting in the 1980s, there’d been an increasing trend for physics ideas to be applied to quantitative finance, and for physicists to become Wall Street quants. And with people in finance continually looking for a unique edge, there was always an interest in new methods. I was certainly contacted a lot about this—but with the success of James Gleick’s 1987 book Chaos (for which I did a long interview, though was only mentioned, misspelled, in a list of scientists who’d been helpful), there was a whole new set of people looking to see how “chaos” could help them in finance.

One of those was a certain Michael Goodkin. When he was in college back in the early 1960s, Goodkin had started a company that marketed the legal research services of law students. A few years later, he enlisted several Nobel Prize–winning economists and started what may have been the first hedge fund to do computerized arbitrage trading. Goodkin had always been a high-rolling, globetrotting gambler and backgammon player, and he made and lost a lot of money. And, down on his luck, he was looking for the next big thing—and found chaos theory, and Mitchell Feigenbaum.

For a few years he cultivated various physicists, then in 1995 he found a team to start a company called Numerix to commercialize the use of physics-like methods in computations for increasingly exotic financial instruments. Mitchell Feigenbaum was the marquee name, though the heavy lifting was mostly done by my longtime friend Nigel Goldenfeld, and a younger colleague of his named Sasha Sokol.

At the beginning there was lots of mathematical-physics-like work, and Mitchell was quite involved. (He was an enthusiast of Itô calculus, gave lectures about it, and was proud of having found 1000× speed-ups of stochastic integrations.) But what the company actually did was to write C++ libraries for banks to integrate into their systems. It wasn’t something Mitchell wanted to do long term. And after a number of years, Mitchell’s active involvement in the company declined.

(I’d met Michael Goodkin back in 1998, and 14 years later—having recently written his autobiography The Wrong Answer Faster: The Inside Story of Making the Machine That Trades Trillions—he suddenly contacted me again, pitching my involvement in a rather undefined new venture. Mitchell still spoke highly of Michael, though when the discussion rather bizarrely pivoted to me basically starting and CEOing a new company, I quickly dropped it.)

I had many interactions with Mitchell over the years, though they’re not as well archived as they might be, because they tended to be verbal rather than written, since, as Mitchell told me (in email): “I dislike corresponding by email. I still prefer to hear an actual voice and interact…”

There are fragments in my archive, though. There’s correspondence, for example, about Mitchell’s 2004 60th-birthday event, that I couldn’t attend because it conflicted with a significant birthday for one of my children. In lieu of attending, I commissioned the creation of a “Feigenbaum–Cvitanović Crystal”—a 3D rendering in glass of the limiting function g(z) in the complex plane.

It was a little complex to solve the functional equation, and the laser manufacturing method initially shattered a few blocks of glass, but eventually the object was duly made, and sent—and I was pleased many years later to see it nicely displayed in Mitchell’s apartment:

Feigenbaum–Cvitanović crystal

Sometimes my archives record mentions of Mitchell by others, usually Predrag. In 2007, Predrag reported (with characteristic wit):

“Other news: just saw Mitchell, he is dating Odyssey.

No, no, it’s not a high-level Washington type escort service—he is dating Homer’s Odyssey, by computing the positions of low stars as function of the 26000 year precession—says Hiparcus [sic] had it all figured out, but Catholic church succeeded in destroying every single copy of his tables.”

Living up to the Renaissance man tradition, Mitchell always had a serious interest in history. In 2013, responding to a piece of mine about Leibniz, Mitchell said he’d been a Leibniz enthusiast since he was a teenager, then explained:

“The Newton hagiographer (literally) Voltaire had no idea of the substance of the Monadology, so could only spoof ‘the best of all possible worlds’. Long ago I’ve published this as a verbal means of explaining 2^n universality.

Leibniz’s second published paper at age 19, ‘On the Method of Inverse Tangents’, or something like that, is actually the invention of the method of isoclines to solve ODEs, quite contrary to the extant scholarly commentary. Both Leibniz and Newton start with differential equations, already having received the diff. calculus. This is quite an intriguing story.”

But the mainstay of Mitchell’s intellectual life was always mathematical physics, though done more as a personal matter than as part of institutional academic work. At some point he was asked by his then-young goddaughter (he never had children of his own) why the Moon looks larger when it’s close to the horizon. He wrote back an explanation (a bit in the style of Euler’s Letters to a German Princess), then realized he wasn’t sure of the answer, and got launched into many years of investigation of optics and image formation. (He’d actually been generally interested in the retina since he was at MIT, influenced by Jerry Lettvin of “What the Frog’s Eye Tells the Frog’s Brain” fame.)

He would tell me about it, explaining that the usual theory of image formation was wrong, and he had a better one. He always used the size of the Moon as an example, but I was never quite clear whether the issue was one of optics or perception. He never published anything about what he did, though with luck his manuscripts (rumored to have the makings of a book) will eventually see the light of day—assuming others can understand them.

When I would visit Mitchell (and Gunilla), their apartment had a distinctly Bohemian feel, with books, papers, paintings and various devices strewn around. And then there was The Bird. It was a cockatoo, and it was loud. I’m not sure who got it or why. But it was a handful. Mitchell and Gunilla nearly got ejected from their apartment because of noise complaints from neighbors, and they ended up having to take The Bird to therapy. (As I learned in a slightly bizarre—and never executed—plan to make videogames for “they-are-alien-intelligences-right-here-on-this-planet” pets, cockatoos are social and, as pets, arguably really need a “Twitter for Cockatoos”.)

The Bird
(Photograph by Predrag Cvitanović)

In the end, though, it was Gunilla who left, with the rumor being that she’d been driven away by The Bird.

The last time I saw Mitchell in person was a few years ago. My son Christopher and I visited him at his apartment—and he was in full Mitchell form, with eyes bright, talking rapidly and just a little conspiratorially about the mathematical physics of image formation. “Bird eyes are overrated”, he said, even as his cockatoo squawked in the next room. “Eagles have very small foveas, you know. Their eyes are like telescopes.”

“Fish have the best eyes”, he said, explaining that all eyes evolved underwater—and that the architecture hadn’t really changed since. “Fish keep their entire field of view in focus, not like us”, he said. It was charming, eccentric, and very Mitchell.

For years, we had talked from time to time on the phone, usually late at night. I saw Predrag a few months ago, saying that I was surprised not to have heard from Mitchell. He explained that Mitchell was sick, but was being very private about it. Then, a few weeks ago, just after midnight, Predrag sent me an email with the subject line “Mitchell is dead”, explaining that Mitchell had died at around 8 pm, and attaching a quintessential Mitchell-in-New-York picture:

Mitchell in New York
(Photograph by Predrag Cvitanović)

It’s kind of a ritual I’ve developed when I hear that someone I know has died: I immediately search my archives. And this time I was surprised to find that a few years ago Mitchell had successfully reached voicemail I didn’t know I had. So now we can give Mitchell the last word:

And, of course, the last number too: 4.66920160910299067185320382…



13 comments

  1. Makes me think someone should compile a “Lives of the Mathematicians” and encourage apt young folks to read it, though it might be hard to find others this good to include. Thank you.

  2. Thanks for that obituary. It complements well the ones from the Washington Post and the New York Times.
    Universality was one of the great discoveries of the last century. It will certainly have more ramifications in the future. Also nice is the visualization of the math which Feigenbaum discovered experimentally. It illustrates how important experiments have become in mathematics (which contrasts in an astounding way with the increasing detachment of modern theoretical physics from experiments). Feigenbaum, who was somewhere between mathematics and physics, illustrates both fields. I also appreciate the information about the person, which one cannot find anywhere else. The story, including of course the episode about ‘the bird’, could produce material for a movie. It is actually quite surprising that the person Mitchell Feigenbaum is hardly to be found on the web (in talks, lectures or interviews). His preference for personal communication rather than electronic means explains a bit of this mystery.

  3. Stephen,

    I thank you for a fascinating article remembering Mitchell. As a long time admirer of your development of Mathematica and the Wolfram Language I wish for your continued success in developing computational knowledge and extending its availability to a broad world-wide user base.

    Best wishes… Syd Geraghty

  4. To Vincent DiCarlo

    A classic example of what you would like is:

    Men of Mathematics
    E.T. Bell
    Simon and Schuster, Oct 15, 1986 – Biography & Autobiography – 590 pages
    7 Reviews
    From one of the greatest minds in contemporary mathematics, Professor E.T. Bell, comes a witty, accessible, and fascinating look at the beautiful craft and enthralling history of mathematics.

    Men of Mathematics provides a rich account of major mathematical milestones, from the geometry of the Greeks through Newton’s calculus, and on to the laws of probability, symbolic logic, and the fourth dimension. Bell breaks down this majestic history of ideas into a series of engrossing biographies of the great mathematicians who made progress possible—and who also led intriguing, complicated, and often surprisingly entertaining lives.

    Never pedantic or dense, Bell writes with clarity and simplicity to distill great mathematical concepts into their most understandable forms for the curious everyday reader. Anyone with an interest in math may learn from these rich lessons, an advanced degree or extensive research is never necessary.

  5. Thanks for this great article. I found a smooth, analytic function which gives the same results as the iterations in Professor Feigenbaum’s scenario.
    It is easy to find the Feigenbaum constant from this new function.
    It is a pity that he never saw it. I live near Montreal, and I thought that he would live much longer.

  6. Always a big fan of your memoir writing, detailing both personal and technical aspects in a natural and highly interesting manner. Thank you.

  7. I’m wondering if you could possibly tell me (or write a full-blown post on it), how the Ricci Flow relates to renormalization group and universality! I read in Grisha Perelman’s biography that he wasn’t really interested in having closed the Poincare conjecture, rather, he was fascinated by the connections he saw between Ricci Flow and renormalization. Could you possibly write about that!?!

  8. This is the best obituary I’ve ever read. It not only brings this unusual person alive (after all, the real purpose of an obituary!), but invokes and explains and contextualizes his work. I knew almost nothing about Feigenbaum, and now he shines for me. What a gift; many thanks.

  9. It’s so great somebody wrote Mitchell Feigenbaum’s story. I’m still trying to grasp his theories and so glad I found this article, and to hear his voice. Thank You.

  10. Thank you for writing a beautiful précis of this talented man’s life. Being the same age roughly, and remembering back to the 60’s and 70’s, it is illustrative of the challenges faced in academia by those with a lot to offer who didn’t exactly fit into a particular pigeonhole. Multidisciplinarity, if there is such a word, is thankfully becoming more tolerated, at least conceptually. Young academics would do well to read what you have written here.

  11. I am not a mathematician; I squeaked by in calculus. I want to know if I understand correctly the meaning of this number. I think it means in any randomness [or chaos] there is an eventual order that repeats at a dependable frequency. Essentially a sort of Mandelbrot set evolves from any chaos.
    By extension of that thought, Deep Time could explain the accidental Big Bang. By further consideration this number could answer, "Who made God?"
    And does physical reality reveal fractals and apparently the Feigenbaum constant, perhaps as cause of or fundamental to the existence of the same?

  12. I was pleasantly surprised to find this today. Thank you for sharing!

  13. Answering myself: from elsewhere I learned that at Los Alamos, Mitchell was tasked with answering why atomic explosions stop. With a mathematical model of steady fluid dynamics he introduced, at various mathematical points, frequencies to produce chaos and accidentally discovered what appeared to be a constant rate of increase IN CHAOS. With supercomputing he discovered various frequencies produced dependable fractal bifurcations at the constant rate named for him.
    I had been reading on Mandelbrot fractals and assumed these Feigenbaum fractals reiterated infinitely; were similar to Mandelbrot’s. But I think I understand that Feigenbaum’s constant is more in line with a ‘cooling universe’: order descends into chaos, showing a dependable rate for that. (I guess that answers why an atomic blast fizzles out.)
    For an abstract-number-universe the Mandelbrots stem from random iterations of seed conditions and suggested structure appears at a constant rate from chaos: fractals. I thought perhaps Feigenbaum’s constant showed something similar AND was applicable to non-abstract, physically-real systems. I wasn’t thinking of increasing chaos – I was thinking increasing complexity. Hence my wonder over the idea of a seed for either an accidental big bang or creation by an abstract complexity approaching sentience.
    Not being versed in the terminology I misunderstood. But: Feigenbaum fired my imagination and I want to share this:
    I can’t help but feel that mathematicians could, between these constants and the fine particle constant, mathematically link abstract Mandelbrot structure with string or brane theory formation of non-abstract physical laws. Reciprocal of Feigenbaum for creation?
    (Big-bang or God. Either works, choose your flavor, not the issue here.)