What Is Spacetime, Really?

Note: The ideas here have now been developed much further in the Wolfram Physics Project.
See the announcement: Finally We May Have a Path to the Fundamental Theory of Physics… and It’s Beautiful (April 14, 2020)

Space network

A hundred years ago today Albert Einstein published his General Theory of Relativity—a brilliant, elegant theory that has survived a century, and provides the only successful way we have of describing spacetime.

There are plenty of theoretical indications, though, that General Relativity isn’t the end of the story of spacetime. And in fact, much as I like General Relativity as an abstract theory, I’ve come to suspect it may actually have led us on a century-long detour in understanding the true nature of space and time.

I’ve been thinking about the physics of space and time for a little more than 40 years now. At the beginning, as a young theoretical physicist, I mostly just assumed Einstein’s whole mathematical setup of Special and General Relativity—and got on with my work in quantum field theory, cosmology, etc. on that basis.

But about 35 years ago, partly inspired by my experiences in creating technology, I began to think more deeply about fundamental issues in theoretical science—and started on my long journey to go beyond traditional mathematical equations and instead use computation and programs as basic models in science. Quite soon I made the basic discovery that even very simple programs can show immensely complex behavior—and over the years I discovered that all sorts of systems could finally be understood in terms of these kinds of programs.

Encouraged by this success, I then began to wonder if perhaps the things I’d found might be relevant to that ultimate of scientific questions: the fundamental theory of physics.

At first, it didn’t seem too promising, not least because the models that I’d particularly been studying (cellular automata) seemed to work in a way that was completely inconsistent with what I knew from physics. But sometime in 1988—around the time the first version of Mathematica was released—I began to realize that if I changed my basic way of thinking about space and time then I might actually be able to get somewhere.

A Simple Ultimate Theory?

In the abstract it’s far from obvious that there should be a simple, ultimate theory of our universe. Indeed, the history of physics so far might make us doubtful—because it seems as if whenever we learn more, things just get more complicated, at least in terms of the mathematical structures they involve. But—as noted, for example, by early theologians—one very obvious feature of our universe is that there is order in it. The particles in the universe don’t just all do their own thing; they follow a definite set of common laws.

But just how simple might the ultimate theory for the universe be? Let’s say we could represent it as a program, say in the Wolfram Language. How long would the program be? Would it be as long as the human genome, or as the code for an operating system? Or would it be much, much smaller?

Before my work on the computational universe of simple programs, I would have assumed that if there’s a program for the universe it must be at least somewhat complicated. But what I discovered is that in the computational universe even extremely simple programs can actually show behavior as complex as anything (a fact embodied in my general Principle of Computational Equivalence). So then the question arises: could one of these simple programs in the computational universe actually be the program for our physical universe?

Cellular automata evolution from simple rules
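To make this concrete, here’s a minimal sketch of an elementary cellular automaton in Python (purely illustrative, not code from any of my actual projects). Rule 30 is among the simplest rules that show this kind of immensely complex behavior:

```python
# A "very simple program": an elementary cellular automaton.
# Each cell is updated from its own value and its two neighbors' values,
# according to the bits of an 8-bit rule number. (Illustrative sketch.)

def step(cells, rule=30):
    """One update step, with wraparound at the edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(width=63, steps=30, rule=30):
    """Start from a single black cell and print successive rows."""
    cells = [0] * width
    cells[width // 2] = 1  # one seed cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

evolve()  # rule 30 produces an intricate, effectively random pattern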

The Data Structure of the Universe

But what would such a program be like? One thing is clear: if the program is really going to be extremely simple, it’ll be too small to explicitly encode obvious features of our actual universe, like particle masses, or gauge symmetries, or even the number of dimensions of space. Somehow all these things have to emerge from something much lower level and more fundamental.

So if the behavior of the universe is determined by a simple program, what’s the basic “data structure” on which this program operates? At first, I’d assumed that it must be something simple for us to describe, like the lattice of cells that exists in a cellular automaton. But even though such a structure works well for models of many things, it seems at best incredibly implausible as a fundamental model of physics. Yes, one can find rules that give behavior which on a large scale doesn’t show obvious signs of the lattice. But if there’s really going to be a simple model of physics, it seems wrong that such a rigid structure for space should be burned in, while every other feature of physics just emerges.

So what’s the alternative? One needs something in a sense “underneath” space: something from which space as we know it can emerge. And one needs an underlying data structure that’s as flexible as possible. I thought about this for years, and looked at all sorts of computational and mathematical formalisms. But what I eventually realized was that basically everything I’d looked at could actually be represented in the same way: as a network.

A network—or graph—just consists of a bunch of nodes, joined by connections. And all that’s intrinsically defined in the graph is the pattern of these connections.

A graph
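To emphasize how little data that involves, here’s a tiny illustrative sketch in Python (the node labels are arbitrary; they do nothing but identify nodes):

```python
# A network is nothing but the pattern of connections: an adjacency
# structure is a complete description. No coordinates, no geometry.

network = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2},
}

# Any intrinsic property is a question about connections alone,
# e.g. how many neighbors each node has:
degrees = {node: len(neighbors) for node, neighbors in network.items()}
print(degrees)  # {0: 3, 1: 2, 2: 3, 3: 2}
```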

Space as a Network

So could this be what space is made of? In traditional physics—and General Relativity—one doesn’t think of space as being “made of” anything. One just thinks of space as a mathematical construct that serves as a kind of backdrop, in which there’s a continuous range of possible positions at which things can be placed.

But do we in fact know that space is continuous like this? In the early days of quantum mechanics, it was actually assumed that space would be quantized like everything else. But it wasn’t clear how this could fit in with Special Relativity, and there was no obvious evidence of discreteness. By the time I started doing physics in the 1970s, nobody really talked about discreteness of space anymore, and it was experimentally known that there wasn’t discreteness down to about 10^-18 meters (1/1000 the radius of a proton, or 1 attometer). Forty years—and several tens of billions of dollars’ worth of particle accelerators—later there’s still no discreteness in space that’s been seen, and the limit is about 10^-22 meters (or 100 yoctometers).

Still, there’s long been a suspicion that something has to be quantized about space down at the Planck length of about 10^-34 meters. But when people have thought about this—and discussed spin networks or loop quantum gravity or whatever—they’ve tended to assume that whatever happens there has to be deeply connected to the formalism of quantum mechanics, and to the notion of quantum amplitudes for things.

But what if space—perhaps at something like the Planck scale—is just a plain old network, with no explicit quantum amplitudes or anything? It doesn’t sound so impressive or mysterious—but it certainly takes a lot less information to specify such a network: you just have to say which nodes are connected to which other ones.

But how could this be what space is made of? First of all, how could the apparent continuity of space on larger scales emerge? Actually, that’s not very difficult: it can just be a consequence of having lots of nodes and connections. It’s a bit like what happens in a fluid, like water. On a small scale, there are a bunch of discrete molecules bouncing around. But the large-scale effect of all these molecules is to produce what seems to us like a continuous fluid.

It so happens that I studied this phenomenon a lot in the mid-1980s—as part of my efforts to understand the origins of apparent randomness in fluid turbulence. And in particular I showed that even when the underlying “molecules” are cells in a simple cellular automaton, it’s possible to get large-scale behavior that exactly follows the standard differential equations of fluid flow.

Cellular automaton fluid

So when I started thinking about the possibility that underneath space there might be a network, I imagined that perhaps the same methods might be used—and that it might actually be possible to derive Einstein’s Equations of General Relativity from something much lower level.

Maybe There’s Nothing But Space

But, OK, if space is a network, what about all the stuff that’s in space? What about all the electrons, and quarks and photons, and so on? In the usual formulation of physics, space is a backdrop, on top of which all the particles, or strings, or whatever, exist. But that gets pretty complicated. And there’s a simpler possibility: maybe in some sense everything in the universe is just “made of space”.

As it happens, in his later years, Einstein was quite enamored of this idea. He thought that perhaps particles, like electrons, could be associated with something like black holes that contain nothing but space. But within the formalism of General Relativity, Einstein could never get this to work, and the idea was largely dropped.

As it happens, nearly 100 years earlier there’d been somewhat similar ideas. That was a time before Special Relativity, when people still thought that space was filled with a fluid-like ether. (Ironically enough, in modern times we’re back to thinking of space as filled with a background Higgs field, vacuum fluctuations in quantum fields, and so on.) Meanwhile, it had been understood that there were different types of discrete atoms, corresponding to the different chemical elements. And so it was suggested (notably by Kelvin) that perhaps these different types of atoms might all be associated with different types of knots in the ether.

Sequence of knots

It was an interesting idea, but it wasn’t right. In thinking about space as a network, though, there’s a related idea: maybe particles just correspond to particular structures in the network. Maybe all that has to exist in the universe is the network, and then the matter in the universe just corresponds to particular features of this network. It’s easy to see similar things in cellular automata on a lattice. Even though every cell follows the same simple rules, there are definite structures that exist in the system—and that behave quite like particles, with a whole particle physics of interactions.

Particle in a cellular automaton

There’s a whole discussion to be had about how this works in networks. But first, there’s something else that’s very important to talk about: time.

What Is Time?

Back in the 1800s, there was space and there was time. Both were described by coordinates, and in some mathematical formalisms, both appeared in related ways. But there was no notion that space and time were in any sense “the same thing”. But then along came Einstein’s Special Theory of Relativity—and people started talking about “spacetime”, in which space and time are somehow facets of the same thing.

It makes a lot of sense in the formalism of Special Relativity, in which, for example, traveling at a different velocity is like rotating in 4-dimensional spacetime. And for about a century, physics has pretty much just assumed that spacetime is a thing, and that space and time aren’t in any fundamental way different.

So how does that work in the context of a network model of space? It’s certainly possible to construct 4-dimensional networks in which time works just like space. And then one just has to say that the history of the universe corresponds to some particular spacetime network (or family of networks). Which network it is must be determined by some kind of constraint: our universe is the one which has such-and-such a property, or in effect satisfies such-and-such an equation. But this seems very non-constructive: it’s not telling one how the universe behaves, it’s just saying that if the behavior looks like this, then it can be the universe.

And, for example, in thinking about programs, space and time work very differently. In a cellular automaton, for example, the cells are laid out in space, but the behavior of the system occurs in a sequence of steps in time. But here’s the thing: just because the underlying rules treat space and time very differently, it doesn’t mean that on a large scale they can’t effectively behave similarly, just like in current physics.

Evolving the Network

OK, so let’s say that underneath space there’s a network. How does this network evolve? A simple hypothesis is to assume that there’s some kind of local rule, which says, in effect, that if you see a piece of network that looks like this, replace it with one that looks like that.

Sample network rules
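Here’s a toy version of such a rewrite in Python. The particular rule (replace any node that has exactly three neighbors with a triangle of new nodes, one corner attached to each old neighbor) is a hypothetical stand-in, chosen only to illustrate the “see this piece, replace it with that piece” idea:

```python
import itertools

def rewrite_once(network, next_id):
    """Apply one local rewrite: find a node with exactly three neighbors,
    replace it with a triangle of three new nodes, and attach one corner
    to each of the old neighbors. Returns (network, next_id, applied)."""
    for node, nbrs in network.items():
        if len(nbrs) == 3:
            corners = [next_id, next_id + 1, next_id + 2]
            for nb in nbrs:                    # detach the matched node
                network[nb].discard(node)
            del network[node]
            for c in corners:                  # add the triangle
                network[c] = set()
            for x, y in itertools.combinations(corners, 2):
                network[x].add(y)
                network[y].add(x)
            for c, nb in zip(corners, sorted(nbrs)):  # reattach old neighbors
                network[c].add(nb)
                network[nb].add(c)
            return network, next_id + 3, True
    return network, next_id, False

# Start from a tetrahedron network, where every node has three neighbors:
net = {i: {j for j in range(4) if j != i} for i in range(4)}
nid = 4
for _ in range(5):
    net, nid, applied = rewrite_once(net, nid)
print(len(net))  # 14: each rewrite removes one node and adds three
```

Notice that with this particular toy rule every node keeps exactly three connections, so the rule can keep applying forever and the network just grows.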

But now things get a bit complicated. Because there might be lots of places in the network where the rule could apply. So what determines in which order each piece is handled?

In effect, each possible ordering is like a different thread of time. And one could imagine a theory in which all threads are followed—and the universe in effect has many histories.

But that doesn’t need to be how it works. Instead, it’s perfectly possible for there to be just one thread of time—pretty much the way we experience it. And to understand this, we have to do something a bit similar to what Einstein did in formulating Special Relativity: we have to make a more realistic model of what an “observer” can be.

Needless to say, any realistic observer has to exist within our universe. So if the universe is a network, the observer must be just some part of that network. Now think about all those little network updatings that are happening. To “know” that a given update has happened, observers themselves must be updated.

If you trace this all the way through—as I did in my book, A New Kind of Science—you realize that the only thing observers can ever actually observe in the history of the universe is the causal network of what event causes what other event.

And then it turns out that there’s a definite class of underlying rules for which different orderings of underlying updates don’t affect that causal network. They’re what I call “causal invariant” rules.

Causal invariance is an interesting property, with analogs in a variety of computational and mathematical systems—for example in the fact that transformations in algebra can be applied in any order and still give the same final result. But in the context of the universe, its consequence is to guarantee that there’s only one thread of time.
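Here’s a tiny Python illustration of that order independence, using a string rewrite system as a stand-in for a network rewrite system (an analogy only, not one of the rules discussed here). The rule BA → AB can be applied at any matching position, in any order, and the final result is always the same:

```python
import random

def run_to_fixed_point(s, seed):
    """Repeatedly apply BA -> AB at a randomly chosen matching position
    until no match remains, then return the final string."""
    rng = random.Random(seed)
    cells = list(s)
    while True:
        sites = [i for i in range(len(cells) - 1)
                 if cells[i] == "B" and cells[i + 1] == "A"]
        if not sites:
            return "".join(cells)
        i = rng.choice(sites)
        cells[i], cells[i + 1] = cells[i + 1], cells[i]

# Twenty different random update orders, one final result:
finals = {run_to_fixed_point("BABAB", seed=k) for k in range(20)}
print(finals)  # {'AABBB'}
```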

Deriving Special Relativity

So what about spacetime and Special Relativity? Here, as I figured out in the mid-1990s, something exciting happens: as soon as there’s causal invariance, it basically follows that there’ll be Special Relativity on a large scale. In other words, even though at the lowest level space and time are completely different kinds of things, on a larger scale they get mixed together in exactly the way prescribed by Special Relativity.

Roughly what happens is that different “reference frames” in Special Relativity—corresponding, for example, to traveling at different velocities—correspond to different detailed sequencings of the low-level updates in the network. But because of causal invariance, the overall behavior associated with these different detailed sequences is the same—so that the system follows the principles of Special Relativity.

At the beginning it might have looked hopeless: how could a network that treats space and time differently end up with Special Relativity? But it works out. And actually, I don’t know of any other model in which one can successfully derive Special Relativity from something lower level; in modern physics it’s always just inserted as a given.

Deriving General Relativity

OK, so one can derive Special Relativity from simple models based on networks. What about General Relativity—which, after all, is what we’re celebrating today? Here the news is very good too: subject to various assumptions, I managed in the late 1990s to derive Einstein’s Equations from the dynamics of networks.

The whole story is somewhat complicated. But here’s roughly how it goes. First, we have to think about how a network actually represents space. Now remember, the network is just a collection of nodes and connections. The nodes don’t say how they’re laid out in one-dimensional, two-dimensional, or any-dimensional space.

It’s easy to see that there are networks that on a large scale seem, say, two-dimensional, or three-dimensional. And actually, there’s a simple test for the effective dimension of a network. Just start from a node, then look at all nodes that are up to r connections away. If the network is behaving like it’s d-dimensional, then the number of nodes in that “ball” will be about r^d.

Graphs with different effective dimensions
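Here’s a sketch of that test in Python (illustrative; a square grid network stands in for a network that should seem two-dimensional on a large scale):

```python
from collections import deque
from math import log

def ball_sizes(network, start, r_max):
    """Breadth-first search: sizes[r] = number of nodes within r
    connections of the start node, for r = 0 .. r_max."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if dist[node] == r_max:
            continue
        for nb in network[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    sizes = [0] * (r_max + 1)
    for d in dist.values():
        for r in range(d, r_max + 1):
            sizes[r] += 1
    return sizes

def effective_dimension(sizes, r):
    """If sizes[r] grows like r^d, successive log-ratios estimate d."""
    return log(sizes[r] / sizes[r - 1]) / log(r / (r - 1))

# A 61 x 61 grid network: each node connects to its grid neighbors.
n = 61
grid = {(x, y): {(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < n and 0 <= y + dy < n}
        for x in range(n) for y in range(n)}

sizes = ball_sizes(grid, (n // 2, n // 2), 20)
print(round(effective_dimension(sizes, 20), 2))  # 1.95 -- close to 2
```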

Here’s where things start to get really interesting. If the network behaves like flat d-dimensional space, then the number of nodes will always be close to r^d. But if it behaves like curved space, as in General Relativity, then there’s a correction term that’s proportional to a mathematical object called the Ricci scalar. And that’s interesting, because the Ricci scalar is precisely something that occurs in Einstein’s Equations.

Graphs with different effective Ricci curvatures
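For reference, the continuum statement behind this is a standard result of Riemannian geometry: in d dimensions, the volume of a small geodesic ball of radius r expands as

V(r) = c_d r^d (1 - S r^2 / (6 (d + 2)) + O(r^4))

where S is the Ricci scalar curvature at the center and c_d is the volume of the unit ball. The r^2 term is the correction referred to above; counting nodes in network balls plays the role of measuring V(r).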

There’s lots of mathematical complexity here. One has to look at shortest paths—or geodesics—in the network. One has to see how to do everything not just in space, but in networks evolving in time. And one has to understand how the large-scale limits of networks work.

In deriving mathematical results, it’s important to be able to take certain kinds of averages. It’s actually very much the same kind of thing needed to derive fluid equations from dynamics of molecules: one needs to be able to assume a certain degree of effective randomness in low-level interactions to justify the taking of averages.

But the good news is that an incredible range of systems, even with extremely simple rules, work a bit like the digits of pi, and generate behavior that seems, for all practical purposes, random. And the result is that even though the details of a causal network are completely determined once one knows the network one’s starting from, many of these details will appear effectively random.

So here’s the final result. If one assumes effective microscopic randomness, and one assumes that the behavior of the overall system does not lead to a change in overall limiting dimensions, then it follows that the large-scale behavior of the system satisfies Einstein’s Equations!

I think this is pretty exciting. From almost nothing, it’s possible to derive Einstein’s Equations. Which means that these simple networks reproduce the features of gravity that we know in current physics.

There are all sorts of technical things to say, not suitable for this general blog. Quite a few of them I already said long ago in A New Kind of Science—and particularly the notes at the back.

A few things are perhaps worth mentioning here. First, it’s worth noting that my underlying networks not only have no embedding in ordinary space intrinsically defined, but also don’t intrinsically define topological notions like inside and outside. All these things have to emerge.

When it comes to deriving the Einstein Equations, one creates Ricci tensors by looking at geodesics in the network, and looking at the growth rates of balls that start from each point on the geodesic.

The Einstein Equations one gets are the vacuum Einstein Equations. But just like with gravitational waves, one can effectively separate off features of space considered to be associated with “matter”, and then get Einstein’s full Equations, complete with “matter” energy-momentum terms.

As I write this, I realize how easily I still fall into technical “physics speak”. (I think it must be that I learned physics when I was so young…) But suffice it to say that at a high level the exciting thing is that from the simple idea of networks and causal invariant replacement rules, it’s possible to derive the Equations of General Relativity. One puts remarkably little in, yet one gets out that remarkable beacon of 20th-century physics: General Relativity.

Particles, Quantum Mechanics, Etc.

It’s wonderful to be able to derive General Relativity. But that’s not all of physics. Another very important part is quantum mechanics. It’s going to get me too far afield to talk about this in detail here, but presumably particles—like electrons or quarks or Higgs bosons—must exist as certain special regions in the network. In qualitative terms, they might not be that different from Kelvin’s “knots in the ether”.

But then their behavior must follow the rules we know from quantum mechanics—or more particularly, quantum field theory. A key feature of quantum mechanics is that it can be formulated in terms of multiple paths of behavior, each associated with a certain quantum amplitude. I haven’t figured it all out, but there’s definitely a hint of something like this going on when one looks at the evolution of a network with many possible underlying sequences of replacements.

My network-based model doesn’t have official quantum amplitudes in it. It’s more like (but not precisely like) a classical, if effectively probabilistic, model. And for 50 years people have almost universally assumed that there’s a crippling problem with models like that. Because there’s a theorem (Bell’s Theorem) that says that unless there’s instantaneous non-local propagation of information, no such “hidden variables” model can reproduce the quantum mechanical results that are observed experimentally.

But there’s an important footnote. It’s pretty clear what “non-locality” means in ordinary space with a definite dimension. But what about in a network? Here it’s a different story. Because everything is just defined by connections. And even though the network may mostly correspond on a large scale to 3D space, it’s perfectly possible for there to be “threads” that join what would otherwise be quite separated regions. And the tantalizing thing is that there are indications that exactly such threads can be generated by particle-like structures propagating in the network.

Searching for the Universe

OK, so it’s conceivable that some network-based model might be able to reproduce things from current physics. How might we set about finding such a model that actually reproduces our exact universe?

The traditional instinct would be to start from existing physics, and try to reverse engineer rules that could reproduce it. But is that the only way? What about just starting to enumerate possible rules, and seeing if any of them turn out to be our universe?

Before studying the computational universe of simple programs I would have assumed that this would be crazy: that there’s no way the rules for our universe could be simple enough to find by this kind of enumeration. But after seeing what’s out there in the computational universe—and seeing some other examples where amazing things were found just by a search—I’ve changed my mind.

So what happens if one actually starts doing such a search? Here’s the zoo of networks one gets after a fairly small number of steps by using all possible underlying rules of a certain very simple type:

Networks from different evolution rules

Some of these networks very obviously aren’t our universe. They just freeze after a few steps, so time effectively stops. Or they have far too simple a structure for space. Or they effectively have an infinite number of dimensions. Or other pathologies.
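One can imagine a crude first-pass filter for such a search. Here’s a purely hypothetical triage sketch in Python, with made-up thresholds, loosely modeled on the pathologies just described (not criteria from my actual searches):

```python
def classify(node_counts, window=10):
    """Triage a candidate rule from how its node count grows per step.
    (Hypothetical filter with made-up thresholds, for illustration.)"""
    if len(set(node_counts[-window:])) == 1:
        return "frozen: time effectively stops"
    growth = [b / a for a, b in zip(node_counts, node_counts[1:]) if a]
    if min(growth[-window:]) > 1.5:
        return "explosive growth: effectively infinite-dimensional"
    return "candidate: needs closer analysis"

print(classify([4] * 20))                               # frozen
print(classify([2 ** k for k in range(20)]))            # explosive growth
print(classify([2 * k * k + 2 for k in range(1, 21)]))  # candidate
```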

But the exciting thing is that remarkably quickly one finds rules that aren’t obviously not our universe. Telling if they actually are our universe is a difficult matter. Because even if one simulates lots of steps, it can be arbitrarily difficult to know whether the behavior they’re showing is what one would expect in the early moments of a universe that follows the laws of physics as we know them.

There are plenty of encouraging features, though. For example, these universes can start from effectively infinite numbers of dimensions, then gradually settle to a finite number of dimensions—potentially removing the need for explicit inflation in the early universe.

And at a higher level, it’s worth remembering that if the models one’s using are simple enough, there’s a big distance between “neighboring models”, so it’s likely one will either reproduce known physics exactly, or be very wide of the mark.

In the end, though, one needs to reproduce not just the rule, but also the initial condition for the universe. But once one has that, one will in principle know the exact evolution of the universe. So does that mean one would immediately be able to figure out everything about the universe? Absolutely not. Because of the phenomenon I call “computational irreducibility”—which implies that even though one may know the rule and initial condition for a system, it can still require an irreducible amount of computational work to trace through every step in the behavior of the system to find out what it does.

Still, the possibility exists that one could just find a simple rule—and initial condition—that one could hold up and say, “This is our universe!” We’d have found our universe in the computational universe of all possible universes.

Of course this would be an exciting day for science.

But it would raise plenty of other questions. Like: why this rule, and not another? And why should our particular universe have a rule that shows up early enough in our list of all possible universes that we could actually find it just by enumeration?

One might think that it’d just be something about us being in this universe, and that this causes us to choose an enumeration which makes it come up early. But my current guess is that it’d be something much more bizarre, such as that, with respect to observers in a universe, all of a large class of nontrivial possible universe rules are actually equivalent, so one could pick any of them and get the exact same results, just in a different way.

OK, Show Me the Universe

But these are all speculations. And until we actually find a serious candidate rule for our universe, it’s probably not worth discussing these things much.

So, OK. Where are we at with all this right now? Most of what I’ve said here I had actually figured out by around 1999—several years before I finished A New Kind of Science. And though it was described in simple language rather than physics-speak, I managed to cover the highlights of it in Chapter 9 of the book—giving some of the technical details in the notes at the back.

A New Kind of Science, Chapter 9

But after the book was finished in 2002, I started working on the problem of physics again. I found it a bit amusing to say I had a computer in my basement that was searching for the fundamental theory of physics. But that really was what it was doing: enumerating possible rules of certain types, and trying to see if their behavior satisfied certain criteria that could make them plausible as models of physics.

I was pretty organized in what I did, getting intuition from simplified cases, then systematically going through more realistic cases. There were lots of technical issues. Like being able to visualize large evolving sequences of graphs. Or being able to quickly recognize subtle regularities that revealed that something couldn’t be our actual universe.

I accumulated the equivalent of thousands of pages of results, and was gradually beginning to get an understanding of the basic science of what systems based on networks can do.

Notebooks from 2004

In a sense, though, this was always just a hobby, done alongside my “day job” of leading our company and its technology development. And there was another “distraction”. For many years I had been interested in the problem of computational knowledge, and in building an engine that could comprehensively embody it. And as a result of my work on A New Kind of Science, I became convinced that this might actually be possible—and that this might be the right decade to do it.

By 2005 it was clear that it was indeed possible, and so I decided to devote myself to actually doing it. The result was Wolfram|Alpha. And once Wolfram|Alpha was launched it became clear that even more could be done—and I have spent what I think has probably been my most productive decade ever building a huge tower of ideas and technology, which has now made possible the Wolfram Language and much more.

To Do Physics, or Not to Do Physics?

But over the course of that decade, I haven’t been doing physics. And when I now look at my filesystem, I see a large number of notebooks about physics, all nicely laid out with the things I figured out—and all left abandoned and untouched since the beginning of 2005.

Should I get back to the physics project? I definitely want to. Though there are also other things I want to do.

I’ve spent most of my life working on very large projects. And I work hard to plan what I’m going to do, usually starting to think about projects decades ahead of actually doing them. Sometimes I’ll avoid a project because the ambient technology or infrastructure to do it just isn’t ready yet. But once I embark on a project, I commit myself to finding a way to make it succeed, even if it takes many years of hard work to do so.

Finding the fundamental theory of physics, though, is a project of a rather different character from any I’ve done before. In a sense its definition of success is much harsher: one either solves the problem and finds the theory, or one doesn’t. Yes, one could explore lots of interesting abstract features of the type of theory one’s constructing (as string theory has done). And quite likely such an investigation will have interesting spinoffs.

But unlike building a piece of technology, or exploring an area of science, the definition of the project isn’t under one’s control. It’s defined by our universe. And it could be that I’m simply wrong about how our universe works. Or it could be that I’m right, but there’s too deep a barrier of computational irreducibility for us to know.

One might also worry that one would find what one thinks is the universe, but never be sure. I’m actually not too worried about this. I think there are enough clues from existing physics—as well as from anomalies attributed to things like dark matter—that one will be able to tell quite definitively if one has found the correct theory. It’ll be neat if one can make an immediate prediction that can be verified. But by the time one’s reproducing all the seemingly arbitrary masses of particles, and other known features of physics, one will be pretty sure one has the correct theory.

It’s been interesting over the years to ask my friends whether I should work on fundamental physics. I get three dramatically different kinds of responses.

The first is simply, “You’ve got to do it!” They say that the project is the most exciting and important thing one can imagine, and they can’t see why I’d wait another day before starting on it.

The second class of responses is basically, “Why would you do it?” Then they say something like, “Why don’t you solve the problem of artificial intelligence, or molecular construction, or biological immortality, or at least build a giant multibillion-dollar company? Why do something abstract and theoretical when you can do something practical to change the world?”

There’s also a third class of responses, which I suppose my knowledge of the history of science should make me expect. It’s typically from physicist friends, and typically it’s some combination of, “Don’t waste your time working on that!” and, “Please don’t work on that.”

The fact is that the current approach to fundamental physics—through quantum field theory—is nearly 90 years old. It’s had its share of successes, but it hasn’t brought us the fundamental theory of physics. But for most physicists today, the current approach is almost the definition of physics. So when they think about what I’ve been working on, it seems quite alien—like it isn’t really physics.

And some of my friends will come right out and say, “I hope you don’t succeed, because then all that work we’ve done is wasted.” Well, yes, some work will be wasted. But that’s a risk you take when you do a project where in effect nature decides what’s right. But I have to say that even if one can find a truly fundamental theory of physics, there’s still plenty of use for what’s been done with standard quantum field theory, for example in figuring out phenomena at the scale where we can do experiments with particle accelerators today.

What Will It Take?

So, OK, if I mounted a project to try to find the fundamental theory of physics, what would I actually do? It’s a complex project that’ll need not just me, but a diverse team of other talented people too.

Whether or not it ultimately works, I think it’ll be quite interesting to watch—and I’d plan to do it as “spectator science”, making it as educational and accessible as possible. (Certainly that would be a pleasant change from the distraction-avoiding hermit mode in which I worked on A New Kind of Science for a decade.)

Of course I don’t know how difficult the project is, or whether it will even work at all. Ultimately that depends on what’s true about our universe. But based on what I did a decade ago, I have a clear plan for how to get started, and what kind of team I have to put together.

It’s going to need both good scientists and good technologists. There’s going to be lots of algorithm development for things like network evolution, and for analysis. I’m sure it’ll need abstract graph theory, modern geometry and probably group theory and other kinds of abstract algebra too. And I won’t be surprised if it needs lots of other areas of math and theoretical computer science as well.

It’ll need serious, sophisticated physics—with understanding of the upper reaches of quantum field theory and perhaps string theory and things like spin networks. It’s also likely to need methods that come from statistical physics and the modern theoretical frameworks around it. It’ll need an understanding of General Relativity and cosmology. And—if things go well—it’ll need an understanding of a diverse range of physics experiments.

There’ll be technical challenges too—like figuring out how to actually run giant network computations, and collect and visualize their results. But I suspect the biggest challenges will be in building the tower of new theory and understanding that’s needed to study the kinds of network systems I want to investigate. There’ll be useful support from existing fields. But in the end, I suspect this is going to require building a substantial new intellectual structure that won’t look much like anything that’s been done before.

Is It the Right Time?

Is it the right time to actually try doing this project? Maybe one should wait until computers are bigger and faster. Or certain areas of mathematics have advanced further. Or some more issues in physics have been clarified.

I’m not sure. But nothing I have seen suggests that there are any immediate roadblocks—other than putting the effort and resources into trying to do it. And who knows: maybe it will be easier than we think, and we’ll look back and wonder why it wasn’t tried long ago.

One of the key realizations that led to General Relativity 100 years ago was that Euclid’s fifth postulate (“parallel lines never cross”) might not be true in our actual universe, so that curved space is possible. But if my suspicions about space and the universe are correct, then it means there’s actually an even more basic problem in Euclid—with his very first definitions. Because if there’s a discrete network “underneath” space, then Euclid’s assumptions about points and lines that can exist anywhere in space simply aren’t correct.

General Relativity is a great theory—but we already know that it cannot be the final theory. And now we have to wonder how long it will be before we actually know the final theory. I’m hoping it won’t be too long. And I’m hoping that before too many more anniversaries of General Relativity have gone by we’ll finally know what spacetime really is.

Comments

  1. I admire your work and intuitively think your theory is right. Whether or not you succeed during your lifetime, I am convinced children in 20-50 years will learn about Galileo, Newton, Einstein and Wolfram at school. I applaud your idea to build an interdisciplinary team and run this experiment in “spectator science” mode, as it will be very educational for the masses. This will also help with crowdfunding, so the project doesn’t depend solely on government grants (the NASA or SETI case). My suggestion is to join forces with people who work on digital physics, such as Gerard ’t Hooft, Seth Lloyd, Ed Fredkin and Juergen Schmidhuber, who might be willing to participate. One annoyance I foresee is that such an ambitious project will also attract all kinds of “Quantum Theory is not right” weirdos more interested in preaching their revelations than in contributing to your project. Bottom line: DO IT! Good luck, and I really count on there being a critical mass to kick it off!

  2. Stephen,
    It was comforting to find this blog post of yours on spacetime. That is to say…it’s reassuring that other people are viewing spacetime in a similar (though more intensely mathematical) way to the one I currently do. I spend a good deal of my quiet alone time pondering how space is structured. And I’ve come to believe much of what you posit here…though through intuition rather than computation. Spacetime is a “thing” from which all of the other particles and larger groupings emerge. I do not know what (other than “energy”) causes the disturbances in the underlying field. But viewing these disturbances as knots is very similar to the way I tried viewing them as nodes on a standing wave. It is “order” that appears out of fluctuations.

    Sticking with your network structure, though, “time” is simply a change in the shape of the network. The pace of time is limited by that persistent “speed of light” which is really the limit at which the shape change can be passed to the next node on the network. The speed of causality. Reality cannot progress faster than the network can transmit the “ripple”.

    In my image of spacetime, nothing actually moves…and there are no particles. The nodes share the “vibration/fluctuation” through the network, thus preserving the larger scale patterns. It’s a passing of the baton through the network. (Or picture a massive 3D “Newton’s Cradle” where the colliding balls interact through the network and pass the pattern through something like a collision/stretching.)

    So, I like your underlying philosophy…especially because it’s similar to mine. However (and maybe you address this elsewhere), I cannot rationalize time dilation with increasing velocity. And I cannot visualize how gravity functions in such a network. Even adding dimensions to “hide” things we don’t see going on…these truths of relativity elude my imagination. I also cannot account for the more mysterious quantum entanglement that seems to behave in an instantaneous manner. My mental models can make no sense of how that can occur. (Maybe your models can?)

    Thank you for the inspiration

  3. I agree with the commenter who mentioned the work of Thad Roberts. A lot of overlap. His book “Einstein’s Intuition” outlines his model, which he’s dubbed Quantum Space Theory: the idea that the vacuum is a superfluid. For a nice, visual intro to this entire concept, check out Thad’s TEDx talk.

    I’m excited to see this idea gaining momentum!

  4. Like you and many others we are searching for truth.

    I think the universe we observe is just motion, a ratio of space and time. In fact the only thing we are able to measure about the universe is motion. All other entities in physics are just modelled by humans based on our experience with nature, like matter, but ultimately the only information we can get about entities is through space/time info. The mass of an object is the ratio of acceleration and turns out to be kinetic energy, which is just a measure of a quantity of motion.

    Your idea about a network also excludes other entities, and should define motion as a local feature of the network. The network defines the regularity of the whole. And yes, maybe some simple automaton-like rule defines the evolution of the network. This explains, as you showed, why we see so many repeating patterns in nature. The same rule is at work all over the place.

    Also, we do not think with our consciousness; it is the network that does the thinking, outside the control of our consciousness. Consciousness is the interface where the mapping takes place between the thinking part of the network and the universe of discourse of our imagined world, to translate thought into symbols. Animals and humans are all driven by the hidden rule of the network too. So once we find the rule, it also explains how this rule maps to our imagined world in symbols of the world.

    It is a real challenge to find this hidden rule; I await your further research to reveal the truth.

  5. Re “Do you know the work of Thad Roberts?”

    I wasn’t familiar with Roberts’ work; thanks for mentioning it. The book looks very interesting, and the author’s life story as well:

    http://einsteinsintuition.com/

    https://en.wikipedia.org/wiki/Thad_Roberts

  6. Would not the reality of a network-based universe suggest some sort of _more_ complex mechanism that runs it? For instance, you used Conway’s game of life as a simple example of how a trivial computation can create something very complex. But the game of life must run on a mechanism that is much more complex than the simple computation itself. A sentient being that existed within the game of life could probably work out the rules of the computation, but it would be at a loss to describe the silicon circuits that make the computation possible.

    So maybe we can work out the simple network rules fundamental to our universe (and that would be cool!), but this seems to imply that there is actually something way more complicated that causes the rules to operate.

  7. This seems rather similar to causal dynamical triangulations, so I don’t know how much new stuff you would discover that hasn’t already been discovered by the team of Renate Loll.

    • There is some similarity between the causal dynamical triangulations (CDT) proposed by Renate Loll and Wolfram’s approach, but mostly major differences. Here’s a comparison with the theory of A New Kind of Science (NKS).

      Most of the similarities are on the surface, and most of the differences are about what is essential to each theory.

      The fundamental objects in CDT are four dimensional simplices, which are thought of as a triangulation of spacetime (they include face/volume information, and are a sort of mini-version of ordinary spacetime). In NKS the fundamental objects are nodes and their connections, with no a priori interpretation.

      The rules that govern CDT are similar to quantum field theory (so are essentially based on constraints and include quantum mechanics at the smallest scale as part of the rules). The rules in NKS are simple, computational, and generate the dynamics. Quantum mechanics is not built in and the NKS rules are deterministic.

      Some similarities: Things are fundamentally discrete (so there are common problems with defining dimensionality, large scale geometry, particles, cosmology, and so on, problems that all discrete theories face).

      The name “causal” is shared (though unlike in the causal sets approach, the role of causality is different here, largely due to CDT’s built-in quantumness).

      Of course there are major differences in methods and approaches. These ideas could be applied to the other camp’s project (NKS methods for CDT, or CDT methods for NKS), but the methods are more appropriate for the kind of thing they are. Here is a short list: computation versus traditional math, searching simple rules versus first principles, intrinsic randomness versus built-in randomness, etc. One would have to summarize the whole of Wolfram’s book to list all of the differences in methods.

  8. If I were you, I would consider promoting research in this area in a less direct way by providing funding for projects led by others.

  9. Stephen,
    When you want to find the fundamental theory of physics, you have to prove it is the right concept. So you have to derive the main laws, constants and overall behaviour from our phenomenological universe. The idea that a concept must be right because we can replicate physical phenomena by enumerations isn’t a conclusive proof. So before you start your project, you have to find out what you are looking for. I have done this all before so I know what I am talking about.

  10. interesting remarks by Einstein:

    “I consider it quite possible that physics cannot be based on the field concept, i.e., on continuous structures. In that case nothing remains […] of modern physics.”
    – Albert Einstein

    “The drawback that the continuum brings [is that] if the molecular view of matter is the correct (appropriate) one, i.e., if a part of the universe is to be represented by a finite number of moving points, then the continuum of the present theory contains too great a manifold of possibilities. This is responsible for the fact that our present means of description miscarry with the quantum theory. The problem [is] how can one formulate statements about a discontinuum without calling upon a continuum (space-time) as an aid; the latter should be banned from the theory as a supplementary construction not justified by the essence of the problem and corresponding to nothing real.”
    – Albert Einstein

  11. I’d just like to say that I’m on board with the “you’ve got to do this” crowd. While I DO think about what OTHER amazing thing could come next in the Mathematica, NKS, Wolfram|Alpha series, Wolfram Physics does sound like a worthy successor. Even if it fails, I think it will be worth the effort, and we will learn something.

  12. In the book “The Trouble with Physics”, Lee Smolin analyzes the present stagnation in theoretical physics, but the same trouble can be attributed to modern science as a whole. It still rests mainly on Newtonian dynamics and calculus, which boosted science and technology in previous centuries, but is not enough anymore. In order to understand “Life, the Universe and Everything”, other paradigms are needed, and complex systems science may well become one of them. The study of complex systems (you are among those who are “guilty” of its advent) represents a new approach to science that investigates how relationships between simple parts give rise to the complex behaviors of a system, behaviors which are in general analytically unpredictable. The quote from Johann Wolfgang von Goethe, “Everything is simpler than you think and at the same time more complex than you imagine”, might be a good slogan for this approach.
    Many prominent scientists, e.g. Stephen Hawking (“The Grand Design”) and Steven Weinberg (“The Dream of a Final Theory”), also believe in the appearance of the final theory in the near future, but they think that even more sophisticated math and technology are required. In my view, the main “roadblock” is a lack of fresh ideas and models rather than math and technology. I’m not sure that it won’t be too long before we know the final theory, but any trial in this direction is invaluable. The stakes are very high. Not only the future of science, but also the survival of humankind may depend on it. Any result, even a negative one, matters (it’s not just a pun: “thoughts are tangible things”).
    Please strive to do it!

  13. i just came across an article on Einstein’s views contra field theory. it’s

    “The Other Einstein: Einstein Contra Field Theory” by John Stachel
    published in Science in Context 6 (1), 275-290 (1993).

    for people who can’t access the article themselves, i can send them a pdf of the article if they contact me at rjgaylord@gmail.com

    for those who would like to see lecture on the subject at the Perimeter Institute, it’s available at:

    http://streamer2.perimeterinstitute.ca/mp4-med/05100034.mp4

    i hope this encourages people to think about this – i figure that if both Stephen and Einstein (as early as 1916 and as late as 1954) thought a non-field model of the universe has value, it’s worth thinking about. you might also want to look at the causal net model of Rafael Sorkin.

  14. This is great fun.

  15. You are absolutely correct about your assumptions. I’m not as smart as you, but I have a keen sense about what’s a great direction.
    Elements I feel are essential in solving this problem.
    1. Large Networks
    2. Rule 30 cellular automaton (as a base for rule generation)
    3. About 4 dimensions.
    4. Temporarily abandoning existing Intuitions from traditional physics.

    You can send me a copy of Mathematica and I will review all the data coming out of this project and point out some cool directions to take. I know neither physics nor maths (an average programmer myself), but I have an uncanny ability to guess rightly most of the time.

    I will be excited to be part of this project even if it takes a lifetime.

  16. How does energy fit into this? Time relates to change. Change requires energy. If there were no energy, would there be time?

  17. I don’t buy it.

    I think ‘computation’ and ‘information’ are products of how observers view (or ‘divide up’) reality, not features of objective reality itself. After all, computation needs hardware. There cannot be computation without underlying hardware to run it, and the hardware itself is not a part of the computation. So computation can’t be fundamental.

    The very notion of computation is rooted in a linear process that takes place over time, but according to the block-universe picture suggested by general relativity, the passage of time is just an illusion rooted in the observer. Again, this shows that the computational picture is not fundamental, but is a feature of the way observers divide up reality.

    The derivation of special and general relativity you talk about sounds highly contrived and complex compared to the far simpler and more elegant principles of standard special and general relativity, which assume continuous spacetime.

    The equations of physics are expressed in terms of calculus, which assume continuity. There’s no evidence that spacetime is quantized, and as you point out, all attempts to find experimental proof have failed. There’s no theoretical basis for it either.

    Cellular automata are classical, whereas the universe is quantum. You admit that this ‘network’ idea can’t explain quantum effects, and Bell’s theorem is a severe roadblock here. Any classical picture that replicates quantum effects would seem to require non-locality, a violation of relativity.

  18. You should join the “It From Qubit” collaboration, they are also trying to make spacetime emerge. The goal is to describe geometry in a purely information theoretic manner (see how the structure of entanglement defines the geometry). There has been some interesting work done on modelling the bulk-boundary correspondence of AdS/CFT as a quantum error correcting code (arXiv:1503.06237).

  19. Hi, Stephen:

    Very interesting. What if a particle is a highly curved space singularity?

    In the 1990s, I was at a research firm working on computational fluid/chemical reactive models. I got interested in the randomness and veered briefly into the Fokker-Planck equation, and I was able to derive the full Navier-Stokes equations by taking the statistical average of Fokker-Planck, although it was a side interest and I wasn’t sure my math was correct. Maybe this is similar to your exploration from cellular automata? Or is this something well known within the scientific research circle but unknown to me as an engineer?

    Colin

  20. I wonder how structure, dynamics emergence and geometry are connected with respect to information.

  21. I have an idea for a project you can do that actually combines all the projects you mentioned at the end, so you’d achieve all of your life-long ambitions in one spectacular shot if it works!

    I’d love to see you working on a ‘theory of everything’. But when I say ‘theory of everything’, I mean this in a very different sense from the physics-theories you are thinking about. So I want to get you (Stephen) thinking about what a ‘theory of everything’ really is, in a very different way from your current conception of it 😉

    What I want to suggest by the term ‘theory of everything’ is a theory of how to integrate multi-level *models of theory* into a single coherent framework. So this would really be a theory of how we *model* reality. It wouldn’t be a physics theory; rather, it would be a theory of mathematics, logic and mind.

    I put it to you that such a theory is exactly equivalent to the quest for artificial general intelligence! Let’s see how this could be so.

    Imagine if you will an ‘automated philosopher’, a program tasked with the goal of ‘creating a data and process model of the structure of all knowledge’. In other words, we’re basically asking the program to ‘find a general method of modelling all of reality’. The output of data and process modelling consists of *ontologies*, which tell you *what kinds of things exist* and *the logical connections between these things*. I put it to you that this program, then, IS a ‘theory of everything’ (because its output tells you everything you want to know about reality – what classes of things exist, their objects and behaviours). It is *also* an artificial intelligence, since it would be capable of modelling everything, *including the very process of logical reasoning itself*! So what I’m saying is:

    Artificial General Intelligence = Theory of Everything!

    If this still isn’t clear, look at how I divide knowledge into 3 main levels of abstraction:

    KNOWLEDGE = PURE MATHEMATICS > LOGIC > ONTOLOGY

    At the highest level of abstraction is pure mathematics. Things like combinatorics, algebra and set and category theories. These things I would say are timeless and universal truths. There is no conception of time in such math, and they can’t be directly equated to anything in the physical world.

    But we can drop down to a lower level of abstraction, the level of logic. This is applied math, or the part of mathematics we can equate with ‘computation’ or ‘intelligence’ itself. So here you would have things like symbolic logic, probability theory and categorization and concept learning.

    Now drop down to the lowest level of abstraction, the level of ontology. This is the level of ‘information technology’, of programs and data models.

    How does this relate to physics and your ideas about spacetime? Well, in the course of modelling all knowledge, we would also end up with a model of how we model spacetime. By decomposing the structure of knowledge in just the right way, we would eventually hit the physics level (since physics is presumably just a particular way that we humans interpret aspects of mathematics).

    Let’s get an idea of how this works. Decompose the physics world into levels of abstraction, and we end up with something like:

    PHYSICS = INVARIANT LAWS > PROCESSES > OBJECTS

    This is exactly analogous to my earlier decomposition of mathematics! At the highest level you have invariant properties equivalent to ‘laws of physics’ themselves. Things like symmetries, transforms and fields. Just like with pure math, these things are universal, timeless and can’t be equated with anything concrete (in fact, as you can see, they *are* actually a part of pure math, confirming that physics is really just a way we humans make sense of mathematical properties).

    Drop to the next level of abstraction and you have ‘processes’. These can be equated to the physical description of computational systems; processes take place in time (they are equivalent to ‘systems’ – things with input, processing and output).

    Finally, at the lowest level of abstraction, you have concrete objects (stable, concrete physical things).

    So just from the outline I’m giving in this post, you should be able to see how it might be possible to derive a theory of everything (including a model of spacetime), purely by data and process modelling of the structure of knowledge.

    To conclude, if you (Stephen) embark on the project of creating the right kind of artificial general intelligence, one programmed with the goal ‘perform data and process modelling of the structure of all knowledge’ (the ‘automated philosopher’), you can achieve every single one of your life-long ambitions in one spectacular shot!

    Artificial General Intelligence = Theory of Everything!

  22. “So, OK, if I mounted a project to try to find the fundamental theory of physics, what would I actually do?”

    Well, what I did…

    LOGIC REDUCTION:
    1.) If one assumes no directional distribution bias (non-perturbative) upon differentiation… then Energy must expand as a sphere.
    By successful quantization of a spherical Origin Emission Singularity with a unified volume unit (automata)… one can achieve an initial point of differentiation of Space by Energy, define a computational geometry (coordinate system) for the singularity, and define a minimum Spatial differentiation (QI)… 3 sec Ref: https://www.youtube.com/watch?v=Sbzf6NlU8q4

    2.) If one assumes the Origin Emission Singularity computational geometry expands equally in all directions from the Origin Singularity, as a unified field geometry… i.e. a single Base Unit (BU) volume (automata) with no Boolean spaces… the Base Unit (BU) volume (automata) chosen must also support spherical closure of shells of a radius expansion equal to the radius of one Base Unit (BU) volume (automata)… Ref: 5 BU Radii Shell Illustration
    http://www.uqsmatrixmechanix.com/SLLImage1.jpg

    A requirement for a single volume unit (automata) that quantizes a sphere… which expands equally in all directions… from an initial differentiation of space by Energy… as spherical shells of unit volume radii… drastically reduces the possible options for a valid quantization volume unit (automata) … all of which… in 20 years of documented analysis… I reduced to the UQS quadhedron.

    UQS is an acronym for Unified Quantization of a Sphere… i.e. a quantized spherical singularity.

    What I am doing…

    “COMPUTATIONAL IRREDUCIBILITY“:
    1.) If causal Energy Distribution is to be facilitated by the computational geometry… one must also assume a Pulsed Energy Source… i.e. network expansion requires more than 2 bits.

    2.) A Pulsed Energy Source… as opposed to a Single Energy Distribution Event… provides a minimum temporal component, and a minimum Energy unit (QE) which gives definition to QE/QI.

    I have developed a CAD based on the UQS Base Unit (BU) volume (automata) and am programming an Emission CAD/SIM… as a Lab/Game… Ref: http://www.uqsmatrixmechanix.com/UQSDB.php
    This paper is highly illustrated with game interface screen shots… i.e. you can skip the “yadayadyada”… nonetheless, I do utilize the actual code structure in illustration explanations… and it gets a bit deep.

    I suggest you start with a more generic exposure of the theory at: http://www.uqsmatrixmechanix.com/UQSConInv.php which illustrates the scalability… i.e. Human, Molecular, Boson, and Cosmological scales… of Energy choreographies… i.e. “Kelvin Knots”… within the UQS background computational geometry.

    My website at http://www.uqsmatrixmechanix.com has a collection of papers and SIMs that provide in depth math and conceptual CAD illustrations.
    I have also recently posted at http://fqxi.org/community/forum/topic/1928 … that is where I connected with the link to your blog.

    S. Lingo
    UQS Author/Logician

  23. And two months later… general relativity has been proven true!

  24. In that mathematical theories of Space/Energy/Time… at least in principle… are derived from an observed and/or intellectually visualized Spatial geometry model.

    To the degree that one is speaking of the Spatial geometry model… a predicted observation does not constrain the observation to a single geometry model … nor does it necessarily verify all aspects of a theory inferred from a specified geometry model that resolves the observed prediction.

    That is to say, that direct observation of a gravitational radiation wave… does not verify Einstein’s rubber sheet (2D plane) geometry model as the definitive resolve of Space/Energy/Time computational geometry models.

    To the degree that one is speaking of equations extracted from a specified geometry model… to verify that the specified geometry model resolves the observation… the visual kinematic chain for the theoretical derivation… from the observation geometry… back to the specified geometry model… must be unbroken.

    Does UQS incorporate Einstein’s Relativity Theories?

    To replace Einstein’s geometry model… with a 3D spherical lattice which expands from a pulsed source… equally in all directions… as spherical shells of unified unit volume radii… is a major deviation from the “Relativity” geometry model. Ref: http://www.uqsmatrixmechanix.com/UQSWPDuality.php

    However… the to-date verified predictions of “Relativity” can be resolved in the UQS computational geometry… and laser technology capable of resolving experimental observations to verify that a predicted computational geometry resolves Fundamental Universal Quantum Quantization… is within reach.
    Ref: http://www.uqsmatrixmechanix.com/UQSMMNVPHOTONPRESSURE.php

    S. Lingo
    UQS Author/Logician

  25. You have an oversight here. Of course the content of Euclid’s fifth postulate isn’t “parallel lines never cross”.

  26. By the way, I had this quick thought the other day (with gravitational waves being in the news): maybe gravity is a manifestation of a reaction (inertia?) of masses to the flow of time (forced motion in the time dimension)…

  27. >>space and time aren’t in any fundamental way different.

    What, precisely, constitutes “fundamental difference” as opposed to what… superficial difference?

    I just opened up version 9.0 of Mathematica and typed in the following two lines:

    MatrixExp[{{0, x}, {-x, 0}}]
    MatrixExp[{{0, x}, {x, 0}}]

    To me, the results of these two lines show one of two fundamental differences between space-space rotation and space-time rotation. The first yields a trigonometric rotation; the second yields a hyperbolic rotation. And the big difference between these two is that you cannot, by hyperbolic rotation, move an event from the future into the present, or from the present into the past. But you CAN move an event from in front of you to your right, and from your right to behind you.

    Another fundamental difference between space and time is that you cannot stop in time. You are always moving forward. So there are always events crossing from future to present to past.

    Interestingly, Mathematica was able to identify the sine and cosine automatically. But, for whatever reason, it did not simplify the other expression into hyperbolic sines and cosines.
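    That said, a minimal sketch (using ExpToTrig; the exact output may vary between versions) suggests the hyperbolic form can be coaxed out explicitly:

    (* convert the exponentials in the boost result to hyperbolic functions *)
    MatrixExp[{{0, x}, {x, 0}}] // ExpToTrig // Simplify
    (* expected: {{Cosh[x], Sinh[x]}, {Sinh[x], Cosh[x]}} *)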

    I recommend updating Mathematica so that it recognizes hyperbolic functions just as smoothly as it recognizes trigonometric functions… Then see if you are still saying the same thing about wanting to think of space and time as interconnected nodes, rather than geometry.

  28. So, what if you compare your theory to the forms and laws of a bubble foam, where the structure we observe is the outer film of the bubbles and its interconnection with other bubbles? Over time the mass and energy in those space bubbles would move to the outer film and leave the inner part of the bubbles with less matter/energy, building up gravity in the film. Light travelling through this foam would tend to be bent around the bubbles and disturb our view of the true dimension and structure. When observing such a structure, we observe its connecting film structure and not the whole as a 3D bubble foam. Your first image in this posting may be explained as looking at/through a bubble foam. Light from distant stars would be bent following the film and travel through masses of matter and gravity on its way, not empty space, which might explain the red-shift. The speed of light would appear to be slowed down.

  29. I enjoyed reading your interesting blog: “What is spacetime, really?”, in particular your comment: “Maybe all that has to exist in the universe is the network, and then the matter in the universe just corresponds to particular features of this network.”

    According to current theory, small pieces of randomly moving matter in space attract each other with their individual minuscule gravitational fields and thereby eventually form increasingly larger amounts of matter, such as stars and planets. The high concentrations of matter warp nearby space-time and create a gravity effect.

    The Einstein field equations (EFE) describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy. A re-interpretation of the EFE could lead to the following alternative explanation of how matter collects to form planets and stars, and how spacetime is warped by matter. Rather than matter first collecting, then distorting space-time and thereby creating a gravity effect, INSTEAD it is discontinuous areas (what you refer to as particular features) of SpaceTime (the network) which result in concentrated areas of gravity, which then attract collections of matter. In a way, a reversal of the chicken (matter) or egg (gravity) argument.

  30. Stephen,
    have you ever considered the possibility of using your network approach in the twistor space put forward by Roger Penrose many years ago? It has recently been revived, with a great deal of success, to evaluate collision matrices in high-energy physics simply and compactly, using a geometric structure called an amplituhedron. The results are currently limited to supersymmetric Yang-Mills theory. The work seems so far to give a tantalizing glimpse into the possibility that space-time is a derived quantity coming from a more fundamental underlying framework and coordinate set. This view may substantially line up with your network approach to physics. The current problem is making the twistor-cum-amplituhedron approach compatible with more realistic theories than Yang-Mills. Have you considered that your expertise and insight could be fruitfully applied to this deep problem, which at first glance seems to line up with your network approach to physics?

  31. Here is a thought. If there is an infinite number of possibilities for the simulation models that one can try, being totally unsure which model is right and which is wrong, and where to start, then the number π might be a clue, provided by God. Since this number can be seen everywhere in nature, it might be a code, a hint on how to choose only one option amongst the infinite number of others.

  32. Spacetime? Look how it works in music: https://itunes.apple.com/us/app/melody-composer-squared/id988457961?mt=8
    Perhaps music has the key to the spacetime question more than other fields do.

  33. I don’t think small particles like the Higgs boson actually exist before the atom is broken down, in the same way eddy currents only appear when you dip your paddle in the flowing stream.

  34. OK…

    I checked the above (Ref: http://www.uqsmatrixmechanix.com/UQSPAEST.jpg) link to the UQS reduction of the Corpuscular String Theory model of the Hydrogen Proton.

    Yes… the link breaks due to my spelling error… should be: http://www.matrixmechanix.com/UQSPAECST.jpg

    Thanks for the heads-up… I do make errors.

    S. Lingo
    UQS Author/Logician
    http://www.UQSMatrixMechanix.com

  35. Dear Stephen, dear discrete algorithmic spacetime explorers,

    One of the attractive features of the algorithmic, network-based spacetime ideas discussed in this blog and in the NKS book is that they immediately suggest a range of computing experiments that any curious, scientifically oriented mind can carry out rather easily (maybe using Mathematica). The requirement of full abstraction — that everything should emerge from this universal computation without plugging in features such as known physical constants — appears to offer an accessible entry point to the fascinating quest for the ultimate laws of nature, for scientists as well as amateurs not familiar with current quantum gravity proposals.

    In this respect, the massive exploration, by experiment and without preconceptions, of the ‘computational universe’, and of undirected/directed graph-rewriting systems in particular (for space/spacetime), might still be a sensible item on the agenda.

    However, the work that I have myself carried out in the last few years (https://tommasobolognesi.wordpress.com/publications/), in particular on trivalent networks and on algorithmic causal sets, has increasingly convinced me that brute-force ‘universe hunting’ will not progress significantly if the following issues are not addressed.

    1. Closer contact with the Causal Set Programme.

    Since the late 1980s, Bombelli, Sorkin, Lee, Rideout, Reid, Henson, Dowker and others have explored *stochastic* causal sets, and have collected a number of results from which research on algorithmic, deterministic causal sets (or ‘networks’) can greatly benefit. For example, consider the fundamental requirement of Lorentz invariance, whose transposition from the continuous to the discrete setting is anything but trivial. The NKS take on this is based on assimilating the different inertial frames in continuous spacetime to the different total orders of the rewrite events that build up a discrete spacetime via a string rewrite system. The idea is quite appealing, in light of the existence of ’confluent’ rewrite systems for which the final partial order is unique and subsumes all the different total order realisations (‘causal invariance’).

    Nevertheless, a 2006 paper by Bombelli, Henson and Sorkin [‘Discreteness without symmetry breaking: a theorem’, https://arxiv.org/abs/gr-qc/0605006] proves that a directed graph aiming at ‘Lorentzianity’ – intended here as one that does not support the identification of a preferred reference frame (or direction) – cannot have finite-degree nodes. (One refers here to the transitively *reduced*, Hasse graph, not to the transitive closure.) In other words, the node degrees of algorithmic causets *must* grow unbounded. I suspect that meeting this requirement may involve substantial rethinking of deterministic causet construction techniques…

    2. New paradigms promoting multi-level hierarchies of emergence

    The manifesto of the computational universe conjecture can be summarized in one sentence: Complexity in Nature = Emergence in deterministic computation. And the most effective example of emergence in computation is probably represented by the digital particles of Elementary Cellular Automaton (ECA) 110.

    However, as far as I know, nobody has ever been able to set up a simple program that exhibits more than one level of emergence. In ECA 110, Level 0 is represented by the Boolean function of three variables that defines local cell behaviour, and Level 1 is represented by the emergent particle interaction rules. No Level 2 emerges where particle interactions yield a new layer of entities and rules.
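    Anyone wishing to see Level 1 for themselves can do so with a minimal one-line sketch in Mathematica (the width of 400 cells and the 300 steps are arbitrary choices of mine):

    (* evolve rule 110 from a random initial condition; the localized
       structures ("digital particles") stand out against the periodic background *)
    ArrayPlot[CellularAutomaton[110, RandomInteger[1, 400], 300]]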

    G. Ellis has observed that simple programs (such as those considered in the NKS book) cannot boost complexity up to a level akin to, say, the biosphere: that would require some radically new concept. One suggested possibility is ‘backward causation’.

    Another (possibly related) ingredient that might boost complexity is the appearance, as the computation unfolds, of elementary ’agents’ provided with some form of autonomy/initiative, able to interfere with the initial rules of the game – the same rules from which they have emerged! This resonates with the idea of self-modifying code.

    How can one incorporate these or similar concepts (with notions of ‘observation’/’consciousness’ pulling at the top) in simple models of computation, or promote their emergence? A very hard question indeed! But I currently do not see any other way out of the dead end in which the more ’traditional’ NKS-type experiments on discrete spacetime are currently stuck.

  36. I think the spacetime continua of STR and GTR are concrete. The 4-dimensional reality proposed by Einstein implies that the speed of light, in its constancy and its role as a limit, is intrinsically linked to PT symmetry breaking, carrying generalized complex structures. STR is bound to the connection of space and time (two asymmetrical entities) into the spacetime continuum, which contains both Lorentz groups, the orthochronous and the antichronous; this is what makes spacetime variable. That is, spacetime is deformed by the violation of CP, which implies the asymmetry between particles and antiparticles at relativistic velocities, measured by the contraction of space and the dilatation of time. Diffeomorphism and homeomorphism imply the appearance of smooth and continuous spacetime emerging at lengths near the Planck scale, unified in a quantum topological-geometrical field theory that defines the metrics for the superstrings; this gives the minimum length to the tension in the strings, generating vibrations with a fundamental frequency and their harmonics through resonance to infinity. This fundamental frequency is fixed by the breakdown of PT, which is defined by the asymmetry of particles and antiparticles and the spacetime generated there, so that the “particles” vibrate at a single frequency. Quantum entanglements permit the super-unification of quantum theory with STR and GTR through the extra dimensions that carry the gravitons gluing the dimensions together through gravity. Extended 4-dimensional manifolds explain several relations between the different metrics obtained by S. Donaldson through the classification of topological invariants, which permit, through self-intersections, obtaining particles through these distortions of spacetime with different curvatures. The potential of this represents the wave functions carrying infinite families of curvatures with different smoothness, some producing exotic structures without any smooth structure; the spacetime generated by the fourth dimension is a mixture of these structures, in that the time component carries different families of curvatures with different smooth structures. These deformations generate the physics and geometry of space through spacetime; each space represents the subnuclear geometry given by the spins, which are algebraic geometry. The quaternions represent spacetime connected through non-commutativity, which creates the particles (energy, associated with time).

  37. I think that quantum entanglements are spacetime sources, which implies that the connection of space and time is the spacetime continuum; and the time dilatation mentioned is tied to the violation of CP, which implies supersymmetry and extra dimensions. I think that PT symmetry breaking generates the speed of light as a constant and a limit in the smooth and continuous 4-dimensional spacetime manifolds. Thence one could explain dark matter.

  38. I think that the junction of space and time generates the spacetime continuum, due to the violation of PT that makes the connection of space and time into spacetime; for that reason antiparticles appear as a symmetry, but particles and antiparticles are asymmetric, as in the case of neutrinos and antineutrinos violating the CP operator. Spacetime emerges from quantum entanglement, which also gives rise to discrete spacetime. When observed through the General Theory of Relativity, the curvatures of spacetime are smooth and continuous. This could lead to the appearance of a new physics and new conceptions of spacetime, supersymmetry, extra dimensions, dark matter and dark energy.

  39. I have a PhD in physics… experimental solid state… blah blah… easy stuff! However, the theoretical stuff is just so abstract these days, and the bar is set so high for entry… it is nearly impossible to digest as a layman… you realistically need a PhD in maths just to begin research in theoretical physics. More worrying, though, is the snail’s pace at which maths and physics are taught at schools, certainly, and at most universities these days… I’d suggest separate theoretical and experimental physics degrees are the only way… the guys who average 85% and above in year one get into the theoretical degree; the rest of us become physicists who work as engineers 🙂

  40. We can only measure what we can sense. Does a person believe they have all the senses necessary to understand the universe?

  41. If possible, I would like to see a simple, specific example from your approach: how to obtain, based on your approach, the 4-dimensional transformation between two inertial frames F(w,x,y,z) and F’(w’,x’,y’,z’). Could your approach be extended to obtain a spacetime transformation between the inertial frame F and a frame with a constant linear acceleration (with an inertial velocity), such that the acceleration transformation reduces to the usual transformation in the limit of zero acceleration?
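    (For reference, a minimal sketch of what I mean by ‘the usual transformation’, in Mathematica; the helper name boost is my own, units have c = 1, and the coordinates are ordered (w, x, y, z):)

    (* the standard boost between F and F' at relative velocity v,
       obtained as a hyperbolic rotation by rapidity ArcTanh[v] in the w-x plane *)
    boost[r_] := MatrixExp[r {{0, 1, 0, 0}, {1, 0, 0, 0}, {0, 0, 0, 0}, {0, 0, 0, 0}}]
    boost[ArcTanh[v]] // FullSimplify
    (* expected, after simplification:
       {{1/Sqrt[1 - v^2], v/Sqrt[1 - v^2], 0, 0},
        {v/Sqrt[1 - v^2], 1/Sqrt[1 - v^2], 0, 0},
        {0, 0, 1, 0}, {0, 0, 0, 1}} *)

    A constant-acceleration (Rindler-type) transformation should reduce to exactly this matrix in the zero-acceleration limit.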

  42. What if the universe isn’t analogous to a single spinning superfluid, but rather a multi-superfluid composite made up of 4 distinct superfluids? What if the fundamental particles of our standard model can be thought of as quantized vortices of these various superfluids within a larger, composite superfluid mixture?

    If this is the case, then each of the superfluids would have its own quantized vortices. And, at least according to this article (http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.117.145301), those vortices should be able to interact with each other in a zero-viscosity system, transferring angular momentum between them. That would establish a theoretical basis for fundamental quantum particles and for interactions between those particles/vortices.

    If the particle/antiparticle pairs are simply counter-rotating vortices, then they would cancel each other out when they combine, transferring the sum of their energy into other nearby vortices via force-carrier particles. The prevalence of matter over antimatter could be explained by the direction of spin of the composite superfluid spacetime universe (CSSU), since the prevailing spin direction would be what generates the majority of particles/vortices to begin with.

    The fact that each of the fundamental particle pairs can be created out of the quantum vacuum (Dirac sea) by applying enough energy might be another easily explained consequence of the CSSU theory. The energy required for pair production should be analogous to the lambda point of the superfluid the particles/vortices are made from. Different superfluid components, each having its own lambda point, result in different, discrete energy requirements for the creation of each type of particle/antiparticle pair.

    In the CSSU, the Big Bang’s Planck Epoch (zero to approximately 10^−43 seconds) would be analogous to the transition temperature and condensation of the first of the superfluids. As the precursor state cools and condenses and the first superfluid comes into existence, a spin direction (carried over and enhanced from the precursor state) is imparted to the CSSU. The imparting of spin and the resulting creation of the first vortices mark the beginning of the so-called Grand Unification Epoch (10^−43 to 10^−36 seconds). During this time, the first particles/vortices are created. A second superfluid transition/condensation event correlates to the Inflationary Epoch, a third to the Electroweak Epoch, and a fourth to the Quark Epoch. With each condensation event, more total spin is imparted to the CSSU and more particles/vortices come into existence.

    The end result of this rapid sequential condensation of the 4 superfluids is the creation of a composite superfluid mixture containing a large number of interacting particles/vortices.

    Just a thought…

  43. I’m not much into physics, but I’m interested in the topic from a philosophical perspective.

    I also think that the foundation of existence is mathematical, but I think graphs are structures too complex to be the most fundamental level of existence. I can totally accept, though, that at some very low levels (but not the lowest level) some graph-like structures can exist in the universe.

    I think that the fundamental level is mathematical because only this could explain why there’s anything at all, which is the most basic question about the universe. I think that practically there is always “nothing” in the universe, because the net value of everything over all time is zero. This can explain how there’s something instead of (philosophical) nothing, setting aside religious explanations. In mathematics you can get a net value of zero in infinitely many ways. This explains the fluctuations of ‘nothing’ (or, more correctly, of stability) in space. To take the easiest case, +1 and −1 also sum to zero, so this kind of fluctuation is inevitable. That the universe is not total chaos comes from this intrinsic stability, which also explains why everything in the universe seems to seek stability. This also explains why most of the universe is ‘empty’ space, because empty space is more stable. Perceivable matter is the effect of bigger fluctuations, which are uncommon; that’s why matter is rare in the universe. So ultimately the nature of the universe is a fluctuating stability. Why there are fluctuations at all is still a question, since we could consider a state without any fluctuations even more stable. But maybe fluctuations are totally inevitable because of the mathematical nature of the universe: since the net value is always zero, macro-stability is always perfect, and total micro-stability is not possible at all; yet the system still seeks even the smallest micro-stability, given the huge prevalence of “empty” space over matter (large-scale fluctuations/perturbations).

  44. “Building a substantial new intellectual structure”: this is a key area for me.

    Ironically (or inevitably, depending on your viewpoint), this is an evolution of a network that represents our knowledge of the world. When new knowledge networks evolve (or are designed), the emergent properties don’t always match what’s predicted, and often exceed it.

    Similar to the Wolfram Language and Mathematica, the potential arguably becomes greater with time, but (similar to neural-net training) can hit maxima and stagnate (at this abstract level, it’s likely to be observed as monoculture and HyperNormalisation). For this reason, I think we need multiple, competing networks [intellectual structures] with the ability to merge and fork, likely throwing in some mutators at the [super-structure] level of organisation to prevent the poisonous monoculture arising from virtual echo-chambers and from the excessive [undue] influence of some nodes in the structure that may skew the overall system.

    I realise this is straying perilously close to sociology and politics, but the wider structure I’m leaning towards here is fractalism, so that apparent closeness isn’t surprising to me.

    I’m currently firmly rooted in a day job making software for enterprises to earn a crust, but I make occasional moves towards this at work (very slight, with some resistance) or at home (mostly on paper, or gathering tools/skills and walking through possible approaches [aka progressive procrastination]). I’m sure I’m not alone in this position (a new intellectual structure might just propagate enough to make better use of us!).

    If I ever get anything worthwhile off the ground, I’m sure you’d notice, but until then, keep up the good work through the foundation and the technology stack you’re producing. I suspect universal modeling will evolve somewhat organically from that.

  45. I think you are onto something big. I found this information, and perhaps q-analogs could support the propagation of nodal structuring and the evolution of the network: http://thescienceexplorer.com/universe/researchers-found-mathematical-structure-was-thought-not-exist

  46. Spacetime is measured mathematically, via the PT symmetry. It is given by the speed of light being constant, which implies that photons are massless. If the photon had mass, the speed of light would not be constant. Many experiments seek to determine whether photons and gravitons have mass. All this is a puzzle.

  47. Add another level of indirection; that’s how every computable problem can be solved.

    Let’s say space is a sea with multiple sinks, each sink being a particle with a mass.

    Then that moving sea is possibly a field over a more primitive field: a 3D Euclidean infra-space where waves travel from “node” to “node” in a causal, simple manner.

    If that infra-space is an algorithmically animated set of entities with a state (memory cells), any Turing-complete machine might be enough to run the algorithm.

    If such an algorithm exists, I think it’s worth the effort to try to figure it out 🙂

    Besides, if we live in a “simulation”, it opens interesting doors about the interconnections between multiple instances of that simulation. Who knows, maybe there is such a thing as a soul that emerges from such a complex system…

  48. Fuller had a great model for this with closest-packed spheres. Some of Wolfram’s grids look like the negative space between spheres. Maybe that’s all it is – the ‘aether’ is the negative space…?

  49. Regarding the statement that ‘the possibility exists that one could just find a simple rule—and initial condition—that one could hold up and say, “This is our universe!”’: although not stated explicitly, the concepts discussed here seem to imply a strict determinism once initial conditions and rules are known, albeit of the computationally irreducible kind. If so, then we simply have to wait long enough for the point in universal history at which someone will unavoidably make the right discovery :). But perhaps more seriously: any model and initial conditions we come up with must quite literally be ‘made out of’ the same network we are trying to describe or discover. A model of the whole embedded as a part? That seems odd, perhaps even contradictory. Then there is also the question of what the ‘interpreter’ consists of that applies the rules to the network (the one the universe is made of, not our model of it). If we are going to avoid an infinite regress, then the interpreter must be implicit in the network itself. And if we allow for random actions as part of rules, then strict determinism is no longer implied. The whole-as-part question is not so easily answered.

  50. I am amazed by your research. Your discoveries are amazing, and so is the science behind them. People like you are an inspiration for young people like me to be more curious and to find out more than I otherwise would. Great work…