Frontiers of Computational Thinking: A SXSW Report

Stephen Wolfram speaking at SXSW 2015

Last week I spoke at SXSW Interactive 2015 in Austin, Texas. Here’s a slightly edited transcript:

A Most Productive Year

Well, hello again. I’ve actually talked about computation three times before at SXSW. And I have to say when I first agreed to give this talk, I was worried that I would not have anything at all new to say. But actually, there’s a huge amount that’s new. In fact, this has probably been the single most productive year of my life. And I’m excited to be able to talk to you here today about some of the things that I’ve figured out recently.

It’s going to be a fairly wild ride, sort of bouncing between very conceptual and very practical—from thousand-year-old philosophy issues, to cloud technology to use here and now.

Basically, for the last 40 years I’ve been building a big tower of ideas and technology, working more or less alternately on basic science and on technology. And using the basic science to figure out more technology, and technology to figure out more science.

I’m happy to say lots of people have used both the science and the technology that I’ve built. But I think what we’ve now got is much bigger than before. Actually, talking to people the last couple of days at SXSW I’m really excited, because probably about 3/4 of the people that I’ve talked to can seriously transform—or at least significantly upgrade—what they’re doing by using new things that we’ve built.

The Wolfram Language

OK. So now I’ve got to tell you how. It all starts with the Wolfram Language. Which actually, as it happens, I first talked about by that name two years ago right here at SXSW.

The Wolfram Language is a big and ambitious thing which is actually both a central piece of technology, and a repository and realization of a bunch of fundamental ideas. It’s also something that you can start to use right now, free on the web. Actually, it runs pretty much everywhere—on the cloud, desktops, servers, supercomputers, embedded processors, private clouds, whatever.

Wolfram Language

From an intellectual point of view, the goal of the Wolfram Language is basically to express as much as possible computationally—to provide a very broad way to encapsulate computation and knowledge, and to automate, as much as possible, what can be done with them.

I’ve been working on building what’s now the Wolfram Language for about three decades. And in Mathematica and Wolfram|Alpha, many, many people have used precursors of what we have now.

But today’s Wolfram Language is something different. It’s something much broader, that I think can be the way that lots of computation gets done sort of everywhere, in all sorts of systems and devices and whatever.

So let’s see it in action. Let’s start off by just having a little conversation with the language, in a thing we invented 26 years ago that we call a notebook. Let’s do something trivial.

2 + 2

Good. Let’s try something different. You may know that it was Pi Day on Saturday: 3/14/15. And since we are the company that I think has served more mathematical pi than any other in history, we had a little celebration about Pi Day. So let’s have the Wolfram Language compute pi for us; let’s say to a thousand places:

N[Pi, 1000]

There. Or let’s be more ambitious; let’s calculate it to a million places. It’ll take a little bit longer…

N[Pi, 10^6]

But not much. And there’s the result. It goes on and on. Look how small the scroll thumb is.

For something different we could pick up the Wikipedia article about pi:

WikipediaData["Pi"]

And make a word cloud from it:

WordCloud[DeleteStopwords[%]]

Needless to say, in the article about pi, pi itself features prominently.

Or let’s get an image. Here’s me:

CurrentImage[]

So let’s go ahead and do something with that image—for example, let’s edge-detect it. % always means the most recent thing we got, so…

EdgeDetect[%]

…there’s the edge detection of that image. Or let’s say we make a morphological graph from that image, so now we’ll make some kind of network:

MorphologicalGraph[%]

Oh, that’s quite fetching; OK. Or let’s automatically make a little user interface here that controls the degree of edginess that we have here—so there I am:

Manipulate[EdgeDetect[CurrentImage[], r], {r, 1, 30}]

Or let’s get a table of different levels of edginess in that picture:

Table[EdgeDetect[CurrentImage[], r], {r, 1, 30}]

And now for example we can take all of those images and stack them up and make a 3D image:

Image3D[%, BoxRatios -> 1]

A Language for the Real World

The Wolfram Language has zillions of different kinds of algorithms built in. It’s also got real-world knowledge and data. So, for example, I could just say something like “planets”:

(planets)

So it understood from natural language what we were talking about. Let’s get a list of planets:

EntityList[%]

And there’s a list of planets. Let’s get pictures of them:

EntityValue[%, "Image"]

Let’s find their masses:

EntityValue[%%, "Mass"]

Now let’s make an infographic of planets sized according to mass:

ImageCollage[% -> %%]

I think it’s pretty amazing that it’s just one line of code to make something like this.

Let’s go on a bit. This is where the internet thinks my computer is right now:

Here

We could say, “When’s sunset going to be, at this position on this day?”

Sunset[]

How long from now?

Now

OK, let’s get a map of, say, 10 miles around the center of Austin:

GeoGraphics[GeoDisk[(=Austin), (=10 mile)]]

Or, let’s say, a powers of 10 sequence:

Table[GeoGraphics[GeoDisk[(=Austin), Quantity[10^n, "Miles"]]], {n, -1, 4}]

Or let’s go off planet and do the same kind of thing. We ask for the Apollo 11 landing site, and let’s show a thousand miles around that on the Moon:

GeoGraphics[{Red, GeoDisk[First[(=apollo 11 landing site)], (=1000 miles)]}]

We can do all kinds of things. Let’s try something in a different kind of domain. Let’s get a list of van Gogh’s artworks:

(=van gogh artworks)

And let’s take, say, the first 20 of those, and let’s get images of those:

EntityValue[Take[%, 20], "Image"]

And now, for example, we can take those and say, “What were the dominant colors that were used in those images?”

DominantColors /@ %

And let’s plot those colors in a chromaticity diagram, in 3D:

ChromaticityPlot3D[%]

Philosophy of the Wolfram Language

I think it’s fairly amazing what can be done with just tiny amounts of Wolfram Language code.

It’s really a whole new situation for programming. I mean, it’s a dramatic change. The traditional idea has been to start from a fairly small programming language, and then write fairly big programs to do what you want. The idea in the Wolfram Language is to make the language itself in a sense as big as possible—to build in as much as we can—and in effect to automate as much as possible of the process of programming.

These are the types of things that the Wolfram Language deals with:

The Wolfram Language is very broad

And by now we’ve got thousands of built-in functions, tens of thousands of models and methods and algorithms and so on, and carefully curated data on thousands of different domains.

And I’ve basically spent nearly 30 years of my life keeping the design of all of this clean and consistent.

It’s been really interesting, and the result is really satisfying, because now we have something that’s incredibly powerful—that we’re also able to use to develop the language itself at an accelerating rate.

Tweetable Programs

Here’s something we did recently to have some fun with all of this. It’s called Tweet-a-Program.

Wolfram Tweet-a-Program

The idea here is you send a whole program as a tweet, and get back the result of running it. If you stop by our booth at the tradeshow here, you can pick up one of these little “Galleries of Tweetable Programs”. And here’s an online collection of some tweetable programs—and remember, every one of these programs is less than 140 characters long, and does all kinds of different types of things.

Wolfram Tweet-a-Program online collection

So to celebrate tweetable programs, we also have a deck of “code cards“, each with a tweetable program:

Part of our deck of code cards, each with a different Wolfram Language tweetable program
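
Just to give a sense of scale, here’s a sketch of the sort of one-liner that fits comfortably in a tweet (an illustrative example I’m writing here, not one from the deck):

(* an illustrative tweet-sized program: 50 colored circles arranged around a ring *)
Graphics[Table[{Hue[n/50], Circle[{Cos[2 Pi n/50], Sin[2 Pi n/50]}, n/50]}, {n, 50}]]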

Computational Thinking for Kids

You know, if you look even at these tweetable programs, they’re surprisingly easy to understand. You can kind of just read the words to get a good idea how they work.

And you might think, OK, it’s like kids could do this. Well, actually, that’s true. And in fact I think this is an important moment for programming, where the same thing has happened as has happened in the past for things like video editing and so on: We’ve automated enough that the fancy professionals don’t really have any advantage over kids who are just now learning programming.

So one thing I’m very keen on right now is to use our language as a way to teach computational thinking to a very broad range of people.

Soon we’ll have something we call Wolfram Programming Lab—which you can use free on the web. It’s a kind of immersion language learning for the Wolfram Language, where you see lots of small working examples of Wolfram Language programs that you get to modify and run.

Wolfram Programming Lab

I think it’s pretty powerful for education. Because it’s not just teaching programming: It’s immediately bringing in lots of real-world stuff, integrating with other things kids are learning, and really teaching a computational-thinking approach to everything.

So let’s take a look at a couple of examples. It’s been Pi Day; let’s look at Pi Necklaces:

Wolfram Programming Lab notebook:  Pi Digit Necklaces

The basic idea is that here’s a little piece of code—you can run it, see what it does, modify it. You can say to show the details, and it’ll tell you what’s going on. And so on.

And maybe we can try another example. Let’s say we do something a little bit more real-world… Where can you see from a particular skyscraper?

Wolfram Programming Lab notebook:  Skyscraper Views

This will show us the visible region from the Empire State Building. And we could go ahead and change lots of parameters of this and see what happens, or you can go down and look at challenges, where it’s asking you to try and do other kinds of related computations.

Wolfram Programming Lab notebook:  Skyscraper Views:  Challenges

I hope lots of people—kids and otherwise—will have fun with the explorations that we’ve been making. I think it’s great for education: a kind of mixture of sort of the precise thinking of math, and the creativity of something like writing. And, by the way, in the Programming Lab, we can watch programs people are trying to write, and do all kinds of education analytics inside.

I might mention that for people who don’t know English, we’ll soon be able to annotate any Wolfram Language programs in lots of other languages.

Translation annotation for Wolfram Language code—here in Chinese

I think some amazing things are going to happen when lots more people learn to think computationally using the Wolfram Language.

Natural Language as Input

Of course, many millions of people already use our technology every day without any of that. They’re just typing pure natural language into Wolfram|Alpha, or saying things to Siri that get sent to Wolfram|Alpha.

I guess one big breakthrough has been being able to use our very precise natural language understanding, using both new kinds of algorithms and our huge knowledgebase.

And using all our knowledge and computation capabilities to generate automated reports for things that people ask about. Whether it’s questions about demographics:

Wolfram|Alpha output for "cost of living in austin vs SF"

Or about airplanes—this shows airplanes currently overhead where the internet thinks my computer is:

Wolfram|Alpha output for "planes overhead"

Or for example about genome sequences. It will go look up whether that particular random base-pair sequence appears somewhere on the human genome:

Wolfram|Alpha output for "AAGCTAGCTAGCTCA"

So those are a few sort of things that we can do in Wolfram|Alpha. And we’ve been covering thousands of different domains of knowledge, adding new things all the time.

Wolfram|Alpha examples

By the way, there are now quite a lot of large organizations that have internal versions of Wolfram|Alpha that include their own data as well as our public data. And it’s really nice, because all types of people can kind of make “drive-by” queries in natural language without ever having to go to their IT departments.

You know, being able to use natural language is central to the actual Wolfram Language, too. Because when you want to refer to something in the real world—like a city, for example—you can’t be going to documentation to find out its name. You just want to type in natural language, and then get that interpreted as something precise.

Which is exactly what you can now do. So, for example, we would type something like:

Just type "=nyc"

and get:

And the Wolfram Language correctly interprets it as "New York City"

And that would be understood as the entity “New York City”. And we could go and ask things like what’s the population of that, and it will tell us the results:

New York City (city) ... ["Population"]
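
If you want to do that same interpretation purely in code, rather than with the “=” free-form input in a notebook, one way, sketched here, is the Interpreter function:

(* interpret a natural-language string as a city entity, then ask it for a property *)
city = Interpreter["City"]["nyc"]
city["Population"]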

Big Idea: Symbolic Programming

There’s an awful lot that goes into making the Wolfram Language work—not only tens of millions of lines of algorithmic code, and terabytes of curated data, but also some big ideas.

Probably the biggest idea is the idea of symbolic programming, which has been the core of what’s become the Wolfram Language right from the very beginning.

Here’s the basic point: In the Wolfram Language, everything is symbolic. It doesn’t have to have any particular value; it can just be a thing.

If I just typed “x” in most computer languages, they’d say, “Help, I don’t know what x is”. But the Wolfram Language just says, “OK, x is x; it’s symbolic”.

x

And the point is that basically anything can be represented like this. If I type in “Jupiter”, it’s just a symbolic thing:

Jupiter (planet) ...

Or, for example, if I were to put in an image here, it’s just a symbolic thing:

Image of Jupiter

And I could have something like a slider, a user-interface element—again, it’s just a symbolic thing:

Slider[]

And now when you compute, you can do anything with anything. Like you could do math with x:

Factor[x^10 - 1]

Or with an image of Jupiter:

Factor[(jupiter)^10 - 1]

Or with sliders:

Factor[Slider[]^10 - 1]

Or whatever.

It’s taken me a really long time, actually, to understand just how powerful this idea of symbolic programming really is. Every few years I understand it a little bit more.

Language for Deployment

Long ago we understood how to represent programs symbolically, and documents, and interfaces, so that they all instantly become things you can compute with. Recently one of the big breakthroughs has been understanding how to represent not only operations and content symbolically, but also their deployments.

OK, there’s one thing I need to explain first. What I’ve been showing you here today has mostly been using a desktop version of the Wolfram Language, though it’s going to the cloud to get things from our knowledgebase and so on. Well, with great effort we’ve also built a full version of the whole language in the cloud.

So let me use that interface there, just through a web browser. I’m going to have the exact same experience, so to speak. And we can do all these same kinds of things just purely in the cloud through a web browser.

Wolfram Programming Cloud:  123^456

Wolfram Programming Cloud:  Graphics3D[Sphere[]]

Wolfram Programming Cloud:  Table[Rotate["hello",RandomReal[{0,2Pi}]],{100}]

You know, in my 40 years of writing software, I don’t believe there’s ever been a development environment as crazy as the web and the cloud. It’s taken us a huge amount of effort to kind of hack through the jungle to get the functionality that we want. We’re pretty much there now. And of course the great news for people who just use what we’ve built is that they don’t have to hack through the jungle themselves, because we’ve already done that.

But OK, so you can use the Wolfram Language directly in the cloud. And that’s really useful. But you can also deploy other things in the language through the cloud.

Like, cat pictures are popular on the internet, so let’s deploy a cat app. Let’s define a form that has a field that asks for a breed of cat, then shows a picture of that. Then let’s deploy that to the cloud.

CloudDeploy[FormFunction[{"cat" -> "CatBreed"}, Magnify[#cat["Image"], 2] &, "PNG"]]

Now we get a cloud object with a URL. We just go there, and we get a form. The form has a “smart field” that understands natural language—in this particular case, the language for describing cat breeds. So now we can type in, let’s say, “siamese”. And it will go back and run that code… OK. There’s a picture of a cat.

Enter the name of a cat breed, get a picture of that breed

We can make our web app a little more complicated. Let’s add in another field here.

CloudDeploy[FormFunction[{"cat" -> "CatBreed", "angle" -> Restricted["Number", {0, 360}]}, Rotate[Magnify[#cat["Image"], 2], #angle Degree] &, "PNG"]]

Again, we deploy to the cloud, and now have a cat at an angle there:

Manx cat at a 70-degree angle

OK. So that’s how we can make a web app, which we can also deploy on mobile and so on. We can also make an API if we want to. Let’s use the same piece of code. Actually, the easiest thing to do would be just to edit that piece of code there, and change this from being a form to being an API:

CloudDeploy[APIFunction[{"cat" -> "CatBreed", "angle" -> Restricted["Number", {0, 360}]}, Rotate[Magnify[#cat["Image"], 2], #angle Degree] &, "PNG"]]

And now the thing that we have will be an API that we can go fill in parameters to; we can say “cat=manx”, “angle=300”, and now we can run that, and there’s another cat at an angle.

Manx cat at a 300-degree angle, in deployed API

So that was an API that we just created, that could be used by anybody in the cloud. And we can call the API from anywhere—a website, a program, whatever. And actually we can automatically generate the code to call this from all sorts of other languages—let’s say inside Java.

EmbedCode[%, "Java"]

So in effect you can knit Wolfram Language functionality right into any project you’re doing in any language.
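
And, for what it’s worth, you can also call an API like this straight from the Wolfram Language itself; here’s a sketch, where the cloud-object URL is just a placeholder for whatever CloudDeploy returned:

(* call a deployed API over HTTP; the URL here is only a placeholder *)
URLExecute["https://www.wolframcloud.com/obj/user/catAPI", {"cat" -> "manx", "angle" -> 300}]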

In this particular case, you’re calling code in our cloud. I should mention that there are other ways you can set this up, too. You can have a private cloud. You can have a version of the Wolfram Engine that’s on your computer. You can even have the Wolfram Engine in a library that can be explicitly linked into a program you’ve written.

And all this stuff works on mobile too. You can deploy an app that works on mobile; even a complete APK file for Android if you want.

There’s lots of depth to all this software engineering stuff. And it’s rather wonderful how the Wolfram Language manages to simplify and automate so much of it.

The Automation of Programming

You know, I get to see this story of automation up close at our company every day. We have all these projects—all these things we’re building, a huge amount of stuff—that you might think we’d need thousands of people to do. But you see, we’ve been automating things, and then automating our automation and so on, for a quarter of a century now. And so we still only have a little private company with about 700 people—and lots of automation.

It’s fairly spectacular to see: When we automate something—like, say, a type of web development—projects that used to be really painful, and take a couple of months, suddenly become really easy, and take like a day. And from a management point of view, it’s great how that changes the level of innovation that you attempt.

Let me give you a little example from a couple of weeks ago. We were talking about what to do for Pi Day. And we thought it’d be fun to put up a website where people could type in their birthdays, and find out where in the digits of pi those dates show up, and then make a cool T-shirt based on that.

Well, OK, clearly that’s not a corporately critical activity. But if it’s easy, why not do it? Well, with all our automation, it is easy. Here’s the code that got written to create that website:

mypiday.com code notebook

It’s not particularly long. Somewhere here it’ll deploy to the cloud, and there it’s calling the Zazzle API, and so on. Let me show you the actual website that got made there:

mypiday.com website

And you can type in your birthday in some format, and then it’ll go off and try and find that birthday in the digits of pi. There we go; it found my birthday, at that digit position, and there’s a custom-created image showing me in pi and letting me go off and get a T-shirt with that on it.

mypiday.com results page, showing any entered birthdate (mine, in this case) in the digits of pi

And actually, zero programmers were involved in building this. As it happens, it was just done by our art director, and it went live a couple of days ago, before Pi Day, and has been merrily serving hundreds of thousands of custom T-shirt designs to pi enthusiasts around the world.

Large-Scale Programs

It’s interesting to see how large-scale code development happens in the Wolfram Language. There’s an Eclipse-based IDE, and we’re soon going to release a bunch of integration with Git that we use internally. But one thing that’s very different from other languages is that people tend to write their code in notebooks.

Notebooks can include the whole story of code

They can put the whole story of their code right there, with text and graphics and whatever right there with the code. They can use notebooks to make structured tests if they want to; there’s a testing notebook with various tests in it we could run and so on:

A unit-test notebook
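
The individual tests in a notebook like that are just ordinary Wolfram Language expressions; here’s a sketch of what a single one looks like:

(* a single unit test: evaluate the input and check it against the expected result *)
VerificationTest[IntegerDigits[2^10], {1, 0, 2, 4}]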

And they can also use notebooks to make templates for computable documents, where you can directly embed symbolic Wolfram Language code that’ll get executed to make static or interactive documents that you can deliver as reports and so on.

By the way, one of the really nice things about this whole ecosystem is that if you see a finished result—say an infographic—there’s a standard way to include a kind of “compute-back link” that goes right back to the notebook where that graphic was made. So you see everything that’s behind it, and, say, start being able to use the data yourself. Which is useful for things like data publishing for research, or data journalism.

Internet of Things

OK, so, talking of data, a couple of weeks ago we launched what we call our Data Drop.

Wolfram Data Drop

The idea is to let anything—particularly connected devices—easily drop data into our cloud, and then immediately make it meaningful, and accessible, to the Wolfram Language everywhere.

Like here’s a little device I have that measures a few things… actually, I think this particular one only measures light levels; kind of boring.

An Electric Imp device that measures light levels

But in any case, it’s connected via wifi into our cloud. And everything it measures goes into our Data Drop, in this databin corresponding to that device.

bin = Databin["3Mfto-_m"]

We’re using what we call WDF—the Wolfram Data Framework—to say what the raw numbers from the device mean. And now we can do all kinds of computations.
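
Just as a minimal sketch of how readings get in there in the first place: a device (or any program) can add an entry to a databin like this, where the "light" key is only an illustrative field name:

(* drop a single reading into the databin; the field name is just an example *)
DatabinAdd[bin, <|"light" -> 0.7|>]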

OK, it hasn’t collected very much data yet, but we could go ahead and make a plot of the data that it’s collected:

DateListPlot[bin]

That was the light level as seen by that device, and I think it just sat there, and the lights got turned on and then it’s been a fixed light level—sorry, not very exciting. We can just make a histogram of that data, and again it’s going to be really boring in this particular case.

Histogram[bin]

You know, we have all this data about the world from our knowledgebase integrated right into our language. And now with the Data Drop, you can integrate data from any device that you want. We’ve got a whole inventory of different kinds of devices, which we’ve been making for the last couple of years.

Wolfram Connected Devices Project

Once you get data into this Data Drop, you can use it wherever the Wolfram Language is used. Like in Wolfram|Alpha. Or Siri. Or whatever.

It’s really critical that the Wolfram Language can represent different types of data in a standard way, because that means you can immediately do computations, combine databins, whatever. And I have to say that just being able to sort of “throw data” into the Wolfram Data Drop is really convenient.

Like of course we’re throwing data from the My Pi Day website into a databin. And that means, for example, it’s just one line of code to see where in the world people have been interested in pi and generating pi T-shirts from, and so on.

GeoGraphics[{Red, Point[Databin["3HPtHzvi"]["GeoLocations"]]},  GeoProjection -> "Albers", GeoRange -> Full, ImageSize -> 800]

Some of you might know that I’ve long been an enthusiast of personal analytics. In fact, somewhat to my surprise, I think I’m the human who has collected more data on themselves than anyone else. Like here’s a dot for every piece of outgoing email that I’ve sent for the past quarter century.

My outgoing email for the past quarter century

But now, with our Data Drop, I’m starting to accumulate even more data. I think I’m already in the double digits in terms of number of databins. Like here’s my heart rate on Pi Day, from a databin. I think there’s a peak there right at the pi moment of the century.

DateListPlot[Databin["3LV~DEJC", {DateObject[{2015, 3, 14, 7, 30, 0}], DateObject[{2015, 3, 15, 0, 0, 0}]}]["TimeSeries"]]

Machine Learning

So, with all this data coming in, what are we supposed to do with it? Well, within the Wolfram Language we’ve got all this visualization and analysis capability. One of our goals is to be able to do the best data science automatically—without needing to take data scientists’ time to do it. And one area where we’ve been working on that a lot is in machine learning.

Let’s say you want to classify pictures into day or night. OK, so here I’ve got a little tiny training set of pictures corresponding to scenes that are day or night, and I just have one little function in the Wolfram Language, Classify, which is going to build a classifier to determine whether a picture is a day or a night one:

daynight = Classify[{(classifier set)}]

So there I’ve got the classifier. Now I can just apply that classifier to a collection of pictures, and now it will tell me, according to that classifier, are those pictures day or night.

daynight[{ (6 images) }]

And we’re automatically figuring out what type of machine learning to use, and setting up so that you have a classifier that you can use, or can put in an app, or call in an API, or whatever, so that it is just one function to do this.
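
Since the training images above don’t reproduce well here, here’s a minimal sketch of the same Classify idea, with made-up average-brightness numbers standing in for the pictures:

(* toy training data: brightness values labeled "day" or "night" *)
daynight = Classify[<|"day" -> {0.92, 0.85, 0.78}, "night" -> {0.08, 0.15, 0.21}|>];
(* classify a new brightness value *)
daynight[0.8]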

We’ve got lots of built-in classifiers as well; all sorts of different kinds of things. Let me show you a new thing that’s just coming together now, which is image identification. And I’m going to live dangerously and try and do a live demo on some very new technology.

I asked somebody to go to Walmart and buy a random pile of stuff to try for image identification. So this is probably going to be really horrifying. Let’s see what happens here. First of all, let’s set it up so that I can actually capture some images. OK. I’m going to give it a little bit of a better chance by not having too funky of a background.

OK. Let us try one banana. Let’s try capturing a banana, and let us see what happens if I say ImageIdentify in our language…

(banana) // ImageIdentify

OK! That’s good!

All right. Let’s tempt fate, and try a couple of other things. What’s this? It appears to be a toy plastic triceratops. Let’s see what the system thinks it is. This could get really bad here.

(triceratops) // ImageIdentify

Oops. It says it’s a goat! Well, from that weird angle I guess I can see how it would think that.

OK, let’s try one more thing.

(African violet) // ImageIdentify

Oh, wow! OK! And the tag in the flower pot says the exact same thing! Which I certainly didn’t know. That’s pretty cool.

This does amazingly well most of the time. And what to me is the most interesting is that when it makes mistakes, like with the triceratops, the mistakes are very human-like. I mean, they’re mistakes that a person could reasonably make.

And actually, I think what’s going on here is pretty exciting. You know, 35 years ago I wanted to figure out brain-like things and I was studying neural nets and so on, and I did all kinds of computer experiments. And I ended up simplifying the underlying rules I looked at—and wound up studying not neural nets, but things called cellular automata, which are kind of like the simplest possible programs.

Mining the Computational Universe

And what I discovered is that if you look at that in the computational universe of all those programs, there’s a whole zoo of possible behaviors that you see. Here’s an example of a whole bunch of cellular automata. Each one is a different program showing different kinds of behavior.

An array of cellular automata

Even when the programs are incredibly simple, there can be incredibly complex behavior. Like, there’s an example; we can go and see what it does:

A complex cellular automaton
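
It’s a one-liner in the Wolfram Language to generate pictures like these; here’s a sketch using rule 30, the classic simple rule with complex behavior (not necessarily the one shown above):

(* run the rule 30 cellular automaton for 200 steps from a single black cell *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]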

Well, that discovery led me on the path to developing a whole new kind of science that I wrote a big book about a number of years ago.

A New Kind of Science:  The book and its chapters

That’s ended up having applications all over the place. And for example, over the last decade it’s been pretty neat to see that the idea of modeling things using programs has been winning out over the idea that’s dominated exact science for about 300 years, of modeling things using mathematical equations.

And what’s also been really neat to see is the extent to which we can discover new technology just by kind of “mining” this computational universe of simple programs. Knowing some goal we have, we might sample a trillion programs to find one that’s good for our particular purposes.

Much can be mined from the computational universe of cellular automata

That purpose could be making art, or it could be making some new image processing or some new natural-language-understanding algorithm, or whatever.

Finally, Brain-Like Computing

Well, OK, so there’s a lot that we can model and build with simple programs. But people have often said somehow the brain must be special; it must be doing more than that.

Back 35 years ago I could get neural networks to make little attractors or classifiers, but I couldn’t really get them to do anything terribly interesting. And over the years I actually wasn’t terribly convinced by most of the applications of things like neural nets that I saw.

But just recently, some threshold has been passed. And, like, the image identifier I was showing is using pretty much the same ideas as 35 years ago—with a bunch of good engineering tweaks, and perhaps a nod to cellular automata too. But the amazing thing is that just doing pretty much the obvious stuff, with today’s technology, just works.

I couldn’t have predicted when this would happen. But looking at it now, it’s sort of shocking. We’re now able to use millions of neurons, tens of millions of training images and thousands of trillions of the equivalent of neuron firings. And although the engineering details are almost as different as birds versus airplanes, the orders of magnitude are pretty much just the same as for us humans when we learn to identify images.

For me, it’s sort of the missing link for AI. There are so many things now that we can do vastly better than humans, using computers. I mean, if you put a Wolfram|Alpha inside a Turing test bot, you’ll be able to tell instantly that it’s not a human, because it knows too much and can compute too much.

But there’ve been these tasks like image identification that we’ve never been able to do with computers. But now we can. And, by the way, the way people have thought this would work for 60 years is pretty much the way it works; we just didn’t have the technology to see it until now.

So, does this mean we should use neural nets for everything now? Well, no. Here’s the thing: There are some tasks, like image identification, that each human effectively learns to do for themselves, based on what they see in the world around them.

Language as Symbolic Representation

But that’s not everything humans do. There’s another very important thing, pretty much unique to our species. We have language. We have a way of communicating symbolically that lets us take knowledge acquired by one person, and broadcast it to other people. And that’s kind of how we’ve built our civilization.

Well, how do we make computers use that idea too? Well, they have to have a language that represents the world, and that they can compute with. And conveniently enough, that’s exactly what the Wolfram Language is trying to be, and that’s what I’ve been working on for the last 30 years or so.

You know, there’s all this abstract computation out there that can be done. Just go sample cellular automata out in the computational universe. But the question is, how does it relate to our human world, to what we as humans know about or care about?

Well, what’s happened is that humans have tried to boil things down: to describe the world symbolically, using language and linguistic constructs. We’ve seen what’s out there in the world, and we’ve come up with words to describe things. We have a word like “bird”, which refers abstractly to a large collection of things that are birds. And by now in English we’ve got maybe 30,000 words that we commonly use, that are the raw material for our description of the world.

Well, it’s interesting to compare that with the Wolfram Language. In English, there’s been a whole evolution over thousands of years to settle on the perhaps convenient, but often incoherent, language structure that we have. In the Wolfram Language, we—and particularly I—have been working hard for many many years keeping everything as consistent and coherent as possible. And now we’ve got 5,000 or so “core words” or functions, together with lots of other words that describe specific entities.

And in the process of developing the language, what I’ve been doing explicitly is a little like what’s implicitly happened in English. I’ve been looking at all those computational things and processes out there, and trying to understand which of them are common enough that it’s worth giving names to them.

You know, this idea of symbolic representation seems to be pretty critical to human rational thinking. And it’s really interesting to see how the structure of a language can affect how people think about things. We see a little bit of that in human natural languages, but the effect seems to be much larger in computer languages. And for me as a language designer, it’s fascinating to see the patterns of thinking that open up when people start really understanding the Wolfram Language.

Some people might say, “Why are we using computer languages at all? Why not just use human natural language?” Well, for a start, computers need something to talk to each other in. But one of the things I’ve worked hard on in the Wolfram Language is making sure that it’s easy not only for computers, but also for humans, to understand—kind of a bridge between computers and humans.

And what’s more, it turns out there are things that human natural language, as it’s evolved, just isn’t very good at expressing. Just think about programs. There are some programs that, yes, can easily be represented by a little piece of English, but a lot of programs are really awkward to state in English. But they’re very clean in the Wolfram Language.

So I think we need both. Some things it’s easier to say in English, some in the Wolfram Language.

Post-Linguistic Concepts

But back to things like image identification. That’s a task that’s really about going from all the stuff out there, in this case in the visual world, and finding how to make it symbolic—how to describe things abstractly with words.

Now, here’s the thing: Inside the neural net, one thing that’s happening is that it’s implicitly making distinctions, in effect putting things in categories. In the early layers of the net those categories look remarkably like the categories we know are used in the early stages of human visual processing, and we actually have pretty decent words for them: “round”, “pointy”, and so on.

But pretty soon there are categories implicitly being used that we don’t have words for. It’s interesting that in the course of history, our civilization gradually does develop new words for things. Like in the last few decades, we’ve started talking about “fractal patterns”. But before then, those kind of tree-like structures didn’t tend to get identified as being anything in particular, because we didn’t have words for them.

So our machines are going to discover a lot of categories that our civilization has not come up with. I’ve started calling these things a rather pretentious name: “post-linguistic emergent concepts”, or PLECs for short. I think we can make a metaframework for things like this within the Wolfram Language. But I think PLECs are part of the way our computers can start to really extend the traditional human worldview.

By the way, before we even get to PLECs, there are issues with concepts that humans already understand perfectly well. You see, in the Wolfram Language we’ve got representations of lots of things in the world, and we can turn the vast majority of things that people ask Wolfram|Alpha into precise symbolic forms. But we still can’t turn an arbitrary human conversation into something symbolic.

So how would we do that? Well, I think we have to break it down into some kind of “semantic primitives”: basic structures. Some, like fact statements, we already have in the Wolfram Language. And some, like mental statements, like “I think” or “I want”, we don’t.

The Ancient History

It’s a funny thing. I’ve been working recently on designing this kind of symbolic language. And of course people have tried to do things like this before. But the state of the art is actually mostly from a shockingly long time ago. I mean, like the 1200s, there was a chap called Ramon Lull who started working on this; in the 1600s, people like Gottfried Leibniz and John Wilkins.

It’s quite interesting to look at what those guys figured out with their “philosophical languages” or whatever. Of course, they never had an implementation substrate like we do today. But they understood quite a lot about ontological categories and so on. And looking at what they wrote really highlights, actually, what’s the same and what changes in the course of history. I mean, all their technology stuff is of course horribly out of date. But most of their stuff about the human condition is still just as valid as then, although they certainly had a lot more focus on mortality than we do today.

And today one interesting change is that we really need to attribute almost person-like internal state to machines. Not least because, as it happens, the early applications of all this everyday discourse stuff will be to things we’re building for people talking to consumer devices and cars and so on.

I could talk some about very practical here-and-now technology that’s actually going to be available starting next week, for making what we call PLIs, or Programmable Linguistic Interfaces. But instead let’s talk more about the big picture, and about the future.

The way I see things, throughout history there’s been a thread of using technology to automate more and more of what we do. Humans define goals, and then it’s the job of technology to automatically achieve those goals as well as possible.

A lot of what we’re trying to do with the Wolfram Language is in effect to give people a good way to describe goals. Then our job is to do the computations—or make the external API requests or whatever—to have those goals be achieved.

What Will the AIs Do?

So, in any possible computational definition of the objectives of AI, we’re getting awfully close to achieving them—and actually, in many areas we’ve gone far beyond anything like human intelligence.

But here’s the point: Imagine we have this box that sits on our desk, and it’s able to do all those intelligent things humans can do. The problem is, what does the box choose to do? Somehow it has to be given goals, or purposes. And the point is that there are no absolute goals and purposes. Any given human might say, “The purpose of life is to do X”. But we know that there’s nothing absolute about that.

Purpose ends up getting defined by society and history and civilization. There are plenty of things people do or want to do today that would have seemed absolutely inconceivable 300 years ago. It’s interesting to see the complicated interplay between the progress of technology, the progress of our descriptions of the world—through memes and words and so on—and the evolution of human purposes.

To me, the path of technology seems fairly clear. The evolution of human purposes is a lot less clear.

I mean, on the technology side, more and more of what we do ourselves we’ll be able to outsource to machines. We’ve already outsourced lots of mechanical thinking, like say for doing math. We’re well on the way to outsourcing lots of things about memory, and soon also lots of things about judgment.

People might say, “Well, we’ll never outsource creativity.” Actually, some aspects of that are among the easier things to outsource: We can get lots of inspiration for music or art or whatever just by looking out into the computational universe, and it’s only a matter of time before we can automatically combine those things with knowledge and judgment about the human world.

You know, a lot of our use of technology in the past has been “on demand”. But we’re going to see more and more preemptive use—where our technology predicts what we will want to do, then suggests it.

It’s sort of amusing to me when people talk about the machines taking over; here’s the scenario that I think will actually happen. It’s like with GPSs in cars: Most people—like me—just follow what the GPS tells them to do. Similarly, when there’s something that’s saying, you know, “Pick out that food on the menu”, or “Talk to that person in the crowd”, much of the time we’ll just do what the machine tells us to—partly because the machine is basically able to figure out a lot more than we can.

It’s going to get complicated when machines are acting collectively across a whole society, effectively implementing in software all those things that political philosophers have talked about theoretically. But even at an individual level, it’s very complicated to understand the goal structure.

Yes, the machines can help us “be ourselves, but better”, amplifying and streamlining things we want to do and directions we want to go.

Immortality & Beyond

You know, in the world today, we’ve got a lot of scarce resources. In many parts of the world much less scarce than in the past, but some resources are still scarce. The most notable is probably time. We have finite lives, and that’s a key part of lots of aspects of human motivation and purpose.

It’s surely going to be the biggest discontinuity ever in human history when we achieve effective human immortality. And, by the way, I have absolutely no doubt that we will achieve it. I just wish more progress would get made on things like cryonics to give my generation a better probability of making it to that.

But it’s not completely clear how effective immortality will happen. I think there are several paths; that will probably in practice be combined. The first is that we manage to reverse-engineer enough of biology to put in patches that keep us running biologically indefinitely. That might turn out to be easy, but I’m concerned that it’ll be like trying to keep a server that’s running complex software up forever—which is something for which we have absolutely no theoretical framework, and lots of potential to run into undecidable halting problems and things like that.

The second immortality path is in effect uploading to some kind of engineered digital system. And probably this is something that would happen gradually. First we’d have digital systems that are directly connected to our brains, then these would use technology—perhaps not that different from the ImageIdentify function I was showing you—to start learning from our brains and the experiences we have, and taking more and more of the “cognitive load”… until eventually we’ve got something that responds exactly the same as our brain does.

Once we’ve got that, we’re dealing with something that can evolve quite quickly, independent of the immediate constraints of physics and chemistry, and that can for example explore different parts of the computational universe—inevitably sampling parts that are far from what we as humans currently understand.

Box of a Trillion Souls

So, OK, what is the end state? I often imagine some kind of “box of a trillion souls” that’s sort of the ultimate repository for our civilization. Now at first we might think, “Wow, that’s going to be such an impressive thing, with all that intelligence and consciousness and knowledge and so on inside.” But I’m afraid I don’t think so.

You see, one of the things that emerged from my basic science is what I call the Principle of Computational Equivalence, which says in effect that beyond some low threshold, all systems are equivalent in the sophistication of the computations they do. So that means that there’s not going to be anything abstractly spectacular about the box of a trillion souls. It’ll just be doing a computation at the same level as lots of systems in the universe.

Maybe that’s why we don’t see extraterrestrial intelligence: because there’s nothing abstractly different about something that came from a whole elaborate civilization as compared to things that are just happening in the physical world.

Now, of course, we can be proud that our box of a trillion souls is special, because it came from us, with our detailed history. But will interesting things be happening in it? Well, to define “interesting” we then need a sense of purpose—so things become pretty circular, and it’s a complicated philosophy discussion.

Back to 2015

I’ve come pretty far from talking about practical things for 2015. The way I like to work, understanding these fundamental issues is pretty important in not making mistakes in building technology here and now, because that’s how I’ve figured out one can build the best technology. And right now I’m really excited with the point we’ve reached with the Wolfram Language.

I think we’ve defined a new level of technology to support computational thinking, that I think is going to let people rather quickly do some very interesting things—going from algorithmic ideas to finished apps or new companies or whatever. The Wolfram Cloud and things around it are still in beta right now, but you can certainly try them out—and I hope you will. It’s really easy to get started—though, not surprisingly, because there are actually new ideas, there are things to learn if you really want to take the best advantage of this technology.

Well, that’s probably all I have to say right now. I hope I’ve been able to communicate a few of the exciting things that we’ve got going on right now, and a few of the things that I think are new and emerging in computational thinking and the technology around it. So, thanks very much.


Pi or Pie?! Celebrating Pi Day of the Century (And How to Get Your Very Own Piece of Pi)

Pictures from Pi Day now added »

This coming Saturday is “Pi Day of the Century”. The date 3/14/15 in month/day/year format is like the first digits of π. And at 9:26:53.589… it’s a “super pi moment”.

3/14/15 9:26:53.589... a "super pi moment" indeed

Between Mathematica and Wolfram|Alpha, I’m pretty sure our company has delivered more π to the world than any other organization in history. So of course we have to do something special for Pi Day of the Century.

Pi Day of the Century with Wolfram: 3.14.15 9:26:53

A Corporate Confusion

One of my main roles as CEO is to come up with ideas—and I’ve spent decades building an organization that’s good at turning those ideas into reality. Well, a number of weeks ago I was in a meeting about upcoming corporate events, and someone noted that Pi Day (3/14) would happen during the big annual SXSW (South by Southwest) event in Austin, Texas. So I said (or at least I thought I said), “We should have a big pi to celebrate Pi Day.”

I didn’t give it another thought, but a couple of weeks later we had another meeting about upcoming events. One agenda item was Pi Day. And the person who runs our Events group started talking about the difficulty of finding a bakery in Austin to make something suitably big. “What are you talking about?” I asked. And then I realized: “You’ve got the wrong kind of pi!”

I guess in our world pi confusions are strangely common. Siri’s voice-to-text system sends Wolfram|Alpha lots of “pie” every day that we have to specially interpret as “pi”. And then there’s the Raspberry Pi, that has the Wolfram Language included. And for me there’s the additional confusion that my personal fileserver happens to have been named “pi” for many years.

After the pi(e) mistake in our meeting we came up with all kinds of wild ideas to celebrate Pi Day. We’d already rented a small park in the area of SXSW, and we wanted to make the most interesting “pi countdown” we could. We resolved to get a large number of edible pie “pixels”, and use them to create a π shape inside a pie shape. Of course, there’ll be the obligatory pi selfie station, with a “Stonehenge” pi. And a pi(e)-decorated Wolfie mascot for additional selfies. And of course we’ll be doing things with Raspberry Pis too.

A Piece of Pi for Everyone

I’m sure we’ll have plenty of good “pi fun” at SXSW. But we also want to provide pi fun for other people around the world. We were wondering, “What can one do with pi?” Well, in some sense, you can do anything with pi. Because, apart from being the digits of pi, its infinite digit sequence is—so far as we can tell—completely random. So for example any run of digits will eventually appear in it.

How about giving people a personal connection to that piece of math? Pi Day is about a date that appears as the first digits of pi. But any date appears somewhere in pi. So, we thought: Why not give people a way to find out where their birthday (or other significant date) appears in pi, and use that to make personalized pi T-shirts and posters?

In the Wolfram Language, it’s easy to find out where your birthday appears in π. It’s pretty certain that any mm/dd/yy will appear somewhere in the first 10 million digits. On my desktop computer (a Mac Pro), it takes 6.28 seconds (2π?!) to compute that many digits of π.
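
If you want to check that timing on your own machine, here’s a quick sketch:

(* time the computation of 10 million digits of pi, in seconds *)
First[AbsoluteTiming[N[Pi, 10^7];]]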

Here’s the Wolfram Language code to get the result and turn it into a string (dropping the decimal point at position 2):

PiString = StringDrop[ToString[N[Pi, 10^7]], {2}];

Now it’s easy to find any “birthday string”:

First[StringPosition[PiString, "82959"]]

So, for example, my birthday string first appears in π starting at digit position 151,653.
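
And if you want to go straight from a date to the search string, here’s a little sketch (the dateString helper is just something I’m defining for illustration):

(* turn a date into the mm/dd/yy digit string used for the search *)
dateString[d_DateObject] := DateString[d, {"MonthShort", "DayShort", "YearShort"}]
First[StringPosition[PiString, dateString[DateObject[{1959, 8, 29}]]]]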

What’s a good way to display this? It depends on how “pi lucky” you are. For those born on 4/15/92, their birthdate already appears at position 3. (Only a certain fraction of positions correspond to a possible date string.) People born on November 23, 1960 have the birthday string that’s farthest out, appearing only at position 9,982,546. And in fact most people have birthdays that are pretty “far out” in π (the average is 306,150 positions).

Our long-time art director had the idea of using a spiral that goes in and out to display the beginnings and ends of such long digit sequences. And almost immediately, he’d written the code to do this (one of the great things about the Wolfram Language is that non-engineers can write their own code…).

Different ways to display birthdates found in pi, depending on the position at which they begin

Next came deploying that code to a website. And thanks to the Wolfram Programming Cloud, this was basically just one line of code! So now you can go to MyPiDay.com

Find your birthdate in pi at MyPiDay.com (mine's August 29, 1959)

…and get your own piece of π!

Here's mine...

And then you can share the image, or get a poster or T-shirt of it:

You can get a personalized shirt or poster of your very own MyPiDay result

The Science of Pi

With all this discussion about pi, I can’t resist saying just a little about the science of pi. But first, just why is pi so famous? Yes, it’s the ratio of circumference to diameter of a circle. And that means that π appears in zillions of scientific formulas. But it’s not the whole story. (And for example most people have never even heard of the analog of π for an ellipse—a so-called complete elliptic integral of the second kind.)

The bigger story is that π appears in a remarkable range of mathematical settings—including many that don’t seem to have anything to do with circles. Like sums of negative powers, or limits of iterations, or the probability that a randomly chosen fraction will not be in lowest terms.

If one’s just looking at digit sequences, pi’s 3.1415926… doesn’t seem like anything special. But let’s say one just starts constructing formulas at random and then doing traditional mathematical operations on them, like summing series, doing integrals, finding limits, and so on. One will get lots of answers that are 0, or 1/2, or square root of 2. And there’ll be plenty of cases where there’s no closed form one can find at all. But when one can get a definite result, my experience is that it’s remarkably common to find π in it.

A few other constants show up too, like e (2.71828…), or Euler gamma (0.5772…), or Catalan’s constant (0.9159…). But π is distinctly more common.

Perhaps math could have been set up differently. But at least with math as we humans have constructed it, the number that is π is a widespread building block, and it’s natural that we gave it a name, and that it’s famous—now even to the point of having a day to celebrate it.

What about other constants? “Birthday strings” will certainly appear at different places in different constants. And just like when Wolfram|Alpha tries to find closed forms for numbers, there’s typically a tradeoff between digit position and obscurity of the constants used. So, for example, my birthday string appears at position 151,653 in π, 241,683 in e, 45,515 in square root of 2, 40,979 in ζ(3) … and 196 in the 1601st Fibonacci number.
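
The same couple of lines work for any constant; here’s a sketch of the search repeated in e (with my birthday string again; swap in your own):

(* build the digit string for e and find the first occurrence of a date string *)
eString = StringDrop[ToString[N[E, 10^6]], {2}];
First[StringPosition[eString, "82959"]]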

Randomness in π

Let’s say you make a plot that goes up whenever a digit of π is 5 or above, and down otherwise:

For each consecutive digit of pi, this plot line goes up if the digit is 5-9 and down if it's 0-4
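
Here’s a sketch of the code behind a plot like this, using the first 10,000 digits:

(* +1 for each digit 5-9, -1 for each digit 0-4, accumulated into a walk *)
digits = First[RealDigits[Pi, 10, 10000]];
ListLinePlot[Accumulate[If[# >= 5, 1, -1] & /@ digits]]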

It looks just like a random walk. And in fact, all statistical and cryptographic tests of randomness that have been tried on the digits (except tests that effectively just ask “are these the digits of pi?”) say that they look random too.

Why does that happen? There are fairly simple procedures that generate digits of pi. But the remarkable thing is that even though these procedures are simple, the output they produce is complicated enough to seem completely random. In the past, there wasn’t really a context for thinking about this kind of behavior. But it’s exactly what I’ve spent many years studying in all kinds of systems—and wrote about in A New Kind of Science. And in a sense the fact that one can “find any birthday in pi” is directly connected to concepts like my general Principle of Computational Equivalence.

SETI among the Digits

Of course, just because we’ve never seen any regularity in the digits of pi, it doesn’t mean that no such regularity exists. And in fact it could still be that if we did a big search, we might find somewhere far out in the digits of pi some strange regularity lurking.

What would it mean? There’s a science fiction answer at the end of Carl Sagan’s book version of Contact. In the book, the search for extraterrestrial intelligence succeeds in making contact with an interstellar civilization that has created some amazing artifacts—and that then explains that what they in turn find remarkable is that encoded in the distant digits of pi, they’ve found intelligent messages, like an encoded picture of a circle.

At first one might think that finding “intelligence” in the digits of pi is absurd. After all, there’s just a definite simple algorithm that generates these digits. But at least if my suspicions are correct, exactly the same is actually true of our whole universe, so that every detail of its history is in principle computable much like the digits of pi.

Now we know that within our universe we have ourselves as an example of intelligence. SETI is about trying to find other examples. The goal is fairly well defined when the search is for “human-like intelligence”. But—as my Principle of Computational Equivalence suggests—I think that beyond that it’s essentially impossible to make a sharp distinction between what should be considered “intelligent” and what is “merely computational”.

If the century-old mathematical suspicion is correct that the digits of pi are “normal”, it means that every possible sequence eventually occurs among the digits, including all the works of Shakespeare, or any other artifact of any possible civilization. But could there be some other structure—perhaps even superimposed on normality—that for example shows evidence of the generation of intelligence-like complexity?

While it may be conceptually simple, it’s certainly more bizarre to contemplate the possibility of a human-like intelligent civilization lurking in the digits of pi, than in the physical universe as explored by SETI. But if one generalizes what one counts as intelligence, the situation is a lot less clear.

Of course, if we see a complex signal from a pulsar magnetosphere we say it’s “just physics”, not the result of the evolution of a “magnetohydrodynamic civilization”. And similarly if we see some complex structure in the digits of pi, we’re likely to say it’s “just mathematics”, not the result of some “number theoretic civilization”.

One can generalize from the digit sequence of pi to representations of any mathematical constant that is easy to specify with traditional mathematical operations. Sometimes there are simple regularities in those representations. But often there is apparent randomness. And the project of searching for structure is quite analogous to SETI in the physical universe. (One difference, however, is that π as a number to study is selected as a result of the structure of our physical universe, our brains, and our mathematical development. The universe presumably has no such selection, save implicitly from the fact that we exist in it.)

I’ve done a certain amount of searching for regularities in representations of numbers like π. I’ve never found anything significant. But there’s nothing to say that any regularities have to be at all easy to find. And there’s certainly a possibility that it could take a SETI-like effort to reveal them.

But for now, let’s celebrate the Pi Day of our century, and have fun doing things like finding birthday strings in the digits of pi. Of course, someone like me can’t help but wonder what success there will have been by the next Pi Day of the Century, in 2115, in either SETI or “SETI among the digits”…

This Just In…

Pictures from the Pi Day event:

Pi Day with Wolfram photos


To comment, please visit the copy of this post at the Wolfram Blog »

]]>
Stephen Wolfram <![CDATA[The Wolfram Data Drop Is Live!]]> http://blog.internal.stephenwolfram.com/?p=9158 2015-03-06T19:50:11Z 2015-03-04T19:38:38Z Where should data from the Internet of Things go? We’ve got great technology in the Wolfram Language for interpreting, visualizing, analyzing, querying and otherwise doing interesting things with it. But the question is, how should the data from all those connected devices and everything else actually get to where good things can be done with it? Today we’re launching what I think is a great solution: the Wolfram Data Drop.

Wolfram Data Drop

When I first started thinking about the Data Drop, I viewed it mainly as a convenience—a means to get data from here to there. But now that we’ve built the Data Drop, I’ve realized it’s much more than that. And in fact, it’s a major step in our continuing efforts to integrate computation and the real world.

So what is the Wolfram Data Drop? At a functional level, it’s a universal accumulator of data, set up to get—and organize—data coming from sensors, devices, programs, or for that matter, humans or anything else. And to store this data in the cloud in a way that makes it completely seamless to compute with.

Data Drop data can come from anywhere

Our goal is to make it incredibly straightforward to get data into the Wolfram Data Drop from anywhere. You can use things like a web API, email, Twitter, web form, Arduino, Raspberry Pi, etc. And we’re going to be progressively adding more and more ways to connect to other hardware and software data collection systems. But wherever the data comes from, the idea is that the Wolfram Data Drop stores it in a standardized way, in a “databin”, with a definite ID.
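
From the Wolfram Language itself, creating a databin and dropping data into it takes just a couple of functions. Here is a minimal sketch; the field names and values are made up:

bin = CreateDatabin[] (* returns a Databin object with a freshly assigned ID *)
DatabinAdd[bin, <|"temperature" -> Quantity[22.5, "DegreesCelsius"], "light" -> 310|>]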

Here’s an example of how this works. On my desk right now I have this little device:

This device records the humidity, light, pressure, and temperature at my desk, and sends it to a Data Drop databin. The cable is power; the pen is there to show scale.

Every 30 seconds it gets data from the tiny sensors on the far right, and sends the data via wifi and a web API to a Wolfram Data Drop databin, whose unique ID happens to be “3pw3N73Q”. Like all databins, this databin has a homepage on the web: wolfr.am/3pw3N73Q.

The homepage is an administrative point of presence that lets you do things like download raw data. But what’s much more interesting is that the databin is fundamentally integrated right into the Wolfram Language. A core concept of the Wolfram Language is that it’s knowledge based—and has lots of knowledge about computation and about the world built in.

For example, the Wolfram Language knows in real time about stock prices and earthquakes and lots more. But now it can also know about things like environmental conditions on my desk—courtesy of the Wolfram Data Drop, and in this case, of the little device shown above.

Here’s how this works. There’s a symbolic object in the Wolfram Language that represents the databin:

Databin representation in the Wolfram Language

And one can do operations on it. For instance, here are plots of the time series of data in the databin:

Time series from the databin of condition data from my desk: humidity, light, pressure, and temperature
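
Here is a sketch of how one might produce plots like these, assuming the databin exposes its contents as time series through a "TimeSeries" property:

bin = Databin["3pw3N73Q"];
DateListPlot /@ bin["TimeSeries"] (* one plot per recorded quantity *)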

And here are histograms of the values:

Histograms of the same humidity, light, pressure, and temperature data from my desk

And here’s the raw data presented as a dataset:

Raw data records for each of the four types of desktop atmospheric data I collected to the Data Drop

What’s really nice is that the databin—which could contain data from anywhere—is just part of the language. And we can compute with it just like we would compute with anything else.

So here for example are the minimum and maximum temperatures recorded at my desk:
(for aficionados: MinMax is a new Wolfram Language function)

Minimum and maximum temperatures collected by my desktop device

We can convert those to other units (% stands for the previous result):

Converting the minimum and maximum collected temperatures to Fahrenheit
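
As a tiny self-contained illustration of the same conversion, with made-up Celsius readings standing in for the databin values:

{min, max} = MinMax[{16.9, 18.2, 25.1, 22.4}] (* hypothetical readings, in degrees Celsius *)
UnitConvert[Quantity[#, "DegreesCelsius"], "DegreesFahrenheit"] & /@ {min, max}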

Let’s pull out the pressure as a function of time. Here it is:

It's easy to examine any individual part of the data—here pressure as a function of time

Of course, the Wolfram Knowledgebase has historical weather data. So in the Wolfram Language we can just ask it the pressure at my current location for the time period covered by the databin—and the result is encouragingly similar:

The official weather data on pressure for my location nicely parallels the pressures recorded at my desk

If we wanted, we could do all sorts of fancy time series analysis, machine learning, modeling, or whatever, with the data. Or we could do elaborate visualizations of it. Or we could set up structured or natural language queries on it.

Here’s an important thing: notice that when we got data from the databin, it came with units attached. That’s an example of a crucial feature of the Wolfram Data Drop: it doesn’t just store raw data, it stores data that has real meaning attached to it, so it can be unambiguously understood wherever it’s going to be used.

We’re using a big piece of technology to do this: our Wolfram Data Framework (WDF). Developed originally in connection with Wolfram|Alpha, it’s our standardized symbolic representation of real-world data. And every databin in the Wolfram Data Drop can use WDF to define a “data semantics signature” that specifies how its data should be interpreted—and also how our automatic importing and natural language understanding system should process new raw data that comes in.

The beauty of all this is that once data is in the Wolfram Data Drop, it becomes both universally interpretable and universally accessible, to the Wolfram Language and to any system that uses the language. So, for example, any public databin in the Wolfram Data Drop can immediately be accessed by Wolfram|Alpha, as well as by the various intelligent assistants that use Wolfram|Alpha. Tell Wolfram|Alpha the name of a databin, and it’ll automatically generate an analysis and a report about the data that’s in it:

The Wolfram|Alpha results for "databin 3pw3N73Q"

Through WDF, the Wolfram Data Drop immediately handles more than 10,000 kinds of units and physical quantities. But the Data Drop isn’t limited to numbers or numerical quantities. You can put anything you want in it. And because the Wolfram Language is symbolic, it can handle it all in a unified way.

The Wolfram Data Drop automatically includes timestamps, and, when it can, geolocations. Both of these have precise canonical representations in WDF. As do chemicals, cities, species, networks, or thousands of other kinds of things. But you can also drop things like images into the Wolfram Data Drop.

Somewhere in our Quality Assurance department there’s a camera on a Raspberry Pi watching two recently acquired corporate fish—and dumping an image every 10 minutes into a databin in the Wolfram Data Drop:

Images are easy to store in Data Drop, and to retrieve

In the Wolfram Language, it’s easy to stack all the images up in a manipulable 3D “fish cube” image:

If this were a Wolfram CDF document, you could simply click and drag to rotate the cube and view it from any angle
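
One plausible way to build such a stack, assuming fishFrames is the list of image frames retrieved from the databin (the variable name is made up):

Image3D[fishFrames] (* stacks the 2D frames into a rotatable 3D image *)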

Or to process the images to get a heat map of where the fish spend time:

Apparently the fish like the lower right area of the tank

We can do all kinds of analysis in the Wolfram Language. But to me the most exciting thing here is how easy it is to get new real-world data into the language, through the Wolfram Data Drop.

Around our company, databins are rapidly proliferating. It’s so easy to create them, and to hook up existing monitoring systems to them. We’ve got databins now for server room HVAC, for weather sensors on the roof of our headquarters building, for breakroom refrigerators, for network ping data, and for the performance of the Data Drop itself. And there are new ones every day.

Lots of personal databins are being created, too. I myself have long been a personal data enthusiast. And in fact, I’ve been collecting personal analytics on myself for more than a quarter of a century. But I can already tell that March 2015 is going to show a historic shift. Because with the Data Drop, it’s become vastly easier to collect data, with the result that the number of streams I’m collecting is jumping up. I’ll be at least a 25-databin human soon… with more to come.

A really important thing is that because everything in the Wolfram Data Drop is stored in WDF, it’s all semantic and canonicalized, with the result that it’s immediately possible to compare or combine data from completely different databins—and do meaningful computations with it.

So long as you’re dealing with fairly modest amounts of data, the basic Wolfram Data Drop is set up to be completely free and open, so that anyone or any device can immediately drop data into it. Official users can enter much larger amounts of data—at a rate that we expect to be able to progressively increase.

Wolfram Data Drop databins can be either public or private. And they can either be open to add to, or require authentication. Anyone can get access to the Wolfram Data Drop in our main Wolfram Cloud. But organizations that get their own Wolfram Private Clouds will also soon be able to have their own private Data Drops, running inside their own infrastructure.

So what’s a typical workflow for using the Wolfram Data Drop? It depends on what you’re doing. And even with a single databin, it’s common in my experience to want more than one workflow.

It’s very convenient to be able to take any databin and immediately compute with it interactively in a Wolfram Language session, exploring the data in it, and building up a notebook about it.

But in many cases one also wants something to be done automatically with a databin. For example, one can set up a scheduled task to create a report from the databin, say to email out. One can also have the report live on the web, hosted in the Wolfram Cloud, perhaps using CloudCDF to let anyone interactively explore the data. One can make it so that a new report is automatically generated whenever someone visits a page, or one can create a dashboard where the report is continuously regenerated.
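
As a rough sketch of the scheduled-report case, deployed from the Wolfram Language (the databin ID is the one from above, while the output file name, the "TimeSeries" property and the exact scheduling specification are assumptions):

CloudDeploy[
 ScheduledTask[
  CloudExport[DateListPlot[Databin["3pw3N73Q"]["TimeSeries"]], "PNG", "desk-report.png"],
  Quantity[1, "Days"]]]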

It’s not limited to the web. Once a report is in the Wolfram Cloud, it immediately becomes accessible on standard mobile or wearable devices. And it’s also accessible on desktop systems.

You don’t have to make a report. Instead, you can just have a Wolfram Language program that watches a databin, then for example sends out alerts—or takes some other action—if whatever combination of conditions you specify occur.

You can make a databin public, so you’re effectively publishing data through it. Or you can make it private, and available only to the originator of the data—or to some third party that you designate. You can make an API that accesses data from a databin in raw or processed form, and you can call it not only from the web, but also from any programming language or system.

A single databin can have data coming only from one source—or one device—or it can have data from many sources, and act as an aggregation point. There’s always detailed metadata included with each piece of data, so one can tell where it comes from.

For several years, we’ve been quite involved with companies who make connected devices, particularly through our Connected Devices Project. And many times I’ve had a similar conversation: The company will tell me about some wonderful new device they’re making, that measures something very interesting. Then I’ll ask them what’s going to happen with data from the device. And more often than not, they’ll say they’re quite concerned about this, and that they don’t really want to have to hire a team to build out cloud infrastructure and dashboards and apps and so on for them.

Well, part of the reason we created the Wolfram Data Drop is to give such companies a better solution. They deal with getting the data—then they just drop it into the Data Drop, and it goes into our cloud (or their own private version of it), where it’s easy to analyze, visualize, query, and distribute through web pages, apps, APIs, or whatever.

It looks as if a lot of device companies are going to make use of the Wolfram Data Drop. They’ll get their data to it in different ways. Sometimes through web APIs. Sometimes by direct connection to a Wolfram Language system, say on a Raspberry Pi. Sometimes through Arduino or Electric Imp or other hardware platforms compatible with the Data Drop. Sometimes gatewayed through phones or other mobile devices. And sometimes from other clouds where they’re already aggregating data.

We’re not at this point working specifically on the “first yard” problem of getting data out of the device through wires or wifi or Bluetooth or whatever. But we’re setting things up so that with any reasonable solution to that, it’s easy to get the data into the Wolfram Data Drop.

There are different models for people to access data from connected devices. Developers or researchers can come directly to the Wolfram Cloud, through either cloud or desktop versions of the Wolfram Language. Consumer-oriented device companies can choose to set up their own private portals, powered by the Wolfram Cloud, or perhaps by their own Wolfram Private Cloud. Or they can access the Data Drop from a Wolfram mobile app, or their own mobile app. Or from a wearable app.

Sometimes a company may want to aggregate data from many devices—say for a monitoring net, or for a research study. And again their users may want to work directly with the Wolfram Language, or through a portal or app.

When I first thought about the Wolfram Data Drop, I assumed that most of the data dropped into it would come from automated devices. But now that we have the Data Drop, I’ve realized that it’s very useful for dealing with data of human origin too. It’s a great way to aggregate answers—say in a class or a crowdsourcing project—collect feedback, keep diary-type information, do lifelogging, and so on. Once one’s defined a data semantics signature for a databin, the Wolfram Data Drop can automatically generate a form to supply data, which can be deployed on the web or on mobile.

The form can ask for text, or for images, or whatever. And when it’s text, our natural language understanding system can take the input and automatically interpret it as WDF, so it’s immediately standardized.
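
One could also wire up such a form by hand in the Wolfram Language. Here is a rough sketch, with hypothetical field names, that deploys a public web form whose submissions go straight into the databin from above:

CloudDeploy[
 FormFunction[{"mood" -> "String", "hoursSlept" -> "Number"},
  (DatabinAdd[Databin["3pw3N73Q"], #]; "Thanks, recorded!") &],
 Permissions -> "Public"]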

Now that we’ve got the Wolfram Data Drop, I keep on finding more uses for it—and I can’t believe I lived so long without it. As throughout the Wolfram Language, it’s really a story of automation: the Wolfram Data Drop automates away lots of messiness that’s been associated with collecting and processing actual data from real-world sources.

And the result for me is that it’s suddenly realistic for anyone to collect and analyze all sorts of data themselves, without getting any special systems built. For example, last weekend, I ended up using the Wolfram Data Drop to aggregate performance data on our cloud. Normally this would be a complex and messy task that I wouldn’t even consider doing myself. But with the Data Drop, it took me only minutes to set up—and, as it happens, gave me some really interesting results.

I’m excited about all the things I’m going to be able to do with the Wolfram Data Drop, and I’m looking forward to seeing what other people do with it. Do try out the beta that we launched today, and give us feedback (going into a Data Drop databin of course). I’m hoping it won’t be long before lots of databins are woven into the infrastructure of the world: another step forward in our long-term mission of making the world computable…


To comment, please visit the copy of this post at the Wolfram Blog »

]]>
Stephen Wolfram <![CDATA[Introducing Tweet-a-Program]]> http://blog.internal.stephenwolfram.com/?p=8877 2014-10-17T19:05:27Z 2014-09-18T21:29:39Z In the Wolfram Language a little code can go a long way. And to use that fact to let everyone have some fun, today we’re introducing Tweet-a-Program.

Compose a tweet-length Wolfram Language program, and tweet it to @WolframTaP. Our Twitter bot will run your program in the Wolfram Cloud and tweet back the result.

Hello World from Tweet-a-Program: GeoGraphics[Text[Style["Hello!",150]],GeoRange->"World"]

One can do a lot with Wolfram Language programs that fit in a tweet. Like here’s a 78-character program that generates a color cube made of spheres:


Graphics3D[Table[{RGBColor[{i,j,k}/5],Sphere[{i,j,k},1/2]},{i,5},{j,5},{k,5}]]

It’s easy to make interesting patterns:

Graphics[Riffle[NestList[Scale[Rotate[#,.1],.9]&,Rectangle[],40],{Black,White}]]

Here’s a 44-character program that seems to express itself like an executable poem:

Graphics3D@Point@Tuples@Table[Range[20],{3}]

Going even shorter, here’s a little “fractal hack”, in just 36 characters:


NestList[Subsuperscript[#,#,#]&,o,6]

Putting in some math makes it easy to get all sorts of elaborate structures and patterns:


ContourPlot3D[Cos[{x,y,z}/Norm[{x,y,z}]^2]==0,{x,-0.5,0},{y,0,0.5},{z,-0.5,0}]

ReliefPlot[Arg[Fourier[Table[1/LCM[i,j],{i,512},{j,512}]]]]

You don’t have to make pictures. Here, for instance, are the first 1000 digits of π, sized according to their magnitudes (notice that run of 9s!):

Row[Style[#,5#+1]&/@First[RealDigits[Pi,10,1000]]]

The Wolfram Language not only knows how to compute π, as well as a zillion other algorithms; it also has a huge amount of built-in knowledge about the real world. So right in the language, you can talk about movies or countries or chemicals or whatever. And here’s a 78-character program that makes a collage of the flags of Europe, sized according to country population:

ImageCollage[CountryData["Europe","Population"]->CountryData["Europe","Flag"]]

We can make this even shorter if we use some free-form natural language in the program. In a typical Wolfram notebook interface, you do this using CTRL + =, but in Tweet-a-Program, you can do it just using =[...]:

ImageCollage[=[Europe populations]->=[Europe flags]]

The Wolfram Language knows a lot about geography. Here’s a program that makes a “powers of 10” sequence of disks, centered on the Eiffel Tower:

Table[GeoGraphics[GeoDisk[=[Eiffel Tower],Quantity[10^(n+1),"Meters"]],GeoProjection->"Bonne"],{n,6}]

There are many, many kinds of real-world knowledge built into the Wolfram Language, including some pretty obscure ones. Here’s a map of all the shipwrecks it knows in the Atlantic:

GeoListPlot[GeoEntities[=[Atlantic Ocean],"Shipwreck"]]

The Wolfram Language deals with images too. Here’s a program that gets images of the planets, then randomly scrambles their colors to give them a more exotic look:

ColorCombine[RandomSample[ColorSeparate[#]]]&/@EntityValue[=[planets],"Image"]

Here’s an image of me, repeatedly edge-detected:

NestList[EdgeDetect,=[Stephen Wolfram image],5]

Or, for something more “pop culture” (and ready for image analysis etc.), here’s an array of random movie posters:

Grid[Partition[DeleteMissing[EntityValue[RandomSample[MovieData[],50],"Image"]],6]]

The Wolfram Language does really well with words and text too. Like here’s a program that generates an “infographic” showing the relative frequencies of first letters for words in English and in Spanish:

Row[Style[#,#2/70]&@@@Tally[ToUpperCase[StringTake[DictionaryLookup[{#,All}],1]]]]&/@{"English","Spanish"}

And here—just fitting in a tweet—is a program that computes a smoothed estimate of the frequencies of “Alice” and “Queen” going through the text of Alice in Wonderland:

SmoothHistogram[Legended[First/@StringPosition[ExampleData@{"Text","AliceInWonderland"},#],#]&/@{"Alice","Queen"},Filling->Axis]

Networks are good fodder for Tweet-a-Program too. Like here’s a program that generates a sequence of networks:

Table[Graph[Table[i->Mod[i^2,n],{i,n}]],{n,105,110}]

And here—just below the tweet length limit—is a program that generates a random cloud of polyhedra:

Graphics3D[Table[{RandomColor[],Translate[PolyhedronData[RandomChoice[PolyhedronData[]]][[1]],RandomReal[20,3]]},{100}]]

What’s the shortest “interesting program” in the Wolfram Language?

In some languages, it might be a “quine”—a program that outputs its own code. But in the Wolfram Language, quines are completely trivial. Since everything is symbolic, all it takes to make a quine is a single character:

x

Using the built-in knowledge in the Wolfram Language, you can make some very short programs with interesting output. Like here’s a 15-character program that generates an image from built-in data about knots:

KnotData[{8,4}]

Some short programs are very easy to understand:

Grid[Array[Times,{12,12}]]

It’s fun to make short “mystery” programs. What’s this one doing?

NestList[#^#&,x,5]

Or this one?

FixedPointList[#/.{s[x_][y_][z_]->x[z][y[z]],k[x_][y_]->x}&,s[s[s]][s][s][s][k],10]//Column

Or, much more challengingly, this one:

Style[\[FilledCircle],5#]&/@(If[#1>2,2#0[#1-#0[#1-2]],1]&/@Range[50])

I’ve actually spent many years of my life studying short programs and what they do—and building up a whole science of the computational universe, described in my big book A New Kind of Science. It all started more than three decades ago—with a computer experiment that I can now do with just a single tweet:

GraphicsGrid[Partition[Table[ArrayPlot[CellularAutomaton[n,{{1},0},{40,All}]],{n,0,255}],16]]

My all-time favorite discovery is tweetable too:

ArrayPlot[CellularAutomaton[30,{{1},0},100]]

If you go out searching in the computational universe, it’s easy to find all sorts of amazing things:

ArrayPlot[CellularAutomaton[{1635,{3,1}},{{1},0},500],ColorFunction->(Hue[#/3]&)]

An ultimate question is whether somewhere out there in the computational universe there is a program that represents our whole physical universe. And is that program short enough to be tweetable in the Wolfram Language?

But regardless of this, we already know that the Wolfram Language lets us write amazing tweetable programs about an incredible diversity of things. It’s taken more than a quarter of a century to build the huge tower of knowledge and automation that’s now in the Wolfram Language. But this richness is what makes it possible to express so much in the space of a tweet.

In the past, only ordinary human languages were rich enough to be meaningfully used for tweeting. But what’s exciting now is that it seems like the Wolfram Language has passed a kind of threshold of general expressiveness that lets it, too, be meaningfully tweetable. For like ordinary human languages, it can talk about all sorts of things, and represent all sorts of ideas. But there’s also something else about it: unlike ordinary human languages, everything in it always has a precisely defined meaning—and what you write is not just readable, but also runnable.

Tweets in an ordinary human language are (presumably) intended to have some effect on the mind of whoever reads them. But the effect may be different on different minds, and it’s usually hard to know exactly what it is. But tweets in the Wolfram Language have a well-defined effect—which you see when they’re run.

It’s interesting to compare the Wolfram Language to ordinary human languages. An ordinary language, like English, has a few tens of thousands of reasonably common “built-in” words, excluding proper names etc. The Wolfram Language has about 5000 built-in named objects, excluding constructs like entities specified by proper names.

And one thing that’s important about the Wolfram Language—that it shares with ordinary human languages—is that it’s not only writable by humans, but also readable by them. There’s vocabulary to acquire, and there are a few principles to learn—but it doesn’t take long before, as a human, one can start to understand typical Wolfram Language programs.

Sometimes it’s fairly easy to give at least a rough translation (or “explanation”) of a Wolfram Language program in ordinary human language. But it’s very common for a Wolfram Language program to express something that’s quite difficult to communicate—at least at all succinctly—in ordinary human language. And inevitably this means that there are things that are easy to think about in the Wolfram Language, but difficult to think about in ordinary human language.

Just like with an ordinary language, there are language arts for the Wolfram Language. There’s reading and comprehension. And there’s writing and composition. Always with lots of ways to express something, but now with a precise notion of correctness, as well as all sorts of measures like speed of execution.

And like with ordinary human language, there’s also the matter of elegance. One can look at both meaning and presentation. And one can think of distilling the essence of things to create a kind of “code poetry”.

When I first came up with Tweet-a-Program it seemed mostly like a neat hack. But what I’ve realized is that it’s actually a window into a new kind of expression—and a form of communication that humans and computers can share.

Of course, it’s also intended to be fun. And certainly for me there’s great satisfaction in creating a tiny, elegant gem of a program that produces something amazing.

And now I’m excited to see what everyone will do with it. What kinds of things will be created? What popular “code postcards” will there be? Who will be inspired to code? What puzzles will be posed and solved? What competitions will be defined and won? And what great code artists and code poets will emerge?

Now that we have tweetable programs, let’s go find what’s possible…

To develop and test programs for Tweet-a-Program, you can log in free to the Wolfram Programming Cloud, or use any other Wolfram Language system, on the desktop or in the cloud. Check out some details here.


To comment, please visit the copy of this post at the Wolfram Blog »

]]>
Stephen Wolfram <![CDATA[Launching Today: Mathematica Online!]]> http://blog.internal.stephenwolfram.com/?p=8816 2014-09-16T20:00:53Z 2014-09-15T19:12:21Z It’s been many years in the making, and today I’m excited to announce the launch of Mathematica Online: a version of Mathematica that operates completely in the cloud—and is accessible just through any modern web browser.

In the past, using Mathematica has always involved first installing software on your computer. But as of today that’s no longer true. Instead, all you have to do is point a web browser at Mathematica Online, then log in, and immediately you can start to use Mathematica—with zero configuration.

Here’s what it looks like:

Click to open in Mathematica Online (you will need to log in or create a free account)

It’s a notebook interface, just like on the desktop. You interactively build up a computable document, mixing text, code, graphics, and so on—with inputs you can immediately run, hierarchies of cells, and even things like Manipulate. It’s taken a lot of effort, but we’ve been able to implement almost all the major features of the standard Mathematica notebook interface purely in a web browser—extending CDF (Computable Document Format) to the cloud.

There are some tradeoffs of course. For example, Manipulate can’t be as zippy in the cloud as it is on the desktop, because it has to run across the network.  But because its Cloud CDF interface is running directly in the web browser, it can immediately be embedded in any web page, without any plugin, like right here:


Another huge feature of Mathematica Online is that because your files are stored in the cloud, you can immediately access them from anywhere. You can also easily collaborate: all you have to do is set permissions on the files so your collaborators can access them.  Or, for example, in a class, a professor can create notebooks in the cloud that are set so each student gets their own active copy to work with—that they can then email or share back to the professor.
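
As a small sketch of what sharing might look like programmatically, assuming a hypothetical cloud notebook named "analysis.nb" and a made-up collaborator address:

SetOptions[CloudObject["analysis.nb"], Permissions -> {"colleague@example.com" -> {"Read", "Write"}}]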

And since Mathematica Online runs purely through a web browser, it immediately works on mobile devices too. Even better, there’s soon going to be a Wolfram Cloud app that provides a native interface to Mathematica Online, both on tablets like the iPad, and on phones:

Wolfram Cloud app: native interface to Mathematica Online

There are lots of great things about Mathematica Online. There are also lots of great things about traditional desktop Mathematica. And I, for one, expect routinely to use both of them.

They fit together really well.  Because from Mathematica Online there’s a single button that “peels off” a notebook to run on the desktop. And within desktop Mathematica, you can seamlessly access notebooks and other files that are stored in the cloud.

If you have desktop Mathematica installed on your machine, by all means use it.  But get Mathematica Online too (which is easy to do—through Premier Service Plus for individuals, or a site license add-on).  And then use the Wolfram Cloud to store your files, so you can access and compute with them from anywhere with Mathematica Online. And so you can also immediately share them with anyone you want.

Share access easily from Mathematica Online

By the way, when you run notebooks in the cloud, there are some extra web-related features you get—like being able to embed inside a notebook other web pages, or videos, or actually absolutely any HTML code.

Mathematica Online is initially set up to run—and store content—in our main Wolfram Cloud. But it’ll soon also be possible to get a Wolfram Private Cloud—so you operate entirely in your own infrastructure, and for example let people in your organization access Mathematica Online without ever using the public web.

A few weeks ago we launched the Wolfram Programming Cloud—our very first full product based on the Wolfram Language, and Wolfram Cloud technology. Mathematica Online is our second product based on this technology stack.

The Wolfram Programming Cloud is focused on creating deployable cloud software. Mathematica Online is instead focused on providing a lightweight web-based version of the traditional Mathematica experience. Over the next few months, we’re going to be releasing a sequence of other products based on the same technology stack, including the Wolfram Discovery Platform (providing unlimited access to the Wolfram Knowledgebase for R&D) and the Wolfram Data Science Platform (providing a complete data-source-to-reports data science workflow).

One of my goals since the beginning of Mathematica more than a quarter century ago has been to make the system as widely accessible as possible. And it’s exciting today to be able to take another major new step in that direction—making Mathematica immediately accessible to anyone with a web browser.

There’ll be many applications. From allowing remote access for existing Mathematica users. To supporting mobile workers. To making it easy to administer Mathematica for project-based users, or on public-access computers. As well as providing a smooth new workflow for group collaboration and for digital classrooms.

But for me right now it’s just so neat to be able to see all the power of Mathematica immediately accessible through a plain old web browser—on a computer or even a phone.

And all you need do is go to the Mathematica Online website


To comment, please visit the copy of this post at the Wolfram Blog »

]]>
Stephen Wolfram <![CDATA[Computational Knowledge and the Future of Pure Mathematics]]> http://blog.internal.stephenwolfram.com/?p=8593 2014-08-12T19:44:20Z 2014-08-12T14:50:54Z Every four years for more than a century there’s been an International Congress of Mathematicians (ICM) held somewhere in the world. In 1900 it was where David Hilbert announced his famous collection of math problems—and it’s remained the top single periodic gathering for the world’s research mathematicians.

This year the ICM is in Seoul, and I’m going to it today. I went to the ICM once before—in Kyoto in 1990. Mathematica was only two years old then, and mathematicians were just getting used to it. Plenty already used it extensively—but at the ICM there were also quite a few who said, “I do pure mathematics. How can Mathematica possibly help me?”

Mathematics

Twenty-four years later, the vast majority of the world’s pure mathematicians do in fact use Mathematica in one way or another. But there’s nevertheless a substantial core of pure mathematics that still gets done pretty much the same way it’s been done for centuries—by hand, on paper.

Ever since the 1990 ICM I’ve been wondering how one could successfully inject technology into this. And I’m excited to say that I think I’ve recently begun to figure it out. There are plenty of details that I don’t yet know. And to make what I’m imagining real will require the support and involvement of a substantial set of the world’s pure mathematicians. But if it’s done, I think the results will be spectacular—and will surely change the face of pure mathematics at least as much as Mathematica (and for a younger generation, Wolfram|Alpha) have changed the face of calculational mathematics, and potentially usher in a new golden age for pure mathematics.

Workflow of pure math

The whole story is quite complicated. But for me one important starting point is the difference in the typical workflows for calculational mathematics and pure mathematics. Calculational mathematics tends to involve setting up calculational questions, and then working through them to get results—just like in typical interactive Mathematica sessions. But pure mathematics tends to involve taking mathematical objects, results or structures, coming up with statements about them, and then giving proofs to show why those statements are true.

How can we usefully insert technology into this workflow? Here’s one simple way. Think about Wolfram|Alpha. If you enter 2+2, Wolfram|Alpha—like Mathematica—will compute 4. But if you enter new york—or, for that matter, 2.3363636 or cos(x) log(x)—there’s no single “answer” for it to compute. And instead what it does is to generate a report that gives you a whole sequence of “interesting facts” about what you entered.

Part of Wolfram|Alpha's output for cos(x) log(x)

And this kind of thing fits right into the workflow for pure mathematics. You enter some mathematical object, result or structure, and then the system tries to tell you interesting things about it—just like some extremely wise mathematical colleague might. You can guide the system if you want to, by telling it what kinds of things you want to know about, or even by giving it a candidate statement that might be true. But the workflow is always the Wolfram|Alpha-like “what can you tell me about that?” rather than the Mathematica-like “what’s the answer to that?”
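
In the Wolfram Language one can already invoke that report-style workflow programmatically; a minimal example:

WolframAlpha["cos(x) log(x)"]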

Wolfram|Alpha already does quite a lot of this kind of thing with mathematical objects. Enter a number, or a mathematical expression, or a graph, or a probability distribution, or whatever, and Wolfram|Alpha will use often-quite-sophisticated methods to try to tell you a collection of interesting things about it.

Wolfram|Alpha tells you interesting things about mathematical objects—here "petersen graph", "stellated dodecahedron", "pareto distribution", and "42424"

But to really be useful in pure mathematics, there’s something else that’s needed. In addition to being able to deal with concrete mathematical objects, one also has to be able to deal with abstract mathematical structures.

Countless pure mathematical papers start with things like, “Let F be a field with such-and-such properties.” We need to be able to enter something like this—then have our system automatically give us interesting facts and theorems about F, in effect creating a whole automatically generated paper that tells the story of F.

So what would be involved in creating a system to do this? Is it even possible? There are several different components, all quite difficult and time consuming to build. But based on my experiences with Mathematica, Wolfram|Alpha, and A New Kind of Science, I am quite certain that with the right leadership and enough effort, all of them can in fact be built.

A key part is to have a precise symbolic description of mathematical concepts and constructs. Lots of this now already exists—after more than a quarter century of work—in Mathematica. Because built right into the Wolfram Language are very general ways to represent geometries, or equations, or stochastic processes or quantifiers. But what’s not built in are representations of pure mathematical concepts like bijections or abstract semigroups or pullbacks.

Mathematica Pura

Over the years, plenty of mathematicians have implemented specific cases. But could we systematically extend the Wolfram Language to cover the whole range of pure mathematics—and make a kind of “Mathematica Pura”? The answer is unquestionably yes. It’ll be fascinating to do, but it’ll take lots of difficult language design.

I’ve been doing language design now for 35 years—and it’s the hardest intellectual activity I know. It requires a curious mixture of clear thinking, aesthetics and pragmatic judgement. And it involves always seeking the deepest possible understanding, and trying to do the broadest unification—to come up in the end with the cleanest and “most obvious” primitives to represent things.

Today the main way pure mathematics is described—say in papers—is through a mixture of mathematical notation and natural language, together with a few diagrams. And in designing a precise symbolic language for pure mathematics, this has to be the starting point.

One might think that somehow mathematical notation would already have solved the whole problem. But there’s actually only a quite small set of constructs and concepts that can be represented with any degree of standardization in mathematical notation—and indeed many of these are already in the Wolfram Language.

So how should one go further? The first step is to understand what the appropriate primitives are. The whole Wolfram Language today has about 5000 built-in functions—together with many millions of built-in standardized entities. My guess is that to broadly support pure mathematics there would need to be something less than a thousand other well-designed functions that in effect define frameworks—together with maybe a few tens of thousands of new entities or their analogs.

Wolfram Language function and entity categories

Take something like function spaces. Maybe there’ll be a FunctionSpace function to represent a function space. Then there’ll be various operations on function spaces, like PushForward or MetrizableQ. Then there’ll be lots of named function spaces, like “CInfinity”, with various kinds of parameterizations.

Underneath, everything’s just a symbolic expression. But in the Wolfram Language there end up being three immediate ways to input things, all of which are critical to having a convenient and readable language. The first is to use short notations—like + or ∀—as in standard mathematical notation. The second is to use carefully chosen function names—like MatrixRank or Simplex. And the third is to use free-form natural language—like trefoil knot or aleph0.

One wants to have short notations for some of the most common structural or connective elements. But one needs the right number: not too few, like in LISP, nor too many, like in APL. Then one wants to have function names made of ordinary words, arranged so that if one’s given something written in the language one can effectively just “read the words” to know at least roughly what’s going on in it.

Computers & humans

But in the modern Wolfram Language world there’s also free-form natural language. And the crucial point is that by using this, one can leverage all the various convenient—but sloppy—notations that actual mathematicians use and find familiar. In the right context, one can enter “L2” for Lebesgue Square Integrable—and the natural language system will take care of disambiguating it and inserting the canonical symbolic underlying form.

Ultimately every named construct or concept in pure mathematics needs to have a place in our symbolic language. Most of the 13,000+ entries in MathWorld. Material from the 5600 or so entries in the MSC2010 classification scheme. All the many things that mathematicians in any given field would readily recognize when told their names.

But, OK, so let’s say we manage to create a precise symbolic language that captures the concepts and constructs of pure mathematics. What can we do with it?

One thing is to use it “Wolfram|Alpha style”: you give free-form input, which is then interpreted into the language, and then computations are done, and a report is generated.

But there’s something else too. If we have a sufficiently well-designed symbolic language, it’ll be useful not only to computers but also to humans. In fact, if it’s good enough, people should prefer to write out their math in this language than in their current standard mixture of natural language and mathematical notation.

When I write programs in the Wolfram Language, I pretty much think directly in the language. I’m not coming up with a description in English of what I’m trying to do and then translating it into the Wolfram Language. I’m forming my thoughts from the beginning in the Wolfram Language—and making use of its structure to help me define those thoughts.

If we can develop a sufficiently good symbolic language for pure mathematics, then it’ll provide something for pure mathematicians to think in too. And the great thing is that if you can describe what you’re thinking in a precise symbolic language, there’s never any ambiguity about what anything means: there’s a precise definition that you can just go to the documentation for the language to find.

And once pure math is represented in a precise symbolic language, it becomes in effect something on which computation can be done. Proofs can be generated or checked. Searches for theorems can be done. Connections can automatically be made. Chains of prerequisites can automatically be found.

But, OK, so let’s say we have the raw computational substrate we need for pure mathematics. How can we use this to actually implement a Wolfram|Alpha-like workflow where we enter descriptions of things, and then in effect automatically get mathematical wisdom about them?

There are two seemingly different directions one can go. The first is to imagine abstractly enumerating possible theorems about what has been entered, and then using heuristics to decide which of them are interesting. The second is to start from computable versions of the millions of theorems that have actually been published in the literature of mathematics, and then figure out how to connect these to whatever has been entered.

Each of these directions in effect reflects a slightly different view of what doing mathematics is about. And there’s quite a bit to say about each direction.

Math by enumeration

Let’s start with theorem enumeration. In the simplest case, one can imagine starting from an axiom system and then just enumerating true theorems based on that system. There are two basic ways to do this. The first is to enumerate possible statements, and then to use (implicit or explicit) theorem-proving technology to try to determine which of them are true. And the second is to enumerate possible proofs, in effect treeing out possible ways the axioms can be applied to get theorems.

It’s easy to do either of these things for something like Boolean algebra. And the result is that one gets a sequence of true theorems. But if a human looks at them, many of them seem trivial or uninteresting. So then the question is how to know which of the possible theorems should actually be considered “interesting enough” to be included in a report that’s generated.

My first assumption was that there would be no automatic approach to this—and that “interestingness” would inevitably depend on the historical development of the relevant area of mathematics. But when I was working on A New Kind of Science, I did a simple experiment for the case of Boolean algebra.

Partial list of Boolean algebra theorems, from p. 817 of "A New Kind of Science"

There are 14 theorems of Boolean algebra that are usually considered “interesting enough” to be given names in textbooks. I took all possible theorems and listed them in order of complexity (number of variables, number of operators, etc). And the surprising thing I found is that the set of named theorems corresponds almost exactly to the set of theorems that can’t be proved just from ones that precede them in the list. In other words, the theorems which have been given names are in a sense exactly the minimal statements of new information about Boolean algebra.
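
Just to make the flavor concrete, one of those named theorems, De Morgan's law, can be checked mechanically in the Wolfram Language today (a small illustration of verification, not of the enumeration procedure itself):

TautologyQ[Equivalent[! (p && q), ! p || ! q], {p, q}] (* De Morgan's law: returns True *)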

Boolean algebra is of course a very simple case. And in the kind of enumeration I just described, once one’s got the theorems corresponding to all the axioms, one would conclude that there aren’t any more “interesting theorems” to find—which for many mathematical theories would be quite silly. But I think this example is a good indication of how one can start to use automated heuristics to figure out which theorems are “worth reporting on”, and which are, for example, just “uninteresting embellishments”.

Interestingness

Of course, the general problem of ranking “what’s interesting” comes up all over Wolfram|Alpha. In mathematical examples, one’s asking “what region is interesting to plot?”, “what alternate forms are interesting?” and so on. When one enters a single number, one’s also asking “what closed forms are interesting enough to show?”—and to know this, one for example has to invent rankings for all sorts of mathematical objects (how complicated should one consider π relative to log(343) relative to Khinchin’s Constant, and so on?).

Wolfram|Alpha shows possible closed forms for the continued fraction "137.036"

So in principle one can imagine having a system that takes input and generates “interesting” theorems about it. Notice that while in a standard Mathematica-like calculational workflow, one would be taking input and “computing an answer” from it, here one’s just “finding interesting things to say about it”.

The character of the input is different too. In the calculational case, one’s typically dealing with an operation to be performed. In the Wolfram|Alpha-like pure mathematical case, one’s typically just giving a description of something. In some cases that description will be explicit. A specific number. A particular equation. A specific graph. But more often it will be implicit. It will be a set of constraints. One will say (to use the example from above), “Let F be a field,” and then one will give constraints that the field must satisfy.

In a sense an axiom system is a way of giving constraints too: it doesn’t say that such-and-such an operator “is Nand”; it just says that the operator must satisfy certain constraints. And even for something like standard Peano arithmetic, we know from Gödel’s Theorem that we can never ultimately resolve the constraints–we can never nail down that the thing we denote by “+” in the axioms is the particular operation of ordinary integer addition. Of course, we can still prove plenty of theorems about “+”, and those are what we choose from for our report.

So given a particular input, we can imagine representing it as a set of constraints in our precise symbolic language. Then we would generate theorems based on these constraints, and heuristically pick the “most interesting” of them.

One day I’m sure doing this will be an important part of pure mathematical work. But as of now it will seem quite alien to most pure mathematicians—because they are not used to “disembodied theorems”; they are used to theorems that occur in papers, written by actual mathematicians.

And this brings us to the second approach to the automatic generation of “mathematical wisdom”: start from the historical corpus of actual mathematical papers, and then make connections to whatever specific input is given. So one is able to say for example, “The following theorem from paper X applies in such-and-such a way to the input you have given”, and so on.

Curating the math corpus

So how big is the historical corpus of mathematics? There’ve probably been about 3 million mathematical papers published altogether—or about 100 million pages, growing at a rate of about 2 million pages per year. And in all of these papers, perhaps 5 million distinct theorems have been formally stated.

So what can be done with these? First, of course, there’s simple search and retrieval. Often the words in the papers will make for better search targets than the more notational material in the actual theorems. But with the kind of linguistic-understanding technology for math that we have in Wolfram|Alpha, it should not be too difficult to build what’s needed to do good statistical retrieval on the corpus of mathematical papers.

But can one go further? One might think about tagging the source documents to improve retrieval. But my guess is that most kinds of static tagging won’t be worth the trouble; just as one’s seen for the web in general, it’ll be much easier and better to make the search system more sophisticated and content-aware than to add tags document by document.

What would unquestionably be worthwhile, however, is to put the theorems into a genuine computable form: to actually take theorems from papers and rewrite them in a precise symbolic language.

Will it be possible to do this automatically? Eventually I suspect large parts of it will. Today we can take small fragments of theorems from papers and use the linguistic understanding system built for Wolfram|Alpha to turn them into pieces of Wolfram Language code. But it should gradually be possible to extend this to larger fragments—and eventually get to the point where it takes, at most, modest human effort to convert a typical theorem to precise symbolic form.

So let’s imagine we curate all the theorems from the literature of mathematics, and get them in computable form. What would we do then? We could certainly build a Wolfram|Alpha-like system that would be quite spectacular—and very useful in practice for doing lots of pure mathematics.

Undecidability bites

But there will inevitably be some limitations—resulting in fact from features of mathematics itself. For example, it won’t necessarily be easy to tell what theorem might apply to what, or even what theorems might be equivalent. Ultimately these are classic theoretically undecidable problems—and I suspect that they will often actually be difficult in practical cases too. And at the very least, all of them involve the same kind of basic process as automated theorem proving.

And what this suggests is a kind of combination of the two basic approaches we’ve discussed—where in effect one takes the complete corpus of published mathematics, and views it as defining a giant 5-million-axiom formal system, and then follows the kind of automated theorem-enumeration procedure we discussed to find “interesting things to say”.

Math: science or art?

So, OK, let’s say we build a wonderful system along these lines. Is it actually solving a core problem in doing pure mathematics, or is it missing the point?

I think it depends on what one sees the nature of the pure mathematical enterprise as being. Is it science, or is it art? If it’s science, then being able to make more theorems faster is surely good. But if it’s art, that’s really not the point. If doing pure mathematics is like creating a painting, automation is going to be largely counterproductive—because the core of the activity is in a sense a form of human expression.

This is not unrelated to the role of proof. To some mathematicians, what matters is just the theorem: knowing what’s true. The proof is essentially backup to ensure one isn’t making a mistake. But to other mathematicians, proof is a core part of the content of the mathematics. For them, it’s the story that brings mathematical concepts to light, and communicates them.

So what happens when we generate a proof automatically? I had an interesting example about 15 years ago, when I was working on A New Kind of Science, and ended up finding the simplest axiom system for Boolean algebra (just the single axiom ((p∘q)∘r)∘(p∘((p∘r)∘p))==r, as it turned out). I used equational-logic automated theorem-proving (now built into FullSimplify) to prove the correctness of the axiom system. And I printed the proof that I generated in the book:

Proof of a Boolean-algebra axiom system, from pp. 811–812 of "A New Kind of Science"

It has 343 steps, and in ordinary-size type would be perhaps 40 pages long. And to me as a human, it’s completely incomprehensible. One might have thought it would help that the theorem prover broke the proof into 81 lemmas. But try as I might, I couldn’t really find a way to turn this automated proof into something I or other people could understand. It’s nice that the proof exists, but the actual proof itself doesn’t tell me anything.
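
For what it’s worth, the easy direction is trivial to check in the Wolfram Language: reading ∘ as Nand, the axiom does hold in ordinary two-valued Boolean algebra. This is only a sanity check, of course, not the hard part that the automated prover handled:

TautologyQ[
 Equivalent[Nand[Nand[Nand[p, q], r], Nand[p, Nand[Nand[p, r], p]]], r],
 {p, q, r}]  (* returns True *)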

Proof as story

And the problem, I think, is that there’s no “conceptual story” around the elements of the proof. Even if the lemmas are chosen “structurally” as good “waypoints” in the proof, there are no cognitive connections—and no history—around these lemmas. They’re just disembodied, and apparently disconnected, facts.

So how can we do better? If we generate lots of similar proofs, then maybe we’ll start seeing similar lemmas a lot, and through being familiar they will seem more meaningful and comprehensible. And there are probably some visualizations that could help us quickly get a good picture of the overall structure of the proof. And of course, if we manage to curate all known theorems in the mathematics literature, then we can potentially connect automatically generated lemmas to those theorems.

It’s not immediately clear how often that will be possible—and indeed in existing examples of computer-assisted proofs, like for the Four Color Theorem, the Kepler Conjecture, or the simplest universal Turing machine, my impression is that the often-computer-generated lemmas that appear rarely correspond to known theorems from the literature.

But despite all this, I know at least one example showing that with enough effort, one can generate proofs that tell stories that people can understand: the step-by-step solutions system in Wolfram|Alpha Pro. Millions of times a day students and others compute things like integrals with Wolfram|Alpha—then ask to see the steps.

Wolfram|Alpha's step-by-step solution for an indefinite integral
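
For comparison, the underlying computation itself is a single Wolfram Language call (the integrand here is just an arbitrary example, not necessarily the one in the screenshot):

Integrate[x^2 Exp[x], x]  (* gives E^x (2 - 2 x + x^2) *)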

It’s notable that actually computing the integral is much easier than figuring out good steps to show; in fact, it takes some fairly elaborate algorithms and heuristics to generate steps that successfully communicate to a human how the integral can be done. But the example of step-by-step in Wolfram|Alpha suggests that it’s at least conceivable that with enough effort, it would be possible to generate proofs that are readable as “stories”—perhaps even selected to be as short and simple as possible (“proofs from The Book”, as Erdős would say).

Of course, while these kinds of automated methods may eventually be good at communicating the details of something like a proof, they won’t realistically be able to communicate—or even identify—overarching ideas and motivations. Needless to say, present-day pure mathematics papers are often quite deficient in communicating these too. Because in an effort to ensure rigor and precision, many papers tend to be written in a very formal way that cannot successfully represent the underlying ideas and motivations in the mind of the author—with the result that some of the most important ideas in mathematics are transmitted through an essentially oral tradition.

It would certainly help the progress of pure mathematics if there were better ways to communicate its content. And perhaps having a precise symbolic language for pure mathematics would make it easier to express concretely some of those important points that are currently left unwritten. But one thing is for sure: having such a language would make it possible to take a theorem from anywhere, and—like with a typical Wolfram Language code fragment—immediately be able to plug it in anywhere else, and use it.

But back to the question of whether automation in pure mathematics can ultimately make sense. I consider it fairly clear that a Wolfram|Alpha-like “pure math assistant” would be useful to human mathematicians. I also consider it fairly clear that having a good, precise, symbolic language—a kind of Mathematica Pura that’s a well-designed follow-on to standard mathematical notation—would be immensely helpful in formulating, checking and communicating math.

Automated discovery

But what about a computer just “going off and doing math by itself”? Obviously the computer can enumerate theorems, and even use heuristics to select ones that might be considered interesting to human mathematicians. And if we curate the literature of mathematics, we can do extensive “empirical metamathematics” and start trying to recognize theorems with particular characteristics, perhaps by applying graph-theoretic criteria on the network of theorems to see what counts as “surprising” or a “powerful” theorem. There’s also nothing particularly difficult—like in WolframTones—about having the computer apply aesthetic criteria deduced from studying human choices.
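
Here’s a minimal sketch of the kind of graph-theoretic criterion I have in mind. The theorem-dependency graph below is entirely made up for illustration, and “powerful” is crudely proxied by betweenness centrality:

g = Graph[{"Lemma1" -> "ThmA", "Lemma2" -> "ThmA", "ThmA" -> "ThmB",
   "ThmA" -> "ThmC", "ThmB" -> "ThmD"}];
Reverse[Sort[AssociationThread[VertexList[g], BetweennessCentrality[g]]]]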

But I think the real question is whether the computer can build up new conceptual frameworks and structures—in effect new mathematical theories. Certainly some theorems found by enumeration will be surprising and indicative of something fundamentally new. And it will surely be impressive when a computer can take a large collection of theorems—whether generated or from the literature—and discover correlations among them that indicate some new unifying principle. But I would expect that in time the computer will be able not only to identify new structures, but also name them, and start building stories about them. Of course, it is for humans to decide whether they care about where the computer is going, but the basic character of what it does will, I suspect, be largely indistinguishable from many forms of human pure mathematics.

All of this is still fairly far in the future, but there’s already a great way to discover math-like things today—that’s not practiced nearly as much as it should be: experimental mathematics. The term has slightly different meanings to different people. For me it’s about going out and studying what mathematical systems do by running experiments on them. And so, for example, if we want to find out about some class of cellular automata, or nonlinear PDEs, or number sequences, or whatever, we just enumerate possible cases and then run them and see what they do.
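
In the Wolfram Language that kind of enumeration is a one-liner. For example, to survey the first few elementary cellular automaton rules:

GraphicsGrid[
 Partition[
  Table[ArrayPlot[CellularAutomaton[r, {{1}, 0}, 40], ImageSize -> 60], {r, 0, 29}],
  6]]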

There’s a lot to discover like this. And certainly it’s a rich way to generate observations and hypotheses that can be explored using the traditional methodologies of pure mathematics. But the real thrust of what can be done does not fit into what pure mathematicians typically think of as math. It’s about exploring the “flora and fauna”—and principles—of the universe of possible systems, not about building up math-like structures that can be studied and explained using theorems and proofs. Which is why—to quote the title of my book—I think one should best consider this a new kind of science, rather than something connected to existing mathematics.

In discussing experimental mathematics and A New Kind of Science, it’s worth mentioning that in some sense it’s surprising that pure mathematics is doable at all—because if one just starts asking absolutely arbitrary questions about mathematical systems, many of them will end up being undecidable.

This is particularly obvious when one’s out in the computational universe of possible programs, but it’s also true for programs that represent typical mathematical systems. So why isn’t undecidability more of a problem for typical pure mathematics? The answer is that pure mathematics implicitly tends to select what it studies so as to avoid undecidability. In a sense this seems to be a reflection of history: pure mathematics follows what it has historically been successful in doing, and in that way ends up navigating around undecidability—and producing the millions of theorems that make up the corpus of existing pure mathematics.

OK, so those are some issues and directions. But where are we at in practice in bringing computational knowledge to pure mathematics?

Getting it done

There’s certainly a long history of related efforts. The works of Peano and Whitehead and Russell from a century ago. Hilbert’s program. The development of set theory and category theory. And by the 1960s, the first computer systems—such as Automath—for representing proof structures. Then from the 1970s, systems like Mizar that attempted to provide practical computer frameworks for presenting proofs. And in recent times, increasingly popular “proof assistants” based on systems like Coq and HOL.

One feature of essentially all these efforts is that they were conceived as defining a kind of “low-level language” for mathematics. Like most of today’s computer languages, they include a modest number of primitives, then imagine that essentially any actual content must be built externally, by individual users or in libraries.

But the new idea in the Wolfram Language is to have a knowledge-based language, in which as much actual knowledge as possible is carefully designed into the language itself. And I think that just like in general computing, the idea of a knowledge-based language is going to be crucial for injecting computation into pure mathematics in the most effective and broadly useful way.

So what’s involved in creating our Mathematica Pura—an extension to the Wolfram Language that builds in the actual structure and content of pure math? At the lowest level, the Wolfram Language deals with arbitrary symbolic expressions, which can represent absolutely anything. But then the language uses these expressions for many specific purposes. For example, it can use a symbol x to represent an algebraic variable. And given this, it has many functions for handling symbolic expressions—interpreted as mathematical or algebraic expressions—and doing various forms of math with them.

The emphasis of the math in Mathematica and the Wolfram Language today is on practical, calculational, math. And by now it certainly covers essentially all the math that has survived from the 19th century and before. But what about more recent math? Historically, math itself went through a transition about a century ago. Just around the time modernism swept through areas like the arts, math had its own version: it started to consider systems that emerged purely from its own formalism, without regard for obvious connection to the outside world.

And this is the kind of math—through developments like Bourbaki and beyond—that came to dominate pure mathematics in the 20th century. And inevitably, a lot of this math is about defining abstract structures to study. In simple cases, it seems like one might represent these structures using some hierarchy of types. But the types need to be parametrized, and quite quickly one ends up with a whole algebra or calculus of types—and it’s just as well that in the Wolfram Language one can use general symbolic expressions, with arbitrary heads, rather than just simple type descriptions.
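
Here’s a trivial illustration of the point. The heads below are just undefined symbols standing in for hypothetical pure-math constructs, but they can already be nested, parametrized and pattern-matched like anything else in the language:

space = moduliSpace[curves[genus -> 2], overField -> Rationals];  (* hypothetical heads, for illustration only *)
Head[space]                                    (* moduliSpace *)
MatchQ[space, moduliSpace[curves[___], ___]]   (* True *)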

As I mentioned early in this blog post, it’s going to take all sorts of new built-in functions to capture the frameworks needed to represent modern pure mathematics—together with lots of entity-like objects. And it’ll certainly take years of careful design to make a broad system for pure mathematics that’s really clean and usable. But there’s nothing fundamentally difficult about having symbolic constructs that represent differentiability or moduli spaces or whatever. It’s just language design, like designing ways to represent 3D images or remote computation processes or unique external entity references.

So what about curating theorems from the literature? Through Wolfram|Alpha and the Wolfram Language, not to mention for example the Wolfram Functions Site and the Wolfram Connected Devices Project, we’ve now had plenty of experience at the process of curation, and in making potentially complex things computable.

The eCF example

But to get a concrete sense of what’s involved in curating mathematical theorems, we did a pilot project over the last couple of years through the Wolfram Foundation, supported by the Sloan Foundation. For this project we picked a very specific and well-defined area of mathematics: research on continued fractions. Continued fractions have been studied continually since antiquity, but were at their most popular between about 1780 and 1910. In all there are around 7000 books and papers about them, running to about 150,000 pages.

We chose about 2000 documents, then set about extracting theorems and other mathematical information from them. The result was about 600 theorems, 1500 basic formulas, and about 10,000 derived formulas. The formulas were directly in computable form—and were in effect immediately able to join the 300,000+ formulas on the Wolfram Functions Site, which are all now included in Wolfram|Alpha. But with the theorems, our first step was just to treat them as entities themselves, with properties such as where they were first published, who discovered them, etc. And even at this level, we were able to insert some nice functionality into Wolfram|Alpha.

Some of the output from entering "Worpitzky theorem" into Wolfram|Alpha

But we also started trying to actually encode the content of the theorems in computable form. It took introducing some new constructs like LebesgueMeasure, ConvergenceSet and LyapunovExponent. But there was no fundamental problem in creating precise symbolic representations of the theorems. And just from these representations, it became possible to do computations like this in Wolfram|Alpha:

Wolfram|Alpha results for "continued fraction theorems for sqrt(7)" Wolfram|Alpha results for "continued fraction results involving quadratic irrationals" Wolfram|Alpha results for "who proved the Stern-Stolz theorem"

An interesting feature of the continued fraction project (dubbed “eCF”) was how the process of curation actually led to the discovery of some new mathematics. For after curating 50+ papers about the Rogers–Ramanujan continued fraction, it became clear that there were missing cases that could now be computed. And the result was the filling of a gap that Ramanujan had left open for 100 years.

Ramanujan's missing cases are now computable

There’s always a tradeoff between curating knowledge and creating it afresh. And so, for example, in the Wolfram Functions Site, there was a core of relations between functions that came from reference books and the literature. But it was vastly more efficient to generate other relations than to scour the literature to find them.

The Wolfram Function Site, and Wolfram|Alpha, generate relations between functions

But if the goal is curation, then what would it take to curate the complete literature of mathematics? In the eCF project, it took about 3 hours of mathematician time to encode each theorem in computable form. But all this work was done by hand, and in a larger-scale project, I am certain that an increasing fraction of it could be done automatically, not least using extensions of our Wolfram|Alpha natural language understanding system.

Of course, there are all sorts of practical issues. Newer papers are predominantly in TeX, so it’s not too difficult to pull out theorems with all their mathematical notation. But older papers need to be scanned, which requires math OCR, which has yet to be properly developed.

Then there are issues like whether theorems stated in papers are actually valid. And even whether theorems that were considered valid, say, 100 years ago are still considered valid today. For example, for continued fractions, there are lots of pre-1950 theorems that were successfully proved in their time, but which ignore branch cuts, and so wouldn’t be considered correct today.

And in the end of course it requires lots of actual, skilled mathematicians to guide the curation process, and to encode theorems. But in a sense this kind of mobilization of mathematicians is not completely unfamiliar; it’s something like what was needed when Zentralblatt was started in 1931, or Mathematical Reviews in 1941. (As a curious footnote, the founding editor of both these publications was Otto Neugebauer, who worked just down the hall from me at the Institute for Advanced Study in the early 1980s, but who I had no idea was involved in anything other than decoding Babylonian mathematics until I was doing research for this blog post.)

When it comes to actually constructing a system for encoding pure mathematics, there’s an interesting example: Theorema, started by Bruno Buchberger in 1995, and recently updated to version 2. Theorema is written in the Wolfram Language, and provides both a document-based environment for representing mathematical statements and proofs, and actual computation capabilities for automated theorem proving and so on.

A proof in Theorema

No doubt it’ll be an element of what’s ultimately built. But the whole project is necessarily quite large—perhaps the world’s first example of “big math”. So can the project get done in the world today? A crucial part is that we now have the technical capability to design the language and build the infrastructure that’s needed. But beyond that, the project also needs a strong commitment from the world’s mathematics community—as well as lots of contributions from individual mathematicians from every possible field. And realistically it’s not a project that can be justified on commercial grounds—so the likely $100+ million that it will need will have to come from non-commercial sources.

But it’s a great and important project—that promises to be pivotal for pure mathematics. In almost every field there are golden ages when dramatic progress is made. And more often than not, such golden ages are initiated by new methodology and the arrival of new technology. And this is exactly what I think will happen in pure mathematics. If we can mobilize the effort to curate known mathematics and build the system to use and generate computational knowledge around it, then we will not only succeed in preserving and spreading the great heritage of pure mathematics, but we will also thrust pure mathematics into a period of dramatic growth.

Large projects like this rely on strong leadership. And I stand ready to do my part, and to contribute the core technology that is needed. Now to move this forward, what it takes is commitment from the worldwide mathematics community. We have the opportunity to make the second decade of the 21st century really count in the multi-millennium history of pure mathematics. Let’s actually make it happen!

]]>
16
Stephen Wolfram <![CDATA[Entrepreneurism of Ideas: An Education Adventure]]> http://blog.internal.stephenwolfram.com/?p=8488 2014-07-28T17:56:15Z 2014-07-28T16:02:43Z For as long as I can remember, my all-time favorite activity has been creating ideas and turning them into reality—a kind of “entrepreneurism of ideas”. And over the years—in science, technology and business—I think I’ve developed some pretty good tools and strategies for doing this, that I’ve increasingly realized would be good for a lot of other people (and organizations) too.

So how does one spread idea entrepreneurism—entrepreneurism centered on ideas rather than commercial enterprises? Somewhat unwittingly I think we’ve developed a rather good vehicle—that’s both a very successful educational program, and a fascinating annual adventure for me.

Twelve years ago my book A New Kind of Science had just come out, and we were inundated with people wanting to learn more, and get involved in research around it. We considered various alternatives, but eventually we decided to organize a summer school where we would systematically teach about our methodology, while mentoring each student to do a unique original project.

From the very beginning, the summer school was a big success. And over the years we’ve gradually improved and expanded it. It’s still the Wolfram Science Summer School—and its intellectual core is still A New Kind of Science. But today it has become a broader vehicle for passing on our tools and strategies for idea entrepreneurism.

This year’s summer school just ended last week. We had 63 students from 21 countries—with a fascinating array of backgrounds and interests. Most were in college or graduate school; a few were younger or older. And over the course of the three weeks of the summer school—with great energy and intellectual entrepreneurism—each student worked towards their own unique project.
At the Wolfram Science Summer School 2014
The summer school is part idea incubator, part course, part hackathon and part mentoring event. And it’s become a tradition for me to open it with a concentrated burst of idea entrepreneurism: a live experiment in which over the course of an hour or so I try—live and in public—to discover or invent something new.

Realistically, what makes this—and indeed much of what’s done at the summer school—possible is what’s now the Wolfram Language, with all its built-in knowledge and automation, as well as immediate presentation capabilities.

My rule for live experiments is that apart from spending a few minutes beforehand coming up with a topic (sometimes just by opening A New Kind of Science at random), I don’t think at all about what I’m going to do. The experiment is always fresh and spontaneous—and quite an adventure for all concerned. It’s a strange kind of intellectual performance, and it takes quite a bit of concentration. But I think it’s pretty educational to watch—not least because most people have never seen something done from scratch in real time like this.

There are always ups and downs in the course of a live experiment—and sometimes it seems that all is lost. But so far, in dozens of live experiments I’ve done, I’ve always found a way to navigate them to some kind of success. And seeing this always seems quite empowering to people; and makes this kind of idea entrepreneurism feel like something close at hand, that they can do too.

This year I actually did two live experiments. My first one was a piece of pure science that involved numbers. The idea is pretty elementary: just take a number, write it in base 2, manipulate its digits, then add it to the original number. Then iterate this many times. Here’s the little piece of Wolfram Language code I wrote during the live experiment to do this:
Repeated digit manipulation sequence from live experiment at Wolfram Science Summer School 2014
In A New Kind of Science, I did a version of this where the digit manipulation consisted of reversing the whole sequence of digits. But now I wanted to try something simpler: just rotating the digits by some number of positions. I wasn’t sure this would do anything interesting. But in the spirit of the live experiment, I wrote a little piece of code to find out:
Simple input produces complicated behavior
What happened was quintessential NKS (New Kind of Science). Even though the underlying rule was incredibly simple, the behavior was far from simple—and in many ways looked quite random.
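
The actual notebook code lives in the images above, but the idea is simple enough that a sketch along these lines captures it (the rotation amount and starting value here are just illustrative):

step[n_, r_] := n + FromDigits[RotateLeft[IntegerDigits[n, 2], r], 2]
seq = NestList[step[#, 1] &, 1, 200];
ArrayPlot[IntegerDigits[seq, 2, BitLength[Last[seq]]]]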

Here’s the whole notebook from the hour or so of live experiment:
The notebook from my live experiment
It’s got quite a few interesting results. And indeed—like many previous summer school live experiments—it’s got the core of what would make a nice research paper.

In addition to this pure science experiment I decided this year to do a second—more practical—live experiment. In a sense it was a meta experiment. Because it consisted of analyzing code of pretty much the type used to do live experiments. Specifically, I read in lots of code from the Wolfram Demonstrations Project, then started doing statistics on it.

At first I looked at the general distribution of functions used, and started analyzing correlations and so on. But then, following a suggestion from the audience, I decided to focus on one simple example, and just started looking specifically at the use of named colors. The result was this bar chart, showing that (for whatever reason) red and black are the most common named colors in this corpus of code:
Frequency of named colors in Wolfram Demonstrations
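
In case it’s useful, here’s roughly the kind of thing involved, sketched at the string level. Here corpus is assumed to be a list of Demonstration source-code strings, and matching on bare names rather than parsed expressions is obviously crude:

colorNames = {"Red", "Black", "Blue", "Green", "Gray", "White",
   "Orange", "Yellow", "Purple", "Pink", "Brown", "Cyan", "Magenta"};
counts = Counts[Flatten[StringCases[corpus, colorNames]]];
BarChart[Reverse[Sort[counts]], ChartLabels -> Automatic]
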
It’s always important to get visualizations at every step along the way. And in this particular live experiment, we quickly decided to visualize correlations between colors, generating bar charts showing what the distribution of colors is if one already knows that a certain color (shown as the background) appears:
Backgrounds show specific named colors; in each graph, bars show the frequency of other named colors in Demonstrations that include the background's named color.
My original idea for the live experiment was to look for repeated patterns of code that might suggest functions we’d want to name and implement. But we never quite got there—and when we ran out of time we were instead looking at how one could use “code corpus analysis” to develop good color palettes: a quite different, but interesting, direction that emerged from the experiment.

Doing live experiments in a sense provides a way to illustrate the spirit of idea entrepreneurism—as well as letting one introduce some specific methodologies and tools. But another important element of successful idea entrepreneurism is choosing a direction to pursue. And it’s become a tradition at our summer school that after I do my live experiment, I talk about the directions we’re currently pursuing—and about what science and technology I’m currently most excited about.

After that we launch into the most important business of the summer school: defining a project for each student. Over the years we’ve developed and steadily refined a whole process for doing this. I’m always accumulating lists of interesting problems and projects. And students often come in with definite ideas for projects. But I’ve found the best results come from pure real-time creativity: from learning about each student and then creatively coming up with a project that matches their skills and interests.

And this year, over the course of three fairly long days, we did that 63 times, defining a unique original project for each of our students.

There’s quite a bit of structure to the summer school. For example, there’s always “homework” done during the first week. Usually we pick some previously unexplored area of the computational universe, and ask students to find something interesting in it. This year lots of students found lots of interesting things—that are actually now being assembled into a paper.
Some of the Wolfram Science Summer School 2014 students' favorite cellular automata
Every day there are a few hours of lectures. About how to do a good computational experiment. About the types of systems studied in A New Kind of Science. About computational techniques. About implications for philosophy, music, engineering and more. About perception. About natural language. About what’s worked in previous student projects. About principles that emerge from A New Kind of Science. About all sorts of other things.

Being an instructor at the Wolfram Science Summer School has become a favorite activity for some of our top R&D employees. And as it happens, this year all the instructors were also summer school alumni from previous years (yes, we’ve done a lot of recruiting from the summer school). For three weeks they worked with students, bringing to bear on each project the kind of idea entrepreneurism that we’ve taught at the summer school—and practice at our company.

This year—not least because we’d just finished launching Wolfram Programming Cloud days before—I had the pleasure of spending plenty of time at the summer school, getting to know all the students (by the end, I knew everyone by name!), and watching lots of projects take shape. I’ve been involved with my share of hackathons and incubators. But the summer school is something different. It’s really about entrepreneurism of ideas: about the process of creating ideas and turning them into reality.
At the Wolfram Science Summer School 2014
And at the end of three weeks, there were 63 projects to present—and lots of interesting things discovered and invented:
Some student projects from the Wolfram Science Summer School 2014
Over the years at the Wolfram Science Summer School, there have been many hundreds of great projects done, as well as many careers launched.

There’s a lot to say about education. But I think for many people, doing a unique original project is the single most powerful and useful form of education there is. Doing this successfully is quintessential idea entrepreneurism. And I think that in the long run the most important achievement of the Wolfram Science Summer School may just be the framework it’s developed for spreading entrepreneurism of ideas.

The Wolfram Science Summer School is just one of a growing constellation of education initiatives that we’re involved in. (Another that runs alongside the summer school is our two-week Mathematica Summer Camp for high-school students—that directly uses ideas from the summer school.) And particularly with all the new technologies that we’ve been developing, there are vast new opportunities for education. For me the Wolfram Science Summer School is an important and fascinating success story about innovation in education—and an encouragement for us to do more.

]]>
1
Stephen Wolfram <![CDATA[Launching Mathematica 10—with 700+ New Functions and a Crazy Amount of R&D]]> http://blog.internal.stephenwolfram.com/?p=8131 2015-03-04T03:10:32Z 2014-07-09T18:52:30Z We’ve got an incredible amount of new technology coming out this summer. Two weeks ago we launched Wolfram Programming Cloud. Today I’m pleased to announce the release of a major new version of Mathematica: Mathematica 10.

Wolfram Mathematica 10

We released Mathematica 1 just over 26 years ago—on June 23, 1988. And ever since we’ve been systematically making Mathematica ever bigger, stronger, broader and deeper. But Mathematica 10—released today—represents the single biggest jump in new functionality in the entire history of Mathematica.

At a personal level it is very satisfying to see after all these years how successful the principles that I defined at the very beginning of the Mathematica project have proven to be. And it is also satisfying to see how far we’ve gotten with all the talent and hard work that has been poured into Mathematica over nearly three decades.

We’ll probably never know whether our commitment to R&D over all these years makes sense at a purely commercial level. But it has always made sense to me—and the success of Mathematica and our company has allowed us to take a very long-term view, continually investing in building layer upon layer of long-term technology.

One of the recent outgrowths—from combining Mathematica, Wolfram|Alpha and more—has been the creation of the Wolfram Language. And in effect Mathematica is now an application of the Wolfram Language.

But Mathematica still very much has its own identity too—as our longtime flagship product, and the system that has continually redefined technical computing for more than a quarter of a century.

And today, with Mathematica 10, more is new than in any single previous version of Mathematica. It is satisfying to see such a long curve of accelerating development—and to realize that there are more new functions being added with Mathematica 10 than there were functions altogether in Mathematica 1.
Mathematica functions over time, by version

So what is the new functionality in Mathematica 10? It’s a mixture of completely new areas and directions (like geometric computation, machine learning and geographic computation)—together with extensive strengthening, polishing and expanding of existing areas. It’s also a mixture of things I’ve long planned for us to do—but which had to wait for us to develop the necessary technology—together with things I’ve only fairly recently realized we’re in a position to tackle.

New functionality in Mathematica 10

When you first launch Mathematica 10 there are some things you’ll notice right away. One is that Mathematica 10 is set up to connect immediately to the Wolfram Cloud. Unlike Wolfram Programming Cloud—or the upcoming Mathematica Online—Mathematica 10 doesn’t run its interface or computations in the cloud. Instead, it maintains all the advantages of running these natively on your local computer—but connects to the Wolfram Cloud so it can have cloud-based files and other forms of cloud-mediated sharing, as well as the ability to access cloud-based parts of the Wolfram Knowledgebase.

If you’re an existing Mathematica user, you’ll notice some changes when you start using notebooks in Mathematica 10. Like there’s now autocompletion everywhere—for option values, strings, wherever. And there’s also a hovering help box that lets you immediately get function templates or documentation. And there’s also—as much requested by the user community—computation-aware multiple undo. It’s horribly difficult to know how and when you can validly undo Mathematica computations—but in Mathematica 10 we’ve finally managed to solve this to the point of having a practical multiple undo.

Another very obvious change in Mathematica 10 is that plots and graphics have a fresh new default look (you can get the old look with an option setting, of course):

Some new default styles in Mathematica 10

And as in lots of other areas, that’s just the tip of the iceberg. Underneath, there’s actually a whole powerful new mechanism of “plot themes”—where instead of setting lots of individual options, you can for example now just specify an overall theme for a plot—like “web” or “minimal” or “scientific”.

Plot themes in Mathematica 10

But what about more algorithmic areas? There’s an amazing amount there that’s new in Mathematica 10. Lots of new algorithms—including many that we invented in-house. Like the algorithm that lets Mathematica 10 routinely solve systems of numerical polynomial equations that have 100,000+ solutions. Or the cluster of algorithms we invented that for the first time give exact symbolic solutions to all sorts of hybrid differential equations or differential delay equations—making such equations now as accessible as standard ordinary differential equations.

Solving differential equations in Mathematica 10
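
As a small runnable illustration of the hybrid-equation idea (a numerical example, not the exact symbolic solving described above), here’s the classic bouncing ball, where WhenEvent switches the dynamics at each impact:

ball = NDSolveValue[{y''[t] == -9.8, y[0] == 5, y'[0] == 0,
    WhenEvent[y[t] == 0, y'[t] -> -0.9 y'[t]]}, y, {t, 0, 10}];
Plot[ball[t], {t, 0, 10}]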

Of course, when it comes to developing algorithms, we’re in a spectacular position these days. Because our multi-decade investment in coherent system design now means that in any new algorithm we develop, it’s easy for us to bring together algorithmic capabilities from all over our system. If we’re developing a numerical algorithm, for example, it’s easy for us to do sophisticated algebraic preprocessing, or use combinatorial optimization or graph theory or whatever. And we get to make new kinds of algorithms that mix all sorts of different fields and approaches in ways that were never possible before.

From the very beginning, one of our central principles has been to automate as much as possible—and to create not just algorithms, but complete meta-algorithms that automate the whole process of going from a computational goal to a specific computation done with a specific algorithm. And it’s been this kind of automation that’s allowed us over the years to “consumerize” more and more areas of computation—and to take them from being accessible only to experts, to being usable by anyone as routine building blocks.

And in Mathematica 10 one important area where this is happening is machine learning. Inside the system there are all kinds of core algorithms familiar to experts—logistic regression, random forests, SVMs, etc. And all kinds of preprocessing and scoring schemes. But to the user there are just two highly automated functions: Classify and Predict. And with these functions, it’s now easy to call on machine learning whenever one wants.

Machine learning in Mathematica 10
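
The calling pattern really is that simple. For instance, training a tiny classifier and predictor on made-up data and then using them:

c = Classify[{1.0 -> "small", 1.5 -> "small", 4.0 -> "big", 4.5 -> "big"}];
c[3.8]   (* classifies a new value *)

p = Predict[{1 -> 1.2, 2 -> 2.1, 3 -> 2.9, 4 -> 4.2}];
p[3.5]   (* predicts a numerical value *)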

There are huge new algorithmic capabilities in Mathematica 10 in graph theory, image processing, control theory and lots of other areas. Sometimes one’s not surprised that it’s at least possible to have such-and-such a function—even though it’s really nice to have it be as clean as it is in Mathematica 10. But in other cases it at first seems somehow impossible that the function could work.

There are all kinds of issues. Maybe the general problem is undecidable, or theoretically intractable. Or it’s ill conditioned. Or it involves too many cases. Or it needs too much data. What’s remarkable is how often—by being algorithmically sophisticated, and by leveraging what we’ve built in Mathematica and the Wolfram Language—it’s possible to work around these issues, and to build a function that covers the vast majority of important practical cases.

Another important issue is just how much we can represent and do computation on. Expanding this is a big emphasis in the Wolfram Language—and Mathematica 10 has access to everything that’s been developed there. And so, for example, in Mathematica 10 there’s an immediate symbolic representation for dates, times and time series—as well as for geolocations and geographic data.

An example of geographic visualization in Mathematica 10

The Wolfram Language has ways to represent a very broad range of things in the real world. But what about data on those things? Much of that resides in the Wolfram Knowledgebase in the cloud. Soon we’re going to be launching the Wolfram Discovery Platform, which is built to allow large-scale access to data from the cloud. But since that’s not the typical use of Mathematica, basic versions of Mathematica 10 are just set up for small-scale data access—and need explicit Wolfram Cloud Credits to get more.

Still, within Mathematica 10 there are plenty of spectacular new things that will be possible by using just small amounts of data from the Wolfram Knowledgebase.

A little while ago I found a to-do list for Mathematica that I wrote in 1991. Some of the entries on it were done in just a few years. But most required the development of towers of technology that took many years to build. And at least one has been a holdout all these years—until now.

On the to-do it was just “PDEs”. But behind those four letters are centuries of mathematics, and a remarkably complex tower of algorithmic requirements. Yes, Mathematica has been able to handle various kinds of PDEs (partial differential equations) for 20 years. But in Mathematica we always seek as much generality and robustness as possible, and that’s where the challenge has been. Because we’ve wanted to be able to handle PDEs in any kind of geometry. And while there are standard methods—like finite element analysis—for solving PDEs in different geometries, there’s been no good way to describe the underlying geometry in enough generality.

Over the years, we’ve put immense effort into the design of Mathematica and what’s now the Wolfram Language. And part of that design has involved developing broad computational representations for what have traditionally been thought of as mathematical concepts. It’s difficult—if fascinating—intellectual work, in effect getting “underneath” the math to create new, often more general, computational representations.

A few years ago we did it for probability theory and the whole cluster of things around it, like statistical distributions and random processes. Now in Mathematica 10 we’ve done it for another area: geometry.

Geometry examples in Mathematica 10

What we’ve got is really a fundamental extension to the domain of what can be represented computationally, and it’s going to be an important building block for many things going forward. And in Mathematica 10 it delivers some very powerful new functionality—including PDEs and finite elements.

So, what’s hard about representing geometry computationally? The problem is not in handling specific kinds of cases—there are a variety of methods for doing that—but rather in getting something that’s truly general, and extensible, while still being easy to use in specific cases. We’ve been thinking about how to do this for well over a decade, and it’s exciting to now have a solution.

It turns out that math in a sense gets us part of the way there—because it recognizes that there are various kinds of geometrical objects, from points to lines to surfaces to volumes, that are just distinguished by their dimensions. In computer systems, though, these objects are typically represented rather differently. 3D graphics systems, for example, typically handle points, lines and surfaces, but don’t really have a notion of volumes or solids. CAD systems, on the other hand, handle volumes and solids, but typically don’t handle points, lines and surfaces. GIS systems do handle both boundaries and interiors of regions—but only in 2D.

So why can’t we just “use the math”? The problem is that specific mathematical theories—and representations—tend once again to handle, or at least be convenient in, only specific kinds of cases. So, for example, one can describe geometry in terms of equations and inequalities—in effect using real algebraic geometry—but this is only convenient for simple “math-related” shapes. One can use combinatorial topology, which is essentially based on mesh regions, and which is quite general, but difficult to use directly—and doesn’t readily cover things like non-bounded regions. Or one could try using differential geometry—which may be good for manifolds, but doesn’t readily cover geometries with mixed dimensions, and isn’t closed under Boolean operations.

What we’ve built in effect operates “underneath the math”: it’s a general symbolic representation of geometry, which makes it convenient to apply any of these different mathematical or computational approaches. And so instead of having all sorts of separate “point in polygon”, “point in mesh”, “point on line” etc. functions, everything is based on a single general RegionMember function. And similarly Area, Volume, ArcLength and all their generalizations are just based on a single RegionMeasure function.

The result is a remarkably smooth and powerful way of doing geometry, which conveniently spans from middle-school triangle math to being able to describe the most complex geometrical forms for engineering and physics. What’s also important—and typical of our approach to everything—is that all this geometry is deeply integrated with the rest of the system. So, for example, one can immediately find equation solutions within a geometric region, or compute a maximum in it, or integrate over it—or, for that matter, solve a partial differential equation in it, with all the various kinds of possible boundary conditions conveniently being described.
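
Here’s a small taste of how that looks in practice (the particular region is arbitrary):

reg = RegionUnion[Disk[{0, 0}, 1], Rectangle[{0, 0}, {2, 1}]];
RegionMember[reg, {1.5, 0.5}]          (* True *)
RegionMeasure[reg]                     (* the area of the combined region *)
NIntegrate[x^2 + y^2, Element[{x, y}, reg]]
NDSolveValue[{-Laplacian[u[x, y], {x, y}] == 1,
   DirichletCondition[u[x, y] == 0, True]}, u, Element[{x, y}, reg]]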

The geometry language we have is very clean. But underneath it is a giant tower of algorithmic functionality—that relies on a fair fraction of the areas that we’ve been developing for the past quarter century. To the typical user there are few indications of this complexity—although perhaps the multi-hundred-page treatise on the details of going beyond automatic settings for finite elements in Mathematica 10 provides a clue.

Geometry is just one new area. The drive for generality continues elsewhere too. Like in image processing, where we’re now supporting most image processing operations not only in 2D but also in 3D images. Or in graph computation, where everything works seamlessly with directed graphs, undirected graphs, mixed graphs, multigraphs and weighted graphs. As usual, it’s taken developing all sorts of new algorithms and methods to deal with cases that in a sense cross disciplines, and so haven’t been studied before, even though it’s obvious they can appear in practice.

As I’ve mentioned, there are some things in Mathematica 10 that we’ve been able to do essentially because our technology stack has now reached the point where they’re possible. There are others, though, that in effect have taken solving a problem, and often a problem that we’ve been thinking about for a decade or two. An example of this is the system for handling formal math operators in Mathematica 10.

In a sense what we’re doing is to take the idea of symbolic representation one more step. In math, we’ve always allowed a variable like x to be symbolic, so it can correspond to any possible value. And we’ve allowed functions like f to be symbolic too. But what about mathematical operators like derivative? In the past, these have always been explicit—so for example they actually take derivatives if they can. But now we have a new notion of “inactive” functions and operators, which gives us a general way to handle mathematical operators purely symbolically, so that we can transform and manipulate expressions formally, while still maintaining the meaning of these operators.

Inactive functionality in Mathematica 10

This makes possible all sorts of new things—from conveniently representing complicated vector analysis expressions, to doing symbolic transformations not only on math but also on programs, to being able to formally manipulate objects like integrals, with built-in implementations of all the various generalizations of things like Leibniz’s rule.
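
In its simplest form, it looks like this:

expr = Inactive[Integrate][x Exp[x], x]   (* a held, purely formal integral *)
Activate[expr]                            (* now it actually evaluates: E^x (-1 + x) *)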

In building Mathematica 10, we’ve continued to push forward into uncharted computational—and mathematical—territory. But we’ve also worked to make Mathematica 10 even more convenient for areas like elementary math. Sometimes it’s a challenge to fit concepts from elementary math with the overall generality that we want to maintain. And often it requires quite a bit of sophistication to make it work. But the result is a wonderfully seamless transition from the elementary to the advanced. And in Mathematica 10, we’ve once again achieved this for things like curve computations and function domains and ranges.

The development of the Wolfram Language has had many implications for Mathematica—first visible now in Mathematica 10. In addition to all sorts of interaction with real-world data and with external systems, there are some fundamental new constructs in the system itself. An example is key-value associations, which in effect introduce “named parts” throughout the system. Another example is the general templating system, important for programmatically constructing strings, files or web pages.

Using associations in Mathematica 10
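
For example (the details here are arbitrary):

record = <|"name" -> "Ada", "score" -> 97|>;
record["name"]                                                   (* "Ada" *)
TemplateApply[StringTemplate["`name` scored `score`."], record]  (* "Ada scored 97." *)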

With the Wolfram Language there are vast new areas of functionality—supporting new kinds of programming, new structures and new kinds of data, new forms of deployment, and new ways to integrate with other systems. And with all this development—and all the new products it’s making possible—one might worry that the traditional core directions of Mathematica would be left behind. But nothing is further from the truth. And in fact all the new Wolfram Language development has made possible even more energetic efforts in traditional Mathematica areas.

Partly that is the result of new software capabilities. Partly it is the result of new understanding that we’ve developed about how to push forward the design of a very large knowledge-based system. And partly it’s the result of continued strengthening and streamlining of our internal R&D processes.

We’re still a fairly small company (about 700 people), but we’ve been steadily ramping up our R&D output. And it’s amazing to see what we’ve been able to build for Mathematica 10. In the 19 months (588 days) since Mathematica 9 was released, we’ve finished more than 700 new functions that are now in Mathematica 10—and we’ve added countless enhancements to existing functions.

I think the fact that this is possible is a great tribute to the tools, team and organization we’ve built—and the strength of the principles under which we’ve been operating all these years.

To most people in the software business, if they knew the amount of R&D that’s gone into Mathematica 10, it would seem crazy. Most would assume that a 26-year-old product would be in a maintenance mode, with tiny updates being made every few years. But that’s not the story with Mathematica at all. Instead, 26 years after its initial release, its rate of growth is still accelerating. There’s going to be even more to come.

But today I’m pleased to announce that the fruits of a “crazy” amount of R&D are here: Mathematica 10.



]]>
0
Stephen Wolfram <![CDATA[Wolfram Programming Cloud Is Live!]]> http://blog.internal.stephenwolfram.com/?p=8159 2014-07-09T13:01:57Z 2014-06-23T18:38:52Z Twenty-six years ago today we launched Mathematica 1.0. And I am excited that today we have what I think is another historic moment: the launch of Wolfram Programming Cloud—the first in a sequence of products based on the new Wolfram Language.

Wolfram Programming Cloud

My goal with the Wolfram Language in general—and Wolfram Programming Cloud in particular—is to redefine the process of programming, and to automate as much as possible, so that once a human can express what they want to do with sufficient clarity, all the details of how it is done should be handled automatically.

I’ve been working toward this for nearly 30 years, gradually building up the technology stack that is needed—at first in Mathematica, later also in Wolfram|Alpha, and now in definitive form in the Wolfram Language. The Wolfram Language, as I have explained elsewhere, is a new type of programming language: a knowledge-based language, whose philosophy is to build in as much knowledge about computation and about the world as possible—so that, among other things, as much as possible can be automated.

The Wolfram Programming Cloud is an application of the Wolfram Language—specifically for programming, and for creating and deploying cloud-based programs.

How does it work? Well, you should try it out! It’s incredibly simple to get started. Just go to the Wolfram Programming Cloud in any web browser, log in, and press New. You’ll get what we call a notebook (yes, we invented those more than 25 years ago, for Mathematica). Then you just start typing code.

Type code in Wolfram Programming Cloud

It’s all interactive. When you type something, you can immediately run it, and see the result in the notebook.

Like let’s say you want to build a piece of code that takes text, figures out what language it’s in, then shows an image based on the flag of the largest country where it’s spoken.

First, you might want to try out the machine-learning language classifier built into the Wolfram Language:

Wolfram Language has a built-in machine-learning classifier

OK. That’s a good start. Now we have to find the largest country where it’s spoken:

Find the largest country that speaks a given language

Now we can get a flag:

Find the country's flag
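
The flag lookup, for instance, is the kind of one-liner you’d expect (the screenshots above carry the actual code):

CountryData["France", "Flag"]   (* an image of the French flag *)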

Notebooks in the Wolfram Programming Cloud can mix text and code and anything else, so it’s easy to document what you’re doing:

Notebooks in the Wolfram Programming Cloud let you mix text, code, and more

We’re obviously already making pretty serious use of the knowledge-based character of the Wolfram Language. But now let’s say that we want to make a custom graphic, in which we programmatically superimpose a language code on the flag.

It took me about 3 minutes to write a little function to do this, using image processing:

Image processing function to put a language code on a flag

And now we can test the function:

Testing the labeled-flag function

It’s interesting to see what we’ve got going on here. There’s a bit of machine learning, some data about human languages and about countries, some typesetting, and finally some image processing. What’s great about the Wolfram Language is that all this—and much much more—is built in, and the language is designed so that all these pieces fit perfectly together. (Yes, that design discipline is what I personally have spent a fair fraction of the past three decades of my life on.)

But OK, so we’ve got a function that does something. Now what can we do with it? Well, this is one of the big things about the Wolfram Programming Cloud: it lets us use the Wolfram Language to deploy the function to the cloud.

One way we can do that is to make a web API. And that’s very straightforward to do in the Wolfram Language. We just specify a symbolic API function—then deploy it to the cloud:

Specify a symbolic API function and deploy it to the cloud
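
Schematically, that deployment looks something like this, where labeledFlag stands for the little image-processing function defined above (its definition lives in the screenshots, so the name here is just a placeholder):

CloudDeploy[
 APIFunction[{"text" -> "String"},
  labeledFlag[#text] &,   (* labeledFlag: placeholder for the flag-labeling function above *)
  "PNG"]]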

And now from anywhere on the web, if we call this API by going to the appropriate URL, our Wolfram Language code will run in the Wolfram Cloud—and we’ll get a result back on the web, in this case as a PNG:

ay "bonjour," get a French flag

There are certainly lots of bells and whistles that we can add to this. We can make a fancier image. We can make the code more efficient by precomputing things. And so on. But to me it’s quite spectacular—and extremely useful—that in a matter of seconds I’m able to deploy something to the cloud that I can use from any website, web program, etc.

Here’s another example. This time I’m setting up a URL which, every time it’s visited, gives the computed current number of minutes until the next sunset, for the inferred location of the user:

Deployment code for the computed number of minutes until sunset

Every time you visit this URL, then, you get a number, as a piece of text. (You can also get JSON and lots of other things if you want.)
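
A minimal version of that deployment might look like the following. Note that Sunset[] here would just use the server’s inferred location, not anything clever about the requester, so this is only a sketch of the real thing:

CloudDeploy[
 Delayed[Round[QuantityMagnitude[DateDifference[Now, Sunset[], "Minute"]]], "Text"]]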

It’s easy to set up a dashboard too. Like here’s a countdown timer for sunset, which, web willing, updates every half second:

Deploy a counter for the number of seconds until sunset

How many seconds until sunset?

What about forms? Those are easy too. This creates a form that generates a map of a given location, with a disk of a given radius:

A line of code makes a web form to generate maps marked with disks
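
The shape of that one line is roughly as follows; the field names and units are my guesses rather than exactly what’s in the screenshot:

CloudDeploy[
 FormFunction[{"location" -> "Location", "radius" -> "Number"},
  GeoGraphics[GeoDisk[#location, Quantity[#radius, "Miles"]]] &, "PNG"]]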

Here’s the form:

The map-generating form deployed on the web

And here’s the result of submitting the form:

A map with a two-mile disk centered on the Empire State Building—it's that easy

There’s a lot of fancy technology being used here. Like even the fields in the form are “Smart Fields” (as indicated by their little icons), because they can accept not just literal input, but hundreds of types of arbitrary natural language—which gets interpreted by the same Natural Language Understanding technology that’s at the heart of Wolfram|Alpha. And, by the way, if, for example, your form needs a color, the Wolfram Programming Cloud will automatically create a field with a color picker. Or you can have radio buttons, or a slider, or whatever.

OK, but at this point, professional programmers may be saying, “This is all very nice, but how do I use this in my particular environment?” Well, we’ve gone to a lot of effort to make that easy. For example, with forms, the Wolfram Language has a very clean mechanism for letting you build them out of arbitrary XML templates, to give them whatever look and feel you want.

And when it comes to APIs, the Wolfram Programming Cloud makes it easy to create “embed code” for calling an API from any standard language:

Embed code for calling an API from any standard language

Soon it’ll also be easy to deploy to a mobile app. And in the future there’ll be Embedded Wolfram Engines and other things too.

So what does it all mean? I think it’s pretty important, because it really changes the whole process—and economics—of programming. I’ve even seen it quite dramatically within our own company. As the Wolfram Language and the Wolfram Programming Cloud have been coming together, there’ve been more and more places where we’ve been able to use them internally. And each time, it’s been amazing to see programming tasks that used to take weeks or months suddenly get done in days or less.

But much more than that, the whole knowledge-based character of the Wolfram Language makes feasible for the first time all sorts of programming that were basically absurd to consider before. And indeed within our own organization, that’s for example how it became possible to build Wolfram|Alpha—which is now millions of lines of Wolfram Language code.

But the exciting thing today is that with the launch of the Wolfram Programming Cloud, all this technology is now available to anyone, for projects large and small.

It’s set up so that anyone can just go to a web browser and—for free—start writing Wolfram Language code, and even deploying it on a small scale to the Wolfram Cloud. There are then a whole sequence of options available for larger deployments—including having your very own Wolfram Private Cloud within your organization.

Something to mention is that you don’t have to do everything in a web browser. It’s been a huge challenge to implement the Wolfram Programming Cloud notebook interface on the web—and there are definite limitations imposed by today’s web browsers and tools. But there’s also a native desktop version of the Wolfram Programming Cloud—which benefits from the 25+ years of interface engineering that we’ve done for Mathematica and CDF.

Wolfram Desktop

It’s very cool—and often convenient—to be able to use the Wolfram Programming Cloud purely on the web. But at least for now you get the very best experience by combining desktop and cloud, and running the native Wolfram Desktop interface connected to the Wolfram Cloud. What’s really neat is that it all fits perfectly together, so you can seamlessly transfer notebooks between cloud and desktop.

I’ve built some pretty complex software systems in my time. But the Wolfram Programming Cloud is the most complex I’ve ever seen. Of course, it’s based on the huge technology stack of the Wolfram Language. But the collection of interactions that have to go on in the Wolfram Programming Cloud between the Wolfram Language kernel, the Wolfram Knowledgebase, the Wolfram Natural Language Understanding System, the Wolfram Cloud, and all sorts of other subsystems is amazingly complex.

There are certainly still rough edges (and please don’t be shy in telling us about them!). Many things will, for example, get faster and more efficient. But I’m very pleased with what we’re able to launch today as the Wolfram Programming Cloud.

So if you’re going to try it out, what should you actually do? First, go to the Wolfram Programming Cloud on the web:

Wolfram Programming Cloud on the web

There’s a quick Getting Started video there. Or you can check out the Examples Gallery. Or you can go to Things to Try—and just start running Wolfram Language examples in the Wolfram Programming Cloud. If you’re an experienced programmer, I’d strongly recommend going through the Fast Introduction for Programmers:

The Wolfram Language: A Fast Introduction for Programmers

This should get you up to speed on the basic principles and concepts of the Wolfram Language, and quickly get you to the point where you can read most Wolfram Language code and just start “expanding your vocabulary” across its roughly 5000 built-in functions:

Function categories for the Wolfram Language

Today is an important day not only for our company and our technology, but also, I believe, for programming in general. There’s a lot that’s new in the Wolfram Programming Cloud—some in how far it’s been possible to take things, and some in basic ideas and philosophy. And in addition to dramatically simplifying and automating many kinds of existing programming, I think the Wolfram Programming Cloud is going to make possible whole new classes of software applications—and, I suspect, a wave of new algorithmically based startups.

For me, it’s been a long journey. But today I’m incredibly excited to start a new chapter—and to be able to see what people will be able to do with the Wolfram Language and the Wolfram Programming Cloud.



]]>
0
Stephen Wolfram <![CDATA[A Speech for (High-School) Graduates]]> http://blog.internal.stephenwolfram.com/?p=8108 2014-06-09T18:05:55Z 2014-06-09T18:00:24Z Last weekend I gave a speech at this year’s graduation event for the Stanford Online High School (OHS) that one of my children has been attending. Here’s the transcript:

Thank you for inviting me to be part of this celebration today—and congratulations to this year’s OHS graduates.

You know, as it happens, I myself never officially graduated from high school, and this is actually the first high school graduation I’ve ever been to.

It’s been fun over the past three years—from a suitable parental distance of course—to see my daughter’s experiences at OHS. One day I’m sure everyone will know about online high schools—but you’ll be able to say, “Yes, I was there when that way of doing such-and-such a thing was first invented—at OHS.”

It’s great to see the OHS community—and to see so many long-term connections being formed independent of geography. And it’s also wonderful to see students with such a remarkable diversity of unique stories.

Of course, for the graduates here today, this is the beginning of a new chapter in their stories.

I suspect some of you already have very definite life plans. Others are still exploring. It’s worth remembering that there’s no “one right answer” to life. Different people are amazingly different in what they’ll consider an “‘A’ in life”. I think the first challenge is always to understand what you really like. Then you’ve got to know what’s out there to do in the world. And then you’ve got to solve the puzzle of fitting the two together.

Maybe you’ll discover there’s a niche that already exists; maybe you’ll have to create one.

I’ve always been interested in the trajectories of people’s lives, and one thing I’ve noticed is that after some great direction has emerged in someone’s life, one can almost always look back and see the seeds of it very early.

Like I was recently a bit shocked actually to find some things I did when I was 12 years old—about systematizing knowledge and data—and to realize that what I was trying to do was incredibly similar to Wolfram|Alpha. And then to realize that my tendency to invent projects and organize other kids to help do them was awfully like leading an entrepreneurial company.

You know, it’s funny how things can play out. Back when I was a kid I was really interested in physics. And to do physics you have to do a lot of math calculations. Which I found really boring, and wasn’t very good at.

So what did I do? Well, I figured out that even though I might not be good at these calculations, I could make a computer be good at them. And needless to say, that’s what I did—and through a pretty straight path, that’s what brought the world Mathematica and Wolfram|Alpha.

You know, another thing was that when I was a kid I always had a hard time getting myself to do exercises from textbooks. I kept on thinking to myself, “Why am I doing this exercise when zillions of other people have already done it? Why don’t I do something different, that’s new, and mine?”

People might think: that must be really hard. But it’s not. It’s just that you have to learn not just about how to do stuff, but also about how to figure out what stuff to do. And actually one thing I’ve noticed is that in almost every area, the people who go furthest are not the ones with the best technical skills, but the ones who have the best strategy for figuring out what to do.

But I have to say that for me it’s just incredibly fun inventing new stuff—and that’s pretty much what I’ve spent my life doing.

I think most people don’t really internalize enough how stuff in our world gets made. I mean, everything we have in our civilization—our technology, our ways of doing things, whatever—had to be invented. It had to start with some person somewhere—maybe like you—having an idea. And then that idea got turned into reality.

It’s a wonderful thing going from nothing but an idea—to something real in the world. For me, that’s my favorite thing to do. And I’ve been fortunate enough to do that with a number of big projects, alternating between science, technology and business. On the face of it, my projects might look very different: building a new kind of science, creating a computer language, encoding the world’s knowledge in computational form.

But it turns out that at some level they’re really all the same. They’re all about taking some complicated area, drilling down to the essence of it, then doing a big project to build up to something that’s useful in the world.

And when you think about what it is you really like, and what you’re really good at, it’s important to be thematic. Maybe you like math. But why? Is it the definiteness? Problem solving? Elegance? Even at OHS you only get to learn about certain specific subjects. So to understand yourself, you have to take your reactions to them, and generalize—figure out the overall theme.

You know, something I’ve learned is that the more different areas I know about, the better. When I was a kid I learned Latin and Greek—and I was always complaining that they’d never be useful. But then I grew up—and had to make up names for products and things. And actually for years a big part of what I’ve done every day is to take ideas from very different areas that I’ve learned about—and bring them together to make new ideas.

One thing, if you want to do this, is that you really have to keep all those things you’ve learned at your fingertips. History of science can’t stay in a history of science class. It has to inform that clever social media idea you have, or that great new policy direction you come up with, or that artistic creation you’re making, or whatever. The real payoff comes not from doing well in the class, but from internalizing that way of thinking or that knowledge so it becomes part of you.

You know, as you think about what to do in the world, it’s worth remembering that some of the very best areas are ones that almost nobody’s heard of yet—and that there certainly aren’t classes about. But if you get into one of those new areas, it’s great—because there’s still all this basic ground-floor stuff to do there, and as the area grows, you get propelled by that.

I’ve been pretty lucky in that regard. Because early in life I got really interested in computation, and in the computational way of thinking about things. And I think it’s becoming clear that computation is really the single most important idea that’s emerged in the past century. And that even after all the technology that’s been built with it, we’re only just beginning to see its true significance.

And today, you just have to prepend the word “computational” to almost any existing field to get something that’s an exciting growth direction: computational law, computational medicine, computational archaeology, computational philosophy, computational photography, whatever.

And yes, to be able to do all this stuff, you have to get familiar with the computational way of thinking, and with things like programming. That’s going to be an increasingly important literacy skill. And I have to say that in general, even more valuable than learning the content of specific fields is to learn general approaches and tools—and keep up to date with them.

It’s not for everybody, but I myself happen to have spent a lot of time actually building tools. And for me the most powerful thing has been being able to build a tower of tools, then to use them to figure things out, then to use those things to go on and build more tools. And I’ve been fortunate enough to be able to go on doing that for more than 30 years now.

You know, it’s always an interesting judgment call when to go on in a life direction you’re already going, and when to branch into something new, or to chase some new opportunity. For myself, I try to maintain a portfolio—continuing to build on what I’ve done, but also always making sure to add new things.

One of the consequences of that is that at any given time, there’s always an area where I’m basically a beginner—and just learning. Right now, for example, that happens to be programming education. We’ve managed to automate a lot of programming—which I think is going to be a pretty big deal in general—but for education it means there’s a much broader range of people and places where programming can be taught. But how should it be done? Math and language and areas like that have centuries of education experience to draw on. But with what’s now possible with programming education, we’ve got a completely new situation, that kind of has to be figured out from the ground up. It’s always a little scary doing something like that, and I always think, “Maybe this is finally an area I’ll never figure out.” But somehow if one has the confidence to keep going, it always seems to come together—and it’s really satisfying.

You know, when I was a kid I learned some things in school and some things on my own. I was always doing projects about this or that. And somehow I’ve just kept on doing projects and learning more and more things. You’ve been exposed to lots of interesting things at OHS. Make sure you expose yourselves to lots more things in college or wherever you’re going next. And don’t forget to do projects—to do things that are really yours, and that people can look at and really get a sense of you from.

And don’t just learn stuff. Keep thinking about strategy too. Keep trying to solve the puzzle of what your best niche is. You might find it or you might have to create it. But there will be something great out there for you. And never assume that the world won’t let you get to it. It’s all part of the puzzle to solve. And the seeds are already there in who you are; you just have to find them, nurture them, and keep pushing to let them grow as each chapter of your story unfolds…

]]>
1