Stephen Wolfram Blog

Wolfram Language Artificial Intelligence: The Image Identification Project
Wed, 13 May 2015
Stephen Wolfram

“What is this a picture of?” Humans can usually answer such questions instantly, but in the past it’s always seemed out of reach for computers to do this. For nearly 40 years I’ve been sure computers would eventually get there—but I’ve wondered when.

I’ve built systems that give computers all sorts of intelligence, much of it far beyond the human level. And for a long time we’ve been integrating all that intelligence into the Wolfram Language.

Now I’m excited to be able to say that we’ve reached a milestone: there’s finally a function called ImageIdentify built into the Wolfram Language that lets you ask, “What is this a picture of?”—and get an answer.

And today we’re launching the Wolfram Language Image Identification Project on the web to let anyone easily take any picture (drag it from a web page, snap it on your phone, or load it from a file) and see what ImageIdentify thinks it is:

Give the Wolfram Language Image Identify Project a picture, and it uses the language's ImageIdentify function to identify it

It won’t always get it right, but most of the time I think it does remarkably well. And to me what’s particularly fascinating is that when it does get something wrong, the mistakes it makes mostly seem remarkably human.

It’s a nice practical example of artificial intelligence. But to me what’s more important is that we’ve reached the point where we can integrate this kind of “AI operation” right into the Wolfram Language—to use as a new, powerful building block for knowledge-based programming.

Now in the Wolfram Language

In a Wolfram Language session, all you need do to identify an image is feed it to the ImageIdentify function:

In[1]:= ImageIdentify[image:giant anteater]

What you get back is a symbolic entity that the Wolfram Language can then do more computation with—like, in this case, figure out if you’ve got an animal, a mammal, etc. Or just ask for a definition:

In[2]:= giant anteater ["Definition"]

Or, say, generate a word cloud from its Wikipedia entry:

In[3]:= WordCloud[DeleteStopwords[WikipediaData[giant anteater]]]

And if one had lots of photographs, one could immediately write a Wolfram Language program that, for example, gave statistics on the different kinds of animals, or planes, or devices, or whatever, that appear in the photographs.

With ImageIdentify built right into the Wolfram Language, it’s easy to create APIs, or apps, that use it. And with the Wolfram Cloud, it’s also easy to create websites—like the Wolfram Language Image Identification Project.

Personal Backstory

For me personally, I’ve been waiting a long time for ImageIdentify. Nearly 40 years ago I read books with titles like The Computer and the Brain that made it sound inevitable we’d someday achieve artificial intelligence—probably by emulating the electrical connections in a brain. And in 1980, buoyed by the success of my first computer language, I decided I should think about what it would take to achieve full-scale artificial intelligence.

Part of what encouraged me was that—in an early premonition of the Wolfram Language—I’d based my first computer language on powerful symbolic pattern matching that I imagined could somehow capture certain aspects of human thinking. But I knew that while tasks like image identification were also based on pattern matching, they needed something different—a more approximate form of matching.

I tried to invent things like approximate hashing schemes. But I kept on thinking that brains manage to do this; we should get clues from them. And this led me to start studying idealized neural networks and their behavior.

Meanwhile, I was also working on some fundamental questions in natural science—about cosmology and about how structures arise in our universe—and studying the behavior of self-gravitating collections of particles.

And at some point I realized that both neural networks and self-gravitating gases were examples of systems that had simple underlying components, but somehow achieved complex overall behavior. And in getting to the bottom of this, I wound up studying cellular automata and eventually making all the discoveries that became A New Kind of Science.

So what about neural networks? They weren’t my favorite type of system: they seemed a little too arbitrary and complicated in their structure compared to the other systems that I studied in the computational universe. But every so often I would think about them again, running simulations to understand more about the basic science of their behavior, or trying to see how they could be used for practical tasks like approximate pattern matching:

Some of my early work on neural networks--from 1983...

Neural networks in general have had a remarkable roller-coaster history. They first burst onto the scene in the 1940s. But by the 1960s, their popularity had waned, and the word was that it had been “mathematically proven” that they could never do anything very useful.

It turned out, though, that that was only true for one-layer “perceptron” networks. And in the early 1980s, there was a resurgence of interest, based on neural networks that also had a “hidden layer”. But despite knowing many of the leaders of this effort, I have to say I remained something of a skeptic, not least because I had the impression that neural networks were mostly getting used for tasks that seemed like they would be easy to do in lots of other ways.

I also felt that neural networks were overly complex as formal systems—and at one point even tried to develop my own alternative. But still I supported people at my academic research center studying neural networks, and included papers about them in my Complex Systems journal.

I knew that there were practical applications of neural networks out there—like for visual character recognition—but they were few and far between. And as the years went by, little of general applicability seemed to emerge.

Machine Learning

Meanwhile, we’d been busy developing lots of powerful and very practical ways of analyzing data, in Mathematica and in what would become the Wolfram Language. And a few years ago we decided it was time to go further—and to try to integrate highly automated general machine learning. The idea was to make broad, general functions with lots of power; for example, to have a single function Classify that could be trained to classify any kind of thing: say, day vs. night photographs, sounds from different musical instruments, urgency level of email, or whatever.

We put in lots of state-of-the-art methods. But, more importantly, we tried to achieve complete automation, so that users didn’t have to know anything about machine learning: they just had to call Classify.

I wasn’t initially sure it was going to work. But it does, and spectacularly.

People can give training data on pretty much anything, and the Wolfram Language automatically sets up classifiers for them to use. We’re also providing more and more built-in classifiers, like for languages, or country flags:

In[4]:= Classify["Language", {"欢迎光临", "Welcome", "Bienvenue", "Добро пожаловать", "Bienvenidos"}]

In[5]:= Classify["CountryFlag", {images:flags}]

And a little while ago, we decided it was time to try a classic large-scale classifier problem: image identification. And the result now is ImageIdentify.

It’s All about Attractors

What is image identification really about? There are some number of named kinds of things in the world, and the point is to tell which of them a particular picture is of. Or, more formally, to map all possible images into a certain set of symbolic names of objects.

We don’t have any intrinsic way to describe an object like a chair. All we can do is just give lots of examples of chairs, and effectively say, “Anything that looks like one of these we want to identify as a chair.” So in effect we want images that are “close” to our examples of chairs to map to the name “chair”, and others not to.
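The "close to our examples" idea can be sketched with a toy nearest-example classifier. This is just an illustration of the principle, not how ImageIdentify itself works, and it's written in Python (rather than the Wolfram Language used elsewhere in this post) purely for compactness; the feature vectors and labels are made up.

```python
# Toy nearest-example classifier: label a point with the class of its
# closest training example. A minimal stand-in for "anything that looks
# like one of these we want to identify as a chair".
def nearest_label(x, examples):
    # examples: list of (feature_vector, label) pairs
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(examples, key=lambda ex: dist(x, ex[0]))[1]

training = [((0.9, 0.8), "chair"),
            ((0.1, 0.2), "table"),
            ((0.85, 0.75), "chair")]

print(nearest_label((0.88, 0.79), training))  # a point near the chair examples
```

Real image identification can't work by raw pixel distance like this, which is exactly why the "sculpting" of a more elaborate system, described below, is needed.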

Now, there are lots of systems that have this kind of “attractor” behavior. As a physical example, think of a mountainscape. A drop of rain may fall anywhere on the mountains, but (at least in an idealized model) it will flow down to one of a limited number of lowest points. Nearby drops will tend to flow to the same lowest point. Drops far away may be on the other side of a watershed, and so will flow to other lowest points.

In a mountainscape, water flows to different lowest points depending on where it falls on the terrain

The drops of rain are like our images; the lowest points are like the different kinds of objects. With raindrops we’re talking about things physically moving, under gravity. But images are composed of digital pixels. And instead of thinking about physical motion, we have to think about digital values being processed by programs.

And exactly the same “attractor” behavior can happen there. For example, there are lots of cellular automata in which one can change the colors of a few cells in their initial conditions, but still end up in the same fixed “attractor” final state. (Most cellular automata actually show more interesting behavior that doesn’t go to a fixed state, but it’s less clear how to apply this to recognition tasks.)
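A concrete example of this attractor behavior is elementary cellular automaton rule 254, sketched here in Python for illustration (the post's own code is Wolfram Language): perturbing the initial cells does not change the final state.

```python
# Elementary cellular automaton rule 254: a cell becomes 1 if any cell in
# its three-cell neighborhood is 1. Every nonzero initial state "flows" to
# the all-ones fixed state, an attractor, so flipping a few initial cells
# does not change where the system ends up.
def step(cells):
    n = len(cells)
    return [1 if (cells[(i - 1) % n] or cells[i] or cells[(i + 1) % n]) else 0
            for i in range(n)]

def evolve(cells, steps):
    for _ in range(steps):
        cells = step(cells)
    return cells

a = [0] * 8 + [1] + [0] * 7            # one live cell
b = [0] * 4 + [1, 0, 1] + [0] * 9      # a different, perturbed start

print(evolve(a, 16) == evolve(b, 16))  # True: both reach the same fixed state
```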

Cellular automata with different initial states but same final states. Like rain on a mountainscape, initial cells can "fall" in any of many different places and wind up in the same final position.

So what happens if we take images and apply cellular automaton rules to them? In effect we’re doing image processing, and indeed some common image processing operations (both done on computers and in human visual processing) are just simple 2D cellular automata.

A lot of image processing can be--and is--done with cellular automata

It’s easy to get cellular automata to pick out certain features of an image—like blobs of dark pixels. But for real image identification, there’s more to do. In the mountain analogy, we have to “sculpt” the mountainscape so that the right raindrops flow to the right points.

Programs Automatically Made

So how do we do this? In the case of digital data like images, it isn’t known how to do this in one fell swoop; we only know how to do it iteratively, and incrementally. We have to start from a base “flat” system, and gradually do the “sculpting”.

There’s a lot that isn’t known about this kind of iterative sculpting. I’ve thought about it quite extensively for discrete programs like cellular automata (and Turing machines), and I’m sure something very interesting can be done. But I’ve never figured out just how.

Cellular automata can be used for a kind of iterative sculpting

For systems with continuous (real-number) parameters, however, there’s a great method called back propagation—that’s based on calculus. It’s essentially a version of the very common method of gradient descent, in which one computes derivatives, then uses them to work out how to change parameters to get the system one is using to better fit the behavior one wants.
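The gradient descent idea can be shown in miniature. Here is a deliberately tiny one-parameter sketch in Python (illustrative values throughout); back propagation is this same derivative-following step applied through a composition of layers via the chain rule.

```python
# One-parameter gradient descent: minimize the squared error between a
# model output w * x and a target, repeatedly nudging w along the
# negative derivative ("downhill" on the error surface).
def gradient_descent(x, target, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        error = w * x - target   # how far the output is from the target
        grad = 2 * error * x     # d(error^2)/dw, by the chain rule
        w -= lr * grad           # move against the gradient
    return w

w = gradient_descent(x=2.0, target=6.0)
print(round(w, 3))  # approaches 3.0, since 3.0 * 2.0 == 6.0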

So what kind of system should one use? A surprisingly general choice is neural networks. The name makes one think of brains and biology. But for our purposes, neural networks are just formal computational systems that consist of compositions of multi-input functions with continuous parameters and discrete thresholds.
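In that formal sense, a "neuron" is just a weighted sum of its inputs passed through a threshold, and a network is a composition of such functions. A minimal Python sketch (with hand-picked, untrained weights chosen for illustration) computes XOR, the classic function that a single-layer perceptron provably cannot:

```python
# A neural network in the formal sense: compositions of multi-input
# functions, each a weighted sum passed through a discrete threshold.
# Weights here are hand-picked for illustration, not trained.
def neuron(inputs, weights, bias):
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0  # discrete threshold

def tiny_network(x):
    # Two hidden neurons feeding one output neuron: computes XOR,
    # which needs the hidden layer.
    h1 = neuron(x, [1, 1], -0.5)   # fires if either input is on
    h2 = neuron(x, [1, 1], -1.5)   # fires only if both are on
    return neuron([h1, h2], [1, -1], -0.5)

print([tiny_network(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

In practice the weights are of course found by training rather than picked by hand, which is where gradient descent comes in.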

How easy is it to make one of these neural networks perform interesting tasks? In the abstract, it’s hard to know. And for at least 20 years my impression was that in practice neural networks could mostly do only things that were also pretty easy to do in other ways.

But a few years ago that began to change. And one started hearing about serious successes in applying neural networks to practical problems, like image identification.

What made that happen? Computers (and especially linear algebra in GPUs) got fast enough that—with a variety of algorithmic tricks, some actually involving cellular automata—it became practical to train neural networks with millions of neurons, on millions of examples. (By the way, these were “deep” neural networks, no longer restricted to having very few layers.) And somehow this suddenly brought large-scale practical applications within reach.

Why Now?

I don’t think it’s a coincidence that this happened right when the number of artificial neurons being used came within striking distance of the number of neurons in relevant parts of our brains.

It’s not that this number is significant on its own. Rather, it’s that if we’re trying to do tasks—like image identification—that human brains do, then it’s not surprising if we need a system with a similar scale.

Humans can readily recognize a few thousand kinds of things—roughly the number of picturable nouns in human languages. Lower animals likely distinguish vastly fewer kinds of things. But if we’re trying to achieve “human-like” image identification—and effectively map images to words that exist in human languages—then this defines a certain scale of problem, which, it appears, can be solved with a “human-scale” neural network.

There are certainly differences between computational and biological neural networks—although after a network is trained, the process of, say, getting a result from an image seems rather similar. But the methods used to train computational neural networks are significantly different from what it seems plausible for biology to use.

Still, in the actual development of ImageIdentify, I was quite shocked at how much was reminiscent of the biological case. For a start, the number of training images—a few tens of millions—seemed very comparable to the number of distinct views of objects that humans get in their first couple of years of life.

All It Saw Was the Hat

There were also quirks of training that seemed very close to what’s seen in the biological case. For example, at one point, we’d made the mistake of having no human faces in our training. And when we showed a picture of Indiana Jones, the system was blind to the presence of his face, and just identified the picture as a hat. Not surprising, perhaps, but to me strikingly reminiscent of the classic vision experiment in which kittens reared in an environment of vertical stripes are blind to horizontal stripes.

When we gave it a picture of Indiana Jones, it zeroed in on the hat

Probably much like the brain, the ImageIdentify neural network has many layers, containing a variety of different kinds of neurons. (The overall structure, needless to say, is nicely described by a Wolfram Language symbolic expression.)

It’s hard to say meaningful things about much of what’s going on inside the network. But if one looks at the first layer or two, one can recognize some of the features that it’s picking out. And they seem to be remarkably similar to features we know are picked out by real neurons in the primary visual cortex.

I myself have long been interested in things like visual texture recognition (are there “texture primitives”, like there are primary colors?), and I suspect we’re now going to be able to figure out a lot about this. I also think it’s of great interest to look at what happens at later layers in the neural network—because if we can recognize them, what we should see are “emergent concepts” that in effect describe classes of images and objects in the world—including ones for which we don’t yet have words in human languages.

We Lost the Anteaters!

Like many projects we tackle for the Wolfram Language, developing ImageIdentify required bringing many diverse things together. Large-scale curation of training images. Development of a general ontology of picturable objects, with mapping to standard Wolfram Language constructs. Analysis of the dynamics of neural networks using physics-like methods. Detailed optimization of parallel code. Even some searching in the style of A New Kind of Science for programs in the computational universe. And lots of judgement calls about how to create functionality that would actually be useful in practice.

At the outset, it wasn’t clear to me that the whole ImageIdentify project was going to work. And early on, the rate of utterly misidentified images was disturbingly high. But one issue after another got addressed, and gradually it became clear that finally we were at a point in history when it would be possible to create a useful ImageIdentify function.

There were still plenty of problems. The system would do well on certain things, but fail on others. Then we’d adjust something, and there’d be new failures, and a flurry of messages with subject lines like “We lost the anteaters!” (about how pictures that ImageIdentify used to correctly identify as anteaters were suddenly being identified as something completely different).

Debugging ImageIdentify was an interesting process. What counts as reasonable input? What’s reasonable output? How should one make the choice between getting more-specific results, and getting results that one’s more certain aren’t incorrect (just a dog, or a hunting dog, or a beagle)?

Sometimes we saw things that at first seemed completely crazy. A pig misidentified as a “harness”. A piece of stonework misidentified as a “moped”. But the good news was that we always found a cause—like confusion from the same irrelevant objects repeatedly being in training images for a particular type of object (e.g. “the only time ImageIdentify had ever seen that type of Asian stonework was in pictures that also had mopeds”).

To test the system, I often tried slightly unusual or unexpected images:

Unexpected images often gave unexpected results

And what I found was something very striking, and charming. Yes, ImageIdentify could be completely wrong. But somehow the errors seemed very understandable, and in a sense very human. It seemed as if what ImageIdentify was doing was successfully capturing some of the essence of the human process of identifying images.

So what about things like abstract art? It’s a kind of Rorschach-like test for both humans and machines—and an interesting glimpse into the “mind” of ImageIdentify:

Abstract art gets fascinating interpretations, sort of like Rorschach-blot interpretations from humans

Out into the Wild

Something like ImageIdentify will never truly be finished. But a couple of months ago we released a preliminary version in the Wolfram Language. And today we’ve updated that version, and used it to launch the Wolfram Language Image Identification Project.

We’ll continue training and developing ImageIdentify, not least based on feedback and statistics from the site. As with Wolfram|Alpha in the domain of natural language understanding, without actual usage by humans there’s no real way to realistically assess progress—or even to define just what the goals should be for “natural image understanding”.

I must say that I find it fun to play with the Wolfram Language Image Identification Project. It’s satisfying after all these years to see this kind of artificial intelligence actually working. But more than that, when you see ImageIdentify respond to a weird or challenging image, there’s often a certain “aha” feeling, as if one has just been shown, in a very human-like way, some new insight—or joke—about an image.

Some of ImageIdentify's errors are quite funny

Underneath, of course, it’s just running code—with very simple inner loops that are pretty much the same as, for example, in my neural network programs from the beginning of the 1980s (except that now they’re Wolfram Language functions, rather than low-level C code).

It’s a fascinating—and extremely unusual—example in the history of ideas: neural networks were studied for 70 years, and repeatedly dismissed. Yet now they are what has brought us success in such a quintessential example of an artificial intelligence task as image identification. I expect the original pioneers of neural networks—like Warren McCulloch and Walter Pitts—would find little surprising about the core of what the Wolfram Language Image Identification Project does, though they might be amazed that it’s taken 70 years to get here.

But to me the greater significance is what can now be done by integrating things like ImageIdentify into the whole symbolic structure of the Wolfram Language. What ImageIdentify does is something humans learn to do in each generation. But symbolic language gives us the opportunity to represent shared intellectual achievements across all of human history. And making all these things computational is, I believe, something of monumental significance, that I am only just beginning to understand.

But for today, I hope you will enjoy the Wolfram Language Image Identification Project. Think of it as a celebration of where artificial intelligence has reached. Think of it as an intellectual recreation that helps build intuition for what artificial intelligence is like. But don’t forget the part that I think is most exciting: it’s also practical technology, that you can use here and now in the Wolfram Language, and deploy wherever you want.

Instant Apps for the Apple Watch with the Wolfram Language
Tue, 28 Apr 2015
Stephen Wolfram

My goal with the Wolfram Language is to take programming to a new level. And over the past year we’ve been rolling out ways to use and deploy the language in many places—desktop, cloud, mobile, embedded, etc. So what about wearables? And in particular, what about the Apple Watch? A few days ago I decided to explore what could be done. So I cleared my schedule for the day, and started writing code.

My idea was to write code with our standard Wolfram Programming Cloud, but instead of producing a web app or web API, to produce an app for the Apple Watch. And conveniently enough, a preliminary version of our Wolfram Cloud app just became available in the App Store—letting me deploy from the Wolfram Cloud to both mobile devices and the watch.

A few lines of Wolfram Language code creates and deploys an Apple Watch app

To some extent it was adventure programming. The Apple Watch was just coming out, and the Wolfram Cloud app was still just preliminary. But of course I was building on nearly 30 years of progressive development of the Wolfram Language. And I’m happy to say that it didn’t take long for me to start getting interesting Wolfram Language apps running on the watch. And after less than a day of work—with help from a handful of other people—I had 25 watch-ready apps:

Icons for a number of our new Wolfram Language watch apps, on an iPhone ready for instant deployment to the Apple Watch

All of these I built by writing code in the Wolfram Programming Cloud (either on the web or the desktop), then deploying to the Wolfram Cloud, and connecting to the Apple Watch via the Wolfram Cloud app. And although the apps were designed for the Apple Watch, you can actually also use them on the web, or on a phone. There are links to the web versions scattered through this post. To get the apps onto your phone and watch, just go to this page and follow the instructions. That page also has all the Wolfram Language source code for the apps, and you can use any Wolfram Language system—Wolfram Programming Cloud (including the free version), Mathematica etc.—to experiment with the code for yourself, and perhaps deploy your own version of any of the apps.

My First Watch App

So how does it all work? For my first watch-app-writing session, I decided to start by making a tiny app that just generates a single random number. The core Wolfram Language code to do that is simply:

In[1]:= RandomInteger[1000]

For the watch we want the number to look nice and bold and big, and it might as well be a random color:

In[2]:= Style[RandomInteger[1000], Bold, 30, RandomColor[]]

We can immediately deploy this publicly to the cloud by saying:

In[3]:= CloudDeploy[Delayed[Style[RandomInteger[1000], Bold, 250, RandomColor[]], "PNG"], Permissions -> "Public"]

And if you go to that URL in any web browser, you’ll get to a minimal web app which immediately gives a web page with a random number. (The Delayed in the code says to delay the computation until the moment the page is accessed or refreshed, so you get a fresh random number each time.)

So what about getting this to the Apple Watch? First, it has to get onto an iPhone. And that’s easy. Because anything that you’ve deployed to the Wolfram Cloud is automatically accessible on an iPhone through the Wolfram Cloud app. To make it easy to find, it’s good to add a recognizable name and icon. And if it’s ultimately headed for the watch, it’s good to put it on a black background:

In[4]:= CloudDeploy[Delayed[ExpressionCell[Style[RandomInteger[1000], Bold, 250, RandomColor[]], Background -> Black], "PNG"], "WatchApps/RandomNumber", IconRules -> WordCloud[RandomInteger[10, 20]]]

And now if you go to this URL in a web browser, you’ll find a public version of the app there. Inside the Wolfram Cloud app on an iPhone, the app appears inside the WatchApps folder:

Deploy that RandomNumber app, and it will appear on your phone

And now, if you touch the app icon, you’ll run the Wolfram Language code in the Wolfram Cloud, and back will come a random number, displayed on the phone:

The RandomNumber app works fine on the phone, but of course is sized for the Apple Watch screen

If you want to run the app again, and get a fresh random number, just pull down from the top of the phone.

To get the app onto the watch, go back to the listing of apps, then touch the watch icon at the top and select the app. This will get the app listed on the watch that’s paired with your phone:

That's all it takes to get the app onto your watch

Now just touch the entry for the RandomNumber app and it’ll go to the Wolfram Cloud, run the Wolfram Language code, and display a random number on the watch:

And here it is running on the watch--it's that easy


Randomness Apps

It’s simple to make all sorts of “randomness apps” with the Wolfram Language. Here’s the core of a Coin Flip app:

In[5]:= RandomChoice[{image:heads, image:tails}]

And this is all it takes to deploy the app, to the web, mobile and watch:

In[6]:= CloudDeploy[Delayed[ExpressionCell[RandomChoice[{image:heads, image:tails}], Background -> Black], "PNG"], "WatchApps/CoinFlip", IconRules -> image:heads]

One might argue that it’s overkill to use our sophisticated technology stack to do this. After all, it’s easy enough to flip a physical coin. But that assumes you have one of those around (which I, for one, don’t any more). Plus, the Coin Flip app will make better randomness.

What about playing Rock, Paper, Scissors with your watch? The core code for that is again trivial:

In[7]:= RandomChoice[{image:rock, image:paper, image:scissors}]

There’s a huge amount of knowledge built in to the Wolfram Language—including, in one tiny corner, the knowledge to trivially create a Random Pokemon app:

In[8]:= EntityValue[EntityValue["Pokemon", "RandomEntity"], {"Image", "Name"}]

Here it is running on the watch:

Stats pop quiz: How many random displays will it take, on average, before you catch 'em all?

Let’s try some slightly more complex Wolfram Language code. Here’s a Word Inventor that makes a “word” by alternating random vowels and consonants (and often the result sounds a lot like a Pokemon, or a tech startup):

In[9]:= vowels = {"a", "e", "i", "o", "u"}; consonants = Complement[CharacterRange["a", "z"], vowels]; Style[StringJoin[Flatten[Table[{RandomChoice[consonants], RandomChoice[vowels]}, {3}]]], 40]


Watches Tell Time

If nothing else, one thing people presumably want to use a watch for is to tell time. And since we’re in the modern internet world, it has to be more fun if there’s a cat or two involved. So here’s the Wolfram Language code for a Kitty Clock:

In[10]:= ClockGauge[Now, PlotTheme -> "Minimal", GaugeMarkers -> {image:graycat, image:orangecat, None}, Background -> Black, TicksStyle -> White]

Which on the watch becomes:

You can has kitty clock...

One can get pretty geeky with clocks. Remembering our recent very popular My Pi Day website, here’s some slightly more complicated code to make a Pi Clock where the digits of the current time are displayed in the context where they first occur in pi:

In[11]:= pi = Characters[ToString[N[Pi, 65000]]]; time = Characters[DateString[{"Hour12", "Minute"}]]; pos = First[SequencePosition[pi, time]]; Style[Grid[Partition[Join[Take[pi, 14], Characters["..."], Take[pi, pos - {13, 1}], Style[#, Orange] & /@ Take[pi, pos], Take[pi, pos + {5, 4}]], 10], Spacings -> {0, 0}], 40, Background -> Black, FontColor -> White]

Or adding a little more:

And now you can know exactly what digit of pi any time of day begins at

Where Are You?

So long as you enable it, the Apple Watch uses GPS, etc. on its paired phone to know where you are. That makes it extremely easy to have a Lat-Long app that shows your current latitude and longitude on the watch (this one is for our company HQ):

In[12]:= Style[Column[{DMSString[Latitude[Here], {1, "NS"}], DMSString[Longitude[Here], {1, "EW"}]}], 30, White, Background -> Black]

I’m not quite sure why it’s useful (prove location over Skype?), but here’s a Here & Now QR app that shows your current location and time in a QR code:

In[13]:= BarcodeImage[StringJoin[DMSString[Here], "|", DateString[Now]], "QR"]

Among the many things the Wolfram Language knows a lot about is geography. So here’s the code to find the ten volcanoes closest to you:

In[14]:= v = GeoNearest["Volcano", Here, 10]

A little more code shows them on a map, and constructs a Nearest Volcanoes app:

In[15]:= GeoGraphics[{GeoPath[{Here, #}] & /@ v, GeoMarker[Here], GeoMarker[#, image:volcano-icon] & /@ v}, GeoRange -> 1.5 GeoDistance[Here, First[v]]]

Here’s the code for a 3D Topography app, that shows the (scaled) 3D topography for 10 miles around your location:

In[16]:= ListPlot3D[GeoElevationData[GeoDisk[Here, Quantity[10, "Miles"]]], MeshFunctions -> {#3 &}, Mesh -> 30, Background -> Black, Axes -> False, ViewPoint -> {2, 0, 3}]


Data Flows In

Since the watch communicates with the Wolfram Cloud, it can make use of all the real-time data that’s flowing into the Wolfram Knowledgebase. That data includes things like the current (x,y,z,t) position of the International Space Station:

In[17]:= entity:International Space Station (satellite) ["Position"]

Given the position, a little bit of Wolfram Language graphics programming gives us an ISS Locator app:

In[18]:= Module[{pos, line, rise}, {pos, line, rise} = SatelliteData[entity:International Space Station (satellite), {"Position", "SatelliteLocationLine", "RiseTime"}]; Style[Labeled[GeoGraphics[{{Pink, AbsoluteThickness[3], GeoPath @@ line}, {Red, PointSize[.04], Point[pos]}, {Opacity[.1], Black, GeoVisibleRegion[pos]}}, GeoGridLines -> Automatic, GeoCenter -> pos, GeoRange -> "World", GeoProjection -> "Orthographic", ImageSize -> {272, 340 - 38}], Style[TemplateApply["Next rise: ``", NumberForm[ UnitConvert[DateDifference[Now, rise], "Minutes"], 3]], White, 20]], Background -> Black]]

As another example of real-time data, here’s the code for an Apple Quanting app that does some quant-oriented computations on Apple stock:

In[19]:= Style[TradingChart[{"AAPL", DatePlus[-90]}, {"Volume", Style["MESASineWave", {RGBColor[1, 1, 1], RGBColor[0.46, 0.62, 0.82]}], Style["BollingerBands", RGBColor[1, 1, 1]], Style["DoubleExponentialMovingAverage", RGBColor[1, 0.85, 0.21]]}, PerformanceGoal -> "Speed", Axes -> False, Frame -> False], Background -> Black]

And here’s the code for a Market Word Cloud app that shows a stock-symbols word cloud weighted by fractional price changes in the past day (Apple up, Google down today):

In[20]:= WordCloud[With[{c = FinancialData[#, "FractionalChange"]}, Abs[c] -> Style[#, ColorData[{"RedGreenSplit", 0.01 {-1, 1}}, c]]] & /@ {"AAPL", "XOM", "GOOG", "MSFT", "BRK-A", "WFC", "JNJ", "GE", "WMT", "JPM"}, Background -> Black]

Here’s the complete code for a geo-detecting Currency Converter app:

In[21]:= With[{home = $GeoLocationCountry["CurrencyUnit"]}, Style[QuantityForm[Grid[{#, "=", CurrencyConvert[#, home]} & /@ Cases[{Quantity[1, "USDollars"], Quantity[1, "Euros"], Quantity[1, "Yen"], Quantity[1, "BritishPounds"]}, Except[home]], Alignment -> Left], "Abbreviation"], White, Background -> Black, 30]]

It’s easy to make so many apps with the Wolfram Language. Here’s the core code for a Sunrise/Sunset app:

In[22]:= {Sunrise[], Sunset[]}

Setting up a convenient display for the watch takes a little more code:

In[23]:= With[{sunfmt = Style[DateString[#, {#2, " ", "Hour12Short", ":", "Minute", "AMPMLowerCase"}], 54] &, tfmt = Round[DateDifference[Now, #, {"Hour", "Minute"}], 5] &}, Rasterize@Style[Column[{sunfmt[Sunrise[], "rise"], tfmt[Sunrise[]], sunfmt[Sunset[], "set"], tfmt[Sunset[]]}, Alignment -> Right], FontSize -> 32, Background -> Black, White]]

The Wolfram Language includes real-time weather feeds:

In[24]:= AirTemperatureData[]

Which we can also display iconically:

In[25]:= IconData["AirTemperature", AirTemperatureData[]]

Here’s the data for the last week of air temperatures:

In[26]:= AirTemperatureData[Here, {Now - Quantity[1, "Weeks"], Now}]

And with a little code, we can format this to make a Temperature History app:

In[27]:= With[{temps = DeleteMissing[AirTemperatureData[Here, {Now - Quantity[1, "Weeks"], Now}]["Values"]]}, QuantityForm[Style[Column[{Grid[{{"Current", Last[temps]},{"High", Max[temps]}, {"Low", Min[temps]}}, Alignment -> {{Right, Left}}], ListLinePlot[temps, ImageSize -> 312, PlotStyle -> None, Filling -> Bottom, FillingStyle -> Automatic, ColorFunction -> Function[{x, y}, Blend[{RGBColor[0.45, 0.72, 0], RGBColor[1, 0.85, 0]}, y]], PlotTheme -> "NoAxes"]}, Alignment -> Right], Background -> Black, 24, White], "Abbreviation"]]

Sometimes the easiest way to get a result in the Wolfram Language is just to call Wolfram|Alpha. Here’s what Wolfram|Alpha shows on the web if you ask about the time to sunburn (it detects your current location):

Wolfram|Alpha recognizes your location, knows the current UV index there, and computes how long you could safely stay out in the sun depending on your skin type

Now here’s a real-time Sunburn Time app created by calling Wolfram|Alpha through the Wolfram Language (the different rows are for different skin tones):

In[28]:= times = Style[QuantityForm[#, {}], 24, White, FontFamily -> "Source Sans Pro"] & /@ Rest[WolframAlpha["sunburn time", {{"TypicalTimeToSunburn", 1}, "ComputableData"}][[All, 2]]];

In[29]:= Panel[Grid[Transpose[{{image:skintonesI, image:skintonesII, image:skintonesIII, image:skintonesIV, image:skintonesV, image:skintonesVI}, times}], Dividers -> {False, Center}, FrameStyle -> Gray, Spacings -> 5, Alignment -> {Center, Center}], Background -> Black]


Reports & Data Drops

The Wolfram Language has access not only to all its own curated data feeds, but also to private data feeds, especially ones in the Wolfram Data Drop.

As a personal analytics enthusiast, I maintain a databin in the Wolfram Data Drop that tells me my current backlog of unprocessed and unread email messages. I have a scheduled task that runs in the cloud and generates a report of my backlog history. And given this, it’s easy to have an SW Email Backlog app that imports this report on demand, and displays it on a watch:

Lighter orange is total number of messages; darker orange is unread messages...

And, yes, the recent increase in unprocessed and unread email messages is at least in part a consequence of work on this blog.

There are now lots of Wolfram Data Drop databins around, and of course you can make your own. And from any databin you can immediately make a watch app that shows a dashboard for it. Like here’s a Company Fridge app based on a little temperature sensor sitting in a break-room refrigerator at our company HQ (the cycling is from the compressor; the spike is from someone opening the fridge):

In[30]:= DateListPlot[Databin["4r4-gP4o", -300, "temp"], PlotStyle -> RGBColor[0, 0.501961, 1], Background -> Black, DateTicksFormat -> {"Hour12Short", "AMPMLowerCase"}, FrameStyle -> Directive[Black, FontColor -> White, 18], FrameLabel -> Automatic, TargetUnits -> Quantity[1, "DegreesFahrenheitDifference"], AspectRatio -> 1.11, ImageSize -> 312]

Databins often get data from just a single source or single device. But one can also have a databin that gets data from an app running on lots of different devices.

As a simple example, let’s make an app that just shows where in the world that app is being accessed from. Here’s the complete code to deploy such a “Data Droplets” app:

In[31]:= CloudDeploy[Delayed[With[{db = Databin[DatabinAdd["4rwD7T5G", 0], -20]["GeoLocations"]}, GeoGraphics[{Red, PointSize[.02], MapThread[{Opacity[#], Point[#2]} &, {Subdivide[0.15, 1, Length[db] - 1], db}]}, GeoRange -> All, GeoProjection -> "LambertAzimuthal", Background -> Black, PlotLabel -> Style["Recent Data Droplets", White, 24]]], "PNG"], "WatchApps/DataDroplets"]

The app does two things. First, whenever it’s run, it adds the geo location of the device that’s running it to a central databin in the Wolfram Data Drop. And second, it displays a world map that marks the last 20 places in the world where the app has been used:

Data Droplets app on the watch--just touch the screen...


Making Things Happen

A typical reason to run an app on the watch is to be able to see results right on your wrist. But another reason is to use the app to make things happen externally, say through APIs.

As one very simple example, here’s the complete code to deploy an app that mails the app’s owner a map of a 1-mile region around wherever they are when they access the app:

In[32]:= CloudDeploy[Delayed[SendMail[GeoGraphics[{Opacity[.4, Red], PointSize[.05], Point[Here]}, GeoRange -> Quantity[1, "Miles"]]]; Style["Sent!", 200], "PNG"], "WatchApps/MailMyLocation", IconRules -> image:maillocationicon]

Email sent by the MailMyLocation app--log where you've been, share your location, remember where you parked...


Apps to Generate Apps

So far, all the apps we’ve talked about are built from fixed pieces of Wolfram Language code that get deployed once to the Apple Watch. But the Wolfram Language is symbolic, so it’s easy for it to manipulate the code of an app, just like it manipulates any other data. And that means that it’s straightforward to use the Wolfram Language to build and deploy custom apps on the fly.

Here’s a simple example. Say we want to have an app on the watch that gives a countdown of days to one’s next birthday. It’d be very inconvenient to have to enter the date of one’s birthday directly on the watch. But instead we can have an app on the phone where one enters one’s birthday, and then this app can in real time build a custom watch app that gives the countdown for that specific birthday.
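The countdown itself is just date arithmetic. Here is a minimal sketch in plain Python (illustrative only, not the app's actual code; the function name and the Feb 29 convention are my own assumptions):

```python
from datetime import date

def days_until_birthday(birthday: date, today: date) -> int:
    """Days from `today` until the next occurrence of the birthday."""
    def occurrence(year: int) -> date:
        try:
            return date(year, birthday.month, birthday.day)
        except ValueError:
            # Feb 29 birthday in a non-leap year: celebrate on Feb 28
            # (one simple convention; an app could choose otherwise).
            return date(year, 2, 28)

    next_one = occurrence(today.year)
    if next_one < today:
        next_one = occurrence(today.year + 1)
    return (next_one - today).days
```

A generated watch app then only needs to display this number for the date that was submitted.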

Here we enter a birthday in a standard Wolfram Language “smart field” that accepts any date format:

Run the generator app on your phone and enter your birthday...

And as soon as we touch Submit, this app runs Wolfram Language code in the Wolfram Cloud that generates a new custom app for whatever birthday we entered, then deploys that generated app so it shows up on our watch:

...And it deploys the generated app to the watch, ready to run

Here’s the complete code that’s needed to make the Birthday Countdown app-generating app:

In[33]:= CloudDeploy[FormFunction[{"Birthday" -> "Date"}, (CloudDeploy[Delayed[ExpressionCell[With[{count = Floor[UnitConvert[Mod[# - Today, ="1 yr"], "Day"]] &}, Style[Framed[Pane[QuantityForm[count[#Birthday], "Abbreviation"], {250, 250}, Alignment -> Center], RoundingRadius -> 50, FrameStyle -> Thick], 40, Hue[.52]]], Background -> Black], "PNG"], "WatchApps/BirthdayCountdown", IconRules -> image:cakeicon]; Style["BirthdayCountdown app generated & deployed", Larger, Background -> LightYellow]) &, "PNG"], "WatchApps/CountdownGenerator", IconRules -> image:cakeandgearicon]

And here is the result from the generated countdown app for my birthday:

As of this writing, there are 123 days until my next birthday. How many days until your own?

We can make all sorts of apps like this. Here’s a World Clocks example where you fill out a list of any number of places, and create an app that displays an array of clocks for all those places:

Enter a list of cities on your phone, and get an array of clocks for them

You can also use app generation to put you into an app. Here’s the code to deploy a “You Clock” app-generating app that lets you take a picture of yourself with your phone, then creates an app that uses that picture as the hands of a clock:

In[34]:= CloudDeploy[FormFunction[{"image" -> "Image"}, (With[{hand = ImageRotate[ImagePad[ImageResize[#image, 100, Resampling -> "Gaussian"], {{0, 0}, {50, 0}}], -Pi/2]}, CloudDeploy[Delayed[ClockGauge[Now, PlotTheme -> "Minimal", GaugeMarkers -> {hand, hand, None}, Background -> Black, TicksStyle -> White, ImageSize -> 312], "PNG"], "WatchApps/YouClock", IconRules -> "YouClock"]]; Style["YouClock app deployed", 50]) &, "PNG"], "WatchApps/YouClockGenerator", IconRules -> "YCG"]

And here I am as the hands of a clock

And actually, you can easily go even more meta, and have apps that generate apps that generate apps: apps all the way down!


More Than I Expected

When I set out to use the Wolfram Language to make apps for the Apple Watch I wasn’t sure how it would go. Would the deployment pipeline to the watch work smoothly enough? Would there be compelling watch apps that are easy to build in the Wolfram Language?

I’m happy to say that everything has gone much better than I expected. The watch is very new, so there were a few initial deployment issues, which are rapidly getting worked out. But it became clear that there are lots and lots of good watch apps that can be made even with tiny amounts of Wolfram Language code (tweet-a-watch-app?). And to me it’s very impressive that in less than one full day’s work I was able to develop and deploy 25 complete apps.

Of course, what ultimately made this possible is the whole Wolfram Language technology stack that I’ve been building for nearly 30 years. But it’s very satisfying to see all the automation we’ve built work so nicely, and make it so easy to turn ideas into yet another new kind of thing: watch apps.

It’s always fun to program in the Wolfram Language, and it’s neat to see one’s code deployed on something like a watch. But what’s ultimately more important is that it’s going to be very useful to lots of people for lots of purposes. The code here is a good way to get started learning what to do. But there are many directions to go, and many important—or simply fun—apps to create. And the remarkable thing is that the Wolfram Language makes it so easy to create watch apps that they can become a routine part of everyday workflow: just another place where functionality can be deployed.

To comment, please visit the copy of this post at the Wolfram Blog »

Scientific Bug Hunting in the Cloud: An Unexpected CEO Adventure Thu, 16 Apr 2015 18:34:25 +0000 Stephen Wolfram

The Wolfram Cloud Needs to Be Perfect

The Wolfram Cloud is coming out of beta soon (yay!), and right now I’m spending much of my time working to make it as good as possible (and, by the way, it’s getting to be really great!). Mostly I concentrate on defining high-level function and strategy. But I like to understand things at every level, and as a CEO, one’s ultimately responsible for everything. And at the beginning of March I found myself diving deep into something I never expected…

Here’s the story. As a serious production system that lots of people will use to do things like run businesses, the Wolfram Cloud should be as fast as possible. Our metrics were saying that typical speeds were good, but subjectively when I used it something felt wrong. Sometimes it was plenty fast, but sometimes it seemed way too slow.

We’ve got excellent software engineers, but months were going by, and things didn’t seem to be changing. Meanwhile, we’d just released the Wolfram Data Drop. So I thought, why don’t I just run some tests myself, maybe collecting data in our nice new Wolfram Data Drop?

A great thing about the Wolfram Language is how friendly it is for busy people: even if you only have time to dash off a few lines of code, you can get real things done. And in this case, I only had to run three lines of code to find a problem.

First, I deployed a web API for a trivial Wolfram Language program to the Wolfram Cloud:

In[1]:= CloudDeploy[APIFunction[{}, 1 &]]

Then I called the API 50 times, measuring how long each call took (% here stands for the previous result):

In[2]:= Table[First[AbsoluteTiming[URLExecute[%]]], {50}]

Then I plotted the sequence of times for the calls:

In[3]:= ListLinePlot[%]

And immediately there seemed to be something crazy going on. Sometimes the time for each call was 220 ms or so, but often it was 900 ms, or even twice that long. And the craziest thing was that the times seemed to be quantized!

I made a histogram:

In[4]:= Histogram[%%, 40]

And sure enough, there were a few fast calls on the left, then a second peak of slow calls, and a third “outcropping” of very slow calls. It was weird!
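The same kind of measurement can be sketched outside the Wolfram Language too. Here is a minimal Python version (hypothetical, not from the post; the function names are mine) that times repeated calls and bins the durations the way the histogram above does:

```python
import time
from collections import Counter

def time_calls(fn, n):
    """Call fn() n times, returning each call's duration in milliseconds."""
    durations = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        durations.append((time.perf_counter() - start) * 1000.0)
    return durations

def histogram(durations, bin_ms):
    """Bin durations into bin_ms-wide buckets: {bucket_start: count}.
    Quantized slowdowns show up as widely separated peaks."""
    counts = Counter(int(d // bin_ms) * bin_ms for d in durations)
    return dict(sorted(counts.items()))
```

Pointing `time_calls` at an HTTP request (say, with `urllib.request.urlopen`) reproduces the experiment.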

I wondered whether the times were always like this. So I set up a periodic scheduled task to do a burst of API calls every few minutes, and put their times in the Wolfram Data Drop. I left this running overnight… and when I came back the next morning, this is what I saw:

Graph of API calls, showing strange, large-scale structure

Even weirder! Why the large-scale structure? I could imagine that, for example, a particular node in the cluster might gradually slow down (not that it should), but why would it then slowly recover?

My first thought was that perhaps I was seeing network issues, given that I was calling the API on a test cloud server more than 1000 miles away. So I looked at ping times. But apart from a couple of weird spikes (hey, it’s the internet!), the times were very stable.

Ping times


Something’s Wrong inside the Servers

OK, so it must be something on the servers themselves. There’s a lot of new technology in the Wolfram Cloud, but most of it is pure Wolfram Language code, which is easy to test. But there’s also generic modern server infrastructure below the Wolfram Language layer. Much of this is fundamentally the same as what Wolfram|Alpha has successfully used for half a dozen years to serve billions of results, and what webMathematica started using even nearly a decade earlier. But being a more demanding computational system, the Wolfram Cloud is set up slightly differently.

And my first suspicion was that this different setup might be causing something to go wrong inside the webserver layer. Eventually I hope we’ll have pure Wolfram Language infrastructure all the way down, but for now we’re using a webserver system called Tomcat that’s based on Java. And at first I thought that perhaps the slowdowns might be Java garbage collection. Profiling showed that there were indeed some “stop the world” garbage-collection events triggered by Tomcat, but they were rare, and were taking only milliseconds, not hundreds of milliseconds. So they weren’t the explanation.

By now, though, I was hooked on finding out what the problem was. I hadn’t been this deep in the trenches of system debugging for a very long time. It felt a lot like doing experimental science. And as in experimental science, it’s always important to simplify what one’s studying. So I cut out most of the network by operating “cloud to cloud”: calling the API from within the same cluster. Then I cut out the load balancer, that dispatches requests to particular nodes in a cluster, by locking my requests to a single node (which, by the way, external users can’t do unless they have a Private Cloud). But the slowdowns stayed.

So then I started collecting more-detailed data. My first step was to make the API return the absolute times when it started and finished executing Wolfram Language code, and compare those to absolute times in the wrapper code that called the API. Here’s what I saw:

The blue line shows the API-call times from before the Wolfram Language code was run; the gold line, after.

I collected this data in a period when the system as a whole was behaving pretty badly. And what I saw was lots of dramatic slowdowns in the “before” times—and just a few quantized slowdowns in the “after” times.

Once again, this was pretty weird. It didn’t seem like the slowdowns were specifically associated with either “before” or “after”. Instead, it looked more as if something was randomly hitting the system from the outside.

One confusing feature was that each node of the cluster contained (in this case) 8 cores, with each core running a different instance of the Wolfram Engine. The Wolfram Engine is nice and stable, so each of these instances was running for hours to days between restarts. But I wondered if perhaps some instances might be developing problems along the way. So I instrumented the API to look at process IDs and process times, and then for example plotted total process time against components of the API call time:

Total process time plotted against components of the API call time

And indeed there seemed to be some tendency for “younger” processes to run API calls faster, but (particularly noting the suppressed zero on the x axis) the effect wasn’t dramatic.


What’s Eating the CPU?

I started to wonder about other Wolfram Cloud services running on the same machine. It didn’t seem to make sense that these would lead to the kind of quantized slowdowns we were seeing, but in the interest of simplifying the system I wanted to get rid of them. At first we isolated a node on the production cluster. And then I got my very own Wolfram Private Cloud set up. Still the slowdowns were there. Though, confusingly, at different times and on different machines, their characteristics seemed to be somewhat different.

On the Private Cloud I could just log in to the raw Linux system and start looking around. The first thing I did was to read the results from the “top” and “ps axl” Unix utilities into the Wolfram Language so I could analyze them. And one thing that was immediately obvious was that lots of “system” time was being used: the Linux kernel was keeping very busy with something. And in fact, it seemed like the slowdowns might not be coming from user code at all; they might be coming from something happening in the kernel of the operating system.

So that made me want to trace system calls. I hadn’t done anything like this for nearly 25 years, and my experience in the past had been that one could get lots of data, but it was hard to interpret. Now, though, I had the Wolfram Language.

Running the Linux “strace” utility while doing a few seconds of API calls gave 28,221,878 lines of output. But it took just a couple of lines of Wolfram Language code to knit together start and end times of particular system calls, and to start generating histograms of system-call durations. Doing this for just a few system calls gave me this:

System-call durations--note the clustering...

Interestingly, this showed evidence of discrete peaks. And when I looked at the system calls in these peaks they all seemed to be “futex” calls—part of the Linux thread synchronization system. So then I picked out only futex calls, and, sure enough, saw sharp timing peaks—at 250 ms, 500 ms and 1s:

System-call durations for just the futex calls--showing sharp timing peaks

But were these really a problem? Futex calls are essentially just “sleeps”; they don’t burn processor time. And actually it’s pretty normal to see calls like this that are waiting for I/O to complete and so on. So to me the most interesting observation was actually that there weren’t other system calls that were taking hundreds of milliseconds.
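The strace post-processing described above can be sketched in Python too (illustrative, not the original Wolfram Language code; this version assumes `strace -T` output, where each line ends with the call's elapsed time in angle brackets, rather than knitting separate start and end timestamps):

```python
import re
from collections import defaultdict

# Matches strace -T lines such as:
#   futex(0x7f1c0800, FUTEX_WAIT_PRIVATE, 2, NULL) = 0 <0.250101>
STRACE_LINE = re.compile(r'^(\w+)\(.*<(\d+\.\d+)>\s*$')

def syscall_durations(lines):
    """Group per-call elapsed times (in seconds) by syscall name."""
    by_call = defaultdict(list)
    for line in lines:
        m = STRACE_LINE.match(line.strip())
        if m:
            by_call[m.group(1)].append(float(m.group(2)))
    return dict(by_call)
```

Feeding just the futex entries into a histogram makes peaks like the 250 ms, 500 ms and 1 s ones above easy to spot.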


The OS Is Freezing!

So… what was going on? I started looking at what was happening on different cores of each node. Now, Tomcat and other parts of our infrastructure stack are all nicely multithreaded. Yet it seemed that whatever was causing the slowdown was freezing all the cores, even though they were running different threads. And the only thing that could do that is the operating system kernel.

But what would make a Linux kernel freeze like that? I wondered about the scheduler. I couldn’t really see why our situation would lead to craziness in a scheduler. But we looked at the scheduler anyway, and tried changing a bunch of settings. No effect.

Then I had a more bizarre thought. The instances of the Wolfram Cloud I was using were running in virtual machines. What if the slowdown came from “outside The Matrix”? I asked for a version of the Wolfram Cloud running on bare metal, with no VM. But before that was configured, I found a utility to measure the “steal time” taken by the VM itself—and it was negligible.

By this point, I’d been spending an hour or two each day for several days on all of this. And it was time for me to leave for an intense trip to SXSW. Still, people in our cloud-software engineering team were revved up, and I left the problem in their capable hands.

By the time my flight arrived there was already another interesting piece of data. We’d divided each API call into 15 substeps. Then one of our physics-PhD engineers had compared the probability for a slowdown in a particular substep (on the left) to the median time spent in that substep (on the right):

Bars on the left show the probability for a slowdown in particular substeps; bars on the right show the median time spent in each of those substeps

With one exception (which had a known cause), there was a good correlation. It really looked as if the Linux kernel (and everything running under it) was being hit by something at completely random times, causing a “slowdown event” if it happened to coincide with the running of some part of an API call.
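That reasoning is easy to sanity-check with a small simulation (illustrative Python, not part of the original analysis): if freeze events strike at uniformly random times, each substep should be hit with probability proportional to the time spent in it.

```python
import random

def expected_hit_fractions(durations):
    """Analytic prediction: hit probability is duration / total."""
    total = sum(durations)
    return [d / total for d in durations]

def simulate_hit_fractions(durations, n_freezes, seed=0):
    """Drop n_freezes random instants onto the call timeline and
    count which substep each one lands in."""
    rng = random.Random(seed)
    total = sum(durations)
    counts = [0] * len(durations)
    for _ in range(n_freezes):
        t = rng.uniform(0, total)
        acc = 0.0
        for i, d in enumerate(durations):
            acc += d
            if t <= acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1  # guard against float rounding at the end
    return [c / n_freezes for c in counts]
```

With enough samples the simulated fractions converge on the analytic ones, which is exactly the correlation seen in the substep data.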

So then the hunt was on for what could be doing this. The next suspicious thing we noticed was a large amount of I/O activity. In the configuration we were testing, the Wolfram Cloud was using the NFS network file system to access files. We tried tuning NFS, changing parameters, going to asynchronous mode, using UDP instead of TCP, changing the NFS server I/O scheduler, etc. Nothing made a difference. We tried using a completely different distributed file system called Ceph. Same problem. Then we tried using local disk storage. Finally this seemed to have an effect—removing most, but not all, of the slowdown.

We took this as a clue, and started investigating more about I/O. One experiment involved editing a huge notebook on a node, while running lots of API calls to the same node:

Graph of system time, user time, and API time spent editing a huge notebook--with quite a jump while the notebook was being edited and continually saved

The result was interesting. During the period when the notebook was being edited (and continually saved), the API times suddenly jumped from around 100 ms to 500 ms. But why would simple file operations have such an effect on all 8 cores of the node?


The Culprit Is Found

We started investigating more, and soon discovered that what seemed like “simple file operations” weren’t—and we quickly figured out why. You see, perhaps five years before, early in the development of the Wolfram Cloud, we wanted to experiment with file versioning. And as a proof of concept, someone had inserted a simple versioning system named RCS.

Plenty of software systems out there in the world still use RCS, even though it hasn’t been substantially updated in nearly 30 years and by now there are much better approaches (like the ones we use for infinite undo in notebooks). But somehow the RCS “proof of concept” had never been replaced in our Wolfram Cloud codebase—and it was still running on every file!

One feature of RCS is that when a file is modified even a tiny bit, lots of data (even several times the size of the file itself) ends up getting written to disk. We hadn’t been sure how much I/O activity to expect in general. But it was clear that RCS was making it needlessly more intense.

Could I/O activity really hang up the whole Linux kernel? Maybe there’s some mysterious global lock. Maybe the disk subsystem freezes because it doesn’t flush filled buffers quickly enough. Maybe the kernel is busy remapping pages to try to make bigger chunks of memory available. But whatever might be going on, the obvious thing was just to try taking out RCS, and seeing what happened.

And so we did that, and lo and behold, the horrible slowdowns immediately went away!

So, after a week of intense debugging, we had a solution to our problem. And repeating my original experiment, everything now ran cleanly, with API times completely dominated by network transmission to the test cluster:

Clean run times! Compare this to the In[3] image above.


The Wolfram Language and the Cloud

What did I learn from all this? First, it reinforced my impression that the cloud is the most difficult—even hostile—development and debugging environment that I’ve seen in all my years in software. But second, it made me realize how valuable the Wolfram Language is as a kind of metasystem, for analyzing, visualizing and organizing what’s going on inside complex infrastructure like the cloud.

When it comes to debugging, I myself have been rather spoiled for years—because I do essentially all my programming in the Wolfram Language, where debugging is particularly easy, and it’s rare for a bug to take me more than a few minutes to find. Why is debugging so easy in the Wolfram Language? I think, first and foremost, it’s because the code tends to be short and readable. One also typically writes it in notebooks, where one can test out, and document, each piece of a program as one builds it up. Also critical is that the Wolfram Language is symbolic, so one can always pull out any piece of a program, and it will run on its own.

Debugging at lower levels of the software stack is a very different experience. It’s much more like medical diagnosis, where one’s also dealing with a complex multicomponent system, and trying to figure out what’s going on from a few measurements or experiments. (I guess our versioning problem might be the analog of some horrible defect in DNA replication.)

My whole adventure in the cloud also very much emphasizes the value we’re adding with the Wolfram Cloud. Because part of what the Wolfram Cloud is all about is insulating people from the messy issues of cloud infrastructure, and letting them instead implement and deploy whatever they want directly in the Wolfram Language.

Of course, to make that possible, we ourselves have needed to build all the automated infrastructure. And now, thanks to this little adventure in “scientific debugging”, we’re one step closer to finishing that. And indeed, as of today, the Wolfram Cloud has its APIs consistently running without any mysterious quantized slowdowns—and is rapidly approaching the point when it can move out of beta and into full production.

Frontiers of Computational Thinking: A SXSW Report Mon, 23 Mar 2015 17:44:33 +0000 Stephen Wolfram

Stephen Wolfram speaking at SXSW 2015

Last week I spoke at SXSW Interactive 2015 in Austin, Texas. Here’s a slightly edited transcript:

A Most Productive Year

Well, hello again. I’ve actually talked about computation three times before at SXSW. And I have to say when I first agreed to give this talk, I was worried that I would not have anything at all new to say. But actually, there’s a huge amount that’s new. In fact, this has probably been the single most productive year of my life. And I’m excited to be able to talk to you here today about some of the things that I’ve figured out recently.

It’s going to be a fairly wild ride, sort of bouncing between very conceptual and very practical—from thousand-year-old philosophy issues, to cloud technology to use here and now.

Basically, for the last 40 years I’ve been building a big tower of ideas and technology, working more or less alternately on basic science and on technology. And using the basic science to figure out more technology, and technology to figure out more science.

I’m happy to say lots of people have used both the science and the technology that I’ve built. But I think what we’ve now got is much bigger than before. Actually, talking to people the last couple of days at SXSW I’m really excited, because probably about 3/4 of the people that I’ve talked to can seriously transform—or at least significantly upgrade—what they’re doing by using new things that we’ve built.

The Wolfram Language

OK. So now I’ve got to tell you how. It all starts with the Wolfram Language. Which actually, as it happens, I first talked about by that name two years ago right here at SXSW.

The Wolfram Language is a big and ambitious thing which is actually both a central piece of technology, and a repository and realization of a bunch of fundamental ideas. It’s also something that you can start to use right now, free on the web. Actually, it runs pretty much everywhere—on the cloud, desktops, servers, supercomputers, embedded processors, private clouds, whatever.

Wolfram Language

From an intellectual point of view, the goal of the Wolfram Language is basically to express as much as possible computationally—to provide a very broad way to encapsulate computation and knowledge, and to automate, as much as possible, what can be done with them.

I’ve been working on building what’s now the Wolfram Language for about three decades. And in Mathematica and Wolfram|Alpha, many, many people have used precursors of what we have now.

But today’s Wolfram Language is something different. It’s something much broader, that I think can be the way that lots of computation gets done sort of everywhere, in all sorts of systems and devices and whatever.

So let’s see it in action. Let’s start off by just having a little conversation with the language, in a thing we invented 26 years ago that we call a notebook. Let’s do something trivial.

2 + 2

Good. Let’s try something different. You may know that it was Pi Day on Saturday: 3/14/15. And since we are the company that I think has served more mathematical pi than any other in history, we had a little celebration about Pi Day. So let’s have the Wolfram Language compute pi for us; let’s say to a thousand places:

N[Pi, 1000]

There. Or let’s be more ambitious; let’s calculate it to a million places. It’ll take a little bit longer…

N[Pi, 10^6]

But not much. And there’s the result. It goes on and on. Look how small the scroll thumb is.
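N[Pi, n] leans on the Wolfram Language's built-in arbitrary-precision machinery. Just to give a flavor of what such a computation involves, here is a minimal Python sketch of the Chudnovsky series using scaled integer arithmetic (a textbook recipe, not Wolfram's actual algorithm):

```python
from math import isqrt

def pi_digits(n):
    """Return the first n significant digits of pi as a string,
    via the Chudnovsky series with 10 guard digits of precision."""
    prec = n + 10
    one = 10 ** prec          # everything below is scaled by 10**prec
    k, a_k, a_sum, b_sum = 1, one, one, 0
    c3_over_24 = 640320 ** 3 // 24
    while a_k != 0:           # each term adds ~14 correct digits
        a_k *= -(6 * k - 5) * (2 * k - 1) * (6 * k - 1)
        a_k //= k * k * k * c3_over_24
        a_sum += a_k
        b_sum += k * a_k
        k += 1
    total = 13591409 * a_sum + 545140134 * b_sum
    pi_scaled = (426880 * isqrt(10005 * one * one) * one) // total
    s = str(pi_scaled)
    return s[0] + "." + s[1:n]
```

A serious implementation (like the ones inside bignum libraries) would add binary splitting to make the million-digit case fast, but the series is the same.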

For something different we could pick up the Wikipedia article about pi:

And make a word cloud from it:


Needless to say, in the article about pi, pi itself features prominently.

Or let’s get an image. Here’s me:


So let’s go ahead and do something with that image—for example, let’s edge-detect it. % always means the most recent thing we got, so…


…there’s the edge detection of that image. Or let’s say we make a morphological graph from that image, so now we’ll make some kind of network:


Oh, that’s quite fetching; OK. Or let’s automatically make a little user interface here that controls the degree of edginess that we have here—so there I am:

Manipulate[EdgeDetect[CurrentImage[], r], {r, 1, 30}]

Or let’s get a table of different levels of edginess in that picture:

Table[EdgeDetect[CurrentImage[], r], {r, 1, 30}]

And now for example we can take all of those images and stack them up and make a 3D image:

Image3D[%, BoxRatios -> 1]

A Language for the Real World

The Wolfram Language has zillions of different kinds of algorithms built in. It’s also got real-world knowledge and data. So, for example, I could just say something like “planets”:


So it understood from natural language what we were talking about. Let’s get a list of planets:


And there’s a list of planets. Let’s get pictures of them:

EntityValue[%, "Image"]

Let’s find their masses:

EntityValue[%%, "Mass"]

Now let’s make an infographic of planets sized according to mass:

ImageCollage[% -> %%]

I think it’s pretty amazing that it’s just one line of code to make something like this.

Let’s go on a bit. This is where the internet thinks my computer is right now:


We could say, “When’s sunset going to be, at this position on this day?”


How long from now?


OK, let’s get a map of, say, 10 miles around the center of Austin:

GeoGraphics[GeoDisk[(=Austin), (=10 mile)]]

Or, let’s say, a powers of 10 sequence:

Table[GeoGraphics[GeoDisk[(=Austin), Quantity[10^n, "Miles"]]], {n, -1, 4}]

Or let’s go off planet and do the same kind of thing. We ask for the Apollo 11 landing site, and let’s show a thousand miles around that on the Moon:

GeoGraphics[{Red, GeoDisk[First[(=apollo 11 landing site)], (=1000 miles)]}]

We can do all kinds of things. Let’s try something in a different kind of domain. Let’s get a list of van Gogh’s artworks:

(=van gogh artworks)

And let’s take, say, the first 20 of those, and let’s get images of those:

EntityValue[Take[%, 20], "Image"]

And now, for example, we can take those and say, “What were the dominant colors that were used in those images?”

DominantColors /@ %

And let’s plot those colors in a chromaticity diagram, in 3D:


Philosophy of the Wolfram Language

I think it’s fairly amazing what can be done with just tiny amounts of Wolfram Language code.

It’s really a whole new situation for programming. I mean, it’s a dramatic change. The traditional idea has been to start from a fairly small programming language, and then write fairly big programs to do what you want. The idea in the Wolfram Language is to make the language itself in a sense as big as possible—to build in as much as we can—and in effect to automate as much as possible of the process of programming.

These are the types of things that the Wolfram Language deals with:

The Wolfram Language is very broad

And by now we’ve got thousands of built-in functions, tens of thousands of models and methods and algorithms and so on, and carefully curated data on thousands of different domains.

And I’ve basically spent nearly 30 years of my life keeping the design of all of this clean and consistent.

It’s been really interesting, and the result is really satisfying, because now we have something that’s incredibly powerful—that we’re also able to use to develop the language itself at an accelerating rate.

Tweetable Programs

Here’s something we did recently to have some fun with all of this. It’s called Tweet-a-Program.

Wolfram Tweet-a-Program

The idea here is you send a whole program as a tweet, and get back the result of running it. If you stop by our booth at the tradeshow here, you can pick up one of these little “Galleries of Tweetable Programs”. And here’s an online collection of some tweetable programs—and remember, every one of these programs is less than 140 characters long, and does all kinds of different types of things.

Wolfram Tweet-a-Program online collection

So to celebrate tweetable programs, we also have a deck of “code cards”, each with a tweetable program:

Part of our deck of code cards, each with a different Wolfram Language tweetable program

Computational Thinking for Kids

You know, if you look even at these tweetable programs, they’re surprisingly easy to understand. You can kind of just read the words to get a good idea of how they work.

And you might think, OK, it’s like kids could do this. Well, actually, that’s true. And in fact I think this is an important moment for programming, where the same thing has happened as has happened in the past for things like video editing and so on: we’ve automated enough that, when it comes to learning programming, the fancy professionals don’t really have any advantage over kids.

So one thing I’m very keen on right now is to use our language as a way to teach computational thinking to a very broad range of people.

Soon we’ll have something we call Wolfram Programming Lab—which you can use free on the web. It’s a kind of immersion language learning for the Wolfram Language: you see lots of small working examples of Wolfram Language programs that you get to modify and run.

Wolfram Programming Lab

I think it’s pretty powerful for education. Because it’s not just teaching programming: It’s immediately bringing in lots of real-world stuff, integrating with other things kids are learning, and really teaching a computational-thinking approach to everything.

So let’s take a look at a couple of examples. It’s been Pi Day; let’s look at Pi Necklaces:

Wolfram Programming Lab notebook:  Pi Digit Necklaces

The basic idea is that here’s a little piece of code—you can run it, see what it does, modify it. You can say to show the details, and it’ll tell you what’s going on. And so on.

And maybe we can try another example. Let’s say we do something a little bit more real-world… Where can you see from a particular skyscraper?

Wolfram Programming Lab notebook:  Skyscraper Views

This will show us the visible region from the Empire State Building. And we could go ahead and change lots of parameters of this and see what happens, or you can go down and look at challenges, where it’s asking you to try and do other kinds of related computations.

Wolfram Programming Lab notebook:  Skyscraper Views:  Challenges

I hope lots of people—kids and otherwise—will have fun with the explorations that we’ve been making. I think it’s great for education: a mixture of the precise thinking of math and the creativity of something like writing. And, by the way, in the Programming Lab, we can watch the programs people are trying to write, and do all kinds of education analytics inside.

I might mention that for people who don’t know English, we’ll soon be able to annotate any Wolfram Language programs in lots of other languages.

Translation annotation for Wolfram Language code—here in Chinese

I think some amazing things are going to happen when lots more people learn to think computationally using the Wolfram Language.

Natural Language as Input

Of course, many millions of people already use our technology every day without any of that. They’re just typing pure natural language into Wolfram|Alpha, or saying things to Siri that get sent to Wolfram|Alpha.

I guess one big breakthrough has been our very precise natural-language understanding, built using both new kinds of algorithms and our huge knowledgebase.

Another has been using all our knowledge and computation capabilities to generate automated reports for the things people ask about. Whether it’s questions about demographics:

Wolfram|Alpha output for "cost of living in austin vs SF"

Or about airplanes—this shows airplanes currently overhead where the internet thinks my computer is:

Wolfram|Alpha output for "planes overhead"

Or for example about genome sequences. It will go look up whether that particular random base-pair sequence appears somewhere on the human genome:

Wolfram|Alpha output for "AAGCTAGCTAGCTCA"

So those are a few sort of things that we can do in Wolfram|Alpha. And we’ve been covering thousands of different domains of knowledge, adding new things all the time.

Wolfram|Alpha examples

By the way, there are now quite a lot of large organizations that have internal versions of Wolfram|Alpha that include their own data as well as our public data. And it’s really nice, because all types of people can kind of make “drive-by” queries in natural language without ever having to go to their IT departments.

You know, being able to use natural language is central to the actual Wolfram Language, too. Because when you want to refer to something in the real world—like a city, for example—you can’t be going to documentation to find out its name. You just want to type in natural language, and then get that interpreted as something precise.

Which is exactly what you can now do. So, for example, we would type something like:

Just type "=nyc"

and get:

And the Wolfram Language correctly interprets it as "New York City"

And that would be understood as the entity “New York City”. And we could go and ask things like what’s the population of that, and it will tell us the results:

New York City (city) ... ["Population"]

Big Idea: Symbolic Programming

There’s an awful lot that goes into making the Wolfram Language work—not only tens of millions of lines of algorithmic code, and terabytes of curated data, but also some big ideas.

Probably the biggest idea is the idea of symbolic programming, which has been the core of what’s become the Wolfram Language right from the very beginning.

Here’s the basic point: In the Wolfram Language, everything is symbolic. It doesn’t have to have any particular value; it can just be a thing.

If I just typed “x” in most computer languages, they’d say, “Help, I don’t know what x is”. But the Wolfram Language just says, “OK, x is x; it’s symbolic”.


And the point is that basically anything can be represented like this. If I type in “Jupiter”, it’s just a symbolic thing:

Jupiter (planet) ...

Or, for example, if I were to put in an image here, it’s just a symbolic thing:

Image of Jupiter

And I could have something like a slider, a user-interface element—again, it’s just a symbolic thing:


And now when you compute, you can do anything with anything. Like you could do math with x:

Factor[x^10 - 1]

Or with an image of Jupiter:

Factor[(jupiter)^10 - 1]

Or with sliders:

Factor[Slider[]^10 - 1]

Or whatever.
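The core idea—that an expression tree can hold anything, unevaluated, as a symbolic leaf—can be sketched in a few lines of Python. This is just an illustration of the concept, nothing like the actual Wolfram Language implementation:

```python
class Sym:
    """A symbolic leaf: it doesn't need a value, it can just be a thing."""
    def __init__(self, head):
        self.head = head
    def __repr__(self):
        return str(self.head)
    # Arithmetic builds unevaluated trees instead of demanding values.
    def __add__(self, other):
        return Expr("Plus", self, other)
    def __pow__(self, n):
        return Expr("Power", self, n)

class Expr(Sym):
    """A compound symbolic expression: a head applied to arguments."""
    def __init__(self, head, *args):
        super().__init__(head)
        self.args = args
    def __repr__(self):
        return f"{self.head}[{', '.join(map(repr, self.args))}]"

x = Sym("x")
# A stand-in for any object at all: an entity, an image, a slider...
jupiter = Sym("Entity[Planet, Jupiter]")

print(x ** 10)      # → Power[x, 10]
print(jupiter + x)  # → Plus[Entity[Planet, Jupiter], x]
```

Because `Expr` inherits the operators, composite expressions compose the same way the leaves do—which is the essence of "you can do anything with anything".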

It’s taken me a really long time, actually, to understand just how powerful this idea of symbolic programming really is. Every few years I understand it a little bit more.

Language for Deployment

Long ago we understood how to represent programs symbolically, and documents, and interfaces, so that they all instantly become things you can compute with. Recently one of the big breakthroughs has been understanding how to represent not only operations and content symbolically, but also their deployments.

OK, there’s one thing I need to explain first. What I’ve been showing you here today has mostly been using a desktop version of the Wolfram Language, though it’s going to the cloud to get things from our knowledgebase and so on. Well, with great effort we’ve also built a full version of the whole language in the cloud.

So let me use that interface there, just through a web browser. I’m going to have the exact same experience, so to speak. And we can do all these same kinds of things just purely in the cloud through a web browser.

Wolfram Programming Cloud:  123^456

Wolfram Programming Cloud:  Graphics3D[Sphere[]]

Wolfram Programming Cloud:  Table[Rotate["hello",RandomReal[{0,2Pi}]],{100}]

You know, in my 40 years of writing software, I don’t believe there’s ever been a development environment as crazy as the web and the cloud. It’s taken us a huge amount of effort to kind of hack through the jungle to get the functionality that we want. We’re pretty much there now. And of course the great news for people who just use what we’ve built is that they don’t have to hack through the jungle themselves, because we’ve already done that.

But OK, so you can use the Wolfram Language directly in the cloud. And that’s really useful. But you can also deploy other things in the language through the cloud.

Like, cat pictures are popular on the internet, so let’s deploy a cat app. Let’s define a form that has a field that asks for a breed of cat, then shows a picture of that. Then let’s deploy that to the cloud.

CloudDeploy[FormFunction[{"cat" -> "CatBreed"}, Magnify[#cat["Image"], 2] &, "PNG"]]

Now we get a cloud object with a URL. We just go there, and we get a form. The form has a “smart field” that understands natural language—in this particular case, the language for describing cat breeds. So now we can type in, let’s say, “siamese”. And it will go back and run that code… OK. There’s a picture of a cat.

Enter the name of a cat breed, get a picture of that breed

We can make our web app a little more complicated. Let’s add in another field here.

CloudDeploy[FormFunction[{"cat" -> "CatBreed", "angle" -> Restricted["Number", {0, 360}]}, Rotate[Magnify[#cat["Image"], 2], #angle Degree] &, "PNG"]]

Again, we deploy to the cloud, and now have a cat at an angle there:

Manx cat at a 70-degree angle

OK. So that’s how we can make a web app, which we can also deploy on mobile and so on. We can also make an API if we want to. Let’s use the same piece of code. Actually, the easiest thing to do would be just to edit that piece of code there, and change this from being a form to being an API:

CloudDeploy[APIFunction[{"cat" -> "CatBreed", "angle" -> Restricted["Number", {0, 360}]}, Rotate[Magnify[#cat["Image"], 2], #angle Degree] &, "PNG"]]

And now the thing that we have will be an API that we can go fill in parameters to; we can say “cat=manx”, “angle=300”, and now we can run that, and there’s another cat at an angle.

Manx cat at a 300-degree angle, in deployed API

So that was an API that we just created, that could be used by anybody in the cloud. And we can call the API from anywhere—a website, a program, whatever. And actually we can automatically generate the code to call this from all sorts of other languages—let’s say inside Java.

EmbedCode[%, "Java"]

So in effect you can knit Wolfram Language functionality right into any project you’re doing in any language.
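By hand, calling a deployed API like this one is just an HTTP GET with the parameters in the query string. Here's a Python sketch; the cloud-object URL is a placeholder, not a real deployment:

```python
from urllib.parse import urlencode

# Placeholder cloud-object URL; a real CloudDeploy returns its own address.
API_BASE = "https://www.wolframcloud.com/obj/EXAMPLE-OBJECT-ID"

def cat_api_url(cat, angle):
    """Build the GET request URL for the deployed cat-rotation API."""
    return API_BASE + "?" + urlencode({"cat": cat, "angle": angle})

# urllib.request.urlopen(cat_api_url("manx", 300)) would fetch the PNG.
print(cat_api_url("manx", 300))
# → https://www.wolframcloud.com/obj/EXAMPLE-OBJECT-ID?cat=manx&angle=300
```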

In this particular case, you’re calling code in our cloud. I should mention that there are other ways you can set this up, too. You can have a private cloud. You can have a version of the Wolfram Engine that’s on your computer. You can even have the Wolfram Engine in a library that can be explicitly linked into a program you’ve written.

And all this stuff works on mobile too. You can deploy an app that works on mobile; even a complete APK file for Android if you want.

There’s lots of depth to all this software engineering stuff. And it’s rather wonderful how the Wolfram Language manages to simplify and automate so much of it.

The Automation of Programming

You know, I get to see this story of automation up close at our company every day. We have all these projects—all these things we’re building, a huge amount of stuff—that you might think we’d need thousands of people to do. But you see, we’ve been automating things, and then automating our automation and so on, for a quarter of a century now. And so we still only have a little private company with about 700 people—and lots of automation.

It’s fairly spectacular to see: When we automate something—like, say, a type of web development—projects that used to be really painful, and take a couple of months, suddenly become really easy, and take like a day. And from a management point of view, it’s great how that changes the level of innovation that you attempt.

Let me give you a little example from a couple of weeks ago. We were talking about what to do for Pi Day. And we thought it’d be fun to put up a website where people could type in their birthdays, and find out where in the digits of pi those dates show up, and then make a cool T-shirt based on that.

Well, OK, clearly that’s not a corporately critical activity. But if it’s easy, why not do it? Well, with all our automation, it is easy. Here’s the code that got written to create that website:

The code notebook for the My Pi Day website

It’s not particularly long. Somewhere here it’ll deploy to the cloud, and there it’s calling the Zazzle API, and so on. Let me show you the actual website that got made there:

The My Pi Day website

And you can type in your birthday in some format, and then it’ll go off and try and find that birthday in the digits of pi. There we go; it found my birthday, at that digit position, and there’s a custom-created image showing me in pi and letting me go off and get a T-shirt with that on it.

The results page, showing any entered birthdate (mine, in this case) in the digits of pi
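The heart of the site is just a substring search in the digits of pi. Here's that step sketched in Python, with a short hard-coded digit string (the real site, of course, searches many more digits):

```python
# First 100 digits of pi, hard-coded for illustration.
PI_DIGITS = ("3141592653589793238462643383279502884197169399375"
             "105820974944592307816406286208998628034825342117067")

def find_date_in_pi(month, day, year2):
    """Return the 1-based digit position where m/d/yy first appears, or None."""
    pattern = f"{month}{day}{year2:02d}"
    pos = PI_DIGITS.find(pattern)
    return None if pos == -1 else pos + 1

# Pi Day itself, 3/14/15, shows up right at the very start of pi.
print(find_date_in_pi(3, 14, 15))  # → 1
```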

And actually, zero programmers were involved in building this. As it happens, it was just done by our art director, and it went live a couple of days ago, before Pi Day, and has been merrily serving hundreds of thousands of custom T-shirt designs to pi enthusiasts around the world.

Large-Scale Programs

It’s interesting to see how large-scale code development happens in the Wolfram Language. There’s an Eclipse-based IDE, and we’re soon going to release a bunch of integration with Git that we use internally. But one thing that’s very different from other languages is that people tend to write their code in notebooks.

Notebooks can include the whole story of code

They can put the whole story of their code right there, with text and graphics and whatever right there with the code. They can use notebooks to make structured tests if they want to; there’s a testing notebook with various tests in it we could run and so on:

A unit-test notebook

And they can also use notebooks to make templates for computable documents, where you can directly embed symbolic Wolfram Language code that’ll get executed to make static or interactive documents that you can deliver as reports and so on.

By the way, one of the really nice things about this whole ecosystem is that if you see a finished result—say an infographic—there’s a standard way to include a kind of “compute-back link” that goes right back to the notebook where that graphic was made. So you see everything that’s behind it, and, say, start being able to use the data yourself. Which is useful for things like data publishing for research, or data journalism.

Internet of Things

OK, so, talking of data, a couple of weeks ago we launched what we call our Data Drop.

Wolfram Data Drop

The idea is to let anything—particularly connected devices—easily drop data into our cloud, and then immediately make it meaningful, and accessible, to the Wolfram Language everywhere.

Like here’s a little device I have that measures a few things… actually, I think this particular one only measures light levels; kind of boring.

An Electric Imp device that measures light levels

But in any case, it’s connected via wifi into our cloud. And everything it measures goes into our Data Drop, in this databin corresponding to that device.

bin = Databin["3Mfto-_m"]

We’re using what we call WDF—the Wolfram Data Framework—to say what the raw numbers from the device mean. And now we can do all kinds of computations.

OK, it hasn’t collected very much data yet, but we could go ahead and make a plot of the data that it’s collected:


That was the light level as seen by that device, and I think it just sat there, and the lights got turned on and then it’s been a fixed light level—sorry, not very exciting. We can just make a histogram of that data, and again it’s going to be really boring in this particular case.


You know, we have all this data about the world from our knowledgebase integrated right into our language. And now with the Data Drop, you can integrate data from any device that you want. We’ve got a whole inventory of different kinds of devices, which we’ve been making for the last couple of years.

Wolfram Connected Devices Project

Once you get data into this Data Drop, you can use it wherever the Wolfram Language is used. Like in Wolfram|Alpha. Or Siri. Or whatever.

It’s really critical that the Wolfram Language can represent different types of data in a standard way, because that means you can immediately do computations, combine databins, whatever. And I have to say that just being able to sort of “throw data” into the Wolfram Data Drop is really convenient.

Like of course we’re throwing data from the My Pi Day website into a databin. And that means, for example, it’s just one line of code to see where in the world people have been interested in pi and generating pi T-shirts from, and so on.

GeoGraphics[{Red, Point[Databin["3HPtHzvi"]["GeoLocations"]]}, GeoProjection -> "Albers", GeoRange -> Full, ImageSize -> 800]
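Conceptually, a databin is just an accumulating list of timestamped, semantically tagged values. Here's a toy in-memory analog in Python, purely for illustration (the bin ID is a placeholder, and this is nothing like the real Data Drop implementation):

```python
from datetime import datetime, timezone

class Databin:
    """Toy in-memory analog of a Data Drop databin."""
    def __init__(self, bin_id):
        self.id = bin_id
        self.entries = []
    def add(self, **values):
        # Each entry is timestamped on arrival, like a Data Drop entry.
        self.entries.append((datetime.now(timezone.utc), values))
    def timeseries(self, key):
        """All (time, value) pairs recorded for one named quantity."""
        return [(t, v[key]) for t, v in self.entries if key in v]

databin = Databin("EXAMPLE-BIN-ID")  # placeholder ID, not a real databin
databin.add(light=0.12)
databin.add(light=0.75, temperature=21.5)
print(databin.timeseries("light"))
```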

Some of you might know that I’ve long been an enthusiast of personal analytics. In fact, somewhat to my surprise, I think I’m the human who has collected more data on themselves than anyone else. Like here’s a dot for every piece of outgoing email that I’ve sent for the past quarter century.

My outgoing email for the past quarter century

But now, with our Data Drop, I’m starting to accumulate even more data. I think I’m already in the double digits in terms of number of databins. Like here’s my heart rate on Pi Day, from a databin. I think there’s a peak there right at the pi moment of the century.

DateListPlot[Databin["3LV~DEJC", {DateObject[{2015, 3, 14, 7, 30, 0}], DateObject[{2015, 3, 15, 0, 0, 0}]}]["TimeSeries"]]

Machine Learning

So, with all this data coming in, what are we supposed to do with it? Well, within the Wolfram Language we’ve got all this visualization and analysis capability. One of our goals is to be able to do the best data science automatically—without needing to take data scientists’ time to do it. And one area where we’ve been working on that a lot is in machine learning.

Let’s say you want to classify pictures into day or night. OK, so here I’ve got a little tiny training set of pictures corresponding to scenes that are day or night, and I just have one little function in the Wolfram Language, Classify, which is going to build a classifier to determine whether a picture is a day or a night one:

daynight = Classify[{(classifier set)}]

So there I’ve got the classifier. Now I can just apply that classifier to a collection of pictures, and now it will tell me, according to that classifier, are those pictures day or night.

daynight[{ (6 images) }]

And it automatically figures out what type of machine learning to use, and sets everything up so you have a classifier that you can use directly, put in an app, call through an API, or whatever. It’s just one function to do all of this.
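To give a feel for what a trained classifier is, here's a deliberately tiny hand-rolled version in Python: a single brightness feature and a nearest-centroid rule. This is my stand-in for illustration only; Classify chooses features and methods automatically and is far more sophisticated:

```python
def brightness(image):
    """Mean pixel value of a grayscale image given as a list of rows."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def train_day_night(examples):
    """examples: list of (image, 'Day' | 'Night') pairs.
    Returns a classifier closed over the two class centroids."""
    means = {}
    for label in ("Day", "Night"):
        vals = [brightness(img) for img, lab in examples if lab == label]
        means[label] = sum(vals) / len(vals)
    return lambda img: min(means, key=lambda lab: abs(brightness(img) - means[lab]))

# Tiny fake training set: bright images are day, dark ones night.
training = [
    ([[200, 220], [210, 230]], "Day"),
    ([[180, 190], [200, 210]], "Day"),
    ([[20, 30], [10, 25]], "Night"),
    ([[40, 35], [30, 15]], "Night"),
]
daynight = train_day_night(training)
print(daynight([[190, 205], [210, 195]]))  # → Day
print(daynight([[25, 15], [30, 20]]))      # → Night
```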

We’ve got lots of built-in classifiers as well; all sorts of different kinds of things. Let me show you a new thing that’s just coming together now, which is image identification. And I’m going to live dangerously and try and do a live demo on some very new technology.

I asked somebody to go to Walmart and buy a random pile of stuff to try for image identification. So this is probably going to be really horrifying. Let’s see what happens here. First of all, let’s set it up so that I can actually capture some images. OK. I’m going to give it a bit of a better chance by not having too funky a background.

OK. Let us try one banana. Let’s try capturing a banana, and let us see what happens if I say ImageIdentify in our language…

(banana) // ImageIdentify

OK! That’s good!

All right. Let’s tempt fate, and try a couple of other things. What’s this? It appears to be a toy plastic triceratops. Let’s see what the system thinks it is. This could get really bad here.

(triceratops) // ImageIdentify

Oops. It says it’s a goat! Well, from that weird angle I guess I can see how it would think that.

OK, let’s try one more thing.

(African violet) // ImageIdentify

Oh, wow! OK! And the tag in the flower pot says the exact same thing! Which I certainly didn’t know. That’s pretty cool.

This does amazingly well most of the time. And to me what’s most interesting is that when it makes mistakes, like with the triceratops, the mistakes are very human-like. I mean, they’re mistakes that a person could reasonably make.

And actually, I think what’s going on here is pretty exciting. You know, 35 years ago I wanted to figure out brain-like things and I was studying neural nets and so on, and I did all kinds of computer experiments. And I ended up simplifying the underlying rules I looked at—and wound up studying not neural nets, but things called cellular automata, which are kind of like the simplest possible programs.

Mining the Computational Universe

And what I discovered is that if you look at that in the computational universe of all those programs, there’s a whole zoo of possible behaviors that you see. Here’s an example of a whole bunch of cellular automata. Each one is a different program showing different kinds of behavior.

An array of cellular automata

Even when the programs are incredibly simple, there can be incredibly complex behavior. Like, there’s an example; we can go and see what it does:

A complex cellular automaton
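An elementary cellular automaton like these really is one of the simplest possible programs. Here's rule 30 sketched in Python—the Wolfram Language has this built in as CellularAutomaton; this sketch is just to show how little machinery is involved:

```python
def ca_step(cells, rule=30):
    """One step of an elementary cellular automaton on a row of 0/1 cells."""
    padded = [0, 0] + cells + [0, 0]  # pad so the pattern can grow outward
    # Each new cell is the rule's output bit for its 3-cell neighborhood.
    return [(rule >> (4 * padded[i] + 2 * padded[i + 1] + padded[i + 2])) & 1
            for i in range(len(padded) - 2)]

def ca_evolve(steps, rule=30):
    """Evolve from a single black cell, returning all rows."""
    rows = [[1]]
    for _ in range(steps):
        rows.append(ca_step(rows[-1], rule))
    return rows

for row in ca_evolve(5):
    # Center each row so the familiar triangular pattern lines up.
    print("".join("#" if c else "." for c in row).center(13))
```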

Well, that discovery led me on the path to developing a whole new kind of science that I wrote a big book about a number of years ago.

A New Kind of Science:  The book and its chapters

That’s ended up having applications all over the place. And for example, over the last decade it’s been pretty neat to see that the idea of modeling things using programs has been winning out over the idea that’s dominated exact science for about 300 years, of modeling things using mathematical equations.

And what’s also been really neat to see is the extent to which we can discover new technology just by kind of “mining” this computational universe of simple programs. Knowing some goal we have, we might sample a trillion programs to find one that’s good for our particular purposes.

Much can be mined from the computational universe of cellular automata

That purpose could be making art, or it could be making some new image processing or some new natural-language-understanding algorithm, or whatever.

Finally, Brain-Like Computing

Well, OK, so there’s a lot that we can model and build with simple programs. But people have often said somehow the brain must be special; it must be doing more than that.

Back 35 years ago I could get neural networks to make little attractors or classifiers, but I couldn’t really get them to do anything terribly interesting. And over the years I actually wasn’t terribly convinced by most of the applications of things like neural nets that I saw.

But just recently, some threshold has been passed. And, like, the image identifier I was showing is using pretty much the same ideas as 35 years ago—with a bunch of good engineering tweaks, and perhaps a nod to cellular automata too. But the amazing thing is that just doing pretty much the obvious stuff, with today’s technology, just works.

I couldn’t have predicted when this would happen. But looking at it now, it’s sort of shocking. We’re now able to use millions of neurons, tens of millions of training images and thousands of trillions of the equivalent of neuron firings. And although the engineering details are almost as different as birds versus airplanes, the orders of magnitude are pretty much just the same as for us humans when we learn to identify images.

For me, it’s sort of the missing link for AI. There are so many things now that we can do vastly better than humans, using computers. I mean, if you put a Wolfram|Alpha inside a Turing test bot, you’ll be able to tell instantly that it’s not a human, because it knows too much and can compute too much.

But there’ve been these tasks like image identification that we’ve never been able to do with computers. But now we can. And, by the way, the way people have thought this would work for 60 years is pretty much the way it works; we just didn’t have the technology to see it until now.

So, does this mean we should use neural nets for everything now? Well, no. Here’s the thing: There are some tasks, like image identification, that each human effectively learns to do for themselves, based on what they see in the world around them.

Language as Symbolic Representation

But that’s not everything humans do. There’s another very important thing, pretty much unique to our species. We have language. We have a way of communicating symbolically that lets us take knowledge acquired by one person, and broadcast it to other people. And that’s kind of how we’ve built our civilization.

Well, how do we make computers use that idea too? Well, they have to have a language that represents the world, and that they can compute with. And conveniently enough, that’s exactly what the Wolfram Language is trying to be, and that’s what I’ve been working on for the last 30 years or so.

You know, there’s all this abstract computation out there that can be done. Just go sample cellular automata out in the computational universe. But the question is, how does it relate to our human world, to what we as humans know about or care about?

Well, what’s happened is that humans have tried to boil things down: to describe the world symbolically, using language and linguistic constructs. We’ve seen what’s out there in the world, and we’ve come up with words to describe things. We have a word like “bird”, which refers abstractly to a large collection of things that are birds. And by now in English we’ve got maybe 30,000 words that we commonly use, that are the raw material for our description of the world.

Well, it’s interesting to compare that with the Wolfram Language. In English, there’s been a whole evolution over thousands of years to settle on the perhaps convenient, but often incoherent, language structure that we have. In the Wolfram Language, we—and particularly I—have been working hard for many many years keeping everything as consistent and coherent as possible. And now we’ve got 5,000 or so “core words” or functions, together with lots of other words that describe specific entities.

And in the process of developing the language, what I’ve been doing explicitly is a little like what’s implicitly happened in English. I’ve been looking at all those computational things and processes out there, and trying to understand which of them are common enough that it’s worth giving names to them.

You know, this idea of symbolic representation seems to be pretty critical to human rational thinking. And it’s really interesting to see how the structure of a language can affect how people think about things. We see a little bit of that in human natural languages, but the effect seems to be much larger in computer languages. And for me as a language designer, it’s fascinating to see the patterns of thinking that open up when people start really understanding the Wolfram Language.

Some people might say, “Why are we using computer languages at all? Why not just use human natural language?” Well, for a start, computers need something to talk to each other in. But one of the things I’ve worked hard on in the Wolfram Language is making sure that it’s easy not only for computers, but also for humans, to understand—kind of a bridge between computers and humans.

And what’s more, it turns out there are things that human natural language, as it’s evolved, just isn’t very good at expressing. Just think about programs. There are some programs that, yes, can easily be represented by a little piece of English, but a lot of programs are really awkward to state in English. But they’re very clean in the Wolfram Language.

So I think we need both. Some things it’s easier to say in English, some in the Wolfram Language.

Post-Linguistic Concepts

But back to things like image identification. That’s a task that’s really about going from all the stuff out there, in this case in the visual world, and finding how to make it symbolic—how to describe things abstractly with words.

Now, here’s the thing: Inside the neural net, one thing that’s happening is that it’s implicitly making distinctions, in effect putting things in categories. In the early layers of the net those categories look remarkably like the categories we know are used in the early stages of human visual processing, and we actually have pretty decent words for them: “round”, “pointy”, and so on.

But pretty soon there are categories implicitly being used that we don’t have words for. It’s interesting that in the course of history, our civilization gradually does develop new words for things. Like in the last few decades, we’ve started talking about “fractal patterns”. But before then, those kind of tree-like structures didn’t tend to get identified as being anything in particular, because we didn’t have words for them.

So our machines are going to discover a lot of categories that our civilization has not come up with. I’ve started calling these things a rather pretentious name: “post-linguistic emergent concepts”, or PLECs for short. I think we can make a metaframework for things like this within the Wolfram Language. But I think PLECs are part of the way our computers can start to really extend the traditional human worldview.

By the way, before we even get to PLECs, there are issues with concepts that humans already understand perfectly well. You see, in the Wolfram Language we’ve got representations of lots of things in the world, and we can turn the vast majority of things that people ask Wolfram|Alpha into precise symbolic forms. But we still can’t turn an arbitrary human conversation into something symbolic.

So how would we do that? Well, I think we have to break it down into some kind of “semantic primitives”: basic structures. Some, like fact statements, we already have in the Wolfram Language. And some, such as mental statements like “I think” or “I want”, we don’t.

The Ancient History

It’s a funny thing. I’ve been working recently on designing this kind of symbolic language. And of course people have tried to do things like this before. But the state of the art is actually mostly from a shockingly long time ago. I mean, back in the 1200s there was a chap called Ramon Llull who started working on this; and in the 1600s, people like Gottfried Leibniz and John Wilkins did too.

It’s quite interesting to look at what those guys figured out with their “philosophical languages” or whatever. Of course, they never had an implementation substrate like we do today. But they understood quite a lot about ontological categories and so on. And looking at what they wrote really highlights, actually, what’s the same and what changes in the course of history. I mean, all their technology stuff is of course horribly out of date. But most of their stuff about the human condition is still just as valid as then, although they certainly had a lot more focus on mortality than we do today.

And today one interesting change is that we really need to attribute almost person-like internal state to machines. Not least because, as it happens, the early applications of all this everyday discourse stuff will be to things we’re building for people talking to consumer devices and cars and so on.

I could talk some about very practical here-and-now technology that’s actually going to be available starting next week, for making what we call PLIs, or Programmable Linguistic Interfaces. But instead let’s talk more about the big picture, and about the future.

The way I see things, throughout history there’s been a thread of using technology to automate more and more of what we do. Humans define goals, and then it’s the job of technology to automatically achieve those goals as well as possible.

A lot of what we’re trying to do with the Wolfram Language is in effect to give people a good way to describe goals. Then our job is to do the computations—or make the external API requests or whatever—to have those goals be achieved.

What Will the AIs Do?

So, by any possible computational definition of the objectives of AI, we’re getting awfully close to achieving them—and actually, in many areas we’ve gone far beyond anything like human intelligence.

But here’s the point: Imagine we have this box that sits on our desk, and it’s able to do all those intelligent things humans can do. The problem is, what does the box choose to do? Somehow it has to be given goals, or purposes. And the point is that there are no absolute goals and purposes. Any given human might say, “The purpose of life is to do X”. But we know that there’s nothing absolute about that.

Purpose ends up getting defined by society and history and civilization. There are plenty of things people do or want to do today that would have seemed absolutely inconceivable 300 years ago. It’s interesting to see the complicated interplay between the progress of technology, the progress of our descriptions of the world—through memes and words and so on—and the evolution of human purposes.

To me, the path of technology seems fairly clear. The evolution of human purposes is a lot less clear.

I mean, on the technology side, more and more of what we do ourselves we’ll be able to outsource to machines. We’ve already outsourced lots of mechanical thinking, like say for doing math. We’re well on the way to outsourcing lots of things about memory, and soon also lots of things about judgment.

People might say, “Well, we’ll never outsource creativity.” Actually, some aspects of that are among the easier things to outsource: We can get lots of inspiration for music or art or whatever just by looking out into the computational universe, and it’s only a matter of time before we can automatically combine those things with knowledge and judgment about the human world.

You know, a lot of our use of technology in the past has been “on demand”. But we’re going to see more and more preemptive use—where our technology predicts what we will want to do, then suggests it.

It’s sort of amusing to me when people talk about the machines taking over; here’s the scenario that I think will actually happen. It’s like with GPSs in cars: Most people—like me—just follow what the GPS tells them to do. Similarly, when there’s something that’s saying, you know, “Pick out that food on the menu”, or “Talk to that person in the crowd”, much of the time we’ll just do what the machine tells us to—partly because the machine is basically able to figure out a lot more than we can.

It’s going to get complicated when machines are acting collectively across a whole society, effectively implementing in software all those things that political philosophers have talked about theoretically. But even at an individual level, it’s very complicated to understand the goal structure.

Yes, the machines can help us “be ourselves, but better”, amplifying and streamlining things we want to do and directions we want to go.

Immortality & Beyond

You know, in the world today, we’ve got a lot of scarce resources. In many parts of the world much less scarce than in the past, but some resources are still scarce. The most notable is probably time. We have finite lives, and that’s a key part of lots of aspects of human motivation and purpose.

It’s surely going to be the biggest discontinuity ever in human history when we achieve effective human immortality. And, by the way, I have absolutely no doubt that we will achieve it. I just wish more progress would get made on things like cryonics to give my generation a better probability of making it to that.

But it’s not completely clear how effective immortality will happen. I think there are several paths, which will probably in practice be combined. The first is that we manage to reverse-engineer enough of biology to put in patches that keep us running biologically indefinitely. That might turn out to be easy, but I’m concerned that it’ll be like trying to keep a server that’s running complex software up forever—which is something for which we have absolutely no theoretical framework, and lots of potential to run into undecidable halting problems and things like that.

The second immortality path is in effect uploading to some kind of engineered digital system. And probably this is something that would happen gradually. First we’d have digital systems that are directly connected to our brains, then these would use technology—perhaps not that different from the ImageIdentify function I was showing you—to start learning from our brains and the experiences we have, and taking more and more of the “cognitive load”… until eventually we’ve got something that responds exactly the same as our brain does.

Once we’ve got that, we’re dealing with something that can evolve quite quickly, independent of the immediate constraints of physics and chemistry, and that can for example explore different parts of the computational universe—inevitably sampling parts that are far from what we as humans currently understand.

Box of a Trillion Souls

So, OK, what is the end state? I often imagine some kind of “box of a trillion souls” that’s sort of the ultimate repository for our civilization. Now at first we might think, “Wow, that’s going to be such an impressive thing, with all that intelligence and consciousness and knowledge and so on inside.” But I’m afraid I don’t think so.

You see, one of the things that emerged from my basic science is what I call the Principle of Computational Equivalence, which says in effect that beyond some low threshold, all systems are equivalent in the sophistication of the computations they do. So that means that there’s not going to be anything abstractly spectacular about the box of a trillion souls. It’ll just be doing a computation at the same level as lots of systems in the universe.

Maybe that’s why we don’t see extraterrestrial intelligence: because there’s nothing abstractly different about something that came from a whole elaborate civilization as compared to things that are just happening in the physical world.

Now, of course, we can be proud that our box of a trillion souls is special, because it came from us, with our detailed history. But will interesting things be happening in it? Well, to define “interesting” we then need a sense of purpose—so things become pretty circular, and it’s a complicated philosophy discussion.

Back to 2015

I’ve come pretty far from talking about practical things for 2015. But the way I like to work, understanding these fundamental issues is pretty important for not making mistakes in building technology here and now; it’s how I’ve figured out one can build the best technology. And right now I’m really excited about the point we’ve reached with the Wolfram Language.

I think we’ve defined a new level of technology to support computational thinking, one that’s going to let people rather quickly do some very interesting things—going from algorithmic ideas to finished apps or new companies or whatever. The Wolfram Cloud and things around it are still in beta right now, but you can certainly try them out—and I hope you will. It’s really easy to get started—though, not surprisingly, because there are actually new ideas, there are things to learn if you really want to take the best advantage of this technology.

Well, that’s probably all I have to say right now. I hope I’ve been able to communicate a few of the exciting things that we’ve got going on right now, and a few of the things that I think are new and emerging in computational thinking and the technology around it. So, thanks very much.

Pi or Pie?! Celebrating Pi Day of the Century (And How to Get Your Very Own Piece of Pi) Thu, 12 Mar 2015 17:08:17 +0000 Stephen Wolfram Pictures from Pi Day now added »

This coming Saturday is “Pi Day of the Century”. The date 3/14/15 in month/day/year format is like the first digits of π. And at 9:26:53.589… it’s a “super pi moment”.

3/14/15 9:26:53.589... a "super pi moment" indeed

Between Mathematica and Wolfram|Alpha, I’m pretty sure our company has delivered more π to the world than any other organization in history. So of course we have to do something special for Pi Day of the Century.

Pi Day of the Century with Wolfram: 3.14.15 9:26:53

A Corporate Confusion

One of my main roles as CEO is to come up with ideas—and I’ve spent decades building an organization that’s good at turning those ideas into reality. Well, a number of weeks ago I was in a meeting about upcoming corporate events, and someone noted that Pi Day (3/14) would happen during the big annual SXSW (South by Southwest) event in Austin, Texas. So I said (or at least I thought I said), “We should have a big pi to celebrate Pi Day.”

I didn’t give it another thought, but a couple of weeks later we had another meeting about upcoming events. One agenda item was Pi Day. And the person who runs our Events group started talking about the difficulty of finding a bakery in Austin to make something suitably big. “What are you talking about?” I asked. And then I realized: “You’ve got the wrong kind of pi!”

I guess in our world pi confusions are strangely common. Siri’s voice-to-text system sends Wolfram|Alpha lots of “pie” every day that we have to specially interpret as “pi”. And then there’s the Raspberry Pi, which has the Wolfram Language included. And for me there’s the additional confusion that my personal fileserver happens to have been named “pi” for many years.

After the pi(e) mistake in our meeting we came up with all kinds of wild ideas to celebrate Pi Day. We’d already rented a small park in the area of SXSW, and we wanted to make the most interesting “pi countdown” we could. We resolved to get a large number of edible pie “pixels”, and use them to create a π shape inside a pie shape. Of course, there’ll be the obligatory pi selfie station, with a “Stonehenge” pi. And a pi(e)-decorated Wolfie mascot for additional selfies. And of course we’ll be doing things with Raspberry Pis too.

A Piece of Pi for Everyone

I’m sure we’ll have plenty of good “pi fun” at SXSW. But we also want to provide pi fun for other people around the world. We were wondering, “What can one do with pi?” Well, in some sense, you can do anything with pi. Because, apart from the trivial fact that it’s the digits of pi, pi’s infinite digit sequence is—so far as we can tell—completely random. So for example any run of digits will eventually appear in it.

How about giving people a personal connection to that piece of math? Pi Day is about a date that appears as the first digits of pi. But any date appears somewhere in pi. So, we thought: Why not give people a way to find out where their birthday (or other significant date) appears in pi, and use that to make personalized pi T-shirts and posters?

In the Wolfram Language, it’s easy to find out where your birthday appears in π. It’s pretty certain that any mm/dd/yy will appear somewhere in the first 10 million digits. On my desktop computer (a Mac Pro), it takes 6.28 seconds (2π?!) to compute that many digits of π.

Here’s the Wolfram Language code to get the result and turn it into a string (dropping the decimal point at position 2):

PiString = StringDrop[ToString[N[Pi, 10^7]], {2}];

Now it’s easy to find any “birthday string”:

First[StringPosition[PiString, "82959"]]

So, for example, my birthday string first appears in π starting at digit position 151,653.
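As a quick sanity check (assuming PiString as defined above), one can extract the five digits at that position and confirm they match the birthday string:

```wolfram
(* Take 5 digits of pi starting at position 151,653; should match "82959" *)
StringTake[PiString, {151653, 151657}]
```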

What’s a good way to display this? It depends on how “pi lucky” you are. For those born on 4/15/92, their birthdate already appears at position 3. (Of course, only a fraction of digit positions can correspond to a valid date string at all.) People born on November 23, 1960 have the birthday string that’s farthest out, appearing only at position 9,982,546. And in fact most people have birthdays that are pretty “far out” in π (the average is 306,150 positions).

Our long-time art director had the idea of using a spiral that goes in and out to display the beginnings and ends of such long digit sequences. And almost immediately, he’d written the code to do this (one of the great things about the Wolfram Language is that non-engineers can write their own code…).

Different ways to display birthdates found in pi, depending on the position at which they begin

Next came deploying that code to a website. And thanks to the Wolfram Programming Cloud, this was basically just one line of code! So now you can go to

Find your birthdate in pi at (mine's August 29, 1959)

…and get your own piece of π!

Here's mine...

And then you can share the image, or get a poster or T-shirt of it:

You can get a personalized shirt or poster of your very own MyPiDay result

The Science of Pi

With all this discussion about pi, I can’t resist saying just a little about the science of pi. But first, just why is pi so famous? Yes, it’s the ratio of circumference to diameter of a circle. And that means that π appears in zillions of scientific formulas. But it’s not the whole story. (And for example most people have never even heard of the analog of π for an ellipse—a so-called complete elliptic integral of the second kind.)

The bigger story is that π appears in a remarkable range of mathematical settings—including many that don’t seem to have anything to do with circles. Like sums of negative powers, or limits of iterations, or the probability that a randomly chosen fraction will not be in lowest terms.
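To give a concrete flavor of this (these are classical results, not anything new here), the sum of inverse squares evaluates to π²/6, and that same quantity is what makes π show up in the lowest-terms probability:

```wolfram
(* A classic sum of negative powers that turns out to involve pi *)
Sum[1/n^2, {n, 1, Infinity}]
(* Pi^2/6 *)

(* The probability that a randomly chosen fraction is in lowest terms
   is 6/Pi^2, so the probability that it is not in lowest terms is: *)
N[1 - 6/Pi^2]
(* 0.392073 *)
```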

If one’s just looking at digit sequences, pi’s 3.1415926… doesn’t seem like anything special. But let’s say one just starts constructing formulas at random and then doing traditional mathematical operations on them, like summing series, doing integrals, finding limits, and so on. One will get lots of answers that are 0, or 1/2, or square root of 2. And there’ll be plenty of cases where there’s no closed form one can find at all. But when one can get a definite result, my experience is that it’s remarkably common to find π in it.

A few other constants show up too, like e (2.7182…), or Euler gamma (0.5772…), or Catalan’s constant (0.9159…). But π is distinctly more common.

Perhaps math could have been set up differently. But at least with math as we humans have constructed it, the number that is π is a widespread building block, and it’s natural that we gave it a name, and that it’s famous—now even to the point of having a day to celebrate it.

What about other constants? “Birthday strings” will certainly appear at different places in different constants. And just like when Wolfram|Alpha tries to find closed forms for numbers, there’s typically a tradeoff between digit position and obscurity of the constants used. So, for example, my birthday string appears at position 151,653 in π, 241,683 in e, 45,515 in square root of 2, 40,979 in ζ(3) … and 196 in the 1601st Fibonacci number.

Randomness in π

Let’s say you make a plot that goes up whenever a digit of π is 5 or above, and down otherwise:

For each consecutive digit of pi, this plot line goes up if the digit is 5-9 and down if it's 0-4
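Here’s a sketch of how such a plot can be made in the Wolfram Language (using just the first 10,000 digits):

```wolfram
(* +1 for digits 5-9, -1 for digits 0-4, accumulated into a walk *)
digits = First[RealDigits[Pi, 10, 10000]];
ListLinePlot[Accumulate[If[# >= 5, 1, -1] & /@ digits]]
```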

It looks just like a random walk. And in fact, all statistical and cryptographic tests of randomness that have been tried on the digits (except tests that effectively just ask “are these the digits of pi?”) say that they look random too.

Why does that happen? There are fairly simple procedures that generate digits of pi. But the remarkable thing is that even though these procedures are simple, the output they produce is complicated enough to seem completely random. In the past, there wasn’t really a context for thinking about this kind of behavior. But it’s exactly what I’ve spent many years studying in all kinds of systems—and wrote about in A New Kind of Science. And in a sense the fact that one can “find any birthday in pi” is directly connected to concepts like my general Principle of Computational Equivalence.

SETI among the Digits

Of course, just because we’ve never seen any regularity in the digits of pi, it doesn’t mean that no such regularity exists. And in fact it could still be that if we did a big search, we might find somewhere far out in the digits of pi some strange regularity lurking.

What would it mean? There’s a science fiction answer at the end of Carl Sagan’s book version of Contact. In the book, the search for extraterrestrial intelligence succeeds in making contact with an interstellar civilization that has created some amazing artifacts—and that then explains that what they in turn find remarkable is that encoded in the distant digits of pi, they’ve found intelligent messages, like an encoded picture of a circle.

At first one might think that finding “intelligence” in the digits of pi is absurd. After all, there’s just a definite simple algorithm that generates these digits. But at least if my suspicions are correct, exactly the same is actually true of our whole universe, so that every detail of its history is in principle computable much like the digits of pi.

Now we know that within our universe we have ourselves as an example of intelligence. SETI is about trying to find other examples. The goal is fairly well defined when the search is for “human-like intelligence”. But—as my Principle of Computational Equivalence suggests—I think that beyond that it’s essentially impossible to make a sharp distinction between what should be considered “intelligent” and what is “merely computational”.

If the century-old mathematical suspicion is correct that the digits of pi are “normal”, it means that every possible sequence eventually occurs among the digits, including all the works of Shakespeare, or any other artifact of any possible civilization. But could there be some other structure—perhaps even superimposed on normality—that for example shows evidence of the generation of intelligence-like complexity?

While it may be conceptually simple, it’s certainly more bizarre to contemplate the possibility of a human-like intelligent civilization lurking in the digits of pi, than in the physical universe as explored by SETI. But if one generalizes what one counts as intelligence, the situation is a lot less clear.

Of course, if we see a complex signal from a pulsar magnetosphere we say it’s “just physics”, not the result of the evolution of a “magnetohydrodynamic civilization”. And similarly if we see some complex structure in the digits of pi, we’re likely to say it’s “just mathematics”, not the result of some “number theoretic civilization”.

One can generalize from the digit sequence of pi to representations of any mathematical constant that is easy to specify with traditional mathematical operations. Sometimes there are simple regularities in those representations. But often there is apparent randomness. And the project of searching for structure is quite analogous to SETI in the physical universe. (One difference, however, is that π as a number to study is selected as a result of the structure of our physical universe, our brains, and our mathematical development. The universe presumably has no such selection, save implicitly from the fact that we exist in it.)

I’ve done a certain amount of searching for regularities in representations of numbers like π. I’ve never found anything significant. But there’s nothing to say that any regularities have to be at all easy to find. And there’s certainly a possibility that it could take a SETI-like effort to reveal them.

But for now, let’s celebrate the Pi Day of our century, and have fun doing things like finding birthday strings in the digits of pi. Of course, someone like me can’t help but wonder what success there will have been by the next Pi Day of the Century, in 2115, in either SETI or “SETI among the digits”…

This Just In…

Pictures from the Pi Day event:

Pi Day with Wolfram photos

To comment, please visit the copy of this post at the Wolfram Blog »

The Wolfram Data Drop Is Live! Wed, 04 Mar 2015 19:38:38 +0000 Stephen Wolfram Where should data from the Internet of Things go? We’ve got great technology in the Wolfram Language for interpreting, visualizing, analyzing, querying and otherwise doing interesting things with it. But the question is, how should the data from all those connected devices and everything else actually get to where good things can be done with it? Today we’re launching what I think is a great solution: the Wolfram Data Drop.

Wolfram Data Drop

When I first started thinking about the Data Drop, I viewed it mainly as a convenience—a means to get data from here to there. But now that we’ve built the Data Drop, I’ve realized it’s much more than that. And in fact, it’s a major step in our continuing efforts to integrate computation and the real world.

So what is the Wolfram Data Drop? At a functional level, it’s a universal accumulator of data, set up to get—and organize—data coming from sensors, devices, programs, or for that matter, humans or anything else. And to store this data in the cloud in a way that makes it completely seamless to compute with.

Data Drop data can come from anywhere

Our goal is to make it incredibly straightforward to get data into the Wolfram Data Drop from anywhere. You can use things like a web API, email, Twitter, web form, Arduino, Raspberry Pi, etc. And we’re going to be progressively adding more and more ways to connect to other hardware and software data collection systems. But wherever the data comes from, the idea is that the Wolfram Data Drop stores it in a standardized way, in a “databin”, with a definite ID.
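Here’s a minimal sketch of that workflow in the Wolfram Language (the values are made up for illustration, and CreateDatabin needs a cloud connection):

```wolfram
(* Create a new databin, which gets assigned a unique ID *)
bin = CreateDatabin[]

(* Drop in an entry; semantics and units travel with the data via Quantity *)
DatabinAdd[bin, <|"temperature" -> Quantity[22.5, "DegreesCelsius"],
                  "humidity" -> Quantity[41, "Percent"]|>]

(* Read the accumulated entries back *)
Values[bin]
```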

Here’s an example of how this works. On my desk right now I have this little device:

This device records the humidity, light, pressure, and temperature at my desk, and sends it to a Data Drop databin. The cable is power; the pen is there to show scale.

Every 30 seconds it gets data from the tiny sensors on the far right, and sends the data via wifi and a web API to a Wolfram Data Drop databin, whose unique ID happens to be “3pw3N73Q”. Like all databins, this databin has a homepage on the web:

The homepage is an administrative point of presence that lets you do things like download raw data. But what’s much more interesting is that the databin is fundamentally integrated right into the Wolfram Language. A core concept of the Wolfram Language is that it’s knowledge based—and has lots of knowledge about computation and about the world built in.

For example, the Wolfram Language knows in real time about stock prices and earthquakes and lots more. But now it can also know about things like environmental conditions on my desk—courtesy of the Wolfram Data Drop, and in this case, of the little device shown above.

Here’s how this works. There’s a symbolic object in the Wolfram Language that represents the databin:

Databin representation in the Wolfram Language

And one can do operations on it. For instance, here are plots of the time series of data in the databin:

Time series from the databin of condition data from my desk: humidity, light, pressure, and temperature

And here are histograms of the values:

Histograms of the same humidity, light, pressure, and temperature data from my desk

And here’s the raw data presented as a dataset:

Raw data records for each of the four types of desktop atmospheric data I collected to the Data Drop

What’s really nice is that the databin—which could contain data from anywhere—is just part of the language. And we can compute with it just like we would compute with anything else.

So here for example are the minimum and maximum temperatures recorded at my desk:
(for aficionados: MinMax is a new Wolfram Language function)

Minimum and maximum temperatures collected by my desktop device

We can convert those to other units (% stands for the previous result):

Converting the minimum and maximum collected temperatures to Fahrenheit
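In code, the computation looks roughly like this (a sketch that assumes each databin entry is an association with a "temperature" key):

```wolfram
(* Pull out all the temperature values from the databin *)
temps = Lookup[Values[Databin["3pw3N73Q"]], "temperature"];

(* Minimum and maximum, as Quantity objects with units attached *)
MinMax[temps]

(* Convert the previous result (%) to Fahrenheit *)
UnitConvert[%, "DegreesFahrenheit"]
```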

Let’s pull out the pressure as a function of time. Here it is:

It's easy to examine any individual part of the data—here pressure as a function of time

Of course, the Wolfram Knowledgebase has historical weather data. So in the Wolfram Language we can just ask it the pressure at my current location for the time period covered by the databin—and the result is encouragingly similar:

The official weather data on pressure for my location nicely parallels the pressures recorded at my desk

If we wanted, we could do all sorts of fancy time series analysis, machine learning, modeling, or whatever, with the data. Or we could do elaborate visualizations of it. Or we could set up structured or natural language queries on it.

Here’s an important thing: notice that when we got data from the databin, it came with units attached. That’s an example of a crucial feature of the Wolfram Data Drop: it doesn’t just store raw data, it stores data that has real meaning attached to it, so it can be unambiguously understood wherever it’s going to be used.

We’re using a big piece of technology to do this: our Wolfram Data Framework (WDF). Developed originally in connection with Wolfram|Alpha, it’s our standardized symbolic representation of real-world data. And every databin in the Wolfram Data Drop can use WDF to define a “data semantics signature” that specifies how its data should be interpreted—and also how our automatic importing and natural language understanding system should process new raw data that comes in.

The beauty of all this is that once data is in the Wolfram Data Drop, it becomes both universally interpretable and universally accessible, to the Wolfram Language and to any system that uses the language. So, for example, any public databin in the Wolfram Data Drop can immediately be accessed by Wolfram|Alpha, as well as by the various intelligent assistants that use Wolfram|Alpha. Tell Wolfram|Alpha the name of a databin, and it’ll automatically generate an analysis and a report about the data that’s in it:

The Wolfram|Alpha results for "databin 3pw3N73Q"

Through WDF, the Wolfram Data Drop immediately handles more than 10,000 kinds of units and physical quantities. But the Data Drop isn’t limited to numbers or numerical quantities. You can put anything you want in it. And because the Wolfram Language is symbolic, it can handle it all in a unified way.

The Wolfram Data Drop automatically includes timestamps, and, when it can, geolocations. Both of these have precise canonical representations in WDF. As do chemicals, cities, species, networks, or thousands of other kinds of things. But you can also drop things like images into the Wolfram Data Drop.

Somewhere in our Quality Assurance department there’s a camera on a Raspberry Pi watching two recently acquired corporate fish—and dumping an image every 10 minutes into a databin in the Wolfram Data Drop:

Images are easy to store in Data Drop, and to retrieve

In the Wolfram Language, it’s easy to stack all the images up in a manipulable 3D “fish cube” image:

If this were a Wolfram CDF document, you could simply click and drag to rotate the cube and view it from any angle
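The stacking step itself is tiny. A sketch, where "fish-cam-bin-id" is a hypothetical databin ID and "image" a hypothetical key, used for illustration:

```wolfram
(* Retrieve the stored frames from the databin *)
frames = Lookup[Values[Databin["fish-cam-bin-id"]], "image"];

(* Stack the 2D frames into a single rotatable 3D image *)
Image3D[frames]
```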

Or to process the images to get a heat map of where the fish spend time:

Apparently the fish like the lower right area of the tank
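One plausible way to compute such a heat map (a sketch, not necessarily what was done here) is to average how much each frame differs from the mean image:

```wolfram
(* frames is assumed to be the list of fish-tank images from the databin *)
mean = Image[Mean[ImageData /@ frames]];

(* Average per-pixel deviation from the mean image *)
deviation = Image[Mean[ImageData[ImageDifference[#, mean]] & /@ frames]];

(* Render the deviations as a heat map *)
Colorize[ColorConvert[deviation, "Grayscale"],
  ColorFunction -> "TemperatureMap"]
```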

We can do all kinds of analysis in the Wolfram Language. But to me the most exciting thing here is how easy it is to get new real-world data into the language, through the Wolfram Data Drop.

Around our company, databins are rapidly proliferating. It’s so easy to create them, and to hook up existing monitoring systems to them. We’ve got databins now for server room HVAC, for weather sensors on the roof of our headquarters building, for breakroom refrigerators, for network ping data, and for the performance of the Data Drop itself. And there are new ones every day.

Lots of personal databins are being created, too. I myself have long been a personal data enthusiast. And in fact, I’ve been collecting personal analytics on myself for more than a quarter of a century. But I can already tell that March 2015 is going to show a historic shift. Because with the Data Drop, it’s become vastly easier to collect data, with the result that the number of streams I’m collecting is jumping up. I’ll be at least a 25-databin human soon… with more to come.

A really important thing is that because everything in the Wolfram Data Drop is stored in WDF, it’s all semantic and canonicalized, with the result that it’s immediately possible to compare or combine data from completely different databins—and do meaningful computations with it.

So long as you’re dealing with fairly modest amounts of data, the basic Wolfram Data Drop is set up to be completely free and open, so that anyone or any device can immediately drop data into it. Official users can enter much larger amounts of data—at a rate that we expect to be able to progressively increase.

Wolfram Data Drop databins can be either public or private. And they can either be open to add to, or require authentication. Anyone can get access to the Wolfram Data Drop in our main Wolfram Cloud. But organizations that get their own Wolfram Private Clouds will also soon be able to have their own private Data Drops, running inside their own infrastructure.

So what’s a typical workflow for using the Wolfram Data Drop? It depends on what you’re doing. And even with a single databin, it’s common in my experience to want more than one workflow.

It’s very convenient to be able to take any databin and immediately compute with it interactively in a Wolfram Language session, exploring the data in it, and building up a notebook about it.

But in many cases one also wants something to be done automatically with a databin. For example, one can set up a scheduled task to create a report from the databin, say to email out. One can also have the report live on the web, hosted in the Wolfram Cloud, perhaps using CloudCDF to let anyone interactively explore the data. One can make it so that a new report is automatically generated whenever someone visits a page, or one can create a dashboard where the report is continuously regenerated.
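A minimal sketch of that kind of workflow in the Wolfram Language (the field name, email address, and scheduling interval here are purely illustrative, and the exact API details may differ):

```wolfram
(* Create a databin, drop in a reading, and schedule a daily emailed
   report. The "temperature" field and address are hypothetical. *)
bin = CreateDatabin[];
DatabinAdd[bin, <|"temperature" -> 22.3|>];
RunScheduledTask[
 SendMail["To" -> "me@example.com",
  "Subject" -> "Daily databin report",
  "Body" -> DateListPlot[bin]],
 86400  (* repeat every 86400 seconds, i.e. daily *)]
```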

It’s not limited to the web. Once a report is in the Wolfram Cloud, it immediately becomes accessible on standard mobile or wearable devices. And it’s also accessible on desktop systems.

You don’t have to make a report. Instead, you can just have a Wolfram Language program that watches a databin, then for example sends out alerts—or takes some other action—if whatever combination of conditions you specify occurs.
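A sketch of such a watcher (the bin ID, field name, threshold, and polling interval are all hypothetical):

```wolfram
(* Poll a databin every ten minutes and send an alert when the most
   recent value crosses a threshold. *)
RunScheduledTask[
 With[{latest = Last[Databin["4rT9Xvzu"]["Values"]]},
  If[latest["temperature"] > 30,
   SendMail["To" -> "ops@example.com",
    "Subject" -> "Temperature alert",
    "Body" -> TextString[latest]]]],
 600]
```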

You can make a databin public, so you’re effectively publishing data through it. Or you can make it private, and available only to the originator of the data—or to some third party that you designate. You can make an API that accesses data from a databin in raw or processed form, and you can call it not only from the web, but also from any programming language or system.
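For instance, a processed-data API might be sketched like this (the bin ID and field name are hypothetical; the deployed URL can then be called from any language that can make a web request):

```wolfram
(* Deploy a public API that returns a summary statistic from a
   databin as JSON. *)
CloudDeploy[
 APIFunction[{},
  Mean[Lookup[Databin["4rT9Xvzu"]["Values"], "temperature"]] &,
  "JSON"],
 Permissions -> "Public"]
```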

A single databin can have data coming only from one source—or one device—or it can have data from many sources, and act as an aggregation point. There’s always detailed metadata included with each piece of data, so one can tell where it comes from.

For several years, we’ve been quite involved with companies who make connected devices, particularly through our Connected Devices Project. And many times I’ve had a similar conversation: The company will tell me about some wonderful new device they’re making, that measures something very interesting. Then I’ll ask them what’s going to happen with data from the device. And more often than not, they’ll say they’re quite concerned about this, and that they don’t really want to have to hire a team to build out cloud infrastructure and dashboards and apps and so on for them.

Well, part of the reason we created the Wolfram Data Drop is to give such companies a better solution. They deal with getting the data—then they just drop it into the Data Drop, and it goes into our cloud (or their own private version of it), where it’s easy to analyze, visualize, query, and distribute through web pages, apps, APIs, or whatever.

It looks as if a lot of device companies are going to make use of the Wolfram Data Drop. They’ll get their data to it in different ways. Sometimes through web APIs. Sometimes by direct connection to a Wolfram Language system, say on a Raspberry Pi. Sometimes through Arduino or Electric Imp or other hardware platforms compatible with the Data Drop. Sometimes gatewayed through phones or other mobile devices. And sometimes from other clouds where they’re already aggregating data.

We’re not at this point working specifically on the “first yard” problem of getting data out of the device through wires or wifi or Bluetooth or whatever. But we’re setting things up so that with any reasonable solution to that, it’s easy to get the data into the Wolfram Data Drop.

There are different models for people to access data from connected devices. Developers or researchers can come directly to the Wolfram Cloud, through either cloud or desktop versions of the Wolfram Language. Consumer-oriented device companies can choose to set up their own private portals, powered by the Wolfram Cloud, or perhaps by their own Wolfram Private Cloud. Or they can access the Data Drop from a Wolfram mobile app, or their own mobile app. Or from a wearable app.

Sometimes a company may want to aggregate data from many devices—say for a monitoring net, or for a research study. And again their users may want to work directly with the Wolfram Language, or through a portal or app.

When I first thought about the Wolfram Data Drop, I assumed that most of the data dropped into it would come from automated devices. But now that we have the Data Drop, I’ve realized that it’s very useful for dealing with data of human origin too. It’s a great way to aggregate answers—say in a class or a crowdsourcing project—collect feedback, keep diary-type information, do lifelogging, and so on. Once one’s defined a data semantics signature for a databin, the Wolfram Data Drop can automatically generate a form to supply data, which can be deployed on the web or on mobile.

The form can ask for text, or for images, or whatever. And when it’s text, our natural language understanding system can take the input and automatically interpret it as WDF, so it’s immediately standardized.
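A sketch of such a form (the bin ID is hypothetical; the "Location" interpreter is one of the built-in semantic types that turns free-form text into WDF):

```wolfram
(* A deployable web form whose text input is semantically
   interpreted as a location before being dropped into a databin. *)
CloudDeploy[
 FormFunction[{"place" -> "Location"},
  (DatabinAdd[Databin["4rT9Xvzu"], #]; "Thanks, recorded!") &],
 Permissions -> "Public"]
```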

Now that we’ve got the Wolfram Data Drop, I keep on finding more uses for it—and I can’t believe I lived so long without it. As throughout the Wolfram Language, it’s really a story of automation: the Wolfram Data Drop automates away lots of messiness that’s been associated with collecting and processing actual data from real-world sources.

And the result for me is that it’s suddenly realistic for anyone to collect and analyze all sorts of data themselves, without getting any special systems built. For example, last weekend, I ended up using the Wolfram Data Drop to aggregate performance data on our cloud. Normally this would be a complex and messy task that I wouldn’t even consider doing myself. But with the Data Drop, it took me only minutes to set up—and, as it happens, gave me some really interesting results.

I’m excited about all the things I’m going to be able to do with the Wolfram Data Drop, and I’m looking forward to seeing what other people do with it. Do try out the beta that we launched today, and give us feedback (going into a Data Drop databin of course). I’m hoping it won’t be long before lots of databins are woven into the infrastructure of the world: another step forward in our long-term mission of making the world computable…

To comment, please visit the copy of this post at the Wolfram Blog »

Introducing Tweet-a-Program Thu, 18 Sep 2014 21:29:39 +0000 Stephen Wolfram In the Wolfram Language a little code can go a long way. And to use that fact to let everyone have some fun, today we’re introducing Tweet-a-Program.

Compose a tweet-length Wolfram Language program, and tweet it to @WolframTaP. Our Twitter bot will run your program in the Wolfram Cloud and tweet back the result.

Hello World from Tweet-a-Program: GeoGraphics[Text[Style["Hello!",150]],GeoRange->"World"]

One can do a lot with Wolfram Language programs that fit in a tweet. Like here’s a 78-character program that generates a color cube made of spheres:


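The program itself isn’t reproduced here, but a sketch of the kind of thing meant—not necessarily the original 78 characters—would be:

```wolfram
(* A cube of spheres, each colored according to its coordinates. *)
Graphics3D[Table[{RGBColor[x, y, z], Sphere[{x, y, z}, .05]},
  {x, 0, 1, .25}, {y, 0, 1, .25}, {z, 0, 1, .25}]]
```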
It’s easy to make interesting patterns:


Here’s a 44-character program that seems to express itself like an executable poem:


Going even shorter, here’s a little “fractal hack”, in just 36 characters:


Putting in some math makes it easy to get all sorts of elaborate structures and patterns:



You don’t have to make pictures. Here, for instance, are the first 1000 digits of π, sized according to their magnitudes (notice that run of 9s!):


The Wolfram Language not only knows how to compute π, as well as a zillion other algorithms; it also has a huge amount of built-in knowledge about the real world. So right in the language, you can talk about movies or countries or chemicals or whatever. And here’s a 78-character program that makes a collage of the flags of Europe, sized according to country population:


We can make this even shorter if we use some free-form natural language in the program. In a typical Wolfram notebook interface, you do this using CTRL + =, but in Tweet-a-Program, you can do it just using =[...]:

ImageCollage[=[Europe populations]->=[Europe flags]]

The Wolfram Language knows a lot about geography. Here’s a program that makes a “powers of 10” sequence of disks, centered on the Eiffel Tower:

Table[GeoGraphics[GeoDisk[=[Eiffel Tower],Quantity[10^(n+1),"Meters"]],GeoProjection->"Bonne"],{n,6}]

There are many, many kinds of real-world knowledge built into the Wolfram Language, including some pretty obscure ones. Here’s a map of all the shipwrecks it knows in the Atlantic:

GeoListPlot[GeoEntities[=[Atlantic Ocean],"Shipwreck"]]

The Wolfram Language deals with images too. Here’s a program that gets images of the planets, then randomly scrambles their colors to give them a more exotic look:


Here’s an image of me, repeatedly edge-detected:

NestList[EdgeDetect,=[Stephen Wolfram image],5]

Or, for something more “pop culture” (and ready for image analysis etc.), here’s an array of random movie posters:


The Wolfram Language does really well with words and text too. Like here’s a program that generates an “infographic” showing the relative frequencies of first letters for words in English and in Spanish:


And here—just fitting in a tweet—is a program that computes a smoothed estimate of the frequencies of “Alice” and “Queen” going through the text of Alice in Wonderland:


Networks are good fodder for Tweet-a-Program too. Like here’s a program that generates a sequence of networks:


And here—just below the tweet length limit—is a program that generates a random cloud of polyhedra:


What’s the shortest “interesting program” in the Wolfram Language?

In some languages, it might be a “quine”—a program that outputs its own code. But in the Wolfram Language, quines are completely trivial. Since everything is symbolic, all it takes to make a quine is a single character:


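For instance, since an undefined symbol in the Wolfram Language simply evaluates to itself, any single letter is already a quine:

```wolfram
In[1]:= x

Out[1]= x
```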
Using the built-in knowledge in the Wolfram Language, you can make some very short programs with interesting output. Like here’s a 15-character program that generates an image from built-in data about knots:


Some short programs are very easy to understand:


It’s fun to make short “mystery” programs. What’s this one doing?


Or this one?


Or, much more challengingly, this one:


I’ve actually spent many years of my life studying short programs and what they do—and building up a whole science of the computational universe, described in my big book A New Kind of Science. It all started more than three decades ago—with a computer experiment that I can now do with just a single tweet:


My all-time favorite discovery is tweetable too:


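That favorite discovery is the rule 30 cellular automaton, whose evolution from a single black cell fits comfortably in a tweet. A sketch of one way to write it (not necessarily the exact tweet):

```wolfram
(* Rule 30 from a single black cell: a very simple rule that
   produces seemingly random behavior. *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, 100]]
```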
If you go out searching in the computational universe, it’s easy to find all sorts of amazing things:


An ultimate question is whether somewhere out there in the computational universe there is a program that represents our whole physical universe. And is that program short enough to be tweetable in the Wolfram Language?

But regardless of this, we already know that the Wolfram Language lets us write amazing tweetable programs about an incredible diversity of things. It’s taken more than a quarter of a century to build the huge tower of knowledge and automation that’s now in the Wolfram Language. But this richness is what makes it possible to express so much in the space of a tweet.

In the past, only ordinary human languages were rich enough to be meaningfully used for tweeting. But what’s exciting now is that it seems like the Wolfram Language has passed a kind of threshold of general expressiveness that lets it, too, be meaningfully tweetable. For like ordinary human languages, it can talk about all sorts of things, and represent all sorts of ideas. But there’s also something else about it: unlike ordinary human languages, everything in it always has a precisely defined meaning—and what you write is not just readable, but also runnable.

Tweets in an ordinary human language are (presumably) intended to have some effect on the mind of whoever reads them. But the effect may be different on different minds, and it’s usually hard to know exactly what it is. But tweets in the Wolfram Language have a well-defined effect—which you see when they’re run.

It’s interesting to compare the Wolfram Language to ordinary human languages. An ordinary language, like English, has a few tens of thousands of reasonably common “built-in” words, excluding proper names etc. The Wolfram Language has about 5000 built-in named objects, excluding constructs like entities specified by proper names.

And one thing that’s important about the Wolfram Language—that it shares with ordinary human languages—is that it’s not only writable by humans, but also readable by them. There’s vocabulary to acquire, and there are a few principles to learn—but it doesn’t take long before, as a human, one can start to understand typical Wolfram Language programs.

Sometimes it’s fairly easy to give at least a rough translation (or “explanation”) of a Wolfram Language program in ordinary human language. But it’s very common for a Wolfram Language program to express something that’s quite difficult to communicate—at least at all succinctly—in ordinary human language. And inevitably this means that there are things that are easy to think about in the Wolfram Language, but difficult to think about in ordinary human language.

Just like with an ordinary language, there are language arts for the Wolfram Language. There’s reading and comprehension. And there’s writing and composition. Always with lots of ways to express something, but now with a precise notion of correctness, as well as all sorts of measures like speed of execution.

And like with ordinary human language, there’s also the matter of elegance. One can look at both meaning and presentation. And one can think of distilling the essence of things to create a kind of “code poetry”.

When I first came up with Tweet-a-Program it seemed mostly like a neat hack. But what I’ve realized is that it’s actually a window into a new kind of expression—and a form of communication that humans and computers can share.

Of course, it’s also intended to be fun. And certainly for me there’s great satisfaction in creating a tiny, elegant gem of a program that produces something amazing.

And now I’m excited to see what everyone will do with it. What kinds of things will be created? What popular “code postcards” will there be? Who will be inspired to code? What puzzles will be posed and solved? What competitions will be defined and won? And what great code artists and code poets will emerge?

Now that we have tweetable programs, let’s go find what’s possible…

To develop and test programs for Tweet-a-Program, you can log in free to the Wolfram Programming Cloud, or use any other Wolfram Language system, on the desktop or in the cloud. Check out some details here.

To comment, please visit the copy of this post at the Wolfram Blog »

Launching Today: Mathematica Online! Mon, 15 Sep 2014 19:12:21 +0000 Stephen Wolfram It’s been many years in the making, and today I’m excited to announce the launch of Mathematica Online: a version of Mathematica that operates completely in the cloud—and is accessible just through any modern web browser.

In the past, using Mathematica has always involved first installing software on your computer. But as of today that’s no longer true. Instead, all you have to do is point a web browser at Mathematica Online, then log in, and immediately you can start to use Mathematica—with zero configuration.

Here’s what it looks like:

Click to open in Mathematica Online (you will need to log in or create a free account)

It’s a notebook interface, just like on the desktop. You interactively build up a computable document, mixing text, code, graphics, and so on—with inputs you can immediately run, hierarchies of cells, and even things like Manipulate. It’s taken a lot of effort, but we’ve been able to implement almost all the major features of the standard Mathematica notebook interface purely in a web browser—extending CDF (Computable Document Format) to the cloud.

There are some tradeoffs of course. For example, Manipulate can’t be as zippy in the cloud as it is on the desktop, because it has to run across the network. But because the Cloud CDF interface runs directly in the web browser, a Manipulate can immediately be embedded in any web page, without any plugin, like right here:

Another huge feature of Mathematica Online is that because your files are stored in the cloud, you can immediately access them from anywhere. You can also easily collaborate: all you have to do is set permissions on the files so your collaborators can access them.  Or, for example, in a class, a professor can create notebooks in the cloud that are set so each student gets their own active copy to work with—that they can then email or share back to the professor.

And since Mathematica Online runs purely through a web browser, it immediately works on mobile devices too. Even better, there’s soon going to be a Wolfram Cloud app that provides a native interface to Mathematica Online, both on tablets like the iPad, and on phones:

Wolfram Cloud app: native interface to Mathematica Online

There are lots of great things about Mathematica Online. There are also lots of great things about traditional desktop Mathematica. And I, for one, expect routinely to use both of them.

They fit together really well.  Because from Mathematica Online there’s a single button that “peels off” a notebook to run on the desktop. And within desktop Mathematica, you can seamlessly access notebooks and other files that are stored in the cloud.

If you have desktop Mathematica installed on your machine, by all means use it.  But get Mathematica Online too (which is easy to do—through Premier Service Plus for individuals, or a site license add-on).  And then use the Wolfram Cloud to store your files, so you can access and compute with them from anywhere with Mathematica Online. And so you can also immediately share them with anyone you want.

Share access easily from Mathematica Online

By the way, when you run notebooks in the cloud, there are some extra web-related features you get—like being able to embed inside a notebook other web pages, or videos, or actually absolutely any HTML code.

Mathematica Online is initially set up to run—and store content—in our main Wolfram Cloud. But it’ll soon also be possible to get a Wolfram Private Cloud—so you operate entirely in your own infrastructure, and for example let people in your organization access Mathematica Online without ever using the public web.

A few weeks ago we launched the Wolfram Programming Cloud—our very first full product based on the Wolfram Language, and Wolfram Cloud technology. Mathematica Online is our second product based on this technology stack.

The Wolfram Programming Cloud is focused on creating deployable cloud software. Mathematica Online is instead focused on providing a lightweight web-based version of the traditional Mathematica experience. Over the next few months, we’re going to be releasing a sequence of other products based on the same technology stack, including the Wolfram Discovery Platform (providing unlimited access to the Wolfram Knowledgebase for R&D) and the Wolfram Data Science Platform (providing a complete data-source-to-reports data science workflow).

One of my goals since the beginning of Mathematica more than a quarter century ago has been to make the system as widely accessible as possible. And it’s exciting today to be able to take another major new step in that direction—making Mathematica immediately accessible to anyone with a web browser.

There’ll be many applications. From allowing remote access for existing Mathematica users. To supporting mobile workers. To making it easy to administer Mathematica for project-based users, or on public-access computers. As well as providing a smooth new workflow for group collaboration and for digital classrooms.

But for me right now it’s just so neat to be able to see all the power of Mathematica immediately accessible through a plain old web browser—on a computer or even a phone.

And all you need do is go to the Mathematica Online website…

To comment, please visit the copy of this post at the Wolfram Blog »

]]> 0
Computational Knowledge and the Future of Pure Mathematics Tue, 12 Aug 2014 14:50:54 +0000 Stephen Wolfram Every four years for more than a century there’s been an International Congress of Mathematicians (ICM) held somewhere in the world. In 1900 it was where David Hilbert announced his famous collection of math problems—and it’s remained the top single periodic gathering for the world’s research mathematicians.

This year the ICM is in Seoul, and I’m going to it today. I went to the ICM once before—in Kyoto in 1990. Mathematica was only two years old then, and mathematicians were just getting used to it. Plenty already used it extensively—but at the ICM there were also quite a few who said, “I do pure mathematics. How can Mathematica possibly help me?”


Twenty-four years later, the vast majority of the world’s pure mathematicians do in fact use Mathematica in one way or another. But there’s nevertheless a substantial core of pure mathematics that still gets done pretty much the same way it’s been done for centuries—by hand, on paper.

Ever since the 1990 ICM I’ve been wondering how one could successfully inject technology into this. And I’m excited to say that I think I’ve recently begun to figure it out. There are plenty of details that I don’t yet know. And to make what I’m imagining real will require the support and involvement of a substantial set of the world’s pure mathematicians. But if it’s done, I think the results will be spectacular—and will surely change the face of pure mathematics at least as much as Mathematica (and for a younger generation, Wolfram|Alpha) have changed the face of calculational mathematics, and potentially usher in a new golden age for pure mathematics.

Workflow of pure math

The whole story is quite complicated. But for me one important starting point is the difference in the typical workflows for calculational mathematics and pure mathematics. Calculational mathematics tends to involve setting up calculational questions, and then working through them to get results—just like in typical interactive Mathematica sessions. But pure mathematics tends to involve taking mathematical objects, results or structures, coming up with statements about them, and then giving proofs to show why those statements are true.

How can we usefully insert technology into this workflow? Here’s one simple way. Think about Wolfram|Alpha. If you enter 2+2, Wolfram|Alpha—like Mathematica—will compute 4. But if you enter new york—or, for that matter, 2.3363636 or cos(x) log(x)—there’s no single “answer” for it to compute. And instead what it does is to generate a report that gives you a whole sequence of “interesting facts” about what you entered.

Part of Wolfram|Alpha's output for cos(x) log(x)

And this kind of thing fits right into the workflow for pure mathematics. You enter some mathematical object, result or structure, and then the system tries to tell you interesting things about it—just like some extremely wise mathematical colleague might. You can guide the system if you want to, by telling it what kinds of things you want to know about, or even by giving it a candidate statement that might be true. But the workflow is always the Wolfram|Alpha-like “what can you tell me about that?” rather than the Mathematica-like “what’s the answer to that?”

Wolfram|Alpha already does quite a lot of this kind of thing with mathematical objects. Enter a number, or a mathematical expression, or a graph, or a probability distribution, or whatever, and Wolfram|Alpha will use often-quite-sophisticated methods to try to tell you a collection of interesting things about it.

Wolfram|Alpha tells you interesting things about mathematical objects—here "petersen graph", "stellated dodecahedron", "pareto distribution", and "42424"

But to really be useful in pure mathematics, there’s something else that’s needed. In addition to being able to deal with concrete mathematical objects, one also has to be able to deal with abstract mathematical structures.

Countless pure mathematical papers start with things like, “Let F be a field with such-and-such properties.” We need to be able to enter something like this—then have our system automatically give us interesting facts and theorems about F, in effect creating a whole automatically generated paper that tells the story of F.

So what would be involved in creating a system to do this? Is it even possible? There are several different components, all quite difficult and time consuming to build. But based on my experiences with Mathematica, Wolfram|Alpha, and A New Kind of Science, I am quite certain that with the right leadership and enough effort, all of them can in fact be built.

A key part is to have a precise symbolic description of mathematical concepts and constructs. Lots of this now already exists—after more than a quarter century of work—in Mathematica. Because built right into the Wolfram Language are very general ways to represent geometries, or equations, or stochastic processes or quantifiers. But what’s not built in are representations of pure mathematical concepts like bijections or abstract semigroups or pullbacks.

Mathematica Pura

Over the years, plenty of mathematicians have implemented specific cases. But could we systematically extend the Wolfram Language to cover the whole range of pure mathematics—and make a kind of “Mathematica Pura”? The answer is unquestionably yes. It’ll be fascinating to do, but it’ll take lots of difficult language design.

I’ve been doing language design now for 35 years—and it’s the hardest intellectual activity I know. It requires a curious mixture of clear thinking, aesthetics and pragmatic judgement. And it involves always seeking the deepest possible understanding, and trying to do the broadest unification—to come up in the end with the cleanest and “most obvious” primitives to represent things.

Today the main way pure mathematics is described—say in papers—is through a mixture of mathematical notation and natural language, together with a few diagrams. And in designing a precise symbolic language for pure mathematics, this has to be the starting point.

One might think that somehow mathematical notation would already have solved the whole problem. But there’s actually only a quite small set of constructs and concepts that can be represented with any degree of standardization in mathematical notation—and indeed many of these are already in the Wolfram Language.

So how should one go further? The first step is to understand what the appropriate primitives are. The whole Wolfram Language today has about 5000 built-in functions—together with many millions of built-in standardized entities. My guess is that to broadly support pure mathematics there would need to be something less than a thousand other well-designed functions that in effect define frameworks—together with maybe a few tens of thousands of new entities or their analogs.

Wolfram Language function and entity categories

Take something like function spaces. Maybe there’ll be a FunctionSpace function to represent a function space. Then there’ll be various operations on function spaces, like PushForward or MetrizableQ. Then there’ll be lots of named function spaces, like “CInfinity”, with various kinds of parameterizations.

Underneath, everything’s just a symbolic expression. But in the Wolfram Language there end up being three immediate ways to input things, all of which are critical to having a convenient and readable language. The first is to use short notations—like + or \[ForAll]—as in standard mathematical notation. The second is to use carefully chosen function names—like MatrixRank or Simplex. And the third is to use free-form natural language—like trefoil knot or aleph0.

One wants to have short notations for some of the most common structural or connective elements. But one needs the right number: not too few, like in LISP, nor too many, like in APL. Then one wants to have function names made of ordinary words, arranged so that if one’s given something written in the language one can effectively just “read the words” to know at least roughly what’s going on in it.

Computers & humans

But in the modern Wolfram Language world there’s also free-form natural language. And the crucial point is that by using this, one can leverage all the various convenient—but sloppy—notations that actual mathematicians use and find familiar. In the right context, one can enter “L2” for Lebesgue Square Integrable—and the natural language system will take care of disambiguating it and inserting the canonical symbolic underlying form.

Ultimately every named construct or concept in pure mathematics needs to have a place in our symbolic language. Most of the 13,000+ entries in MathWorld. Material from the 5600 or so entries in the MSC2010 classification scheme. All the many things that mathematicians in any given field would readily recognize when told their names.

But, OK, so let’s say we manage to create a precise symbolic language that captures the concepts and constructs of pure mathematics. What can we do with it?

One thing is to use it “Wolfram|Alpha style”: you give free-form input, which is then interpreted into the language, and then computations are done, and a report is generated.

But there’s something else too. If we have a sufficiently well-designed symbolic language, it’ll be useful not only to computers but also to humans. In fact, if it’s good enough, people should prefer to write out their math in this language than in their current standard mixture of natural language and mathematical notation.

When I write programs in the Wolfram Language, I pretty much think directly in the language. I’m not coming up with a description in English of what I’m trying to do and then translating it into the Wolfram Language. I’m forming my thoughts from the beginning in the Wolfram Language—and making use of its structure to help me define those thoughts.

If we can develop a sufficiently good symbolic language for pure mathematics, then it’ll provide something for pure mathematicians to think in too. And the great thing is that if you can describe what you’re thinking in a precise symbolic language, there’s never any ambiguity about what anything means: there’s a precise definition that you can just go to the documentation for the language to find.

And once pure math is represented in a precise symbolic language, it becomes in effect something on which computation can be done. Proofs can be generated or checked. Searches for theorems can be done. Connections can automatically be made. Chains of prerequisites can automatically be found.

But, OK, so let’s say we have the raw computational substrate we need for pure mathematics. How can we use this to actually implement a Wolfram|Alpha-like workflow where we enter descriptions of things, and then in effect automatically get mathematical wisdom about them?

There are two seemingly different directions one can go. The first is to imagine abstractly enumerating possible theorems about what has been entered, and then using heuristics to decide which of them are interesting. The second is to start from computable versions of the millions of theorems that have actually been published in the literature of mathematics, and then figure out how to connect these to whatever has been entered.

Each of these directions in effect reflects a slightly different view of what doing mathematics is about. And there’s quite a bit to say about each direction.

Math by enumeration Let’s start with theorem enumeration. In the simplest case, one can imagine starting from an axiom system and then just enumerating true theorems based on that system. There are two basic ways to do this. The first is to enumerate possible statements, and then to use (implicit or explicit) theorem-proving technology to try to determine which of them are true. And the second is to enumerate possible proofs, in effect treeing out possible ways the axioms can be applied to get theorems.

It’s easy to do either of these things for something like Boolean algebra. And the result is that one gets a sequence of true theorems. But if a human looks at them, many of them seem trivial or uninteresting. So then the question is how to know which of the possible theorems should actually be considered “interesting enough” to be included in a report that’s generated.
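To make the first of these concrete, here is a minimal Python sketch (not anything from the actual system, which would use Wolfram Language theorem-proving machinery) that enumerates small Boolean expressions in two variables and finds every true equational theorem among them by brute-force truth-table comparison:

```python
from itertools import product

# Build small Boolean expressions over variables p, q as (name, function) pairs.
VARS = [("p", lambda p, q: p), ("q", lambda p, q: q)]

def grow(exprs):
    """One round of combining expressions with Not, And, Or."""
    new = list(exprs)
    for n1, f1 in exprs:
        new.append((f"!{n1}", lambda p, q, f1=f1: not f1(p, q)))
        for n2, f2 in exprs:
            new.append((f"({n1}&{n2})", lambda p, q, f1=f1, f2=f2: f1(p, q) and f2(p, q)))
            new.append((f"({n1}|{n2})", lambda p, q, f1=f1, f2=f2: f1(p, q) or f2(p, q)))
    return new

def truth_table(f):
    """Evaluate an expression on all assignments of p and q."""
    return tuple(f(p, q) for p, q in product([False, True], repeat=2))

exprs = grow(VARS)  # all expressions with at most one operator layer

# Two expressions state a true "theorem" lhs == rhs iff their tables agree.
theorems = [(a, b) for (a, fa) in exprs for (b, fb) in exprs
            if a < b and truth_table(fa) == truth_table(fb)]

print(len(theorems))                    # prints 8
print(("(p&q)", "(q&p)") in theorems)   # True: commutativity of And shows up
```

Most of the 8 theorems found this way are trivial restatements like p == (p&p), which is exactly the flood of "uninteresting" true statements that the heuristics discussed below need to filter.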

My first assumption was that there would be no automatic approach to this—and that “interestingness” would inevitably depend on the historical development of the relevant area of mathematics. But when I was working on A New Kind of Science, I did a simple experiment for the case of Boolean algebra.

Partial list of Boolean algebra theorems, from p. 817 of "A New Kind of Science"

There are 14 theorems of Boolean algebra that are usually considered “interesting enough” to be given names in textbooks. I took all possible theorems and listed them in order of complexity (number of variables, number of operators, etc.). And the surprising thing I found is that the set of named theorems corresponds almost exactly to the set of theorems that can’t be proved just from ones that precede them in the list. In other words, the theorems that have been given names are in a sense exactly the minimal statements of new information about Boolean algebra.

Boolean algebra is of course a very simple case. And in the kind of enumeration I just described, once one’s got the theorems corresponding to all the axioms, one would conclude that there aren’t any more “interesting theorems” to find—which for many mathematical theories would be quite silly. But I think this example is a good indication of how one can start to use automated heuristics to figure out which theorems are “worth reporting on”, and which are, for example, just “uninteresting embellishments”.

Interestingness Of course, the general problem of ranking “what’s interesting” comes up all over Wolfram|Alpha. In mathematical examples, one’s asking “what region is interesting to plot?”, “what alternate forms are interesting?” and so on. When one enters a single number, one’s also asking “what closed forms are interesting enough to show?”—and to know this, one for example has to invent rankings for all sorts of mathematical objects (how complicated should one consider π relative to log(343) relative to Khinchin’s Constant, and so on?).

Wolfram|Alpha shows possible closed forms for the continued fraction "137.036"
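To illustrate the shape of such a ranking, here is a toy Python sketch; the scoring rule, candidate list and complexity values are invented for illustration and bear no relation to Wolfram|Alpha’s actual heuristics. It ranks hypothetical closed forms for a target number by digits of accuracy gained minus a complexity penalty:

```python
import math

target = math.pi  # stand-in for a number a user has entered

# Hypothetical candidates: (closed form, value, complexity score).
# The complexity scores are invented; a real system would derive
# them from the structure of each expression.
candidates = [
    ("e",        math.e,        1),
    ("22/7",     22 / 7,        2),
    ("sqrt(10)", math.sqrt(10), 2),
    ("355/113",  355 / 113,     3),
]

def interestingness(form, value, complexity):
    # Digits of agreement with the target, minus a penalty for complexity:
    # a closed form is "interesting" if it buys more accuracy than it costs.
    return -math.log10(abs(value - target)) - complexity

ranked = sorted(candidates, key=lambda c: -interestingness(*c))
for form, value, complexity in ranked:
    print(form, round(interestingness(form, value, complexity), 2))
# 355/113 comes out on top: high accuracy for modest complexity
```

Even this crude tradeoff reproduces a familiar judgment: 355/113 is a “better” approximation to π than 22/7 precisely because its extra complexity buys disproportionately many extra digits.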

So in principle one can imagine having a system that takes input and generates “interesting” theorems about it. Notice that while in a standard Mathematica-like calculational workflow, one would be taking input and “computing an answer” from it, here one’s just “finding interesting things to say about it”.

The character of the input is different too. In the calculational case, one’s typically dealing with an operation to be performed. In the Wolfram|Alpha-like pure mathematical case, one’s typically just giving a description of something. In some cases that description will be explicit. A specific number. A particular equation. A specific graph. But more often it will be implicit. It will be a set of constraints. One will say (to use the example from above), “Let F be a field,” and then one will give constraints that the field must satisfy.

In a sense an axiom system is a way of giving constraints too: it doesn’t say that such-and-such an operator “is Nand”; it just says that the operator must satisfy certain constraints. And even for something like standard Peano arithmetic, we know from Gödel’s Theorem that we can never ultimately resolve the constraints: we can never nail down that the thing we denote by “+” in the axioms is the particular operation of ordinary integer addition. Of course, we can still prove plenty of theorems about “+”, and those are what we choose from for our report.

So given a particular input, we can imagine representing it as a set of constraints in our precise symbolic language. Then we would generate theorems based on these constraints, and heuristically pick the “most interesting” of them.

One day I’m sure doing this will be an important part of pure mathematical work. But as of now it will seem quite alien to most pure mathematicians—because they are not used to “disembodied theorems”; they are used to theorems that occur in papers, written by actual mathematicians.

And this brings us to the second approach to the automatic generation of “mathematical wisdom”: start from the historical corpus of actual mathematical papers, and then make connections to whatever specific input is given. So one is able to say for example, “The following theorem from paper X applies in such-and-such a way to the input you have given”, and so on.

Curating the math corpus So how big is the historical corpus of mathematics? There’ve probably been about 3 million mathematical papers published altogether—or about 100 million pages, growing at a rate of about 2 million pages per year. And in all of these papers, perhaps 5 million distinct theorems have been formally stated.

So what can be done with these? First, of course, there’s simple search and retrieval. Often the words in the papers will make for better search targets than the more notational material in the actual theorems. But with the kind of linguistic-understanding technology for math that we have in Wolfram|Alpha, it should not be too difficult to build what’s needed to do good statistical retrieval on the corpus of mathematical papers.

But can one go further? One might think about tagging the source documents to improve retrieval. But my guess is that most kinds of static tagging won’t be worth the trouble; just as one’s seen for the web in general, it’ll be much easier and better to make the search system more sophisticated and content-aware than to add tags document by document.

What would unquestionably be worthwhile, however, is to put the theorems into a genuine computable form: to actually take theorems from papers and rewrite them in a precise symbolic language.

Will it be possible to do this automatically? Eventually I suspect large parts of it will. Today we can take small fragments of theorems from papers and use the linguistic understanding system built for Wolfram|Alpha to turn them into pieces of Wolfram Language code. But it should gradually be possible to extend this to larger fragments—and eventually get to the point where it takes, at most, modest human effort to convert a typical theorem to precise symbolic form.

So let’s imagine we curate all the theorems from the literature of mathematics, and get them in computable form. What would we do then? We could certainly build a Wolfram|Alpha-like system that would be quite spectacular—and very useful in practice for doing lots of pure mathematics.

Undecidability bites But there will inevitably be some limitations—resulting in fact from features of mathematics itself. For example, it won’t necessarily be easy to tell what theorem might apply to what, or even what theorems might be equivalent. Ultimately these are classic theoretically undecidable problems—and I suspect that they will often actually be difficult in practical cases too. And at the very least, all of them involve the same kind of basic process as automated theorem proving.

And what this suggests is a kind of combination of the two basic approaches we’ve discussed—where in effect one takes the complete corpus of published mathematics, and views it as defining a giant 5-million-axiom formal system, and then follows the kind of automated theorem-enumeration procedure we discussed to find “interesting things to say”.

Math: science or art? So, OK, let’s say we build a wonderful system along these lines. Is it actually solving a core problem in doing pure mathematics, or is it missing the point?

I think it depends on what one sees the nature of the pure mathematical enterprise as being. Is it science, or is it art? If it’s science, then being able to make more theorems faster is surely good. But if it’s art, that’s really not the point. If doing pure mathematics is like creating a painting, automation is going to be largely counterproductive—because the core of the activity is in a sense a form of human expression.

This is not unrelated to the role of proof. To some mathematicians, what matters is just the theorem: knowing what’s true. The proof is essentially backup to ensure one isn’t making a mistake. But to other mathematicians, proof is a core part of the content of the mathematics. For them, it’s the story that brings mathematical concepts to light, and communicates them.

So what happens when we generate a proof automatically? I had an interesting example about 15 years ago, when I was working on A New Kind of Science, and ended up finding the simplest axiom system for Boolean algebra (just the single axiom ((p∘q)∘r)∘(p∘((p∘r)∘p))==r, as it turned out). I used equational-logic automated theorem-proving (now built into FullSimplify) to prove the correctness of the axiom system. And I printed the proof that I generated in the book:

Proof of a Boolean-algebra axiom system, from pp. 811–812 of "A New Kind of Science"

It has 343 steps, and in ordinary-size type would be perhaps 40 pages long. And to me as a human, it’s completely incomprehensible. One might have thought it would help that the theorem prover broke the proof into 81 lemmas. But try as I might, I couldn’t really find a way to turn this automated proof into something I or other people could understand. It’s nice that the proof exists, but the actual proof itself doesn’t tell me anything.

Proof as story And the problem, I think, is that there’s no “conceptual story” around the elements of the proof. Even if the lemmas are chosen “structurally” as good “waypoints” in the proof, there are no cognitive connections—and no history—around these lemmas. They’re just disembodied, and apparently disconnected, facts.

So how can we do better? If we generate lots of similar proofs, then maybe we’ll start seeing similar lemmas a lot, and through being familiar they will seem more meaningful and comprehensible. And there are probably some visualizations that could help us quickly get a good picture of the overall structure of the proof. And of course, if we manage to curate all known theorems in the mathematics literature, then we can potentially connect automatically generated lemmas to those theorems.

It’s not immediately clear how often that will be possible—and indeed in existing examples of computer-assisted proofs, like those for the Four Color Theorem, the Kepler Conjecture, or the simplest universal Turing machine, my impression is that the computer-generated lemmas that appear rarely correspond to known theorems from the literature.

But despite all this, I know at least one example showing that with enough effort, one can generate proofs that tell stories that people can understand: the step-by-step solutions system in Wolfram|Alpha Pro. Millions of times a day students and others compute things like integrals with Wolfram|Alpha—then ask to see the steps.

Wolfram|Alpha's step-by-step solution for an indefinite integral

It’s notable that actually computing the integral is much easier than figuring out good steps to show; in fact, it takes some fairly elaborate algorithms and heuristics to generate steps that successfully communicate to a human how the integral can be done. But the example of step-by-step in Wolfram|Alpha suggests that it’s at least conceivable that with enough effort, it would be possible to generate proofs that are readable as “stories”—perhaps even selected to be as short and simple as possible (“proofs from The Book”, as Erdős would say).

Of course, while these kinds of automated methods may eventually be good at communicating the details of something like a proof, they won’t realistically be able to communicate—or even identify—overarching ideas and motivations. Needless to say, present-day pure mathematics papers are often quite deficient in communicating these too. Because in an effort to ensure rigor and precision, many papers tend to be written in a very formal way that cannot successfully represent the underlying ideas and motivations in the mind of the author—with the result that some of the most important ideas in mathematics are transmitted through an essentially oral tradition.

It would certainly help the progress of pure mathematics if there were better ways to communicate its content. And perhaps having a precise symbolic language for pure mathematics would make it easier to express concretely some of those important points that are currently left unwritten. But one thing is for sure: having such a language would make it possible to take a theorem from anywhere, and—like with a typical Wolfram Language code fragment—immediately be able to plug it in anywhere else, and use it.

But back to the question of whether automation in pure mathematics can ultimately make sense. I consider it fairly clear that a Wolfram|Alpha-like “pure math assistant” would be useful to human mathematicians. I also consider it fairly clear that having a good, precise, symbolic language—a kind of Mathematica Pura that’s a well-designed follow-on to standard mathematical notation—would be immensely helpful in formulating, checking and communicating math.

Automated discovery But what about a computer just “going off and doing math by itself”? Obviously the computer can enumerate theorems, and even use heuristics to select ones that might be considered interesting to human mathematicians. And if we curate the literature of mathematics, we can do extensive “empirical metamathematics” and start trying to recognize theorems with particular characteristics, perhaps by applying graph-theoretic criteria on the network of theorems to see what counts as a “surprising” or “powerful” theorem. There’s also nothing particularly difficult—like in WolframTones—about having the computer apply aesthetic criteria deduced from studying human choices.

But I think the real question is whether the computer can build up new conceptual frameworks and structures—in effect new mathematical theories. Certainly some theorems found by enumeration will be surprising and indicative of something fundamentally new. And it will surely be impressive when a computer can take a large collection of theorems—whether generated or from the literature—and discover correlations among them that indicate some new unifying principle. But I would expect that in time the computer will be able not only to identify new structures, but also name them, and start building stories about them. Of course, it is for humans to decide whether they care about where the computer is going, but the basic character of what it does will, I suspect, be largely indistinguishable from many forms of human pure mathematics.

All of this is still fairly far in the future, but there’s already a great way to discover math-like things today—that’s not practiced nearly as much as it should be: experimental mathematics. The term has slightly different meanings to different people. For me it’s about going out and studying what mathematical systems do by running experiments on them. And so, for example, if we want to find out about some class of cellular automata, or nonlinear PDEs, or number sequences, or whatever, we just enumerate possible cases and then run them and see what they do.
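This kind of enumeration is easy to set up in any language. As a small illustration, here is a Python sketch (the Wolfram Language has CellularAutomaton built in for this) that runs elementary cellular automaton rules from a single black cell and applies a crude measure of behavioral richness to their center columns:

```python
def ca_step(cells, rule):
    """One step of an elementary cellular automaton (periodic boundary)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def center_column(rule, width=129, steps=60):
    """Run a rule from a single black cell; record the center cell over time."""
    cells = [0] * width
    cells[width // 2] = 1
    column = [1]
    for _ in range(steps):
        cells = ca_step(cells, rule)
        column.append(cells[width // 2])
    return column

def richness(column, w=4):
    """Crude behavioral measure: number of distinct length-w windows."""
    return len({tuple(column[i:i + w]) for i in range(len(column) - w + 1)})

for rule in (250, 30):
    print(rule, richness(center_column(rule)))
# rule 250's center column is periodic (just 2 distinct windows),
# while rule 30's looks random, with many distinct windows
```

Sweeping `rule` over the full range 0–255 and sorting by this kind of score is exactly the “enumerate possible cases and see what they do” workflow, if with a far blunter instrument than one would really use.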

There’s a lot to discover like this. And certainly it’s a rich way to generate observations and hypotheses that can be explored using the traditional methodologies of pure mathematics. But the real thrust of what can be done does not fit into what pure mathematicians typically think of as math. It’s about exploring the “flora and fauna”—and principles—of the universe of possible systems, not about building up math-like structures that can be studied and explained using theorems and proofs. Which is why—to quote the title of my book—I think one should best consider this a new kind of science, rather than something connected to existing mathematics.

In discussing experimental mathematics and A New Kind of Science, it’s worth mentioning that in some sense it’s surprising that pure mathematics is doable at all—because if one just starts asking absolutely arbitrary questions about mathematical systems, many of them will end up being undecidable.

This is particularly obvious when one’s out in the computational universe of possible programs, but it’s also true for programs that represent typical mathematical systems. So why isn’t undecidability more of a problem for typical pure mathematics? The answer is that pure mathematics implicitly tends to select what it studies so as to avoid undecidability. In a sense this seems to be a reflection of history: pure mathematics follows what it has historically been successful in doing, and in that way ends up navigating around undecidability—and producing the millions of theorems that make up the corpus of existing pure mathematics.

OK, so those are some issues and directions. But where are we at in practice in bringing computational knowledge to pure mathematics?

Getting it done There’s certainly a long history of related efforts. The works of Peano and Whitehead and Russell from a century ago. Hilbert’s program. The development of set theory and category theory. And by the 1960s, the first computer systems—such as Automath—for representing proof structures. Then from the 1970s, systems like Mizar that attempted to provide practical computer frameworks for presenting proofs. And in recent times, increasingly popular “proof assistants” based on systems like Coq and HOL.

One feature of essentially all these efforts is that they were conceived as defining a kind of “low-level language” for mathematics. Like most of today’s computer languages, they include a modest number of primitives, then imagine that essentially any actual content must be built externally, by individual users or in libraries.

But the new idea in the Wolfram Language is to have a knowledge-based language, in which as much actual knowledge as possible is carefully designed into the language itself. And I think that just like in general computing, the idea of a knowledge-based language is going to be crucial for injecting computation into pure mathematics in the most effective and broadly useful way.

So what’s involved in creating our Mathematica Pura—an extension to the Wolfram Language that builds in the actual structure and content of pure math? At the lowest level, the Wolfram Language deals with arbitrary symbolic expressions, which can represent absolutely anything. But then the language uses these expressions for many specific purposes. For example, it can use a symbol x to represent an algebraic variable. And given this, it has many functions for handling symbolic expressions—interpreted as mathematical or algebraic expressions—and doing various forms of math with them.

The emphasis of the math in Mathematica and the Wolfram Language today is on practical, calculational math. And by now it certainly covers essentially all the math that has survived from the 19th century and before. But what about more recent math? Historically, math itself went through a transition about a century ago. Just around the time modernism swept through areas like the arts, math had its own version: it started to consider systems that emerged purely from its own formalism, without regard for obvious connection to the outside world.

And this is the kind of math—through developments like Bourbaki and beyond—that came to dominate pure mathematics in the 20th century. And inevitably, a lot of this math is about defining abstract structures to study. In simple cases, it seems like one might represent these structures using some hierarchy of types. But the types need to be parametrized, and quite quickly one ends up with a whole algebra or calculus of types—and it’s just as well that in the Wolfram Language one can use general symbolic expressions, with arbitrary heads, rather than just simple type descriptions.

As I mentioned early in this blog post, it’s going to take all sorts of new built-in functions to capture the frameworks needed to represent modern pure mathematics—together with lots of entity-like objects. And it’ll certainly take years of careful design to make a broad system for pure mathematics that’s really clean and usable. But there’s nothing fundamentally difficult about having symbolic constructs that represent differentiability or moduli spaces or whatever. It’s just language design, like designing ways to represent 3D images or remote computation processes or unique external entity references.

So what about curating theorems from the literature? Through Wolfram|Alpha and the Wolfram Language, not to mention for example the Wolfram Functions Site and the Wolfram Connected Devices Project, we’ve now had plenty of experience at the process of curation, and in making potentially complex things computable.

The eCF example But to get a concrete sense of what’s involved in curating mathematical theorems, we did a pilot project over the last couple of years through the Wolfram Foundation, supported by the Sloan Foundation. For this project we picked a very specific and well-defined area of mathematics: research on continued fractions. Continued fractions have been studied continually since antiquity, but were at their most popular between about 1780 and 1910. In all there are around 7000 books and papers about them, running to about 150,000 pages.

We chose about 2000 documents, then set about extracting theorems and other mathematical information from them. The result was about 600 theorems, 1500 basic formulas, and about 10,000 derived formulas. The formulas were directly in computable form—and were in effect immediately able to join the 300,000+ formulas on the Wolfram Functions Site, which are all now included in Wolfram|Alpha. But with the theorems, our first step was just to treat them as entities themselves, with properties such as where they were first published, who discovered them, etc. And even at this level, we were able to insert some nice functionality into Wolfram|Alpha.

Some of the output from entering "Worpitzky theorem" into Wolfram|Alpha

But we also started trying to actually encode the content of the theorems in computable form. It took introducing some new constructs like LebesgueMeasure, ConvergenceSet and LyapunovExponent. But there was no fundamental problem in creating precise symbolic representations of the theorems. And just from these representations, it became possible to do computations like this in Wolfram|Alpha:

Wolfram|Alpha results for "continued fraction theorems for sqrt(7)" Wolfram|Alpha results for "continued fraction results involving quadratic irrationals" Wolfram|Alpha results for "who proved the Stern-Stolz theorem"

An interesting feature of the continued fraction project (dubbed “eCF”) was how the process of curation actually led to the discovery of some new mathematics. After curating 50+ papers about the Rogers–Ramanujan continued fraction, it became clear that there were missing cases that could now be computed. And the result was to fill a gap Ramanujan had left open for 100 years.

Ramanujan's missing cases are now computable

There’s always a tradeoff between curating knowledge and creating it afresh. And so, for example, in the Wolfram Functions Site, there was a core of relations between functions that came from reference books and the literature. But it was vastly more efficient to generate other relations than to scour the literature to find them.

The Wolfram Function Site, and Wolfram|Alpha, generate relations between functions

But if the goal is curation, then what would it take to curate the complete literature of mathematics? In the eCF project, it took about 3 hours of mathematician time to encode each theorem in computable form. But all this work was done by hand, and in a larger-scale project, I am certain that an increasing fraction of it could be done automatically, not least using extensions of our Wolfram|Alpha natural language understanding system.

Of course, there are all sorts of practical issues. Newer papers are predominantly in TeX, so it’s not too difficult to pull out theorems with all their mathematical notation. But older papers need to be scanned, which requires math OCR, which has yet to be properly developed.

Then there are issues like whether theorems stated in papers are actually valid. And even whether theorems that were considered valid, say, 100 years ago are still considered valid today. For example, for continued fractions, there are lots of pre-1950 theorems that were successfully proved in their time, but which ignore branch cuts, and so wouldn’t be considered correct today.

And in the end of course it requires lots of actual, skilled mathematicians to guide the curation process, and to encode theorems. But in a sense this kind of mobilization of mathematicians is not completely unfamiliar; it’s something like what was needed when Zentralblatt was started in 1931, or Mathematical Reviews in 1941. (As a curious footnote, the founding editor of both these publications was Otto Neugebauer, who worked just down the hall from me at the Institute for Advanced Study in the early 1980s, but who I had no idea was involved in anything other than decoding Babylonian mathematics until I was doing research for this blog post.)

When it comes to actually constructing a system for encoding pure mathematics, there’s an interesting example: Theorema, started by Bruno Buchberger in 1995, and recently updated to version 2. Theorema is written in the Wolfram Language, and provides both a document-based environment for representing mathematical statements and proofs, and actual computation capabilities for automated theorem proving and so on.

A proof in Theorema

No doubt it’ll be an element of what’s ultimately built. But the whole project is necessarily quite large—perhaps the world’s first example of “big math”. So can the project get done in the world today? A crucial part is that we now have the technical capability to design the language and build the infrastructure that’s needed. But beyond that, the project also needs a strong commitment from the world’s mathematics community—as well as lots of contributions from individual mathematicians from every possible field. And realistically it’s not a project that can be justified on commercial grounds—so the likely $100+ million that it will need will have to come from non-commercial sources.

But it’s a great and important project—that promises to be pivotal for pure mathematics. In almost every field there are golden ages when dramatic progress is made. And more often than not, such golden ages are initiated by new methodology and the arrival of new technology. And this is exactly what I think will happen in pure mathematics. If we can mobilize the effort to curate known mathematics and build the system to use and generate computational knowledge around it, then we will not only succeed in preserving and spreading the great heritage of pure mathematics, but we will also thrust pure mathematics into a period of dramatic growth.

Large projects like this rely on strong leadership. And I stand ready to do my part, and to contribute the core technology that is needed. Now to move this forward, what it takes is commitment from the worldwide mathematics community. We have the opportunity to make the second decade of the 21st century really count in the multi-millennium history of pure mathematics. Let’s actually make it happen!

Entrepreneurism of Ideas: An Education Adventure Mon, 28 Jul 2014 16:02:43 +0000 Stephen Wolfram For as long as I can remember, my all-time favorite activity has been creating ideas and turning them into reality—a kind of “entrepreneurism of ideas”. And over the years—in science, technology and business—I think I’ve developed some pretty good tools and strategies for doing this, that I’ve increasingly realized would be good for a lot of other people (and organizations) too.

So how does one spread idea entrepreneurism—entrepreneurism centered on ideas rather than commercial enterprises? Somewhat unwittingly I think we’ve developed a rather good vehicle—that’s both a very successful educational program, and a fascinating annual adventure for me.

Twelve years ago my book A New Kind of Science had just come out, and we were inundated with people wanting to learn more, and get involved in research around it. We considered various alternatives, but eventually we decided to organize a summer school where we would systematically teach about our methodology, while mentoring each student to do a unique original project.

From the very beginning, the summer school was a big success. And over the years we’ve gradually improved and expanded it. It’s still the Wolfram Science Summer School—and its intellectual core is still A New Kind of Science. But today it has become a broader vehicle for passing on our tools and strategies for idea entrepreneurism.

This year’s summer school just ended last week. We had 63 students from 21 countries—with a fascinating array of backgrounds and interests. Most were in college or graduate school; a few were younger or older. And over the course of the three weeks of the summer school—with great energy and intellectual entrepreneurism—each student worked towards their own unique project.
At the Wolfram Science Summer School 2014
The summer school is part idea incubator, part course, part hackathon and part mentoring event. And it’s become a tradition for me to open it with a concentrated burst of idea entrepreneurism: a live experiment in which over the course of an hour or so I try—live and in public—to discover or invent something new.

Realistically, what makes this—and indeed much of what’s done at the summer school—possible is what’s now the Wolfram Language, with all its built-in knowledge and automation, as well as immediate presentation capabilities.

My rule for live experiments is that apart from spending a few minutes beforehand coming up with a topic (sometimes just by opening A New Kind of Science at random), I don’t think at all about what I’m going to do. The experiment is always fresh and spontaneous—and quite an adventure for all concerned. It’s a strange kind of intellectual performance, and it takes quite a bit of concentration. But I think it’s pretty educational to watch—not least because most people have never seen something done from scratch in real time like this.

There are always ups and downs in the course of a live experiment—and sometimes it seems that all is lost. But so far, in dozens of live experiments I’ve done, I’ve always found a way to navigate them to some kind of success. And seeing this always seems quite empowering to people; and makes this kind of idea entrepreneurism feel like something close at hand, that they can do too.

This year I actually did two live experiments. My first one was a piece of pure science that involved numbers. The idea is pretty elementary: just take a number, write it in base 2, manipulate its digits, then add it to the original number. Then iterate this many times. Here’s the little piece of Wolfram Language code I wrote during the live experiment to do this:
Repeated digit manipulation sequence from live experiment at Wolfram Science Summer School 2014
In A New Kind of Science, I did a version of this where the digit manipulation consisted of reversing the whole sequence of digits. But now I wanted to try something simpler: just rotating the digits by some number of positions. I wasn’t sure this would do anything interesting. But in the spirit of the live experiment, I wrote a little piece of code to find out:
Simple input produces complicated behavior
What happened was quintessential NKS (New Kind of Science). Even though the underlying rule was incredibly simple, the behavior was far from simple—and in many ways looked quite random.
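The iteration described above is easy to sketch. Here is a minimal Python translation (Python rather than the Wolfram Language, purely for illustration; the rotation amount `k` and the starting value are arbitrary choices, not the specific ones from the experiment):

```python
def step(n, k=1):
    """One iteration: rotate n's base-2 digits left by k, then add the result to n."""
    bits = bin(n)[2:]              # base-2 digit string of n
    k = k % len(bits)              # keep the rotation within the digit count
    rotated = bits[k:] + bits[:k]  # rotate the digits left by k positions
    return n + int(rotated, 2)

def sequence(n, steps, k=1):
    """Iterate the digit-manipulation step, collecting the trajectory."""
    out = [n]
    for _ in range(steps):
        n = step(n, k)
        out.append(n)
    return out

# Even this very simple rule produces an irregular-looking trajectory:
print(sequence(1, 8))  # → [1, 2, 3, 6, 11, 18, 23, 38, 51]
```

Despite the simplicity of the rule, the successive values (and their digit sequences) quickly stop following any obvious pattern—the quintessential NKS phenomenon described above.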

Here’s the whole notebook from the hour or so of live experiment:
The notebook from my live experiment
It’s got quite a few interesting results. And indeed—like many previous summer school live experiments—it’s got the core of what would make a nice research paper.

In addition to this pure science experiment I decided this year to do a second—more practical—live experiment. In a sense it was a meta experiment. Because it consisted of analyzing code of pretty much the type used to do live experiments. Specifically, I read in lots of code from the Wolfram Demonstrations Project, then started doing statistics on it.

At first I looked at the general distribution of functions used, and started analyzing correlations and so on. But then, following a suggestion from the audience, I decided to focus on one simple example, and just started looking specifically at the use of named colors. The result was this bar chart, showing that (for whatever reason) red and black are the most common named colors in this corpus of code:
Frequency of named colors in Wolfram Demonstrations
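The counting step itself is straightforward. As a hedged sketch (in Python for illustration, with a made-up three-item mini-corpus standing in for the actual Demonstrations source, and a shortened color list), it might look like:

```python
import re
from collections import Counter

# Hypothetical stand-in for the Demonstrations corpus (assumption, not the
# real data): one Wolfram Language source string per Demonstration.
corpus = [
    "Graphics[{Red, Disk[]}, Background -> Black]",
    "Plot[Sin[x], {x, 0, 10}, PlotStyle -> Red]",
    "Graphics[{Blue, Line[{{0, 0}, {1, 1}}]}]",
]

# A shortened list of Wolfram Language named colors.
NAMED_COLORS = ["Red", "Green", "Blue", "Black", "White", "Orange", "Purple"]
pattern = re.compile(r"\b(" + "|".join(NAMED_COLORS) + r")\b")

# Tally every named-color token across the whole corpus.
color_counts = Counter(tok for code in corpus for tok in pattern.findall(code))
print(color_counts.most_common())  # → [('Red', 2), ('Black', 1), ('Blue', 1)]
```

Run over the real corpus, a tally like this is what feeds the bar chart.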
It’s always important to get visualizations at every step along the way. And in this particular live experiment, we quickly decided to visualize correlations between colors, generating bar charts showing what the distribution of colors is if one already knows that a certain color (shown as the background) appears:
Backgrounds show specific named colors; in each graph, bars show the frequency of other named colors in Demonstrations that include the background's named color.
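The conditional statistic can be sketched the same way: restrict to Demonstrations whose set of named colors contains the "given" color, then tally the other colors that appear alongside it (again in Python, with hypothetical per-Demonstration color sets rather than the real corpus):

```python
from collections import Counter

# Hypothetical per-Demonstration color sets (assumption, not the real data).
color_sets = [
    {"Red", "Black"},
    {"Red", "Blue"},
    {"Blue", "Green"},
    {"Red", "Black", "Blue"},
]

def conditional_counts(color_sets, given):
    """Frequency of other named colors among items whose set includes `given`."""
    return Counter(
        c
        for colors in color_sets
        if given in colors      # keep only items that use the given color
        for c in colors
        if c != given           # count the colors that co-occur with it
    )

print(conditional_counts(color_sets, "Red"))  # colors co-occurring with Red
```

One chart of these conditional counts per "given" color yields the grid of bar charts described above.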
My original idea for the live experiment was to look for repeated patterns of code that might suggest functions we’d want to name and implement. But we never quite got there—and when we ran out of time we were instead looking at how one could use “code corpus analysis” to develop good color palettes: a quite different, but interesting, direction that emerged from the experiment.

Doing live experiments in a sense provides a way to illustrate the spirit of idea entrepreneurism—as well as letting one introduce some specific methodologies and tools. But another important element of successful idea entrepreneurism is choosing a direction to pursue. And it’s become a tradition at our summer school that after I do my live experiment, I talk about the directions we’re currently pursuing—and about what science and technology I’m currently most excited about.

After that we launch into the most important business of the summer school: defining a project for each student. Over the years we’ve developed and steadily refined a whole process for doing this. I’m always accumulating lists of interesting problems and projects. And students often come in with definite ideas for projects. But I’ve found the best results come from pure real-time creativity: from learning about each student and then creatively coming up with a project that matches their skills and interests.

And this year, over the course of three fairly long days, we did that 63 times, defining a unique original project for each of our students.

There’s quite a bit of structure to the summer school. For example, there’s always “homework” done during the first week. Usually we pick some previously unexplored area of the computational universe, and ask students to find something interesting in it. This year the students found plenty of interesting things—which are actually now being assembled into a paper.
Some of the Wolfram Science Summer School 2014 students' favorite cellular automata
Every day there are a few hours of lectures. About how to do a good computational experiment. About the types of systems studied in A New Kind of Science. About computational techniques. About implications for philosophy, music, engineering and more. About perception. About natural language. About what’s worked in previous student projects. About principles that emerge from A New Kind of Science. About all sorts of other things.

Being an instructor at the Wolfram Science Summer School has become a favorite activity for some of our top R&D employees. And as it happens, this year all the instructors were also summer school alumni from previous years (yes, we’ve done a lot of recruiting from the summer school). For three weeks they worked with students, bringing to bear on each project the kind of idea entrepreneurism that we’ve taught at the summer school—and practice at our company.

This year—not least because we’d just finished launching Wolfram Programming Cloud days before—I had the pleasure of spending plenty of time at the summer school, getting to know all the students (by the end, I knew everyone by name!), and watching lots of projects take shape. I’ve been involved with my share of hackathons and incubators. But the summer school is something different. It’s really about entrepreneurism of ideas: about the process of creating ideas and turning them into reality.
At the Wolfram Science Summer School 2014
And at the end of three weeks, there were 63 projects to present—and lots of interesting things discovered and invented:
Some student projects from the Wolfram Science Summer School 2014
Over the years at the Wolfram Science Summer School, there have been many hundreds of great projects done, as well as many careers launched.

There’s a lot to say about education. But I think for many people, doing a unique original project is the single most powerful and useful form of education there is. Doing this successfully is quintessential idea entrepreneurism. And I think that in the long run the most important achievement of the Wolfram Science Summer School may just be the framework it’s developed for spreading entrepreneurism of ideas.

The Wolfram Science Summer School is just one of a growing constellation of education initiatives that we’re involved in. (Another that runs alongside the summer school is our two-week Mathematica Summer Camp for high-school students—that directly uses ideas from the summer school.) And particularly with all the new technologies that we’ve been developing, there are vast new opportunities for education. For me the Wolfram Science Summer School is an important and fascinating success story about innovation in education—and an encouragement for us to do more.
