Friday, November 24, 2017

Sometimes the thing to do is declare the problem solved – and then, and only then, solve it: Is the REAL Singularity at hand? [#HEX01]

First you declare the problem solved and then figure out whether or not you’re right. The order is important, for the declaration is necessary to set the stage for solving the problem. You can’t (usually don’t) do it the other way around.

What’s remarkable is that it seems to work. It’s worked for me several times, though only two specific occasions come to mind. One is quite recent, when I declared the “Kubla Khan” problem solved (yeah, I know, I know, there’s still work to be done proving it out). The other is years ago when I was working on my dissertation and I declared, yes, I’ve figured out Sonnet 129 – or as much of it as I needed to. But I’m sure it’s happened several times in between, though I can’t come up with specific occasions.

But why does it work?

Solving the large complex unknown

The problems are relatively large and complex and I have no model to guide me to a solution. I don’t know what I’m looking for.

Let’s step outside and imagine we’ve got transcendental knowledge of this sort of problem. We see the problem to be solved, and we in fact know how to solve it. We also see the investigator working on it and we know what he knows. There comes a time when he has all the pieces to hand. He can solve the problem at any time simply by putting the pieces together...in the right way. That is, he’s got all the components, but lacks a plan for their proper assembly.

One can wonder whether or not such a concrete metaphor is very useful in understanding such an abstract matter. I’m aware of the problem. There is a crucial distinction between components and a plan for their assembly. But is that a real distinction? Let’s go ahead as though it is.

What does he do? It depends. If he thinks more components are needed – though he’s not likely to be thinking in terms of components and assembly plan – he’ll go on looking for more components and miss the opportunity to assemble the ones he already has in the proper way. If however he decides, for whatever reason, that he’s got all that he needs, then it becomes possible to intuit the assembly plan, though it may take a bit of fiddling. That is, the plan itself is not a big deal. It’s knowing when you’ve reached the state where all you need is a scheme for assembling the parts you’ve got. Once you’ve reached that point, the components will “tell” you how they go together.

What happens, in effect, is that you figure out how to see a duck, rather than a rabbit:

[Image: the duck–rabbit illusion]

And thinking about ducks allows you to move ahead.

We’re living the Singularity

Well, I’m beginning to think we’ve got all the components for the next step in an understanding of, simulation of, and imitation of mind. I’ve been blogging around and about this for some time, but I’ll give particular notice to Wednesday’s post, Explain yourself, Siri, or Alex or Watson or any other AI that does interesting/amazing things and we don't know how it does it. I smell that the game is afoot. Beyond this I offer the concluding paragraphs from the paper I prepared for HEX01, Abstract Patterns in Stories: From the intellectual legacy of David G. Hays, which takes a historical look at relevant technical issues:
As a child my imagination was shaped by Walt Disney, among others. Disney, as you know, was an optimist who believed in technology and in progress. He had one TV program about the wonders of atomic power, where, alas, things haven’t quite worked out the way Uncle Walt hoped. But he also evangelized for space travel. That captured my imagination and is no doubt, in part, why I became a fan of NASA. I also watched The Jetsons, a half-hour cartoon show set in a future where everyone was flying around with personal jetpacks. And then there’s Stanley Kubrick’s 2001: A Space Odyssey, which came out in 1968 and depicted manned flight to near-earth orbit as routine. In the reality of 2017 that’s not the case, nor do we have a computer with the powers of Kubrick’s HAL. On the other hand, we have the Internet and social media; neither Disney, nor the creators of The Jetsons, nor Stanley Kubrick anticipated that.

The point is that I grew up anticipating a future filled with wondrous technology. By mid-1950s standards, yes, we do have wondrous technology. Just not the wondrous technology that was imagined back then. One bit of wondrous future technology has been looming large for several decades, the super-intelligent computer. I suppose we can think of HAL as one instance of that. There are certainly others, such as the computer in the Star Trek franchise, not to mention Commander Data. For the last three decades Ray Kurzweil has been promising such a marvel under the rubric of “The Singularity”. He’s not alone in that belief. 

Color me skeptical.

But here’s how John von Neumann used the term: “The accelerating progress of technology and changes in the mode of human life, give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”. Are we not there? Major historical movements are not caused by point events. They are the cumulative effect of interacting streams of intellectual, cultural, social, political, and natural processes. Think of global warming, of international politics, but also of technology, space exploration – Voyager 1 has left the solar system! – and the many ways we can tell stories that didn’t exist 150 years ago. Have we not reached a point of no return?

The future is now. Oh, I’m sure there are computing marvels still to come. Sooner or later we’re going to figure out how to couple Old School symbolic computing with the current suite of machine learning and neural net technologies and trip the light fantastic in ways we cannot imagine. That day will arrive more quickly if we concentrate on the marvels we have at hand rather than trying to second-guess the future. We are living in the singularity.
