Saturday, January 17, 2015

Is the world intelligible?

Rick Searle has some very interesting paragraphs at the end of his review of Tyler Cowen's Average is Over (bolded emphasis mine):
For Cowen much of science in the 21st century will be driven by coming up with theories and correlations from the massive amount of data we are collecting, a task more suited to a computer than a man (or woman) in a lab coat. Eventually machine derived theories will become so complex that no human being will be able to understand them. Progress in science will be given over to intelligent machines even as non-scientists find increasing opportunities to engage in “citizen science”.

Come to think of it, lack of intelligibility runs like a red thread throughout Average is Over, from “ugly” machine chess moves that human players scratch their heads at, to the fact that Cowen thinks those who will succeed in the next century will be those who place their “faith” in the decisions of machines, choices of action they themselves do not fully understand. Let’s hope he’s wrong on that score as well, for lack of intelligibility in human beings in politics, economics, and science drives conspiracy theories, paranoia, superstition, and political immobility.

Cowen believes the time when secular persons are able to cull from science a general, intelligible picture of the world is coming to a close. This would be a disaster, in the sense that science gives us the only picture of the world that is capable of being universally shared and is also able to accurately guide our response to both nature and the technological world. At least for the moment, perhaps the best science writer we have suggests something very different. To her new book, next time….

Yes, the unknown and the unknowable give us the creeps. Why? An obvious answer is that what we don't know might very well be harmful.

But I think that's only part of it. I think the unknowability is itself bothersome, independently of anything that may be lurking behind it. I think that's how our nervous system is. Why that's so, I do not know. It seems to be the 'other side' of our ability to make up (often arbitrary) stories about anything. Whatever it is that freed our minds from the tyranny of the present also left us wide open to the terrors of the unknown. And it is the terror of the unknown, more than anything else, that has driven long-term cultural evolution. That's why Searle's review caught my attention.

Science will always possess a gap in its knowledge into which those so inclined will attempt to stuff their version of a creator. If George Johnson is right, we may reach a place where that gap, rather than moving with scientific theories that probe ever deeper into the mysteries of nature with every generation, stabilizes as we come up against the limits of our knowledge. God, for those who need a creating intelligence, will live there.

There is no doubt something forced and artificial in this “God of the gaps”, but theologians of the theistic religions have found it a game they need to play, clinging as they do to the need for God to be a kind of demiurge and ultimate architect of all existence. Other versions of God, where “he” is not such an engineer in the sky (God as perhaps love, or relationship, or process, or metaphor, or the ineffable), would better fit with the version of reality given us by science, and thus be more truthful, but the game of the gaps is one theologians may ultimately win in any case.

Religions and the persons who belong to them will either reconcile their faith with the findings of science or they will not, and though I wish they would reconcile, so that religions would hold within them our comprehensive wisdom and acquired knowledge as they have done in the past, their doing so is not necessary for religions to survive or even for their believers to be “rational.”

For the majority of religious people, for the non-theologians, it simply does not matter whether the Big Bang was inflationary or not, or even whether there was a Big Bang at all. What matters is that they are able to deal with loss and grief, can orient themselves morally to others, and are surrounded by a mutually supportive community that acts in the world in the same way; that is, that they can negotiate our human world.

4 comments:

  1. Hi Bill, thanks for linking to my posts. I’ll try to put this comment both here and on my blog to make sure you see it, for I would like to hear your response.

    Like yourself, I do not believe in the Singularity as an “intelligence explosion”. But what I think Cowen discussed, and what worries me as well, is something different.

    We already have a problem of intelligibility when it comes to something like String Theory, or even the fact that no one person can now know all the relevant facts of any field. One can imagine AI that isn’t intelligent in the broad human sense at all but is extremely good at mining for patterns in scientific data, coming up with theories or techniques which we essentially cannot understand, neither how they hang together nor how they ultimately work. This is the possibility Cowen draws from computer chess, where humans have written the programs but end up scratching their heads at what the program does even when it works.

    It’s an idea that was perhaps first developed by Stanislav Lem, which I wrote about here:

    http://utopiaordystopia.com/2014/11/22/summa-technologiae-or-why-the-trouble-with-science-is-religion/

    1. Hi Rick, good to see you here. First of all, though I follow Cowen's blog closely, I've not read Average is Over. I've read about that sort of chess (at his blog) but that's about it. As I understand it, the final move is up to the human player, who can always refuse to follow a computer recommendation that makes no sense whatever. But the cost of following a strange recommendation, or a series of them, is ultimately rather low. At worst, you lose the game, but no one's well-being is put in jeopardy (unless you've done something like bet your house on the game's outcome). But you might win.

      That seems to be where the problem lies. The computer makes good recommendations that are unintelligible. My first question would be: Will those recommendations always be unintelligible? Maybe in time we'll begin to see some method in the computational madness and gain a measure of understanding of what's going on.

      The implicit assumption among singularitarians seems to be that we'll never have any deeper ideas about computers or intelligence than we have now. But there's no a priori reason to think that is so. There's no a priori reason to think that our current theories in any area are the best we'll ever have. Human theorizing didn't stop in the 12th century or the 18th or 19th. Why should we think it's come to a stop now?

      As for coming up with patterns in data we don't understand, that happens all the time, doesn't it? It's one thing for a computer to come up with a pattern. It's something else for a computer, or for us, to come up with a theory about what's going on.

      Finally, some have argued that the possibility of broad intelligibility went out the window early in the 20th century. Relativity is weird and quantum mechanics is even weirder. And the size of the universe just grows and grows. How many people have more than a vague understanding of what it means that a good many of those lights in the night sky are in fact whole galaxies? It seems to me that intelligibility has been a problem for a while.

  2. Hello again Bill,

    Perhaps I'm just more concerned about the question of intelligibility than you are. Cowen kind of brings it up in a matter-of-fact manner and never really interrogates the idea.

    I think you're right that some branches of science have been basically unintelligible to everyone but a few specialists for a very long time, but let's imagine you get the same situation in something like economics. Then you would have "experts" telling us "we must do such and such" because our "god-like" machine tells us to, although we have no idea why. Of course, such machines will be programmed using all sorts of assumptions that reflect vested interests. This is a kind of false singularity scenario that makes me nervous: human beings surrendering agency to machines which at bottom are merely enforcing the interests and world-view of those who own/built them. Again, this isn't too far removed from what we have now, minus the fact that today the question of "why" cannot be passed off to our newfangled oracle.

    1. Two things, Rick:

      1) John Brockman's Edge just published its annual question, and this year it's about 'thinking machines'. Dan Dennett's reply speaks rather directly to your concern, and I've excerpted it at New Savanna, along with selections from several other replies. I think Cowen is too gullible on the issue of machine intelligence.

      2) In a different direction, roughly four decades ago a literary critic named Ed Mendelson published an article in which he argued for a genre he called the encyclopedic narrative, giving these examples: Dante's Divine Comedy, Rabelais's Gargantua and Pantagruel, Cervantes' Don Quixote, Goethe's Faust, Melville's Moby Dick, Joyce's Ulysses, and Pynchon's Gravity's Rainbow. Somewhat later Franco Moretti was thinking about “monuments,” “sacred texts,” “world texts,” texts he wrote about in Modern Epic (Verso 1996). He came upon Mendelson's article, saw a kinship with his own project, and so added Wagner's Ring of the Nibelung (note, a musical as well as a narrative work), Márquez's One Hundred Years of Solitude, and a few others to the list.

      I've argued that Walt Disney's Fantasia is encyclopedic in that way. It's not a narrative, it's not a written text, and it doesn't belong to "high culture." It's "middle brow" all the way. But it's also encyclopedic in scope; what Disney covered in those two hours is remarkable. I don't know of anything else quite like it, not in two hours.
