Thursday, June 11, 2015

The sky is falling in computer land!

Every once in a while I read something that demonstrates, at some length, how crazy and dysfunctional the world of computer software is. Not hardware, but software. Hardware World may not in fact be paradise, but, compared to Software World, it might as well be.

The first such text I read is a book called The Mythical Man-Month by Frederick Brooks. It was first published in 1975 and I read it within a year or three of publication at the urging of my friend, and fellow student, Bill Doyle. Brooks told stories of how, when time was running out on a project, adding more people to the team just made things even worse. The general message I took from the book is something like we really don't know jack about developing software.

Every few years, or anyhow, at least once a decade since then, that message would be driven home by some other text or by discussions with pros in the business. While this goes on I'm also hearing about how we're getting closer to developing intelligent computers. I wonder: will they be intelligent enough not to write crappy software?

I'm currently nearing the end of another such text. It's called What is Code? and it's by Paul Ford in BusinessWeek, at least I think that's the name of the publication. It's one of those Bloomberg imprints. It's quite different from Brooks' book. 

Which is appropriate, as the world of practical computing is different from what it was back then. Back then it was still mainframes and minis, though personal computers were about to be hatched. Nowadays hardware almost doesn't matter; it's all the web and the cloud. And Ford's text is much more along the lines of "how works software" than Brooks' "we don't know how to manage the beast". But still, the latter message gets through. Software is still a problem and problematic.

Meanwhile, the other guys are still promising us intelligent computers, even super-intelligent computers. It's not clear to me just how we're going to do this. Can we really build intelligent computers from crappy software? Oh, sure, once the software's passed into the land of super-intelligence, it'll be able to clean itself up. But how do we get there?

And yet IBM's Watson wasn't possible back in Brooks' day, nor (sorta' useful) machine translation for everyone, nor self-parking cars. So crappy software isn't necessarily an impediment to the development of new areas of computational cleverness.

Mostly what I think is that it's a brave new world and we don't understand it very well.


  1. Sure, software is hard, but creating modern software depends on other things which are really hard, so it might not be an intrinsic issue.

    Large software projects are always carried out by organizations, not individuals. We don't know much about how to create effective learning organizations with good internal communications. In fact, most organizations don't know how to do this at all. They create teams that are badly organized, usually padded with multiple layers of analysts responsible for explaining, with metaphors, what the software should do. No one has a language which is useful for speaking about technical and domain concepts among members of the team. Nor do they have mechanisms for communicating this information; they rely primarily on email and meetings. ("The meetings will continue until performance improves.")

    And then there's culture clash. In the beginning, programming was a great job for an introvert. They sat at a desk and thought and coded. Now they have to be part of a team and go to meetings. They don't respect the extroverts who function as project managers or team leaders. It's hard to build a good team with this going on.

    And when organizations start failing, they grab tightly onto management techniques which promise to fix things (e.g. agile methodology, kanban, etc.) and focus their attention on the manufacturing process of creating software and not on the difficult problems of communication and abstraction.

    And finally, there's the endless temptation to change things. Most software problems are poorly specified in the beginning and are only partially understood when people see demonstrations of running code. Then they want to change the specs. And they should be able to, otherwise you are building something that no one wants.

    Effective engineering organizations are not used to that. The space shuttle was launched with ten-year-old computing technology on board because that was when the spec was written. It's unthinkable to release modern software projects which aren't using this year's (or this month's) latest and greatest thing. Building on ever-changing systems, stacked in layers underneath you, with their own bugs, leads to failure.

    Yes, getting computers to write complex software will probably be easier than getting teams to do it. Look at Rails and Grails for easy examples. They produce functional software on clearly understood specs. Better specs will produce better automated software generation. You won't even need to call it AI.
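    The scaffolding point above can be made concrete with a toy sketch. This is a hypothetical Python illustration, not how Rails or Grails actually work internally: given a clearly understood "spec" (a model name and its fields), a program can mechanically emit working CRUD code, no intelligence required.

    ```python
    # Toy spec-driven code generation: a spec (model name + field names)
    # is mechanically turned into source for an in-memory CRUD class.

    def generate_crud_class(model_name, fields):
        """Emit Python source for a minimal CRUD store from a spec."""
        params = ", ".join(fields)
        assigns = "\n".join(f"        self.{f} = {f}" for f in fields)
        return f'''
    class {model_name}:
        _store = {{}}
        _next_id = 1

        def __init__(self, {params}):
    {assigns}

        @classmethod
        def create(cls, {params}):
            obj = cls({params})
            obj.id = cls._next_id
            cls._store[obj.id] = obj
            cls._next_id += 1
            return obj

        @classmethod
        def find(cls, id):
            return cls._store.get(id)

        @classmethod
        def delete(cls, id):
            cls._store.pop(id, None)
    '''

    # "Compile" the spec into a live class and exercise it.
    code = generate_crud_class("Post", ["title", "body"])
    namespace = {}
    exec(code, namespace)
    Post = namespace["Post"]
    post = Post.create("Hello", "First post")
    assert Post.find(post.id).title == "Hello"
    ```

    The sharper the spec, the more of the program falls out of it automatically; that's the sense in which better specs mean better automated software generation.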

  2. Biological systems are vastly complex and prone to many errors, yet they manage to replicate well. Maybe some day software will just reproduce.

  3. And then we've got the folks at DeepMind developing a system that can teach itself to play Atari games (video). And while I've not got the foggiest idea of how it does it, it does make sense. Put the system in a world that it can observe and in which it can act and, provided the world isn't out of range, it will figure it out. It's pretty clear to me that human and animal minds are self-constructed from the inside, not engineered from the outside by someone poking around and soldering and bolting and gluing and ripping out and replacing, etc.

    As for culture clash, well, yeah, there's a lot of that going around. Over the past year+ I've been following some work in so-called digital humanities. Specifically, work where literary critics use machine learning to investigate large bodies of texts (I found out about Ford's BizWeek article from one of these folks). I've seen some pretty interesting results; I even think that something profound may be going on. But the people doing this work are skittish about analogies to biological evolution and even more skittish about using computers to model the minds of poets – something Dave Hays and I imagined back in 1976. It's not that I want to get on with that project NOW, but more like I've got over four decades of thinking about literature and the mind that has been informed by ideas of computation. And it's about time to get on with it. While there are literary critics who are interested in cognitive science, they didn't get interested in cognitive science until there were versions around that had little or nothing explicitly computational.

    And so forth and so on. The future's not going to be what we dreamed about. It's going to be stranger.