Basically, we want to know which problems computers can solve not only in principle, but in practice, in an amount of time that won’t quickly blow up in our faces and become longer than the age of the universe. We don’t care about the exact speed, e.g., whether a computer can do a trillion steps or “merely” a billion steps per second. What we care about is the scaling behavior: How does the number of steps grow as the number to be factored, the molecule to be simulated, or whatever gets bigger and bigger?

I like this: scaling behavior.
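To make “scaling behavior” concrete, here’s a minimal sketch in Python (my choice of language for illustration; none of this code is from the quoted source). It uses one of the examples above, factoring: the naive method, trial division, tests candidate divisors up to the square root of the number, so the work grows exponentially with the number of digits rather than linearly.

```python
import math

# Trial division tests candidate divisors up to sqrt(n), so for a d-digit
# number the worst case is about 10**(d/2) divisions: the step count is
# exponential in the size of the input, not in the number itself.
for digits in (6, 12, 24, 48):
    n = 10 ** digits          # a representative d-digit number
    steps = math.isqrt(n)     # worst-case trial divisions
    print(f"{digits:2d} digits -> ~{steps:.1e} divisions")
```

Doubling the number of digits squares the work, which is exactly the kind of blow-up the quote is warning about.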
For the DH folks interested in the nature of computing, THIS is the sort of thing that computer scientists worry about, but which has little direct relevance to practical programming.
A more mundane example: chess. From an abstract computational point of view, chess is no more difficult than tic-tac-toe. Tic-tac-toe is a finite game: not only does each game have a finite number of steps, but the number of possible games is finite as well. Chess is the same, provided that a convention is adopted to end some games in a draw (e.g., if no piece is captured and no pawn is moved in 50 moves, declare a draw). With that provision, each chess game has a finite number of steps, and the number of possible games is finite.
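The finiteness of tic-tac-toe is concrete enough to verify directly. Here is a brute-force sketch (again Python, again my own illustration) that enumerates every complete game; the recursion must bottom out because the board fills up in at most nine moves.

```python
# The eight winning lines on a 3x3 board, with cells indexed 0..8 row by row.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """True if either player has three in a row."""
    return any(board[a] == board[b] == board[c] != " " for a, b, c in WIN_LINES)

def count_games(board=(" ",) * 9, player="X"):
    """Count every distinct complete game by exhaustive recursion."""
    if winner(board):          # the previous move ended the game
        return 1
    if " " not in board:       # board full with no winner: a draw
        return 1
    total = 0
    for i in range(9):
        if board[i] == " ":
            nxt = board[:i] + (player,) + board[i + 1:]
            total += count_games(nxt, "O" if player == "X" else "X")
    return total

print(count_games())           # 255168: every possible tic-tac-toe game
```

The whole game tree, every possible game from the first X to the last mark, is exhausted in a fraction of a second. Nothing remotely like that is feasible for chess.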
Think about it.
However, the number of possible chess games is astronomically large, while the number of possible tic-tac-toe games is quite small. That difference makes programming a computer to play tic-tac-toe very different from programming one to play chess.
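For a rough sense of the gap, here is a back-of-the-envelope figure along the lines of Shannon’s classic estimate (roughly 30 legal moves per position, games running about 80 plies); the exact numbers are approximations, not something computed from the rules of chess.

```python
# Shannon's rough estimate: ~30 legal moves per position over ~80 plies
# gives about 30**80, i.e. roughly 10**118 possible chess games.
chess_games = 30 ** 80
print(f"chess: roughly 10^{len(str(chess_games)) - 1} possible games")
print("tic-tac-toe: 255,168 possible games (from the count above)")
```

A program can simply look at every tic-tac-toe game; a chess program has to cope with a space it can never come close to exhausting.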