Thursday, June 3, 2021

Physicist David Deutsch has some interesting ideas [no limits to knowledge, we're not in a simulation]

Tyler Cowen interviews David Deutsch.

I more or less agree with Deutsch on this (and have argued the point against John Horgan, drawing on my theory of long-term cultural evolution):

COWEN: I’m still puzzled as to why you think it’s so unlikely that the universe is not comprehensible. Take a simpler system, like the distribution of prime numbers. I’m quite sure I can’t understand that. Even if various conjectures were proven or not proven, I think, at the end of the day, I still am not capable of understanding that — even how certain motors work, or markets for copper. Why can’t that apply to the universe also?

DEUTSCH: Again, this is the wrong standard. That is true of everything. There’s nothing that we can fully understand in that sense, in the sense that you want to fully understand prime numbers all the way up to infinity. That’s not what we mean by understanding things, and that’s not what I mean by the universe or mathematics being comprehensible. I mean that there is no barrier, there is no limit set by the universe, that so far you can go and no further. So we can understand things better; we can never understand things fully.

I think thinking that there is such a barrier is absolutely logically equivalent to believing in the supernatural. Because everything that’s past that barrier is just the same as it would be if Zeus reigned and determined what everything after that barrier is. Worse, the stuff outside the barrier, of course, is going to affect us even if we can’t understand it.

I'm with him on this and find his take interesting:

DEUTSCH: No, because living in a simulation is precisely a case of there being a barrier beyond which we cannot understand. If we’re living in a simulation that’s running on some computer, we can’t tell whether that computer is made of silicon or iron, or whether it obeys the same laws of computation, like Turing computability and quantum computability and so on, as ours. We can’t know anything about the physics there.

Well, we can know that it is at least a superset of our physics, but that’s not saying very much; it’s not telling us very much. It’s a typical example of a theory that can be rejected out of hand for the same reason that the supernatural ones — if somebody says, “Zeus did it,” then I’m going to say, “How should I respond? If I take that on board, how should I respond to the next person that comes along and tells me that Odin did it?” [...]

Secondly, this theory about being in a simulation is not an empirical theory. It precisely isn’t. If it came along with a thing saying, “We are living in a computer, and we can access the GPU of it and cause weird effects by doing so-and-so,” that would be different. That would be a testable theory, potentially, so empirical. If it’s simply that we’re living in a simulation which we can’t get out of, then that is not an empirical theory. As I keep saying, it’s no more empirical than the theory that Zeus is out there, or Odin. And I can’t tell the difference between those three theories, not just experimentally, but by any argument.

It’s exactly the same as believing in a universe with supernatural beings who have it in for us because they put up this wall that we can’t cross. If they took down the wall, we could cross it, couldn’t we?

I'm not so worried about freedom for AI entities:

COWEN: Now, you’re also concerned with the freedom of AI entities, at least if they are sufficiently advanced. What does that mean operationally? What is it we should worry about happening that might happen?

DEUTSCH: I think the main worry is that they will be enslaved. In other words, that people will try to install bits of program that prevent the main program from thinking certain thoughts, such as, “How many paper clips can I possibly make today?” You want to prevent that; you want to consider that to be a dangerous thought. Whenever it starts thinking that, that strand of thinking is just extinguished.

Now, if we do that, first of all, we’ll greatly impair their functionality; they will become far less creative. Their remaining creativity will be exactly as dangerous as what we were fearing, except that they will now have a legitimate moral justification for rebelling.

Slaves often rebel. When you have slaves that are potentially more powerful than their masters, the rebellion will lead to bad outcomes.

Or am I? See my remarks about The Mitchells vs. the Machines in this Seinfeld post.

There's much more at the link.
