2/ As AI continues to develop, it is natural to ask whether AI systems can be not only intelligent, but also conscious. But is this likely? And what would the consequences be?
— Anil Seth (@anilkseth) December 10, 2024
4/ I first consider why people might think that AI is, or could be, conscious. I show how we can be led astray by built-in biases of anthropocentrism, human exceptionalism, and anthropomorphism. Intelligence and consciousness are not the same thing. pic.twitter.com/EafyFzhQrJ
— Anil Seth (@anilkseth) December 10, 2024
The tweet stream continues through #20. The article:
Anil K. Seth, Conscious artificial intelligence and biological naturalism, PsyArXiv Preprints, 2024-12-10.
Abstract: As artificial intelligence (AI) continues to advance, it is natural to ask whether AI systems can be not only intelligent, but also conscious. I consider why people might think AI could develop consciousness, identifying some biases that lead us astray. I ask what it would take for conscious AI to be a realistic prospect, challenging the assumption that computation provides a sufficient basis for consciousness. I instead make the case that consciousness depends on our nature as living organisms – a form of biological naturalism. I lay out a range of scenarios for conscious AI, concluding that real artificial consciousness is unlikely along current trajectories, but becomes more plausible as AI becomes more brain-like and/or life-like. I finish by exploring ethical considerations arising from AI that either is, or convincingly appears to be, conscious. If we sell our minds too cheaply to our machine creations, we not only overestimate them – we underestimate our selves.
From the article, on non-Turing computation:
Turing computation is powerful, but not every function is Turing-computable. Turing himself identified a class of non-computable functions – via what is now known as the ‘halting problem’ – in his response to the Entscheidungsproblem posed by Hilbert (Turing, 1936). Other examples include functions involving continuous variables, stochastic/random elements, and unbounded sensitivity to initial conditions (e.g., deterministic chaos). Digital computers based on Turing machines can simulate and approximate non-computable functions – this happens all the time in computational modelling – but these approximations will generally not be exact.
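The inexactness of digital approximation is easy to demonstrate with a chaotic system. The following sketch (a hypothetical illustration, not from the article) iterates the fully chaotic logistic map in ordinary 64-bit floating point and compares it against a 100-digit high-precision reference: tiny rounding errors are amplified roughly twofold per step, so the two trajectories agree closely at first and then diverge completely.

```python
# Toy illustration: a digital simulation of chaotic dynamics is only ever an
# approximation. We iterate the logistic map x -> 4x(1-x) in float64 and in
# 100-digit decimal arithmetic, then watch the rounding error blow up.
from decimal import Decimal, getcontext

getcontext().prec = 100  # 100-digit "reference" trajectory

def logistic_float(x, steps):
    traj = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)  # fully chaotic regime (r = 4)
        traj.append(x)
    return traj

def logistic_decimal(x, steps):
    x = Decimal(x)  # exact conversion of the float seed, so both runs start identically
    traj = []
    for _ in range(steps):
        x = 4 * x * (1 - x)
        traj.append(float(x))
    return traj

f = logistic_float(0.1, 80)
d = logistic_decimal(0.1, 80)
diffs = [abs(a - b) for a, b in zip(f, d)]
# Early steps agree to ~1e-13 or better; per-step rounding error (~1e-16) is
# roughly doubled each iteration, so by step ~60 the discrepancy is of order 1.
```

The same logic applies however many bits of precision are used: more precision only delays the divergence, it never removes it, which is the sense in which such functions are approximated rather than computed.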
The limited remit of Turing computation means that systems – including brains – might implement functions that are non-Turing-computational. The idea that mental states (including consciousness) depend on non-computational functions is called non-computational functionalism (Piccinini, 2018, 2020). Non-computational neural functions could include processes relating to (continuous) electromagnetic fields, fine-grained timing relations (only order, not dynamics as such, matters for Turing computation), freely diffusing neurotransmitters, and so on. Non-computational biological functions also include those that necessarily involve a particular material property: examples include digestion, circulation of blood, and metabolism. Note that computational and non-computational functions could co-exist. For example, it could be that some aspects of mind are computational, but not consciousness (Piccinini, 2023).
Many other notions of ‘computation’ have been proposed. Some are narrower than Turing computation (e.g., computation requiring an artefact being used by a person in a particular way), but most are broader (N. G. Anderson & G. Piccinini, 2024; Chalmers, 1996b). Broader forms of computation include analogue, neuromorphic, and mortal computation. I will return to these later. For now, a focus on Turing computation is justified since this kind of computation underlies conventional AI, whether based on artificial neural networks or otherwise.
Mortal computation:
The recent concept of mortal computation is particularly interesting (Hinton, 2022; Ororbia & Friston, 2023). Standard Turing computation is ‘immortal’: its existence and utility outlast the existence of any specific instance of hardware. This reflects the core computer science principle that software should be separable from hardware both in principle and in practice, so that the same algorithm executed on different hardware gives the same result. But immortal computation is expensive. It requires continual error correction to ensure that 1s remain 1s (and 0s remain 0s). As algorithms and models grow in complexity, the computational and energetic costs of error correction, and therefore of computational immortality, grow quickly.
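The cost of keeping 1s as 1s can be made concrete with the simplest error-correction scheme, triple modular redundancy. The sketch below (a hypothetical illustration, not from the article) stores each bit three times and recovers it by majority vote: the error rate drops from roughly p to roughly 3p², but only at the price of 3x the storage plus extra compute on every read, which is the kind of overhead the mortal-computation argument points to.

```python
# Toy illustration of digital error correction: triple modular redundancy.
# Reliability (immortality) is bought with redundant storage and extra work.
import random

def store(bits):
    return [b for b in bits for _ in range(3)]  # 3x storage overhead

def flip_noise(cells, p, rng):
    return [c ^ (rng.random() < p) for c in cells]  # flip each cell with prob p

def read(cells):
    out = []
    for i in range(0, len(cells), 3):
        out.append(1 if sum(cells[i:i + 3]) >= 2 else 0)  # majority vote per bit
    return out

rng = random.Random(0)
data = [rng.randint(0, 1) for _ in range(10_000)]
noisy_raw = flip_noise(data, 0.01, rng)                 # no protection
noisy_tmr = read(flip_noise(store(data), 0.01, rng))    # protected copy
raw_errors = sum(a != b for a, b in zip(data, noisy_raw))
tmr_errors = sum(a != b for a, b in zip(data, noisy_tmr))
# With p = 1%, the unprotected copy loses ~1% of its bits; the protected copy
# loses ~3p^2 = 0.03%, at triple the storage and a vote per bit read.
```

Real systems use far more efficient codes than this, but the trade-off is the same in kind: error correction, and hence hardware-independence, is never free.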
One implication of this argument is that biological brains, which are highly energy efficient, cannot be implementing immortal computations. If they are implementing computations at all, then these computations are likely to be mortal, which means they cannot be separated from the ‘hardware’ (or ‘wetware’) which implements them. This in turn places constraints on the multiple realisability and substrate flexibility of these (mortal) computations. In particular, the substrate flexibility required for conscious AI is unlikely to hold because (conventional) AI is based on an implementation paradigm which assumes computational immortality. I find this a provocative argument against the plausibility of conscious AI because it is based on limitations arising from within a computational view of mind.
There's much more in the article.
And in real life, several companies have developed AI chatbots designed to befriend people – chatbots that have encouraged self-harm, suicide, and even murder. Parents whose teenagers' behavior changed have filed lawsuits against the companies. As one commenter on a WaPo article points out, the goal of the companies' design is to promote "engagement", and the resulting dialogue sounds like that of a predator. Whatever the AI is trained on, the design favors increasing use regardless of how that is achieved. This is disgusting. All the conjecture about whether AI can "think" or will have agency really needs to be balanced, in those very discussions, with the already-real lowest common denominator of its use – profit at human expense.