
Conscious AI: why the odds are lower than you think
By Anil Seth | Published: 2025-05-14 14:14:00 | Source: Neuropsych – Big Think
A few years ago, in 2022, Google engineer Blake Lemoine was fired after claiming that the chatbot he was working on was sentient. He believed that it could feel things, perhaps suffer, and that the moral status of large language models (LLMs) should be taken seriously. It turned out that it was Lemoine who was not taken seriously.
But a few weeks ago, Kyle Fish, who works as an AI welfare researcher at Anthropic, told The New York Times that there is a 15% chance that chatbots are already conscious. What has changed?
One thing is that discussions of "AI consciousness" have moved from philosophy seminars (and bars) to center stage in academia, and out of the shadows in the tech industry as well. This shift, driven in large part by the astonishing progress of LLMs, is in one way a good thing. If we end up creating conscious machines, whether intentionally or unintentionally, we would unleash an unprecedented moral crisis: we would introduce into the world, at the click of a mouse, new possibilities for suffering, of a kind we might not even recognize.
I think the odds of true artificial consciousness, at least along current paths to AI, are much lower than most people assume (and certainly much lower than 15%). Drawing on my work at the interface of neuroscience and AI, and on my book Being You, I offer three reasons why we tend to overestimate the likelihood of conscious machines.
The first lies in our psychological makeup. We tend to assume that "intelligence" and "consciousness" go together, so that anything intelligent enough must also be conscious. But just because intelligence and consciousness go together in us does not mean they go together in general. This assumption is a reflection of our psychological biases, not an insight into reality. Language exerts a particularly strong pull on these biases, which is why people wonder whether Claude is conscious, but not whether DeepMind's protein-folding system AlphaFold is.
The second reason is also an assumption: in this case, that the biological brain is a kind of computer. If the brain really were a meat-based computer, then everything that depends on its activity, whether intelligence or consciousness, should in principle be possible in a silicon substitute. But the closer you look at the brain, the less like a computer it appears. There is no clean separation between "mindware" and "wetware" of the kind that exists between software and hardware in our silicon devices, and even a single neuron is a vastly complex biological factory. The brain-as-computer metaphor was only ever a metaphor, and we get into trouble whenever we confuse a metaphor with the thing itself. If the brain is not actually a computer, there is little reason to believe that consciousness could arise in silicon.
To put the point another way: no one expects a computer simulation of a hurricane to generate real wind and real rain. In the same way, a computer model of the brain may only ever simulate consciousness, without ever giving rise to it.
The third reason is that we underestimate other possible explanations. No one knows how or why consciousness happens, but there are many possibilities besides its being an algorithm on the one hand, or non-physical magic on the other. One possibility I am exploring in my research is that consciousness arises from our nature as living beings: that it is life, not "information processing," that breathes fire into the equations of consciousness.
Ethics matters in all of this. Even if true artificial consciousness is off the table for current forms of AI, emerging technologies that are increasingly brain-like may move the needle. And even AI that merely seems conscious is ethically problematic, even if there is nothing going on under the hood. Seemingly sentient AI can exploit our psychological vulnerabilities and distort our moral priorities, and if we treat things that appear to have feelings as if they do not (perhaps ignoring their calls for help), we risk brutalizing our own minds.
In the face of this uncertainty, what should we do? First, we should not deliberately try to create artificial consciousness. Even if we do not know what it would take to create conscious AI, we also do not know how to rule out the possibility entirely. Second, we must carefully distinguish the ethical implications of AI that is conscious from those of AI that merely seems conscious. The existential doubts surrounding the former should not distract us from the clear and present dangers of the latter.
Finally, we face similar uncertainties in many other contexts as well: people with serious brain injuries, non-human animals (from bacteria to bats), human embryos, and strange new creations of synthetic biology such as "cerebral organoids" (brain cells grown in a dish and connected to one another). In each case, there is ambiguity about whether consciousness is present, and in each case, our decisions carry moral weight. As science, technology, and medicine continue to advance, more of these scenarios will move from the margins into the spotlight. What we need is nothing less than a satisfactory scientific understanding of consciousness itself.
Disclaimer: Anil Seth is a consultant for Conscium Ltd and AllJoined Inc.





