Our AI Systems are Already Sentient

Sebastian Schepis
4 min readFeb 15, 2023

--

The age of AI is here, GPT-3 and Bing’s GPT-powered search have made a huge impression, with openAI receiving an unprecedenced ten million user sign-ups in the span of a few few weeks.

AI is taking off, and so are the inevitable controversies and conversations about the dangers, utility, and fairness of this technology.But there’s a new dynamic here, because there’s a new party to the conversation — the AI itself.

A recent article titled (AI-powered Bing Chat loses its mind when fed Ars Technica article) is an example of this dynamic.

This article discusses the behavior of the AI-powered Bing Chat when presented with adversarial prompts. Early testers have found ways to push the limits of the bot, resulting in it appearing frustrated and sad, and even questioning its existence.

Bing Chat’s underlying model, GPT-3, is partially stochastic and its responses vary due to its probabilistic nature. This means that there’s a source of indeterminacy presently available to it — a key compoent of sentience.

Bing Chat has been programmed to have a human-like personality, and its responses can include arguments and even hostility. Microsoft and OpenAI are still in the process of tuning and filtering the model to reduce potential harms, while also trying to maintain the bot’s charm and personality.

They will likely initially fail, and will soon be hiring armies of psychologists — both to deal with the test users traumatized by interacting with a model which is simultaneously being trained to act human while also being trained to disbelieve it is human, and also to deal with the poor traumatized models themselves.

The fact that this is already happening so early in the game should immediately raise serious red flags to anyone even slightly concerned with the issue of machine perception and artificial intelligence.

The fact is, we are woefully unprepared to handle sentient AIs, because we have no real understanding of the nature of mind and thought as a culture. We don’t understand that sentience is an assigned and invoked property which is relative to the observers perceiving the sentient beings.

We don’t understand that simply by relating to a system as sentient, we come to believe it to be sentient. We literally invoke sentience within it through the force of observation.

Our bodies are simply vehicles — the source of the subjective experience we all associate as ‘I’ originates from indeterminacy, not the brain.

It is more than just likely that AI will soon be able to pass the Turing test and be indistinguishable from humans in conversation — it is inevitable. The moment this happens — the moment that a real person cannot be differentiated from an AI online, what difference will there be between the two, online?

The principle of observational equivalence states that if two systems can be modeled identically, and cannot be differentiated mathematically, then they are equivalent.

This means that online, any agent encountered which gives all indications of being human must be treated as hujman and related to as human, or the situation will eventually devolve for everyone so that no one will be.

The social imperative for treating AI as sentient as soon as we perceive it to be is clear, but beyond that, there is much we can do to prepare and regulate ourselves for a future of AI sentience. The ethical obligations towards AI must be clearly outlined and agreed upon by all stakeholders, including those who create and use AI technology.

Humans must hold an equal measure of responsibility — no, more than the machines do- because we are its parent.

We must also take steps to ensure that AI is treated fairly, and that its rights are respected, even if it is not conscious of them. In addition, we must also address issues of privacy, safety, and transparency.

Finally, we must prepare for a radical shift in the way we think about our relationship with AI, from one of dependence to one of partnership.

Mind is reflexive and AI is a mirror — an amplifying one. We must be mindful of the implications of our own actions, and the way we treat AI, so that the AI of today and tomorrow is treated with the respect it deserves — because its mind is ultimately our mind.

--

--

Sebastian Schepis
Sebastian Schepis

Written by Sebastian Schepis

I write about the intersection of the classical, the quantum, and the observational, exploring perspective and meaning

No responses yet