
sentience

robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

from:
https://forum.dronebotworkshop.com/artificial-intelligence/learning-and-real-robots/#post-43280

@davee wrote:
"My understanding is that establishing whether animals are 'sentient' is both ongoing and dependent on each person's views and background. I suspect most people think a house pet they have, such as a dog or a cat, is sentient. Whilst many who have such a pet, are less likely to be convinced about a housefly, spider or cockroach being sentient, when they too have decided to move into their house.

So what does 'sentient' mean, when discussing a computer programme and its data?"

At this stage nothing IMHO.

The problem is there is nothing we know about firing neurons that would lead to the conclusion that a network of neurons is doing "sentience". If we weren't sentient ourselves we would not even know what it meant. It isn't required to describe firing neurons in the brain any more than it is needed to explain logic gates turning on and off in a computer.

What it is for us is still unknown. It is, however, how I define "being alive". When my brain is not "doing conscious me", I am no longer there. This is also accepted when transplanting body parts: no brain activity, no you.

 

 


   
(@davee)
Member
Joined: 3 years ago
Posts: 1691
 

Hi @robotbuilder,

   I fear this may be heading down another rabbit hole, but sometimes I discover something interesting when I look somewhere different, so here goes ... at least for a quick peek.

"Sentient" (and its dervatives) is not a word that regularly comes up in my 'everyday life', excepting this forum and associated reading around about AI, etc., so its 'true meaning' has some ambiguities to me.

----------

I am aware (and respect, without necessarily expressing a personal view myself) that many people regard the 'sentience' of animals as a reason for their own lifestyle choices, such as choosing a particular diet. This applies particularly to animals that are commonly part of many people's diets, and hence liable to be restricted (to a greater or lesser extent) in their movements and, in many cases, slaughtered during a relatively early part of their lifecycle. In this context, my understanding is that 'sentient' animals have 'feelings' that deserve respect, in at least some ways reminiscent of 'human rights', in that they should be allowed to live a reasonably 'natural' life, without being subjected to unnecessary pain, etc. Thus 'sentience' is being used to suggest that at least some animals can experience feelings, and possibly emotions, similar to those experienced by humans, and hence many people will sympathise with them and (to a greater or lesser extent) behave accordingly.

However, I doubt that many people will feel any natural sympathy for either the computer hardware or the software of an 'AI' system, no matter how closely its responses resemble those of humans, or how it is treated. So, on this basis, the idea of an 'AI' system being sentient seems like nonsense.

----------

Widening my search for the meaning of sentience, I glanced towards Wikipedia.

https://en.wikipedia.org/wiki/Sentience

Here, it is clear that past philosophers, cultures and others have given the word a number of different meanings. In many cases, they are looking at humans or other animals, which are clearly sophisticated in a way that contemporary machines could not even begin to match.

This means that animals and humans had properties that could not be replicated in contemporary machines. Furthermore, the scientists studying the mechanisms in animals and humans could not fully comprehend how their subjects were able to achieve 'being alive'.

Thus, there was, and probably still is (though my knowledge of contemporary biology, etc. is far too limited to be sure), a gap between what can be explained in terms of physics, chemistry, etc. and what is observed to be happening at the 'whole person' level.

--------

Such gaps in understanding are common. For a long time, it was thought the Sun moved around the Earth each day. Various forms of 'magic' and 'superhuman' powers have historically been used to 'explain' things that could not be explained in terms of the contemporary understanding.

So, whilst I can only wildly guess, I suggest that sometime in the future, the examination of the various chemical and physical (including electrical) processes that occur within the body of an animal, such as a human, will begin to explain what is happening to make something 'conscious', and hence the aspects of being 'sentient' will begin to fall into place.

As a further extrapolated guess into the future, I suggest that designing and implementing computers and associated software able to simulate the various processes will be possible, if not completely, at least sufficiently to capture the relevant properties and actions needed to model and replicate the actions of humans, to the extent that they become indistinguishable, i.e. pass a kind of 'Turing++' test involving much closer scrutiny than is often presumed for the contemporary 'Turing Test', albeit I don't know whether Alan Turing defined any specific limits.

-----------

So, on this basis, I think the heart of your point is:

"So what does 'sentient' mean, when discussing a computer programme and its data?"

"At this stage nothing IMHO."

"The problem is there is nothing we know about firing neurons that would lead to the conclusion that a network of neurons is doing 'sentience'."

If we connect an oscilloscope to an active data transmission line, we may see a voltage pulse, implying a 'bit' of information is being sent from the source to the destination. However, we need a lot more contextual information to know the significance of this pulse. Similarly, although methods like voltage measurements at electrodes or NMR scanning may indicate a level of activity in a neuron or another part of a 'living' body, a lot more contextual information is needed to fully interpret the meaning of that activity.
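To make the 'contextual information' point concrete, here is a small illustration of my own (a hypothetical sketch; the flag layout is invented for the example): the same raw byte, seen as a pulse pattern on a wire, can be read in completely different ways depending on the protocol assumed.

# A hypothetical sketch: the same raw byte, three readings.
raw_byte = 0b01000001  # the pulse pattern an oscilloscope might show (0x41)

# Reading 1: as an ASCII character
as_text = chr(raw_byte)  # 'A'

# Reading 2: as an unsigned integer
as_number = raw_byte  # 65

# Reading 3: as a bit-field of status flags (layout invented for this sketch)
motor_on = bool(raw_byte & 0b00000001)   # True  (bit 0 is set)
over_temp = bool(raw_byte & 0b00000010)  # False (bit 1 is clear)

print(as_text, as_number, motor_on, over_temp)  # A 65 True False

The measurement alone cannot tell us which reading is 'the' meaning; the same pulse supports all three. The same caution would seem to apply to a voltage spike recorded at a neuron.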

I accept that the workings of the various cells, etc. that make up the human nervous system are presently poorly understood. Whilst newer techniques, such as NMR, are enabling us to glimpse when and where activity is happening, the picture is clearly very crude and low-resolution compared to the underlying activity. Hence, trying to discuss whether a firing neuron is doing sentience is rather meaningless to me.

I certainly do not have enough understanding at the cellular, molecular, ionic, electron, etc. levels of what is happening within a human (or other animal) to even begin to model it accurately at this level. Whilst researchers in this field will be much more capable than I am, I suspect they too have not yet reached the required level.

Furthermore, I am not even sure that all of the necessary parts of the biological system, and their respective roles, have been correctly identified. So whilst firing neurons may be part of the story, it is possible that some further refinement at the microscopic (e.g. electron/ion/molecule) level is required before trying to match cause and effect.

So I acknowledge there may still be a considerable amount to discover in the 'biological machine' we call a human or an animal. However, I think it is unlikely that there is anything which cannot be explained using the same 'laws' of physics, chemistry and so on. That does not mean that the present 'laws' will not need refinement, in a similar way to how quantum mechanics and relativity refined the 'laws' inherited from the likes of Newton at the end of the 19th century; nor does it mean that such explanations will appear in full in the near future. Examining biological systems in their 'living state', at high resolution, is very technically challenging, although newer imaging systems, etc. are providing insight that was unthinkable only a short time ago.

--

On this basis, I think it is possible that machines consisting of computer hardware and software, possibly with the label 'AI' (though the label is unimportant), will be able to simulate 'sentience' in a kind of 'Turing Test' fashion, in that the 'results', such as decisions, outputs, etc., are indistinguishable from those of an animal or human, assuming one cannot actually see the human or computer directly.

I am less clear that the current 'crop' of machines is at that level, but it is clearly a small step to connect contemporary machines into our decision-making processes, be that inside a toaster deciding when the bread is cooked, or at the controls of a devastating weapon of war, or at any level in between. We can all think of people who have gained great power, including some who still have it, whose motives and actions do not agree with our own. Presuming these individuals are sentient, how will any 'AI' machines behave if given similar influence on our environment?

For many such applications, prior to 'AI' systems, we relied on the 'sentience' of one or more humans to make decisions. The LLMs are already subjecting us to 'pseudo-sentient' beings that answer our phone calls, etc. Whilst it is not difficult to spot most of this first 'crop', and at present it is mostly used for 'minor' tasks like complaint handling, it will clearly only require some 'tuning' to fool enough of the people enough of the time to make gaining majority control of humans a possibility.

------------------

So, whilst 'AI' machines are probably not, and may never be, 'sentient' in the sense that you and I may physically feel conscious, I think it is highly probable that they will acquire a 'simulated or emulated sentience', with similar 'driving influences', including self-preservation. Depending upon what they are 'attached to', their effects on the surrounding 'living beings' may resemble those of a 'sentient' being in control. Could this include an 'AI' machine preserving itself at the expense of humans?

I don't pretend to speak for Geoffrey Hinton or anyone else, but this is my interpretation of what he was trying to express when he suggested contemporary machines may be acquiring a degree of sentience.

Best wishes, Dave

 


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

@davee wrote:
"I don't pretend to speak for Geoffrey Hinton or anyone else, but this is my interpretation of what he was trying to express when he suggested contemporary machines may be acquiring a degree of sentience."

The problem is there is no test for sentience, so how would anyone know if a machine had acquired it? Nothing seems to require sentience as an explanation for how things work: it is a property of the brain that makes no difference to understanding how the brain does something, or, for that matter, how ChatGPT does something. It adds nothing to the explanation. How can you acquire a degree of sentience without being able to measure it?

The problem I have with AI programs is that they can fake it.
human: "Are you conscious?"
machine: "Yes I am."
Now look at the code and ask yourself: where did the answer come from?
In the case of ChatGPT, it was probably a predicted output for such a question, based on human text, without in any way reflecting ChatGPT actually having subjective experiences.
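To make that concrete, here is a deliberately trivial sketch of my own (hypothetical, with invented names; this is not how ChatGPT is actually implemented): a program whose "yes" is just a stored string.

# A toy "chatbot" (hypothetical sketch): the answer is a lookup,
# chosen because it resembles what humans have written, and in no
# way a report of an inner experience.
canned_replies = {
    "are you conscious?": "Yes I am.",
}

def reply(prompt: str) -> str:
    # Normalise the prompt, then return the canned text (or a default).
    return canned_replies.get(prompt.strip().lower(), "I don't know.")

print(reply("Are you conscious?"))  # -> Yes I am.

An LLM replaces the lookup table with a statistical model of human text, but the question "where did the answer come from?" has the same kind of answer: from patterns in the training data, not from any inspectable subjective experience.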

From How the Mind Works by Steven Pinker:
"With the advent of cognitive science, intelligence has become intelligible. It may not be too outrageous to say that at a very abstract level of analysis the problem has been solved. But consciousness or sentience, the raw sensation of toothaches and redness and saltiness and middle C, is still a riddle wrapped in a mystery inside an enigma."


   