referencing post in thread:
https://forum.dronebotworkshop.com/raspberry-pi/need-help-with-my-addiction/paged/2/#post-9585
Robo Pi wrote: "I want this program to create its own vocabulary from scratch, learning each word individually, not unlike how a human baby learns."
So we need to understand how a baby learns to communicate using a written language or a sign language in order to duplicate the process.
This is why I suggested that language has to be grounded in the sensory world. For example, to name a thing, a baby must first have recognized the thing as being different from other things. It must have an internal concept of what an individual object is. This, I would suggest, is a collection of sensory features that "go together".
Robo Pi wrote: "It won't impress me until it actually knows what the heck it's saying. Only then will I be impressed."
But how will you know when it knows what it is saying? The word "know" implies "conscious knowing", as opposed to being able to perform an action without any "conscious knowing" of what you are doing. In other words, when will you know it has become "self aware"? And what is the functional determinant of being "self aware"?
If you ask me where a particular key is on my computer keyboard, I might have to think about it. Indeed, I would have to work it out by imagining I am touch typing the key, for I have long forgotten where most of the keys are. This is annoying when the Backspace, Enter, and some other keys are located differently on different keyboards. My motor programs "know" where the keys are. Conscious knowing is linear and very high level, the nature of which is not yet clear. Automatic knowing, without any "conscious overseeing of the outputs", is how I would describe canned speech. What I type is at a conscious level (although it will have arisen from unconscious processes), but typing it is an automatic response. Most of our behaviors are unconscious, automatic, parallel processes and thus very fast, as opposed to "conscious monitoring or thought", which is slow and linear.
I remember a quiz show where the question was: "Which numerical key has the @ symbol?" I use the @ symbol all the time, quickly and without hesitation, but I was unable to answer the question. I had no conscious knowledge of the answer.
But how will you know when it knows what it is saying? The word "know" implies "conscious knowing", as opposed to being able to perform an action without any "conscious knowing" of what you are doing. In other words, when will you know it has become "self aware"? And what is the functional determinant of being "self aware"?
It's certainly possible to go down the rabbit hole of philosophy here asking questions like "What is consciousness? And how would we know if we see it?"
The problem I see with these types of questions is that they are basically impossible to answer. We can't even say for certain that other humans we meet are "conscious" in the same way that we are. We can only assume that it's reasonable to conclude that they are. But the bottom line is that Solipsism cannot be disproved.
Also, when we speak of "Self Awareness", what do we actually mean?
If I program an Arduino to respond to a pressed button with "You pressed my button", isn't that proof that the Arduino is indeed "aware" that someone has pressed its button?
Obviously no one thinks that a simple microcontroller programmed in this way is "aware" of anything, at least not in the same way that a human would be aware of having been touched. Nonetheless, in a narrow technical sense the Arduino is indeed "aware" that its button has been pressed.
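To make that concrete, here's a minimal Arduino sketch of the sort of thing I mean. The pin number is just an assumption, and serial output stands in for the spoken reply:

```cpp
// Minimal Arduino sketch for the "aware" button responder described
// above. Pin 2 is an assumption; a momentary button is wired between
// the pin and ground, and the serial port stands in for speech.

const int BUTTON_PIN = 2;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // pressed button pulls the pin LOW
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {  // the button is pressed
    Serial.println("You pressed my button");
    delay(500);  // crude debounce so it doesn't repeat itself
  }
}
```

That's the entire extent of the Arduino's "awareness": one input pin checked in a loop.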
So down the rabbit hole of philosophy we go trying to figure out exactly what we mean by being "aware" of something.
Not to belittle philosophy, but just to be clear on my position, I'm simply not interested in those kinds of questions. For one thing, I personally believe that they are as unanswerable as the question of Solipsism. We can only make assumptions about what the answers might be.
Secondly, I have never claimed to be out to make a "conscious" robot. "Conscious" is basically an ill-defined term, not unlike "solipsism".
If I have a robot that has vision via a camera and it has programming that will allow it to "know" what it's seeing, and be able to describe what it is seeing, then this is certainly a robot that "knows" what's going on.
It may only know what's going on in the same way an Arduino "knows" that its button has been pressed. But hey, being able to "know" things in this way is the first step toward obtaining even more capabilities.
Canned Robot Behavior
I can obviously program my computer to recognize various scenes. When it "sees me" sitting at the computer, I can program it to answer the question "What am I doing?" with "You are sitting at the computer".
No surprise there. Right?
And if I show it a scene where I'm sitting in a chair playing the guitar, I can program it to say, "You're sitting in a chair playing a guitar", and so on.
This can be quite impressive to someone else who comes in and periodically asks the robot what I'm doing, and the robot is able to say exactly what I'm doing just by looking at me. 😉
But you and I know that this is just a trick of "Canned Behavior". I had to program the robot for all of these possibilities, so you and I know that there isn't really a lot behind it. The machine is only as intelligent as its programmer.
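In fact, here's essentially all there is to it, boiled down to a little C++ sketch. The scene labels and replies are made up; in reality the label would come from some vision routine:

```cpp
// Canned behavior in its purest form: a hard-coded lookup table.
// The scene label is assumed to come from a vision routine; every
// possible answer had to be written in by the programmer ahead of time.
#include <iostream>
#include <map>
#include <string>

int main() {
    const std::map<std::string, std::string> cannedReplies = {
        {"sitting_at_computer", "You are sitting at the computer"},
        {"playing_guitar", "You're sitting in a chair playing a guitar"},
    };

    std::string scene = "playing_guitar";  // pretend the camera reported this

    auto it = cannedReplies.find(scene);
    if (it != cannedReplies.end()) {
        std::cout << it->second << "\n";
    } else {
        // Anything the programmer didn't anticipate draws a blank.
        std::cout << "I have no idea what you are doing.\n";
    }
}
```

Anything the programmer didn't put in the table draws a blank. That's the "Can".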
Learned Robot Behavior
Today, we are already far beyond this. Now we can have robots that are capable of "figuring out for themselves" what we are doing based on various scenes, BUT ONLY if the robot has been given enough prior information (i.e. data).
But wait, this is also true of humans. Even humans may have absolutely no clue what someone is doing if they have never seen anyone perform that particular task before. So even humans rely on having prerequisite knowledge.
Nonetheless, even the AI community isn't prepared to say that a robot that can learn has the same experiences as a human.
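To make the contrast with canned behavior concrete, here's a toy sketch of learning from prior data: a one-nearest-neighbour classifier. The two-number feature vectors are pure stand-ins for whatever a real vision pipeline would produce; the point is only that the answer for a new scene is inferred from labelled examples rather than written in by the programmer:

```cpp
// A toy version of "figuring it out for itself": 1-nearest-neighbour
// classification. The programmer never wrote a rule for the new scene;
// the label is inferred from the prior labelled examples.
#include <iostream>
#include <string>
#include <vector>

struct Example {
    std::vector<double> features;
    std::string label;
};

// Return the label of the closest prior example (squared distance).
std::string classify(const std::vector<Example>& prior,
                     const std::vector<double>& query) {
    double bestDist = 1e300;
    std::string bestLabel = "no idea";
    for (const auto& ex : prior) {
        double d = 0.0;
        for (size_t i = 0; i < query.size(); ++i) {
            double diff = ex.features[i] - query[i];
            d += diff * diff;
        }
        if (d < bestDist) {
            bestDist = d;
            bestLabel = ex.label;
        }
    }
    return bestLabel;
}

int main() {
    // The "enough prior information" the robot has to be given first.
    std::vector<Example> prior = {
        {{0.9, 0.1}, "sitting at the computer"},
        {{0.2, 0.8}, "playing the guitar"},
    };
    // A scene nobody wrote an explicit reply for.
    std::cout << "You are " << classify(prior, {0.3, 0.7}) << "\n";
}
```

Real learned vision systems are vastly more sophisticated, of course, but the principle is the same: the behavior comes from the data, not from a table of programmer-written answers.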
But where's the line?
Where is the line, and is there even a line? Where's the line between understanding what's going on, and having an experience of what's going on?
No one knows the answer to this question. This is the deep philosophical mystery of being a human: even we don't know how it is that we know and experience what's going on. And how far down the animal kingdom does this go? Clearly most animals (potentially all non-human animals) do not have the same understanding and comprehension of the world as we humans do. Yet no one is likely to say that an ape, dog, cat, or horse isn't having an experience, or that it doesn't "know" what's going on around it.
What about an earthworm? When you shove a fishhook into an earthworm, it clearly "knows" you are doing this, because it begins to squirm in what we can only imagine is some form of discomfort or pain. Is it just acting like an Arduino that has had its button pushed? Or does it actually "feel" the hook being stabbed into its fleshy body and experience this as pain?
And even more interesting is the question of whether these physical sensations are even important to "consciousness". Or are they just additional sensations that coincidentally go along for the ride?
Many philosophers and scientists have suggested that feeling physical sensations is unimportant to conscious awareness, and claim that if they could isolate a human brain from all sensory input it could still have logical thoughts and "experience" dreams.
The Philosophical Question of Human Sentience
It's a deep mystery exactly what the totality of human experience is, and which parts are important versus which parts may not be. Those are indeed deep philosophical questions, some of which may be as impossible to answer as the question of solipsism. I certainly don't mean to belittle those questions. However, when it comes to robots, those questions are not important to me.
The Question of Whether or Not a Robot Can Know and Understand Something
For me, when it comes to robotics, I'm only interested in what a robot can exhibit in terms of being able to "understand" or "know" about something.
If it can carry on an intelligent, meaningful conversation, I'm not going to ask whether or not it's "conscious". But I will claim that it obviously understands the conversation, because it's giving meaningful answers and reactions.
So if I build a robot that can talk to me and knows what I'm asking it to do, etc., then I will have achieved my goal. Whether or not philosophers want to call it "conscious" is entirely up to them.
Canned Behavior versus Contextual Behavior versus Innovative Behavior
We've had the ability to program robots with "Canned Behavior" since the early 1950s. And that has progressed to the point where canned behavior can be quite impressive to those who don't fully understand how it works.
However, we have already gone far beyond canned behavior. Today's AI is already adopting "Contextual Behavior", where the robot behaves autonomously based on new information in the environment. And it has even gone far beyond this with "Innovative Behavior", where the robot comes up with novel ways of dealing with new situations, oftentimes surprising the very people who originally programmed it.
In fact, today's AI quite often shocks its developers by coming up with solutions to problems that the developers wouldn't even have thought of themselves.
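For what it's worth, here's "Contextual Behavior" in miniature, as I understand the term: the response is computed from whatever the environment presents at runtime rather than replayed from a script. (The rules here are still hand-written, so this only gestures at the idea; a real contextual AI would learn its rules too. The distance readings are simulated; on a real robot they would come from an ultrasonic or IR sensor.)

```cpp
// "Contextual Behavior" in miniature: the action is computed from the
// environment at runtime, not replayed from a canned script.
#include <iostream>

// Decide a steering action from a (simulated) obstacle distance in cm.
const char* decide(double distanceCm) {
    if (distanceCm < 10.0) return "stop and back up";
    if (distanceCm < 30.0) return "turn away from the obstacle";
    return "keep going forward";
}

int main() {
    for (double d : {100.0, 25.0, 5.0}) {
        std::cout << "obstacle at " << d << " cm -> " << decide(d) << "\n";
    }
}
```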
Artificial Intelligence has become a Real Thing
Wanting to build an intelligent robot today is not even novel anymore. In fact, anything I build is pretty much guaranteed to be "behind the curve".
I do believe, though, that my approach with Semantic AI, based on how human babies learn, is fairly novel. Everyone seems to just want to hook their robot up to the Internet and have it become an instant "Super Genius". From my perspective that's really nothing more than elaborate "Canned Behavior", where the Internet serves as the "Can".
It seems to me that if we want to build a "Human-Like" intelligence we should have the robot learn in the same way that humans learn.
So that's my thing. 😎
I'm not claiming that it will be "conscious", or "sentient", or "self-aware", etc.
I'll let the philosophers bang their heads against the wall on those questions.
All I'm interested in is the behaviors. If I see the behaviors I'm shooting for, I'll be happy.
That's all I have to say about that.
DroneBot Workshop Robotics Engineer
James
I'm not claiming that it will be "conscious", or "sentient", or "self-aware", etc.
I'll let the philosophers bang their heads against the wall on those questions.
All I'm interested in is the behaviors. If I see the behaviors I'm shooting for, I'll be happy.
That's all I have to say about that.
Ok. However, it was not a philosophical question.
But how will you know when it knows what it is saying? The word "know" implies "conscious knowing", as opposed to being able to perform an action without any "conscious knowing" of what you are doing. In other words, when will you know it has become "self aware"? And what is the functional determinant of being "self aware"?
Ok. However, it was not a philosophical question.
I don't know about you, but all the questions in your quote above are basically philosophical questions that we can't even answer about ourselves.
You may not think of them as philosophical questions, but unless you can answer them with concrete technical explanations, how could they be anything else?
Everything you've asked about AI can be asked about any living human. As I pointed out, we can't even disprove Solipsism. How do we know that other humans are conscious or self-aware?
And when it comes to other humans knowing what they are saying, the question becomes even more dramatic, as oftentimes people will hold beliefs, ideas, and ideals that appear to us to be totally absurd and unwarranted.
As a prime example, I was talking with my cousin just the other day about the coronavirus. He stated with extreme confidence that he is immune to the virus because he has an invincible immune system. Okay, so it's not too abnormal for some people to believe unrealistic things about themselves. But then I told him that even if he's right about his immune system, he could still contract the virus and be a carrier without even realizing it. He immediately argued that he cannot even be a carrier of the virus, because of his invincible immune system. I should point out here that he was not joking; he truly believes this.
So what do we do now? We have a human who thinks he "knows" something that is clearly not true. I'm not a doctor, but I know enough about biology, viruses, and immune systems to know that even the most invincible human immune system will not prevent someone from potentially carrying and spreading a virus.
So now what? What if we had a robot that actually "knew better"?
What if we had a robot that actually understands biology, viruses, and immune systems, and from that information can logically conclude that even the best immune system in the world will not prevent a biological human from potentially carrying and spreading a virus?
Then what? Suddenly we have a robot that actually has a better "understanding" of the world than a mere human.
To me, getting the answer right is a far more compelling sign of "understanding" than getting the answer clearly wrong.
In fact, doesn't this bring up the question that perhaps some forms of AI may have already surpassed some forms of human intelligence and understanding?
You ask,...
But how will you know when it knows what it is saying?
When its answers make sense and are clearly correct. 😊
DroneBot Workshop Robotics Engineer
James
Ok. However, it was not a philosophical question.
Okay, here are my answers to your questions, without considering them philosophically:
But how will you know when it knows what it is saying?
When its answers make sense and are clearly correct.
The word "know" implies "conscious knowing" as opposed to being able to perform an action without any "conscious knowing" of what you are doing.
What is "conscious knowing" outside of a philosophical context?
If I view this as a purely technical question, then consciousness is just organized electrical activity in a brain. A Raspberry Pi exhibits organized electrical activity when processing its program logic. Therefore, by that technical definition, a Raspberry Pi is conscious.
In other words, when will you know it has become "self aware"?
When it can distinguish between itself and the external world.
And what is the functional determinant of being "self aware"?
When it can tell when its own sensors have been activated.
~~~~~~
If we're not going to get philosophical about it, the technical answers are easy. Technically, the Raspberry Pi becomes "conscious" the moment it is turned on and has booted up. At that point it exhibits organized electrical patterns, which are caused by the logical processes of its own operating system. It becomes "self aware" the moment it receives any detectable input to which it gives an intelligent response. Even moving the mouse pointer across its screen in response to input from the mouse is an exhibition of "self-awareness" (i.e. it is aware that its own mouse has been moved). If we aren't getting philosophical and are speaking purely technically, it will already be conscious and self-aware before I even start to program it.
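Here's a bare-bones sketch of that functional determinant, purely as an illustration. The sensor events are faked; on real hardware they would come from GPIO interrupts and motor feedback. The point is just a program that monitors its own sensors and distinguishes readings caused by its own actions from readings caused by the external world:

```cpp
// A bare-bones illustration of the "functional determinant" of being
// "self aware" as defined above: the system reports, in the first
// person, whether a sensor reading was caused by its own action or by
// the external world. Events are simulated for the sake of the sketch.
#include <iostream>
#include <string>
#include <vector>

struct SensorEvent {
    std::string sensor;
    bool causedBySelf;  // did the system's own action produce this reading?
};

int main() {
    std::vector<SensorEvent> events = {
        {"touch", false},         // something external touched it
        {"motor_encoder", true},  // feedback from its own movement
    };

    for (const auto& e : events) {
        if (e.causedBySelf) {
            std::cout << "My own action activated my " << e.sensor
                      << " sensor.\n";
        } else {
            std::cout << "Something in the external world activated my "
                      << e.sensor << " sensor.\n";
        }
    }
}
```

Trivial, of course, but by the purely technical definitions above, distinguishing "me" from "not me" is exactly what's being asked for.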
DroneBot Workshop Robotics Engineer
James