
What is Artificial Intelligence?

73 Posts
9 Users
7 Likes
12.8 K Views
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

Without input a human brain would never develop nor would it have anything to think about.

That's a deeply philosophical question, and quite an interesting one as well.

Consider having an actual physical human brain that is being kept alive in a tank of water.  It's a fully functional brain but all input sensors have been cut off so it cannot obtain any sensory input from the outside world.   Would that brain still be conscious and capable of having any thoughts at all?

Obviously we cannot know the answer to this question; we can only assume that it would or wouldn't be conscious.

If we assume that it's conscious, then we must also assume that it's capable of asking the question "What am I?" and from there it would surely begin to philosophize on that question.  Surely it would at least come to the famous conclusion "I think therefore I am".  What other kinds of ideas it might come up with is anyone's guess.  But to jump to the conclusion that it would not develop in any way nor think about anything seems like an unjustified conclusion.  Would it still experience emotions?  Could it become frustrated?  Angry? Peaceful? Could it experience the bliss of peace and the anguish of frustration?  It would seem to me that, if nothing else, it could begin to focus on all the things it could experience even without any physical sensory input.

On the other hand, if we assume that without any sensory input a brain would then necessarily become unconscious, (i.e. unable to think or experience anything) this would then lead to the conclusion that sensory input is a necessary, and thus a critical part, of consciousness.

I would imagine that philosophers could spend a lifetime considering the possible answers and consequences to all these questions.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1563
 

Disconnecting input to a brain after it has developed is different from a developing brain that never had input.

A brain is a network of connected computing units so you can ask the same question about any such network.

The brain or computer can be viewed as a system that is, at bottom, a list of all its variables and their values.

To understand the limits and certain properties of systems, start with simple systems you can understand, and derive from them principles which do not depend on size.

In the case of the brain, its state at any given time would be a list of all the neurons and their values at that time.

In the case of a computer, the state of the system would be a list of all the memory units, along with their values, at any given time. The computer's state changes with each clock cycle according to how the units are connected and the values of what they are connected to.
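That state-and-clock description can be sketched in a few lines of code (a hedged illustration of my own, not anything from the thread): the whole system is a dictionary of named units, and one clock cycle computes every unit's next value from the current values of the units it is connected to.

```python
# A minimal sketch of the "system = list of units and their values" view.
# State is a dict of named units; each clock tick computes every unit's
# next value from the current values of the units it is wired to.

def tick(state, wiring):
    """Advance the system one clock cycle.

    wiring maps each unit name to a function of the *current* state
    that yields its next value (all units update simultaneously).
    """
    return {unit: rule(state) for unit, rule in wiring.items()}

# Example: a 3-bit shift register that recirculates its last bit.
wiring = {
    "q0": lambda s: s["q2"],  # q0 takes q2's old value
    "q1": lambda s: s["q0"],
    "q2": lambda s: s["q1"],
}

state = {"q0": 1, "q1": 0, "q2": 0}
history = [state]
for _ in range(3):
    state = tick(state, wiring)
    history.append(state)
```

The single 1 circulates through the three units and returns to its starting position after three ticks, which is exactly the "state changes each cycle according to the connections" idea above.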

An example of a pattern recognition network is the full adder. A list of all the NOR gates and their values at any given cycle would be the state of the system at that cycle. It can recognise two patterns, S and C. XYZ is the pattern input to recognise.

XYZ S C
000 0 0
001 1 0
010 1 0
011 0 1
100 1 0
101 0 1
110 0 1
111 1 1
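For anyone who wants to play with the table, here is a small sketch of the full adder in code. (It uses the usual XOR/AND/OR logic rather than an explicit NOR-gate netlist; NOR gates alone can realise the same functions, since NOR is functionally complete.)

```python
# Hedged sketch: the full-adder truth table above, computed from gate logic.

def full_adder(x, y, z):
    """Return (S, C): the sum and carry-out bits for one-bit inputs x, y, z."""
    s = x ^ y ^ z                        # sum bit: XOR of all three inputs
    c = (x & y) | (x & z) | (y & z)      # carry bit: majority of the inputs
    return s, c

# Reproduce the table: every row XYZ -> (S, C)
table = {(x, y, z): full_adder(x, y, z)
         for x in (0, 1) for y in (0, 1) for z in (0, 1)}
```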

 


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

I would avoid philosophising about these questions or bring "consciousness" into the discussion.

If you want to avoid the philosophy then why make philosophical guesses about what a brain that has no sensory input might think?

Posted by: @casey

A brain is a network of connected computing units so you can ask the same question about any such network.

Agreed.   And the same question would apply here.  Would a system of connected computing units that has no sensory input still be able to compute?   I see no reason why it wouldn't be able to.

Posted by: @casey

The brain or computer can be viewed as a system which is a list of all their variables and their values.

I personally reject this view because I see even a simple computer program as being more than this.  A program is not just a list of its variables and values.  A program is a complex algorithm whose behavior and output depend on a much larger algorithmic context.  So I wouldn't even reduce a computer program to this level of simplicity.  It already has "emergent properties" that are due to its larger algorithmic structure.

Posted by: @casey

To understand the limits and certain properties of systems start with simple systems you can understand and derive from them principles which do not depend on size.

Neither size nor complexity is the issue; rather, the nature of the overall structure, and the properties that emerge from this larger structure, are the important features.

Posted by: @casey

In the case of the brain its state at any given time would be a list all the neurons and their values at that time.

The problem I see with this is that any frozen state of a brain at a given time would say nothing at all about what the brain is actually doing with that state.  I do agree, however, that the frozen state of an ANN will indeed tell you everything there is to know about the ANN.  But these are two entirely different things.  Unless, of course, you have already concluded that a brain is nothing more than a complex ANN.  Like Sabine, I don't buy into that line of thinking.

Posted by: @casey

An example of a pattern recognition network is the full adder.

I totally agree that this is how ANNs work.

Where we appear to have some departure in our views is on whether we have accepted that a brain is nothing more than a sophisticated ANN.  I'm not prepared to accept this idea.  In fact, if this were true then humans would be nothing more than predetermined I/O devices, just like ANNs.

Could this be the truth of human reality?  Perhaps so.  But I personally don't see where we have anywhere near enough evidence to jump to that conclusion.    And perhaps even more significantly I have ideas for building better machine brains that go far beyond what ANNs can do.   If I can build a machine brain that can go beyond what ANNs can do, then surely our brains can go beyond that as well.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1563
 

I deleted the first paragraph regarding philosophy because I saw it as not helpful.

My assumption is that, in the context of this forum and robots, we are talking about algorithms that make robots behave intelligently?

*Would a system of connected computing units that has no sensory input still be able to compute? I see no reason why it wouldn't be able to.

Compute what? You can't add two numbers without two numbers as input. Yes, it can still "think" about its collected data, but it needs to collect that data via an input, and it also needs to be able to test its conclusions by actions and the resulting inputs.

*A program is not just a list of its variables and values. A program is a complex algorithm whose behavior and output depend on a much larger algorithm context. So I wouldn't even reduce a computer program to this level of simplicity.

Physically it is a collection of computing units that can in principle be fully described at any point in time by listing those units and their values. In practice there are constraints which make it possible to use higher level descriptions but it all bottoms out at that level just as a computer program, no matter how complex, bottoms out as machine code. In practice we write our programs at a much higher level but there is always a danger of using word magic to imagine there is more to it.

*It already has "emergent properties" that are due to its larger algorithm structure.

Emergent properties are the result of not fully understanding the system. That is a limitation of the observer not a fact about the system.

* Unless, of course, you have already concluded that a brain is nothing more than a complex ANN.

All I observe is a network of neurons. Not sure what else you are seeing there.

*... if this were true then humans would be nothing more than predetermined I/O devices, just like ANNs.

They are statistically determinate machines with input and output, unless you think there is some little spook between the ears.  The "I" (my subjective self) is something my brain does; it is not something it has.

*And perhaps even more significantly I have ideas for building better machine brains that go far beyond what ANNs can do.

Something that is not made up of connected physical units?

 


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

*A program is not just a list of its variables and values. A program is a complex algorithm whose behavior and output depend on a much larger algorithm context. So I wouldn't even reduce a computer program to this level of simplicity.

Physically it is a collection of computing units that can in principle be fully described at any point in time by listing those units and their values. In practice there are constraints which make it possible to use higher level descriptions but it all bottoms out at that level just as a computer program, no matter how complex, bottoms out as machine code. In practice we write our programs at a much higher level but there is always a danger of using word magic to imagine there is more to it.

Why introduce a word like "magic"?  I'm not the one who is introducing these words.  You are.

I would say that clearly a computer program is far more than just a CPU with an instruction set.  Knowing the architecture of a CPU and its instructions tells you nothing about what programs might be running on it.

In fact, I don't think anyone would suggest that a CPU with an instruction set represents A.I.  You don't get A.I. until you write software that fulfills whatever definition you have chosen for A.I.  In fact, that really gets to the heart of the question of this thread:  "What is Artificial Intelligence?"

You are more than welcome to offer up your favorite definition.  I started this thread by sharing a video from Sabine simply to show that many modern day scientists are starting to equate A.I. with ANNs.  I personally don't agree with that equivalency.   But this appears to be how many people are viewing A.I. today.

So does a CPU already constitute A.I. simply because it contains an ALU and can do arithmetic?  Or do you require that a more sophisticated program be running on the CPU before you would classify it as an A.I. program?  These are questions for which there apparently aren't any right or wrong answers.  People are still offering personal opinions on them, and that's ok.  The idea is not to try to prove each other wrong, but rather to share ideas so that we can all have better insight into these concepts.

Posted by: @casey

*It already has "emergent properties" that are due to its larger algorithm structure.

Emergent properties are the result of not fully understanding the system. That is a limitation of the observer not a fact about the system.

I disagree.  Emergent properties exist whether we understand them or not.   Even if we can identify what has given rise to their emergence, they would still then be emergent properties.

Posted by: @casey

* Unless, of course, you have already concluded that a brain is nothing more than a complex ANN.

All I observe is a network of neurons. Not sure what else you are seeing there.

My analogy here would be the same as the CPU analogy.

This would be like you saying to me, "All I see is a CPU and an instruction set.  Not sure what else you are seeing there".

My reply is, "I'm seeing the actual computer program."

Posted by: @casey

*... if this were true then humans would be nothing more than predetermined I/O devices, just like ANNs.

They are statistically determinate machines with input and output unless you think there is some little spook between the ears.  I (my subjective self) is something my brain does it is not something it has

That's fine.  I wouldn't even argue with this view.  In fact, this actually fits in perfectly with my views on Semantic A.I.  The "I" (the subjective self) is definitely something that the system will create.  What I would suggest is that just because the system creates this subjective self doesn't mean that it doesn't exist.  To the contrary, it exists precisely because the system created it.

If you think that I'm attempting to argue for some sort of mystical spiritual or religious soul, you're barking up the wrong tree.   I'm talking solely about what can be created in a robotics lab.

It sounds to me like what you are attempting to argue is that humans are simply biological robots.  I'm not going to argue against that.  To the contrary, I'm more than happy to assume that premise to be true.

Posted by: @casey

*And perhaps even more significantly I have ideas for building better machine brains that go far beyond what ANNs can do.

Something that is not made up of connected physical units?

Well, of course it's going to be made up of connected physical units.  What do you think I'm proposing?  To create some sort of invisible non-physical robot? 

My personal interest is in creating an intelligent robot that is ultimately based on semantics (i.e. on an understanding of the meanings of words and speech).  And of course this would be physically implemented using CPUs, ANNs, complex computer programs and algorithms, and who knows what else might get tossed into the mix before it's all said and done.

My main idea here is that the intelligence of this system will ultimately arise from the semantic structure, and not so much the physical structure of the units used to make the semantic system.   The question then will be, "When should we consider that this system crosses the line from being artificially intelligent, to becoming truly intelligent?"

In other words, what criteria should we use to decide when the Kitty Hawk actually "flies"?

You've already agreed that when an airplane flies it's "really flying".  I'm just trying to find a criterion for when a computer system should be considered to have really become intelligent.  But even that is a subject better left for another thread.

This thread is supposed to be on the question, "What is Artificial Intelligence?"

I think we need to get people's definitions on that before it would even make sense to start talking about how any "Real Intelligence" might differ from A.I.

So anyone care to offer any answers for the question, "What is Artificial Intelligence?"

Here are a couple definitions I found online:

Artificial Intelligence -  the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

The only problem I have with this definition is that it basically ends up defining any computer system or machine intelligence as being "Artificial" no matter how closely that intelligence resembles human intelligence.

Here's another definition that seems to allow for continual moving of the goalpost:

Artificial intelligence (AI) is the ability of a computer program or a machine to think and learn. It is also a field of study which tries to make computers "smart". ... As machines become increasingly capable, mental facilities once thought to require intelligence are removed from the definition.

I'm not sure what they remove from their definition here.  Are they saying that if a machine can do it then it should be removed from the definition of requiring intelligence?

I'm not personally happy with any of these definitions, if only because they don't appear to have any practical value or use.  So this is why I started this thread.  I'd like to hear what others have to say about this.

Also, as I attempted to show in the OP with Sabine's video, a lot of people are considering ANNs to be A.I., and this is something that I definitely reject.  As far as I'm concerned, ANNs aren't even artificially intelligent.  In short, they aren't intelligent at all.  Period.  That's my personal view on that.

DroneBot Workshop Robotics Engineer
James


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

* Unless, of course, you have already concluded that a brain is nothing more than a complex ANN.

All I observe is a network of neurons. Not sure what else you are seeing there.

Hi Casey,

I would like to ask if this is your position on ANNs.

Please know that it is not my intention to argue against your views.  I may or may not agree with them but that's irrelevant.   The main idea is to share the views that we do hold.  Not to try to convince others that they should agree with our views.

Based on the above quote it appears that you see the brain basically as nothing more than biological ANNs.  I'm just curious if this is indeed your view on things.

The reason I ask is that if this is indeed your view, then doesn't it follow that you see ANNs as the correct solution to creating a human-like intelligence, and that there is really nothing more to do beyond creating more complex ANNs at this point?  And that if we can create a complex enough system of ANNs, we should be able to duplicate a human brain, since that is all a brain amounts to?

So I'm just asking if this is a correct understanding of your view?

Thanks

DroneBot Workshop Robotics Engineer
James


   
Duce robot
(@duce-robot)
Prominent Member
Joined: 3 years ago
Posts: 679
 

Wow! I don't understand any of this, but I think an actual learning, thinking, feeling, emotional machine is far off in the distance, because along with the other things, independence is the main goal. Just looking at this conversation solidifies my view that the creation of a C-3PO or a Data is not a completely reachable goal. All the best ones I have seen aren't smart enough to be stupid, but it's OK, it will still get to a point, a very high level. But face the facts guys, the mind of god is unsearchable, and I'm not really religious. That's why I jokingly said just have kids, it's much faster lol. But still I'm cheering you on, because... if you guys come up with something cool... I can glom off of it lol.


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1563
 

RoboPi: Why introduce a word like "magic"? I'm not the one who is introducing these words. You are.

Human brains and artificial intelligence have interested me since my teen years. I have read a lot of books on both subjects, although it has been over a decade since I last debated these issues on online forums. It was in that context that I mentioned the word magic that some people use, without realising it, to explain what their program is doing. However, this is not the forum to discuss the philosophy of AI.

My conclusion is that "intelligence" is the classification of a behavior that shows itself to have a goal. It can be as complex as a human doing math or dealing with a social situation or as simple as an air conditioner that keeps the house at some fixed temperature.

As for your question about my view on human intelligence requiring a neural network, I would just repeat the definition of intelligent behavior as a classification of a behavior, not of how it is done. You can wire up your robot with hardware "neurons" to seek light, or you can program an Arduino controller to seek light; the behavior is the same. It has a goal and moves to satisfy that goal. Note the use of the magic word "satisfy" 🙂
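To make the "behavior, not mechanism" point concrete, here is a hypothetical sketch of such a light-seeking decision rule in plain code; the sensor readings and deadband threshold are made up for illustration. Whether this rule is wired from hardware "neurons" or programmed on a microcontroller, the observable behavior (turning toward the brighter side) is the same, which is all the behavioral classification looks at.

```python
# Sketch of a trivial goal-seeking behavior: steer toward the brighter
# of two (hypothetical) light sensors. The deadband keeps the robot
# from twitching when the two readings are nearly equal.

def steer(left_light, right_light, deadband=5):
    """Return 'left', 'right', or 'forward' from two sensor readings."""
    diff = left_light - right_light
    if diff > deadband:
        return "left"       # more light on the left: turn toward it
    if diff < -deadband:
        return "right"      # more light on the right: turn toward it
    return "forward"        # roughly equal: keep going
```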

RoboPi: Even if we can identify what has given rise to their emergence, they would still then be emergent properties.

It comes down to the behavior of the whole not being explainable in terms of the behavior of its parts and some people imagine something extra is added. The behavior of the whole "emerges" from how the parts are connected and thus is explainable in terms of those connections. Change their connections and a new behavior "emerges".

 


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

My conclusion is that "intelligence" is the classification of a behavior that shows itself to have a goal. It can be as complex as a human doing math or dealing with a social situation or as simple as an air conditioner that keeps the house at some fixed temperature.

Based on your view here, there would be no point in referring to any intelligence as "artificial".  It's either intelligent behavior or it's not.  And an air conditioner then, according to your statement above, exhibits "intelligent behavior".

Why call any intelligent behavior  "artificial intelligence" then?  What's artificial about it?  And at what point does intelligent behavior become "not artificial"?

DroneBot Workshop Robotics Engineer
James


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @duce-robot

Wow! I don't  understand any of this but I think an actual learning thinking feeling emotional machine is a far off in the distance because along with the other things independence is what the main goal is

Yes, I would agree that independence or autonomy is definitely one criterion that needs to be on any list of success criteria.  But autonomy alone is not sufficient.  There already exist robots that can do quite well when set off on their own.  Obviously not in comparison with human behavior, but still quite independent of any need for remote control or external intervention.  In fact, NASA is planning on sending some quadcopter drones to various moons in our solar system, and those drones will necessarily need to be mostly autonomous because the long signal delay makes remote control impossible.

Posted by: @duce-robot

actual learning thinking feeling emotional machine

We are rapidly approaching creating robots that have the ability to learn.  In fact, we've already had that ability for quite some time.  All we are doing on this issue is improving upon it in leaps and bounds.

Thinking for itself, in terms of being able to make logical decisions, is already done as well.  In fact, we've been programming computers to do that for quite a few decades already.  The question there is not so much being able to think, but rather how much context a robot mind can deal with.

As far as having "feelings and emotions", we can certainly program the robot to behave as though it has such things.  But whether a robot would ever actually be able to feel anything is a question that philosophers will be left to debate.

From my understanding of what @casey has said, there shouldn't be any difference between a robot brain and a human brain.  Therefore if we can have feelings, then a robot  should be able to have feelings too.   That's clearly a topic that would be quite difficult to back up with any actual evidence.   We can't even disprove Solipsism so we're hardly in a position to be guessing what a robot may or may not feel.

Posted by: @duce-robot

just solidifies my view that the creation of a c3po or a data is not a completely reachable goal at. All the best ones I have seen aren't smart enough to be stupid

I believe that if we want to move in that direction we absolutely need to focus on Semantic A.I., especially if we want to get there anytime soon.  It's highly unlikely that we'll get there anytime soon via the current approach of designing ANNs using back propagation on training data.
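For readers who haven't seen the "back propagation on training data" recipe up close, here is a minimal sketch of my own: a single sigmoid neuron learning logical AND by gradient descent. (With one neuron there are no hidden layers, so backpropagation reduces to the classic delta rule; the learning rate and epoch count are arbitrary choices for illustration.)

```python
# Minimal sketch of training on data by gradient descent: one sigmoid
# neuron learning logical AND. Backpropagation proper chains this same
# gradient step through multiple layers.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: (inputs, target) pairs for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 1.0        # learning rate (arbitrary)

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Squared-error gradient, passed back through the sigmoid
        grad = (out - target) * out * (1 - out)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b    -= lr * grad

# After training, the rounded outputs reproduce the AND truth table
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
```

Whatever one thinks such a trained network "knows", all it has done is adjust two weights and a bias until its outputs match the training data.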

But, of course, I could be dead wrong.  We have to leave that possibility wide open too.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1563
 
Posted by: @robo-pi

Based on your view here then there would be no point in referring to any intelligence as "artificial".  It's either intelligent behavior or it's not. 

It is artificial in the sense that the entity doing the intelligent behavior is not biological but instead is man made.

If db1 seeks out a charger when its battery is low I would declare that intelligent behavior. 

Knowing how the Eliza program works, would its behavior be deemed intelligent?  It may appear so, but I think not, for it does not really show itself as seeking a goal.
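For context, an Eliza-style program is little more than a list of pattern-to-response rewrite rules. The rules below are a made-up miniature in that spirit, not ELIZA's actual script; once you see the mechanism, it is clear there is no goal being sought.

```python
# A made-up miniature of the Eliza idea: match the input against canned
# patterns and echo back a templated response. No model of meaning or
# goals anywhere, just string substitution.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def eliza(sentence):
    s = sentence.lower().strip(".!? ")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, s)
        if m:
            # Fill the template with whatever the wildcard captured
            return template.format(*m.groups())
    return "Please tell me more."   # fallback when nothing matches
```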


   
stven
(@stven)
Trusted Member
Joined: 3 years ago
Posts: 64
 

Still plowing through this interesting thread. At some point you asked for definitions of AI. I wanted to share this thought-provoking “Up and Atom” YouTube video that offers some historical context.

 


   
Duce robot
(@duce-robot)
Prominent Member
Joined: 3 years ago
Posts: 679
 

@stven I love her... for her brain, of course!


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

It is artificial in the sense that the entity doing the intelligent behavior is not biological but instead is man made.

This is a fair definition, and one that I imagine quite a few people might use.   Artificial Intelligence is then any intelligence that is man made versus having biological origins.

I personally do not like this definition.  I suggest that if we're going to distinguish between biological intelligence and man made intelligence, then why not just refer to it as man made intelligence and skip the use of the term artificial entirely.    As you said yourself, we don't call airplanes artificial flying machines.  But we do recognize that they are man made flying machines.

This does seem to have a potential problem though.   What are you going to call future man made intelligence if men start making intelligence using biological elements instead of electronic elements?  Would that still then be considered to be artificial intelligence?

Posted by: @casey

If db1 seeks out a charger when its battery is low I would declare that intelligent behavior. 

I would too.  But I wouldn't necessarily classify it as A.I.; that would all depend on how it was programmed to do this.  I might be more inclined to assign the intelligence to the person who programmed DB1 to seek out a charger when its battery gets low.

Clearly the term Artificial Intelligence is going to have about as many meanings as people we ask to offer their opinions on it.

Posted by: @casey

Knowing how the Eliza program works would its behavior be deemed intelligent?  It may appear so but I think not for it does not really show itself as seeking a goal.

So now you have a different definition entirely.  Here you seem to be defining A.I. as the ability to seek a goal.  But then DB1 seeking out a charger when its battery gets low comes into question again.  Who was it that sought that goal?  DB1?  Or the person who programmed DB1 to do it?

DroneBot Workshop Robotics Engineer
James


   
Duce robot
(@duce-robot)
Prominent Member
Joined: 3 years ago
Posts: 679
 

Well, I'm going to post Duce, the closest thing I can come up with to AI, on YouTube, and it's going to be a cool song. I'm setting it up now, I hope it takes. I had to dial the sound system way back, the sub was rattling the whole machine (which you are all welcome to, by the way). Gimme a few minutes and it will be up in 30-40 min or so.


   