
What is Machine Learning?

19 Posts
3 Users
0 Likes
6,454 Views
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
Topic starter  
Posted by: @casey

I don't have a fixed view about any approach but if I can't question a particular approach then no exchange of views or knowledge can take place.

It didn't sound to me like you were asking questions.   It sounded to me like you were attempting to make an argument that words are not the same thing as the nouns and verbs, etc, that they label.

If that was supposed to be a question then my answer is this:

Semantics is about the "meanings of words", not just about words themselves.   And this is the main idea behind my Semantic approach to A.I. that I have been talking about.  So it's not just based on words, it's based on the meanings of the words (i.e. having an understanding of what the words actually mean).   Creating a Semantic A.I. that can do this is the challenge.   And it's not easy.   If it was easy Semantic A.I. would most likely already be the front runner in A.I. today.   But just because it's not easy is no reason to reject the idea as being impossible.   After all, humans associate words with their meanings all the time.   If they couldn't do that then words themselves would be meaningless, and thus useless.   So we already know it's possible.   It's just a matter of figuring out how to do it using a computer, and/or other techniques such as ANNs or whatever other tools might be available.

Posted by: @casey

I now remember reading about semantic networks a long time ago in an AI book.  After you first wrote about them I actually spent some time reading about them in case I missed something.

No one has yet been able to create a successful A.I. based on semantics (at least not that I am aware of). Although it is quite possible that Hanson Robotics is using a semantic approach with their robots; I don't know the basis of their A.I. architecture.

The article you referenced is only one approach. In fact, in that particular approach all they are doing is trying to make connections between words, rather than attempting to build a system that understands what the words mean. While semantic nets or trees can be useful, they cannot be the entire picture, since they don't include the actual meanings of the words. In fact, your argument that words are not the same as the concepts they represent would actually apply to that particular method. I've seen that approach to building "Semantic A.I." as well, and I too saw the limitations of the method. It's an attempt to get at the meanings of words by associating words with each other. This method has actually panned out for those who are in the business of data analysis: it is possible to discover semantic connections in large databases this way. But in the end, a human is still required to make sense of those connections, so it's not really an intelligent system on its own. It's just a way of organizing words into patterns that may have some conceptual commonality.

Here's an example of these kinds of connections from the link you posted:

Notice that in this tree all that is being done is an attempt to make connections between words that can be associated with an emu. While this type of word association can indeed be useful, especially for humans who are looking for semantic connections, there is nothing in this system that actually associates these words with the concepts they represent.
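To make the point concrete, here's a toy sketch (my own hypothetical example, not taken from the linked article) of such a word-association net. The nodes are bare strings: the program can traverse the links, but nothing in it attaches any meaning to "fly" or "wings".

```python
# A minimal word-association net in the spirit of the emu tree.
# Every node is just a string label; the links carry no semantics.
semantic_net = {
    "emu":   [("is_a", "bird"), ("has", "wings"), ("cannot", "fly")],
    "bird":  [("is_a", "animal"), ("has", "wings"), ("can", "fly")],
    "wings": [("part_of", "bird")],
}

def related(word):
    """Return the words linked to `word` -- mere associations, no meanings."""
    return [target for _, target in semantic_net.get(word, [])]

print(related("emu"))  # ['bird', 'wings', 'fly']
```

The system can tell you that "emu" links to "fly", but it has no representation at all of what flying is, which is exactly the limitation being described.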

For example, the word "fly" is associated with an action a bird can take, but this system doesn't say anything about what "flying" actually means. The same goes for the word "wings": wings are simply something that a bird has, but this system doesn't say anything about what wings are, or what they might be used for.

As I have noted above, these kinds of semantic nets can be useful for humans who need to make connections between data in large databases. This is because the humans already know what all these words mean. But if you want to use semantics as a basis for A.I. in a machine, you'll need more than just these relationships between words. You also need to include the actual semantics (i.e. meanings) of these words.

Associating the word fly with the word bird doesn't tell us anything if we don't already know what a bird is and what flying is.

So there's far more to it than just creating trees of related words. 

Having said this, it should also be obvious that in any semantic system trees of related words will naturally arise, and they will indeed be very useful. For example, the word "fly" and the concept it represents will naturally be associated with the word "wings" and the concept it represents. In fact, in this example it's worth noting that it wouldn't even make sense to try to speak about "wings" in a meaningful way until the concept of "flying" has already been learned. It would be extremely difficult to explain to someone what wings are if they don't already have an understanding of what it means to fly.

This is why I am very interested in the link I've provided about how humans learn semantics.

It supposedly takes a human about 2 years to get to the following point: Knows 50 to 100 words. Uses short, two- or three-word sentences and personal pronouns ("I fall down!" "Me go school?").

We are talking about a very small system here.  Only 50 to 100 words.   Yet we are already capable of building 3-word phrases that convey meaningful ideas.   These aren't just related words, but instead these are words that the intelligent mind understands in terms of what they mean conceptually.
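As a toy illustration of that distinction (entirely hypothetical; the `lexicon` and its "grounding" entries are my own stand-ins, not a real grounding mechanism), such a system would tie each word to a concept and refuse to use any word that isn't grounded:

```python
# Each word points at a grounded concept (here just a dict of properties
# standing in for real sensorimotor grounding), not merely at other words.
lexicon = {
    "me":     {"role": "agent",  "grounding": "the robot itself"},
    "go":     {"role": "action", "grounding": "self-propelled movement"},
    "school": {"role": "place",  "grounding": "a category of location"},
}

def utter(*words):
    """Form a phrase only if every word has a known, grounded meaning."""
    if all(w in lexicon for w in words):
        return " ".join(words)
    raise ValueError("cannot use a word whose meaning is not grounded")

print(utter("me", "go", "school"))  # me go school
```

The point of the sketch is only the gate in `utter`: the phrase is built from concepts the system has definitions for, not from word-to-word associations.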

In fact, I question the accuracy of this information. I can't imagine a 2-year-old understanding a concept like "Me go school" while having access to a vocabulary of only 50 to 100 words. My guess is that the kid is far more likely to have some understanding of 500 to 1000 words, if not more, at that point. I'm not sure where they came up with the 50 to 100 words; obviously they would need to estimate that based on various assumptions and observations.

In any case, even a system that only has 500 to 1000 words defined and can create meaningful 3-word sentences appears to me to be somewhat manageable. Starting at the beginning and working forward toward that goal seems to me to be a very promising approach.

I would think that if you were successful in creating even a small system of a mere 1000 words, where meaningful 3-word sentences were being formed by an A.I. that demonstrated it "knows" what it's talking about based on the meanings of those words, that would be quite the achievement.

And if you got that far, you should be in a position to have a good idea of how the system works.  Knowing this you should be able to create a system that can grow on its own using those principles.   So at that point the robot should indeed be able to "Me go school" and learn even more words and concepts.

If I can get a robot to a point where I can actually start teaching it like we teach a human child that would be quite awesome I think.

DroneBot Workshop Robotics Engineer
James


   
Spyder
(@spyder)
Member
Joined: 5 years ago
Posts: 846
 
Posted by: @casey

Learning is a change in behavior over time for the better (as judged by some criteria).

As personal observation using empirical (however anecdotal) data 

The latest version of my ROS has applied a "fix" to the "learning mode" of my robot

I can now put the robot in a position on the floor with something in front of it, and take a picture of the thing being in front of it, and save the picture to a folder labeled "blocked"

Then I can remove the thing in front of it, giving it free space to move in a forward direction, and take a picture of the space in front of it being free to move in a forward direction, then save the picture to a folder labeled "free"

Then I can set the thing on a table near the edge, and take a picture, and put it in a folder labeled "blocked"

(Although, once I transfer this "brain" into my actual robot, I can't imagine a condition where it will be on a table, I am doing this anyway... Just in case. It might go outside and try to drive off the curb of the sidewalk, who knows. I'm trying to be thorough)

Once I've taken about 200 pictures for each category, I'm supposed to move that "dataset" (that's what they call it) to another folder that will do some crunching (which will apparently take a few hours, according to the documentation), then I take the output from that crunchiness and import it back into the robot, at which point it will, theoretically, not run into things, or try to drive off a table that it should never be on in the first place
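The post doesn't name the actual tooling involved, but the labeling step can be sketched roughly like this (the folder layout and `save_example` are my own stand-ins): each captured frame is filed under its label, so the later "crunching" step can read the two classes straight from the folder names.

```python
# Sketch of folder-based dataset collection: one subfolder per label.
from pathlib import Path

DATASET = Path("dataset")

def save_example(image_bytes: bytes, label: str) -> Path:
    """File one captured frame under dataset/<label>/ with a sequential name."""
    folder = DATASET / label                      # e.g. dataset/blocked, dataset/free
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{len(list(folder.glob('*.jpg'))):04d}.jpg"
    path.write_bytes(image_bytes)
    return path

def class_counts() -> dict:
    """How many examples each class has -- the post aims for ~200 per class."""
    return {d.name: len(list(d.glob("*.jpg")))
            for d in DATASET.iterdir() if d.is_dir()}
```

Roughly balanced counts per class matter here, since a classifier trained on a lopsided dataset tends to just predict the majority label.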

I'm "training" the robot. It's "learning" not to drive off a cliff or run into things (I'm going to let it run over small things tho)

The robot will "know" certain conditions where it's free to move, or blocked from movement...

This would be exhibiting a behavior that shows a positive improvement

The robot is situationally aware of its immediate surroundings, but only as far as "move" or "not move" is concerned

This does not mean, however, that the robot knows what will happen to it if it does decide to drive off a cliff

Yes, it has learned NOT TO drive off a cliff, but it doesn't have an understanding of WHY it shouldn't drive off a cliff other than its programming has told it not to

My robot can LEARN, but it can't UNDERSTAND

Am I making any sense ?


Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
Topic starter  
Posted by: @spyder

My robot can LEARN, but it can't UNDERSTAND

Am I making any sense ?

Makes perfect sense to me. This is why I'm creating (for my own sanity) a distinction between A.I. (an ability to learn without understanding) versus R.I. (an ability to learn with understanding). Of course these are just definitions; whether we can build a machine-based R.I. is another question entirely. 🙂

But yeah, I agree there's a huge difference between learning how to do something versus understanding what it's all about.

And this can be scary in terms of advanced A.I.  If we build robots that can do things but don't understand what they are doing that can indeed be quite dangerous.

So my goal is to try to build R.I., not A.I. 🙂

Whether I could ever achieve this goal is a whole other question. But the goal can be well-defined, I think.

It basically comes down to whether the machine understands what it's doing or doesn't.  Pretty simple distinction I think.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 
Posted by: @spyder

My robot can LEARN, but it can't UNDERSTAND

Am I making any sense ?

Yes I understand 🙂

The machine has no model of the world that it can use to generate such an understanding from.

The weakness with an ANN is that it cannot explain why it categorises images as "blocked" or "clear". It cannot even list the features it is using to determine this. Researchers are working on ways to remedy this.

Something I might try on my robot base is using sonar to avoid collisions. I will think of an algorithm to achieve that task and program it into the computer. The computer will input the sonar distance data and process it according to a set of rules given by me.
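A minimal sketch of such a hard-coded rule (the 20 cm threshold and the command names are arbitrary choices for illustration, not anything specific from the post):

```python
# Hard-coded sonar rule: distance reading in, drive command out.
# No learning involved -- the behaviour is fixed by the programmer.
def avoid(distance_cm: float) -> str:
    """Map one sonar reading to a drive command."""
    if distance_cm < 20:   # obstacle close: turn away instead of advancing
        return "turn"
    return "forward"

print(avoid(150.0))  # forward
print(avoid(8.0))    # turn
```

As the post notes below, this costs almost nothing at run time; the trade-off is that the threshold and the response are frozen at programming time.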

Another method might be to allow it to learn. First you need a signal to punish or reward its actions for a given input; maybe a bumper switch would do the job. The machine would associate an input/output action with a collision. I am guessing this will be a low sonar distance reading just before a collision. It would then learn not to make certain moves when it gets a low distance reading. Instead it will trial other moves, perhaps based on a set of sonar readings in different directions, eventually maybe learning to turn in the direction which has the highest distance reading?
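A toy sketch of that punish-and-retry idea (the bucket size, move names, and penalty table are all my own illustrative choices): the robot penalises whatever (distance-bucket, move) pair preceded a bumper hit, and thereafter prefers unpunished moves, breaking ties toward the direction with the most clearance.

```python
# Bumper-switch punishment learning over (distance bucket, move) pairs.
penalties = {}  # (bucket, move) -> number of collisions that followed it

def bucket(distance_cm: float) -> int:
    """Coarsen a sonar reading into 10 cm buckets."""
    return int(distance_cm // 10)

def choose(readings: dict) -> str:
    """Pick the least-punished move; break ties by greatest clearance."""
    return max(readings, key=lambda m: (-penalties.get((bucket(readings[m]), m), 0),
                                        readings[m]))

def punish(readings: dict, move: str):
    """The bumper fired after `move`: record the bad association."""
    key = (bucket(readings[move]), move)
    penalties[key] = penalties.get(key, 0) + 1
```

For example, after one collision while moving forward at a sub-10 cm reading, `choose` will avoid "forward" in that situation and switch to the direction with the highest reading, which is the behaviour the paragraph above guesses at.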

Learning involves time and computing resources, whereas hard-coding a behaviour might use very little in the way of computing power. Maybe I will put the sonar hardware I built earlier on the robot base and give it a go?

sonarScanner

Note it doesn't end well for this little bot 🙂

 


   