Does my robot understand?

28 Posts
6 Users
1 Like
5,526 Views
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
 
Posted by: @casey

So he will be defining exactly what he means by that with actual hardware and software.

Yes, it sounds very exciting, especially considering how well Bill explains what he does in great detail. I'm really looking forward to watching this series on the DB1 build.

DroneBot Workshop Robotics Engineer
James


   
stven
(@stven)
Member
Joined: 5 years ago
Posts: 49
 

Interesting discussion.

I recently came across this video and a few others from Hanson Robotics, which I also thought were interesting forays into new spaces for AI. I'll grant there is some hype here, but also some progress along a quite difficult path. The dialog has been likened to an online chatbot, but I think there is a little more than that going on. One would need to review the code to get a better feeling...

 


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  
Posted by: @stven

The dialog has been likened to an online chatbot, but I think there is a little more than that going on. One would need to review the code to get a better feeling...

You can review the code on github.

I remember my first computer, a TRS-80, running ELIZA. Its source code is simple.

"Sophia is conceptually similar to the computer program ELIZA, which was one of the first attempts at simulating a human conversation."
https://en.wikipedia.org/wiki/Sophia_(robot)

"Sophia the robot’s co-creator says the bot may not be true AI, but it is a work of art"
https://www.theverge.com/2017/11/10/16617092/sophia-the-robot-citizen-ai-hanson-robotics-ben-goertzel

 


   
Recycled Roadkill
(@recycled-roadkill)
Member
Joined: 5 years ago
Posts: 75
 

I watched that TED talk just last week but found the dialog between the two AI units to be disappointing at best, especially when Sophia started basically repeating questions and answers initiated by the other.

The one on the right looked much better wearing the hat of the guest speaker. Sophia looked a bit more naked at that point, and the conversation dwindled even further.

Guess I got what I paid for.

My Vector and Cozmo seem to chatter endlessly in their own language which I find to be very entertaining.

All I need is a universal translator to understand what they're talking about.


This message was approved by Recycled.Roadkill. May it find you in good health and humor.


   
stven
(@stven)
Member
Joined: 5 years ago
Posts: 49
 

@casey

The Hanson Robotics repository at github is interesting and impressive. I see they are leveraging the work at OpenCog and the robots are running ROS.

   https://wiki.opencog.org/w/The_Open_Cognition_Project

 


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
 
Posted by: @stven

The Hanson Robotics repository at github is interesting and impressive. I see they are leveraging the work at OpenCog and the robots are running ROS.

I'm afraid I am not impressed with the work of Hanson Robotics. To me it's basically nothing more than a glorified chatbot working on the same principles as other chatbots.

Their strategy appears to be simply to keep throwing more and more data at the machine in the hope that at some point it will "automatically" become intelligent due to the overwhelming amount of data it has to work with. To be perfectly honest, I wouldn't waste my time on such a silly project.

I would be far more impressed by a robot that has extremely limited knowledge but can display a true understanding of the little information it has.   For me, that would be far more promising and worthy of building upon.

I will give Hanson Robotics credit for the physical robots they have built in terms of facial simulations etc.  But then again, considering how much money they most likely threw at these robots perhaps it's not so impressive after all.

I would love to know what I could do given a huge grant and a bunch of grad students or technicians working under my direction.

Unfortunately I'm never likely to find out.

DroneBot Workshop Robotics Engineer
James


   
stven
(@stven)
Member
Joined: 5 years ago
Posts: 49
 

@robo-pi

I think I will continue to explore what is going on at the OpenCog projects, just to learn a little more about what they are doing there. I agree with much of what you say about Hanson; obviously a lot of money goes into R&D for what they are building, and I suspect they are going to be a major player in the commercial space.

But there are a lot of interesting developments in the “maker” space, perhaps more truly innovative and groundbreaking.


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  
Posted by: @robo-pi

I would be far more impressed by a robot that has extremely limited knowledge but can display a true understanding of the little information it has.   For me, that would be far more promising and worthy of building upon.

The "views" expressed by Sophia and that other chatbot are clearly canned views inserted into the database for entertainment value. There is no in-depth knowledge processing, and no logically thought-out ideas are being expressed.

However, you can give your robot the ability to handle speech input and generate speech output, rather than just having a console or GUI interface to give it commands and so on...

Hopefully you would leave out the irritating canned opinions and have it just answer a question or carry out a command:
"Robbie, go to the kitchen and fetch me a beer"
"Will that be a Budweiser or a Miller Lite?"
"Miller Lite thanks"
"Ok. On my way"
and so on ...

Maybe using those deep nets to collect information as it moves about.
"Robbie, have you seen my keys?"
"Yes, I noticed them last on the corner bench"

 


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
 
Posted by: @casey

The "views" expressed by Sophia and that other chatbot are clearly canned views inserted into the database for entertainment value. There is no in-depth knowledge processing, and no logically thought-out ideas are being expressed.

I somewhat agree. It's not exactly pre-programmed canned speech; instead it's a play on word association. The program figures out what the subject words are and then searches for information related to that subject. And since these robots have a huge database of information to search through, it's impossible to know what ideas they might stumble onto, so their replies will be surprising even to the original programmers. But for me that's not impressive, because I understand exactly what's going on, and it's not what I would consider cognitive intelligence. It's basically just a very complex word-association game.

 

Ben Goertzel explains what he's doing in extremely abstract terms. So abstract that it gives me the impression that even he doesn't have a clue what's actually going on. The idea is that there already exists a myriad of AI systems: AI that plays specific games, AI that organizes economic information, AI for self-driving cars, AI for GPS locations, AI for any specific task or data organization. The idea behind OpenCog is to find ways to connect all of these various AI systems for use by a central Artificial General Intelligence, or AGI, system.

But all this amounts to is an attempt to do what I described above: searching through the databases of various AI systems looking for terms that are related to what you are searching for. In this case you also have the added advantage of being able to choose which AI system to search through, based on the goals set by your AGI system. In other words, if you are asked a question about chess, you can search through only the AI systems related to playing chess, or to keeping track of chess history and players, etc. This way your answers are going to be related to your AGI goal. So that's all Ben Goertzel's AGI amounts to: a system that searches through all the other AI systems based upon the goals and purposes defined by the AGI.

That's clearly an obvious approach to generalizing AI, but such a system is never going to become intelligent on its own. In fact, the entirety of its intelligence lies within the AGI module itself, the core program that is choosing what information to search for.

Posted by: @casey

Hopefully leave out the irritating canned opinions and have it just answer a question or carry out a command.
"Robbie, go to the kitchen and fetch me a beer"

I would keep this far simpler:

"Robbie" <--- keyword used to get the robot's attention. When it hears its name it listens for a command.

In fact, I go further with my robot and use "Alysha Listen". There are two reasons for this. For one thing, the robot will not respond to just "Alysha", meaning that I can speak her name without causing her to think that I'm speaking to her. She will only go into listen mode when I say "Alysha Listen". This also works better for the speech recognition engine, because a longer phrase has a higher confidence level due to the additional syllables and the length of the speech.

I would also drop all unnecessary human words.

So I would simply say,

"Alysha Listen" and wait for her to reply with "I am listening".

Then I would say "Kitchen Fetch Food", an entire phrase that Alysha would recognize. It has a high degree of flexibility. The term Kitchen tells Alysha what "world" we are talking about. The term Fetch tells Alysha what she is being asked to do. The term Food gives her a specific category of items that are located within the kitchen.

If she recognizes the phrase she will understand the "world" that we are talking about, and she will understand what it is she is supposed to do. All that is left is to determine which "Food" is being requested. So, having understood the command, she will reply with, "What kind of food would you like?"

This  tells me that she's all set to go to the kitchen and  fetch food.  All that is required now is to  say what kind of food I want.

If I say "beer"  she would then know what I want and offer me choices if choices are available.

Instead of using the single word "beer", I would more likely say something like "Michelob Lite". This more complex phrase is much easier to detect with certainty using a speech engine. Also, if I keep Michelob Lite in stock, this term would already be in her vocabulary.

In fact, I intend to build her vocabulary up from scratch. So she isn't even likely to have the word "beer" in her vocabulary at all, since it's not a term I am personally likely to use. She doesn't need to know what beer is. All she needs to know is how to find a Michelob Lite and bring it to me.

I'm not concerned with trying to build a robot that can understand just anything anyone might happen to say.  I'm only concerned with a robot that understands what I'm telling it to do.   She can learn superfluous words and terms over time on her own  as she grows.

In fact, if she ever gets to the point where I ask her to fetch a Michelob Lite and she replies, "Why don't you just say that you want another beer?", then it's time to break out the champagne! That shows that she's learning and understands exactly what's going down.
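A minimal sketch of this wake-phrase-plus-command flow, assuming a text interface in place of a speech engine; the phrases, command table, and inventory are purely illustrative, not taken from any actual robot:

```python
# Sketch of the "Alysha Listen" -> "Kitchen Fetch Food" -> item dialog
# described above. All names and data here are hypothetical examples.

WAKE_PHRASE = "alysha listen"

# Each recognized command phrase maps to (world, action, category).
COMMANDS = {
    "kitchen fetch food": ("kitchen", "fetch", "food"),
}

# Items the robot actually knows how to find, per category.
INVENTORY = {
    "food": ["michelob lite", "leftover pizza"],
}

class Robot:
    def __init__(self):
        self.listening = False
        self.pending = None  # (world, action, category) awaiting an item

    def hear(self, phrase):
        phrase = phrase.strip().lower()
        if not self.listening:
            # Ignore everything until the full wake phrase is heard.
            if phrase == WAKE_PHRASE:
                self.listening = True
                return "I am listening"
            return None
        if self.pending:
            world, action, category = self.pending
            if phrase in INVENTORY.get(category, []):
                self.pending = None
                self.listening = False
                return f"On my way to the {world} to {action} {phrase}"
            return f"I don't know that {category}"
        if phrase in COMMANDS:
            self.pending = COMMANDS[phrase]
            return f"What kind of {self.pending[2]} would you like?"
        return "I don't understand"
```

Walking through the dialog from the post: `hear("Alysha Listen")` returns "I am listening", `hear("Kitchen Fetch Food")` returns "What kind of food would you like?", and `hear("Michelob Lite")` confirms the errand. Keeping the command phrases long and fixed is what lets a real speech engine match them with high confidence.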

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  
Posted by: @robo-pi

The idea is that there already exists a myriad of AI systems: AI that plays specific games, AI that organizes economic information, AI for self-driving cars, AI for GPS locations, AI for any specific task or data organization. The idea behind OpenCog is to find ways to connect all of these various AI systems for use by a central Artificial General Intelligence, or AGI, system.

Thanks for explaining how they are trying to implement a more general AI.

Not sure if I will ever reach the stage of using speech recognition. Currently I just communicate via a wireless keyboard. I am also building up its vocabulary on a needs basis, starting with simple commands like "Turn Left", "Move forward two feet", "Stop" and "Go"; or at least they will become that, as at the moment I use single key-press codes for a word: TL45[Enter] to turn left 45 degrees and MF45[Enter] to move forward 45 inches. These functions can then be called by higher-level functions.
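The key-press scheme above could be sketched roughly like this; the dispatch table and handler behavior are hypothetical stand-ins for the real motor routines:

```python
# Sketch of the keyboard command codes described above: a two-letter
# opcode followed by a number, e.g. TL45 (turn left 45 degrees) or
# MF45 (move forward 45 inches). Handlers here just report the action.
import re

def turn_left(deg):
    return f"turning left {deg} degrees"

def move_forward(inches):
    return f"moving forward {inches} inches"

DISPATCH = {"TL": turn_left, "MF": move_forward}

def run(code):
    # Accept lowercase input and stray whitespace from the keyboard.
    m = re.fullmatch(r"([A-Z]{2})(\d+)", code.strip().upper())
    if not m:
        return "unknown command"
    handler = DISPATCH.get(m.group(1))
    return handler(int(m.group(2))) if handler else "unknown command"
```

Adding a new command is then just another entry in `DISPATCH`, which is also what lets higher-level functions call the same handlers directly.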

Maybe a program can be written to turn a sentence into code.
"Move forward until you hit a wall"
becomes,
DO
MOVE FORWARD
UNTIL HIT

Also, maybe a program can extract commands from long-winded statements.

"Would you please go to the kitchen and fetch me some food?"
becomes,
"Kitchen Fetch Food"
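A crude keyword-spotting pass along these lines; the vocabulary table is illustrative:

```python
# Sketch of extracting a command from a long-winded sentence: keep only
# the words found in a known-command vocabulary, drop everything else.
VOCAB = {"kitchen": "Kitchen", "fetch": "Fetch", "food": "Food"}

def extract_command(sentence):
    # Lowercase and strip simple punctuation before matching.
    words = sentence.lower().replace("?", "").replace(",", "").split()
    return " ".join(VOCAB[w] for w in words if w in VOCAB)
```

So `extract_command("Would you please go to the kitchen and fetch me some food?")` yields "Kitchen Fetch Food". Of course this ignores word order and negation ("don't fetch food" extracts the same command), which is where the real programming effort would go.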


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
 
Posted by: @casey

Also, maybe a program can extract commands from long-winded statements.

"Would you please go to the kitchen and fetch me some food?"
becomes,
"Kitchen Fetch Food"

Yes, you could definitely do that, but it would require a lot of programming on your part. I'd rather work the other way around, just like we do with human children: teach the robot "Kitchen Fetch Food" and then let the robot evolve to figure out how to extract this from more complex speech later as it grows.

This way the robot is learning and doing its own programming instead of the programmer doing everything for the robot.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  

If you know how to program a robot to evolve to figure things out, that would be good.

Maybe a child who understands "kitchen", "fetch" and "food" works it out like this?
"Would you please go to the kitchen and fetch me some food?"
becomes,
"xxxxx xxx xxxxxx xx xx xxx kitchen xx fetch xx xxxx food?"
becomes,
"Kitchen Fetch Food"
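That masking step might be sketched as follows, assuming a small set of known words; the set itself is illustrative:

```python
# Sketch of the masking idea above: replace every word the listener
# doesn't know with x's of the same length, leaving only known words.
KNOWN = {"kitchen", "fetch", "food"}

def mask(sentence):
    out = []
    for w in sentence.split():
        word = w.lower().strip("?.,")  # ignore simple punctuation
        out.append(word if word in KNOWN else "x" * len(word))
    return " ".join(out)
```

Running `mask("Would you please go to the kitchen and fetch me some food?")` reproduces the x-ed-out sentence above, from which collecting the unmasked words gives "Kitchen Fetch Food".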

I suspect children first evolve a world model, which they understand using mentalese, "a hypothetical mental system, resembling language, in which concepts can be pictured and combined without the use of words."

Perhaps that was the kind of world Helen Keller lived in before she understood the use of language.

 


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
 
Posted by: @casey

I suspect children evolve a world model first which they understand using mentalese, "a hypothetical mental system, resembling language, in which concepts can be pictured and combined without the use of words."

This is basically what ANNs do.  They make associations between things.   This is no doubt why the AI community is so fascinated with ANNs.

Posted by: @casey

Perhaps that was the kind of world Helen Keller lived in before she understood the use of language.

I agree. But the key point to recognize here is that her world didn't truly open up to be very meaningful until she learned how to communicate. So Helen Keller is actually a prime example of how language is paramount to facilitating true understanding. Language most likely also opens the door to abstract thinking, which may not have been as readily available in the primal mode of just reacting directly to stimuli or direct data.

DroneBot Workshop Robotics Engineer
James


   