
Autonomous robots and AI?

50 Posts
8 Users
12 Likes
15.6 K Views
Duce robot
(@duce-robot)
Member
Joined: 5 years ago
Posts: 672
Topic starter  

@pugwash

I think word recognition is the place to start. When a robot can associate a word with an object, that's where it begins: "get my coffee cup" should mean not only a cup, but my cup. Maybe a series of identifying "bar graph" stickers could help, for lack of a better term, flash cards for robots. Through repetition it may slowly learn to distinguish objects. But I'm basically in robot preschool, so it's just an idea.
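If it helps, the "flash cards" idea can be made concrete in a few lines (a toy sketch, every name in it invented): each time the robot sees a sticker while hearing a word, it strengthens that association, so repetition does the teaching.

```
from collections import Counter, defaultdict

# sticker id -> how often each word was heard while that sticker was in view
associations = defaultdict(Counter)

def observe(sticker_id, heard_word):
    """One 'flash card' exposure: strengthen the sticker-word link."""
    associations[sticker_id][heard_word] += 1

def best_label(sticker_id):
    """The word most often paired with this sticker, if any."""
    words = associations[sticker_id]
    return words.most_common(1)[0][0] if words else None

# repetition slowly wins out over noise
observe(7, "cup"); observe(7, "cup"); observe(7, "mug")
print(best_label(7))   # -> "cup"
```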


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 

You can use vision to identify objects and their positions. Simple detection can be done with color, bar stripes, or fiducials. To fetch coffee or a can of beer you will need an arm, but first you will need your robot to be able to map and navigate the house or building.
My assumption is that since DB1 is going to be a "real" robot, it will be able to do all of the above once some arms are added. I also imagine that at a dinner party the robot could act as a waiter, walking around with a tray of food and drinks to offer the guests.

Click the link to see the video.
https://hackaday.com/2013/09/08/brewster-fetches-your-beer-automatically/
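For what it's worth, here's a rough sketch of what fiducial detection can look like in practice, using OpenCV's ArUco module. This assumes opencv-contrib-python 4.7 or newer, and the marker-ID-to-object table is invented for the example:

```
import cv2

# ArUco detector using a dictionary of 50 predefined 4x4 markers
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

OBJECT_NAMES = {7: "my coffee cup", 12: "beer can"}  # made-up ID -> label map

cap = cv2.VideoCapture(0)                 # default camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            label = OBJECT_NAMES.get(int(marker_id), "unknown object")
            # the marker centre gives a rough bearing toward the object
            cx, cy = quad[0].mean(axis=0).astype(int)
            print(f"Saw {label} (marker {marker_id}) near pixel ({cx}, {cy})")
```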

 


   
Duce robot
(@duce-robot)
Member
Joined: 5 years ago
Posts: 672
Topic starter  

I think one of the top characteristics of a robot, one of its main functions, should be ................ are you ready? .................. Shhhhhhh, I'm going to whisper this one: it must be fun! How much fun is a dirty-sock retriever, or a waiter, or some robotic cocktail server? Branch out with AI or not, make it totally fun for everyone, and not in some weird way that might sully the unit. Because if it isn't fun, then it isn't going to be very much fun!


   
(@pugwash)
Sorcerers' Apprentice
Joined: 5 years ago
Posts: 923
 

@duce-robot

how much fun is a dirty sock retriever 

I have seen sensors that can "see me", "hear me", "feel me" (pressure), and "touch me". That covers Tommy from The Who.

But I have not come across any sensors that can smell or taste.

Let's be fair, I don't think that any self-respecting electronic sensor would want to put a dirty sock in its mouth.

So, I guess we have to restrict ourselves to the sensors available to glean information about the robot's immediate environment.


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 
Posted by: @pugwash
But I have not come across any sensors that can smell or taste.

As with "seeing", there is no single sensor for "smell". An electronic camera provides an array of light values; "seeing" is what is done with that array of light values. We see with our brain, not with our eyes, although we need the eyes to provide the data that enables "seeing". Smell would involve the analysis of chemical combinations, requiring not one sensor but many sensors, each capable of detecting different kinds of molecules.

We do have sensors that may be useful for a robot, such as one for detecting carbon monoxide, useful if you have a gas stove. We have sensors to check how much alcohol is in a human's breath, which we might connect to a verbal output of "Have you been drinking?". We have sensors to detect the acidity (a sourness taste?) of a liquid. There are sensors for ethylene (ripeness of fruit). More complex "noses" can reveal the presence of many things, such as different types of drugs. Can these machines be reduced to matchbox size? I don't know.
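As a ballpark of how simple the robot's end of this can be, here's a sketch of polling one of those gas sensors (say, an MQ-7 carbon-monoxide module) through an MCP3008 ADC on a Raspberry Pi with gpiozero. The wiring, channel number, and alarm threshold are all assumptions for illustration:

```
from time import sleep
from gpiozero import MCP3008   # 8-channel SPI ADC supported by gpiozero

co_sensor = MCP3008(channel=0)  # sensor's analog out wired to ADC channel 0
CO_ALARM = 0.35                 # hypothetical threshold on the 0.0-1.0 reading

while True:
    level = co_sensor.value     # 0.0-1.0, proportional to sensor voltage
    if level > CO_ALARM:
        print(f"Carbon monoxide warning! level={level:.2f}")
    sleep(1.0)
```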

 

 


   
Duce robot
(@duce-robot)
Member
Joined: 5 years ago
Posts: 672
Topic starter  

Also, AI and the quest to make robots human are going to run into the need for emotion, and that is the real stumbling block for AI. It isn't going to happen, ever, because with emotion comes dysfunction and so on. The heart is far more complicated than the brain, figuratively speaking of course.


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 

Emotions are not dysfunctional; they are an essential part of our ability to quickly decide what to do next and which activities to maintain. We now understand a lot about why we have emotions and the purpose they serve.

There was one hobby robot builder who added simple "emotions" to his robot.
http://www.johncutterdesign.com/blog/category/robot
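In the same spirit, a couple of decaying internal variables nudged by sensor events are enough to bias what the robot does next. A toy sketch of my own (not taken from that blog, and all thresholds are arbitrary):

```
class Mood:
    """Two toy 'emotions' that decay over time and bias action selection."""
    def __init__(self):
        self.fear = 0.0      # raised by bump events, suppresses exploring
        self.boredom = 0.0   # rises while idle, encourages exploring

    def update(self, bumped, idle_seconds):
        self.fear = min(1.0, self.fear * 0.95 + (0.5 if bumped else 0.0))
        self.boredom = min(1.0, self.boredom + 0.01 * idle_seconds)

    def next_action(self):
        if self.fear > 0.6:
            return "retreat"
        if self.boredom > 0.5:
            self.boredom = 0.0   # exploring relieves boredom
            return "explore"
        return "idle"
```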

 


   
Duce robot
(@duce-robot)
Member
Joined: 5 years ago
Posts: 672
Topic starter  

"They are not dysfunctional." I like that one, lol. But dysfunction does come along with them. So to build not the emotions themselves but the base from which all of it can flow, there are just too many variables to account for. Then you get into individuality: the builder would almost have to write the whole program around themselves, capturing all of their quirks and idiosyncrasies, yes, and dysfunctions, pet peeves, and the other things that make up a person's personality. I don't think it will ever reach this. You might as well just have kids, it's much faster. Lol.


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
 

@duce-robot

I found a video for you that might help you in your quest to build an autonomous AI robot. It might give you some ideas on how to tackle various tasks.

Oh, by the way, to test your AI robot, see if it can beat the chimp at 19:24 in the video. This ape actually does better than humans at this particular task. And it even involves recognizing the numerals 1 through 9 in a split second and then recalling the locations of the numerals in the correct numerical order. Pretty interesting stuff.

If a chimp can do this, surely you can design a robot to do it too. Right?

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 

So you might ask why chimps aren't as smart as us in other areas.
The computer is valuable because it has perfect memory, and yet it cannot process data the way we can ... yet.
There is probably a reason we don't have perfect memories: a perfect memory might not be all that useful when it comes to making advanced models of the world (abstractions) and quickly getting to the gist of things. If we forget our times tables we can still work them out.

 

 


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
 

@casey

I recently watched a TED talk by a neurologist who says that studies show human brain size and capacity have actually been deteriorating over the last few millennia. He suggests that once humans began to form large societies where they could depend on each other, we became lazy, since we no longer needed to do everything for ourselves in order to survive. So it could be that at one point in time we were as smart as the chimps, but now we're starting to devolve, at least in terms of innate primal abilities.

Collectively, as a species, we are becoming far more intelligent on an abstract level. But as individuals we could actually be devolving in terms of individual capabilities.

Back when I was 20 I might have had a chance at competing with the chimp. But at my current age of 70, if I had to compete with that chimp at that same game in order to survive, I'm afraid that would be the end of me.

This old ape is on the way out.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 
Posted by: @robo-pi

@casey

Collectively, as a species, we are becoming far more intelligent on an abstract level.  But as individuals we could actually be devolving in terms of individual capabilities.

Maybe like neurons in the brain we are becoming specialised to work in an evolving intelligent social system.

 


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
 
Posted by: @casey

Maybe like neurons in the brain we are becoming specialised to work in an evolving intelligent social system.

Humans becoming like neurons
in a socially structured liaison
where the world is a brain
with no one to train
but A.I. mechanical morons

Brains cells will die
and we'll never know why
as we collectively reach for the sky

And someone will write
a poetic slight
as a proper historic goodbye

And the Borg will come
and take us away
"Resistance is futile!" they'll say

And thus it will end
the suffering and pain
that humans have known for so long

The sun will explode
like an overfed toad
and our lives will end with a song

Premonitions of the future by Robot Pi

DroneBot Workshop Robotics Engineer
James


   
(@pugwash)
Sorcerers' Apprentice
Joined: 5 years ago
Posts: 923
 

@robo-pi

I was wondering, if you take a break from waxing lyrical, whether you might expand on the concept you mentioned in an earlier post of using op-amps to create a neural network.

I am curious to know how something like that would work, and what practical application it could be used for!


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
 
Posted by: @pugwash

@robo-pi

I was wondering, if you take a break from waxing lyrical, whether you might expand on the concept you mentioned in an earlier post of using op-amps to create a neural network.

I am curious to know how something like that would work, and what practical application it could be used for!

I'm hoping to make some detailed videos on this subject this winter. It's both extremely simple and extremely complicated. By that I mean that it's actually quite simple to understand, but quite complex to explain, especially starting from scratch, and especially because anything you already know about neural networks isn't going to be very helpful, since my methods are quite unorthodox.

But to save you the wait, I will try to explain some things very briefly here. I'm going to be glossing over the main points pretty quickly, because I don't want to try to explain everything in a single post. So I'll try to do this in a sort of bullet format.

  • What is a Neural Network or Perceptron Circuit useful for?

In a nutshell, it's a circuit (or computer program) that can recognize a very specific data input condition. Its main use is to classify input data and/or recognize a specific data set (i.e. object recognition, word recognition, face recognition, etc.). So, in answer to your last question, a neural network or perceptron is practical when you want to be able to recognize a specific object or class of objects, whether physical or informational (i.e. physical objects, or abstract concepts like words or sounds).

  • How does a Neural Network or Perceptron Circuit work?

Well, this is a rather broad question that actually has many different valid answers. In fact, there is so much research going on in this field that there are as many different answers to this question as there are approaches to building and designing neural networks and perceptrons. By the way, in case you aren't aware, the whole idea is to model how neurons and neural networks work within a biological brain. And it is informative to realize that there are many different types of biological neurons and neural networks, so it's not as if one size fits all. It's a quite diverse subject, and there are many ways to approach it, with no way yet known to tell which approaches may be better or worse. So it's not carved in stone. It's wide open to research and to the artistic creativity of the designers.

  • We can, however, classify ANNs and PCs in two distinct ways:

Note: ANN = Artificial Neural Network,  and I'm going to use PC = Perceptron Circuit (my term)

The two ways to classify them is simply:

  1. Before Training
  2. After Training

Much of what you'll see and read on the Internet is focused on training ANNs. ANNs are trained dynamically via computer programs that use something called "back propagation". This is done via a process of trial and error over many iterations, and learning how to train ANNs is what a lot of people are devoted to studying. Again, this is basically because nobody knows the best way to do it. This is why everyone is frantically researching this process in the hope of making a major breakthrough and discovering clever ways to make it more efficient and effective.

During training the ANN is pretty much useless. It isn't useful until after it has been trained to classify the input data or recognize the objects or processes it was trained on. Only after it has been trained does it become useful; only then can it actually recognize or classify the things you want it to recognize and classify.
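To make "trial and error" concrete, here is a toy illustration of the classic perceptron learning rule (my own sketch, not any particular research code): guess, get told the error, nudge the weights, repeat.

```
import random

# Learn a 2-input AND gate by iterated guess-and-correct.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
RATE = 0.1   # learning rate (arbitrary)

for epoch in range(1000):
    errors = 0
    for (x1, x2), target in samples:
        guess = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - guess          # the "you got it wrong" signal
        if error:
            errors += 1
            weights[0] += RATE * error * x1
            weights[1] += RATE * error * x2
            bias += RATE * error
    if errors == 0:                     # only now is the network useful
        print(f"trained after {epoch} epochs: w={weights}, b={bias:.2f}")
        break
```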

What I am working on is a new and unorthodox way of training ANNs. My method does not use a trial-and-error training algorithm. Instead, I analyze the data myself using mathematical techniques and then use those analytic results to correctly design a pre-trained ANN. So my ANNs won't need to go through a training process; they will already be able to recognize what I want them to recognize as soon as they are built. They will also be able to recognize the target data much faster, since they will be direct analog circuits rather than a computer algorithm that depends on a CPU or MPU to crunch data.
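Just to give a flavour of the "design the weights, don't train them" idea with a deliberately simple stand-in (this is not my actual method, just the textbook matched-filter trick): for a single linear neuron that should fire on one specific pattern, make the weights a normalized copy of that pattern and set the threshold just under its self-similarity.

```
import numpy as np

target = np.array([1.0, 0.2, 0.8, 0.1])      # the pattern to be recognized
weights = target / np.linalg.norm(target)    # computed directly, never trained
THRESHOLD = 0.95                              # fire only on a close match

def neuron(x):
    x = np.asarray(x, dtype=float)
    return float(weights @ (x / np.linalg.norm(x))) > THRESHOLD

print(neuron([1.0, 0.2, 0.8, 0.1]))   # True: the designed-for pattern
print(neuron([0.1, 0.9, 0.1, 0.9]))   # False: anything else
```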

  • How about a Practical Example?  Hypothetical.

Let's say that we want to build an ANN that can recognize the face of a specific person.  That's all it does.  It can only recognize one specific person's face.  But it can do that very well. 

If we build a conventional ANN in software that trains via trial and error, it will take some time for the ANN to program itself to recognize this particular face. Plus, it will need to be told that it got it wrong with every iteration of trial and error. In fact, it's actually far more complicated than this: it needs to be told whether its guesses are getting closer to or further from the target. So it takes a lot to train it. But once it's finally trained, it should be able to recognize that single face pretty well.

Please note that this is then all it can do. If you "re-train" it to recognize someone else (which could be done), it would no longer be able to recognize the original person. Well, that's not exactly true: it could save all the synaptic weights from the previous trial-and-error run, and in this way "remember" the first person it had originally been trained to recognize. But in order to recognize more than one person's face it would need to switch all of its synaptic weights to those values, and it wouldn't even know that it needed to do this for any particular face. In short, even using it in this way would quickly become a very processing-heavy situation, where the CPU and memory would need to be extremely fast if any practical results were expected.

  • Now let's think about a hard-wired op-amp ANN

To begin with, there is no training. Instead, the data is analyzed ahead of time, before the ANN is built, and then the ANN is built to recognize that individual's face. That is then all it can do. It can't easily be reprogrammed or re-trained to recognize someone else. But then again, it requires no CPU and no memory, and it can recognize the face much faster. Like instantly, with no processing or CPU power required.

The op-amp ANN then simply sends an interrupt signal to the computer and basically says, "I see Fred", and then the computer program knows that Fred is present. Of course, if you also want to be able to recognize Sally, you'll need to build a second op-amp ANN to recognize Sally, and so forth. This may seem cumbersome, but if your goal is just to have your robot quickly and easily recognize a few faces, this may very well be a practical way to go.
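As a back-of-envelope illustration of how designed weights turn into hardware (a sketch under assumed values, not a finished design): in an inverting summing amplifier, each input's weight magnitude is just the resistor ratio Rf/Ri, so once the weights are computed you can read off the input resistors. Negative versus positive weights need the inverting versus non-inverting paths; I'm ignoring that detail here.

```
# Map designed synaptic weights onto input resistors for an op-amp
# summing stage, where gain_i = R_feedback / R_in_i. Values are illustrative.
R_FEEDBACK = 100_000                # assumed 100 k-ohm feedback resistor
weights = [0.5, 2.0, 0.25]          # hypothetical designed weight magnitudes

for i, w in enumerate(weights):
    r_in = R_FEEDBACK / w           # larger weight -> smaller input resistor
    print(f"input {i}: weight {w} -> R_in ~ {r_in / 1000:.0f} k-ohm")
```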

  • But you said this was "Hypothetical"?

That's exactly correct. I'm not claiming to be able to build an op-amp ANN today that can recognize the face of a specific individual. I was just offering up an example of what could hypothetically be possible once this process is perfected. I'm certainly not there yet.

In fact, thus far this is all theory, along with a lot of mathematical ideas. My first attempt at building an op-amp ANN will certainly not be one that can recognize anything as complex as a human face. Although that may not be nearly as difficult as you might think: there are ways of reducing the data to just a handful of special traits that are unique to an individual. The ANN isn't going to be "seeing" what you and I see; it is only going to look for the features necessary to tell that Fred isn't Sally, etc. And there have actually been a lot of studies on how to do this.

Well, I'm already well beyond what I originally expected to say in this post.

However, just to end this ramble, my first prototype op-amp ANN will most likely be set up to recognize something like an ice-cold beer in the fridge.

After all, I might get thirsty as I move forward toward trying to build an ANN that recognizes a human face, so I may as well have the robot learn how to bring me an ice-cold beer in the meantime.

I hope this answers your questions.

DroneBot Workshop Robotics Engineer
James


   
Page 2 / 4