
Learning and real robots

robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

@davee

Moved a response here as I think I've been hijacking @inq's thread.
@inq wrote:
"There are some things I really don't want help with."
I think that was a big hint 🙂

@davee wrote:
"... one of the main questions, namely how is 'Fitness' defined?"

You need to write a function to determine whether the desired behavior is occurring and to what degree. The difference between the desired behavior (moving parallel to a wall as the wall changes direction) and the actual behavior is the error signal (the fitness factor). But that is your requirement. In biology the fitness factor is simply reproductive success, which can be achieved by many types of behaviors; there is no one deciding what those behaviors have to be. You could do the same in a machine, but of course the behaviors may not be what you want. In essence we are a factor in the reproductive success of the program; we are part of its environment, just as we are with animal and plant breeding.
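To make that concrete, here is a minimal Python sketch of what such a fitness function might look like for the wall-following case. The function name, target distance and scoring formula are illustrative choices of mine, not Inq's actual code:

```python
def wall_following_fitness(distances, target=0.3):
    """Score one simulated run by how closely the robot held the
    target distance (in metres) from the wall.

    `distances` is the recorded distance-to-wall reading at each
    timestep; a smaller accumulated error means a higher fitness.
    """
    # Sum of squared deviations from the desired wall distance.
    error = sum((d - target) ** 2 for d in distances)
    # Map error to fitness: a perfect run scores 1.0, and larger
    # errors give values approaching 0.
    return 1.0 / (1.0 + error)
```

A genetic algorithm would then simply favour controllers whose recorded runs score closer to 1.0, without anyone specifying *how* the wall should be followed.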

On learning to recognize handwritten characters: it is possible for a program to learn to categorize inputs just the way we do, by what they have in common. An O and a Q may go into the same category until feedback says no; then extra weight is put on the difference between them, separating them into different categories. Categories do not have to be visual; they can be functional or share some other criterion.
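A toy illustration of that weighting idea, with invented features and numbers (two features per character: roundness and whether it has a tail):

```python
import math

def weighted_distance(a, b, weights):
    """Distance between two feature vectors, with a per-feature weight."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

# O and Q are equally round; only the tail feature distinguishes them.
o_prototype = (1.0, 0.0)   # (roundness, has_tail)
q_prototype = (1.0, 1.0)
sample_q = (0.95, 0.9)     # a slightly noisy Q

# With the tail feature weighted at zero, O and Q collapse into one
# category: the sample is equally close to both prototypes.
w_before = (1.0, 0.0)

# After feedback says "no, those are different", extra weight goes on
# the distinguishing feature and the two categories separate.
w_after = (1.0, 5.0)
```

With `w_before` the sample is the same distance from both prototypes; with `w_after` it is clearly closer to the Q prototype, so the categories have been pulled apart by the feedback.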

You wrote in another thread,
https://forum.dronebotworkshop.com/raspberry-pi/experimental-robotics-platform-powered-by-the-pico-w/#post-43122

"Emulating a robotic vacuum cleaner or lawn mower, I fear is unlikely to score more than about 2 on a scale of 10."

But the abilities of the latest robot vacuum cleaners are exactly what you have to duplicate in any robot, except you might give it tasks other than vacuuming!

They are the only real autonomous mobile robots in the home. A simple line-following, obstacle-avoiding, wall-following robot may teach some important basics, but that is only the start. A modern vacuum cleaner can map a house, navigate the house, feed itself, recognize dog poo, avoid falling down the stairs and so on. They can even converse with you and take verbal commands.

A vacuum cleaning robot could fetch a beer given an arm.

One day they might evolve into a more humanoid-looking robot that can make use of the same household items that you use.

 


   
(@davee)
Member
Joined: 3 years ago
Posts: 1691
 

Hi @Robotbuilder,

Thanks for your reply.

Apologies if my reply was not precise enough when I asked "How is 'Fitness' defined?", but I meant for the specific simulation in the video, not what the generic definition of fitness was. Whilst I welcome extra detail, I was only looking for a sentence or phrase to describe how it was defined for that case, and, if applicable, that sentence could have been a reference to a former definition.

Please understand that although @inq had discussed the principles he was considering in general terms for Inqegg and/or other hardware implementations, I hadn't spotted a definition for the simulation. Perhaps I was remiss in not tying two parts of the discussion together, but that is already a long discussion and as I said, I am presently a 'casual observer', rather than someone who is following or even understanding every detail.

-----------------

I realise that the abilities of almost any mobile robot will include moving around a walled environment without bumping into the walls, but in the common environment of a single-level floor, etc., that is a task that can be handled with relatively simple logic and sensors .. it doesn't necessarily require an AI/ML engine of some type. Hence, I didn't see it as a demanding task and gave it a low achievement score. Perhaps AI/ML could be used to enhance its ability to cope with the 'unexpected', but I wasn't convinced this would be the norm in present-day commercial products.
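For what it's worth, that 'relatively simple logic' can be as little as a proportional controller on a single side-facing distance sensor. This Python sketch uses made-up gains and speeds, purely to show the shape of the idea:

```python
def wall_follow_step(side_distance, target=0.3, gain=2.0, base_speed=0.5):
    """One control step of a simple right-wall follower.

    Returns (left_speed, right_speed). If the robot drifts away from
    the wall it steers back toward it, and vice versa -- no ML needed.
    """
    error = side_distance - target        # positive: too far from wall
    correction = gain * error
    # Speed up the left wheel to turn toward the wall when too far,
    # and the right wheel to turn away when too close.
    return (base_speed + correction, base_speed - correction)
```

Called once per sensor reading in a loop, this keeps a robot tracking a wall on its right; a couple of extra rules handle corners.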

-----------------

My understanding (and apologies if I have it wrong) is that Inq is eventually dreaming of, and looking towards, a 'humanoid-ish' robot that can navigate in a complex environment, where the can of beer would be on a shelf alongside lots of other objects placed in semi-random layouts, etc. Whilst I hope he achieves it, just tackling some of the issues the task poses would be amazing. To me, following a wall and fetching a beer from an adapted dispenser is a novelty party trick, like teaching a dog to sit and offer its paw. The real issue is whether the robot can do a number of really useful tasks in a world that is not really designed to fit around it, which is very difficult to achieve with conventional programming approaches.

--------------------

As for beer-fetching or poo-sensing vacuum cleaners, I was certainly only considering a more primitive cleaner than the one in the video ... perhaps I am living in yesterday's world, but I still didn't see much in the video that I thought couldn't be programmed by 'conventional' means, given reasonable resources. Plus, to do the beer fetch, it needed a specially adapted fridge, which meant more expense, availability of space, etc., limiting its realistic suitability to a small minority of households.

--------------------

That is not to say AI (and/or ML) is not involved in this video, but sorry, this example just doesn't personally inspire me to think this is really something new that was hitherto regarded as virtually impossible.

And in my world, I would need another robot to find my phone, as I rarely look at it more than once a day, whilst the fridge is easy to find. 😀 

-------------------

As for adding a robot arm, you have probably seen it, but alongside the video you referenced is:

which is a nice demo of some expensive kit, and tackles some issues, but even it needs a world (e.g. the fridge) adapted to it, and it doesn't clean the floor, or do a million other things we all need to do from time to time.

-----------

That does not mean I am not interested or impressed by AI and/or ML.

I am much more impressed by other developments using AI/ML, including examples. Perhaps by necessity, the more interesting applications are also more complex and difficult to understand, but I would like to think that their approach could at least be explained in high-level terms. Then, if I can get my head around the 'high-level' picture, I might be interested to fill in some of the detail.

----------

Best wishes and thanks for your interest and input.

Best wishes, Dave

 


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

@davee 

@davee wrote:
"Apologies if my reply was not precise enough when I asked 'How is Fitness defined?', but I meant for the specific simulation in the video, not what the generic definition of fitness was."

https://www.iotworldtoday.com/robotics/robot-dog-learns-to-walk-starting-on-its-back

@davee wrote:
"... but I still didn't see much in the video that I thought couldn't be programmed by 'conventional' means ..."

It probably was programmed by 'conventional' means; that was the point: no GA or RL required to achieve the outcome. Yes, it was a special fridge and a beer-can-collecting "arm". A more advanced arm could also be programmed by conventional means to open a conventional fridge door, and visual processing could be used to identify the objects inside. There is a cost factor in a more advanced arm.

@davee wrote:
"And in my world, I would need another robot to find my phone, as I rarely look at it more than once a day, whilst the fridge is easy to find."

Finding objects isn't limited to finding a beer can.

I have no issue with these self-learning algorithms; they have interested me for decades. If @inq can achieve something with them, more power to him. You can do learning of different kinds with GOFAI as well. Those programs can also generalize over a set of examples.

 

 


   
(@davee)
Member
Joined: 3 years ago
Posts: 1691
 

Hi @robotbuilder,

  I am probably being really thick (yet again), but surely the point of AI is to offer capabilities that are beyond conventional programming.

I may be mistaken, but a view (a rule of thumb, perhaps?) that I have seen expressed elsewhere, which seems plausible but is clearly not formally proven, says that a task implemented using conventional programming will usually be more efficient (in terms of hardware requirements, power consumption and so on) than an AI implementation.

The catch being that some tasks might be theoretically implementable using conventional programming, but the time and cost would be prohibitive. If they can be implemented using an AI approach at an affordable cost and time, there is a solution to a previously unsolvable challenge.

Of course, researchers (amateur and professional), may legitimately choose to break such rules of thumb for their own reasons, but when they do, they should make that clear when publishing their work, preferably with explanations of the reasons.

And perhaps, the rule is itself broken, in which case exposure of the flaws would be most helpful to guide further progress.

--------

The self-righting robot dog appeared to be utilising approaches encompassed by, or similar to, AI. I haven't examined the details, but superficially it looked interesting, as it appeared not to require the designer to implement the answer.

The 'low score' I gave to the 'basic' vacuum cleaner was because it seemed likely that conventional programming would be feasible, if its task was limited to cleaning the accessible area and returning to its charging station.

The assertion that the beer-fetching cleaner was only capable of fetching a beer was based on the premise that the designer had to provide specific interventions at every step. Of course, it may have used an AI-based optical recognition approach to select the desired brand of beer, and that engine could probably be taught to recognise something else, like a bottle of fruit juice, but it would probably require more designer intervention to provide a 'fruit juice bottle' dispenser, and so on. Meanwhile, a conventional programmer could probably code a pattern recognition program that could look for both beer and fruit juice labels. All of which leads me to wonder if the most 'AI'-orientated part was Amazon's or Google's Large Language Model contribution.

--------

I think we both agree that Inq's research is very interesting, and I personally don't care if the 'AI' is new, old or somewhere in between. Unlike you, I clearly need to do some digging to try to understand the different flavours and approaches, which to me are still something of an acronym soup. This has been the source of many of my recent questions, but I have an enormous catch up journey.

Best wishes, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

@davee 

@davee wrote:
"... surely the point of AI is to offer capabilities that are beyond conventional programming."

Let's not overthink this. Concentrate on what the program does and how it does it, and decide for yourself if it is behaving "intelligently".

You have a black box with a set of inputs and a set of outputs. You can record its sequence of i/o behaviors and decide if it is behaving "intelligently".

I would not label a particular algorithm or programming method as AI; rather, I would label a behavior (a sequence of inputs/outputs) as showing some measure of intelligent behavior, from none to advanced.

I might also add that a system can show intelligent behavior in one area and not so much in another. 

 


   
(@davee)
Member
Joined: 3 years ago
Posts: 1691
 

Hi @robotbuilder,

  I suspect our thinking is closer than may appear.

My problem is that I am struggling to find terms to describe what, as you say, is really 'a particular algorithm or programming method', and that can also be typed in a few letters, such as AI or ML.

Hence, I use AI and the like to label a generic programme that has not been coded for the user's target purpose, but that can somehow be 'taught' to perform a task without changing the programme code. For example, controlling a self-driving vehicle that can cope with the vast range of the 'unexpected' that will be met on everyday roads, whose only specific adaptation to the target purpose has been through 'experiences': watching a human operator, watching videos of a human operator, or being linked to some kind of simulator/analyser that analyses the responses of the program and feeds back a measure of its success. That 'learning' is captured as a set of data parameters, typically numbers.

Unfortunately, 'AI' in particular, which includes the word 'Intelligence', tends to suggest some sort of vast intellect, when in reality we are probably looking at one or more arrays of processors running rather basic matrix manipulation programs on near-indecipherable numbers that the programme, or a partner programme, has previously generated, mixed with numerical inputs (including codes) from external sensing sources.

Perhaps there are more suitable terms for such parameterised number crunchers, but I have yet to find them.
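As a concrete (if toy) picture of such a parameterised number cruncher, one layer of that matrix manipulation might look like this in Python. The weights here are arbitrary stand-ins, not a trained model:

```python
def forward(inputs, weights, biases):
    """One layer of the 'basic matrix manipulation' described above:
    outputs = weights . inputs + biases, then a simple threshold.
    All the 'learning' lives in the numbers, not in this code."""
    outputs = []
    for row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(row, inputs)) + b
        outputs.append(max(0.0, total))   # ReLU-style nonlinearity
    return outputs
```

Training never touches this function; it only adjusts the numbers in `weights` and `biases`, which is exactly the 'set of data parameters' in the AI label above.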

Similarly, I struggle a bit with describing 'intelligence' in terms of living organisms. The similarities and differences in the 'thinking', control and behavioural abilities of Einstein, a fruit fly, a homing pigeon, an octopus and a fungus are a whole research field in their own right, though probably not that relevant to this forum. I am not expecting a fruit fly to understand general relativity, and I guess that even if Einstein had been born with wings, he might have taken longer to learn to control them for acrobatic flight than the entire lifetime of a fruit fly.

So yes, you might be right to think in terms of 'behaviours'. I can see the reasoning to suggest it, whilst also wondering if it too has its problems.

Clearly the word 'intelligence' is applied in flexible ways, depending on the circumstances. Meanwhile, I'll probably continue to use 'AI' as a convenient, albeit over-grandiose, acronym until someone finds a more appropriate term that is also widely recognised, and hope you will forgive me for any incidental corruption of the meaning of 'intelligence' that may be implied.

Best wishes, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

@davee 

@davee wrote:
"So yes, you might be right to think in terms of 'behaviours'. I can see the reasoning to suggest it, whilst also wondering if it too has its problems."

What problems? It is a scientific way of thinking, and there is no ambiguity in what you mean. The state of your program is the list of all its variables and their values at a given time. A behavior is a record of those values as they change over time. It makes no reference to the physical thing itself. Thus you might say "this variable is undergoing simple harmonic oscillation", and that is complete in itself, without concern for whether the variable measures a point on a wheel as it turns or a potential in an electric circuit.
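In Python terms, that idea might be sketched like this (the time step and the particular function are arbitrary):

```python
import math

def record_behavior(state_fn, steps, dt=0.1):
    """A behavior in this sense: the recorded values of a system's
    variable over time. Nothing here says what the variable
    physically *is*."""
    return [state_fn(i * dt) for i in range(steps)]

# The same behavior -- simple harmonic oscillation -- whether the
# variable is a point on a turning wheel or a circuit potential.
trace = record_behavior(lambda t: math.sin(2 * math.pi * t), steps=20)
```

The `trace` list is the behavior itself; any two systems producing the same trace are, from this point of view, behaving identically.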

Where there is confusion is in the rubbery way the word "intelligence" might be used in everyday life. Usually "real intelligence" assumes sentience. This means the system is having subjective experiences.

If the word "intelligence" is used to mean "a way of behaving", then even the word "artificial" in AI is not correct, as it assumes the intelligence is not real, like an artificial flower made out of plastic. However, the behavior is real: a chess-playing program actually plays chess and makes moves we would call intelligent if made by a human.


   
(@davee)
Member
Joined: 3 years ago
Posts: 1691
 

Hi @robotbuilder,

========

  I fear this may be a case where 'overthinking' might apply to both of us, albeit probably in different ways, but as always, I am open to a challenge. I just hope I do not accidentally cause any anger ... I apologise in advance if that has been or will be the case. I do not wish any contributor, and particularly not one who is making valuable contributions, any harm or stress.

======

In terms of language, words like intelligence and behaviour tend to mean different things to different people, including, maybe especially, within the scientific communities. My own limited background is largely based around physics and chemistry, which study simple materials and physical properties. Whilst there are still many things to discover, these do not tend to display the variability of living organisms. Hence, when people from other disciplines, such as psychology, try to describe and define their subjects of interest, there seems (to me) to be a wide variation in definitions, methods and so on, which probably arises and is used for good pragmatic reasons, but which allows much more flexibility, and causes more confusion, than would normally arise in, say, physics. I am not criticising .. just observing and trying to understand. As an example, people will sometimes describe a world-class athlete as displaying some form of high intelligence, perhaps because of their gymnastic capability. Such intelligence may be prefixed with an adjective, like 'physical' or 'athletic', but it is not clear to me whether such 'intelligence' is in any way related to that associated with being able to solve difficult maths problems or play top-level chess.

Behaviour similarly seems to depend on the field of study. Newton's apocryphal apple falling from a tree may be described in terms of a behaviour in which two objects of mass are attracted to each other .. he explained it as a force .. gravity. We have evidence to show that applies on Mars and the Moon, and we expect it to apply anywhere in the 'known universe'. Einstein and others have since revised our concepts of what gravity is, and of its behavioural effects, but these changes continue to appear to have a degree of universality, albeit some predictable but unexpected behaviours become apparent when looking at very small particles.

Meanwhile, scientists observing animal (including human) behaviour are likely to find cases where the same person behaves in different ways to the same stimulus on different occasions. Of course, in some cases there may be an obvious explanation, but in others the explanation may not be forthcoming.

======

  My problem, which may be a mistake on my part, is that both intelligence and behaviour, when applied to a person, and possibly to an animal such as a dog, imply both a degree of predictability and a degree of unpredictability in how they will respond to a particular situation.

Conventional computer programs tend to be strongly predictable in the responses they show to a specific set of stimuli -- indeed, particularly with respect to safety-critical systems, and probably other 'critical' systems like major financial management systems, a large amount of testing is usually aimed at demonstrating that the responses are always as specified.

Of course, a degree of unpredictability can be inserted, such as by including a random number in a calculation, or by an accidental fault, such as a sensor returning noise instead of the expected measured value. But neither of these implies intelligence. By contrast, a human may deliberately decide to react in a manner which is different from the 'expected norm'. In some cases, this may be the result of simple anger or greed or whatever, such as someone stealing something, and I think that would generally be described as 'bad' behaviour.

But occasionally it might be the result of an 'insight' or 'intuition' that suggests to the person concerned that the effect may unexpectedly be beneficial. Maybe Einstein's assumption that nothing can travel faster than the speed of light was such a thought. The general experience of most people in the early 1900s would have been that items can be accelerated by applying a force, and that continuing to apply that force will continue to cause the object to accelerate in a directly proportional manner. The suggestion that some kind of natural and fundamental speed limit exists is not an obvious conclusion. Of course, when the first trains were introduced, some cynics had suggested that exceeding a few miles per hour could be fatal, which soon proved to be hopelessly wrong, but Einstein's suggestion was not of that type. I guess many people would describe that as demonstrating intelligence.

By contrast, the chess-playing programmes, demonstrating similar moves to those of a human, were, at least until fairly recently, largely predictable. They may have been perceived as showing 'intelligence', but in reality they were calculating many predetermined what-if simulations, and picking the one that looked the 'most promising' on a rather simple mathematical basis after a small number of future moves. Their 'power' was based on the speed with which a (then) powerful computer could brute-force calculate all of the options several moves into the future, thereby anticipating all of the possible responses of the opponent.

Thus an 'average human' player, who knew the rules of the game but was not an expert in chess strategies, past games and so on, could easily be outplayed by the predetermined computer programme, because at best the human could only foresee (perhaps) 2 or 3 moves into the future, would not have time to consider all of the more unusual options, and would make mistakes. Hence, for many years, the very best chess players could beat their computer opponents by using their knowledge and intuition to effectively set up positions beyond what the simulations could predict, simply because each extra move roughly doubles the number of possible outcomes.
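The what-if search described above is essentially the classic 'minimax' idea. A toy Python version over an invented game tree, with the leaf numbers standing in for a chess engine's position evaluations:

```python
def minimax(node, maximizing=True):
    """Score a position assuming both sides pick their best option --
    the 'predetermined what-if simulations' of classic chess engines.

    `node` is either a number (a leaf's static evaluation) or a list
    of child positions (the moves available from here)."""
    if not isinstance(node, list):
        return node                       # leaf: just report its value
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A two-ply toy tree: three moves for us, each with two replies.
tree = [[3, 5], [2, 9], [0, 1]]
# minimax(tree) -> 3: the first branch guarantees at least 3 even
# against the opponent's best reply.
```

Each extra ply multiplies the number of leaves by the branching factor, which is exactly why the lookahead depth (and hence raw speed) mattered so much.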

More recently, computers have been trained on past performances, and/or have learnt games by repeated attempts, without the programmer coding for that specific task. These systems have proved to be more formidable opponents for even the best human players.

It appears that the best contemporary systems, which have been trained by observing real chess games, etc., or have 'taught themselves' winning strategies without those strategies being expressly coded by humans, are in some way imitating the very best chess players. If we describe the best chess players as being able to win games by being 'highly intelligent', then, to me at least, it seems the computer systems that match and exceed the ability of their human rivals may be showing some form of intelligence.

=========

Your suggestion that such intelligence is not 'artificial', implying that it is actually 'real', I find difficult to decide on. It might be that the mechanisms are so similar that, although the physical implementation is different, it would be inappropriate to use the word artificial.

However, if the mechanisms are different, and it is clear the physical implementation is different, then maybe a word to differentiate is appropriate ... perhaps there is a better word than artificial, but for the moment it is probably the strongest contender.

My understanding is that we do not fully understand the mechanisms involved in either the biological or the computer-based implementations, so deciding whether they are similar is currently impractical ... perhaps this will be resolved in the future.

---------------------

I am not sure where this discussion is heading ... I personally find it difficult and challenging, but possibly useful. I say 'possibly' because, although it is easy to dismiss it as just discussing the meaning of words, etc., which are merely convenient shorthands that can be redefined at will, it is also a way of considering what may be worth looking into.

Thanks for your thoughts and suggestions. Best wishes, Dave


robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

@davee wrote:

"My own limited background is largely based around physics and chemistry, which study simple materials and physical properties. Whilst there are still many things to discover, they do not tend to display the variability of living organisms."

And that is why I mentioned the book by Ashby
http://pespmc1.vub.ac.be/books/IntroCyb.pdf
which shows how to tackle complex systems in a scientific and rigorous manner relevant to robotics and AI.

There are different levels of understanding, but at each level you are talking about how things change in time (a behavior). Biology reduces to chemistry, which reduces to physics, but at each level there is scientific rigor and mathematics involved, with a different language being used and different models created.

Where I was coming from was the cybernetic point of view of AI.

quote from the book,

"It [the book] starts from common-place and well-understood concepts, and proceeds, step by step, to show how these concepts can be made exact, and how they can be developed until they lead into such subjects as feedback, stability, regulation, ultrastability, information, coding, noise, and other cybernetic topics."

"Here I need only mention the fact that cybernetics is likely to reveal a great number of interesting and suggestive parallelisms between machine and brain and society. And it can provide the common language by which discoveries in one branch can readily be made use of in the others."
[My emphasis]

@davee wrote:
"... people will sometimes describe a world-class athlete as displaying some form of high intelligence, perhaps because of their gymnastic capability."

There are "smart" moves in sport, although they do take talent and practice to execute. Sporting tactics are probably closer to computational intelligence, as tactics organize the moves.

@davee wrote:
"Behaviour similarly seems to depend on the field of study."

Of course. An apple falling from a tree can be assigned a height value which changes over time: an accelerating behavior. We would not call it an "intelligent" behavior. I would, however, say that what we call intelligent behavior is just as determinate as a falling apple, but unlike the apple, an intelligent entity can change its responses, because gravity is not its only input, and so it can change its state to move toward some stable state we call a "goal state". A skydiver does not just fall like a leaf; he or she can change the orientation of the body to move toward some goal landing site.

@davee wrote:
"Your suggestion that such intelligence is not 'artificial', implying that it is actually 'real', I find difficult to decide on. It might be, that the mechanisms are so similar, that although the physical implementation is different, then it might be inappropriate to use the word artificial."

The behavior is the same. It is not the physical components producing the behavior that define intelligence. This is the whole point made in Ashby's book: a system is a list of variables, and a behavior is how their values change over time. The values at any point define the state of the system at that time. That is it!

I could elaborate but if you read the book it does elaborate and I would just be quoting from it.

All the best,
John


   