
AI

robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

 

https://forum.dronebotworkshop.com/artificial-intelligence/saying-goodbye/

@davee wrote:
Simple tasks like a vacuum cleaner can be achieved with relatively simple algorithms without invoking AI, but applying AI as an alternative approach is still valuable because, assuming it is successful, it may indicate a means of expanding into a further 'skill' which may be less amenable to traditional programming.

For some reason I see the term "artificial intelligence" being equated with the term "neural network". They are both examples of traditional programming. All programs do something "intelligent"; that is why they are useful. "Neural networks" are doing multivariate statistical processing on data.

Steven Pinker: How the Mind Works.
"... connectionist networks are not particularly realistic models of the brain, despite the hopeful label “neural networks".

The vacuum cleaner has always used artificial intelligence to control its behaviors, starting with a simple wandering robot that, when it bumps into something, turns and heads off in another direction, through to the more complex artificial intelligence being used in the latest vacuum mopping robots.
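
That bump-and-turn behaviour is simple to write down explicitly. A minimal sketch in Python (the bump sensor here is just simulated so the snippet runs on its own; it is not any particular robot's API):

import random

def read_bumper():
    # Stand-in for a real bump sensor: simulate an occasional collision
    # so this sketch runs with no hardware attached.
    return random.random() < 0.2

def wander(steps=20):
    # The whole 'intelligence' of the early wandering robots:
    # drive straight until something is hit, then turn a random
    # amount and carry on.
    heading = 0
    for _ in range(steps):
        if read_bumper():
            heading = (heading + random.randint(90, 270)) % 360
            print(f"bump - new heading {heading} degrees")
        else:
            print(f"driving forward on heading {heading} degrees")

wander()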

But, if something is too far out of my experience and understanding, I may stay mute in deference to those who can make a positive contribution.

I take it that your expertise is electronics rather than programming?

A robot has a mechanical, an electronic and a software component. Actually you could build one out of electronic components alone, as was done years ago with the first attempts at making a machine behave in a lifelike way. One of the first books I read on robotics was "Build Your Own Working Robot" by David Heiserman, and that robot used logic gates to do the "thinking". Actually what we call a computer is simply a collection of electronic components that you wire up with a program.


   
THRandell
(@thrandell)
Brain Donor
Joined: 3 years ago
Posts: 224
 

Posted by: @robotbuilder

The vacuum cleaner has always used artificial intelligence to control its behaviors, starting with a simple wandering robot that, when it bumps into something, turns and heads off in another direction, through to the more complex artificial intelligence being used in the latest vacuum mopping robots.

I totally agree with this one (for what it's worth). The behavior-based approach to programming robots came from the work of Rodney Brooks at MIT.

 

Tom

To err is human.
To really foul up, use a computer.


   
(@davee)
Member
Joined: 3 years ago
Posts: 1691
 

Hi @robotbuilder,

   I certainly hope I am not offending anyone, so apologies if my language and ignorance are in any way upsetting.

  As AI is certainly not an area in which I claim any expertise, I offer this not as a sermon, but as a discussion on which I would appreciate some useful feedback.

I am not very comfortable with simply equating useful programs with intelligence. There can be few on this forum who haven't come across 'blinky' to check out a microcontroller board .. blinky does a very useful job ... as does its VDU era forerunner 'Hello World' ... but neither strikes me as intelligent.

So personally, to be of interest, Artificial Intelligence, Machine Learning or "What's it called?" must offer something that 'traditional programming' can't do.

And the fact that the machine I am imagining starts with 'traditional' hardware and software roots, like statistical processing on machines with arithmetic and logic processors, etc., does not preclude it from acquiring a more interesting 'life' of its own.

A foetus, largely isolated from the outside world and its stimuli, develops to the stage at which it can survive outside of the womb. It is only after it is born that the main 'learning and developing by experience' process can begin in earnest.

I find it interesting, but by no means a critical factor, if the system in some ways imitates a 'living' nervous system. For someone looking for an insight into brain and nervous systems, a close parallel would probably be essential. But I am more interested in looking for a class of machines that can do something 'traditional' programming methods find impractically difficult to construct. Whether or not it simulates a living system in the design of its mechanism is largely immaterial.

------------------------------

My technical background includes both software and electronics .. it includes R&D, proposing and prototyping new product concepts, and so on, where the usual descriptive job titles are much more blurred. That has included designing and building (small) computers from the chip level upwards, as well as 'animating' them with appropriate software.

-----------------------------------------------------------------

I appreciate terms like AI & ML may have formal definitions, and that I am being cavalier with my usage, for which I apologise. My feeble excuse is I don't know any better ones to describe one or more class(es) of machines involving computing capability, which are different from the majority over the period of roughly 1940-2000+.

I would suggest during that period, the 'norm' would be to design a machine to do a specific range of tasks, and the means of achieving those tasks would be expressly embedded in the design and implementation of the machine. The embedding may be hardware .. an analogue-to-digital (A/D) converter to measure a voltage ... it may be software .. a program to count how many times a light beam is interrupted ... commonly a mix of both.

Whilst the designer of the A/D converter might not know the final usage of his design, even if it was hardware interfaced to a processor, it still wouldn't do much until the processor had been given an explicit programme.

So in effect, human design needs to be employed to enhance a system by adding hardware and software up to the point it is functional. Once a machine is functional, it might be able to discover things within its environment, like its battery voltage ... but only if a human has explicitly embedded it with instructions to do that. Those instructions may be embedded in different manners .. part of some hardware logic, some microcode, a paper tape with holes ... the list is endless, but they all involve one or more humans doing something to embed its specific capability.
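
To make that concrete, the light-beam counter mentioned above only counts because a human wrote the counting rule in explicitly. A minimal sketch (the list of readings simply stands in for repeated sensor reads; on real hardware this would poll a GPIO pin or an ADC threshold):

def count_interruptions(samples):
    # Count falling edges: beam goes from unbroken (True) to broken (False).
    # Every rule here was put in explicitly by a human designer.
    count = 0
    previous = True
    for state in samples:
        if previous and not state:
            count += 1
        previous = state
    return count

# Simulated sensor readings: True = beam intact, False = beam interrupted.
readings = [True, True, False, False, True, False, True]
print(count_interruptions(readings))  # prints 2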

------

My very simplistic understanding is that the machines now being developed are a bit different.

They often start from fairly similar roots to traditional computing based machines .. arithmetic/logic processors, memory, communications, peripherals to connect to real things, etc. The design of the parts and their interconnections is still manually derived.

They also often have a level of 'manually' written code for the processors. I am sure there are a range of approaches, but at a glance, some look a bit like curve fitters, and interpolators/extrapolators of those curves, combined with yet more arithmetic capability for weighting the numbers derived from these operations.

So up to this point, there is a kind of parallel with a 'standard' computer, with input/output devices, and some library algorithms for handy things like calculating sums and differences, and plotting curves.

--------------------

Maybe imagine a small computer with keyboard, screen, and just enough operating system to support a fresh copy of Excel installed on it, with no data, spreadsheets, macros, etc.  (This copy of Excel is over 20 years old and has no features developed after 2000.)

----------------------------------------

The Excel spreadsheet could be connected (by typist or machine interface) to a constant supply of numbers ... but other than display, store and (maybe) lose them, Excel couldn't do anything with the numbers.

It could only begin to 'analyse' them, when someone, i.e. a human, adds some extra 'information' as to what to do with these numbers.

--------------------------

Let's further imagine this Excel's job is to take each incoming number, multiply it by 3, and put the result in a column adjacent to the incoming data.

Obviously a human could directly 'program' the operation. That is the 'traditional' programming approach.
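
Outside of Excel, the 'traditional' version is a single explicit line of code, because the human already knows the rule. A minimal sketch:

def respond(stimulus):
    # The human designer has embedded the rule directly: multiply by 3.
    return 3 * stimulus

incoming = [1.0, 2.5, 4.0]
print([respond(x) for x in incoming])  # [3.0, 7.5, 12.0]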

-------------------------

Now let's consider if the machine can figure out what to do itself. Obviously we need to give some clue as to what we want, but we can't explicitly 'programme' it  .. perhaps we don't know the 'times 3' operation is required.

----

Further imagine this task of taking a number, and storing 3 times that number, is something useful that has been going on for the last year, so there is a data file with thousands of these pairs of numbers, but the calculation was performed manually, in complete secrecy, by an unknown, untraceable person.

Perhaps, most of the pairs are correctly calculated, but some have errors.

-------------------------

We could now feed the pairs of numbers to the same Excel spreadsheet, but though most humans would realise the likely relationship in a few seconds, the Excel program would never 'realise' it without human intervention.

------------------------------------------------------------------------------------------------------------

But, if my understanding is correct, the machines we are now discussing could be fed a short run of the data pairs as training data, marking only the first column as 'stimulus' and the second column as 'response', and the machine will 'train' or 'program' itself. It will build itself an abstract model that achieves the same result as the human.

If this process is successful, thereafter we should be able to submit new stimulus values, and it will predict the correct response answers, or at least something close to them.

(All sorts of problems like rounding errors, etc. may make this task harder than it sounds, but this assumes this machine has no such practical limitations.)
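
A minimal sketch of that train-then-predict idea (ordinary least-squares line fitting standing in for whatever a real ML toolkit would actually do; the noisy data pairs are made up for illustration):

import random

# Fabricated history: a year's worth of (stimulus, response) pairs,
# mostly response = 3 * stimulus, with a few 'manual calculation' errors.
random.seed(1)
pairs = []
for _ in range(1000):
    x = random.uniform(0, 10)
    y = 3 * x
    if random.random() < 0.05:          # the occasional human slip
        y += random.uniform(-1, 1)
    pairs.append((x, y))

# 'Training': fit response = a * stimulus + b by ordinary least squares.
# Nobody tells the machine that the hidden rule is 'multiply by 3'.
n = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs)
sxy = sum(x * y for x, y in pairs)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
print(f"learned model: response = {a:.3f} * stimulus + {b:.3f}")

# 'Prediction': submit new stimulus values and get close-to-correct responses.
for new_x in (2.0, 7.5):
    print(new_x, "->", round(a * new_x + b, 3))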

---------------

This case is obviously incredibly trivial, but in principle, if we have a second training set, for which the relationship to calculate the response is more abstract, then there is a good chance the operation could be repeated and the machine will 'learn' to predict the values according to the second relationship.

---------------------------

This ability to 'learn by observation' is the type of machine I am referring to. This trivial case, I might suppose, fits within the ML (Machine Learning) category. I am presuming a far more complex example might be categorised as AI (Artificial Intelligence). I confess to not knowing whether this is true. Personally I would like a blanket term that covers all of the cases in which the application is learnt by observation, without being explicitly commanded or programmed to do that task.

Does such a term exist?

------------------------------------------------------------------

I haven't worked on vacuum cleaners and I don't know what approach commercial vacuum cleaner robots have adopted, but I would have thought a relatively logical approach, of determining boundaries and sweeping within those boundaries, could be coded, maybe somewhat similar to the fill techniques for colouring irregular areas on a screen, plus some tricks for when an object is encountered. Not trivial, maybe, but I didn't think it required AI or any related methodology as the only practical approach. (That is not to say AI or similar is not an alternative option.)
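
As a toy illustration of that kind of explicitly coded coverage (a simple lawn-mower sweep over a made-up grid map; this is not meant to resemble any commercial cleaner's firmware):

# Toy floor plan: '.' = open floor, '#' = obstacle.
room = [
    "..........",
    "...##.....",
    "...##.....",
    "..........",
]

def sweep(grid):
    # Lawn-mower (boustrophedon) coverage: sweep each row, alternating
    # direction, skipping obstacle cells. Entirely hand-coded logic,
    # no learning involved.
    visited = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        for c in cols:
            if row[c] == ".":
                visited.append((r, c))
    return visited

covered = sweep(room)
open_cells = sum(row.count(".") for row in room)
print(f"covered {len(covered)} of {open_cells} open cells")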

Note, a machine which has been specifically programmed to scan and/or explore, maybe with or without some randomisation, and build up a map of data is a traditionally crafted program. It may be very useful and skillfully crafted but it doesn't fit my view of a class of machine that is 'different from the norm'.

--------

Also Please Note: Whilst I have said that my interest is to look for machines that can do something that is 'too difficult' for a traditional programme approach, that does not mean it isn't sensible or worthwhile to undertake a project that is also amenable to traditional programming, since there will be many practical lessons that need to be learnt if this technology is to become practical.

The more 'purist' definition referred to a 'final' direction, rather than any steps on the way.

I suggested the definition because it can be argued, though probably not rigorously proven, that a traditional programming solution has several natural advantages: it needs less processor hardware and lower power consumption, it is easier to demonstrate that the system is stable and safe, it is easier to understand how it will react to 'unexpected' situations, and so on.

These factors put an AI/ML system at a disadvantage, particularly in safety critical situations, like a self-driving car. However, if the AI/ML system can be developed to produce a system that can be demonstrated to be safer than the 'average' human alternative, and is performing a task that cannot be achieved by 'traditional programming' techniques, then the 'traditional programming is better and safer' style argument becomes irrelevant.

------------------------

Thanks for reading this diatribe.

  Is it just bovine manure?

   Is there a more appropriate term for machines that learn by observation or other indirect means?

   Am I breaking the rules by aligning such machines with AI and/or ML?

(I am not looking for an advanced treatise to any of these questions, maybe just a phrase or two to ponder.)

Best wishes  and thanks in advance to all, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

@davee 

Is there a more appropriate term for machines that learn by observation or other indirect means?

Learning by observation (or experience) is an intelligent behavior and is, as you suggested, a different class of machine than one that cannot learn from experience.

However I would suggest that the word "intelligence" really refers to a behavior that makes sense to us.

A chess playing program makes intelligent moves (they are not random) and thus shows intelligent behavior even if it hasn't been programmed to learn from experience.

You might say a machine that shows learning behaviours is "more" intelligent than one that doesn't show such behaviours but I would still say a behavior can be called intelligent regardless of it being innate or learned.

So-called neural nets have been programmed to change their weights until some equilibrium is achieved; they did not learn to do this by themselves.
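
The point is easy to see in code: the weight-update rule itself is written by a person, and the 'net' merely applies it. A minimal sketch of a single neuron fitted by gradient descent on made-up data:

# A single 'neuron' y = w*x + b fitted by gradient descent on made-up pairs.
# The update rule below is written by a human; the network only applies it.
data = [(x, 3 * x) for x in range(1, 6)]
w, b, rate = 0.0, 0.0, 0.01

for _ in range(2000):                       # iterate towards equilibrium
    grad_w = grad_b = 0.0
    for x, target in data:
        error = (w * x + b) - target
        grad_w += 2 * error * x
        grad_b += 2 * error
    w -= rate * grad_w / len(data)          # the human-written update rule
    b -= rate * grad_b / len(data)

print(f"w = {w:.3f}, b = {b:.3f}")          # settles near w = 3, b = 0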

I think about all this in the most abstract manner, which today I think they call systems thinking, but when I was first exposed to these ideas it was called cybernetics and was simply explained in a book I bought called "An Introduction to Cybernetics" by Ross Ashby. It was written for those with limited mathematical skills, such as biologists who had to deal with very complex natural systems. It really made the abstract concept of a machine clear to me with respect to my understanding of how computers are built.

http://dspace.utalca.cl/bitstream/1950/6344/2/IntroCyb.pdf

 


   
(@davee)
Member
Joined: 3 years ago
Posts: 1691
 

Hi @robotbuilder,

   Thanks for your thoughtful reply. I can see your general argument; it is clearly expressed, and it is not one that is easy to argue against, because there is a good deal of merit and reality in it, yet it still leaves me with an uncomfortable feeling.

-----------------

One issue, which may be at the heart of my confusion, and has 'concerned' me for at least a couple of decades, is 'What is "intelligence"?'

With a little help from Microsoft's 'AI' engine, Bing, I rediscovered a quote I vaguely remembered:

'When I use a word,' Humpty Dumpty said in rather a scornful tone, 'it means just what I choose it to mean - neither more nor less.' 'The question is,' said Alice, 'whether you can make words mean so many different things.' 'The question is,' said Humpty Dumpty, 'which is to be master - that's all.'

(from Lewis Carroll's Through the Looking-Glass)

"Intelligence", relating to living things including people, seems (to me) to be one of those words, that means, roughly the same thing to everyone, but in detail, a different thing to everyone.

An example I came across a couple of decades ago that still troubles me: is (say) a gold medal Olympic gymnast, with no particular indications of 'high intelligence' outside of gymnastics ability, 'Highly Intelligent'?

It troubles me, because I can see an argument for answering 'yes' and I can see an argument for saying 'no'. Both arguments look plausibly correct, but are contradictory, leaving me with an enigma.

----------

So until I can establish what 'Intelligence' is, the term 'Artificial Intelligence' feels like it has a foundation of quicksand.

----------

I need to read some more and cogitate on it. If I feel I have something to add later, I'll post again.

-------------------------------

As an aside, the Cybernetics book you provided a link to is authored by the director of a mental hospital that is now long gone, but was based less than 10 miles from where I live. I had never heard of it, but a quick Google suggests it was a leading research mental hospital in the first half of the 20th century. Small world, isn't it?!

-------------

Best wishes, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
Topic starter  

@davee 

An athlete may be said to have physical intelligence in that they select moves that work to achieve some goal outcome.

Some time ago there was emphasis put on "emotional intelligence" for social success vs "intellectual intelligence."

Natural selection could be seen as an intelligent process.

If you type "intelligence" into the PDF find-text box of IntroCyb.pdf, you find:

"It is clear that many of the tests used for measuring "intelligence" are scored essentially according to the candidate's power of appropriate selection."
...
"Thus, it is not impossible that what is commonly referred to as "intellectual power" may be equivalent to "power of appropriate selection".

Another thought is how people confuse intelligent behavior with being sentient (alive, aware, conscious).

"And though intelligence — the capacity to gain and apply knowledge — isn't a synonym for sentience, the two are often equivocated."

https://futurism.com/ai-isnt-sentient-morons

 

 


   