
What is Machine Learning?

Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  

My personal answer to the question, "What is machine learning?" is simply that machine learning is any machine system that can learn. This would include any computer program written to achieve that goal.

Unfortunately (IMHO), machine learning today is almost exclusively associated with data analysis, recognition, and classification, specifically using ANNs (software Artificial Neural Networks that are essentially simulations of biological neural networks). I say this is unfortunate because that perspective tends to ignore other methods of creating machines that can learn. But this is where everyone in the A.I. community is heading, and so this is what the term "Machine Learning" basically means today.

Here's a video that offers "5 Beginner Friendly Steps to Learn Machine Learning"

I have personally been going down this particular path.   The only difference is that I do not view this path as being "exclusive" in terms of other methods of creating machines that can learn.   In fact, I see this entire  approach as merely one small tool in the toolbox.

Nonetheless, this is what most people in the field of A.I. will tell you Machine Learning is. It's basically learning how to construct various types of software ANNs using the tools referenced in the above video.

I would like to add here that these presentations make it appear far more difficult than it actually is, although I agree with the host of this video that just starting to write programs along these lines is the way to go. That's certainly how I've been learning.

I'm only a few months down the road of learning the methods described in the above video. Most of this time has been spent learning about the software tools mentioned. Fortunately for me this is quite enjoyable to study, so it doesn't seem like work to me. 🙂

In the video he suggests that an approximate time frame to learn all of this is two years. But that seems quite arbitrary to me. Without specifying exactly what a person will know in those two years, it doesn't make a lot of sense to put a timeline on it. You simply keep learning more over time.

Anyway, I just thought I'd post my comments on machine learning and the above video to demonstrate what the A.I. community currently takes "Machine Learning" to mean.

As I have already stated, I see the concept of machine learning as covering a lot more than just software ANNs used to classify and recognize data. In fact, I personally prefer to study algorithmic approaches to machine learning (i.e. old-fashioned programming). This is not to say that software ANNs wouldn't also be useful in that context, but rather that software ANNs alone are not the whole picture. Again, just my thoughts, for whatever they might be worth.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1556
 

Suggested definition:

Learning is change in behavior over time for the better (as judged by some criteria).

My view about building a learning machine, either with just hardware or with hardware and code, is that you are building a theory about a particular type of learning (a model). This model will be one perfectly precise definition of the word learning. So the idea is to develop a set of functions that we agree show learning behavior.  We test our theory by running the program to see if it does what we said it would do or not.
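As a toy illustration of one such "perfectly precise" model (the rule, data, and numbers here are entirely made up), behavior could be a thresholded linear rule and the criterion the fraction of cases it gets right; running the program then tests whether the behavior actually improves over time:

```python
import random

random.seed(0)  # make the toy run repeatable

def criterion(weights, cases):
    """The judging criterion: fraction of (input, target) cases the rule gets right."""
    return sum((weights[0] * x + weights[1] > 0) == t for x, t in cases) / len(cases)

def learn_step(weights, cases, lr=0.1):
    """One behavior change: nudge the weights when a sampled case comes out wrong."""
    x, t = random.choice(cases)
    if (weights[0] * x + weights[1] > 0) != t:
        sign = 1 if t else -1
        weights = [weights[0] + lr * sign * x, weights[1] + lr * sign]
    return weights

cases = [(-2, False), (-1, False), (1, True), (2, True)]
w = [0.0, 0.0]
before = criterion(w, cases)
for _ in range(200):
    w = learn_step(w, cases)
after = criterion(w, cases)
print(before, after)  # the behavior should be judged better afterwards
```

The program either does what the theory says (the criterion score rises) or it doesn't, which is exactly the kind of testable claim suggested above.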

 


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  

@casey

I agree. 

The modern definition of machine learning has become almost exclusively associated with designing and training ANNs.  But there are clearly other methods of creating algorithms that will allow machines to learn.   In fact, I am not even personally convinced that ANNs actually "learn" anything.  Instead they just respond to more and more data via classification and recognition methods, but they don't really have a clue what they are even doing.   So I'm not so sure that qualifies as "learning".

Nonetheless, in today's world, if someone asks us what "Machine Learning" means, the most commonly accepted answer is that it refers to training ANNs to recognize and classify larger and larger amounts of data.

I think it's important to recognize that just because specific definitions have become popular, we shouldn't allow those definitions to dictate how we proceed to address these questions.

Having said all of this, I do see useful value in ANNs to be sure.  I just think it's a grave mistake to think that ANNs represent machine learning in and of themselves.   Yet that appears to be the main idea in A.I. today.

I've seen too many online courses and videos that address "Machine Learning", and all they do is discuss methods of designing and training ANNs to the exclusion of everything else.   In other words, they certainly make it appear that as far as they are concerned, studying the design and training of ANNs covers the subject of "Machine Learning" completely.   After all, if they thought there was more to machine learning than just this you'd think they would at least mention this in their lessons.  But typically they don't.

Just like the video in the OP "5 Beginner Friendly Steps to Learn Machine Learning".  All he addressed was methods that ultimately apply to dealing with the design and training of ANNs and large data sets.   Nothing else was mentioned.  He may not have mentioned ANNs specifically, but all the tools and methods he was referring to are designed to do just that.

It's really hard to find information on machine learning techniques anymore that isn't focused on Artificial Neural Networks.   You basically need to do a search for retro methods to find alternative methods.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1556
 

My view is that learning is not a single function. If it fits the definition I suggested above, then it is an example of a learning function. An artificial neural network changes its behaviour over time for the better (as defined by the feedback) and thus is an example of a learning function.

The brain seems to behave like a parallel ANN of some kind when forming intuitive ideas, but riding on top of that there is a serial symbol-processing system associated with the contents of consciousness or directed action. A good layman's read on the subject is Steven Pinker's book "How the Mind Works". He is a proponent of the computational theory of mind, and writes that an ANN is doing multivariate statistical computations.


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

An artificial neural network changes its behaviour over time for the better (as defined by the feedback) and thus is an example of a learning function.

Actually, this isn't how ANNs work. ANNs do not change their behavior over time for the better as defined by feedback. Instead, they are "trained" ahead of time using computer algorithms that go through many iterations based on "known" input data. The desired outputs for that input data are also known.

An untrained ANN is incapable of "learning" anything on its own. And even a trained ANN does not continue to learn after it has been trained, unless it is put through yet another training session. But those training sessions aren't actually part of the ANN itself.
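To make that separation concrete, here is a minimal sketch (my own toy example, not anyone's actual system): the train() routine is a separate program that iterates over known inputs with known desired outputs, while the ANN itself is just a frozen function of whatever weights it was left with.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x):
    """The ANN itself: once the weights are fixed, this is just a frozen function."""
    w, b = weights
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # a single sigmoid unit

def train(inputs, targets, steps=2000, lr=0.5):
    """The training program: a *separate* algorithm that iterates over known
    inputs whose desired outputs are also known, recalculating the weights."""
    w = rng.normal(size=inputs.shape[1])
    b = 0.0
    for _ in range(steps):
        err = forward((w, b), inputs) - targets  # gradient of cross-entropy w.r.t. the logit
        w -= lr * inputs.T @ err
        b -= lr * err.sum()
    return w, b

# Known input data with known desired outputs (a two-input AND, for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
weights = train(X, y)

# After training, the network can only apply its frozen weights.
print((forward(weights, X) > 0.5).astype(int))  # expected: [0 0 0 1]
```

Notice that nothing inside forward() can change the weights; all the adjustment lives in the external training loop.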

This is why many scientists point out that ANNs are not mimicking how a brain actually works.

Posted by: @casey

The brain seems to behave like a parallel ANN of some kind when forming intuitive ideas but riding on top of that there is a serial symbol processing system associated with the contents of consciousness or directed action.

This is true. The brain updates its parallel neural networks on the fly, something that ANNs cannot do. Or perhaps I should say, "If ANNs are being updated on the fly, then this is being done by something other than the ANNs themselves" (i.e. some other algorithmic computer program).

In fact, I am hoping to create a thread on this very topic at some point. The nature of these other computer algorithms or programs that update ANNs is where the real learning abilities lie. And the way this is currently being done is not likely to be the way our brains do it. It's currently more of a trial-and-error process than an advance in actual understanding of what's going on.

This is why even those who fully understand ANNs will tell you that a self-driving car would need to drive off a cliff 10,000 times before it finally changes its behavior to stop doing that. This isn't because the car has finally recognized that driving off a cliff is a bad thing, but rather because it has simply updated its response to match a desired output. This does not constitute an "understanding" of anything.

This is why I say that it hasn't "learned" anything. All it has done is adjust a final output to match a desired output. It has absolutely no understanding of why this adjustment works or what it even does.

So, because of this, ANNs aren't truly "intelligent" at all.    Of course, I'll grant that this can all depend on how a person is defining "intelligent".    If we define intelligent as nothing more than a change in behavior that leads to an improvement then many inanimate objects could be said to be exhibiting intelligent behavior.

For example, I once had a lawn mower that made a terrible racket and vibrated a lot. One day, something happened and it quit that bad behavior and started running very quietly and smoothly again. Apparently there must have been something tangled up in the blade that was throwing it off balance, and whatever it was finally broke off and everything was good again. There was certainly no intelligence involved in that process.

Posted by: @casey

Suggested definition:

Learning is change in behavior over time for the better (as judged by some criteria).

This definition alone can be misleading, because it would imply that anything that appears to change behavior for the better must then be intelligent. But that doesn't necessarily follow. It would need to do this intentionally. And from my perspective it would need to be more than just intentional; it would also need to have an understanding, in some sense, of why it was changing its behavior. This latter requirement appears to be lacking in ANNs, but not in a human brain. Humans tend to make adjustments based on "reason". ANNs are adjusted based on whether or not their output was acceptable. But they have no understanding of the reason they had to make the adjustment, beyond the fact that the current output was considered unacceptable by some other "agent" outside of the ANN. Even if that other "agent" was just a computer program.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1556
 

When I wrote that an ANN learns I assumed the understanding that it included its support hardware and software as part of the learning system. Biological brains could not learn without training data and some feedback from their innate reward system as to what was a good or bad outcome.

It is possible for a neural network with working weights to continue to re-weight its connections as it continues to receive new inputs and feedback. Biological brains are evolved neural networks that come with inbuilt structures and connections. In the case of some animals they are up and running at birth using such connections, without the ability to learn very much at all. A neural network could be trained by thousands of examples (as evolution would have provided), or it could simply be a clone of a trained network and improve on those innate weighted connections over its lifetime of experience.

Evolution can be thought of as a learning system of the species, with its reward system being reproductive success. Our society is also a learning system, as complexity accumulates. Once we discover something like making fire, that skill is passed on not by the genes but by example and speech (and now writing and YouTube videos).
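The re-weighting point can be sketched in a few lines (a toy least-mean-squares example; the data and numbers are made up): a system that starts with working weights keeps adjusting them one new (input, feedback) pair at a time, with no batch retraining involved.

```python
def online_update(w, b, x, target, lr=0.1):
    """One incremental re-weighting from a single new input and its feedback."""
    err = (w * x + b) - target
    return w - lr * err * x, b - lr * err

# Start from already-working weights and keep adjusting as experience streams in.
w, b = 1.0, 0.0                                   # rough initial fit for y = 2x + 1
stream = [(x, 2 * x + 1) for x in (0.5, -1.0, 2.0, 1.5, -0.5)] * 40
for x, t in stream:
    w, b = online_update(w, b, x, t)
print(round(w, 2), round(b, 2))  # settles near w = 2, b = 1
```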

Learning is a change in behavior over time for the better (as judged by some criteria).

RoboPi wrote: This definition alone can be misleading because this would imply that anything that appears to change behavior for the better must then be intelligent. It would need to do this intentionally. And from my perspective it would need to be more than just intentional, it would also need to have an understanding in some sense of why it was changing its behavior.

Casey replies: What is it you think "understanding" or "intentional" means in terms of a machine? We may make sensible, non-random adjustments using reason, but I would suggest that deciding what needs adjusting only requires a reward system. The agent outside of us was evolution. I would also suggest that learning is an intelligent behavior. Intelligence is not something you have; it is a description of how you behave.

 

 


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

When I wrote that an ANN learns I assumed the understanding that it included its support hardware and software as part of the learning system.

But ANNs don't include the weight-calculating software as part of their support system.

Posted by: @casey

It is possible for a neural network with working weights to continue to re weight its connections as it continues to receive new inputs and feedback.

Not really. Weights can only be recalculated by going through the entire iteration process again, with the "new" data that has been verified to be correct added into the mix. In fact, it wouldn't make much sense for any new data to be added to the weights unless that new data had already been verified to be correct and true. But an already-"trained" ANN could never verify that new data is correct and true; it would have no way of knowing this. In fact, this is why I question calling ANNs "intelligent": in reality they don't have any intelligence at all. None whatsoever. They just do whatever their weights have been programmed to do.

Posted by: @casey

A neural network could be trained by thousands of examples (as evolution would have provided)

But that can't be the same process used to train ANNs because nature doesn't provide us with guaranteed correct sample data.  We need to figure out on our own whether incoming data makes any sense or not.   So our brains are not using the same process that we use to "train" ANNs.

Posted by: @casey

Evolution can be thought of as a learning system of the species with its reward system being reproductive success.

I agree.  But in this case failure means death.    A human isn't given guaranteed correct sample data and programmed based on whether or not it got the right answer.   So this is not the same process.

Posted by: @casey

Once we discover something like making fire that skill is passed on not by the genes but by example and speech (and now writing and utube videos).

And by understanding and comprehension.   Our  brains aren't just programmed weights of neurons.  We actually have a full understanding of what's going on.    So there's a lot more going on in our brains besides neurons being weighted.

Posted by: @casey

What is it you think "understanding" or "intentional" means in terms of a machine?

The same as it means for a human. This is why I favor a Semantic approach to A.I. I don't see how understanding and intention could ever be meaningful without a semantic system behind them. Even before words were invented, we still had a sense of "semantics" in terms of comprehending concepts and ideas. Language was a result of this understanding of concepts.

Posted by: @casey

Intelligence is not something you have,  it is a description of how you behave.

This is where you and I part paths. And that's fine. We don't need to agree. 🙂

Everyone has their own ideas of how things should be defined and or designed, and that's fine.

But I disagree that intelligence is a description of how something behaves.  I reject that idea.   We can design machines that are not intelligent but behave as if they are.   You gave the example in another thread of an Air Conditioner and considered that to be an example of "intelligence".  I simply reject that notion.  For me there's nothing intelligent about the air conditioner.  The intelligence resides in the person who designed the air conditioner not in the air conditioner itself.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1556
 
Posted by: @robo-pi
Posted by: @casey

Intelligence is not something you have,  it is a description of how you behave.

This is where you and I part paths. And that's fine. We don't need to agree. 🙂

Everyone has their own ideas of how things should be defined and or designed, and that's fine.

But I disagree that intelligence is a description of how something behaves.  I reject that idea.   We can design machines that are not intelligent but behave as if they are.   You gave the example in another thread of an Air Conditioner and considered that to be an example of "intelligence".  I simply reject that notion.  For me there's nothing intelligent about the air conditioner.  The intelligence resides in the person who designed the air conditioner not in the air conditioner itself.

Behavior is all there is, apart from the thing that is behaving. A computer consists of memory units and their connections. The state of the computer at each cycle is the list of values in each memory unit. The behavior of the computer is seen as changes in its states over time.

You can use high-level words to label behaviors as showing "understanding" or having "intentions", but it all bottoms out as behavior: a change in the machine's state over time.

I take an abstract, functional, behaviorist systems approach to describing and understanding how machines work, as explained in "An Introduction to Cybernetics" by Ross Ashby, which was written for those with only elementary mathematical knowledge.

This is the behavior (list of states over time) of a digital counter.
000
001
010
011
100
101
110
111
And repeats
000
001
...
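A few lines of code reproduce this list of states over time (a sketch, assuming a free-running 3-bit counter):

```python
def counter_behavior(bits=3, steps=8):
    """List the behavior (states over time) of a free-running binary counter."""
    return [format(t % (2 ** bits), f"0{bits}b") for t in range(steps)]

print(counter_behavior())           # ['000', '001', '010', '011', '100', '101', '110', '111']
print(counter_behavior(steps=10))   # ...then it wraps around and repeats: '000', '001'
```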

That is a simple machine but the principle is the same for any digital machine including machines we say are showing intelligent behavior.

 


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1556
 

Here is a thought for you.

When you have the conscious intention to lift your arm, how does that intention convert into all those lower-level impulses required to actually lift the arm?

Also, how is that intention represented in your neural circuits, and where did it come from (that is, how did the machine form the intention)?

When you decide to learn a motor skill, say touch typing, juggling balls, or riding a bike, you must first direct your attention to the task and monitor the results. However, after lots of practice you will be hitting those keys, juggling those balls, and riding that bike without any conscious direction or attention required. The higher levels have trained the motor modules to carry out the task all by themselves. PET scans show that during the learning process the higher levels of the brain are involved, but after you learn the task that activity is no longer there.

 


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

Behavior is all there is apart from the thing that is behaving.

Agreed.  I take this to be an observation that the thing that is behaving must then be taken into account as well since it's clearly part of the equation.

Posted by: @casey

The higher levels have trained the motor modules to carry out the task all by themselves. 

Agreed.  And this is why I'm interested in studying these "higher levels" rather than focusing on the motor skills that are clearly secondary.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1556
 

In what way would the thing be taken into account? Running is a behavior that can be applied to many things but is not itself a thing. When we say someone or something is "intelligent" we are referencing their or its ability to acquire and apply information or knowledge, which is a behavior. It is the behavior that decides if the thing is intelligent, not the thing itself.

The word 'intelligence' is an abstract noun. It is often conflated with 'sentience' or 'consciousness' or 'soul' which people do imagine as being a thing in the brain.  It may feel that way but that is not evidence it is the case.

My impression of a semantic net is that it involves a man-made database and is not grounded in the real world. If I ask the robot, "Is there a book on the table?", it has to be grounded in the real world, with a vision system capable of recognizing both the table and the book and their spatial relationship.

I would say the motor skills are primary not secondary. Reasoning is the same as goal directed motor behavior and is supported by the same neural machinery.


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

When we say someone or something is "intelligent" we are referencing their or its ability to acquire and apply information or knowledge which is a behavior.  It is the behavior that decides if the thing is intelligent not the thing itself.

You seem to be taking these concepts down an unnecessary philosophical rabbit hole.

I've already explained why behavior itself does not represent intelligence. I can program a robot to behave in an apparently intelligent fashion, and therefore the robot will exhibit what appears to be intelligent behavior. But this doesn't mean that the robot itself has any ability to figure anything out or understand anything beyond what I had programmed it to recognize. Therefore I think it would be absurd to suggest that the robot itself is intelligent, even though it clearly exhibits intelligent behavior.

So I simply reject your definitions.  They are not definitions that I find to be either meaningful or useful.

 

Posted by: @casey

My impression of a semantic net is that it involves a man made data base and is not grounded in the real world? If I ask the robot, "Is there a book on the table" it has to be grounded in the real world with a vision system capable of recognizing both the table and the book and their spatial relationship.

Again, I simply disagree with your view on this. I agree that the words humans have chosen to invent for the sake of communicating ideas and concepts are man-made. In fact, mankind has invented many different kinds of languages. However, what I disagree with is your suggestion that these semantic systems are not grounded in the real world. To the contrary, I see them as being firmly grounded in the real world. After all, even the languages of different cultures can be translated in a meaningful way into other languages. Why? Because the whole idea behind these languages is to convey meaningful concepts and ideas that are indeed grounded in the real world. So I don't understand your view when you suggest that semantics isn't grounded in the real world.

Also, you go on to use the example of a robot being able to recognize a "book" on a "table". But you are already using semantics to refer to these objects. Does the robot know what a "book" and a "table" are? Or has the person who programmed the robot just given it these terms to refer to various objects?

Could you fool the robot into thinking that a mere block of wood sitting on the table is a "book"? And would the robot have any way of checking to see if that's truly the case?

Psychologists actually do this sort of thing with small children, to test at what age the human mind begins to be able to recognize when things aren't as they had originally appeared to be.

In fact, I've actually been studying research that these scientists have been doing on young children with the hopes of using these same types of experiments and criteria to measure how well my robot is evolving in terms of its cognitive skills.

This is where the idle philosophy ends and the real world begins. We can actually design tests to see what a robot can understand and what it cannot.

Here's a link to an article on the timeline of human child development in terms of learning to talk:

Your child's talking timeline

I found this timeline quite interesting and I'm going to see how well my robot compares with this.

Toddler 12 months:

Notice that it takes a full year for a human toddler to learn to use only one or two words in a meaningful way. I'm hoping that my robot will get off the ground semantically much quicker than this. 🙂

But it is important the robot knows what the words mean and isn't just saying them because they have been programmed into a speech engine.   And this is where the Semantic A.I.  model begins.  The robot has to know what these words mean.  Not just speak them because it was programmed to do so.

At 16 months:

Talks to someone much of the time as opposed to just babbling to no one in particular. Calls you to get your attention ("Mommy!"), nods and shakes head for yes and no. Makes many common consonant sounds, like t, d, n, w, and h.

At 18 months:

Has a vocabulary of about 10 to 20 words, including names ("Mama"), verbs ("eat"), and adjectives ("cold"). Uses common phrases ("want doll") to make requests.

Notice that at 18 months we're only working with a vocabulary of 10 to 20 words! The key issue is to fully know and understand what those 10 or 20 words actually mean.

Almost 2 years in:

Starts putting two-word phrases together for more novel purposes ("Daddy go," "milk mess").

Two years and we're only putting together very crude two-word phrases.

A solid 2 years in:

Knows 50 to 100 words. Uses short, two- or three-word sentences and personal pronouns ("I fall down!" "Me go school?").

Things are starting to accelerate. If I had a robot that could construct meaningful 3-word sentences and actually know what it's talking about, I'd be breaking out the champagne! 🙂

These wouldn't just be 3-word sentences I programmed into a speech engine. These would be meaningful sentences the robot is constructing to convey meaningful ideas. Yep, I'd have champagne all over the place at this point.

In about 3 years:

Can carry on a simple conversation about something in the immediate environment. Asks simple questions frequently. Expands phrases from three- to six-word sentences and develops a vocabulary of 200 to 300 words, including lots of verbs.

Uses past tense by adding a "d" sound to verbs ("runned") and plurals by adding an "s" sound to nouns ("mans"). Uses pronouns (I, she, we) correctly. <- shows signs of understanding the meanings of these words.

If I make it this far I will proclaim to have created a valid A.I. 🙂

A robot that I can actually talk with and it will give me meaningful replies and understand the context of the conversation.

If I make it this far I'll consider the project to be a total success.

Not only this, but if I make it this far why should I think that this couldn't just continue to evolve?

3- 4 years:

Favorite words often include "why," "what," and "who." Can be understood most of the time. Can tell you what happened if you were out of the room.

4 - 5 years:

Communicates easily and can retell a simple story with a beginning, middle, and end while looking at pictures. Can use four to five sentences to describe a picture, with most of the grammar elements in place. Uses more than one action word in a sentence.

Pronounces most sounds correctly but may still have trouble with th, r, s, l, v, ch, sh, and z. Uses lots of descriptive words, including time-related words like "yesterday."

6 - 7 years:

Can describe how two items are the same or different, retell a story or event without the help of pictures, and recount past conversations and events. Uses some irregular plural nouns ("men," "teeth").

8 years:

Has mastered all speech sounds as well as the rate, pitch, and volume of speech. Uses complex and compound sentences correctly and is capable of carrying on a conversation with an adult.

Time to send the robot off to college! 🙂

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1556
 
Posted by: @robo-pi
You seem to be taking these concepts down an unnecessary philosophical rabbit hole.

I can assure you there is no hand-waving philosophy involved.

Language rides on concepts we already have, derived from learning about our world via our sensors. A child first interacts physically with its world to learn about it before it can learn to use a language. Objects are probably experienced as features that move together in 3D space. Different objects have different features, and some objects have features in common and thus belong together as a class.

You cannot tell a child "this is called a book" if it doesn't already know what object you are referring to. All you are doing at the beginning of teaching language is associating a word with an object. The child has to make that connection, perhaps by the word and the object continually being associated as occurring at the same time. That is what I mean by language being grounded in the real world. Once the concept of a word being used to represent an object is established, the road to learning and using language begins.
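That word-object association can be caricatured in a few lines (a deliberately crude sketch; the "objects" are just labels standing in for whatever the perceptual system delivers): a word's working meaning is simply the object it has most often co-occurred with.

```python
from collections import Counter, defaultdict

# Toy model of grounding: a word becomes linked to the object it has most
# often co-occurred with in experience.
associations = defaultdict(Counter)

def observe(word, perceived_object):
    """The word and an object occur at the same time; strengthen the link."""
    associations[word][perceived_object] += 1

def meaning(word):
    """The object most strongly associated with the word so far."""
    return associations[word].most_common(1)[0][0]

# "book" is usually uttered while the child attends to the book,
# though sometimes the table happens to be in view instead.
for seen in ["book", "book", "book", "table"]:
    observe("book", seen)
for seen in ["table", "table", "book"]:
    observe("table", seen)

print(meaning("book"), meaning("table"))  # book table
```

The sketch deliberately leaves out the hard part, which is how the perceptual system segments and recognizes the objects in the first place.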

Take the famous case of Helen Keller born deaf and blind.
https://en.wikipedia.org/wiki/Helen_Keller

Sullivan arrived at Keller's house on March 5, 1887, a day Keller would forever remember as my soul's birthday.[13] Sullivan immediately began to teach Helen to communicate by spelling words into her hand, beginning with "d-o-l-l" for the doll that she had brought Keller as a present. Keller was frustrated, at first, because she did not understand that every object had a word uniquely identifying it. In fact, when Sullivan was trying to teach Keller the word for "mug", Keller became so frustrated she broke the mug.[17] But soon she began imitating Sullivan's hand gestures. “I did not know that I was spelling a word or even that words existed,” Keller remembered. “I was simply making my fingers go in monkey-like imitation.” [18]

Keller's breakthrough in communication came the next month, when she realized that the motions her teacher was making on the palm of her hand, while running cool water over her other hand, symbolized the idea of "water".

 


   
Robo Pi
(@robo-pi)
Noble Member
Joined: 3 years ago
Posts: 1909
Topic starter  
Posted by: @casey

Language rides on concepts we already have derived from learning about our world via our sensors.

And I've already explained that when I'm speaking about Semantic A.I. I'm not limiting this to the words themselves but rather the concepts the words are being used to label.  Perhaps you didn't catch that part?

In any case, I'm not interested in arguing about this.   If you have no interest in a semantic approach to A.I. and machine learning that's perfectly ok.  No one is trying to convince you otherwise.   All I'm doing in these threads is sharing my approach.

If you have a different approach to A.I. and machine learning you are more than welcome to share the approach that you prefer.   This is what makes the world an interesting place. Everyone has their own ideas on how they would prefer to approach and tackle various problems.

So please feel free to describe your approach to A.I. and machine learning. That's the whole point of these forums. 🙂

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1556
 
Posted by: @robo-pi

In any case, I'm not interested in arguing about this.   If you have no interest in a semantic approach to A.I. and machine learning that's perfectly ok.  No one is trying to convince you otherwise.   All I'm doing in these threads is sharing my approach.

I have had an interest in A.I. and an interest in how it might be implemented for a long time. I now remember reading about semantic networks a long time ago in an AI book.  After you first wrote about them I actually spent some time reading about them in case I missed something.

However, in these posts I was writing about the theory of machine learning as I understood it, not about semantic networks in particular. I don't have a fixed view about any approach, but if I can't question a particular approach then no exchange of views or knowledge can take place.

https://users.cs.cf.ac.uk/Dave.Marshall/AI2/node59.html

Anyway, I will drop the subject and get back to coding my simple-brained robot base ...


   