
What is Artificial Intelligence?

73 Posts
9 Users
7 Likes
18.1 K Views
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
Topic starter  
Posted by: @stven

Still plowing through this interesting thread. At some point you asked for definitions of AI. I wanted to share this thought-provoking “Up and Atom” YouTube video that offers some historical context.

A couple things to note about the "Up and Atom" video.

First and foremost: no mention of Semantic A.I. This is not surprising, since Semantic A.I. is not currently a hot topic.

Instead she mentioned Symbolic A.I., which is quite different from Semantic A.I. We should note, however, that Symbolic A.I. was not a complete failure. It simply didn't produce the ultimate results originally expected of it.

Then in the second half she addresses Neural Networks. No surprise there, as this is the RED HOT topic in A.I. today. However, there are quite a few things to note about ANNs.

First, just like Sabine, Mia also acknowledges that ANNs do not learn like humans learn.

In fact, ANNs actually aren't capable of learning anything at all. This is a quite misunderstood aspect of ANNs. ANNs need to be "trained" before they will even work, and it is during this training process that the ANN is said to be "learning" something. But this is really nothing more than bad terminology. All that's happening is that the ANN is being mathematically programmed to generate the correct outputs for given inputs.

Even Mia points out that it requires thousands of known examples as data input before an ANN "learns", and again, "learn" is the wrong word because nothing is being learned: weights between connections are simply being calculated from known input examples.
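
The "weights being calculated" point can be made concrete with a toy example. Below is a minimal sketch (not any particular library's API, and far simpler than a real ANN) of a single perceptron "trained" on the AND function. The training loop does nothing but arithmetic on weights in response to labeled examples, which is the entirety of what is being called "learning":

```python
# A single perceptron "learning" the AND function.
# "Training" here is literally just repeated weight arithmetic
# driven by known input/output examples.

def train_perceptron(examples, lr=0.1, epochs=50):
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            # Weights are *calculated* from labeled examples --
            # no understanding involved, just arithmetic.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Known examples: the four rows of the AND truth table.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
predict = lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
```

After training, `predict` reproduces the AND table, yet nothing was "understood" at any point; the weights simply settled where the arithmetic pushed them.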

So if the A.I. community thinks that ANNs are the answer to A.I., I think they are going to be just as disappointed as they were when they thought that Symbolic A.I. would be the answer.

I think ANNs will certainly be a useful component in an A.I. system, as will symbolic logic, but neither of those two is going to take it to the ultimate goal. On the other hand, Semantic A.I. may very well be the key. After all, if we stop and think about it, semantics is how humans learn almost everything they know. So why not use the way humans think to create an A.I. system?

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 
Posted by: @robo-pi

Who was it that sought that goal?  DB1?  Or the person who programmed DB1 to seek out a charger when the battery gets low?

I would say it was the builder's goal and it was given to DB1 along with the means to achieve that goal.  The goal now belongs to DB1 even if it didn't choose to have that goal.  Our primary goals were given to us by an evolutionary process but they are now our goals even though we didn't choose them.

Regardless of how it all comes about, if we observe something acting as if it had a purpose then we would call it intelligent. 


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
Topic starter  
Posted by: @casey

Regardless of how it all comes about, if we observe something acting as if it had a purpose then we would call it intelligent. 

That's certainly one possible definition.

If we adopt the definition that any system which has a goal and can fulfill it counts as intelligent, then our task is easy. All we need to do is design a system that has a goal and can fulfill it.

As you have already suggested, a simple heater or air conditioner that automatically turns on and off based on a thermostat already satisfies that requirement.
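
The thermostat example is easy to state as code. The sketch below (setpoint, hysteresis values, and toy room dynamics all invented for illustration) is the entire "goal-seeking" machinery of such a system: two comparisons and a dead band.

```python
# A bang-bang thermostat controller: the whole "purpose" of holding
# temperature near a setpoint emerges from two comparisons.

def thermostat_step(temp, cooling_on, setpoint=22.0, hysteresis=1.0):
    """Return whether the air conditioner should run this step."""
    if temp > setpoint + hysteresis:
        return True      # too warm: switch cooling on
    if temp < setpoint - hysteresis:
        return False     # cool enough: switch cooling off
    return cooling_on    # inside the dead band: keep current state

# Simulate a warm room with made-up dynamics: the temperature
# settles into a band around the setpoint, looking "goal-directed".
temp, on = 30.0, False
for _ in range(100):
    on = thermostat_step(temp, on)
    temp += -0.5 if on else 0.2   # toy heating/cooling rates
```

Whether this counts as "having a goal" is exactly the definitional question under discussion; the code itself is just a conditional.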

So that's quite a low bar for defining intelligence, don't you think?

Also you say,

Posted by: @casey

if we observe something acting as if it had a purpose then we would call it intelligent. 

Wouldn't we need to clarify exactly what we mean by "acting as if it had a purpose"?

I'm looking out a window and it's raining outside.  There are water droplets on the window.  They all appear to be making their way to the bottom of the glass.   Should I say that they are exhibiting intelligent behavior because it appears that they have the purpose to reach the bottom of the window?

I just heated up some water to make some soup. I noticed that air bubbles form at the bottom of the pan and, without fail, they all rapidly float to the top of the water as it heats up. Should I consider those air bubbles to be intelligent, since they are acting as if they have the purpose of getting to the surface of the water?

It's not my intent to give you a hard time. I'm simply pointing out that oftentimes definitions may not be as rigorous as we might have first thought. An appearance of purpose, or even an actual purpose, may not have any intelligent agent behind it at all.

Without fail, every time I toss a rock into the river it sinks to the bottom.  Does that mean that a rock is acting as though it has the purpose of going to the bottom of the river?

I mean, if you're willing to give an air conditioner credit for being intelligent because it turns on and off based on a thermostat, how is that much different? I see the air conditioner doing basically the same thing as a drop of water sliding down a window, an air bubble floating to the top of a pan of water, or a rock sinking in the river. The air conditioner is basically doing the only thing it can do. It doesn't need to be intelligent to react to a thermostat. It's just doing what it does because that's all it can do. It's not seeking a goal to keep the temperature at a certain level; that just happens due to the physical properties of the thermostat that controls its on/off switch.

So is it really useful to classify it as "intelligent" just because it might appear to have a goal, when in reality it's just doing the only thing it can do?

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 

RoboPi: Wouldn't we need to clarify exactly what we mean by "acting as if it had a purpose"?

Place your robot with a low battery charge in a room and see if it moves to the charger. If it does move to the charger, pick it up, place it somewhere else in the room, and see if it still moves to the charger. If the robot always moves to the charger whenever its battery has a low charge, then you could reasonably conclude it has a "purpose".
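
That test can itself be sketched in a few lines. The toy agent below (a hypothetical grid-world, not DB1's actual code) converges on the charger no matter where it is placed, which is exactly the observable regularity being labeled "purpose":

```python
# A toy agent that heads for its charger whenever its battery is low.
# Replacing it anywhere in the "room" (any start position) doesn't
# change the outcome -- it still ends up at the charger.

def step_toward(pos, target):
    """Move one unit along each axis toward the target."""
    return tuple(p + (t > p) - (t < p) for p, t in zip(pos, target))

def run_to_charger(start, charger, max_steps=100):
    pos = start
    for _ in range(max_steps):
        if pos == charger:
            break
        pos = step_toward(pos, charger)
    return pos

# Place the robot anywhere; it always converges on the charger.
assert run_to_charger((0, 0), (5, 7)) == (5, 7)
assert run_to_charger((9, 2), (5, 7)) == (5, 7)
```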

RoboPi: The air conditioner is basically doing the only thing it can do.

As is true for all systems. We make the best move we can to achieve a goal as determined by a set of computations. Smart people make better computations as to what is the best move.

RoboPi: It doesn't need to be intelligent to react to thermostat.

The thermostat is part of the intelligent system.

My understanding of these concepts was formed a long, long time ago after reading the book "An Introduction to Cybernetics" by W. Ross Ashby. It was written to be understood by those without a deep understanding of mathematics.
http://dspace.utalca.cl/bitstream/1950/6344/2/IntroCyb.pdf

I did read somewhere that the book is now a bit out of date but still it made sense to me.

 


   
Duce robot
(@duce-robot)
Member
Joined: 5 years ago
Posts: 672
 

I still say it will be very difficult, if not impossible, to make a mechanical human. Consider that the heart and the small intestine have almost as many neurons as the brain; that's why we can get a "gut" feeling, so that would have to be factored in. Off to my work-a-day life now.


   
Spyder
(@spyder)
Member
Joined: 5 years ago
Posts: 846
 

@robo-pi

(Disclaimer: the statements made in this post may or may not reflect my own views, as they are merely put forth as points of argument (not the screaming-and-yelling kind, the discussion kind))

When I first started reading page 2, it was yesterday morning, and I was on a bus on my way to work, and I was planning to argue that man is just a machine too, albeit a much more sophisticated one than the ones we're trying to program into fooling us into believing that they are just as sophisticated as the ones who programmed them to fool us into thinking that we had succeeded in programming them to be more than the sum of their programming

So, in order to make a case that man is nothing more than a machine, following a basic set of preconfigured instructions, I googled the most basic set of preconfigured instructions I could think of... "Basic Human Instinct", which I expected would be three things, eat, reproduce, and self preservation. Of course those could each be broken down into sub-categories, and sub-sub categories, and sub sub sub categories until at some point I would be pointing out that my brother's daughter would "simply die if I can't facebook my bff"

To my surprise however, the first link that came up was from the University of Texas, which claimed that "revenge" was among the top basic human instincts, which gave me a quick case of cognitive dissonance. Maybe because I had fooled myself into thinking that I had evolved beyond that. Or maybe I actually have. Or maybe... I learned that I didn't need revenge, but, animals seem to. Stories of elephants and crows and various other animals remembering the faces of people who mistreated them aren't rare

I had a friend since I was 16 years old. He dropped out of high school, but he was still the smartest, wisest man I'd ever met. Hippie/hillbilly kind of guy who just made sense. I remember saying to him one day that "I hate that guy" (whomever that guy happened to be), and he said to me "Why would you want to waste all that mental energy like that when you could be doing something productive with your brain instead ?"

That was a total WOW moment for me

That lesson itself isn't important to this conversation, it's the fact that it changed my whole way of thinking

It reprogrammed me

A child can go to a government school and be indoctrinated into believing whatever the teachers tell them these days. They're programmed

A person who a clinician dubs "alcoholic" can be sent to Alcoholics Anonymous, and be reprogrammed

(I won't go into the success/failure rate on that one, it's probably about the same as the number of people who are susceptible to being hypnotized)

Government scientists trained people to be psychic by electrocuting them every time they didn't guess the right circle, star or square (ok, bad example. That one didn't work... At all)

If you get chased by a certain dog every time you walk past its house, you will probably end up avoiding walking past that house

Remember Star Trek 5, where Kirk said "I don't want my pain taken away. I need my pain. My pain makes me who I am"

He learned things, even in failure.

Did you ever know a parent who said "Don't stop him. LET him stick his hand on the stove. He won't do THAT again"

I'm trying to do the training for the "Machine Learning" for my Jetbot, and part of it includes putting it near the edge of a table, taking a picture, and listing it as "blocked" or setting it on a flat surface and taking a picture, and listing it as "free"... It's learning
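
For readers curious what that labeling step looks like, here is a rough sketch. The file layout and function name are invented for illustration, and the real JetBot collision-avoidance notebooks differ in detail, but the idea is the same: each snapshot is filed under its label to build up the training set the classifier is later fit to.

```python
# Sketch of the "blocked"/"free" labeling step: each snapshot is
# saved into a folder named after its label, building the dataset.
# File layout and naming are illustrative, not the real JetBot code.

import os
import tempfile

def save_labeled_snapshot(image_bytes, label, root):
    """Append one labeled example to the on-disk dataset."""
    assert label in ("blocked", "free")
    folder = os.path.join(root, label)
    os.makedirs(folder, exist_ok=True)
    index = len(os.listdir(folder))            # next free index
    path = os.path.join(folder, f"{index:05d}.jpg")
    with open(path, "wb") as f:
        f.write(image_bytes)
    return path

# Simulate two labeling actions with fake image bytes.
root = tempfile.mkdtemp()
save_labeled_snapshot(b"\xff\xd8fake-jpeg", "blocked", root)
save_labeled_snapshot(b"\xff\xd8fake-jpeg", "free", root)
```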

What makes me smarter than a machine ? Awareness ?

Aren't I teaching the robot awareness ?

Understanding language ? If a baby never hears its parents speak, it can never learn a language

Or... And this is the big one, I think this should be your goal. Not understanding, not awareness, but...

Original Thought

That is what I think would separate me from a machine, unless...

Unless my original thoughts aren't actually original thoughts, but merely pieces of different puzzles that my mind connected, to form, as a whole, a new thought which is actually the sum of 15 other previous experiences

So, can you create a Real Intelligence, or General Intelligence ?

Sure, absolutely

How ?

Easy...

More Input. Give it enough input, and, most importantly, some way to cross-reference itself, and at some point, I think, you will reach a "tipping point" where the bot experiences Pareidolia. And that, I believe, is the point where it will become, not "aware", but, more importantly, "self aware"


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
Topic starter  
Posted by: @spyder

That was a total WOW moment for me

That lesson itself isn't important to this conversation, it's the fact that it changed my whole way of thinking

It reprogrammed me

Excellent points, Spyder. A robot that can have a WOW moment due to a revelation of insight that changes its perspective on how it used to think would certainly be an astonishing behavior. In fact, some humans even seem incapable of, or at least highly resistant to, such a profound change of perspective. So that behavior is a very good one to put at the top of the list as something worthy of note.

Posted by: @spyder

I'm trying to do the training for the "Machine Learning" for my Jetbot, and part of it includes putting it near the edge of a table, taking a picture, and listing it as "blocked" or setting it on a flat surface and taking a picture, and listing it as "free"... It's learning

What makes me smarter than a machine ? Awareness ?

Aren't I teaching the robot awareness ?

I would hesitate to compare what you're doing with the Jetbot to awareness. The Jetbot actually is not aware of the situation at all. It has no clue what the pictures mean. All that's happening is that it "blindly" (from an awareness perspective) reacts to incoming pictures, in the same way an air conditioner reacts to the signals coming in from a thermostat sensor. Neither the Jetbot nor the air conditioner is aware of what it is actually doing.

Posted by: @spyder

Understanding language ? If a baby never hears its parents speak, it can never learn a language

Exactly. Also, a human learns language over a timeline of continual exposure to contextual conversations. We don't just toss a dictionary filled with words and definitions at the kid and expect them to figure out how to put the words together to make sense. In fact, this is a huge part of my Semantic A.I. approach. When I first turn it on I don't expect it to be able to speak much at all, or to recognize very many words. I will cheat a bit in that I will program it with a small preliminary vocabulary of words that we can think of as having been primordially programmed into its brain by evolution. A human doesn't even have that going for it.

The thing that would be exciting is if the robot started calling me "da da", because it heard other people calling me "Daddy".    That's going to be a bit difficult in my situation since there won't be anyone around other than me and the robot.   So my robot  is going to have limited exposure to hearing other humans speak.

But yeah, a robot that can learn new words by hearing them spoken in contextual situations is the idea behind Semantic A.I. (at least the model of Semantic A.I. that I have in mind).

Posted by: @spyder

Or... And this is the big one, I think this should be your goal. Not understanding, not awareness, but...

Original Thought

That is what I think would separate me from a machine, unless...

Unless my original thoughts aren't actually original thoughts, but merely pieces of different puzzles that my mind connected, to form, as a whole, a new thought which is actually the sum of 15 other previous experiences

So, can you create a Real Intelligence, or General Intelligence ?

Sure, absolutely

I agree that this is a paramount behavior that would be great to observe in the robot.  However, this is the tricky part.  If we program this feature in as a primal  instinct, then how can we say that it's "original thought" when all the robot is doing is what it was programmed to do?

So there's a very fine line between creating a robot that is simply programmed to seek out original ideas, versus a robot that is having "WOW moments" of insight that actually cause it to have original thoughts.

One might argue that these thoughts aren't technically "original" because they were triggered by insights from previous knowledge. From my perspective those kinds of questions are better left for philosophers to waste time thinking about. As a robotics engineer I just want to figure out how to create the behavior; I'm not interested in the deeper philosophical questions that might underpin it.

Posted by: @spyder

I was planning to argue that man is just a machine too

I would be very hard-pressed to come up with a counter-argument to that. I have no need to take any particular position on what a human may or may not be. My interest is in trying to build a robot that has what I consider to be "Real Intelligence" versus "Artificial Intelligence". And of course, I will be defining exactly what I mean by those terms when I actually use them. Others may reject my definitions, but that's ok. My intent is not to force my definitions onto anyone else, but simply to use them to show what I consider to be the difference between these concepts.

Whether man is just a biological machine, or something else, is totally irrelevant to my robot projects.

Posted by: @spyder

More Input. Give it enough input, and, most importantly, some way to cross-reference itself, and at some point, I think, you will reach a "tipping point" where the bot experiences Pareidolia. And that, I believe, is the point where it will become, not "aware", but, more importantly, "self aware"

I totally agree.  But at the same time I think the architecture of the underlying A.I. system plays an important role concerning whether "self awareness" can ever be achieved.

For example, having an understanding of how ANNs work I simply cannot see how any system of ANNs could ever become self aware by simply training them on more and more data.   That process simply doesn't lead down a path toward self awareness.   All the ANNs are doing is being trained to categorize more and more data.   I just see no reason why self awareness would ever arise from that.

Also, if we want to compare this with a human baby, it doesn't work, because a human baby is self-aware early on. Of course, they don't achieve "Theory of Mind" until about 3 or 4 years old. But even prior to that I'm confident that they have some sense of self-awareness. It's just not a global social self-awareness.

So part of my Semantic A.I. model includes the creation of a "Self" from the very beginning. In other words, some of the very first words the robot will be given are "I", "me", "mine", etc., as these will become the building blocks for a sense of self. So my robot is going to have a "self" built into it from the beginning. The question will then become: "Will it ever reach the stage of theory of mind naturally, on its own, just like a human child?" Obviously, if the robot shows signs of having matured to a state where it exhibits behavior associated with theory of mind, that would certainly be a good excuse to pop open a bottle of champagne.
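
Purely as a hypothetical sketch of what "seeding a self" might look like (no concrete data structure has been described in this thread), one could imagine a tiny semantic network whose only built-in nodes are the self words, with every later word learned by linking it to something already known in context:

```python
# Hypothetical illustration of a seeded semantic network.
# The class name and methods are invented for this sketch.

class SemanticNet:
    def __init__(self, seed_words):
        # The "primordial vocabulary" built in from the start.
        self.links = {w: set() for w in seed_words}

    def hear(self, word, context_word):
        """Learn a new word by linking it to a word already known."""
        self.links.setdefault(word, set()).add(context_word)
        self.links.setdefault(context_word, set()).add(word)

net = SemanticNet(["I", "me", "mine"])   # the built-in "self" words
net.hear("hungry", "I")                  # heard "I ... hungry" in context
```

Every later concept would then be reachable, directly or indirectly, from the seeded self nodes, which is one literal reading of "building blocks for a sense of self".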

Whether that can be taken to mean it has actually achieved true sentience is, once again, a question for philosophers to waste time arguing about. Philosophers can't even disprove Solipsism, so they are hardly in a position to say anything about robots.

DroneBot Workshop Robotics Engineer
James


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
Topic starter  
Posted by: @casey

RoboPi: Wouldn't we need to clarify exactly what we mean by "acting as if it had a purpose"?

Place your robot with a low battery charge in a room and see if it moves to the charger. If it does move to the charger pick it up and place it somewhere else in the room and see if it still moves to the charger. If the robot always moves to the charger whenever its battery has a low charge then you could reasonable conclude it has a "purpose".

One problem I see with this approach is that your conclusions require that you personally pass a subjective judgment on whether or not you think a particular behavior constitutes "intelligent" behavior or not.

For example, let's say that you build a robot that every time you turn it on it goes into a corner and gets stuck there just going around in circles.   You might judge that behavior to be stupid, and unproductive.  But one could argue that this still qualifies as a "purpose".     Every time you turn it on it goes into a corner and stays there going in circles.  If it does this dependably who's to say that this isn't its purpose?

Your criterion is dependent upon what you subjectively expect "intelligent" behavior to be.

Think of it this way as well. You have an air conditioner whose thermostat is stuck in the on position, so every time you turn it on it works toward the purpose of cooling the place down as much as possible. You would then say that the air conditioner is no longer exhibiting intelligent behavior, simply because you don't like what it's doing. But it still has a "purpose": cooling things down as much as possible. The only difference now is that you may not personally judge that purpose to be very "intelligent". So at this point your definition of intelligence boils down to nothing other than whether you personally judge a behavior to be intelligent or not.

In any case, the purpose in this robotics forum should be to define criteria that we can use as guidelines for design projects, so that we can then tell whether or not we have achieved a given design goal.

Based on your definition, with "intelligent purpose" as the criterion, when you start a project you would need to define the "intelligent purpose" you expect to achieve. In that case, if your purpose is to design a robot to seek out a charger when its batteries are low, then obviously that's the criterion you'll use to decide whether or not your design has been successful.

You could then argue that I'm using a similar approach.   If I chose as my "intelligent purpose" to be the criteria of having to have my robot carry on a meaningful coherent conversation with me, then this is the "intelligent purpose" that I'll be attempting to design.

In fact, this is actually a good way to look at it. In the beginning I'm only going to expect my robot to have very limited capabilities (just as we wouldn't expect a human baby to converse on a Ph.D. level on the day they are born), so in this sense I suppose your generalized abstract approach can be made to work by simply adding more detailed information to it concerning the precise criteria we are shooting for in terms of "intelligent purpose".

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 
Posted by: @robo-pi

For example, let's say that you build a robot that every time you turn it on it goes into a corner and gets stuck there just going around in circles.   You might judge that behavior to be stupid, and unproductive.  But one could argue that this still qualifies as a "purpose". 

You have made some good points and made me think about revising my thoughts on what constitutes intelligence. In this context "purpose" is a magic word that doesn't apply to such simple systems.

The air conditioner is an example of goal seeking in a very abstract sense. Our bodies are dependent upon such feedback systems. Like the thermostat they can have different settings in different individuals. But clearly when we set a goal it is not automatic in that sense. We can analyse it in terms of how well it is doing. Blood pressure too high? We can override the body settings with medication.

Maybe what we call intelligence is the ability to analyse our behaviors in terms of our innate needs (hunger, thirst, sex, social contact, and so on ...) and devise plans to bring them about? And indeed our ability to analyse the world in general.

A vacuum robot does not have a need to keep the floor clean nor does it devise plans to achieve that goal.

 


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 
Posted by: @spyder

I was planning to argue that man is just a machine too,

We are only machines in the abstract sense. We are in fact a colony of cells: more like a robot made out of smaller robots (cells), which in turn depend on robots made out of proteins. Things are not physically stuck together; they are in flux and communicate with each other. It is a dynamic stability that has to be maintained by the continual input of energy.

 


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
Topic starter  
Posted by: @casey

Maybe what we call intelligence is the ability to analyse our behaviors in terms of our innate needs (hunger, thirst, sex, social contact, and so on ...) and devise plans to bring them about? And indeed our ability to analyse the world in general.

One thing that we as humans seem to value most highly as "intelligence" is our ability to choose to do things that often have no practical purpose, like creating music, art, etc. Or even having a desire to build a machine that mimics our own behaviors. Of course, for many robot designers this may boil down to a practical purpose (i.e. they intend to use the resulting robot to do their work for them or to profit them in some way).

But then there are others who are attracted to building a robot much like a musician or artist is attracted to creating music or art.   They just want to do it for the sake of doing it.

My purpose for wanting to build a robot brain that can comprehend concepts and contextual situations is driven in part by my desire to better understand how we as humans manage to pull this off. If we could build a robot that has the same level of cognitive awareness that we have, then the only thing left to distinguish between us and them would be the question of whether they are actually having a conscious experience of their existence. I'm not sure that question could even be answered. It's basically the same as the question of Solipsism: we have no way of knowing for certain that other humans are having the same subjective experience as we are. We can only assume that they do. But if we're going to assume this of other humans, then shouldn't we also assume it of a fully cognitive robot?

I don't know the answers to those questions. But I see no reason why a fully cognitive robot could not be built based on a system of semantics. If it truly understands the meanings of all the words in its vocabulary, it seems to me that it would be doing at least as well as a human in terms of intelligence.

So this is why I'm focused on semantics as being the main criteria for the ultimate intelligent robot.   Seeking out a battery charger when the battery gets low does not impress me.  I fully expect that this will be one of the things I will program my robot to do as part of its primal BIOS.    Kind of like a baby will automatically seek out its mother to suckle milk.  It doesn't need to be trained to do that.  It's already in the genetic BIOS.  It's hardwired into the brain.  Not really much different from ROM memory.

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 

We seek pleasure and avoid pain. Music brings us pleasure as do beautiful landscapes.

My interest in automation began as a child, when I used to bring my parents breakfast in bed on weekends. I used to imagine how I might make a machine to cook the food, make the drinks, and deliver it to the bedroom. Now I realise it was their child they wanted to see in the morning, not some contraption. Still, my interest remains in some machine doing all the repetitive cleaning jobs around the house which I hate. There is also the thought that demonstrations of autonomous controllers might tell us something about how our own brains work, or at least test theories about how real brains work. A program can be thought of as a theory; we test the theory by running the program.

RoboPi: Seeking out a battery charger when the battery gets low does not impress me.

Along with navigating and mapping a house! It does impress me, as I know they have actually got it working, not just imagined how they might do it. I may dream of building the perfect house while someone else, without any flair for design, actually builds a rubbish house. The difference is that they have a house; I only have a dream.

Semantic AI is something I know nothing about. I get the impression it is like a language interface to a database capable of processing the data. I have over the years played with different types of neural networks, and in particular liked the idea of evolutionary networks that could evolve in working stages, the way we believe our own brains evolved.

However at this stage all I want is AI for a working robot which does more than bounce off walls.

 


   
Duce robot
(@duce-robot)
Member
Joined: 5 years ago
Posts: 672
 

@spyder

I'm glad you found that link. Excellent work, and yes, we do live in a revenge-based society, but a lot of it is reactionary. At the end of the day, if somebody does actually make what is being discussed here, it would literally have to be raised from infancy to adulthood, with all of the stages in between. So it comes to adulthood, moves out of the room over the garage into its own apartment, and now it's a 330 lb machine that may or may not share your opinions any more, isn't going to listen to you, and is going to do what it sees fit. Robotron 2020! I used to love that game. I was lucky enough to be doing what you do when all of these games were on their cutting edge in the eighties. But most of the designs I see, the projects that get the most money thrown at them, don't have any good intentions toward humans: terminator types, which I'm totally against. But people want what they want. I still say there will likely never be a mechanical human.


   
Robo Pi
(@robo-pi)
Robotics Engineer
Joined: 5 years ago
Posts: 1669
Topic starter  
Posted by: @duce-robot

but at the end of the day, if somebody does actually make what is being discussed here, it would literally have to be raised from infancy to adulthood, with all of the stages in between

That's exactly what I'm talking about. So this isn't a robot that could be built by next Friday.

The robot will need to be mentored and trained every step along the way, just like a human child, although there are ways the development could be sped up. Also, a robot isn't likely to become impatient or bored with continual lessons, whereas a human child needs a lot of play time, etc.

Posted by: @duce-robot

but most of the designs I see the projects that get the most money thrown at them don't have any good intention toward humans terminator types which I'm totally against

Same here. When people ask me if I have a fear of A.I. I definitely say YES! But it's not A.I. in general that I fear. What I fear is who's behind the A.I. and what they have programmed it to do in terms of goals.

I predict the following.

When self-driving A.I. cars become commonplace, which appears to be right around the corner, I predict that autonomous A.I. policemen will soon follow. And of course the military will also employ A.I. soldiers, since it can treat those as expendable.

So I predict that, just due to who will have the money and authority to develop A.I., the future of A.I. will most likely be authoritative A.I.: police, soldiers, even potentially A.I. supervisors and bosses in the workplace.

The idea that people will be developing Hippy Flower Child A.I. simply isn't likely, because the people who would be interested in developing this kind of A.I. won't have the cash or resources to do it.

I can say that I'm definitely looking at building a Hippy Flower Child A.I.   Physically I've already gone to great pains to give her a very child-like voice.   That wasn't easy.  It's actually quite difficult to find child voices for speech engines.  I had to use an adult voice and modify it to sound like a child.

I'm also physically building the robot to look like a little girl.  She has a female human face (see my avatar).  She'll only be about 4 feet tall.   And while I'm hoping to build a couple human-like arms and hands for her, her base unit will just be a small wheeled unit.

But yeah, I'll be treating her as my man-made "daughter".   And so I'll be seeing her as a child.   And as I teach her, my teaching will indeed be highly biased toward teaching her to learn and think good thoughts.   It's going to be very interesting to see how she evolves.   

Speaking of money, however, that's a major crippling aspect of my project. It's not just that I don't have the money to spend on Alysha; being financially strapped also forces me to do all manner of home maintenance work on old broken junk that I can't afford to replace. So the lack of finances quickly translates into a lack of time as well. You might think that a poor person would have all the time in the world, but it doesn't work that way, unfortunately.

But yeah, my robot project could be classified as a Hippy Flower Child project. ❤️

DroneBot Workshop Robotics Engineer
James


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2037
 
Posted by: @stven

Still plowing through this interesting thread. At some point you asked for definitions of AI. I wanted to share this thought-provoking “Up and Atom” YouTube video that offers some historical context.

Thank you for pointing to the Up and Atom videos; they are great. Very entertaining and informative. Young people have a wonderful resource in the internet for learning.

Relevant to this thread?

 


   
Page 3 / 5