
neural nets vs human brains

13 Posts
3 Users
5 Likes
676 Views
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  

I moved this post here so as not to hijack @duce-robot's thread giving his latest on his robot.

Posted by: @davee

Hi @robotbuilder,

RE your discussion starting, "It seems that way but it doesn't really recognize for example a shoe as a shoe."

I think you are raising some valuable questions which deserve a thread or even an entire section in their own right, assuming there are enough participants with an interest .. which, given the apparent interest in AI tools now coming online, there should be in the near future, if not already.

I would humbly suggest that a person recognising a "shoe" actually has at least two independent parts. The first is recognition of the 'intended use' of something that has characteristics such as being hollow and 'foot-shaped and sized', etc., suggesting it might be something to wear on your foot. If your early years were spent in an 'urbanised' country, then the chances are your parents will have been fitting shoes on your feet from well before your earliest memories, so the connection will be 'ingrained' by learning from the 'start'.

You will also have been taught during those same early years that this object was called a 'shoe' .. if your home language was English .. but probably something different if your home language was not English ... so the connection between the name and the appearance of the object is separate from the connection between the usage and the appearance.

I would suggest that the contemporary AI systems, which appear to be based on statistical matching, may be getting closer to the way that people, and presumably animals with brains, actually work than you suggest. Of course, I am only considering the similarity in terms of a digital calculation machine being used to simulate an immensely complex chemical processing 'machine'. Recently, the practical scale of digital calculation has begun to approach the level of complexity needed to make a plausible simulator ... until now, the scale of calculation was just too small to do anything more than the simplest of 'brain' simulations.

Even assuming we can now make computational machines with sufficient complexity, so far we have very limited means of 'teaching' them ... getting them to 'read' endless petabytes of data swirling around on the Internet is much less direct and interactive than the infant being taught to put their shoes on ... so whilst the present machines may be far superior in finding research papers with keyword tags, their appreciation of the physical world (or universe) is limited by the focused exposure.

I recall my first attempts at driving a car (with manual transmission) ... At the age of 17, I knew how it worked, and had done simple repairs to my father's car, watched my father drive for all of my life, but not actually driven it a centimetre. But none of that helped me coordinate steering, brakes, clutch, gears, etc. in the required manner .. that required a lot of 'hands-on' experience - 'exposure' to the real task.

So, I would suggest the limitations of 'exposure' also apply to humans .. the difference being the 'exposure' limits are different. As part of an urbanised society of citizens with lifelong access to 'shoes', it is not surprising we have little difficulty recognising them. However, until a few hundred years ago, many people thought the Earth was flat ... allegedly some still do ... but for a person who lived inland a few hundred years ago, and only travelled a relatively short distance in their lifetime from their place of birth, the reasons for coming to the conclusion that the Earth was flat seem perfectly reasonable.

Similarly, with your tank recognition story, of which I heard a slightly different version about 40 years ago. I am unclear about the countries involved, so I will call them 'A', who also instigated the study, and their potential foe, whom I will call 'B'. Namely, the military of 'A' wanted a machine that could distinguish 'A' tanks from 'B' tanks. Hence, they fed a series of photos of tanks from 'A' and 'B' into the image recognition machine they had built. However, the photos of 'A' tanks were taken under 'realistic battlefield' conditions, which were dark and dull, grey days, whilst the only pictures of 'B' tanks were from displays or other 'publicity' material, all showing the tank in its 'splendour'. So yes, as in your version of the story, they had developed a machine that could distinguish lighting conditions, but knew nothing about tanks.

However, to me, that is similar to the people of a few hundred years ago, who maybe farmed on some 'flat' fields and had no reason to doubt the whole Earth was flat.

The semiconductor industry is only just beginning to make machines with sufficient calculating power, and they are still much larger and more power hungry than the chemical processing systems found in biological systems. Perhaps the 'computers' will evolve to be less like digital calculating simulators and more like our 'analogue' brains ... of course, this is part of contemporary research ..., but maybe the bigger challenge is connecting these machines with the 'real world', so that they can learn about it directly.

Best wishes all, Dave

 

This is a subject that has interested me for most of my life, but it is not really of practical importance to the software requirements of the electronics projects on these forums.

Over the years I have read layman explanations of brain research and AI.

I would suggest that the contemporary AI systems, which appear to be based on statistical matching, may be getting closer to the way that people, and presumably animals with brains, actually work than you suggest.

You might like to read Steven Pinker's "How the Mind Works". There is a PDF version online.

I personally think we are born with evolved abilities to learn certain things. Often learning is confused with the maturation of a developing brain. This may require input but is not determined by it. For example, stereo vision is innate but does require stereo inputs to develop. If it doesn't get the input early in life, this innate ability will not develop, and some people end up without the depth perception of stereo vision. Another example is pop-out features in our visual system.

But none of that helped me coordinate steering, brakes, clutch, gears, etc. in the required manner .. that required a lot of 'hands-on' experience - 'exposure' to the real task.

When we decide to learn something such as driving, tying our shoelaces or touch typing, we have to "consciously" direct the motor behaviors we want to learn. The learning actually takes place in other parts of the brain. One part of the brain, the part that decided to learn something, essentially programs the sensory/motor systems. Once they are programmed, the higher part of the brain only has to "want" to do something and the lower parts do the work unconsciously, thus freeing the higher parts to do other things, like deciding where to drive and watching out for the pedestrian ahead. Actually, things like "where to drive" can also become automatic if repeated often enough. The lower programs will however signal the higher parts if something isn't going as expected so the higher part can attend to it.

... but maybe the bigger challenge is connecting these machines with the 'real world', so that they can learn about it directly.

Most animals "know" what to do the day they are born and simply refine the connections. Predators do have childhood learning, as they need the extra smarts; grazing animals are up and running the moment they are born. There is a good survival reason why we are born knowing some things. Learning is expensive in time (which we may not have before being eaten) and in the extra computing power (a larger, power-hungry brain) required to learn something. It is the old nature vs. nurture debate.


   
Duce robot
(@duce-robot)
Member
Joined: 5 years ago
Posts: 678
 

This is no hijack, it's interesting. I think you are correct, the real challenge is getting it to recognize and translate that to real-world actions: concrete implementation, actually performing the task. That may be why we see a lot of jittery movements from some of the real ones; they are learning the task. What do you think about the OAK-D Lite camera? I think it could be useful for this, plus the Coral USB seems a must for this.


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  

@duce-robot 

What do you think about the OAK-D Lite camera?

There have been some pretty impressive 3D cameras being developed, but I have no experience with any of them.

I used to spend time writing my own visual processing software and although it gave me an insight into how it is done I never really used it in any practical project. I like to mess about with software but if you have the money then messing about with some of the new high tech hardware would also be fun.

 


   
Duce robot reacted
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  

@davee wrote:

I would suggest that the contemporary AI systems, which appear to be based on statistical matching, may be getting closer to the way that people, and presumably animals with brains, actually work than you suggest.

When I first read about visual processing, all feature extraction was hand-engineered, as I had done in recognizing the target image.

https://forum.dronebotworkshop.com/user-robot-projects/visual-input-for-robot/

At the time I only knew about the multilayer perceptron, but today, while surfing the net on the subject, I see that the design of neural nets has advanced. For vision they use a convolutional neural network, which is a variant of the multilayer perceptron that can take the spatial structure of image data into account. They are designed to emulate the behavior of the visual cortex. It is all very mathematical and above my understanding. And it is done quickly using graphics processing units (GPUs).
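
For anyone curious what one of these looks like in code, below is a minimal sketch of a small convolutional network using Keras. The layer sizes are arbitrary, purely to illustrate the idea of convolution and pooling layers feeding a classifier; it is not a recommended design.

```python
# A minimal convolutional network sketch in Keras. The layer sizes are
# arbitrary illustrations, not a recommended architecture.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),        # small 32x32 colour images
    # Convolutions slide small filters across the image, so nearby pixels
    # are processed together -- this is how the spatial structure is used.
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),                    # halve the width and height
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),                         # feature maps -> one long vector
    layers.Dense(10, activation="softmax"),   # scores for 10 made-up classes
])

# One batch of four random "images", just to show the shapes involved.
scores = model(np.random.rand(4, 32, 32, 3).astype("float32"))
print(scores.shape)   # (4, 10)
```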

 


   
(@davee)
Member
Joined: 3 years ago
Posts: 1670
 

Hi @robotbuilder & @duce-robot ,

I should start by apologising for writing such a long missive, whilst skating on very thin ice, as this is not a subject in which I have any expertise. However, a few trends seem to be apparent, although I may be mistaken. I offer my previous note and this one as personal impressions, which will almost certainly have massive errors and oversights, in the hope of provoking further observations, thoughts and corrections that may prove useful to someone. I am not presently planning any AI project, but I am keeping it under review. I certainly hope it usefully pervades 'personal' scale systems, and is not confined to cloud-based mega systems.

I didn't find the Pinker book, except on sale, but I spent a few moments looking at YouTube videos featuring him, and also read a few commentary items regarding his work. My first impression is that he is approaching it from the angle of studying human behaviour, trying to form a 'specification' of what it is trying to achieve and to form some rules to describe and maybe predict the actions, presumably with the aim of forming a framework that can be coded on a digital computer. Whilst I in no way wish to denigrate his achievements, I also have the impression that this is not the main principle driving the AI systems emerging from OpenAI, Google, Microsoft and so on.

The latter seem to be starting with fitting methods to somewhat generic 'arithmetic' models that can be replicated on a vast scale. Then the 'training' resembles a kind of trial and error process to find parameters for these models, so that for the 'training' data the outputs from the models most closely resemble the 'outputs' found in the training data.

Whilst I in no way pretend to understand these models, I have the impression that they are essentially 'simple' formulae, which have been picked for their ability to replicate a range of behaviours by varying the parameters of the formulae. Hence a 'trained' model essentially consists of a large number of 'tuned' formulae. As far as I am aware, correlating these 'trained' formulae with individual training data items is not possible at present. I am unclear as to the extent they model structures found in biological brains, but I have the impression there may be some parallels, accepting that the AI models are often using digital calculations to model analogue biological systems.

I haven't looked at image recognition specifically, but I would expect AI systems specialising in image recognition to have some specialist modifications. Similarly, I would not be surprised if AI systems specialising in a particular field also have specialist modifications. Whether those modifications are (largely) limited to data input and output interfaces, or pervade the whole AI machine, is also unclear.

I note and agree that animals, including humans, seem to have different methods and levels of acquiring skills. Clearly, even the youngest individuals have the ability to do certain essential skills like eating, breathing, etc. They also tend to have behaviours to look for safety, such as staying close to parents. I presume these are in some way genetically encoded. The existence of such codes appears to be consistent with the evolutionary principle that these behaviours improve survivability.

To some extent, modern computer processors are intentionally designed in this way .. that is, the processor has many structures, in hardware design, precoded ROM look-up tables, microcode, etc., which can all be replicated in the manufacturing process, that enable it to do certain basic operations.

Whilst I have yet to find a computer that 'wants' to learn the ability to do something new, it is easy to find robot cleaners that will return to their charging base when they feel 'hungry'. It seems a fairly small step to deliberately extend the 'want to do something' beyond the basic 'low power' reaction.
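
As a purely hypothetical sketch of what I mean, with every name and threshold invented for illustration:

```python
# A purely hypothetical sketch of the 'hungry robot' idea: internal drives
# that trigger behaviours when thresholds are crossed. Every name and number
# here is invented for illustration.
def choose_behaviour(battery_pct, dust_bin_pct):
    if battery_pct < 20:        # the basic 'low power' reaction
        return "return_to_charger"
    if dust_bin_pct > 90:       # one possible extra 'want'
        return "go_home_and_ask_to_be_emptied"
    return "keep_cleaning"

print(choose_behaviour(battery_pct=15, dust_bin_pct=40))   # return_to_charger
```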

I also note the 'mechanical' skill learning that enables us to do relatively complex things like drive a car without consciously having to calculate each pedal and steering wheel movement. Again, multi-processor based systems are specifically designed to 'offload' the more predictable and repetitive tasks. Perhaps the 'smart' bit about the biological systems is that they can actively adapt to new skills without needing a complete system redesign. I am unclear about how the biological systems implement this flexibility, but I have the impression they, at least to a limited extent, can 'grow' new 'wetware' to meet the demand. I have yet to see an ESP32 or an i5 processor grow an extra core when fed a new programme! So I guess AI systems will need substantial amounts of redundant 'processing' ability for expansion purposes .. though for many years I have heard the software manager's lament that "all programmes grow to exceed the available memory", so maybe this just extends that.

-----

Whilst I have not tried doing any AI projects myself, I confess I have yet to be motivated to spend much time considering either 'simple' pattern matching image recognition systems that have trouble differentiating a shoe from a wardrobe, or chatbots that rely on massive cloud servers. I am hoping someone will suggest something else that is realistic, interesting and also compatible with a minimal budget. 😀 

Best wishes, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  

@davee 

I didn't find the Pinker book,

After reading your post I kind of think it would not be of great interest to you anyway.

http://hampshirehigh.com/exchange2012/docs/Steven%20Pinker%20-%20How%20The%20Mind-Works.pdf

He argues for the computational theory of mind and against the idea that we are born blank slates.
A neural network is a blank slate. He has commented on ChatGPT and is impressed.

Neural networks start as a blank slate (random weights) that determine the output for the inputs to each "neuron".
They use "hill climbing", meaning that the actual output is compared with the desired output, and a mathematically proven method called back propagation is then used to adjust the weights to move them toward giving the desired output for the training input.

Possibly we are evolving the networks and we can continue to allow them to improve. The downside is they will be like the brain in that at this stage we don't really know how they are wired or what techniques they have evolved to produce their outputs.

What makes us interested in something is probably genetic. I have noticed that most people don't want to know what they are (a brain activity of some kind) and just want to get on with life having fun. Devices like TVs or mobile phones are just magic toys to be used. I always had a desire to understand how things work, from machines to biological systems to social systems.


   
(@davee)
Member
Joined: 3 years ago
Posts: 1670
 

Hi @robotbuilder,

Thanks for the reply and the link. Whether I will ever read all 674 pages, I doubt, but I hope to at least have a look.

I guess I see a trichotomy of potential interests.

Trying to understand how the brain works at a functional level, which is where I suspect Pinker is aiming;

trying to understand the biological/chemical processes that make up a brain and relating systems;

and "Artificial Intelligence" electronic systems intended to replicate the tasks that brains can do, but are impossible, or at least very difficult to achieve with 'conventional' coding practices of constructing a program to do a task.

All three of these interests seem to be entirely legitimate, and all three can quickly become extremely complex in their own right. To some extent, they are clearly entwined, but in other ways they seem to be heading in different directions.

Presently, AI seems to have gained momentum because, to many, it appears to have become a magic money tree that can provide vast wealth, provided it is nurtured in the right way. I may be misjudging, but outside of a few academics and hobbyists, I suspect no one cares whether artificial intelligence is in any way related to 'biological intelligence'; instead they are looking for machines that can be economically trained to do tasks that are otherwise impossible or very expensive in 'person power'.

Understanding the biological and chemical processes of a 'real' brain is probably mainly of interest to people working in the medical and related fields .. clearly there are lots of reasons why this could prove beneficial to mankind, particularly with regard to dealing with more physical brain illnesses, such as dementia. In addition, there is the possibility that such studies will yield clues on how to build a better AI machine. I know some of the terminology has been exported to the AI field, but I am less clear how closely AI is attempting to simulate the biological world, or how closely the electronic AI technology will ever be able to simulate it in a reasonably efficient manner.

The utility of the functional level studies is more difficult to categorise. Clearly they have a legitimate place in understanding how people will react, and potentially in how to understand psychological problems that can arise. I may be mistaken, but I have the impression they have also been the basis of many years of AI research, possibly including the period when Pinker wrote the book you referenced, but with only limited success.

========

I can understand suggestions that we are not born with clean slates, and maybe an untrained neural network looks to be in contrast to this. Presently I am not sure whether this comparison is completely fair, or maybe even useful.

Clearly, humans are born with the ability to do certain tasks, which implies some kind of minimal infrastructure. But to the best of my knowledge, this also applies to all of the computer based systems, including AI. At the simplest level, even a tiny processor must be manufactured with a course of action to take when power is applied and the reset is made invalid. Normally this will involve jumping to a preset memory instruction location, and starting to decode and interpret that instruction. Furthermore, if it is to do anything useful, some part of the processor's memory must have been filled with an appropriate set of instructions before the reset pin was made invalid. Of course, this first set of instructions may be quite small, so that it can go and fetch some more instructions from some external location, but it is not a completely empty starting point.

I presume the situation will be roughly similar for any neural net machine. Of course, variants to the situation can be constructed, such as having an 'external machine' which spoon feeds and directs the neural net machine with data and commands, but presently I can't imagine a 'blank' neural net machine being able to train itself, etc. without some sort of preordained mechanism.

In the same way, I imagine a baby to be born with a 'minimal bootstrap' coding level, together with an amount of 'blank memory' that will be progressively 'programmed' by experience. Of course, I am greatly oversimplifying the analogy, and I am in no way assuming the 'blank memory' is as simple or limited in its ability to be modified as a conventional electronic RAM or ROM.

As I have never worked with a neural net machine, and my biology is similarly limited, please advise me if I have this completely wrong.

========

Personally, I am probably not well suited to the functional level studies, and my biology background is too limited to make much of the 'real' brain studies, which leaves me with AI as the strongest candidate of interest. Whilst I can understand why the 'industry giants' are pouring money into developing cloud based systems, and I have no doubt that some will become as 'indispensable' as Google search or email, I am presently finding it hard to get 'excited' about them. I not only have no illusions of growing my own AI money tree, I am not even convinced I would want to, if I could.

Perhaps the paradox I find most interesting, but I haven't the faintest idea of how to approach, is the situation of having machines which can apparently perform better than human brains in at least a limited way, yet apparently we have no understanding as to how they are doing it. Of course, the designers know exactly what algorithms, connections, etc. the machine has, but we cannot interpret the data captured within a 'trained' machine. And at this point, I am left wondering if something from the other two parts of this trichotomy can be a 'Rosetta Stone'.  

--------

On the other hand, and perhaps more realistically, I have always liked toys and figuring out how they work, so maybe there is something that AI can achieve that is sufficiently 'different' to be interesting.

Best wishes, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  

@davee 

Perhaps the paradox I find most interesting, but I haven't the faintest idea of how to approach, is the situation of having machines which can apparently perform better than human brains in at least a limited way, yet apparently we have no understanding as to how they are doing it. Of course, the designers know exactly what algorithms, connections, etc. the machine has, but we cannot interpret the data captured within a 'trained' machine.

I assume you are talking about the weighted values in a neural network (which we can simply print out)? We cannot yet tell by looking at those printed values what "rules" they are applying to the input data. There is an effort being made to develop methods to reverse-engineer the rules being used in a neural network's solution.

In the meantime we have to see a neural network as a black box that, given an input, will give a useful output most of the time, particularly under ideal conditions. The downside is the amount of data and time needed to train them; however, once trained, the weights can be transferred to as many other networks as you like, so you don't have to train every new network. We can't do that with people except in science fiction stories.
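
As a rough sketch of what that transfer looks like in practice with Keras (the layer sizes are arbitrary and the training step is only imagined):

```python
# A rough Keras sketch of "transferring the weights": save them from one
# trained network and load them into a second network of the same shape.
# The layer sizes are arbitrary and the training step is only imagined.
import tensorflow as tf
from tensorflow.keras import layers

def build_net():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        layers.Dense(8, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

trained = build_net()
# ... imagine trained.fit(training_data, labels) has been run here ...
trained.save_weights("trained.weights.h5")

clone = build_net()                        # a brand-new, untrained network
clone.load_weights("trained.weights.h5")   # now it behaves like the original
```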

This technology will most likely get better and better, and it is fun for hobby use when embodied in hardware like the HuskyLens,
https://dronebotworkshop.com/huskylens/
or using software like TensorFlow Lite that @duce-robot was interested in using.
https://forum.dronebotworkshop.com/show-tell/coming-to-you-live/paged/14/


   
(@davee)
Member
Joined: 3 years ago
Posts: 1670
 

Hi @robotbuilder,

  Thanks for your reply.

Yes, I was thinking about the weighted values, together with any other adjustable parameters that might be built into present or future AI systems. Of course, it is possible to print out the numbers, and I am aware that trying to interpret them is a source of active research. However, my impression is that the interpretation of the numbers has not yet progressed very far, which to me makes it an interesting question. Realistically, though, it is obviously attracting some very bright and well-informed individuals who are apparently struggling, so the chance of me making any progress is lower than winning a major lottery with a single ticket.

I remember watching Bill's HuskyLens video, which was up to his consistently excellent standard, but the device's performance was too limited to capture my personal interest.

I have been meaning to look into TensorFlow Lite and the like ... so far I have been distracted by other things ... but maybe I'll make a bit more of an effort. Meanwhile, I'll be interested to hear of anyone's experience on the subject.

Best wishes, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  

@davee 

I have been meaning to look into TensorFlow Lite and the like ... so far I have been distracted by other things ... but maybe I'll make a bit more of an effort.

If your project needs software that recognizes faces or particular objects in an image, then using something like TensorFlow Lite, with online training on your own images, is probably worth the effort. Some other things, like color recognition, tag recognition, line tracking and simple shape recognition, can be implemented with simpler code. I am not personally working on any project that requires the use of a neural network.
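
As a rough sketch of what using such a trained model then looks like (the model file name is a placeholder for whatever you train or download):

```python
# A rough sketch of running a trained TensorFlow Lite classifier. The file
# name "model.tflite" is a placeholder for whatever model you train or
# download; the interpreter calls are the standard tflite_runtime API.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A blank frame sized and typed to whatever the model expects; in a real
# project this would be an image from the camera.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("most likely class index:", int(np.argmax(scores)))
```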

 


   
DaveE reacted
Duce robot
(@duce-robot)
Member
Joined: 5 years ago
Posts: 678
 

That OAK-D Lite camera and the Coral USB seem a must for this, but they ain't cheap. The camera is 149 US and the Coral USB 114 US.


   
DaveE reacted
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
Topic starter  

@duce-robot 

All this comes down to what your interests are. If it is playing with electronic stuff, be it your mobile phone or, in a hobby context, something like the HuskyLens and things like it, then you follow your interest. A lot of fun technology has been developed. I think software that can translate languages, convert text to speech and speech to text, recognize faces and so on are things that you might use in many projects, including robotics.

With the vision stuff you still need to figure out what you are going to do with the output of a technology that can place a rectangle around a bunch of pixels that make up the image of some object. I have seen videos where all the objects, like cars and pedestrians, have rectangles around them and a printed label of what they are. So what are you going to do with that information? It is not the kind of data created or used by biological brains.
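
For example, a detector typically hands your program something like the list below, and it is still entirely up to your own code to decide what to do with it. The detections and the "policy" here are invented, just to illustrate:

```python
# A made-up illustration of the point above: a detector hands you a list of
# labelled boxes, and it is still up to your own code to turn that into an
# action. The detections and the policy below are invented.
detections = [
    {"label": "person", "score": 0.91, "box": (120, 40, 260, 300)},  # (x1, y1, x2, y2)
    {"label": "chair",  "score": 0.64, "box": (400, 180, 520, 330)},
]

FRAME_WIDTH = 640

def steer_from_detections(dets):
    # Crude example: stop for people, otherwise turn toward the best detection.
    people = [d for d in dets if d["label"] == "person" and d["score"] > 0.5]
    if people:
        return "stop"
    if not dets:
        return "wander"
    best = max(dets, key=lambda d: d["score"])
    x1, _, x2, _ = best["box"]
    centre = (x1 + x2) / 2
    return "turn_left" if centre < FRAME_WIDTH / 2 else "turn_right"

print(steer_from_detections(detections))   # stop
```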

 


   
DaveE reacted
Duce robot
(@duce-robot)
Member
Joined: 5 years ago
Posts: 678
 

@robotbuilder True. Inside the mind ........ a very scary place, lol. That's a pretty deep discussion. I don't know if machines will ever be good enough or not. Maybe.


   
DaveE reacted