Inqster - The Next Generation

125 Posts
7 Users
44 Likes
5,177 Views
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 7083
 

@robotbuilder Probably a different interview. He does a lot of them. It depends on what you call 'intelligence.' I know I don't know for sure, but I suspect Joe Average thinks he does and, therefore, draws the wrong conclusions. Maybe Joe Average is confusing intelligence with sentience. I am as sure as I can be that only humans and maybe a few other mammals are sentient.

I just googled 'intelligence' and one hit is the following link. It sounds good to me, and I think it is in agreement with the members who have commented on this topic, so I will paste that one on my wall.

I was a bit surprised to see that it includes reproduction, and therefore we have not yet created any AI that I know of. Perhaps ChatGPT is some subset of true AI? I suspect that when the likes of Wozniak, Musk, and Hawking say AI is one of the two possible extinction scenarios, they are including self-reproduction; otherwise, it makes no sense.

The Link

First computer 1959. Retired from my own computer company 2004.
Hardware - Expert in 1401 and 360, fairly knowledgeable in PCs plus numerous MPUs and MCUs
Major Languages - Machine language, 360 Macro Assembler, Intel Assembler, PL/I and PL1, Pascal, Basic, C plus numerous job control and scripting languages.
My personal scorecard is now 1 PC hardware fix (circa 1982), 1 open source fix (at age 82), and 2 zero day bugs in a major OS.


   
ReplyQuote
(@davee)
Member
Joined: 3 years ago
Posts: 1711
 

Hi All ... @Inq, @robotbuilder, @zander

  Comparing an 'AI' approach with a 'traditional' software approach is valid, but should be done with care.

'Traditional' software aims to be determinate in its outcome: when it is done competently, the designer will have envisaged the ranges of possible input values and coded a solution that produces the appropriate outputs for each of those ranges. If this is done with a high degree of diligence and competence, the result may be effectively 'determinate' in the required manner, and hence seen as appropriate even for safety-critical situations. However, for any programme beyond the most trivial, this is very expensive and the result often still has flaws. Of course, the flaws may be mitigated as part of a larger system design that assumes the software may be 'faulty', often alongside the assumption that the associated hardware may physically fail, but that makes the whole project even more expensive and complex.

It is also vulnerable to 'data' that is outside the range it was designed to cope with. This may be due to accidental oversights, or to deliberate targeting of the system with contrived data ... the latter being a major class of hacker attack.
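
To make this concrete, here is a minimal Python sketch of the 'traditional' style (the function name and ranges are invented for illustration): every anticipated input range is handled explicitly, and the out-of-range case is exactly the one a careless designer forgets and an attacker targets.

```python
# A 'traditional', determinate program: every input range the designer
# envisaged is coded explicitly. Function name and ranges are invented.

def heater_command(temperature_c: float) -> str:
    """Map a sensor reading to an actuator command."""
    if not -40.0 <= temperature_c <= 125.0:
        # Out-of-range 'data' -- the case that must be caught explicitly,
        # or the program misbehaves (accidentally or by attack).
        raise ValueError(f"reading {temperature_c} outside sensor spec")
    if temperature_c < 18.0:
        return "HEAT_ON"
    if temperature_c > 24.0:
        return "HEAT_OFF"
    return "HOLD"

print(heater_command(15.5))   # -> HEAT_ON
# heater_command(200.0) would raise ValueError -- by design, not by luck.
```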

By contrast, the 'smarter' AI systems appear (to my very inexperienced eye) to be based on 'relatively' simple mathematical/logical building blocks, repeated on an industrial scale. Because the fundamental building blocks are relatively simple, their design can be scrutinised and tested to very high levels, so that the probability of a systematic error is extremely low. Designing and building such machines is now feasible, and 'affordable' to those with the deepest pockets, but that produces a machine whose fundamental decision-making basis is much less determined than that of an unborn baby. To make a 'machine' which can make useful decisions in the real world, it must be 'taught', by some kind of experience method with similarities to teaching that baby after it is born.
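
For a sense of how simple those building blocks are, here is one artificial 'neuron' in Python (a sketch only; real systems repeat something like this on the industrial scale described above):

```python
import math

def neuron(inputs, weights, bias):
    """One building block: a weighted sum squashed into the range 0..1.
    AI systems wire up millions of these; the block itself is trivially
    testable, which is why systematic errors in it are so unlikely."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

print(neuron([0.5, -1.0], [0.8, 0.3], 0.1))  # -> about 0.55
```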

Of course, the AI machine's 'knowledge and experience' can subsequently be cloned in a simple copy-and-paste manner to make many identical machines, something not yet achievable with humans, but that does not change the situation regarding the method and limitations of gaining the knowledge and experience in the first place.

========

Maybe comparisons between human intelligence and artificial intelligence ... or perhaps more accurately between human behaviours and decisions and 'AI'-based behaviours and decisions ... are actually beginning to show more similarities than differences.

Present AI-based systems are clearly showing examples where the system has made the 'obviously' wrong decision. But humans do this all the time ... to err is human, etc.

In many cases, human errors occur because the person, or even millions of people, have either found themselves in a situation that they do not have the experience to handle correctly, or they do not have the appropriate data to process. We are all aware of the effects of being deliberately fed misleading or inaccurate data ... it is called propaganda ... and it can have devastating consequences. In other cases, the humans simply have not experienced comparable situations sufficiently, either directly or through simulated experience/teaching.

Similarly, an AI system can find itself in a situation that its experience (millions of hours of video and other data, for example) has not covered with sufficient quantity and quality.

We all (must?) accept that a surgeon who offers a life-saving or life-changing operation will make the best possible job of it, whilst knowing that they may encounter a problem for the first time and consequently make the wrong decision. In a few cases, this may be because the surgeon took on a task that they should have referred to a colleague, for (say) selfish pecuniary reasons, but in many cases there may not be any such colleague available, perhaps because the particular issue is so rare that few surgeons in the world would ever have encountered it before. Similarly, perhaps we need to give some leeway to AI systems not getting it right every time.

Instead, perhaps we should try to assess the situation in a more balanced way. I am certainly not suggesting we allow Elon Musk to acquire his second trillion dollars by flooding the world with poorly trained self-driving Teslas that cause mayhem and destruction wherever they go.

In some circumstances, for example rail traffic, the physical limitations of rail (the relatively small number of vehicles per track, the 'can only follow the track' constraint, and so on) have meant that fully automated trains have been practical for decades. To some extent the same applies to commercial jet aircraft, which generally live in a well-disciplined environment with lots of space around them for minor errors; hence systems like autoland and autopilot also have a long history, whilst still being 'overseen' by human pilots when things get unexpectedly complex. Both of these cases are amenable to 'hard coding' all of the decisions in a traditional programming approach. However, our road system is vastly more complex, with literally billions of autonomous moving objects, many more fixed objects, and many of those objects at least occasionally operating outside of fixed rules. So far as I am aware, this is too complex for the 'traditional' programming approach. Whilst the jury may still be deliberating on whether the AI approach has reached a 'safe to proceed' status, I think the jury should be advised as to what it should expect the AI system to achieve.

I think it is not only unrealistic but also counterproductive to expect the AI system never to make a mistake. Rather, a probabilistic approach should be aimed for ... e.g. begin to accept it if the chance of it causing injury is, say, 1/10th of that of a human driving the same vehicle. Of course, the factor of 10 is purely arbitrary ... one could suggest a factor of just 2, or even less, would be a remarkable improvement. Nor need the factor be fixed indefinitely ... it could be increased over time as the systems become better educated.
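
As a worked example of that acceptance rule (all the rates below are invented numbers, purely for illustration):

```python
# Invented figures, only to show the probabilistic acceptance test.
human_rate = 1.2    # hypothetical injuries per million miles, human drivers
factor = 10         # the (arbitrary) improvement factor suggested above

threshold = human_rate / factor      # 0.12
ai_rate = 0.09                       # hypothetical measured AI rate

print(f"accept AI driving: {ai_rate <= threshold}")   # True
```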

And there are lots of other areas where AI might be the right choice. E.g. cancer and other diseases can be discovered at a stage when treatment has a high chance of a 100% cure, but only if a large number of asymptomatic patients can be screened using imaging techniques like MRI, and their scans examined in minute detail. The scanning process can be quick and expanded relatively easily, albeit the machines are not cheap. However, manually examining billions of images is very difficult to achieve, expensive, and far from error-free. Hence, if AI provides a screening process that finds a higher proportion of positive cases than humans, does not flood the system with false positives, and is readily expandable in scale, then the advantages are self-evident. Of course, this is not a 'fit and forget' situation ... it needs to be kept under constant 'audit' to ensure it has been adequately trained to find all variations of the problem it is searching for.
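
The two failure modes mentioned here (missed cases and false positives) are easy to quantify; the numbers below are hypothetical, just to show the arithmetic:

```python
# Hypothetical screening outcomes for 100,000 patients, 100 with disease.
true_pos, false_neg = 94, 6          # real cases found / missed
false_pos, true_neg = 500, 99_400    # healthy patients flagged / cleared

sensitivity = true_pos / (true_pos + false_neg)       # proportion of cases found
false_pos_rate = false_pos / (false_pos + true_neg)   # 'flooding the system'

print(f"sensitivity: {sensitivity:.0%}")              # 94%
print(f"false-positive rate: {false_pos_rate:.2%}")   # 0.50%
```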

And the training mechanism is not a 'one-off' coding exercise ... rather, live data from widespread deployment, combined with outcomes, forms a constant stream of new data to improve the accuracy of the process.

---

I think the present time provides an opportunity to 'open up' the technology to sharing and scrutiny. Whilst a favourable commercial return is almost certainly a requirement to fund R&D, even Microsoft, Oracle, etc. can see advantages in more open approaches. It is to be hoped this includes industries like cars.

-----------------------

So maybe we should be thinking 'To err is human; to err less frequently implies AI'

Best wishes .. I hope this mini-sermon is of interest and helpful. As with most preachings, it has its limitations.

Take care everyone! Dave


   
Inq reacted
ReplyQuote
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 

@davee 

I just feel we need to be careful not to confuse what we imagine is there with what is actually there. In an exchange with ChatGPT it may appear as if we are talking to a very knowledgeable entity on the other side, but I don't think we are.

I see in your post that you are assigning personhood to programs that are based around the use of a neural net, and I think that is a mistake. There are no inner thoughts taking place. There is no entity there thinking about what is going on, any more than there are inner thoughts taking place in a GOFAI chess-playing program.

Cancer screening uses a pattern-recognition system. There is no AI entity with a life of its own looking for cancers. It has no hopes or dreams. It has no "feelings". It is just a filter. It picks out the visual features common to the examples it has been given, which humans have determined are images showing the presence of cancer.
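
To put that in code: a minimal sketch of a classifier-as-filter (the weights and threshold are made up; a real system learns millions of them):

```python
# A trained classifier reduced to its essence: a learned function from
# image features to a score, thresholded. No inner thoughts, just a filter.

def cancer_filter(features, weights, threshold=0.5):
    score = sum(f * w for f, w in zip(features, weights))
    return score > threshold   # flag the image, or not

print(cancer_filter([0.9, 0.2, 0.7], [0.6, 0.1, 0.3]))  # True (score 0.77)
```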


   
ReplyQuote
Will
(@will)
Member
Joined: 3 years ago
Posts: 2535
 

Posted by: @robotbuilder

@davee 

I just feel we need to be careful not to confuse what we imagine is there with what is actually there.

Humans have an innate need for an overriding order, for something being 'in charge' of or 'guiding' them, and seem to be so uncomfortable with random or pseudorandom behaviour that they feel required to create government systems and religious systems to pass the responsibility of living off to something else.

So it's natural that we perceive phenomena such as Eliza or ChatGPT as being some form of person or entity, often even after we've been provided with ample evidence to the contrary. Once an AI device (or even a pet) has been anthropomorphized, it's hard to stuff the Matrix back into the bag.

 

Anything seems possible when you don't know what you're talking about.


   
ReplyQuote
(@davee)
Member
Joined: 3 years ago
Posts: 1711
 

Hi @robotbuilder and @will,

Thanks for your interesting comments. I am sorry this is a bit of a ramble ... I hope it makes some sense.

...................

'Anthropomorphic' is not a word I often use, so I double-checked its meaning in the online Cambridge Dictionary:

https://dictionary.cambridge.org/dictionary/english/anthropomorphic

anthropomorphic

treating animals, gods, or objects as if they are human in appearance, character, and behaviour:

Now I could relate to Disney converting a mouse into Mickey Mouse as meeting this definition.

But I don't think I was suggesting that driving a car or running an image-screening system involves treating the AI-based system as human, even in behaviour, though it may well be influenced by its training to respond in a similar manner to humans. Rather, that humans had found a way of doing something, and that method could be synthesized in a machine.

Humans could perform simple calculations, like counting and multiplying, thousands of years before digital calculators were invented, but I can't see the iconic HP35 as having the slightest sign of being anthropomorphic.

--

If you program (using AI or otherwise) a computer to behave, at least in a limited way, like a human, then does that imply anthropomorphism?

Many of the earliest computer programs were designed to perform tasks that were previously performed by humans. The first 'computers' were humans, typically women; for many years before the electronic computer arrived, people who performed calculations were called 'computers'.

Admittedly the Lyons tea shop computer was called LEO, and no doubt at least someone was tasked with making it seem friendly, but it was hardly anthropomorphic. (https://www.sciencemuseum.org.uk/objects-and-stories/meet-leo-worlds-first-business-computer)

Similarly much production line machinery that is now computer controlled has evolved from machines that were directly controlled by humans.

And whilst everyone has heard of the 'Turing Test', one of Turing's earlier major contributions was to establish what could be computed, to try to determine what machines could achieve in terms of replacing human-controlled activities ... as part of his PhD thesis, I think!

I don't pretend to fully comprehend the conclusions of his work, but I have the impression it says that a computer can achieve any process that can be defined. A task like driving appears to me to be largely definable, albeit complex. Of course, there will always be the 'amazing' situations, when someone appears to be a hero, but perhaps AI systems would also 'be lucky' sometimes.

----------

So are computers that can do a task that humans previously performed anthropomorphic? E.g. a computer controlling a router.

    In general, I would say no.

For example, whilst I would expect the computer-controlled router to be driven by a 'traditional' computer program, would it be any different if it were based on an AI program that had been trained on data from a human operating a similar router manually, e.g. with some kind of joystick system?

Again, I have difficulty regarding that as anthropomorphic, even though it bears a resemblance to the way a new apprentice might be taught by an experienced operator.

-----

Instead, it appears to me that an AI-based computer can be a machine which has been designed and programmed to have at least some of the underlying functionality of the nerves, brain, etc. of a human, although of course the actual materials and communication mechanisms are quite different.

Most simulators, even those describing inanimate systems, like an electrical circuit simulator, only predict a subset of the full behaviour. Thus the extent to which an AI-based system can be taught to drive a car or analyse an image in the same way as a human is presently the subject of some very active research. My unqualified impression is that in both cases the best AI systems are managing to replicate human capabilities for those specific tasks, at least as effectively as 'averagely competent' humans manage to do.

That is not to suggest that either humans or AI-based computer systems always perform the task without flaws ... nor does it necessarily suggest humans and AI systems will exhibit the same flaws ... although it would not be surprising if the training mechanism passed some 'bad' habits from humans to AI systems, if it was trained using 'uncensored' data including sub-optimal examples of performing the task.

----------

I confess to being concerned by 'chat' machines which have been deliberately contrived to pass the 'Turing Test' of fooling humans into believing they are 'conversing' with a real person ... that feels like a type of fraud ... but perhaps not much different from 'real' humans claiming to have some competence or capability they do not possess.

Unfortunately, I feel our present systems are very poor at dealing with fraudulent humans, so I will not be surprised if their machine-based counterparts are also allowed to do some unsavoury things.

------------------

Do I think an AI system has empathy or other 'feelings' or 'intuition' as to the 'bigger picture', that are usually ascribed to humans? ...

No, but it seems plausible that at least some of these properties could be at least partially simulated. Much of our society is driven by carrot-and-stick incentives ... surely these are fairly easy to simulate? The effects of them may even be inherited as part of the training data, if the training data are recordings of human behaviour.

-----------

Whether there is a similarity in the 'thinking process' is less clear to me. When I am driving, especially on familiar roads, etc., I suspect I am just following a process I started to learn over half a century ago. Consequently, it nearly all feels like an automatic process.

-----------

Could a 'competent' AI machine be taught at least some of the same lessons as humans? Quite probably.

I am not judging whether present AI machines are 'competent', nor that the present machines have been 'adequately trained', but I suspect the best ones are heading in that direction.

I also suspect the best ones will achieve many complex tasks better than the 'average' person, so that in general, using the best machines instead of 'average' humans may make sense.

---------------

I am much less clear as to whether AI will 'supersede' real human intelligence in every respect.

But perhaps the biggest danger is in how much authority we allow it to have over us. 

That is going to be 10% technology and 90% politics, so for now, I'll leave that for others to ponder.

---------

Best wishes, Dave


   
ReplyQuote
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 

@davee

If you program (using AI or otherwise) a computer to behave, at least in a limited way, like a human, then does that imply anthropomorphism?

I was only suggesting that some people imagine there is something "alive" and aware behind things like ChatGPT; call that kind of thinking whatever you like.

I also suspect the best ones will achieve many complex tasks better than the 'average' person, so that in general, using the best machines instead of 'average' humans may make sense.

Machines can do many things better than any human; that is why we use them. In a sense, they are an automated version of using pen and paper to store and process data.

We can program them to do statistical analysis on large amounts of data, and patterns can be found (new discoveries made) much faster than we could ever manage in a lifetime using pen and paper.

Computers supplement and enhance human intelligence. Will we ever build or evolve a machine that can think like a human? I don't know. I just don't think that is what our current AI programs are doing.

As for neural nets, they have the best potential to include the ability to monitor their own outputs and retrain themselves should the environment change and the robot no longer achieve some desired goal. So @inq and @thrandell might put their trained robot into a different layout and it will fail, but it would still be able to fix itself, or evolve, without human intervention.
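
A minimal Python sketch of that monitor-and-retrain loop (every name and the stand-in scoring below are hypothetical, not from a real robot):

```python
import random

def run_episode(policy):
    """Stand-in for one robot run in the current layout; returns 0..1."""
    return random.random()

def retrain(policy, scores):
    """Stand-in for retraining from recent poor experience."""
    return policy   # a real system would adjust the net's weights here

def monitor(policy, goal=0.8, window=20):
    """Watch recent performance; when the goal is no longer met (e.g. a
    new maze layout), trigger retraining -- no human intervention."""
    scores = [run_episode(policy) for _ in range(window)]
    if sum(scores) / window < goal:
        policy = retrain(policy, scores)
    return policy

policy = monitor(policy={})   # toy 'policy' object
```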

 


   
ReplyQuote
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 7083
 

@robotbuilder @davee One of the first things I learned when I began building admin systems at IBM was NOT to confuse that which is best done by a machine with that which is best done by a human. Let me give you my best remembrance of an actual conversation I had with a user of a system I was building. I may have added a bit of humour and exaggeration, but the point is bang on.

I was talking to this user to see if there were any more unfiled requirements. He said

'I was thinking of asking you to add up all these numbers in a column that goes on for several pages, but I guess that would be hard for you.'

He then said

'I want you to create my xyz report.' I asked him how to do that, and he said, 'Well, if it's a Tue or Thur, but only if they are even-numbered days and the sky is clear of clouds, then create the report; otherwise don't bother.'

In his mind, the column of numbers was hard work and the glance out the window was easy. In this case, a 180-degree difference in human vs machine capability. Even though he was employed by IBM, he had no computer experience and was an older worker put out to pasture because he was burned out. He truly did not know the difference.



   
ReplyQuote
Will
(@will)
Member
Joined: 3 years ago
Posts: 2535
 

@zander 

He's probably a Senator now ... almost overqualified !



   
Inq reacted
ReplyQuote
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 7083
 

@will Unlikely since that was 48 yrs ago and he was past 50 but still technically possible.



   
ReplyQuote
Will
(@will)
Member
Joined: 3 years ago
Posts: 2535
 

@zander 

So, his age is creeping up to his IQ 🙂



   
Inq reacted
ReplyQuote
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 7083
 

@will No, his age is well past his IQ, remember normal is 80 to 100, his age makes him gifted (100 to 120)



   
ReplyQuote
(@davee)
Member
Joined: 3 years ago
Posts: 1711
 

Hi all again .. @zander @robotbuilder @will @inq,

Whilst it is sensible, and sometimes even a requirement, to match tasks to either machines or humans, and this includes machines with AI capabilities, it does not mean the scenery hasn't changed over the last decade or two.

I am not claiming AI machines can 'think' better than humans ... indeed, I am not even completely sure what 'thinking' is, and even less clear whether the AI systems are doing it, but it does appear that the AI approach is providing a technique that enables computers to achieve things that, until recently, have been limited to humans.

Perhaps the difference in our views is in the way we are looking at the situation. I am suggesting that at least a substantial part of the capability that humans have, but which has been difficult for computers to achieve, is not exceptionally magical; it is simply something that is difficult or impossible to achieve with a 'traditional' computer with a relatively small number of processors.

Humans have the advantage of being built from a vast number of analogue processing elements, using chemical technology. Each of those elements is effectively processing in parallel with the others. Furthermore, there are communication systems to collate and distribute the data. To match this, maybe you need a different type of computer ... and the AI systems are somewhat different. They may still use transistors, resistors, capacitors and inductors, but they are arranged differently, and there are a lot more of them in a single machine.

----

I am certainly not qualified to accurately compare humans and AI systems, but perhaps some similarities are far from accidental.

My impression is that AI machines are deliberately trying to imitate or simulate some of the processing style involved in brains and nervous systems at the 'nuts and bolts' end of the process. If this is true, then it seems reasonable that in at least some ways, they are actually going to have some of the same characteristics, good and bad.

I am not claiming AI is doing the same thing as a person thinking, but perhaps there are similarities and overlaps that are not present in the 'traditional' computer programme approach. Furthermore, this is a fast evolving area, so that the degree of similarity might increase as the AI machines become more capable.

Furthermore, perhaps 'thinking' and 'learning', or at least a substantial part of them, are not exceptionally difficult things to do, provided the appropriate technology is available. That technology may be impracticable with computers using a relatively small number of processors, running programs that must be individually tailored to the intended task.

In general, programming from before 1950, and possibly from Ada Lovelace, until the 21st century has aimed to write a program that did the final task. A 'simple' accounting program is likely to be aimed at reproducing a manual accounting procedure that would have been used before a computer was available. That is, the programmer, or at least the 'chief programmer/systems analyst' if they have other staff writing low-level procedures, will have consciously been thinking about the accountancy needs and requirements.

The AI approach seems contrary to that ... albeit I haven't worked in this area, so I may have this wrong ... the 'code' programmers seem to be producing a massive set of generic curve fitting machines. It is portrayed as having some physical similarity to the visible/observable nervous system of humans and other animals, with no idea or care as to whether it will be used for accountancy, driving a car or judging a cat exhibition. In that sense, it is like the unborn baby ... provided the baby grows up to be an 'average'-capability person, they may learn how to do any, all or none of those different possibilities.

The specialisation comes from the training process, involving data that can be extracted from performing, or from the results of performing, the particular skill/task. Whilst some parameterisation and interfacing of the generic machine will be needed to fit it to a particular application, the internal basic design is largely unchanged. Again, this has a similar feel to the way the brain and nervous system appear to consist of a range of building blocks: adaptation appears to consist mainly of expanding the number of these blocks and the ways they are joined together, rather than of novel block designs appearing, at least within the life of an individual. Of course, different individuals may have differences, and differences may occur over a number of generations as part of the evolutionary process.

So whilst I can't predict the future, and I am not particularly clear about the present AI state of the art, I think some of the practical restrictions of programming for a specific task are being lifted.

This is not happening at zero cost, since AI machines tend to require a lot more processing 'horsepower' than a machine programmed to do a specific task. But as extra 'horsepower' can to some extent be created by relatively simple copy-and-paste approaches, and the underlying technology so far continues to deliver ever-increasing quantities of computing engines in smaller and lower-energy forms, new possibilities are emerging.

I am not saying AI machines are the only or best approach to any particular task, but I think possibilities are emerging that have previously been unobtainable. Furthermore, these possibilities will almost certainly include many tasks that have until now been confined to the human domain. This implies that the consequences for many people, perhaps all of us, could be comparable to what the invention of the Jacquard loom was to the hand-loom workers.

----------

Whether this will be a case of 'closing an old door, opening a new window' is not clear; that is largely what happened with the industrial revolution and the microprocessor/IT/Internet 'revolution', so perhaps it will happen again.

--------------------------------------------------------------------------------------------

Ron's observation, "NOT to confuse that which is best done by a machine with that which is best done by a human", is still relevant, but the boundary conditions are changing dramatically, as these machines are capable of doing so much more.

And we must be careful in how that match is done.

Surgery to mitigate the effects of a very rare genetic disease might only require a few specialists to cover the world. Thus the boundary level would naturally be set very high, assuming sufficient specialists were available to meet demand.

But for self-driving cars, I assume there are billions of drivers, and it is likely that a large proportion of accidents involve the less capable and careful drivers. So whilst it would be ideal for the self-driving car to be more capable than the very best drivers, in practical terms a system that was considerably better than just the 'average driver' would be a positive improvement if it replaced a substantial proportion of the 'below average' drivers.

----------------

I have often heard the 'pain' experienced by living creatures described as a vital protection mechanism, protecting the creature from danger. I am not suggesting an AI system will feel pain in the same way, but I think it is plausible to teach it to avoid situations that would be destructive to it, which is what pain is supposed to achieve.
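
A crude sketch of that idea: 'pain' as a strong negative reward that teaches the system to avoid self-destructive actions (everything below is illustrative):

```python
# The agent keeps a learned value per action and stops choosing the
# one that 'hurts'. A toy version of reward-based avoidance learning.
values = {"touch_flame": 0.0, "avoid_flame": 0.0}
alpha = 0.5   # learning rate: how far each experience moves the value

def update(action, reward):
    values[action] += alpha * (reward - values[action])

for _ in range(5):
    update("touch_flame", -10.0)   # the 'pain' signal
    update("avoid_flame", +1.0)    # mild positive outcome

print(max(values, key=values.get), values)   # avoid_flame wins
```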

---------------

Will these possibilities be used for good or bad? 

       Both I presume!

Are the machines going to be sentient, alive or even thinking?

I suspect it depends on the definitions. Such descriptions were created in a world in which a simple machine was something like a lever and a complex machine was something like a car or a plane. The idea of either a lever or a plane being 'alive' was easy to decide. An AI system which is able to accurately simulate pain, and maybe other emotions, could be more difficult to decide.

In summary, when you have a machine whose operations are based on data that is itself based on the actions of 'living things', and the machine has some structures that roughly simulate the processes of those living things, then maybe the boundaries are starting to get fuzzy.

-------------

And by the look of a recent report "IBM: Workers have three years to reskill in AI" 

https://www.itpro.com/technology/artificial-intelligence/ibm-workers-have-three-years-to-reskill-in-ai

perhaps I am not alone in suggesting AI could be another 'revolution'.

Best wishes and take care my friends, Dave


   
ReplyQuote
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 7083
 

@davee I worked for IBM, so I know exactly what they are doing there. I was even involved in a similar campaign, where IBM sought US government protection against computers from another country. We used social engineering tricks (called the 'elevator whisper' technique before social media) to suggest they were superior and would ruin our computer industry.



   
DaveE reacted
ReplyQuote
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

Posted by: @davee
My impression is that AI machines are deliberately trying to imitate or simulate some of the processing style involved in brains and nervous systems at the 'nuts and bolts' end of the process. If this is true, then it seems reasonable that in at least some ways, they are actually going to have some of the same characteristics, good and bad.

Quite true - Even one of the training methods is closely based on biological evolution theory, borrowing 'chromosome' and 'individual' as terminology.
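
For anyone curious, here is a toy version of that evolutionary training method in Python, using the borrowed terminology (the 'all ones' fitness goal is made up, purely for illustration):

```python
import random

def fitness(chromosome):
    return sum(chromosome)          # toy goal: evolve all ones

# A population of individuals, each carrying an 8-bit 'chromosome'.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                   # survival of the fittest
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)        # two parent individuals
        cut = random.randrange(1, 8)
        child = a[:cut] + b[cut:]               # chromosome crossover
        if random.random() < 0.1:               # occasional mutation
            i = random.randrange(8)
            child[i] ^= 1
        children.append(child)
    population = parents + children

print(max(fitness(c) for c in population))      # typically reaches 8
```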

Posted by: @davee
Furthermore, this is a fast evolving area, so that the degree of similarity might increase as the AI machines become more capable.

I'm certain you're correct. There MUST be some process in our brains that allows us to make decisions like swatting that gnat, planning our retirement finances, or helping other people learn. I haven't wrapped my nogg'n around quantum computing theory, but I suspect it will lead to some of the responses we attribute to human intelligence.

Posted by: @davee
the 'code' programmers seem to be producing a massive set of generic curve fitting machines. It is portrayed as having some physical similarity to the visible/observable nervous system of humans and other animals, with no idea or care as to whether it will be used for accountancy, driving a car or judging a cat exhibition.

La-La Land Time

I was very surprised to find that an ANN (Artificial Neural Network) is very much like FEM (Finite Element Method) or CFD (Computational Fluid Dynamics). I'm sure these are not common, everyday terms, but the concept behind them is really quite easy to grasp. It's very easy for us to understand that if you pull a spring, it will stretch a given amount. Likewise, a small cube of steel can be characterized by crushing it, stretching it, and shearing it. All FEM does is break up some huge structure, say ... a bridge, into millions of little cubes of steel, mathematically. These form some huge matrix that represents the differential equations of each of these cubes and how they interact with each other. Solving this matrix gives an exact solution, but because the cubes are finite instead of infinitesimal, the answer is an approximation. Once the matrix is solved, inputting the forces applied to the bridge (a truck lumbering across it), the matrix will spit out the stresses and deflections of the entire bridge.
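
The FEM idea in miniature (two springs instead of a bridge; the stiffnesses and load are made up):

```python
import numpy as np

k1, k2 = 100.0, 200.0    # two springs in series, N/m (invented values)
# Node 0 is fixed to the wall; unknowns are displacements of nodes 1 and 2.
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])   # the (tiny) stiffness matrix
f = np.array([0.0, 10.0])        # 10 N pulling on the free end

u = np.linalg.solve(K, f)        # 'solve the bridge': K @ u = f
print(u)                         # [0.1  0.15] metres
```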

Likewise, an ANN is nothing but some huge matrix that takes inputs and spits out actions. The only difference is in the training of this matrix brain. In an ANN, instead of building the matrix like FEM, we throw a bunch of random numbers into the matrix. Of course, when we send it the input, we get garbage out. This is not GIGO! The data going in is valid ... the brain is untrained. It is an infant. It spits out ... well ... it spits up, pees all over itself and generally just looks cute. We literally tell it how right and how wrong its results are. The backpropagation phase of the mathematics simply tweaks the matrix a little. After a while, the summation of all these tweaks from right and wrong answers means the matrix will get it right ... about the same as, and sometimes better and sometimes worse than, a human would.
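
And the tweak-and-repeat training loop, boiled down to a single weight (real backpropagation does this across millions of weights at once; the data here are made up):

```python
import random

w = random.uniform(-1, 1)    # the untrained 'infant' brain
lr = 0.1                     # size of each tweak
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # the truth: y = 2x

for epoch in range(200):
    for x, y in data:
        error = w * x - y    # tell it how wrong it was
        w -= lr * error * x  # the backpropagation-style tweak

print(w)                     # converges to about 2.0
```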

Posted by: @davee

it would be ideal for the self-driving car to be more capable than the very best drivers, in practical terms a system that was considerably better than just the 'average driver' would be a positive improvement if it replaced a substantial proportion of the 'below average' drivers.

Posted by: @davee

Will these possibilities be used for good or bad? 

       Both I presume!

A-men!

Posted by: @davee

And by the look of a recent report "IBM: Workers have three years to reskill in AI"

Jump in, the water is warm.  Just ignore the gator behind the curtain!

 

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
DaveE reacted
ReplyQuote
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

Posted by: @will

@zander 

He's probably a Senator now ... almost overqualified !

Posted by: @zander

@will Unlikely since that was 48 yrs ago and he was past 50 but still technically possible.

Posted by: @will

@zander 

So, his age is creeping up to his IQ 🙂

Posted by: @zander

@will No, his age is well past his IQ, remember normal is 80 to 100, his age makes him gifted (100 to 120)

You two crack me up! 🤣 🤣 🤣 Laurel and Hardy would be proud!

 



   
DaveE reacted
ReplyQuote