What is Semantic A.I.?
Semantic A.I. is, broadly, any form of Artificial Intelligence that is based primarily on the semantics, or meanings, of words. The idea has been around for quite a few decades and has taken many different forms. Put another way, there are many different ways to approach building an intelligent system based on semantics.
If you search for Semantic A.I. today, there is a very good chance you'll be introduced to data companies attempting to sell their Semantic A.I. approach to data analysis to businesses. What they are calling Semantic A.I. is actually a means of using Neural Networks (ANNs) to categorize large data sets based on the meanings of the words those data sets contain. While this is a valid use of the term Semantic A.I., it's not really an approach for building a truly autonomous mind. Instead, it's a means of organizing data using semantics in the hope that the resulting organization of the data will reveal information that was not previously apparent. That said, organizing data based on semantics may still prove useful for an autonomous mind.
My own personal interest in Semantic A.I. is in building an autonomous mind or brain that is based on the meanings of the words it has learned. This is certainly not my idea, as this approach to Semantic A.I. has been around for many decades. In fact, many people have attempted to use semantics as a basis for A.I. via chatbots that make semantic connections between various words and phrases, with varying degrees of success. Unfortunately I am not aware of all of the studies or experiments that have been done in this area, so I can't share those here. What I can share are some ideas that I have come up with myself over the years.
My Early Attempts:
Some years ago I began working with the Microsoft Speech Engine Platform. This platform provides both TTS (Text To Speech) and SR (Speech Recognition). Its architecture includes a Grammar Builder (used with SR) and a Prompt Builder (used with TTS). Traditionally (and in just about every code example I could find), the main idea is very similar to how chatbots are constructed: you build "Grammars" based on the words you expect your computer might need to recognize, and you build "Prompts" based on what you expect your computer might need to say.
Obviously, how you build these Grammar and Prompt data components dictates which words or phrases your computer will be able to recognize and speak. The platform even has a feature that lets you assign semantic values to words or phrases, and another, called Semantic Key, that lets you begin to organize and make connections between semantics, or the meanings of words. I found this all quite exciting, since using semantics as a basis for A.I. was exactly my interest. Unfortunately, all the examples I could find were quite limited in scope, and nothing in the software actually keeps track of semantics, so you need to build the semantic connections yourself. That's actually fine, especially since you can use a spreadsheet to keep track of your semantic connections and just load that information into the Speech Engine to be used.
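The spreadsheet idea above can be sketched in a few lines. This is a minimal, hypothetical sketch, assuming a simple three-column layout (phrase, semantic key, semantic value); the phrases and values here are made up for illustration, and in practice the table would be exported from the spreadsheet as a CSV file rather than embedded as a string.

```python
import csv
import io

# Hypothetical spreadsheet contents: phrase, semantic key, semantic value.
# In practice this would be a CSV file exported from the spreadsheet.
SHEET = """phrase,key,value
turn on the lights,lights,on
switch the lights off,lights,off
what time is it,clock,query
"""

def load_semantics(csv_text):
    """Map each expected phrase to its (semantic key, semantic value) pair."""
    semantics = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        semantics[row["phrase"]] = (row["key"], row["value"])
    return semantics

semantics = load_semantics(SHEET)
print(semantics["turn on the lights"])  # -> ('lights', 'on')
```

Once the table is loaded, each recognized phrase can be resolved to its semantic key and value without hard-coding the connections into the grammar itself.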
In any case, after working with this system for a while I actually found it to be quite limited and cumbersome. While it could be very useful for very limited semantics, it quickly became a nightmare if you tried to take it much beyond this. So I could see that a new approach would be needed.
The World, the I, and the id.
I took a break from the Speech Engine because I didn't want to continue working on something that was really not much more than a very sophisticated chatbot. In the interim I discovered Marvin Minsky's book "Society of Mind". I was also watching every lecture I could find on neuroscience and how the human brain works. What I came to strongly suspect is that the human brain actually has two centers of self, so to speak, and that both are required to make a coherent whole. Taking this perspective, along with insights offered by Marvin Minsky, I've come to believe that it may be possible to create an artificial brain based on three basic entities.
- The World
- The I
- The id
The world is basically any sensory input or information that comes in from outside of the artificial brain.
The I can be thought of as the robot's sense of self. This is not to say that the robot will be conscious, or even aware that it has a "self"; this is just the definition I am giving to this component. This component of the artificial brain will contain everything the robot identifies as belonging to it: its current state of being (where it is and what it's doing, etc.), what it currently possesses, what its name is, and so on. Everything the robot has determined to be a property of itself will be contained within this unit. And of course this information can change over time.
The id is another important component that cannot be left out; the "I" alone cannot function without it. Keep in mind that I have chosen these terms loosely, so don't take them too literally or expect them to satisfy any definitions beyond what I am assigning to them here. So what does the id do, and why is it important?
The id contains primal information: information that I, as the creator of the robot, will provide at the start. You can think of this as basic instincts and primordial skills. I have decided that building an artificial intelligence this way is not a "cheat", because even we humans are born with primal instincts and abilities due to the structure of our brains. So I see this as an important part of any autonomous A.I. project (obviously my opinion here). I'm not attempting to define anything for anyone else; I'm just sharing my perspective on things.
One thing the id can do is cause the robot to "want" to learn the meanings of new words. This isn't to say that the robot will actually "want" anything; it's just a metaphor. The id causes the robot to seek out new words as a primal instinct, because I, as the creator of the robot, want it to do this. 😊
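The three entities described above can be sketched as a toy structure. Everything here is illustrative: the class names, fields, and the `unknown_words` drive are my own hypothetical stand-ins, not an actual implementation, but they show how the id could flag words the I hasn't learned from what the World supplies.

```python
from dataclasses import dataclass, field

@dataclass
class World:
    """Sensory input arriving from outside the artificial brain."""
    heard_words: list = field(default_factory=list)

@dataclass
class I:
    """Everything the robot identifies as belonging to itself."""
    name: str = "Robby"          # illustrative default
    location: str = "home"       # current state of being
    known_words: set = field(default_factory=set)

@dataclass
class Id:
    """Primal, built-in drives seeded by the creator."""
    def unknown_words(self, world, i):
        # The primal "want": flag words heard but not yet learned.
        return [w for w in world.heard_words if w not in i.known_words]

world = World(heard_words=["hello", "cat", "hello", "run"])
me = I(known_words={"hello"})
drive = Id()
print(drive.unknown_words(world, me))  # -> ['cat', 'run']
```

The point of the sketch is the separation of concerns: the World only carries input, the I only carries self-properties, and the id only supplies the built-in drives that connect the two.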
Of course there is a lot more to this than I am outlining here, but this is the basic idea. This may seem to have drifted from the original topic of Semantic A.I., but not really: I needed to explain the roles of the World, the I, and the id to give a better picture of how the whole thing works.
Back to Semantic A.I.
Now we have a system with three components. As the robot builds up its semantic maps, it can use this system to shape the topology of that effort. In other words, it will be able to distinguish between semantics associated with itself and semantics associated with the outside world. And the outside world will then be broken up into many different categories, just as we humans do. For example, even as humans we interact differently with our environment when we are at home versus when we are at work. We also interact differently when we are with different people. Our robot must likewise be able to create this "many worlds" approach to its environment, and this will be done using semantic categories.
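The self-versus-world split and the "many worlds" categories could be represented as a nested semantic map. This is a toy sketch under my own assumptions: the category names, words, and meanings are all made up to show how the same word can carry different semantics depending on which world the robot is currently in.

```python
# Hypothetical semantic map: "self" semantics sit apart from the
# category-specific "world" semantics, and each world gets its own entries.
SEMANTIC_MAP = {
    "self": {"my name": "identity", "my hand": "body part"},
    "world/home": {"table": "place to eat", "run": "exercise"},
    "world/work": {"table": "data structure", "run": "execute a program"},
}

def meaning(word, context):
    """Look a word up in the current world first, falling back to the self map."""
    return SEMANTIC_MAP.get(context, {}).get(word) or SEMANTIC_MAP["self"].get(word)

print(meaning("table", "world/home"))   # -> place to eat
print(meaning("table", "world/work"))   # -> data structure
print(meaning("my name", "world/work")) # -> identity
```

Switching the active context string is all it takes to move the robot between "worlds", which is the behavior the paragraph above describes.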
This post is already way too long, so I think I'll stop here. The next topic that needs to be addressed is grammar, which is at the heart of being able to make sense of the meanings of words. I'll just leave this post with the following list of 8 parts of grammar. It's a bit of a tease, because it seems rather simple: there are only 8 parts to grammar, right? Actually, these 8 parts explode into much more, as I'll hopefully have time to cover later. Anyway, here are the 8 parts of grammar that all of semantics will ultimately be based upon:
The 8 Parts of Grammar
Nouns – Name specific things
Pronouns – Stand in for nouns (generic things)
Verbs – Describe an action or state of being
Adjectives – Modify or describe nouns and pronouns
Adverbs – Modify or describe verbs, adjectives, or other adverbs
Prepositions – Tell when or where (relate a noun to the rest of the sentence)
Conjunctions – Join words and phrases to form larger sentences
Interjections – Express emotion
These 8 parts of grammar would naturally be a key foundation for any Semantic System.
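As a first step toward that foundation, the 8 parts can back a tiny tagging lexicon. This is only an illustrative sketch: the word list is my own and covers one sense per word, while a real system would need the much richer handling hinted at above (words like "fast" can be an adjective or an adverb depending on use).

```python
# Toy lexicon: one illustrative word per part of grammar (plus a couple extra).
LEXICON = {
    "robot": "noun", "it": "pronoun", "runs": "verb",
    "red": "adjective", "fast": "adverb", "under": "preposition",
    "and": "conjunction", "wow": "interjection",
}

def tag(sentence):
    """Label each word with its part of grammar (None if unknown)."""
    return [(w, LEXICON.get(w)) for w in sentence.lower().split()]

print(tag("It runs fast"))
# -> [('it', 'pronoun'), ('runs', 'verb'), ('fast', 'adverb')]
```

Even this crude labeling gives a semantic system something to hang meanings on: once a word's grammatical role is known, the system knows whether it names a thing, an action, or a modifier.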
So anyway this is my answer to "What is Semantic A.I.?"
This is not meant to dictate the meaning of the term, but simply to describe what I mean when I use it, and how I approach the concept when it comes to building an autonomous A.I. brain.
Comments and alternative ideas are most certainly welcome.