Well, now that I'm finally back in programming mode I'm realizing that I have no clue how to work with this Speech Platform.
I actually started off doing really well. I wanted to find a way to have the computer report the percent of confidence it has when recognizing words, and fortunately I was able to find that function right away. It's called "Confidence". Imagine that.
Here's a screenshot of the program I'm currently working on. I only have 6 robot names in the grammar set right now. They are all in a Grammar set called "Robo Chooser". It recognizes these 6 names very well, and I have it printing out the percent confidence after each name. These names were printed to the screen as they were recognized while I spoke them. So not bad for a start.
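Just to sketch the idea in Python (which is where I'd eventually like to rebuild all of this anyway; the class and names below are entirely made up for illustration, not the actual Microsoft API), a recognition result that carries its own confidence score might look something like this:

```python
# Made-up sketch only: a stand-in for the Speech Platform's recognition
# result, which reports a confidence value alongside each recognized word.
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    text: str          # the recognized word or phrase
    confidence: float  # 0.0 (no confidence) .. 1.0 (certain)

    def percent(self) -> str:
        # Format the raw 0..1 confidence as a percentage for printing
        return f"{self.confidence * 100:.1f}%"

# Printing each name with its confidence, the way my program does:
for r in [RecognitionResult("Alysha", 0.93), RecognitionResult("Rover", 0.87)]:
    print(r.text, r.percent())
```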
What I'm trying to figure out now is how to add other named Grammar sets to the mix and have them all loaded at one time. Thus far I'm not sure if that's even possible. I know I can have multiple sets of named Grammars, but I don't think I can have them all loaded simultaneously. I think I can only load them one at a time.
Unfortunately I can't find any good forums associated with programming the Microsoft Speech Platform to pick the brains of other programmers. I'm probably stuck having to figure this all out myself from scratch.
I might make a video on it and post that and see if I can get anyone else to climb aboard the Grammar Wagon. I like the idea of creating the Grammar for my robot from scratch rather than using a speech recognizer that is already programmed. The reason for this has to do with how I'm hoping to develop my A.I. program around this Grammar structure. It's a long story, but it fits in very well with how human babies actually learn. After all, how many babies have you seen that had a fully developed vocabulary on the day they were born? Having the robot learn how to build its own dictionaries is key to creating an A.I. system that has a good understanding of what it's actually talking about.
At least that's my thinking. And I'm probably wrong.
Nonetheless I'd like to give this a shot and see where it goes.
DroneBot Workshop Robotics Engineer
James
What I'm trying to figure out now is how to add other named Grammar sets to the mix and have them all loaded at one time.
I did it! I did it! I did it!
I found a way to load multiple named Grammars simultaneously. And they can easily be loaded and unloaded programmatically. So I'm all set to go! For a minute there I was afraid this wasn't going to be possible, which would have really messed with my hopes and dreams. But fortunately Microsoft is on top of their game and included this capability.
This will allow my robot to dynamically switch between grammars and even have multiple grammars open simultaneously when required. This was precisely what I was hoping for. It will ultimately allow my robot to have a vast database of Grammars without having them all loaded at the same time, making it possible for the robot to focus the speech recognition on a much smaller vocabulary while maintaining a vast knowledge base of Grammar.
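Roughly, in Python (again just my own made-up sketch of what I'm after, not the real Microsoft API), the load/unload behavior looks like this:

```python
# Made-up sketch of loading and unloading named grammar sets on the fly.
class GrammarManager:
    def __init__(self):
        self._loaded = {}  # grammar name -> set of words currently active

    def load(self, name, words):
        self._loaded[name] = set(words)

    def unload(self, name):
        self._loaded.pop(name, None)

    def active_vocabulary(self):
        # The union of every loaded set: the words recognition listens for.
        out = set()
        for words in self._loaded.values():
            out |= words
        return out

mgr = GrammarManager()
mgr.load("Robo Chooser", ["Arathoon", "Charles", "Butler", "Rover"])
mgr.load("Self", ["Alysha", "you"])
print(sorted(mgr.active_vocabulary()))  # all six words active at once
mgr.unload("Robo Chooser")              # focus on a much smaller vocabulary
print(sorted(mgr.active_vocabulary()))  # ['Alysha', 'you']
```

The vast knowledge base lives on disk; only the sets the robot currently needs are loaded into the active vocabulary.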
I just now created a Grammar named "Self". This Grammar will contain words associated with the robot itself, information such as its name, age, etc. This Grammar set will ultimately become the central "I" of the robot. In other words, this Grammar set and its associated dictionaries will contain everything the robot knows about itself. And this set can even grow into multiple sets. For example,
- Self - a Grammar set about the robot's characteristics, like name, age, etc.
- Self_Possessions - a Grammar set that contains things the robot considers it owns.
- Self_Skills - a Grammar set that contains skills the robot has.
And so on, and so forth.
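As plain data (another made-up Python sketch; the word lists are placeholders, not Alysha's real vocabulary), that family of sets might start out like:

```python
# Made-up sketch: the "Self" family of grammar sets as plain data.
self_family = {
    "Self":             ["Alysha", "name", "age"],
    "Self_Possessions": ["charger", "camera"],
    "Self_Skills":      ["speak", "listen"],
}

# Everything the robot knows about itself is the union of the family,
# which can be found by the shared "Self" name prefix as the family grows.
about_self = {w for name, words in self_family.items()
              if name.startswith("Self") for w in words}
print(sorted(about_self))
```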
This is great, because this program will not only be the basis for speech recognition, it will also be the basis for the robot's identity.
When the robot speaks it will use a similar program feature called "PromptBuilder". The PromptBuilder builds Prompt sets in much the same way that the GrammarBuilder builds Grammar sets.
So the Prompt sets will have a similar structure. In fact, they will most likely evolve to be a mirror image of the Grammar sets.
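In other words (one more made-up Python sketch, not the real PromptBuilder API), the same named set could feed both directions:

```python
# Made-up sketch: one named word set feeding both recognition (what the
# robot listens for) and prompts (what the robot can say about the set).
def build_grammar(name, words):
    return {"name": name, "words": set(words)}  # the hearing side

def build_prompt(grammar):
    # The speaking side: a prompt mirroring the grammar set's structure
    return f'{grammar["name"]}: ' + ", ".join(sorted(grammar["words"]))

g = build_grammar("Self", ["Alysha", "you"])
print(build_prompt(g))  # Self: Alysha, you
```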
This is great. I don't know why I didn't continue with this back when I first started with it. But back then I was building robots using the 68HC11 chip, which had to be programmed in machine language. It was quite a major deal to program the robot to do even small tasks. But now, with Arduinos, STM32 boards, a Raspberry Pi, and a Jetson Nano, building robots is a piece of cake. Everything is done in high-level languages like C#, C++ or Python. Everything has been done for us.
DroneBot Workshop Robotics Engineer
James
Lookie!
Notice now that when Alysha recognizes her name she also simultaneously recognizes that this word refers to her "self". That's because "Self" is the name of the Grammar set that Alysha's name is in.
As I continue to say Arathoon and Charles, the robot also knows that these words came from the "Robo Chooser" Grammar set. So that tells the robot what these words are related to.
Then when I say "you" or "hey you", Alysha will again know that I'm talking about her, because she will also know that these words are members of the "Self" grammar set.
So this tells the robot when I'm referencing her "self".
Notice that when I returned to words like Butler and Rover, once again those words are in the Grammar set named "Robo Chooser". So Alysha knows that these words are not referring to her "self".
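The lookup behind this is simple enough to sketch (in Python, with made-up names; the Speech Platform reports the owning set's name on each result for me):

```python
# Made-up sketch: map a recognized word back to the grammar set it
# belongs to, so the set's name tells the robot what the word relates to.
grammar_sets = {
    "Self":         {"Alysha", "you", "hey you"},
    "Robo Chooser": {"Arathoon", "Charles", "Butler", "Rover"},
}

def owning_set(word):
    for name, words in grammar_sets.items():
        if word in words:
            return name
    return None  # word is not in any loaded grammar

print(owning_set("Alysha"))  # Self         -> refers to the robot itself
print(owning_set("Butler"))  # Robo Chooser -> some other robot's name
```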
This is going to be GREAT! I'm happy now. I'm going to bed!
Four days of troubleshooting to get this old program back up and running, but it was well worth it. I'm ready to start building Alysha's A.I. now.
DroneBot Workshop Robotics Engineer
James
So the screen issue is a known problem. There doesn't seem to be a fix other than trying a new monitor, which I did (only 2 others, with the same results), so I'll just live with it and stay headless I guess. It's no big deal really.
Now I have to figure out how to link them, but the documentation isn't totally clear as to which commands I should type into which system (master or slave). Besides that, I still haven't worked out the hostapd private WiFi network that it seems to want to live in.
What exactly are you working on? I see you have an Atomic Pi there that has Ubuntu on it. How does TurtleBot come into the picture?
I also noticed from reading about the Atomic Pi that it can run Windows 10. I'm not sure I like running Windows 10 on my robot, but the current Grammar program I'm working with seems to be exclusive to Microsoft Windows. So I might end up needing to run Windows on an SBC on my robot just to keep this Grammar scheme I'm building. Windows seems like an overkill OS for a robot. I wonder if there are stripped-down versions of Windows I could run, like a "Windows 10 Lite"? I've never heard of any such thing, but it sounds like a good idea.
As it stands now I'm actually thinking of creating my own Grammar system in Python. Just to escape Windows OS. Their Speech Platform is too good to give up on. But if it means being tied to Windows OS that could be a real bummer.
DroneBot Workshop Robotics Engineer
James
That's a story (and what isn't when it comes to me?). There are a number of iterations of ROS (Robot Operating System), and when building a robot it seemed an obvious choice to install ROS on it. The first few I tried failed miserably during the install, due to poor documentation and/or a lack of a place to download the repositories from, and the final one merely lacked a piece of hardware that seems to be sold out every place I look (probably due to the popularity of ROS for Jetson). So I moved on to yet another iteration of ROS, this one called "TurtleBot". Now, the thing with THIS version of ROS is that it uses TWO computers: one an SBC (Single Board Computer), which we will, for lack of a better word, call a slave, installed inside the actual robot and connected to the motors and sensors. It sends that data over to a "master", which is the computer that actually uses the data to make the calculations, then sends "orders" back to the "slave" to move the robot's body.
The thing with this iteration of ROS is that the software for the "master" computer is written for x86 and is not available for the ARM-based Raspberry Pi. The Atomic Pi, however, IS an x86 SBC, which made it ideal for the "master", because at some point I could stick it inside the robot and not be tied to a desktop stuck on a shelf somewhere. All it needs is a network connection to the "slave", and the system becomes complete.
Now, if you're considering the Atomic Pi because it's Windows-based, to run your M$ speech, then you don't actually have to be married to a full version of Windows, because Windows 10 has a version called IoT (Internet of Things) which MIGHT work for you. And if it does, then obviously you could stick it inside a robot, because IoT doesn't need to be tied to a desktop architecture.
Thanks for the information. I shy away from anything that has to do with IoT. Why? Because I don't want my robot to be dependent upon the Internet, and most IoT systems are totally dependent on the Internet. After all, that's what IoT stands for: "Internet Obligatory Ties". So one reason the Windows IoT system might be "lightweight" is that you need to be connected to the Internet in order to use it.
I want absolutely nothing to do with the Internet in my robot projects. That's a major criterion I demand. The last thing I want is a robot that is going to die on me if the Internet goes down.
I'm actually turned off by ROS as well. As far as I can see it's just more software to become dependent upon. Also, from what I've heard about it thus far, all it basically does is everything I can already do without it. So I'm not convinced that it's worth messing with.
Having said that, I can imagine it has a very useful place in industrial robotics. But for this mountain man, I'd rather do without it. I don't even like being dependent on the M$ Speech Platform, but I love the software architecture so much I just can't pass it up. I am thinking about eventually freeing myself from it by writing my own version, most likely in Python, because it's really the Grammar structure that I'm interested in. I could use some other software for the actual speech recognition of the words.
In any case, good luck with TurtleBot. I thought TurtleBot was the name of a robot kit. I didn't know it was a version of ROS.
DroneBot Workshop Robotics Engineer
James
I do as well, James. I think it has its place, just not on my bots yet. For many, it may be the thing they need or want.
Now...
There are IoT Edge products being released that will allow IoT connection and processing (i.e. cloud) OR local on-device processing. A few weeks back I was at a meetup for IoT, and a member of the group who works for MS presented the Kinect DK. It is a great product and will work locally or in the cloud. It was amazing what it can do. My issue is... it's $400 USD new. I'm sure that's pretty close to the cost of the parts they make it with, all top shelf.
ROS is something without which we cannot understand the complete capability of robotics.
I was always avoiding ROS and trying anything that wouldn't require ROS, but believe me, when I started it became easy to follow...
The TurtleBot3 Burger is a basic model but costly in terms of its hardware; you can actually build one yourself with an RPi3 instead of buying it.
@sdey76
I'm fairly certain that the ROS Turtle that I'm using is pretty much the same as the Turtlebot you mentioned. The only difference is that I'm building it myself instead of ordering it online, taking it out of the box, plugging it in, and having it just work
As for your question from the "Backup" thread, I'm in the same boat as you are at this point. I want to connect the master to the slave, but the WiFi isn't working. The best I can figure is that the slave is creating a sort of private access point thingy via "hostapd", which is the host Access Point Daemon. So it's trying to create its own little world to live in. I can see the WiFi card in ifconfig, but I can't make it connect to anything. I've tried manually editing things (hostapd, dnsmasq, wpa_supplicant), and I've even tried a direct connect command
iwconfig wlan0 essid <name> key s:<password>
And nothing worked. They (master and slave) can talk to each other (ping) via hardwire, but I've had no luck connecting them via their own built-in little world. I may end up either installing a router inside the robot on top of all the other electronic stuff, or just ditching the whole ROS thing completely and going with something that actually works for somebody as stupid as me in this area.
I'm about to start issuing the ROS commands to make them talk to each other in whatever language it is that they want to talk in, but this is going to be over the hardwired network, and I'll just have to worry about their virtual paradise later.
Have you looked into SNIPS? It requires no internet access, can control hardware, and has voice recognition.
I have a SNIPS box running right now, but it doesn't currently do anything other than tell me the weather, because I haven't done that much with it yet due to getting caught up in ROS.
Have you looked into SNIPS? It requires no internet access, can control hardware, and has voice recognition.
Yes, I did look into it after you mentioned it. From what I could tell, SNIPS doesn't offer the Grammar structures offered by the Microsoft Speech Platform. I'm not just looking for a speech recognition system; I'm looking for one that I can program from the ground up, in terms of a meaningful language structure.
So what I'm actually doing is creating a foundation for an A.I. system. What I'm doing really has very little to do with actual speech recognition. The speech recognition just happens to be part of the overall package.
It's a long story, but the architecture I'm hoping to build is similar to the ideas published by Marvin Minsky in his book "Society of Mind". I saw an opportunity to use the Microsoft Speech Platform as a great way for the robot to be able to create a meaningful "Society of Mind".
I'm not sure if SNIPS could be used to do that. Without trying SNIPS myself I can't be sure, but I did watch some YouTube videos on it, and based on how they were programming it, I didn't see where I could use that architecture to build a language-based "Society of Mind".
I should probably add here that the people at Microsoft most likely had no intention of providing a tool for this purpose when they created their Speech Platform. But it turns out that their Grammar system lends itself to this purpose quite well. It's their Grammar structuring capabilities that I'm after, not the actual speech recognition. But it is nice to have the speech recognition function integrated into the package; this way I can test out my Grammar schemes in real time without any additional interface to an actual speech recognition system.
It's my understanding that there is a speech recognition package available for Python too. I haven't yet looked into it, and I don't know whether it has the same type of GrammarBuilder scheme as the Microsoft package or not. It would be great if it does. I will look into the Python speech recognition package eventually.
Right now I would just like to play around with building a Grammar structure using the MSP, since I already have it up and running and did earlier work with it in past years.
DroneBot Workshop Robotics Engineer
James
Thanks a lot
Yes, the master/slave link over WiFi is not accessible... I am trying to use TELEOP_KEY from the laptop to make the motor move, which is connected to an Arduino Mega via the Raspberry Pi.
So I cannot give velocity commands remotely.
It is upsetting my project! I can't do SLAM navigation on an RPi3 alone... I need a host laptop.
Could there be a possibility of controlling the robot motor over an ESP32 or ESP8266 WiFi link, as a server/client, and getting teleop keyboard access?
@sdey76
I've given up on the Turtle, and am trying this
https://downloads.ubiquityrobotics.com/pi.html
So far it looks like it's a full install of ROS, and while I haven't read all the docs yet, a brief scan of them looks really nice.
Hi
Oh yes, I am using it on an RPi3. I downloaded the image last Friday and it works.
But the issue is that it is built for the Ubiquity robot.
So when I run roscore, it already has a ROS MASTER RUNNING.
Next you log in as ubuntu with ubuntu as the password, and then the WiFi does not connect and hangs repeatedly.
I failed to get into this RPi3 with the ssh command from my laptop over the same WiFi.
Please try it and let me know if you have any luck!
Thanks a lot
There is a command on the ROS wiki page to disable the Ubiquity robot setup; that worked, and I could finally get roscore running OK this Sunday.