Inqling Junior - Robot Mapping, Vision, Autonomy

240 Posts
10 Users
87 Likes
19.5 K Views
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @davee

(Confession, I have never tried 3D printing ... yet ... so I know how the 'shyness' can arise!)

Best wishes, Dave

I regret not jumping into 3D printing when printers first came out.  At $200 for a perfectly serviceable printer that is 10x more reliable than those early ones, I'd buy one long before an inkjet or laser printer for my computer.  I wouldn't even attempt a robot without one.  The idea of using metal erector-set lattice work like...

image

... would very quickly sour me to messing with robots!  He's of stouter stuff than I.

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
DaveE reacted
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 

@inq 

5Mpixel depth image sensor

That was something I would not have predicted decades ago when thinking about how to extract 3d information from two stereo images!

My other thoughts so far were to map the rooms, passing the data to a computer (for now) and allow me to label things.

No reason the main brain can't be a remote computer controlling a simpler robot, or many simple robots, via a wireless connection. That would be like a human using remote control of a robot via video feedback. Send all the visual data to the main computer and it can process it and send the results back to the simple robot. Something I haven't yet done but I see you seem to have it all figured out.

Letting the room geometry be the identifying characteristic to "discover" where it is and then handle the path determination and navigation.

Something you could easily experiment with by rotating the VL53L5CX 360 degrees.

Looking at your tiled floor in the posted images reminded me of experimenting with a downward-looking camera, with the idea of using the floor patterns to determine position and/or direction, and the amount moved or rotated, by comparing images taken between moves.

I am impressed with what you have done so far.

 


   
Inq reacted
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @robotbuilder

5Mpixel depth image sensor

That was something I would not have predicted decades ago when thinking about how to extract 3d information from two stereo images!

It's 0.5 Mpixel, but I still totally agree!  Although I can handle the programming/math for stereoscopic depth, the idea of taking images from two cameras and trying to identify a pixel that represents the same "THING" in both images to feed the geometry... just seems unimaginably complex.  Doing it quickly enough with even a 15 fps feed... seems like wishful thinking.  Doing it on a microcontroller sounds like black magic.  And here... a component gives me the answers via I2C without so much as a blink at 15 fps, leaving the micro to do other things.  Way cool! 😎 
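For context, once a matched pixel pair is found, the depth itself falls out of one formula; the hard part described above is the matching. A minimal sketch (the focal length, baseline, and disparity numbers below are illustrative, not from any specific camera):

```python
# Depth from stereo disparity: Z = f * B / d
# f: focal length in pixels, B: baseline between the cameras in meters,
# d: disparity in pixels for a feature matched in both images.
def stereo_depth(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("matched feature must show positive disparity")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 6 cm baseline, 20 px disparity
print(stereo_depth(700, 0.06, 20))  # about 2.1 (meters)
```

The formula is trivial; finding which pixel in the right image corresponds to which pixel in the left image, 15 times a second, is where the real compute cost lives.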

Posted by: @robotbuilder

Something I haven't yet done but I see you seem to have it all figured out.

Good god!  I sure hope I don't sound like some stuffed shirt with all the answers.  I just feel (emphasis on feel versus know) like I have a path to get there.  I think I have the parts.  I think I have the tools and I know I have the help.  You guys have been instrumental in focusing attention on blind spots.

Posted by: @robotbuilder

Something you could easily experiment with by rotating the VL53L5CX 360 degrees.

That was kind of why I wanted to use a stepper.  The servo is only good for +/- 90°... but supposedly I can also turn the robot on a dime.  However, my first blush with the ToF... suggested range limitations.  I don't recall any distances greater than 3 meters, which means I'll have to walk the robot around even in small rooms.  

That is where I have a big fuzzy unknown!  I don't have a clear course of action for stitching the data together in a room, much less the whole house.  I can get 64 points from the sensor.  I can do the geometry to find those Cartesian coordinates in a global coordinate space.  I can even get the offsets using the stepper motor counts and direction.  What I'm struggling with is what to do for tire slippage!  I've seen my old dumb robots hit a tile seam, or a hardwood-to-carpet transition, and veer ten degrees or more.  I'm just not seeing even an approach to re-orient the robot or update the offsets.
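The geometry step mentioned above (turning a zone's range reading plus the robot's pose into a global Cartesian point) can be sketched as follows; the pose tuple and per-zone bearing are hypothetical inputs for illustration, not anything from the actual firmware:

```python
import math

# Project one ToF zone reading into a global x,y frame.
# pose: (x, y, heading) of the robot in meters/meters/radians,
# zone_bearing: angle of the zone relative to the sensor axis (radians),
# dist: range reported by that zone (meters).
def zone_to_global(pose, zone_bearing, dist):
    x, y, heading = pose
    angle = heading + zone_bearing
    return (x + dist * math.cos(angle), y + dist * math.sin(angle))

# Robot at the origin facing +x, a zone 45 degrees left reading 2 m:
px, py = zone_to_global((0.0, 0.0, 0.0), math.radians(45), 2.0)
print(round(px, 3), round(py, 3))  # 1.414 1.414
```

The slippage problem is exactly that `pose` drifts: if the heading estimate is off by ten degrees, every projected point lands in the wrong place.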

VBR,

Inq



   
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 7012
 

@inq There is a good chance I am confused, but what about 'computer vision' systems? The RasPi is apparently quite good at it and IIRC has stereo cameras. 

First computer 1959. Retired from my own computer company 2004.
Hardware - Expert in 1401 and 360, fairly knowledgeable in PCs plus numerous MPUs and MCUs
Major Languages - Machine language, 360 Macro Assembler, Intel Assembler, PL/I and PL1, Pascal, Basic, C plus numerous job control and scripting languages.
Sure you can learn to be a programmer, it will take the same amount of time for me to learn to be a Doctor.


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @zander

@inq There is a good chance I am confused, but what about 'computer vision' systems? The RasPi is apparently quite good at it and IIRC has stereo cameras. 

Nah @zander.  It's more likely my ignorance on the subject. 

  • I've made a web enabled surveillance camera using a RasPi Zero W and their little button camera.
  • I have several ESP32-CAM's aging to a fine vintage and you've enlightened me on those on the forum.  So... I know they can stream video just fine. 
  • I've heard of OpenCV.  From what I gather, it can do edge detection and such.  I don't know at what rate it can do that.
  • The next thing would be to add a second camera to get the geometry for stereo vision.  But can a second instance of OpenCV successfully gather edge data, with both instances competing for CPU time?
  • AND then can the RasPi compare the two images looking for the same point in each image... then do all the stereo trigonometry? 
  • And all that is just to find one point.  What happens if you need to find the range on ten items in those stereoscopic frames? 
  • The little ToF sensor can do 64 items at 15Hz.  It can't tell what those items are or if they're one item like the camera can.  IOW... it's hard to win a gunfight with a knife.

Are you aware of something I should look at?  Is there some hardware gizmo that has the two cameras and does all that software magic?

Don't get me wrong.  I can still see adding a camera at some point.  The ToF sensor can see two beer bottles in the fridge, but it can't read the labels.  Once the camera does recognition and picks the right bottle... it's back to the ToF sensor to guide the hand to pick it up.

VBR,

Inq



   
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 7012
 

@inq Have you seen this, is it any help? https://hackaday.com/2019/08/03/high-performance-stereo-computer-vision-for-the-raspberry-pi/



   
Inq reacted
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 
@inq
Posted by: @robotbuilder

Something I haven't yet done but I see you seem to have it all figured out.

Good god!  I sure hope I don't sound like some stuffed shirt with all the answers. 

🙂

No I didn't think you had all the answers.  I was making reference to your video where you are transmitting and displaying data from the robot to a PC.  I can't remember if you wrote what software you are using to do that.  If your robot has a little camera maybe you can transmit images that can also be used by whatever software language you are using.

I have played with OpenCV to grab images from a webcam, which I have been processing with my own code using FreeBASIC and the Processing language.  Actually, with FreeBASIC I use a little .dll to grab images from one or more webcams, but it only works on Windows, not Linux.

Have you seen the commercial RoboRealm software?
https://www.roborealm.com/index.php

 


   
Inq reacted
byron
(@byron)
No Title
Joined: 5 years ago
Posts: 1122
 
Posted by: @inq

Are you aware of something I should look at?  Is there some hardware gizmo that has the two cameras and does all that software magic?

 

Indoor positioning systems are the answer but your pockets need to be deep.

https://medium.com/@newforestberlin/precise-realtime-indoor-localization-with-raspberry-pi-and-ultra-wideband-technology-decawave-191e4e2daa8c

OpenCV can recognise coloured blobs, so a large blob on your bot could be seen by several cameras and the bot's position could be calculated by triangulation.  Conversely, I guess a camera on the bot could see coloured blobs placed around the room.  But I don't suppose festooning your house with a bunch of blobs or cameras in each room will go down well with SWIMBO, so OpenCV is probably a non-starter.  

A search of indoor positioning systems may bring up some more expensive ideas 😎.  

Whilst you may be thinking of getting your bot to roam about to build up a 'map' how about you manually build a map for the bot to reference, and then concentrate on knowing exactly where your bot is currently positioned, so it can calculate or receive instructions as to what route to use to proceed to a destination.  I rather like the idea of utilising a powerful central computer that directs the bot, or indeed bots to proceed to various places about the house and don't see any need to constrain the bot navigation to just being an 'on bot' process.

 


   
Inq reacted
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

@robotbuilder, @zander - Thank you for the vision links.  I've bookmarked them and skimmed through them.  I follow the progress of self-driving cars and recognize the staggering complexity of the problem.  They can throw millions of dollars at the research, even at a single research item... with the eventual expectation that a system can be made for thousands.  Both of these links show pushing hardware (still pretty expensive hardware) to its limits.  It's good to see that they are tackling a million-dollar problem at a RasPi/PC price point.

Now it is my turn to ask the question, "Am I being thick headed and missing your all's point???"  I'm not quite ready to tackle the Tesla humanoid robot promised by Musk... (at least not yet 🤣)  I'm thinking this ToF sensor gives me most of the information those systems do.  Are my expectations too optimistic?  In the grand scheme of things, mapping is trivial compared to self-driving cars and at this stage, it's as big a bite as I'm able to bite off.

VBR,

Inq



   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 

@inq 

I don't have a clear course of action about stitching the data together in a room, much less the whole house. I can get 64 points from the sensor. I can do the Geometry to find those Cartesian coordinates in a global coordinate space. I can even get the offsets using the stepper motor counts and direction. What I'm struggling with is what to do for tire slippage!

If the TOF is continually updating its position relative to the walls then slippage will be detected along with the new position and orientation.

You do a 360-degree scan and end up with a list of distances per degree. You can plot these distances around an x,y axis and you will have the shape of the room. It will probably be rotated, but that doesn't matter; in fact the degree of rotation is the direction your robot is moving.
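That plot is a direct polar-to-Cartesian conversion; a minimal sketch, assuming a 360-entry list with `None` marking bearings that returned nothing:

```python
import math

# Convert a 360-entry scan (one distance per degree, None = no return)
# into x,y outline points around the scanner's position.
def scan_to_points(distances):
    points = []
    for deg, dist in enumerate(distances):
        if dist is None:
            continue
        rad = math.radians(deg)
        points.append((dist * math.cos(rad), dist * math.sin(rad)))
    return points

# Toy scan: 2 m returns everywhere except a 90-degree open doorway
scan = [2.0] * 270 + [None] * 90
outline = scan_to_points(scan)
print(len(outline))  # 270
```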

I'm thinking this ToF sensor gives me most of the information those systems do. Are my expectations too optimistic? In the grand scheme of things, mapping is trivial compared to self-driving cars and at this stage, it's as big a bite as I'm able to bite off.

Without seeing the numbers returned by the ToF (which I don't have to play with) I can't answer that question, but the numbers returned by a LIDAR work fine, as you can see from the room-mapping examples on the internet.

When or if you get a camera I can talk about using that to identify a room and locate the orientation and position of the robot using natural features.

 

 


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @byron

But I don't suppose festooning your house with a bunch of blobs or cameras in each room will go down well with SWIMBO so OpenCV is probably a non-starter.

🤣 🤣 - "She who must be obeyed!"  Never heard this - what a riot!  

I was about to do a post defining the problem I see coming and that I don't have a good answer I like.  I know @robotbuilder knows this problem...

Posted by: @robotbuilder

And how well does it navigate the house? I have been experimenting with dead reckoning using the encoders but accumulated errors means you need some way to reset to an exact position and orientation.

excerpt from October 2019:  https://forum.dronebotworkshop.com/user-robot-projects/other-robot-projects/paged/2/#post-5031

ToF sensor OR the vision systems above still have to solve this problem.  

Posted by: @byron

Indoor positioning systems are the answer but your pockets need to be deep.

https://medium.com/@newforestberlin/precise-realtime-indoor-localization-with-raspberry-pi-and-ultra-wideband-technology-decawave-191e4e2daa8c

I think this link is discussing this issue.  I wanted to fire off my thanks in case I lose Internet for the day shortly.  I will be studying it and trying to glean a cost effective solution... or @robotbuilder, have you come up with some solutions to the problem of accumulated movement error using encoders (or in my case stepper motor steps)???

VBR,

Inq



   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @robotbuilder

If the TOF is continually updating its position relative to the walls then slippage will be detected along with the new position and orientation.

I was thinking (before):

  1. Start out scanning a 360
  2. In many rooms there will be some sector that has no returns in the 4m range.  
  3. Move to the most open area
  4. Take another 360.

This was where I saw the problem: I could not be assured that slippage error in my movement would be accounted for in the calculations.

I now see your point... If I find a wall, and move along it (keeping the scanner in the same orientation) the next "frame" at 15 Hz will still see the same wall with only a couple of inches of new wall added at the extreme.

I like this approach... it should allow me to avoid slippage and Mathematical round-off error by resetting at each frame.  I think this will work!
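That reset-at-each-frame idea can be tested with a crude scan matcher: find the circular shift that best aligns two consecutive 360-degree scans, which gives the heading drift between frames. A brute-force toy sketch (real scan matching, e.g. ICP, also recovers translation and handles noise):

```python
import math

# Estimate heading drift between two 360-entry scans (distance per degree)
# by brute-force search for the circular shift with the smallest error.
def heading_drift(prev_scan, cur_scan):
    n = len(prev_scan)
    best_shift, best_err = 0, float("inf")
    for shift in range(n):
        err = sum((prev_scan[i] - cur_scan[(i + shift) % n]) ** 2
                  for i in range(n))
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift  # degrees the robot rotated between frames

# Synthetic room scan, then the same scan rotated 10 degrees:
a = [3.0 + math.sin(math.radians(i)) for i in range(360)]
b = a[-10:] + a[:-10]
print(heading_drift(a, b))  # 10
```

At 15 Hz the drift per frame is tiny, which is what makes the frame-to-frame reset plausible: each correction is small, so the brute-force search could be limited to a few degrees either side of zero.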

Got to get Inqling Jr motoring!

VBR,

Inq



   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @byron

Whilst you may be thinking of getting your bot to roam about to build up a 'map' how about you manually build a map for the bot to reference, and then concentrate on knowing exactly where your bot is currently positioned, so it can calculate or receive instructions as to what route to use to proceed to a destination.  I rather like the idea of utilising a powerful central computer that directs the bot, or indeed bots to proceed to various places about the house and don't see any need to constrain the bot navigation to just being an 'on bot' process.

Although it's good to hear that an external compute system would not be considered an overindulgence, I really would like my goal not to require one.  I've always been disappointed seeing umbilical cords connected to robots (usually for power requirements), and although not visible, WiFi to a robot with most of the brainpower off-robot is still an umbilical cord in my eyes.  I may be overly optimistic about my chances, but it's a goal.

I just like the idea of it being self-contained.  I want to be able to plop it down at a completely new location without beacons, guides, or any intervention for its "needs".  I want to just let it roam and map a building... say, even a warehouse.  I would not want to have to follow it merely to keep the WiFi in range.  That's the goal I think even this Inqling Jr can achieve.  

Beer fetching is for a future robot. 😉 

VBR,

Inq



   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 

@inq 

Although it's good to hear that an external compute system would not be considered an overindulgence, I really would like my goal not to require one.

And the end goal is that the external system could be disconnected. These are not finished products. With the Arduino-based robot I had to pick it up, connect it to the PC, download another modification, then place it down again and press start. See what happens, then repeat the procedure again and again and again. A wireless connection would have saved lots of time. Same with developing higher-level programs.

In fact my robot has an onboard PC; I just have to take remote control of it using something like TeamViewer, but I haven't got around to doing it yet.

 


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @robotbuilder

In fact my robot has an onboard PC; I just have to take remote control of it using something like TeamViewer, but I haven't got around to doing it yet.

I just realized your avatar is a robot.  I always thought it was a traveling workstation.  How cool is that?  Can you point me to a link about it?  Like how powerful a laptop, battery power, motors, etc.?

BTW, I've started using VNC Viewer for headless access on the Raspberry Pis (since it's built in) and have clients on other RasPis and on Windows.  It works very well, much better than Windows Remote Desktop and/or XRDP.  If you have a monitor on the remote machine, it stays active while being remote-controlled... so you can use it as a teaching tool for someone at the remote end.  

VBR,

Inq



   