# Inqling Junior - Robot Mapping, Vision, Autonomy

240 Posts
10 Users
87 Likes
12.7 K Views
(@inq)
Noble Member
Joined: 1 year ago
Posts: 1616
Topic starter
Posted by: @robotbuilder

It will be interesting to see what kind of data you get from the TOF.

In the mean time I have been reading up about SLAM algorithms and wonder what kind of computing power must be in a robot vacuum cleaner to perform the task. I have updated the simulation to test out different algorithms to navigate and map a house. Although the "lidar" data is clean at the moment I will add realistic noise to test those algorithms. Also the simulated robot has perfect odometry which means I will have to add error to that as well to make sure the algorithms can work even if the robot is not exactly where it should be or even if it is moved physically by a human.

In the snapshot below, the actual simulated world is on the left and the "lidar" data seen by the robot is on the right. When the robot turns, the data on the right will rotate with it. The simulated lidar code I posted previously returns the x,y position of the hit point directly, when in fact the only data you really have is the direction and distance of the hit point, so the x,y position has to be calculated before it can be plotted.

To enlarge an image, right click image and Open link in new window.
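That direction-and-distance to x,y conversion (plus the range and odometry noise I mention adding) might be sketched like this in Python. The noise levels here are made-up figures, not measurements:

```python
import math
import random

def lidar_hit_to_xy(robot_x, robot_y, robot_heading, beam_angle, distance,
                    range_noise_sd=0.0, angle_noise_sd=0.0):
    """Convert one simulated lidar return (bearing + distance) into a world
    x,y point, optionally adding Gaussian noise to both range and bearing."""
    d = distance + random.gauss(0.0, range_noise_sd)
    a = robot_heading + beam_angle + random.gauss(0.0, angle_noise_sd)
    return robot_x + d * math.cos(a), robot_y + d * math.sin(a)

# A full sweep at a consistent 1-degree angular increment, with a little
# range noise (2 cm) and bearing noise (0.5 degrees) - made-up figures:
scan = [lidar_hit_to_xy(0.0, 0.0, 0.0, math.radians(i), 2.0,
                        range_noise_sd=0.02,
                        angle_noise_sd=math.radians(0.5))
        for i in range(360)]
```

With the noise parameters left at zero the same function gives the clean data the simulation produces now, so the navigation algorithms can be tested both ways.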

Your post is requiring a LOT more thought... that I haven't gotten to yet.  Let me see if I understand what you have here...

• First off... is simBot8 a program you've already written?!
• The left side is a hand-generated set of lines.
• I see the robot's position.
• The right side is computer-generated: ray traces being sent out to intersect walls.
• It even looks like the data points have a consistent angular increment.
• As you described earlier with your ray-trace game.

If I'm reading the above right... this is 90% of the problem.

You're making me have to do a rethink.  My 30-minute tail-dragger design, versus another week of self-balancing (might work) logic, may need to get back-burner'd.

I'll be glad to share my bone with you.

Posted by: @robotbuilder

It will be interesting to see what kind of data you get from the TOF.

Let me see about getting some data out of this sensor.  Would this be of interest to you?

• Multiple samples perpendicular to a plain white wall at different ranges - gives baseline sensitivity and noise content in best-case conditions.
• Some oblique to wall.
• Some with furniture and obstructions.  With photos to correlate.

Let me know if you're interested.

VBR,

Inq

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide

(@byron)
Noble Member
Joined: 4 years ago
Posts: 1144

Posted by: @inq

I finally thought...  I'll be able to add the orientation data to the Math and just let it shotgun the whole area.    Make sense?

For me... this is the good part.  The previous 8 pages, I file under the "No pain... no gain!" category.

I guess it makes sense.  I'm good for a bit of trigonometry when I can look up a known formula, like compass bearing calcs from GPS data etc., but having never used advanced maths since I left school (and I can't really remember doing much back then either), the sort of maths required to adjust your bot's scan data is beyond me.

I do have some doubts about the effectiveness, in terms of the resolution you will get from your sensors, for mapping as opposed to the simpler obstacle-avoidance data.  But I presume (as the presumer has been slated as adequate 😎 ) your early experimental data will settle this.

So for me it's Inq's pain is my gain 😀 (maybe), and thanks for putting us all in the front seat of your robot arena.

(@inq)
Noble Member
Joined: 1 year ago
Posts: 1616
Topic starter

@byron - That I have an audience of what... maybe two people interested.

• \$3 ESP8266
• \$7 stepper motor
• \$1.50 driver
• \$2 plastic

... knowing someone else in the whole world is interested... priceless!

Thanks!

Inq


(@robotbuilder)
Noble Member
Joined: 4 years ago
Posts: 1987

Honestly, I never really appreciated what this VL53L5CX was!! I was confusing it with the TF Mini LIDAR. What I was doing really isn't relevant to its use for mapping or navigating a room. I don't know enough about it yet to make some kind of simulated version for a simulated robot base.

(@robotbuilder)
Noble Member
Joined: 4 years ago
Posts: 1987

Posted by: @inq

@byron - That I have an audience of what... maybe two people interested.

When you have this sensor and robot up and running and on YouTube, I'm sure you will get some interest.

Have you any sources showing its use for mapping and navigation?

It appears to be a very low-resolution camera that returns a depth map out to 4 m.
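To simulate it, an 8x8 depth frame could be turned into 3D points something like this. The 45-degree square field of view is an assumption on my part (the datasheet would need checking), and it treats each zone's reading as a perpendicular depth rather than a radial range:

```python
import math

FOV_DEG = 45.0   # assumed square field of view - check the ST datasheet
N = 8            # the sensor's 8x8 zone grid

def zone_to_point(row, col, dist_mm):
    """Turn one zone's distance reading into an (x, y, z) point in the
    sensor frame, treating each zone centre as a fixed direction."""
    step = math.radians(FOV_DEG) / N
    ax = (col - (N - 1) / 2) * step   # horizontal angle off the optical axis
    ay = (row - (N - 1) / 2) * step   # vertical angle off the optical axis
    d = dist_mm / 1000.0              # millimetres -> metres
    return (d * math.tan(ax), d * math.tan(ay), d)

# A flat wall 2 m away would come back as 64 readings near 2000 mm:
frame = [[2000] * N for _ in range(N)]
cloud = [zone_to_point(r, c, frame[r][c])
         for r in range(N) for c in range(N)]
```

A simulated robot base could then feed such frames to the same mapping code that the real sensor would drive.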

(@robotbuilder)
Noble Member
Joined: 4 years ago
Posts: 1987

Shows I didn't really read your post very carefully. I went back to re-read the post with the video of your experiments. I didn't really understand at the time what you were showing.  I will give it some more thought.


(@robotbuilder)
Noble Member
Joined: 4 years ago
Posts: 1987

Posted by: @inq
Posted by: @robotbuilder

It will be interesting to see what kind of data you get from the TOF.

Let me see about getting some data out of this sensor.  Would this be of interest to you?

• Multiple samples perpendicular to a plain white wall at different ranges - gives baseline sensitivity and noise content in best-case conditions.
• Some oblique to wall.
• Some with furniture and obstructions.  With photos to correlate.

Let me know if you're interested.

VBR,

Inq

Difficult to know what sort of data I would really need.

Although it says the VL53L5CX can be used for things like scene understanding, complex scene analysis and 3D room mapping, I can't find any examples of that use on the internet.

If you moved your hand closer, would you get more squares covering your hand (higher resolution), and if you moved it away from the sensor, would you end up with only one square covering the hand?
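Doing some rough arithmetic on that question (assuming a 45-degree FoV split into 8 zones, which is a guess on my part):

```python
import math

FOV_DEG, N = 45.0, 8        # assumed square FoV split into an 8x8 grid
ZONE_DEG = FOV_DEG / N      # roughly 5.6 degrees per zone

def zone_footprint_m(dist_m):
    """Approximate side length (m) of the patch one zone sees at dist_m."""
    return 2.0 * dist_m * math.tan(math.radians(ZONE_DEG / 2.0))

# How many zones a ~10 cm hand would span at each distance:
for d in (0.2, 1.0, 4.0):
    print(f"{d:>3} m: {zone_footprint_m(d)*100:4.1f} cm per zone, "
          f"hand covers ~{0.10 / zone_footprint_m(d):.1f} zones across")
```

If those assumptions are about right, a hand-sized object would span several zones up close (about 2 cm per zone at 0.2 m), roughly one zone at 1 m, and only a fraction of a zone at 4 m.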

(@inq)
Noble Member
Joined: 1 year ago
Posts: 1616
Topic starter
Posted by: @robotbuilder

Shows I didn't really read your post very carefully. I went back to look again to re-read the post with your video of your experiments. I didn't really understand at the time what you were showing.  I will give it some more thought.

In that very limited test, I came away with an impression.  I don't have any experience with Lidar, so I can't compare the technologies, but from your simulations above, I get the impression that these rotating Lidar units use a laser beam like a laser pointer...

I was originally expecting this VL53L5CX unit to have 64 beams, so that if I moved it close to a wall, I'd see the grid's 64 points...

One of my phone cameras sees into the infrared.  I used it on the https://inqonthat.com/inq-speed-racer/ project and could see the infrared emitters.  Looking directly at the VL53L5CX, I can see the same purplish hue, as it is the same frequency as the race car emitters.

But this thing, I think, works a little differently.  I don't think the infrared is focused.  I think it just blasts out light pulses over the whole FOV area, and the ToF receiver is what segments out the 64 regions it is looking at.  That would explain the limited range of 4 meters... the return gets too dim by the square of the distance.

It might also explain some of the other data that comes back that I mentioned in the test post.  I'm thinking that a partial blockage of a square returns both the ToF distance and a signal strength that can estimate the percentage of the square's area being reflected back.
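If that's right, the strength reading could in principle be turned into a rough "how much of this square is filled" estimate.  A sketch of the idea, with made-up units and an assumed inverse-square falloff (none of this is from the datasheet):

```python
def coverage_fraction(signal, wall_signal, dist_m, wall_dist_m):
    """Rough guess at how much of a zone's area a target fills.

    Assumes the per-zone signal strength scales with reflecting area and
    falls off with the square of distance.  'wall_signal' is a calibration
    reading from a full zone (a plain wall) at 'wall_dist_m'.  This is my
    guess at how the returned strength could be interpreted, not a
    documented VL53L5CX formula, and it ignores reflectivity differences.
    """
    expected_full = wall_signal * (wall_dist_m / dist_m) ** 2
    return min(1.0, signal / expected_full)

# Calibrated on a wall at 1 m reading 1000 (arbitrary units), a target at
# 0.5 m returning 2000 would look like a half-covered zone:
half = coverage_fraction(2000, 1000, 0.5, 1.0)
```

The perpendicular-wall samples in my test list above would supply exactly the calibration reading this needs.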

Anyway... I think, I'll pause the high-wire balancing act and test this more thoroughly and get you some data while I'm at it.  I'll also read the links in your next post.  😉

VBR,

Inq


(@inq)
Noble Member
Joined: 1 year ago
Posts: 1616
Topic starter

@robotbuilder - I've looked through some of those links on the VL53L5CX sensor.  It looks like there are many things that can be done.  The gesture aspects might be something for later on.  I liked the business-card-following robot... that is on the shorter-term list of things after the mapping.  I think having it follow me will be a hoot when I take it to schools for class demonstrations, trying to get more kids involved in my area.

VBR,

Inq


Noble Member
Joined: 4 years ago
Posts: 1506

@robotbuilder

Posted by: @robotbuilder

Don't get me started on the economic politics of today and our foolish reliance on imports instead of being self-sufficient, even though that might be more costly in the short term!

As for computing power, I have the laptop to play with until an RPi becomes affordable or even available.

Mate... our \$dollar is currently sitting at ~67 cents USD ... we have been slammed for far too long, and it's time we all revolt against our pathetic globalist NON LEADERS - throw the #\$%^& out!

(@zander)
Illustrious Member
Joined: 3 years ago
Posts: 5520

@robotbuilder  I can't find any examples of that use on the internet.

Is the example 3D depth map not what you are looking for?

Arduino says and I agree, in general, the const keyword is preferred for defining constants and should be used instead of #define
"Never wrestle with a pig....the pig loves it and you end up covered in mud..." anon
My experience hours are >75,000 and I stopped counting in 2004.
Major Languages - 360 Macro Assembler, Intel Assembler, PLI/1, Pascal, C plus numerous job control and scripting

(@byron)
Noble Member
Joined: 4 years ago
Posts: 1144

Posted by: @inq

I liked the business card following robot... and is on the shorter term list of things after the mapping.

Another good example of a robot following something it recognises is the DroneBot video on the Pixy2.  This is based on camera-image technology rather than your sensor, but it's another cool way to do a class demo to interest kids.

I especially like this demo:

(@robotbuilder)
Noble Member
Joined: 4 years ago
Posts: 1987

Posted by: @zander

@robotbuilder  I can't find any examples of that use on the internet.

Is the example 3D depth map not what you are looking for?

Which one, where? All I see are the 8x8 depth maps.

(@zander)
Illustrious Member
Joined: 3 years ago
Posts: 5520

@robotbuilder I don't know if it's 8x8 or what, I just saw 3D and thought you might be interested.


Page 9 / 16