
attempting to use sonar to make maps

TFMcCarthy
(@tfmccarthy)
Member
Joined: 2 years ago
Posts: 444
 

Posted by: @robotbuilder

... create a pretty point cloud to admire then it is indeed slow but so what? ... Researchers might ... create a detailed 3d point cloud map that they can then study and explore later.

This comes close to answering the question but somehow never does. Game developers make these maps in order to navigate a player in the space. MicroMouse developers make similar maps in order to navigate through a maze and for avoidance. The Roomba sweeper is probably the closest example.

For simple cases, like a MicroMouse maze, where there are no obstacles in the path and everything is at a uniform height, a single-level scan with a restricted field of view is sufficient and can be done in near real time. The obstacle-avoidance sonar scanner cars already do this. Unfortunately, these cars use just the obstacle detection and discard the distance information. When an obstacle is detected, they don't move right or left based upon the distance to the object; they back up, move in a random direction, and try again. They have problems going under things. The line-follower car does navigate using sensor feedback but has no memory. (I'm reminded of the game Rogue, which has some interesting variations on this type of exploration and navigation.)
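That bump-and-turn behaviour can be sketched in a few lines. This is a hypothetical controller for illustration, not any particular vacuum's firmware:

```python
import random

# Sketch of the "bump and turn" behaviour described above: no map, and the
# distance reading is ignored; on any detection, reverse and pick a random
# new heading. All names and units here are made up for illustration.
def bump_and_turn_step(obstacle_detected, heading_deg):
    """Return (forward_speed, new_heading_deg). Speed is in arbitrary units."""
    if obstacle_detected:
        # Back up and spin somewhere between 90 and 270 degrees away.
        return -1.0, (heading_deg + random.uniform(90, 270)) % 360
    return 1.0, heading_deg  # nothing detected: keep going straight
```

Because the turn is random, the robot eventually covers the space, but it cannot steer around an obstacle it has already met.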

With enough memory and a good sensor, you can develop the single-level map and navigate it with relative ease. E.g., a Roomba first learns a room layout via obstacle avoidance and a search algorithm, then quickly traverses the remembered room using simple collision detection. Of course, traversal speed is the point of the MicroMouse challenge.
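A minimal sketch of that learn-then-traverse idea, assuming a simple grid of cells and a bump sensor (the class and method names are made up for illustration):

```python
# Single-level occupancy grid: record obstacles from bump/sensor events,
# then plan a route through the remembered free space.
from collections import deque

class OccupancyGrid:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.blocked = set()  # cells where a hit was recorded during learning

    def record_hit(self, x, y):
        """Mark a cell as an obstacle after a collision or sensor return."""
        self.blocked.add((x, y))

    def shortest_path(self, start, goal):
        """Breadth-first search over the remembered free cells."""
        frontier = deque([(start, [start])])
        seen = {start}
        while frontier:
            (x, y), path = frontier.popleft()
            if (x, y) == goal:
                return path
            for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = cell
                if (0 <= nx < self.width and 0 <= ny < self.height
                        and cell not in self.blocked and cell not in seen):
                    seen.add(cell)
                    frontier.append((cell, path + [cell]))
        return None  # goal unreachable with the current map

grid = OccupancyGrid(5, 5)
for y in range(4):          # learned wall at x=2 with a gap at the top
    grid.record_hit(2, y)
path = grid.shortest_path((0, 0), (4, 0))
```

On the second pass the robot follows `path` directly instead of rediscovering the wall by collision, which is exactly the speed-up the MicroMouse run exploits.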

But this isn't the single-level map problem. This is an arbitrary map, with varying obstacle shapes and heights. A single-level scan is insufficient for this. An obvious example:

[image: horror3]

The single level map on the left doesn't notice the low bridge and speed bump that appear on the right.

So no, this isn't about producing a fancy 3D map. It's a very practical problem in 3D navigation.
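One way to capture what the single-level scan misses is a 2.5D map that stores a floor height and an overhead height per cell. A hedged sketch of the clearance check, with made-up heights and thresholds:

```python
# Height-aware (2.5D) cell check: a single scan height misses both a
# low bridge (overhead too low) and a speed bump (floor rise too high).
# All constants below are illustrative assumptions, not real robot specs.
ROBOT_HEIGHT = 0.30   # metres of clearance the robot needs to pass under
MAX_STEP     = 0.02   # metres of floor rise the wheels can climb

def traversable(cell, current_floor):
    """cell = (floor_height, overhead_height); overhead None means open above."""
    floor, overhead = cell
    if floor - current_floor > MAX_STEP:
        return False  # speed bump / ledge too tall to climb
    if overhead is not None and overhead - floor < ROBOT_HEIGHT:
        return False  # low bridge: not enough room underneath
    return True

open_floor = (0.00, None)   # flat, nothing overhead
speed_bump = (0.06, None)   # 6 cm bump in the floor
low_bridge = (0.00, 0.25)   # only 25 cm of clearance under the bridge
```

A single-level scan at, say, 15 cm would report all three cells as clear; the 2.5D check rejects the last two.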


The one who has the most fun, wins!


   
robotBuilder
(@robotbuilder)
Member
Joined: 7 years ago
Posts: 2420
Topic starter  

@tfmccarthy

The single level map on the left doesn't notice the low bridge and speed bump that appear on the right.

If the lidar says 30 cm but the bumper bar says HIT, then the bumper bar decides that it is an obstacle; there is no need to record the different levels.

The single level map on the left doesn't notice the low bridge and speed bump that appear on the right.

The vacuum robots I have seen have a 60% wrap-around bumper bar at the front, which will hit any obstacle up to its top. If anything is higher than the bumper bar and higher than the lidar beam, the robot simply goes under, which is neat for collecting the dust bunnies under beds. The lidar is its own obstacle detector. The lidar and the bumper bar have it all covered. If either the bumper or the lidar registers a hit, then there is an obstacle at that position.
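The "lidar or bumper" rule described here reduces to a single predicate. A minimal sketch, where the function name and the range threshold are assumptions for illustration:

```python
# Fuse the two detectors into one obstacle flag: a position is recorded as
# an obstacle if the bumper physically touches something, or the lidar
# sees a return closer than some threshold. Names and threshold are made up.
def is_obstacle(lidar_range_m, bumper_hit, threshold_m=0.05):
    """lidar_range_m may be None when the lidar has no return at all."""
    lidar_sees = lidar_range_m is not None and lidar_range_m < threshold_m
    return bumper_hit or lidar_sees
```

Either sensor alone marks the cell, which is why a low obstacle the lidar beam passes over is still caught by the bumper.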

 



   
TFMcCarthy
(@tfmccarthy)
Member
Joined: 2 years ago
Posts: 444
 

Posted by: @robotbuilder

If the lidar says 30 cm but the bumper bar says HIT, then the bumper bar decides that it is an obstacle; there is no need to record the different levels.

If this is how you want to navigate, you don't need a camera or a LIDAR.

I'd like to avoid running into things in order to get about.

Posted by: @robotbuilder

The lidar is its own obstacle detector

It's not really avoidance, is it? More like, "Oops! Another obstacle! Also, need a new LIDAR."

We're not working off the same script.


The one who has the most fun, wins!


   
robotBuilder
(@robotbuilder)
Member
Joined: 7 years ago
Posts: 2420
Topic starter  

@tfmccarthy 

We're not working off the same script.

Apparently not.

The lidar doesn't hit anything, so you don't need a new lidar; it "sees" the obstacle at its own level, but at the lower level it does not. I am talking about the robot vacuum cleaner, not a scanning 3D lidar, as that is what you drew in your picture above. Another obstacle detector that can go in the bumper is an IR proximity sensor, to avoid actual physical contact.

Robot touch sensors haven't come anywhere near the human level. Our ability to manipulate objects is mainly a function of touch and force feedback. Vision can locate an object, but trying to manipulate it by vision alone is nearly impossible, which I believe is the problem fruit-picking robots have in trying to position the grabber visually. Touch is fast, and the positioning is exact when contact is made.

If this is how you want to navigate, you don't need a camera or a LIDAR.

It wasn't an all or nothing scenario.

I'd like to avoid running into things in order to get about.

I wasn't saying that is how you have to navigate; it is a backup system, although it is exactly how the first vacuum robots "navigated", along with odometry. Hitting a wall was a very exact way to reset their odometry, and it is how you would operate if you were blind.

You can also use touch to recognize objects and know their orientation, often just by touching part of the object.

You may not be aware of it, but you are continually manipulating and recognizing objects by touch, and you would fail without it, or at least be as awkward as some of the attempts you will see from fruit-picking robots.

By the way, I wasn't deriding the slow 3D sonar scanner as just making "pretty pictures". I think they look really great, which is why I posted the example point-cloud images I generated of planets.

 



   