
Time of Flight (ToF) VL53L5CX - 8x8 pixel sensor

251 Posts
5 Users
74 Likes
4,928 Views
Inq
(@inq)
Noble Member
Joined: 8 months ago
Posts: 959
Topic starter  
I am using this ToF sensor for my Inqling Jr. robot's main senses:  vision, mapping, obstacle avoidance, and I think it may even handle drop-offs (stairs).  I figured I'd dump this information in a separate topic.  I hope you find it useful.
 
 
Here are some references I'm using:
 
I've got the VL53L5CX on a breadboard and have done some tests.  Some things I've learned:  
  1. Unlike the rotating Lidar units (and I'm assuming the non-rotating ones also), the laser on this unit is not focused.  When I think of a laser, I typically think of the ones you see in experiments bouncing around mirrors and prisms, or a laser pointer.  This is not that.  IOW, it does not have 64 beams going out in an 8x8 pattern.  It just blankets the rectangular prism shape going out.  This also explains the range limitations (more on that later).
  2. You may not be aware that most cell phone cameras can see into the infrared.  One of my older ones can, but strangely, my new phone cannot.  Here you are looking directly at the sensor.  The small purple light in the center is the infrared laser.  The bright red light, and the red cast over the whole picture, is not the laser.  It is simply a red LED on the back of the breakout board.

    ToF1
  3. It has two resolution/rate modes.  It can do an 8x8 grid at 15 Hz or a 4x4 grid at 60 Hz.  For all the following tests, I am using the 8x8 mode.
    ToF2

     

  4. I almost never saw the 4 meter range.  It only seemed to reach 4 meters in a very dark room pointing at a white wall.  At any normal interior light level, it seemed to top out at around 3 meters.
  5. It seems to work fine in well-lit rooms as long as the lighting is LED or fluorescent.  Halogen and incandescent lighting produce an odd behavior.  Waving something in its FOV shows the short-range numbers just fine.  However, on removal, the range does not go back out to the background distance.  In my case, I'm pointing at the ceiling, which is a little over 2 meters above my workbench.  
  6. I have not tested it outdoors yet, but I suspect sunlight will give it trouble as well.
  7. Even though it says it can do 15 Hz, and on average it can do slightly better than that, it fairly commonly returns "not ready" when polled on a 67 ms timer.  Even with a 100 ms timer, it isn't ready every time.  The sensor and breakout board support triggering an interrupt to let me know when data is ready so I don't have to poll it. 
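For anyone wiring one of these up, the mode selection and the data-ready check map onto ST's Ultra Lite Driver (ULD) roughly like this. This is a sketch from the ULD's documented C API (vl53l5cx_api.h), not verified on this exact breakout, with the platform/I2C glue and error checking omitted:

```cpp
#include <stdint.h>
#include "vl53l5cx_api.h"  // ST's Ultra Lite Driver (ULD)

// The platform / I2C fields inside VL53L5CX_Configuration must be filled
// in for your board, and all return codes should really be checked.
VL53L5CX_Configuration dev;
VL53L5CX_ResultsData   results;

void sensorSetup() {
    vl53l5cx_init(&dev);
    vl53l5cx_set_resolution(&dev, VL53L5CX_RESOLUTION_8X8);  // 8x8 tops out at 15 Hz
    vl53l5cx_set_ranging_frequency_hz(&dev, 15);             // 4x4 would allow up to 60 Hz
    vl53l5cx_start_ranging(&dev);
}

void sensorPollOnce() {
    uint8_t ready = 0;
    vl53l5cx_check_data_ready(&dev, &ready);  // or wire the INT pin to skip polling
    if (ready)
        vl53l5cx_get_ranging_data(&dev, &results);  // distance_mm[], target_status[], ...
}
```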

The image above shows that this sensor gives a lot more information than just range.  I have not digested it all, and the documentation does not go into detail about what all these extra values represent or how they might be used.  Each of the 64 pixels gives the distance and six other values.  According to the comments in the header file, they are:

                // LINE 1
                //  Measured distance in mm
                data.distance_mm[y],
                
                // LINE 2
                // Ambient noise in kcps/spads
                data.ambient_per_spad[y],                       
                // Number of valid target detected for 1 zone
                data.nb_target_detected[y],   
                // Number of spads enabled for this ranging
                data.nb_spads_enabled[y],
                
                // LINE 3
                // Signal returned to the sensor in kcps/spads
                data.signal_per_spad[y],
                // Estimated reflectance in percent
                data.reflectance[y],
                // Status indicating the measurement validity 
                // (5 & 9 means ranging OK)
                data.target_status[y]

 

I'll be gathering data using various conditions like multiple distances from a wall, oblique and also moving along the wall.  I'll be video recording the output above and storing the data in spreadsheets in case you would like to look at the "noise" variability.  I'll be adding to this thread as I accumulate stuff.

Enjoy!

VBR,

Inq

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, Access Point Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1549
 

@inq

You may not be familiar that most cell phones are able to see into Infrared. One of my older ones do, but strangely, my new phone does not.

Probably the new one has an infrared filter.

I'll be gathering data using various conditions like multiple distances from a wall, oblique and also moving along the wall. I'll be video recording the output above and storing the data in spreadsheets in case you would like to look at the "noise" variability. I'll be adding to this thread as I accumulate stuff.

Best to be selective; we don't want information overload 🙂

This shows a low-resolution depth map of a hand. Now, if you move the hand away, does the depth resolution eventually become one value, and if you move it closer, up to eight values?

TOF HAND

 

 


   
Inq
(@inq)
Noble Member
Joined: 8 months ago
Posts: 959
Topic starter  
Posted by: @robotbuilder

Best to be selective we don't want an information overload 🙂

I'm already there.  Too many numbers changing too fast to even get a feel for it.  

image

VBR,

Inq



   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1549
 

@inq 

Perhaps colors would look better than numbers?

Waving the sensor all over the place or waving something in front of the sensor is obviously going to give a pointless display of numbers (distances).

You need the data to be addressed by the position of the sensor and the direction in which it points.

Maybe replace the ultrasonic sensor with the TOF sensor in this project.

This should give a set of numbers for each position.

 


   
Inq
(@inq)
Noble Member
Joined: 8 months ago
Posts: 959
Topic starter  

Like the first video... it does 10 colors... evenly distributed over the 4 meters.

red, orange, yellow, lime, green, teal, blue, navy, indigo, black.  That one image was of the fixed ceiling.  

More to come...



   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1549
 

@inq 

Colors are eye candy for us, but the robot needs to relate those numbers to its position and orientation.

It has to stitch the data together using those positions and orientations, although with SLAM you use the actual data to help reset the robot's position and orientation by comparing what it expects to see with what it actually sees.

With 3D you will have a 3D cloud of data to store.  However, the idea that more data is always good is wrong. The data you need has to be goal oriented.  In the case of using internal bit maps (not used by humans!) you might only need the x,y position in those maps.  Humans do not use their x,y positions. They just see where they are by recognizing what they are looking at.

 

 


   
Inq
(@inq)
Noble Member
Joined: 8 months ago
Posts: 959
Topic starter  

This post is BORING... but it is data for those that might want to see how the sensor behaves.  This is the first of several tests that I think will be necessary before I can actually use the sensor effectively.  There are some unexpected results that I still haven't been able to resolve in my mind.  I'll try to summarize what I've seen, but the raw data is also provided for those of an Inquisitive nature so they can study it for themselves.

The following presumptions are based entirely on my observations, as the datasheet is nearly worthless in describing things like how it works.  Unlike an ultrasonic sensor, this thing returns six pieces of information about each of the 64 "pixels" besides distance.  The previous video shows all those extra values, and I've not been able to discern any use for them as yet.  Many have the term something/spad (single-photon avalanche diode).  The header file gives a one-liner description for each, and that is the only information I have on them yet.

The first preconceived notion I need to get rid of is the term laser.  When I hear that, I always assume a nice straight line like a laser pointer, a laser cutter, or those used in the rotating lidar units.  I wanted to believe this thing was sending out an 8x8 array of laser beams at equal angles.  But no... it appears to be more like a flashlight that spreads out with an FOV of 45° in the horizontal and vertical directions.  This would explain the range limitations.  A laser beam should be good nearly out to extreme distances, as NASA uses one to accurately measure the ToF distance to the Moon.

I also make this presumption based on pointing a phone camera at the sensor.  The infrared light coming from the sensor seems of equal intensity from various orientations.  I don't see any sudden bursts, then darkness, between "pixel" angles. 

It appears this sensor does the 8x8 discretization on the receiving end.  The receiving sensor must simply use a screen-door-type grating to isolate the distances into the 8x8 pattern. Maybe if I had a microscope, I might be able to see it, but... using a magnifying glass gives bupkis.  

The Test

The following results for this first test are as plain as possible.  I've set the sensor up to point at a plain white-painted sheetrock wall.  It is as precisely 1000 mm as I can make it with a measuring tape.  It is as perpendicular as I can make it.  It is stationary during the whole test, and all ambient light was extinguished.  I actually monitored the results from another room, so there was no light at all in the test room.  I took 271 data samples at 15 Hz.  The raw data is in the spreadsheet attached below.  The video... (just one step more exciting than paint drying) is also attached.  You should be able to pause the video, and the frame number at the lower-left corner should correlate to the spreadsheet data.

Expectations / Results / Evaluation

I was expecting two things from this test to get a baseline for future tests.

Noise

To see what kind of digital/sensor noise could be expected in the data.  I calculated a 4.1% variation in the data.  As mentioned in the primary Inqling Jr. thread, my background in statistics is rather non-existent.  I'm still struggling with how to massage the data into a SINGLE geometric plane representing the wall when the depth varies so much.  I'm not seeing an algorithm that will distinguish between the noise of a plain wall and a plain wall with a chair leg next to it.  The latter would trip up the robot if I statistically considered that datapoint "noise".

Convex Pattern

Since the sensor is looking at the wall from a perpendicular viewpoint, the pixels in the center should be closest while the ones at the corners should be furthest.  That did not pan out as I expected.  I don't know why... yet.  The raw pixel data starts at line 20 in the spreadsheet.  Each column is the distance to each pixel.  Columns are labeled to indicate which pixel.  The first column (C) is the upper left pixel; the last column (BN), the lower right.  Columns AD, AE, AL, AM are highlighted, showing the 4 pixels at the center.

The rows above 20 are calculations summarizing things:

  • The first 4 rows show the minimum, average and maximum distance reported for each pixel over the test period.  The 4.1% variation comes from these calculations.
  • Rows 5-7 are just some calculations needed for the rows following.  Mainly... that each pixel should have an FOV of about 5.625°.
  • Rows 9-12 show calculations of the theoretical distance to the center of each pixel.  The assumption being that the sensor averages the readings within the pixel.  The % error between this theory and what the data says the distance is seems way too high.  I need to recheck my geometry.  I must have an error in my math or assumptions.
  • Rows 14-17 show calculations of the theoretical distance to the closest point within each pixel.  The assumption being that the sensor latches onto this first return and ignores the average.  Again, even this value is far higher than reported. 

The Award Winning Movie

The Raw Data



   
Will
(@will)
Noble Member
Joined: 1 year ago
Posts: 2111
 

@inq 

Thanks for doing the research.

I was kidnapped by mimes.
They did unspeakable things to me.


   
robotBuilder
(@robotbuilder)
Noble Member
Joined: 3 years ago
Posts: 1549
 

@inq 

Unable to view the raw data as I don't have a spreadsheet program, so I am unable to analyse it. Just the raw numbers in Notepad would be fine, as I can load those into a program.

From what I could gather, this is simply an 8x8-resolution TOF camera that returns the average distance (or maximum, minimum and average distances within each pixel's FOV), while an ordinary camera with an 8x8 resolution would return RGB levels of light.

An experiment I would do is have a pole (say a broom handle or pipe) and move that between the wall and the sensor.

How do the numbers change with different distance from the wall?

What is the resolution of the distance measure (analogous to the number of light intensity levels in an ordinary camera)?

What do you get when you point it at a face close enough to frame the whole face in the 8x8 "pixels"?

I don't remember ever seeing any explanations of how to use the data to create a 3d map.

Can you put the sensor on your laptop and move it around so you can get an image like the first video?

 


   
DaveE
(@davee)
Prominent Member
Joined: 2 years ago
Posts: 678
 

Hi @inq,

  Looks like you are going on a voyage of discovery... and you have already figured out that not all lasers produce highly collimated beams for bouncing to the moon and back, or exploding objects miles away.

I confess I know even less about LIDAR than you, but there did seem to be some info on ST's website. e.g.

https://www.st.com/resource/en/user_manual/um2884-a-guide-to-using-the-vl53l5cx-multizone-timeofflight-ranging-sensor-with-wide-field-of-view-ultra-lite-driver-uld-stmicroelectronics.pdf

This talks about two targets in view .. a small one in the centre that is much closer, and hence in front of, a larger target behind.

It then starts discussing a 'sharpener' ... which I suspect is essentially a detection level adjustment ... that is required to stop 'stray' light from the nearer target obliterating the signal from the more distant target. It shows that when the sharpener is not in use, the nearer target appears to fill the whole detected area - probably because the imaging is 'fuzzy' and the sensors are all being 'blinded' by the locally reflected light ... similar to looking through the windscreen when a low sun is shining straight in.

Maybe try to imagine viewing the scene through a very out-of-focus lens .. and that objects nearer the sensor may reflect a lot more light back into the sensor. I think the temptation is to imagine the sensors have a focused image, but I suspect that is far from the case.

Also note that the distance measurements are far from precise .. this will also tend to hide the difference in distances for the sensors to your target.

I think maybe you need to set up a target, more like the one in the guide I just mentioned, and see if you can 'tweak' it to display its presence, as a starting experiment.

Good luck ... Dave

 


   
Inq
(@inq)
Noble Member
Joined: 8 months ago
Posts: 959
Topic starter  
Posted by: @robotbuilder

Unable to view the raw data as I don't have a spreadsheet so I am unable to analyse it. Just the raw numbers in NotePad would be fine as I can load that into a program.

Here is a comma-delimited file that will open in a text editor OR Excel or Google Sheets.  Line 19 has the heading for each of the 66 columns (frame and pixel 1,1; 1,2; ... 8,8), and the raw data starts right after it.

Posted by: @robotbuilder

From what I could gather this is simply a 8x8 resolution TOF camera that returns the average distance (or maximum, minimum and average distances within each pixel's FOV)

Yes... but the distances aren't coming out as I expect as you move away from the center.  Still TBD what these values are.

Posted by: @robotbuilder

An experiment I would do is have a pole (say a broom handle or pipe) and move that between the wall and the sensor.

I may try that, but I was thinking I would slowly move a vertical board and see at what FOV angle the numbers start changing in that first pixel column, so I can start to identify the real FOV. 

If I keep moving the board laterally, it might also tell me whether it's an average or min/max until it starts changing the second column.  At that point, the 1st column should stay constant no matter what method it's using to measure.

Posted by: @robotbuilder

I don't remember ever seeing any explanations of how to use the data to create a 3d map.

By me... or anyone?  It would be just like you explained with the ray being sent out hitting the wall in your game simulation.  Or... are you talking about something else?

Posted by: @robotbuilder

Can you put the sensor on your laptop and move it around so you can get an image like the first video?

Did you see the one starting about 5 minutes in, in this video... 

You must have fallen asleep before the excitement started. 😆 Is this what you are looking for?  But yes... I can move it around manually.  Still dink'n with the new Inqling Jr. chassis.

VBR,

Inq



   
Inq
(@inq)
Noble Member
Joined: 8 months ago
Posts: 959
Topic starter  
Posted by: @davee

I confess I know even less about LIDAR than you, but there did seem to be some info on ST's website. e.g.

Thirty minutes later, I'm still trying to download it.  All I've seen so far is their datasheet and their marketing pages.  This sounds promising.  I await with bated breath... and much cussing.

I'm thinking about becoming homeless, plugging my laptop in at the library, and scarfing their WiFi.  Maybe I can hide in the stacks after hours.

I want to try objects, but at 64 pixels, I'm guessing I might be able to tell a sphere from a disk, but a sphere versus a head... hmmm!

At the moment, I'm still just trying to identify where I send out the trigonometry lines to translate into a 3D Cartesian point cloud like @robotbuilder put in the Inqling Jr. thread.  Also, like he and you mentioned above... what are the numbers coming back... nearest, furthest, average, brightest... It's anyone's guess so far.

Like in this example with the robot cruising down the wall... what is the number coming back within that 1-pixel FOV width (and height)?  Will I be sending the ray out dead center for an average, or at the angle of the line nearest the centerline?  TBD with these tests.

image

 



   
Will
(@will)
Noble Member
Joined: 1 year ago
Posts: 2111
 
Posted by: @inq

Like in this example with the robot cruising down the wall... what is the number coming back within that 1 pixel FOV width (and height).  Will I be sending the ray out dead center for average, or the angle of the line nearest the centerline.  TBD with these tests.

image

 

Would you actually get any valid returns from that situation ? I'd think that the rays would just graze or bounce off the wall and not offer anything back to the collector at all. Unless the wall was very rough.



   
DaveE
(@davee)
Prominent Member
Joined: 2 years ago
Posts: 678
 

Hi @inq 

The pdf file was only 0.75Mbyte ... perhaps you need to pull the string between the tin cans tighter, to get a stronger Internet signal? 🤨  Seriously, you have my sympathies.

--------------

When Internet comms went from modems to trying to squeeze a 'broadband' signal down the old telephone wire, some 20-odd years ago, the small village I live in was 'stuffed' ... I don't know why, as we are not far from a main exchange, but the 'last mile' of POTS wiring is useless.

However, at that time, an IT guy with an entrepreneurial streak decided he needed a change of career, as technology was moving on and leaving him behind. So he set up a rooftop-to-rooftop wireless net, and fed it 'centrally' from a fibre link to the Net. Furthermore, he has updated the tech a couple of times, so now we get 100 Mbit/sec download and 35 Mbit/sec upload during the quieter times ... maybe half that at a busy time. (Of course many websites aren't that quick, but some are, like the big semiconductor companies.) And his charges are reasonable, and if there is a problem, which is rare, I simply text his mobile ... 7-days-a-week personal service !! I don't know if that makes any sense in your situation/area, but I hope you didn't mind the diversion.

----------

The pic I noticed from the app note was this:

image

which is about as simple as you can get .. just a square occupying about 1/9th of the 2D field of view, which is much closer than the surrounding 'background'.

But note that the 'sharpener' needed to be adjusted to actually see the closer target! I didn't spend enough time to figure out what they were tweaking, but it clearly has to do with the sensitivity ...

My analogy, is you have a poor quality greyscale scene, maybe an old dirty page of text, which you have scanned, and you are trying to 'enhance it', to pick out the text. One trick is play with contrast, brightness and filter levels, so that 'pale grey' becomes white, but 'darker grey' becomes black.

My analogy of how it works is almost certainly naive and wrong compared to the actual 'sharpener' algorithm, as I haven't looked for the details, but I think the effect might be somewhat similar .. that is, the 'sharpener' will need to be tweaked, possibly in real time, to actually pick out the objects, preferably before your bot runs into them....

I think your questions about angles and so on are valid, but maybe you need to start at a much more basic level .. just looking forward and try to see something immediately in front.

(I notice you are asking if you will get a reflection from a wall at an angle ... I am guessing the answer is yes, in 'most' cases, in terms of photons coming back to the detector, unless perhaps it's something like a mirror-fronted door, since most walls are very rough at light wavelengths (940nm?), so the light will be scattered over a wide range of angles, including back to the detector.  How easy it will be to make any sense of the data produced by the sensor is probably a different question.

And it is likely the range will decrease as the incidence angle deviates from 90 degrees. And I think if you read the data sheet and app notes, it discusses the rather severe effect the colour of the surface can have on the range. I would also be concerned about its ability to spot 'thin' objects like the legs of tables and chairs. But all of this is probably more appropriately looked at when you have done the 'large' object stuff. ) 

And as another throwaway comment ... the same app note says the image is upside down and reversed left to right ... I think that is the same as a 180-degree rotation? Obviously, you may have already figured all this out.

Just a pondering from a cynic .. best wishes, Dave


   
Inq
(@inq)
Noble Member
Joined: 8 months ago
Posts: 959
Topic starter  
Posted by: @robotbuilder

Unable to view the raw data as I don't have a spreadsheet so I am unable to analyse it. Just the raw numbers in NotePad would be fine as I can load that into a program.

Here is a comma-delimited file that will open in a text editor OR Excel or Google Sheets.  Line 19 has the heading for each of the 66 columns (frame and pixel 1,1; 1,2; ... 8,8), and the raw data starts right after it.

I uploaded this while at the library and it was in the, "My Media", but couldn't get the My Media to open till just now... 90 minutes later.  Jeeze!



   