Time of Flight (ToF) VL53L5CX - 8x8 pixel sensor

251 Posts
5 Users
74 Likes
14.6 K Views
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @davee

I think your data is (probably) consistent with the example I pasted previously (and again below) from the ST's app note. I think all of the pixels are detecting photons from the area of the wall, closest to the sensor, whose distance will be about 1000 mm, and this is 'blinding' the sensor from reporting photons that must take a longer path, near the perimeter of the field of view.

I sure hope it's not that overpowering in the default state.  I set this test up to be as near to the simple/optimum wall described in their datasheet as I could.  I thought the condition you reported was with a significantly nearer/brighter object, not just the same wall being 10% farther away at the corners.  I've found where the sharpener is exposed in the library, so I'll do some more experimenting (rough sketch below).
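
For reference, a minimal sketch of what "exposed in the library" might look like, assuming ST's ULD driver naming (vl53l5cx_set_sharpener_percent); check how your wrapper library actually surfaces it before trusting these names:

#include "vl53l5cx_api.h"

VL53L5CX_Configuration Dev;   // platform/I2C fields set up elsewhere

void configureSharpener() {
  // Default is 5%. 0% disables sharpening; higher values (up to 99%)
  // filter more of the signal that bleeds between neighbouring zones.
  vl53l5cx_set_sharpener_percent(&Dev, 20);
}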

VBR,

Inq

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
(@davee)
Member
Joined: 3 years ago
Posts: 1699
 

Hi @inq,

   The ST application note example is a 'wall' at 500 mm with a square/rectangle in front of the wall, at 100 mm from the sensor.

Please bear in mind that when I talk of the sensor being 'blinded', or of sunlight on a windscreen, these are analogies rather than precisely what is happening.

Our eyes 'average' the amount of light over (say) 0.1 seconds to give an image. We have no visual way of differentiating things that happen within that period of time, so it is tricky to imagine how the sensor works ... and unfortunately the quick flick through the app note wasn't very helpful.

However, these detectors can resolve the arrival time of light to fractions of a nanosecond. Even at the maximum range of 4 metres, the whole viewing time for a single data value is under 30 nanoseconds. Light travels at about 300 mm per nanosecond, and the sensor uses that timing to measure distance. So the detection 'characteristics' are very different from our eyes'. To understand it, you need to pretend your eyes have been replaced by the sensor.
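
A quick worked version of those numbers (nothing sensor-specific, just the arithmetic):

#include <cstdio>

int main() {
  const double c_mm_per_ns = 300.0;    // speed of light, approximately
  const double range_mm    = 4000.0;   // the sensor's rated maximum
  // The photon travels out and back, so the measured time is doubled.
  double roundtrip_ns = 2.0 * range_mm / c_mm_per_ns;
  printf("Round trip at 4 m: %.1f ns\n", roundtrip_ns);   // ~26.7 ns
  // Inverted: each nanosecond of timing error is ~150 mm of range error,
  // so millimetre-level results demand sub-nanosecond timing.
  return 0;
}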

Each pixel is looking at a wide area, so there may be lots of different light paths that can land on the detector, but the pixel can only report a single distance. So the sharpener is in some way filtering what the sensor can 'see'. I suspect this filtering is partly or wholly in terms of time, but this is only my hunch.

Happy reading my friend, Dave


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

I'm a little disenchanted with this sensor.  I was in a gymnasium-type room making the video for the Inqling Jr project.  I finished that up and started testing this sensor.  I had lots of space and a blank wall.  I was able to set it up high enough, at 3 meters from the wall, that the bottom of its FOV didn't hit the floor.  I got it plumb and square to the wall by watching the values and making sure all four corner values were about the same.

My next step was to use some folding tables, stood up on their ends, that I could slide in from the sides to start identifying the true FOV.  As described, the wall is at 3 meters; the tables I was moving in from the sides were at 1 meter from the sensor.  When I moved them into the field and noted the drop from ~3 m to ~1 m, I'd move them back out.  The sensor was still showing them as being there.  I noted this problem in earlier tests under incandescent light.  This time I was under ONLY ambient light coming in the windows of the gymnasium.  None of it was direct sunlight, as there was a thunderstorm going on at the moment.  Just AMBIENT SUNLIGHT was causing a huge latency in the sensor.  Over several tens of seconds it would finally return to "seeing" the far wall at 3 meters.

IOW - this thing will be worthless outdoors under any conditions.  Even inside, it seems to have trouble with non-direct window light, incandescent light, and halogen light.  So far, it only seems to handle total darkness, fluorescent, and LED light.

May be... I can just do all the room mapping at night.  🤔 🤨 🤪 

VBR,

Inq

P.S. @davee - I was hoping to get to the point where I could test your theories about sharpening.  The wall was a light green color, and I think the numbers appeared to be more convex, but I got discouraged by the results above before I started running the numbers.  Maybe after a nice bottle of wine and a good night's sleep, things will look better tomorrow.

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 

@inq 

I'm a little disenchanted with this sensor.

I thought you were getting some good results using your InqPortal Admin software.

Did you duplicate the experiments from one of the YouTube videos you posted, with the sensor pointing up at the ceiling while a hand was moved above it?

 

 


   
Will
(@will)
Member
Joined: 3 years ago
Posts: 2535
 
Posted by: @inq

I'm a little disenchanted with this sensor.

That's a shame, it sounded pretty good back at the start 🙁

Anything seems possible when you don't know what you're talking about.


   
(@davee)
Member
Joined: 3 years ago
Posts: 1699
 

Hi @inq,

   Just looking at the specs, I was far from convinced that it is good enough for what you were hoping to do, but it is the 'bottom end' of a technology that seems to be making an impact, so figuring out how it works, etc., may pay dividends in the longer run ... at least, that was my attempt at being optimistic ... which by nature I am certainly not. I have the impression a lot of the 'real' lidar work is done with a scanning system, more like a miniature version of the classic rotating radar dish.

I realise that is not how this device has been presented, and ST have obviously spent a fair bit of cash on this 'stationary' device, so they must think it has a use which will pay them back ... even if it turns out to be something trivial, like a coffee-cup detector that stops the coffee washing over your feet in brown gunge when the cup fails to fall into position. 😀 

I am mystified by your observation: "When I moved them into the field and noted the drop from ~3 m to ~1 m, I'd move them back out. The sensor was still showing them as being there."

I should emphasise the mystification is not disbelief in your observation ... but about how the device could be doing such a thing ... what is producing this 'memory' effect.

The app note I glanced at seemed to be preoccupied with how to get some data out of the thing, and unfortunately didn't explain much about the inner workings of the device --- or at least, I didn't spot any such explanation. At the moment, I am having difficulty imagining how it could remember objects ... but as I don't know enough about the innards, I can't be confident about anything.

Although the data sheet talks about detecting at distances of up to 4 m, I noticed the example page I copied only used 100 mm and 400 mm. I wondered at the time if this was a hint at its real limits.

Hope I haven't wasted your time. For me, it all seems like good research... but I can imagine you might be feeling frustrated.

And the 'accepted' rule for research is that at least 90% is discarded ... the catch is, until you do the 100%, there is no way of knowing which 10% might turn out to be useful.

Best wishes, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 

It is an interesting bit of technology which I didn't really know much about until @inq posted about using the VL53L5CX sensor.

A ToF camera seems to me to have many applications, but the reduced 8x8 sensor not so much, except maybe in some specialized applications. Not having the sensor means I can only read about it, and there don't seem to be many (any) real build-it-yourself application examples.

I looked at the Arduino examples,
https://github.com/stm32duino/VL53L5CX/tree/main/examples
and apparently the examples need an STM32 Nucleo Board.
https://ph.rs-online.com/web/p/sensor-development-tools/2300082

[Image: board and sensor]

The sensor seems to me to be the 8x8 ToF equivalent of an ordinary webcam limited to 8x8 pixels, which doesn't make for much of an image. I also see there are 3 techniques that might be used:

https://www.allaboutcircuits.com/technical-articles/how-do-time-of-flight-sensors-work-pmdtechnologies-tof-3D-camera/

 

 

 


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @robotbuilder

I thought you were getting some good results using your InqPortal Admin software.

Did you duplicate the experiments from one of the YouTube videos you posted, with the sensor pointing up at the ceiling while a hand was moved above it?

Posted by: @will

That's a shame, it sounded pretty good back at the start 🙁

Posted by: @davee

H @inq,

   Just looking at the specs, I was far from convinced that it is good enough for what you were hoping to do, but it is the 'bottom end' of a technology that seems to be making an impact, so figuring out how it works, etc., may pay dividends in the longer run

With a nice wine and good sleep behind me, I'm going back out to the gym/auditorium (good Internet) this morning to give it the ole college try again. 

@robotbuilder - The InqPortal software can easily sustain data transfers using small packets at 100 Hz.  Although this sensor runs at only 15 Hz, it provides far bigger packets than I've ever tested with.  Theoretically (TCP/IP), at the same throughput, sending fewer, larger packets is more efficient than sending smaller packets faster.  So I don't think InqPortal is the culprit, but I will confirm that (rough numbers below). 
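
Rough numbers, assuming a hypothetical payload of a 16-bit distance plus one status byte per zone (the real result structure carries more fields, which would scale this up, but not by enough to matter):

#include <cstdio>

int main() {
  const int zones        = 64;   // 8x8 grid
  const int bytesPerZone = 3;    // 2 (distance) + 1 (status), assumed
  const int framesPerSec = 15;   // sensor maximum in 8x8 mode
  printf("%d bytes/s\n", zones * bytesPerZone * framesPerSec);   // 2880
  return 0;
}

Under 3 KB/s should be trivial for a WebSocket on an ESP8266, which supports the hunch that InqPortal isn't the bottleneck.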

I'll duplicate all the tests and video them with a better camera so it will be clearer what I'm trying to describe with words.  It'll make it easier to talk about the behavior.

@will, @davee - Really, my expectations were only that it would be better than the HC-SR04 ultrasonic sensor (HC) that is sprinkled on just about every robot found on this forum.  In that regard, I do think it is far superior in the right conditions.  I never had expectations of seeing something like the video @robotbuilder dug up, with the hand waving in front of a high-res depth sensor.  None of us had delusions about a 64-pixel camera. 🤣

With the HC, you get one piece of information; you have no idea where the target is within the large FOV, or even whether the reading is valid.  It could be a strong reflection from a previous ping that blinds the current ping and gives a false (close) number.  It could be an insignificant reflection from the floor.  It could be totally blind to a pillow.

Discerning different objects... hand vs. head vs. ball, etc... I believe is beyond this thing's capability.  But I'm still holding out hope that for the desired project (room mapping) it will be superior.  Having a snapshot of 64 distances in a horizontal-and-vertical grid eliminates a lot of scanning and gives vertical/3D data rather than just a floor plan.  Maybe the other pieces of information provided for each pixel contain something useful, at least to rule bad readings out.  A validity checker?  (A rough mapping sketch follows.)
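
To make the mapping idea concrete, here is a hedged sketch of turning one frame into 64 rough 3D points. It assumes the datasheet's 45-degree square FOV split evenly into 8 angular bins and row-major zone order; the real zone boundaries and orientation may differ, so treat it as a starting point only:

#include <cmath>
#include <cstdint>
#include <cstdio>

const float DEG2RAD = 0.01745329f;
const float FOV_DEG = 45.0f;   // square field of view, assumed evenly divided
const int   GRID    = 8;

// Convert one 8x8 frame of distances (mm) into x,y,z points (mm).
void frameToPoints(const int16_t dist_mm[64]) {
  const float step = FOV_DEG / GRID;   // ~5.6 degrees per zone
  for (int row = 0; row < GRID; ++row) {
    for (int col = 0; col < GRID; ++col) {
      // Angle of the zone centre relative to the optical axis.
      float az = ((col + 0.5f) - GRID / 2.0f) * step * DEG2RAD;
      float el = ((row + 0.5f) - GRID / 2.0f) * step * DEG2RAD;
      float d  = (float)dist_mm[row * GRID + col];   // range along the ray
      float x  = d * sinf(az);                // left/right offset
      float y  = d * sinf(el);                // up/down offset
      float z  = d * cosf(az) * cosf(el);     // forward component (approx.)
      printf("%.0f %.0f %.0f\n", x, y, z);
    }
  }
}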

Last night's discouragement was that it appears to be severely handicapped by even relatively dim, indirect ambient sunlight.  I'll try to quantify/video what I'm seeing today.

VBR,

Inq

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @davee

I have the impression a lot of the 'real' lidar work is with a scanning system, more like a miniature version of the classic rotating radar dish.

I believe you are right.  Scanning with a laser point gives a very good location basis and a very strong, overpowering return at very long distances.  This thing is shining a (relatively) bare bulb and trying to interpret the rather scattered data coming back.  Maybe I'll end up with one of those rotating monsters, but for now the experimentation is enjoyable in its own right.

Posted by: @davee

ST have obviously spent a fair bit of cash on this 'stationary' device, so they must think it has use which will pay them back

I think this is at least conceptually (maybe actually) what was used in the (aborted) Android Pixel 4 Pro.  Google wanted us to hand-wave information at our phones.  Who else thought that was a good idea?  🤣  I also think it's the same concept used by the Microsoft Kinect.

Posted by: @davee

I am mystified by your observation: "When I moved them into the field and noted the drop from ~3 m to ~1 m, I'd move them back out. The sensor was still showing them as being there."

I totally agree.  From your perspective (not having one in hand), and even from mine, I am beyond words to explain the behavior.  The simplest way I can put it is that bad ambient light (sun, incandescent, halogen) seems to make it go drowsy.  The data takes many seconds to change back to seeing the back wall, when the very next frame at 15 Hz should already report it.  I'll try to capture that better on video.  Like @will has said in other posts, "It didn't happen if we don't have video." 😆

Posted by: @davee

Although the data sheet talks about detecting at distances of up to 4 m, I noticed the example page I copied only used 100 mm and 400 mm. I wondered at the time if this was a hint at its real limits.

At one point yesterday, I was getting values over 4 meters that seemed valid.  I'll re-address those today too... on video.

Posted by: @davee

Hope I haven't wasted your time. For me, it all seems like good research... but I can imagine you might be feeling frustrated.

And the 'accepted' rule for research is that at least 90% is discarded ... the catch is, until you do the 100%, there is no way of knowing which 10% might turn out to be useful.

I completely agree.  In fact, I'm more into things like this than getting the robot done.  Getting Inqling Jr back into a moving state yesterday and videoing it felt more anticlimactic than I expected; it didn't really feel like a milestone reached.  So... you're certainly not "wasting my time".  If anything, you are focusing me on something I might have missed because I'm chasing some other rabbit.

 VBR,

Inq

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @robotbuilder

A ToF camera seems to me to have many applications, but the reduced 8x8 sensor not so much, except maybe in some specialized applications. Not having the sensor means I can only read about it, and there don't seem to be many (any) real build-it-yourself application examples.

I think I've covered your last post before I read it, but I do think this sensor has a good purpose for what I want to do: room mapping.  I've just got to get it under my thumb!  Will it do a better job than a scanning ray?  No way!  But I don't have to do all the monkey motion of moving the ray around, or pay for one of those.  I tried a quick Amazon search for one just now, but bandwidth (and my patience) kept me from it at the moment.  Aren't they several hundred dollars?

Anyway, there is one other aspect of this device that a scanner can't match, and one I haven't explored.  I don't really need it for my purposes, but apparently it has some built-in way of quantifying gestures from frame to frame... 3D, horizontal/vertical/depth swipes.  Could be interesting in its own right.

VBR,

Inq

 

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

@robotbuilder, et al - One more thought just now.  Are there any relatively inexpensive single-laser-point-type sensors available (say, under $100 US)?  The one in one of your earlier posts, where the guy used 2 servos to move what looked like a gun scope, looks to be overkill for what I want, but one that sends out a laser point sounds more ideal for room mapping... just far more laborious on the mechanisms to move it.  I don't think I've ever run across one.  

Again, this 8x8 sensor sounds like it would be ideal for obstacle avoidance compared to the ultrasonic.  Not only would it see the object sooner, it would tell you whether it's on the left or the right, all without scanning back and forth (rough sketch below).
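
As a sketch of that left/right idea (assuming row-major zone order; some breakouts ship the array rotated or mirrored, so verify against a known target first):

#include <cstdint>

enum Steer { STEER_LEFT, STEER_RIGHT, STEER_STRAIGHT };

// Pick a direction from one 8x8 frame: steer away from the closer half.
Steer pickDirection(const int16_t dist_mm[64], int16_t threshold_mm) {
  int16_t minLeft = INT16_MAX, minRight = INT16_MAX;
  for (int row = 0; row < 8; ++row) {
    for (int col = 0; col < 8; ++col) {
      int16_t d = dist_mm[row * 8 + col];
      if (col < 4) { if (d < minLeft)  minLeft  = d; }
      else         { if (d < minRight) minRight = d; }
    }
  }
  if (minLeft > threshold_mm && minRight > threshold_mm) return STEER_STRAIGHT;
  return (minLeft < minRight) ? STEER_RIGHT : STEER_LEFT;
}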

VBR,

Inq 

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
(@davee)
Member
Joined: 3 years ago
Posts: 1699
 

Hi  @inq,

  Thanks for the kind and thoughtful reply.

  I note the three sources of ambient light you mention are all based on something being hot, and hence radiating light with a spectrum that extends down into the infrared, which is where this sensor works. LEDs and fluorescent lights might be expected to be far less troublesome (unless, by some misfortune, they also have a 'stray' emission line in the near-IR region of the spectrum).

The way it seems to become 'drowsy' puzzles me. I am wondering if there is some kind of 'averaging' or 'smoothing' process buried in the software. Note that in addition to the obvious averaging of grouping data from a number of successive readings together, it might have some way of actively tweaking the controls on the sensor depending on the incoming data. (This is only speculation, of course, as I haven't looked at the software; a sketch of the likely knobs follows.)
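
If the ULD driver naming is what I think it is (I haven't verified these calls against the actual software, so treat them as assumptions), the obvious knobs to experiment with would be something like:

#include "vl53l5cx_api.h"

// Speculative: shorten the on-chip exposure/averaging per frame and
// keep the frame rate at the 8x8-mode maximum.
void tryFasterResponse(VL53L5CX_Configuration *dev) {
  vl53l5cx_set_integration_time_ms(dev, 5);
  vl53l5cx_set_ranging_frequency_hz(dev, 15);
}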

I don't know what the Android Pixel 4's ambitions were ... when it comes to phones, I keep my shopping to the 'cheap and simple' end.

I had a quick glance at some Kinect documents a couple of years ago, and it seemed to be using 'ordinary' video, attempting to work out the posture of a person, a bit like the 'crash dummies' used for testing cars, except it had to do it without target stickers on the person's joints. This was rather more than simple gesture recognition, which, I agree, is a possibility for this lidar device. I suspect Microsoft has lost interest in it, although I haven't checked. And I haven't got a games box of any sort.

This sensor appears to have the ability to be 'superior' to the well-known, simple ultrasound sensor, and with that in mind, it is probably worth a little effort. It could be that no single detection method is good in all circumstances, so combining two very different ones would be a smart way to go. Obviously, the self-driving cars, etc., are using this approach.

Best wishes ... I hope you enjoy the challenge ... the Inqling videos are very impressive!! Dave


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

OK... I was wrong, except about the wine and sleep.  I do believe there are some light-type dependencies, but that is not what I experienced yesterday.  

Test A

This is at home, in LED-only lighting.  The ceiling is at 1.6 m.  Note mainly how fast the numbers fly by.  Secondly, when the obstruction (my hand wave) is over, they return to 1.6 meters almost instantly.  The first video is from the camera's standpoint, with my dialogue.  The second is the screen-capture video of the GUI; it would only be useful to someone who wants to inspect the raw data and watch the frame counter and values directly.

 

Test B

This is where I was wrong: the badly behaved test wasn't about the light conditions.  But I'm finding the truth to be equally bad, and I haven't an explanation.  The test is at 4 meters (the maximum rated distance of the sensor).  In a nutshell... it is distance dependent.  When something is placed in front of the sensor at close range, the response is near instant.  When it is removed, the numbers change back to the 4-meter wall very slowly.  See the movie...
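
Rather than eyeballing it, here is one way I could put a number on the lag. (readNextCentreDistance() is a hypothetical stand-in for whatever call delivers the centre zone of each new frame.)

#include <cstdint>

// Count frames until the centre zone reads within 10% of the far wall.
// At 15 Hz, 300 frames is a 20-second timeout.
int framesToSettle(int16_t wall_mm, int16_t (*readNextCentreDistance)()) {
  for (int frame = 1; frame <= 300; ++frame) {
    int16_t d = readNextCentreDistance();
    if (d > wall_mm * 0.9f && d < wall_mm * 1.1f) return frame;
  }
  return -1;   // never settled within the timeout
}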

Anybody have any ideas why this would be so?

VBR,

Inq

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

More boring data / videos.

Test C

Same test as at 4 meters, but run at 3 meters.  No camera video this time, but I found out how to add sound recording to the screen-capture video.  Basically, the responsiveness gets better the closer the target is.

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

Test D

Piled higher and deeper.  Here is the 2-meter test, where the output looks like what I'm used to seeing... meaning it's so fast I can barely register it.

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   