
Time of Flight (ToF) VL53L5CX - 8x8 pixel sensor

Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @will
Posted by: @inq

Like in this example with the robot cruising down the wall... what is the number coming back within that 1 pixel FOV width (and height)?  Will I be sending the ray out dead center for average, or the angle of the line nearest the centerline?  TBD with these tests.

[image]

 

Would you actually get any valid returns from that situation? I'd think that the rays would just graze or bounce off the wall and not offer anything back to the collector at all, unless the wall was very rough.

I would assume a lot of it would get reflected away, but like an ultrasonic sensor, it still gets enough signal back to see it.  If it reflected away 100%, we'd not be able to see it.  BUT, point well taken.  Note to self - do a test to check signal strength at shallow angles.

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @davee

The pdf file was only 0.75Mbyte ... perhaps you need to pull the string between the tin cans tighter, to get a stronger Internet signal? 🤨  Seriously, you have my sympathies.

S#!+... that'd be an upgrade to the smoke signals I'm using to get my data from the Indian Reservation about 5 miles away.  They have gigabit fiber that our tax dollars paid for.

The download... failed twice.  I'll try in the morning (2 am is usually pretty good).

 



   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @davee

So he set up a rooftop to rooftop Wireless net, and fed it 'centrally' from a fibre link to the Net. Furthermore, he has updated the tech a couple of times, so now we get 100Mbit/sec download and 35Mbit/sec upload during the quieter times ... maybe half that in a busy time. (Of course many websites aren't that quick, but some are, like the big semiconductor companies.) And his charges are reasonable, and if there is a problem, which is rare, I simply text his mobile ...  7 days a week personal service !! I don't know if that makes any sense in your situation/area, but hoped you didn't mind the diversion.

You must be another of our down-under members.  Your profile doesn't say.  The last time I heard "stuffed" in that context was when our Australian foreign exchange student about lost her cookies when I said I was stuffed after a huge dinner.

We have that in our area too, but... alas, I'm deep in a cove and don't have line of sight to his towers.  I'm on the Starlink waiting list.  My last hope.  And no... I don't mind.

Posted by: @davee

And I think if you read the data sheet and app notes, it discusses the rather severe effect the colour of the surface can have on the range. I would also be concerned about its ability to spot 'thin' objects like legs of tables and chairs. But all of this is probably more appropriately looked at when you have done the 'large' object stuff.

And as another throwaway comment ... the same app note says the image is upside down and left-to-right reversed ... I think that is the same as a 180 degree rotation? Obviously, you may have already figured all this out.

Color - more tests needed, but ultrasonic has trouble with carpet and pillows too, so nothing's perfect.

Order - yes, I've rearranged the pixel data so the raw data and the video images above are consistent, as if you are looking in that direction, for up/down/left/right.
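In case it helps anyone following along, the rearranging is nothing more than a 180° rotation of the 8x8 array. A minimal sketch, assuming the library hands back a flat, row-major array of 64 distances (verify the raw order on your own unit):

// Sketch: rotate the raw 8x8 zone data 180 deg so it matches the camera view.
// Assumes a flat, row-major array of 64 distances arriving upside down and
// mirrored (per ST's app note) - verify against your own hardware.
#include <cstdint>

void rotate180(const int16_t raw[64], int16_t oriented[64]) {
  for (int row = 0; row < 8; row++) {
    for (int col = 0; col < 8; col++) {
      // Flipping both axes: (row, col) <- (7 - row, 7 - col)
      oriented[row * 8 + col] = raw[(7 - row) * 8 + (7 - col)];
    }
  }
}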

VBR,

Inq



   
(@davee)
Member
Joined: 3 years ago
Posts: 1684
 

Hi @inq,

I obviously know nothing of the politics, ownership, landscape, local laws and so on, but my simple understanding is that if you can get a 'line of sight' route, then 'off the shelf' boxes with antennas, etc. built in can span that sort of distance, and I presume it is not excruciatingly expensive ... at least not if shared with a number of people. The comms boxes on the roof are smaller than a packet of breakfast cereal ... and you just feed them a twisted pair Ethernet cable with a PoE power supply, and get the usual router input signal back down the same cable. Somewhere nearer the fibre input feed end, there are some servers managing the whole thing, providing an email server and the like, since it connects directly to the Internet (i.e. he is an Internet Service Provider), but I guess that could be different. His business covers about 3 or 4 different (separated) villages. So far as I know, it is just him and a 'young lad' he employs to run up and down the ladder, etc.

I am sure getting it all started could take a bit of work. Just a thought, if you live in an area with a number of people like you in reasonably close proximity and no prospect of things improving in the near future.

PS, I just saw your comment about location ... no, I am Northern Hemisphere ... near Gloucester, in the UK.

Best wishes, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
 

@inq 

@robotBuilder
I don't remember ever seeing any explanations of how to use the data to create a 3d map.

@inq
Did you see the one starting about 5 minutes in, in this video...

I mean a complete 3D map, static in memory (points in absolute space), not the 8x8 distance data as it comes in. The whole idea is that the robot moves around creating a universal map, which it can then use to compare with its current input to determine its position and orientation within that map, and to plan paths from one point to another.
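Roughly what I have in mind, as a sketch only (the 45° per-axis FOV, the row-major zone order and the simple 2D robot pose are all my assumptions, not anything taken from ST's documentation):

// Sketch: project one 8x8 frame of distances into a fixed "world" frame,
// given the robot's pose, so the points can be accumulated into a map.
#include <cmath>
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };
struct Pose2  { float x, y, yaw; };        // robot position (mm) and heading (rad)

std::vector<Point3> frameToWorld(const int16_t dist_mm[64], const Pose2& pose) {
  const float kPi = 3.14159265f;
  const float fov = 45.0f * kPi / 180.0f;  // assumed per-axis field of view
  const float zoneAngle = fov / 8.0f;      // angular width of one zone
  std::vector<Point3> pts;
  for (int row = 0; row < 8; row++) {
    for (int col = 0; col < 8; col++) {
      float d = dist_mm[row * 8 + col];
      if (d <= 0) continue;                // skip invalid zones
      float az = (col - 3.5f) * zoneAngle; // left/right angle of the zone centre
      float el = (3.5f - row) * zoneAngle; // up/down angle of the zone centre
      // Ray in the sensor frame (x = boresight), then rotate by the robot
      // heading and translate by its position.
      float sx = d * std::cos(el) * std::cos(az);
      float sy = d * std::cos(el) * std::sin(az);
      float sz = d * std::sin(el);
      pts.push_back({ pose.x + sx * std::cos(pose.yaw) - sy * std::sin(pose.yaw),
                      pose.y + sx * std::sin(pose.yaw) + sy * std::cos(pose.yaw),
                      sz });
    }
  }
  return pts;
}

Each pass adds another set of points; registering them against what is already in the map is the hard part, and that is where the position/orientation estimate comes from.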

https://www.stmicroelectronics.com.cn/content/dam/dc21/conference-sessions/tt_6_kvam.pdf

[image: tofRobot]


It is a LiDAR board based around the VL53L5CX ToF sensor from STMicroelectronics.

https://hackaday.io/project/181656-mini-cube-robot/log/206181-add-on-board-lidar

Episode 5 - ST VL53L5CX Depth Sensor and Pointcloud2

8x8 ToF Gesture Classification
https://docs.edgeimpulse.com/experts/tof-gesture-classification

https://notblackmagic.com/projects/mini-cube-robot/

[image: miniCube]

As I don't actually have the sensor, I am not as motivated as you to read about it, since I can't play with it.
I have read through the "A guide to using the VL53L5CX multizone Time-of-Flight ranging sensor" document, to which DaveE gave a link, and it all looks very, very complicated 🙂

Did you get it from SparkFun? They provide examples for use with the Arduino.

https://www.arduino.cc/reference/en/libraries/stm32duino-vl53l5cx/
https://www.arduino.cc/reference/en/libraries/sparkfun-vl53l5cx-arduino-library/
https://github.com/sparkfun/SparkFun_VL53L5CX_Arduino_Library
https://github.com/sparkfun/SparkFun_VL53L5CX_Arduino_Library/tree/main/examples
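Their basic distance-array example boils down to something like the sketch below (from memory of the SparkFun examples, so check the repo above for the exact method names before relying on it):

// Sketch of the basic SparkFun flow: read 8x8 distances over I2C and print them.
// Patterned on their Example1_DistanceArray - names may differ between versions.
#include <Wire.h>
#include <SparkFun_VL53L5CX_Library.h>

SparkFun_VL53L5CX myImager;
VL53L5CX_ResultsData measurementData;     // holds one frame of results

void setup() {
  Serial.begin(115200);
  Wire.begin();
  if (!myImager.begin()) {
    Serial.println("Sensor not found - check wiring");
    while (1) ;
  }
  myImager.setResolution(8 * 8);          // 8x8 mode (4x4 is the power-up default)
  myImager.startRanging();
}

void loop() {
  if (myImager.isDataReady() && myImager.getRangingData(&measurementData)) {
    for (int i = 0; i < 64; i++) {        // one distance in mm per zone
      Serial.print(measurementData.distance_mm[i]);
      Serial.print(i % 8 == 7 ? '\n' : '\t');
    }
    Serial.println();
  }
  delay(5);
}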

 


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

@robotbuilder - I tried to upload it a dozen times yesterday / last night.  Finally, now, here is the raw data.



   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

@robotbuilder - I'm heading back to the library this morning; I'll dig into your links.  Yes, I'm using the SparkFun breakout board (the smaller one) and their library.  I'm not sure, but it seems like their library doesn't support all the features of the chip yet.  I'd really hate having to roll my own, or even dig into theirs to add those features.  I think the chip is still relatively new, as I haven't seen it in the China clones at commodity prices yet, like ST's VL53L0X chip.  Now that they've come out with a 500K-pixel chip, it'll drive down the price and demand for this 8x8 one.

VBR,

Inq



   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @davee

but there did seem to be some info on ST's website. e.g.

https://www.st.com/resource/en/user_manual/um2884-a-guide-to-using-the-vl53l5cx-multizone-timeofflight-ranging-sensor-with-wide-field-of-view-ultra-lite-driver-uld-stmicroelectronics.pdf

I finally got a chance to go through this.  Besides a crazy lack of patience waiting on 5-minute Google searches that often fail, I must have just overlooked this.  It's far better than their datasheet or marketing data.

I saw some things to explore, but it still didn't help define where the returned distance is coming from... average or closest.  There apparently are some settings that can return data based on the strongest or nearest target.  I can't quite tell if that applies over the whole field, because they mention the ability to detect multiple "targets" within a pixel.  It looks like empirical testing is the only way to tell what a pixel returns.
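From my reading of UM2884, the knob they describe looks like the ULD driver's "target order" setting. A minimal sketch, assuming the function and macro names are as the manual lists them (I haven't compiled against the driver yet):

// Sketch: ask the ST ULD driver to report the CLOSEST target in each zone
// rather than the STRONGEST (the default). Names taken from UM2884 - treat
// them as assumptions until checked against the actual driver sources.
#include <stdint.h>
#include "vl53l5cx_api.h"

VL53L5CX_Configuration Dev;   // platform/I2C fields must be filled in first

void configureTargetOrder() {
  vl53l5cx_init(&Dev);
  vl53l5cx_set_target_order(&Dev, VL53L5CX_TARGET_ORDER_CLOSEST);
  vl53l5cx_start_ranging(&Dev);
}

That still only changes which target a zone reports; whether the number is an average over the zone or the nearest surface within it probably does come down to experiment.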

Thanks for posting this.



   
(@davee)
Member
Joined: 3 years ago
Posts: 1684
 

Hi @inq

  With your current state of comms, I am surprised you can find anything...I hope it improves soon!

I didn't read any of it carefully, but clearly there is some smoke and mirrors that need to be deciphered ... Clearly a pixel can only return a single number at a time, but it seems there is a way of adjusting what distance range that same pixel reports in its next returned number. I suspect the obvious max, min and average are a little too simplistic.

For an analogy (not the physical reality), I am imagining viewing through a wide-aperture camera lens, so that only a small range of distances is in focus at any one time. (Typically used for portrait pics, where the background is in 'soft focus', but in this case much softer focus, so the background is a blur.) Then, if there are objects at different distances in the view, only one object can be in focus at a time, but adjusting the focus will select a different object. So maybe it is necessary to view the same scene several times, each time with a different "distance/focus" setting, to build up a view. That is my 'simple and instant' interpretation of the diagram I uploaded ... further research needed to discover the truth.

Remember that one pixel covers a wide area (compared to a normal digital camera pixel), so there can be several objects within that area reflecting photons back into the same pixel. However, if they are different distances from the detector, the photons will arrive at different times.

If the chip had a means of counting photons into different buckets, with the choice of bucket depending on time of arrival, then that one pixel would be able to report several numbers, i.e. it would be time slicing to show objects at different distance intervals. But if it has only one bucket, then it may be able to discard photons which are too early or too late, where the acceptable time window is configurable.
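Purely to illustrate the bucket idea, a toy model (nothing to do with the chip's real internals):

// Toy model of time-sliced photon counting: arrival times are binned into
// distance "buckets", so one pixel could in principle report more than one
// object. Entirely illustrative - not how the VL53L5CX actually works.
#include <cstdio>
#include <vector>

int main() {
  const double c_mm_per_ns    = 299.792458;  // speed of light in mm per ns
  const double bucketDepth_mm = 250.0;       // each bucket spans 250 mm
  // Fake photon arrival times: a cluster from ~1 m and a cluster from ~2 m.
  std::vector<double> arrivals_ns = { 6.7, 6.8, 6.7, 13.4, 13.3 };
  int counts[16] = { 0 };

  for (double t : arrivals_ns) {
    double dist_mm = t * c_mm_per_ns / 2.0;  // round trip, so halve it
    int bucket = static_cast<int>(dist_mm / bucketDepth_mm);
    if (bucket >= 0 && bucket < 16) counts[bucket]++;
  }
  for (int b = 0; b < 16; b++)
    if (counts[b] > 0)
      std::printf("bucket %2d (~%4.0f mm): %d photons\n",
                  b, (b + 0.5) * bucketDepth_mm, counts[b]);
  return 0;
}

With only one bucket and a configurable window, the same arithmetic just becomes an accept/reject test on each arrival time.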

As I say, the above is nearly all speculation .. and maybe 100% wrong. I am just making a suggestion as to what to look for.

I have a feeling you might have to plough through some of ST's software, alongside some experiments. Whether, in practice, this means buying some ST dev board or similar, I don't know. I have the impression they try to make them 'affordable', but maybe not to the extent of the likes of the ESP8266, etc. It's an interesting technology, which I would expect to get more effective with time, but I am not sure how useful it is to you now. I would hope it could be a smarter alternative to, say, the ultrasound transducers looking for an object immediately in front. It may be a case of finding out some principles now, but making more use of them a device generation or two further on.

Just looking at the device sheet showed that it was far from trivial to manufacture. They must be expecting a big market to pay for the investment needed.

I'll continue to watch your discoveries .. good luck.

Best wishes, Dave


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
 

@inq 

ToF is an interesting and developing technology which you can learn about with a "ToF camera" Google search.
Unlike the VL53L5CX sensor, ToF cameras can snap a whole 3D scene in high resolution.
There are two types: one uses an IR flash and the other projects IR laser points. I suspect the VL53L5CX sensor uses the flash of IR light?


[image: TOF cameras]

Untangling the workings of the VL53L5CX sensor is certainly a major project 🙂

 


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
 

@inq

I tried to upload it a dozen times yesterday / last night.  Finally, now, here is the raw data.

I wasn't sure what to make of the data in terms of what it was looking at.

When I ran the frame data it just looked like random noise, so I assume it was looking straight at a wall?

These were the average values from all the frames.


[image: averageValues]
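The averaging itself was nothing fancier than this sort of thing (a sketch only; how the raw log maps onto 64-element frames is an assumption about the file format):

// Sketch: per-zone average over a run of 8x8 frames.
#include <array>
#include <cstdint>
#include <vector>

std::array<float, 64> averageFrames(const std::vector<std::array<int16_t, 64>>& frames) {
  std::array<float, 64> avg{};                 // zero-initialised accumulators
  if (frames.empty()) return avg;
  for (const auto& frame : frames)
    for (int i = 0; i < 64; i++) avg[i] += frame[i];
  for (int i = 0; i < 64; i++) avg[i] /= frames.size();
  return avg;
}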

 


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @robotbuilder

When I ran the frame data it just looked like random noise, so I assume it was looking straight at a wall?

These were the average values from all the frames.

Yes, that is the raw data of the first test described in: https://forum.dronebotworkshop.com/sensors-modules/time-of-flight-tof-vl53l5cx-8x8-pixel-sensor/#post-32270.

The "noise" was expected.  The part that I'm really struggling with is the lack of convex distance.  The sensor looking at a flat wall exactly 1000 mm away, will be 1000 mm at the center pixels (44, 45, 54,55) but the corner pixels should be further away.

Using the datasheet that it has a FOV of 45° horz/vert (63.6° diagonal), the angle to the centerline of the corner pixel should be 27.8°.

Therefore, the distance should be = 1000 / cos(27.8°) = 1131 mm.

The raw data is only showing 1026, 1022, 1011, 1009 at the four corners.  (I must not have gotten it totally plumb).  Average them would be equivalent to plumbing = 1017.  That doesn't even seem close to what it should be at the 1131 predicted!
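For reference, this is the sort of calculation I'm comparing against: the expected flat-wall reading for every zone, assuming a 45° per-axis FOV split evenly into 8 zones (my assumption from the datasheet, so the numbers are only as good as that):

// Sketch: expected distance per zone for a flat wall 1000 mm away, assuming a
// 45 deg per-axis FOV split evenly into 8 zones (the FOV value is an assumption).
#include <cmath>
#include <cstdio>

int main() {
  const double kPi     = 3.14159265358979;
  const double wall_mm = 1000.0;
  const double zoneDeg = 45.0 / 8.0;                     // 5.625 deg per zone
  for (int row = 0; row < 8; row++) {
    for (int col = 0; col < 8; col++) {
      double az = (col - 3.5) * zoneDeg * kPi / 180.0;   // zone-centre angles
      double el = (row - 3.5) * zoneDeg * kPi / 180.0;
      // Distance along the zone-centre ray to a wall perpendicular to boresight.
      double expected = wall_mm / (std::cos(az) * std::cos(el));
      std::printf("%6.0f ", expected);
    }
    std::printf("\n");
  }
  return 0;
}

That puts the corner zones at roughly 1128 mm, so the measured ~1020 mm really is a long way off the simple geometric prediction.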

BTW - I tried pointing it at my face (radiation burns are healing nicely).  I could move it in from arm's length and see the central blob (best photo of me yet), and even see my arm in the corner, and I moved it nearer till it had my face "full screen".  I couldn't see any meaningful distance differences between the eye sockets and nose distances.



   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2042
 

@inq 

I couldn't see any meaningful distance differences between the eye sockets and nose distances.

Looking at the numbers or the color codes?

Posted before,

[image: tofHand]

 


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

@robotbuilder - I tried many different schemes of coloring based on distance.  I did not look at the actual mm distances.  I will look at that, but seeing the results qualitatively so far, I'm just not holding out much hope of distinguishing a face from a round ball.  And face recognition... is hopeless.

Maybe when they get that 0.5 Mega-pixel version down to where I can afford/justify it, the data may get as good as that hand-waving video you showed.  But... then the problem will be trying to process all that data on an MPU.

Fortunately, once I get the off-center angle/distance problem figured out, this 8x8 should be great for room mapping.



   
(@davee)
Member
Joined: 3 years ago
Posts: 1684
 

Hi @robotbuilder & @inq

re: When I ran the frame data it just looked like random noise, so I assume it was looking straight at a wall?

I think your data is (probably) consistent with the example I pasted previously (and again below) from ST's app note. I think all of the pixels are detecting photons from the area of the wall closest to the sensor, whose distance will be about 1000 mm, and this is 'blinding' the sensor from reporting photons that must take a longer path, nearer the perimeter of the field of view.

Looking at the ST data ...

[image]

 

Here they had one smaller object in front of a wall. It should only 'appear' in the central 4 x 3 rectangle of pixels, but ...

When the 'sharpener' function was set to 0%, which I assume means 'turned off' (the left-hand case of the row of 4 data sets), ALL of the pixels reported the object close to the sensor, even though only a 4 x 3 rectangle of pixels SHOULD have reported it.

It is only when the 'sharpener' function is enabled that the pixels outside of the central rectangle were able to show the distance to the wall behind.

I interpret this to mean that because the focussing system is 'fuzzy', all of the pixels will get some photons from the object in front, and hence all will report the short timing/distance. It is a bit like being in a car, looking at the windscreen, with a low-angle sun shining straight in ... you can't see anything beyond the windscreen.

It is only when the 'sharpener' function is enabled to at least 20%, that the 'stray' light from the nearby object is rejected from some of the pixels, and it is then possible to see the wall behind.

Thus the 'ideal' setting is the 20%, where both the object and wall are reported in the 'correct' pixels.

I note that the data is obviously idealised, but there doesn't seem to be any recognition that the pixels nearer the periphery of the view might be expected to report greater distances. I am not sure if this is an 'oversimplified' data set, or a reality of the observed data caused by the poor focus.

--------

Furthermore, look at the effect of turning the sharpener up beyond 20%. This appears to progressively reduce the 'sensitivity' of the pixels to any reflection from the wall, so that it becomes 'invisible'.

--------

Sorry, I haven't figured out precisely what the 'sharpener' function is, but it seems a crucial part of getting any sort of meaningful image.
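If it helps the experiments, the setting itself looks like a one-liner in the ULD. vl53l5cx_set_sharpener_percent() is the name given in UM2884, though I haven't tried it myself, so treat the sketch as unverified:

// Sketch: set the sharpener before starting ranging, then compare frames at
// different percentages (0 = off; ST's example data suggests ~20 as a sweet spot).
// Function name as listed in UM2884 - check the driver sources before relying on it.
#include <stdint.h>
#include "vl53l5cx_api.h"

void startWithSharpener(VL53L5CX_Configuration* dev, uint8_t percent) {
  vl53l5cx_set_sharpener_percent(dev, percent);
  vl53l5cx_start_ranging(dev);
}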

Best wishes, Dave


   