
Time of Flight (ToF) VL53L5CX - 8x8 pixel sensor

Will
(@will)
Member
Joined: 3 years ago
Posts: 2535
 
Posted by: @inq
 

[snipt]

@will is gone already.

No, I'm not gone, I'm enjoying your journey too much to leave. Mind you, I have no short- nor even medium-term interest in producing an ambulatory robot, but I like the way you're progressing.

Unlike many of the forum members who start with two completely independent projects and come here to have somebody else Frankenstein them together, your approach of gradually adapting each iteration to improve your 'bots satisfies me because I feel it's the right way to approach the problem. It's how I would proceed if I were building a robot.

The only thing I decided to "leave" was the 8x8 sensor 🙂 Well, since I also have little math or tech knowledge in this area, I left off shooting off my mouth in this thread as well.

So, by all means, keep on keepin' on 🙂

 

PS - a few months ago someone on (or near) the forum mentioned using the room's overhead lights as a navigational aid. I've also wondered about RF transmitters (different frequencies) located in different areas to help quickly identify which room the little critter is in.

Anything seems possible when you don't know what you're talking about.


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 

@inq

This is a hobby and I'm doing it for fun... There is no end-game for a product!

I thought the end game was its usefulness in creating a 3d map, and being able to use it along with the current sensor values to decide which way to go next in real time?

I have given it a lot of thought but have put it in the too-hard basket.  Even using 2d lidar data is difficult, as you'll find if you read about the SLAM solutions involved. Estimating where the robot is, using encoders or some other system, is also required.

However I will not labour the point as I see no end game in that 🙂

I do have an end game myself, a working robot, which may never actually be achieved beyond some simple behaviours.

What is the robot's goal/s?

They have never wavered - https://forum.dronebotworkshop.com/user-robot-projects/inqling-junior-robot-mapping-vision-autonomy/#post-30744

"The major goals for Inqling Jr. will be to study a form of "vision" with mapping, memory and navigating in a learned environment. The secondary goal will be using another two-wheel architecture."

That is your goal; I meant the robot's goal/s. An autonomous robot moves toward some goal state from its current state. The goal state is compared with the current state to generate or select an action. It is something to consider when deciding what the robot really needs to know, as opposed to just collecting data. A 3d point cloud may be pointless (excuse the pun) with regard to achieving some goal state.
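
Just to make the compare-and-act idea concrete, a minimal sketch (all the names and the state representation here are hypothetical, nothing from Inqling's actual code):

```cpp
#include <cmath>

// Illustrative only: a pose estimate and a goal pose (field names are hypothetical).
struct State { float x_m; float y_m; float heading_rad; };

enum class Action { TurnLeft, TurnRight, Forward, Stop };

// Pick the next action by comparing the current state against the goal state.
Action selectAction(const State& current, const State& goal) {
  const float dx = goal.x_m - current.x_m;
  const float dy = goal.y_m - current.y_m;
  if (std::hypot(dx, dy) < 0.05f)            // within 5 cm of the goal
    return Action::Stop;

  // Angle from where we are to where we want to be, relative to our heading.
  float err = std::atan2(dy, dx) - current.heading_rad;
  err = std::remainder(err, 6.2831853f);     // wrap into [-pi, +pi]

  if (err >  0.2f) return Action::TurnLeft;
  if (err < -0.2f) return Action::TurnRight;
  return Action::Forward;
}
```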

The thing about bumpers (touch sensors) is their use as a fail-safe system if all else fails.

I use my own bump and touch sensors when walking about the house at night, if the lights go out or so as not to wake my spouse by turning the light on 🙂

 


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @robotbuilder

That is your goal, I meant the robot's goal/s.

Inq - "Ling, fetch me a beer."

Inqling, the 10th - "Get your own @^A* beer, I've got my own plans for world dominance!"

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 7008
 

@inq And that is why the power off switch has to be mechanical and located where 'they' can't get at it!

First computer 1959. Retired from my own computer company 2004.
Hardware - Expert in 1401 and 360, fairly knowledgeable in PC plus numerous MPU's and MCU's
Major Languages - Machine language, 360 Macro Assembler, Intel Assembler, PL/I and PL1, Pascal, Basic, C plus numerous job control and scripting languages.
Sure you can learn to be a programmer, it will take the same amount of time for me to learn to be a Doctor.


   
robotBuilder
(@robotbuilder)
Member
Joined: 5 years ago
Posts: 2043
 

Posted by: @inq
Posted by: @robotbuilder

That is your goal, I meant the robot's goal/s.

Inq - "Ling, fetch me a beer."

Inqling, the 10th - "Get your own @^A* beer, I've got my own plans for world dominance!"

Posted by: @zander

@inq And that is why the power off switch has to be mechanical and located where 'they' can't get at it!

Ok. No serious AI discussion going on here. I will move on, although I still have an academic interest in what you can learn about your VL53L5CX sensor.

 


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

More Boring Crap

Here's more data.  I don't know if this thing is wearing me out or I'm wearing it out.

In this series of images, I've made a few changes to the GUI front-end Javascript (a sketch of the filtering logic follows this list). 

  1. At 15 Hz it's really impossible to make out any real data.  I've made the GUI average every value coming in, and eventually... usually after around 90 seconds worth of data... the values settle on an average and don't change anymore.
  2. I've inspected the other values available in the data and it has a "target_status" value.  There are several codes for it.  One indicates that the reading is 100% reliable - Come on! Really, 100% reliable?  Anyway, I only average a value if it is flagged 100% reliable.  There is also a code that indicates 50% reliable.  I don't average those in, but I do pop up a different color on the cell so we can note when something is amiss.  If there is an error condition, I paint the cell black and don't average it in.
  3. I've also added some counters below showing the number of samples, and the number of 50% and Err'd values.
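
For anyone curious, here's roughly what that filtering looks like if it were done on the sensor side instead of in the Javascript, using the SparkFun library method names as I understand them and the target_status codes as I read them in ST's ULD user manual (UM2884, if I have the number right: 5 = the "100% valid" code, 6 and 9 = the ~50% codes). Treat the exact names and codes as assumptions and check the library header / manual:

```cpp
#include <Wire.h>
#include <SparkFun_VL53L5CX_Library.h>   // SparkFun's wrapper around ST's ULD driver

SparkFun_VL53L5CX tof;
VL53L5CX_ResultsData frame;              // distance_mm[] and target_status[] per zone

float    sum[64]   = {0};                // running sums for the 8x8 zones
uint32_t nGood[64] = {0};                // samples averaged per zone
uint32_t nHalf = 0, nErr = 0;            // counters shown below the grid

void setup() {
  Wire.begin();
  tof.begin();
  tof.setResolution(8 * 8);              // 8x8 mode
  tof.setRangingFrequency(15);           // 15 Hz, as in the tests above
  tof.startRanging();
}

void loop() {
  if (!tof.isDataReady() || !tof.getRangingData(&frame)) return;

  for (int i = 0; i < 64; i++) {
    uint8_t st = frame.target_status[i];
    if (st == 5) {                       // "100% valid" -> include in the average
      sum[i] += frame.distance_mm[i];
      nGood[i]++;
    } else if (st == 6 || st == 9) {     // "~50% valid" -> flag the cell, don't average
      nHalf++;
    } else {                             // anything else -> paint the cell black
      nErr++;
    }
  }
  // averaged value for zone i would be sum[i] / nGood[i] (when nGood[i] > 0)
}
```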

@davee noted a Sharpness value early on.  There is also an Integration Time value.  Interesting that ST (the company that makes the chip) has one set of defaults for these values, yet SparkFun (the company that put the chip on a break-out board and wrapped a library around it for us to use on Arduinos) decided different defaults were necessary.  I've played with a few others outside those.  Here is the data for a 1 meter nominal distance.  
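
For reference, here's roughly how those two knobs get set through the SparkFun library (setter names as I recall them; they wrap ST's vl53l5cx_set_integration_time_ms() and vl53l5cx_set_sharpener_percent() ULD calls; double-check the header if your version differs):

```cpp
#include <SparkFun_VL53L5CX_Library.h>

// Hedged sketch: method names as I recall them from the SparkFun library header.
bool applySettings(SparkFun_VL53L5CX &tof, uint32_t integration_ms, uint8_t sharpener_pct) {
  bool ok = tof.setIntegrationTime(integration_ms);    // ST default 5 ms, SparkFun example 10 ms
  ok = tof.setSharpenerPercent(sharpener_pct) && ok;   // ST default 5 %, SparkFun example 14 %
  return ok;
}

// e.g. applySettings(tof, 16, 0);  // the "longest integration that still gives 15 Hz" case below
```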

ST Defaults - Integration = 5ms, Sharpness = 5%

d1i5s5 ST defaults

Integration=5ms, Sharpness=20%

d1i5s20

SparkFun Defaults - Integration=10ms, Sharpness=14%

d1i10s14 SF defaults

Integration=16ms, Sharpness=0% -Longest integration time that can still get 15 Hz.

d1i16s0

Integration=16ms, Sharpness=20% - This should have given the best distances across the 8x8.

d1i16s20

Integration=16ms, Sharpness=40%

d1i16s40

Integration=16ms, Sharpness=60%

d1i16s60

Integration=24ms, Sharpness=0% - Longest integration time that can still get 10 Hz

d1i24s0

Integration=100ms, Sharpness=0%

d1i100s0

To summarize what I'm seeing: in none of these cases did the distances match the proper theoretical distances based on the 45 degree FoV.  This is extremely confusing, especially considering that when I did the previous oblique test, I was getting pretty good correlation to a 45 degree FoV.

More to come... 2m, 3m and 4m.

VBR

Inq

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
(@davee)
Member
Joined: 3 years ago
Posts: 1694
 

Hi @inq,

   I don't think your last post has a fitting title ... I think you are finding and publishing useful information about a device which appears to be severely lacking in fundamental descriptive documentation.

From my previous quick glances, I think this can be compounded by different devices in the marketplace that appear to be similar but may work on somewhat different principles, and consequently have different properties and foibles.

This can mean a lot of research is required to find out what would have been 'obvious' with some more explanation beforehand.

I seem to recall the device has an inherent distance measurement error margin of a few percent ... a large proportion of your values are only about 1% lower than the nominal 1m. Furthermore, there is a tendency for values to increase slightly towards the corners, where you would expect the room geometry to show longer distances, although this effect is less than simple trigonometry would predict.

I have a suspicion that part of the problem may be that the operating mechanism and basic physics favour photons returned from a shorter distance. Compounded with a limited ability of the optics to 'focus' clearly, this may be resulting in most of your pixels reporting reflections from the closest part of the wall ... i.e. about 1m (with a 1% error).

Of course, this is only my suspicion and may be completely wrong.

Perhaps some variation on the 'experiment' in the app note, with a nearby target occupying part of the visible field, in front of a wall etc, which is also comfortably within the device range, might reveal some more information.

Overall, I am concerned you will be disappointed to find that this device is only capable of giving a 'fuzzy' result instead of the 'crisp' data you would like ... perhaps an order of magnitude 'clearer' than the simple ultrasound, but still one or more magnitudes more 'fuzzy' than you would like.

e.g. It might be able to detect the door of the beer fridge ... but finding the handle of the door is going to need another approach! 😉 😉 

Best wishes, Dave


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @davee

I don't think your last post has a fitting title ... I think you are finding and publishing useful information about a device which appears to be severely lacking in fundamental descriptive documentation.

I simply question (myself) before posting a grundle of data, "Will anyone even glance at it, much less study it?"  I often think that really a one-sentence summary would be sufficient.  Then I realize there might be one person in a thousand like myself who, when reading a post, says, "You can't make that statement without substantiating it!"  I also want to show that I'm not assuming anything about theory, sensor settings or testing configuration, and to be thorough so we know what the limitations of this sensor are.

I'm also not so sure (being on page 12 for this one sensor) that anyone will trudge through this and find some value.  Bless their heart if they make it through and find the one tidbit of use to them.

Posted by: @davee

Overall, I am concerned you will be disappointed to find that this device is only capable of giving a 'fuzzy' result instead of the 'crisp' data you would like ... perhaps an order of magnitude 'clearer' than the simple ultrasound, but still one or more magnitudes more 'fuzzy' than you would like.

I think my expectations for this sensor are in line with its capabilities.  I'm not looking for it to be imaging in the sense of distinguishing between a toy ball and a human fist of the same size.  I don't need or expect that kind of distance resolution.  I think the coarse 8x8 spatial resolution of this thing is perfect for room mapping using an Arduino-level MPU.  Most of the spinning lidar's data (say 95%) is simply not necessary and must be inspected and discarded.  Also, it only covers one line around the room.  I'm not trying to DIS the lidar, it's just overkill and makes things harder for this problem than is necessary (IMO).  This sensor gets me up the wall somewhat and at least can tell me if the bot can fit under some obstruction.  The lidar may be looking under a couch, seeing the back wall, and the bot may knock its head off as it proceeds.

The scenario I imagine I'll be using... say traveling down the hall...

  • The bot will travel parallel to the wall.
  • I'll have the head turned 15 degrees toward the wall.  This way, it can see ahead for obstructions and drop-offs while scanning the wall for the point-cloud mapping of the room.  It does double duty! 
  • If it travels at ~4 kph, it'll be traveling along the wall getting a new set of 64 points every 7 cm (quick arithmetic check below the image).  This is plenty of overlap so that I can ensure continuity of the wall, yet fine enough that I don't miss a hole big enough for the bot to drive into.
  • The bot can be programmed to either enter the doorway (say, using some "always turn right" algorithm) or stay in the same hallway and track and map all the doors.  
image
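
Quick arithmetic check on that 7 cm figure (nothing sensor-specific, just speed divided by frame rate):

```cpp
// How far the bot moves between successive 8x8 frames.
float frameSpacing_cm(float speed_kph, float frame_rate_hz) {
  const float speed_m_per_s = speed_kph * (1000.0f / 3600.0f);  // 4 kph ~= 1.11 m/s
  return 100.0f * speed_m_per_s / frame_rate_hz;                // ~7.4 cm per frame at 15 Hz
}
```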

So far in my testing of this sensor, I'm ok with its resolution - horizontal, vertical and depth.  What disturbs me about the above test is... it is not anywhere near theory.  Instead of showing a bunch of trig, I'll show a drawing.

Perspective

From the drawing, the ratio between the corner pixel and the center pixel is 1.1214.  Using that on any of the center pixels in the above data, say... using the last set: the closest center pixel is 992 mm, so the corner pixel should be 992*1.1214 = 1112 mm away!  The sensor's measured distance to the furthest corner pixel is only 1010 mm.  These numbers are not even close to where they should be.

YET... when doing the oblique test to find difference in angles between pixels way back in this thread, I got reasonably close results.

I'm just struggling... this series of tests is set up to be as close to the optimum their documents describe as I can get... (1) square-on to the wall, (2) white wall, (3) dim ambient light.  Yet, depending on how you measure error... in this case the difference in length should be 1112 - 992 = 120 mm.  The measured difference in length is only 1010 - 992 = 18 mm.  IOW, it is only 15% of what it should be!  

I'm missing something and it bothers me.

VBR,

Inq

 

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Will
(@will)
Member
Joined: 3 years ago
Posts: 2535
 
Posted by: @inq

I simply question (myself) before posting a grundle of data, "Will anyone even glance at it, much less study it?"  I often think that really a one sentence summary would be sufficient.

Yes! By all means post your data and your conclusions. It's not like everyone is forced to read every post of every thread, so anyone not interested is not impacted at all, but the information is gold to anyone who IS interested.

 

Besides, all your posts are using recycled photos anyway, so there's no waste involved 🙂

Anything seems possible when you don't know what you're talking about.


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

With me AND you all not finding fault with my expectations, I've joined the ST forum and will haunt them till I get an answer that meets the litmus test.  As a first-time poster, I didn't have link or picture privileges, so I couldn't really DUMP on them like I wanted.  😉 

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Will
(@will)
Member
Joined: 3 years ago
Posts: 2535
 
Posted by: @inq

So far in my testing of this sensor, I'm ok with its resolution - horizontal, vertical and depth.  What disturbs me about the above test is... it is not anywhere near theory.  Instead of showing a bunch of trig, I'll show a drawing.

Perspective

From the drawing, the ratio between the corner pixel and the center pixel is 1.1214.  Using that on any of the center pixels in the above data, say... using the last set: the closest center pixel is 992 mm, so the corner pixel should be 992*1.1214 = 1112 mm away!  The sensor's measured distance to the furthest corner pixel is only 1010 mm.  These numbers are not even close to where they should be.

Can you please explain how you arrived at the ratio between corner points being 1.1214? I find that

1.1592/1.1214 = 1.0335                          and

1.1214/1.0881 = 1.0306

So I'm obviously not following your line of thought.

Also, the top and bottom distances from the floor will be determined by the arctangent of the ratio of the distance to the target and the angle subtended by the floor and the target spot. The actual distance to the spot will then be a function of the squares of both the vertical and horizontal components. Are you thinking that a common ratio exists between the corner and centre points of the target rectangle?

Anything seems possible when you don't know what you're talking about.


   
(@davee)
Member
Joined: 3 years ago
Posts: 1694
 

Hi @inq,

  I hope you took my opening statement as a compliment & thanks ... it was shorthand for saying that I appreciate the effort and rigour you are applying and describing. For as far back as I can remember, I have always wanted to understand how 'things' work and to appreciate the consequences ... and for me, this is a classic example of trying to figure out exactly what this sensor is 'observing' .. albeit alongside a raft of other things I should, must or would like to do, so presently I am mainly just looking at this device via your posts.

Hence, a few tables with lots of rather similar numbers hovering around 980 are clearly not what you were hoping to see, for your navigation task, but to me they are telling a small part of a story ... albeit one neither of us has fully understood.

You obviously have an immediate application, with some amazing machines, and I apologise if I have distracted you in any way.

------

I had already understood that simple geometry/trigonometry suggests you expect to see a much greater difference in distance between the central 'straight ahead' pixels and (say) the 'looking top left' diagonal pixels ... which is why I said "although this effect is less than simple trigonometry would predict."

It is indeed curious, and from your practical viewpoint frustrating, that values vary in the expected trend direction, but only manage about 15% of the difference expected.

-----

My suggestion, and it is pure speculation without a shred of solid evidence, is that the data you are seeing is a convolution of values, very roughly analogous to a fuzzy, out-of-focus picture.

My 'hypothesis' might be something like...

Apologies for the 'corruption' of your precise diagram, but I hope it helps my explanation ..

5536 Perspective   Copy

Instead of flashing a light adjacent to the sensor, imagine there are two flashing light sources, one in the centre of the red square, the other in the centre of the green square. The two light sources flash at the same time, and the light sensor is also provided with a synchronous trigger signal.

The light sources both produce the same wavelength infra red light, so the only way the sensor can distinguish which source an individual photon originated from, is by its pathway.

Thus, in principle, if the optical system works like a well focussed camera, photons from the "Red square" source will all land on the corresponding "Red Square" pixel of the sensor, and similarly with the "Green square" source and "Green square" pixel. Then, the sensor can compare the photon arrival times at each pixel with the timing trigger signal and report the appropriate distance as your diagram described.

.....

But now imagine the optical system is far from 'well focussed', but instead the lenses are greasy/smeary, out of focus, etc. As a result, a proportion, say 1 per cent, of the photons from the "Green square" source end up on the "Red Square" pixel. To the "Red square" pixel, they are identical to those from the "Green square" source, and will register as such.

Now, if this was a 'normal' camera, then the 1 percent of 'stray' light hitting the "Red square" pixel would only be a small interference compared to the much stronger signal from the "Red square" light source.

However, (maybe) this sensor is looking for the 'first' photon(s) to arrive (after the trigger pulse), in order to time them with respect to the trigger pulse ... and hence photons which arrive a nanosecond or so later than the first ones  will (may) be ignored, since the pixel can only return a single timing for each light flash.

If the suggestions in the last paragraph are true, then the photons from the "Green square" source are immediately at an advantage, since their path is shorter, even if only 1 per cent reach the "Red square" pixel, that might be enough to dominate the returned values, so that nearly all of the timing determinations come from photons originating from the "Green square" source.
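
If it helps to put numbers on that 'first photon wins' idea, here is a toy calculation, emphatically not a model of the real device, just my hypothesis taken literally: the near source 'wins' a flash whenever at least one of its photons strays onto the far pixel, because any stray necessarily arrives earlier.

```cpp
#include <cmath>
#include <cstdio>

// Toy version of the hypothesis above: the "Red square" pixel times the FIRST photon
// after the trigger. Photons straying from the nearer "Green square" source always
// arrive earlier, so the near source "wins" whenever at least one stray photon lands
// on the pixel during a flash.
int main() {
  const double stray_fraction = 0.01;   // 1 % of near-source photons leak onto the far pixel
  for (int photons_per_flash = 10; photons_per_flash <= 10000; photons_per_flash *= 10) {
    const double p_near_wins =
        1.0 - std::pow(1.0 - stray_fraction, photons_per_flash);
    std::printf("%6d photons/flash -> near source wins %5.1f%% of flashes\n",
                photons_per_flash, 100.0 * p_near_wins);
  }
  return 0;
}
```

With only 100 photons per flash that already works out to about 63% of flashes, which would be consistent with the corner pixels mostly reporting something close to the perpendicular distance, if (and it is a big if) the hypothesis holds.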

----

If the above section sounds plausible, then maybe we can extend it a little further ....

Add a third identical light source in the "Yellow square" and third pixel corresponding to "Yellow Square".

And slightly improve the clarity/focussing of the optical system, although assume it is still suffering from the same basic problem.

Now the "Green square" and "Yellow square" light sources are adjacent to each other, so that it might be expected that a higher proportion of photons from the "Green square" light source will end up on the "Yellow square" pixel, than on the "Red square" pixel.

Hence, it is more likely that some measurements on the "Red square" pixel will have originated from the "Red square" source, or at least from the squares immediately adjacent to the "Red square", which may account for the observed 'average' distance being greater than the perpendicular distance, but still much shorter than the actual "Red square" source to "Red square" pixel distance.

---------------------

I don't know if this seems plausible ... or if I have been confusing pixies at the bottom of the garden with pixels .. but it is the best I can imagine for now.

 

Best wishes and take care my friend, Dave


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @will

Can you please  explain how you arrived at the ratio between corner points being 1.1214 ?

Oh $#!+!  Did I screw up an 8th grade Math problem?!  I just used the CAD program and let it measure them.

Let's see, by Trig... 

  • I'm wanting the ratio from the furthest point to the closest point.
  • In a perfect world, the center should be closest (Perpendicular).  I set that to 1000 mm.  
  • I made the assumption that the gizmo averages the distance within the pixel FoV, so the average would be in the center of the pixel (note: going to the center of the red square for the corner pixel).
  • The datasheet says it has a 45° horizontal and vertical FoV... 63.6° diagonal FoV.
  • Each pixel has a diagonal FoV of 63.6 / 8 = 7.955°
  • Thus, the angle from the centerline of the sensor to the center of the corner pixel is:  3.5 * 7.955° = 27.84°
  • Distance to wall = 1000 / cos(27.84°) =  1130.9 mm
  • Ratio = 1130.9/1000 = 1.1309
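
Typed into code so anyone can re-run it (this just reproduces the arithmetic above, per-pixel-diagonal approximation and all):

```cpp
#include <cmath>
#include <cstdio>

int main() {
  const double kPi       = 3.14159265358979;
  const double diag_fov  = 45.0 * std::sqrt(2.0);  // 63.64 deg diagonal from a 45x45 deg FoV
  const double per_pixel = diag_fov / 8.0;         // 7.955 deg per pixel (the approximation above)
  const double angle_deg = 3.5 * per_pixel;        // 27.84 deg to the corner-pixel centre
  const double corner_mm = 1000.0 / std::cos(angle_deg * kPi / 180.0);
  std::printf("corner = %.1f mm, ratio = %.4f\n", corner_mm, corner_mm / 1000.0);
  // -> corner = 1130.9 mm, ratio = 1.1309
  return 0;
}
```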

Hmmm... the CAD gave me 1.1214.  Got to go see what's up with that.  I'll be back... one moment.

VBR,

Inq

 

EDIT:  Yeap!  I screwed up the CAD drawing.  I scaled at some point and it screwed up the Geometry.  The correct ratio should be = 1.1309.

 

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Will
(@will)
Member
Joined: 3 years ago
Posts: 2535
 
Posted by: @inq

Let's see, by Trig... 

  • I'm wanting the ratio from the furthest point to the closest point.

Why? That ratio is not constant.

For the following, assume that you're positioned 10 feet directly in front of the wall and your scanner is pointing 22.5 degrees to the right of that. Assume that everything is done in the horizontal plane so that we have only 1 dimension to consider ...

  • In a perfect world, the center should be closest (Perpendicular).  I set that to 1000 mm.  

The "centre would be closest" would only be true if you were taking measurements from a position mostly perpendicular to the wall.

Remember that your 45 degree scanning angle means +22.5 degrees to the right of centre and 22.5 degrees to the left of centre. So, in the position posed above, the left edge of your scan area is at 0 degrees (i.e. pointing directly at the wall) and the right edge of your scan area is at 45 degrees to that.

In this case, you can see that the distance to the left edge is 10 feet (your distance from the wall) and also note that your right edge is now 14.1 feet away while your centre is 10.8 feet away.

If you now move the scanner so that the centre is pointing at 45 degrees to the right, the left distance is now 10.8 feet (22.5 degrees), your middle distance is 14.1 feet (45 degrees) and your right edge is 26 feet away.

There is no constant ratio.
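
The numbers above are just d = D / cos(angle off the perpendicular); spelled out in code so other aims can be tried:

```cpp
#include <cmath>
#include <cstdio>

// Distance to the wall along a ray at 'angle' degrees off the perpendicular,
// when the perpendicular distance is D (10 feet in the example above).
double rayDistance(double perpendicular_ft, double angle_deg) {
  return perpendicular_ft / std::cos(angle_deg * 3.14159265358979 / 180.0);
}

int main() {
  // Scanner centre aimed 22.5 deg right of perpendicular: edges at 0 and 45 deg.
  std::printf("%.1f  %.1f  %.1f ft\n",
              rayDistance(10, 0.0), rayDistance(10, 22.5), rayDistance(10, 45.0));   // 10.0  10.8  14.1
  // Scanner centre aimed 45 deg right: edges at 22.5 and 67.5 deg.
  std::printf("%.1f  %.1f  %.1f ft\n",
              rayDistance(10, 22.5), rayDistance(10, 45.0), rayDistance(10, 67.5));  // 10.8  14.1  26.1
  return 0;
}
```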

Anything seems possible when you don't know what you're talking about.


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  
Posted by: @davee

Hi @inq,

  I hope you took my opening statement as a compliment & thanks ... it was shorthand for

Oh! I understood... You're a scholar and a gentleman.  You'd have to be pretty explicit... like, "You SOB!" to piss me off... and only if you didn't include a smiley emoji.  

Posted by: @davee

this is a classic example of trying to figure out exactly what this sensor is 'observing' .. albeit alongside a raft of other things I should, must or would like to do, so presently I am mainly just looking at this device via your posts.

I'm pretty good at distracting people from what they should be doing.  🤣

Posted by: @davee

You obviously have an immediate application, with some amazing machines, and I apologise if I have distracted you in any way.

No... if you are following the Inqling Jr thread at all, you'd know I drill down to China.  1% error is totally unacceptable, 0.1% is for people that build bridges.  I finally accepted defeat at 0.033% and 0.01% error.  This sensor as far as I can tell is showing about 600% error.  I can't use the words here that describe how unacceptable this is.  This is not a product, yet they sell it...so QED... I must be doing something wrong!  Any distraction you provide might just point to a solution.

Posted by: @davee

My suggestion, and it is pure speculation without a shred of solid evidence, is that data you are seeing is convolution of values very roughly analogous to a fuzzy, out of focus, picture.

My 'hypothesis' might be something like...

Ok... If I see your name on the email, I don't even open the post unless I have at least an hour available.  I know I have to study your posts.  I read this once.  I contemplated the wonders of the universe.  I then read it again.  Then I had a glass of wine and read it for a third time.  I've come to the conclusion that I'm deficient in Quantum mechanics and was distracted by thinking about Schrödinger's cat.  I'm now thinking more wine is necessary. 🤣 

Several things jump out at me about your theories.

  1. To the best of my measuring ability, the center point is DEAD-ON.
  2. If there is some Quantum explanation for this BAD observed behavior, how is this a viable product without compensating for that behavior?
  3. I'm still struggling with the measuring aspects.  It stated somewhere that the integration time is the time that the transmitter is on AND the receiver is ON.  How can it measure ToF?  I'm just not grasping how it can distinguish between a photon going out at the beginning and another going out some picoseconds later.
  4. Then there is the term Integration time... so it's summing up something to come up with a "better" answer.  It is failing miserably IMHO!

I will re-read your post again after... well... after!  🤣 

VBR,

Inq

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Page 12 / 17