
Time of Flight (ToF) VL53L5CX - 8x8 pixel sensor

Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 6964
 

@davee Sorry, I don't understand. Can you make a drawing?

First computer 1959. Retired from my own computer company 2004.
Hardware - Expert in the 1401 and 360, fairly knowledgeable in PCs plus numerous MPUs and MCUs.
Major languages - Machine language, 360 Macro Assembler, Intel Assembler, PL/I, Pascal, BASIC, C, plus numerous job-control and scripting languages.
Sure, you can learn to be a programmer; it will take the same amount of time it would take me to learn to be a doctor.


   
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 6964
 

@davee Sorry, I don't understand. Can you reword that in everyday language, leaving out terms like 'subtend' and 'parallel'? Angles have little to nothing to do with this. I can interpret 'parallel to the building' as standing beside it or in front of it. What is it you want to know? What 'angle of the building'? You said in front, perpendicular, in other words normal to it. Maybe draw a picture?



   
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 6964
 

@davee Let me re-answer. The focusing distance is the distance from the focal plane to the front of the building. The depth of focus is, or can be made, sufficient to encompass the entire building; that is the last example I gave you. Your scenario didn't require it, but a hyperfocal distance can be used that sits basically in the middle of the focus zone, so any action appearing in that range will be in focus. This is more often needed for long lenses shooting animals. For instance, taking that last example and using a 400mm lens decreases the in-focus area to 2.05 ft at 100 ft and f/2.8, while at f/11 it is 8.06 ft.

[Image: depth-of-field calculator screenshot]
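Those figures can be reproduced approximately with the standard thin-lens depth-of-field formulas. Here is a minimal sketch, assuming a circle of confusion of about 0.02 mm (roughly an APS-C sensor; the app behind the screenshot may assume a slightly different value, so expect small differences):

```python
# Thin-lens depth-of-field sketch. The 0.02 mm circle of confusion is an
# assumption; change coc_mm to suit the sensor size of a particular camera.

MM_PER_FT = 304.8

def depth_of_field(focal_mm, f_number, subject_ft, coc_mm=0.02):
    """Return the (near_ft, far_ft) limits of acceptable focus."""
    s = subject_ft * MM_PER_FT
    f = focal_mm
    hyper = f * f / (f_number * coc_mm) + f          # hyperfocal distance, mm
    near = s * (hyper - f) / (hyper + s - 2 * f)
    far = s * (hyper - f) / (hyper - s) if s < hyper else float("inf")
    return near / MM_PER_FT, far / MM_PER_FT

# 400 mm lens, subject at 100 ft, wide open versus stopped down:
for n in (2.8, 11):
    near, far = depth_of_field(400, n, 100)
    print(f"f/{n}: in focus from {near:.1f} ft to {far:.1f} ft "
          f"({far - near:.1f} ft deep)")
```

This prints a zone roughly 2 ft deep at f/2.8 and roughly 8 ft deep at f/11, in line with the 2.05 ft and 8.06 ft figures above.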

 



   
(@davee)
Member
Joined: 3 years ago
Posts: 1680
 

Hi Ron @zander,

  Sorry ... I didn't mean to make it complicated ... I just used words I learned in maths at school around the time I became a teenager, and they have 'stuck' ever since. (Even though it seems to be getting harder to remember what I did yesterday!)

Sadly, I have seen very few lessons on optics that made much sense at the time, so I am still asking some daft questions...

-----------

Whilst you have been adding extra posts, I have made a very crude view from above, looking down. The distances shown are obviously not to scale, and the building distance (shown as 150 ft) could be any one of the range of distances (say 60 ft to 290 ft) you discussed in your first reply.

[Image: crude top-down sketch of the camera, the building, and the lines of people]

For my initial question, which you kindly answered, there was just the camera (shown at the bottom) and the building (shown at the top). I have shown it just as a reference for how the camera is set up. With the exception of the focus control, I am assuming the camera is unchanged for my 2nd and 3rd questions.

-------------------------------------------

For my second question, a row of people (represented by 7 smiley faces) stood between the building and the camera, occupying about the same width of the camera's field of view. The distance from the camera to the person in the centre of the line was 15 feet.

Can the camera focus be adjusted so that all of the people in this "15 foot" line are in sharp focus at the same time?

----------------

For my third question, some of the people moved forward, forming a new line (represented by 3 smiley faces), with the distance from the camera to the person in the centre of the line being 5 feet.

Can the camera focus be adjusted so that all of the people in this "5 foot" line are in sharp focus at the same time?

------------------------------------------------------------------------

------------------------------------------------------------------------

I think your first answer to my first question was probably appropriate, and I assume accurate for the 50mm lens, which I also assume is a typical value for the 'general purpose' lens that would be fitted by default. That was indeed the sort of set up I had in mind.

--------------------------

Briefly broadening the discussion ... probably beyond my initial scope ... so please regard as just a small bonus point.

I noted your follow-on discussion of a 400mm lens, which I think will provide a highly magnified image (compared to, say, the 50mm default lens), and hence its angular field of view must be much smaller.

I assume your animal will typically be a considerable distance from the camera (maybe 100s of feet away)?

Are you saying that the depth of focus for such a lens is typically only 2 to 8 feet (with f/2.8 to f/11), even though the subject might be 200 feet away? That is, for a given focus setting, only objects 200 to 210 feet away might be in reasonable focus, and to see something 215 feet away in good focus, the focus setting would need to be changed?

Whilst with the 50mm lens, everything from say 60 feet to nearly 300 feet could be in focus at the same time?

----------------------------

Thanks and best wishes, Dave


   
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 6964
 

@davee Both the line of people at 15 ft and the one at 5 ft can all be in focus. The choice of focal length and focal ratio determines the start and end of the good-focus distance: the shorter the focal length, the deeper the focus zone, and the smaller the aperture (bigger f-number), the deeper the focus zone. I am attaching the output of one of my photo apps. I am also enclosing a pic for the 50mm in focus from 60 to 300 (it is actually 24' to infinity). An animal at 200 ft is actually in focus at f16 from 180 ft to 226 ft. I may have messed something up in the first measurement using the 400mm.

[Images: depth-of-field app screenshots for the 50mm and 400mm examples]
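For anyone wanting to sanity-check those screenshots, the same thin-lens formulas as in the earlier sketch reproduce the numbers reasonably well. The f/16 aperture for the 50mm example and the 0.02 mm circle of confusion are assumptions, since neither is stated above:

```python
# Cross-check of the figures above (thin-lens approximation; the 0.02 mm
# circle of confusion and the f/16 for the 50 mm case are assumed values).

MM_PER_FT = 304.8

def dof_limits(focal_mm, f_number, subject_ft, coc_mm=0.02):
    s, f = subject_ft * MM_PER_FT, focal_mm
    h = f * f / (f_number * coc_mm) + f              # hyperfocal distance, mm
    near = s * (h - f) / (h + s - 2 * f)
    far = s * (h - f) / (h - s) if s < h else float("inf")
    return near / MM_PER_FT, far / MM_PER_FT

# Animal at 200 ft with the 400 mm lens at f/16:
print(dof_limits(400, 16, 200))      # ~(178, 228) ft, close to 180-226 ft

# 50 mm at f/16: the hyperfocal distance H = 50*50/(16*0.02) mm is about
# 26 ft; with the lens focused at infinity, everything from ~H outward is
# acceptably sharp, consistent with the "24 ft to infinity" figure above.
```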

 



   
(@davee)
Member
Joined: 3 years ago
Posts: 1680
 

Hi @inq and Ron @zander

RE: ... don't people want to know how far things are away from the sensor... not some plane out in space???

It sounds like you have some experience where that is useful? To gaming... to cameras? To the price of beans in China... anything?? Is it some optimization/simplification like the ones that let Doom run on the PCs of the early '90s? Can you elaborate?

It's really bothering me that I don't understand the usefulness of that.

---   PART DEUX

Thanks to Ron, who has provided some useful information, I now have a hypothesis for the question above.

I emphasise that this is only a hypothesis, because I am extrapolating a small amount of data a very long way ... without confirming the extrapolation has any validity whatsoever.

It is also so simple, it seems almost banal to propose it...

-----------------------------------

Assume the 'big' customer(s) are using the device to determine the focus distance for a camera (either a standalone camera or one in a phone) ... i.e. if the sensor reports a target at 1.2 metres, then a magic code or voltage corresponding to focus at 1.2 metres is sent to the focusing motor system.

However, there may be several different targets in view, each at a different distance ... and the VL53L5CX can report several different distances, say one value for each sensor pixel ... so the camera software has to sort through a small pile of values and 'calculate' a best value.
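To make that sorting step concrete, here is a purely illustrative sketch. The centre-weighting and the use of a median are guesses at what camera firmware might plausibly do, not ST's documented algorithm:

```python
# Illustrative only: reduce an 8x8 grid of per-zone distances (mm) to a
# single focus distance. The centre-weighting and the median are assumptions,
# not ST's documented algorithm. A zero marks an invalid zone reading.

from statistics import median

def pick_focus_distance(zones_mm):
    """zones_mm: list of 64 distances, row-major 8x8 grid."""
    # Prefer the central 4x4 block, where the subject usually is.
    centre = [zones_mm[r * 8 + c] for r in range(2, 6) for c in range(2, 6)]
    valid = [d for d in centre if d > 0] or [d for d in zones_mm if d > 0]
    return median(valid) if valid else None

# Example: a background wall at ~3 m, a subject at ~1.2 m filling the centre.
grid = [3000] * 64
for r in range(2, 6):
    for c in range(2, 6):
        grid[r * 8 + c] = 1200
print(pick_focus_distance(grid))     # -> 1200
```

Note that because every zone reports the same kind of distance, the whole step is plain comparison and sorting, with no per-zone trigonometry.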

I am suggesting that, at least for 'consumer point and click' photo scenarios, the difference between the perpendicular distance and the 'hypotenuse' distance when the object is not directly in front of the camera is small, and less than the depth of focus. This implies that the target will still be in focus if the perpendicular distance is used instead of the 'more accurate' hypotenuse distance.
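Putting rough numbers on that suggestion (the off-axis angles below are illustrative; the true worst case depends on the sensor's actual field of view):

```python
# How far apart are the radial ('hypotenuse') and perpendicular distances for
# an off-axis target? The angles below are illustrative assumptions.

import math

perp_m = 1.2                                # perpendicular distance to target
for off_axis_deg in (10, 22, 30):
    radial_m = perp_m / math.cos(math.radians(off_axis_deg))
    print(f"{off_axis_deg:2d} deg off axis: radial {radial_m:.3f} m, "
          f"error {100 * (radial_m / perp_m - 1):.1f}%")
```

This gives errors of about 1.5%, 7.9% and 15.5% respectively: centimetres at 1.2 m, which a short-focal-length phone lens, with its deep depth of field at that distance, would typically swallow without visible defocus.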

(In practice, the camera manufacturer could easily make the focus-control distance a small percentage longer than the perpendicular, so that the focusing error was more 'balanced': a little bit long when the target is immediately in front, a little bit short when it is nearer the periphery of the image, rather than zero error immediately in front and short by a larger amount near the periphery.)

But why would the manufacturer want to build in a 'deliberate' error? Because it means all of the distances are directly comparable ... no need for any trigonometry or weighted comparisons ... just pick a value and focus to it.

Also, if it is trying to offer the user the choice of focusing on one of two or more objects, then making all the distances perpendicular means they will be easier to group together, and hence to 'discover' the alternatives. Whilst photos of an object can obviously be taken at any angle, I have a feeling most point-and-shoot photos aim for roughly perpendicular, and I speculate the software makes use of this.

----------

Of course I could be completely wrong ... but at least it's a suggestion to ponder.

----------

As I said ... it is so simple it almost seems banal to propose it ... but in the commercial world, 'good enough' means more profit and fewer bugs, by not gilding the lily.

Best wishes both. Dave


   
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 6964
 

@davee I am not really sure what you are saying, but I keep getting the vibe that you think the focus distance is different at an angle (not sure I worded that right). What you are not taking into account is that there is a lens between the subject matter and the focal plane. Its job is to change the path of the light rays entering the lens so they can be recorded on the film or sensor. Depending on the quality of the lens, this process is not perfect, which is why my photo-processing software corrects for all kinds of lens errors. The software has a specific profile for each of my lenses. It even brightens the corners where light falloff happens, and I think it even takes into account a zoom lens with a hood casting shadows in the corners. It does know how far the lens is zoomed, after all. All I know is it's a one-button fix-all. I can't really follow your discussions, but my hunch is you are chasing smoke.

First computer 1959. Retired from my own computer company 2004.
Hardware - Expert in 1401, and 360, fairly knowledge in PC plus numerous MPU's and MCU's
Major Languages - Machine language, 360 Macro Assembler, Intel Assembler, PL/I and PL1, Pascal, Basic, C plus numerous job control and scripting languages.
Sure you can learn to be a programmer, it will take the same amount of time for me to learn to be a Doctor.


   
Inq
(@inq)
Member
Joined: 2 years ago
Posts: 1900
Topic starter  

@davee - I believe your main train of thought is attempting to answer my posed question of why they would doctor the data. I appreciate that, and I certainly don't want to squelch your enthusiasm for stretching our minds. Exercising our minds is a great thing as we age.

The last time I did professional photography was when Pan-X, Plus-X and Tri-X had meaning, so @zander has the more up-to-date info on the Nikons and Canons of today. I was an avid follower of the Zone System by Ansel Adams. But I digress... Coming back around...

  • Most cell-phone cameras are not using ToF sensors; for focusing they typically use dual-pixel phase-detection autofocus. Doing an Internet search, I was surprised at just how few do have this ToF sensor.
  • In the old days we had to balance focal length, aperture, shutter speed and film speed against the scene's depth of field, the movement, and how much depth of the scene we wanted sharp. Modern gear is far less sensitive: my cell camera can take hand-held pictures that I would not even have wasted my time trying on film, knowing they wouldn't come out.
  • In my quick survey of the phones that do have a ToF sensor, the term 'bokeh' keeps coming up. From my two-second search, I see this is the practice of intentionally, digitally blurring parts of the image for creative effect... I guess because modern digital cameras are way too sharp compared to the human eye.

Point being... I don't think returning the trigonometric "adjacent" side rather than the "hypotenuse" is for camera-focusing reasons... yet. My gut feeling is the reason @robotbuilder alluded to: I think it is for game play and augmented reality (Pokemon Go).

  • Even though the wall in the corners is further away, our brain "makes" the wall flat, since we "know" it is.
  • I'm betting that in the gaming world, where frame rate is tied to how complex the scene is, approximating the wall as flat saves tremendous CPU time versus rendering it with the actual curvature it should have.
  • Besides, showing curvature of the wall would also confuse our brain, which would see the 2D representation of the wall as curved when we know it isn't.

That's just my gut feeling so far.  

Aside - I only use Google Nexus, now Pixel, phones, and only one of them (the Pixel 4 Pro) had a ToF sensor. They were using it on the front merely to do things like hand gestures in the air without touching the phone. The party trick soon wore thin once people realized the CPU processing power needed to keep it active while the phone sat on the table made for dismal battery life. Why some Google brainiac didn't realize that ahead of time totally escapes me! Maybe they need more fools to point out the obvious things to them.

VBR,

Inq

 

3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266...
Even usable on ESP-01S - Quickest Start Guide


   
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 6964
 

@inq Don't forget, the most likely reason hand-held shots are possible today, whether with a very capable phone camera or a full DSLR, is anti-shake technology. It is done in many ways, but especially with a phone, after the computer finishes processing the image, what you see can be quite a bit different from what the camera initially saw. With my DSLR I am in charge of MOST of the post-processing and can accomplish a lot with a few clicks, but the difference is I can see the original; with the phone camera I don't think that ability exists, as it's aimed at a different audience. I do know my next camera will give me raw images so I can do post-processing, but how raw? In any case, Ansel Adams would have loved the world of digital photography, and I still lust for a Weston V meter.



   
(@davee)
Member
Joined: 3 years ago
Posts: 1680
 

Hi @inq and Ron @zander,

   I agree with most of both of your comments above ... including there being more than a touch of smoke and mirrors about the whole thing. 

I remember my father having a light meter ... I am not sure if it was a Weston meter, but it looked similar to the 'black box' versions in some of the pictures I Googled. I had to Google Ansel Adams as well ... he didn't ring any bells.

-----

And an apology - According to Wikipedia, I have mixed up 'depth of field' and 'depth of focus' - I'll try to get it right from now on.

---

Yes, I realise the job of the lens is to bend the light rays coming in at different angles to form a focused image on the film or sensor ... that is the 'easy' bit that you see on all the ray diagrams. But we know the process has intrinsic limitations ... e.g. no matter how much you spend on quality, etc., only items in a certain range of distances will be in focus for any given focus setting.

Whilst there are some simple 'optical formulae', like 1/i + 1/o = 1/f, depth of field calculations seem to be approximations ... e.g. https://en.wikipedia.org/wiki/Depth_of_field ... and not very friendly ones at that.
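As a worked example of that 'simple' formula (a hypothetical 50mm lens, with object distances chosen arbitrarily), it is striking how little the image distance changes as the object moves from 1 m out to infinity, which hints at why the focusing behaviour is so forgiving:

```python
# Thin-lens equation: 1/i + 1/o = 1/f, so i = 1 / (1/f - 1/o).
# Worked numbers for a 50 mm lens at a few arbitrary object distances.

f_mm = 50.0
for o_m in (1, 3, 10, float("inf")):
    o_mm = o_m * 1000                # note: 1/inf is 0.0, so infinity is fine
    i_mm = 1 / (1 / f_mm - 1 / o_mm)
    print(f"object at {o_m} m -> image {i_mm:.2f} mm behind the lens")
```

The image plane moves from 52.63 mm (object at 1 m) to 50.00 mm (object at infinity): the entire focusing range spans under 3 mm of lens travel.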

In lieu of me getting tied up in knots trying to understand the equations, the information Ron has kindly been providing has added some examples of the depth of field for 'good quality lenses & cameras' ... so we can guess they are near the 'ultimate capability'.

I admit that when I started chasing down this rabbit hole, I was wondering whether the effect of the lens 'compensated' for the increased path length of rays at an angle to the perpendicular, in terms of depth of field. I am still not clear about the answer. I note the diagrams Ron has provided suggest only the perpendicular distance is 'important' ... but I don't know if this is technically accurate or just an over-simplified diagram. (The lines defining the depth of field are straight ... they would be circular arcs if the depth-of-field limits depended strictly on path length.)

However, when I saw the 'typical' depth of field data values, I thought that maybe the difference in path length was insignificant, because a 'smart' focusing system would be able to find a setting which allowed objects to be in focus, based solely on the perpendicular distance. Of course, this assumes the lens system is a 'default' 50mm (or equivalent) type, being used in a 'simple' point and click environment.

--------

I don't know if the (potential) market for these products is principally focusing cameras, reading gestures from humans, or looking for misshapen and odd-sized apples on a fruit production line ... or perhaps all of them. This was just my thinking on the camera possibility.

Similarly, I have no idea whether these sensors have a significant marketplace presence, or whether ST (and its competitors) are still at the stage of trying to convince the phone and camera manufacturers that their 'next' model should use them.

However, Inq's information from ST suggested that a 'substantial' market opportunity had driven ST to make their product calculate and report the perpendicular distance, in spite of the 'raw' sensor data being directly proportional to the actual path length. That requires 'more work', suggesting it is done for a good reason, not just as a freebie option.

Inq, quite reasonably, asked for suggestions as to why this might have happened ... and I have been trying to offer a suggestion ... though of course, I might well be looking at the wrong market and application... as well as the wrong or false reason.

-------------

I am aware that some phones do not have such a sensor ... but my personal experience is strictly at the 'Box Brownie' end of the market ... so failing to see it there is not a surprise. I am also aware that software can be used in a semi-trial-and-error mode to adjust the focus and assess the result, but this involves viewing many frames, which takes time, so a method of determining a direct range measurement could give a much faster response.

I am also aware that the more powerful phones build in 'correction' facilities to get a 'better' picture. Some things, like increasing the light in the corners, may be predictable. Correcting the 'focus' of a picture after capture is tricky because information has been lost from the image, and in 'general photography' there is no way of knowing what should be there; so whilst slight 'sharpening' of edges, etc. may be beneficial, 'heavy' post-processing attempts at restoring focus risk introducing new artifacts.

------------

I appreciate gaming may have something to do with this ... though I haven't figured out what part the sensor is playing ... I suspect it isn't interested in walls (except as a limit), but in moving objects (humans?) instead. Perhaps a perpendicular measurement, combined with the x,y coordinates of the pixel, might be used to determine the x, y, z coordinates of an 'object' without doing any trigonometry?
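A small sketch of that last idea: if each zone already reports the perpendicular (z) distance, then x and y fall out of one pre-computed multiplier per zone, with all the trigonometry done once up front. The 8x8 grid matches the VL53L5CX; the 45-degree square field of view is an assumed value for illustration:

```python
# Sketch: perpendicular distance + zone position -> (x, y, z) coordinates,
# with the trigonometry pre-computed once. The 45-degree square field of
# view is an assumption for illustration, not a datasheet value.

import math

GRID = 8           # the VL53L5CX reports an 8x8 grid of zones
FOV_DEG = 45.0     # assumed total field of view across the grid

# tan() of each zone-centre's viewing angle, computed once at start-up.
TANGENTS = [math.tan(math.radians(FOV_DEG * ((k + 0.5) / GRID - 0.5)))
            for k in range(GRID)]

def zone_to_xyz(row, col, perp_mm):
    """Per frame this is just two multiplies per zone; no trig calls."""
    return (perp_mm * TANGENTS[col], perp_mm * TANGENTS[row], perp_mm)

# A corner zone looking at a flat wall 1.2 m away:
print(zone_to_xyz(0, 7, 1200))       # roughly (429, -429, 1200) mm
```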

--------------------

Hope you can find something of interest in this waffle.

Best wishes, Dave


   
Ron
(@zander)
Father of a miniature Wookie
Joined: 3 years ago
Posts: 6964
 

@davee One minor update, Dave: the calculations I did for you are for ANY lens; quality is not taken into account. That would be overlaid on the theoretical data. The only lens variable is focal length.

People who might be interested in these calculations would tend to own better-quality lenses.



   