It will be interesting to see what kind of data you get from the TOF.
In the meantime I have been reading up on SLAM algorithms and wondering what kind of computing power must be in a robot vacuum cleaner to perform the task. I have updated the simulation to test out different algorithms for navigating and mapping a house. Although the "lidar" data is clean at the moment, I will add realistic noise to test those algorithms. The simulated robot also has perfect odometry, which means I will have to add error to that as well, to make sure the algorithms still work even if the robot is not exactly where it thinks it is, or even if it is physically moved by a human.
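Something along these lines (a minimal C++ sketch of my own; the function names and noise levels are placeholders, not the actual simulation code) should be enough to corrupt the range readings and the odometry pose with Gaussian noise:

```cpp
// Hypothetical sketch: add Gaussian noise to simulated "lidar" ranges and
// to the odometry pose. Sigmas are guesses, not tuned values.
#include <cstdio>
#include <random>

static std::default_random_engine rng;

// A range reading with a little random error on top of the true distance.
double noisyRange(double trueRange, double sigma = 0.01 /* world units */) {
    std::normal_distribution<double> noise(0.0, sigma);
    return trueRange + noise(rng);
}

// Dead-reckoned pose that drifts a little on every update, so the robot is
// never exactly where it thinks it is.
struct Pose { double x, y, theta; };

Pose noisyOdometry(const Pose& p, double posSigma = 0.005, double angSigma = 0.002) {
    std::normal_distribution<double> pos(0.0, posSigma);
    std::normal_distribution<double> ang(0.0, angSigma);
    return { p.x + pos(rng), p.y + pos(rng), p.theta + ang(rng) };
}

int main() {
    Pose p{0.0, 0.0, 0.0};
    for (int i = 0; i < 5; ++i) {
        p = noisyOdometry(p);
        std::printf("range %.3f  pose %.4f %.4f %.4f\n", noisyRange(1.0), p.x, p.y, p.theta);
    }
    return 0;
}
```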
In the snapshot below the actual simulated world is on the left and the "lidar" data seen by the robot is on the right. When the robot turns, the data on the right will rotate. The simulated lidar code I posted previously actually returns the x,y position of the hit point, when in fact the only data you really have is the direction and distance of the hit point, so the x,y position has to be calculated before it is plotted.
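That conversion amounts to something like this (my own minimal C++ sketch with made-up numbers, not the code from the simulation), assuming the beam angle is measured relative to the robot's heading:

```cpp
// Convert a simulated lidar return (bearing, distance) into a world x,y
// point for plotting.
#include <cmath>
#include <cstdio>

int main() {
    double robotX = 100.0, robotY = 150.0;  // robot position in world units
    double heading = 0.5;                   // robot heading, radians
    double beamAngle = 0.1;                 // beam angle relative to heading
    double range = 42.0;                    // distance reported by the "lidar"

    double hitX = robotX + range * std::cos(heading + beamAngle);
    double hitY = robotY + range * std::sin(heading + beamAngle);

    std::printf("hit point at (%.1f, %.1f)\n", hitX, hitY);
    return 0;
}
```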
To enlarge an image, right click image and Open link in new window.
Your post is requiring a LOT more thought... that I haven't gotten to yet. Let me see if I understand what you have here...
First off... is the simBot8 a program you've already written?!
The left side is a hand-generated set of lines.
I see the robot position.
The right side is a computer program, with ray traces being sent out to intersect walls (my guess at that ray cast is sketched a few lines below).
It even looks like the data points have a consistent angular increment.
As you described earlier with your ray-trace game.
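If that is what simBot8 is doing, my guess at the ray cast itself would be something like this (a C++ sketch with made-up walls and names, not the actual program):

```cpp
// Cast one beam from the robot and return the distance to the nearest wall
// segment it hits, which becomes the "lidar" reading for that beam angle.
#include <cmath>
#include <cstdio>
#include <vector>

struct Wall { double x1, y1, x2, y2; };

// Distance along the ray to the wall, or -1 if the ray misses it.
double rayHitsWall(double ox, double oy, double angle, const Wall& w) {
    double dx = std::cos(angle), dy = std::sin(angle);
    double ex = w.x2 - w.x1, ey = w.y2 - w.y1;
    double denom = dx * ey - dy * ex;
    if (std::fabs(denom) < 1e-9) return -1.0;                  // parallel
    double t = ((w.x1 - ox) * ey - (w.y1 - oy) * ex) / denom;  // along the ray
    double u = ((w.x1 - ox) * dy - (w.y1 - oy) * dx) / denom;  // along the wall
    return (t >= 0.0 && u >= 0.0 && u <= 1.0) ? t : -1.0;
}

int main() {
    const double kPi = 3.14159265358979323846;
    std::vector<Wall> walls = { {0, 0, 0, 200}, {0, 200, 300, 200} };
    double nearest = -1.0;
    for (const Wall& w : walls) {
        double d = rayHitsWall(100, 100, kPi / 2.0, w);        // beam straight "up"
        if (d >= 0.0 && (nearest < 0.0 || d < nearest)) nearest = d;
    }
    std::printf("nearest hit at distance %.1f\n", nearest);    // 100.0 here
    return 0;
}
```

Repeating that for each beam at a fixed angular increment would give exactly the sort of point fan shown on the right.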
If I'm reading the above right... this is 90% of the problem.
You're making me have to do a rethink. Me designing a 30-minute tail-dragger, versus another week of self-balancing (might work) logic, may need to get back-burnered.
It will be interesting to see what kind of data you get from the TOF.
Let me see about getting some data out of this sensor. Would this be of interest to you?
Multiple samples perpendicular to a plain white wall at different ranges - gives baseline sensitivity and noise content in best conditions.
Some oblique to the wall.
Some with furniture and obstructions, with photos to correlate.
Let me know if you're interested.
VBR,
Inq
3 lines of code = InqPortal = Complete IoT, App, Web Server w/ GUI Admin Client, WiFi Manager, Drag & Drop File Manager, OTA, Performance Metrics, Web Socket Comms, Easy App API, All running on ESP8266... Even usable on ESP-01S - Quickest Start Guide
I finally thought... I'll be able to add the orientation data to the Math and just let it shotgun the whole area. Make sense?
For me... this is the good part. The previous 8 pages, I file under the "No pain... no gain!" category.
I guess it makes sense. I'm good for a bit of trigonometry when I can look up a known formula, like compass bearing calcs from GPS data etc., but having never had to use advanced maths since I left school (and I can't really remember doing much back then either), the sort of maths required to adjust your bot's scan data is beyond me.
I do have some doubts as to the effectiveness, in terms of the resolution you will get from your sensors, to enable mapping as opposed to the simpler obstacle avoidance data. But I presume (as the presumer has been slated as adequate 😎 ) your early experimental data will settle this.
So for me it's a case of Inq's pain is my gain 😀 (maybe), and thanks for putting us all in the front seat of your robot arena.
@byron - That I have an audience of what... maybe two people interested.
$3 ESP8266
$7 stepper motor
$1.50 driver
$2 plastic
... knowing someone else in the whole world is interested... priceless!
Thanks!
Inq
Honestly I never really appreciated what this VL53L5CX was!! I was confusing it with the TF Mini LIDAR. What I was doing really isn't relevant to its use to map or navigate a room. I don't know enough about it yet to make some kind of simulation version for a simulated robot base.
I am going to have to learn more about it before I make any further comment.
Shows I didn't really read your post very carefully. I went back to look again to re-read the post with your video of your experiments. I didn't really understand at the time what you were showing. I will give it some more thought.
It will be interesting to see what kind of data you get from the TOF.
Let me see about getting some data out of this sensor. Would this be of interest to you?
Multiple samples perpendicular to a plain white wall at different ranges - gives baseline sensitivity and noise content in best conditions.
Some oblique to the wall.
Some with furniture and obstructions, with photos to correlate.
Let me know if you're interested.
VBR,
Inq
Difficult to know what sort of data I would really need.
Although it says the VL53L5CX can be used for things like scene understanding, complex scene analysis and 3D room mapping, I can't find any examples of that use on the internet.
If you moved your hand closer would you get more squares covering your hand (higher resolution) and if you move your hand away from the sensor do you end up with only one square covering the hand?
Shows I didn't really read your post very carefully. I went back to look again to re-read the post with your video of your experiments. I didn't really understand at the time what you were showing. I will give it some more thought.
In that very limited test, I came away with an impression. I don't have any experience with Lidar, so I can't compare the technologies, but from your simulations above, I get the impression that these rotating Lidar units are using a laser beam like a laser pointer...
I was originally expecting this VL53L5CX unit to have 64 beams and if I moved it close to a wall, I'd see the grid's 64 points...
One of my phone cameras sees into infrared. I used it on the https://inqonthat.com/inq-speed-racer/ project and could see the infrared emitters. Looking directly at the VL53L5CX, I can see the purplish hue as it is the same frequency as the race car emitters.
But this thing, I think, works a little differently. I don't think the infrared is focused. I think it just blasts out the light pulses over the whole FOV area, and the ToF receiver is the one that segments out the 64 regions it is looking at. That would explain the limited range of 4 meters... it gets too dim by the square of the distance.
It might also explain some of the other data that comes back that I mentioned in the test post. I'm thinking that a partial blockage of a square... returns both the ToF distance and a signal strength that can estimate the percentage of the square's area being reflected back.
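Back-of-the-envelope (assuming a roughly 45 degree square FoV split into the 8x8 grid... that FoV figure is my assumption, not a measured number), the patch of the scene each zone covers grows with distance, which would also answer the earlier question about the hand: close up it spans many squares, far away it shrinks toward one.

```cpp
// Rough geometry sketch: width of the patch one 8x8 zone "sees" at a few
// distances, assuming a 45 degree square field of view. Illustrative only.
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    const double fovDeg = 45.0;                    // assumed square FoV
    const int zones = 8;                           // 8x8 grid
    const double zoneRad = (fovDeg / zones) * kPi / 180.0;

    for (double d = 0.5; d <= 4.0; d += 0.5) {     // metres
        double patch = 2.0 * d * std::tan(zoneRad / 2.0);
        std::printf("at %.1f m each zone covers roughly %.0f mm\n", d, patch * 1000.0);
    }
    return 0;
}
```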
Anyway... I think, I'll pause the high-wire balancing act and test this more thoroughly and get you some data while I'm at it. I'll also read the links in your next post. 😉
VBR,
Inq
@robotbuilder - I've looked through some of those links on the VL53L5CX sensor. Looks like there are many things that can be done. The gesture aspects might be something for later on. I liked the business card following robot... and is on the shorter term list of things after the mapping. I think having it follow me will be a hoot to take it to schools for class demonstrations trying to get more kids involved in my area.
VBR,
Inq
Don't get me started on the economic politics of today and our foolish reliance on imports instead of being self-sufficient, even though that might be more costly in the short term!
As for computing power, I have the laptop to play with until an RPi becomes affordable or even available.
Mate... our dollar is currently sitting at ~67 cents USD... we have been slammed for far too long, and it's time we all revolt against our pathetic globalist NON LEADERS - throw the #$%^& out!
@robotbuilder I can't find any examples of that use on the internet.
Is the example 3D depth map not what you are looking for?
Arduino says and I agree, in general, the const keyword is preferred for defining constants and should be used instead of #define
"Never wrestle with a pig....the pig loves it and you end up covered in mud..." anon
My experience hours are >75,000 and I stopped counting in 2004.
Major Languages - 360 Macro Assembler, Intel Assembler, PLI/1, Pascal, C plus numerous job control and scripting
I liked the business card following robot... and is on the shorter term list of things after the mapping.
Another good example of a robot following something that it recognises is the dronebot video on the pixy2. This is based on camera image technology rather than your sensors, but it's another cool way to do a class demo to interest kids.
@robotbuilder I don't know if it's 8x8 or what, I just saw 3D and thought you might be interested.