
robot navigation

Page 2 / 2

byron
(@byron)
Honorable Member
Joined: 2 years ago
Posts: 548
 

@robotbuilder

Here is the python OpenCV code I was playing with that matches shapes found in an image. I take an image and then use a screen snipping program to grab small bits of it for the OpenCV algorithms to find in the original image, putting rectangles around any match they find. In my case I use a picture of my office, with the top of my cup of tea and some markings on a box as the targets. These targets stand in for an imaginary robot and a target location to move the bot to. This was a test to see how well OpenCV could stand in for an outdoor GPS navigation system so I could practice the robot manoeuvre and navigation programming. So instead of longitude and latitude coordinates I get screen coordinates. In this scenario the camera recording the images or video would be looking down at the floor, working out where the robot is, and the location feedback from OpenCV would be used to direct it to a target.

If I do get to set this up indoors then I will probably not use shape matching but instead get OpenCV to find 'blobs' by matching a particular colour area (coloured disks or LEDs on the robot) in the image. This exercise was not aimed at an indoor navigation project, where it may be better to place the camera on the robot, though it's still all about recognising something in an image to use for robot navigation purposes.
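As a rough illustration of that colour-blob idea, here is a minimal sketch of the sort of thing I have in mind (the HSV colour range and the minimum blob area are made-up values that would need tuning for whatever disks or LEDs are actually used):

import cv2
import numpy as np

# grab a colour frame from the downward looking camera (file name is just a placeholder)
frame = cv2.imread('images/Blobs.jpeg')

# convert to HSV, which makes it easier to threshold on a particular colour
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# keep only the pixels that fall inside a (made-up) red-ish range
mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))

# find the outlines of the remaining blobs and report the centre of each one
# that is big enough to be a marker (the 200 pixel area is a guess)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 200:
        x, y, w, h = cv2.boundingRect(c)
        print('blob centre x,y:', x + w // 2, y + h // 2)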

This exercise demonstrated to me that OpenCV was going to work rather well for this purpose, though whether I get around to mocking it up for indoor robot navigation testing remains a question, as I may well just bite the bullet, get some RTK GPS boards and go straight to using the outdoor bots.

I found I had commented the test program quite well for me, but I've added a bit more and slightly refactored the code with better variable names, which will stand me in good stead when I revisit this program. (I must get into a better habit of documenting my snippets.) The program runs several different OpenCV template matching algorithms in sequence to compare them and find the best one for my scenario. If anyone does run the program, make sure the image window has the focus; pressing any key will then move the program along. Output is also printed to a terminal showing the various screen coordinates and the angle and compass bearing to the target based on a final chosen matching algorithm (some algorithms did not work so well in my scenario). The use of a compass bearing assumes the camera is correctly orientated (top of screen to North) and that the robot has a compass sensor on board.

For a better use of image processing to navigate indoors, as you can see from this thread, @robotbuilder is pursuing an indoor navigation solution utilising a camera mounted on the robot rather than one placed looking down on the bot.

I have only been playing with OpenCV for a short time, but I have found it easy enough to use and I think you will find it most useful in the robot programming area. OpenCV can be used with C++ instead of python (indeed it is written in C++), so don't let my python example deter you from having a play.

Edit - blame the coloured text in the code on my technique of copying the python code into the Arduino IDE and then copying it as HTML for posting.

 

 

import numpy as np
import cv2 

# the file names of the images: 
myImage = 'images/Blobs.jpeg'
myBot = 'images/bot.png'
myTarget = 'images/target.png'

# helper functions to pretty print coordinates and find center coordinates of a rectangle
def print_blob_coordinates(blob, top_x, top_y, bottom_x, bottom_y):
    center_x, center_y = blob_center_coordinates(top_x, top_y, bottom_x, bottom_y)
    print('   ', blob, ': Top Left x,y', top_x, top_y,
          'Bottom Right x,y', bottom_x, bottom_y, 'Center x,y', center_x, center_y)

def blob_center_coordinates(top_x, top_y, bottom_x, bottom_y):
    center_x = top_x + ((bottom_x - top_x) / 2)
    center_y = top_y + ((bottom_y - top_y) /2)
    return(int(center_x),int(center_y))


# show the image in which to locate bot and target blobs
# wait for any key to be pressed (whilst the image has the focus)
cv2.imshow('Untidy Office', cv2.imread(myImage))
cv2.waitKey(0)
cv2.destroyAllWindows()

# copy the image as a greyscale image and resize it
# note the shape finding algorithms work on greyscale images
img = cv2.resize(cv2.imread(myImage, cv2.IMREAD_GRAYSCALE),(0, 0), fx=0.8, fy=0.8)
# likewise copy the images of the 'blobs' we seek to find in the above image 
# (they should be resized by the same factor as the main image)
imgBot = cv2.resize(cv2.imread(myBot, cv2.IMREAD_GRAYSCALE),(0, 0), fx=0.8, fy=0.8)
imgTarget = cv2.resize(cv2.imread(myTarget, cv2.IMREAD_GRAYSCALE), (0, 0), fx=0.8, fy=0.8)

# print the height and width of the image 'img'
img_h, img_w = img.shape
print('image height:', img_h, ' image width:', img_w)

# find the height and width of the blob image 'imgBot'
imgBot_h, imgBot_w = imgBot.shape
print('imgBot height:', imgBot_h, ' imgBot width:', imgBot_w)

# find the height and width of the blob image 'imgTarget'
imgTarget_h, imgTarget_w = imgTarget.shape
print('imgTarget height:', imgTarget_h, ' imgTarget width:', imgTarget_w)

# A bunch of cv2 algorithms for finding matches of an image in another image
# are put in a list to use each method in turn. - they all work with greyscale images
methods = [cv2.TM_CCOEFF, cv2.TM_CCOEFF_NORMED, cv2.TM_CCORR,
            cv2.TM_CCORR_NORMED, cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]
 

# now the matching algorithms are tried one at a time to see which ones succeed
# where a match is found a rectangle is drawn around the shape.
for method in methods:
    # copy the image so that a highlighted match does not draw 
    # on the original image
    img2 = img.copy()
    
    # the blob image will be 'moved' over the main image pixel by pixel
    # and an array of match scores is returned.
    botResult = cv2.matchTemplate(img2, imgBot, method)
    targetResult = cv2.matchTemplate(img2, imgTarget, method)

    # some of the methods work on minimum values and some on maximum values
    # so we get the location in the image of the min and max match values 
    # generated by the current cv2.matchTemplate algorithm
    b_min_val, b_max_val, b_min_loc, b_max_loc = cv2.minMaxLoc(botResult)
    t_min_val, t_max_val, t_min_loc, t_max_loc = cv2.minMaxLoc(targetResult)
    
    # the TM_SQDIFF and TM_SQDIFF_NORMED algorithms use minimum values
    # and the other algorithms use the max values
    if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
        b_location = b_min_loc
        t_location = t_min_loc
    else:
        b_location = b_max_loc
        t_location = t_max_loc 

    # having got the location value (which will be the upper left location)
    # the bottom right location is calculated from the height and width values
    # of the blob images that were found above.
    b_bottom_right = (b_location[0] + imgBot_w, b_location[1] + imgBot_h)
    t_bottom_right = (t_location[0] + imgTarget_w, t_location[1] + imgTarget_h)

    # draw a rectangle on the copied image and show the image on the screen
    cv2.rectangle(img2, b_location, b_bottom_right, 255, 5)
    cv2.rectangle(img2, t_location, t_bottom_right, 255, 5)
    cv2.imshow('FindMyBlobs', img2) 
 
    # print the algorithm name for handy reference of those that worked 
    # i.e  cv2.TM_CCOEFF, cv2.TM_CCOEFF_NORMED, cv2.TM_CCORR,
    # cv2.TM_CCORR_NORMED, cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED
    if method in [cv2.TM_CCOEFF]:
        print('cv2.TM_CCOEFF')
    elif method in [cv2.TM_CCOEFF_NORMED]:
        print('cv2.TM_CCOEFF_NORMED')
    elif method in [cv2.TM_CCORR]: 
        print('cv2.TM_CCORR')
    elif method in [cv2.TM_CCORR_NORMED]:
        print('cv2.TM_CCORR_NORMED')
    elif method in [cv2.TM_SQDIFF]:
        print('cv2.TM_SQDIFF')
    elif method in [cv2.TM_SQDIFF_NORMED]:
        print('cv2.TM_SQDIFF_NORMED')

    # print the screen coordinates of the blobs that each algorithm finds.
    print_blob_coordinates(
            'bot blob', b_location[0], b_location[1], b_bottom_right[0], b_bottom_right[1])
    print_blob_coordinates(
            'target blob', t_location[0], t_location[1], t_bottom_right[0], t_bottom_right[1])


    # wait for any key to be pressed (whilst the image has the focus)
    # and on keypress destroy the image - and carry on with the for loop or move on with the 
    # program when there are no more algorithms to process.
    cv2.waitKey(0)
    cv2.destroyAllWindows()

# Now a final go with a chosen matching algorithm, show the 'bot' and 'target'
# and draw a line from the center of the bot blob to the center of the target blob
img2 = img.copy()
botResult = cv2.matchTemplate(img2, imgBot, cv2.TM_CCOEFF_NORMED)
targetResult = cv2.matchTemplate(img2, imgTarget, cv2.TM_CCOEFF_NORMED)

b_min_val, b_max_val, b_min_loc, b_max_loc = cv2.minMaxLoc(botResult)
t_min_val, t_max_val, t_min_loc, t_max_loc = cv2.minMaxLoc(targetResult)

b_location = b_max_loc
t_location = t_max_loc
b_bottom_right = (b_location[0] + imgBot_w, b_location[1] + imgBot_h)
t_bottom_right = (t_location[0] + imgTarget_w, t_location[1] + imgTarget_h)

cv2.rectangle(img2, b_location, b_bottom_right, 255, 5)
cv2.rectangle(img2, t_location, t_bottom_right, 255, 5)
botx, boty = blob_center_coordinates(b_location[0], b_location[1], b_bottom_right[0], b_bottom_right[1])
targetx, targety = blob_center_coordinates(t_location[0], t_location[1], t_bottom_right[0], t_bottom_right[1])
cv2.line(img2, (botx, boty), (targetx, targety), (255, 0, 0), 4)

cv2.imshow('Draw the path to Target', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()

# finally, assuming the top of the screen is North, a bearing is calculated from the bot blob to the
# target blob so the bot can be sent to trundle along a compass bearing path.

# 1. show navigation right angle triangle on colour image 
img3 = cv2.resize(cv2.imread(myImage, cv2.IMREAD_COLOR),(0, 0), fx=0.8, fy=0.8)
cv2.line(img3, (botx, boty), (targetx, targety), (0, 255,0), 4)
cv2.line(img3, (botx, boty), (targetx, boty), (0, 255, 0), 4)
cv2.line(img3, (targetx, boty), (targetx, targety), (0, 255, 0), 4)
cv2.imshow('Bearing Triangle', img3)
cv2.waitKey(0)
cv2.destroyAllWindows()

# 2. calculate the angle from the bot to the target.
diff_x = abs(botx - targetx)
diff_y = abs(boty - targety)

print( diff_x, diff_y)

angle = np.arctan2(diff_y, diff_x) * 180 / np.pi  # arctan2 avoids a divide by zero when diff_x is 0
print('Angle from bot to target: ',angle)

# 3. from the angle calculate the compass bearing
# Note that this needs to be developed into a function that adjusts the calculation for where
# the target is in relation to an imaginary compass placed on the bot.  Deducting the angle from 90 degrees
# is for where the target is in the North to East quadrant.  
bearing = 90 - angle
print('Compass bearing from bot to target:', bearing)
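As a rough sketch of the quadrant-adjusting function mentioned in step 3 above (not tested on the bot, and it assumes screen y increases downwards with the top of the image as North), something along these lines should give a full 0-360 bearing:

def compass_bearing(bot_x, bot_y, target_x, target_y):
    # screen y grows downwards, so flip its sign to make 'up' positive (towards North)
    dx = target_x - bot_x
    dy = bot_y - target_y
    # arctan2 gives the angle measured anticlockwise from East,
    # which is then converted to a compass bearing measured clockwise from North
    angle_from_east = np.degrees(np.arctan2(dy, dx))
    return (90 - angle_from_east) % 360

# e.g. a target due East of the bot gives 90.0 and one due South (down the screen) gives 180.0
print(compass_bearing(100, 100, 200, 100))
print(compass_bearing(100, 100, 100, 200))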

 

 

This post was modified 6 months ago by byron

robotBuilder
(@robotbuilder)
Prominent Member
Joined: 2 years ago
Posts: 881
Topic starter  

@byron

So, to cut a long story short, you have a large image Blobs.jpeg and two smaller images bot.png and target.png? You move each small image over the larger image to find the best or a sufficient match?

An image is simply an array of numbers so it would look something like this:

[image: imageArrays - an image shown as arrays of pixel numbers]

 

I would have to look up the cv documentation to understand the "methods".
Python is new to me, and indeed I haven't used numpy or cv yet, so I will have to start by learning how to install them. Up to now I have just used Python with text i/o to try to get my head around all these higher level data representation abstractions and the associated nomenclature. I am a very low level programmer.

 

This post was modified 6 months ago 2 times by robotBuilder

byron
(@byron)
Honorable Member
Joined: 2 years ago
Posts: 548
 

@robotbuilder

In principle that's correct; it's just an array, or in python terms a nested list, where a list contains other lists that contain other lists, and so on.

When the image is loaded, say into a variable called img, info on the image is obtained by doing

(h, w, d) = img.shape

For example, if I do this on the small image I want to find in the main image, I get height=74, width=100, depth=3, showing that the image is 74 rows x 100 columns with 3 channels representing the Blue, Green and Red colours.

I can also print the list data by doing a print(img), which shows data in the following format.

[[[26 37 53]
[40 52 69]
[43 56 74]
...
[62 75 95]
[61 72 88]
[36 43 56]]

[[25 36 52]
[41 54 71]
[42 55 73]

etc.
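Putting those pieces together, a minimal inspection script would be something like this (the file name is just whatever small image is to hand):

import cv2

img = cv2.imread('images/bot.png')

h, w, d = img.shape                 # rows, columns, colour channels
print('height:', h, 'width:', w, 'depth:', d)

print(img[0][0])                    # the Blue, Green, Red values of the top-left pixel
print(img)                          # the whole nested array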

I've mainly been using OpenCV on my mac, but I have also installed it on a raspberry pi. For testing purposes it's best to create a python virtual environment and install OpenCV into that. From memory I think it was just a case of activating the virtual environment and doing a pip3 install opencv-python, but I did not make any notes and the installation instructions are readily available with a google.

Here is a handy link to some OpenCV on python docs

https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_core/py_image_arithmetics/py_image_arithmetics.html

 

