A Project By Xinyu Wu & Shiqi Li.
Today, vehicles are ubiquitous and parking has become an unavoidable problem. One situation people may encounter is having to park in a designated spot, for example a private spot marked with a license plate number, and drivers can easily forget where their slots are. This project aims to design a robot car that recognizes license plate numbers in a parking lot and backs itself into the spot matching its own plate. In our parking lot, license plate numbers consist of letters and digits, simulating the real-world situation. The robot is placed at a starting line to search for the right parking spot, with the RPi controlling its movement. The search is started manually by pressing the button on GPIO 17 of the piTFT; from there, the robot directs itself into the parking slot autonomously. A program on the RPi memorizes a target license plate, randomly chosen from a list of three plates, and compares it against the plates it detects. A Picamera attached to the RPi on the right side of the car detects license plates in the parking lot, and an image processing algorithm we developed recognizes the characters in each captured image. The captured real-time picture is also shown on the piTFT for convenient observation. When the robot detects the right parking spot, it turns right 90 degrees and parks itself into the spot.
Assemble the robot car, referring to Lab 3
Figure 1. Motor Controller
Figure 2. Circuit Schematic
The first figure shows the connections for the motor controller. The second is the circuit connection diagram from Lab 3, which we referred to while reassembling our robot car. To drive the DC motors, we used a Sparkfun TB6612FNG dual-channel motor controller connected between the RPi and the motors. The RPi was powered with 5 V, the motor controller logic was powered with 3.3 V from the RPi, and the DC motors were powered with 6 V. 1 kΩ resistors were used in the circuit to protect the GPIO pins.
We aimed to install OpenCV on the RPi Linux system, based on Lab 4's kernel (the 5.10.25 PREEMPT_RT version). We followed the process given on Canvas and installed OpenCV with the command
sudo apt-get install libopencv-dev python-opencv
Then we installed the Picamera module. The camera module ribbon cable was inserted into the camera port on the RPi; one thing worth noting is that the connectors at the bottom of the ribbon cable must face the contacts in the port. Our camera ribbon cable failed due to a contact problem, and we replaced it with a new one during testing at a later stage of the project. After inserting the camera module, we configured the RPi by enabling the camera module under Interfaces in the RPi configuration window. Once the reboot completed, we tested the camera module using the command:
raspistill -o Desktop/image.jpg
The captured file was saved as "image.jpg". We verified that the camera functioned normally by running the above command and checking the saved image.
During the installation of OpenCV, we encountered a problem. The Python version originally shipped with the kernel system we used was 2.x, while one tool needed to recognize characters required Python 3. Thus we had to update Python, but when we ran "sudo apt-get upgrade" to install all available upgrades for the packages on the system, the upgrade kept running and we mistakenly killed the process, which left the system broken. Fortunately, we had backed up the system, so we simply recovered it from the previous backup file.
To update Python, we issued the command to install Python 3.6:
sudo apt-get install python3.6
Next, we followed the instructions on Canvas to install OpenCV for Python 3.6.
pip3 install opencv-contrib-python==<version> -i <mirror URL>
sudo apt-get update #setup of dependencies
sudo apt-get install libhdf5-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install libjasper-dev
sudo apt-get install libqt4-test
sudo apt-get install libqtgui4
sudo apt-get update
When using the command
pip3 install opencv-contrib-python==<version> -i <mirror URL>
to install OpenCV, the installed package "cv2" could only be imported without the "sudo" command. To fix this problem, we reinstalled OpenCV with "sudo", after which "cv2" could be imported when running "sudo python3".
In addition, we kept receiving error messages about unmet dependencies even though we added "sudo" to each command. After trying numerous approaches, we fixed the problem by appending "-t buster" when using sudo apt-get install [package]. For example, instead of
sudo apt-get install libhdf5-dev
we should use
sudo apt-get install libhdf5-dev -t buster
In this way, all the required dependencies could be resolved automatically rather than being manually downloaded one by one.
Another problem we encountered was the "pip" version, which was 2.7 and incompatible with the pytesseract package. We solved this problem by using
python3 -m pip install pytesseract
We developed an image processing algorithm to capture and process images. Our license plate numbers are composed of both letters and numbers. Thus, we made three license plates and printed them out.
At first, we tried to process the original images directly with pytesseract. The results turned out to be highly inaccurate: sometimes pytesseract returned nothing at all, and sometimes the results contained unrecognized characters. Therefore, we decided to develop an image preprocessing algorithm.
First, we used the Picamera to capture a picture and save it as image.jpg. We imported the picamera library; an image can be taken with:
from picamera import PiCamera
from time import sleep

camera = PiCamera()
camera.start_preview()
sleep(2)  # give the sensor time to adjust exposure
camera.capture('image.jpg')
camera.stop_preview()
Note: if your preview is upside-down, you can rotate it by 180 degrees with the following code:
camera.rotation = 180
The original picture is shown as follows:
Then we processed the image for further character recognition, including character localization, background deletion, noise reduction, and character enhancement.
First, we preprocessed the initial image and localized the characters by splitting out the black rectangular plate area and removing the background. This improves the accuracy of character recognition. The processed image is saved as "preprocess.jpg".
Figure 4. Preprocessed image to split the black rectangular region
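The localization step can be sketched in pure Python as follows. This is a simplified stand-in for what the OpenCV pipeline does; the threshold value 80 and the function name are our own illustration, not taken from the project code.

```python
def locate_dark_region(gray, thresh=80):
    """Bounding box (x, y, w, h) of pixels darker than `thresh`.

    `gray` is a 2-D list of 0-255 intensities. The black plate rectangle
    dominates the dark pixels, so its bounding box approximates the plate
    location. (Illustrative sketch -- the real pipeline uses OpenCV.)
    """
    coords = [(x, y)
              for y, row in enumerate(gray)
              for x, v in enumerate(row)
              if v < thresh]
    if not coords:
        return None
    xs = [c[0] for c in coords]
    ys = [c[1] for c in coords]
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
```

Cropping the original image to this box removes most of the background before the character-recognition steps below run.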
Then read preprocess.jpg with the following code:
img = cv2.imread("preprocess.jpg")
Convert the image to a grayscaled image with the following code:
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Use morphological transformations to maximize contrast, and threshold the result to obtain the binary image img_thresh used below:
structuringElement = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
imgTopHat = cv2.morphologyEx(img_gray, cv2.MORPH_TOPHAT, structuringElement)
imgBlackHat = cv2.morphologyEx(img_gray, cv2.MORPH_BLACKHAT, structuringElement)
# add the bright (top-hat) details, subtract the dark (black-hat) ones
img_contrast = cv2.subtract(cv2.add(img_gray, imgTopHat), imgBlackHat)
img_thresh = cv2.adaptiveThreshold(img_contrast, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 19, 9)
Use findContours function to find the contours of characters:
binary, contours, hierarchy = cv2.findContours( img_thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE )
Note: In OpenCV 3.x this function returns three values, unlike OpenCV 2.x (and 4.x), where it returns two.
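Because the return signature differs across OpenCV versions, a small helper (our own convenience wrapper, not part of OpenCV) can make the call version-agnostic: OpenCV 2.x and 4.x return (contours, hierarchy), while 3.x returns (image, contours, hierarchy), so the contour list is always the second-to-last element.

```python
def unpack_contours(result):
    """Extract the contour list from cv2.findContours() output,
    regardless of whether OpenCV returned 2 or 3 values."""
    # the contour list is second-to-last in every OpenCV version
    return result[-2]

# usage (with cv2 installed):
# contours = unpack_contours(cv2.findContours(img_thresh,
#                            cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE))
```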
Draw contours using drawContours function:
import numpy as np

# black temp image the same size as the input
height, width, channel = img.shape
temp_result = np.zeros((height, width, channel), dtype=np.uint8)
# draw all candidate contours in white
cv2.drawContours(temp_result, contours=contours, contourIdx=-1, color=(255, 255, 255))
Visualize possible contours of characters:
Split the character area using getRectSubPix(). The patch size and center below are derived from the bounding boxes of the matched character contours:
img_cropped = cv2.getRectSubPix(img_thresh, patchSize=(plate_width, plate_height), center=(plate_cx, plate_cy))
Crop characters to derive the following picture:
Finally, pad the cropped characters with a margin using the copyMakeBorder() function to generate the final picture, and save it as "result.jpg":
img_result = cv2.copyMakeBorder(img_result, top=10, bottom=10, left=10, right=10,
borderType=cv2.BORDER_CONSTANT, value=(0, 0, 0))
Here is the generated final result:
Perform license plate recognition
Use the tesseract engine to recognize the characters by running:
tesseract result.jpg file -l eng -psm 7
(On Tesseract 4 and later, the flag is written --psm 7.)
The detection results are saved in "file.txt". We read them back with the following code:
# read the detected license plate text
f = open("file.txt", "r")
text = f.readline()
text = text.replace("\n", "")
f.close()
Then, by comparing the result with the target license plate, we know whether the robot should stop here or keep moving forward.
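The comparison itself can be sketched as follows. This is a minimal version; the normalization rules and the function name are our own illustration, added because OCR output often carries stray whitespace or case differences.

```python
def is_target_plate(detected, target):
    """Return True when the OCR result matches the target plate.

    Strips whitespace and uppercases both sides so minor OCR
    formatting noise does not cause a false mismatch.
    """
    def normalize(s):
        return "".join(s.split()).upper()
    return normalize(detected) == normalize(target)

# the robot parks on a match, otherwise keeps moving:
# action = "park" if is_target_plate(text, target) else "move_forward"
```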
Design the car control algorithms
We utilized PWM to control the left and right motors. A PWM instance can be created with p = GPIO.PWM(26, frequency), started with p.start(dutycycle), and its duty cycle changed with p.ChangeDutyCycle(dc). Since our robot car has to move slowly so it can capture and detect images during movement, the duty cycle was set to about 70. To make the car move in a straight line despite the slightly inconsistent speeds of the two motors, we set the two duty cycles to slightly different values. When the car needs to turn right to park in front of the correct license plate, the left motor moves forward while the right motor moves backward.
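The differential duty-cycle adjustment can be sketched as a small helper. The base value of 70 comes from the text above; the trim value and pin numbers are hypothetical and would need to be tuned on the actual car.

```python
def wheel_duty_cycles(base_dc=70, trim=3):
    """Return (left_dc, right_dc) duty cycles for straight-line driving.

    The two motors do not spin at the same speed for equal duty cycles,
    so a small trim is applied in opposite directions. The trim value
    itself is found experimentally (3 here is a placeholder).
    """
    left = min(base_dc + trim, 100)   # duty cycles are clamped to 0-100
    right = max(base_dc - trim, 0)
    return left, right

# on the Pi, these values would feed two RPi.GPIO PWM instances, e.g.:
# left_pwm = GPIO.PWM(26, 50); left_pwm.start(wheel_duty_cycles()[0])
```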
Here are three flow charts describing image processing and car control algorithms.
This flow chart describes the whole picture of the software control algorithm including both image processing and car control.
The following flow chart describes the image processing algorithm.
The following flow chart describes the car control algorithm.
We eventually reached the goal of our project. On the software side, the robot car could successfully capture images using the Picamera. The car takes a picture each time it drives forward a distance and stops. When a picture is taken, the program processes it: a preprocessing algorithm is applied before the RPi begins to recognize the characters. After preprocessing, the characters appear clearer and are easier for pytesseract to recognize. The recognized results are written to a file and read back by the program. The recognized license plate number is also shown on the piTFT screen, together with the original captured image and the target license plate number. The following three pictures show the three situations in which the three license plates are detected and displayed on the piTFT. The robot compares the target license plate number with the recognized characters to see whether there is a match. If the target matches the recognized text, the robot directs itself into the location in front of the corresponding license plate, and the program ends, even if some license plates have not yet been detected. Otherwise, the robot car moves forward and continues detecting until the right license plate is found.
On the hardware side, the robot car initializes with the text "Welcome to Pi Parking" shown on the piTFT screen. The program starts when the physical button, in our case GPIO 17 on the piTFT, is pressed. A target license plate is then generated from a list of three license plates using the choice() function from the "random" library. The car slowly moves straight forward and stops to capture an image. If the right license plate is detected, the car turns right and parks itself in front of it; if not, the car moves forward and repeats the detection until the right position is found.
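Picking the target plate as described above is a one-liner with random.choice(). The plate strings below are placeholders, not our actual printed plates:

```python
import random

# hypothetical plate list; the real project uses its three printed plates
PLATES = ["AB123", "CD456", "EF789"]

def pick_target(plates=PLATES):
    """Randomly choose the plate the car must find and park at."""
    return random.choice(plates)
```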
We designed and tested a parking lot with an autonomous robot car. We successfully reached our goals by developing image processing and car control algorithms in software using OpenCV and PWM signals. We also set up a parking lot and constructed our own robot car. So that the car could detect plates during its movement, we installed the camera on the right side of the car. For future work, we could develop a closed-loop control algorithm that gives the car feedback about its movement, so it drives in a straight line without manual tuning of the parameters in our code. Another possible extension would be a parking algorithm using additional sensors, so that the car parks in a specific slot more accurately.
Designed the image processing algorithm and tested the overall system.
Designed the display and car control algorithm and tested the overall system.