Mengfei Xiong (mx224) and Ziqi Wang (zw568)
Dec. 5th 2018
For our ECE 5725 final project, we created a cleaning robot called the “Sweeper”, whose functions are to pick up “trash”, carry it to a specific location, and thereby achieve “waste sorting”. The hardware of the “Sweeper” consists of a Raspberry Pi, a camera, and five servos; the software runs on the Raspberry Pi and is written in Python with the OpenCV and pygame libraries. We use OpenCV to identify the type of “trash” by its color and the distance to the target by its area in the video frame. The robot runs autonomously and can be controlled through buttons displayed on the PiTFT screen to start, resume, and quit. When it picks up or drops a piece of trash, the “Sweeper” shows what it got on the screen.
1. Run automatically from a bash script.
2. Wait until the “start” button on the screen is pressed.
3. Rotate until a target is found.
4. Recognize the trash on the floor, pick it up, and move it to the corresponding trash can.
5. Keep searching for and delivering targets until the quit button is pressed.
6. Show the type of trash on the screen.
The robot we designed and built automatically identifies and recycles trash; it runs on the Raspberry Pi (R-Pi) and integrates image processing software with GPIO I/O control. The robot was designed to identify different types of garbage, deliver them to different designated locations, control its “start”, “pause”, “resume” and “terminate” tasks through pygame, and show the status of the "Sweeper". The initial foundation of this robot concept was the pygame work studied in Lab 2 and the robot developed in Lab 3 of ECE 5725: Design with Embedded Operating System.
The Raspberry Pi served as the brain of the entire system and controlled all the other components of the "Sweeper". The hardware of this project included two continuous servos, which drove the wheels, and two standard servos, which controlled the behavior of the mechanical arm at the front of the robot. The mechanical arm was made from a recyclable bottle. One more standard servo was used to operate the camera gimbal at the front of the robot. The camera we used in this project was the PiCamera Module V2. The details of the hardware design are introduced in the Hardware Design part.
The entire system was developed in Python. By designing our robot in Python on the R-Pi, we were able to take advantage of the wide variety of available packages to integrate the robot software into one platform with no need for any higher-level OS. The primary packages applied in our final project were OpenCV, RPi.GPIO, and pigpio. OpenCV was used to operate the camera module and perform all the image processing tasks, namely detecting the different kinds of trash and drop-off locations. The OpenCV package is an all-inclusive one that allows a variety of image processing functions to be applied. RPi.GPIO in Python was used to control the I/O ports on the Raspberry Pi in order to establish communication between hardware and software. Hardware PWM (pigpio) was used for the camera gimbal to keep its rotation steady. The details of the software design are introduced in the Software Design part.
The hardware design of our "Sweeper" consisted of an R-Pi, five servos (two continuous servos and three standard ones), a mechanical arm, and a camera module. The specific function of each part is introduced in the following.
The R-Pi with an attached PiTFT served as the center of the whole project. The PiTFT shows two on-screen buttons, “start” and “quit”, which control the motion states of the "Sweeper" (start, pause, resume), while physical button 23 is a bail-out button that terminates the program back to the console. In the meantime, once the "Sweeper" finds trash or puts trash at the designated location, the PiTFT shows “get trash” or “drop trash”. Figure 2 below shows the initial and terminated screens, and figure 3 shows the trash conditions displayed on the screen.
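The on-screen button logic can be sketched as follows. This is a hypothetical reconstruction, not the original code: the coordinates, names, and touch radius are ours. In the real program, pygame renders the labels at these positions on the 320x240 PiTFT and the touchscreen reports presses as (x, y) coordinates.

```python
# Hypothetical sketch of the PiTFT touch-button hit test (names and
# coordinates are illustrative). Each button is a centre point plus a
# touch radius; pygame would blit the "start"/"quit" labels at the same
# coordinates on the 320x240 PiTFT screen.

BUTTONS = {"start": (80, 120), "quit": (240, 120)}
RADIUS = 40  # assumed touch tolerance in pixels

def pressed(touch_pos):
    """Return the name of the button under a touch event, or None."""
    tx, ty = touch_pos
    for name, (cx, cy) in BUTTONS.items():
        if abs(tx - cx) <= RADIUS and abs(ty - cy) <= RADIUS:
            return name
    return None
```

In the main loop, a returned "start" toggles between the pause and resume states, while "quit" (or physical button 23) exits to the console.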
One mechanical arm is able to frame the target within a certain range and drag the target to a specific location.
The two continuous servos served as the wheels of the "Sweeper" because they can rotate continuously in both directions. Using software PWM signals output on GPIO pins 6 and 16, our robot can rotate in place, go forward, and turn left or right by a designed angle. Figure 4 shows the wire connection of the Parallax continuous servo motor.
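The wheel control can be sketched as below. The pin numbers come from the report, but the pulse widths are typical Parallax continuous-servo values (1.5 ms stop, 1.3/1.7 ms full speed), not the calibrated values from the original code. RPi.GPIO software PWM takes a duty cycle in percent, so a pulse width must be converted at the chosen PWM frequency.

```python
# Sketch of the continuous-servo wheel control (GPIO 6 and 16 per the
# report; pulse widths are generic Parallax values, not the tuned ones).

FREQ_HZ = 50  # assumed servo PWM frequency

def duty_cycle(pulse_ms, freq_hz=FREQ_HZ):
    """Duty cycle (%) for a pulse of pulse_ms milliseconds at freq_hz."""
    return pulse_ms * freq_hz / 1000.0 * 100.0

STOP = duty_cycle(1.5)      # 1.5 ms -> servo holds still
FULL_CW = duty_cycle(1.3)   # full speed clockwise
FULL_CCW = duty_cycle(1.7)  # full speed counter-clockwise

def drive(left_pwm, right_pwm, left_dc, right_dc):
    """Apply duty cycles to the two RPi.GPIO PWM objects (pins 6 and 16)."""
    left_pwm.ChangeDutyCycle(left_dc)
    right_pwm.ChangeDutyCycle(right_dc)

# Because the wheels are mirrored, going forward means one servo runs CW
# and the other CCW, e.g. drive(left, right, FULL_CCW, FULL_CW); turning a
# designed angle is a timed call with both servos running the same way.
```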
The two standard servos served as the driving force of the robotic arm. A standard servo can hold any position between 0 and 180 degrees. Using software PWM signals output on GPIO pins 5 and 19, in the initial state the servos hold the arm lifted at 90 degrees until the put-down signal is received. When the target is close enough, as determined by our algorithm, these two standard servos rotate another 90 degrees, which puts down the mechanical arm in order to frame the object. Similarly, the lowered arm will not lift up until it receives the lift-up signal again, which means the drop-off location has been reached. The lift-up function can be seen in the portion of code below, and put-down is very similar. Figure 5 shows the lifted and lowered positions of the mechanical arm.
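A hedged sketch of what lift_up() might look like follows. The function name, the 90-degree hold position, and pins 5 and 19 come from the report; the angle-to-duty-cycle mapping is a generic standard-servo assumption (1.0 to 2.0 ms over 0 to 180 degrees at 50 Hz), not the original calibration.

```python
# Illustrative reconstruction of lift_up() (pulse-width mapping assumed).

def angle_to_duty(angle_deg, freq_hz=50):
    """Duty cycle (%) for a standard servo at angle_deg (0-180 degrees),
    assuming a 1.0-2.0 ms pulse range."""
    pulse_ms = 1.0 + (angle_deg / 180.0)
    return pulse_ms * freq_hz / 10.0  # = ms/1000 * Hz * 100

def lift_up(left_arm_pwm, right_arm_pwm):
    """Raise the arm back to the 90-degree hold position. The two servos
    face each other, so the real code may need mirrored angles."""
    left_arm_pwm.ChangeDutyCycle(angle_to_duty(90))
    right_arm_pwm.ChangeDutyCycle(angle_to_duty(90))
```

put_down() would be identical apart from driving the servos a further 90 degrees to lower the arm over the object.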
The last standard servo served as the camera gimbal. In order to ensure that the camera's field of view does not shake, which is a disadvantage of software PWM, we used pigpio (hardware PWM) to keep the camera steady. According to the pigpio pin documentation, we chose pin 12 as the hardware PWM output. The code below is the import and initialization part of the hardware PWM.
When looking for garbage, the camera looks down in order to keep as much of the target's area in view as possible even when the "Sweeper" is very close to the object. After grabbing the garbage, the lowered mechanical arm would block a low camera and limit the field of view, which would hurt the accuracy of location detection. Therefore, the camera is raised by a certain angle to avoid this effect. The testing code follows below.
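That test can be sketched as two gimbal positions. The function names follow the report (look_down(), look_up()); the pulse widths are illustrative placeholders, since the report does not give the tuned angles.

```python
# Sketch of the look_down()/look_up() gimbal test. The gimbal servo on
# GPIO 12 is driven by pigpio hardware PWM at 50 Hz; pulse widths below
# are assumed, not the calibrated values.

LOOK_DOWN_MS = 1.2  # assumed: camera tilted toward the floor to search
LOOK_UP_MS = 1.6    # assumed: camera raised after grabbing the trash

def set_gimbal(pi, pulse_ms, gpio=12, freq_hz=50):
    """Hold the gimbal at the given pulse width via hardware PWM."""
    pi.hardware_PWM(gpio, freq_hz, int(pulse_ms * freq_hz * 1000))

def look_down(pi):
    set_gimbal(pi, LOOK_DOWN_MS)

def look_up(pi):
    set_gimbal(pi, LOOK_UP_MS)
```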
The last part of the hardware in our project is the camera. The camera module used for this robot was the PiCamera Module V2, which allowed easy integration with the R-Pi.
The whole hardware circuit design is shown below in figure 6.
We use a camera at the front of the “Sweeper” to capture pictures. For each frame, we classify the different types of objects by color: blue objects represent trash 1 and yellow objects represent the drop location of trash 1; green objects represent trash 2 and purple objects represent the drop location of trash 2. For detection, if there is a target object in the frame, the “Sweeper” slowly adjusts its direction until the target object is in the middle of the frame, then goes forward while correcting its direction based on the following frames; if there is no target object in the frame, it keeps rotating quickly until the camera captures one. For picking up and dropping trash, the “Sweeper” judges the distance by the area of the target object in the frame: if the object's area is larger than a threshold, it decides that it has reached the designated location. If it reaches a piece of trash, it runs the put_down() and look_up() functions to put down the mechanical arm and raise the camera angle, and switches the color it is searching for to that of the drop location. If it reaches a drop location, the “Sweeper” runs the lift_up() and look_down() functions to lift the mechanical arm and lower the camera angle, and switches the color back to the next type of trash; then, to avoid hitting the drop location, the “Sweeper” backs up before starting the next round of searching.
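The per-frame decision described above can be condensed into one hypothetical function. The thresholds below are placeholders, not the tuned values, and the action names are ours; each returned action would map onto the wheel and arm routines described earlier.

```python
# Condensed, illustrative version of the per-frame control decision
# (frame width, centre band, and area threshold are assumed values).

FRAME_WIDTH = 320
CENTER_BAND = 40       # pixels around the frame centre counted as "centred"
AREA_THRESHOLD = 9000  # blob area (px^2) at which the robot has "arrived"

def decide(target):
    """target is None, or (cx, area) for the largest colour blob in frame."""
    if target is None:
        return "rotate"        # spin quickly until something is seen
    cx, area = target
    if area >= AREA_THRESHOLD:
        return "arrived"       # close enough: grab or drop the trash
    if cx < FRAME_WIDTH / 2 - CENTER_BAND:
        return "turn_left"     # adjust slowly toward the target
    if cx > FRAME_WIDTH / 2 + CENTER_BAND:
        return "turn_right"
    return "forward"           # centred: drive toward it
```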
It is very convenient to use the OpenCV library in Python to achieve object detection. First we installed OpenCV on the Raspberry Pi. Following the instructions, we installed the precompiled version of OpenCV with sudo apt-get install libopencv-dev python-opencv.
Our robot performed as planned and met the goals we expected. It performed perfectly in controlled environments with good lighting, no dark or black objects, and no other objects similar to the trash or the drop locations. In such a well-defined environment there was almost no noise, and the "Sweeper" successfully detected and collected the target. The "Sweeper" could also distinguish different kinds of trash as well as different recycle locations. Meanwhile, the buttons on the PiTFT controlled the specific motions of the "Sweeper" just as we expected, and the trash conditions were displayed successfully.
The robot took a relatively long time to find the target when the target was far from the "Sweeper", and since the speed of our robot is relatively slow, it also took some time to reach the target. Still, the robot was able to track down the target object and deliver the trash to the drop-off point eventually.
Figure 7 below shows the functions finally realized in our project.
The overall final design, development, and demonstration were a success for our "Sweeper". The robot was able to navigate an open environment, detect trash, and deliver it to the intended location without any issues. We successfully implemented a colored-object detection algorithm based on OpenCV on the R-Pi in order to detect both the target and the drop-off location. The robot can search indefinitely until the target is found or the stop button is pressed. The "Sweeper" integrates all the aforementioned modules to create a robust cleaning robot that performs its tasks fully autonomously.
During the final project, we faced a number of problems:
First of all, our aim was to detect different colors to identify the different kinds of trash and locations, and OpenCV requires us to convert RGB values to HSV values. When we chose red for one location and blue for one kind of trash, the detector could not distinguish one from the other because their HSV ranges overlapped. Our solution was to switch to another color whose HSV range does not overlap with blue (purple). The accuracy of detection then improved a lot.
Secondly, when we used the Pi camera to capture object images, the software PWM could not avoid shaking even after calibration, so we switched to hardware PWM as the output signal for the camera gimbal servo. After that, the camera stayed perfectly steady. For the hardware PWM, we first chose GPIO 26 as the output pin, but it reported that there was no hardware PWM signal. After searching the pigpio library documentation, we found that only pins 12 and 13 could generate a hardware PWM signal for us.
During the process, we also learned a lot from this project:
First of all, we deepened our understanding of OpenCV's application to image processing and applied OpenCV to implement several functions of this final project.
Second, we deepened our understanding of hardware PWM and learned the difference between software PWM and hardware PWM; both are applied in our final project. We also deepened our understanding of servos through their datasheets, learned the differences in usage and function between standard and continuous servos, and applied both kinds in the final project.
Finally, by applying what we learned in both Lab 2 and Lab 3, we made good use of what ECE 5725 taught us.
There are several possible future improvements that we foresee would greatly improve the functionality of the "Sweeper".
First of all, ultrasonic sensors could be used to avoid obstacles such as walls and objects other than the target. Adding a suitable number of sensors would definitely improve the accuracy and reliability of our project.
Secondly, multiprocessing could be applied to our project, which would improve the processing speed significantly. There is some time delay due to image processing, which often causes inaccuracy in target detection. Using multiple processes could shorten this delay so that the robot responds as soon as the target is found.
Designed the overall software architecture.
Helped with hardware design.
Designed the overall hardware architecture.
Helped with software design.