Remote Control Rescue Robot

Demo Date: 2019.12.4
A Project By Fan Lu & He Gao


Demonstration Video


Introduction

This project builds a robot for rescue missions that is operated by remote control. It performs multiple functions that help the rescue team, or help the victims to save themselves.

A Raspberry Pi is used as a server, processing requests sent from a remote controller over a Wi-Fi connection. The remote controller is a web page developed on a laptop using Node.js and React. The Pi processes each request and controls the robot. Several functions have been implemented: driving the robot, streaming live video shot by the camera, transferring audio messages bidirectionally between the controller and the server, and uploading and displaying images.



Project Objective:

  • Control the robot to move and turn as commanded
  • Make the robot return to its start point automatically
  • Stream live video captured by the camera
  • Transfer audio messages from the robot side to the controller side
  • Upload images and display them on the piTFT
  • Play audio messages coming from the controller

Design

A two-wheel robot is used, and all sensors and devices are attached to it. The Raspberry Pi acts as a server, and a website is used to send requests to the server and control the robot.

Node.js is used to build the server and the controller website. The front end is mainly developed with React.js, and the back end with Express.js and Multer.

An important issue is that when setting up the Pi as a Linux server, port forwarding is normally needed so that outside devices can reach it. That requires changing router settings, which we are not allowed to do directly at Cornell. Our solution is to connect the server and the controller device to the same Wi-Fi network; the website can then reach the server by its IP address and port number without any port forwarding.

We also modified a Linux configuration file to set a static IP address, so that the controller can always use the correct IP address without checking the Pi's network information every time it starts.
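On Raspbian, a static address is usually set in /etc/dhcpcd.conf. A sketch of the relevant fragment is below; the 10.148.2.206 address matches the one the controller uses later, while the interface name, router, and DNS entries are assumptions about the local network:

```
# /etc/dhcpcd.conf (illustrative values)
interface wlan0
static ip_address=10.148.2.206/24
static routers=10.148.2.1
static domain_name_servers=10.148.2.1
```

After editing the file, reboot (or restart the dhcpcd service) for the address to take effect.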

Moving

First of all, we need to control the robot's movement. A FIFO is used for communication between the controller and the server: the controller writes commands into the FIFO, and the server reads commands from it.

For command writing, bash scripts write command strings into the FIFO. Whenever a valid request reaches the server, the server runs the corresponding bash script, which writes the command to the FIFO.

A Python program continuously reads the FIFO for commands. When reading, the program has to check whether the FIFO is empty: Python only blocks on the first read, and once the writing side closes its end, readline() returns an empty string immediately instead of blocking, so a naive loop would keep reading empty strings at full speed. The solution is to check whether the string read is empty and, if it is, wait for some time before the next read. This way the reader does not use up all the CPU time.

Once a valid command is read, the Python program calls a moving function that changes the duty cycles of the two PWM signals, which in turn changes the speeds of the two servos.
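A minimal sketch of such a command-to-duty-cycle mapping is below. The specific values (7.5% for stop, roughly ±1% for motion at 50 Hz) are typical for continuous-rotation servos like the Tower Pro parts referenced later, but they are assumptions here, as is the symmetric wiring; mirrored servo mounting may require inverting one side, so the table must be calibrated against the hardware.

```python
# Assumed duty cycles for continuous-rotation servos driven at 50 Hz:
# ~7.5% stops the wheel; higher/lower values spin it in opposite directions.
STOP, FWD, REV = 7.5, 8.5, 6.5

def duty_for(command):
    """Map a movement command to (left, right) PWM duty cycles."""
    table = {
        "forward":  (FWD, FWD),
        "backward": (REV, REV),
        "left":     (REV, FWD),   # spin in place: wheels turn oppositely
        "right":    (FWD, REV),
        "stop":     (STOP, STOP),
    }
    return table[command]

# Applying them on the Pi would look like (hardware side, not run here):
#   left_pwm.ChangeDutyCycle(l); right_pwm.ChangeDutyCycle(r)
```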

Return Function:
A return function is added. Whenever a moving command is executed, the Python program records the command and its running time in two stacks. When the "return" command is received, the program replays the recorded commands in reverse order so that the robot retraces its path back to the start point. There is also a command to clear the two stacks, which sets the robot's current position as the new start point.
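The two-stack bookkeeping can be sketched as below. One detail the sketch makes explicit: replaying commands newest-first is not enough to retrace the path, each command must also be inverted (forward ↔ backward, left ↔ right). The class name and inversion table are ours, not the project's.

```python
class CommandRecorder:
    """Record (command, duration) pairs and replay them inverted, newest first."""

    OPPOSITE = {"forward": "backward", "backward": "forward",
                "left": "right", "right": "left"}

    def __init__(self):
        self.commands = []    # stack of executed commands
        self.durations = []   # parallel stack of running times

    def record(self, command, duration):
        self.commands.append(command)
        self.durations.append(duration)

    def clear(self):
        """Forget the history: the current position becomes the new start point."""
        self.commands.clear()
        self.durations.clear()

    def return_sequence(self):
        """Pop every recorded move and yield its inverse, most recent first."""
        while self.commands:
            cmd = self.commands.pop()
            dur = self.durations.pop()
            yield self.OPPOSITE.get(cmd, cmd), dur
```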

Camera Stream

A Pi camera is mounted at the front of the robot so that the driver can see what is ahead. The command 'raspivid -t 0 -w 1280 -h 720 -fps 20 -o -' records the camera video; its options set the resolution and frame rate. The video signal is then piped into 'nc -k -l 8090', which listens on a specific port: 'raspivid -t 0 -w 1280 -h 720 -fps 20 -o - | nc -k -l 8090'. This publishes the video stream on the server's local port, so the remote control side can access the live camera stream over TCP and play it.

On the controller side, MPlayer is used to catch the live stream. After specifying the server address, port number, and video format, MPlayer plays the Pi camera's live video with low latency using the command "./mplayer -fps 200 -demuxer h264es ffmpeg://tcp://10.148.2.206:8090".

Once recording starts with the 'raspivid' command, video is pushed to the port. When MPlayer begins reading the stream, it first plays the frames buffered before it connected, so we need to wait some time until the stream catches up to low latency before starting to drive.

Audio Recording On Robot Side

Sound recorded from the victim is necessary for judging the victim's status and the situation on site.

At first, we tried to combine the audio stream with the video stream and push them together to the local port. We used "arecord --device=hw:1,0 --format S16_LE --rate 44100 -c1 -d 10 /home/pi/proj/record/cache/a.wav" to record the sound, but the attempt to pipe it through 'nc' failed. We then managed to merge the sound channel into the video stream with 'ffmpeg' and save the combined video to local storage. However, we couldn't push that signal to the local port, because 'ffmpeg' required an output file and would not accept a pipe. We also tried writing the combined signal into another FIFO, then reading from that FIFO and pushing it to the local port; again, 'ffmpeg' did not accept a FIFO as its output file and threw a wrong-format error. After that, we tried to push the combined signal directly with 'ffmpeg', over both TCP and UDP, and even straight to the controller's IP address, but nothing worked. After a week without any luck, we decided to find another way to handle the audio messages.

We decided to transfer the audio message through local files. A command is set up to record for ten seconds: pressing any of the three buttons on the piTFT (except the reboot button) starts a ten-second recording. We also considered the situation where the victim has difficulty moving an arm, so there is a button on the control side to start the ten-second recording remotely. After the recording completes, the controller can send a request to read the audio file and play it in the web browser. In this way, the audio message is transferred from the robot side to the controller side.
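A sketch of the recording trigger, reusing the arecord invocation quoted earlier; the helper names and the idea of building the argument list in Python are ours:

```python
import subprocess

RECORD_SECONDS = 10
OUT_WAV = "/home/pi/proj/record/cache/a.wav"   # path from the project

def arecord_cmd(device="hw:1,0", seconds=RECORD_SECONDS, out=OUT_WAV):
    """Build the arecord argument list for one fixed-length capture."""
    return ["arecord", "--device=" + device, "--format", "S16_LE",
            "--rate", "44100", "-c1", "-d", str(seconds), out]

def record_clip():
    """Run one ten-second recording; this is what a piTFT button press
    or the remote-start request would invoke (button wiring not shown)."""
    subprocess.run(arecord_cmd(), check=True)
```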

Image Upload & Display

To upload photos used in the rescue process, a middleware called Multer handles the upload procedure. Multer adds a body object and a file object to the request object: the former contains the values of the form's text fields, and the latter contains the file to be uploaded.

In the front end, the enctype is set to "multipart/form-data", because Multer will not process a form that is not multipart. In the back end, the destination and the file names of the images are easily set with Multer's disk storage engine, one of its most convenient components for uploading files to a web server.

After an image is uploaded to the server, a Python program called image.py draws it on the piTFT screen. Several server-side Python programs were written for this project, and most of them are invoked through the "child_process" module, but image.py is a little different: it is launched at boot, so the image is shown on the piTFT screen as soon as the server starts.
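Since uploads can have arbitrary sizes, image.py presumably scales each picture to the piTFT before drawing it. Below is a letterboxing helper under that assumption; the 320x240 screen size matches the piTFT, while the pygame drawing step is only sketched in the comment:

```python
def fit_to_screen(img_w, img_h, screen_w=320, screen_h=240):
    """Return the largest (w, h) that fits on the 320x240 piTFT while
    preserving the image's aspect ratio."""
    scale = min(screen_w / img_w, screen_h / img_h)
    return int(img_w * scale), int(img_h * scale)

# Drawing would then be roughly (framebuffer setup omitted):
#   surf = pygame.image.load(path)
#   surf = pygame.transform.scale(surf, fit_to_screen(*surf.get_size()))
#   screen.blit(surf, surf.get_rect(center=(160, 120)))
#   pygame.display.flip()
```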

Voice Recording On Controler Side

A helpful message can be very valuable during a rescue. On the control side, we can record an audio message and send it to the server; after the upload, the Raspberry Pi plays it.

In the front end, we set up a "Start Recording" button and a "Stop and Send" button. We import MicRecorder and use it to record audio: clicking the start button begins recording, and clicking the stop button ends it. The recorded audio is sent to the server through the /audio/up route. Once the upload succeeds, we can click the play button to play it.

In the back end, we set up the /audio/up and /audio/play routes, again using the Multer middleware to handle the upload. The audio is uploaded to the /audio/cache folder and played via the /audio/play route, which runs a bash script called "playAudio" that uses the omxplayer command to play the audio in /audio/cache. In this way, the audio message is transferred from the controller side to the robot side.


Drawings


Signal flow


Control panel website


Testing

We have a robot car controlled from the controller side. It has to complete two tasks: move and turn as commanded, and return automatically.

For the first task, things went relatively smoothly. We first checked the trajectory of the robot car, observed that the speeds of the two wheels were different, and repeatedly adjusted the parameters until the robot moved as expected. The second task did not go as well: we spent a lot of time on the problem that the "return" movement did not follow the expected track. Finally we added a time.sleep call to move_control.py: the program reads the FIFO, sleeps for some time if it is empty, and otherwise calls the move function. With the sleep in place, the "return" function worked as expected.

Also, we needed to complete such tasks: the first task is to transfer the video captured by the camera to the control side, the second task is to realize bidirectional audio transmission, the third task is to upload an image and display it on the piTFT.

In the first task, we observed a certain delay in the video and some blurriness; we calibrated the camera and improved the clarity. In the second task, we verified that audio sent from the controller side plays successfully through the Raspberry Pi, first starting the program from the command line and then from the server. After that check passed, we tested whether audio could be sent back to the controller side, trying three different microphones to make sure the recorded volume was adequate. In the third task, we tested whether files upload successfully. We found that Multer could not resolve the right relative paths for scripts, so we switched to absolute paths in all Multer-related modules. We then tested whether the image displays on the piTFT, first by running the program from the command line and then by confirming that it starts at power-on.


Result & Conclusions

We successfully built a rescue robot that can be controlled over Wi-Fi. It moves as commanded on two servo-driven wheels and can return to any point we set as the start point. It carries a live-stream camera whose video the controller can play, a piTFT screen to display images sent from the controller, and a speaker and microphone to transfer audio messages bidirectionally, so the operator can hear what the victim says and the victim can hear the driver's words.

The Raspberry Pi runs as a Linux server on the robot, handling command requests, and the controller is a website. We implemented almost all of the features designed at the start of the project.


Future Work

With more time, we would implement the live video stream with audio; more study of the relevant command documentation is necessary for that. We would also add measurement and feedback on the servos to make sure the wheels run at the desired speed and the robot can accurately return to its start point. The control panel should also be refined.


Work Distribution


Project group picture


Fan Lu

fl427@cornell.edu

Designed the Image and Voice system and completed all testing.


He Gao

hg446@cornell.edu

Designed the moving system, camera live stream system, Front-end and back-end.


Parts List

Total: $97


References

PiCamera Document
Tower Pro Servo Datasheet
Bootstrap
React
Pigpio Library
R-Pi GPIO Document
Lab1 FIFO
Lab3 Robot Control
Pi Camera Live Stream
Multer
Mplayer for stream
Arecord
FFMPEG
FFMPEG streaming

Code Appendix


Please use the following GitHub link:
https://github.com/Leopard12286027770/5725-Project.git