A two-wheel robot is used, and all other sensors and devices are attached to it. The Raspberry Pi works as a server, and a website is built to send requests to the server
and control the robot.
Node.js is used to build the server and the controller website. The front-end is mainly developed with React.js, and the back-end is mainly developed with Express.js together with Multer.
An important issue is that when setting up the Pi as a Linux server, port forwarding is normally needed so that outside devices can find the server. This is a router setting, but we are not allowed to configure the routers at Cornell directly. Our solution is to connect the server and the controller device to the same Wi-Fi network. In this way, the website can reach the server by its IP address and port number without any port forwarding.
We also have to modify a Linux config file to set a static IP address, so that the controller can always use the right IP address without checking the Pi's network information every time it starts.
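As a sketch, on Raspberry Pi OS a static address can be set in /etc/dhcpcd.conf. The interface name, router, and DNS addresses below are assumptions about the lab network; only the Pi's address (the one used later in this report) comes from the source:

```
interface wlan0
static ip_address=10.148.2.206/24
static routers=10.148.2.1
static domain_name_servers=10.148.2.1
```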
First of all, we need to control the robot's movement. A FIFO (named pipe) is used for communication between the server and the motor-control program: commands received from the controller are written into the FIFO and read out by the Python program that drives the motors.
For the command writing, bash scripts write the command strings into the FIFO. Whenever a valid request reaches the server, the server runs the corresponding bash script, which writes the command.
A Python program continuously reads the FIFO for commands. When reading, the program has to check whether the FIFO is empty: once every writer has closed its end of the pipe, a read no longer blocks and instead immediately returns an empty string, so a naive loop would spin on empty reads. The solution is to always check whether the string read is empty, and if it is, wait for a short time before the next read. In this way, the loop does not use up all the CPU time.
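The read loop described above can be sketched as follows. The FIFO path and command names are placeholders, and the writer here stands in for the server's bash scripts:

```python
import os
import tempfile
import threading
import time

# Placeholder path, not the one used in the project.
FIFO_PATH = os.path.join(tempfile.mkdtemp(), "cmd_fifo")
os.mkfifo(FIFO_PATH)

def read_commands(fifo_path, handle, stop):
    # open() blocks until a writer connects. After every writer has
    # closed its end, readline() returns "" immediately instead of
    # blocking, so an empty read means "no command yet": sleep briefly
    # rather than spinning and burning all the CPU time.
    with open(fifo_path, "r") as fifo:
        while not stop.is_set():
            line = fifo.readline()
            if line == "":
                time.sleep(0.05)
                continue
            handle(line.strip())

received = []
stop = threading.Event()
reader = threading.Thread(target=read_commands,
                          args=(FIFO_PATH, received.append, stop))
reader.start()

# A writer (in the project, one of the server's bash scripts) sends
# two commands and closes the pipe.
with open(FIFO_PATH, "w") as fifo:
    fifo.write("forward\nleft\n")

time.sleep(0.3)   # give the reader time to drain the FIFO
stop.set()
reader.join()
print(received)   # -> ['forward', 'left']
```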
Once a valid command is read, the Python program calls a moving function that changes the duty cycles of the two PWM signals, which in turn changes the speeds of the two servos.
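As an illustration, the moving function can be sketched as below. The duty-cycle values are placeholders, not the tuned values used on the actual robot, and the fake PWM class only stands in for RPi.GPIO's PWM objects:

```python
# Placeholder duty cycles (percent). The two servos face opposite
# directions, so "forward" uses mirrored values; ~7.5% is neutral
# for many continuous-rotation servos.
DUTY = {
    "forward":  (8.5, 6.5),
    "backward": (6.5, 8.5),
    "left":     (6.5, 6.5),
    "right":    (8.5, 8.5),
    "stop":     (7.5, 7.5),
}

def move(command, left_pwm, right_pwm):
    # Change the duty cycle of each PWM signal to set the wheel speeds.
    left, right = DUTY[command]
    left_pwm.ChangeDutyCycle(left)
    right_pwm.ChangeDutyCycle(right)

class _FakePWM:
    # Stand-in for an RPi.GPIO PWM object, for testing off the Pi.
    def ChangeDutyCycle(self, duty):
        self.duty = duty

left, right = _FakePWM(), _FakePWM()
move("forward", left, right)
print(left.duty, right.duty)   # -> 8.5 6.5
```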
A return function is also added. Whenever a moving command is executed, the Python program records the command and its running time in two stacks. When the "return" command is received, the program replays all of the recorded commands in reverse order, so the robot drives back to its start point. There is also a command to clear the two stacks, which sets the robot's current position as the new start point.
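A minimal sketch of the two-stack return feature, assuming each moving command has a simple opposite (the report only states that commands and running times are recorded and replayed in reverse order; the command names are placeholders):

```python
# Assumed inverses: replaying the opposite command for the same
# duration, newest-first, retraces the path back to the start.
OPPOSITE = {"forward": "backward", "backward": "forward",
            "left": "right", "right": "left"}

class ReturnTracker:
    def __init__(self, drive):
        self.drive = drive              # drive(command, seconds)
        self.cmd_stack, self.time_stack = [], []

    def execute(self, command, seconds):
        self.drive(command, seconds)
        self.cmd_stack.append(command)  # record for the return trip
        self.time_stack.append(seconds)

    def clear(self):
        # The "clear" command: current position becomes the new start.
        self.cmd_stack.clear()
        self.time_stack.clear()

    def go_home(self):
        # Pop both stacks so the moves replay in reverse order, each
        # with its opposite command, driving the robot back home.
        while self.cmd_stack:
            cmd = self.cmd_stack.pop()
            secs = self.time_stack.pop()
            self.drive(OPPOSITE[cmd], secs)

log = []
tracker = ReturnTracker(lambda c, s: log.append((c, s)))
tracker.execute("forward", 2.0)
tracker.execute("left", 1.0)
tracker.go_home()
print(log[2:])   # -> [('right', 1.0), ('backward', 2.0)]
```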
A Pi camera is mounted on the front of the robot so that the driver can see what is ahead. The command 'raspivid -t 0 -w 1280 -h 720 -fps 20 -o -' records the camera video; its options set the resolution and the frames per second. The video stream is then piped into 'nc -k -l 8090', which listens on a specific port. This command serves the video stream on the server's local port, so the remote control side can connect to this port over TCP and play the live video.
On the controller side, MPlayer is used to receive the live stream. After specifying the server address, port number, and video format, MPlayer can play the Pi camera's live video with low latency using the command "./mplayer -fps 200 -demuxer h264es ffmpeg://tcp://10.148.2.206:8090".
After recording is started with the 'raspivid' command, the Pi begins pushing the video stream to the port. When MPlayer starts to read the stream, it first plays back the frames buffered before it connected, so we need to wait a short time for playback to catch up to the live, low-latency stream before starting to drive.
Audio Recording On Robot Side
The sound recorded near the victim is necessary to judge the victim's status and the current situation.
At first, we tried to combine the audio stream with the video stream and push them together to the local port. We used "arecord --device=hw:1,0 --format S16_LE --rate 44100 -c1 -d 10 /home/pi/proj/record/cache/a.wav" to record the sound, but the attempt to pipe it through 'nc' failed. We then tried to merge the sound channel into the video stream, and we successfully used an 'ffmpeg' command to save the video with sound to local storage. However, we couldn't push that signal to the local port, because 'ffmpeg' required an output file and would not accept a pipe. We also tried to write the combined signal into another FIFO, read it back, and push it to the local port, but 'ffmpeg' did not accept a FIFO as the output file either; it threw an error about a wrong format for the FIFO. After that, we tried to push the combined signal directly with 'ffmpeg', over both TCP and UDP, and even tried pushing it straight to the controller's IP address, but nothing worked. After a week's work without any luck, we decided to find another way to deliver the audio message.
We decided to use local files to transfer the audio message. A command records for ten seconds: if any of the three buttons on the PiTFT (other than the reboot button) is pushed, a ten-second recording starts. We also considered the situation where the victim has difficulty moving his or her arm, so there is also a button on the control side to start the ten-second recording remotely. After the recording completes, the controller can send a request, fetch the audio file, and play it in the web browser. In this way, the audio message is transferred from the robot side to the controller side.
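As a sketch, the ten-second recording can be wrapped in a small helper that builds the arecord invocation quoted earlier in this report; the function name and the choice to return the argument list (rather than run it directly) are our own:

```python
def record_clip(path, seconds=10):
    # Build the arecord command used on the robot: ALSA device hw:1,0,
    # 16-bit little-endian samples, 44.1 kHz, mono, fixed duration.
    return ["arecord", "--device=hw:1,0", "--format", "S16_LE",
            "--rate", "44100", "-c1", "-d", str(seconds), path]

# On the robot, a button handler would then run, e.g.:
#   subprocess.run(record_clip("/home/pi/proj/record/cache/a.wav"), check=True)
cmd = record_clip("a.wav")
print(cmd[0], cmd[-1])   # -> arecord a.wav
```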
Image Upload & Display
In order to upload the photos used in the rescue process, a middleware called Multer is used in the uploading procedure. Multer adds a body object and a file object to the request object: the former contains the values of the text fields of the form, and the latter contains the file to be uploaded.
In the front-end part, the form's enctype is set to "multipart/form-data", because Multer will not process a form that is not multipart. In the back-end part, the destination and the file names of the images are easily set with Multer's disk storage engine. The disk storage engine is one of Multer's most convenient components, and we recommend this middleware to anyone who wants to upload files to a web server.
After an image is uploaded to the server, a Python program called image.py draws the picture on the PiTFT screen. Several server-side Python programs were written for this project, and most of them are invoked through the "child_process" module, but image.py is a little different: it is launched at boot, so the image is shown directly on the PiTFT screen once the server is started.
Voice Recording On Controller Side
A voice message can be very useful during the rescue procedure. On the control side, we can record an audio message and send it to the server side; after the audio is uploaded, we have the Raspberry Pi play it.
In the front-end, we set up a "Start Recording" button and a "Stop and Send" button. We first import MicRecorder and use it to record the audio. When the start button is clicked, the recorder starts recording, and it stops when the stop button is clicked. The recorded audio is then sent to the server through the /audio/up route. Once the server has received the audio, we can click the play button to play it.
In the back-end, we set up the /audio/up route and the /audio/play route, again using the Multer middleware to handle the upload. The audio is uploaded to the /audio/cache folder and played via the /audio/play route, which runs a bash file called "playAudio" that uses the omxplayer command to play the audio in the /audio/cache folder. In this way, the audio message is transferred from the controller side to the robot side.
We have a robot car that needs to be controlled from the controller side. The robot car should complete two tasks: the first is to move and turn as commanded, and the second is to return automatically.
For the first task, things went relatively smoothly. We first checked the trajectory of the robot car and observed that the speeds of the two wheels were different, so we continuously tuned the parameters until the robot moved as expected.
For the second task, things did not go as well. We spent a lot of time on the problem that the "return" function did not follow the expected track. Finally, we added a time.sleep call to move_control.py: the program reads the FIFO, sleeps for a short time if it is empty, and otherwise calls the move function. After adding the sleep, we observed that the "return" function worked as expected.
We also needed to complete the following tasks: the first is to transfer the video captured by the camera to the control side, the second is to realize bidirectional audio transmission, and the third is to upload an image and display it on the PiTFT.
In the first task, we observed a certain delay in the video picture, and the video was a little fuzzy. We calibrated the camera and improved the clarity of the video.
In the second task, we verified that the audio sent from our controller side could be successfully played through the Raspberry Pi. We tried starting the program from the command line and then starting it from the server. After that check passed, we tested whether audio could be sent back to our controller side, and we tried three different microphones to ensure that the volume of the recorded audio was comparable.
In the third task, we tested whether files could be uploaded successfully. We found that relative paths did not resolve correctly for the scripts using Multer, so we had to use absolute paths everywhere in the modules that use Multer. We then tested whether the image could be displayed on the PiTFT, first by running the program from the command line, and then by checking that the function starts at power-on.