We set out to build a visual interface that displays real-time location data for multiple users. The system requires a cloud component and a mobile app to allow remote communication between users in various locations, as well as a local endpoint to download and display this data. The system is built using IoT tools and services such as Node-Red and MQTT.
Our final project stemmed from our interest in, and not insignificant use of, location-sharing apps. We use these to find our friends on campus, check in on people during dangerous events or catastrophes in their regions, or simply see whether someone has gotten home from work yet. While several software apps such as Google Maps and Find My Friends implement these features, we wanted to build a hardware interface that would give users a visual display of all their friends and family on a map. We wanted the map to be interactive, with zooming and scrolling functionality, so that users with friends and family spread across the world could focus on different regions or view all connected users at once. The project also presented an avenue to learn more about cloud servers and everything that goes into setting up an end-to-end IoT system.
Figure 1: MQTT Data Flow
OwnTracks is a free mobile application available for Android and iOS. When configured with the correct settings, it connects to a cloud server and publishes the phone's location data. For our project demonstration, we installed the app on three phones, two located on the Cornell University campus and one in New York City. The app communicates with the server using MQTT, a lightweight communication protocol built for embedded and IoT applications. The cloud server we used was CloudMQTT, which provides free plans for up to 5 connected devices and acts as a message broker for the data published by OwnTracks. Its online interface provides a WebSocket UI that shows published messages in real time, making it easy to debug connections. The broker can also be reached from the Node-Red interface, which made it suitable for our purposes. Once we set up a plan on the CloudMQTT website, we had access to the port numbers needed to establish communication between the OwnTracks app and the broker.
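As a rough illustration of this setup, the sketch below (not our actual code) subscribes to OwnTracks messages with the paho-mqtt Python library; the hostname, port, and credentials are placeholders standing in for the values shown on a CloudMQTT instance page:

```python
# Sketch of an MQTT subscriber for OwnTracks data; the broker details
# below are placeholders, not real CloudMQTT credentials.
BROKER_HOST = "m00.cloudmqtt.com"    # placeholder instance hostname
BROKER_PORT = 11111                  # placeholder port from the instance page
USERNAME, PASSWORD = "user", "pass"  # placeholder credentials

def user_from_topic(topic):
    # OwnTracks publishes on topics of the form owntracks/<user>/<device>,
    # so the second path segment identifies the user.
    return topic.split("/")[1]

def on_message(client, userdata, msg):
    # Called by the client loop for every message the broker delivers.
    print(user_from_topic(msg.topic), msg.payload.decode())

if __name__ == "__main__":
    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    client = mqtt.Client()
    client.username_pw_set(USERNAME, PASSWORD)
    client.on_message = on_message
    client.connect(BROKER_HOST, BROKER_PORT)
    client.subscribe("owntracks/#")  # wildcard: all users and devices
    client.loop_forever()            # block and dispatch incoming messages
```

In our actual system, Node-Red's MQTT nodes played this subscriber role; the sketch only shows what the connection parameters and topic structure look like.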
Node-Red is a programming tool built by IBM for IoT devices. Its graphical programming flow allows communication to be established between hardware and embedded systems (in our case, the RPi) and online servers, APIs, etc. We used the interface to communicate with the CloudMQTT platform described in the previous section. Specifically, we used MQTT nodes to interface with the broker, the JSON converter node to parse the incoming messages, and the node-red-contrib-pythonshell node to execute a specified Python script and pass it the required information as arguments. Each time a new piece of location data arrived from a user, the Python script was executed again and run to completion. Additionally, every time the Node-Red flow was deployed, data from all users was passed through to the Python script (meaning that if there were 3 users connected in the Node-Red flow, the Python script would run 3 times at the initial deployment). After the initial deployment, location data from each user arrived at differing but somewhat regular intervals, depending on the user. The complete Node-Red flow is shown in Figure 2 below.
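For reference, an OwnTracks location publish, once parsed by the JSON node, is a small dictionary of fields. The field names below follow the OwnTracks JSON payload format; the values are invented for illustration:

```python
# Illustrative OwnTracks "location" message after JSON parsing.
# Field names follow the OwnTracks payload format; values are made up.
payload = {
    "_type": "location",  # OwnTracks message type
    "lat": 42.4534,       # latitude in degrees
    "lon": -76.4735,      # longitude in degrees
    "tst": 1544400000,    # Unix timestamp of the GPS fix
    "tid": "AB",          # short tracker ID configured in the app
}

# The map only needs the coordinates from location-type messages.
if payload["_type"] == "location":
    lat, lon = payload["lat"], payload["lon"]
```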
Figure 2: Node-Red Flow
The Python script called by the Node-Red flow was very basic. The incoming message, passed as a system argument, was checked for key terms identifying the user whose data was being sent. Each user was assigned his or her own output text file, so based on the key identifier, the script opened the corresponding file and wrote the received location data to it.
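A minimal sketch of such a script is shown below; the user identifiers, file names, and exact payload fields are illustrative assumptions rather than our actual values:

```python
# Sketch of the per-message script run by the pythonshell node.
# Node-Red passes the location message as the first command-line argument.
# User identifiers and file names here are illustrative placeholders.
import json
import sys

# Map each user's key identifier to that user's output text file.
USER_FILES = {
    "alice": "alice_location.txt",
    "bob": "bob_location.txt",
}

def record(message):
    data = json.loads(message)
    user = data.get("user", "")  # assumed key identifying the sender
    if user in USER_FILES:
        # Overwrite so the file always holds the latest fix;
        # the map page polls these files to place its pins.
        with open(USER_FILES[user], "w") as f:
            f.write("{},{}\n".format(data["lat"], data["lon"]))

if __name__ == "__main__":
    record(sys.argv[1])
```

Because the script runs to completion on every message, keeping only the latest coordinates in each file is enough for the map page to pick up the newest position on its next refresh.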
Figure 3: Map View Options and Zoom-In Features
On page start-up, the function required 3 calls to refresh() to retrieve all of the necessary location data to properly display the map. As a result, we chose to make these first 3 refreshes occur at 1-second intervals to decrease the wait time for the page to first load. Subsequent refreshes occurred once every 2 minutes, grabbing the most recent data from the text files to ensure that the pins on the map stayed accurate. This 2-minute interval can easily be modified in the code based on the map viewer's preference. Both the Python and HTML code can be found in the Code Appendix section.
The map was displayed on an HDMI monitor plugged into the RPi in place of the lab monitors used earlier in the course. The monitor was unfortunately not a touchscreen; however, scrolling and other functions could be accessed through the piTFT connected to the RPi.
We initially attempted to communicate between the HTML file that rendered the map and the Python script called by Node-Red using Flask, a Python web framework. This would have been a more sophisticated approach than our text-file scheme. However, because Node-Red required the Python script to run to completion each time a new location arrived, this conflicted with the HTML page, which had to keep running in order to keep the map open and update its pins. Thus, we were unable to implement our project using Flask. We also tried using TCP and UDP sockets to communicate between the scripts, but ran into similar problems. In the end, we found that using a common text file as the interface was the cleanest way to establish the required communication.
We were able to accomplish the majority of the goals outlined in the description. The basic functionality of the system is as described; however, we made some changes to the overall design along the way. While we initially planned to use an LED matrix to display locations on a map, we later chose to use the Google Maps API and an HDMI monitor instead, as this provided additional functionality such as zooming, scrolling, and a wider total viewable area than would have been possible with an LED matrix. We had also initially considered implementing additional features using the OwnTracks app's “Regions” feature, which allows demarcation of specific areas and could have been used to alert users when another user entered or left a given region. While we experimented with this on our apps, we did not incorporate it into our final design due to a shortage of time.
Apart from the changes described above, our system meets the requirements we originally defined: our map interface works in real time to update user locations on an LCD map display based on data the RPi pulls from a cloud service. We were able to distinguish between multiple users on the map by using different icons, and we preserved touchscreen functionality through the piTFT even though the LCD display itself was not a touchscreen. This met all of the goals outlined in our initial Project Proposal.
As mentioned in the results section, our project achieved the goals we set out to accomplish. We were able to set up a real-time location tracking system similar to those implemented by apps such as ‘Find My Friends’, but with a hardware interface included. The location data pushed to the map from the cloud is quite accurate; any remaining inaccuracy is more likely due to the phones' GPS than to data being lost as it moves through the different parts of the system.
One thing we realized would not work well was using the Node-Red system to add large numbers of people to one's map. While it is possible, each time a person is added to the map, a separate node must be manually created in Node-Red and connected to the JSON node so that Node-Red can receive data from the new source. Thus, it would not be wise to scale up the number of individuals on a map, given how inefficient and inconvenient this particular system is at adding new location sources.
All coding for the project was done with both partners present; we used pair-programming practices during all of our lab sessions. We took turns coding different parts of the design and made sure that both of us had the chance to work on all parts of the system. The report and the website were produced in a similar fashion: together, we wrote down bullet points for every section, each of us expanded on various sections, and we then built the webpage, again using pair-programming practices.
If we had more time to work on this project, we would implement additional features such as location history tracking. We could also implement filters that allow only certain groups of people, such as family or friends, to be viewed on the map. None of these features would require significant additional setup in terms of external interfaces or tools, but they would make the system more interactive and engaging.
Table 1 below shows a list of all of the materials we used to complete this project, as well as their cost.
Table 1: Cost of Project Materials
Shown below are the Python and HTML files we wrote for our project.
1. Google Maps API Tutorial: https://developers.google.com/chart/interactive/docs/gallery/map
2. Website Template: http://www.free-css.com/free-css-templates/page227/cube