Raspberry Pi Lane Departure Warning System

Designed by Jason Zhang (jcz28)
and Ope Oladipo (ooo25)
for Embedded Linux ECE 5725 (Cornell University)
May 21, 2017


In a $15.3B deal in early 2017, Intel acquired Mobileye, a computer vision company that develops technology for self-driving cars. Among other functions, Mobileye's technology can detect lane departure in a moving vehicle and is used by virtually all car manufacturers, including Tesla and BMW. Our interest in this feature and in driving technology motivated us to develop an inexpensive, Raspberry Pi-based dashcam with a lane departure warning system. Lane departure is a leading cause of fatal crashes; inexpensive, widely available lane departure warning systems could save many lives.


Our project involved integrating three separate components:

  1. The first component was a dashcam that could record video and save it to a removable drive for future viewing.
  2. The second component was a lane departure warning system that could detect when a vehicle was drifting out of its lane and warn the driver.
  3. The third component was an accelerometer that provided additional data to be stamped onto the dashcam video, such as the vehicle's cardinal direction and the G-forces it experienced.

The final result was a device consisting of a camera, Raspberry Pi, accelerometer, and several LEDs. The camera recorded a view of the road from a vehicle's dashboard. The accelerometer detected the direction and G-forces of the vehicle and sent these readings to the Raspberry Pi, which stamped the data onto each frame of video before saving it to an external drive, such as a flash drive. At the same time, the Raspberry Pi analyzed each frame of video in real time to determine whether the vehicle was drifting from its lane. If a partial drift was detected, a yellow LED would light up; if a major drift was detected, a red LED would light up. Also, if the G-forces exceeded a certain threshold, a blue LED would flash for a few seconds.

Design and Testing


For our dashcam, we used the Raspberry Pi Camera Module V2, purchased on Amazon. We chose this camera because it was easy to connect to the Raspberry Pi, had native support and good resolution, and was backed by many online resources.

We started by setting up the camera parameters and taking test shots at different resolutions and modes. We chose to record video while grabbing frames from the video port (which is faster because we're already recording) into our Python module. We then implemented multiprocessing so the Pi could use all 4 cores. Using the multiprocessing library, we created a pool of 4 worker processes, and the Pi automatically scheduled each one onto an appropriate free core.
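The pool pattern can be sketched as follows; `process_frame` here is a stand-in for our real per-frame OpenCV work, and on the Pi the frames came from the camera's video port rather than a list:

```python
# Sketch of the 4-worker frame-processing pool. process_frame is a
# placeholder for the real OpenCV lane-detection pipeline.
from multiprocessing import Pool

def process_frame(frame):
    # Stand-in analysis: return the mean "pixel" value of the frame.
    return sum(frame) / len(frame)

if __name__ == "__main__":
    # Pretend each "frame" is a small list of pixel intensities.
    frames = [[i, i + 1, i + 2] for i in range(8)]
    with Pool(processes=4) as pool:   # one worker per Pi core
        results = pool.map(process_frame, frames)
    print(results)
```

`Pool.map` distributes the frames across the workers, and the operating system's scheduler places each worker on a free core, which is how the Pi kept all four cores busy.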

We continued by testing different resolutions to find the optimal FPS-to-resolution trade-off. We settled on a 480 x 360 resolution, which gave us 10 fps of processing (and 17 fps while grabbing frames from the video port), all using all 4 cores. Note that the recorded video itself is at 30 fps, but the Pi cannot grab more than 17 frames per second for image processing.

The dashcam itself also recorded at 480 x 360 pixels, writing the camera feed to the flash drive. We made the length of each clip configurable in seconds, because we wanted to minimize data loss while recording: in the event of a sudden shutdown, for example, we wouldn't want 30 minutes of recording to be corrupted. We named the videos by date so they would be easy to find.
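The date-based naming can be sketched as below; the exact timestamp format is an assumption, but any name that sorts chronologically serves the same purpose:

```python
from datetime import datetime

def clip_filename(now=None):
    """Date-stamped clip name so footage sorts chronologically on the drive.
    The format shown is illustrative, not our exact scheme."""
    now = now or datetime.now()
    return now.strftime("%Y-%m-%d_%H-%M-%S") + ".h264"

print(clip_filename(datetime(2017, 5, 21, 14, 30, 0)))
# → 2017-05-21_14-30-00.h264
```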

To ensure that video could be continuously saved to the external drive, we created a script that would check the percentage of the drive that was full. If the percentage exceeded a certain threshold (we used 90%), then the script would automatically delete the oldest footage. This script works for external drives of any size, because it detects the percentage of the drive that is filled rather than a hard-coded memory quantity. We ran this script on a timer within our main scripts. We found timers (in addition to the multiprocessing module) to be extremely helpful: they allowed us to sample the IMU fast enough to detect collisions quickly even when our main loop ran only 10 times a second.
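The sweep logic can be sketched as follows; function names are ours for illustration, and the real script ran on a timer:

```python
import os
import shutil

def usage_fraction(path):
    """Fraction of the drive containing `path` that is in use."""
    total, used, _ = shutil.disk_usage(path)
    return used / total

def oldest_clip(video_dir):
    """Path of the oldest clip in `video_dir`, or None if it is empty."""
    clips = [os.path.join(video_dir, f) for f in os.listdir(video_dir)]
    return min(clips, key=os.path.getmtime) if clips else None

def sweep(video_dir, threshold=0.90):
    """Delete oldest footage until drive usage drops below `threshold`.
    Percentage-based, so it works for external drives of any size."""
    while usage_fraction(video_dir) > threshold:
        victim = oldest_clip(video_dir)
        if victim is None:
            break
        os.remove(victim)
```

Selecting by modification time (rather than by filename) keeps the sweep correct even if the naming scheme changes.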

Lane Departure Warning System

We used the OpenCV module to help us detect the lane lines. The steps are as follows:

  1. Crop the image appropriately to reduce unneeded processing.
  2. Convert to grayscale, because color information is unneeded (and since color uses three channels to store that information, we don't want to run line detection on every channel).
  3. Apply a Gaussian blur to smooth out noise. The effect of line detection with and without the blur can be seen below.
  4. Run the Canny edge detection algorithm on the blurred image, resulting in a binary Mat image with the edges in white.
  5. Finally, run probabilistic Hough line detection on the Canny edge image, which finds the most likely lines. We then draw the lines on the image using the cv2.line function.

Steps to line detection

Edge Detection without Blurring first

Edge Detection after Gaussian Blur applied

The Hough line detection algorithm will detect many lines in the image. To ensure that only lane markers are detected, we crop out the top 80% of the image and perform line detection only on the bottom 20%.

Before Crop and Lane detection

After Crop and Lane detection

We iterate through the set of detected lines and pick out the longest line with a positive slope and the longest line with a negative slope; these correspond to the right and left lane markers, respectively. The left and right lane marker lines may not extend to the top and bottom of the image, so we do some simple math to extend them into the green lines seen in the above image. Using these lane markers, we came up with 3 different ways to detect whether a car was drifting out of its lane.
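The marker selection and extension can be sketched as below; the coordinate conventions and helper names are ours for illustration:

```python
def pick_lane_markers(lines):
    """Longest positive-slope and longest negative-slope segments.
    `lines` is a list of (x1, y1, x2, y2) tuples, as from HoughLinesP."""
    best = {"pos": None, "neg": None}
    for x1, y1, x2, y2 in lines:
        if x2 == x1:                      # skip vertical segments
            continue
        slope = (y2 - y1) / (x2 - x1)
        length = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        key = "pos" if slope > 0 else "neg"
        if best[key] is None or length > best[key][0]:
            best[key] = (length, (x1, y1, x2, y2))
    return (best["pos"][1] if best["pos"] else None,
            best["neg"][1] if best["neg"] else None)

def extend_to_rows(line, y_top, y_bottom):
    """The 'simple math': extend a segment to span rows y_top..y_bottom."""
    x1, y1, x2, y2 = line
    slope = (y2 - y1) / (x2 - x1)
    return (x1 + (y_top - y1) / slope, y_top,
            x1 + (y_bottom - y1) / slope, y_bottom)
```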

  1. Set static thresholds at certain heights of the image. Find the midpoint between the two lane lines at each height, and detect whether either midpoint leaves the bounds of the thresholds. The two images below show examples of when the vehicle is within lane bounds, and when the vehicle is drifting. Note that in both images below, the yellow and red threshold lines remain in the same position. In the bottom image, the vehicle is drifting to the left.

    Vehicle is within lane bounds

    Vehicle drifting slightly. Yellow warning light would turn on.

  2. Again, use two boundaries for the yellow warning and red danger signals. This time, we take the lower endpoints of the lane markers and compute the midpoint. We detect whether the midpoint leaves the bounds set by the thresholds.

    Vehicle is within lane bounds

    Vehicle drifting slightly

  3. Take the upper endpoints of the lane markers and compute their midpoint. Take the lower endpoints of the lane markers and compute their midpoint. Then, find the slope of the line traveling through the two midpoints, and ensure that its absolute value does not exceed the threshold slopes. In the image below, the blue line represents the line connecting the two midpoints; the yellow and red lines represent the slope thresholds that the blue line should not cross.

    Vehicle is within lane bounds
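Method 2 can be sketched as follows; the threshold fractions are illustrative stand-ins for our tuned values:

```python
def drift_status(left_line, right_line, width,
                 warn_frac=0.15, danger_frac=0.30):
    """Method 2 sketch: compare the midpoint of the two lower lane-marker
    endpoints against yellow/red offsets from the image center.
    warn_frac/danger_frac are illustrative, not our tuned values."""
    (_, _, lx2, _), (_, _, rx2, _) = left_line, right_line  # lower endpoints
    mid = (lx2 + rx2) / 2
    offset = abs(mid - width / 2)
    if offset > danger_frac * width:
        return "red"       # major drift: red LED
    if offset > warn_frac * width:
        return "yellow"    # partial drift: yellow LED
    return "ok"
```

The other two methods differ only in which points (or which slope) are compared against the thresholds.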


We used a UCTRONICS MPU-9255 9-Axis Sensor Module, purchased on Amazon. We used the I2C protocol, which allowed for communication with the device using just 4 wires: VCC, GND, SCL, and SDA.

IMU Used

First, we followed an online guide from bitify to obtain data from the IMU by reading directly from its memory addresses. The code for this process was difficult to read and understand, so we switched to a modified version of the RTIMULib2 library, which let us obtain data from the IMU chip through abstracted function calls with custom calibration settings. During testing, we obtained data at a rate of 250 samples per second. The accelerometer data arrived as raw, unitless values, so we had to find a mapping from these values to G-forces. We did this with a smartphone app that computes G-forces from the phone's accelerometer: by physically attaching the phone to the IMU, we could see which IMU values corresponded to the G-forces the phone reported. The IMU also provided magnetometer data, which we used to determine the cardinal direction the vehicle was facing; a compass helped us translate the x and y magnetometer readings into cardinal directions.
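Both conversions can be sketched as below. The 16384 counts-per-g figure is the MPU family's ±2g datasheet default, and the axis orientation in the heading calculation is a simplifying assumption; as described above, we actually derived both empirically with a phone app and a compass:

```python
import math

def to_g(raw, counts_per_g=16384.0):
    """Raw accelerometer counts -> G-forces. 16384 LSB/g is the MPU's
    +/-2g default; our real scale factor was found empirically."""
    return raw / counts_per_g

def cardinal_direction(mag_x, mag_y):
    """Map x/y magnetometer readings to one of 8 cardinal directions.
    The axis orientation here is an assumption; ours came from a compass."""
    heading = math.degrees(math.atan2(mag_y, mag_x)) % 360
    names = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
    return names[int((heading + 22.5) // 45) % 8]
```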

Integration of components

We set up a testing mode in which our script would take arguments and behave accordingly; this is what we used to shape the algorithm. The testing mode would, for example, read in a 'test.jpg', apply our functions to it, and show us the results. We did this before testing any footage of moving roads. It made integration easier because we could switch the functions we wanted to test on and off.

Our dashcam integrated nicely with the lane detection because the two could run concurrently without any problems. The IMU, on the other hand, needed repeated polling, which we accomplished using Python's Timer class on the Raspberry Pi. A function polled the IMU every 5 milliseconds (200 times a second), which meant we could detect impending accidents much faster than the 9-10 fps at which our main loop ran.
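The repeating poll can be sketched with a self-re-arming Timer; the IMU read here is replaced by a stand-in callback:

```python
import threading

class RepeatingPoll:
    """Re-arms a threading.Timer so `func` runs every `interval` seconds,
    independently of the slower main loop."""
    def __init__(self, interval, func):
        self.interval, self.func = interval, func
        self._timer = None
        self.running = False

    def _tick(self):
        self.func()
        if self.running:                 # schedule the next poll
            self._timer = threading.Timer(self.interval, self._tick)
            self._timer.start()

    def start(self):
        self.running = True
        self._timer = threading.Timer(self.interval, self._tick)
        self._timer.start()

    def stop(self):
        self.running = False
        if self._timer:
            self._timer.cancel()

if __name__ == "__main__":
    import time
    samples = []                         # stand-in for IMU readings
    poller = RepeatingPoll(0.005, lambda: samples.append(None))
    poller.start()
    time.sleep(0.05)                     # the slow main loop would run here
    poller.stop()
    print(len(samples))                  # roughly one sample per 5 ms
```

Because each tick re-arms the timer, a single missed deadline never stops the polling, and a collision spike can be caught between main-loop iterations.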

We also used a timer for the memory-sweeping operation that clears disk space once usage exceeds 90%.


We were able to complete the basic functionality of our system. We tested it by pointing the camera at pre-recorded dashcam footage to simulate a moving car. The yellow drift warning light proved to be very sensitive, blinking often even when the driver stayed within their lane. The red danger light did turn on when the driver switched lanes, proving that the system could detect lane departure. When we jostled the accelerometer, the blue collision detection light flashed. We were able to detach the flash drive from the Raspberry Pi and view the dashcam footage on our computers; the videos showed the original footage with the accelerometer data stamped on top. Our original goal was to test the system in a real car. However, this proved infeasible, as neither of us owned a car, nor were we able to borrow one to test the project.

Future Work

If we had more time to work on this project, we would first improve the lane detection algorithm, as there were times when a guardrail was detected as a lane marker. Also, our algorithm was sometimes unable to detect striped lane markings due to the gaps between stripes. We would also like to expand our computer vision capabilities to detect other objects on the road, such as vehicles, road signs, and traffic lights. A faster processing unit would be helpful, as the Raspberry Pi could not process video frames at higher resolutions without lagging. An audio cue warning of lane departure would also help the driver. Also, when the G-forces detected by the accelerometer exceed the threshold, other actions could be performed, such as calling the police in the event of a severe collision. The versatility of the Raspberry Pi would also let us easily add more driving technology to the system, such as a reverse camera and a GPS tracker.


Our completed project was a proof of concept showing that an inexpensive dashcam and lane departure warning system can be built on a small single-board computer. We discovered that video analysis is very taxing for a computer; as a result, we had to configure the dashcam at a much lower resolution than desired. The computer vision tasks also reduced the framerate at which we could record, since each captured frame had to be analyzed immediately to detect lanes. We encountered difficulties finding online support for the IMU we purchased; development might have been quicker had we chosen parts that were more widely used and supported.

System as set up in lab


Ope - OpenCV, Camera, Multithreading, Part of the Lane detection

Jason - Lane detection, IMU calibration and setup, file deletion script (when full)

Intellectual Property Considerations

Code that was used but not written by us has been properly cited in the references section below. The work of Jingyao Ren, Andre Heil, Param Aggarwal, and the RTIMU library was very helpful in the successful completion of this project.

Download Code

Download Code here

Parts List

Part                              Vendor   Quantity   Cost/Unit   Total
Raspberry Pi 3                    Lab      1          $0.00       $0.00
Raspberry Pi Camera Module V2     Amazon   1          $30.00      $30.00
UCTRONICS MPU-9255 9-Axis Sensor  Amazon   1          $20.00      $20.00
Total: $50.00 (excluding Pi cost)

Get in touch

Jason Zhang
and Ope Oladipo