Deep Learning-Based Single-Shot and Real-Time Vehicle Detection and Ego-Lane Estimation

Authors

  • M.A.A. Abdul Matin Centre for Unmanned Technologies (CUTe), Kulliyyah of Engineering, International Islamic University Malaysia (IIUM), 53100 Kuala Lumpur, Malaysia
  • A.S. Ahmad Fakhri Centre for Unmanned Technologies (CUTe), Kulliyyah of Engineering, International Islamic University Malaysia (IIUM), 53100 Kuala Lumpur, Malaysia
  • H.S. Mohd Zaki Centre for Unmanned Technologies (CUTe), Kulliyyah of Engineering, International Islamic University Malaysia (IIUM), 53100 Kuala Lumpur, Malaysia
  • Z. Zainal Abidin Centre for Unmanned Technologies (CUTe), Kulliyyah of Engineering, International Islamic University Malaysia (IIUM), 53100 Kuala Lumpur, Malaysia
  • Y. Mohd Mustafah Centre for Unmanned Technologies (CUTe), Kulliyyah of Engineering, International Islamic University Malaysia (IIUM), 53100 Kuala Lumpur, Malaysia
  • H. Abd Rahman Delloyd R&D (M) Sdn. Bhd., Jln. Kebun, Kampung Jawa, 41000 Klang, Selangor, Malaysia
  • N.H. Mahamud Centre for Unmanned Technologies (CUTe), Kulliyyah of Engineering, International Islamic University Malaysia (IIUM), 53100 Kuala Lumpur, Malaysia
  • S. Hanizam Centre for Unmanned Technologies (CUTe), Kulliyyah of Engineering, International Islamic University Malaysia (IIUM), 53100 Kuala Lumpur, Malaysia
  • N.S. Ahmad Rudin Centre for Unmanned Technologies (CUTe), Kulliyyah of Engineering, International Islamic University Malaysia (IIUM), 53100 Kuala Lumpur, Malaysia

DOI:

https://doi.org/10.56381/jsaem.v4i1.51

Keywords:

Deep learning, Forward Collision Warning System (FCWS), ego-lane estimation, fine-tuning, feature extractor architecture, meta-architecture

Abstract

A vision-based Forward Collision Warning System (FCWS) is a promising driver-assist feature that can alleviate road accidents and make roads safer. In practice, it is exceptionally hard to develop an accurate and efficient FCWS algorithm because of the multiple steps involved, namely vehicle detection, target vehicle verification, and time-to-collision (TTC) estimation. These steps require an elaborate FCWS pipeline built on classical computer vision methods, which limits both the robustness and the scalability of the overall system. Deep neural networks (DNNs) have shown unprecedented performance on vision-based object detection, opening the possibility of using them as an effective perception tool for automotive applications. In this paper, a DNN-based single-shot vehicle detection and ego-lane estimation architecture is presented. This architecture detects vehicles and estimates ego-lanes simultaneously in a single shot, using SSD-MobileNetv2 as the backbone network. Traffic ego-lanes are defined as semantic regression points. We collected and labelled a dataset of 59,068 ego-lane images and trained the feature extractor architecture, MobileNetv2, to estimate where the ego-lanes are in an image. Once the feature extractor was trained for ego-lane estimation, the meta-architecture single-shot detector (SSD) was trained to detect vehicles. Our experimental results show that this method achieves real-time performance, with a total precision of 88% on the CULane dataset and 91% on our dataset for ego-lane estimation. Moreover, we achieve 63.7% mAP for vehicle detection on our dataset. The proposed architecture thus eliminates the elaborate multi-step pipeline normally required to develop an FCWS algorithm.
The proposed method achieves real-time performance at 60 fps on a standard PC with an Nvidia GTX 1080 GPU, demonstrating its potential to run on an embedded device for FCWS.

Published

01/01/2020

How to Cite

[1]
M. Abdul Matin, “Deep Learning-Based Single-Shot and Real-Time Vehicle Detection and Ego-Lane Estimation”, JSAEM, vol. 4, no. 1, pp. 61–72, Jan. 2020.

Issue

Section

Original Articles