Please use this identifier to cite or link to this item: http://idr.nitk.ac.in/jspui/handle/123456789/7773
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Mamidala, R.S.
dc.contributor.author: Uthkota, U.
dc.contributor.author: Shankar, M.B.
dc.contributor.author: Antony, A.J.
dc.contributor.author: Narasimhadhan, A.V.
dc.date.accessioned: 2020-03-30T10:02:46Z
dc.date.available: 2020-03-30T10:02:46Z
dc.date.issued: 2019
dc.identifier.citation: IEEE Region 10 Annual International Conference, Proceedings/TENCON, 2019, Vol. 2019-October, pp. 2454-2459 (en_US)
dc.identifier.uri: http://idr.nitk.ac.in/jspui/handle/123456789/7773
dc.description.abstract: Lane detection algorithms have been key enablers for fully assistive and autonomous navigation systems. In this paper, a novel and pragmatic approach for lane detection is proposed using a convolutional neural network (CNN) model based on the SegNet encoder-decoder architecture. The encoder block renders low-resolution feature maps of the input, and the decoder block provides pixel-wise classification from these feature maps. The proposed model has been trained on a 2000-image data-set and tested against the corresponding ground truth provided in the data-set for evaluation. To enable real-time navigation, the model's predictions are extended by interfacing it with existing Google APIs, and its metrics are evaluated while tuning the hyper-parameters. The novelty of this approach lies in the integration of the existing SegNet architecture with Google APIs, an interface that makes it well suited for assistive robotic systems. The observed results show that the proposed method is robust under challenging occlusion conditions, owing to the pre-processing involved, and gives superior performance compared to existing methods. © 2019 IEEE. (en_US)
dc.title: Dynamic Approach for Lane Detection using Google Street View and CNN (en_US)
dc.type: Book chapter (en_US)
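
For readers who want a concrete picture of the encoder-decoder pipeline described in the abstract, the following is a minimal sketch of a SegNet-style segmentation model in Keras/TensorFlow. The layer counts, filter widths, the 80x160 input resolution, and the use of plain upsampling (instead of SegNet's pooling-index unpooling) are illustrative assumptions by the editor, not the configuration reported in the paper.

# Minimal sketch of a SegNet-style encoder-decoder for binary lane
# segmentation. All hyper-parameters here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lane_segnet(input_shape=(80, 160, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: convolution + pooling blocks produce low-resolution feature maps.
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(2)(x)

    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(2)(x)

    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(2)(x)

    # Decoder: upsampling + convolution blocks recover the input resolution
    # and yield a pixel-wise classification map.
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)

    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)

    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)

    # One-channel sigmoid output: probability that each pixel belongs to a lane.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_lane_segnet().summary()

In a setup like the one the abstract describes, inference frames could be fetched from the Google Street View Static API (https://maps.googleapis.com/maps/api/streetview, with size, location, heading, and key parameters) and resized to the model's input shape; the exact API interface used by the authors is not specified in this record.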
Appears in Collections: 2. Conference Papers

Files in This Item:
There are no files associated with this item.

