Please use this identifier to cite or link to this item: http://hdl.handle.net/10773/34250
Full metadata record
DC Field | Value | Language
dc.contributor.author | Canedo, Daniel | pt_PT
dc.contributor.author | Fonseca, Pedro | pt_PT
dc.contributor.author | Georgieva, Petia | pt_PT
dc.contributor.author | Neves, António J. R. | pt_PT
dc.date.accessioned | 2022-07-25T09:47:10Z | -
dc.date.issued | 2022-04-22 | -
dc.identifier.issn | 0302-9743 | pt_PT
dc.identifier.uri | http://hdl.handle.net/10773/34250 | -
dc.description.abstract | The implementation of a robust vision system in floor-cleaning robots enables them to optimize their navigation and to analyse the surrounding floor, leading to a reduction in the consumption of power, water, and chemical products. In this paper, we propose a novel pipeline for a vision system to be integrated into floor-cleaning robots. This vision system was built upon the YOLOv5 framework, and its role is to detect dirty spots on the floor. The vision system is fed by two cameras: one on the front and the other on the back of the floor-cleaning robot. The goal of the front camera is to save energy and resources, controlling the robot's speed and how much water and detergent are spent according to the detected dirt. The goal of the back camera is to evaluate the cleaning and to aid the navigation node, since it helps the floor-cleaning robot understand whether the cleaning was effective and whether it needs to return later for a second sweep. A self-calibration algorithm was implemented for both cameras to stabilize image intensity and improve the robustness of the vision system. A YOLOv5 model was trained with carefully prepared training data. A new dataset was obtained in an automotive factory using the floor-cleaning robot. A hybrid training dataset was used, consisting of the Automation and Control Institute (ACIN) dataset, the automotive factory dataset, and a synthetic dataset. Data augmentation was applied to enlarge the dataset and to balance the classes. Finally, our vision system attained a mean average precision (mAP) of 0.7 on the testing set. | pt_PT
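The paper itself provides no code in this record; purely as an illustrative sketch of the intensity-stabilization idea described in the abstract, a gain-based correction could look like the following. The function name, target level, and gain cap are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def stabilize_intensity(frame, target_mean=120.0, max_gain=4.0):
    """Scale pixel intensities so the frame's mean approaches a target level.

    A simple gain-based sketch of camera self-calibration: frames that are
    too dark or too bright are rescaled toward a fixed mean intensity, which
    keeps the detector's input distribution stable across lighting changes.
    """
    frame = frame.astype(np.float32)
    current_mean = float(frame.mean())
    if current_mean <= 0.0:
        # Completely black frame: nothing to rescale.
        return np.zeros(frame.shape, dtype=np.uint8)
    # Cap the gain so near-black frames are not amplified into pure noise.
    gain = min(target_mean / current_mean, max_gain)
    return np.clip(frame * gain, 0, 255).astype(np.uint8)

# Example: an underexposed frame is brightened toward the target mean.
dark = np.full((8, 8), 30, dtype=np.uint8)
corrected = stabilize_intensity(dark)
```

In a two-camera setup such as the one described, each camera would apply this correction independently before its frames are passed to the detector.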
dc.language.iso | eng | pt_PT
dc.relation | POCI-01-0247-FEDER-039947 | pt_PT
dc.rights | embargoedAccess | pt_PT
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | pt_PT
dc.subject | Computer vision | pt_PT
dc.subject | Object detection | pt_PT
dc.subject | Deep learning | pt_PT
dc.subject | Floor-cleaning robots | pt_PT
dc.title | An innovative vision system for floor-cleaning robots based on YOLOv5 | pt_PT
dc.type | article | pt_PT
dc.description.version | published | pt_PT
dc.peerreviewed | yes | pt_PT
degois.publication.firstPage | 378 | pt_PT
degois.publication.lastPage | 389 | pt_PT
degois.publication.title | Lecture Notes in Computer Science | pt_PT
degois.publication.volume | 13256 | pt_PT
dc.date.embargo | 2023-04-30 | -
dc.identifier.doi | 10.1007/978-3-031-04881-4_30 | pt_PT
dc.identifier.essn | 1611-3349 | pt_PT
Appears in Collections: IEETA - Artigos

Files in This Item:
File | Description | Size | Format
ibPRIA2022_Daniel.pdf | | 2.48 MB | Adobe PDF



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.