SnapTask: Towards efficient visual crowdsourcing for indoor mapping

dc.contributor: Aalto-yliopisto [fi]
dc.contributor: Aalto University [en]
dc.contributor.author: Noreikis, Marius [en_US]
dc.contributor.author: Xiao, Yu [en_US]
dc.contributor.author: Hu, Jiyao [en_US]
dc.contributor.author: Chen, Yang [en_US]
dc.contributor.department: Department of Communications and Networking [en]
dc.contributor.groupauthor: Mobile Cloud Computing [en]
dc.contributor.organization: Department of Communications and Networking [en_US]
dc.contributor.organization: Fudan University [en_US]
dc.date.accessioned: 2018-12-10T10:16:32Z
dc.date.available: 2018-12-10T10:16:32Z
dc.date.issued: 2018-07-19 [en_US]
dc.description.abstract: Visual crowdsourcing (VCS) offers an inexpensive method to collect visual data for implementing tasks, such as 3D mapping and place detection, thanks to the prevalence of smartphone cameras. However, without proper guidance, participants may not always collect data from desired locations with a required Quality-of-Information (QoI). This often causes either a lack of data in certain areas, or extra overheads for processing unnecessary redundancy. In this work, we propose SnapTask, a participatory VCS system that aims at creating complete indoor maps by guiding participants to efficiently collect visual data of high QoI. It applies Structure-from-Motion (SfM) techniques to reconstruct 3D models of indoor environments, which are then converted into indoor maps. To increase coverage with minimal redundancy, SnapTask determines locations for the next data collection tasks by analyzing the coverage of the generated 3D model and the camera views of the collected images. In addition, it overcomes the limitations of SfM techniques by utilizing crowdsourced annotations to reconstruct featureless surfaces (e.g. glass walls) in the 3D model. According to a field test in a library, the indoor map generated by SnapTask successfully reconstructs 100% of the library walls and 98.12% of objects and traversal areas within the library. With the same amount of input data our design of guided data collection increases the map coverage by 20.72% and 34.45%, respectively, compared with unguided participatory and opportunistic VCS. [en]
dc.description.version: Peer reviewed [en]
dc.format.extent: 11
dc.format.mimetype: application/pdf [en_US]
dc.identifier.citation: Noreikis, M, Xiao, Y, Hu, J & Chen, Y 2018, SnapTask: Towards efficient visual crowdsourcing for indoor mapping. in Proceedings - 2018 IEEE 38th International Conference on Distributed Computing Systems, ICDCS 2018. vol. 2018-July, International Conference on Distributed Computing Systems, IEEE, pp. 578-588, International Conference on Distributed Computing Systems, Vienna, Austria, 02/07/2018. https://doi.org/10.1109/ICDCS.2018.00063 [en]
dc.identifier.doi: 10.1109/ICDCS.2018.00063 [en_US]
dc.identifier.isbn: 9781538668719
dc.identifier.issn: 2575-8411
dc.identifier.other: PURE UUID: 48c737f8-7a1d-4cb8-b883-295f22507d17 [en_US]
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/48c737f8-7a1d-4cb8-b883-295f22507d17 [en_US]
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/27158788/ELEC_Noreikis_et_al_SnapTask_CR.pdf
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/35032
dc.identifier.urn: URN:NBN:fi:aalto-201812106047
dc.language.iso: en [en]
dc.relation.ispartof: International Conference on Distributed Computing Systems [en]
dc.relation.ispartofseries: Proceedings - 2018 IEEE 38th International Conference on Distributed Computing Systems, ICDCS 2018 [en]
dc.relation.ispartofseries: Volume 2018-July, pp. 578-588 [en]
dc.relation.ispartofseries: International Conference on Distributed Computing Systems [en]
dc.rights: openAccess [en]
dc.subject.keyword: Crowdsourcing [en_US]
dc.subject.keyword: Featureless Reconstruction [en_US]
dc.subject.keyword: Indoor Mapping [en_US]
dc.subject.keyword: Participatory Crowdsourcing [en_US]
dc.title: SnapTask: Towards efficient visual crowdsourcing for indoor mapping [en]
dc.type: A4 Artikkeli konferenssijulkaisussa (Article in conference proceedings) [fi]
dc.type.version: acceptedVersion