Two-stream part-based deep representation for human attribute recognition



dc.contributor Aalto-yliopisto fi
dc.contributor Aalto University en
dc.contributor.author Anwer, Rao Muhammad
dc.contributor.author Khan, Fahad Shahbaz
dc.contributor.author Laaksonen, Jorma
dc.date.accessioned 2018-08-21T13:46:16Z
dc.date.available 2018-08-21T13:46:16Z
dc.date.issued 2018-07-13
dc.identifier.citation Anwer, R. M., Khan, F. S. & Laaksonen, J. 2018, 'Two-stream part-based deep representation for human attribute recognition', in Proceedings - 2018 International Conference on Biometrics, ICB 2018, Institute of Electrical and Electronics Engineers Inc., pp. 90-97, International Conference on Biometrics, Gold Coast, Australia, 20/02/2018. DOI: 10.1109/ICB2018.2018.00024 en
dc.identifier.isbn 9781538642856
dc.identifier.other PURE UUID: 9041dd14-ec3d-49d7-9136-75213576f01c
dc.identifier.other PURE ITEMURL: https://research.aalto.fi/en/publications/twostream-partbased-deep-representation-for-human-attribute-recognition(9041dd14-ec3d-49d7-9136-75213576f01c).html
dc.identifier.other PURE LINK: http://www.scopus.com/inward/record.url?scp=85050973843&partnerID=8YFLogxK
dc.identifier.other PURE FILEURL: https://research.aalto.fi/files/31088598/SCI_Anwer_Khan_Laaksonen_Two_Stream_Part_based.ICB_Camera_Ready.pdf
dc.identifier.uri https://aaltodoc.aalto.fi/handle/123456789/33530
dc.description | openaire: EC/H2020/780069/EU//MeMAD
dc.description.abstract Recognizing human attributes in unconstrained environments is a challenging computer vision problem. State-of-the-art approaches to human attribute recognition are based on convolutional neural networks (CNNs). The de facto practice when training these CNNs on a large labeled image dataset is to take the RGB pixel values of an image as input to the network. In this work, we propose a two-stream part-based deep representation for human attribute classification. Besides the standard RGB stream, we train a deep network on mapped coded images with explicit texture information, which complements the standard RGB deep model. To integrate knowledge of human body parts, we employ deformable part-based models together with our two-stream deep model. Experiments are performed on the challenging Human Attributes (HAT-27) dataset consisting of 27 different human attributes. Our results clearly show that (a) the two-stream deep network provides a consistent gain in performance over the standard RGB model and (b) the attribute classification results are further improved with our two-stream part-based deep representations, leading to state-of-the-art results. en
dc.format.extent 8
dc.format.extent 90-97
dc.format.mimetype application/pdf
dc.language.iso en en
dc.relation info:eu-repo/grantAgreement/EC/H2020/780069/EU//MeMAD
dc.relation.ispartof International Conference on Biometrics en
dc.relation.ispartofseries Proceedings - 2018 International Conference on Biometrics, ICB 2018 en
dc.rights openAccess en
dc.subject.other Instrumentation en
dc.subject.other Computer Science Applications en
dc.subject.other Computer Vision and Pattern Recognition en
dc.subject.other Pathology and Forensic Medicine en
dc.subject.other 113 Computer and information sciences en
dc.title Two-stream part-based deep representation for human attribute recognition en
dc.type A4 Artikkeli konferenssijulkaisussa fi
dc.description.version Peer reviewed en
dc.contributor.department Department of Computer Science
dc.contributor.department Linköping University
dc.subject.keyword Deep Learning
dc.subject.keyword Human attribute Recognition
dc.subject.keyword Part-based representation
dc.subject.keyword Instrumentation
dc.subject.keyword Computer Science Applications
dc.subject.keyword Computer Vision and Pattern Recognition
dc.subject.keyword Pathology and Forensic Medicine
dc.subject.keyword 113 Computer and information sciences
dc.identifier.urn URN:NBN:fi:aalto-201808214663
dc.identifier.doi 10.1109/ICB2018.2018.00024
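
The abstract describes combining two network streams (RGB and texture-coded) with features pooled over body parts. As a rough illustration of that kind of pipeline, the sketch below shows late fusion of per-part feature vectors from two streams via L2 normalization and concatenation. This is a minimal NumPy sketch under assumed conventions (normalize-then-concatenate fusion, one descriptor per part), not the authors' actual implementation; the function names and feature dimensions are hypothetical.

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    """Scale a feature vector to unit L2 norm (eps avoids division by zero)."""
    return v / (np.linalg.norm(v) + eps)

def fuse_two_stream(rgb_feat, texture_feat):
    """Late fusion of the two streams: normalize each stream's
    features independently, then concatenate them."""
    return np.concatenate([l2_normalize(rgb_feat),
                           l2_normalize(texture_feat)])

def part_based_representation(part_feats_rgb, part_feats_tex):
    """Concatenate the fused two-stream features over detected body
    parts (e.g. full body plus parts from a part-based detector)."""
    return np.concatenate([fuse_two_stream(r, t)
                           for r, t in zip(part_feats_rgb, part_feats_tex)])

# Toy example: 2 body parts, 4-dimensional features per stream.
rng = np.random.default_rng(0)
rgb_parts = [rng.standard_normal(4) for _ in range(2)]
tex_parts = [rng.standard_normal(4) for _ in range(2)]
rep = part_based_representation(rgb_parts, tex_parts)
print(rep.shape)  # (16,) -- 2 parts x 2 streams x 4 dims
```

The resulting fixed-length descriptor could then be fed to a per-attribute classifier; real CNN features would of course be far higher-dimensional.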


