Automatic classification of the severity level of Parkinson’s disease: A comparison of speaking tasks, features, and classifiers

dc.contributor: Aalto-yliopisto [fi]
dc.contributor: Aalto University [en]
dc.contributor.author: Kodali, Manila [en_US]
dc.contributor.author: Kadiri, Sudarsana [en_US]
dc.contributor.author: Alku, Paavo [en_US]
dc.contributor.department: Department of Information and Communications Engineering [en]
dc.contributor.groupauthor: Speech Communication Technology [en]
dc.date.accessioned: 2023-08-11T07:22:47Z
dc.date.available: 2023-08-11T07:22:47Z
dc.date.issued: 2023-10 [en_US]
dc.description.abstract: Automatic speech-based severity level classification of Parkinson’s disease (PD) enables objective assessment and earlier diagnosis. While many studies have addressed the binary task of distinguishing speakers with PD from healthy controls (HCs), considerably fewer have addressed multi-class PD severity level classification. Furthermore, in studying the three main components of speech-based classification systems (speaking tasks, features, and classifiers), previous investigations of severity level classification have yielded inconclusive results because each study used only a few, and sometimes just one, type of speaking task, feature, or classifier. Hence, a systematic comparison is conducted in this study between different speaking tasks, features, and classifiers. Five speaking tasks (vowel task, sentence task, diadochokinetic (DDK) task, read text task, and monologue task), four feature sets (phonation, articulation, prosody, and their fusion), and four classifier architectures (support vector machine (SVM), random forest (RF), multilayer perceptron (MLP), and AdaBoost) were compared. The classification task studied was a 3-class problem: classifying PD severity level as healthy vs. mild vs. severe. Two MDS-UPDRS scales (MDS-UPDRS-III and MDS-UPDRS-S) were used for the ground truth severity level labels. The results showed that the monologue task and the articulation and fusion feature sets improved classification accuracy significantly compared to the other speaking tasks and features. The best classification systems achieved an accuracy of 58% (monologue task with articulation features) for the MDS-UPDRS-III scale and 56% (monologue task with feature fusion) for the MDS-UPDRS-S scale. [en]
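The abstract above compares four classifier architectures on a 3-class severity problem. As an illustrative sketch only (not the authors' code or features: synthetic data and default scikit-learn hyperparameters stand in for the paper's phonation, articulation, and prosody features), the comparison setup could look like this:

```python
# Hedged sketch: compare the four classifier families named in the abstract
# (SVM, RF, MLP, AdaBoost) on a synthetic 3-class problem standing in for
# healthy vs. mild vs. severe. None of the data or settings come from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for speech features; in the real experiments these
# would be extracted from recordings of the five speaking tasks.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RF": RandomForestClassifier(random_state=0),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(max_iter=2000, random_state=0)),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

# Mean 5-fold cross-validated accuracy per classifier.
results = {name: cross_val_score(clf, X, y, cv=5).mean()
           for name, clf in classifiers.items()}
for name, acc in results.items():
    print(f"{name}: mean accuracy {acc:.2f}")
```

Scaling features before the SVM and MLP (via the pipelines) matters because both are sensitive to feature magnitudes, whereas the tree-based RF and AdaBoost are not.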
dc.description.version: Peer reviewed [en]
dc.format.extent: 15
dc.format.mimetype: application/pdf [en_US]
dc.identifier.citation: Kodali, M, Kadiri, S & Alku, P 2023, 'Automatic classification of the severity level of Parkinson’s disease: A comparison of speaking tasks, features, and classifiers', Computer Speech and Language, vol. 83, 101548. https://doi.org/10.1016/j.csl.2023.101548 [en]
dc.identifier.doi: 10.1016/j.csl.2023.101548 [en_US]
dc.identifier.issn: 0885-2308
dc.identifier.issn: 1095-8363
dc.identifier.other: PURE UUID: 79f82760-e45e-4fe8-8a81-fceb530ad13b [en_US]
dc.identifier.other: PURE ITEMURL: https://research.aalto.fi/en/publications/79f82760-e45e-4fe8-8a81-fceb530ad13b [en_US]
dc.identifier.other: PURE LINK: http://www.scopus.com/inward/record.url?scp=85165543692&partnerID=8YFLogxK [en_US]
dc.identifier.other: PURE FILEURL: https://research.aalto.fi/files/117952162/1_s2.0_S0885230823000670_main.pdf [en_US]
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/122369
dc.identifier.urn: URN:NBN:fi:aalto-202308114718
dc.language.iso: en [en]
dc.publisher: Academic Press
dc.relation.ispartofseries: Computer Speech and Language [en]
dc.relation.ispartofseries: Volume 83 [en]
dc.rights: openAccess [en]
dc.subject.keyword: Parkinson’s disease [en_US]
dc.subject.keyword: severity level classification [en_US]
dc.subject.keyword: MDS-UPDRS-III [en_US]
dc.subject.keyword: MDS-UPDRS-S [en_US]
dc.title: Automatic classification of the severity level of Parkinson’s disease: A comparison of speaking tasks, features, and classifiers [en]
dc.type: A1 Original article in a scientific journal [fi]
dc.type.version: publishedVersion
