TCR Sequence Representations Using Deep, Contextualized Language Models

dc.contributor: Aalto-yliopisto (fi)
dc.contributor: Aalto University (en)
dc.contributor.advisor: Jokinen, Emmi
dc.contributor.author: Dumitrescu, Alexandru
dc.contributor.school: Perustieteiden korkeakoulu (fi)
dc.contributor.supervisor: Lähdesmäki, Harri
dc.date.accessioned: 2021-03-21T18:05:52Z
dc.date.available: 2021-03-21T18:05:52Z
dc.date.issued: 2021-03-15
dc.description.abstract: The recent advent of deep, contextualized language models has brought significant improvements to complex tasks such as neural machine translation and document generation. Models similar to those used in natural language processing have also grown in popularity in bioinformatics. The sequence of a protein can be represented as a string of characters, each denoting one unique amino acid, which has led researchers to successfully experiment with amino acid vector representations learned and computed with models adapted from the natural language field. T cell receptors (TCRs) are proteins whose sequences form through the (random) recombination of the so-called variable (V), diversity (D), and joining (J) gene segments. These sequences determine the epitope specificities of T cells and, in turn, their ability to recognize foreign pathogens. The physicochemical properties of each amino acid in a TCR, together with how the TCR protein folds, determine which pathogens the T cell recognizes. This thesis presents and compares several ways of extracting contextual embeddings from T cell receptor proteins using only their sequence information. We implement and test adaptations of character-level Embeddings from Language Models (ELMo) and fine-tune Bidirectional Encoder Representations from Transformers (BERT) models using only amino acid sequences from human TCR proteins. We then evaluate the language models trained only on TCRs on an additional task that classifies a TCR by its epitope specificity, and show how the language model's task performance affects the TCR epitope classifier. Finally, we compare our approach to other state-of-the-art methods for TCR epitope classification. (en)
dc.format.extent: 70 + 12
dc.format.mimetype: application/pdf (en)
dc.identifier.uri: https://aaltodoc.aalto.fi/handle/123456789/103090
dc.identifier.urn: URN:NBN:fi:aalto-202103212369
dc.language.iso: en (en)
dc.programme: Master's Programme in Computer, Communication and Information Sciences (fi)
dc.programme.major: Alexandru Dumitrescu (fi)
dc.programme.mcode: SCI3044 (fi)
dc.subject.keyword: Deep Learning (en)
dc.subject.keyword: ELMo (Embeddings from Language Models) (en)
dc.subject.keyword: BERT (Bidirectional Encoder Representations from Transformers) (en)
dc.subject.keyword: T-cell receptor (en)
dc.subject.keyword: complementarity-determining region (en)
dc.subject.keyword: epitope (en)
dc.title: TCR Sequence Representations Using Deep, Contextualized Language Models (en)
dc.type: G2 Pro gradu, diplomityö (fi)
dc.type.ontasot: Master's thesis (en)
dc.type.ontasot: Diplomityö (fi)
local.aalto.electroniconly: yes
local.aalto.openaccess: yes
Files
Original bundle
Name: master_Dumitrescu_Alexandru_2021.pdf
Size: 2.89 MB
Format: Adobe Portable Document Format
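
The abstract above describes extracting per-residue contextual embeddings from TCR amino acid sequences with a BERT-style language model. The following is a minimal illustrative sketch of that step, not the thesis's own code: it uses the Hugging Face transformers library, and the checkpoint name (Rostlab/prot_bert, a publicly available protein BERT) and the example CDR3 sequence are assumptions chosen for illustration.

# Minimal sketch: per-residue contextual embeddings for a TCR CDR3 sequence
# using a BERT-style protein language model (illustrative checkpoint only).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert")
model.eval()

cdr3 = "CASSLAPGATNEKLFF"  # example (hypothetical) TCR-beta CDR3 sequence
# This tokenizer expects space-separated amino acids: one token per residue.
inputs = tokenizer(" ".join(cdr3), return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (1, len(cdr3) + 2 special tokens, hidden_size).
# Drop the [CLS]/[SEP] positions and mean-pool over residues to obtain a
# single fixed-size embedding for the whole TCR sequence.
embedding = outputs.last_hidden_state[0, 1:-1].mean(dim=0)
print(embedding.shape)  # e.g. torch.Size([1024]) for this checkpoint

A fixed-size embedding like this could then feed a downstream classifier of epitope specificity, which is the evaluation task the abstract describes.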