TCR Sequence Representations Using Deep, Contextualized Language Models

Perustieteiden korkeakoulu (School of Science) | Master's thesis

Date

2021-03-15

Major/Subject

Machine Learning, Data Science and Artificial Intelligence

Mcode

SCI3044

Degree programme

Master’s Programme in Computer, Communication and Information Sciences

Language

en

Pages

70 + 12

Abstract

The recent advent of deep, contextualized language models has brought significant improvements to complex tasks such as neural machine translation and document generation. Similar models have also grown in popularity in bioinformatics. The sequence of a protein can be represented as a string of characters, each denoting one unique amino acid, which has led researchers to successfully experiment with amino acid vector representations learned and computed with models adapted from natural language processing. T cell receptors (TCRs) are proteins formed through the (random) recombination of the so-called variable (V), diversity (D), and joining (J) gene segments. Their sequences determine the epitope specificities of T cells and, in turn, the T cells' ability to recognize foreign pathogens: the physicochemical properties of each amino acid in a TCR and the way the TCR protein folds determine which pathogens the T cell recognizes. This thesis presents and compares several ways of extracting contextual embeddings from T cell receptor proteins using only their sequence information. We implement and test adaptations of character-level Embeddings from Language Models (ELMo) and fine-tune Bidirectional Encoder Representations from Transformers (BERT) models using only amino acid sequences from human TCR proteins. We then evaluate the language models trained solely on TCRs on a downstream task: classifying a TCR by its epitope specificity. We show how the language model's performance on its training task affects the TCR epitope classifier. Finally, we compare our approach to other state-of-the-art methods for TCR epitope classification.
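
To make the approach concrete, below is a minimal sketch of how per-residue contextual embeddings could be extracted from a BERT-style protein language model and mean-pooled into a fixed-size TCR representation for a downstream epitope classifier. This is illustrative only: the publicly available Rostlab/prot_bert checkpoint, the example CDR3 sequence, and the linear classifier head are stand-ins for exposition, not the models or data used in the thesis.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Stand-in checkpoint: a BERT model pre-trained on general protein
# sequences (the thesis instead fine-tunes BERT on human TCR sequences).
tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
encoder = AutoModel.from_pretrained("Rostlab/prot_bert")

# Hypothetical CDR3 amino acid sequence; ProtBert expects the residues
# to be separated by spaces, e.g. "C A S S ...".
cdr3 = "CASSLAPGATNEKLFF"
inputs = tokenizer(" ".join(cdr3), return_tensors="pt")

with torch.no_grad():
    # Per-residue contextual embeddings, shape (1, length + 2, 1024);
    # the +2 accounts for the added [CLS] and [SEP] tokens.
    hidden = encoder(**inputs).last_hidden_state

# Mean-pool over tokens (the attention mask excludes padding; [CLS] and
# [SEP] are kept here for simplicity) to get one vector per TCR.
mask = inputs["attention_mask"].unsqueeze(-1).float()
tcr_vector = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, 1024)

# A downstream epitope-specificity classifier can consume this vector;
# here, a hypothetical linear head over n_epitopes candidate epitopes.
n_epitopes = 10
classifier = torch.nn.Linear(tcr_vector.shape[-1], n_epitopes)
logits = classifier(tcr_vector)  # unnormalized epitope scores
```

Mean pooling is only one plausible way to summarize per-residue embeddings into a sequence-level vector; using the [CLS] token representation or learning the pooling jointly with the classifier are common alternatives.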

Supervisor

Lähdesmäki, Harri

Thesis advisor

Jokinen, Emmi

Keywords

Deep Learning, ELMo (Embeddings from Language Models), BERT (Bidirectional Encoder Representations from Transformers), T-cell receptor, complementarity-determining region, epitope
