
PETA: Evaluating the Impact of Protein Transfer Learning with Sub-word Tokenization on Downstream Applications

Overview
Journal: J Cheminform
Publisher: BioMed Central
Specialty: Chemistry
Date: 2024 Aug 2
PMID: 39095917
Abstract

Protein language models (PLMs) play a dominant role in protein representation learning. Most existing PLMs treat a protein as a sequence of the 20 natural amino acids. The drawback of this representation is that it splits a protein into individual amino acids and ignores the fact that certain residues frequently occur together. It is therefore inappropriate to view amino acids as isolated tokens; instead, a PLM should recognize frequently co-occurring combinations of amino acids as single tokens. In this study, we use the byte-pair encoding (BPE) and unigram algorithms to construct advanced residue vocabularies for protein sequence tokenization, and we show that PLMs pre-trained with these advanced vocabularies outperform those trained with simple vocabularies on downstream tasks. Furthermore, we introduce PETA, a comprehensive benchmark for systematically evaluating PLMs. We find that vocabularies comprising 50 and 200 elements achieve optimal performance. Our code, model weights, and datasets are available at https://github.com/ginnm/ProteinPretraining.

SCIENTIFIC CONTRIBUTION: This study introduces an advanced analysis of protein sequence tokenization, leveraging the byte-pair encoding and unigram algorithms. By recognizing frequently occurring combinations of amino acids as single tokens, the proposed method enhances the performance of PLMs on downstream tasks. Additionally, we present PETA, a new comprehensive benchmark for the systematic evaluation of PLMs, demonstrating that vocabularies of 50 and 200 elements offer optimal performance.
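To illustrate the idea behind sub-word tokenization of proteins, the following is a minimal, self-contained sketch of byte-pair encoding applied to amino-acid sequences. It uses a toy three-sequence corpus and is not the paper's actual training code (the authors' implementation is in the linked repository); all function names and the example sequences here are illustrative assumptions.

```python
# Minimal BPE sketch for protein sequences: repeatedly merge the most
# frequent adjacent token pair, so common residue combinations (e.g. a
# recurring motif) become single tokens instead of isolated amino acids.
from collections import Counter

def learn_bpe(sequences, num_merges):
    """Learn BPE merge rules from a corpus of amino-acid sequences."""
    # Start with each sequence split into single residues.
    corpus = [list(seq) for seq in sequences]
    merges = []
    for _ in range(num_merges):
        # Count adjacent token pairs across the whole corpus.
        pairs = Counter()
        for toks in corpus:
            for a, b in zip(toks, toks[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        # Replace every occurrence of the best pair with the merged token.
        new_corpus = []
        for toks in corpus:
            out, i = [], 0
            while i < len(toks):
                if i + 1 < len(toks) and (toks[i], toks[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(toks[i])
                    i += 1
            new_corpus.append(out)
        corpus = new_corpus
    return merges

def tokenize(seq, merges):
    """Apply the learned merges, in order, to a new sequence."""
    toks = list(seq)
    for a, b in merges:
        out, i = [], 0
        while i < len(toks):
            if i + 1 < len(toks) and toks[i] == a and toks[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(toks[i])
                i += 1
        toks = out
    return toks

seqs = ["MKVLAA", "MKVLGG", "MKAAGG"]
merges = learn_bpe(seqs, num_merges=3)
print(tokenize("MKVLAA", merges))  # the shared prefix MKVL becomes one token
```

The vocabulary size studied in the paper (e.g. 50 or 200 elements) corresponds to the number of merges performed plus the base amino-acid alphabet; the unigram alternative instead prunes a large candidate vocabulary by likelihood rather than growing it by merges.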

Citing Articles

AI-enabled alkaline-resistant evolution of protein to apply in mass production.

Kang L, Wu B, Zhou B, Tan P, Kang Y, Yan Y. eLife. 2025; 13.

PMID: 39968946 PMC: 11839161. DOI: 10.7554/eLife.102788.


Protein engineering in the deep learning era.

Zhou B, Tan Y, Hu Y, Zheng L, Zhong B, Hong L. mLife. 2025; 3(4):477-491.

PMID: 39744096 PMC: 11685842. DOI: 10.1002/mlf2.12157.
