Parameter Tuning is a Key Part of Dimensionality Reduction Via Deep Variational Autoencoders for Single Cell RNA Transcriptomics
Overview
Single-cell RNA sequencing (scRNA-seq) is a powerful tool for profiling the transcriptomes of large numbers of individual cells at high resolution. These data usually contain measurements of gene expression for many genes in thousands or tens of thousands of cells, though some datasets now reach the million-cell mark. Projecting high-dimensional scRNA-seq data into a low-dimensional space aids downstream analysis and data visualization. Many recent preprints accomplish this using variational autoencoders (VAEs), generative models that learn the underlying structure of data by compressing it into a constrained, low-dimensional space. The low-dimensional spaces learned by VAEs have revealed complex patterns and novel biological signals in large-scale gene expression data and drug response prediction. Here, we evaluate a simple VAE approach for gene expression data, Tybalt, by training and measuring its performance on sets of simulated scRNA-seq data. We find a number of counter-intuitive performance features: for example, under some parameter configurations, deeper neural networks can struggle when datasets contain more observations. We show that these methods are highly sensitive to parameter tuning: when tuned, the Tybalt model, which was not optimized for scRNA-seq data, outperforms other popular dimensionality reduction approaches (PCA, ZIFA, UMAP, and t-SNE). On the other hand, without tuning, performance on the same data can be remarkably poor. Our results should discourage authors and reviewers from relying on self-reported performance comparisons to evaluate the relative value of contributions in this area at this time.
Instead, because the potential for performance differences due to unequal parameter tuning is so high, we recommend that comparisons or benchmarks of autoencoder methods for scRNA-seq data be performed either by disinterested third parties or by method developers only on unseen benchmark data that are provided to all participants simultaneously.
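The tuning sensitivity described above is typically probed with an exhaustive sweep over hyperparameter configurations, selecting the best-scoring model on held-out data. A minimal sketch of such a sweep is below; the grid values and the `evaluate` stub are illustrative stand-ins, not the paper's actual search space or training protocol.

```python
import itertools
import random

# Hypothetical hyperparameter grid for a Tybalt-style VAE.
# The parameter names and values are illustrative assumptions.
grid = {
    "hidden_layers": [1, 2, 3],
    "latent_dim": [2, 10, 50],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [50, 100],
}

def evaluate(config):
    """Placeholder for a real run: train the VAE with `config` and
    return a validation score (e.g., cluster separation or negative
    reconstruction loss on held-out cells)."""
    # Deterministic within a run so the sweep is reproducible here.
    random.seed(hash(tuple(sorted(config.items()))) % 2**32)
    return random.random()  # stand-in for an actual training run

def grid_search(grid):
    """Evaluate every configuration in the grid; keep the best."""
    best_score, best_cfg = float("-inf"), None
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = evaluate(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score

best_cfg, best_score = grid_search(grid)
```

Even this small grid yields 3 × 3 × 3 × 2 = 54 training runs, which is why unequal tuning budgets between methods can dominate reported performance differences.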