
Harnessing Large Language Models for Data-scarce Learning of Polymer Properties

Overview
Journal Nat Comput Sci
Publisher Springer Nature
Date 2025 Feb 10
PMID 39930041
Abstract

Large language models (LLMs) hold promise as a fast and accurate material modeling paradigm for evaluation, analysis and design. Their vast number of trainable parameters requires a wealth of data to achieve accuracy and mitigate overfitting. However, experimental measurements are often limited and costly to obtain in sufficient quantities for fine-tuning. To address this, here we present a physics-based training pipeline that tackles the pathology of data scarcity. The core enabler is a physics-based modeling framework that generates a multitude of synthetic data to align the LLM to a physically consistent initial state before fine-tuning. Our framework features a two-phase training strategy: supervised pretraining on the abundant but less accurate synthetic data, followed by fine-tuning of the phase-1 model with the limited experimental data. We empirically demonstrate that supervised pretraining is vital to obtaining accurate fine-tuned LLMs, through the lens of learning polymer flammability metrics where cone calorimeter data are sparse.
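
For readers unfamiliar with the two-phase strategy described above, the sketch below illustrates the general idea: pretrain a language model as a scalar-property regressor on plentiful synthetic examples, then continue training the same weights on a small experimental set. This is only an assumed, minimal illustration; the backbone model, input format, labels, and hyperparameters shown here are placeholders and are not taken from the paper.

```python
# Minimal sketch of a two-phase (synthetic pretraining -> experimental fine-tuning)
# regression pipeline. All names and values below are illustrative assumptions.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"  # placeholder backbone, not the paper's LLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=1, problem_type="regression")  # scalar property regression

def to_dataset(texts, targets):
    """Tokenize textual polymer descriptors and attach scalar property labels."""
    ds = Dataset.from_dict({"text": texts, "labels": [float(t) for t in targets]})
    return ds.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                       padding="max_length", max_length=64),
                  batched=True)

# Phase 1: supervised pretraining on abundant physics-generated synthetic data
# (hypothetical inputs/targets shown; real data would come from the physics model).
synthetic_ds = to_dataset(["polymer: PMMA, heat flux 50 kW/m2"] * 8, [250.0] * 8)
Trainer(model=model,
        args=TrainingArguments(output_dir="phase1", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=synthetic_ds).train()

# Phase 2: fine-tune the phase-1 model on the scarce experimental measurements,
# using a smaller learning rate so the physically consistent initialization is retained.
experimental_ds = to_dataset(["polymer: PMMA, heat flux 50 kW/m2"] * 2, [265.0] * 2)
Trainer(model=model,
        args=TrainingArguments(output_dir="phase2", num_train_epochs=1,
                               learning_rate=1e-5,
                               per_device_train_batch_size=2),
        train_dataset=experimental_ds).train()
```

The key design point, as the abstract argues, is that phase 1 places the model in a physically consistent initial state, so the small experimental set in phase 2 only needs to correct the synthetic-to-experimental gap rather than teach the property from scratch.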