
Training Complex Models with Multi-Task Weak Supervision

Overview
Date 2019 Oct 1
PMID 31565535
Citations 11
Abstract

As machine learning models continue to increase in complexity, collecting large hand-labeled training sets has become one of the biggest roadblocks in practice. Instead, weaker forms of supervision that provide noisier but cheaper labels are often used. However, these weak supervision sources have diverse and unknown accuracies, may output correlated labels, and may label different tasks or apply at different levels of granularity. We propose a framework for integrating and modeling such weak supervision sources by viewing them as labeling different related sub-tasks of a problem, which we refer to as the multi-task weak supervision setting. We show that by solving a matrix completion-style problem, we can recover the accuracies of these sources given their dependency structure, but without any labeled data, leading to higher-quality supervision for training an end model. Theoretically, we show that the generalization error of models trained with this approach improves with the number of data points, and characterize the scaling with respect to the task and dependency structures. On three fine-grained classification problems, we show that our approach leads to average gains of 20.2 points in accuracy over a traditional supervised approach, 6.8 points over a majority vote baseline, and 4.1 points over a previously proposed weak supervision method that models tasks separately.
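The core idea — recovering unknown source accuracies from agreement statistics alone, then using them to weight the sources — can be illustrated in a much-simplified special case. The sketch below assumes three conditionally independent binary labelers and uses a pairwise-agreement ("triplet") identity in place of the paper's actual matrix-completion estimator; all names and parameters here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y = rng.choice([-1, 1], size=n)  # latent true labels; the estimator never sees them

# Three conditionally independent weak sources with unknown accuracies.
true_acc = np.array([0.80, 0.75, 0.70])
L = np.stack([np.where(rng.random(n) < a, y, -y) for a in true_acc], axis=1)

# With labels in {-1, +1}, mu_i := E[lambda_i * y] = 2*acc_i - 1, and conditional
# independence gives E[lambda_i * lambda_j] = mu_i * mu_j. Solving these pairwise
# agreement equations for three sources recovers each mu_i from observable
# quantities only -- no ground-truth labels needed.
O = (L.T @ L) / n  # empirical agreement matrix
mu = np.array([
    np.sqrt(O[0, 1] * O[0, 2] / O[1, 2]),
    np.sqrt(O[0, 1] * O[1, 2] / O[0, 2]),
    np.sqrt(O[0, 2] * O[1, 2] / O[0, 1]),
])
est_acc = (mu + 1) / 2

# Weight each source by its estimated log-odds and take a weighted vote.
w = np.log(est_acc / (1 - est_acc))
y_hat = np.sign(L @ w)

print("estimated accuracies:", np.round(est_acc, 3))
print("weighted-vote accuracy:", (y_hat == y).mean())
```

The paper generalizes well beyond this toy setting, handling correlated sources, multiple related sub-tasks, and labels at different granularities via its matrix completion formulation; the resulting probabilistic labels then supervise the end model.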

Citing Articles

An ISM-MICMAC-based study for identification and classification of preventable safety risk mitigation factors in mass housing projects following a BIM approach.

Maleki Toulabi A, Pourrostam T, Aminnejad B. Heliyon. 2024; 10(19):e38240.

PMID: 39386802 PMC: 11462378. DOI: 10.1016/j.heliyon.2024.e38240.


Rule-Enhanced Active Learning for Semi-Automated Weak Supervision.

Kartchner D, Nakajima An D, Ren W, Zhang C, Mitchell C. AI. 2022; 3(1):211-228.

PMID: 35845102 PMC: 9281613. DOI: 10.3390/ai3010013.


Application Research for Fusion Model of Pseudolabel and Cross Network.

Gan J, Wu B, Zou Q, Zheng Z, Mai C, Zhai Y. Comput Intell Neurosci. 2022; 2022:9986611.

PMID: 35634050 PMC: 9135551. DOI: 10.1155/2022/9986611.


Scoping review and classification of deep learning in medical genetics.

Ledgister Hanchard S, Dwyer M, Liu S, Hu P, Tekendo-Ngongang C, Waikel R. Genet Med. 2022; 24(8):1593-1603.

PMID: 35612590 PMC: 11056027. DOI: 10.1016/j.gim.2022.04.025.


Ontology-driven weak supervision for clinical entity classification in electronic health records.

Fries J, Steinberg E, Khattar S, Fleming S, Posada J, Callahan A. Nat Commun. 2021; 12(1):2017.

PMID: 33795682 PMC: 8016863. DOI: 10.1038/s41467-021-22328-4.


References
1.
Craven M, Kumlien J. Constructing biological knowledge bases by extracting information from text sources. Proc Int Conf Intell Syst Mol Biol. 2000:77-86.

2.
Ratner A, Bach S, Ehrenberg H, Fries J, Wu S, Re C. Snorkel: rapid training data creation with weak supervision. VLDB J. 2020; 29(2):709-730. PMC: 7075849. DOI: 10.1007/s00778-019-00552-1.

3.
Varma P, He B, Bajaj P, Banerjee I, Khandwala N, Rubin D. Inferring Generative Model Structure with Static Analysis. Adv Neural Inf Process Syst. 2018; 30:239-249. PMC: 5789796.

4.
Bach S, He B, Ratner A, Re C. Learning the Structure of Generative Models without Labeled Data. Proc Mach Learn Res. 2019; 70:273-82. PMC: 6417840.

5.
Ratner A, De Sa C, Wu S, Selsam D, Re C. Data Programming: Creating Large Training Sets, Quickly. Adv Neural Inf Process Syst. 2018; 29:3567-3575. PMC: 5985238.