
Contextual Bandits with Budgeted Information Reveal

Overview
Date 2024 Oct 28
PMID 39464504
Abstract

Contextual bandit algorithms are commonly used in digital health to recommend personalized treatments. However, to ensure the effectiveness of the treatments, patients are often requested to take actions that have no immediate benefit to them, which we refer to as actions. In practice, clinicians have a limited budget with which to encourage patients to take these actions and collect additional information. We introduce a novel optimization-and-learning algorithm to address this problem. It seamlessly combines the strengths of two algorithmic approaches: (1) an online primal-dual algorithm that decides the optimal timing for reaching out to patients, and (2) a contextual bandit learning algorithm that delivers personalized treatment to each patient. We prove that the algorithm admits a sub-linear regret bound, and we illustrate its usefulness on both synthetic and real-world data.
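The abstract's two-part design can be sketched in code. The following is a minimal illustration, not the paper's algorithm: it pairs a LinUCB-style contextual bandit (an assumed choice; the paper does not specify the learner in the abstract) with an online dual-ascent rule in which a dual variable acts as a shadow price on the reveal budget. All names, constants, and the reward model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class BudgetedRevealBandit:
    """Hedged sketch: a LinUCB-style contextual bandit plus an online
    primal-dual rule deciding when to spend budget on a "reveal"
    (asking the patient for additional information)."""

    def __init__(self, n_arms, dim, budget, horizon, alpha=1.0, eta=0.05):
        self.n_arms, self.dim = n_arms, dim
        self.budget, self.horizon = budget, horizon
        self.alpha, self.eta = alpha, eta
        self.lmbda = 0.0  # dual variable: shadow price of the budget
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward sums

    def should_reveal(self, value_estimate):
        # Primal decision: reveal only if the estimated value of the
        # extra information exceeds the current shadow price.
        if self.budget <= 0:
            return False
        return value_estimate > self.lmbda

    def update_dual(self, revealed):
        # Dual ascent toward the target spend rate budget/horizon.
        target_rate = self.budget / self.horizon if self.horizon else 0.0
        self.lmbda = max(0.0, self.lmbda + self.eta * (float(revealed) - target_rate))

    def select_arm(self, context):
        # LinUCB score on the (possibly reveal-enriched) context.
        scores = []
        for a in range(self.n_arms):
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]
            ucb = theta @ context + self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```

In this toy version the dual variable rises whenever reveals are spent faster than the budget allows, which raises the bar for future reveals; the paper proves a sub-linear regret bound for its actual combined scheme, which this sketch does not attempt to reproduce.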
