Paul Pu Liang
Overview
Explore the profile of Paul Pu Liang, including associated specialties, affiliations, and a list of published articles.
Author names and details appear as published. Due to indexing inconsistencies, multiple individuals may share a name, and a single author's name may appear with variations. MedLuna displays this data as publicly available, without modification or verification.
Snapshot
Articles: 6
Citations: 98
Followers: 0
Top 10 Co-Authors
Published In
Affiliations
Will be listed here soon.
Recent Articles
1. Liang P, Lyu Y, Fan X, Wu Z, Cheng Y, Wu J, et al. Adv Neural Inf Process Syst. 2024 May;2021(DB1):1-20. PMID: 38774625
Learning multimodal representations involves integrating information from multiple heterogeneous sources of data. It is a challenging yet crucial area with numerous real-world applications in multimedia, affective computing, robotics, finance, human-computer...
2. Qu L, Zhou Y, Liang P, Xia Y, Wang F, Adeli E, et al. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2023 Jan;2022:10051-10061. PMID: 36624800
Federated learning is an emerging research paradigm enabling collaborative training of machine learning models among different organizations while keeping data private at each institution. Despite recent progress, there remain fundamental...
3. Zadeh A, Cao Y, Hessner S, Liang P, Poria S, Morency L. Proc Conf Empir Methods Nat Lang Process. 2021 May;2020:1801-1812. PMID: 33969362
Modeling multimodal language is a core research area in natural language processing. While languages such as English have relatively large multimodal language resources, other widely spoken languages across the globe...
4. Tsai Y, Bai S, Liang P, Kolter J, Morency L, Salakhutdinov R. Proc Conf Assoc Comput Linguist Meet. 2020 May;2019:6558-6569. PMID: 32362720
Human language is often multimodal, comprising a mixture of natural language, facial gestures, and acoustic behaviors. However, two major challenges exist in modeling such multimodal human language time-series data:...
5. Zadeh A, Liang P, Poria S, Vij P, Cambria E, Morency L. Proc AAAI Conf Artif Intell. 2020 Apr;2018:5642-5649. PMID: 32257595
Human face-to-face communication is a complex multimodal signal. We use words (language modality), gestures (vision modality) and changes in tone (acoustic modality) to convey our intentions. Humans easily process and...
6. Wang Y, Shen Y, Liu Z, Liang P, Zadeh A, Morency L. Proc AAAI Conf Artif Intell. 2020 Mar;33(1):7216-7223. PMID: 32219010
Humans convey their intentions through the usage of both verbal and nonverbal behaviors during face-to-face communication. Speaker intentions often vary dynamically depending on different nonverbal contexts, such as vocal patterns...