
A Comprehensive Survey of DPO

paper

Authors

Wenyi Xiao·Zechuan Wang·Leilei Gan·Shuai Zhao·Zongrui Li·Ruirui Lei·Wanggui He·Luu Anh Tuan·Long Chen·Hao Jiang·Zhou Zhao·Fei Wu

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Data Status

Not fetched

Abstract

With the rapid advancement of large language models (LLMs), aligning policy models with human preferences has become increasingly critical. Direct Preference Optimization (DPO) has emerged as a promising approach for alignment, acting as an RL-free alternative to Reinforcement Learning from Human Feedback (RLHF). Despite DPO's various advancements and inherent limitations, the literature currently lacks an in-depth review of these aspects. In this work, we present a comprehensive review of the challenges and opportunities in DPO, covering theoretical analyses, variants, relevant preference datasets, and applications. Specifically, we categorize recent studies on DPO by key research questions to provide a thorough understanding of DPO's current landscape. Additionally, we propose several future research directions to offer the research community insights on model alignment. An updated collection of relevant papers is available at https://github.com/Mr-Loevan/DPO-Survey.
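For context (the objective itself is not reproduced on this page), the standard DPO loss that this survey takes as its starting point, introduced by Rafailov et al. (2023), can be written as below. Here $\pi_\theta$ is the policy being trained, $\pi_{\mathrm{ref}}$ a frozen reference model, $\beta$ a temperature hyperparameter, and $(x, y_w, y_l)$ a prompt with preferred and dispreferred responses drawn from a preference dataset $\mathcal{D}$:

```latex
% Standard DPO objective (Rafailov et al., 2023) -- a single supervised loss
% over preference pairs, with no reward model or RL loop.
\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[
      \log \sigma\!\left(
        \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
        - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
      \right)
    \right]
\]
```

Minimizing this loss widens the gap between the policy's log-likelihood ratios on preferred versus dispreferred responses, which is what lets DPO serve as the RL-free alternative to RLHF described in the abstract.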

Cited by 1 page

| Page | Type | Quality |
| ---- | ---------- | ------- |
| RLHF | Capability | 63.0 |