Publication:
Prompt-Based Monte-Carlo Tree Search for Goal-Oriented Dialogue Policy Planning
Xiao Yu, Maximillian Chen, Zhou Yu · arXiv · 23 May 2023
TLDR: GDP-Zero, an approach using Open-Loop MCTS to perform goal-oriented dialogue policy planning without any model training, is introduced; its responses are preferred over ChatGPT up to 59.32% of the time and are rated more persuasive than ChatGPT during interactive evaluations.
Citations: 14
Abstract: Planning for goal-oriented dialogue often requires simulating future dialogue interactions and estimating task progress. Many approaches thus consider training neural networks to perform look-ahead search algorithms such as A* search and Monte Carlo Tree Search (MCTS). However, this training often requires abundant annotated data, which creates challenges when faced with noisy annotations or low-resource settings. We introduce GDP-Zero, an approach using Open-Loop MCTS to perform goal-oriented dialogue policy planning without any model training. GDP-Zero prompts a large language model to act as a policy prior, value function, user simulator, and system model during the tree search. We evaluate GDP-Zero on the goal-oriented task PersuasionForGood, and find that its responses are preferred over ChatGPT up to 59.32% of the time, and are rated more persuasive than ChatGPT during interactive evaluations.
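The abstract describes the core mechanism: an Open-Loop MCTS in which a single prompted LLM plays the roles of policy prior, value function, user simulator, and system model, with no trained components. A minimal sketch of that loop is below. All function names, the dialogue-act set, and the stubbed heuristics are illustrative assumptions, not the paper's actual prompts or implementation; in GDP-Zero each `llm_*` stub would instead be a prompt to a large language model such as ChatGPT.

```python
import math
from collections import defaultdict

# Hypothetical dialogue acts for a persuasion task (illustrative only).
DIALOGUE_ACTS = ["logical-appeal", "emotion-appeal", "credibility-appeal"]

def llm_policy_prior(history):
    # In GDP-Zero this is a prompted LLM acting as policy prior;
    # stubbed here as a uniform distribution over acts.
    return {a: 1.0 / len(DIALOGUE_ACTS) for a in DIALOGUE_ACTS}

def llm_simulate(history, act):
    # Prompted LLM as system model + user simulator: generate the system
    # utterance for `act` and the simulated user reply. Stubbed.
    return history + [(act, "sys-utterance", "user-reply")]

def llm_value(history):
    # Prompted LLM as value function: score task progress in [0, 1].
    # Stubbed heuristic that happens to favor logical appeals.
    if not history:
        return 0.0
    return sum(1.0 for (a, _, _) in history if a == "logical-appeal") / len(history)

class OpenLoopMCTS:
    """Open-loop search: tree nodes are keyed by the *action sequence*,
    not by sampled states, so stochastic LLM simulations reaching the
    same node are aggregated rather than branching the tree."""

    def __init__(self, c_puct=1.0, depth=2):
        self.Q = defaultdict(float)  # running mean value per (seq, act)
        self.N = defaultdict(int)    # visit count per (seq, act)
        self.c_puct = c_puct
        self.depth = depth

    def search(self, history, n_sims=30):
        for _ in range(n_sims):
            self._simulate(history, (), 0)
        prior = llm_policy_prior(history)
        # Act with the most visits at the root wins.
        return max(prior, key=lambda a: self.N[((), a)])

    def _simulate(self, history, seq, d):
        if d == self.depth:
            return llm_value(history)
        prior = llm_policy_prior(history)
        total_n = sum(self.N[(seq, a)] for a in prior) + 1

        def puct(a):
            # PUCT selection: exploit mean value, explore by prior * uncertainty.
            return (self.Q[(seq, a)]
                    + self.c_puct * prior[a] * math.sqrt(total_n) / (1 + self.N[(seq, a)]))

        act = max(prior, key=puct)
        value = self._simulate(llm_simulate(history, act), seq + (act,), d + 1)
        key = (seq, act)
        self.N[key] += 1
        self.Q[key] += (value - self.Q[key]) / self.N[key]  # incremental mean
        return value

best_act = OpenLoopMCTS().search([])
```

With the deterministic stubs above the search converges on the act the stub value function rewards; with a real LLM backing each role, every simulation re-samples responses, which is what motivates the open-loop (action-sequence-keyed) tree.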
Topics: Knowledge Bases; Language Models & Neural Networks; Structured Data in NLP; Knowledge Representation; Natural Language Interfaces; Green, Sustainable & Efficient Methods in NLP; Dialogue Management; Dialogue Policy; Semantic Text Processing; Low-Resource NLP; Information Retrieval; Multimodality; Responsible & Trustworthy NLP; Natural Language Processing; Tree; Prompting, Prompt Learning & Prompt Engineering; Dialogue Systems & Conversational Agents