Improving Language Models' Reasoning: A Systematic Perspective

Abstract

This talk discusses recipes for building LLMs with strong reasoning capabilities from first principles. We envisage large language models becoming the next-generation computational platform and fostering an ecosystem of new LLM-based applications. This naturally requires the foundation models to perform complex tasks that often involve the composition of linguistic and logical operations. We first review the generic recipe for building large language models. Then we discuss recipes for improving language models’ reasoning capabilities and the corresponding evaluation. Finally, we consider further improvements through complexity-based prompting, distilling chain-of-thought, and learning from AI feedback.
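As a rough illustration of the complexity-based prompting idea mentioned in the abstract (select few-shot exemplars with the most reasoning steps, then vote over the most complex sampled chains), here is a minimal Python sketch. The helper names (`n_steps`, `build_prompt`, `complexity_vote`) and the step-counting heuristic are illustrative assumptions, not the talk's actual implementation.

```python
# Minimal sketch of complexity-based prompting (hypothetical helper names).
# Idea: pick chain-of-thought exemplars with the MOST reasoning steps, then
# at inference keep only the most complex sampled chains and majority-vote.
from collections import Counter

def n_steps(chain: str) -> int:
    """Proxy for reasoning complexity: count non-empty, newline-separated steps."""
    return len([s for s in chain.split("\n") if s.strip()])

def build_prompt(exemplars, question, k=4):
    """Select the k most complex exemplars as the few-shot prompt."""
    chosen = sorted(exemplars, key=lambda e: n_steps(e["chain"]), reverse=True)[:k]
    shots = "\n\n".join(f"Q: {e['question']}\nA: {e['chain']}" for e in chosen)
    return f"{shots}\n\nQ: {question}\nA:"

def complexity_vote(samples, top_k=10):
    """Majority vote over the top_k most complex sampled chains.

    Each sample is a (chain_of_thought, final_answer) pair.
    """
    top = sorted(samples, key=lambda s: n_steps(s[0]), reverse=True)[:top_k]
    return Counter(ans for _, ans in top).most_common(1)[0][0]
```

The design choice here is to use reasoning-chain length as a cheap, model-agnostic proxy for problem complexity, both when choosing exemplars and when filtering sampled outputs before voting.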

Date
Jul 5, 2023 10:00 AM — 12:00 PM
Event
TongZhi Talk
Location
Zoom Meeting

Poster

Join

Time: Jul 5, 2023, 10:00 AM (Beijing, Shanghai)

Join Zoom Meeting: https://us06web.zoom.us/j/87569044478?pwd=OFdUcFFsSEIwKytUU2pUQzZ1dnBnZz09

Meeting ID: 875 6904 4478

Speakers

Yao Fu is a Ph.D. student at the University of Edinburgh. Previously, he received his M.S. from Columbia University and his B.S. from Peking University. Yao studies large-scale probabilistic generative models for human language. His research interests include complex reasoning, alignment-focused evaluation, and how to inject strong abilities into language models from first principles.