IOL: Statistics and Large Language Models

Harold Gomes Chair
NIOSH, CDC
 
David Banks Organizer
Duke University
 
Karl Pazdernik Organizer
Pacific Northwest National Laboratory
 
Tuesday, Aug 6: 8:30 AM - 10:20 AM
9002 
Introductory Overview Lectures 
Oregon Convention Center 
Room: CC-251 

Main Sponsor

JSM Partner Societies

Presentations

How LLMs Work

LLMs are deep neural networks. This talk describes the relationships among deep recurrent neural networks, transformers, attention, and token embeddings that underlie large language models. Combined with reinforcement learning, large language models power chatbots that show strong performance on many tasks, including text and image generation. The talk also discusses practical issues in training these models. 
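As a rough illustration of the attention mechanism the abstract mentions (a sketch, not material from the talk itself; the shapes and variable names here are assumptions for the example), scaled dot-product attention can be written in a few lines of NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query token scores all key
    # tokens, the scores become a probability distribution, and that
    # distribution mixes the corresponding value vectors.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_queries, n_keys) similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # (n_queries, d_v) mixed values

# Toy example: 2 query tokens attending over 3 key/value tokens.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (2, 4)
```

In a transformer, Q, K, and V are linear projections of the token embeddings, and many such attention heads run in parallel within each layer.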

Speaker

David Banks, Duke University

What LLMs Do

We have all been dazzled by the performance of chatbots and other LLMs on various tasks, and many worry that they pose novel threats. This talk describes, from a statistical perspective, the kinds of impact such systems have had, can have, and will have. 

Speaker

Karl Pazdernik, Pacific Northwest National Laboratory