Building a "Model in a Month" for Science and Defense Applications

Karl Pazdernik, Speaker
Pacific Northwest National Laboratory

Monday, Aug 4: 11:00 AM - 11:25 AM
Invited Paper Session
Music City Center
While artificial intelligence (AI) has been a prominent modeling technique for decades, a paradigm shift has recently emerged with a focus on training foundation models. Unlike predecessor AI models, which are defined as narrow AI, i.e., algorithms designed for a single, specific task or application, foundation models are capable of a variety of tasks and, although sometimes suboptimal on a specific desired task, can often be retrained or fine-tuned quickly to improve performance. In this talk, we will review the development of multiple unimodal and multimodal large language models (LLMs) for scientific and defense applications and discuss strategies for training with limited compute, the challenges of alignment (both across data sources and with human intent), how to incorporate statistics into an LLM pipeline, and how to make results accessible and trustworthy for human interaction, all with a focus on accelerating the deployment of new models.
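For readers curious about the rapid fine-tuning the abstract alludes to, the sketch below shows one common approach: parameter-efficient (LoRA) fine-tuning of a causal language model with the Hugging Face transformers and peft libraries. The base checkpoint, corpus file, and hyperparameters are illustrative placeholders, not details from the talk.

# Minimal sketch: LoRA fine-tuning of a small causal LM on a domain corpus.
# Model name, data file, and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all model weights,
# which is what makes quick adaptation on limited compute practical.
model = get_peft_model(
    model, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      task_type="CAUSAL_LM"))

# Placeholder corpus: one training example per line of plain text.
data = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tok(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = data.map(tok, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data["train"],
    # mlm=False gives standard next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

Because only the adapter weights are updated, a run like this can fit on a single modest GPU, in contrast to full fine-tuning of all model parameters.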

Keywords

Artificial Intelligence
Large Language Model
Foundation Model