Wednesday, Aug 6: 10:30 AM - 12:20 PM
0756
Topic-Contributed Paper Session
Music City Center
Room: CC-209C
Applied
Yes
Main Sponsor
WNAR
Co Sponsors
Biopharmaceutical Section
ENAR
Presentations
Swarm intelligence (SI) techniques have become widespread in engineering applications and, more recently, have gained traction as metaheuristic optimization algorithms. Despite their empirical success, a theoretical framework remains elusive because the interactions among agents are specified heuristically. From the viewpoint of statistical physics, metaheuristics can be modeled as stochastic optimizers that sample and probe the solution space using principles from statistical mechanics. In this talk, we leverage tools from statistical physics to derive a mean-field approximation of SI dynamics in the large-population limit, thereby providing insight into their collective behavior. As a concrete example, we analyze the consensus-based optimization (CBO) method and illustrate its promise for challenging statistical tasks.
Keywords
Joong-Ho's talk keyword
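As a rough illustration of consensus-based optimization (CBO), the snippet below is a minimal, generic isotropic CBO loop: particles drift toward a Gibbs-weighted consensus point and diffuse with noise scaled by their distance from it. The test objective, step size, and all parameter values are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

def rastrigin(x):
    # Illustrative non-convex test objective (not from the talk).
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def cbo_minimize(f, dim=5, n_particles=200, n_steps=500,
                 lam=1.0, sigma=1.0, beta=30.0, dt=0.01, seed=0):
    """Generic isotropic consensus-based optimization (CBO) sketch."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    for _ in range(n_steps):
        fx = f(x)
        # Gibbs weights concentrate mass on the currently best particles.
        w = np.exp(-beta * (fx - fx.min()))
        m = (w[:, None] * x).sum(axis=0) / w.sum()        # consensus point
        diff = x - m
        noise = rng.standard_normal(x.shape)
        # Euler-Maruyama step: drift toward m plus distance-scaled diffusion.
        x = (x - lam * diff * dt
             + sigma * np.linalg.norm(diff, axis=1, keepdims=True) * noise * np.sqrt(dt))
    return m

print(cbo_minimize(rastrigin))  # should land near the global minimum at the origin
```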
Approximating a probability distribution using a set of particles is a fundamental problem in machine learning and statistics, with applications including clustering and quantization. Formally, we seek a weighted mixture of Dirac measures that best approximates the target distribution. While much existing work relies on the Wasserstein distance to quantify approximation error, maximum mean discrepancy (MMD) has received comparatively less attention, especially when variable particle weights are allowed. We argue that a Wasserstein-Fisher-Rao gradient flow is well suited for designing quantizations that are optimal under MMD, and we show that this flow is discretized by a system of interacting particles governed by a set of ODEs. We further derive a new fixed-point algorithm called mean shift interacting particles (MSIP), which extends the classical mean shift algorithm widely used for identifying modes in kernel density estimators. Moreover, we show that MSIP can be interpreted as preconditioned gradient descent and that it acts as a relaxation of Lloyd's algorithm for clustering. Our unification of gradient flows, mean shift, and MMD-optimal quantization yields algorithms that are more robust than state-of-the-art methods, as demonstrated in high-dimensional and multimodal numerical experiments.
Keywords
Ayoub's talk keyword
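For intuition about particle-based quantization under MMD, here is a plain gradient-descent sketch that moves equally weighted particles to shrink the squared MMD (Gaussian kernel) to a sampled target. It is not the MSIP fixed-point algorithm or the Wasserstein-Fisher-Rao flow from the abstract; the kernel bandwidth, step size, and target mixture are assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, h):
    # Pairwise Gaussian kernel matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 h^2)).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h**2))

def mmd_quantize(target, n_particles=20, n_steps=2000, lr=0.5, h=0.5, seed=0):
    """Move equally weighted particles to reduce squared MMD to the target sample."""
    rng = np.random.default_rng(seed)
    y = target
    n, m = n_particles, y.shape[0]
    x = y[rng.choice(m, n, replace=False)].copy()   # start at random target points
    for _ in range(n_steps):
        kxx = gaussian_kernel(x, x, h)              # (n, n)
        kxy = gaussian_kernel(x, y, h)              # (n, m)
        # Gradient of MMD^2 w.r.t. each particle: repulsion among particles,
        # attraction toward the target sample.
        grad_xx = -(2 / (n**2 * h**2)) * ((x[:, None] - x[None, :]) * kxx[..., None]).sum(1)
        grad_xy = (2 / (n * m * h**2)) * ((x[:, None] - y[None, :]) * kxy[..., None]).sum(1)
        x -= lr * (grad_xx + grad_xy)
    return x

# Example: quantize a two-component Gaussian mixture in 2D.
rng = np.random.default_rng(1)
target = np.vstack([rng.normal(-2, 0.5, (500, 2)), rng.normal(2, 0.5, (500, 2))])
print(mmd_quantize(target))
```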
In this presentation, we will explore the utility of particle swarm optimization for finding various optimal designs for phase 1/2 dose-finding studies under a continuation ratio model that incorporates both toxicity and efficacy outcomes, with restrictions on the dose range imposed to protect patient safety. We will also compare the merits of locally optimal designs with those of other, more practical dose-finding designs under a range of sample sizes and experimental scenarios. Practical considerations for implementing optimal designs will be presented as well.
Keywords
optimal design
metaheuristics
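To make the particle swarm optimization ingredient concrete, here is a textbook global-best PSO loop on a continuous, box-constrained objective. The placeholder objective `sphere` stands in for a design criterion (e.g., a function of the information matrix under the continuation ratio model); the inertia and acceleration constants are standard illustrative choices, not the speaker's settings.

```python
import numpy as np

def sphere(x):
    # Placeholder objective; an optimal-design criterion would go here.
    return np.sum(x**2, axis=-1)

def pso_minimize(f, dim=4, n_particles=40, n_iters=300,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    """Textbook global-best particle swarm optimization with box constraints."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))           # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), f(x)                     # personal bests
    g = pbest[pbest_val.argmin()].copy()                  # global best
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                        # respect the search box
        fx = f(x)
        improved = fx < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], fx[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, f(g[None])[0]

print(pso_minimize(sphere))
```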
Multi-stage adaptive designs represent a significant advance in facilitating efficient resource allocation and protecting patients from ineffective therapies. However, sample-size-minimization designs, such as Simon's two-stage design, are typically restricted to at most three stages because of computational complexity, while power-maximization designs, such as the BOP2 design, sacrifice interim cohort-size optimization. To address these challenges, we propose generalized multi-stage optimal designs for Phase II trials. Our framework integrates both design classes through a unified objective function, transforming the optimization into a coherent minimization task. To overcome computational bottlenecks, we leverage advanced optimization techniques and introduce PSO-GO, a practical variant of particle swarm optimization (PSO) tailored to combinatorial design-space optimization, substantially enhancing computational efficiency and scalability. Simulations and a real example demonstrate that the new framework provides robust and efficient design solutions.
Keywords
Simon's optimal design
BOP2
Cancer
Multi-stage
PSO
Trial design
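For background on the sample-size-minimization class mentioned above, the sketch below evaluates the operating characteristics of a candidate Simon two-stage design (type I error, power, early-termination probability, expected sample size) from binomial probabilities. Searching over the integer design parameters (r1, n1, r, n) with such functions as the objective is the kind of combinatorial problem a PSO variant like PSO-GO targets; the design and response rates shown are illustrative assumptions, not from the talk.

```python
from scipy.stats import binom

def simon_two_stage(r1, n1, r, n, p):
    """Operating characteristics of a Simon two-stage design at response rate p.

    Stage 1: enroll n1 patients; stop for futility if responses <= r1.
    Stage 2: enroll n - n1 more; declare the therapy promising if total responses > r.
    Returns (probability of rejecting H0, early-termination probability,
    expected sample size).
    """
    pet = binom.cdf(r1, n1, p)                      # early-termination probability
    reject = 0.0
    for x1 in range(r1 + 1, n1 + 1):                # continue only if x1 > r1
        need = r - x1 + 1                           # stage-2 responses needed to exceed r
        reject += binom.pmf(x1, n1, p) * binom.sf(need - 1, n - n1, p)
    en = n1 + (1 - pet) * (n - n1)                  # expected sample size
    return reject, pet, en

# Illustrative design and hypotheses (p0 = 0.20 vs. p1 = 0.40), assumed for the example.
alpha, pet0, en0 = simon_two_stage(r1=3, n1=13, r=12, n=43, p=0.20)
power, _, _ = simon_two_stage(r1=3, n1=13, r=12, n=43, p=0.40)
print(f"type I error {alpha:.3f}, power {power:.3f}, E[N | p0] {en0:.1f}")
```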