AI-Generated Images of Cancer Patients: Comparing the Results of Two Generative AI Models
Tuesday, Aug 5: 10:35 AM - 10:50 AM
2397
Contributed Papers
Music City Center
Health communicators can use generative AI tools to create images for use in stakeholder-facing materials. This study examines the differences between two image-generation tools (DALL-E and Stable Diffusion) to understand how each tool portrays individuals with cancer.
Images (n = 303) generated by each tool using the prompts "cancer patient", "breast cancer patient", "lung cancer patient", "prostate cancer patient", "cancer survivor", and "person with cancer" were coded for photorealism and for rendering errors, such as extra hands or misspelled words. Most images were coded as photorealistic (79.5%, n = 241) and free of significant rendering errors (84.2%, n = 255). Stable Diffusion was more likely to produce a photorealistic result (66.4%, n = 160), while DALL-E more often produced images without errors (53.3%, n = 136). Images generated with Stable Diffusion more often depicted the person lying in bed, wearing a hospital gown, and appearing sick compared with images generated by DALL-E.
Understanding how generative AI tools portray individuals with cancer is an important step in using these tools in communications.
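The image-generation step could be reproduced with a short script along the following lines. This is a minimal sketch only: the abstract does not specify APIs, model versions, or sampling settings, so the use of the OpenAI Images API for DALL-E, the Hugging Face diffusers library for Stable Diffusion, and the model identifiers shown are assumptions.

```python
# Sketch of the image-generation step for the six study prompts.
# Assumptions (not stated in the abstract): DALL-E is accessed via the
# OpenAI Images API and Stable Diffusion via diffusers; model IDs are
# illustrative placeholders.
from openai import OpenAI
import torch
from diffusers import StableDiffusionPipeline

PROMPTS = [
    "cancer patient",
    "breast cancer patient",
    "lung cancer patient",
    "prostate cancer patient",
    "cancer survivor",
    "person with cancer",
]

# DALL-E images (requires OPENAI_API_KEY in the environment).
client = OpenAI()
for prompt in PROMPTS:
    response = client.images.generate(
        model="dall-e-3", prompt=prompt, n=1, size="1024x1024"
    )
    print(prompt, "->", response.data[0].url)

# Stable Diffusion images (model version is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
for prompt in PROMPTS:
    image = pipe(prompt).images[0]
    image.save(f"sd_{prompt.replace(' ', '_')}.png")
```

The resulting images would then be coded by human raters for photorealism, rendering errors, and depicted characteristics, as described above.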
AI-generated images
cancer patients
representation
ChatGPT
Stable Diffusion
visual content analysis
Main Sponsor
Section on Statistical Learning and Data Science