ESSI1.2 | Safe and Effective Use of Large Language Models in Scientific Research
Convener: Juan Bernabe Moreno | Co-conveners: Movina Moses, Rahul Ramachandran

Large language models (LLMs) and agentic workflows are rapidly transforming scientific research by enabling new capabilities in literature and data discovery, analysis, coding and insight generation. At the same time, their deployment requires rigorous attention to safety, reliability and trustworthiness in scientific contexts.

This session will highlight both the transformative applications and the critical challenges of using LLMs in science. Key topics include developing specialized guardrails against hallucination and bias; creating robust evaluation frameworks, including uncertainty quantification; ensuring scientific integrity, data governance and reproducibility; and addressing unique scientific risks.

We invite submissions on novel scientific applications of LLMs and agentic workflows; methods that ensure integrity and reproducibility; safety mechanisms (e.g., guardrails, risk mitigation, alignment); responsible AI frameworks (including human-in-the-loop design, fairness, and ethics); and lessons learned from real-world deployments. Our goal is to foster discussion on pathways toward the safe, effective, and trustworthy use of LLMs for advancing science.
