AI and the Nature of Science: Concepts and Controversies

AI and the Nature of Science Symposium Tackles AI’s Role in Science

The "AI and the Nature of Science" symposium brought together philosophers, physicists, statisticians, and physicians to debate whether artificial intelligence is genuinely enhancing scientific understanding or merely accelerating the production of paperwork.

The event underscored a critical tension: while AI offers computational power, treating it as a shortcut for the messy work of human inquiry risks creating a scientific culture that predicts phenomena without actually explaining them. Symposium co-organizer Michael Cohen framed the tension with a question: “Is AI just an instrumentalist tool that values predictive accuracy above all else, or will it actually help us better understand the nature of reality?”

Debate on AI and the Nature of Science at University Hall, McNamara Alumni Center, where panelists tackled big questions about AI's potential for help and harm.

"It almost seemed like alien technology," admitted keynote speaker Carl Bergstrom, recalling the 2022 launch of ChatGPT. "But as in any fairy tale, accepting magical assistance comes with risks."

The panelists brought differing points of view. Lisa Messeri, a self-described AI skeptic, discussed risks such as illusions of understanding and the problems arising from AI surrogates for humans, which promise faster and cheaper pipelines for human-subjects research.

Claudia Scarlata described a field drowning in data. With next-generation telescopes observing billions of galaxies, her team uses generative AI to create instant simulations of the universe.

"It would be like trying to simulate all the waves in the ocean rather than just the tide," Scarlata said of the old methods. But she emphasized that AI must be "handcuffed" to physics. "We don't just ask for a galaxy; we ask for a galaxy with specific mass and star-formation rates," she explained, comparing it to prompting an image generator for a specific type of cat to avoid getting one with three tails.

Automation Bias in Medicine

In the medical field, Thomas Byrd highlighted the dangers of "deskilling," pointing to discussions of AI diagnosis in high-quality medical journals. He advocated a strict order of operations in which doctors must form their own conclusions before turning on the AI assistant.

From left: Galin Jones, Lisa Messeri, Alan Love, Cameron Buckner, Claudia Scarlata, Thomas Byrd, Carl T. Bergstrom

Bergstrom discussed the "Taylorist" view of science that prioritizes efficiency above all else. He critiqued colleagues who use AI to write papers so they can get back to doing science. "Writing is thinking," Bergstrom insisted. He described a breakthrough regarding the symposium topic that hit him at 3:00 AM, a realization that never would have occurred had he prompted ChatGPT to write his speech.

Galin Jones, while admitting to being an "AI booster," stressed that universities must coordinate across disciplines, including philosophy, statistics, and biology, to ensure that as these tools evolve, the human element of inquiry remains intact.

The event was recorded and will be available on the Minnesota Center for Philosophy of Science YouTube channel.
