How are physicists feeling about AI?
Global Physics Summit attendees shared their thoughts on how artificial intelligence could impact scientific research.

“Artificial intelligence” was Collins Dictionary’s word of the year in 2023. Since then, the lexicon has expanded: 2025’s top words include “vibe coding,” where programmers hand coding tasks to a chatbot, and “slop,” evoking the growing distaste for low-quality content created by AI.
Meanwhile, AI tools like ChatGPT and Claude are becoming a facet of everyday life. Nearly one third of all Americans interact with AI several times a day, while one in five use AI in their work.
AI is deeply connected to physics, from the machine learning algorithms that analyze datasets to the Nobel Prize-winning models behind artificial neural networks. And physicists are grappling with ongoing and occasionally fiery debates about how AI might shape the field and the future.
Through interviews and presentations at the Global Physics Summit, we set out to gauge how physicists feel about AI and what hurdles and opportunities they see ahead.
AI is a collaborator that helps researchers with tasks like data analysis and writing code.
Orit Peleg, an associate professor at the University of Colorado Boulder, studies communication in animal groups like honeybees and fireflies. Peleg has been impressed by how well AI can identify and classify specific behaviors. “That opened up a whole new field of quantitative animal behavior [and] studies that were just not possible to do before these tools existed,” she said.
Simona Mei, professor at the Université Paris Cité and co-chair of the Legacy Survey of Space and Time (LSST) Galaxies Science Collaboration, said that AI tools like convolutional neural networks and transformers are essential for characterizing galaxy properties and detecting faint objects missed by traditional methods. At the summit, Mei presented results showing that AI helped reduce contamination from image artifacts in detections by 70%, allowing researchers to “concentrate on the analysis of the data instead of spending a lot of time to refine the code to detect what we want to,” she said.
Four attendees we interviewed said they regularly use generative AI tools to help write code. Abhijit Chakraborty, a postdoc at Virginia Tech working on quantum simulations, said that using AI can transform a four- or five-hour coding task into a 10- to 15-minute chatbot conversation.

AI’s capabilities have increased rapidly in the past couple of years.
When asked what surprised them most about AI’s capabilities, nearly all the attendees we interviewed mentioned the speed at which AI models are improving.
Hilary Egan is a data scientist at the National Renewable Energy Laboratory using AI and high-performance computing to tackle energy-related problems. She’s been impressed with how well the latest general-purpose AI models, known as foundation models, can analyze complex scientific data.
“Even a couple years ago, it was pretty commonplace to work very hard on developing one particular AI model for the interpretation of a particular type of spectrum [or] data set, and I think that these [foundation] models are becoming increasingly capable of working out of the box, or with very few examples, for a lot of types of very specific scientific data,” said Egan.
Janelle Shane, a laser scientist and AI humorist, said she’s been surprised by “how quickly text generation has increased in coherence” — especially when compared to the AI-generated pick-up line that inspired the title of her 2019 book You Look Like a Thing and I Love You. While modern AI models are more coherent than the neural networks she started playing around with 10 years ago, their inner workings are now less transparent and more finely tuned, making them “less of a blank slate and more of a sticker book,” she said.

AI has a broad yet shallow knowledge base and a tendency to see truth in consensus — limitations that could hinder creativity.
Theodore MacMillan is a Ph.D. candidate at Stanford studying whether AI-based weather models are interpretable by humans and follow the laws of physics. A blind spot that he and other interviewees cited is AI’s limited ability to generate new ideas. Chatbots can also be sycophantic about how “novel” an idea is, making it “hard to rely on for setting the framework for what's already been done” in a field, he said.
According to Egan, the data scientist, another limitation is the lack of the “rapid feedback loops” and massive volumes of data that AI needs to learn from its mistakes — a challenge when training and using AI models to predict the outcomes of complex scientific experiments.
Mei, the Université Paris Cité professor, is concerned that AI’s tendency towards consensus could hamper critical thinking, creativity, and originality in research. “Sometimes, to resolve a problem, you need to use information that is not repeated many times but is the one that is relevant to your problem and use it,” she said.
Google’s Matthew Ginsberg echoed a similar sentiment during his talk at the “Navigating the AI Revolution” session at the summit. “You all are the best physicists you can be exactly when you are not giving the consensus answer,” he said. “That is something that [large language models] currently are completely incapable of doing.”
If AI agents are “airplanes for the mind,” they need responsible pilots.
At an AI panel, Dashun Wang, the Kellogg Chair of Technology at Northwestern University, described AI as “airplanes for the mind.” Building on Steve Jobs’ analogy of computers as bicycles, Wang’s metaphor illustrates how much farther AI could take us while highlighting the responsibilities that come with piloting a far more powerful craft.
At the AI town hall, Rachel Burley, APS’ chief publications officer, discussed the impacts of AI on publications. These tools could make researchers, editors, and reviewers more productive and efficient, she noted, but “having a human in the loop, and AI output always backed by scientific rigor, is critically important for APS and our journals.”
Chakraborty, the Virginia Tech postdoc, sees a parallel in how researchers felt in the early days of programming languages. Far from putting scientists “out of the trade,” he said, Python and C++ became tools whose output researchers are responsible for. “Like any other tool, there has to be a human counterpart to make it accountable,” he said.
In the future, Shane, the laser scientist and AI humorist, said that domain expertise will likely be key. Physics researchers have the technical skills to see beyond the hype and find “where the AI-shaped problem actually is,” she said.

In an AI-powered future, what role will physicists play?
In his talk, Harvard University professor Matthew Schwartz said that a key scientific skill will be having taste — a sense for what problems are interesting and what ideas are worth pursuing. “Problem selection is really the fundamental thing that we can do as humans that AI hasn't yet mastered,” he said. “Once you specify the problem clearly, the solution will be almost automatic.”
At the AI town hall, Sarah Demers, professor at Yale University and chair of APS’ Panel on Public Affairs, described how physicists advance the field today — by generating hypotheses, conducting experiments, validating results, quantifying uncertainties, and sharing knowledge. “None of this changes with the adoption of AI at any stage in the process,” she said.
“AI is a technology that requires adoption with the same rigor and skepticism that is our hallmark as physicists — the same rigor and skepticism that has brought us to our current understanding of the natural world,” Demers added.
Perhaps it’s that rigor and skepticism that led several interviewees to say they were “cautiously optimistic” about the future of physics research. Many we spoke with recognized that, because things were moving so quickly, it was nearly impossible to predict what AI could or couldn’t do in the future.
“All these new tools are coming out, and I feel like there's such an incredible opportunity,” said MacMillan, the Stanford Ph.D. student. “There are a lot of things we’ve got to figure out, like the beginning of any truly great scientific endeavor.”
Erica K. Brockmeier is the science writer at APS.



